On utilitarianism, code, and robots

John Stuart Mill codified the philosophical notions of Jeremy Bentham, his father's friend and mentor, into the moral school known as Utilitarianism. In simplest terms, Utilitarianism suggests that any moral question should be resolved by following the principle of "the greatest good for the greatest number." In common practice this resolves, for example, to such dictums as "nobody gets two until everyone has had one."

If I recall correctly, Utilitarianism is considered "discredited" by those who discredit things such as Utilitarianism. Or deprecated, if you prefer. The principles of utility break down as the stakes rise. Sharing cake: no problem. Allocating shared national resources: should work, doesn't. Deciding who lives, who dies: not so good.

In fact the limits of Utility are something we all recognize (even though I will shortly argue that we are all Utilitarians... up to a point), and this recognition is reflected in a very common Hollywood scenario in which the hero rebels against his Utilitarian superior. Take the most recent example I have at hand: the opening of "Pacific Rim", where Stringer Bell - pardon me, the military boss - orders the pilots of a robot to proceed to the coast they're supposed to defend rather than rescue a fishing boat gravely endangered by a storm. "It's ten lives versus a million lives," Stringer spits with icy calculated venom. But film heroes are immune to that toxin. They always save the single life.

Nevertheless, I suggest that we are all Utilitarians... up to a point. It's the default mode of morality, and we use it for quick decisions, all the way up to the moment - different for each of us, no doubt - when it fails as a moral framework. Even the most rigid Monist shares birthday cake on principles of utility. Our moral frameworks are spectral in nature rather than monochromatic, and any philosophical analysis that fails to recognize this - or, to invert, that insists on attributing a single tone - operates in a vacuum, which is to say, is vacuous.

And what does this have to do with code?

Recently I spent five gruelling hours implementing a use-case scenario on a website. After I finally got it working, I sighed: my five hours would probably save an average user of the site an entire minute of work... per year. Had I just wasted five hours of my life? Possibly.

Then the principle of utility came to my rescue. In a very simple Utilitarian scheme of things, my 300 minutes of work becomes worthwhile if, and only if, at least 301 people save one minute per year. (A monetary calculation of utility might lead to a higher or lower figure, as it would have to attribute a value to my time and weigh it against the average value of time for the site's user base. But in my moral framework, time is the absolute wealth.)
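For the calculator-inclined, the break-even arithmetic looks something like this. A minimal sketch: the numbers are the ones from my story, but the function names and the rate parameters are mine, invented for illustration.

```python
import math

def break_even_users(minutes_spent: float, minutes_saved_per_user: float) -> int:
    """Smallest user count for which total time saved exceeds time spent."""
    return math.floor(minutes_spent / minutes_saved_per_user) + 1

def break_even_users_monetary(minutes_spent: float, minutes_saved_per_user: float,
                              my_rate: float, user_rate: float) -> int:
    """Monetary variant: weight each side by an (assumed) value of time.
    A pricier developer raises the bar; pricier users lower it."""
    cost = minutes_spent * my_rate
    saving_per_user = minutes_saved_per_user * user_rate
    return math.floor(cost / saving_per_user) + 1

# 300 minutes of my time, one minute saved per user per year:
print(break_even_users(300, 1))                     # 301, the figure above
print(break_even_users_monetary(300, 1, 2.0, 1.0))  # 601, if my time is worth twice a user's
```

Note that the horizon matters: the savings recur yearly, so the 301 figure assumes we only count the first year.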

The conclusion: before embarking on that next subtle and fantastic user-experience enhancement that nobody is really going to notice, determine whether the sum of time saved for your user base is greater than the time you're going to waste making it work.

Then do it anyway, cos that's just how you roll.