notes-philosophy-utilitarianism

My friend ROF once pointed out an important failing of utilitarianism: it is not obvious what mathematical function one should use to take the utilities that a scenario gives a bunch of different people and combine them into a single number representing the total (or joint) utility for the group.

You may think this is simple (just add 'em together), but an example that makes you question this is discussed here: http://lesswrong.com/lw/kn/torture_vs_dust_specks/

See also [1].
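
To make the aggregation question concrete, here is a toy Python sketch with made-up numbers (the disutility values and the size of N are illustrative assumptions, not anything taken from the linked post). It just shows that two reasonable-sounding aggregation rules, plain summation and "only the worst-off person counts", rank the torture-vs-specks choice in opposite ways.

    # Toy sketch, made-up numbers: how different aggregation rules rank
    # "torture one person" against "a dust speck for each of N people".
    N = 10**11        # hypothetical number of dust-speck recipients
    speck = -1e-6     # assumed disutility of one dust speck
    torture = -1e4    # assumed disutility of being the tortured person

    # 1. Plain summation: the specks dominate once N is large enough.
    sum_specks = N * speck        # about -1e5
    sum_torture = 1 * torture     # -1e4
    print(sum_specks < sum_torture)   # True: summation calls the specks worse

    # 2. Worst-off aggregation (maximin): only the most badly-off person
    #    counts, so the torture is worse no matter how large N gets.
    min_specks = speck            # -1e-6
    min_torture = torture         # -1e4
    print(min_torture < min_specks)   # True: maximin calls the torture worse

The point is not that either rule is right, only that the choice of aggregation function does real work: the two rules disagree, and they keep disagreeing however you tune the made-up numbers.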

Another approach here is to use the veil of ignorance. This translates the question of how utility should be distributed among different people into a question of risk aversion for a single person. Would you risk a small probability of something horrible happening to you in exchange for a large probability of a tiny gain? If so, what is the threshold?
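
A toy sketch of that reading, again with made-up numbers: behind the veil you face a lottery between a near-certain tiny gain and a tiny chance of something horrible, and whether you take the deal depends on how risk-averse you are. The exponential utility function and the risk-tolerance parameter R below are illustrative assumptions, not a claim about the right way to model this.

    import math

    p = 1e-11          # assumed chance of the horrible outcome
    horrible = -1e4    # assumed payoff if it happens
    gain = 1e-6        # assumed payoff otherwise

    # Risk-neutral: just compare the expected payoff to zero.
    ev = p * horrible + (1 - p) * gain
    print(ev > 0)      # True -> a risk-neutral agent takes the deal

    # Risk-averse: exponential utility u(x) = 1 - exp(-x / R), where R is a
    # "risk tolerance" parameter; smaller R means more risk-averse.
    def u(x, R=1000.0):
        return 1.0 - math.exp(-x / R)

    eu = p * u(horrible) + (1 - p) * u(gain)
    print(eu > 0)      # False -> this risk-averse agent refuses the same deal

So the "threshold" question becomes a question about the shape of the one-person utility function: the more sharply it penalizes the horrible outcome, the smaller the probability you are willing to tolerate.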

People do accept such tradeoffs sometimes; for example, every time you get in a car, you accept a tiny risk of terrible injury, a risk that you wouldn't have if you had just stayed home. And yet people get in the car not just to get to work, but also to go somewhere fun.