notes-philosophy-ethics-jointUtility

If we assume that we have a formula for utility for one person, what mathematical function do we then use to derive 'group utility' for a group of people? Call this function the 'aggregation function for utility'.

Define strong utilitarianism as "you should maximize group utility"

Addition proposal

The additive proposal is that group utility = the sum of all individual utilities. The aggregation function is addition.
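As a minimal sketch (the function name and the list-of-numbers representation are my own illustrative assumptions, nothing standard):

    # Additive proposal: group utility is just the sum of individual utilities.
    def group_utility_additive(individual_utilities):
        return sum(individual_utilities)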

Fairness criterion

The fairness criterion is that, all else being equal, humans seem to find situations preferable in which desirable things are more evenly distributed between people. For example, if there are 3 people and 60 hours of boring labor must be done per week, then other things being equal (e.g. it is not more efficient for one person to do all of the work), it seems more desirable for each person to work 20 hours per week than for one person to work 60 hours and the others to not work at all.

The addition proposal does not satisfy the fairness criterion, because it assigns equal weight to even and uneven distributions of utility.
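A small worked check of this, treating each hour of boring labor as one unit of negative utility (a simplifying assumption for illustration only):

    # Each hour of boring labor counts as -1 utility (illustrative assumption).
    even_split = [-20, -20, -20]   # each of the 3 people works 20 hours
    uneven_split = [-60, 0, 0]     # one person works all 60 hours

    # Addition cannot tell the two distributions apart:
    assert sum(even_split) == sum(uneven_split) == -60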

Dust speck issue

Strong utilitarianism has an issue that my friend R.O.F. pointed out: "should you torture one person in order to make a very large number of gerbils, who are already content, slightly more content? Should you torture one person in order to make a very large number of people who are watching giggle slightly?". Others have used a dust speck example for this idea.

Weak utilitarianism solution

This is only a 'solution' in the sense that it puts an additional constraint on the utilitarian mandate, restricting it to a region in which the dust speck issue doesn't apply.

Define weak utilitarianism as "if you would sacrifice A for B were you in situation Z (except that possibly you didn't have B), and by taking A away from a person in situation Z you can give B to one or more people, then you should do so."

Weak utilitarianism does not have this issue; it never demands something that harms the losing party more than it benefits each individual winning party. However, it does demand things like taking a small amount of money from the rich to give to the poor, if the rich would be only slightly less content while the poor recipients would suffer much less.
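One way to read this definition as a decision rule; the function and parameter names are hypothetical stand-ins of my own, and the 'would you make this trade in situation Z' judgment is left as an input rather than computed:

    def weak_utilitarian_mandate(you_would_trade_a_for_b_in_situation_z, recipients):
        # you_would_trade_a_for_b_in_situation_z: would you, placed in situation Z
        # (except possibly lacking B), give up A in order to get B?
        # recipients: the people who would receive B if A is taken from the person in Z.
        #
        # Unlike strong utilitarianism, a huge number of recipients cannot by itself
        # outweigh the cost to the losing party; the trade must already be acceptable
        # from within situation Z.
        return you_would_trade_a_for_b_in_situation_z and len(recipients) >= 1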

Rawls' veil of ignorance proposal (as applied to the aggregation function)

The veil of ignorance proposal (to paraphrase Wikipedia) is that, in selecting the aggregation function, we imagine that we are a person who is going to live in a society ordered by group utility under our aggregation function, but that we don't know our own abilities, tastes, and role in society.

This by itself doesn't directly tell us what the aggregation function is, though.

Multiplication or addition-of-log proposal, by way of Kelly

This proposal extends the Rawls veil of ignorance proposal into a precise proposal.

The motivation is that we look at the chance that we will end up experiencing one of the individual utilities the way an investor looks at potential investment outcomes when the investment strategy will be iterated. When an investment strategy is iterated, given some assumptions, the way to maximize capital growth in the long term appears to have something to do with the Kelly Criterion, which in turn is built on taking the log of utility. This 'motivation' isn't very motivating; I can't think of any particular reason why utilitarian ethics should use the same formula as iterated investment, because the different people's experiences are not iterations and there is no 'compound interest'; but at least it provides us with an idea for a formula that we could use.

So, we might propose that group utility is the sum of the logarithms of individual utility. Note that, as we only really care about the rank ordering of outcomes, this is equivalent (for positive utilities) to just multiplying the individual utilities together (ie the aggregation function is multiplication).
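A quick check that the two forms rank outcomes identically (assuming all individual utilities are positive, as the log requires); the particular numbers are arbitrary:

    import math

    def group_utility_sum_of_logs(us):
        return sum(math.log(u) for u in us)

    def group_utility_product(us):
        return math.prod(us)

    a = [2.0, 2.0, 2.0]   # evenly spread; sum 6, product 8
    b = [6.0, 1.0, 1.0]   # unevenly spread; sum 8, product 6

    # log is monotonic, so product and sum-of-logs always agree on the ranking
    # (here both prefer the even outcome a, even though addition prefers b):
    assert (group_utility_product(a) > group_utility_product(b)) == \
           (group_utility_sum_of_logs(a) > group_utility_sum_of_logs(b))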

The connection between multiplication and iterated investment is clear: because of compound interest, it is better to invest in something with a 10% return and then reinvest the result in something else with a 10% return, than to put the initial investment into something with a 20% return followed by something with a 0% return, because the growth rates are multiplied together, and 1.1*1.1 = 1.21 > 1.2 = 1.2*1. As noted above, I can't see any particular reason why this should be relevant for utilitarian ethics.

Another point in favor of this proposal is simplicity. If someone tells you that they have a formula with an addition operator in it and it almost works but not quite, and asks you what they should try next, the obvious answer is multiplication.

The proposal doesn't avoid the dust speck issue, but it does satisfy the fairness criterion, and it does at least make the number of dust specks needed to offset the torture of one person very large.
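A rough illustration of that last point, using the positive utility scale introduced just below (1 is neutral) and made-up magnitudes: say torture takes one person from 1 down to 0.001, while each dust speck / giggle takes a bystander from 1 up to only 1.000001. Then:

    import math

    torture_loss = math.log(0.001)     # one person: utility 1 -> 0.001 (made-up number)
    tiny_gain = math.log(1.000001)     # each bystander: utility 1 -> 1.000001 (made-up)

    breakeven = -torture_loss / tiny_gain
    print(round(breakeven))            # ~6.9 million tiny gains needed to break even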

The proposal has an issue with zeros: if any participant realizes a utility of 0, then under multiplication the group utility is zero (and under sum-of-logs it is negative infinity), regardless of everyone else's utility. The proposal also has an issue with negative utilities; the log of a negative number is undefined. Instead, undesirable outcomes should be represented as fractional utilities (between 0 and 1), and desirable outcomes should be represented as utilities above 1; a 'neutral' utility is 1. This also solves the problem with 0: if a person experiences a neutral utility, which is now 1, there is no problem; and 0 now represents an infinitely undesirable experience, which we can assume is impossible.
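A sketch of that convention (the assert and the specific example values are my own choices, not part of the proposal):

    import math

    def log_utility(u):
        # u > 1: desirable, u == 1: neutral, 0 < u < 1: undesirable.
        # u == 0 would mean an infinitely bad experience, which we assume is impossible.
        assert u > 0, "zero and negative utilities are not representable on this scale"
        return math.log(u)

    print(log_utility(1.0))   # 0.0    (neutral)
    print(log_utility(0.5))   # -0.69  (undesirable)
    print(log_utility(2.0))   # +0.69  (desirable)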

The addition-of-log proposal can be further modified by dividing by number of participants (see 'Average vs total happiness issue' below) to get the average-of-log instead of the total log utility.

Average vs total utility issue

Another issue bearing on the choice of aggregation function is average vs. total utility.

Overpopulation criterion

If the aggregation function is addition (total utility), then it is preferable to have a very large number of people with very low but positive utility, over a moderate number of people with moderately high utility. That is, it is preferable to have an 'overpopulated' world in which everyone is just barely happier than neutral, than a moderately populated world in which people are much happier. This conclusion seems paradoxical to some.
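A toy comparison under total (additive) utility; the population sizes and utility levels are arbitrary, and I take 0 as neutral on this additive scale:

    overpopulated = [0.01] * 1_000_000   # very many people, each barely above neutral
    moderate = [10.0] * 50               # far fewer people, each quite happy

    # Total utility prefers the overpopulated world (10,000 vs 500):
    assert sum(overpopulated) > sum(moderate)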

One proposal arising out of this is to use average, rather than total, utility.

However, average utility alone does not satisfy the fairness criterion; average-of-log does.
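A worked example, on the positive scale from earlier (1 = neutral), with arbitrary numbers: two outcomes with the same plain average, where only average-of-log prefers the more even one:

    import math

    even = [1.25, 1.25]
    uneven = [2.0, 0.5]

    avg = lambda us: sum(us) / len(us)
    avg_log = lambda us: sum(math.log(u) for u in us) / len(us)

    assert avg(even) == avg(uneven) == 1.25   # plain average is indifferent
    assert avg_log(even) > avg_log(uneven)    # ~0.22 > ~0.0: the even split wins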

Don't kill unhappy people criterion

Average utility and average-of-log utility have the issue that, all else being equal, these metrics are increased by killing people whose utility or log-utility, respectively, is below average.
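A numeric illustration with arbitrary numbers; removing a person from the list stands in for their death, which is exactly the move being criticized:

    import math

    avg_log = lambda us: sum(math.log(u) for u in us) / len(us)

    population = [2.0, 2.0, 1.1]   # one person is less happy than the others
    after_killing = [2.0, 2.0]     # ...and has been removed

    assert avg_log(after_killing) > avg_log(population)   # the metric 'improves'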

It may be that this sort of paradox is inescapable as long as only 'plain vanilla' strong utilitarian ethics ('just maximize utility') is used, because without bringing in other distinctions (eg between active and passive choices, potential and actual people, or the presence and absence of force), one cannot distinguish between not causing a new, not-very-happy person to be born, and forcing an already-living, not-very-happy person to die.

See also https://en.wikipedia.org/wiki/Utilitarianism#Average_v._total_happiness

Mere addition issue

See https://en.wikipedia.org/wiki/Mere_addition_paradox

Some tentative conclusions

Given some utility function used by an individual, use the average of the log of this utility as group utility. (Note that the individual utility function may itself already be calculated by taking the log of some other 'pre-utility' quantity, for example wealth; in that case there is still ANOTHER log applied in the aggregation of individual into group utility. For example, having one trillion dollars probably does not make one 1000x as happy as having one billion dollars, so the utility of wealth is likely already sublinear; however much happier a trillionaire is than a billionaire is measured by individual utility, perhaps the log of wealth, and then we still apply a log to these individual utilities in the passage from individual to group utility.)
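A sketch of the two-level construction in the wealth example; the use of log-of-wealth for individual utility is the paragraph's own hypothetical, and the units are arbitrary (wealth must exceed 1 unit so that individual utility is positive before the outer log):

    import math

    def individual_utility(wealth):
        # hypothetical: individual utility is already sublinear (log) in wealth
        return math.log(wealth)

    def group_utility(wealths):
        # tentative conclusion above: average of the log of individual utilities
        return sum(math.log(individual_utility(w)) for w in wealths) / len(wealths)

    # A trillionaire vs a billionaire: 1000x the wealth, but a much smaller
    # difference at the group-utility level.
    print(group_utility([1e9]), group_utility([1e12]))   # ~3.03 vs ~3.32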

Distinguish between the imposition of force/coercion and its absence. Only when certain additional conditions hold (what are these? weak utilitarianism? the golden rule? this is still an open question, to me; I'm not saying no one knows the answer, just that I don't) is the imposition of force justified.

Apply utilitarianism not separately to each choice, but rather as an axiomatic principle to help in choosing ethical rules (rule utilitarianism). This provides collaboration benefits by allowing rules such as 'be honest' (perhaps with some exceptions), which allow people to communicate with each other even when "act utilitarianism" would say that one should lie.