Reframing Average Utilitarianism

https://www.lesswrong.com/posts/xdMscpT2B9qX3ofsp/reframing-average-utilitarianism

World A has a million people with an average utility of 10; world B has 100 people with an average utility of 11. Average utilitarianism says world B is preferable to world A. This seems counterintuitive, since world B has far less total utility, but what if we reframe the question?

Imagine you are behind a veil of ignorance and you have to choose which world you will be instantiated into, becoming one citizen randomly selected from the population. From this perspective, world B is the obvious choice: even though it has far less total utility than world A, you personally get more utility by being instantiated into world B. This remains true even if world B only has 1 citizen, though most people, presumably, have "access to good company" in their utility function.
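To see the two framings side by side, here is a minimal sketch using the numbers above. The only added assumption is that every citizen sits exactly at their world's average, so your expectation conditional on being instantiated is just that average:

```python
# Total utility vs. expected utility for a randomly selected citizen,
# using the numbers from the post. Simplifying assumption: every citizen
# sits exactly at their world's average utility.
world_a = {"population": 1_000_000, "average_utility": 10}
world_b = {"population": 100, "average_utility": 11}

def total_utility(world):
    return world["population"] * world["average_utility"]

def expected_utility_given_instantiation(world):
    # Conditional on being instantiated as a random citizen of this world,
    # your expected utility is simply the world's average.
    return world["average_utility"]

for name, world in [("A", world_a), ("B", world_b)]:
    print(f"World {name}: total utility = {total_utility(world):>10,}, "
          f"E[utility | you live there] = {expected_utility_given_instantiation(world)}")

# World A wins on total utility; world B wins on the per-citizen expectation,
# which is exactly the switch the reframing exploits.
```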

This reframing seems to invert my intuitions. Though this may just mean I am more selfish than most.

Comment

https://www.lesswrong.com/posts/xdMscpT2B9qX3ofsp/reframing-average-utilitarianism?commentId=LEp4YENXcx4xh3cG7

The line of reply I tend to make here goes something along these lines:

If you’re looking at a veil of ignorance, and choosing between world A and world B, it seems you should also be veiled as to whether or not you exist. Whatever probability p you have of existing in world B, you have 10,000 times that probability of existing in world A, because it has 10,000 times as many existing people. In expectation that is 10,000p × 10 = 100,000p of utility from world A against 11p from world B, so opting for world A seems a much better bet.
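Made explicit as a small calculation (nothing here goes beyond the comment's ratio and the post's averages; p is left as a free parameter, and non-existence is assumed to be worth 0):

```python
# Expected utility when existence itself is also behind the veil.
# Assumption: P(you exist in a world) is proportional to its population,
# and not existing contributes 0 utility.
p = 1e-6  # arbitrary chance of existing in world B; only the ratio matters

pop_a, avg_a = 1_000_000, 10
pop_b, avg_b = 100, 11

p_exist_b = p
p_exist_a = p * (pop_a / pop_b)   # 10,000x the people, so 10,000x the chance

ev_a = p_exist_a * avg_a          # 10,000p * 10 = 100,000p
ev_b = p_exist_b * avg_b          # p * 11      = 11p

print(f"E[utility | choose A] = {ev_a:.6f}")
print(f"E[utility | choose B] = {ev_b:.6f}")
# World A comes out ahead by a factor of roughly 9,000 once
# non-existence is on the table.
```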

As an aside, average views tend not to be considered ‘goers’ by those who specialise in population ethics, but for different reasons:

  1. It looks weird with negative numbers. You can make a world where everyone has lives not worth living (−10, say) better on average by adding people whose lives are also not worth living, but not quite as bad (e.g. −9); the sketch after this list puts numbers on this.

  2. It also looks pretty weird with positive numbers. Mere addition seems like it should be okay, even if it drags down the average.

  3. Inseparability also looks costly. If the averaging is over all beings, then the decision as to whether humanity should euthanise itself becomes intimately sensitive to whether there’s an alien species in the next supercluster who are blissfully happy.
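To put a number on the first objection, here is a toy calculation. The −10 and −9 values come from the comment; the head-counts are made up for illustration:

```python
# Toy illustration of objection 1: the average can be "improved" by adding
# miserable people, so long as they are slightly less miserable than average.
original = [-10] * 100          # 100 people with lives not worth living
added    = [-9]  * 900          # 900 more people, also not worth living

def average(xs):
    return sum(xs) / len(xs)

print(f"Average before: {average(original):.2f}")          # -10.00
print(f"Average after : {average(original + added):.2f}")  # -9.10
# The average view says the second world is better, even though everyone in it
# still has a life not worth living and total misery has gone way up.
```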

https://www.lesswrong.com/posts/xdMscpT2B9qX3ofsp/reframing-average-utilitarianism?commentId=K3vHgnZkqPxHWmqxM

This has always been how I thought about it. (I consider myself approximately an Average Utilitarian, with the caveat that this is more descriptive than normative.)

One person who disagreed with me said: "you are not randomly born into ‘a person’, you are randomly born into ‘a collection of atoms.’ A world with fewer atoms arranged into thinking beings increases the chance that you get zero utility, not whatever the average utility is among whichever patterns have developed consciousness."

I disagree with that person, but just wanted to float it as an alternate way of thinking.

Comment

https://www.lesswrong.com/posts/xdMscpT2B9qX3ofsp/reframing-average-utilitarianism?commentId=YCm2vzPe6GF62ikg4

It’s the difference between SIA (the Self-Indication Assumption) and SSA (the Self-Sampling Assumption). If you work with SIA, then you’re randomly chosen from all possible beings, and so in world B you’re less likely to exist.
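One way to make the contrast concrete (the 50/50 prior and the code are my own gloss, not from the comment): both views can start from the same prior over which world you are in, but SIA re-weights that prior by each world's population before taking expectations, while SSA does not.

```python
# Rough contrast between SSA and SIA, using the post's two worlds
# and an assumed uniform prior over them.
worlds = {
    "A": {"population": 1_000_000, "average_utility": 10},
    "B": {"population": 100,       "average_utility": 11},
}
prior = {"A": 0.5, "B": 0.5}  # assumed, not from the post

def expected_utility(weights):
    z = sum(weights.values())
    return sum(weights[w] / z * worlds[w]["average_utility"] for w in worlds)

ssa_weights = {w: prior[w] for w in worlds}                              # prior as-is
sia_weights = {w: prior[w] * worlds[w]["population"] for w in worlds}    # population-weighted

print(f"SSA expected utility: {expected_utility(ssa_weights):.4f}")  # 10.5000
print(f"SIA expected utility: {expected_utility(sia_weights):.4f}")  # ~10.0001
# Under SIA, world A's huge population swamps the prior, so the expectation is
# pulled almost entirely toward world A's average; world B barely matters.
```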

https://www.lesswrong.com/posts/xdMscpT2B9qX3ofsp/reframing-average-utilitarianism?commentId=WjCY73x7HnGtTeLB5

Sure. By hypothesis, you would like to make the future more like world B, on the assumption that you get to live there. That’s what "higher utility" means.

But this set of assumptions seems a little too strong to be helpful when deciding questions like "is it the right thing to create a person who’s going to have a fairly average set of experiences?", because someone with total-utilitarian preferences would do a better job of satisfying their own preferences (i.e. end up with a higher utility score) by creating this person.

So the question is not "Would I want there to be more people if it lowered my utility-number?" The question is "Would there being more people actually lower my utility-number?"
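A toy sketch of that distinction, with entirely made-up utility functions, just to show that the answer to the second question depends on whether other people already show up inside your utility-number:

```python
# Two hypothetical utility functions for "me", evaluated on a world state
# described as (my experience quality, number of other people, their average quality).

def average_style_utility(me, n_others, others_avg):
    # My utility-number tracks only my own experience.
    return me

def total_style_utility(me, n_others, others_avg):
    # My utility-number also counts everyone else's welfare.
    return me + n_others * others_avg

before = (10, 99, 10)    # world before adding a fairly average person
after  = (10, 100, 10)   # the same world plus one more average person

for f in (average_style_utility, total_style_utility):
    delta = f(*after) - f(*before)
    print(f"{f.__name__}: adding an average person changes my utility-number by {delta}")
# For the first agent, more people leaves the utility-number unchanged;
# for the second (total-utilitarian) agent, creating the person raises it.
```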