Contents
- Experience maximizing hedonism
- Infinite Boredom
- What SHOULD we do?

I’m trying to clarify some feelings I had after reading the post Utopic Nightmares. Specifically, this bit:
> But in a future world where advancing technology’s returns on the human condition stop compensating for a state less than perfect hedonism, we can imagine editing boredom out of our lives.

I would like to describe a toy moral theory that, while not exactly what I believe, gets at why I would consider "eliminating boredom" morally objectionable.
Experience maximizing hedonism
Consider an agent that perceives external reality through a set of sensors S_0, S_1, …, S_n. It uses these sensors to build a model of external reality and to estimate its position in that reality at a point in time as a state M_t. It also has a number of actions A_{t,i} available to it at any given time. The agent estimates the number of reachable future states, V(t) = E(|{M_i : there exists a path M_t -> … -> M_i}|), and "chooses" its actions so as to maximize the value of V(t) for some future time t. Obviously, if the agent is dead it can neither perceive nor affect its future state, so it estimates V(dead) = 1. Internally the agent is running some kind of hill-climbing algorithm: after choosing an action A_{t,i} at time t, it experiences a reward of the form R(t) = V(t) - V(t-1). In this way the agent experiences pleasure when it takes actions that increase V(t), experiences pain when it takes actions that decrease V(t), and over time learns to take actions that maximize V(t).
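Here is a minimal sketch of this toy agent in Python. Everything concrete in it is an illustrative assumption, not part of the theory: the external reality is a small deterministic grid with one absorbing "dead" cell, V(t) is computed exactly by breadth-first search rather than estimated, and the policy is a greedy one-step hill-climb.

```python
from collections import deque

# Toy "external reality": cells of a small grid. One cell is absorbing
# ("dead"): it has no outgoing actions, so only itself is reachable.
SIZE = 4
DEAD = (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # the available actions A_{t,i}

def successors(state):
    """States reachable from `state` by a single action."""
    if state == DEAD:
        return []  # dead: the agent can no longer affect its future
    x, y = state
    return [(x + dx, y + dy) for dx, dy in ACTIONS
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def V(state):
    """|{M_i : there exists a path M_t -> ... -> M_i}|, computed by BFS."""
    seen, frontier = {state}, deque([state])
    while frontier:
        for nxt in successors(frontier.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)  # note V(DEAD) == 1: only the dead state itself

def step(state):
    """Greedy hill-climb: take the action that maximizes V; reward is the change in V."""
    options = successors(state)
    if not options:
        return state, 0
    nxt = max(options, key=V)
    return nxt, V(nxt) - V(state)  # R(t) = V(t) - V(t-1)

state = (0, 0)
for t in range(5):
    state, reward = step(state)
    print(f"t={t}: state={state}, V={V(state)}, R={reward}")
```

Two things to notice: the greedy rule never steps onto DEAD (doing so would drop V from 16 to 1), and because every living cell can already reach every other, V plateaus immediately and R(t) sits at zero, which is exactly the "boredom" regime discussed next.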
Infinite Boredom
Now consider the infinite boredom of Utopic Nightmares. In this case the agent reaches a local maximum of V(t), and R(t) is now constant (and equal to zero). But of course there is no reason why R(t) must be zero when V(t) is constant. We could instead have used R_c(t) = V(t) - V(t-1) + c for our hill-climb. The agent would then experience endless bliss (for positive values of c) or endless suffering (for negative values of c). Human experience suggests that our personal setting for c is in fact significantly negative (humans suffer greatly from boredom). What might be the value of using a large negative value for c? Consider the case of simulated annealing, where the algorithm intentionally makes "wrong" moves in order to escape a local maximum. The key consideration is that R(t) is not the thing being optimized; V(t) is. Changing R(t) in a way that doesn’t increase V(t) doesn’t actually improve the situation of our agent, only its perception of the situation. In any case, our minds are the product of evolution, so it appears that historically the fitness-maximizing value of c was negative.
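As a sketch of this modification, the snippet below (reusing V and successors from the sketch above) adds the constant c to the reward and pairs it with a simulated-annealing-style move rule. The cooling schedule and the default value of c are illustrative assumptions, not claims about the right settings.

```python
import math
import random

def reward(v_now, v_prev, c=0.0):
    # R_c(t) = V(t) - V(t-1) + c. The constant c changes how a constant
    # V(t) *feels* (bliss for c > 0, suffering/boredom for c < 0)
    # without changing V(t) itself.
    return v_now - v_prev + c

def annealing_step(state, t):
    """Sometimes accept a V-decreasing move, to escape local maxima of V."""
    options = successors(state)
    if not options:
        return state
    nxt = random.choice(options)
    delta = V(nxt) - V(state)
    temperature = max(1.0 / (1 + t), 1e-9)  # cools as time passes
    # Accept improving moves always; accept "wrong" moves with a
    # probability that shrinks as the temperature drops.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return nxt
    return state
```

The point the code makes concrete is that c appears only in reward(), never in V() or in the move rule: tuning c changes the agent's felt experience at a plateau while leaving its actual situation untouched.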
What SHOULD we do?
At this point an important objection can be made: one cannot derive an ought from an is. Just because humans have an existing bias toward a negative value of c, what does that tell us about what ought to be? Why shouldn’t humans be happy even if relegated to endless boredom? One argument is Chesterton’s fence: until we are quite sure why we dislike boredom so much, we ought not to mess with it. Another is that if humans ever become "content" with boredom, we cut off all possibility of further growth (however small). My main point, though, is that I would consider eliminating boredom wrong because it optimizes for our feelings, R(t), and not our well-being, V(t).
"A superintelligent FAI with total control over all your sensory inputs" seems to me a sufficient condition to avoid boredom. Kind of massive overkill. Unrestricted internet access is usually sufficient. You don’t need to edit out pain sensitivity from humans to avoid pain. You can have a world where nothing painful happens to people. Likewise you don’t need to edit out boredom, you can have a world with lots of interesting things in it. Think of all the things a human in the modern day might do for fun, and add at least as many things that are fun and haven’t been invented yet.
Comment
I read the original post and kind of liked it, but I also very much disagreed with it. I am somewhat befuddled by the chain of reasoning in that post, as well as that of the community in general. In mathematics, you may start from some assumptions and derive lots of things, and if you ever come upon an inconsistency, you normally conclude that one of your assumptions is wrong (if your derivation is okay). Here, though, it seems to me that you make assumptions, derive something ludicrous, and then pat yourself on the back and conclude that obviously everything has to be correct. To me, that does not follow. If you assume an omnipotent basilisk (if you multiply by infinity), then obviously you can derive anything you damn well please. One concrete example (there were many more in the original post):