Contents

- Pain signaling
- To what extent is suffering conditional or complex?
- Consciousness and suffering

Below, I’ve collected some of my thoughts on consciousness. Topics covered (in the post and/or the comments below) include:

- To what extent did subjective pain evolve as a social signal?
- Why did consciousness evolve? What function(s) did it serve?
- What would the evolutionary precursors of ‘full’ consciousness look like?
- What sorts of human values are more or less likely to extend to unconscious things?
- Is consciousness more like a machine (where there’s a sharp cutoff between ‘the machine works’ and ‘the machine doesn’t’), or is it more like a basic physical property like mass (where there’s a continuum from very small fundamental things that have small amounts of the property, all the way up to big macroscopic objects that have way more of the property)?
- How should illusionism (the view that in an important sense we aren’t conscious, but non-phenomenally ‘appear’ to be conscious) change our answers to the questions above?
1. Pain signaling
In September 2019, I wrote on my LW shortform:

Rolf Degen, summarizing part of Barbara Finlay’s "The neuroscience of vision and pain":

Humans may have evolved to experience far greater pain, malaise and suffering than the rest of the animal kingdom, due to their intense sociality giving them a reasonable chance of receiving help.

From the paper:

Several years ago, we proposed the idea that pain, and sickness behaviour had become systematically increased in humans compared with our primate relatives, because human intense sociality allowed that we could ask for help and have a reasonable chance of receiving it. We called this hypothesis ‘the pain of altruism’ [68]. This idea derives from, but is a substantive extension of Wall’s account of the placebo response [43]. Starting from human childbirth as an example (but applying the idea to all kinds of trauma and illness), we hypothesized that labour pains are more painful in humans so that we might get help, an ‘obligatory midwifery’ which most other primates avoid and which improves survival in human childbirth substantially ([67]; see also [69]). Additionally, labour pains do not arise from tissue damage, but rather predict possible tissue damage and a considerable chance of death. Pain and the duration of recovery after trauma are extended, because humans may expect to be provisioned and protected during such periods. The vigour and duration of immune responses after infection, with attendant malaise, are also increased. Noisy expression of pain and malaise, coupled with an unusual responsivity to such requests, was thought to be an adaptation. We noted that similar effects might have been established in domesticated animals and pets, and addressed issues of ‘honest signalling’ that this kind of petition for help raised. No implication that no other primate ever supplied or asked for help from any other was intended, nor any claim that animals do not feel pain. Rather, animals would experience pain to the degree it was functional, to escape trauma and minimize movement after trauma, insofar as possible.

Finlay’s original article on the topic: "The pain of altruism".

[Epistemic status: Thinking out loud]

If the evolutionary logic here is right, I’d naively also expect non-human animals to suffer more to the extent they’re (a) more social, and (b) better at communicating specific, achievable needs and desires.

There are reasons the logic might not generalize, though. Humans have fine-grained language that lets us express very complicated propositions about our internal states. That puts a lot of pressure on individual humans to have a totally ironclad, consistent "story" they can express to others. I’d expect there to be a lot more evolutionary pressure to actually experience suffering, since a human will be better at spotting holes in the narratives of a human who fakes it (compared to, e.g., a bonobo trying to detect whether another bonobo is really in that much pain).

It seems like there should be an arms race across many social species to give increasingly costly signals of distress, up until the costs outweigh the amount of help they can hope to get. But if you don’t have the language to actually express concrete propositions like "Bob took care of me the last time I got sick, six months ago, and he can attest that I had a hard time walking that time too", then those costly signals might be mostly or entirely things like "shriek louder in response to percept X", rather than things like "internally represent a hard-to-endure pain-state so I can more convincingly stick to a verbal narrative going forward about how hard-to-endure this was".
2. To what extent is suffering conditional or complex?
In July 2020, I wrote on my shortform:

[Epistemic status: Piecemeal wild speculation; not the kind of reasoning you should gamble the future on.]

Some things that make me think suffering (or ‘pain-style suffering’ specifically) might be surprisingly neurologically conditional and/or complex, and therefore more likely to be rare in non-human animals (and in subsystems of human brains, in AGI subsystems that aren’t highly optimized to function as high-fidelity models of humans, etc.):
- Degen and Finlay’s social account of suffering above.
- Which things we suffer from seems to depend heavily on mental narratives and mindset. See, e.g., Julia Galef’s Reflections on Pain, from the Burn Unit. Pain management is one of the main things hypnosis appears to be useful for. Ability to cognitively regulate suffering is also one of the main claims of meditators, and seems related to existential psychotherapy’s claim that narratives are more important for well-being than material circumstances. Even if suffering isn’t highly social (pace Degen and Finlay), its dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere: complex things are relatively unlikely a priori, are especially hard to evolve, and demand especially strong selection pressure if they’re to evolve and if they’re to be maintained. (Note that suffering introspectively feels relatively basic, simple, and out of our control, even though it’s not. Note also that *what things introspectively feel like* is itself under selection pressure. If suffering felt complicated, derived, and dependent on our choices, then the whole suite of social thoughts and emotions related to deception and manipulation would be much more salient, both to sufferers and to people trying to evaluate others’ displays of suffering. This would muddle and complicate attempts by sufferers to consistently socially signal that their distress is important and real.)
- When humans experience large sudden neurological changes and are able to remember and report on them, their later reports generally suggest positive states more often than negative ones. This seems true of near-death experiences and drug states, though the case of drugs is obviously filtered: the more pleasant and/or reinforcing drugs will generally be the ones that get used more. Sometimes people report remembering that a state change was scary or disorienting. But they rarely report feeling agonizing pain, and they often either endorse having had the experience (with the benefit of hindsight), or report having enjoyed it at the time, or both. This suggests that humans’ capacity for suffering (especially more ‘pain-like’ suffering, as opposed to fear or anxiety) may be fragile and complex. Many different ways of disrupting brain function seem to prevent suffering, suggesting suffering is the more difficult and conjunctive state for a brain to get itself into; you need more of the brain’s machinery to be in working order to pull it off.
- Similarly, I frequently hear about dreams that are scary or disorienting, but I don’t think I’ve ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged. This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it’s interesting that scary dreams are nonetheless common. This again seems to point toward ‘states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.’
3. Consciousness and suffering
Eliezer recently criticized "people who worry that chickens are sentient and suffering" but "don’t also worry that GPT-3 is sentient and maybe suffering". (He thinks chickens and GPT-3 are both non-sentient.) Jemist responded on LessWrong, and Nate Soares wrote a reply to Jemist that I like:
Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

> EY’s position seems to be that self-modelling is both necessary and sufficient for consciousness.

That is not how it seems to me. My read of his position is more like: "Don’t start by asking ‘what is consciousness’ or ‘what are qualia’; start by asking ‘what are the cognitive causes of people talking about consciousness and qualia’, because while abstractions like ‘consciousness’ and ‘qualia’ might turn out to be labels for our own confusions, the words people emit about them are physical observations that won’t disappear. Once one has figured out what is going on, they can plausibly rescue the notions of ‘qualia’ and ‘consciousness’, though their concepts might look fundamentally different, just as a physicist’s concept of ‘heat’ may differ from that of a layperson. Having done this exercise at least in part, I (Nate’s model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to ‘be conscious’ / ‘have qualia’ in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours)."

> An ordering of consciousness as reported by humans might be: Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation. I don’t know if EY agrees with this.

My model of Eliezer says "Insofar as humans do report this, it’s a fine observation to write down in your list of ‘stuff people say about consciousness’, which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about ‘consciousness’, because it would be an error to act like ‘consciousness’ is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there’s an ineffable redness to red."

> But what surprises me the most about EY’s position is his confidence in it.

My model of Eliezer says "The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the ‘binding problem’, etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn’t intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them."

Some conversations I had on Twitter and Facebook, forking off of Eliezer’s tweet (somewhat arbitrarily ordered, and with ~4 small edits to my tweets):

**Bernardo Subercaseux:** I don’t understand this take at all.
It’s clear to me that the best hypothesis for why chickens have reactions to physical damage that are consistent with our models of expressions of suffering, as also do babies, is bc they can suffer in a similar way that I do. Otoh, best hypothesis for why GPT-3 would say "don’t kill me" is simply because it’s a statistically likely response for a human to have said in a similar context. I think claiming that animals require a certain level of intelligence to experience pain is unfalsifiable...

**Rob Bensinger:** Many organisms with very simple nervous systems, or with no nervous systems at all, change their behavior in response to bodily damage—albeit not in the specific ways that chickens do. So there must be some more specific behavior you have in mind here.

As for GPT-3: if you trained an AI to perfectly imitate all human behaviors, then plausibly it would contain suffering subsystems. This is because real humans suffer, and a good way to predict a system (including a human brain) is to build a detailed emulation of it. GPT-3 isn’t a perfect emulator of a human (and I don’t think it’s sentient), but there’s certainly a nontrivial question of how we can know it’s not sentient, and how sophisticated a human-imitator could get before we’d start wanting to assign non-tiny probability to sentience.

**Bernardo Subercaseux:** I don’t think it’s possible to perfectly imitate all human behavior for anything non-human, in the same fashion as we cannot perfectly imitate all chicken behaviors, or plant behaviors… I think being embodied as a full X is a requisite to perfectly imitate the behavior of X.

**Rob Bensinger:** If an emulated human brain won’t act human-like unless it sees trees and grass, outputs motor actions like walking (with the associated stable sensory feedback), etc., then you can place the emulated brain in a virtual environment and get your predictions about humans that way.

**Bernardo Subercaseux:** My worry is that this converges to the "virtual environment" having to be exactly the real world with real trees and real grass and a real brain made of the same as ours and connected to as many things as ours is connected...

**Rob Bensinger:** Physics is local, so you don’t have to simulate the entire universe to accurately represent part of it. E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail, but you wouldn’t need to simulate anything outside the room in any detail. You can just decide what you want the note to say, and then simulate a realistic-feeling, realistic-looking note coming into existence in the white room’s mail chute.

**Bernardo Subercaseux:** [Physics is local, so you don’t have to simulate the entire universe to accurately represent part of it. E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail...]

i) not sure about the first considering quantum, but you might know more than I do. ii) but I’m not saying the entire universe, just an actual human body. In any case, I still think that a definite answer relies on understanding the physical processes of consciousness, and yet it seems to me that no AI at the moment is close to pose a serious challenge in terms of whether it has the ability to suffer. This in opposition to animals like pigs or chicken...
**Rob Bensinger:** QM allows for some nonlocal-looking phenomena in a sense, but it still has a speed-of-light limit. I don’t understand what you mean by ‘actual human body’ or ‘embodied’. What specific properties of human bodies are important for human cognition, and expensive to simulate?

I think this is a reasonable POV: ‘Humans are related to chickens, so maybe chickens have minds sort of like a human’s and suffer in the situations humans would suffer in. GPT-3 isn’t related to us, so we should worry less about GPT-3, though both cases are worth worrying about.’

I don’t think ‘There’s no reason to worry whatsoever about whether GPT-3 suffers, whereas there are major reasons to worry animals might suffer’ is a reasonable POV, because I haven’t seen a high-confidence model of consciousness grounding that level of confidence in all of that.

**Bernardo Subercaseux:** doesn’t your tweet hold the same if you replace "GPT-3" by "old Casio calculator", or "rock"?

**Rob Bensinger:** I’m modeling consciousness as ‘a complicated cognitive something-we-don’t-understand, which is connected enough to human verbal reporting that we can verbally report on it in great detail’. GPT-3 and chickens have a huge number of (substantially non-overlapping) cognitive skills, very unlike a calculator or a rock. GPT-3 is more human-like in some (but not all) respects. Chickens, unlike GPT-3, are related to humans. I think these facts collectively imply uncertainty about whether chickens and/or GPT-3 are conscious, accompanied by basically no uncertainty about whether rocks or calculators are conscious.

(Also, I agree that I was being imprecise in my earlier statement, so thanks for calling me out on that. 🙂)

**Bernardo Subercaseux:** the relevance of verbal reporting is an entire conversation on its own IMO haha! Thanks for the thought-provoking conversation :) I think we agree on the core, and your comments made me appreciate the complexity of the question at hand!

**Eli Tyre:** I mean, one pretty straightforward thing to say: IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain. IF GPT-3 is sentient, I have no strong reason to think that it is or isn’t in pain.

**Rob Bensinger:** Chickens in factory farms definitely undergo a lot of bodily damage, illness, etc. If there are sentient processes in those chickens’ brains, then it seems like further arguments are needed to establish that the damage is registered by the sentient processes. Then another argument for ‘the damage is registered as suffering’, and another (if we want to establish that such lives are net-negative) for ‘the overall suffering outweighs everything else’. This seems to require a model of what sentience is / how it works / what it’s for.

It might be that the explanation for all this is simple—that you get all this for free by positing a simple mechanism. So I’m not decomposing this to argue the prior must be low. I’m just pointing at what has to be established at all, and that it isn’t a freebie.

**Eli Tyre:** We have lots of intimate experience of how, for humans, damage and nociception leads to pain experience. And the mappings make straightforward evolutionary sense. Once you’re over the hump of positing conscious experience at all, it makes sense that damage is experienced as negative conscious experience. Conditioning on chickens being conscious at all, it seems like the prior is that their [conscious] experience of [nociception] follows basically the same pattern as a human’s.
It would be really surprising to me if humans were conscious and chickens were conscious, but humans were conscious of pain, while chickens weren’t?!? That would seem to imply that conscious experience of pain is adaptive for humans but not for chickens?

Like, assuming that consciousness is that old on the phylogenetic tree, why is conscious experience of pain a separate thing that comes later? I would expect pain to be one of the first things that organisms evolved to be conscious of.

**Rob Bensinger:** I think this is a plausible argument, and I’d probably bet in that direction. Not with much confidence, though, ‘cause it depends a lot on what the function of ‘consciousness’ is over and above things like ‘detecting damage to the body’ (which clearly doesn’t entail ‘conscious’).

My objection was to "IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain." I have no objection to ‘humans and chickens are related, so we can make a plausible guess that if they’re conscious, they suffer in situations where we’d suffer.’

Example: maybe consciousness evolved as a cog in some weird specific complicated function like ‘remembering the smell of your kin’ or, heck, ‘regulating body temperature’. Then later developed things like globalness / binding / verbal reportability, etc.

My sense is there’s a crux here, something like:

Eli: ‘Conscious’ is a pretty simple, all-or-nothing thing that works the same everywhere. If a species is conscious, then we can get a good first-approximation picture by imagining that we’re inside that organism’s skull, piloting its body.

Me: ‘Conscious’ is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

I might still bet in the same direction as you, because I know so little about which ways chicken consciousness would differ from my consciousness, so I’m forced to not make many big directional updates away from human anchors. But I expect way more unrecognizable-weirdness.

More specifically, re "why is conscious experience of pain a separate thing that comes later": https://www.lesswrong.com/posts/HXyGXq9YmKdjqPseW/rob-b-s-shortform-feed?commentId=mZw9Jaxa3c3xrTSCY#mZw9Jaxa3c3xrTSCY [section 2 above] provides some reasons to think pain-suffering is relatively conditional, complex, social, high-level, etc. in humans. And noticing + learning from body damage seems like a very simple function that we already understand how to build.

If a poorly-understood thing like consc is going to show up in surprising places, it would probably be more associated with functions that are less straightforward. E.g., it would be shocking if consc evolved to help organisms notice body damage, or learn to avoid such damage. It would be less shocking if a weird esoteric aspect of intelligence (e.g., ‘aggregating neural signals in a specific efficiency-improving way’) caused consciousness. But in that case we should be less confident, assuming chickens are conscious, that their consciousness is ‘hooked up’ to trivial-to-implement stuff like ‘learning to avoid bodily damage at all’.

**silencenbetween:** [This seems to require a model of what sentience is.]

I think I basically agree with you, that there is a further question of pain=suffering? And that that would ideally be established. But I feel unsure of this claim.
Like, I have guesses about consciousness and I have priors and intuitions about it, and that leads me to feeling fairly confident that chickens experience pain. But to my mind, we can never unequivocally establish consciousness, it’s always going to be a little bit of guesswork. There’s always a further question of first hand experience. And in that world, models of consciousness refine our hunches of it and give us a better shared understanding, but they never conclusively tell us anything.

I think this is a stronger claim than just using Bayesian reasoning. Like, I don’t think you can have absolute certainty about anything… but I also think consciousness inhabits a more precarious place. I’m just making the hard problem arguments, I guess, but I think they’re legit.

I don’t think the hard problem implies that you are in complete uncertainty about consciousness. I do think it implies something trickier about it relative to other phenomena. Which to me implies that a model of the type you’re imagining wouldn’t conclusively solve the problem any more than models of the sort "we’re close to each other evolutionarily" do. I think models of it can help refine our guesses about it, give us clues, I just don’t see any particular model being the final arbiter of what counts as conscious. And in that world I want to put more weight on the types of arguments that Eli is making. So, I guess my claim is that these lines of evidence should be about as compelling as other arguments.

**Rob Bensinger:** I think we’ll have a fully satisfying solution to the hard problem someday, though I’m pretty sure it will have to route through illusionism—not all parts of the phenomenon can be saved, even though that sounds paradoxical or crazy.

If we can’t solve the problem, though (the phil literature calls this view ‘mysterianism’), then I don’t think that’s a good reason to be more confident about which organisms are conscious, or to put more weight on our gut hunches. I endorse the claim that the hard problem is legit (and hard), btw, and that it makes consciousness trickier to think about in some ways.

**David Manheim:** [It might be that the explanation for all this is simple—that you get all this for free by positing a simple mechanism. So I’m not decomposing this to argue the prior must be low. I’m just pointing at what has to be established at all, and that it isn’t a freebie.]

Agreed—but my strong belief on how any sentience / qualia would need to work is that it would be beneficial to evolutionary fitness, meaning pain would need to be experienced as a (fairly strong) negative for it to exist. Clearly, the same argument doesn’t apply to GPT-3.

**Rob Bensinger:** I’m not sure what you mean by ‘pain’, so I’m not sure what scenarios you’re denying (or why).

Are you denying the scenario: part of an organism’s brain is conscious (but not tracking or learning from bodily damage), and another (unconscious) part is tracking and learning from bodily damage?

Are you denying the scenario: a brain’s conscious states are changing in response to bodily damage, in ways that help the organism better avoid bodily damage, and none of these conscious changes feel ‘suffering-ish’ / ‘unpleasant’ to the organism?

I’m not asserting that these are all definitely plausible, or even possible—I don’t understand where consciousness comes from, so I don’t know which of these things are independent. But you seem to be saying that some of these things aren’t independent, and I’m not sure why.
**David Manheim:** Clearly something was lost here—I’m saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it’s beneficial.

**Rob Bensinger:** I don’t find it difficult to believe that a brain could have a conscious part doing something specific (eg, storing what places look like), and a separate unconscious part doing things like ‘learning which things cause bodily damage and tweaking behavior to avoid those’.

I also don’t find it difficult to believe that a conscious part of a brain could ‘learn which things cause bodily damage and tweak behavior to avoid those’ without experiencing anything as ‘bad’ per se. Eg, imagine building a robot that has a conscious subsystem. The subsystem’s job is to help the robot avoid bodily damage, but the subsystem experiences this as being like a video game—it’s fun to rack up ‘the robot’s arm isn’t bleeding’ points.

**David Manheim:** That is possible for a [robot]. But in a chicken, you’re positing a design with separable features and subsystems that are conceptually distinct. Biology doesn’t work that way—it’s spaghetti towers the whole way down.

**Rob Bensinger:** What if my model of consciousness says ‘consciousness is an energy-intensive addition to a brain, and the more stuff you want to be conscious, the more expensive it is’? Then evolution will tend to make a minimum of the brain conscious—whatever is needed for some function.

[people can be conscious about almost any signal that reaches their brain, largely depending on what they have been trained to pay attention to]

This seems wrong to me—what do you mean by ‘almost any signal’? (Is there a paper you have in mind operationalizing this?)

**David Manheim:** I mean that people can choose / train to be conscious of their heartbeat, or pay attention to certain facial muscles, etc, even though most people are not aware of them. (And obviously small children need to be trained to pay attention to many different bodily signals.)

[Clearly something was lost here—I’m saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it’s beneficial.]

(I see—yeah I phrased my tweet poorly.) I meant that pain would need to be experienced as a negative for consciousness to exist—otherwise it seems implausible that it would have evolved.

**Rob Bensinger:** I felt like there were a lot of unstated premises here, so I wanted to hear what premises you were building in (eg, your concept of what ‘pain’ is). But even if we grant everything, I think the only conclusion is "pain is less positive than non-pain", not "pain is negative".

**David Manheim:** Yeah, I grant that the only conclusion I lead to is relative preference, not absolute value. But for humans, I’m unsure that there is a coherent idea of valence distinct from our experienced range of sensation. Someone who’s never missed a meal finds skipping lunch painful.

**Rob Bensinger:** ? I think the idea of absolute valence is totally coherent for humans. There’s such a thing as hedonic treadmills, but the idea of ‘not experiencing a hedonic treadmill’ isn’t incoherent.
**David Manheim:** Being unique only up to linear transformations implies that utilities don’t have a coherent notion of (psychological) valence, since you can always add some number to shift it. That’s not a hedonic treadmill, it’s about how experienced value is relative to other things.

**Rob Bensinger:** ‘There’s no coherent idea of valence distinct from our experienced range of sensation’ seems to imply that there’s no difference between ‘different degrees of horrible torture’ and ‘different degrees of bliss’, as long as the organism is constrained to one range or the other. Seems very false!

**David Manheim:** It’s removed from my personal experience, but I don’t think you’re right. If you read Knut Hamsun’s "Hunger", it really does seem clear that even in the midst of objectively painful experiences, people find happiness in slightly less pain. On the other hand, all of us experience what is, historically, an unimaginably wonderful life. Of course, it’s my inside-view / typical mind assumption, but we experience a range of experienced misery and bliss that seems very much comparable to what writers discuss in the past.

**Rob Bensinger:** This seems like ‘hedonic treadmill often happens in humans’ evidence, which is wildly insufficient even for establishing ‘humans will perceive different hedonic ranges as 100% equivalent’, much less ‘this is true for all possible minds’ or ‘this is true for all sentient animals’.

"even in the midst of objectively painful experiences, people find happiness in slightly less pain" isn’t even the right claim. You want ‘people in objectively painful experiences can’t comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc’

**David Manheim:** [This seems like ‘hedonic treadmill often happens in humans’ evidence, which is wildly insufficient even for establishing ‘humans will perceive different hedonic ranges as 100% equivalent’, much less ‘this is true for all possible minds’ or ‘this is true for all sentient animals’.]

Agreed—I’m not claiming it’s universal, just that it seems at least typical for humans.

["even in the midst of objectively painful experiences, people find happiness in slightly less pain" isn’t even the right claim. You want ‘people in objectively painful experiences can’t comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc’]

Flip it, and it seems trivially true - ‘people in objectively wonderful experiences can’t comprehend the idea that their experience is better, seem just as likely to be sad or upset as anyone else, etc’

**Rob Bensinger:** I don’t think that’s true. E.g., I think people suffering from chronic pain acclimate a fair bit, but nowhere near completely. Their whole life just sucks a fair bit more; chronic pain isn’t a happiness- or welfare-preserving transformation.

Maybe people believe their experiences are a lot like other people’s, but that wouldn’t establish that humans (with the same variance of experience) really do have similar-utility lives. Even if you’re right about your own experience, you can be wrong about the other person’s in the comparison you’re making.

**David Manheim:** Agreed—but I’m also unsure that there is any in-principle way to unambiguously resolve any claims about how good/bad relative experiences are, so I’m not sure how to move forward about discussing this.
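As an aside on the ‘unique only up to linear transformations’ point: this is the standard fact that a von Neumann–Morgenstern utility function is only pinned down up to a positive affine transformation. A minimal sketch, with notation that is mine rather than anything from the thread:

```latex
% Standard vNM fact: if U represents an agent's preferences over lotteries,
% then so does any positive affine transformation of U.
\[
  U'(x) = a\,U(x) + b, \qquad a > 0 .
\]
% Since b is arbitrary, the sign of U(x) carries no preference information;
% only the ordering and the ratios of utility differences are preserved:
\[
  U(x) > U(y) \iff U'(x) > U'(y), \qquad
  \frac{U(x)-U(y)}{U(y)-U(z)} = \frac{U'(x)-U'(y)}{U'(y)-U'(z)} .
\]
```

That is the sense in which the formalism alone only gives you ‘relative preference, not absolute value’; whether experienced valence has a non-arbitrary zero point is the further question being argued above.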
[I also don’t find it difficult to believe that a conscious part of a brain could ‘learn which things cause bodily damage and tweak behavior to avoid those’ without experiencing anything as ‘bad’ per se.]

That involves positing two separate systems which evidently don’t interact that happen to occupy the same substrate. I don’t see how that’s plausible in an evolved system.

**Rob Bensinger:** Presumably you don’t think in full generality ‘if a conscious system X interacts a bunch with another system Y, then Y must also be conscious’. So what kind of interaction makes consciousness ‘slosh over’?

I’d claim that there are complicated systems in my own brain that have tons of causal connections to the rest of my brain, but that I have zero conscious awareness of. (Heck, I wouldn’t be shocked if some of those systems are suffering right now, independent of ‘my’ experience.)

**Jacy Anthis:** [But in that case we should be less confident, assuming chickens are conscious, that their consciousness is ‘hooked up’ to trivial-to-implement stuff like ‘learning to avoid bodily damage at all’.]

I appreciate you sharing this, Rob. FWIW you and Eliezer seem confused about consciousness in a very typical way, No True Scotsmanning each operationalization that comes up with vague gestures at ineffable qualia. But once you’ve dismissed everything, nothing meaningful is left.

**Rob Bensinger:** I mean, my low-confidence best guess about where consciousness comes from is that it evolved in response to language. I’m not saying that it’s impossible to operationalize ‘consciousness’. But I do want to hear decompositions before I hear confident claims ‘X is conscious’.

**Jacy Anthis:** At least we agree on that front! I would extend that for ‘X is not conscious’, and I think other eliminativists like Brian Tomasik would agree that this is a huge problem in the discourse.

**Rob Bensinger:** Yep, I agree regarding ‘X is not conscious’. (Maybe I think it’s fine for laypeople to be confident-by-default that rocks, electrons, etc are unconscious? As long as they aren’t so confident they could never update, if a good panpsychism argument arose.)

**Sam Rosen:** It’s awfully suspicious that:
- Pigs in pain look and sound like what we would look and sound like if we were in pain.
- Pigs have a similar brain to us with similar brain structures.
- The parts of the brain that light up in pigs’ brains when they are in pain are the same as the parts of the brain that light up in our brains when we are in pain. (I think this is true, but could totally be wrong.)
- Pain plausibly evolved as a mechanism to deter receiving physical damage, which pigs need just as much as humans.
- Pain feels primal and simple—something a pig could understand. It’s not like counterfactual reasoning, abstraction, or complicated emotions like sonder.

It just strikes me as plausible that pigs can feel lust, thirst, pain and hunger—and humans merely evolved to learn how to talk about those things. It strikes me as less plausible that pigs unconsciously have mechanisms that control "lust," "thirst," "pain," and "hunger" and humans became the first species on Earth that made all those unconscious functional mechanisms conscious. (Like why did humans only make those unconscious processes conscious? Why didn’t humans, when emerging into consciousness, become conscious of our heart regulation and immune system and bones growing?)

It’s easier to evolve language and intelligence than it is to evolve language and intelligence PLUS a way of integrating and organizing lots of unconscious systems into a consciousness-producing system where the attendant qualia of each subsystem incentivizes the correct functional response.

**Rob Bensinger:** Why do you think pigs evolved qualia, rather than evolving to do those things without qualia? Like, why does evolution like qualia?

**Sam Rosen:** I don’t know if evolution likes qualia. It might be happy to do things unconsciously. But thinking non-human animals aren’t conscious of pain or thirst or hunger or lust means adding a big step from apes to humans. Evolution prefers smooth gradients to big steps. My last paragraph of my original comment is important for my argument.

**Rob Bensinger:** My leading guess about where consciousness comes from is that it evolved in response to language. Once you can report fine-grained beliefs about your internal state (including your past actions, how they cohere with your present actions, how this coherence is virtuous rather than villainous, how your current state and future plans are all the expressions of a single Person with a consistent character, etc.), there’s suddenly a ton of evolutionary pressure for you to internally represent a ‘global you state’ to yourself, and for you to organize your brain’s visible outputs to all cohere with the ‘global you state’ narrative you share with others; where almost zero such pressure exists before language.

Like, a monkey that emits different screams when it’s angry, hungry, in pain, etc. can freely be a Machiavellian reasoner: it needs to scream in ways that at least somewhat track whether it’s really hungry (or in pain, etc.), or others will rapidly learn to distrust its signals and refuse to give aid. But this is a very low-bandwidth communication channel, and the monkey is free to have basically any internal state (incoherent, unreflective, unsympathetic-to-others, etc.) as long as it ends up producing cries in ways that others will take sufficiently seriously. (But not maximally seriously, since never defecting/lying is surely not going to be the equilibrium here, at least for things like ‘I’m hungry’ signals.)

The game really does change radically when you’re no longer emitting an occasional scream, but are actually constructing sentences that tell stories about your entire goddamn brain, history, future behavior, etc.

**Nell Watson:** So, in your argument, would it follow then that feral human children or profoundly autistic human beings cannot feel pain, because they lack language to codify their conscious experience?

**Rob Bensinger:** Eliezer might say that? Since he does think human babies aren’t conscious, with very high confidence.
But my argument is evolutionary, not developmental. Evolution selected for consciousness once we had language (on my account), but that doesn’t mean consciousness has to depend on language developmentally.
What’s the reason for assuming that? Is it based on a general feeling that value is complex, and you don’t want to generalize much beyond the prototype cases? That would be similar to someone who really cares about piston steam engines but doesn’t care much about other types of steam engines, much less other types of engines or mechanical systems.
I would tend to think that a prototypical case of a human noticing his own qualia involves some kind of higher-order reflection that yields the quasi-perceptual illusions that illusionism talks about with reference to some mental state being reflected upon (such as redness, painfulness, feeling at peace, etc). The specific ways that humans do this reflection and report on it are complex, but it’s plausible that other animals might do simpler forms of such things in their own ways, and I would tend to think that those simpler forms might still count for something (in a similar way as other types of engines may still be somewhat interesting to a piston-steam-engine aficionado). Also, I think some states in which we don’t actively notice our qualia probably also matter morally, such as when we’re in flow states totally absorbed in some task.
Here’s an analogy for my point about consciousness. Humans have very complex ways of communicating with each other (verbally and nonverbally), while non-human animals have a more limited set of ways of expressing themselves, but they still do so to greater or lesser degrees. The particular algorithms that humans use to communicate may be very complex and weird, but why focus so heavily on those particular algorithms rather than the more general phenomenon of animal communication?
Anyway, I agree that there can be some cases where humans have a trait to such a greater degree than non-human animals that it’s fair to call the non-human versions of it negligible, such as if the trait in question is playing chess, calculating digits of pi, or writing poetry. I do maintain some probability (maybe like 25%) that the kinds of things in human brains that I would care most about in terms of consciousness are almost entirely absent in chicken brains.
Comment
I’ve had a few dreams in which someone shot me with a gun, and it physically hurt about as much as a moderate stubbed toe or something (though the pain was in my abdomen where I got shot, not my toe). But yeah, pain in dreams seems pretty rare for me unless it corresponds to something that’s true in real life, as you mention, like being cold, having an upset stomach, or needing to urinate.
Googling {pain in dreams}, I see a bunch of discussion of this topic. One paper says:
I would also add that the fear responses, while participating in the hallucinations, aren’t themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They’re just emotional responses to the contents of our dreams.
Since pain involves both sensory and affective components, which rarely come apart, and the sensory component precedes the affective one, it’s enough not to hallucinate the sensory component.
I do feel like pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions, but one potential similarity is that it was more fitness-enhancing for sharp pain (and other internal signals going haywire) to wake us, but not so for sight, sound or emotions. Loud external sounds still wake us, too, but maybe only much louder than what we dream.
It’s not clear that you intended otherwise, but I would also assume that there isn’t something suppressing pain hallucination (like a hyperparameter), but rather that hallucination is costly and doesn’t happen by default, so only things that are useful and safe to hallucinate get hallucinated.
Also, don’t the senses evoked in dreams mostly match what people can "imagine" internally while awake, i.e. mostly just sight and sound? There could be common mechanisms here. Can people imagine pains? I’ve also heard it claimed that our inner voices only have one volume, so maybe that’s also true of sound in dreams?
FWIW, I think I basically have aphantasia, so can’t visualize well, but I think my dreams have richer visual experiences.
Comment
I disagree with this statement. For me, the contents of a dream seem only weakly correlated with whether I feel afraid during the dream. I’ve had many dreams with seemingly ordinary content (relative to the baseline of general dream weirdness) that were nevertheless extremely terrifying, and many dreams with relatively weird and disturbing content that were not frightening at all.
Makes sense to me, and seems like a good reason not to update (or to update less) from dreams to ‘pain is fragile’.
I have an alternative hypothesis about how consciousness evolved. I’m not especially confident in it.

In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is "How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?" I think the core cognitive traits in question originally evolved to model the internal state of conspecifics, and make inferences about task performances, and were exapted for other purposes later. I consider imitation learning a good candidate among cognitive abilities that hominins may have evolved since the last common ancestor with chimpanzees, since as I understand it, chimps are quite bad at imitation learning.

So the first step may have been hominins obtaining the ability to see another hominin performing a skill as another hominin performing a skill, in a richer way than chimps, like "That-hominin is knapping, that-hominin is striking the core at this angle." (Not to imply that language has emerged yet; verbal descriptions of thoughts just correspond well to the contents of those thoughts. Consider this hypothesis silent on the evolution of language at the moment.) Then perhaps recursive representations about skill performance, like "This-hominin feels like this part of the task is easy, and this part is hard." I’m not very committed on whether self-representations or other-representations came first. Then higher-order things like, "This-hominin finds it easier to learn a task when parts of the task are performed more slowly, so when this-hominin performs this task in front of that-hominin-to-be-taught, this-hominin should exaggerate this part, or that part of the task." And then, "This-hominin-that-teaches-me is exaggerating this part of the task," which implicitly involves representing all those lower order thoughts that lead to the other hominin choosing to exaggerate the task, and so on. This is just one example of how these sorts of cognitive traits could improve learning efficiency, in sink and source.

Once hominins encounter cooperative contexts that *require* norms to generate a profit, there is selection for these aforementioned general imitation learning mechanisms to be exapted for learning norms, which could result in metarepresentations of internal state relevant to norms, like emotional distress, among other things. I also think this mechanism is a large part of how evolution implements moral nativism in humans. *Recursive* metarepresentations of one’s own emotional distress can be informative when learning norms as well. Insofar as one’s own internal state is informative about the True Norms, evolution can constrain moral search space by providing introspective access to that internal state. On this view, this is pretty much what I think suffering is, where the internal state is physical or emotional distress.

I think this account allows for more or less conscious agents, since for every object-level representation, there can be a new metarepresentation, so as minds become richer, so does consciousness. I don’t mean to imply that full-blown episodic memory, autobiographical narrative, and so on falls right out of a scheme like this.
But it also seems to predict that mostly just hominins are conscious, and maybe some other primates to a limited degree, and maybe some other animals that we’ll find have convergently evolved consciousness, maybe elephants or dolphins or magpies, but also probably not in a way that allows them to implement suffering. I don’t feel that I need to invoke the evolution of language for any of this to occur; I find I don’t feel the need to invoke language for most explicanda in human evolution, actually. I think consciousness preceded the ability to make verbal reports about consciousness.

I also don’t mean to imply that dividing as opposed to making pies is a *small* fraction of the task demands that hominins faced historically, but I also don’t think it’s the largest fraction. Your explanation does double-duty, with its assumptions at least, and kind of explains how human cooperation is stable where it wouldn’t be by default. I admit that I don’t provide an alternative explanation, but I also feel like it’s outside of the scope of the conversation and I do have alternative explanations in mind that I could shore up if pressed.
Regarding the argument about consciousness evolving as a way for humans to report their internal state, I think there’s a sensible case that "unconscious pain" matters, even when it’s not noticed or reported on by higher-order processes. This plausibly moves the goalposts away from "finding what beings are conscious in the sense of being aware of their own awareness" (as I believe Eliezer has roughly put it). To make this case I’d point to this essay from Brian Tomasik, which I find compelling. I would particularly like to quote this part of it,
Comment
I maybe 1⁄4 agree with this, though I also think this is a domain that’s evidence-poor enough (unless Brian or someone else knows a lot more than me!) that there isn’t a lot that can be said with confidence. Here’s my version of reasoning along these lines (in a Sep. 2019 shortform post):
Comment
Thanks for this discussion. :)
I think that’s kind of the key question. Is what I care about as precise as "piston steam engine" or is it more like "mechanical devices in general, with a huge increase in caring as the thing becomes more and more like a piston steam engine"? This relates to the passage of mine that Matthew quoted above. If we say we care about (or that consciousness is) this thing going on in our heads, are we pointing at a very specific machine, or are we pointing at machines in general with a focus on the ones that are more similar to the exact one in our heads? In the extreme, a person who says "I care about what’s in my head" is an egoist who doesn’t care about other humans. Perhaps he would even be a short-term egoist who doesn’t care about his long-term future (since his brain will be more different by then). That’s one stance that some people take. But most of us try to generalize what we care about beyond our immediate selves. And then the question is how much to generalize.
It’s analogous to someone saying they love "that thing" and pointing at a piston steam engine. How much generality should we apply when saying what they value? Is it that particular piston steam engine? Piston steam engines in general? Engines in general? Mechanical devices in general with a focus on ones most like the particular piston steam engine being pointed to? It’s not clear, and people take widely divergent views here.
I think a similar fuzziness will apply when trying to decide for which entities "there’s something it’s like" to be those entities. There’s a wide range in possible views on how narrowly or broadly to interpret "something it’s like".
I think those statements can apply to vanishing degrees. It’s usually not helpful to talk that way in ordinary life, but if we’re trying to have a full theory of repressing one’s emotions in general, I expect that one could draw some strained (or poetic, as you said) ways in which rocks are doing that. (Simple example: the chemical bonds in rocks are holding their atoms together, and without that the atoms of the rocks would move around more freely the way the atoms of a liquid or gas do.) IMO, the degree of applicability of the concept seems very low but not zero. This very low applicability is probably only going to matter in extreme situations, like if there are astronomical numbers of rocks compared with human-like minds.
My intuition is that consciousness is not easily classifiable in the same way a piston steam engine would be, even if you knew relatively little about how piston steam engines worked. I note that your viewpoint here seems similar to Eliezer’s analogy in Fake Causality. The difference, I imagine, is that consciousness doesn’t seem to be defined via a set of easily identifiable functional features. There is an extremely wide range of viewpoints about what constitutes a conscious experience, what properties consciousness has, and what people are even talking about when they use the word (even though it is sometimes said to be one of the "most basic" or "most elementary" concepts to us).
The question I care most about is not "how does consciousness work" but "what should I care about?" Progress on questions about "how X works" has historically yielded extremely crisp answers, explainable by models that use simple moving parts. I don’t think we’ve made substantial progress in answering the other question with simple, crisp models. One way of putting this is that if you came up to me with a well-validated, fundamental theory of consciousness (and somehow this was well defined), I might just respond, "That’s cool, but I care about things other than consciousness (as defined in that model)." It seems like the more you’re able to answer the question precisely and thoroughly, the more I’m probably going to disagree that the answer maps perfectly onto my intuitions about what I ought to care about.
The brain is a kludge, and doesn’t seem like the type of thing we should describe as a simple, coherent, unified engine. There are certainly many aspects of cognition that are very general, but most don’t seem like the type of thing I’d expect to be exclusively present in humans but not other animals. This touches on some disagreements I (perceive I) have with the foom perspective, but I think that even people from that camp would mostly agree with the weak version of this thesis.
Comment
- Our core, ultimate values are something we know very, very little about.
- The true nature of consciousness is something we know almost nothing about.
- Which particular computational processes are occurring in animal brains is something we know almost nothing about.

When you combine three blank areas of your map, the blank parts don’t cancel out. Instead, you get a part of your map that you should be even more uncertain about. I don’t see a valid way to leverage that blankness-of-map to concentrate probability mass on ‘these three huge complicated mysterious brain-things are really similar to rocks, fungi, electrons, etc.’.

Rather, ‘moral value is a kludge’ and ‘consciousness is a kludge’ both make me update toward thinking the set of moral patients is smaller -- these engines don’t become less engine-y via being kludges, they just become more complicated and laden-with-arbitrary-structure. A blank map of a huge complicated neural thingie enmeshed with verbal reasoning and a dozen other cognitive processes in intricate ways, is not the same as a filled-in map of something that’s low in detail and has very few crucial highly contingent or complex components. The lack of detail is in the map, but the territory can be extraordinarily detailed. And any of those details (either in our CEV, or in our consciousness) can turn out to be crucial in a way that’s currently invisible to us.

It sounds to me like you’re updating in the opposite direction—these things are kludges, therefore we should expect them (and their intersection, ‘things we morally value in a consciousness-style way’) to be simpler, more general, more universal, less laden with arbitrary hidden complexity. Why update in that direction?
Comment
Thinking about it more, my brain generates the following argument for the perspective I think you’re advocating:
Comment
"Human values" don’t seem to be primarily what I care about. I care about "my values" and I’m skeptical that "human values" will converge onto what I care about.
I have intuitions that ethics is a lot more arbitrary than you seem to think it is. Your argument is peppered with statements to the effect of "What would our CEV endorse?". I do agree that some degree of self-reflection is good, but I don’t see any strong reason to think that reflection alone will naturally lead all or most humans to the same place, especially given that the reflection process is underspecified.
You appear to have interpreted my intuitions about the arbitrariness of concepts as instead being about the complexity and fragility of concepts, and expressed confusion about the latter. Note that I think this reflects a basic miscommunication on my part, not yours. I do have some intuitions about complexity, less about fragility; but my statements above were (supposed to be) more about arbitrariness (I think).
Comment
The way I imagine any successful theory of consciousness going is that even if it has a long parts (processes) list, every feature on that list will apply pretty ubiquitously to at least a tiny degree. Even if the parts need to combine in certain ways, that could also happen to a tiny degree in basically everything, although I’m much less sure of this claim; I’m much more confident that I can find the parts in a lot of places than in the claim that basically everything is like each part, so finding the right combinations could be much harder. The full complexity of consciousness might still be found in basically everything, just to a usually negligible degree. I’ve written more on this here.
Comment
I’d say my visualization of consciousness is less like a typical steam engine or table, and more like a Rube Goldberg machine designed by a very confused committee of terrible engineers. You can remove some parts of the machine without breaking anything, but a lot of other parts are necessary for the thing to work. It should also be possible to design an AI that has ‘human-like consciousness’ via a much less kludge-ish process—I don’t think that much complexity is morally essential. But chickens were built by a confused committee just like humans were, so they’ll have their own enormous intricate kludges (which may or may not be the same kind of machine as the Consciousness Machine in our heads), rather than having the really efficient small version of the consciousness-machine.
Note: I think there’s also a specific philosophical reason to think consciousness is pretty ubiquitous and fundamental—the hard problem of consciousness. The ‘we’re investing too much metaphysical importance into our pet obsession’ thing isn’t the only reason anyone thinks consciousness (or very-consciousness-ish things) might be ubiquitous. But per illusionism, I think this philosophical reason turns out to be wrong in the end, leaving us without a principled reason to anthropomorphize / piston-steam-engine-omorphize the universe like that.
Comment
It’s true (on your view and mine) that there’s a pervasive introspective, quasi-perceptual illusion humans suffer about consciousness. But the functional properties of consciousness (or of ‘the consciousness-like thing we actually have’) are all still there, behind the illusion. Swapping from the illusory view to the almost-functionally-identical non-illusory view, I strongly expect, will not cause us to stop caring about the underlying real things (thoughts, and feelings, and memories, and love, and friendship). And if we still care about those real things, then our utility function is still (I claim) pretty obsessed with some very specific and complicated engines/computations. (Indeed, a lot more specific and complicated than real-world piston steam engines.)

I’d expect it to mostly look more like how our orientation to water and oars changes when we realize that the oar half-submerged in the water isn’t really bent. I don’t expect the revelation to cause humanity to replace its values with such vague values that we reshape our lives around slightly adjusting the spatial configurations of rocks or electrons, because our new ‘generalized friendship’ concept treats some common pebble configurations as more or less ‘friend-like’, more or less ‘asleep’, more or less ‘annoyed’, etc.

(Maybe we’ll do a little of that, for fun, as a sort of aesthetic project / a way of making the world feel more beautiful. But that gets us closer to my version of ‘generalizing human values to apply to unconscious stuff’, not Brian’s version.)
Comment
Like, yes, when I say ‘the moral evaluation function takes the dog’s brain as an input, not the cuteness of its overt behaviors’, I am talking about a moral evaluation function that we have to extract from the human’s brain. But the human moral evaluation function is a totally different function from the ‘does-this-thing-make-noises-and-facial-expressions-that-naturally-make-me-feel-sympathy-for-it-before-I-learn-any-neuroscience?’ function, even though both are located in the human brain.
Comment
Thinking (with very low confidence) about an idealized, heavily self-modified, reflectively consistent, CEV-ish version of me: If it turns out that squirrels are totally unconscious automata, then I think Ideal Me would probably at least weakly prefer to not go around stepping on squirrels for fun. I think this would be for two reasons:
The kind of reverence-for-beauty that makes me not want to randomly shred flowers to pieces. Squirrels can be beautiful even if they have no moral value. Gorgeous sunsets plausibly deserve a similar kind of reverence.
The kind of disgust that makes me not want to draw pictures of mutilated humans. There may be nothing morally important about the cognitive algorithms in squirrels’ brains; but squirrels still have a lot of anatomical similarities to humans, and the visual resemblance between the two is reason enough to be grossed out by roadkill.

In both cases, these don’t seem like obviously bad values to me. (And I’m pretty conservative about getting rid of my values! Though a lot can and should change eventually, as humanity figures out all the risks and implications of various self-modifications. Indeed, I think the above descriptions would probably look totally wrong, quaint, and confused to a real CEV of mine; but it’s my best guess for now.)

In contrast, conflating the moral worth of genuinely-totally-conscious things (insofar as anything is genuinely conscious) with genuinely-totally-unconscious things seems… actively bad, to me? Not a value worth endorsing or protecting?

Like, maybe you think it’s implausible that squirrels, with all their behavioral complexity, could have ‘the lights be off’ in the way that a roomba with a cute face glued to it has ‘the lights off’. I disagree somewhat, but I find that view vastly less objectionable than ‘it doesn’t even matter what the squirrel’s mind is like, it just matters how uneducated humans naively emotionally respond to the squirrel’s overt behaviors’.

Maybe a way of gesturing at the thing is: Phenomenal consciousness is an illusion, but the illusion adds up to normality. It doesn’t add up to ‘therefore the difference between automata / cartoon characters and things-that-actually-have-the-relevant-mental-machinery-in-their-brains suddenly becomes unimportant (or even less important)’.
I feel like people on here are just picking apart each other’s arguments without really dealing with the main arguments presented. A lot of the time it’s not very clear what the focus is, anyway. I think he’s just referring to a different perspective he’s read about, as one way to look at things. Your example of obsession is only used to discredit the legitimacy of that perspective instead of actually adding value to the conversation.
I think the self-reflective part of evolution brought the revelation of suffering into our understanding. Self-unaware computations simply operate on pain as a carrot/stick system, as they initially evolved to do. Most of the laws of civilization are about reducing suffering in the population. This realization has introduced new concepts regarding the relationship between ourselves, as individual self-contained computations, and the smaller chunks of functions/computations that exist within us. Because of the carrot/stick functionality, by minimizing suffering we also achieve what the function was originally designed to do: help with our self-preservation. This is the first level of the self-referential loop. In the second loop, we can see that this type of harm reduction is mainly geared toward the preservation of our own genes, knowledge we owe to the people who discovered multicellular organisms and the genetic makeup of living things. We can then reflect on this loop again to see whether we should do anything different given our new knowledge.
If one accepts Eliezer Yudkowsky’s view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim "qualia require reflectivity" implies *all* qualia require reflectivity. This includes qualia like "what is the color red like?" and "how do smooth and rough surfaces feel different?" These experiences seem to be shaped by vastly different evolutionary pressures that are largely unrelated to social accounting. If you find the question of whether suffering in particular is sufficiently complex that it exists in certain animals but not others (by virtue of evolutionary pressure) worth asking, you’re operating in a frame where these arguments are not superseded by the much more generic claim that complex social modeling is necessary to feel anything. If you think Eliezer is very likely to be right, these additional meditations on the nature of suffering are mostly minutiae. [EDIT to note: I’m mostly pointing this out because there appears to be one group that uses "complex social pressures" to claim animals do not suffer because animals feel nothing, and another group that uses "complex social pressures" to claim that animals do not specifically suffer because suffering specifically depends on these things. That these two groups of people just happen to start from a similar guiding principle and happen to reach a similar answer for very different reasons makes me extremely suspicious of the epistemics around the moral patienthood of animals.]
Comment
I don’t know what Eliezer’s view is exactly. The parts I do know sound plausible to me, but I don’t have high confidence in any particular view (though I feel pretty confident about illusionism). My sense is that there are two popular views of ‘are animals moral patients?’ among EAs:
1. Animals are obviously moral patients; there’s no serious doubt about this.
2. It’s hard to be highly confident one way or the other about whether animals are moral patients, so we should think a lot about their welfare on EV grounds. E.g., even if the odds of chickens being moral patients are only 10%, that’s a lot of expected utility on the line (see the toy calculation at the end of this comment).

(And then there are views like Eliezer’s, which IME are much less common.)

My view is basically 2. If you ask me to make my best guess about which species are conscious, then I’ll extremely tentatively guess that it’s only humans, and that consciousness evolved after language. But a wide variety of best guesses are compatible with the basic position in 2.

"The ability to reflect, pass mirror tests, etc. is important for consciousness" sounds relatively plausible to me, but I don’t know of a strong positive reason to accept it—if Eliezer has a detailed model here, I don’t know what it is. My own argument is different, and is something like: the structure, character, etc. of organisms’ minds is under very little direct selection pressure until organisms have language to describe themselves in detail to others; so if consciousness is any complex adaptation that involves reshaping organisms’ inner lives to fit some very specific set of criteria, then it’s likely to be a post-language adaptation. But again, this whole argument is just my current best guess, not something I feel comfortable betting on with any confidence. I haven’t seen an argument for any 1-style view that seemed at all compelling to me, though I recognize that someone might have a complicated nonstandard model of consciousness that implies 1 (just as Eliezer has a complicated nonstandard model of consciousness that implies chickens aren’t moral patients).
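As a minimal sketch of the EV point in 2 above: the 10% figure comes from the example there, while the welfare number and the code itself are placeholders I made up purely for illustration, not anyone’s actual estimate.

```python
# Toy expected-value calculation; all quantities are illustrative placeholders.
p_moral_patient = 0.10          # hypothetical credence that chickens are moral patients
welfare_at_stake = 1_000_000    # hypothetical welfare units affected by some intervention

expected_welfare = p_moral_patient * welfare_at_stake
print(expected_welfare)         # 100000.0 -- still a lot on the line despite the low probability
```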
Comment
The reason I talk about suffering (and not just consciousness) is:
I’m not confident in either line of reasoning, and both questions are relevant to ‘which species are moral patients?’.
I have nonstandard guesses (though not confident beliefs) about both topics, and if I don’t mention those guesses, people might assume my views are more conventional.
I think that looking at specific types of consciousness (like suffering) can help people think a lot more clearly about consciousness itself. E.g., thinking about scenarios like ‘part of your brain is conscious, but the bodily-damage-detection part isn’t conscious’ can help draw out people’s implicit models of how consciousness works.

Note that not all of my nonstandard views about suffering and consciousness point in the direction of ‘chickens may be less morally important than humans’. E.g., I’ve written before that I put higher probability than most people on ‘chickens are utility monsters, and we should care much more about an individual chicken than about an individual human’—I think this is a pretty straightforward implication of the ‘consciousness is weird and complicated’ view that leads to a bunch of my other conclusions in the OP.

Parts of the OP were also written years apart, and the original reason I wrote up some of the OP content about suffering wasn’t animal-related at all—rather, I was trying to figure out how much to worry about invisibly suffering subsystems of human brains. (Conclusion: It’s at least as worth-worrying-about as chickens, but it’s less worth-worrying-about than I initially thought.)
Thanks for clarifying. To the extent that you aren’t particularly sure how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I’m just kinda surprised that Eliezer’s view is so unusual given that he is the Eliezer Yudkowsky of the rationalist community. My impression is that the justification for the argument you mention is something along the lines of "the primary reason one would develop a coherent picture of their own mind is so they could convey a convincing story about themselves to others—which only became a relevant need once language developed." Based on the first two sections, and the similarity of the above logic to the discussion of pain-signaling earlier, I was under the impression you were focused primarily on suffering. When I think about your generic argument about consciousness, however, I get confused. While I can imagine why one would benefit from an internal narrative around their goals, desires, etc., I’m not even sure how I’d go about squaring pressures for that capacity with the many basic sensory qualia that people have (e.g. sense of sight, sense of touch) -- especially in the context of language.
Comment
I think things like ‘the ineffable redness of red’ are a side-effect or spandrel. On my account, evolution selected for various kinds of internal cohesion and temporal consistency, introspective accessibility and verbal reportability, moral justifiability and rhetorical compellingness, etc. in weaving together a messy brain into some sort of unified point of view (with an attendant unified personality, unified knowledge, etc.). This exerted a lot of novel pressures and constrained the solution space a lot, but didn’t constrain it 100%, so you still end up with a lot of weird neither-fitness-improving-nor-fitness-reducing anomalies when you poke at introspection. This is not a super satisfying response, and it has basically no detail to it, but it’s the least-surprising way I could imagine things shaking out when we have a mature understanding of the mind.
Suffering is surely influenced by things like mental narratives, but that doesn’t mean it requires mental narratives to exist at all. I would think that the narratives exert some influence over the amount of suffering. For example, if (to vastly oversimplify) suffering was represented by some number in the brain, and if by default it would be −10, then maybe the right narrative could add +7 so that it became just −3.
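A minimal sketch of that oversimplified picture (the numbers are the ones from the example above; the additive form and the names are my own assumptions, not a claim about how the brain actually works):

```python
def felt_suffering(baseline_signal: float, narrative_adjustment: float) -> float:
    """Toy model: top-down narratives modulate a bottom-up suffering signal,
    but the signal exists with or without them."""
    return baseline_signal + narrative_adjustment

print(felt_suffering(-10, 0))   # -10: no narrative, suffering still present
print(felt_suffering(-10, +7))  # -3: the 'right' narrative reduces it
```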
Top-down processing by the brain is a very general thing, not just for suffering. But I wouldn’t say that all brain processes that are influenced by it can’t exist without it. (OTOH, depending on how broadly we define top-down processing, maybe it’s also somewhat ubiquitous in brains. The overall output of a neural network will often be influenced by multiple inputs, some from the senses and some from "higher" brain regions.)
Local is global for a smaller set of features. Nonhuman animals could have more limited "global you states", even possibly multiple distinct ones at a time, if they aren’t well integrated (e.g. split brain, poor integration between or even within sensory modalities). What’s special about the narrative?
Many animals do integrate inputs (within and across senses), use attention, prioritize, and make tradeoffs. Motivation (including from emotions, pain, anticipated reward, etc.) feeds into selective/top-down attention to guide behaviour, and it seems like their emotions themselves can be inputs for learning, not just rewards: animals can be trained to behave in trainer-selected ways in response to their own emotions, generalizing to the same emotions in response to different situations. Animals can also answer unexpected questions about things that have just happened and about their own actions. See my comment here for some studies. I wouldn’t put low credence on in-the-moment "global you states" in animals, basically as described in global workspace theory.
Comment
I’d probably say human babies and adult chickens are similarly likely to be phenomenally conscious (maybe between 10% and 40%). I gather Eliezer assigns far lower probability to both propositions, and I’m guessing he thinks adult chickens are way more likely to be conscious than human babies are, since he’s said that "I’d be truly shocked (like, fairies-in-the-garden shocked) to find them [human babies] sentient", whereas I haven’t heard him say something similar about chickens.
If animals don’t feel pain, the obvious question is: why did they evolve to pretend to have qualia that are uniquely human? Especially considering that they evolved long before humans existed.
This is a long post—a summary at the top would help for those out of the loop.
If so, it is not deterministic.