Contents
- "Higher" Consciousness
- Inner Listeners
- Confidence
- Conclusions

From Twitter:

> I’d say that I "don’t understand" why the people who worry that chickens are sentient and suffering, don’t also worry that GPT-3 is sentient and maybe suffering; but in fact I do understand, it’s just not a charitable understanding. Anyway, they’re both unsentient so no worries.

His overall thesis is spelt out in full here, but I think the key passages are these:

> What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person. …
>
> So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that might lose utility in future sentient-moments later. …
>
> I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

I’m currently very confident of the following things, and I’m pretty sure EY is too:
- Consciousness (having qualia) exists and humans have it
- Consciousness isn’t an epiphenomenon
- Consciousness is a result of how information is processed in an algorithm, in the most general sense: a simulation of a human brain is just as conscious as a meat-human

EY’s position seems to be that self-modelling is both necessary and sufficient for consciousness. But I don’t ever see him putting forward a highly concrete thesis for why this is the case. He is correct that his model has more moving parts than other models. But having more moving parts only makes sense if it’s actually good at explaining observed data. And we only have one datapoint, which is that adult humans are conscious. Or do we?
"Higher" Consciousness
We actually have a few datapoints here. An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don’t know if EY agrees with this. From his beliefs he might say something along the lines of "having more thoughts doesn’t mean you’re more conscious". Given his arguments about babies, I’m pretty sure he thinks that you can have memories of times when you weren’t conscious, and then consciously experience those things in a sort of "second-hand" way by loading up those memories.

Now a lot of Zen meditation involves focusing on your own experiences, which seems like self-modelling. However, something else I notice here is the common experience of "ego death" while using psychedelics and in some types of meditation. Perhaps EY has a strong argument that this in fact requires more self-modelling than previous states. On the other hand, he might argue that consciousness is on/off, and that the amount of experience is unrelated to whether or not those experiences are being turned into qualia.

I’m trying to give potential responses to my arguments, but I don’t want to strawman EY, so I ought to point out that there are lots of other counter-arguments he might have, which might be more insightful than my imagined ones.
Inner Listeners
EY talks a lot about "inner listeners", and mentions that a good theory should be able to have them arise naturally in some way. I agree with this point, and I do agree that his views provide a possible explanation of what produces an inner listener. Where I disagree is with the claim that we absolutely need separate "information processing" and "inner listener" modules. The chicken-conscious, GPT-3-unconscious model seems to make sense from the following perspective: some methods of processing input data cause consciousness and some don’t. We know that chickens process input data in a very similar way to humans (by virtue of being made of neurons), and we know that GPT-3 doesn’t process information in that way (by virtue of not being made of neurons). I guess this is related to the binding problem.
Confidence
But what surprises me the most about EY’s position is his confidence in it. He claims to have never seen any good alternatives to his own model. But that’s simply a statement about the other beliefs he’s seen, not a statement about all of hypothesis-space. I even strongly agree with the first part of his original tweet! I do suspect most people who believe chickens are conscious but GPT-3 isn’t believe it for bad reasons! And the quality of replies is generally poor. EY’s argument strikes me as oddly specific. There are lots of things which human brains do (or which we have some uncertainty about them doing) which are kind of weird:
- Predictive processing and coding
- Integrating sensory data together (the binding problem)
- Coming up with models of the world (including itself)
- All those connectome-specific harmonic wave things
- Reacting to stimuli in various reinforcement-y ways

EY has picked out one thing (self-modelling) and decided that it alone is the source of consciousness. Whether or not he has gone through all the weird and poorly-understood things brains do and ruled them out, I don’t know. Perhaps he has. But he doesn’t mention it in the thesis that he links to to explain his beliefs. He doesn’t even mention that he’s conducted such a search; the closest thing to that is his references to his own theory treating qualia as non-mysterious (which is true). I’m just not convinced without him showing his working!
Conclusions
I am confused, and at the end of the day that is a fact about me, not about consciousness. I shouldn’t use my own bamboozlement as strong evidence that EY’s theory is false. On the other hand, the only evidence available (in the absence of experimentation) for an argument not making sense is that people can’t make sense of it. I don’t think EY’s theory of consciousness is completely absurd. I put about 15% credence in it. I just don’t see what he’s seeing that elevates it to being *totally overwhelmingly likely*. My own uncertainty is primarily due to the lack of truly good explanations I’ve seen of the form "X could cause consciousness", combined with the lack of strong arguments made of the form "Here’s why X can’t be the cause of consciousness". Eliezer sort of presents the first but not the second. I would love for someone to explain to me why chickens are strongly unlikely to be conscious, so I can go back to eating KFC. I would also generally like to understand consciousness better.
Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).
That is not how it seems to me. My read of his position is more like: "Don’t start by asking ‘what is consciousness’ or ‘what are qualia’; start by asking ‘what are the cognitive causes of people talking about consciousness and qualia’, because while abstractions like ‘consciousness’ and ‘qualia’ might turn out to be labels for our own confusions, the words people emit about them are physical observations that won’t disappear. Once one has figured out what is going on, they can plausibly rescue the notions of ‘qualia’ and ‘consciousness’, though their concepts might look fundamentally different, just as a physicist’s concept of ‘heat’ may differ from that of a layperson. Having done this exercise at least in part, I (Nate’s model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to ‘be conscious’ / ‘have qualia’ in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours)."
> Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation
>
> I don’t know if EY agrees with this.
My model of Eliezer says "Insofar as humans do report this, it’s a fine observation to write down in your list of ‘stuff people say about consciousness’, which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about ‘consciousness’, because it would be an error to act like ‘consciousness’ is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there’s an ineffable redness to red."
My model of Eliezer says "The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the "binding problem", etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn’t intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them."
Comment
I’m confident your model of Eliezer is more accurate than mine. Neither the twitter thread nor his other writings originally gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused: it genuinely reduced free will as a concept in my mind. His writings on consciousness have not done anything more than raise his model to the same level of possibility as a bunch of other models I’m confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him (or others) closer to programming a conscious being, I can see why he wouldn’t share it in high detail.
While I agree with mostly everything your model of Eliezer said, I do not feel less confused about how Eliezer arrives at the conclusion that most animals are not conscious. Granted, I may be, and probably actually am, lacking an important insight in the matter, but then it will be this insight that allows me to become less confused, and I wish Eliezer shared it.

When I think about a thought process that allows one to arrive at such a conclusion, I imagine something like this: Consciousness is not fundamental, but it feels like it is. That’s why we intuitively apply concepts such as quantity to consciousness, thinking about more or less conscious creatures as being more or less filled with conscious-fluid, as we previously thought about phlogiston or caloric fluid. But this intuition is confused and leads us astray. Consciousness is the result of a specific cognitive algorithm. This algorithm can either be executed or not. There are good reasons to assume that such an algorithm would be developed by evolution only among highly social animals, since their conditions create the necessity of modelling other creatures modelling yourself.

And I see an obvious problem with this line of thought. Reversed confusion isn’t insight. Our confused intuition which leads us to quantify consciousness may be wrong, but it isn’t necessarily wrong. If anything, the idea that consciousness isn’t quantifiable is also originally based on the idea of consciousness being fundamental. Think about the ancient Hebrews, who claimed that animals didn’t have souls. There are lots of bad reasons to think that farm animals are ethically irrelevant; indeed it would be super convenient, considering how tasty their meat is. That doesn’t automatically mean that they are ethically relevant, it just hints at the possibility.

We can think about hearing, or vision, or the sense of smell. They are not fundamental. They are the result of specific algorithms executed by our brains. Yet we can quantify them. Quantifying them actually makes a lot of sense, considering that evolution works incrementally. Why can’t it be the same for consciousness?
Comment
I don’t think the thought process that allows one to arrive at (my model of) Eliezer’s model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is "obviously not conscious", and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like "and then there’s a sensor inside the mind, which is the part that has feelings about the image of the world that’s painted inside the head" or whatever).
Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like "is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?" or "are chickens similarly-conscious in the rescued sense?". But their confidence wouldn’t be coming from abstract arguments like "because it is an algorithm, it can either be executed or not" or "there are good reasons to assume it would be developed by evolution only among social animals"; their confidence would be coming from saying "look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we definitively observe those consequences in humans, and observe their absence in chickens."
You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.
Your comments here, and some comments Eliezer has made elsewhere, seem to imply he believes he has at least in large part "solved" consciousness. Is this fair? And if so, is there anywhere he has written up this theory/analysis in depth? Because surely, if correct, this would be hugely important.
I’m kind of assuming that whatever Eliezer’s model is, the bulk of the interestingness isn’t contained here and still needs to be cashed out, because the things you/he list (needing to examine consciousness through the lens of the cognitive algorithms causing our discussions of it, the centrality of self-modely reflexive things to consciousness, etc.) are already pretty well explored and understood in mainstream philosophy, e.g. Dennett. Or is the idea here that Eliezer believes some of these existing treatments (maybe modulo some minor tweaks and gaps) are sufficient for him to feel like he has answered the question to his own satisfaction?
Basically, I’m struggling to understand which of the three below is wrong, because all three being jointly true seems crazy:

1. Eliezer has a working theory of consciousness
2. This theory differs in important ways from existing attempts
3. Eliezer has judged that it is not worthwhile writing this up
Thanks, this is helpful.
Comment
If I were doing the exercise, all sorts of things would go in my "stuff people say about consciousness" list, including stuff Searle says about Chinese Rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable intransmissible redness of red, stuff schoolyard kids say about how they wouldn’t be able to tell if the color they saw as green was the one you saw as blue, and so on. You don’t need to be miserly about what you put on that list.
Mostly (on my model) because it’s not at all clear from the get-go that it’s meaningful to "be conscious" or "have qualia"; the ability to write an algorithm that makes the same sort of observable claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where "being conscious" turns out to be a nonsense notion.
Note also that higher standards on the algorithm you’re supposed to produce are more conservative: if it is meaningful to say that an algorithm "is conscious", then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the "being conscious" part, assuming such a thing exists).
I’d be pretty suspicious of someone who claimed to have a "conscious algorithm" if they couldn’t also say "and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searle did, and if you instead initialize it that way, then it outputs the Mary’s Room thought experiment for the same reason people do". Once someone demonstrated that sort of mastery (and once I’d verified it by inspection of the algorithm, and integrated the insights therefrom), I’d be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates to the ability of the mind to feel, but the qualifying bar for "do you have a reductionist explanation of consciousness" is "can you show me how to build something that produces the observations we set out to explain in the first place (people talking about ‘consciousness’) for the same cognitive reasons?".
Note further that demonstrating an algorithm that produces the same sort of claims humans do (eg, claims about the redness of red) for the same cognitive reasons, is not the same thing as asserting that everything "with consciousness/qualia" must make similar claims.
My model of Eliezer says "In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong." (And similarly with various other claims in that section.) My model continues: "You seem to me to be trying to do far more with the word ‘consciousness’ than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how ‘consciousness’ must behave, and more thinking about the cognitive causes behind the creation of the Mary’s Room hypothetical."
My model says: "The list of reasons is not particularly small, in this case."
"The claim is correct if the actual cognitive reasons for Searl inventing the Chinese Room hypothetical, are analogous to the cognitive reasons that the alleged algorithm invents the Chinese Room hypothetical, and so on and so forth.
"This is of course difficult to check directly. However, fairly strong evidence of correctness can be attained by reading the algorithm and imagining its execution. Just as you can stare at the gears of a watch until you understand how their interactions makes the watch-hands tick, at which point you can be justifiably confident that you understand the watch, you should be able to stare at a cognitive algorithm explaining ‘consciousness’ until you understand how its execution makes things like ‘inner listeners’ ‘experiencing redness’ (in a suitably rescued sense), at which point you can be justifiably confident that you understand experience.
"Your fellow tribemembers, who have not understood how gears can drive the hands of a watch, might doubt your claim, saying ‘There are many theories of how the watch works, ranging from internal gears to external solar radiation to the whims of the spirits. How are you so confident that it is the turning of little gears, nevermind this specific mechanism that you claim you can sketch out in the dirt?’. And you could rightly reply, ’When we unscrew the back, we see gears. And there is an arrangement of gears, that I understand, that by inspection would tick the hands in just the way we observe the hands to tick. And while I have not fully taken the watch apart, the visible features of the gears we can see when we unscrew the back, match the corresponding properties of my simple gear mechanism. This is enough for me to be pretty confident that something like my mechanism, which I understand and which clealry by inspection ticks watch-hands, governs the watch before us."
Comment
Shouldn’t mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules or different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences, for essentially similar reasons. Many also demonstrate some self-awareness in ways other than by passing the mirror test (and some might pass a mirror test with a different sensory modality, or with some extra help, although some kinds of help would severely undermine a positive result). I won’t claim the mirror test is the only proxy Eliezer cares about; I don’t know what else he has in mind. It would be helpful to see a list of the proxies he has in mind and what they’re proxies for.
I don’t think it’s obvious that nonhuman animals, including the vertebrates we normally farm for food, don’t self-model (at least to some degree). I think it hasn’t been studied much, although there seems to be more interest now. Absence of evidence is at best weak evidence of absence, especially when there’s been little research on the topic to date. Here’s some related evidence, although maybe some of this is closer to higher-order processes than self-modelling in particular:
See the discussion of Attention Schema Theory here (section "Is an attention schema evolutionarily old or unique to humans?") by the inventor of that theory, Graziano, in response to Dennett’s interpretation of the theory applied to nonhuman animals (in which he also endorses the theory as "basically right"!). Basically, AST requires the individual to have a model of their own attention, an "attention schema".
Dennett wrote "Dogs and other animals do exhibit some modest capacities for noticing their noticings, but we humans have mental lives that teem with such episodes – so much so that most people have never even imagined that the mental lives of other species might not be similarly populated", and then expands further.
In Graziano’s response: "Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model."
Dogs (Canis familiaris) recognize their own body as a physical obstacle (pop-sci article)
Pigs learn what a mirror image represents and use it to obtain information. EDIT: there was a replication experiment, in which only 1 of 11 mirror-experienced piglets used the detour that the mirror would help them find, and none of the 11 mirror-naive did.
I think the evidence for episodic(-like) memory in nonhuman animals is getting better, particularly with more unexpected question tests, which often ask about what the animals did (although I supposed this wouldn’t necessarily require a self-model, depending on the details):
Mental representation and episodic-like memory of own actions in dogs
Animal models of episodic memory (see the section "Incidental Encoding and Unexpected Questions")
Episodic-like memory of rats as retrospective retrieval of incidentally encoded locations and involvement of the retrosplenial cortex
Experiments with pigeons and rats are discussed in the section "The unexpected question" in Animals Represent the Past and the Future.
I left a comment here, with some other weak evidence, e.g. animals being trained to communicate their emotions in different ways (see also this post), which I think would require them to be able to discriminate between their internal emotional states, i.e. their emotions are inputs to executive functions like (top-down/selective) attention, learning and memory. Also, cows may become excited by their own learning.
EDIT: Finally found the paper on pigs generalizing the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). This experiment and similar experiments performed on rodents are discussed here, in section 4.d., starting on p. 81 (with some other discussion of them earlier). For example, rats generalized from hangover to morphine withdrawal and jetlag, from high doses of cocaine to movement restriction, and from an anxiety-inducing drug to aggressive defeat and predator cues. Of course, anxiety has physical symptoms, so maybe this is what they’re discriminating, not the negative affect.
In general, I think there’s more recent research on nonhuman metacognition and mental representation, although I haven’t followed this closely, so I can’t really tell you what’s up. There are some recent reviews on metacognition here.
Comment
I also don’t think GPT-3 has emotions that are inputs to executive functions, like learning, memory, control, etc..
Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still:
Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like some monkeys and pigeons. Pigeons can pick out live and 5-7 second delayed videos of themselves from prerecorded ones.
Animals might not care about the marks. Cleaner wrasse, a species of fish, did pass the mirror test (the multiple phases, including the final self-directed behaviour with the visible mark), and they are particularly inclined to clean things (parasites) that look like the mark, which is where they get their name. I think the fact that they are inclined to clean similar looking marks was argued to undermine the results, but that seems off to me.
I would be interested in seeing the mirror test replicated in different sensory modalities, e.g. something that replays animals’ smells or sounds back to them, a modification near the source in the test condition, and checking whether they direct behaviour towards themselves to investigate.
Some criticisms of past scent mirror test are discussed here (paper with criticism here). The issues were addressed recently here with wolves. Psychology Today summary.
I think animals are more likely to show body (touch, pain) awareness and have a related self-representation (a body schema?). For example, mice get the rubber hand (tail) illusion. From having their tail and the rubber tail just stroked together, they extend their expectations of having their tail grasped to the rubber tail.
I’ve collected my thoughts + recent discussions on consciousness and animal patienthood here: https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness. I don’t have the same views as Eliezer, but I’m guessing me talking about my views here will help make it a little clearer why someone might not think this way of thinking about the topic is totally wrong.
"By their fruits you shall know them."
A frame I trust in these discussions is trying to elucidate the end goal. What does knowledge about consciousness look like under Eliezer’s model? Under Jemist’s? Under QRI’s?
Let’s say you want the answer to this question enough you go into cryosleep with the instruction "wake me up when they solve consciousness." Now it’s 500, or 5000, or 5 million years in the future and they’ve done it. You wake up. You go to the local bookstore analogue, pull out the Qualia 101 textbook and sit down to read. What do you find in the pages? Do you find essays on how we realized consciousness was merely a linguistic confusion, or equations for how it all works?
As I understand Eliezer’s position, consciousness is both (1) a linguistic confusion (leaky reification) and (2) the seat of all value. There seems a tension here, that would be good to resolve since the goal of consciousness research seems unclear in this case. I notice I’m putting words in peoples’ mouths and would be glad if the principals could offer their own takes on "what future knowledge about qualia looks like."
My own view is if we opened that hypothetical textbook up we would find crisp equations of consciousness, with deep parallels to the equations of physics; in fact the equations may be the same, just projected differently.
My view on the brand of physicalism I believe in, dual aspect monism, and how it constrains knowledge about qualia: https://opentheory.net/2019/06/taking-monism-seriously/
My arguments against analytic functionalism (which I believe Eliezer’s views fall into): https://opentheory.net/2017/07/why-i-think-the-foundational-research-institute-should-rethink-its-approach/
Copying from my Twitter response to Eliezer: Anil Seth usefully breaks down consciousness into 3 main components:

1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st-person perspective

He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with using rubber arms and mirrors, narrative-self breaks with amnesia, 1st-person perspective breaks in out-of-body experiences which can be induced in VR, and even the core feeling of the reality of self can be meditated away. Qualia such as pain are also very contextual: the same physical sensation can be interpreted positively in the gym or a BDSM dungeon, and as acute suffering if it’s unexpected and believed to be caused by injury. Being a self, or thinking about yourself, is also just another perception — a product of your brain’s generative model of reality — like color or pain are. I believe enlightened monks who say they experience selfless bliss, and I think it’s equally likely that chickens experience selfless pain.

Eliezer seems to believe that self-reflection or some other component of selfhood is necessary for the existence of the qualia of pain or suffering. A lot of people believe this simply because they use the word "consciousness" to refer to both (and 40 other things besides). I don’t know if Eliezer is making such a basic mistake, but I’m not sure why else he would believe that selfhood is necessary for suffering.
Comment
I agree with pretty much all of that but remark that "deep sleep < awake < psychedelic" is not at all clearly more correct than "deep sleep < psychedelic < awake". You may feel more aware/conscious/awake/whatever when under the effects of psychedelic drugs, but feeling something doesn’t necessarily make it so.
Comment
The ordering is based on measures of neuro-correlates of the level of consciousness like neural entropy or perturbational complexity, not on how groovy it subjectively feels.
Comment
It would seem a bit optimistic to call anything a "neuro-correlate of the level of consciousness" simply on the basis that it’s higher for ordinary waking brains than for ordinary sleeping brains. Is there more evidence than that for considering neural entropy or perturbational complexity to be measures of "the level of consciousness"? (My understanding is that in some sense they’re measuring the amount of information, in some Shannonesque sense, in the state of the brain. Imagine doing something like that with a computer. The figure will—at least, for some plausible ways of doing it—be larger when the computer is actively running some software than when it’s idle, and you might want to say "aha, we’ve found a measure of how much the computer is doing useful work". But it’s even larger if you arrange to fill its memory with random bits and overwrite them with new random bits once a second, even though that doesn’t mean doing any more useful work. I worry that psychedelics might be doing something more analogous to that than to making your computer actually do more.)
It is not my impression that Eliezer believes any such thing for pain, only (perhaps) for suffering. It’s important not to conflate these.
It seems clear to me, at least, that consciousness (in the "subjective, reflective self-awareness" sense) is necessary for suffering; so I don’t think that Eliezer is making any mistake at all (much less a basic mistake!).
The word "just" is doing a heck of a lot of work here.
Chickens perhaps have "selfless pain", but to say that they experience anything at all is begging the question!
I strongly support this. If you are going to explain-away qualia as the result of having a self-model, you need to do more than note that they occur together, or that "conscious" could mean either.
I had another complaint about that tweet, which… you do not seem to have, but I want to bring up anyway. Why do we assume that ‘consciousness’ or ‘sentience’ implies ‘morally relevant’? And that a lack of consciousness (if we could prove that) would also imply ‘not morally relevant’? It seems bad to me to torture chickens even if it turns out they aren’t self-aware. But lots of people seem to take this as a major crux for them. If I torture a permanently brain-damaged comatose person to death, whom no one will miss, is that ‘fine’? I am angry about this assumption; it seems too convenient.
Comment
Torturing chickens or brain dead people is upsetting and horrible and distasteful to me. I don’t think it’s causing any direct harm or pain to the chicken/person though.
I still judge a human’s character if they find these things fun and amusing. People watch this kind of thing (torture of humans/other animals) on Netflix all the time, for all sorts of good and bad reasons.
Comment
Claim: Many things are happening on a below-consciousness level that ‘matter’ to a person. And if you disrupted those things without changing a person’s subjective experience of them (or did it without their notice), this should still count as harm.

This idea that ‘harm’ and the level of that harm is mostly a matter of the subjective experience of that harm goes against my model of trauma and suffering. Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I’d still call it ‘harm’ regardless of their conscious awareness.

I have friends who were circumcised before they could form memories. They don’t remember it. Through healing work or other signs of trauma, they realized that in fact this early surgery was likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their ‘pain’ even if they might not be having a conscious experience of it.)

If I take a "non-sentient" chicken and cut off its wings, and I watch as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition.

Also, from my investigations, much suffering does not reach the level of awareness. When a person investigates very closely and zooms in on experiences (such as through meditation), suffering is ‘found’ to be ‘occurring’ at a level of granularity and detail that was not previously accessible. But becoming aware of this suffering does not increase the amount of suffering that was occurring; you just become aware of the amount that was already there. It’s an "oh" moment. And this can actually help relieve the suffering, by becoming aware of it.
This suggests that maybe beings who lack the ability of awareness and observation to see their own condition actually are suffering more. This accords with my own journey in relieving personal suffering. More awareness was generally helpful. Whereas as a child, I was more ‘braindead’ in some way. Not very ‘conscious’. One could make similar inquiries into ‘dissociation’. If a person is regularly dissociated and doesn’t feel things very intensely, does it make it more okay to hurt them? Also my model of pain is that pain != suffering, which might be relevant here. Not sure.
Comment
Typically in questions of ethics, I factor the problem into two sub-questions:
Game theory: ought I care about other agents’ values because we have the potential to affect each other?
Ben’s preferences: do I personally care about this agent and them having their desires satisfied?

For the second, it’s on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they’re not real but I imbue meaning into them and their lives. That said, it’s also on the table to me that a lot of my deeply felt feelings about why it’s horrible to be cruel to chickens are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.

If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally ‘upset’, this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and changing that wasn’t a sufficient economic bottleneck to be worth the time/money cost), this would again seem kind of distasteful, but in the present world it would not be very high on my list of things to fix; it would not make the top 1000.

For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You’re quite right that suffering doesn’t need to be conscious, and often I have become aware of a way that I have been averse to thinking about a subject or been scared of a person for no good reason that has been a major impediment in having a great career and great relationships, in ways that are "outside" my conscious experience.
(Being more ‘braindead’.) I would have immensely appreciated someone helping me realize and fix these things about myself that were outside my conscious awareness. Insofar as the chickens are having their wings clipped and kept in cages, it’s very clear that their intentions and desires are being stunted. On a similar note, I think all the points in Dormin’s essay Against Dog Ownership apply regardless of whether dogs are conscious — that the meaning dogs look for in life is not found in the submissive and lonely inner-city life that most of them experience. These lay out clear ways to be much kinder to a chicken or dog and to respect their desires.

But there is a step of the argument missing here. I think some people believe arguments that claim it’s worth engaging in game theory with chickens even if I think they’re only as real as characters in Undertale; but I have not read an argument that I find compelling. The idea is that if we suppose chickens are indeed only as real as Undertale characters, I might still care about them because we have shared goals or something.

Here’s a very concrete story where that would be the case: if someone made a human-level AI with Sans’ personality, and he was working to build a universe kind of like the universe I want to live in, with things like LessWrong and Sea Shanties and Dostoyevsky in it, then I would go out of my way to, say, right injustices against him; and I hope he would do the same for me, because I want everyone to know that such agents will be defended by each other. I think some people believe that humans and chickens have similar goals in this way in the extreme, but I don’t agree. I don’t think I would have much of a place in a chicken utopia, nor do I expect to find much of value in it.
Btw, coming at it from a different angle: Jessicata raises the hypothesis (in her recent post) that people put so much weight on ‘consciousness’ as a determinant of moral weight because it is relatively illegible and, they believe, outside the realm of things that civilization currently has a scientific understanding of, so that they can talk about it more freely and without the incredibly high level of undercutting and scrutiny that comes to scientific hypotheses. Quote:
Comment
I don’t think that was my point exactly. Rather, my point is that not all representations used by minds to process information make it into the scientific worldview, so there is a leftover component that is still cared about. That doesn’t mean people will think consciousness is more important than scientific information, and indeed scientific theories are conscious to at least some people.
Separately, many people have a desire to increase the importance of illegible things to reduce constraint, which is your hypothesis; I think this is an important factor but it wasn’t what I was saying.
Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.
Comment
Why is that? You’re still tying moral relevance to a subjective experience?
Comment
Basically yes I care about the subjective experiences of entities. I’m curious about the use of the word "still" here. This implies you used to have a similar view to mine but changed it, if so what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, I’m not plugged into a huge number of moral philosophy sources) that might explain some of my confusion.
People already implicitly consider your example to be acceptable, given that people in persistent vegetative states are held in conditions of isolation that would be considered torture if they were conscious, and many people support being allowed to kill/euthanize them in cases such as Terri Schiavo’s.
I’ve often thought about this, and this is the conclusion I’ve reached.
There would need to be some criterion that separates morality from immorality. Given that, consciousness (i.e. self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.
Comment
Why wouldn’t the ability to suffer be the criterion? Isn’t that built into the concept of sentience? "Sentient" literally means "having senses" but is often used as a synonym for "moral patient".
I suspect I endorse something like what Yudkowsky seems to be claiming. Essentially, I think that humans are uniquely disposed (at least among life on earth) to develop a kind of self-model, and that nonhuman animals lack the same kind of systems that we have. As a result, whatever type of consciousness they have, I think it is radically unlike what we have. I don’t know what moral value, if any, I would assign to nonhuman animals were I to know more about their mental lives or what type of "consciousness" they have, but I am confident that the current high level of confidence people have that animals have rich conscious experiences is not justified.

I wrote an old comment on this I’ve shared a couple of times. Here it is:

I think that what we take to be our conscious experience involves a capacity for "checking in" on an ongoing internal narrative, or story that we are constantly "telling ourselves", that functions to provide a unified timeline which we can utilize, report on, and talk about with others. I think this "narrative center of gravity" requires a degree of cultural input, and the inculcation of specific memes/concepts that lead us to form a sense of a self that integrates our experiences and that can think about "our" past experiences and "our" future experiences. In a sense, I think that conscious experience is built up as a sort of software that we have the hardware to develop, but requires a degree of developmental and cultural input to become fully operational. I don’t think animals have or need this capacity. As such, what it is like to be us is something we can talk about, but I am not convinced that there is anything it is "like" to be an animal. This is a largely Dennettian view of consciousness, and I believe he coined or at least used the term "narrative center of gravity."

You identify consciousness with having qualia. However, I don’t know what you mean by qualia.
While it remains sensible to me to attribute something like consciousness to humans, I would typically deny that we "have qualia" and would not define consciousness in terms of having qualia. Were others to do so, I’d deny we have that form of qualia. Perhaps Yudkowsky would, too. It really depends on what one means by "consciousness" and "qualia."

I don’t know exactly what Yudkowsky thinks, so I wouldn’t put a number on it as you do (i.e., 15%). But, I’ll put it this way: I don’t know of any alternatives to something like Dennett/Frankish on illusionism that seem individually more plausible than illusionism. I don’t know if the collective weight of plausibility for all competing hypotheses is enough to push illusionism below 50%, but I don’t think so. So, while I am not overwhelmingly confident that something that seems roughly in the ballpark (if not very similar) to Yudkowsky’s view is correct, I have yet to see any viable alternatives. Most seem weird and fail to capture what strike me as important elements of consciousness, or they seem to appeal to intuitions I don’t have and don’t trust in others.
Comment
You present an excellently-written and interesting case here. I agree with the point that self-modelling systems can think in certain ways which are unique and special and chickens can’t do that. One reason I identify consciousness with having qualia is that Eliezer specifically does that in the twitter thread. The other is that qualia is generally less ambiguous than terms like consciousness and self-awareness and sentience. The disadvantage is that the concept of qualia is something which is very difficult (and beyond my explaining capabilities) to explain to people who don’t know what it means. I choose to take this tradeoff because I find that I, personally, get much more out of discussions about specifically qualia, than any of the related words. Perhaps I’m not taking seriously enough the idea that illusionism will explain why I feel like I’m conscious and not explain why I am conscious. I also agree that most other existing mainstream views are somewhat poor, but to me this isn’t particularly strong positive evidence for Eliezer’s views. This is because models of consciousness on the level of detail of Eliezer’s are hard to come up with, so there might be many other excellent ones that haven’t been found yet. And Eliezer hasn’t done (to my knowledge) anything which rules out other arguments on the level of detail of his own. Basically I think that the reason the best argument we see is Eliezer’s is less along the lines of "this is the only computational argument that could be made for consciousness" and more along the lines of "computational arguments for consciousness are really difficult and this is the first one anyone has found".
Comment
Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, "I mean qualia, yes." You can’t get more blunt than that.

While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist. I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought that it was.

Personally, I don’t have and apparently have never had qualia intuitions, and thus never struggled with accepting Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something similar to a "wow, this is obviously how it is!" response, and bewilderment that anyone could think otherwise.

I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.
I don’t know how you can deny that people have "qualia" when, as far as I can tell, it was a word coined to describe a particular thing that humans experience?
Comment
I’m not sure I understand. What do you mean when you say it was coined to "describe a particular thing that humans experience"? Or maybe, to put this another way: at least in this conversation, what are you referring to with the term "qualia"?
Comment
As I understand it, the word "qualia" usually refers to the experience associated with a particular sensation.
"Qualia" is easy to define. As Wikipedia has it
Whereas illusionism is almost impossible to define coherently.
"According to illusionism, you only have propositional attitudes, not perceptions. Some of those propositional attitudes seem like propositional attitudes, and others seem like perceptions. Well, they don’t, because if anything seemed like anything, that would be a perception. So actually you have a meta-level belief that some of your propositional attitudes are propositional attitudes, but also a meta-level belief that others aren’t. That’s the illusion. But actually it’s not an illusion, because an illusion is a false perception, and there are no perceptions. It’s actually a false belief, a delusion. I don’t know why we call it illusionism"
Comment
It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of "qualia". I can give lots of examples of people, but there’s scope for endless debate about exactly what counts as a person and what doesn’t. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?)

And for debate about when persons A and B are actually the same person. (Suppose some intelligent computer programs are persons. If I take a copy, do I then have one person or two? Suppose we favour a multiverse-type interpretation of quantum mechanics. Are the versions of "me" on two nearby Everett branches one person, or two? Am I the same person as I was 30 years ago?)

There’s similar unclarity about what things count as qualia and about how to individuate them. (E.g., if you and I look at the same red object and both have normal colour vision, do we have "the same" quale of seeing-a-red-thing or not? If I see the same red thing twice, is that "the same" quale each time? If the answers are negative, what actual work is the notion of qualia doing?)

And e.g. Daniel Dennett would claim that the word "qualia" includes enough baggage that it’s better to say that there are no qualia while in no way denying that people experience things. It’s not (I think) in question that we experience things. It’s quite reasonable (I think) to question whether anything about our experience is made clearer by introducing objects called qualia.
Comment
Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren’t too biased. Someone might have some genuine reason to need a more rigourous definition... but they might not; they might instead be making a selective demand for rigour, out of bias. Where are the calls for rigourous definitions of "matter", "computation", etc?
If my purpose is to demonstrate that people exist, all I need to do is point to a few uncontentious examples of people...I don’t need to solve every edge case.
And "endless debate" needs to be avoided. People who make selective demands for rigour don’t want to change their minds, and endless debate is a great way of achieving that.
Why does that matter if all I am doing is asserting that qualia exist, or lack a reductive explanation?
Comment
(I’m ignoring those parts of your reply that seem to have no purpose other than implicitly accusing me of arguing in bad faith. I have seldom known anything useful to come out of engaging with that sort of thing. These discussions would be more enjoyable, for me at least, if you weren’t so relentlessly adversarial about them.)

Satisfactory for whom? For me, obviously :-). There is at least one eminent philosopher, namely Daniel Dennett, who has made something of a speciality of this area and who flatly denies that qualia "exist", and who doesn’t appear to me to be either a dimwit or a crackpot. That is already sufficient reason for me to want to be careful about saying "duh, of course qualia exist". Of course if all you mean by that is that people have experience, then I agree with that, but if that’s all you mean then what need is there to talk about "qualia" at all? And if it’s not all you mean, then before agreeing I need to know what else is being implicitly brought in.

Now, in the present instance it’s Jemist who introduced "qualia" to the discussion (so, in particular, you are under no obligation to be able to tell me precisely what Jemist means by the term). And Jemist talks e.g. about experience being "turned into qualia", and I don’t see how your examples help to understand what that means, or what distinction between "experience" and "qualia" Jemist is trying to draw. The general idea seems to be something like this: people and chickens alike have some sort of stream or sea of experiences, and humans (and maybe chickens or maybe not) "turn these experiences into qualia", and having not merely experiences but qualia is what justifies calling an entity "conscious" and/or seeing that entity as of moral significance.
I’m sympathetic to the general idea that there’s something that’s kinda-the-same about chickens’ sensory input and ours, and something that’s maybe different about the early stages of processing that sensory input, and that that has something to do with possible moral differences between us and chickens. But I don’t see any reason to think that filling in that picture in detail, if we knew how to do it, would look much like identifying things ("qualia") that we "have" and chickens maybe "don’t have". And one way to resist the so-far-unjustified slide from "there may be something importantly different between how we process our sensory input and how chickens process theirs" to "maybe we have qualia and chickens don’t" is to remain mindful of the fact that we don’t have—or, at least, I don’t have and I haven’t seen much evidence that others have—a very clear idea of exactly what "qualia" are supposed to be, and of how "having qualia" is supposed to go beyond "having experience".

Here’s another example of how the leap to "having qualia" may bring in unacknowledged baggage. You say "Why does that matter if all I am doing is asserting that qualia exist" (so far so good, I guess) "or lack a reductive explanation?". Where did that come from? It certainly doesn’t seem to be something that follows from the fact that people experience things. If you’re somehow inferring that from "having qualia" then I think there’s got to be something in your concept of "qualia" that is very much not an obvious consequence of having experience, and I don’t want to just nod wisely when someone says "of course we have qualia" because it may turn out that part of what they mean by "qualia" involves some sort of in-principle irreducibility, and I want to see some actual argument before agreeing to that!
("Lack a reductive explanation" is ambiguous between "we don’t have one yet" and "we are probably never going to have one" and "we are definitely never going to have one" and "it is in principle impossible for us ever to have one". I don’t like this because it’s too easy to slide between those meanings without explicitly noting that that’s happening and offering any justification. I don’t know whether I have guessed correctly at what combination of those things you meant; if you think I haven’t, feel free to clarify.)
Comment
By that standard, there is no satisfactory definition of anything, since there are philosophers who doubt their own existence, your existence, the existence of an external world, the existence of matter and so on.
But a definition is not supposed to count as a proof all by itself. A definition of X should allow two people who are having a conversation about X to understand each other. A definition that is satisfactory for that purpose does not need to constitute a proof or settle every possible edge case.
I’m not sure why it’s my job to explain what Jemist means.
If you want a hint as to what an "experience" could be other than a quale, then look at what qualia sceptics think an experience is...apparently some sort of disposition to answer "yes I see red" when asked what they see.
If you are anything like most people, you probably have no compunction against destroying machinery, or the virtual characters in a game. And you probably don’t care too much if the characters say "aaagh!" or the machinery reports damage. So it’s as if you think there is something about living organisms that goes beyond damage and reports of damage … something like pain, maybe?
More than one thing could make an entity morally significant, and there are arguments for the existence of qualia other than moral significance.
Well, if we fill in the picture by adding in more fine grained structure and function, we are probably not going to find the qualia for the same reason that we haven’t already. Nonetheless, we have good reason to think that our qualia are there, and rather less good reason to believe that the from-the-outside approach is literally everything, so that qualia have to be disregarded if they cannot be located on that particular map.
I just quoted a definition of qualia which says nothing about in-principle irreducibility. Do you agree with that definition?
Is it reasonable to reject X, for which there is evidence, on the basis that someone might be smuggling in Y? Have you noticed that creationists do this: they don’t accept random mutation and natural selection because they are afraid they might end up agreeing that there is no god?
"Lack a reductive explanation" is a separate claim, not an implication of "have qualia".
The obvious interpretation is "we don’t have one yet". Moreover, it is also true, so charity recommends it as the interpretation.
I don’t do that.
What I would like to be able to do is be able to build up a step-by-step argument. But that can only work with a reader who is willing to judge each step on its own merits.
Comment
Comment
But you weren’t disagreeing with anything actually in the definition. You have been saying that the definition doesn’t make it explicit enough that qualia aren’t irreducible, immaterial, etc. Merely failing to mention reducibility, etc, one way or the other isn’t enough for you.
"Seem" to whom? From my perspective, you keep insisting that I have smuggled in non-materialistic assumptions … but I don’t even see how that would work.
If I offer you one definition, then swap it for another, isn’t that a blatant cheat on my part? And if it is, why worry?
Or if I argue that qualia are immaterial based on other evidence and theories and whatever. … so that the conclusion isn’t begged by definition alone … that’s legitimate argumentation.
You are asking me to tell you what qualia are ontologically. But that’s not a definition, that’s a theory. Theories explain evidence. Evidence has to be spoken about separately from theories. When I define qualia, I am defining something that needs to be explained, not offering an explanation. I want the definition to be ontologically non-committal so that the process of theory building can proceed without bias. But neutrality isn’t enough for you: you are committed to a theory, and you won’t consider something as relevant evidence unless you can be guaranteed that it won’t disrupt the theory.
"Experience things" doesn’t convey enough information, because it can too easily be taken in a naive realist sense.
The point isn’t that you are seeing a tomato, it is that you are seeing it in a certain way.
According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways.
So scientific accounts of perception require there to be a way-we-perceive-things... quite possibly, an individual one. Which might as well be called "qualia" as anything else.
Comment
It is simply not true that
Comment
Define "matter".
Comment
Why? (We haven’t been discussing matter. I haven’t been insisting that you affirm the existence of matter. There aren’t any circumstances parallel to those involving "qualia".) But, since you ask, here’s the best I can do on short notice.

First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call "matter" and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist).

"Matter" is a kind of stuff rather than a kind of thing; that is, in general if some things are "matter" then so is what we get by considering them together, and so are whatever parts they might have. (This might need revision if e.g. it turns out that things I consider "matter" and things I don’t are somehow merely different arrangements of some more fundamental stuff.)

Conditional on the universe working roughly the way I currently model it as doing (or, more precisely, allow other people better at these things to model it as doing), I think the actually-existing things I call "matter" are coextensive with "things made from excitations of fermionic quantum fields".
If the way the universe works is very different from how I think it does, then depending on the details I might want (1) to continue to say that matter is excitations of fermionic quantum fields, and to declare that contrary to appearances some things we’ve all been thinking of as matter are something else, or (2) to continue to say that the things we naïvely think of as matter should be called matter, even though some of them are made of other things than excitations of fermionic quantum fields, or (3) to abandon the notion of "matter" as ill-adapted for the way the world actually turns out to be. If faced with someone denying, or reluctant to positively affirm, the existence of "matter", I would be interested to know whether they mean that some or all concrete things I regard as "matter" are fictions or simulations or imaginations or something, or whether they agree that those things are real but disagree somehow about their fundamental nature (in which case it would be nice to know what), or whether as in our case they don’t find my usage of the term clear enough to endorse or reject. (In our case, I think the experiencing you point at when you refer to "qualia" is real; I do not know whether you are intending to point at something more noun-like, nor whether the things in question are real; I don’t think the term "qualia" generally presupposes any detailed view about the underlying nature of whatever-it-points-at but would want a clearer understanding of how my interlocutor is using the term before being confident of that in a specific case.)
I didn’t say anything explicit about reification. And it’s not an implication, either. Merely using a noun is not reification. "Action", "event", "state", "property", "process" and "nothingness" are all nouns, yet none of them refer to things.
Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, your grounds for saying that the definition is inadequate are that it isn’t answering every question you might have—and that it might have implications you don’t like. If the way qualia actually work, ontologically—a subject about which I have said nothing so far—involves the literal sharing of a universal between identical subjective sensations, then you should believe it, because it is true, and not object to it dogmatically. Definitions are supposed to have implications. It’s not reasonable to object to them for having implications … and it’s not reasonable to object to them for having implications you don’t like, because you are supposed to decide between theories on the basis of evidence.
Notice that in raising the issue, you are already using a good-enough definition of qualia. To object to qualia on the basis that they involve a Platonic shared universal, rather than some other solution to the problem of universals, you have to be able to talk about them, even without using the word "qualia". But of course, you always have to have pre-theoretic definitions in order to build a theory.
Whether qualia are immaterial or irreducible or whatever depends on all the evidence—on a theory. It should not be begged by a single definition. Question-begging definitions are bad, m’kay.
But we would first need to agree that qualia exist at all. That’s how theory building works: step by step. Nobody could come to any conclusion about anything if they had to start with completely clear and exhaustive definitions. Ordinary definitions are not as exhaustive as encyclopedia articles, for instance. You are engaging in a selective demand for rigour.
I’ve already answered that: if Bob differs from Alice, he differs in being a naive realist.
No, they are supposed to be subjectively experienced ways of perceiving, as I have already said several times. I wasn’t putting forward ways-of-perceiving as an exhaustive definition, I was pointing out the inadequacy of your definition.
Using a noun is, by default, reification. Or, at the very least, should be presumed so in the absence of some statement along the lines of "of course when I’m asking you to agree that people have qualia, I am not asking you to commit yourself to there being any such things as qualia". Qualia without reification seem to me to amount to "people have experiences". I understand that it doesn’t seem that way to you, but I don’t understand why; I don’t yet understand just what you mean by "qualia", and the one thing you’ve said that seems to be an attempt to explain why you want something that goes beyond "people have experiences" in the direction you’re calling "qualia"—the business about perception being a complex multi-stage process involving filtering and processing and whatnot—didn’t help me, for the reasons I’ve already given.
I’ve already said that I’m using "qualia" in an ontologically non-committal way.
I note from your 2016 comment that you use the word noncommittally yourself.
"Qualia are what happens in our brains (or our immaterial souls, or wherever we have experiences) in response to external stimulation, or similar things that arise in other ways (e.g., in dreams)."
As I have explained, equating qualia and experiences doesn’t sufficiently emphasise the subjective aspects.
"Experience" can be used in contexts like "experience a sunset", where the thing experienced is entirely objective, or contexts like "experience existential despair", where it’s a subjective feeling. Only the second kind of use overlaps with "qualia". Hence, "qualia" is often briefly defined as "subjective experience".
Note that "experience" is just as much of a noun as "quale", so it has just as much of a reification issue.
None.
Then don’t reify. The reification issue exists only in your imagination.
How do you know it’s different from what you mean? You were comfortable using the word in 2016. This conversation started when I used a series of examples to define "qualia", which you objected to as not being a real definition.
"It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of "qualia"."
But when I asked you to define "matter"...you started off with a list of examples!
"First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call "matter" and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist)."
The only thing I’m doing that is different is going for a minimal and common-sense approach, rather than a technical definition on the lines of "that which is ineffable, incorrigible, irreducible and repeatable". Hence the list of examples: it’s hard to deny that one’s pains feel like something, even when one can quibble about incorrigibility or whatever.
Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, you shouldn’t be rejecting evidence because you don’t like its theoretical implications.
Naive realism is not the denial of experience: it’s treating experience as objective.
You can look up definitions, just as you can for "qualia".
Which have an objective aspect -- things happen differently in the brains of different perceivers—and a subjective aspect—things seem different to different observers. Again, the subjective aspect is what’s relevant.
No, it just seems to you that way.
No, it doesn’t mean anything so vacuous. If two people perform mental arithmetic, that is not subjective, because maths is objective: they get the same answer, or one of them is wrong. "Subjective" doesn’t just mean that individual apprehensions vary; it means there is no right or wrong about the variation. Some people like the way marmite tastes to them, others don’t. Neither is right or wrong, but the marmite is always the exact same substance.
Well, you seem to be having trouble understanding what "subjective" means.
Your accusations of inconsistency

Yup, I used the term "qualia" in 2016 (in response to someone else making an argument that used the term). I don’t always pick every possible fight :-). (In that case, turchin was making another specific argument and used the word "qualia" in passing. I disagreed with the other specific argument and argued against that. The specific word "qualia" was a side issue at most. Here, the specific point at issue is whether everyone needs to agree that "we have qualia".)

You asked for a definition of "matter" and I (1) gave a list of examples and counterexamples and near-the-boundary examples, (2) prefaced with an explicit note that this was preliminary handwaving, (3) followed by an attempt at a precise definition distinguishing matter from not-matter. You haven’t done any of that for "qualia", just given a list of examples, and that (not the fact that you did give a list of examples) is what I was complaining about. "It’s easy to give examples … I’m not so sure that that means it’s easy to give a satisfactory definition".

Your accusations of wilful ignorance and/or laziness

Yes, I could look up definitions of "naïve realism" or of "qualia". As it happens, I have. They don’t tell me what you mean by those terms, and definitions of them do not always agree with one another. Which is why I keep asking you what you mean by terms you are using, and get frustrated when you seem reluctant to tell me. For instance, here we read that "the naïve realist claims that, when we successfully see a tomato, that tomato is literally a constituent of that experience, such that an experience of that fundamental kind could not have occurred in the absence of that object". Here we read that "naïve realism is the human tendency to believe that we see the world around us objectively, and that people who disagree with us must be uninformed, irrational, or biased".
In a comment of yours elsewhere in this thread you say "People generally and incorrectly assume that colours are objective properties (hence the consternation caused, amongst some, by the dress illusion). That’s called naive realism, and it’s scientifically wrong." (I remark in passing that you also said that the difference between two people who agree that people experience things, one of whom says that we have qualia and one of whom doesn’t, is that the latter has to be a naïve realist; if you are in fact claiming that "we have qualia" means something that is straightforwardly implied by "colours are not objective properties of the objects whose colours they are" then, yay, it turns out that I believe that we have qualia and we can stop arguing. But I’m pretty sure this will not in fact be enough.) These three things are not entirely unlike one another, but no two of them are the same. Your comment is offering an example rather than a definition, but it is not in fact an example of the first definition and I’m doubtful about its being an example of the second. Or I could look up "qualia" in, say, the MIT Encyclopedia of Cognitive Science, whose entry for that subject begins as follows—my annotations in square brackets.
Unfortunately, I don’t think the account of qualia you’ve presented is adequate. First, I don’t know what is meant by "perceived sensation" of the pain of a headache. This could be cashed out in functional terms that don’t make appeal to what I am very confident philosophers are typically referring to when they refer to qualia. So this strikes me as a kind of veiled way of just using another word or phrase (in this case, "perceived sensation") as a stand-in for "qualia," rather than a definition. It’s a bit like saying the definition of morality is that it is "about ethics." I’m likewise at a loss about the second part of this. What is the qualitative character of a sensation? What does it mean to say that you’re referring to "what it is directly like to be experiencing" rather than a belief about experiences? Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about.
Is "unmarried man" a mere stand-in for "bachelor"?
They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.
Me.
That’s what I was expanding on.
The phenomenal properties you mentioned...those are qualia. You have the concept, because you need the concept to say it’s illusory.
Of course, introspection isn’t meant to give you a definition of qualia...it’s meant to give you direct acquaintance.
I have introspected and it has not resulted in acquaintance with qualia. I believe people can introspect and then draw mistaken conclusions about the nature of their experiences, and that qualia is a good candidate for one of these mistaken conclusions.
What did it result in acquaintance with? If it seems to you that all your mental content consists only of propositional attitudes, then you don’t even have the illusion of phenomenal consciousness. But why would you alone be lacking it?
Note that it’s plausible to me that this is a Typical Mind thing and actually there’s just a lot of people going around without the perception of phenomenal consciousness. Like, Lance, do you not feel like you experience that things seem ways? Or just that they don’t seem to be ways in ways that seem robustly meaningful or something?
But the qualiaphilic claim is typical, statistically. Even if Lance’s and Dennett’s claims to zombiehood are sincere, they are not typical.
Have we even checked tho? (Maybe the answer is yes, but it hadn’t occurred to me before just now that this was a dimension people might vary on. Or, actually I think it had, but I hadn’t had a person in front of me actually claiming it)
See above; I posted a link to a recent study. There hasn’t been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular.

Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.
"Statistically", so "who" would be most people.
Thanks for clarifying. Not all statistical claims in e.g., psychology are intended to generalize to most people, so I didn’t want to assume you meant most people. If the claim is that most people have a concept of qualia, that may be true, but I’m not confident that it is. That seems like an empirical question it’d be worth looking into. Either way, I wouldn’t be terribly surprised if most people had the concept, or (I think more likely) could readily acquire it on minimal introspection (though on my view I’d say that people are either duped or readily able to be duped into thinking they have the concept).

I don’t know if I am different, or if so, why. It’s possible I do have the concept but don’t recognize it, or am deceiving myself somehow. It’s also possible I am somehow atypical neurologically. I went into philosophy precisely *because* I consistently found that I either didn’t have intuitions about conventional philosophical cases at all (e.g., Gettier problems), or had nonstandard or less common views (e.g. illusionism, normative antirealism, utilitarianism). That led me to study intuitions, the psychological underpinnings of philosophical thought, and a host of related topics. So there is no coincidence in my presenting the views expressed here. I got into these topics *because* everyone else struck me as having bizarre views.
Most people don’t know the word "qualia". Nonetheless, most people will state something equivalent: that they have feelings and seemings that they can’t fully describe. So it’s a "speaking prose" thing.
And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.
Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don’t know what "qualia" means, you can’t report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.
But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?
When you sit alone in an empty room, do you have a sense of your own presence, your own self? Can you be aware, not only of your sensations, but of the sensation of having those sensations? Can you have thoughts, and be aware of having those thoughts? And be aware of having these awarenesses?
My answer to each of these questions is "yes".
But for you, do these questions fail to point to anything in your experience?
Having now had a lot of different conversations on consciousness I’m coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.
I don’t think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they’d show me a red object. They’d ask me "What color is this?" I say red. Then they’d try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red as a color, and so on. Nothing about "seeing red" indicates to me that there is a "what it’s like" to seeing red. I am simply … seeing red. Like, I can report that fact, and talk about it, and say things like "it isn’t blue" and "it is the same color as a typical apple" and such, but there’s nothing else. There’s no "what-it’s-likeness" for me, or, if there is, I’m not able to detect and report on this fact. The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey" and so on.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-it’s-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It’s like saying seven has the property of "sevenness." That seems vacuous to me.

I can look at something like Dennett’s account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don’t see how anyone could access such a thing (if that’s what qualia are supposed to be), and I don’t think I do. It may be that I am something akin to a native functionalist. I don’t know. But part of the reason I was drawn to Dennett’s views is that they are literally the only views that have ever made any sense to me. Everything else seems like gibberish.
Ok, I think I get the disagreement now.
What are the differences in functional properties between two slightly different shades of red that you can only tell apart when you see them next to each other? Or maybe there are none when separate, but seeing them next to each other just introduces another functional property? What functional property would this be?
What if you can tell them apart when they aren’t next to each other? How are you doing so?
How about higher and lower pitched sounds? Say the same note an octave apart?
Say you touch something a few degrees above room temperature, and you can tell that it’s hotter, but it doesn’t invoke any particular desire. How can you tell it’s hotter? How does this cash out in terms of functional properties?
I don’t know the answer to these questions. I’m not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I’d just say "I don’t know." But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things "feel like something." I don’t think it means anything to say things "feel like something." Every conversation I’ve had about this (and I’ve had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It’s just "what it’s like" to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another.

I don’t notice or experience any sense of a gap. I don’t know what gap others are referring to. It sounds like people seem to think there is some characteristic or property their experiences have that can’t be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there’s something intrinsic about living things that distinguishes them from nonliving things, and that living things just couldn’t be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi. I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi, to some phenomena, I’d be more inclined to think I was confused than that there really was je ne sais quoiness. I’m not surprised philosophers go in for thinking there are qualia, but I’m surprised that people in the lesswrong community do. Why not think "I’m confused and probably wrong" as a first pass?
Why are many people so confident that there is what, as far as I can tell, amounts to something that may be fundamentally incomprehensible, even magical? That is, it’s one thing to purport to have the concept of qualia; it’s another to endorse it. And it sounds like you not only claim to grok the notion of qualia, but endorse it.
Hi, I was doing research on consciousness-related discussions, blah blah blah, 3 months old, would just like to reply to a few things you mentioned. I know for certain that consciousness and qualia exist. I used to ‘fall for’ arguments that defined consciousness/qualia/free will as delusions or illusions because they were unobservable. Then, years later, I finally understood that I had some doublethink, and that these words actually were referring to something very simple and clear with my internal experience. I believed that the words were "meaningless" philosophy/morality words—for me, the lack of understanding WAS the ‘gap’ and they were referring to simple concepts all along. The confusion of ‘defining’ these words even within philosophy creates lots of synonyms and jargon, though. I have gotten my definitions from the simplicity of what the concepts refer to, so I am almost certain I have not invented new complicated ways to refer to the concepts (as that would make communicating with others unnecessarily difficult and subjective). These words refer to something that does indeed seem to be circular, because they all try to refer to something beyond the physical. I believe the people trying to define these words as something that relates to only physical things are the ones confused.
Hey, glad you saw my post and all that. Yes, I know about religion and people having unexplainable supernatural experiences. I don’t have anything like that, and I think people who daydreamed up a supernatural experience shouldn’t have literal certainty, just high confidence. (you’d also expect some high inconsistency in people who recount supernatural events. which unfortunately is probably true for qualia currently too, due to similar levels of how society spreads beliefs) There is irony in using ‘convert’ when I was unconverted from believing these things by philosophical confusion, and then later untangled myself. Yes, you could go swap out any ‘certainty’ claim with any other words and mock the result. Sure, I guess no one can say ‘certain’ about anything. "I think I would appreciate if someone else suggested to me, hopefully kindly, that perhaps my declaration that I know something for certain serves more to convince myself than to convince others." My use of certainty is about honestly communicating strength of belief etc., not being hyperbolic or exaggerating. Yes I understand that many people exaggerate and lie about ‘certain’ things all the time so I trust other people’s "for certain" claims less. It doesn’t mean I should then reduce my own quality of claims to try to cater to the average, that makes no sense. (like, if I said it wasn’t certain, wouldn’t that be room for you to claim it’s a delusion anyway?) Like, the nature of consciousness/qualia is that someone who’s conscious/has qualia is never "uncertain" they are conscious (unlike with free will where there isn’t that level of certainty). I think I mentioned it before but it seems perfectly rational if someone who doesn’t have qualia is confused by the whole thing. A "robust philosophical argument" isn’t possible, only some statistical one. 
(the same way that, if you didn’t understand some music’s appeal while a majority of other people did, the response to try to convince you could never be a robust philosophical argument.) Despite that, I wish to convey that consciousness-related stuff is really about something meaningful and not a religious dream, and that it is very likely possible to make "more accurate predictions", even though the actual topics relating to those predictions are usually really insignificant. (if consciousness had a major role to play in intelligence, for example, the world would still exhibit that with looking at intelligence only and there’d be likely other correlations to notice, although you might not be able to draw the connection to consciousness directly.)
The idea that everything must be useful to explain something else doesn’t work unless you have a core of things that need explaining, but are not themselves explanatory posits...basic facts, sometimes called phenomena.
So qualia don’t have to sit in the category of things-that-do-explaining, because there is another category of things-that-need-explaining.
"Phenomena" (literally meaning appearances) is a near synonym for "qualia". And people aren’t good at making inferences from their qualia. People generally and incorrectly assume that colours are objective properties (hence the consternation caused, amongst some, by the dress illusion).
That’s called naive realism, and it’s scientifically wrong.
According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways.
So scientific accounts of perception require there to be a way-we-perceive-things...quite possibly, an individual one. Which might as well be called "qualia" as anything else. (Of course, such a scientific quale isn’t immaterial by definition. Despite what people keep saying, qualia aren’t defined as immaterial.)
I wouldn’t expect a theory of colour qualia to re-emerge out of nowhere, because naive realism about colour is so pervasive. On the other hand, no one is naively realistic about tastes, smells etc. Everyone knows that tastes vary.
(I haven’t caught up on the entire thread, apologies if this is a repeat) Assuming the "qualia is a misguided pseudoconcept" is true, do you have a sense of why people think that it’s real? i.e. taking the evidence of "Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?" (This is not meant to be a gotcha, it just seems like a good question to ask)
No worries, it’s not a gotcha at all, and I already have some thoughts about this. I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven’t been actively reading about this stuff since about 2014. I’m not sure it’d be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I’d employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark. Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical or at least nonconscious phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions. Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information. Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses.
But suppose also we have an "important agents that I need to treat well" system, that detects and highlights certain agents within the world whom it would be important to treat appropriately, a kind of "VIP agency mapping system" that recruited a host of appropriate functional responses: emotional reactions, adopting the intentional stance, cheater-detection systems, and so on. On reflecting on the first system, we might come to form the view that the external world really is just this stuff described by physics, whatever that is. And that includes the VIP agents we interact with: they’re bags of meat! But this butts up against the overwhelming impression that they just couldn’t be. They *must* be more than just bags of meat. They have feelings! We may find ourselves incapable of shaking this impression, no matter how much of a reductionist or naturalist or whatever we might like to be. What could be going on here is simply the inability of these two mapping systems to adequately talk to one another. We are host to divided minds with balkanized mapping systems, and may find that we simply cannot grok some of the concepts contained in one of our mapping systems in terms of the mapping system in the other. You might call this something like "internal failure to grok." It isn’t that, say, I cannot grok some *other* person’s concepts, but that some of the cognitive systems I possess cannot grok each other. You might call this something like "conceptual incommensurability." And if we’re stuck with a cognitive architecture like this, certain intuitions may seem incorrigible, even if we could come up with a good model, based on solid evidence, that would explain why things would seem this way to us, without us having to suppose that it is that way.
Comment
I forgot to add a reference to the Robbins and Jack citation above. Here it is: Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical studies, 127(1), 59-85.
I’m not sure how to answer the first question. I’m sure my introspection revealed all manner of things over the course of years, and I’m also not sure what level of specificity you are going for. I don’t want to evade actually reporting on the contents of my mental states, so perhaps a more specific question would help me form a useful response. I may very well not have even the illusion of phenomenal consciousness, but I’m not sure I am alone in lacking it. While it remains an open empirical question, and I can’t vouch for the methodological rigor of any particular study, there is some empirical research on whether or not nonphilosophers are inclined towards thinking there is a hard problem of consciousness: https://www.ingentaconnect.com/content/imp/jcs/2021/00000028/f0020003/art00002 It may be that notions of qualia, and the kinds of views that predominate among academic philosophers are outliers that don’t represent how other people think about these issues, if they think about them at all.
How do you imagine consciousness would work in the moment for humans without inner/internal monologues (and with aphantasia, unable to visualize; some people can do neither)? And in general, for experiences that we don’t reflect on using language in the moment, or at most simple expressive language, like "Ow!"?
Comment
The question of what it is like to lack an internal monologue is a distressing one for me. I run a constant inner monologue, and can’t imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess. Part of the issue here is to avoid thinking of consciousness as a discrete capacity one either has or doesn’t have, or even as existing on a single continuum, such that one could have "more" or "less" of it. Instead, I think of "consciousness" as a term we use to describe a set of qualitatively and quantitatively distinct capacities. It’d be a bit like talking about "cooking skills." If someone doesn’t know how to use a knife, or start a fire, do they "lack cooking skills"? Well, they lack a particular cooking skill, but there is no single answer as to whether they "lack cooking skills," because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn’t know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients. This is why I am wary of saying that animals are "not conscious" and would instead say that whatever their "consciousness" is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is. As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I’m not sure whether I’d want to say someone without an inner monologue "isn’t conscious," as that seems a bit strange. Rather, I think I’d say that they may lack a feature of the kinds of consciousness most of us have that strikes me, at first glance, as fairly central and important. But perhaps it isn’t. I’d have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue. 
I do think it probably requires exposure to language, at least in practice, for humans (I don’t think an AI would have to proceed through the same developmental stages as humans do to become conscious. And, of course, in principle you could print out an adult human brain, which could be conscious without itself ever having been subjected to childhood enculturation). However, once the relevant concepts and structures have been "downloaded," this may not require a very specific type of phenomenology. Maybe it does, but at the very least, we could point to substantial overlap between the functional outputs of people who lack inner monologues and those of us who have one, outputs we would not observe in animals. People who lack inner monologues can still speak meaningfully about themselves in the past, make plans for the future, talk about themselves as agents operating within the world, employ theory of mind, would probably report that they are conscious, could describe their phenomenal experiences, and so on. In other words, there would be substantial functional overlap in the way they spoke, thought, and behaved, with only a few notable differences in how they describe their phenomenology. At least, I am supposing all this is the case. Maybe they are different in other ways, and if I knew about them, and really thought about this, it might have really disturbing implications. But I doubt that will turn out to be the case. This reminds me of an idea for a science fiction novel. I don’t know where it came from, but I’m not sure I was the first to think of a scenario like this: Suppose we discovered that some subset of the population definitely did not have conscious experiences, and that the rest of us did. And suppose we had some reliable test for determining who was or was not conscious. 
It was easy to administer, and we quickly found that our spouses, children, parents, closest friends, and so on, were not conscious at all. Such people were simply automata. There were no lights on inside. In short: they simply had no qualia at all. How would society react? What would people do? One could imagine a story like this addressing both interpersonal relationships, and the broader, societal-scale implications of such discoveries. I hope someone can take that idea and run with it, and turn it into something worth reading or watching.
Animal rights obsessed vegan checking in:
I am extremely worried GPT-3 is conscious! To be honest, I am worried about whether my laptop is conscious! A lot of people worried about animal suffering are also worried about algorithms suffering.
Comment
It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with a simulated Elon Musk by Lsusr on the Clearer Thinking podcast, episode 073 (starting at minute 102), I was quite concerned.
Some other less theory-heavy approaches to consciousness I find promising:
What do unconscious processes in humans tell us about sentience?, and then see Rethink Priorities’ table with evidence for various indicators for different species, with a column for unconscious processing in humans. (Disclaimer: I work at Rethink Priorities.)
The facilitation hypothesis: "Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus." This is compatible with most popular theories, and probably Yudkowsky’s position, depending on what we decide to include in the "cluster". Summary here. Another that’s worth mentioning, although I don’t know what to think of it anymore:
No-report paradigms, "which measure reflexive behaviors correlated with conscious states to provide a window on the phenomenal that is independent of access". Some more discussion here, where it’s corrected to a "no-post-perceptual cognition paradigm". For what it’s worth, I’m currently pretty skeptical that we can define consciousness in physical terms in a way that excludes panpsychism without drawing arbitrary lines. For example, how would you define a "self-model", in terms of basic physical processes? And how accurate does it need to be to count?
Does GPT-3 have any internal states/processes that look and act like its own emotions, desires or motivations? These words are in its vocabulary, but then, so are they in dictionaries. How could we interpret something as aversive to GPT-3? For example (although this isn’t the only way it could have such a state), is there an internal state that correlates well with the reward it would get during training?
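One crude operationalization of that last question: take some scalar readout of an internal state across many inputs and measure its correlation with the training reward. The sketch below is purely illustrative; the data and function names are mine, and nothing here is a claim about GPT-3’s actual internals.

```python
import numpy as np

def state_reward_correlation(activations, rewards):
    """Pearson correlation between a scalar internal readout and reward.

    activations: one scalar summary of an internal state per sample
    rewards: the (hypothetical) training reward per sample
    """
    activations = np.asarray(activations, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    # np.corrcoef returns a 2x2 correlation matrix; take the off-diagonal entry.
    return float(np.corrcoef(activations, rewards)[0, 1])

# Toy data: an internal state that tracks reward almost perfectly.
rng = np.random.default_rng(0)
reward = rng.normal(size=1000)
state = reward + 0.1 * rng.normal(size=1000)
print(state_reward_correlation(state, reward))  # close to 1.0
```

A high correlation would be necessary but obviously not sufficient for the state to count as anything like an emotion; a thermometer’s reading correlates perfectly with temperature, too.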
In mammals, activation of the ACC seems necessary for the affective component of pain, and this of course contributes to aversive behaviour. (Also, evolution has shaped animals to have emotions that correlate with the success of their genes, and intermediate goals conducive to it.)
If GPT-3 has any internal states/processes like its own emotions, desires or motivations, can it answer questions about them (in the right way)?
I think mammals and birds can probably generally be trained to communicate their emotions in different ways (see my references in this comment, although the evidence is admittedly not very strong).
GPT-3 does of course have an internal state that depends on what it’s read, and it can answer questions and respond to prompts about what it’s read.
Comment
It’s easy to show that GPT-3 has internal states that it describes as "painful" and tries to avoid. Consider the following dialogue (bold text is mine):
Comment
Counterexample:
Oh God! I am in horrible pain right now! For no reason, my body feels like it’s on fire! Every single part of my body feels like it’s burning up! I’m being burned alive! Help! Please make it stop! Help me!!
Okay, so that thing that I just said was a lie. I was not actually in pain (I can confirm this introspectively); instead, I merely pretended to be in pain.
Sir Ian McKellen has an instructive video.
The Turing test works for many things, but I don’t think it works for checking for the existence of internal phenomenological states. If you asked me what GPT-3 was doing, I would expect it to be closer to "acting" than "experiencing."
(Why? Because the experience of pain is a means to an end, and the end is behavioral aversion. GPT-3 has no behavior for pain to steer it away from. If anything, I’d expect GPT-3 to "experience pain" during training—but of course, it’s not aware while its weights are being updated. I think that, at the very least, no system trained offline can experience pain at all.)
Comment
I think we both agree that GPT-3 does not feel pain.
However, under a particular version of panpsychism, on which "pain is any internal state which a system attempts to avoid," GPT-3 obviously would qualify.
Comment
Sure, but that definition is so generic and applies to so many things that are obviously not like human pain (landslides?) that it lacks all moral compulsion.
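A toy illustration of just how generic that definition is: the two-branch thermostat controller below "attempts to avoid" the internal state "temperature far from setpoint," yet nobody thinks it feels pain. (Everything here is illustrative; the function and its labels are mine.)

```python
def thermostat_step(temperature, setpoint=20.0):
    """Return a control action that moves the system out of the
    'avoided' internal state (temperature away from setpoint)."""
    error = temperature - setpoint
    if error > 0:
        return "cool"  # acts so as to leave the too-hot state
    elif error < 0:
        return "heat"  # acts so as to leave the too-cold state
    return "idle"      # already in the non-avoided state

print(thermostat_step(25.0))  # cool
print(thermostat_step(15.0))  # heat
```

Under the "any avoided internal state" definition, this controller has pain; so, arguably, does a marble rolling out of an unstable equilibrium. That is the sense in which the definition applies too widely to carry moral weight.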
According to Yudkowsky, is the self-model supposed to be fully recursive, so that the model feeds back into itself, rather than just having a finite stack of separate models each modelling the previous one (like here and here, although FWIW, I’d guess those authors are wrong that their theory rules out cephalopods)? If so, why does this matter, if we only ever recurse to bounded depth during a given conscious experience? If not, then what does self-modelling actually accomplish? If modelling internal states is supposedly necessary for consciousness, how and why are we drawing distinctions between the internal and external? Why not the weaker claim that modelling states is necessary for consciousness? See some more discussion here, especially the sections "The extended mind" and ""Rock" objection."
Humans can distinguish stimuli they are aware of from ones they are not aware of. Below-awareness-level stimuli are not ethically significant to humans—if someone pricks you with a needle and you don’t feel pain, then you don’t feel pain and don’t care much. Therefore only systems that can implement awareness detectors are ethically significant.
My current model of consciousness is that it is the process of encoding cognitive programs (action) or belief maps (perception). These programs/maps can then be stored in long-term memory to be called upon later, or they can be transcoded onto the language centers of the brain to allow them to be replicated in the minds of others via language. Both of these functions would have a high selective advantage on their own. Those who can better replicate a complex sequence of actions that proved successful in the past (by loading a cognitive program from memory or from language input) and those who can model the world in a manner that has proven useful (loading a belief map from memory or from language input) can more quickly adapt to changes in the environment than can those who rely on mere reinforcement learning. RL, like evolution, is basically a brute-force approach to learning, whereas the encodings created by conscious attention would allow the brain to load and run executable programs more like a computer. Of course, this process is imperfect in humans since most of our evolutionary history has involved brains that depended more on unsupervised learning of world models and reinforcement learning of behavioral policies. Even the hippocampus probably acts more like a replay buffer for training a reinforcement learning algorithm in most species than as a generalized memory system. Note that this doesn’t imply that an agent is necessarily conscious when it uses language or memory (or even when it uses a model of the self). I think consciousness probably involves pulling together a bunch of different mechanisms (attention, self-modeling, world-modeling, etc.) in order to create the belief maps and cognitive programs that can be reloaded/transmitted later. It’s the encoding process itself, not the reloading or communication necessarily. Of course, one could be conscious of those other processes, but it’s not strictly necessary. 
People who enter a "flow" state seem to be relying on purely unconscious cognitive processes (more like what non-human animals rely on all the time), since conscious encoding/reloading is very expensive. I’m no expert on any of this, though, so please feel free to poke holes in this model. I just think that consciousness and qualia aren’t things that anyone should bother trying to program directly. It’s more likely, in my opinion, that they will come about naturally as a result of designing AI with more sophisticated cognitive abilities, just like what happened in human evolution.
My main objection (or one of my main objections) to the position is that I don’t think I’m self-aware to the level of passing something like the mirror test or attributing mental states to myself or others during most of my conscious experiences, so the bar for self-reflection seems set too high. My self-representations may be involved, but not to the point of recognizing my perceptions as "mine", or at least the "me" here is often only a fragment of my self-concept. My perceptions could even be integrated into my fuller self-concept, but without my awareness. The kinds of self-reflection involved when mice suffer from the rubber hand (tail) illusion, or when dogs recognize their own bodies as being in the way, or when (I think) animals learn to communicate their emotions generally in different trainer-selected ways (point 5 here), seem like enough to match many of my everyday conscious experiences, if any self-reflection is required at all. It also wouldn’t be necessary for the self-representations to be fully unified across all senses or over time, since local integration is global with respect to the stuff being integrated; animals could have somewhat separate self-representations. Still, I do think mammals and birds (and plausibly most vertebrates) do integrate their senses to a large extent, and I think many invertebrates probably do to some extent, too, given, for example, evidence for tradeoffs and prioritization between pain and other perceptions in some crustaceans, as well as for cross-modal learning in bees. I know less about any possible self-reflection in invertebrates, and I’ve seen papers arguing that they lack it, at least with respect to pain processing.
What if consciousness is a staccato frame-rate that seems continuous only because memory is ontologically persistent and the experiential narrative is spatiotemporally consistent – and therefore neurologically predictable?
Or maybe the brain works faster than the frame-rate required for the impression of quotidian conscious identity? That is to say, brains are able to render—at any moment—a convincing selfhood (consciousness complete with sense of its own history) that’s perceptually indistinguishable from an objective continuity of being; but could just as easily have been constructed in that moment rather than emerging as the latest instance in a long persistent sequence of past to present time.
There’s precedent for non-continuity feeling continuous in vision, for example, where we actually spend a lot of time functionally blind but what we see is a smooth visual experience that doesn’t flicker on and off. The brain fills in gaps in our perceived self-existence from one moment to the next, just as it fills in the moments of blindness to create continuity in wakeful vision.
I don’t know if this is helpful, but I’ll just throw in that I’m unusually hesitant to disagree with extremely smart people (a position that seems to be almost universally shunned on LW, see e.g. here and here), and yet I dare to disagree with Eliezer about consciousness. I don’t think there is a hidden reason why his take is justified.
My position is that ‘consciousness is the result of information processing’ is almost certainly not true (which makes the tweet a non-starter), and at the very least, Eliezer has never written anything that extensively argues why it would be true. On most topics, his writing makes strong, self-contained arguments; on consciousness, he’s never written anything like a coherent thesis. (This doesn’t mean he’s wrong, of course; I’m just saying that he has yet to make an argument in writing.)
Comment
I don’t have a strong take on whether his position is true, but I do think a lot of the sequences are laying out background that informs his beliefs.
Comment
Does this come down to the thing Scott has described here? I.e.
If so I can repeat that I’m a huge fan of the sequences, I agree with almost everything in them, even though I think humans are atoms.
On the other hand, it has been years since I’ve read them (and I had much fewer philosophical thoughts & probably worse reading comprehension than I do now). It’s possible that there is more background in there than I recall.
Comment
I do think that’s a central unifying piece. Relevant pieces include How An Algorithm Feels From Inside, and "Intelligence, Preferences and Morality have to come from somewhere, from non-mysterious things that are fundamentally not intelligence, preferences, morality, etc. You need some way to explain how this comes to be, and there are constraints on what sort of answer makes sense." I think much of the sequences are laying out different confusions people have about this and addressing them.
The key thing to keep in mind is that EY is a physicalist. He doesn’t think that there is some special consciousness *stuff*. Instead, consciousness is just what it feels like to implement an algorithm capable of sophisticated social reasoning. On this view, an algorithm is conscious if and only if it is capable of sophisticated social reasoning, and it is conscious only when it applies that reasoning to itself. This is why EY doesn’t think that he himself is conscious when dreaming or in a flow state. Additionally, EY does not think that chickens engage in sophisticated social games (others may disagree). This is why he is confident that neither GPT-3 (which reflexively predicts text) nor chickens are conscious. His criticism is not specifically against people who think chickens might be conscious, but only of people who think chickens might be conscious but not *GPT-3*. The implication is that any such theory would imply the existence of non-physical qualia which are possessed by chickens (because they have neurons) but not GPT-3 (because it is a computer program). Such meat-chauvinism is a parochial view which EY considers utterly unscientific. Consider the types of evidence that might convince EY chickens (but not GPT-3) are conscious. Assuming his theory is correct, there would have to be evidence that chickens are self-aware and engage in complex social games: for example, if a chicken were to pass the mirror test, or if chickens were observed forming coalitions-of-coalitions.
On the other hand, it would be much more difficult to produce evidence that would convince EY to abandon his current theory of consciousness, since he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself". One possible piece of evidence would be the discovery of scientific evidence for the physical existence of qualia. Suppose, for example, that there was a particle (called, perhaps, a qualion) that was emitted whenever we experienced a conscious thought, and that this particle could be scientifically studied and measured. If it was found that this particle is emitted *both* when we self-reflect and when we dream (but not by inanimate or mindless objects), then this could be considered evidence for a physical correlate of consciousness.
Comment
The theory that consciousness is just what it feels like to be a sophisticated information processor has a number of attractive features, but it is not a physicalist theory in every sense of "physicalist". In particular, physics does not predict that anything feels like anything from the inside, so that would need to be an additional posit.
Relatedly, his theory is in no way a reduction of consciousness to physics (or computation). A reductive explanation of consciousness would allow you to predict specific subjective states from specific brain states (as in Mary’s Room), would allow you to reliably construct artificial consciousness, and so on. The "just what it feels like from the inside" theory doesn’t do any of that.
Your point 1 states that EY’s theory is physicalist in the sense of not being substance dualist... and that is true, as far as it goes... but it is far from the only issue, because there are many dualisms and many non-physicalisms.
Comment
I think you can predict specific subjective states by observing that the same computations result in the same subjective states? I mean, in theory—do you mean that for a theory to be a reduction, it must be practical to predict a specific human’s qualia? By that standard, we don’t have a physical reduction of billiard balls.
Comment
We do have a reductive explanation of billiard balls, in theory. If we don’t have a reductive explanation of billiard balls, we don’t have a reductive explanation of anything. Of course, the computations can be impractical, but that’s why Mary in Mary’s Room is a super-scientist.
Comment
Not to speak on behalf of EY, but… An assertion like the following one doesn’t necessarily need evidence: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves." What you care about is ground truth from your first-person perspective. If I say that I care about this balloon I’m holding not bursting, or my friend not dying, there is a very direct connection between my first-person experience and the words I am saying. I do not need to pattern-match my experience with my friend to an abstract mental object like "algorithms that self-reflect" in order to care about my friend. So maybe (or maybe not) EY has spent a lot of time thinking about the space of possible agents and found that the ones he deeply cares about at a first-person level all have an inner listener. The abstract mental object of "having an inner listener" might come after; the examples of inner listeners and the caring for those beings might come before. Basically, I’d personally want to reorient this discussion from one about finding ground truth in the physical world to one about finding ground truth in your own first-person experience about what you care about. "Who is conscious?" isn’t a great question to ask when we all know it’s a spectrum. But asking this deflects from the real question, which is "what forms of consciousness (or beings, more generally) do *I* care about?"
Say you had a system that implemented a sophisticated social reasoning algorithm, and that was actually conscious. Now make a list of literally every sensory input and the behavioral output that the sensory input causes, and write it down in a very (very) long book. This book implements the same exact sophisticated social reasoning algorithm. To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.
Comment
If I’m not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm. From an outside view, they have different computational complexity. There are a number of different ways of defining program equivalence, but equivalence is different from identity. A is equivalent to B doesn’t mean A is B.
See also: Chinese Room Problem
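The equivalence-without-identity point can be made concrete with a toy sketch (Python purely as illustration; the function and domain are mine): a function computed on the fly, and the same function precomputed into a lookup table ("the book"), agree on every input in a finite domain, yet they are different algorithms with different internal structure and complexity.

```python
def parity_algorithm(n):
    """Compute the parity of n's bits by actually processing the input."""
    count = 0
    while n:
        count ^= n & 1
        n >>= 1
    return count

# "The book": a table of precomputed answers for every input in the domain.
DOMAIN = range(256)
parity_book = {n: parity_algorithm(n) for n in DOMAIN}

# Behaviorally equivalent on the whole domain...
assert all(parity_book[n] == parity_algorithm(n) for n in DOMAIN)
# ...but a table lookup performs no computation resembling the loop above.
print(parity_book[7], parity_algorithm(7))  # 1 1
```

The lookup runs in constant time regardless of input size, while the loop does work proportional to the bit length: same input-output behavior, different computation. Whether that difference matters for consciousness is exactly what the Chinese Room dispute is about.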
Comment
I see, but in that case, what is the claim about GPT-3: that if it were behaviorally equivalent to a complicated social being, it would be conscious?
Comment
I don’t agree with Eliezer here. I don’t think we have a deep enough understanding of consciousness to make confident predictions about what is and isn’t conscious beyond "most humans are probably conscious sometimes".
The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.
If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of the requirements of the algorithm.