Contents
- Example: The Virtue Theory of Metabolism
- An Exercise
- Political Example: PS5 Sales
- Positive Example: Toxoplasma Memes
- Example: Copenhagen Ethics
- Takeaway

Thank you to Elizabeth for a great conversation which spurred me to write up this post.

Claim: moral/status/value judgements (like "we should blame X for Y", "Z is Bad", etc.) like to sneak into epistemic models and masquerade as weight-bearing components of predictions.
Example: The Virtue Theory of Metabolism
The virtue theory of metabolism says that people who eat virtuous foods will be rewarded with a svelte physique, and those who eat sinful foods will be punished with fat. Obviously this isn’t meant to be a "real" theory; rather, it’s a tongue-in-cheek explanation of the way most laypeople actually think about diet and body weight.

Lest ye think this a strawman, let’s look at some predictions made by the virtue theory. As a relatively unbiased first-pass test, we’ll walk through Haidt’s five moral foundations and ask what predictions each of these would make about weight loss when combined with the virtue theory. In particular, we’ll look for predictions about perceived healthiness which seem orthogonal (at least on the surface) to anything based on actual biochemistry.
- Care/harm: food made with care and love is healthier than food made with indifference. For instance, home cooking is less fattening than restaurant food, or factory-farmed meat is more fattening than free-range chicken.
- Fairness: food made fairly is healthier than food made unfairly. For instance, "fair-trade" foods are less fattening.
- Loyalty/ingroup: food made by members of an ingroup is more healthy. For instance, local produce is less fattening.
- Authority/respect: food declared "good" by legitimate experts/authorities is more healthy. Fun fact for American readers: did you know the original food pyramid was created by the department of agriculture (as opposed to the department of health), and bears an uncanny resemblance to the distribution of American agricultural output?
- Sanctity/purity: impure or unnatural food is unhealthy. For instance, preservatives, artificial flavors, and GMO foods are all more fattening, whereas organic food is less fattening.

Maybe I’m cherry-picking or making a just-so story here, but… these sound like things which I think most people do believe, and they’re pretty central to the stereotypical picture of a "healthy diet". That’s not to say that there isn’t also some legitimate biochemistry sprinkled into peoples’ food-beliefs, but even then it seems like the real parts are just whatever percolated out of Authorities.

As a purely descriptive theory of how laypeople model metabolism, the virtue theory looks pretty darn strong. Of course, if pressed, people will come up with some biology-flavored explanation for why their food-health-heuristic makes sense, but the correlation with virtue instincts pretty strongly suggests that these explanations are post-hoc.
An Exercise
This post isn’t actually about the virtue theory of metabolism. It’s about a technique for noticing things like the virtue theory of metabolism in our own thinking. How can we detect moral/status/value judgements masquerading as components of predictive models?

The technique is simple: taboo concepts like "good", "bad", "should", etc. in one’s thinking. When you catch yourself thinking that one "should" do X, or that X is "good", stop and replace that with "I like X" or "X is useful for Y" or "X has consequence Z" or the like. Ideally, you keep an eye out for anything which feels suspiciously value-flavored (like "healthy" in the examples above) and taboo it.

So, for instance, I might notice myself eating vegetables because they are "good for me". I notice that "good" appears in there, so I taboo "good", and ask what exactly these vegetables are doing which is supposedly "good" for me. I may not know all the answers, and that’s fine—e.g. the answer may just be "my doctor says I’ll have fewer health problems if I eat lots of vegetables". But at least I have flagged this as a thing-about-which-I-have-incomplete-knowledge-about-the-physical-gears-involved, rather than treating "vegetables are good" as an atomic fact in some vague, undefined sense.

In general, there is absolutely no morality or status whatsoever in any physical law of the universe, at any level of abstraction. If a subquery of the form "what is Good?" ever appears when trying to make a factual prediction, then something has gone wrong. Epistemics should contain exactly zero morality. (Did you catch the "should" in the previous sentence? Here’s the full meaning with "should" taboo’d: if a moral statement is load-bearing in a model of the physical world, then something is factually incorrect in that model. At a bare minimum, even if the end prediction is right, the gears are definitely off.)

The usefulness of tabooing "should" is to flush out places where moral statements are quietly masquerading as facts about the world, or as gears in our world-models. Years ago, when I first tried this exercise for a week, I found surprisingly many places where I was relying on vague, morally-flavored "facts", similar to "vegetables are good". Food was one area, despite my having already heard of the virtue theory of metabolism and already trying to avoid that mistake. The hooks of morality-infected epistemology ran deeper than I realized. Politics-adjacent topics were, of course, another major area where the hooks ran deep.
Political Example: PS5 Sales
Matt Yglesias provides a prototypical example in What’s Wrong With The Media. He starts with an excerpt from a PlayStation 5 review:
The world is still reeling under the weight of the covid-19 pandemic. There are more Americans out of work right now than at any point in the country’s history, with no relief in sight. Our health care system is an inherently evil institution that forces people to ration life-saving medications like insulin and choose suicide over suffering with untreated mental illness. As I’m writing this, it looks very likely that Joe Biden will be our next president. But it’s clear that the worst people aren’t going away just because a new old white man is sitting behind the Resolute desk—well, at least not this old white man. Our government is fundamentally broken in a way that necessitates radical change rather than incremental electorialism. The harsh truth is that, for the reasons listed above and more, a lot of people simply won’t be able to buy a PlayStation 5, regardless of supply. Or if they can, concerns over increasing austerity in the United States and the growing threat of widespread political violence supersede any enthusiasm about the console’s SSD or how ray tracing makes reflections more realistic. That’s not to say you can’t be excited for those things—I certainly am, on some level—but there’s an irrefutable level of privilege attached to the ability to simply tune out the world as it burns around you.

The problem here, Yglesias argues, is that this analysis is bad—i.e. the predictions it makes are unlikely to be accurate:

What actually happened is that starting in March the household savings rate soared (people are taking fewer vacations and eating out less) and while it’s been declining from its peak as of September it was still unusually high. [...] The upshot of this is that no matter what you think about Biden or the American health care system, the fact is that the sales outlook for a new video game console system is very good.

Indeed, the PS5 sold out, although I don’t know whether Yglesias predicted that ahead of time. So this is a pretty clear example of moral/status/value judgements masquerading as components of a predictive model. Abstracting away the details, the core of the original argument is "political situation is Bad → people don’t have resources to buy PS5". What does that look like if we taboo the value judgements? Well, the actual evidence cited is roughly:
- Lots of people have COVID
- American unemployment rate is at an all-time high
- Health care system forces rationing of medication and doesn’t treat mental illness
- Bad People aren’t going away (I’m not even sure how to taboo this one, or whether there’d be anything left; it could mean any of a variety of Bad People, or the author might not even have anyone in particular in mind)
- Lots of people are concerned about austerity or political violence

Reading through that list and asking "do these actually make me think video game console sales will be down?", the only one which stands out as directly relevant—not just mood affiliation with Bad things—is unemployment. High unemployment is a legitimate reason to expect slow console sales, but when you notice that that’s the only strong argument here, the whole thing seems a lot less weighty.

(Amusing side note: the unemployment claim was false. Even at peak COVID, unemployment was far lower than during the Great Depression, and it had already fallen below the level of more recent recessions by the time the console review was published. But that’s not really fatal to the argument.)

By tabooing moral/status/value claims, we force ourselves to think about the actual gears of a prediction, not just mood-affiliate.

Now, in LessWrong circles we tend not to see really obvious examples like this one. We have implicit community norms against prediction-via-mood-affiliation, and many top authors already have a habit of tabooing "good" or "should". (We’ll see an example of that shortly.) But I do expect that lots of people have a general-sense-of-social-Badness, with various political factors feeding into it, and quick intuitive predictions about economic performance (including console sales) downstream. There’s a vague idea that "the economy is Bad right now", and therefore e.g. console sales will be slow or stock prices should be low. That’s the sort of thing we want to notice and taboo. Often, tabooing "Bad" in "the economy is Bad right now" will still leave useful predictors—unemployment is one example—but not always, and it’s worth checking.
Positive Example: Toxoplasma Memes
From Toxoplasma of Rage:
Consider the war on terror. They say that every time the United States bombs Pakistan or Afghanistan or somewhere, all we’re doing is radicalizing the young people there and making more terrorists. Those terrorists then go on to kill Americans, which makes Americans get very angry and call for more bombing of Pakistan and Afghanistan. Taken as a meme, it’s a single parasite with two hosts and two forms. In an Afghan host, it appears in a form called ‘jihad’, and hijacks its host into killing himself in order to spread it to its second, American host. In the American host it morphs into a form called ‘the war on terror’, and it hijacks the Americans into giving their own lives (and tax dollars) to spread it back to its Afghan host in the form of bombs. From the human point of view, jihad and the War on Terror are opposing forces. From the memetic point of view, they’re as complementary as caterpillars and butterflies. Instead of judging, we just note that somehow we accidentally created a replicator, and replicators are going to replicate until something makes them stop.

Note that last sentence: "Instead of judging, we just note that somehow we accidentally created a replicator…". This is exactly the sort of analysis which is unlikely to happen without somebody tabooing moral/status/value judgements up-front.

If we go in looking for someone to blame, then we naturally end up modelling jihadists as Evil, or interventionist foreign policymakers as Evil, or …, and that feels like it has enough predictive power to explain what’s going on. Jihadists are Evil, therefore they do Bad things like killing Americans, and then the Good Guys kill the Bad Guys—that’s exactly what Good boils down to in the movies, after all. It feels like this model explains the main facts, and there’s not much mystery left—no reason to go asking about memetics or whatever.

Interesting and useful insights about memetics are more likely if we first taboo all that. Your enemies are not innately Evil, but even if they were, it would be useful to taboo that fact, unpack it, and ask how such a thing came to be. It’s not that "Bad Person does Bad Thing" always makes inaccurate predictions; it’s that humans have built-in intuitions which push us to use that model regardless of whether it’s accurate, for reasons more related to tribal signalling than to the heuristic’s predictive power.
Example: Copenhagen Ethics
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem, you can be blamed for it. In particular, you won’t be blamed for a problem you ignore, but you can be blamed for benefitting from a problem even if you make the problem better. This is not intended as a model for how ethics "should" work, but rather for how most people think about ethics by default. Tabooing moral/status/value judgements tends to make Copenhagen ethics much more obviously silly. Here’s an example from the original post:
In 2010, New York randomly chose homeless applicants to participate in its Homebase program, and tracked those who were not allowed into the program as a control group. The program was helping as many people as it could; the only change was explicitly labeling a number of people it wasn’t helping as a "control group". The response? "They should immediately stop this experiment," said the Manhattan borough president, Scott M. Stringer. "The city shouldn’t be making guinea pigs out of its most vulnerable."

Let’s taboo the "should"s in Mr Stringer’s statement. We’ll use the "We should X" → "X has consequence Z" pattern: replace "they should immediately stop this experiment" with "immediately stopping this experiment would <???> for the homeless". What goes in the <???>? Feel free to think about it for a moment.

My answer: nothing. Nothing goes in that <???>. Stopping the experiment would not benefit any homeless people in any way whatsoever. When we try to taboo "should", that becomes much more obvious, because we’re forced to ask how ending the experiment would benefit any homeless people.
Takeaway
Morality has no place in epistemics. If moral statements are bearing weight in world-models, then at a bare minimum the gears are wrong. Unfortunately, I have found a lot of morality-disguised-as-fact hiding within my own world-models. I expect this is the case for most other people as well, especially in politics-adjacent areas.

A useful exercise for rooting out some of these hidden-morality hooks is to taboo moral/status/value-flavored concepts like "good", "bad", "should", etc. in one’s thinking. Whenever you notice yourself using these concepts, dissolve them—replace them with "I want/like X" or "X is useful for Y" or "X has consequence Z".

Three caveats to end on.

First caveat: as with the general technique of tabooing words, I don’t use this as an all-the-time exercise. It’s useful now and then to notice weak points in your world-models or habits, and it’s especially useful to try it at least once—I got most of the value out of the first week-long experiment. But it takes a fair bit of effort to maintain, and one week-long try was enough to at least install the mental move.

Second caveat: a moral statement bearing weight in a world-model means the model is wrong, but that does not mean that the model will improve if the moral components are naively thrown out. You do need to actually figure out what work those moral statements are doing (if any) in order to replace them. Bear in mind that morality is an area subject to a lot of cultural selection pressure, and those moral components may be doing something non-obviously useful.

Final caveat: *before* trying this exercise, I recommend you *already* have the skill of not giving up on morality altogether just because moral realism is out the window. Just because morality is not doing any epistemic work does not mean it’s not doing any instrumental work.

Note: I am actively looking for examples to use in a shorter exercise, in order to teach this technique. Ideally, I’d like examples where most people—regardless of political tribe—make a prediction which is obviously wrong/irrelevant after tabooing a value-loaded component. If you have examples, please leave them in the comments. I’d also be interested to hear which examples did/didn’t click for people.
A previous example I ran into was using the phrase "X is an unhealthy relationship dynamic." I found myself saying it in a few different places without actually knowing what I meant by "unhealthy." I think the word "unhealthy" was being slightly more descriptive than "bad", but not by much. Unfortunately I can’t remember the particular examples that I was originally noticing. Some of them might have related to power differentials, or perceived power differentials. (I think I now have a more gearsy model of why you should be wary of them, although I may not have ever really gotten clear on this point.)
Comment
Tbh switching from using "unhealthy" to "bad" can help, because it removes any trace of sophistication, thereby making this kind of usage less rewarding.
Comment
Are you saying it’s better to say "bad" than "unhealthy?" (because it removes the illusion of meaning?)
Comment
Not because it removes the illusion of meaning, but because it makes you sound less cool. I’m thinking of conversations here; in solitary thinking it wouldn’t make much difference, imo.
Or, you could use "unhealthy" only to mean things which are likely to decrease your health (mental health included).
Comment
That’s what I meant ofc.
My gut reaction to this post is that it’s importantly wrong. This is just my babbled response, and I don’t have time to engage in a back-and-forth. Hope you find it valuable though!

My issue is with the idea that any of your examples have successfully tabooed "should." In fact, what’s happening here is that the idea that we "should taboo ‘should’" is being used to advocate for a different policy conclusion.

Let’s use Toxoplasma Memes as an example. Well, just for starters, framing Jihad vs. War on Terror as "toxoplasma" works by choosing a concept handle that evokes a disgust reaction to effect an ethical reframing of the issue. Both Jihad/War on Terror theorists and "Toxoplasma" theorists have causal models that are inseparable from their ethical models. To deny that is not to accomplish a cleaner separation between epistemology and ethics; it’s to disguise reality to give cover to one’s own ethical/epistemic combination. You can do it, sure, it’s the oldest trick in the book. If I were to say you shouldn’t, "because it disguises the truth," I’m just being a hypocrite.

Likewise, the fact that "tabooing ‘should’" makes the Copenhagen interpretation of ethics seem *silly* also illustrates how "tabooing ‘should’" is a moral as much as an epistemic move. The point is to make an idea and its advocates look silly. It’s to manipulate their social status and tickle people’s jimmies until they agree with you. It might not work, but that’s the intent.

Yglesias simply misrepresented the claim made by at least the snippet of the PS5 review that you cite. The review said:
Comment
For the record, I do get some disgust reaction out of the toxoplasma thing, and I think it was at least somewhat intentional (and I think I may also endorse it? With significant caveats).
Regardless of the details of why he picked the piece, it’s pretty clear from a clear-headed reading of the review that Yglesias is attributing to the reviewer a claim they did not make. I think "the PS5 will have outstanding sales" and "there are many people who won’t buy a PS5 due to some impact of the pandemic" can both be true, and likely are.
Sorry, I know that this runs the risk of being an exchange of essays, making it hard to respond to each point. In Toxoplasma of Rage, the part just prior to the reference to the war on terror goes like this:
I apologize if it’s a bit tangential, but I want to point out that "should" statements are a common source of distortions (in cognitive behavioral therapy). It is often good to clarify the meaning of "should" in that context to see if it’s valid (is it a law of the universe? is it a human law? …). It often just means "I would prefer if", which doesn’t bear as much weight... David Burns explains this clearly, and I was struck when he pointed out the linguistic heritage of "should" and how it connects to "scold". Here’s one podcast episode on the topic (there are more, easy to find). I wonder if some of the other distortions (such as "labelling" to sneak in a morality judgement) could be subject to a similar treatment. For example, Scott Alexander talks about debates on labelling something a "disease".
Comment
Nice parallel to CBT! On a meta level, I really like comments which take the idea from a post and show a parallel idea from some other area.
My reaction to this post is something like "yes, this is a good practice", but I’ve done it, and it pushed me out the other side to believe I can say nothing at all without saying some kind of "should", because if you taboo "should" enough you run into grounding problems. Cf. no free lunch in value learning. This is only to add nuance to your point, though, and I think the practice is worthwhile, because until you’ve done it you likely have a lot of big "should"s in your beliefs gumming up the works. I just think it’s worth pointing out that the category of motivated reasons can’t be made to disappear from thought without giving up thinking altogether, even if they can be examined and significantly reduced and the right starting advice is just to try to remove them all.
Comment
Generally, I let things ground out in "I want X". There’s value in further unpacking the different flavors of "wanting" things, but that’s orthogonal to what this exercise is meant to achieve.
Curated. I actually think this post could be improved a lot – the examples feel a bit weird. I didn’t actually understand the intent of the PS5 example, and I think I agree with another commenter that something about the Toxoplasma example feels off. Nonetheless, this concept has been pretty important to my thinking in recent weeks. I think being able to taboo moral language and focus on explicit gears is a key skill.
Comment
Regarding the toxoplasma example: it sounds like some people have different mental images associated with toxoplasmas than I do. For me, it’s "that virus that hijacks rat brains to remove their fear response and be attracted to cat-smell". The most interesting thing about it, in my head, is that study which found that it does something similar in humans, and in fact a large chunk of cat-owners have evidence of toxoplasma hijack in their brains. That makes it a remarkably wonderful analogy for a meme. It sounds like some other people associate it mainly with cat poop (and therefore the main reaction is "gross!"). Anyway, I agree the post could be improved a lot, and I’d really like to find more/better examples. The main difficulty is finding examples which aren’t highly specific to one person.
Comment
Protozoa, not virus.
It’s a great approach, to avoid moral-carrying connotations unless explicitly talking about morals. I’ve been practicing it for some time, and it makes things clearer. And when explicitly talking about moral judgments, it’s good to specify the moral framework first. A (much) harder exercise is to taboo the words like "truth", "exist", "real", "fact", and replace them with something like "accurate model", "useful heuristic", "commonly accepted shortcut", etc.
This morning I came across a paper reported on a ‘science’ website. The paper was on heteronormativity and was laced with value judgements on the moral perils & injustices that the authors seem to believe flow from refusing to recognise biological sex (not just gender) as a ‘spectrum’. It read like a high school debate presentation. They need to read your post.
I kind of get behind the "self-grown produce is less fattening"—you put in work, it takes out fat. Work, in many people’s minds, is "good". Also, try chewing through a free-range hen, not a chicken. Builds muscles, hen.