[Question] Worst Commonsense Concepts?

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts

Perhaps the main tool of rationality is simply to use explicit reasoning where others don’t, as Jacob Falkovich suggests:

New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: "people can use explicit reason to figure things out, but they rarely do."

However, I also think a big chunk of the value of rationality-as-it-exists-today is in its corrections to common mistakes of explicit reasoning. (To be clear, I’m not accusing Jacob of ignoring that.) For example, Bayesian probability theory is one explicit theory which helps push a lot of bad explicit reasoning to the side. The point of this question, however, is not to point to the good ways of reasoning. The point here is, rather, to point at *bad* concepts which are in widespread use. For example:

Comment

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=cqPKekNrsBFLx2BvK

"Some truths are outside of science’s purview" (as exemplified by e.g. Hollywood shows where a scientist is faced with very compelling evidence of supernatural, but claims it would be "unscientific" to take that evidence seriously).

My favorite way to illustrate this: around the end of the 19th century/beginning of the 20th century [time period is from memory, might be a bit off], belief in ghosts was commonplace, with a lot of interest in holding spiritual séances, etc., while rare stories of hot rocks falling from the sky were mostly dismissed as tall tales. Then scientists followed the evidence, and now almost everybody knows that meteorites are real and "scientific", while ghosts are not, and are "unscientific".

Comment

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=w35oiwm3sgnsQWdmD

I tend to agree, but only to an extent. To our best understanding, cognition is a process of predictive modelling. Prediction is an intrinsic property of the brain that never stops. A misprediction (usually) causes you to attend to the error and update your model. Suppose we define science as any process that achieves better map-territory convergence (i.e. minimises predictive error). In that case, it is uncontroversial to say that we are all, necessarily, engaged in the scientific process at all times, whether we like it or not. Defining science this way, it is reasonable to say that no claim about reality is, in principle, outside the purview of science.

Moral Uncertainty claims that even with perfect epistemic and ontological certainty, we still have to deal with uncertainty about what to do. However, I’ve always struggled to see how the above claim about map-territory convergence applies to goal selection and morality. I am not claiming that goal selection and morality are necessarily outside the purview of science. I am just puzzled by this. How can we make scientific claims about selecting goals? Can we derive an ought from an is? Is it nonsensical to try and apply science to goal selection and morality? I subscribe to physicalism, and I thus believe that goals, decisions and purposes are absurd notions when we boil them down to physics. My puzzlement could be purely illusory but, still, I am puzzled.
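To make the map-territory convergence framing concrete, here is a minimal toy sketch (my illustration, not the commenter’s; the territory value, noise level, and learning rate are all invented): the agent’s "map" is a single number, and every misprediction nudges it toward the evidence.

```python
import random

# Toy "map-territory convergence": the territory is a fixed unknown value,
# the agent's map is one number, and each prediction error updates the map.
territory = 7.3        # the true state of the world (hidden from the agent)
map_estimate = 0.0     # the agent's current model
learning_rate = 0.1

for step in range(200):
    observation = territory + random.gauss(0, 0.5)    # noisy evidence
    prediction_error = observation - map_estimate     # the misprediction
    map_estimate += learning_rate * prediction_error  # update toward the data

print(f"map estimate after 200 steps: {map_estimate:.2f} (territory = {territory})")
```

Under this reading, "doing science" is just running this loop deliberately and carefully; what the loop says nothing about is which goals the agent should pursue, which is exactly the commenter’s puzzle.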

Comment

Right, something like "Some objective truths are outside of science’s purview" might have been a slightly better phrasing, but as the goal is to stay at the commonsense level, trying to parse this more precisely is probably out of scope anyway, so we might as well stay concise...

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=zNLw2uzosDKyoFzHG

But some stuff is explicitly outside of science’s purview, though not in the way you’re talking about here. That is, some stuff is about, for example, personal experience, which science has limited tools for working with, since it has to strip away a lot of information in order to transform experience into something that works with scientific methods. Compare how psychology sometimes can’t say much of anything about what people actually experience, because it doesn’t have a way to turn experience into data.

Comment

I think this might conflate "science" with something like "statistics". It’s possible to study things like personal experience, just harder at scale. The Hollywood-scientist example illustrates this, I think. Dr. Physicist finds something that wildly conflicts with her current understanding of the world, and would be hard to put a number on, so she concludes that it can’t and shouldn’t be reasoned about using the scientific method.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=6j8ym4Jvs8RHQL9L8

Commonsense ideas rarely include information about their domain of applicability. This creates the need to explicitly note the law of equal and opposite advice, and to evaluate which sorts of people and situations need the antidote to a given piece of commonsense advice.

There is also the tendency toward the fallacy of the single cause: explanations feel more true because they are more compact representations, and thus easier to think about and to generate vivid examples for, which then feed further confirmation bias. The modal fallacy is also related.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=4E79WZFMNZv55yXuB

‘Justice’ has got to be one of the worst commonsense concepts.

It is used to ‘prove’ the existence of free will, and it is the basis of a lot of suboptimal political and economic decision-making.

Taboo ‘justice’ and talk about incentive alignment instead.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=awLfA2ZJqmtCRkxHt

To me, some of the worst commonsense ideas come from the amateur-psychology school: "gaslighting", "blaming the victim", "raised by narcissists", "sealioning" and so on. They just teach you to stop thinking and take sides.

Logical fallacies, like "false equivalence" or "slippery slope", are in practice mostly used to dismiss arguments prematurely.

The idea of "necessary vs contingent" (or "essential vs accidental", "innate vs constructed" etc) is mostly used as an attack tool, and I think even professional usage is more often confusing than not.

Comment

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=LcpuTkjmqmivMRzXc

I think it would be useful if you edited the answer to add a line or two explaining each of those, or at least giving links (for example, Schelling fences on slippery slopes), because these seem non-obvious to me.

Comment

I actively disagree with the top-level comment as I read it.

I do, however, think this may be a difference of interpretation of the question and its domain.

I don’t think it makes very much sense to say that "gaslighting" is a bad idea. It describes a harmful behavior that you may observe in the world.

I think that @cousin_it may be saying, in no conflict with what I said above, that the verbal tag "gaslighting" is frequently used in inappropriate ways, like to shut out a person who says ‘I didn’t do that’ by putting them in a bucket labeled "abuser". I think this is a reasonable observation [1], but I don’t think this is what the question-asker meant. I think they were seeking bad concepts, not concepts that are used in bad ways.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=fBfpApDkCvM6pgEbc

**Local Optimisation Leads to Global Optimisation.** The idea that if everyone takes care of themselves and acts in their own parochial best interest, then everyone will be magically better off sounds commonsensical but is fallacious. Biological evolution, as Dawkins has put it, is an example of a local optimisation process that "can drive a population to extinction while constantly favouring, to the bitter end, those competitive genes destined to be the last to go extinct." Parochial self-interest is indirectly self-defeating, but I keep getting presented with the same commonsense-sounding and magical argument that it is somehow :waves-hands: a panacea.
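A minimal sketch of this failure mode (my own toy example, with an invented fitness landscape, not anything from the comment): a greedy hill climber that only ever takes locally improving steps settles on a minor peak and never reaches the much higher one.

```python
# Toy hill climber on a landscape with two peaks: a local one near x=1
# (height 1) and a global one near x=5 (height 3). Starting near the small
# hill and only accepting local improvements, it gets stuck on the small hill.

def fitness(x: float) -> float:
    return max(1 - (x - 1) ** 2, 3 - (x - 5) ** 2, 0)

x, step = 0.5, 0.01
while True:
    if fitness(x + step) > fitness(x):
        x += step
    elif fitness(x - step) > fitness(x):
        x -= step
    else:
        break  # no locally improving move remains

print(f"stuck at x={x:.2f}, fitness={fitness(x):.2f} "
      f"(the global peak, fitness 3.0 at x=5, is never reached)")
```

Each individual step here is locally rational, which is exactly why the aggregate outcome can still be far from the global optimum.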

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=Pjje5NJBoQku9y33o

Probably the most persistent and problem-causing is the commonsense way of treating things as having essences. By this I mean that people tend to think of things like people, animals, organizations, places, etc., as having properties or characteristics, as if they had a little file inside them with various bits of metadata set that define their behavior. But this is definitely not how the world works! A property like this is at best a useful fiction or abstraction that allows simplified reasoning about complex systems, but it also leads to lots of mistakes, because most people don’t seem to realize that these are aggregations over complex interactions in the world rather than real things themselves. You might say this is mistaking the map for the territory, but I think framing it this way makes it a little clearer just what is going on. People act as if there were essential properties of things, think that’s how the world actually is, and as a result make mistakes when that model fails to correspond to what actually happens.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=iiog9RXnzLwKMxnnu

Copenhagen interpretation of ethics.

The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.

  • This heuristic probably derives partly from the (potentially useful) idea that "good behavior" has to be a function of what you know; "doing the best you can" always has to be understood in the context of what you have/haven’t learned.

  • It may also arise from a heuristic that says, if you’re involved in a bad situation, you have probably done something wrong yourself. This may be useful for preventing common forms of blame-dodging, much like anti-mafia laws help arrest kingpins who would otherwise not be directly liable for everything done by their organization.

  • However, this ends up rewarding ignorance, and punishing people who are doing as much as they can to help (see article for examples; also see Asymmetric Justice).

Other common problems with blame:

  • People often reason as if blame is a conserved quantity; if I’m to blame, then the blame on you must somehow be lessened. This is highly questionable.

  • A problem can easily have multiple important causes. For example, if two snipers attempt to assassinate someone at the same time, should we put more blame on the one whose bullet struck first? Should one be tried for murder, and the other be tried merely for attempted murder?

  • Blaming things outside of your control. Quoting HPMOR, chapter 90:

  • "That’s not how responsibility works, Professor." Harry’s voice was patient, like he was explaining things to a child who was certain not to understand. He wasn’t looking at her anymore, just staring off at the wall to her right side. "When you do a fault analysis, there’s no point in assigning fault to a part of the system you can’t change afterward, it’s like stepping off a cliff and blaming gravity. Gravity isn’t going to change next time. There’s no point in trying to allocate responsibility to people who aren’t going to alter their actions. Once you look at it from that perspective, you realize that allocating blame never helps anything unless you blame yourself, because you’re the only one whose actions you can change by putting blame there. That’s why Dumbledore has his room full of broken wands. He understands that part, at least."

  • See Heroic Responsibility.

The concept of blame is not totally useless. It can play several important roles:

  • Providing proper incentives. In some situations, it can be important to assign blame and punishment in order to shape behavior, especially in contexts where people who do not share common goals are trying to cooperate.

  • Fault analysis. When people do share common goals, it can still be important to pause and think what could have been done differently to get a better result, which is a form of assigning blame (although not with accompanying punishment).

  • Emotional resolution. Sometimes an admittance of guilt is what’s needed to repair a relationship, or otherwise improve some social situation.

  • Norm enforcement. Sometimes an apology (especially a public apology) serves the purpose of reinforcing the norm that was broken. For example, if you fail to include someone in an important group decision, apologizing shows that you think they should be included in future decisions. Otherwise, making decisions without that person could become normal.

However, I find that blame discussions often serve none of these purposes. In such a case, you should probably question whether the discussion is useful, and try to guide it to more useful territory.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=NFSWGCGzaXsPMXPJ7

"weird" vs "normal". This concept seems to bundle together "good" and "usual", or at least "bad" with "unusual".

  • If most people were mostly strategic most of the time, then common actions would indeed be strategic ones, and uncommon actions probably unstrategic. However, in reality, we all have severely bounded rationality (compared to a Bayesian superintelligence). "To be human is to make ten thousand errors. No one in this world achieves perfection." This limits the usefulness of the absurdity heuristic for judging actions.

  • Even if you aren’t making this conflation, "weird" vs "normal" can encourage a map/territory error, where "weird" things are thought of as inherently low-probability and "normal" things as inherently high-probability. Bayesians think of probabilities as a property of observing agents rather than as inherent to things-in-themselves (see the sketch after this list). To avoid this mistake, people sometimes say things like "since the beginning, not one unusual thing has ever happened" (which can be interpreted as saying that, if we insist on attaching weirdness/normalness as an inherent property of events, we should consider everything that actually happens as normal).

  • I think the label "weird" also seems to serve as a curiosity-stopper in some cases, because the inherent weirdness "explains" the unusual observations. E.g., "they’re just weird".
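As a minimal illustration of probability living in the agent rather than in the event (my own sketch; all numbers invented): two observers see the same evidence, apply the same likelihoods, and still assign different probabilities because they started from different priors.

```python
# Same evidence, same likelihoods, different priors -> different posteriors.
# The "weirdness" of the hypothesis is a fact about the observer, not the event.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

evidence = (0.8, 0.1)  # P(E|H) = 0.8, P(E|~H) = 0.1, shared by both observers

alice = posterior(0.50, *evidence)  # Alice finds H unremarkable a priori
bob = posterior(0.01, *evidence)    # Bob finds H "weird" a priori

print(f"Alice: {alice:.2f}, Bob: {bob:.2f}")  # -> Alice: 0.89, Bob: 0.07
```

Neither observer is confused about the event itself; calling the hypothesis inherently "weird" just hides whose prior is doing the work.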

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=qMwHSMg7KvWWZd9N6

**Self-Fulfilling Prophecy.** The idea is that if you think about something, then it is more likely to happen, because of some magical and mysterious "emergent" feedback loopiness and complex chaotic dynamics and other buzzwords. This idea has some merit (e.g. if your thoughts motivate you to take effective actions). I don’t deny the power of ideas. Ideas can move mountains. Still, I’ve come across many people who overstate and misapply the concept of a self-fulfilling prophecy. I was discussing existential risks with someone, and they confidently said, "The solution to existential risks is not to think about existential risks, because thinking about them will make them more likely to happen." This is the equivalent of saying, "Don’t take any precautions, ever, because by doing so you make the bad thing more likely to happen."

Comment

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=dm6uMx7EHusdGuvKP

I don’t want to do without the concept. I agree that it is abused, but I would simply contest whether those cases are actually self-fulfilling. So maybe what I would point to as the bad concept would be the idea that most beliefs are self-fulfilling. However, in my experience, this is not common enough that I would label it "common sense", although it certainly seems to be something like a human mental predisposition (perhaps due to confirmation bias, or perhaps due to a confusion of cause and effect, since, by design, most beliefs are true).

Comment

You’re right. As romeostevensit pointed out, "commonsense ideas rarely include information about the domain of applicability." My issue with self-fulfilling prophecy is that it gets misapplied, but I don’t think it is an irretrievably bad idea. This insightful verse from the Tao Te Ching is an exemplary application of the self-fulfilling prophecy:

"If you don’t trust the people, you make them untrustworthy."

It explicitly states a feedback loop.
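A minimal sketch of that loop (my own toy model; the update rule and coefficients are invented, not from the Tao Te Ching or the comment): trustworthiness drifts toward the trust shown, trust drifts toward observed trustworthiness, and an initially distrustful ruler drags both quantities down.

```python
# Toy feedback loop: each quantity adjusts toward the other, so an
# initially low level of trust pulls the joint equilibrium downward.
trust = 0.2            # the ruler's initial trust in the people (0..1)
trustworthiness = 0.8  # the people's initial trustworthiness (0..1)

for year in range(100):
    # People treated with suspicion gradually live down to expectations...
    trustworthiness += 0.1 * (trust - trustworthiness)
    # ...and the ruler updates trust toward observed trustworthiness.
    trust += 0.1 * (trustworthiness - trust)

print(f"trust={trust:.2f}, trustworthiness={trustworthiness:.2f}")
# Both settle near 0.48, well below the people's initial 0.8: the early
# distrust made them less trustworthy -- the prophecy fulfilled itself.
```

The key feature distinguishing a genuine self-fulfilling prophecy from the abused version is exactly this explicit causal pathway from the belief to the outcome; without one, the label doesn’t apply.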

Comment

You can add it to Self Fulfilling/Refuting Prophecies as an example.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=ZBcejjfaC2g6WaJBu

Metric words (e.g. "good", "better", "worse") with an implicit privileged metric. A common implicit metric is social praise/blame, but people can also have different metrics in mind and argue past each other because "good" is pointing at different metrics. Usually, just making the metric explicit, or asking "better in what way?", clears it up. The same goes for goal words ("should", "ought", "must", "need", etc.) with an implicit privileged goal. Again, you can ask: "You say you ‘have to do it’, but for what purpose?" Btw, I’m not against vague goals/metrics that are hard to make legible, just the implicit, privileged ones.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=N5zuJyBe9EvM3BG5Y

I dislike when people talk about someone "deserving" something when what they mean is that they would like that to happen. The word seems to imply that the person may make a demand on reality (or reality’s subcategory of other people!). I suggest we talk about what people earn and what we wish for them, instead of using this word that imbues them with a sense of "having a right to" things they did not earn.

That is, of course, not to say we should stop wishing others or ourselves well. Just that we should be honest that that is what we are doing, and use "deserving" only in the rare cases when we want to imbue our wish or opinion with a cosmic sense of purpose, or imply the now-common idea in some other way. When no longer commonly used in cases where an expression of goodwill (or "badwill", for that matter) will do, it may stand out in such cases and have the proper impact.

Of course we are not going to make that change, and we wouldn’t even if this reached enough people, because people LOVE to mythically "deserve" things, and it makes them a lot easier to sell to, or infuriate, too. We may, however, just privately notice when someone tries to sell us something we "deserve", address the thanks to the person wishing us well instead of some nebulous "Universe" when someone tells us we "deserve" something good, and consider our actual moral shortcomings when the idea creeps up that we might "deserve" something bad.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=dXLpCv26fJhPuctPz

Related to facts vs opinions, but not quite the same, is the objective/subjective dichotomy, popular in conventional philosophy. I find it extremely misleading, contributing a lot to people asking the wrong questions and accepting ridiculous non sequiturs.

For instance, it’s commonly assumed that things are either subjective or objective, and moreover that if something is subjective, it’s arbitrary, not real, and not meaningful. To understand why this framework is wrong, one needs a good understanding of the map/territory distinction and correspondence: how completely real things like the wings of an airplane can exist only in the map, and how maps themselves are embedded in the territory.

But this isn’t part of Philosophy 101, and so we get confused arguments about the objectiveness of X, and whole schools of philosophy which, noticing that in a sense everything we interact with is subjective, conclude that objectivity either doesn’t exist or that its existence doesn’t matter to us, with all kinds of implications, some of which do not add up to normality.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=68gtvF8eHACLHyRcH

**Radical actions.** The word "radical" refers to someone trying to find and eliminate the root causes of social problems, rather than just their symptoms. Many people pursue radical goals through peaceful means (spreading ideas, starting a commune, attending a peaceful protest, or boycotting would be examples), yet "radical act" is commonly used as a synonym for "violent act".

**Extremism.** Means having views far outside the mainstream attitude of society. But the word also carries a strong negative connotation, is prohibited by law in some countries, is mentioned alongside "terrorism" as though they were synonyms, and is redefined by Wikipedia as "those policies that violate or erode international human rights norms" (but what if one’s society is opposed to human rights?!). Someone disagreeing with society is not necessarily bad or violent, so this is a bad concept.

**"Outside of politics".** Any choice one makes affects the balance of power somehow, so one cannot truly be outside. In practice the phrase often means that supporting the status quo is allowed, but speaking against it is banned.

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=bau2shk4dnYH54p7F

"I like hummus" is a fact, not an opinion

https://www.lesswrong.com/posts/EvttEKGbeQHoopczt/worst-commonsense-concepts?commentId=CGo9qLLkzagRzMnPH

**Qualitative vs. quantitative differences / of kind vs. of degree.** It’s not like the distinction is meaningless (in some sense liquid water certainly isn’t "just ice but warmer"), but most of the times in my life I recall having encountered it, it was abused or misapplied in one way or another:

**(1)** It seems to be very often (usually?) used to downplay some difference between A and B by saying "this is just a difference of degree, not a difference of kind", without explaining why one believes so or pointing out an example of an alternative state of the world in which a difference between A and B would be qualitative.

**(2)** It is often ignored that differences of degree can become differences of kind after crossing some threshold (probably most, if not all, cases of the latter are like that). At some point ice stops *just* becoming warmer and melts, a rocket stops *just* accelerating and reaches escape velocity, and a neutron star stops *just* increasing in mass and collapses into a black hole.

**(3)** Whenever this distinction is being introduced, it should be clear what is meant by qualitative and quantitative difference in this particular domain of discourse, either with reference to some qualitativeness/quantitativeness criteria or by having sets of examples of both. For example, when comparing intelligence between species, one could make a case that we see a quantitative difference between ravens and New Caledonian crows but a *qualitative* one between birds and hookworms. We may not have a single, robust metric for comparing average intelligence between taxa, but in this case we know it when we see it, and we can reasonably expect others to see the distinction as well. (TL;DR: it shouldn’t be based on gut feeling when gut feeling about what is being discussed is likely to differ between individuals.)
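To make point (2) concrete with the rocket case (standard physics, added here as an illustration rather than taken from the comment): escape velocity from a body of mass $M$ at distance $r$ is

$$v_e = \sqrt{\frac{2GM}{r}} \approx 11.2\ \text{km/s at Earth's surface.}$$

A projectile launched at 11.1 km/s and one launched at 11.3 km/s differ only in degree, but the total energy $E = \tfrac{1}{2}mv^2 - \tfrac{GMm}{r}$ changes sign between them: the first falls back (bound, $E < 0$) while the second never returns (unbound, $E > 0$), a difference of kind.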