Social behavior curves, equilibria, and radicalism

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism

I. Here are some hypotheticals to consider, with a common theme. Note that in each case I'm asking what you would do, rather than what you should do.

Figure 8: an S-curve

  1. More precisely, the rule is "Select how radical you’ll be at random, according to the ideal distribution of radicals versus conformists". Here, "ideal distribution" is what an outside observer would prefer for the distribution to be in general. (I’ve posited a guess at this distribution in Figure 20.) The "outside observer" bit is important: of course you’d prefer for there to be lots of radicals on your pet issue, but it’s not a good rule if you wouldn’t want for it to be universalized to everyone’s pet issue.

Comment

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=bkDhfkQiWNCJiNjxP

I think this works well to describe the behavior of small, well-mixed groups, but as you look at larger societies, it gets more complicated because of the structure of social networks. You don't get to see how many people overall are wearing face-masks in the whole country, only among the people you interact with in your life. So it's totally possible that different equilibria will be reached in different locations/socio-economic classes/communities. That's probably one reason why revolutions are more likely to fizzle out than it looks like they should. Another problem arising from the structure of social networks is that the sample of people you interact with is not representative of your real surroundings: people with tons of friends are over-represented among your friends (I had a blog post about this statistical phenomenon a while ago). I'm not sure how one could expand the social behavior curve model to account for that, but it would be interesting.
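The statistical phenomenon this comment mentions is usually called the friendship paradox, and it is easy to verify numerically. A minimal sketch (function name and parameters are mine, and the Erdős–Rényi random graph is just an illustrative choice, not anything from the linked blog post):

```python
import random

def friendship_paradox(n=1000, p=0.01, seed=0):
    """Build an Erdos-Renyi random graph and compare the average degree
    of a node with the average degree of a randomly chosen *friend*."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].add(j)
                neighbors[j].add(i)
    degrees = [len(neighbors[i]) for i in range(n)]
    avg_degree = sum(degrees) / n
    # A randomly chosen friend is sampled with probability proportional
    # to their own degree, so high-degree people are over-represented.
    friend_degrees = [len(neighbors[j]) for i in range(n) for j in neighbors[i]]
    avg_friend_degree = sum(friend_degrees) / len(friend_degrees)
    return avg_degree, avg_friend_degree

mean_deg, mean_friend_deg = friendship_paradox()
print(mean_deg, mean_friend_deg)  # friends have more friends on average
```

Mathematically the mean friend degree is E[d²]/E[d], which exceeds E[d] whenever degrees vary at all, so the effect shows up even in this maximally "well-mixed" random graph.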

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=pq9b3HgETRKjyQhd5

What a beautiful model! Indeed it seems like a rediscovery of Granovetter’s threshold model, but still, great work finding it.

I'm not sure "radical" is the best word for people at the edges of the curve, since Figure 19 shows that the more of them you have, the more resistant society is to change. Maybe "independent" instead?
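For readers who haven't seen it, Granovetter's threshold model (which the comment above identifies as a predecessor) can be sketched in a few lines. This is a generic illustration, not code from the post; thresholds here are absolute counts of adopters rather than percentages:

```python
def cascade(thresholds):
    """Iterate Granovetter's threshold model to a fixed point.
    Person i adopts once the number of current adopters reaches
    thresholds[i]; returns the final number of adopters."""
    adopters = 0
    while True:
        new = sum(1 for t in thresholds if t <= adopters)
        if new == adopters:
            return adopters
        adopters = new

# Granovetter's classic example: thresholds 0, 1, 2, ..., 99.
# The threshold-0 person starts, which triggers the threshold-1
# person, and so on down the line.
print(cascade(list(range(100))))           # full cascade: 100
# Remove the single threshold-1 person and the domino chain breaks.
print(cascade([0] + list(range(2, 100))))  # cascade stalls at 1
```

The two runs illustrate the post's point about unstable equilibria: two populations with nearly identical threshold (social behavior) curves can end up at wildly different equilibria.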

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=ur4s8jfbbW7c35b3v

I noticed (while reading your great modeling exercise about an important topic) a sort of gestalt presumption of "one big compartment (which is society itself)" in the write-up, and this wasn't challenged by the end. Maybe this is totally valid? The Internet is a series of tubes, but most of the tubes connect to each other eventually, so it is kinda like all one big place, maybe? Perhaps we shall all be assimilated one day.

But most of my thoughts about modeling how to cope with differences in preference and behavior focus a lot on the importance of spatial or topological or social separations to minimize conflicts and handle variations in context. My general attitude is roughly: things in general are not "well mixed" and (considering how broken various things can be in some compartments) thank goodness for that!

In one line of research, every cell of a grid basically represents a spatially embedded agent, and agents do iterated game playing with their neighbors and then react somehow. In many similar bits of research (which vary in subtle ways, and reveal partly what the simulation maker wanted to see) a thing that often falls out is places where most agents are being (cooperatively?) gullible, or (defensively?) cheating, or doing tit-for-tat… basically you get regions of tragic conflict, and regions of simple goodness, and tit-for-tat is often at the boundaries (sometimes converting cheaters through incentives… sometimes with agents becoming complacent because T4T neighbors cooperate so much that it is easy to relax into gullibility… and so on).

A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever. Arguably, "reminding people about context" is just a useful bravery debate position local to my context?
;-) With a very simple little prisoner's dilemma setup, utility is utility, and it is clear what "the actual right thing" is: lots of bilateral cooperate/cooperate interactions are Simply Good. However, in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on. It is pretty common, in my experience, for people to have coping strategies for local problems that they project out on others who are far from them, which they imagine to be morally universal rules. However, when particular local coping strategies are transported to new contexts, they often fail to translate into actual practical local benefits, because the world is big and details matter.

Putting on a sort of "engineering hat", my general preference then is to focus on small specific situations, and just reason about "what ought to be done here and now" directly, based on local details and the direct perception of objective goodness. The REASON I would care about "copying others" is generally either (1) they figured out objectively good behavior that I can cheaply add to my repertoire, or (2) they are dangerous monsters who will try to hurt me if they see me acting differently. (There are of course many other possibilities, and subtleties, and figuring out why people are copying each other can be tricky sometimes.)

Your models here seem to be mostly about social contagion and information cascades, and these mechanisms read to me as central causes of "why 'we' often can't have nice things in practice"… because cascading contagion is usually anti-epistemic and often outright anti-social.

> You're having dinner with a party of 10 at a Chinese restaurant. Everyone else is using chopsticks. You know how to use chopsticks but prefer a fork. Do you ask for a fork? What if two other people are using a fork?

I struggled with this one because I tend to use chopsticks at Chinese restaurants for fun, and sometimes I'm the only one using them, and several times I've had the opportunity to teach someone how to use them. The alternative preference in this story would be COUNTERFACTUAL to my normal life in numerous ways. Trying not to fight the hypothetical too much, I could perhaps "prefer a fork" (as per the example) in two different ways:

(1) Maybe I "prefer a fork" as a brute fact of what makes me happy for no reason. In this case, you're asking me about "a story person's meta-social preferences whose object-level preferences are like mine but modified for the story situation" and I'm a bit confused about how to imagine that person answering the rest of the question. After making an imaginary person be like me but "prefer a fork as a brute emotional fact"… maybe the new mind would also be different in other ways as well? I couldn't even figure out an answer to the question, basically. If this were my only way to play along, I would simply have directly "fought the hypothetical" forthrightly.

(2) However, another way to "prefer a fork" would be if the food wasn't made properly for eating with chopsticks. Maybe there's only rice, and the rice is all non-sticky separated grains, and with chopsticks I can only eat one grain at a time. This is a way that I could hypothetically "still have my actual dietary theories intact" and naturally "prefer a fork"… and in this external situation I would probably ask for a fork no matter how unfun or "not in the spirit of the experience" it seems?
Plausibly, I would be miffed, and explain things to people close to me who had the same kind of rice, and I would predict that they would realize I was right, nod at my good sense, and probably ask the waiter to give them a fork as well. But in that second attempt to generate an answer, it might LOOK like the people I predicted might copy me were changing because "I was +1 to fork users and this mapped through a well-defined social behavior curve feeling in them", but in my mental model the beginning of the cascade was actually caused by "I verbalized a real fact and explained an actually good method of coping with the objective problem", and the idea was objectively convincing.

I'm not saying that peer pressure should always be resisted. It would probably be inefficient for everyone to think from first principles all the time about everything. Also there are various "package deal" reasons to play along with group insanity, especially when you are relatively weak or ignorant or trying to make a customer happy or whatever. But… maybe don't fall asleep while doing so, if you can help it? Elsewise you might get an objectively bad result before you wake up from sleepwalking :-(
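The grid-of-agents research described in this comment can be sketched minimally. This is only a guess at the kind of setup meant (a Nowak–May-style spatial prisoner's dilemma with imitate-the-best-neighbor dynamics); the grid size, payoff value, and update rule are all illustrative assumptions:

```python
import random

SIZE = 20
B = 1.7          # payoff to a defector exploiting a cooperator
rng = random.Random(1)
# Each cell holds a strategy: "C" (cooperate) or "D" (defect).
grid = [[rng.choice("CD") for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(r, c):
    """Four nearest neighbors on a torus (wrap-around grid)."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (r + dr) % SIZE, (c + dc) % SIZE

def payoff(r, c):
    # C vs C scores 1, D vs C scores B, everything else scores 0.
    me, total = grid[r][c], 0.0
    for nr, nc in neighbors(r, c):
        other = grid[nr][nc]
        if me == "C" and other == "C":
            total += 1
        elif me == "D" and other == "C":
            total += B
    return total

def step():
    """Everyone copies the strategy of their best-scoring neighbor
    (keeping their own strategy on ties)."""
    global grid
    scores = [[payoff(r, c) for c in range(SIZE)] for r in range(SIZE)]
    new = [[grid[r][c] for c in range(SIZE)] for r in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            best, best_score = grid[r][c], scores[r][c]
            for nr, nc in neighbors(r, c):
                if scores[nr][nc] > best_score:
                    best, best_score = grid[nr][nc], scores[nr][nc]
            new[r][c] = best
    grid = new

for _ in range(30):
    step()
frac_coop = sum(row.count("C") for row in grid) / SIZE**2
print(f"cooperator fraction after 30 steps: {frac_coop:.2f}")
```

Printing the grid rows after the loop shows the spatial regions the comment describes: contiguous patches of cooperation and defection rather than a well-mixed average.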

Comment

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=ovwSnr84fafq9wFG7

> A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever. [...] However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.

Martin Sustrik's "Anti-Social Punishment" post is a great real-life example of this.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=mTAv2cYk3kWg52zjB

Something this response triggered in me (maybe similar to part of what you were saying in the later part): sometimes preferences aren't affected much by the social context, within a given space of social contexts. People may just want to use chopsticks because they are fun, rather than caring about what other people think of them. Also, societal preferences for a given thing might actually decrease as more and more people become interested in it. For example, demand for a thing might cause its price to rise. With orchestras: if lots of people are already playing violin, that increases the relative incentive for others to learn viola.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=zW9YupRXpjJ2J5nax

Enjoyable. I'm surprised Asch's conformity experiments are not mentioned, e.g. https://www.lesswrong.com/posts/WHK94zXkQm7qm7wXk/asch-s-conformity-experiment.

Comment

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=pC6JKn8DHyYBdbL5E

Thanks for mentioning Asch’s conformity experiment—it’s a great example of this sort of thing! I might come back and revise it a bit to mention the experiment. (Though here, interestingly, a participant’s action isn’t exactly based on the percentage of people giving the wrong answer. It sounds like having one person give the right answer was enough to make people give the right answer, almost regardless of how many people gave the wrong answer. Nevertheless, it illustrates the point that other people’s behavior totally does influence most people’s behavior to quite a large degree, even in pretty unexpected settings.)

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=NgKmNs67t8GZxr3sp

The descriptive part is great, but the prescriptive part is a little iffy. The optimal strategy is not choosing to be "radical" or "conformist". The optimal strategy is: do a Bayesian update on the fact that many other people are doing X, and then take the highest expected utility action. Even better, try to figure out why they are doing X (for example, by asking them) and update on that. It’s true that Bayesian inference is hard and heuristics such as "be at such-and-such point on the radical-conformist axis" might be helpful, but there’s no reason why this heuristic is always the best you can do.
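The suggested Bayesian alternative can be made concrete with a toy model: treat each observed person doing X as evidence about whether X is worth doing. All the numbers below (the prior and the per-person probabilities of doing X in each world) are illustrative assumptions, not anything from the post:

```python
from math import comb

def posterior_good(k, n, prior=0.5, q_good=0.8, q_bad=0.3):
    """Posterior probability that X is worth doing after seeing k of n
    people do it, assuming each person independently does X with
    probability q_good if X is good and q_bad if it is not."""
    like_good = comb(n, k) * q_good**k * (1 - q_good)**(n - k)
    like_bad = comb(n, k) * q_bad**k * (1 - q_bad)**(n - k)
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

# Seeing 8 of 10 co-workers mask up is strong evidence in favor;
# seeing only 3 of 10 is strong evidence against.
print(posterior_good(8, 10))
print(posterior_good(3, 10))
```

The heuristic-versus-inference point then becomes quantitative: a fixed position on the radical-conformist axis corresponds to ignoring how informative the observation actually is, which this calculation makes explicit.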

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=y4neZgcMLuQDFL4L3

Curated. This model makes explicit something I’ve had intuitions about for a while (though I wasn’t able to crystallise them nearly as perspicaciously or usefully as UnexpectedValues). Beyond the examples given in the post, I’m reminded of Zvi’s discussion of control systems in his covid series, and also am curious about how this model might apply to valuing cryptocurrencies, which I think display some of the same dynamics. The post is also very well-written. It has the wonderful flavour of a friend explaining something to you by a whiteboard, building up a compelling story almost from first principles with clear diagrams. I find this really triggers my curiosity—I want to go out and survey housemates to pin down the social behavior curves around me; go up to the whiteboard and sketch some new graphs and figure out what they imply, and so forth.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=92aTWZoEMLXCq9Fik

To jump in on people naming related things, some specific consequences of this type of thing are discussed in Timur Kuran's "Private Truths, Public Lies": https://www.goodreads.com/book/show/1016932.Private_Truths_Public_Lies.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=WGbtfe8tgw4hBeGmP

> A radical is someone who, for many different values of X, is on the far-left or far-right of the social behavior curve for X.

> "Select how radical you'll be at random".

I don’t see why being stubborn about one value of X should have to be correlated with being stubborn about any other value of X, so I’m confused about why there would have to be capital-R "Radicals" who are stubborn about everything, as opposed to having a relatively even division where everybody is radical about some issues and not about others. Being radical can be pretty exhausting, and it seems like a good idea to distribute that workload. I mean, I’m sure that people do tend to have natural styles, but you’re also talking about which style a person should consciously adopt.

Why not either randomly choose how radical you’re going to be on each specific issue independent of all others, or even try to be more radical about issues where you are most sure your view of the ideal condition is correct?

How does all of this hold up when there’s a lot of hysteresis in how people behave? I can think of lots of cases where I’d expect that to happen. Maybe some people just never change the random initial state of their video...

Comment

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=7bnQJ7FheNfqPbseT

Yeah—to clarify, in the last section I meant "select how radical you’ll be for that issue at random." In the previous section I used "radical" to refer to a kind of person (observing that some people do have a more radical disposition than others), but yeah, I agree that there’s nothing wrong with choosing your level of radicalism independently for different issues! And yeah, there are many ways this model is incomplete. Status quo bias is one. Another is that some decisions have more than two outcomes. A third is that really this should be modeled as a network, where people are influenced by their neighbors (and I’m assuming that the network is a giant complete graph). A simple answer to your question might be "draw a separate curve for ‘keep camera on if default state is on’ and ‘turn camera on if default state is off’", but there’s more to say here for sure.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=82gtTiHJCeqmDWBGP

Interesting! I have something to say about the questions at the beginning of the text. I tried very, very hard to answer them, but I found out I just can't do it without lying to myself! I literally can't imagine the situation vividly enough to feel it and make a decision. It's impossible to imagine myself deciding, rationally, when I would do something because someone else does it, or what percentage is needed (like in the mask question) for me to change my behaviour. But, like everybody else, I change my behavior based on what others do.

So, my question is: do you think that some people do think more rationally about how to act in those situations, or do they act impulsively like everyone else and rationalize their behavior later? Does it have something to do with an inclination towards more math-heavy sciences? Or maybe it depends on the biological characteristics of the individual, like personality? And, in a broader sense, how many of our decisions do you guys think are rational, and how many are just rationalized?

I hope it makes sense. Cheers.

Comment

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=bMnWMysKhbGuSoKjh

> or what is the % needed (like in the mask question)

Maybe try:

  • Thinking about different combinations of co-workers

  • Trying the extremes: 100% and 0%. And: you walk in wearing a mask, do you take it off? You walk in with a mask in your pocket, do you put it on?

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=5WF3JMRgyMwvQeKJW

I was excited to be reminded by this post of Louis Sachar's classic *Wayside School* series: in one section of the first puzzle book installment, *Sideways Arithmetic from Wayside School*, you read about students at various places on the social behavior curve for participating in a game of basketball, and are asked to determine who will play under the changing circumstances of that day's recess.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=Xyiuxpd2mmdaggPJz

I wonder where the Spiral of Silence fits in here. I guess opposite the Respectability cascade?

> society can respond to new information relatively quickly, but does so smoothly. This seems like a good thing.

This makes me think of the Concave Disposition.

> I guess it shouldn't come as a surprise that these concepts are already well-known.

Well, I think independent discovery is underrated anyway.

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=bYHzJdjPwahTXJ4Dx

This is a nice essay. I think it could benefit from including a bit more literature, though. I remember seeing a keynote lecture at the WCERE in Istanbul some years ago that included social learning models with quite similar results, probably by E. Somanathan. You may also want to check https://www.nber.org/papers/w23110

https://www.lesswrong.com/posts/Hoh6umyMWSqzPGMJZ/social-behavior-curves-equilibria-and-radicalism?commentId=BizwcLmqLCkjHHmkp

I second JenniferRM's comment: most people don't live in a singular global soup where they can see 7 billion voices and are equally influenced by all 7 billion. Instead you end up with local pockets through which ideas spread. Which is something I've been thinking about how to engineer: how do you create communities that can spread ideas quickly and then influence other communities?

P.S. I wonder if the equations for behaviour spread look anything like those for the spread of disease, and whether any of that research could be reused. Both are geographically localised (we are more influenced by those in proximity), and both have super-spreader events and key spots where large numbers of people are influenced. I guess the differences are that (a) sometimes we're not running away from being infected, and (b) the invention of telecoms allows ideas to spread faster and more globally than disease.
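The disease analogy has a known disanalogy worth sketching: disease-style ("simple") contagion can spread from a single case, because one exposure suffices, while threshold-style ("complex") contagion like the post's behavior curves needs reinforcement from multiple adopters. A toy comparison on the same ring network (all parameters are illustrative, not calibrated to anything):

```python
import random

def ring_neighbors(i, n, k=2):
    """Neighbors within distance k on a ring of n nodes."""
    return [(i + d) % n for d in range(-k, k + 1) if d != 0]

def simple_contagion(n=200, p=0.3, steps=50, seed=0):
    """Disease-style spread: each infected neighbor independently
    infects you with probability p per step."""
    rng = random.Random(seed)
    infected = {0}
    for _ in range(steps):
        new = set(infected)
        for i in range(n):
            if i in infected:
                continue
            for j in ring_neighbors(i, n):
                if j in infected and rng.random() < p:
                    new.add(i)
                    break
        infected = new
    return len(infected)

def complex_contagion(n=200, need=2, steps=50, seeds=(0,)):
    """Threshold-style spread: you adopt only once at least `need`
    of your neighbors have adopted."""
    adopted = set(seeds)
    for _ in range(steps):
        new = set(adopted)
        for i in range(n):
            if sum(j in adopted for j in ring_neighbors(i, n)) >= need:
                new.add(i)
        adopted = new
    return len(adopted)

print(simple_contagion())                 # one case spreads widely
print(complex_contagion())                # one adopter stalls at 1
print(complex_contagion(seeds=(0, 1)))    # two adjacent adopters cascade
```

With a single seed, the threshold process is stuck (no one ever has two adopting neighbors) while the disease process takes off, which is one concrete way behavior spread fails to reduce to epidemiology.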