Link post Follow-up to: Blackmail
[Note on Compass Rose response: This is *not* a response to the recent Compass Rose response; it was written before that, but with my post on Hacker News I need to get this out now. It *has* been edited in light of what was said. His first section is a new counter-argument against a particular point I made – it is interesting, and I have a response, but it is beyond scope here. It does not fall into either main category, because it addresses a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together), so it *is* addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so as to be maximally harmful tends to be bad.’ My model and Ben’s of practical results also greatly differ. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]
There are two main categories of objection to my explicit thesis that blackmail should remain illegal.
Today we will not address what I consider the more challenging category: claims that while blackmail is bad, making it illegal does not improve matters. Mainly because we can’t or won’t enforce laws, so it is unclear what the point is. Or because the costs of enforcement exceed the benefits.
The category I address here claims blackmail is good. We want more.
Key arguments in this category:
1. Information is good.*
2. Blackmail reveals bad behavior.
3. Blackmail provides incentive to uncover bad behavior.
4. Blackmail provides a disincentive to bad behavior.
5. Only bad, rich or elite people are vulnerable to blackmail.
6. We should strongly enforce all norms on everyone, without context dependence not explicitly written into the norm, and fix or discard any norms we don’t want to enforce in this way.
A key assumption is that blackmail mostly targets *existing true bad behavior*. I do not think this is true. Not for existing, *or* true, *or* bad. For details, see the previous post.
Such arguments also centrally argue against *privacy*. Blackmail advocates often claim privacy is unnecessary or even toxic.
It’s one thing to give up on privacy in practice, for yourself, in the age of Facebook. I get that. It’s another to argue that *privacy is bad*. That *it is bad to not reveal all the information you know*. Including about yourself.
This radical universal transparency position, perhaps even assumption, comes up quite a lot recently. Those advocating it act as if those opposed carry the burden of proof.
No. Privacy is good.
A reasonable life, a good life, *requires* privacy.
I
We need a realm *shielded from signaling and judgment*. A place where what we do *does not change what everyone thinks about us, or get us rewarded and punished*. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, from how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To *be real*.
We need people there with us *who won’t judge us*. Who won’t *use information against us*.
We need having such trust to not risk our ruin. We need to minimize how much we wonder whether someone’s goal is to get information to use against us, or what price would tempt them to do that.
Friends. We desperately need real friends.
II
*Norms are not laws.*
Life is full of trade-offs and necessary unpleasant actions that violate norms. This is not a fixable bug. Context is important for both enforcement and intelligent or useful action.
Even if we could fully enforce norms in principle, different groups have different such norms and each group’s/person’s norms are self-contradictory. Hard decisions mean violating norms and are common in the best of times.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms. It is unclear how one would avoid a total loss of freedom, or a total loss of reasonable action, productivity and survival, in such a context. Police states and cults and thought police and similar ideas have been tried and have definitely not improved this outlook.
What we do for fun. What we do to make money. What we do to stay sane. What we do for our friends and our families. What maintains order and civilization. What *must* be done.
Necessary actions are often the very things others wouldn’t like, or couldn’t handle… *if* revealed in full, with context simplified to what gut reactions can handle.
Or worse, *with context chosen to provoke the maximally negative gut reactions*.
There are also known dilemmas where *any action taken* would be a norm violation of a sacred value. And lots of values that *claim* to be sacred, because every value wants to be sacred, but which we know *we must treat as not sacred* when making real decisions with real consequences.
Or in many contexts, justifying our actions would require revealing massive amounts of private information that would then cause further harm (and which people very much do not have the time to properly absorb and consider). Meanwhile, you’re talking about the bad-sounding thing, which digs your hole deeper.
We all must do these necessary things. These often violate both norms and formal laws. Explaining them often requires sharing other things we dare not share.
I wish everyone a past and future Happy Petrov Day
*Part of the job of making sausage* is to allow others not to see it. We still get reliably disgusted when we see it.
We *constantly* must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s *never* true. Ever.
In these, and in many other ways, we live in an unusually hypocritical time. A time when people need to be far more afraid both to not be hypocritical, and of their hypocrisy being revealed.
We are a nation of men, not of laws.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
III
Life requires privacy so that we need not reveal the exact extent of our resources.
If others know exactly what resources we have, they can and will take all of them. The tax man who knows what you *can* pay, what you *would* pay, already knows what you *will* pay. For government taxes, and for other types of taxes.
This is not only about payments in money. It is also about time, and emotion, and creativity, and everything else.
Many things in life claim to be sacred. Each claims all known available resources. Each claims we are blameworthy for any resources we hold back. If we hold nothing back, we have nothing.
That which is fully observed cannot be one’s slack. Once all constraints are known, they bind.
Slack requires privacy. Life requires slack.
This includes our *decision-making process*.
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit *exactly* the amount we won’t retaliate against. They feel safe.
We seethe and despair. We have no choices. No agency. No slack.
It is a key protection that one *might* fight back, perhaps massively out of proportion, if others went after us. To any extent.
It is a key protection that one *might* do something good, if others helped you. Rather than others knowing *exactly* what things will cause you to do good things, and which will not.
It is central that one *react when others are gaming the system*.
Sometimes that system is you.
World peace, and doing anything at all that interacts with others, depends upon both *strategic confidence *in some places, and *strategic ambiguity *in others. We need to choose carefully where to use which.
Having all your actions fully predictable and all your information known isn’t Playing in Hard Mode. That’s Impossible Mode.
I now give specific responses to the six claims above. This mostly summarizes from the previous post.
1. Information, by default, is probably good. But this is a tendency, not a law of physics. As discussed last time, information *engineered to be locally harmful* probably is net harmful. Keep this distinct from incentive effects on bad behavior, which is argument number 4.
2. Most ‘bad’ behavior will be a justification for scapegoating, involving levels of bad behavior that are common. Since such bad behavior is rarely made common knowledge, and allowing it to become common knowledge is often considered far worse behavior than the original action, making it common knowledge forces oversize reaction and punishment. What people are punishing is *that you are the type of person who lets this type of information become common knowledge about you*. Thus you are not a good ally. In a world like ours, where all are anticipating future reactions by others anticipating future reactions, this can be devastating.
3. Blackmail does provide incentive to investigate to find bad behavior. But if found, it also provides incentive to make sure it is never discovered. And what is extracted from the target is often further bad behavior, largely because…
4. Blackmail also provides an incentive to *engineer or provoke* bad behavior, and to maximize the damage that would result from revelation of that behavior. The incentives promoting more bad behavior likely *are stronger* than the ones discouraging it. I argue in the last piece that it is common *even now* for people to engineer blackmail material against others, *and often also against themselves*, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways. That much bonding is about creating mutual blackmail material. And so on. This seems quite bad.
5. If any money one has can be extracted, then one will permanently be broke. This is a lot of my model of poverty traps – there are enough claiming-to-be-sacred things demanding resources that any resources get extracted, so no one tries to acquire resources or hold them for long. Consider what happens if people in such situations are allowed to borrow money. Even if you are (for any reason) sufficiently broke that you cannot pay money, you have much that you could be forced to say or do. Often this involves deep compromises of sacred values, of ethics and morals and truth and loyalty and friendship. It often involves being an ally of those you despise, and reinforcing that which is making your life a living hell, to get the pain to let up a little. Privacy, and the freedom from blackmail, are the only ways out.
6. A full exploration is beyond scope, but section two above is a sketch.
* – I want to be very clear that *yes, information in general is good*. But that is a far cry from the radical claim that all and any information is good and that sharing more of it is everywhere and always good.
To support this, there are results from economics / game theory showing that signaling equilibria can be worse than non-signaling equilibria (in the sense of Pareto inefficiency). Quoting one example from http://faculty.econ.ucdavis.edu/faculty/bonanno/teaching/200C/Signaling.pdf
So in theory it seems quite possible that privacy is a sort of coordination mechanism for avoiding bad signaling equilibria. Whether or not it actually is, I’m not sure. That seems to require empirical investigation and I’m not aware of such research.
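The pooling-vs-separating comparison can be made concrete with a toy version of the Spence job-market signaling game. All numbers below (productivities, type shares, education costs) are illustrative assumptions, not taken from the linked PDF:

```python
# Toy Spence signaling game: two worker types with different productivity.
# High types can acquire education at half the cost low types would pay.
# Education is pure signal: it does not change productivity.

def pooling_payoffs(prod_h, prod_l, share_h):
    """No signaling available: everyone is paid average productivity."""
    avg_wage = share_h * prod_h + (1 - share_h) * prod_l
    return avg_wage, avg_wage  # (high-type payoff, low-type payoff)

def separating_payoffs(prod_h, prod_l, education):
    """Signaling equilibrium: educated workers are paid prod_h, others prod_l.
    Education costs `education` for low types and education/2 for high types."""
    high = prod_h - education / 2  # high types get educated, netting wage minus cost
    low = prod_l                   # low types don't mimic: prod_h - education < prod_l
    return high, low

prod_h, prod_l, share_h = 2.0, 1.0, 0.5
education = 1.5  # costly enough to deter low-type mimicry: 2.0 - 1.5 < 1.0

pool_h, pool_l = pooling_payoffs(prod_h, prod_l, share_h)     # (1.5, 1.5)
sep_h, sep_l = separating_payoffs(prod_h, prod_l, education)  # (1.25, 1.0)

# The signaling equilibrium is Pareto-worse: both types earn strictly less.
assert sep_h < pool_h and sep_l < pool_l
```

The point of the sketch: signaling burns resources (education cost) purely to sort workers, while total productivity is unchanged, so both types can end up below the pooled average wage.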
Comment
I get a 404 for the paper. The part you quoted says "maybe this might happen" but doesn’t give an economic argument that it could happen, it just says "maybe employers don’t pay people enough for it to be worth it". Is there somewhere where the argument is actually made?
Comment
It looks like the code that turns a URL into a link made the colon into part of the link. I removed it so the link should work now. The argument should be in the PDF. Basically you just solve the game assuming the ability to signal and compare that to the game where signaling isn’t possible, and see that the signaling equilibrium makes everyone worse off (in that particular game).
Comment
OK, looking at the argument, I think it makes sense that signalling equilibria can potentially be Pareto-worse than non-signalling equilibria, as they can have more of a "market for lemons" problem. Worth noting that not all equilibria in the game-with-signalling are worse than non-signalling equilibria (I think "no one gets education, everyone gets paid average productivity" is still a Nash equilibrium), it’s just that signalling enables additional equilibria, some of which are bad.
Comment
Not sure what the connection to "market for lemons" is. Can you explain more (if it seems important)?
I agree that is still a Nash equilibrium and I think even a Perfect Bayesian Equilibrium, but there may be a stronger formal equilibrium concept that rules it out? (It’s been a while since I studied all those equilibrium refinements so I can’t tell you which off the top of my head.)
I think under Perfect Bayesian Equilibrium, off-the-play-path nodes formally happen with probability 0 and the players are allowed to update in an arbitrary way on those nodes, including not update at all. But intuitively if someone does deviate from the proposed equilibrium strategy and get some education, it seems implausible that employers don’t update towards them being type H and therefore offer them a higher salary.
Comment
People who haven’t gotten an education are, on average, unproductive, since productive people have a better alternative to not getting an education (namely, getting an education). Similarly, in a market for lemons, cars on the market are, on average, low-quality, since people with high-quality cars have a better alternative to putting them on an open market (namely, continuing to use the car, or selling it in a higher-trust market).
It’s possible, I don’t know the formal stronger equilibrium concepts though.
Now that I think about it, there are even simpler cases of more-available information making Nash equilibria worse. In any finite iterated prisoner’s dilemma with known horizon, the only Nash equilibrium is to always defect. But, in a finite iterated prisoner’s dilemma with unknown geometrically-distributed horizon (sufficiently far away in expectation), there are Nash equilibria that generate mutual cooperation (due to folk theorems).
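The contrast between the two horizons can be sketched numerically. The grim-trigger condition below is the standard one; the payoff values are the usual illustrative prisoner's dilemma numbers, not from any particular source:

```python
# Iterated prisoner's dilemma with continuation probability `delta`
# (geometrically distributed horizon). Illustrative payoffs:
# T = temptation (defect vs cooperate), R = mutual cooperation, P = mutual defection.
T, R, P = 5.0, 3.0, 1.0

def grim_trigger_sustains_cooperation(delta):
    """Grim trigger supports mutual cooperation iff cooperating forever
    beats a one-shot defection followed by permanent mutual defection."""
    cooperate_value = R / (1 - delta)            # R every round
    deviate_value = T + delta * P / (1 - delta)  # T once, then P forever
    return cooperate_value >= deviate_value

# Known finite horizon: backward induction unravels cooperation round by round,
# which corresponds to delta = 0 here (no expected future to protect).
assert not grim_trigger_sustains_cooperation(0.0)

# Sufficiently distant expected horizon, delta >= (T - R) / (T - P) = 0.5:
# cooperation becomes a Nash equilibrium, per the folk theorems.
assert grim_trigger_sustains_cooperation(0.9)
```

So, removing the players' information about exactly when the game ends enlarges the set of equilibria to include mutual cooperation.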
In a scapegoating environment, having privacy yourself is obviously pretty important. However, you seem to be making a stronger point, which is that privacy in general is good (e.g. we shouldn’t have things like blackmail and surveillance which generally reduce privacy, not just our own privacy). I’m going to respond assuming you are arguing in favor of the stronger point.
This post rests on several background assumptions about how the world works, which are worth making explicit. I think many of these are empirically true but are, importantly, not necessarily true, and not all of them are true.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
Comment
I don’t think the 100% tax rate argument works, for several reasons:
100% is not the short-run maximum extraction rate (Cf "Laffer Curve," which is explicitly short-term).
USGOVT is not really an agent here; some extractors taking all they can are subject to the top marginal tax rate & reallocate to themselves using subtler mechanisms like monetary policy and financial regulation (and deregulation, cyclically), boondoggles, other regulatory capture...
If you count other extraction points such as credentialism + high college tuition + need-based financial aid (mostly involving loans), and hospital bills, the lifetime extraction rate may be a lot higher.
Comment
Good point, I updated towards the extraction rate being higher than I thought (will edit my comment). Rich people do end up existing but they’re rare and are often under additional constraints.
I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
I agree that privacy would be less necessary in a hypothetical world of angels. But I don’t find it convincing that removing privacy would bring about such a world, and arguments of this type (let’s discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.
Comment
Why do you think I’m arguing against privacy in my comment (the one you replied to)? I don’t think I’ve been taking a strong stance on it.
Comment
I think you have been. In every comment you try to cast doubt on justifications for privacy.
Comment
That isn’t the same as arguing against privacy. If someone says "I think X because Y" and I say "Y is false for this reason" that isn’t (necessarily) arguing against X. People can have wrong reasons for correct beliefs.
It’s epistemically harmful to frame efforts towards increasing local validity as attempts to control the outcome of a discussion process; they’re good independent of whether they push one way or the other in expectation.
In other words, you’re treating arguments as soldiers here.
(Additionally, in the original comment, I was mostly not saying that Zvi’s arguments were unsound (although I did say that for a few), but that they reflected a certain background understanding of how the world works)
Comment
Let’s get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?
Comment
Where? (I actually think I am uncertain about this)
Comment
Maybe I’m misreading and you’re arguing that it will help us and enemies equally? But even that seems impossible. If Big Bad Wolf can run faster than Little Red Hood, mutual visibility ensures that Little Red Hood gets eaten.
Comment
OK, I can defend this claim, which seems different from the "less privacy means we get closer to a world of angels" claim; it’s about asymmetric advantages in conflict situations.
In the example you gave, more generally available information about people’s locations helps Big Bad Wolf more than Little Red Hood. If I’m strategically identifying with Big Bad Wolf then I want more information available, and if I’m strategically identifying with Little Red Hood then I want less information available. I haven’t seen a good argument that my strategic position is more like Little Red Hood’s than Big Bad Wolf’s (yes, the names here are producing moral connotations that I think are off).
So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren’t obvious to the people cooperative with the original goal for a while. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.
Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.
Anyway, I’m not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.
Comment
Yes, less privacy leads to more conformity. But I don’t think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity—ideologies and religions.
Comment
OK, you’re right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.
Comment
Only ones that don’t structurally depend on huge levels of hypocrisy. People can lie. It’s currently cheap and effective in a wide variety of circumstances. This does not make the lies true.
Comment
[edit: actually, I’m just generally confused about what the parent comment is claiming]
Comment
Conformity-based strategies only benefit from reductions in privacy, when they’re based on actual conformity. If they’re based on pretend/outer conformity, then they get exposed with less privacy.
Comment
Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt. (note: following comment didn’t end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.) If you have a workplace (with a low generativity strategy) in which people are supposed to work 8 hours, but they actually only work 2 (and goof off the rest of the time), and then suddenly everyone has access to exactly how much people work, I’d expect one of a few things to happen:
Comment
I think the main thing is I can’t think of many examples where it seems like the active-ingredient in the strategy is the conformity-that-would-be-ruined-by-information. The most common sort of strategy I’m imagining is "we are a community that requires costly signals for group membership" (i.e. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there’s incentive for people to pretend to meet them without actually doing so. If it became common knowledge that nobody or very few people were "really" doing the work, one thing that might happen is that the community’s bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:
come up with new norms that are more complicated, such that it’s harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other’s work, but the work is sort of deliberately inscrutable, so it’s hard to see if it says anything meaningful)
people just develop a norm of not checking in on each other (cooperating for the sake of preserving the fiction), and scrutiny is only actually deployed against political opponents. (The latter one at least creates an interesting mutually assured destruction thing that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable)
Only if you assume everyone loses an equal amount of privacy.
I think you’re pointing in an important direction, but your phrasing sounds off to me. (In particular, ‘scapegoating’ feels like a very different frame than the one I’d use here) If I think out loud, especially about something I’m uncertain about, that other people have opinions on, a few things can happen to me:
Someone who overhears part of my thought process might think (correctly, even!) that my thought process reveals that I am not very smart. Therefore, they will be less likely to hire me. This is punishment, but it’s very much not "scapegoating" style punishment.
Someone who overhears my private thought process might (correctly, or incorrectly! either) come to think that I am smart, and be more likely to hire me. This can be *just as dangerous*. In a world where all information is public, I have to attend to how the process by which I act and think looks. I am incentivized to think in ways that are legibly good. "Judgment" is dangerous to me (epistemically) even if the judgment is positive, because it incentivizes me against exploring paths that look bad, or are good for incomprehensible reasons.
Comment
This seems like a general argument that providing evidence without trying to control the conclusions others draw is bad because it leads to errors. It doesn’t seem to take into account the cost of reduced info flow, or the possibility that the gatekeeper might also introduce errors. That’s before we even consider self-serving bias!
Related: http://benjaminrosshoffman.com/humility-argument-honesty/
TLDR: I literally do not understand how to interpret your comment as NOT a general endorsement of fraud and implicit declaration of intent to engage in it.
Comment
My intent was not that it’s "bad", just, if you do not attempt to control the conclusions of others, they will predictably form conclusions of particular types, and this will have effects. (It so happens that I think most people won’t like those effects, and therefore will attempt to control the conclusions of others.)
Comment
(I feel somewhat confused by the above comment, actually. Can you taboo "bad" and try saying it in different words?)
Comment
Ah, if you literally just mean it increases variance & risk, that’s true in the very short term. In context it sounded to me like a policy argument against doing so, but on reflection it’s easy to read you as meaning the more reasonable thing. Thank you for explaining.
Comment
Hmm. I think I meant something more like your second interpretation than your first, but I think I actually meant a third thing, and am not confident we aren’t still misunderstanding each other. An intended implication (which comes with an if-then suggestion that was not an essential part of my original claim, but I think is relevant) is: if you value being able to think freely and have epistemologically sound thoughts, it is important to be able to think thoughts that you will *neither be rewarded nor punished for*… [edit: or be extremely confident that you have accounted for your biases towards reward gradients]. And the rewards are only somewhat less bad than the punishments. A followup implication is that this is not possible to maintain humanity-wide if thought-privacy is removed (which legalizing blackmail would contribute somewhat towards). And that this isn’t just a fact about our current equilibria; it’s intrinsic to human biology. It seems plausible (although I am quite skeptical) that a small group of humans might be able to construct an epistemically sound world that includes lack-of-intellectual-privacy, but they’d have to have correctly accounted for a wide variety of subtle errors. [edit: all of this assumes you are running on human wetware. If you remove that as a constraint, other things may be possible]
Comment
further update: I do think rewards are something like 10x less problematic than punishments, because humans are risk averse and fear punishment more than they desire reward. ("10x" is a stand-in for "whatever the psychological research says on how big the difference is between human response to rewards and punishments")
Comment
[note: this subthread is far afield from the article – LW is about publication, not private thoughts (unless there’s a section I don’t know about where only specifically invited people can see things), and LW karma is far from the sanctions under discussion in the rest of the post.] Have you considered things to reduce the asymmetric impact of up- and down-votes? Cap karma value at −5? Use downvotes as a divisor for upvotes (say, score is upvotes / (1 + 0.25 * downvotes)) rather than simple subtraction?
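For what it’s worth, the two scoring rules compare like this (the 0.25 weight comes from the comment above; the vote counts are made up for illustration):

```python
# Two candidate karma rules: simple subtraction vs using downvotes as a divisor.

def score_subtract(up, down):
    """Current-style rule: downvotes cancel upvotes one for one."""
    return up - down

def score_divide(up, down):
    """Proposed rule: downvotes dampen the score instead of subtracting."""
    return up / (1 + 0.25 * down)

# Under subtraction, a contested comment (10 up, 8 down) nets the same +2
# as a barely-noticed one (2 up, 0 down).
assert score_subtract(10, 8) == score_subtract(2, 0) == 2
# Under the divisor rule, the contested comment scores noticeably higher,
# so each individual downvote stings less.
assert round(score_divide(10, 8), 2) == 3.33
assert score_divide(2, 0) == 2.0
```

The design difference: the divisor rule keeps downvotes informative while bounding how much a pile-on can erase accumulated upvotes.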
Comment
We’ve thought about things in that space, although any of the ideas would be a fairly major change, and we haven’t come up with anything we feel good enough about to commit to. (We have done some subtle things to avoid making downvotes feel worse than they need to, such as not including the explicit number of downvotes)
Do you think that thoughts are too incentivised or not incentivised enough on the margin, for the purpose of epistemically sound thinking? If they’re too incentivised, have you considered dampening LWs karma system? If they’re not incentivised enough, what makes you believe that legalising blackmail will worsen the epistemic quality of thoughts?
Comment
The LW karma system obviously has its flaws, per Goodhart’s law. It is used anyway, because the alternative is having *other* problems, and for the moment this seems like a reasonable trade-off.
The punishment for "heresies" is actually very mild. As long as one posts respected content in general, posting a "heretical" comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose *only* purpose here is to post "heresies".
Also, LW karma does not prevent anyone from posting "heresies" on a different website. Thus, people can keep positive LW karma even if their main topic is talking about how LW is fundamentally wrong, as long as they can avoid being annoying (for example by posting a hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).
Blackmail typically attacks you in real life, i.e. you can’t limit the scope of impact. If losing an online account on a website X were the worst possible outcome of one’s behavior at website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the *difference between norms* in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite *unlike* LW karma.
I’d say thoughts aren’t incentivized enough on the margin, but:
(Separately: I am right now making arguments in terms that I’m fairly confident both of us value, but I also think there are reasons to want private thoughts that are more like having a "*Raemon*-healthy soul" than like being able to contribute usefully to the intellectual commons. I noticed while writing this that the latter might be most of what a Benquo finds important for having a healthy soul, but I’m unsure. In any case healthy souls are more complicated and I’m avoiding making claims about them for now.)
If privacy in general is reduced, then they get to see others’ thoughts too [EDIT: this sentence isn’t critical, the rest works even if they can only see your thoughts]. If they’re acting justly, then they will take into account that others might modify their thoughts to look smarter, and make basically well-calibrated (if not always accurate) judgments about how smart different people are. (People who are trying can detect posers a lot of the time, even without mind-reading). So, them having more information means they are more likely to make a correct judgment, hiring the smarter person (or, generally, whoever can do the job better). At worst, even if they are very bad at detecting posers, they can see everyone’s thoughts and choose to ignore them, making the judgment they would make without having this information (But, they were probably already vulnerable to posers, it’s just that seeing people’s thoughts doesn’t have to make them more vulnerable).
Regarding that sentence, I edited my comment at about the same time you posted this.
If someone taking a risk is good with respect to the social good, then the justice process should be able to see that they did that and reward them (or at least not punish them) for it, right? This gets easier the more information is available to the justice process.
So, much of my thread was responding to this sentence:
-

some random innocuous status quo thought (neither gets me rewarded nor punished)

-

some weird thought that seems kind of dumb, which most of the time is evidence about being dumb, which occasionally pays off with something creative and neat. (I’m not sure what kind of world we’re stipulating here. In some "just"-worlds, this sort of thought gets punished (because it’s usually dumb). In some "just"-worlds it gets rewarded (because everyone has cooperated on some kind of long term strategy). In some just-worlds it’s hit or miss because there’s a collection of people trying different strategies with their rewards.)

-

some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be real good at being contrarian.

-

a thought that is *clearly, legibly *good, almost certainly net positive, either by following well worn paths, or being "creatively out of the box" in a set of ways that are known to have pretty good returns.

Even in one of the possible-just-worlds, it seems like you’re going to incentivize the last one much more than the 2nd or 3rd. This isn’t that different from the status quo – it’s a hard problem that VC funders have an easier time investing in people doing something that seems obviously good than in someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.

Most importantly: the key implication I believe in is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing, for reasons related to Overconfident talking down, humble or hostile talking up.)
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation. If you can’t implement a process that complicated, you can just stop punishing people for heresy, entirely ignoring their thoughts if necessary.
Average people don’t need to do it, someone needs to do it. The first target isn’t "make the whole world just", it’s "make some local context just". Actually, before that, it’s "produce common knowledge in some local context that the world is unjust but that justice is desirable", which might actually be accomplished in this very thread, I’m not sure.
Thanks for adding this information. I appreciate that you’re making these parts of your worldview clear.
> This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation.

This was most of what I meant to imply. I am mostly talking about rewards, not punishments. I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.
You’re continuing to miss the completely obvious point that a just process does no worse (in expectation) by having more information potentially available to it, which it can decide what to do with. Like, either you are missing really basic decision theory stuff covered in the Sequences or you are trolling.
(Agree that rewards affect thoughts too, and that these can cause distortions when done unjustly)
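For reference, the claim being appealed to here is the standard non-negative value of information result for an ideal (unbounded) expected-utility maximizer. This is a textbook-style sketch, not anything stated in this thread, and whether it transfers to bounded agents is exactly what gets disputed downthread:

```latex
% An agent who observes a signal s before choosing an action a can
% always ignore s and play the no-information optimum a^*.
% Hence, for an ideal expected-utility maximizer:
\[
\mathbb{E}_{s}\!\left[\max_{a}\,\mathbb{E}\!\left[U(a)\mid s\right]\right]
\;\ge\;
\mathbb{E}_{s}\!\left[\mathbb{E}\!\left[U(a^{*})\mid s\right]\right]
\;=\;
\max_{a}\,\mathbb{E}\!\left[U(a)\right].
\]
% The inequality holds pointwise in s (the max over actions is at
% least the value of the fixed action a^*); the equality is the law
% of total expectation. Note the argument assumes taking the max is
% free, which is where bounded agents may diverge.
```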
Yes, I disagree with that point, and I feel like you’ve been missing the completely obvious point that bounded agents have limited capabilities. Choices are costly. Choices are really costly. Your comments don’t seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn’t seem very relevant.

(I recall claims on LessWrong that a decision process can do no worse with more information, but I don’t recall a compelling case that this is true of bounded human agents. Though I am interested if you have a post that responds to Zvi’s claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by "just", since it sounds like you’re using it as a jargon term that’s meant to encapsulate more information than I’m receiving right now.)

I’ve periodically mentioned that my arguments are about "just worlds implemented on humans". "Just worlds implemented on non-humans or augmented humans" might be quite different, and I think that’s worth talking about too. But the topic here is legalizing blackmail in a human world. So it matters how this would be implemented on the median human, who is responsible for most actions.

Notice that in this conversation, where you and I are both smarter than average, *it is not obvious to both of us *what the correct answer is here, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I am imagining either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.
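The "choices are costly" intuition can be made concrete with a toy simulation (my own construction, purely illustrative; the function name, noise level, and budget numbers are all assumptions, not anything from the linked posts): an agent with a fixed evaluation budget picks whichever option has the best noisy estimate, and spreading that budget over more options can lower the true value of what gets picked.

```python
import random

def pick_value(n_options, eval_budget, noise=5.0, trials=2000, seed=0):
    """Average true value of the option a budget-limited agent ends up picking."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # The evaluation budget is split across options, so considering
        # more options makes each individual estimate noisier.
        per_option = max(1, eval_budget // n_options)
        best_est = float("-inf")
        best_true = 0.0
        for _ in range(n_options):
            true_value = rng.gauss(0.0, 1.0)
            estimate = true_value + rng.gauss(0.0, noise) / per_option ** 0.5
            if estimate > best_est:
                best_est, best_true = estimate, true_value
        total += best_true
    return total / trials

few = pick_value(n_options=4, eval_budget=64)    # careful look at few options
many = pick_value(n_options=64, eval_budget=64)  # shallow look at many options
```

With these particular numbers, the shallow look at 64 options tends to end up with a worse option than the careful look at 4, which is one way of cashing out "more options can hurt a bounded agent". Note that jessicata’s counter also applies to the toy model: an agent free to pre-commit to evaluating only a few options is not hurt by the extra ones existing.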
The two posts you linked are not even a little relevant to the question of whether, in general, bounded agents do better or worse by having more information (Yes, choice paralysis might make some information about what choices you have costly, but more info also reduces choice paralysis by increasing certainty about how good the different options are, and overall the posts make no claim about the overall direction of info being good or bad for bounded agents). To avoid feeding the trolls, I’m going to stop responding here.
I’m not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you’re making are at least less obvious than you think they are. If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point. But, I do think it is generally important for people to be able to tap out of conversations whenever the conversation is seeming low value, and seems reasonable for this thread to terminate.
In conversations like this, both sides are confused, that is don’t understand the other’s point, so "who is the confused one" is already an incorrect framing. One of you may be factually correct, but that doesn’t really matter for making a conversation work, understanding each other is more relevant.
(In this particular case, I think both of you are correct and fail to see what the other means, but Jessica’s point is harder to follow and pattern-matches misleading things, hence the balance of votes.)
(I downvoted some of Jessica’s comments, mostly only in the cases where I thought she was not putting in a good faith effort to try to understand what her interlocutor is trying to say, like her comment upstream in the thread. Saying that talking to someone is equivalent to feeding trolls is rarely a good move, and seems particularly bad in situations where you are talking about highly subjective and fuzzy concepts. I upvoted all of her comments that actually made points without dismissing other people’s perspectives, so in my case, I don’t really think that the voting patterns are a result of her ideas being harder to follow, and more the result of me perceiving her to be violating certain conversational norms)
The clarification doesn’t address what I was talking about, or else disagrees with my point, so I don’t see how that can be characterised with a "Nod". The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn’t go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.
Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused… on some other level that I couldn’t predict easily.’)
Having read Zvi’s post and my comment, do you think the norm-enforcement process is just, or even not very unjust? If not, what makes it not scapegoating?
I think scapegoating has a particular definition – blaming someone for something that they didn’t do because your social environment demands someone get blamed. And that this isn’t relevant to most of my concerns here. You can get unjustly punished for things that have nothing to do with scapegoating.
Good point. I think there is a lot of scapegoating (in the sense you mean here) but that’s a further claim than that it’s unjust punishment, and I don’t believe this strongly enough to argue it right now.
I found this pretty useful – Zvi’s definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you’re somewhat exaggerating the level of gloom-and-doom implicit in the post.

> Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

No, this isn’t bare repetition. I agree with Raemon that "judge" here means something closer to one of its standard usages, "to make inferences about". Though it also fits with the colloquial "deem unworthy for baring [understandable] flaws", which is also a thing that would happen with blackmail and could be bad.

> Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

I can imagine a couple things going on here. One, if the world is a place where many more vulnerabilities are known, this incentivizes more people to specialize in exploiting those vulnerabilities. Two, as a flawed human there are probably some stressors against which you can’t credibly play the "won’t negotiate with terrorists" card.

> Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

I think the assumption is these are ~baseline humans we’re talking about, and most human brains can’t hold norms of sufficient sophistication to capture true ethical law, and are also biased in ways that will sometimes strain against reflectively-endorsed ethics (e.g. they’re prone to using constrained circles of moral concern rather than universality).
> Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100%)

This part of the post reminded me of (the SSC review of) Seeing Like a State, which makes a similar point; surveying and ‘rationalizing’ farmland, taking a census, etc. = legibility = taxability. "all of them" does seem like hyperbole here. I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.
The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant "make inferences about" why would it be bad?
But it also helps in knowing who’s exploiting them! Why does it give more advantages to the "bad" side?
Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won’t negotiate with them when they actually will, and less privacy predictably changes this opinion?
Perhaps the optimal set of norms for these people is "there are no rules, do what you want". If you can improve on that, then that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn’t necessary.
Sure, but doesn’t it help me against them too?
> The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant "make inferences about" why would it be bad?

As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax. No more, idk, watching soap operas, because that’s an indicator of being less likely to repay your loans, and your premia go up. There’s an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi’s explored in his previous posts, and that’s part of how I’m interpreting what he’s saying here.
This is really, really clearly false!
This assumes that, upon more facts being revealed, insurance companies will think I am less (not more) likely to repay my loans, by default (e.g. if I don’t change my TV viewing behavior).
More egregiously, this assumes that I have to keep putting in effort into reducing my insurance premiums until I have no slack left, because these premiums really, really, really matter. (I don’t even spend that much on insurance premiums!)
If you meant this more generally, and insurance was just a bad example, why is the situation worse in terms of slack than it was before? (I already have the ability to spend leisure time on gaining more money, signalling, etc.)
Relevant: https://siderea.dreamwidth.org/1486739.html
It’s true the net effect is low to first order, but you’re neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart the proxies used for them until those proxies have less meaning. Given the linked siderea post, maybe this is not very true for insurance in particular; I agree that wasn’t a great example.

Slack-wise, uh: choices are bad, really bad, keep the sabbath. These are some intuitions I suspect are at play here. I’m not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi’s position.
Whence fear of unjust punishment if there is no unjust punishment? Hypothetically there could be (justified) fear of a counterfactual that never happens, but this isn’t a stable arrangement (in practice, some people will not work as hard to avoid the unjust punishment, and so will get punished).
Most people who have fear of heights don’t often fall in a way that hurts them.
Your notion of trust seems like it’s a conflation of two opposite things meant by the word.
The first relates to coordination towards clarity, a norm of using info to improve the commons. The second is about covering for each other in an environment where information is mainly used to extract things from others.
Related: http://benjaminrosshoffman.com/humility-argument-honesty/ http://benjaminrosshoffman.com/against-neglectedness/ http://benjaminrosshoffman.com/model-building-and-scapegoating/
I replied to this comment on my blog (https://thezvi.wordpress.com/2019/03/15/privacy/#comment-3827)
Ben Hoffman’s views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name ‘declaration of war’ and Ben says:
It seems to me like you changed the subject halfway through your comment, from systemic to marginal effects. I’m on the record as having very different opinions about the two.
Your description of the crux seems too extreme to me, but I do think it’s pretty likely—and certainly not obviously false as Zvi seems to think—that in a world without privacy, nasty power structures would pay a heavy price.
> I argue in the last piece that it is common *even now *for people to engineer blackmail material against others *and often also against themselves, *to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways.

I don’t see anything about existing practices for job interviews in the previous piece.
There’s another largely-unaddressed element to the debate: underlying freedoms of transaction and of information-handling. All of the arguments about blackmail are about it as an incentive for something—why are we not debating the things themselves? Arguments against gossip and investigation are not necessarily arguments against blackmail.
Before addressing the incentives, you should seek clarity/agreement on what behaviors you’re trying to encourage and prevent. I still have heard very few examples of things that are acceptable without money involvement (investigating and publishing someone for spite or social one-upmanship) but become unacceptable only because of the blackmail.
Some things are acceptable in small quantities but unacceptable in large ones. You don’t want to incentivise those things.
We do have some laws that are explicit about scale, for instance speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.
Many laws incorporate scaling in terms of damage threshold or magnitude of single incident. We have very few laws that are explicit about scale in terms of overall frequency or number of participants in multiple incidents. City zoning may be one example of success in this area—only allowing so many residents in an area, without specifying who.
There are very few criminal laws such that something is legal only when a few people are doing it, and becomes illegal if it’s too popular. Much more common to just outlaw it and allow prosecutors/judges leeway in enforcing it. I’d argue that this choice gets exercised in ways that are harmful, but it does get the job (permitting low-level incidence while preventing large-scale infractions) done.
s/tenancy/tendency