*Reworked version of a shortform comment. This is still not the optimal version of this post, but it’s the one I had time to re-publish.*

I’ve spent the past few years trying to get a handle on what it means to be moral. In particular, to be moral in a robust way that holds up in different circumstances.

A year-ish ago, while arguing about what standards scientists should hold themselves to, I casually noted that I wasn’t sure whether, if I were a scientist and the field of science were rife with dishonesty, it would be better for me to focus on becoming more honest than the average scientist, or to focus on Some Other Cause, such as avoiding eating meat.

A bunch of arguments ensued, and elucidating my current position on the entire discourse would take a lot of time. But I do think there was something important I was missing when I first wondered about that. I think a lot of Effective Altruism types miss this, and it’s important.

The folk morality I was raised with generally would rank the following crimes in ascending order of badness:
- Lying
- Stealing
- Killing
- Torturing people to death (I’m not sure if torture-without-death is generally considered better/worse/about-the-same-as killing)

But this is a conflation of a few different things. One axis I was ignoring was "morality as coordination tool" vs "morality as ‘doing the right thing because I think it’s right’." And these are actually quite different. And, importantly, you *don’t get to spend many resources* on morality-as-doing-the-right-thing unless you have a solid foundation of morality-as-coordination-tool. (This seems true whether "doing the right thing" looks like helping the needy, or "doing God’s work", or whatever.)

There’s actually a 4x3 matrix you can plot lying/stealing/killing/torture-killing into, the other axis being:
- Harming the ingroup
- Harming the outgroup (who you may benefit from trading with)
- Harming powerless people who can’t trade or collaborate with you

And I think you need to tackle these mostly in this order. If you live in a world where even people in your tribe backstab each other all the time, you won’t have spare resources to spend on the outgroup or the powerless until your tribe has gotten its basic shit together and figured out that lying/stealing/killing each other sucks.

If your tribe has its basic shit together, then maybe you have the slack to ask the question: "Hey, that outgroup over there, who we regularly raid and steal their sheep and stuff, maybe it’d be better if we traded with them instead of stealing their sheep?" and then begin to develop cosmopolitan norms.

If you eventually become a powerful empire, you may notice that you’re going around exploiting or conquering and… maybe you just don’t actually want to do that anymore? Or maybe, within your empire, there’s an underclass of people who are slaves or slave-like instead of being formally traded with. And maybe this is locally beneficial. But… you just don’t want to do that anymore, because of empathy, or because you’ve come to believe in principles that say to treat all humans with dignity. *Sometimes* this is because the powerless people would actually be more productive if they were free builders/traders, but sometimes it just seems like the right thing to do.

Avoiding harming the ingroup and the productive outgroup are things that you’re locally incentivized to do, because cooperation is very valuable. In an iterated strategy game, these are things you’re incentivized to do all the way along. Avoiding harming the powerless is something you are limited in your ability to do, until the point where it starts making sense to cash in your victory points.

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do.
It seems common for people to conflate "actions that are bad because they ruin the ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I’m not making a claim about exactly how all of this should influence your decision-making. The world is complex. Cause prioritization is hard. But, while you’re cause-prioritizing and deciding on strategy, make sure you keep this distinction in mind.
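The claim that cooperation with the ingroup and tradable outgroup is "locally incentivized all the way along" can be sketched with a toy iterated prisoner’s dilemma. The payoff numbers and strategies below are illustrative assumptions, not anything from the post:

```python
# Toy iterated prisoner's dilemma: repeatedly harming agents you interact
# with (defecting) loses to sustained cooperation over many rounds.
# Payoff values are standard illustrative choices, not from the post.

PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I am exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect every round, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs (a, b) over repeated play."""
    history_a, history_b = [], []  # each entry: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two cooperators each do far better than a defector facing a
# retaliator: exploitation pays once, then collapses to mutual defection.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The point of the sketch is the asymmetry the post relies on: against agents who can retaliate or withdraw trade, defection is self-defeating, whereas the model gives no such built-in penalty for harming agents with no moves to play, i.e. the powerless.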
I’ve had similar thoughts; the working title that I jotted down at some point is "Two Aspects of Morality: Do-Gooding and Coordination." A quick summary of those thoughts:

Do-gooding is about seeing some worlds as better than others, and steering towards the better ones. Consequentialism, basically. A widely held view is that what makes some worlds better than others is how good they are for the beings in those worlds, and so people often contrast do-gooding with selfishness, because do-gooding requires recognizing that the world is full of moral patients.

Coordination is about recognizing that the world is full of other agents, who are trying to steer towards (at least somewhat) different worlds. It’s about finding ways to arrange the efforts of many agents so that they add up to more than the sum of their parts, rather than less. In other words, try for: many agents combine their efforts to get to worlds that are better (according to each agent) than the world that agent would have reached without working together. And try to avoid: agents stepping on each other’s toes, devoting lots of their efforts to undoing what other agents have done, or otherwise undermining each other’s efforts. Related: game theory, Moloch, decision theory, contractualism.

These both seem like aspects of morality because:
- "moral emotions", "moral intuitions", and other places where people use words like "moral" arise from both sorts of situations
- both aspects involve some deep structure related to being an agent in the world; neither seems like just messy implementation details for the other
- a person who is trying to cultivate virtues or become a more effective agent will work on both
Indeed. Specifically, "right" and "good" are not synonyms.
"Right" and "wrong", that is, praiseworthiness and blameworthiness, are concepts that belong to deontology. A good outcome in the consequentialist sense, one that is generally desired, is a different concept from a deontologically right action.
Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.
That’s not how consequentialism works. The consequentialist answer would be to punish the plant manager if and only if doing so would cause the world to become a better place.
I think I like "Do Gooding" in place of where I currently have "altruism" in my title. I used "altruism" despite it actually being more specific than I wanted because I couldn’t think of a succinct enough word-phrase.
Thanks for writing this. I’ve started to shift away from utilitarianism to something that is more a combination of utilitarianism and contract theory, with the utilitarianism being about being altruistic and the contract theory being about building co-operation. I haven’t thought out the specifics of how to make this work in detail yet, only the vague outline. I guess the way you’ve justified focusing on co-operation above is in terms of consequences; however, people are often reluctant to co-operate with those who will use consequentialist justifications to break co-operation, so I think it’s necessary to place some intrinsic value on co-operation.
I understand the direction, but it’s VERY hard to mix the two without it being the case that the contractualism is just a part of consequentialism. Being known as a promise-keeper is instrumentally desirable, and in VERY MANY cases leads to less-short-term-optimal behaviors. But this is just longer-term consequentialist optimization. And, of course, there can be a divergence between your public and private beliefs. It’s quite likely that, even if you’re a pure consequentialist in truth (and acknowledge the instrumental value of contracts and the heuristic/calculation value of deontological-sounding rules), you’ll get BETTER consequences if you signal extra strength to the less-flexible aspects of your beliefs.
I already tried to address this, although maybe I could have been clearer. If you are just calculating the utility of defecting against the utility of passing up that opportunity in order to co-operate and build/maintain trust, then people will see you as manipulative and won’t trust you. So you need to value co-operation more than that. But then, maybe your point is that you can include this in the utility calculation too? If so, it would be useful for you to confirm.
This is what rule and virtue (and global) consequentialism are for. You don’t need to be calculating all the time, and as you point out, that might be counterproductive. But every now and then, you should (re)evaluate what rules to follow and what kind of character you want to cultivate.
And I don’t mean this as saying rule or virtue consequentialism is the correct moral theory; I just mean that you should use rules and virtues, as a practical matter, since it leads to better consequences.
Sometimes you will want to break a rule. This can be okay, but should not be taken lightly, and it would be better if your rule included its exceptions. A rule can be something like a very strong prior towards/against certain kinds of acts.
I agree. I think of myself as a utilitarian in the same subjective sense that I think of myself as (kind of) identifying with voting Democrats (not that I’m a US citizen). I disagree with Republican values, but it wouldn’t even occur to me to poison a Republican neighbor’s tea so they can’t go vote. Sure, there’s a sense in which one could interpret "Democrat values" fanatically, so they might imply that I prefer worlds where the neighbor doesn’t vote, and then we’re tempted to wonder whether ends do justify the means in certain situations. But thinking like that seems like a category error if the sense in which I consider myself a Democrat is just one part of my larger political views, where I also think of things in terms of respecting the political process.

So, it’s the same with morality and my negative utilitarianism. Utilitarianism is my altruism-inspired life goal, the reason I get up in the morning, the thing I’d vote for and put efforts towards. But it’s not what I think is the universal law for everyone. Contractualism is how I deal with the fact that other people have life goals different from mine. Nowadays, whenever I see discussions like "Is classical utilitarianism right or is it negative utilitarianism after all?" – I cringe.
Hmm, you cross-posted to EA forum, so I guess I’ll reply both places since each might be seen by different folks.
*Crossposted on EA forum (I think this particular convo is more valuable over there).*

The issue isn’t just the conflation, but missing a gear about how the two relate. The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.

Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But it’s also quite important that the way the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.

In particular, I was concretely assuming "torturing people to death is generally worse than lying." But that’s specifically comparing within alike circles. It is now quite plausible to me that lying (or even exaggeration/filtered evidence) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don’t have the ability to coordinate with. (Or might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first.)
Thanks. (I think honestly the EA forum needs to see this more than LessWrong does so I appreciate some commenting there. I’ll probably reply in both places for lack of a better option)
This is worth exploring, and I think there’s another aspect of it that relates: the distinction between edge and level. Whether you’re improving something, or maintaining a standard. Your comment about