These are some intuitions people often have:
- You are not required to save a random person, but you are definitely not allowed to kill one
- You are not required to create a person, but you are definitely not allowed to kill one
- You are not required to create a happy person, but you are definitely not allowed to create a miserable one
- You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
- You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and you are not allowed to do so if it requires pushing someone in front of the train.

Here are some more:
- You are not strongly required to give me your bread, but you are not allowed to take mine
- You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
- You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights. In particular these well-known asymmetries seem to be explained well by property rights:
- The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
- ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
- Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Are pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.
An alternative explanation for the act-omission distinction, from Joshua Greene’s *Moral Tribes* (emphasis added):
Crossposted from Katja’s blog:
The root problem here is that the category "moral" lumps together (a) intuitions about what’s intrinsically valuable, (b) intuitions about what the correct coordination protocols are, and (c) intuitions about what’s healthy for a human.
Kantian morality, like the property intuitions you’ve identified, is about (b) ("don’t lie" doesn’t fail gracefully in a mixed world, but makes sense and is coherent as a proposed operating protocol), while Rawlsian morality and the sort of utilitarian calculus people are trying to derive from weird thought experiments about trolleys are about (a) (questions about things like distribution presuppose that we already have decent operating protocols to enable a shared deliberative mechanism, rather than a state of constant epistemic war).
Comment
I mean, yes, but I’m not sure this much impacts Katja’s analysis which is mostly about moral intuitions that are in conflict with moral reasoning. That the category of things we consider when talking about morals, ethics, and axiology is not clean cut (other than perhaps along the lines of being about "things we care about/value") doesn’t really change the dissonance between intuition and reasoning in particular instances.
Comment
I think that the sort of division I’m proposing offers a way to decompose apparently incoherent "moral intuitions" into much more well-defined and coherent subcategories. I think that if someone practiced making this sort of distinction, they’d find this type of dissonance substantially reduced.
In other words, I’m interpreting the dissonance as evidence that we’re missing an important distinction, and then proposing a distinction. In particular I think this is a good alternative to Katja’s proposed writeoff of intuitions that can be explained away by e.g. property rights.
That’s flattering to Rawls, but is it actually what he meant? Or did he just assume that you don’t *need* a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?
A couple of guesses for why we might see this, which don’t seem to depend on property:
An obligation to act is much more freedom-constraining than a prohibition on an action. The more one considers all possible actions under an obligation to take the most ethically optimal one, the less room one has for exploration, contemplation, or pursuing one’s own selfish values. A prohibition on actions does not have this effect.
The environment we evolved in had roughly the same level of opportunity to commit harmful acts, but far less opportunity to take positive consequentialist action (and far less complicated situations to deal with). It was always possible to hurt your friends and suffer the consequences, but it was rare to have to think about the long-term consequences of every action.
The consequences of killing, stealing, and hurting people are easier to predict than those of altruistic actions. Resources are finite, so sharing them can be harmful or beneficial, depending on the circumstances and who they are shared with. Other people can defect or refuse to reciprocate. If you hurt someone, they are almost guaranteed to retaliate; if you help someone, there is no guarantee there will be a payoff for you.
Comment
You’re talking about altruism. Trolley problems are about consequentialism proper: they are problematic even if you’re pretty selfish, as long as we can find some five people whose importance to you is about equal.
Comment
The OP draws this tension between consequentialism and ethical asymmetry, and mentions the trolley problem in that context. Therefore, the particular consequentialism under discussion is one which does enjoin symmetry; that is, altruism. We are not talking about consequentialism with respect to arbitrary utility functions here.
Indeed, the large number of people who switch but don’t push seem to care enough about strangers to demonstrate the issue. In the usual hypothetical, the people on the trolley tracks are strangers and thus their importance to you is already about equal. Shouldn’t you be asking that we find people whose importance to you is large? Kurzban-DeScioli-Fein (summary table) ask the question in terms of pushing a friend to save five friends and find that this reduces the discrepancy between switch and push. (Well, how do you define the discrepancy? It reduces the additive discrepancy, but not the logit discrepancy.) In a different attempt to isolate consequences from rules, they ask whether people would want someone else in the trolley problem to take the action. They find a discrepancy with pushing, but substantially smaller than the push/switch discrepancy.
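To illustrate what the additive-versus-logit distinction means, here is a small Python sketch; the rates below are invented purely for illustration and are not the study’s numbers:

```python
import math

def logit(p: float) -> float:
    """Log-odds of a proportion p."""
    return math.log(p / (1 - p))

# Invented willingness-to-act rates, NOT the Kurzban-DeScioli-Fein results.
scenarios = {
    "strangers": {"switch": 0.80, "push": 0.20},
    "friends":   {"switch": 0.94, "push": 0.50},
}

for label, rates in scenarios.items():
    additive = rates["switch"] - rates["push"]
    logit_gap = logit(rates["switch"]) - logit(rates["push"])
    print(f"{label}: additive gap = {additive:.2f}, logit gap = {logit_gap:.2f}")

# With these made-up numbers the additive gap shrinks (0.60 -> 0.44) while the
# log-odds gap stays essentially the same (~2.8), showing how the two notions
# of "discrepancy" can come apart.
```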
At least where I live, two out of three of those property rights are wrong.
Property rights explicitly give way to more basic human rights. For instance, you are allowed to steal a car if it’s the only way that you can get an injured person to a hospital. And of course, you’re allowed to steal bread if that’s the only way you can get to eat.
I suspect property rights are just a subset of moral intuitions, both coming from the same cognitive and social causes rather than one coming from the other. http://www.daviddfriedman.com/Academic/Property/Property.html doesn’t need much modification to apply to many moral questions. The basic asymmetry you’re pointing out (not forced to give, but not allowed to take; not forced to act for good, but prevented from acting for bad) could be framed as simple humility—we don’t know enough to be sure, so bias toward doing nothing. Or it could be a way to create a ratchet effect—never act to make it worse, but sometimes act to make it better. Or it could be an evolved way to maintain power structures.
On deeper reflection, it’s clear that moral intuitions aren’t always what we’d choose as a rational moral framework. It seems likely that this distinction between action and inaction is an artifact, not a truth. Inaction is action, and you’re responsible for all the harm you fail to prevent.
I’m attracted to viewing these moral intuitions as stemming from intuitions about property because the psychological notion of property biologically predates the notion of morality. Territorial behaviors are found in all kinds of different mammals, and prima facie the notion of property seems to be derived from such behaviors. The claim, then, is that during human evolution, moral psychology developed in part by coopting the psychology of territory.
I’m skeptical that anything normative follows from this though.
Comment
That means FAI might want to give us territoriality or some extrapolation of it, if that’s part of what we enjoy and want. Not sure there’s any deeper meaning to "normativity".
Something in this view feels a bit circular to me; correct me if I’m way off the mark.

Question: why assume that moral intuitions are derived from pre-existing intuitions for property rights, and not the other way around?

Reply: because property rights work ("property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources"), and if they were based on some completely unrelated set of intuitions (morality), that would be a huge coincidence.

Re-reply: yeah, but it can also be argued that morality ‘at least appears to be a system for people with diverse goals to coordinate use of scarce resources’, those resources being life and welfare. More "moral" societies seem to face less chaos and destruction, after all. It works too. It could be that these came first, and property rights followed. It even makes more evolutionary/historical sense.

So in other words, we may be able to reduce the entire comparison to saying that moral intuitions are based on a set of rules of thumb that helped societies survive (much like property rights helped societies prosper), which is basically what every evolutionist would say when asked what’s the deal with morality. And this ground is already well explored, with the general answers ranging from consequentialism—our intuitions, whatever their source, are just suggestions that need to be optimized on the basis of the outcomes of each action—to trolley-problem morals—we ought to explore the bounds and specifics of our moral intuitions and build our ethics on top of them.
An interesting observation. A somewhat weird source of information on this may be societies with slavery, where people’s lives could be the property of someone else (that is, most human societies).
I would guess this is something someone has already explored somewhere, but the act-omission distinction seems like a natural consequence of the intractability of "actions not taken". The model is this: the moral agent takes a sample from some intractably huge action space, evaluates each sampled action by some moral function M (for example by rejection sampling based on utility), and does something. From an external perspective, morality is likely about the moral function M (and evaluating agents based on that), in contrast to evaluating them based on the sampling procedure.
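A minimal Python sketch of this model (the action space, sample size, and moral rule below are placeholders of mine, not anything from the comment):

```python
import random
from typing import Optional

ACTION_SPACE = range(10**9)   # stand-in for an intractably huge action space
SAMPLE_SIZE = 5               # the agent only ever considers a handful of options

def moral_function_M(action: int) -> bool:
    """Placeholder moral filter: reject actions judged impermissible."""
    return action % 7 != 0    # arbitrary toy rule

def act() -> Optional[int]:
    # Sample a few candidate actions; everything unsampled is an "action not taken"
    # that was never even considered.
    candidates = random.sample(ACTION_SPACE, SAMPLE_SIZE)
    permitted = [a for a in candidates if moral_function_M(a)]
    # If nothing sampled is permitted, the agent omits (does nothing), even though
    # the wider space surely contains permissible and beneficial actions.
    return random.choice(permitted) if permitted else None

print(act())
```

On this reading, judging agents by M rather than by which actions their sampling procedure happened to surface is what reproduces the act-omission asymmetry.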
This seems to me to be an extremely strained interpretation.
There is another interpretation, which is that strong property rights are moral. I am currently 80% through *Atlas Shrugged*, which presents a very strong case for this interpretation. Basically, when you take away property rights, whether the material kind, the action of one’s labor, or the spiritual kind, you give power to those who are best at taking. Ayn Rand presents the results of this kind of thinking, the actions that result, and the society it creates. I strongly recommend you read it.
The OP is basically the fairly standard basis of American-style libertarianism. It doesn’t particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology. But I don’t think the moral intuitions you list are terribly universal. The closest parallel I can think of is someone listing contemporary American copyright law and presenting its norms as if they’re some kind of universally accepted system of morals.
I was confused about the title until I realized it means the same thing as "Do ethical asymmetries come from property rights?"
The sentence seems to be cut off in the middle
Testable implication: communities that strongly emphasize upholding property conventions will contain more individuals who share these intuitions, while communities that do not will contain fewer. Don’t you agree?
Perhaps I’m being a bit dense here, but I have some difficulty seeing a real asymmetry here—though I do see that these are commonly understood views. They seem to be something along the lines of logical fallacies like "if p then q; q, therefore p", and then noting that having q but not finding p is some asymmetry. From the property-rights view you are suggesting, I’m wondering about the concept of killing and owning oneself. I would think the moral/ethical asymmetry is related to killing others versus killing oneself. If we "own" ourselves, then we can make the property-rights argument about not killing others—just like not taking their car or money. But that view implies we can kill ourselves (or sell ourselves into slavery, for that matter). That we have constraints on killing ourselves seems to be the asymmetric setting from a property-rights view.
Comment
It is not even a norm. If I marry my true love, someone else who loves my spouse may feel miserable as a result. No one is obligated to avoid creating this sort of misery in another person. We might quibble that such a person is immature and taking the wrong attitude, but the "norm" does not make exceptions where the victims are complicit in their own misery; it just prohibits anyone from causing it.

We might be able to construct a similar thought experiment for "dire situations". If I invent a new process that puts you out of business by attracting all your customers, your situation may become dire, due to your sudden loss of income. Am I obligated in any way to avoid this? I think not.

Those two norms (don’t cause misery or dire situations) only work as local norms, within your local sphere of intimate knowledge. In a large-scale society, there is no way to assure that a particular decision won’t change something that someone depends upon emotionally or economically. This is just a challenge of cosmopolitan life: I have the ultimate responsibility for my emotional and economic dependencies, in the literal sense that I am the one who will suffer if I make an unwise or unlucky choice. I can’t count on the system (any system) to rectify my errors (though different systems may make my job harder or easier).
Comment
Oops, I misinterpreted "create", didn’t I? My quibble still works. I couldn’t know for sure while trying to conceive a child that my situation would necessarily continue to be sufficient to care for that child (shit can happen to anyone). Even if my circumstances continue as expected, my children may develop physical or mental problems that could make them miserable. It’s not a yes/no question; it’s a "how much risk" question. Where do we draw the line between too much risk and a reasonable risk?
What? No they don’t. Why do you say this?
Comment
I think they sometimes do, or at least it is eminently plausible that they sometimes do. The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism. I’m sure you’re aware of the trolley problem, so I’m not bringing it up as an example I think you’re not aware of, but more to note that I’m confused as to why, given that you’re aware of it, you think it doesn’t defy consequentialism.

For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person’s level of happiness by x. Thus, not creating this person when you could goes against consequentialism.

There are ways to argue that these asymmetries are actually optimal from a consequentialist perspective, but it seems to me the default view would be that they aren’t, so I’m confused why you think that they so obviously are. (I’m not sure that the fact that these asymmetries defy consequentialism would make them confusing—I don’t think (most) humans are intuitive consequentialists, at least not about all cases, so it seems to me not at all confusing that some of our intuitions would prescribe actions that aren’t optimal from a consequentialist perspective.)
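To make the total-view arithmetic concrete, here is one way to write it down (the notation is mine, not the commenter’s):

```latex
% Total view: the value of an outcome is the sum of everyone's welfare.
V = \sum_{i=1}^{n} w_i
% Option A: raise one existing person's welfare by x.
V_A = V + x
% Option B: create an additional person whose welfare is x.
V_B = \sum_{i=1}^{n} w_i + x = V + x
% Both options add x to the total, so the total view ranks them equally,
% and declining to create the person forgoes x units of value.
```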
Comment
It is no such thing. Anyone who considers it thus is wrong.
A world where a bystander has murdered a specific fat person by pushing him off a bridge to prevent a trolley from hitting five other specific people, and a world where a trolley was speeding toward a specific person, and a bystander has done nothing at all (when he could, at his option, have flipped a switch to make the trolley crush five other specific people instead), are very different worlds. That means that the action in question, and the omission in question, have different consequences.
Valuable to whom?
No, it doesn’t. This scenario is nonsensical for various reasons (incomparability of "level of happiness" and general implausibility of treating "level of happiness" as a ratio scale is one big one), but from a person-centered view (which is the only kind of view that isn’t absurd), these are vastly different consequences.
The total view (construed in the way that is implied by your comments) is not a plausible theory.
Comment
Comment
That may very well be, but if—for instance—the "most consequentialists" to whom you refer are utilitarians, then the claim that their opinion on this is manifestly nonsensical is exactly the claim I am making in the first place… so any such majoritarian arguments are unconvincing.
The more outlandish you have to make a scenario to elicit a given moral intuition, the less plausible that moral intuition is, and the less weight we should assign to it. In any case, even if the consequences are the same in the modified scenario, that in no way at all means that they’re also the same in the original, unmodified scenario.
Yet many people seem to find it plausible, including me. Have you written up a justification of your view that you could point me to?
Criticisms of utilitarianism (or of total utilitarianism in particular, or of other similarly aggregative views) are not at all difficult to find. I don’t, in principle, object to providing references for some of my favorite ones, but I won’t put in the effort to do so if the request to provide them is made only as a rhetorical move. So, are you asking because you haven’t encountered such criticisms? Or because you have, but found them unconvincing (and if so, which sort have you encountered)? Or because you have, and are aware of convincing counterarguments?
(To be clear: for my part, I have never encountered convincing responses to any of [what I consider to be] the standard criticisms. At most, there are certain evasions[1], or handwaving, etc.)
Comment
Everyone’s being silly. Consequentialism maximizes the expected utility of the world. Said understands "world" to mean "universe configuration history". The others understand "world" to mean "universe configuration". Said, your "[1]" is not a link.
Comment
Consequentialist moral frameworks do not require the agent to have[1] a utility function. Without a utility function, there is no "expected utility".
In general, I would advise avoiding such "technical-language" rephrasings of standard definitions; they often (such as here) create inaccuracies where there were none.
Unless you’re positing a last-Thursdayist sort of scenario where we arrive at some universe configuration "synthetically" (i.e., by divine fiat, rather than by the universe evolving into the configuration "naturally"), this distinction is illusory. Barring such bizarre, wholly hypothetical scenarios, you cannot get to a state where, for instance, people remember an event happening, there are records and other evidence of the event happening, etc., without that event actually having happened.
It wasn’t meant to be a link, it was meant to be a footnote reference (as in this comment); however, I seem to have forgotten to add the actual footnote, and now I don’t remember what it was supposed to be… perhaps something about so-called "normalizing assumptions"? Well, it’s not critical.
[1] Here "have" should be taken to mean "have preferences that, due to obeying certain axioms, may be transformed into".
Comment
I only meant to unpack consequentialism’s definition in order to get a handle on the "world" term. I’m fine with "Consequentialism chooses actions based on their consequences on the world." The distinction is relevant for, for example, whether to care about an AI simulating humans in detail in order to figure out their preferences. Quantum physics combines amplitudes of equal universe configurations regardless of their history. A quantum computer could arrive at the same state through different paths, some of which had it run morally relevant algorithms. Even if the distinction is illusory, it seems to be the crux of everyone’s disagreement.