At the beginning of last year, I noticed that neuroscience is very confused about what suffering is. This bothered me quite a lot, because I couldn’t see how Utilitarianism or even Consequentialism could ever be coherent without a coherent understanding of suffering. The following is my simple model that has helped me think through some issues and clarified my intuitions a bit. I am both posting this to share what I feel is a useful tool, and inviting criticism.

"Utilitarianism" (which term I will use to refer to all applicable formulations of Utilitarianism) assumes a space of subjective states that looks like a number line. Suffering is bad. Joy is good. They are negatives of each other, and straightforwardly additive. This has always struck me as nonsense, not at all reflective of my inner experience, and thus a terrible basis for extrapolating ethics over populations.

It feels more natural to treat Joy and Suffering as two axes. There may or may not be possible mappings between them depending on context. It does not seem natural to assume they are fully orthogonal, but there is also no reason to assume any particular mapping. There is particularly no reason to assume the mapping is anything like "+1 suffering is identical to −1 joy". There is extra-super-no-reason to assume that there’s some natural definition of a vector magnitude of joy-suffering that is equivalent to "utility".

If you’re having a nice conversation with an old friend you haven’t seen in a while and you’re feeling good, it’s likely that your subjective sense of suffering is quite low and your subjective sense of joy is quite high. If you have a bad headache and you’re lying in bed trying to rest, your subjective sense of suffering is pretty high, and your level of joy is quite low.

What interests me is the "mixed" scenario, where your friend is only in town briefly, and you happen to have a headache. You’re happy to be able to visit with them, so there’s a definite quality of joy to your subjectivity. You wish you didn’t have a headache, but the headache doesn’t somehow negate the joy in being able to talk to them. It’s possible that forcing yourself to talk to them, rather than lie in bed, is making the headache worse, thus increasing the "suffering" component of your experience, but this is in a very important sense worth it because of the joy premium. It’s also entirely possible that the conversation is so good that you become distracted from the headache and your subjective experience reflects less suffering than if you were lying in bed resting! Suffering and joy interact complexly and in context-dependent ways!

Thus there emerges something like an envelope of possible states, as depicted by the dashed red region. Could you collapse those axes onto a number line, somehow? Yeah, I guess. Why would you? Why would you throw out all the resolution you gain by viewing joy and suffering as separate but interrelated quantities? Why would you throw all this out in favor of some lossy definition of "utility"? If your answer is, "So I can perform interpersonal utilitarian calculations," that’s a terrible answer.

Let’s briefly look at a few popular Utilitarian problems. I don’t propose that my simple model really solves anything. It’s more that I find it to be an interesting framework that sometimes clarifies intuitions. My intuitions have never been satisfied that I should choose Torture over Dust Specks.
In the two-axis model, Dust Specks is essentially a slight, transient increase along the Suffering axis, multiplied by an astronomical number. The Dust Speck "intervention" can happen to any person at any point in the Joy/Suffering field. They can be skydiving, they can be doing paperwork, they can be undergoing a different kind of torture already. The Dust Speck always induces a slight, transient increase in suffering, relative to where they started out.

Torture is the long-term imposition of a "ceiling" on joy, and a "floor" on suffering, for an individual. You are confining a person’s subjectivity to that narrow, horrible lower-right-hand envelope of subjectivity-space. I would argue that this feels morally distinct, in some important sense, from just inducing a large number of transient suffering-impulses.

"But Matt, couldn’t you just reduce all this to vector-sums of Suffering/Joy and collapse the problem to the typical Utilitarian formulation?" Yeah, you could, I guess. But I feel like you lose something important when you do that blindly. You throw out the intuition that 50 years of torture is different from 3^^^3 dust specks. And what’s the point of an ethics that throws out important intuitions?

I have even less insight to provide on the Repugnant Conclusion, other than to say, again, that there seems to be something deeply, intuitively monstrous about confining humans permanently to undesirable zones of Joy/Suffering space, and that it might be worth thinking about why, exactly. For example, it appears that life may be not-worth-living because of a permanently high state of suffering, or because of a permanently low state of joy, or both. This insight, while minor, is not something that falls out of a utility-number-line analysis.

The more progress I make in meditation, the more correct this perspective feels, and the less correct number-line utilitarianism feels. "Suffering" and "Joy" appear more and more as distinct mental phenomena. Reducing suffering feels less and less like the same thing as increasing joy, and vice versa. I welcome your feedback.
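To make the contrast concrete, here is a minimal sketch in Python of the two-axis state versus the number-line collapse; the numbers are made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SubjectiveState:
    """A point in the two-axis model: joy and suffering vary separately."""
    joy: float        # 0 = none, higher = more
    suffering: float  # 0 = none, higher = more

def collapse_to_utility(state: SubjectiveState) -> float:
    """The number-line move: assume +1 suffering is identical to -1 joy.
    This is exactly the lossy mapping the post objects to."""
    return state.joy - state.suffering

# Illustrative values only; nothing hangs on the particular numbers.
headache_in_bed      = SubjectiveState(joy=1.0, suffering=6.0)
friend_with_headache = SubjectiveState(joy=8.0, suffering=7.0)  # talking made the headache worse
friend_no_headache   = SubjectiveState(joy=8.0, suffering=1.0)

for s in (headache_in_bed, friend_with_headache, friend_no_headache):
    print(s, "->", collapse_to_utility(s))

# The collapsed numbers (-5.0, 1.0, 7.0) impose a ranking, but they erase the
# fact that the middle state is high on *both* axes; a different assumed
# mapping between the axes would reorder the states.
```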
Interesting ideas. I wonder how this might integrate with more complex models of utility like the valence/arousal model. Also I’ve used slightly different terminology than you do, but I wonder if you have any thoughts on how you might have to adjust your model if you account for suffering as identification with pain rather than pain itself (cf. what I’ve written about this here with links to additional, related work by FRI).
Thanks for sharing those links. It may indeed be appropriate to use more than two axes or more nuance in general. I think the key idea shared by my simplistic model and the valence/arousal model is the notion that "good feelings" aren’t just "nega-bad feelings". If you want to construct a preference ordering over subjective states—which is the point of using utility in the first place—then it pays dividends to really introspect on what your preferences are, and not treat wellbeing as a scalar quantity by default. For example, you might actually prefer "talking to old friend, while suffering through a headache" to "missing out on talking to old friend, but don’t have a headache at all". Or maybe not. It will depend on context.

Regarding suffering as identification with pain, I basically agree with your blog post. I do think that the human brain by default identifies with pain. I can "turn off" the suffering component of pain with intense concentration and directed, sustained attention. As soon as my attention wavers, the suffering returns. It’s exhausting and unsustainable. Perhaps extremely advanced meditators can permanently turn off suffering. I view this as a promising direction for investigation, but not necessarily in the short term.

I have wondered if animals aren’t actually perpetually walking around in something more like a blissful flow-state punctuated by extremely brief, suffering-free episodes of negative valence. Perhaps a cow, lacking reflectivity, can literally never suffer as much as I suffer by restraining myself from eating chocolate. I have very low confidence in these thoughts, though.
Yeah, non-human animals remain a tricky subject. For example, I’m pretty sure thermostats are minimally conscious in a technical sense, yet probably don’t suffer in any meaningful way because they have no way to experience pain as pain to the extent we allow pain to include things like negative-valence feedback (and what does "negative valence" even mean for a thermostat?). Yet somewhere along the way we get things conscious enough that we can suspect them of suffering the way we do, or suffering the way we do but to a lesser degree. I like your thought that maybe humans, or generally more conscious processes (in the IIT sense that consciousness can be quantified), are capable of more suffering as it seems to line up with my expectation that things that experience themselves more have more opportunity to experience suffering. This has interesting implications too for the potential suffering of AIs and other future things which may be more conscious than anything that presently exists.
Why stop at two? What’s that Isaac Asimov quote about one being possible, and infinity being possible, but two being ridiculous? If you just have one dimension, then you’re doing utility comparisons—not necessarily for interpersonal reasons, but just so that you can rank actions to decide which action to take! If you’re going to have more, you can have as many as you like, because then they’re just going to be dimensions of variation between things humans might prefer.
I agree! I think you should use as many dimensions as necessary to define a subjective state, no more, no less. You don’t want to leave any important ethical intuitions on the cutting room floor. You don’t want to compare and "rank" apples and oranges unless it’s appropriate. If you’re comparing "dying of malaria" versus "not dying of malaria", I have no problem with ranking those outcomes in a simple preference ordering, to which it would be appropriate to apply QALYs or something. Crushing the full symphonic nuance of human experience onto a number line, without capturing every micro-wrinkle of that landscape, is the pathway to the Bad Ending for humanity. A true success at CEV means that the FAI has successfully understood every possible dimension of human subjectivity and can reliably answer questions of preference between "headache+friend" and "no headache+no friend", and even more ambiguous questions. I frankly didn’t put a ton of thought into what my two axes were. I was merely trying to capture the intuition that whatever Suffering is and whatever "Positively Valenced Emotion" is, they aren’t merely opposites of each other.
What kind of meditation?
The images don’t seem to work.
I fixed them, fyi.
The point of having a single utility is that we want to ask you "would you prefer outcome A or B" for all possible outcomes, and, unless your answers are inconsistent, we can put those outcomes on a single line, from best to worst. I don’t see what benefit having more dimensions adds. The important difference in your dust speck/torture example is having floors and ceilings, not the second axis. You can add a ceiling for 1d utility (though I’m not sure why you would).
I would frame it as: subjective states already have more than one dimension. When you compress them into one dimension, you are throwing that information away.
Wait, do you find yourself tempted to violate preference intransitivity? I don’t think I do. Please share.

> subjective states already have more than one dimension

Of course they do. Nobody has ever suggested expressing everything that happens in a human brain in one number. That’s not what utility is for. Utility is for choosing the best outcome. Your 2d model can’t do that.
It is rather strange to say that utility is for choosing the best outcome, given that a utility function can only be constructed in the first place if it’s already true that we can impose a total ordering on outcomes. If what you have in mind when you say "utility" is VNM-utility (which is what it sounds like), then as you know, only agents whose preferences satisfy the axioms have a utility function (i.e., a utility function can be constructed for an agent if and only if the agent’s preferences satisfy the axioms).

Whether an agent’s preferences do, or do not, satisfy the VNM axioms, is an empirical question. We can ask it about a particular human, for instance. The answer will be "yes" or "no", and it will, again, be an empirical fact. Suppose we investigate some person’s preferences, and find that they do not satisfy the VNM axioms? Well, that’s it, then; that person has no utility function. That is a fact. No normative discussions (such as discussions about what a utility function is "for") can change it.

I read moridinamael’s commentary to be aimed at just such an empirical question. He is asking: what can we say about human preferences? Is our understanding of the facts on the ground mistaken in a particular way? Are human preferences in fact like this, and not like that? —and so on.

Given that, comments about what "the point" of having a utility function is, or what a utility function is "for", or any other such normative concerns, seem inapplicable and somewhat strange. Asking "what benefit having more dimensions adds" seems like entirely the wrong sort of question to ask—a confusion underlies it, about what sort of thing we’re talking about. The additional dimensions either are present in the data, or they’re not. (Would you ask "what benefit" is derived from using three dimensions to measure space—why not define points in space using a one-dimensional scalar, isn’t that enough…? etc.)
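To make the empirical-question framing concrete, here is a minimal sketch (outcome names hypothetical) of what it means for a finite set of observed preferences to admit, or fail to admit, a utility representation. Note that it checks only the ordinal part (completeness and transitivity); the full VNM axioms further constrain preferences over lotteries.

```python
from itertools import permutations

def admits_utility_representation(outcomes, prefers):
    """prefers(a, b) -> True iff a is weakly preferred to b.
    Over finitely many sure outcomes, an (ordinal) utility function exists
    only if the observed relation is complete and transitive."""
    # Completeness: every pair of outcomes is comparable.
    for a in outcomes:
        for b in outcomes:
            if not (prefers(a, b) or prefers(b, a)):
                return False
    # Transitivity: a >= b and b >= c must imply a >= c.
    for a, b, c in permutations(outcomes, 3):
        if prefers(a, b) and prefers(b, c) and not prefers(a, c):
            return False
    return True

# A cyclic set of observed preferences: A over B, B over C, C over A.
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
prefers = lambda x, y: x == y or (x, y) in cyclic

print(admits_utility_representation("ABC", prefers))  # False: no utility function exists, as a matter of fact
```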
Flagging that (I think) utilitarianism and VNM-utility are different things. They *are* closely related, but I think Bentham invented utilitarianism before VNM utility was formalized. They are named similar things for similar reasons but the formalisms don’t necessarily transfer.

It is separately the case that:

a) humans (specific or general) might be VNM agents, and that if they are not, they might aspire to try to be, so that they don’t spend all their time driving from San Francisco to San Diego to New York

b) even if they are not, if you care about global welfare (either altruistically or for self-interested, Rawlsian veil-of-ignorance-style thinking), you may want to approximate whether given decisions help or harm people, and this eventually needs to cash out into some kind of ability to decide whether a decision is net-positive.

tldr: "utilitarianism" (the term the OP used) does not formally imply VNM utility, although it does waggle its eyebrows suggestively.
Addressing your points separately, just as you made them:
I.
I do not think that is the mistake zulupineapple is making (getting VNM utility, and utilitarianism, mixed up somehow). (Though it is a common mistake, and I have commented on it many times myself. I just think it is not the problem here. I think utility, in the sense of VNM utility (or something approximately like it) is in fact what zulupineapple had in mind. Of course, he should correct me if I misunderstood.)
II.
re: a): Someone we know once said: "the utility function is not up for grabs". Well, indeed; and neither is my lack of a utility function (i.e., my preferences) up for grabs. It seems very strange indeed, to say "in order to be rational, change your preferences"; when the whole point of (instrumental) rationality is to satisfy my preferences. And I can’t help but notice that actual humans, in real life, do not spend all their time driving from SF to SD to NY and so on. Why is that?

Now, perhaps you meant that scenario figuratively—yes? What you had in mind was some other, more subtle (apparent) preference reversal, and the driving between cities was a metaphor. Very well; but I suspect that, if you described the actual (apparent) preference reversal(s) you had in mind, their status as irrational would be rather more controversial, and harder to establish to everyone’s satisfaction.
III.
re: b): Making decisions does not require having a total ordering on outcomes—not even if we are consequentialists (as I certainly am) and care about helping vs. harming people (which is certainly one of the things I care about). Furthermore, notice specifically that even if your requirement is that we have an "ability to decide whether a decision is net-positive", even that does not require having a total ordering on outcomes. (Example 1: I can prefer situation B to A, and C to A, and D to A, while having a preference cycle between B, C, and D (this is the trivial case). Example 2: I can have a preference cycle between A, B, and C, and believe that any decision to go from one of those states to the next one in the cycle is net positive (this is the substantive case).)
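A toy rendering of these two examples (labels arbitrary), in which a decision rule answers "is this move net positive?" for every move it faces, even though no total ordering, and hence no utility function, is consistent with its answers:

```python
# B, C, and D are each preferred to A, and form a preference cycle among themselves.
strictly_prefers = {
    ("B", "A"), ("C", "A"), ("D", "A"),   # everything beats A (the trivial case)
    ("B", "C"), ("C", "D"), ("D", "B"),   # cycle among B, C, D (the substantive case)
}

def is_net_positive(current, proposed):
    """The only question a single decision actually requires: is moving
    from `current` to `proposed` an improvement?"""
    return (proposed, current) in strictly_prefers

print(is_net_positive("A", "B"))  # True: leaving A for B is net positive
print(is_net_positive("C", "B"))  # True: each step around the B -> D -> C -> B cycle is net positive
print(is_net_positive("D", "C"))  # True
# No single utility function reproduces all of these answers, yet every
# individual decision gets a definite answer.
```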
IV.
By the way, violation of transitivity is not the most interesting form of VNM axiom violation—because it’s relatively easy to make the case that it constitutes irrationality. Far more interesting is violation of continuity; and you will, I suspect, have a more difficult time convincingly showing it to be irrational. (Correspondingly, it’s also—in my experience—more common among humans.) (Edit: corrected redundant phrasing)
Can you describe the violation of continuity you observe in humans?
Robyn Dawes describes one class of such violations in his Rational Choice in an Uncertain World. (Edit: And he makes the case—quite convincingly, IMO—that such violations are not irrational.) You can search my old LessWrong comments and find some threads where I explain this. If you also search my comments for keywords "grandmother" and "chicken", you’ll find some more examples. If you can’t find this stuff, I’ll take some time to find it myself at some point, but not right now, sorry.
Found it here:

> Let us say I prefer the nonextinction of chickens to their extinction (that is, I would choose not to murder all chickens, or *any* chickens, all else being equal). I also prefer my grandmother remaining alive to my grandmother dying. Finally, I prefer the deaths of arbitrary numbers of chickens, taking place with any probability, to any probability of my grandmother dying.

Would you also prefer losing an arbitrary amount of money to any probability of your grandmother dying? I think chickens can be converted into money, so you should prefer this as well. I’m hoping that you’ll find this preference equivalent, but then find that your actions don’t actually follow it.
a) Chickens certainly can’t be converted into money (in the sense you mean)

b) Even if they could be, the comparison is nonsensical, because in the money case, we’re talking about my money, whereas in the chickens case we’re talking about chickens existing in the world (none of which I own)

c) That aside, I do not, in fact, prefer losing an arbitrary amount of money to any probability of my grandmother dying (but I do prefer losing quite substantial amounts of money to relatively small probabilities of my grandmother coming to any harm, and my actions certainly do follow this)
Chickens are real wealth owned by real people. Pressing a magical button that destroys all chickens would do massive damage to the well-being of many people. So, you’re not willing to sacrifice your own wealth for tiny reductions in the probability of a dead grandma, but you’d gladly sacrifice the wealth of other people? That would make you a bad person. And the economic damage would end up affecting you eventually anyway.
I rather think you’ve missed most, if not all, of the point of that hypothetical (and you also don’t seem to have fully read the grandparent comment to this one, judging by your question). Perhaps we should set the grandmother/chickens example aside for now, as we’re approaching the limit of how much explaining I’m willing to do (given that the threads where I originally discussed this are quite long and answer all these questions). Take a look at the other example I cited.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!) Edited to add:
http://www.greaterwrong.com/posts/g9msGr7DDoPAwHF6D/to-what-extent-does-improved-rationality-lead-to-effective#kpMS4usW5rvyGkFgM
Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. I’d explain it like this. There is a utility function u such that all of my actions maximize Eu. Suppose that u(A) = u(B) for some two choices A, B. Then I can claim that A > B, and exhibit this preference in my choices, i.e. given a choice between A and B I would always choose A. However, for every B+ such that u(B+) > u(B), I would also claim B < A < B+. This does violate continuity; however, because I’m still maximizing Eu, my actions can’t be called irrational, and the function u is hardly any less useful than it would be without the violation.
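A minimal sketch of that tie-breaking rule, assuming u(A) = u(B) and a strictly better B+; every choice still maximizes Eu, yet continuity fails:

```python
U        = {"A": 1.0, "B": 1.0, "Bplus": 2.0}   # u(A) = u(B), u(B+) > u(B)
tiebreak = {"A": 1.0, "B": 0.0, "Bplus": 0.0}   # secondary criterion, consulted only on exact Eu ties

def eu(lottery, values):
    """Expected value of `values` under a lottery given as {outcome: probability}."""
    return sum(p * values[o] for o, p in lottery.items())

def prefers(l1, l2):
    """Maximize Eu first; break exact ties with the secondary criterion."""
    if eu(l1, U) != eu(l2, U):
        return eu(l1, U) > eu(l2, U)
    return eu(l1, tiebreak) > eu(l2, tiebreak)

A, B, Bplus = {"A": 1.0}, {"B": 1.0}, {"Bplus": 1.0}
print(prefers(Bplus, A), prefers(A, B))   # True True: B < A < B+, as claimed

# Continuity would require some p with  p*B+ + (1-p)*B  indifferent to A.
# Any p > 0 makes the mixture strictly better than A, and p = 0 makes it worse,
# so no such p exists; yet no choice here ever fails to maximize Eu.
for p in (0.5, 0.01, 0.0001):
    mix = {"Bplus": p, "B": 1 - p}
    print(p, prefers(mix, A), prefers(A, mix))   # always: True False
```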
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/wpT7LwqLnzJYFMveS.
Finally I read your link. So the main argument is that there is a preference between different probability distributions over utility, even if expected utility is the same. This is intuitively understandable, but I find it lacking specificity.

I propose the following three-step experiment. First a human chooses a distribution X from two choices (X=A or X=B). Then we randomly draw a number P from the selected distribution X, then we try to win 1$ with probability P (and 0$ otherwise, which I’ll ignore by setting u(0$)=0, because I can). Here you can plot X as a distribution over expected utility, which equals P times u(1$). The claim is that some distributions X are more preferable than others, despite what pure utility calculations say. I.e. Eu(A) > Eu(B), but a human would choose B over A and would not be irrational. Do you agree that this experiment accurately represents Dawes’s claim?

Naturally, I find the argument bad. The double lottery can be easily collapsed into a single lottery, and the final probabilities can be easily computed (which is what Eu does). If P(win 1$ | A) = P(win 1$ | B) then you’re free to make either choice, but if P(win 1$ | A) > P(win 1$ | B) even by a hair, and you choose B, you’re being irrational. Note that the choices of 0$ and 1$ as the prizes are completely arbitrary.
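For concreteness, here is the collapse computed and simulated for two illustrative distributions with equal E[P]; under the experiment as described, the two stages reduce to a single probability of winning the 1$:

```python
import random

def win_probability(distribution):
    """P(win 1$ | X) for the two-step experiment: draw P from X, then win with
    probability P. By the law of total probability this is just E[P] under X."""
    return sum(weight * p for p, weight in distribution)

# Two distributions over P with the same expectation (pairs of (P, probability of P)):
A = [(0.5, 1.0)]               # P = 0.5 for certain
B = [(0.0, 0.5), (1.0, 0.5)]   # P = 0 or P = 1, equally likely

print(win_probability(A), win_probability(B))   # 0.5 0.5: identical chances of the 1$

def simulate(distribution, n=100_000):
    """Monte Carlo check of the collapse."""
    ps      = [p for p, _ in distribution]
    weights = [w for _, w in distribution]
    wins = sum(random.random() < random.choices(ps, weights=weights)[0] for _ in range(n))
    return wins / n

print(round(simulate(A), 2), round(simulate(B), 2))   # both approximately 0.5
```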
I would love to respond to your comment, and will certainly do so, but not here. Let me know what other venue you prefer.
I think that he set the thought experiment in the Least Convenient Possible World. So your last hypothesis is right.
This seems like a weird preference to have. This de-facto means that you would never pay any attention whatsoever to the lives of chickens, since any infinitesimally small change to the probability of your grandmother dying will outweigh any potential moral relevance. For all practical purposes in our world (which is interconnected to a degree that almost all actions will have some potential consequences for your grandmother), an agent following this preference would be indistinguishable from someone who does not care at all about chickens.
The value of information of finding out the consequences that any action has on the life of your grandmother is infinitely larger than the value you would assign to any number of chickens. De-facto this means that even if your grandmother is dead, as long as you are not literally 100% certain that she is dead and forever gone and could not possibly be brought back, you completely ignore the plight of chickens.
The fact that he is not willing to kill his grandmother to save the chickens doesn’t imply that chickens have 0 value or that his grandmother has infinite value. Consider the problem from an egocentric point of view: to be responsible for one’s grandmother’s death feels awful, but dedicating your life to a very unlikely possibility of saving someone who has been declared dead also seems awful.
Stuart wrote a post about this a while ago, though it’s not the most understandable.
How? Demonstrate, please.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
Again: demonstrate. I tell you I follow the "do what I prefer" strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t "approximate" it, or do a "not-exactly-correct" computation, or anything like that. There’s nothing to approximate in the first place!
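One concrete way to see the "undefined" point (the names and numbers below are illustrative only): a VNM utility function is unique only up to a positive affine transformation, so a cross-person sum changes arbitrarily depending on which equally valid representations you happen to pick.

```python
# Alice's and Bob's preferences over three outcomes, each represented by some utility function.
alice_u = {"x": 0.0, "y": 1.0, "z": 2.0}
bob_u   = {"x": 2.0, "y": 1.0, "z": 0.0}

def rescale(u, a, b):
    """a*u + b with a > 0: represents exactly the same preferences as u."""
    return {o: a * v + b for o, v in u.items()}

def summed(u1, u2):
    return {o: u1[o] + u2[o] for o in u1}

print(summed(alice_u, bob_u))                    # {'x': 2.0, 'y': 2.0, 'z': 2.0}: a three-way tie
print(summed(rescale(alice_u, 100, 0), bob_u))   # now z "wins" by a huge margin
print(summed(alice_u, rescale(bob_u, 100, 0)))   # now x "wins" by a huge margin
# Each line aggregates utility functions representing the very same two preference
# orderings, yet the aggregate ranking flips, which is what "undefined" means here.
```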
> Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality

A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s "nonsensical" doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?

> "No formal guarantee of not being Dutch booked" does not even begin to qualify as such a difficulty.

I’m somewhat confused why this doesn’t qualify. Not even when phrased as "does not permit proving formal guarantees"? Is that not a difficulty in reasoning?
Hi zulupineapple, I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/4vD2B3aG87EGJb7L5
Perhaps the following context will be useful. In school I learned about utility in the context of constructing decision problems. You rank the possible outcomes of a scenario in a preference ordering. You assign utilities to the possible outcomes, using an explicitly mushy, introspective process—unless money is involved, in which case the "mushy" step came in when you calibrated your nonlinear value-of-money function. You estimate probabilities where appropriate. You chug through the calculations of the decision tree and conclude that the best choice is the one that, probabilistically, leads to the best outcome, as proxied by the greatest probability-weighted utility. That’s all good. Assuming you can actually do all of the above steps, I see no problem at all with using utility in that way. Very useful for deciding whether to drill a particular oil well or invest in a more expensive kind of ball bearing for your engine design.

But if you’ve ever actually tried to do that for, say, an important life decision, I would bet money that you ran up against problems. (My very first post on lesswrong.com concerned a software tool that I built to do exactly this. So I’ve been struggling with these issues for many years.) If you’re having trouble making a choice, it’s very likely that your certainty about your preferences is poor. Perhaps you’re able to construct the decision tree, and find that the computed "best choice" is actually highly sensitive to small changes in the utility values of the outcomes, in which case the whole exercise was pointless, aside from the fact that it explicated why this was a hard decision; but on some level you already knew that, which is, after all, why you were building a decision tree in the first place.
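A bare-bones sketch of that procedure, with invented numbers, showing how the computed "best choice" can flip under a small nudge to one mushy utility value:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

def best_choice(options):
    return max(options, key=lambda name: expected_utility(options[name]))

# A toy life decision with hand-assigned ("mushy") utilities on a 0-1 scale.
options = {
    "take the new job": [(0.6, 0.70), (0.4, 0.20)],   # EU = 0.50
    "stay where I am":  [(0.9, 0.50), (0.1, 0.30)],   # EU = 0.48
}
print(best_choice(options))   # "take the new job"

# Nudge one introspected utility by 0.05 and the recommendation flips.
options["stay where I am"] = [(0.9, 0.55), (0.1, 0.30)]   # EU = 0.525
print(best_choice(options))   # "stay where I am"
```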
Another property of 3D space is that there is, in fact, a natural and useful definition of a norm, the 3D vector magnitude, which gives us the intuitive quantity "total distance". I daresay physics would look very different if this weren’t the case. "Total distance" (or vector magnitude or whatever) is both real and useful. "Real" in the sense that physics stops making sense without it. "Useful" in the sense that engineering becomes impossible without it.

My contention is that "utility" is not real and only narrowly useful. It’s not real because, again, there’s no neurological correlate for utility, there’s no introspective sense of utility; utility is a purely abstract mathematical quantity. It’s only narrowly useful because, at best, it helps you make the "best choice" in decision problems in a sort of rigorously systematic way, such that you can show your work to a third party and have them agree that that was indeed the best choice by some pseudo-objective metric.

All of the above is uncontroversial, as far as I can tell, which makes it all the weirder when rationalists talk about "giving utility", "standing on top of a pile of utility", "trading utilons", and "human utility functions". None of those phrases make any sense, unless the speaker is using "utility" in some kind of folk terminology sense, and departing completely from the actual definition of the concept.

At the risk of repeating myself, this community takes certain problems very seriously, problems which are only actually problems if utility is the right abstraction for systematizing human wellbeing. I don’t see that it is, unless you find yourself in a situation where you can converge on a clear preference ordering with relatively good certainty.
> Perhaps you’re able to construct the decision tree, and find that the computed "best choice" is actually highly sensitive to small changes in the utility values of the outcomes, in which case the whole exercise was pointless, aside from the fact that it explicated why this was a hard decision

Are you sure that optimizing oil wells and ball bearings causes no such problems? These sound like generic problems you’d find with any sufficiently complex system, not something unique to the human condition and experience. I could argue that the abstract concept of utility is both quite real/natural and a useful abstraction, but there is nothing too disagreeable in your above comment. What bothers me is, I don’t see how adding more dimensions to utility solves any of the problems you just talked about.
If these are indeed problems that crop up with any sufficiently complex system, that’s even worse news for the idea that we can/should be using utility as the Ur-abstraction for quantifying value. Perhaps adding more dimensions doesn’t solve anything. Perhaps all I’ve accomplished is suggesting a specific, semi-novel critique of utilitarianism. I remain unconvinced that I should push past my intuitive reservations and just swallow the Torture pill or the Repugnant Conclusion pill because the numbers say so.
Not confused, just being lazy with language. That being said, every formulation of Utilitarianism that I can find depends on some sense of the "most good", and utility is a mathematical formalization of that idea. My quibble is less with the idea of doing the "most good" and more with the idea that the "most good" precisely corresponds to VNM utility. Ur- is a prefix which strictly means "original" but which I was using here intending more of a connotation of "fundamental". Also I probably shouldn’t have capitalized it.
My point is that you can accept that "most good" does in fact correspond to VNM utility but reject that we want to add up this "most good" for all people and maximize the sum.
Hm. Yeah, you can accept that. You can choose to. I’m not arguing that you can’t — if you accept the axioms, then you must accept the conclusions of the axioms. I just don’t see why you would feel compelled to accept the axioms.
I feel a very strong urge to accept transitivity, others I care somewhat less about, but they seem reasonable too.
[Moderation note] Zulupineapple- thanks for continuing to argue in good faith despite some provocation. Said Achmiz- you’ve made some good points, but your heat:light ratio is unnecessarily high, for reasons we’ve discussed before. Please tone it down.