A lot of people speak in terms of "existential risk from artificial intelligence" or "existential risk from nuclear war." While this is fine as an approximation, I rarely see it pointed out that this is not how risk works. Existential risk refers to the probability of a set of outcomes, and those outcomes are not defined in terms of their cause. To illustrate why this is a problem, observe that there are numerous ways for two or more things-we-call-existential-risks to contribute jointly to a bad outcome. Imagine nuclear weapons leading to a partial collapse of civilization, which in turn leads to an extremist group ending the world with an engineered virus. Do we attribute this to existential risk from nuclear weapons or from bioterrorism? That question is neither well-defined nor important. All that matters is how much each factor contributes to [existential risk of any form]. Thus, ask not "is climate change an existential risk?" but "does climate change contribute to existential risk?" Everything we care about is contained in the second question.
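To make this concrete, here is a minimal Monte Carlo sketch of the nuclear-war-then-bioterrorism scenario. All probabilities and the model structure are made-up assumptions for illustration only; the point is that the total risk, and each factor's contribution to it, are well-defined quantities even when "which risk caused the outcome" is not.

```python
import random

random.seed(0)  # reproducibility

# Toy model. Every number below is a made-up illustrative assumption,
# not an estimate of anything.
P_NUCLEAR_WAR = 0.05              # hypothetical chance of a nuclear exchange
P_BIO_DOOM_BASELINE = 0.01        # chance an engineered virus ends the world
P_BIO_DOOM_AFTER_COLLAPSE = 0.10  # same chance, after a partial collapse

def extinction_prob(p_nuclear: float, trials: int = 200_000) -> float:
    """Estimate total P(extinction) under a given chance of nuclear war."""
    doomed = 0
    for _ in range(trials):
        collapse = random.random() < p_nuclear
        p_bio = P_BIO_DOOM_AFTER_COLLAPSE if collapse else P_BIO_DOOM_BASELINE
        if random.random() < p_bio:
            doomed += 1
    return doomed / trials

baseline = extinction_prob(P_NUCLEAR_WAR)
no_nukes = extinction_prob(0.0)

# The *contribution* of nuclear weapons is the change in total risk,
# even though every extinction in this model is proximately caused by a virus.
print(f"P(extinction)            = {baseline:.4f}")
print(f"P(extinction | no nukes) = {no_nukes:.4f}")
print(f"contribution of nukes    = {baseline - no_nukes:.4f}")
```

In this toy world, the virus is always the final link in the chain, yet removing nuclear weapons visibly lowers the total risk; that difference is the only number that matters.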
Assuming the goal is to prevent existential risk, how is this view beneficial? Aren't the conditions for nuclear war different enough from those of climate change that it would be too much to expect a single policy to prevent both?
Comment
With the "existential risk from X" framing, I've heard people say things like "climate change is not an existential risk, but it might contribute to other existential risks." Others have talked about things like "second-order existential risks." This strikes me as fairly confused. In particular, to assess the expected impact of some intervention, you don't care whether its effects are first-order, second-order, or even less direct, but the "classical" view pushes you to regard them as qualitatively different things. Conversely, the framing "how does climate change contribute to existential risk" subsumes $n$-th order effects for all $n \in \mathbb{N}$. Less abstractly, suppose you work on mitigating climate change and want to assess how much this influences existential risk. The question you care about is:
"By how much does my intervention on climate change mitigate existential risk?" This immediately leads to the follow-up question:
"How much does climate change contribute to existential risk?" That is precisely the framing I suggest, so it perfectly captures the thing we care about. Conversely, the classical framing "existential risk from climate change" asks something analogous to:
"How likely are we to end up in a world where climate change is the easily recognized primary cause of the end of the world?" And this is simply not the right question.
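One way to make this precise (my gloss, not stated verbatim above) is to measure a factor's contribution as the change in total existential risk when that factor is mitigated:

$$\Delta_{\text{climate}} = P(\text{existential catastrophe}) - P(\text{existential catastrophe} \mid \text{climate change mitigated}),$$

where both probabilities range over all causal pathways, so first-order, second-order, and $n$-th order effects for every $n \in \mathbb{N}$ are counted automatically.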
Comment
So, this is about taking causes seriously even when they are not the direct, final link in the chain before extinction?
Comment
Yes, in the sense that I think what you said describes how the views differ. It's not how I would justify the view, though; I think the fundamental reason the classical view is inaccurate is that existential risk is about the probability of outcomes, and outcomes are not defined in terms of their causes, so carving up the risk by cause is not well-defined in the first place.
Comment
I think you are deluding yourself if you believe you can examine the future more than 50 years down the road without ignoring some of a factor's effects.