Second-Order Existential Risk

https://www.lesswrong.com/posts/6nvscqdebMyB8hKHG/second-order-existential-risk

Cross-posted. [Epistemic status: Low confidence] [I haven’t seen this discussed elsewhere, though there may be overlap with Bostrom’s "crunches" and "shrieks"]

How important is creating the conditions to fix existential risks versus actually fixing existential risks? We can somewhat disentangle these. Let’s say there are two levels to "solving existential risk." The first level includes the elements deliberately ‘aimed’ at solving existential risk: researchers, their assistants, and their funding. The second level consists of the social factors that come together to produce humans and institutions with the knowledge and skills to contribute to reducing existential risk at all.

This second level includes things like "a society that encourages curiosity," "continuity of knowledge," or "a shared philosophy that lends itself to thinking in terms of things like existential risk (humanism?)." All of these have numerous other benefits to society, and they could perhaps be summarized as "create enough surplus to enable long-term thinking." Another attribute of this second level is that these are all conditions that allow us to tackle existential risk. Here are a few more of these conditions:

Comment

https://www.lesswrong.com/posts/6nvscqdebMyB8hKHG/second-order-existential-risk?commentId=Z63Juok3aE3BDT53L

Stable career paths strike me as a very surprising condition. The implication is that without professional scientists we can get screwed? But what if science still gets done, just not by professionals but by citizens or hobbyists? A shift from a negative freedom of thought toward a "right to be stupid" could lead to idiocracy, or to a situation where scientists exist but, instead of doing science, do politics or a kind of orthodoxy production. A European-style positive right to universal education could keep the voting populace science-literate and ensure that important science informs political decision-making.