
Question: Are Google, OpenAI etc. aware of the risk?

Answer: The major AI companies are thinking about this. OpenAI was founded specifically to counter risks from superintelligence; many people at Google, [https://medium.com/@deepmindsafetyresearch DeepMind], and other organizations are convinced by the arguments, and few genuinely oppose work in the field (though some claim it's premature). For example, the paper [https://www.youtube.com/watch?v=AjyM-f8rDpg Concrete Problems in AI Safety] was a collaboration between researchers at Google Brain, Stanford, Berkeley, and OpenAI.

However, the vast majority of these organizations' effort still goes toward capabilities research rather than safety.