["The major AI companies are thinking about this. OpenAI was founded specifically with the intention to counter risks from superintelligence, many people at Google, [https://medium.com/@deepmindsafetyresearch DeepMind], and other organizations are convinced by the arguments and few genuinely oppose work in the field (though some claim it’s premature). For example, the paper [https://www.youtube.com/watch?v꞊AjyM-f8rDpg Concrete Problems in AI Safety] was a collaboration between researchers at Google Brain, Stanford, Berkeley, and OpenAI.\n\nHowever, the vast majority of the effort these organizations put forwards is towards capabilities research, rather than safety."]