[Link] Max Tegmark and Nick Bostrom Speak About AI Risk at UN International Security Event

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un

http://webtv.un.org/watch/chemical-biological-radiological-and-nuclear-cbrn-national-action-plans-rising-to-the-challenges-of-international-security-and-the-emergence-of-artificial-intelligence/4542739995001

Exactly what it says on the tin. It starts around 1:50:00. I had buffering trouble for some reason, and I couldn’t find it on YouTube.

EDIT: It’s on YouTube now: https://www.youtube.com/watch?v=W9N_Fsbngh8

Comment

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=JcJohtf4zKqGAGk7y

The buffering issues seem to affect only the video, not the audio.

Nick Bostrom’s part starts at 2:15.

Indeed, exactly what it says on the tin. It’s interesting to note how they phrase AI risk in terms of CBRN, and especially how they use robotics and drones as a springboard.

They will organize an AI master class later in the year in the Netherlands.

Comment

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=YD8NFjoo6Yjn98bnx

They will organize an AI master class later in the year in the Netherlands.

What does that mean?

Comment

I guess something like this, but about AI risk. It was mentioned briefly in the video and I thought it might interest somebody (as did this one).

Comment

It is interesting. Something like that seems like it would be extremely high-impact.

I can’t believe how awareness has shot up. I thought it would remain a phenomenon of popular culture; now the global elite is taking it seriously and is being educated about the problem in various levels of detail.

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=ywcRKvv4XiERWGvFn

To be fair, it seems that recently almost anyone can speak before some kind of UN panel.

Comment

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=gocYrTeSTkiupFdWS

Which is good. The last thing I want is for the UN to mess with AI. So if it is just another UN panel, I don’t have to worry.

Comment

Nick Bostrom seems to think it’s useful to alert the UN to AI. The UN is currently the best institution we have for handling international cooperation.

Comment

I’m toning back my response. Nick had said previously that he didn’t think policy action could help the field of AI right now (except perhaps more funding), but that it could help prevent other existential risks. There still has to be some reason why he gave this talk, though.

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=Qw4tHQLq839PDMiPo

Added a YouTube link to the OP.

https://www.lesswrong.com/posts/B5E7L43JphggCMg25/link-max-tegmark-and-nick-bostrom-speak-about-ai-risk-at-un?commentId=xwoWpRYfzciuBjGFc

We must never create strong AI.

Because it would have no guarantee of death.

There is a possibility that it would be tormented forever due to a bug and high-performance computing.

And needless to say, we must never upload our minds to computers.