I’m not sure how I feel about the images used on that website. I feel that they trigger all the wrong associations. Digital-looking human-like hand just reinforces the assumption that the AI will be human-like, and the robot head brings to mind all the recent AI movies.
Comment
If you can come up with better images to represent Friendly AI, please let me know!
EDIT: How ’bout now?
Comment
How about an image of a paper clip?
Comment
That seems too much like an image that will only appeal to people who are already familiar not only with the ideas of FAI but with specific variants that are discussed only on LW. For most people it will probably be confusing.
Comment
A similar specific variant: http://s.wsj.net/public/resources/images/OB-DU671_0604dn_D_20090604122543.jpg
Comment
Man, we’re never gonna let Hibbard forget that, are we? You propose one obviously-flawed genocidal AI Overlord...
Comment
Sorry, did he ever actually propose a smiley tiler? I thought that was just used as an example of a simple way one could easily go wrong.
Comment
Well, he sort of did, actually.
Comment
That’s just… wow. That’s frighteningly stupid. That’s about as bad as someone saying they aren’t worried about a nuclear reactor undergoing a meltdown because they had their local clergy bless it, even though the potential negative payoff here is orders of magnitude higher. I don’t put a high probability on an AI-triggered singularity, but this is just… wow. One thing seems pretty clear: if an AGI does do a hard takeoff to control its light cone, the result is likely to be really bad, simply because so many people are being stupid about it.
Part of me worries that the fact that the SIAI people are thinking much more carefully about some of these issues should maybe suggest that recursive self-improvement is much more likely than I estimate.
Comment
To me, the frightening thing isn’t the original mistake (though it is egregious), it’s the fact that the response to having it pointed out was "You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans" rather than "OOPS!"
Plus there are the people who’ll go "friendly" + "AI" + "paperclip" = "Microsoft Word help-menu interface" and feel a surge of irrational hatred.
Comment
I think in this case the Way affirms one’s hatred.
Comment
You’re probably right. Replace "irrational" with "misplaced", maybe.
(Also, I owe Clippy an apology for bringing up the Great Shame of paperclip enthusiasts everywhere.)
The current image looks like Vacuum Doomsday. What about no image, just calming or neutral tones?
Comment
What is Vacuum Doomsday?
Comment
It looked like everything in the picture was being sucked into space, perhaps by an enormous robotic maid.
Yeah, that looks fine, although I’m not sure what it is.
As for FAI images, I think most math-related images (equations, etc.) are fine: abstract diagrams, trees, graphs, and so on.
Comment
Inside view of a toroidal space habitat, is what I assumed.
I’d suggest some kind of Matrix-style ‘wireframe made of equations’ thing as being probably the best illustration, except for the obvious negative connotations.
Comment
Yes. Specifically, I think it’s meant to be the inside of a Stanford torus.
RSS’d. Thanks.