[Question] What would make you confident that AGI has been achieved?

https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved

Consensus seems to be that there is no fire alarm for artificial intelligence. We may see the smoke and think nothing of it. But at what point do we acknowledge that the fire has entered the room? To drop the metaphor: what would have to happen to convince you that a human-level AGI has been created? I ask because it isn't obvious to me that any level of evidence would convince many people, at least not until the AGI is beyond human level. Even then, it may not be clear to many that superintelligence has actually been achieved.

For instance, I can easily imagine the following hypothetical scenario: OpenAI announces a future GPT-N that scores a perfect 50% on a digital Turing test, meaning nobody can tell whether a given sample was written by a human or by GPT-N. Let's imagine they do the responsible thing and don't publicly release the API. My intuition is that most people will not enter panic mode at that point, but will either:
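As an aside, "scores a perfect 50%" is really a statistical claim: judges asked to pick out the machine do no better than coin-flipping. Here is a minimal sketch of how that criterion could be checked, assuming a hypothetical protocol where each judge verdict is scored correct or incorrect; the function and the numbers are illustrative, not from the post:

```python
from math import comb

def two_sided_p(correct: int, trials: int) -> float:
    """Exact two-sided binomial p-value against a 50% chance null.

    Under the null hypothesis (judges cannot tell human text from
    GPT-N text), the number of correct calls is Binomial(trials, 0.5).
    """
    m = max(correct, trials - correct)   # size of the more extreme tail
    tail = sum(comb(trials, k) for k in range(m, trials + 1)) / 2 ** trials
    return min(1.0, 2 * tail)            # symmetric null, so double one tail

# Illustrative numbers: 1000 judge verdicts in total.
print(two_sided_p(510, 1000))  # ~0.55 -> consistent with coin-flipping
print(two_sided_p(540, 1000))  # ~0.01 -> judges detectably beat chance
```

Note how tight the criterion is: with 1000 verdicts, even 54% judge accuracy would already be statistically distinguishable from chance.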

Comment

https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved?commentId=JeSXykQ38d2YdXDPj

I think a Turing test grounded in something visual would be very convincing to me and many other people. That is, not just the ability to do small talk or to discuss things that have been covered extensively on the internet, but to talk sensibly about a soccer game in progress, a live chess game, a movie being shown for the first time, a fictional map of a subway system, a painting, reality TV, etc. I think visual grounding would cut out a lot of faking.

Comment

https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved?commentId=gjxD82HFPPDrDCmAL

Some "superpowers" are an existential risk, such as designing a virus that quickly kills all humans. Some "superpowers" are horrifying on a System 1 level, but not actually an existential risk; like building dozen giant robots that will stomp on buildings and destroy a few cities, but will ultimately be destroyed by an army. If we get lucky and the AI develops and uses the latter kind of "superpowers" first, we might get scared before we die.

Comment

https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved?commentId=pnb8eqvssLkED9zrf

At what point would you expect the average (non-rationalist) AI researcher to accept that they've created an AGI?

Easy answers first: the average AI researcher will accept it when others do.

At what point would you be convinced that human-level AGI has been achieved?

When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenarios, this would involve unprecedented progress in areas limited by things like human attention, memory, and I/O bandwidth. Some of these would likely not escape public attention. But there are a lot of directions AGI could go.

Comment

https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved?commentId=dnbFxiuTdfbWZZWDW

Could you give a specific hypothetical, if you have the time? What would be a specific example of a scenario that you'd look at and go "welp, that's AGI!"? I ask since I can imagine most individual accomplishments being brushed away as "oh, I guess that was easier than we thought."