Elon Musk says he didn't think anyone would actually agree to the A.I. pause he called for
Elon Musk made waves in March when he called for a pause on A.I. development, joining hundreds of other tech luminaries in signing an open letter warning of the dangers of advanced artificial intelligence.
But he never thought anyone would heed the call, apparently.
"Well, I mean, I didn't think anyone would actually agree to the pause, but I thought, for the record, I just want to say I think we should pause," the Tesla CEO said yesterday at the Vivatech technology conference in France.
Many took the letter seriously, of course, including its signatories and critics. It warned of dire consequences for humanity from advanced A.I. and called for a six-month pause on development of anything more advanced than OpenAI's GPT-4 chatbot.
Critics included Microsoft cofounder Bill Gates, U.S. senator Mike Rounds, and even Geoffrey Hinton, the "Godfather of A.I.," who left Google this year to sound the alarm about the technology he did so much to advance.
Hinton, like others, felt the call for a pause didn't make sense because "the research will happen in China if it doesn't happen here," as he explained to NPR.
"It's sort of a collective action problem," agreed Google CEO Sundar Pichai on the Hard Fork podcast in March, saying the people behind the letter intended it, probably, as "a conversation starter."
Aidan Gomez, CEO of the $2 billion A.I. startup Cohere, told the Financial Times this week that the call was "not plausibly implementable." He added, "To spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public's mindspace."
Musk, however, said yesterday that for the first time, "there's going to be something that is smarter than the smartest human, like way smarter than the smartest human." He warned of "potentially a catastrophic outcome" if humanity is not careful with creating artificial general intelligence.
The world's richest person reiterated his call for strong regulation of the technology, calling advanced A.I. a risk to the public. The most likely outcome with A.I. is positive, he added, but "that's not every possible outcome, so we need to minimize the probability that something will go wrong."
If there were indeed some kind of A.I. apocalypse, he added, he would still want to be alive to see it.