Sure, A.I. has some 'real risks,' but the human extinction fears are a distraction, says CEO of a $2 billion unicorn backed by Oracle and Nvidia
Artificial intelligence destroying humanity used to be the stuff of sci-fi blockbusters. More recently, billionaires, lawmakers, and large swaths of the public have fretted about it for real.
But Aidan Gomez, cofounder and CEO of Cohere, a red-hot A.I. startup recently backed by database giant Oracle and chipmaker Nvidia, thinks such fears are overblown. Worse, they're distracting us from the real risks of this technology, he said in a Financial Times interview published Thursday.
Oracle said this week it will use Cohere's technology to let its business customers build their own generative A.I. apps. Cohere is in some ways to Oracle what OpenAI is to Microsoft: each startup has received hefty investments from its Big Tech partner, which in turn uses its A.I. technology. The difference is that Cohere is designed for corporate customers that want to train A.I. models on their own data without sharing it, whereas OpenAI has tapped more readily available information to train its buzzy A.I. chatbots ChatGPT and GPT-4.
Gomez, previously a researcher at Google Brain, one of Google's A.I. arms, sharply criticized the open letter signed in March by tech luminaries, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak, calling for a six-month pause on development of A.I. systems more advanced than GPT-4 to give policymakers a chance to catch up. Aside from the pause not being plausibly implementable, Gomez told the FT, the letter talked about a superintelligent artificial general intelligence (AGI) emerging that can take over, a scenario he considers exceptionally improbable.
(The letter asking for the pause reads in part, "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?")
"To spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public's mindspace," Gomez argued.
Real A.I. risks
Instead, he said, there are real risks that need to be addressed today. One immediate concern is that we can now flood social media with accounts that are truly indistinguishable from a human, so extremely scalable bot farms can pump out a particular narrative.
Asked about the danger of such capabilities undermining democratic processes, with the U.S. presidential election looming, he replied:
"Things get normalized just by exposure, exposure, exposure, exposure. So, if you have the ability to just pump people the same idea again and again and again, and you show them a reality in which it looks like there's consensus, it looks like everyone agrees X, Y and Z, then I think you shape that person and what they feel and believe, and their own opinions. Because we're herd animals."
To address the problem, he said, we need mitigation strategies such as human verification, "so we can filter our feeds to only include the legitimate human beings who are participating in the conversation."
He credits Musk for the blue-check revamp at Twitter, despite its rough start and surrounding controversy. Under Musk, the marks previously given to notable figures for free are now available to anyone for a monthly subscription, with Musk describing it in late March as "the only realistic way to address advanced AI bot swarms" taking over.
"You can complain about the price or whatever, maybe it should be free if you upload your driver's license, or something," said Gomez. "But it's really important that we have some degree of human verification on all our major social media."
Another pressing risk, he said, is people trusting A.I. chatbots for medical advice. ChatGPT and its ilk are known to hallucinate, or basically make things up, which can be problematic when your health is on the line. Gomez didn't offer specific ways to address the danger, but warned: "We shouldn't have reckless deployment of end-to-end medical advice coming from a bot without a doctor's oversight. That's just not the right way to deploy these systems … They're not at that level of maturity where that's an appropriate use of them."
Given the real risks of A.I. technology and the real room for regulating it, he said, he hopes the public takes the fantastical stories from A.I. doom merchants with a grain of salt.
"They're distractions," he said, "from the conversations that should be going on."