Geoffrey Hinton's artificial intelligence (A.I.) research has helped enable the rise of technologies that were once the stuff of sci-fi flicks, from facial recognition to chatbots like OpenAI's ChatGPT and Google's Bard. The British-Canadian computer scientist earned the title "the godfather of A.I." by dedicating his career to the study of neural networks, complex computer models whose layered structures mimic the human brain, decades before the technology went mainstream. But Hinton resigned from a position he held at Google for over a decade last month, telling the New York Times he made the decision so he could freely discuss the dangers of A.I. without considering how it might impact the company.
Since then, he has been on a Paul Revere-esque campaign to warn about the existential risk to humanity that A.I. poses in a series of interviews, which have even garnered the attention of the rapper Snoop Dogg, who recently referenced Hinton's claim that A.I. is not safe. "Snoop gets it," Hinton told Wired Monday.
The A.I. pioneer's latest cautionary message? Even the threat of climate change doesn't compare to A.I.
"I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too," he told Reuters Friday. "But I think this might end up being more urgent."
Hinton believes A.I. systems could eventually become more intelligent than humans and take over the planet, or bad actors could use the technology to fuel division in society in hopes of gaining power, and that's all before the threat of job losses. And while the solutions to climate change are "quite obvious" ("just stop burning carbon"), when it comes to A.I., Hinton warned that "it's not at all clear what you should do."
Repeated warnings
On his campaign to warn of the dangers of A.I., Hinton has compared the technology to the birth of nuclear weapons, and admitted that he regrets much of his work now that he sees its destructive potential. "I console myself with the normal excuse: If I hadn't done it, somebody else would have," he told the New York Times in late April.
Comparing the rise of artificial intelligence to the creation of nuclear weapons may sound hyperbolic, but even Warren Buffett sees the parallels. The 92-year-old investing legend referenced a warning Albert Einstein gave after the birth of the atomic bomb at Berkshire Hathaway's annual conference over the weekend, noting that A.I. can "change everything in the world, except how men think and behave."
And Hinton, who won the Turing Award in 2018 for his lasting contributions of major technical importance to computer science, warned earlier this month in an interview with the BBC of a "nightmare scenario" in which chatbots like ChatGPT are used to seek power. "It is hard to see how you can prevent the bad actors from using it for bad things," he said.
In a separate interview at MIT Technology Review's EmTech Digital conference last week, the computer scientist told the crowd: "These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people. Even if they can't directly pull levers, they can certainly get us to pull levers."
"I wish I had a nice simple solution for this, but I don't," he added. "I'm not sure there is a solution."
But no A.I. pause?
The potential risks posed by A.I. led over 1,100 prominent figures in tech, including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak, to sign an open letter calling for a six-month pause on the development of advanced A.I. systems earlier this year. But Hinton told Reuters Wednesday that a pause in A.I. development is "utterly unrealistic."
"I'm in the camp that thinks this is an existential risk, and it's close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it," he said.
In an interview with CNN last week, the computer scientist explained that if the U.S. stopped developing A.I. tech, China wouldn't. And in a May 5 tweet, he clarified his position:
"There is so much possible benefit that I think we should continue to develop it but also put comparable resources into making sure it's safe."
To that end, President Biden and Vice President Harris met with A.I. leaders including Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman last week to discuss the need for safety and transparency in the field as well as the potential for new regulations. And the European Union's A.I. Act, which classifies A.I. systems into different risk categories, adds transparency requirements, and includes provisions to prevent bias, is expected to be operational by the end of the year. After Musk's letter, a committee of E.U. lawmakers also agreed to a new set of proposals that would force A.I. companies to disclose when they use copyrighted material to train their systems, Reuters first reported May 1.