Geoffrey Hinton is the tech pioneer behind some of the key developments in artificial intelligence powering tools like ChatGPT that millions of people are using today. But the 75-year-old trailblazer says he regrets the work he has devoted his life to because of how A.I. could be misused.
"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton told the New York Times in an interview published Monday. "I console myself with the normal excuse: If I hadn't done it, somebody else would have."
Hinton, often referred to as the "Godfather of A.I.," spent years in academia before joining Google in 2013, when it bought his company for $44 million. He told the Times that Google has been a proper steward of how A.I. technology should be deployed and that the tech giant has acted responsibly for its part. But he left the company in May so that he could speak freely about the dangers of A.I.
One of Hinton's main concerns is that easy access to A.I. text- and image-generation tools could lead to more fake or fraudulent content, and that the average person may no longer be able to tell what is true.
Concerns surrounding the improper use of A.I. have already become a reality. Fake images of Pope Francis in a white puffer jacket made the rounds online a few weeks ago, and last week the Republican National Committee published deepfake visuals depicting China invading Taiwan and banks failing if President Joe Biden is re-elected.
As companies like OpenAI, Google, and Microsoft work on upgrading their A.I. products, there are also growing calls to slow the pace of new developments and regulate a space that has expanded rapidly in recent months. In March, some of the top names in tech, including Apple co-founder Steve Wozniak and computer scientist Yoshua Bengio, signed an open letter calling for a pause on the development of advanced A.I. systems. Hinton didn't sign the letter, although he believes companies should think carefully before scaling A.I. technology further.
"I don't think they should scale this up more until they have understood whether they can control it," he said.
Hinton is also worried about how A.I. could change the job market by rendering non-technical jobs irrelevant, but he warned that it also has the capability to hurt more types of roles.
"It takes away the drudge work," Hinton said. "It might take away more than that."
When asked for a comment about Hinton's interview, Google emphasized the company's commitment to a responsible approach.
"Geoff has made foundational breakthroughs in A.I., and we appreciate his decade of contributions at Google," Jeff Dean, chief scientist at Google, told Fortune in a statement. "As one of the first companies to publish A.I. Principles, we remain committed to a responsible approach to A.I. We're continually learning to understand emerging risks while also innovating boldly."
Hinton did not immediately return Fortune's request for comment.
A.I.'s pivotal moment
Hinton began his career as a graduate student at the University of Edinburgh in 1972. That's where he started his work on neural networks, mathematical models that roughly mimic the workings of the human brain and are capable of analyzing vast amounts of data.
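For readers curious what such a model looks like in practice, here is a minimal, purely illustrative sketch (not Hinton's code or any production system): a tiny two-layer network, written with NumPy, that learns the XOR function. The layer sizes, learning rate, and number of training steps are arbitrary choices for the toy example.

```python
# A minimal two-layer neural network trained on XOR, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: gradients of the squared error via the chain rule.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

The same basic recipe of stacked layers trained by gradient descent, scaled up enormously, underlies the systems discussed in this article.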
His neural network research was the breakthrough concept behind DNNresearch, a company he built with two of his students, which Google ultimately bought in 2013. Hinton won the 2018 Turing Award, the computing world's equivalent of a Nobel Prize, with two colleagues (one of whom was Bengio) for their neural network research, which has been key to the creation of technologies including OpenAI's ChatGPT and Google's Bard chatbot.
As one of the key thinkers in A.I., Hinton sees the current moment as pivotal and ripe with opportunity. In an interview with CBS in March, Hinton said he believes that A.I. innovation is outpacing our ability to control it, and that's a cause for concern.
"It's very tricky things. You don't want some big for-profit companies to decide what is true," he told CBS Mornings. "Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose A.I. And now I think it may be 20 years or less."
Hinton added that we could be close to computers being able to come up with ideas to improve themselves. "That's an issue, right? We have to think hard about how you control that."
Hinton said that Google is going to be a lot more careful than Microsoft when it comes to training and presenting A.I.-powered products and cautioning users about the information shared by chatbots. Google has been at the helm of A.I. research for a long time, well before the recent generative A.I. wave caught on. Sundar Pichai, CEO of Google parent Alphabet, has famously likened A.I. to other innovations that have shaped humankind.
"I've always thought of A.I. as the most profound technology humanity is working on, more profound than fire or electricity or anything that we've done in the past," Pichai said in an interview aired in April. Pichai thinks that just as humans learned to skillfully harness fire despite its dangers, they can do the same with A.I.
"It gets to the essence of what intelligence is, what humanity is," Pichai said. "We are developing technology which, for sure, one day will be far more capable than anything we've ever seen before."