The so-called "Godfather of A.I." continues to issue warnings about the dangers advanced artificial intelligence could bring, describing a nightmare scenario in which chatbots like ChatGPT begin to seek power.

In an interview with the BBC on Tuesday, Geoffrey Hinton, who announced his resignation from Google to the New York Times a day earlier, said the potential threats posed by A.I. chatbots like OpenAI's ChatGPT were "quite scary."

"Right now, they're not more intelligent than us, as far as I can tell," he said. "But I think they soon may be."

"What we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way," he added.

"In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast, so we need to worry about that."

Hinton's research on deep learning and neural networks, mathematical models that mimic the human brain, helped lay the groundwork for artificial intelligence development, earning him the nickname the "Godfather of A.I."

He joined Google in 2013 after the tech giant bought his company, DNN Research, for $44 million.

A nightmare scenario

While Hinton told the BBC on Tuesday that he believed Google had been "very responsible" when it came to advancing A.I.'s capabilities, he told the NYT on Monday that he had concerns about the tech's potential should a powerful version fall into the wrong hands.

When asked to elaborate on this point, he said: "This is just a kind of worst-case scenario, kind of a nightmare scenario."

"You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."

Eventually, he warned, this could lead to A.I. systems creating objectives for themselves like: "I need to get more power."

"I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have," Hinton told the BBC.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world."

"All these copies can learn separately but share their knowledge instantly, so it's as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

Hinton's conversation with the BBC came after he told the NYT he regrets his life's work because of the potential for A.I. to be misused.

"It is hard to see how you can prevent the bad actors from using it for bad things," he said on Monday. "I console myself with the normal excuse: If I hadn't done it, somebody else would have."

Since announcing his resignation from Google, Hinton has been vocal about his concerns surrounding artificial intelligence.

In a separate interview with the MIT Technology Review published on Tuesday, Hinton said he wanted to raise public awareness of the serious risks he believes could come with widespread access to large language models like GPT-4.

"I want to talk about A.I. safety issues without having to worry about how it interacts with Google's business," he told the publication. "As long as I'm paid by Google, I can't do that."

He added that people's outlook on whether superintelligence was going to be good or bad depends on whether they are optimists or pessimists, and noted that his own opinions on whether A.I.'s capabilities could outstrip those of humans had changed.

"I have suddenly switched my views on whether these things are going to be more intelligent than us," he said. "I think they're very close to it now and they will be much more intelligent than us in the future. How do we survive that?"

Wider concern

Hinton isn't alone in speaking out about the potential dangers that advanced large language models could bring.

In March, more than 1,100 prominent technologists and artificial intelligence researchers, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for the development of advanced A.I. systems to be put on a six-month hiatus.

Musk had previously voiced concerns about the possibility of runaway A.I. and scary outcomes, including a Terminator-like apocalypse, despite being a supporter of the technology.

OpenAI, which was co-founded by Musk, has publicly defended its chatbot phenomenon amid rising concerns about the technology's potential and the rate at which it is progressing.

In a blog post published earlier this month, the company admitted that there were real risks linked to ChatGPT, but argued that its systems were subjected to rigorous safety evaluations.

When GPT-4, the successor to the A.I. model that powered ChatGPT, was released in March, Ilya Sutskever, OpenAI's chief scientist, told Fortune the company's models were "a recipe for producing magic."

