Sam Altman once said that A.I. will "probably most likely lead to the end of the world, but in the meantime, there'll be great companies." While this was at least partially in jest, the OpenAI CEO often expresses real worry about the potential consequences of the A.I. chatbots his company creates, including the buzzy ChatGPT, released last year, which supercharged the A.I. space. In addition to wiping out jobs, or the human race altogether, here are some of Altman's biggest worries about A.I.

Goodbye humanity

The worst case could be "lights out" for humanity, Altman said in an interview with StrictlyVC. "The bad case, and I think this is important to say, is, like, lights out for all of us," he said. "I'm more worried about an accidental misuse case in the short term."

Bad dreams

Altman loses sleep thinking that releasing ChatGPT might have been "really bad."

"What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said at an Economic Times event on Wednesday. "That maybe there was something hard and complicated in there (the system) that we didn't understand and have now already kicked it off."

Strange moves

OpenAI might make strange decisions in the future that won't make investors happy, and Altman therefore won't take the company public anytime soon. "When we develop superintelligence, we're likely to make some decisions that public market investors would view very strangely," he said at an event in Abu Dhabi.

Outbreaks and break-ins

He fears A.I. could create new diseases. When asked by Fox News what dangerous things A.I. could do, Altman said, "An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary."

Fake news

A.I. could launch cyberattacks and sow disinformation, he said. "I'm particularly worried that these models could be used for large-scale disinformation," Altman said in an interview with ABC News. "Now that they're getting better at writing computer code, they could be used for offensive cyberattacks."

Axis of evil

Bad actors could use the technology, and the world has only limited time to prevent it, Altman warned. "We do worry a lot about authoritarian governments developing this," he said in an interview with ABC News. "A thing that I do worry about is we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

An atomic level problem

Altman signed a statement saying that A.I. is as dangerous as nuclear war. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement, signed by other tech leaders such as Elon Musk and Bill Gates.

On a knife's edge

A.I. has the potential to go quite wrong, Altman fears. "We understand that people are anxious about how it can change the way we live. We are, too," he said at a Senate subcommittee hearing in May. "If this technology goes wrong, it can go quite wrong."

Risky business

A.I. could destroy the economy. "The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for," Altman told Lex Fridman on his podcast. "And that doesn't require superintelligence."

Mental decline

People will get dumber as A.I. gets smarter, he said. "I worry that as the models get better and better, the users can have less and less of their own discriminating thought process," Altman said in his first appearance before Congress.

Miseducation

A.I. can provide "one-on-one interactive disinformation" and, he said, potentially impact the 2024 presidential election. He pointed to "the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation," during a Senate hearing. "Given that we're going to face an election next year and these models are getting better, I think this is a significant area of concern."

Hoaxes

He cautioned people who use ChatGPT that it could lie. "The thing that I try to caution people the most is what we call the hallucinations problem," Altman told ABC News. "The model will confidently state things as if they were facts that are entirely made up."

Out of work

Certain jobs will be wiped out fast by A.I., he said. "I think a lot of customer service jobs, a lot of data entry jobs get eliminated pretty quickly," Altman told David Remnick on the New Yorker Radio Hour. "Some people won't work for sure. I think there are people in the world who don't want to work and get fulfillment in other ways, and that shouldn't be stigmatized either."

Frankenstein's monster?

He's a little bit scared of his own creation. "We've got to be careful here," Altman told senators during a committee hearing. "I think people should be happy that we are a little bit scared of this."

Sci-fi nightmare

Artificial general intelligence (AGI), when A.I. possesses human-level understanding rather than just the ability to complete tasks, could create a worldwide dystopia, he said. "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too," Altman wrote in a February blog post.
