If science is supposed to be the pursuit of truth, there might be something decidedly unscientific, and possibly even dangerous, about the commercialization of artificial intelligence over the past several months, according to a top A.I. expert.

OpenAI may have let the A.I. genie out of the bottle in November when it released ChatGPT, a chatbot based on the start-up's groundbreaking generative A.I. system. Tech giants including Microsoft and Google have since piled into the race, fast-tracking development of their own A.I. products, some of which have already been released.

But an accelerated timeline can be risky, especially with a technology like A.I., which continues to divide experts as to whether it will be a net positive for humanity or evolve to destroy civilization. Even OpenAI CEO Sam Altman said in a Congressional hearing this week that A.I. would benefit from regulation and government oversight rather than being left solely to corporations. But it's hard to stop the race once it has already started, and the race for A.I. is quickly turning into a "vicious circle," Yoshua Bengio, a University of Montreal professor and leading expert on artificial intelligence and deep learning, told the Financial Times in an interview Thursday.

Bengio was one of the more than 1,000 experts who signed an open letter in March calling for a six-month moratorium on advanced A.I. research. For his pioneering research in deep learning, Bengio was a co-winner of the 2018 Turing Award, among the highest honors in computer science, and is referred to as one of the "Godfathers of A.I." alongside Geoffrey Hinton and Yann LeCun, who shared the award.

But Bengio now warns that the current approach to developing A.I. comes with significant risks, telling the FT that tech companies' competitive strategy with A.I. is "unhealthy" and adding that he is "starting to see danger to political systems, to democracy, to the very nature of truth."

A long list of dangers associated with A.I. has emerged over the past few months. Current generative A.I., which is trained on troves of data to predict text and images, has so far been riddled with mistakes and inconsistencies and has been known to spread misinformation. If left unregulated and used by bad actors, the technology could be used to purposefully mislead people, OpenAI's Altman testified this week, cautioning that ChatGPT could be used for interactive disinformation during next year's elections.

But the risks will likely only get bigger as the technology evolves. If researchers can crack the code of artificial general intelligence, also known as AGI, machines would be able to think and reason as well as a human. Tech executives have suggested we are closer to AGI than once believed, but A.I. experts including Bengio's colleague Hinton have warned that advanced A.I. could pose an existential threat to humanity.

Bengio told the FT that, within this decade, humans risk losing control of more advanced forms of A.I. that will potentially be capable of more independent thought. In the meantime, he recommended regulators crack down on existing A.I. systems and create rules for the technology and the information used to train it. He also pointed out that disagreement in the A.I. community is normal in scientific research, but said it should give companies reason to pause and reflect.

"Right now there is a lot of emotion, a lot of shouting within the wider A.I. community. But we need more investigations and more thought into how we are going to adapt to what's coming," he said. "That's the scientific way."

Governments have been slow to move on A.I., but there are recent signs of momentum. President Joe Biden invited tech leaders involved in A.I. research to the White House earlier this month to discuss risks and best practices moving forward, shortly after announcing new initiatives promoting the development of responsible A.I.

Regulators have moved faster in Europe, where last week lawmakers took an important step towards approving the European Union's A.I. Act, a bill that outlines A.I.'s risks and imposes more obligations on companies developing the technology. In China, meanwhile, where companies are developing their own versions of ChatGPT, regulators unveiled rules in early April requiring companies to source approved data to train their A.I. systems.

