Authorities in the U.S. and Europe should act quickly to protect people against threats posed by OpenAI's GPT and ChatGPT artificial intelligence models, civil society groups have urged in a coordinated pushback against the technology's rapid proliferation.

On Thursday the U.S.-based Center for AI and Digital Policy (CAIDP) filed a formal complaint with the Federal Trade Commission, calling on the agency to halt further commercial deployment of GPT by OpenAI until safeguards have been put in place to stop ChatGPT from deceiving people and perpetuating biases.

CAIDP's complaint came just one day after the release of a much-publicized open letter calling for a six-month moratorium on the development of next-generation A.I. models. Although the complaint references that letter, the group had signaled 10 days earlier that it would urge the FTC to investigate OpenAI and ChatGPT, and to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards were in place.

At the same time as CAIDP's complaint landed with the FTC, the European Consumer Organisation (BEUC) issued a call for European regulators, at both the EU and national levels, to launch investigations into ChatGPT.

"For all the benefits A.I. can bring to our society, we are currently not protected enough from the harm it can cause people," said BEUC deputy director general Ursula Pachl. "In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning."

CAIDP, which advocates for a societally just rollout of A.I., also asked the FTC to force OpenAI to submit to independent assessments of its GPT products before and after launch, and to make it easier for people to report incidents in their interactions with GPT-4, the latest version of OpenAI's large language model.

"The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices," said CAIDP president Marc Rotenberg in a statement. "We believe that the FTC should look closely at OpenAI and GPT-4."

Concerns over ChatGPT, and other chat interfaces such as Microsoft's OpenAI-powered Bing and Google's Bard, include the systems' tendency to make up information (a phenomenon known in the A.I. industry as "hallucination") and to amplify the biases present in the material on which these large language models have been trained.

EU lawmakers are already planning to regulate the A.I. industry through an Artificial Intelligence Act that the European Commission first proposed nearly two years ago. However, some of the proposal's measures are beginning to look outdated given rapid advances in the field and highly competitive rollouts of new services, and the EU's institutions are now scrambling to modernize the bill so it will adequately tackle services like ChatGPT.

"Waiting for the A.I. Act to be passed and to take effect, which will happen years from now, is not good enough, as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people," Pachl said.

A BEUC spokesperson told Fortune the organization hoped to see a variety of authorities spring into action, including those regulating product safety, data protection, and consumer protection.

OpenAI had not responded to a request for comment at the time of publication. However, some have responded to Wednesday's open letter, which was signed by more than 1,000 people including Elon Musk and Apple co-founder Steve Wozniak, by saying fears about A.I. are overblown and development should not be paused.

Others agreed with the letter's call for governments to act quickly to regulate the technology but took issue with its rationale, which focused more on the potential of future A.I. systems to exceed human intelligence and less on the harms posed by today's systems, such as misinformation, bias, cybersecurity risks, and the outsized environmental costs of the computing power and electricity needed to train and run them.

"The sky is not falling, and Skynet is not on the horizon," wrote Daniel Castro and Emily Tavenner of the pro-Big Tech Center for Data Innovation think tank on Wednesday.

OpenAI's own CEO, Sam Altman, recently argued that his company places safety limits on its A.I. models that rivals do not, and said he worried such models could be used for large-scale disinformation and offensive cyberattacks. He has also said the worst-case scenario for A.I.'s future trajectory is "lights out for all of us."

