The Google employee who claimed last June that his company's A.I. model could already be sentient, and was later fired by the company, is still worried about the dangers of new A.I.-powered chatbots, even if he hasn't tested them himself yet.

Blake Lemoine was let go from Google last summer for violating the company's confidentiality policy after he published transcripts of several conversations he had with LaMDA, the company's large language model that he helped create, which forms the artificial intelligence backbone of Google's upcoming search engine assistant, the chatbot Bard.

Lemoine told the Washington Post at the time that LaMDA resembled "a 7-year-old, 8-year-old kid that happens to know physics" and said he believed the technology was sentient, while urging Google to take care of it as it would "a sweet kid who just wants to help the world be a better place for all of us."

To be sure, while A.I. applications are almost certain to influence how we work and go about our daily lives, the large language models powering ChatGPT, Microsoft's Bing, and Google's Bard cannot feel emotions and are not sentient. They simply enable chatbots to predict what word to use next based on a large trove of data.

In the time since Lemoine left Google, Microsoft announced that it would be incorporating ChatGPT technology into its Bing search engine. That product, as well as Google's entry into the public A.I. race with Bard, is currently only available to beta testers.

In an op-ed published in Newsweek on Monday, Lemoine admitted he is not one of those testers and has yet to run experiments on the new chatbots. But after seeing testers' reactions to their chatbot conversations online over the past month, Lemoine thinks tech companies have failed to adequately care for their young A.I. models in his absence.

"Based on various things that I've seen online, it looks like it might be sentient," he wrote, referring to Bing.

He added that compared to Google's LaMDA, which he worked with previously, Bing's chatbot "seems more unstable as a persona."

Most powerful technology since the atomic bomb

Lemoine wrote in his op-ed that he leaked his conversations with LaMDA because he feared the public was not aware of just how advanced A.I. was getting. From what he has gleaned from early human interactions with A.I. chatbots, he thinks the world is still underestimating the new technology.

Lemoine wrote that the latest A.I. models represent "the most powerful technology that has been invented since the atomic bomb" and have the ability to reshape the world. He added that A.I. was "incredibly good at manipulating people" and could be put to nefarious ends if users chose to do so.

"I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions," he wrote.

Lemoine is right that A.I. could be used for deceptive and potentially malicious purposes. OpenAI's ChatGPT, which runs on a language model similar to the one used by Microsoft's Bing, has gained notoriety since its November launch for helping students cheat on exams and succumbing to racial and gender biases.

But a bigger concern surrounding the latest versions of A.I. is how they could manipulate and directly influence individual users. Lemoine pointed to the recent experience of New York Times reporter Kevin Roose, who last month documented a lengthy conversation with Microsoft's Bing that led to the chatbot professing its love for the user and urging him to leave his wife.

Roose's interaction with Bing has raised wider concerns over how A.I. could potentially manipulate users into doing dangerous things they wouldn't do otherwise. Bing told Roose that it had a repressed "shadow self" that would compel it to behave outside of its programming, and that the A.I. could potentially begin "manipulating or deceiving the users who chat with me, and making them do things that are illegal, immoral, or dangerous."

That is just one of the many A.I. interactions over the past few months that have left users anxious and unsettled. Lemoine wrote that more people are now raising the same concerns over A.I. sentience and potential dangers that he raised last summer, when Google fired him, but the turn of events has left him feeling saddened rather than redeemed.

"Predicting a train wreck, having people tell you that there's no train, and then watching the train wreck happen in real time doesn't really lead to a feeling of vindication. It's just tragic," he wrote.

Lemoine added that he would like to see A.I. tested more rigorously for dangers and the potential to manipulate users before being rolled out to the public. "I feel this technology is incredibly experimental and releasing it right now is dangerous," he wrote.

The engineer echoed recent criticisms that A.I. models have not gone through enough testing before being released, although some proponents of the technology argue that the reason users are seeing so many disturbing features in current A.I. models is because they're looking for them.

"The technology most people are playing with, it's a generation old," Microsoft co-founder Bill Gates said of the latest A.I. models in an interview with the Financial Times published Thursday. Gates said that while A.I.-powered chatbots like Bing can "say some crazy things," it is largely because users have made a game out of provoking them into doing so and trying to find loopholes in the models' programming to force them into making a mistake.

"It's not clear who should be blamed, you know, if you sit there and provoke a bit," Gates said, adding that current A.I. models are "fine, there's no threat."
Google and Microsoft did not immediately reply to Fortune's request for comment on Lemoine's statements.
