Artificial intelligence will kill us all, solve the world's biggest problems, or something in between, depending on who you ask. But one thing seems clear: in the years ahead, A.I. will integrate with humanity in one way or another.
Blake Lemoine has thoughts on how that might best play out. Formerly an A.I. ethicist at Google, the software engineer made headlines last summer by claiming the company's chatbot generator LaMDA was sentient. Soon after, the tech giant fired him.
In an interview with Lemoine published on Friday, Futurism asked him about his best-case hope for A.I. integration into human life.
Surprisingly, he brought our furry canine companions into the conversation, noting that our symbiotic relationship with dogs has evolved over the course of thousands of years.
"We're going to have to create a new space in our world for these new kinds of entities, and the metaphor that I think is the best fit is dogs," he said. "People don't think they own their dogs in the same sense that they own their car, though there is an ownership relationship, and people do talk about it in those terms. But when they use those terms, there's also an understanding of the responsibilities that the owner has to the dog."
Figuring out some kind of comparable relationship between humans and A.I., he said, is "the best way forward for us, understanding that we are dealing with intelligent artifacts."
Many A.I. experts, of course, disagree with his take on the technology, including some still working for his former employer. After suspending Lemoine last summer, Google accused him of "anthropomorphizing today's conversational models, which are not sentient."
"Our team, including ethicists and technologists, has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims," company spokesman Brian Gabriel said in a statement, though he acknowledged that some in the broader A.I. community are considering the long-term possibility of sentient or general A.I.
Gary Marcus, an emeritus professor of cognitive science at New York University, called Lemoine's claims "nonsense on stilts" last summer and is skeptical about how advanced today's A.I. tools really are. "We put together meanings from the order of words," he told Fortune in November. "These systems don't understand the relation between the orders of words and their underlying meanings."
But Lemoine isn't backing down. He noted to Futurism that he had access to advanced systems within Google that the public hasn't been exposed to yet.
"The most sophisticated system I ever got to play with was heavily multimodal, not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it," he said. "That's the one that I was like, 'You know, this thing, this thing's awake.' And they haven't let the public play with that one yet."
He suggested such systems could experience something like emotions.
"There's a chance that, and I believe it is the case, that they have feelings and they can suffer and they can experience joy," he told Futurism. "Humans should at least keep that in mind when interacting with them."