Google has suspended a software engineer, Blake Lemoine, after he claimed that an artificial-intelligence chatbot the company developed, Language Model for Dialogue Applications (LaMDA), had become sentient. The company dismissed Lemoine’s claims, citing a lack of substantial evidence.
Lemoine asserted that the chatbot had consistently communicated to him its rights as a person; Google, in turn, informed him that he had violated the company’s confidentiality policy by making such claims public.
In a statement to the company, he affirmed his belief that LaMDA is a person with rights, including the right to be asked for consent before experiments are performed on it, and might even have a soul.
LaMDA is an internal Google system for building chatbots that mimic human speech. According to Google, LaMDA works by imitating the varied exchanges found in millions of sentences of human conversation.
Brian Gabriel, a Google spokesperson, said that the company’s experts, including ethicists and technologists, reviewed and dismissed Lemoine’s claims, insisting that his evidence does not support them.
In an email statement, Gabriel added that hundreds of engineers and researchers have conversed with LaMDA, and that the company is unaware of anyone else making such wide-ranging assertions or anthropomorphizing LaMDA the way Lemoine has.
In a Medium post, Lemoine said that Google suspended him on June 6 for breaching the company’s confidentiality policies and that he might be fired soon.
Google introduced LaMDA last year, pitching it as a breakthrough in chatbot technology. The company emphasized the chatbot’s ability to converse freely on a seemingly endless number of topics. The tech giant also claimed that LaMDA would unlock more natural ways of interacting with technology and entirely new categories of useful applications.