Google has fired software engineer Blake Lemoine for claiming that an artificial-intelligence chatbot the company developed, the Language Model for Dialogue Applications (LaMDA), had become sentient. The company dismissed Lemoine’s claims, citing a lack of substantial evidence.
Google said that Lemoine’s claims were wholly unfounded and that he had violated company policy by sharing confidential information with third parties. The company said it had looked into the matter thoroughly, putting LaMDA through 11 distinct reviews.
In a June interview with The Washington Post, Lemoine claimed that LaMDA, the artificial intelligence he had been interacting with, was a real person with feelings. Google suspended him not long after.
Lemoine asserted that the chatbot had consistently communicated to him about its rights as a person; Google, in turn, informed him that he had violated the company’s confidentiality policy by making such claims.
In a statement to the company, he affirmed his belief that LaMDA is a person with rights, including the right to be asked for consent before experiments are performed on it, and that it might even have a soul.
Lemoine even hired a lawyer for the AI chatbot last month. According to Lemoine, LaMDA chose to retain the lawyer’s services after a conversation with him, and the lawyer has since filed statements on behalf of Google’s controversial AI system.
Google ultimately fired him, saying Lemoine had continued to violate clear employment and data-security policies that require safeguarding product information. The firing was first reported on Friday by the newsletter Big Technology.