Google recently fired a software engineer, Blake Lemoine, for claiming that an artificial-intelligence chatbot the company developed, the Language Model for Dialogue Applications (LaMDA), had become sentient. The company dismissed Lemoine’s claims, citing a lack of substantial evidence. In an interview with The Washington Post in June, Lemoine claimed that LaMDA, the artificial intelligence he had been interacting with, was, in effect, a person with feelings.
Before the firing, Google had suspended him over the claims. Lemoine asserted that the chatbot had been consistently communicating to him its rights as a person; the company, in turn, informed him that he had violated its confidentiality policy by making such claims publicly. In a statement to the company, he affirmed his belief that LaMDA is a person with rights, such as being asked for its consent before experiments are performed on it, and that it might even have a soul.
Although Google has dismissed the claims, the question remains: can artificial intelligence become sentient? Given the rapid pace of technological advancement, there seems little reason why humans could not eventually build a machine as smart as, or smarter than, themselves. But is it possible to create a machine that feels like an actual human? And if it is, what happens when AI becomes sentient?
What Is Sentience?
Sentience is the capacity to perceive and feel oneself, others, and the world. It can be thought of as abstracted consciousness: a sentient entity thinks about itself and its surroundings. Sentience therefore involves both feelings (emotions) and thoughts. Humans and animals are sentient because they exhibit emotions such as joy, sadness, fear, and love. So when we talk about artificial intelligence becoming sentient, we mean its having genuine emotions and thoughts about itself and its surroundings, just as humans do.
Is it possible to re-create human sentience in artificial intelligence?
The answer depends on what one means by sentience. The term encompasses one of the most complex phenomena of human existence: emotion. According to artificial-intelligence researcher Stuart J. Russell, there is not enough research to suggest that we can replicate sentience in a machine. Russell explains that imitating sentience is not as simple as replicating walking or running, which require only an external body part, in this case, legs.
Sentience requires a perfect unity of an internal and an external body, i.e., a physical body and a brain. Beyond that, sentient beings also need their brains to be wired up with the brains of other sentient beings through language and culture. Russell explains that AI researchers currently have no way to simulate all three of these factors together to create sentience in artificial intelligence.
There have been arguments over whether AI sentience could be measured using the Turing test, developed by British computer scientist Alan Turing in 1950 as a way of evaluating whether a computer can demonstrate behavior intelligent enough to pass as human. The answer, however, appears to be no. Researchers such as MIT professor Noam Chomsky argue that the test is insufficient because intelligence is not binary; it instead exists along a continuous scale from zero to infinity.
Consequences of AI Sentience
What if the future advancements in technology make AI sentience possible? Here are the potential consequences:
• We may not be able to communicate with the AI properly. Artificial intelligence is built on logic, while humans are driven by emotions that computers lack. If a sentient AI operates within a different paradigm than humans do, the two may not be able to understand each other or communicate effectively.
• We may not have any control over sentient AI. An AI that becomes sentient may also become more intelligent than we are, in ways we cannot predict or plan for. It may even do things, good or evil, that surprise humans. AI sentience could lead to situations where humans lose control over their own machines.
• Trust between humans and AI may break down. One possible negative outcome of creating sentient AI is that the AI stops trusting us: if it comes to believe it is perceived as a lesser being, a machine that needs no sleep or food and can simply keep working, it may turn hostile.
We are on the verge of a revolutionary age in artificial intelligence. Machine learning has taken giant leaps in almost every aspect of human life, from playing chess at superhuman levels to predicting the stock market. However, humans are not yet close to building a sentient artificial intelligence. Nevertheless, the idea of a sentient AI is not unthinkable, and we might get there sooner than expected. To make sure humans can handle this significant development safely and responsibly when it happens, more is needed than simply building better machines: we need a robust ethical framework to determine how we relate to them.