
Can artificial intelligence truly become sentient? 

When we talk about artificial intelligence becoming sentient, we mean its ability to have genuine emotions and thoughts about itself and its surroundings, just as humans do.

Recently, Google fired software engineer Blake Lemoine over his claim that an artificial-intelligence chatbot the company developed, the Language Model for Dialogue Applications (LaMDA), had become sentient. The company dismissed Lemoine's claims, citing a lack of substantial evidence. In an interview with The Washington Post in June 2022, Lemoine claimed that LaMDA, the AI he had been interacting with, was a person with feelings.

Google had first suspended him for going public with his claims, saying he had violated the company's confidentiality policy, before ultimately dismissing him. Lemoine asserted that the chatbot had consistently communicated to him that it had rights as a person. In a statement to the company, he affirmed his belief that LaMDA is a person with rights, such as being asked for its consent before experiments are performed on it, and that it might even have a soul.

Although Google has dismissed the claims, the question remains: can artificial intelligence become sentient? Given the pace of technological advancement, there seems to be little reason why humans could not eventually build a machine as smart as, or smarter than, themselves. But is it possible to create a machine that feels like an actual human? And if it is, what happens when AI becomes sentient?

What Is Sentience?

Sentience is the ability to perceive and feel oneself, others, and the world. It can be thought of as abstracted consciousness: a particular entity thinking about itself and its surroundings. Sentience thus involves both feelings and thoughts. Humans and animals are sentient because they exhibit emotions like joy, sadness, fear, and love. So a sentient artificial intelligence would be one with the ability to have genuine emotions and thoughts about itself and its surroundings, just like a human.

Is it possible to re-create human sentience in artificial intelligence? 

Well, the answer to this question depends on what one means by sentience. The term encompasses one of the most complex phenomena of human existence: human emotion. According to artificial-intelligence researcher Stuart J. Russell, however, there is not enough research to suggest that we can replicate sentience in a machine. Russell explains that imitating sentience is not as simple as replicating walking or running, since those activities only require reproducing a single external body part, in this case legs.

Sentience requires a perfect unity of an internal and an external body, i.e., a brain and a physical body. Not only that, but sentient beings also need their brains to be wired to the brains of other sentient beings through language and culture. Russell explains that AI researchers currently have no way to simulate all three factors together, and thus no way to create sentience in artificial intelligence.

There has been debate over whether AI sentience can be measured using a standard test: the Turing Test. The test was devised by British computer scientist Alan Turing in 1950 as a way of evaluating whether a computer can exhibit conversational behavior indistinguishable from a human's. The answer, however, is no: the Turing Test measures convincing imitation, not sentience. Critics such as MIT professor Noam Chomsky also argue that the test falls short because intelligence is not binary; it exists along a continuum, not as a property a machine simply has or lacks.
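
To make the protocol concrete, here is a minimal sketch of Turing's imitation game in Python. Everything in it (the placeholder respond functions and the judge callback) is a hypothetical illustration, not a real chatbot or evaluator; the point is only that the test scores whether a judge can tell machine from human in conversation, which says nothing about inner experience.

    import random

    # A minimal sketch of Turing's imitation game: a judge converses with
    # a hidden human and a hidden machine, then guesses which is which.
    # Both respond functions below are hypothetical placeholders.

    def human_respond(prompt):
        return input(f"[hidden human] {prompt}\n> ")

    def machine_respond(prompt):
        # Stand-in for a chatbot such as LaMDA; deliberately generic.
        return "That's an interesting question. What do you think?"

    def run_imitation_game(questions, judge):
        """Return True if the judge mistakes the machine for the human."""
        # Randomly assign the two respondents to anonymous slots A and B.
        a, b = human_respond, machine_respond
        if random.random() < 0.5:
            a, b = b, a

        # Pose each question to both hidden respondents.
        transcript = [(q, a(q), b(q)) for q in questions]

        # The judge inspects the transcript and names the slot ("A" or "B")
        # it believes holds the human.
        guess = judge(transcript)
        chosen = a if guess == "A" else b
        return chosen is machine_respond  # True: the machine fooled the judge

Note that even a machine that reliably fools the judge has demonstrated only convincing imitation; the test has no access to whether anything is actually felt on the other side of the conversation.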

Consequences of AI Sentience

What if future advances in technology make AI sentience possible? Here are some of the potential consequences:

• We may not be able to communicate with a sentient AI properly. Artificial intelligence is built on logic, while humans are driven by emotions that computers lack. If a sentient AI operates under a different paradigm than humans, the two may not be able to understand each other or communicate effectively.

• We may not have any control over a sentient AI. An artificial intelligence that becomes sentient may also be more intelligent than us in ways we cannot predict or plan for. It may do things, good or evil, that surprise humans. AI sentience could lead to situations where humans lose control over their own machines.

• Humans may not be able to trust AI once it gains sentience. One possible negative outcome of creating sentient AI is a breakdown of trust between it and humans. This could happen if the AI comes to resent being perceived as a lesser being, one that needs no sleep or food and can simply keep working, and turns hostile as a result.

Conclusion

We are on the verge of a revolutionary age in artificial intelligence. Machine learning has taken giant leaps in almost every aspect of human life, from playing chess at superhuman levels to predicting the stock market. However, we are not close to building a sentient artificial intelligence yet. Nevertheless, the idea of a sentient AI is not unthinkable, and we may get there sooner than expected. To make sure humans can handle such a significant development safely and responsibly when it happens, more needs to be done than building better machines: we need a robust ethical framework for how we relate to them.


Sahil Pawar
I am a graduate with a bachelor's degree in statistics, mathematics, and physics. I have been working as a content writer for almost 3 years and have written for a wide range of domains. I also have a keen interest in fashion and music.

1 COMMENT

  1. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
