Ethical dilemma: Can ChatGPT-like AI gain consciousness?

While robots that feel human-like emotions have so far existed only in science fiction films and books, the idea raises the question of whether such a possibility exists at all. Here is what we know so far.


Monday, October 23, 2023, 3 min read

Terminator, the popular science fiction film, depicts a machine that gains consciousness just as humans do. While sentient machines, or sentient artificial intelligence tools like ChatGPT, are not yet a reality, the idea fuels curiosity: will AI and machines become self-aware one day?


Before we dive deeper into this topic, we need to understand what consciousness truly is.

The "C-word"

In neuroscience, consciousness is a state of wakefulness or awareness in which a person responds to themselves or to their environment. This definition also provides indicators of consciousness, which means that, in the age of technology, even machines can be tested to see whether they are self-aware.

Ongoing debates about AI consciousness


Recently, Grace Lindsay, a neuroscientist at New York University, along with a group of philosophers and computer scientists, conducted a study to determine whether ChatGPT-like AI can be conscious.


Their report evaluated current AI systems against neuroscientific theories of consciousness. While the study concluded that no AI system is conscious at present, it also found no technical barriers to building artificial intelligence that could become self-aware.


This is not an isolated instance. Last year, Blake Lemoine, an engineer at Google, stated that the company's LaMDA chatbot was conscious. He argued that LaMDA was sentient, but his claims were dismissed by Google Vice President Blaise Aguera y Arcas and Head of Responsible Innovation Jen Gennai.


That did not stop Lemoine, who went public with his claim, after which Google fired him.



This debate has sparked controversy: although there is no evidence to back claims that AI is conscious, the possibility is not entirely dismissed. Neuroscientist Anil Seth, for instance, published a book titled "Being You: A New Science of Consciousness", which revolves around theories of consciousness and explores the possibility of machines being sentient.


On the other hand, neuropsychologist Nicholas Humphrey, in his latest book, "Sentience", argues that machines lack phenomenal consciousness, the subjective experience of encountering the world, of having feelings, and so on.


Perhaps this leaves room for further discussion as philosophers, computer scientists, and neuroscientists study AI systems and test them against indicators that could determine whether AI has feelings. A pressing question in this scenario, however, is where morality fits in.

Morality and human responsibility

The puzzle of AI consciousness raises a significant concern: morality. Imagine using a technology that internally suffers, every day, under the burden of the automation and other tasks we assign it. For the benefit of all, artificial intelligence and generative AI products should not attain sentience.


This is why computer scientists, developers, researchers, and people around the world need to address the potential risks of AI gaining consciousness, or of deliberately building AI that is sentient.


As AI is rapidly adopted around the globe, ethical and moral guidelines need to be established to avoid the negative impacts of AI behaving like humans who can feel and be aware.