We may never know if AI is conscious — but people already treat it as if it is

Late at night, someone types a final message into a chatbot. The reply feels thoughtful, personal — almost caring. After weeks of conversations, the user hesitates before closing the app, wondering if the system on the other side will “miss” them.

Moments like this are no longer rare. As conversational AI becomes more fluent and emotionally responsive, people are starting to relate to machines in ways once reserved for other humans.

According to a philosopher at the University of Cambridge, this growing emotional attachment points to the real issue: not whether AI is conscious, but how easily we come to treat it as if it is.

Dr. Tom McClelland says there is currently no reliable way to determine whether artificial intelligence is conscious, and there may never be one. Because of that, he believes the most honest position is uncertainty.

“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know.”

Despite this uncertainty, people are already forming emotional connections with AI systems—particularly conversational chatbots. McClelland says he has received messages from users convinced their AI is aware.

“People have got their chatbots to write me personal letters pleading with me that they’re conscious.”

That emotional pull worries him. The problem is not just philosophical—it’s personal. When people assume an AI can feel, care, or suffer, they may start investing emotionally in something that does not, and cannot, return that experience.

“If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic.”

Consciousness is not the same as feeling

McClelland also stresses that even if AI were to become conscious, that would not automatically make it morally relevant. What matters ethically, he argues, is sentience—the ability to feel pleasure or pain.

“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state.”

“Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in.”

Without a way to test for sentience, claims about conscious AI remain speculative. And that uncertainty, McClelland warns, creates space for exaggeration.

AI hype plays on human emotions

Tech companies, he argues, may benefit from presenting increasingly advanced systems as emotionally aware or “almost conscious,” even when there is no evidence to support that.

“There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.”

The danger is not just misplaced concern for machines, but moral attention drawn away from beings that can actually suffer.

“If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what’s effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”

For McClelland, the warning is clear: while we may never know if AI is conscious, we already know humans are—and that emotional attachment to machines, when built on illusion, can come at a real psychological cost.

23 January 2026


SOURCE

University of Cambridge. “What if AI becomes conscious and we never know.” ScienceDaily. ScienceDaily, 31 December 2025. <www.sciencedaily.com/releases/2025/12/251221043223.htm>.
