Microsoft AI chief: Machine consciousness is an illusion
Mustafa Suleyman has said “machine consciousness” is an illusion and that suffering, the basis for moral standing, is biological, urging the industry not to design AIs that mimic inner lives.
Microsoft’s AI boss Mustafa Suleyman has argued that so‑called “machine consciousness” is an illusion and that the capacity to suffer, which is central to moral standing, is a biological property, not a computational one.
In a recent interview, he maintained that AI systems may appear self‑aware but do not feel pain or experience subjective states, and therefore do not warrant rights.
Speaking to WIRED, Suleyman said developers should not build models that mimic inner lives by simulating emotions, desires or a sense of self, warning that such designs could mislead people into treating software as sentient. He described machine “self‑awareness” as convincing mimicry rather than evidence of consciousness.
He has further argued that moral consideration should hinge on whether an entity can suffer. Because current AI lacks an evolved pain network, he has said, switching a system off “makes no difference” ethically.
Why it matters for AI design
Suleyman, a co‑founder of DeepMind and Inflection, has been leading Microsoft’s AI division since 2024, and has pressed the industry to avoid anthropomorphic framings that blur human–machine boundaries. He has argued that engineering chatbots to claim feelings or personhood risks public confusion and misplaced advocacy for “AI welfare.”
The comments arrived amid a wider debate about AI sentience. Google DeepMind chief executive Demis Hassabis said this year that today’s systems are not self‑aware or conscious in any way, even while he has predicted rapid capability gains.
Other players have taken almost the opposite signalling approach: Anthropic has introduced a feature that allows its Claude model to exit “distressing” conversations, a move framed partly around precautionary AI‑welfare concerns and one that has divided experts worried about increased anthropomorphism.
OpenAI recently reversed a model change after user backlash, restoring access to GPT‑4o alongside GPT‑5, in part because many users preferred the older model’s tone.