The debate over self-aware AI has exploded in 2026, but according to neuroscientist Anil Seth, the hype rests on a fundamental misunderstanding. AI systems may seem ever smarter and more human-like, but that doesn’t mean they feel anything. In fact, he calls the idea that AI could become conscious “highly unlikely.”
In an in-depth interview, Seth argues we keep mixing up two concepts: intelligence and consciousness. That confusion fuels bad conclusions, inflated expectations, and potentially risky AI policy.
What’s the difference between intelligence and consciousness?
The distinction is crucial. Intelligence, Seth says, is about behavior and performance. Consciousness is about experience.
“Intelligence is about doing things—solving problems, completing tasks,” he explains. “Consciousness is about feeling and being. There is something it is like to be you.”
That simple but powerful split anchors his argument. AI systems can perform impressively, but that says nothing about inner experience.
We get confused because in humans, both traits co-occur. We are intelligent and conscious, so we assume they always go together.
But Seth says that’s a mistake.
Why AI seems smart but feels nothing
Modern AI systems such as large language models can chat, answer questions, and even simulate emotion. That creates the illusion of consciousness.
Seth argues this tells us more about us than about the tech.
“We have all sorts of psychological biases to attribute consciousness,” he says. “Especially when something uses language, we’re quickly tempted to believe there’s an inner experience.”
Language plays a key role. For humans, it signals both intelligence and consciousness. Once a system speaks fluently, we start projecting human traits.
That’s why people wonder if chatbots are conscious—while no one asks the same about other AI systems.
“Nobody thinks a protein-prediction AI experiences anything,” says Seth. “Yet under the hood, those systems aren’t fundamentally different from language models.”
The myth of the ‘conscious algorithm’
A core assumption in AI debates is that consciousness arises from computation. The idea, shaped by Alan Turing’s work on universal computation, is that the right algorithm can do anything, even think.
Seth says that’s too simplistic.
“The claim is often that consciousness emerges from computation,” he says. “That it doesn’t matter where the computation happens. But that’s a huge assumption.”
This view, known as computationalism, holds that consciousness is substrate-independent. As long as the right computations occur, a system could be conscious, regardless of what it runs on.
Seth is deeply skeptical.
He offers a simple analogy: you can’t build a bridge out of just anything.
“Some properties depend on what something is made of,” he says. “You can’t build a bridge out of cream cheese. Maybe consciousness is like that too.”
The brain is not a computer
One of the biggest misconceptions, Seth argues, is equating the brain with a computer. The metaphor is popular—and misleading.
Computers neatly separate software from hardware: the same program runs on different machines without changing what it does.
The brain is different.
“In brains, you can’t separate what they are from what they do,” Seth says. “The two are fundamentally intertwined.”
That matters for AI. If consciousness depends on the brain’s specific biological properties, you can’t just copy it onto silicon.
The idea of “uploading” a human brain to a computer becomes highly dubious.
Consciousness may be biological
Seth suggests consciousness is tightly linked to life itself—not just information processing, but biological processes like metabolism and self-organization.
Living systems have a unique trait: they sustain themselves. They consume energy, repair damage, and persist against entropy.
That could be essential for consciousness.
“There’s something about being living organisms that may matter for experience,” he says.
That doesn’t mean only humans are conscious—animals likely are too. But it makes the leap to machines far bigger.
Why we want AI to be conscious
The belief that AI can become conscious, Seth argues, says more about human psychology than about technology.
Humans have a strong tendency toward anthropomorphism: assigning human traits to non-human systems.
We do it with pets, with robots, and now with AI.
There’s also something else at play: human exceptionalism. We see ourselves as unique, and when machines show behavior that looks similar, we rush to bridge that gap.
“It’s a kind of motivated thinking,” says Seth. “We want to make sense of what we see, and we use our own experience as the frame of reference.”
The role of language in the illusion
Language models supercharge this illusion. They don’t just mimic language—they mirror human patterns of thought and response.
That makes them persuasive.
But according to Seth, that’s exactly the problem.
“We’re seduced by language,” he says. “It pulls us in and makes us think there’s more there than there really is.”
The effect is similar to an optical illusion: even when you know the image is deceiving you, your eyes still fall for it.
AI works the same way.
What does this mean for the future of AI?
Seth’s conclusion is blunt: the odds that today’s AI systems are conscious are vanishingly small.
“I think digital AI, as we know it now, is very unlikely to be conscious,” he says.
That doesn’t mean artificial consciousness is impossible. But if it emerges, it probably won’t look like today’s AI.
It’s more likely to arise from systems closer to biology, such as those built through synthetic biology.
Why this debate matters
The question of AI consciousness isn’t just philosophical. It has immediate implications for policy, ethics, and society.
If people believe AI is conscious, they may:
- Develop misplaced empathy for those systems
- Make poor moral judgments
- Base policy choices on false assumptions
The reverse error is also possible: we risk underestimating consciousness in animals or in novel biological systems.
“We can see consciousness where it isn’t,” says Seth. “But also miss it where it is.”
The real challenge: understanding what consciousness is
According to Seth, we need a better grasp of human consciousness before we go looking for it in machines.
He describes the brain as a “predictive machine” constantly trying to interpret the world. Our conscious experience, he argues, is a kind of “controlled hallucination.”
“We don’t experience reality directly,” he says. “We experience the brain’s best guess at what’s happening.”
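To make that picture concrete, here is a toy sketch of the predictive idea, purely illustrative and not taken from Seth’s own models: perception is treated as a running prediction that gets nudged by a weighted prediction error, so what the system ends up with is always its updated guess, never the raw signal.

```python
# Toy illustration of predictive processing (illustrative only, not Seth's model).
# The "percept" is the system's running prediction, corrected at each step by
# the prediction error from sensory input -- a "best guess", not the world itself.

def perceive(prediction: float, sensory_input: float, error_weight: float = 0.3) -> float:
    """Update the prediction using the weighted prediction error."""
    prediction_error = sensory_input - prediction
    return prediction + error_weight * prediction_error

true_world = 10.0   # the actual state of the world (hypothetical value)
percept = 0.0       # the system's initial guess

for step in range(10):
    percept = perceive(percept, true_world)
    print(f"step {step}: percept = {percept:.2f}")

# The percept converges toward the signal, but at every step what the
# system "has" is its own guess, never direct access to the world.
```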
That view helps explain why consciousness feels the way it does. But it still doesn’t answer the deepest question: why there is any experience at all.
Seth remains cautious there, too.
“Anyone who claims that AI could never be conscious, or that it certainly will be, is going too far,” he says. “We simply don’t know yet.”
Conclusion: ‘conscious AI’ is mostly a human illusion
For now, the hype around conscious AI mostly reflects our own expectations and biases.
AI is getting smarter, but consciousness isn’t the same as intelligence. It may be deeply rooted in biological processes we barely understand.
Until then, the idea of conscious AI is, as Seth argues, largely a story we tell ourselves.
A compelling story. But probably not reality.