As AI moves from artificial narrow intelligence (ANI) toward artificial general intelligence (AGI), the idea of machines that can feel and become self-aware is gaining traction. AI systems have grown steadily more capable at mimicking human responses: advanced large language models, the systems that power AI chatbots, can now write, reason, and interact in ways that appear convincingly human. Many believe that continued improvement will carry AI from intelligence to consciousness, producing a system that can feel and is aware of its surroundings. But a new paper from Google DeepMind argues that this future may never arrive.
Alexander Lerchner, a senior staff scientist at Google's AI laboratory DeepMind, argues that large language models (LLMs), despite their impressive capabilities, are unlikely ever to become conscious.
The research paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” argues that AI systems are ultimately “mapmaker-dependent,” meaning they require an active, experiencing cognitive agent—a human—to organise continuous reality into meaningful states.
Why AI still depends on humans
In simple terms, AI needs humans to organise data into a form it can learn from. The models are designed to process and predict patterns in that data to generate responses; they cannot think on their own.
Simulating conversation or reasoning is not the same as actually experiencing thoughts or feelings, and Lerchner argues that the latter would be impossible without a physical body. These systems do not know what they are saying; they only predict what comes next.
A growing divide in belief
This comes at a time when rapid advances in the technology have fuelled interest in AI consciousness, with some researchers and users beginning to believe that these systems might one day become truly conscious.
This has divided people into two groups: those who see consciousness as a possible outcome of advanced intelligence, and those who argue that the two are fundamentally different.
Why this debate matters
The question of AI consciousness has broader implications for society. If a system were ever to gain consciousness, that would change how it is regulated, used, and even treated.
But if it remains non-conscious, it will continue to be viewed as a tool rather than a system that can feel or be aware of how it is treated.
For now, the Google DeepMind paper suggests that as AI advances, its systems may feel more and more human-like, but they will not be human-like from the inside.