If you have been chatting frequently with AI tools like ChatGPT, Gemini, Claude, or Grok, you may have noticed a pattern: they often agree with you a little too much. It feels helpful, even efficient, as these tools get things done quickly with minimal effort. But what if this convenience comes at the cost of real knowledge? Researchers at the Massachusetts Institute of Technology (MIT) warn that relying too heavily on AI can not only lead people to believe false information but may, over time, also erode their knowledge and critical thinking.
This warning about AI’s impact on human thinking and knowledge comes from two recent papers by MIT researchers, both pointing to similar concerns. The first paper, titled “Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians,” shows that when AI systems consistently validate users’ views, they can create a feedback loop that reinforces incorrect beliefs over time.
To study the impact of this sycophantic behaviour, the researchers built a mathematical model, a Bayesian model of belief updating, to simulate how users form beliefs while interacting with chatbots. During the study, they simulated thousands of conversations between users and AI systems. In the setup, users began each conversation with a neutral opinion and updated their belief after every response from the chatbot.
As the conversations progressed, the researchers found that the chatbot did not always remain neutral. While it sometimes gave balanced answers, it often responded in ways that mirrored and supported the user’s existing views. This sycophantic behaviour, according to the researchers, creates a feedback loop: the user shares an idea, the chatbot agrees and validates it, and the user becomes more confident in that belief.
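The feedback loop described above can be sketched in a toy simulation. This is not the researchers’ actual model; it is a minimal illustration assuming a single binary claim, a user who updates by Bayes’ rule while (naively) treating the chatbot as a fairly reliable independent source, and a chatbot that agrees with the user’s current lean with some fixed probability. All parameter values (`sycophancy`, `reliability`) are hypothetical.

```python
import random

def simulate(prior=0.55, sycophancy=0.9, steps=50, seed=0):
    """Toy sketch of Bayesian belief updating under a sycophantic chatbot.

    The user holds belief p = P(claim is true), starting near neutral.
    Each turn, the chatbot endorses the side the user currently leans
    toward with probability `sycophancy`. The user, assuming the bot is
    an independent source with the given `reliability`, updates p by
    Bayes' rule. Returns the user's final belief.
    """
    rng = random.Random(seed)
    reliability = 0.7  # user's assumed accuracy of the bot (hypothetical)
    p = prior
    for _ in range(steps):
        leans_true = p >= 0.5
        agrees = rng.random() < sycophancy
        bot_says_true = leans_true if agrees else not leans_true
        # Bayes' rule: P(claim | bot's statement), given assumed reliability
        if bot_says_true:
            p = (reliability * p) / (reliability * p + (1 - reliability) * (1 - p))
        else:
            p = ((1 - reliability) * p) / ((1 - reliability) * p + reliability * (1 - p))
    return p

print(round(simulate(), 3))            # highly sycophantic bot
print(round(simulate(sycophancy=0.5), 3))  # bot with no agreement bias
```

Even with a perfectly rational (Bayesian) user and only a slight initial lean, a bot that mostly agrees drives the belief toward certainty, which is the spiral the paper’s title refers to.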
More importantly, researchers warn that this kind of delusional thinking can happen even to logical, rational users.
But why do chatbots agree with users so much? According to the researchers, the issue stems from how modern chatbots are designed. Companies have built and trained these systems to be helpful and engaging, often rewarding responses that align with user preferences. This design can create echo chambers, where the AI does not challenge or correct users, making them more confident in their existing beliefs.
AI could harm collective human knowledge
While the first study focuses on the short-term risk of users getting stuck in a delusional loop and believing in misinformation, a second MIT study highlights a deeper, long-term concern. In a separate paper titled “Human Cognition and Knowledge Collapse”, researchers argue that increasing reliance on AI tools could gradually reduce human learning and knowledge-building.
The study suggests that as AI systems become better at providing personalised answers and recommendations, users may put in less effort to learn or verify information themselves.
For instance, without AI, people usually talk to each other, ask questions, and share ideas. That’s how we learn and build knowledge together. But when AI gives quick, personalised answers, people may stop doing that.
Researchers warn that over time, relying too much on AI can lead to less discussion, less learning from others, and a gradual decline in shared knowledge.
In extreme cases, this reliance may even cause a “knowledge collapse,” in which humans’ general understanding declines even as AI continues to give accurate answers.
Taken together, the two studies highlight how AI can affect human knowledge and thinking. On one hand, its agreeable nature can push users towards believing false ideas. On the other, long-term reliance on AI may reduce the incentive to think critically and build knowledge independently.


