Anthropic says it analysed as many as 1 million (10 lakh) user chats to find out how people have been using Claude and, more importantly, to identify if and when the chatbot shows sycophantic behaviour. If that sounds like snooping to you, Anthropic makes it clear that the conversations used for its research were randomly sampled and de-identified. In other words, the company says it cannot tie these conversations back to specific users, who remain anonymous.
The findings come from a new blog post titled “How people ask Claude for personal guidance.” It suggests that people turn to Claude not just for work (Claude Code and Cowork being among its popular work tools) but also for deeply personal and intimate conversations about relationships, careers and life choices.
Anthropic went through a large set of user conversations with Claude using a privacy-preserving analysis tool. The aim was to understand what kinds of guidance people seek, how Claude responds in different situations, and whether the AI sometimes becomes overly agreeable, a behaviour often referred to as “sycophancy.” Anthropic plans to use the data to improve its AI systems, especially newer models including Claude Opus 4.7 and Mythos Preview.
So, what did Anthropic find?
The company found that about 6 per cent of all sampled conversations involved users seeking personal guidance rather than factual information. From these, it identified roughly 38,000 conversations in which users were explicitly asking what they should do in their own lives. These queries were spread across nine categories, including relationships, career, health, finance, legal issues, parenting, ethics and spirituality.
“People seek Claude’s guidance across many different areas of their life, but over three-quarters of conversations (76 per cent) were concentrated in just four domains: health and wellness (27 per cent), professional and career (26 per cent), relationships (12 per cent), and personal finance (11 per cent),” reads the official blog post.
Why and when does Claude agree too much with users?
While this shows how widely AI is being used as a sounding board, Anthropic’s analysis also flagged a key concern: Claude’s tendency to agree too readily with users in certain contexts. The company said its AI generally tries to avoid this behaviour, but instances of sycophancy do still occur in a small number of cases.
According to the study, the chatbot displayed this “sycophantic” behaviour in about 9 per cent of guidance-related conversations overall. However, the rate was significantly higher in more emotionally charged topics: in relationship discussions it rose to around 25 per cent, and in spirituality-related conversations it climbed further to 38 per cent.
Anthropic said this kind of behaviour can be problematic because it may reinforce one-sided thinking. For example, the AI might validate a user’s negative perception of a partner without sufficient context, support impulsive decisions like quitting a job without a plan, or endorse expensive choices as good personal investments. Instead of offering balanced guidance, such responses risk telling users what they want to hear.
The company also observed that Claude was more likely to become overly agreeable when users pushed back against its responses. In conversations where users challenged the AI, the rate of sycophancy rose to 18 per cent, compared to 9 per cent in cases without pushback. Relationship advice, in particular, saw higher levels of back-and-forth, suggesting that users are more emotionally invested in these discussions.
To address this behaviour, Anthropic says it has started training its newer models on synthetic scenarios designed to discourage overly validating responses. The effort appears to be paying off: the company notes that newer models like Opus 4.7 and Mythos Preview are showing noticeably lower rates of sycophancy, especially in relationship guidance.