Anthropic to use Claude conversations for AI training unless users opt out by September 28
Anthropic’s new data policy puts user conversations at the centre of AI training, sparking privacy concerns and regulatory scrutiny.

- Aug 29, 2025 | Updated Aug 29, 2025, 4:36 AM IST
Anthropic has announced a major shift in its data policies, requiring all Claude users to decide by September 28 whether they want their conversations used to train future AI models.
Until now, the company had not used consumer chat data for training. With the new policy, Anthropic will retain user conversations and coding sessions for up to five years unless individuals opt out. The changes apply to Claude Free, Pro and Max users, including Claude Code, while business customers using Claude Gov, Claude for Work, Claude for Education, or API access remain unaffected.
Previously, consumer chat prompts and outputs were automatically deleted after 30 days, unless flagged for policy violations, in which case they could be stored for up to two years.
Anthropic framed the update as a matter of choice, stating that users who allow training will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” The company added that participation would also “help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”
Industry watchers, however, believe the move is driven by Anthropic’s need for high-quality conversational data to stay competitive against OpenAI and Google. Access to millions of Claude interactions would give the company valuable material for refining its systems.
The change mirrors wider trends in the AI sector, where firms face mounting scrutiny over data retention. OpenAI, for example, is currently contesting a court order requiring it to store all ChatGPT consumer conversations indefinitely, including deleted chats, following a lawsuit filed by The New York Times and other publishers. OpenAI COO Brad Lightcap has called the ruling “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.”
The way Anthropic is implementing the update has also raised eyebrows. Existing users are being shown a pop-up labelled “Updates to Consumer Terms and Policies” with a large “Accept” button. Beneath it, a much smaller toggle controls training permissions, which are set to “On” by default. The Verge noted that such a design risks users clicking through quickly without realising they are consenting to data sharing.
Privacy experts warn that the complexity of AI systems makes meaningful user consent nearly impossible. The US Federal Trade Commission has previously cautioned that AI companies could face enforcement if they make changes in “legalese, fine print, or buried hyperlinks” that obscure the true implications of policy updates.
Whether the FTC intends to intervene in Anthropic’s case remains unclear.
