The individual settings for each provider, and what organisations need to think about beyond them.
If you have typed something into ChatGPT, Claude, or Gemini recently and wondered where it went, you are not being paranoid – you are asking a question that most people skip entirely.
The default setting across most major AI platforms is that your conversations can be used to train future versions of the model unless you have actively turned it off. According to Cisco’s 2025 Data Privacy Benchmark Study, nearly half of employees admit to inputting personal or non-public company data into GenAI tools, often without realising what happens to it next.
Why your AI conversations are more sensitive than you think
It is tempting to treat AI chat like a slightly more eloquent search engine. The difference is that people do not talk to search engines the way they talk to AI tools. They share half-formed strategies, client concerns, internal frustrations, and the kind of thinking they would never put in a document. Teams upload spreadsheets, PDFs, and internal reports. There are well-documented examples across the industry of engineers pasting proprietary source code into AI coding assistants, and reporting from TechNewsWorld found that nearly 40% of AI interactions at work involve sensitive data. To be clear, this is an industry-wide pattern rather than something specific to any one organisation.
People also ask questions that reveal not just what they know, but what they do not know, what worries them, and what they are still trying to figure out. When that material is stored and used to improve the model, it builds up a fairly detailed picture of how a person or a business actually operates.
That information is useful for AI providers improving their models. It is also the kind of information that is useful to competitors. You would not email your confidential workflows and reporting to a rival, or share your strategic gaps with someone who might exploit them. Allowing that data to sit in a shared training pipeline is a different version of the same risk. Your processes, your client context, your internal logic: these are your secret sauce, and it is worth being deliberate about where they end up.
The key distinction for organisations is between individual and enterprise accounts. Almost every provider offers stronger data protections for business accounts, typically excluding your data from training entirely, and this is where your focus should be. If your team is on an enterprise plan, data protection is usually handled at the account level by default. The problem arises when employees use personal accounts for work tasks, operating under individual data terms without necessarily realising it. That is where exposure tends to happen.

If your team accesses AI models via API rather than consumer interfaces, most providers apply stricter data protections by default, but it is worth verifying the specific terms for your provider and use case. The same principle applies to AI coding tools like GitHub Copilot, Cursor, and JetBrains AI: business and enterprise plans carry stronger protections than individual ones, but it is worth knowing that some tools send context from whatever files a developer has open at the time, not just what they have actively typed.
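For teams building on the API side, a concrete sketch can help. The following is a minimal example using OpenAI's official Python SDK. The `store` flag is a real Chat Completions parameter at the time of writing, but the exact defaults and retention behaviour are assumptions worth verifying against your provider's current terms rather than taking from this sketch.

```python
# Minimal sketch, assuming the official `openai` Python SDK and an
# OPENAI_API_KEY environment variable. API traffic is excluded from
# model training by default under OpenAI's API terms, but verify the
# current policy for your own provider and plan.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise this quarter's risks."}],
    store=False,  # ask the API not to retain this completion for later use
)
print(response.choices[0].message.content)
```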
One observation worth flagging for any organisation that reimburses employees for personal AI subscriptions: if those subscriptions are being used for work, the data terms that apply are the individual ones, not enterprise ones. In effect, you may be paying for access while your data sits under consumer-grade protections.
The good news is that every major provider gives you the ability to opt out. Here is where to find the settings.
How to turn off ChatGPT data training
- Go to Settings
- Data Controls
- Toggle off “Improve the model for everyone.”
That is the whole thing, and it applies across your account on all devices.
If your organisation is on ChatGPT Enterprise or Team, your data is excluded from training by default and you do not need to touch anything. OpenAI offers a Data Processing Addendum for business customers, and enterprise accounts can be configured for zero data retention entirely (OpenAI’s guidance).
How to stop Claude using your data for training
- Go to Settings
- Privacy
- Toggle off “Help improve Claude.”
If you want an additional layer of caution, conversations in Incognito mode are never used for training, regardless of your main settings.
Claude for Work (Team and Enterprise plans) sits under commercial terms that exclude training by default (Anthropic’s privacy centre).
It is worth knowing that Anthropic positioned itself as the privacy-first alternative in this space for years, but that changed with its 2025 policy update. Consumer accounts now default to sharing data for training, with retention extended to up to five years if you do not opt out. The change was communicated with a large “Accept” button and a smaller, pre-ticked training toggle sitting below it. Subtle, as these things go.
How to stop Google Gemini training on your conversations
- Go to the Gemini Apps Activity page in your Google account settings (also labelled “Keep Activity”)
- Turn it off.
- If you use multiple Google accounts, you will need to repeat this for each one. Yes, each one.
If your organisation uses Google Workspace, you are protected by default. The gap tends to be personal Google accounts that employees use alongside work tools, which is more common than most organisations realise (Google’s guidance).
How to turn off Microsoft Copilot data training
On copilot.com:
- Go to your profile
- Privacy
- Turn off “Training on conversation activity.”
- There is a separate toggle for voice conversations; worth turning off too.
If your team uses Microsoft 365 through an organisational Entra ID account, your data is not used to train foundation models by default (Microsoft’s guidance).
How to opt out of Perplexity data training
- Go to Account Settings
- Preferences
- Turn off the AI Data Retention toggle.
Enterprise accounts are excluded from training entirely, with uploaded files retained for only seven days (Perplexity’s help centre).
Meta AI: there is no opt-out, and that is worth taking seriously
Meta offers no opt-out from training for Meta AI across WhatsApp, Instagram, and Facebook. None. Ordinary WhatsApp messages remain end-to-end encrypted, but anything your team sends to Meta AI inside those apps is contributing to model training whether anyone has noticed or not. Unlike the platforms above, there is no toggle to find here. The only sensible answer is to stop using it for anything work-related.
One note that applies across all of the above: opting out covers future conversations. It does not reach back and unpick data that has already been used in training. Sooner is better than later.

Does deleting your chats actually remove your data?
Under UK and EU GDPR, you have the right to request deletion of your personal data (Article 17, the right to erasure). In most digital contexts, deletion means deletion. With AI models, the honest answer is more complicated.
When a conversation is used for training, it does not get stored as a retrievable record somewhere. As the Cloud Security Alliance has noted, it gets pulled into the model’s architecture, which is not easily traceable or deletable. The European Data Protection Board made the right to erasure one of its enforcement priorities for 2025, and the ICO is similarly engaged.
What you can do is ask providers to remove your data from future training sets and apply filters preventing your personal information appearing in outputs. What you cannot do is guarantee that data already woven into a model has been fully erased. It is one of those areas where the law and the technology have not quite caught up with each other yet.
Which is why the most reliable control remains keeping data out of training pipelines in the first place.
The steps above are worth doing. But they assume someone knows to do them, is using the right account type, and has actually followed through across the whole team. That is quite a lot to rely on, and in our experience, it is rarely as consistent as it looks on paper.
What organisations need to consider beyond the settings
Our view on AI has always been that strategy comes before tools, and that understanding what you are signing up to matters as much as understanding what you are getting. Many organisations are already making privacy trade-offs they have not fully examined, often through everyday decisions. And even where the intent is right, turning that intent into actual policy, ownership, and day-to-day guardrails is where most organisations find the going gets harder.
The more durable answer involves being clear about which tools are approved and under what conditions, and making sure the right enterprise agreements and safeguards are in place. It is less about restricting how people work, and more about making sure the defaults actually reflect how you want your data handled.
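To make that concrete, here is a deliberately simple sketch of what codifying an approved-tools policy might look like. Every tool name, plan label, and rule below is hypothetical and for illustration only; a real policy would live in your governance tooling, not a script.

```python
# Hypothetical sketch of an internal "approved AI tools" policy check.
# All tool names, plan labels, and rules are illustrative assumptions,
# not statements about any specific provider's terms.
APPROVED_PLANS = {
    # tool: account types covered by business-grade data terms
    "chatgpt": {"Team", "Enterprise"},
    "claude": {"Team", "Enterprise"},
    "gemini": {"Workspace"},
}

def is_approved(tool: str, plan: str) -> bool:
    """True only if the tool is on the allowlist AND the plan carries
    business-grade data terms. Personal accounts fail by default."""
    return plan in APPROVED_PLANS.get(tool.lower(), set())

assert is_approved("ChatGPT", "Enterprise")
assert not is_approved("chatgpt", "Free")      # personal account: not covered
assert not is_approved("meta ai", "Business")  # no approved route exists
```

The design choice worth noting is the default: anything not explicitly approved fails the check, which mirrors the posture the settings above are trying to achieve.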
This is one piece of a larger picture. In the coming weeks we will be publishing more on what AI readiness looks like and drawing on our broader framework for organisations that want to move forward with AI in a practical and responsible way. In the meantime, if you would like to think through any of this for your own organisation, we would love to hear from you.