How to prevent LLM providers from using chat data for training

Most AI platforms use your conversations to train their models unless you actively opt out. Here’s a practical guide to turning that off on every major platform, and why individual settings are only part of the answer.
Understanding your AI readiness: an evening exploring how to respond to the AI challenge

We’re bringing together around 50 senior leaders at a pub in Shoreditch to explore what genuine AI readiness looks like in practice. Not the ambition, but the strategy behind it.
Privacy, risk and policy: the operational side of AI governance

AI adoption requires more than ‘hoping for the best’; it requires explicit guardrails that reflect strategic intent, not vague aspirations.
Data readiness: the foundation of AI value

Most organisations have data; far fewer can use it well. We explore what data readiness actually means for AI, how to assess where you stand, and what to do if your data isn’t where it needs to be yet.
Data, AI, and the quiet trade we’re all making

As AI adoption accelerates, many organisations are exposing themselves to real data and compliance risks – not through negligence, but through everyday, well-intentioned decisions made without the right foundations in place.
Empowering HR leaders with an agentic AI for people analytics

Building a secure, agentic AI platform that analyses complex HR data to provide real-time, explainable analytics and strategic recommendations for HR leaders.
Good vibes only: making AI and vibe coding work in the real world

Vibe coding feels like magic: describe what you want, and AI tools like ChatGPT or Claude generate working code in seconds. It’s great for prototypes and early ideas, but the hype doesn’t tell the whole story.
The prompt trap: how hackers manipulate LLMs and why businesses should care

Prompt injection attacks can trick AI into revealing data, breaking rules, or damaging trust. As businesses adopt large language models, understanding this threat is crucial. In this blog, we explain how these attacks work, why they matter, and what you can do to stay protected.
Marketers are missing the point when it comes to AI and brand reputation

Marketers are rushing to embrace AI, but many are overlooking the biggest risk: brand reputation. From data privacy breaches to inaccurate AI responses, the dangers are real. Before jumping in, businesses must slow down and put people before technology.
Figuring out LLMs, one (ideally private) chat at a time, and how organisations should protect themselves against data leaks

Recent ChatGPT leaks exposed private conversations online, raising serious questions about AI data privacy. While LLMs are powerful tools, businesses must set clear policies and safeguards to prevent sensitive information from being exposed. This blog explores how to use AI responsibly while protecting your organisation from risk.