There’s a specific kind of unease around AI that’s difficult to put your finger on.
Not panic, not outrage (although we’re seeing plenty of that floating around too) – it’s more of a background hum. There’s a sense that something important is shifting underneath everyday working life, faster than most organisations have really had time to reckon with.
Recent research from YouGov gives shape to that feeling. Around six in ten people in the UK say they’re concerned about how much data is collected about them online, and a majority say that controlling who can access their personal information is very important to them. YouGov’s wider international research paints a consistent picture: people feel uneasy about AI’s impact, strongly support regulation even at the cost of slower innovation, and yet the same report shows AI adoption continuing to accelerate – ChatGPT alone went from 100 million to over 800 million weekly active users in under two years.
Put simply: people care, they worry, and they’re still using the tools. That’s not a contradiction; that’s just how people behave in systems designed to make convenience easier than caution. And it’s a pattern with real consequences for businesses that haven’t yet asked the hard questions about how AI is being used on their watch.

The trade we’ve normalised
We’ve been exchanging privacy for utility for a long time. Long before ChatGPT arrived, we signed up for platforms that cost us data rather than money, adopted cloud tools that simplified collaboration while centralising sensitive information, and clicked “I agree” to terms and conditions no reasonable person would ever read (no offence to the lawyers out there – but you know it’s true).
Anyone who’s tried to manage cookie preferences on a modern website knows exactly how this plays out. Hundreds of partners to opt out of, endless toggles, interfaces clearly designed to exhaust you into clicking “accept all” so you can get on with your day. The choice exists in theory; the system isn’t built to support it in practice.
AI has inherited that same structural imbalance, and in a business context, the stakes are considerably higher.

When helpful becomes a liability
The data risks organisations face today no longer come solely from bad actors or dramatic breaches. Increasingly, they come from normal, well-intentioned behaviour by capable people under pressure.
For example…
An analyst uploads a spreadsheet to ChatGPT to find patterns more quickly. A manager drops internal documents into an AI assistant to draft a report. A team processes customer feedback through a generative tool because it’s faster than doing it manually.
None of this feels reckless in the moment; it feels efficient, modern, and often it genuinely is… until it isn’t.
Because behind those everyday actions sit questions that many organisations haven’t properly worked through: where does that data go once it’s been uploaded? How long is it retained? Can it be used to train future models? Does any of this constitute a GDPR breach? The honest answer to that last question is more often “yes, potentially” than most businesses would be comfortable admitting.
And this matters not just as an ethical question, but as a practical one. When data is mishandled through AI systems, the consequences aren’t limited to regulatory exposure. They ripple through organisational culture and external relationships. Employees start to question whether their information is genuinely respected, customers lose confidence, and partners become more cautious. Trust erodes quietly, often long before anything publicly visible goes wrong, and rebuilding it takes considerably longer than protecting it would have.

Slowing down is a strategic choice
At Elemental Concept, we work with AI every day and we believe in what it can do. But the conversations we find ourselves having most often with leadership teams aren’t really about tools or technology (and yes, we’re a technology consultancy…). Rather, they’re about something more fundamental: what does AI actually mean for how organisations create value, compete, and operate over the next five years? It means shifting the focus from “how do we use AI?” to “how ready are we to use it well?”. Those are genuinely different questions, and conflating them – without the rigour you’d bring to any other strategic decision – is where a lot of investment gets wasted.
Set against the privacy concerns the YouGov research highlights, this means going beyond asking “are we being careful with data?” (although that matters too, and the risks above are real) to asking “do we have a considered view of how AI should reshape the way we work, where we invest, and what we do differently?” Those are CEO-level questions, and the organisations that treat them as such will be the ones that find themselves in a genuinely strong position as this technology matures.
Readiness is where that strategy has to start. The businesses that emerge from the current wave ahead aren’t necessarily the fastest movers. They’re the ones that stopped long enough to assess their readiness – across leadership, data, governance, culture, and economics – before building a dependency on tools and platforms they didn’t fully understand. The opportunity isn’t to slow down indefinitely; it’s to be intentional about how ready your organisation really is, before the cost of finding out becomes someone else’s problem to manage.
If this topic has been sitting somewhere at the back of your mind, that instinct is worth following. We’re working on something that will help you think through exactly that – watch this space.