In 2025, LinkedIn feeds are becoming a strange mix of town hall and therapist’s couch. I can’t be the only marketer thinking the ever-increasing adoption of generative AI tools for business seems to have led us into an existential crisis of sorts.
For example, this morning, I scrolled past somebody proudly declaring that they had fired an entire team of content writers, replacing them with one SEO consultant and a generative AI license, because everything in marketing has changed overnight and it's time to get on board.
“It’s the end of SEO as we know it!” declare swathes of ex-SEOs cosplaying as AI visibility experts. “We guarantee exposure on AI search engines and get YOU clients” shout lead gen companies. “Being seen on AI is the only thing that matters. Visibility! Mentions! Tell users to Buy! Our! Stuff!”
Are we focusing on the wrong thing?
The marketing industry has always existed in some kind of bubble, proudly congratulating ourselves on our genius ad creative whilst the public scratch their heads and try to work out what product they're supposed to be buying. And the current wave of noise about AI visibility is the latest hot topic to excite us all, whilst the real opportunities (and threats) go unspoken.
The unseen threat to your brand reputation
Data privacy has become an afterthought
With great opportunities come great risks.
What happens when an LLM tells users not to buy your products, or invents a story that misrepresents what your brand actually is?
What if it starts to share information it should never know, because an uninformed or overworked employee has uploaded a confidential document and unknowingly fed sensitive business information into the AI's training data? That sounds like a data breach fine waiting to happen. Just look at what happened to Samsung.
We all cared a great deal about GDPR and data privacy when it came into force, yet now we're suddenly okay with giving LLMs access to confidential information. We're allowing AI agents to automatically prowl our inboxes and draft replies on our behalf, sacrificing data safety and confidentiality to save five minutes of inbox admin.
Reputation risks
That's not to say this technology isn't useful, or clever, but are we really considering the reputational risk of putting customer or client information into systems that can learn from and disseminate it? Remember that time 10,000 ChatGPT logs were indexed by Google and people's 'private' chats were all over the Internet? No? Probably because nobody paid anywhere near enough attention to that particular news story.
The danger of premature implementation
Then there's an even bigger threat: what if the LLM you rushed to implement on your own website (because everybody told you AI was the solution to growth and you needed it NOW!) starts to give answers that are, at best, incorrect, and at worst, biased, rude or quite simply untrue?
Brands everywhere are rushing to become AI-first and integrate some kind of AI into their website / app / brand experience (delete as appropriate), and truth be told, you might not even need that kind of technology in your business (more on that here). But it's well known that AI can hallucinate, and implementing it on your brand property without the right safeguards can do lasting damage to your brand. Just look at what Grok is up to on X.

The result of all of this? A brand reputational nightmare.
I don’t say all of this to scare you or to bash AI. The fact is: the technology is transformative, and brilliant. But our excitement about its possibilities has created some serious blind spots around risk management.
Before implementing any AI tool, ask yourself: What data will this access? How will customer information be protected? What happens if it generates incorrect or harmful responses? Do we have safeguards in place?
Think human first. Audit your current AI usage, establish clear data governance policies, and involve your legal and tech teams from day one. Don't rush to implement. In five years, no one will remember who was first to market with AI, but they'll definitely remember who handled their data responsibly and kept consumer trust in mind.
Because ultimately, protecting your brand’s reputation is worth more than saving a few minutes on inbox admin.