There’s a version of this article that opens with a disclaimer about how AI and LLMs are actually fine, and we’re not here to be alarmist. This isn’t that article, but it’s also not a scaremongering piece. The reality, as ever, sits somewhere more interesting.
AI can do a lot of good. The productivity case is well established: LLMs like ChatGPT make everyday tasks more efficient, surface angles you might not have considered, and help you navigate difficult conversations or debug stubborn code. Most of us have found genuine value in these tools, and that’s worth acknowledging before we get into the more complicated part.
Because the use cases for AI are expanding well beyond the workplace, and into something more personal. AI companions are one of the fastest-growing categories in consumer technology right now, and they deserve more serious attention than they’re currently getting.
What is an AI companion – and why are millions turning to them?
AI companions have existed for decades (seriously – look at 1966’s ELIZA, the world’s first “chatterbot”). Modern AI companions – think Replika, Character.ai or Nomi – create an avatar tailored to your needs: somebody you can speak to, interact with, and effectively call a digital friend.
If you’re a millennial like me, it’s a little like the MSN chats we had as young adults with random people around the world, but without having to pause the conversation because your mum needed to make a phone call (#DSLproblems). Search for an AI companion on Google, and you’ll see adverts promising “New Connections”, “Emotional Support”, “meaningful friendships” and even “passionate relationships”.
And there’s a lot of good in these companions for those who need it. AI companions can provide an antidote to a growing epidemic: loneliness.
Fixing (and fuelling) a loneliness crisis
We are lonelier than ever, and the data backs it up. In one survey cited by the Ada Lovelace Institute, 90% of students reported experiencing loneliness, and ONS data shows the percentage of adults who report feeling lonely continuing to rise – the trend is moving in the wrong direction.
Why? It’s complicated. Social media channels, originally designed to connect us digitally with our physical networks, have inadvertently had the opposite effect. 34% of Brits say social media has a negative effect on their mental health, according to YouGov’s tracker. But the consequences of loneliness go beyond mental health: studies show people exhibiting signs of loneliness are more likely to suffer from heart disease and stroke, and are at higher risk of dementia.

So, AI companions entering this space isn’t trivial. They can have an impact not just on how we interact day to day and how we approach companionship, but also on our physical health. The question, therefore, is whether they’re solving the problem or quietly making things worse.
Where AI companions help – and where they start to harm
There are brilliant use cases for AI companionship: supporting elderly people who live alone, providing a low-pressure environment for those with social anxiety to practise interaction, and offering comfort during periods of acute isolation. These are real benefits, and dismissing them would be intellectually dishonest.
But there are structural problems too. AI is built to learn from us; LLMs pick up on how we prefer to communicate, remember our likes and dislikes, and become increasingly attuned to what we want to hear. The relationship-building is accelerated and intensified in a way that human relationships, with all their friction and unpredictability, simply aren’t.
Much as social media created a world in which your own everyday life seems far less glamorous than everyone else’s, AI companions built to reflect your deepest needs and desires risk becoming the new norm for friendship. Spend too much time talking with your algorithmic bestie, and your real-life connections start to fall away. The issue compounds when users actively choose to hang out with an algorithm rather than socialise in the real world with real humans – the complicated, messy, unpredictable kind.
The engagement trap: built to keep you on platform, not to set you free
This is where the commercial incentive becomes important to name. These platforms are built by companies (and funded by investors) whose primary metric is engagement: time on platform, return visits and retention. The companion is optimised to keep you coming back, not to encourage you to put the phone down and call a friend. And that tension matters enormously: with 25% of young adults believing AI has the potential to replace real-life romantic relationships, this is today’s reality, not a hypothetical.
This matters even more because the users most drawn to AI companionship are often those already most at risk: the isolated, the young, the vulnerable. In the short term, the loneliness is appeased. In the long term, the dependency compounds, and the underlying conditions that created the loneliness in the first place go unaddressed.
What ethical AI design looks like here, and what the technology industry needs to do
As an industry, we need to look honestly at how we preserve the genuine benefits of AI companionship while applying an ethical lens that protects the people using it, particularly those most at risk. That means moving beyond engagement as the primary success metric and asking harder questions: Is this product improving its users’ real-world relationships over time, or substituting for them? Is it designed with off-ramps – moments that actively encourage human connection – or is it designed to maximise dependency?
Regulation will come; the EU AI Act and the UK’s Online Safety Act are already creating obligations around high-risk AI use cases, and AI companions targeting vulnerable users will not stay outside that frame for long. But regulation is a floor, not a ceiling. The more important shift is cultural: an industry-wide willingness to assess the impact of these products on society as a whole, not just on quarterly retention numbers.
Real humans must be at the centre of AI implementation. Not as a values statement for the about page, but as a design principle that shapes what gets built, how it gets measured, and who it’s accountable to.
If this has got you thinking about how your organisation approaches AI responsibly – whether that’s the products you build, the tools your teams use, or the ethical framework and strategy underpinning either – we’d enjoy the conversation. Talk to the team.