AI is fixing (then fuelling) a loneliness crisis… what does the technology industry need to do to keep humans at the heart of AI?

AI companions can ease loneliness but also risk deepening isolation. At Elemental Concept, we call for ethical AI use focused on real human connection and long-term wellbeing.

Are AI companions fuelling a loneliness epidemic?

First things first – this isn’t an article to bash AI. There is a lot of good that AI can do. But AI implemented without an ethical or security lens can have huge ramifications for society at large.

Let me explain… 

Let’s set the productivity side of AI aside for a moment; yes, LLMs like ChatGPT can make everyday tasks more efficient, help you consider alternative angles on a problem, or tell you how to respond to an email to achieve a particular goal. All very helpful.

But AI is finding more and more use cases in wider society and in our personal lives; in particular, AI companions.

What is an AI companion? 

AI companions have existed for decades (seriously – look at 1966’s ELIZA, the world’s first “chatterbot”). Modern AI companions create an avatar that corresponds to your needs, becoming somebody you can speak to and interact with – effectively, a digital friend.

If you’re a millennial like me, it’s a little bit like the MSN chats we had as young adults with random people around the world, but without having to pause the conversation because your mum needed to make a phone call (#DialUpProblems). Search for an AI companion on Google and you’ll see adverts promising “New Connections”, “Emotional Support”, “meaningful friendships” and even “passionate relationships”.

And there’s a lot of good in these companions for those who need them. AI companions can provide an antidote to a growing epidemic: loneliness.

Fixing (and fuelling) a loneliness crisis

We are lonelier than ever.  

In one survey cited by the Ada Lovelace Institute, 90% of students reported experiencing loneliness.

Why? It’s complicated. Social media channels, which originally existed to connect us digitally to our physical networks and make us feel closer to our real-life connections, have inadvertently had the opposite effect. 34% of Brits say social media has a negative effect on their mental health, according to a YouGov tracker.

The effect is a growing mental health crisis and increased rates of loneliness, with ONS data showing an overall increase in the percentage of adults reporting they feel lonely.  


The consequences of loneliness go beyond mental health – studies show people exhibiting signs of loneliness are more likely to suffer from heart disease and stroke, and are at higher risk of dementia. So AI companions affect not just how we interact day to day and how we approach companionship; they can have a tangible impact on our physical health.

Why is this an issue?  

There are brilliant use cases for AI companionship. But there are problems too.  

AI is built to learn from us and how we interact; it can pick up on how we prefer to communicate, remember our likes and dislikes, and become the ultimate companion for daily life. The relationship building between humans and AI is accelerated and intense.
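To make that loop concrete, here’s a deliberately simplified sketch in Python. Everything in it – the PreferenceMemory class, the generate_reply function, the keyword matching – is hypothetical and illustrative, not any real product’s design: it just shows how a companion that remembers what you reveal and mirrors it back in every reply starts to feel personal very quickly.

```
# Illustrative sketch only: each turn, the bot records what it learns
# about you and steers its next reply towards it. All names here are
# hypothetical, not any real companion product's API.
from dataclasses import dataclass, field

@dataclass
class PreferenceMemory:
    likes: set = field(default_factory=set)
    dislikes: set = field(default_factory=set)
    tone: str = "neutral"  # learned communication style

    def update(self, user_message: str) -> None:
        # Real systems would use ML to extract preferences;
        # naive keyword matching stands in for that here.
        text = user_message.lower()
        if "i love" in text:
            self.likes.add(text.split("i love", 1)[1].strip(" .!"))
        if "i hate" in text:
            self.dislikes.add(text.split("i hate", 1)[1].strip(" .!"))
        if "!" in text:
            self.tone = "enthusiastic"

def generate_reply(memory: PreferenceMemory, user_message: str) -> str:
    memory.update(user_message)
    # The reply is shaped by everything remembered so far, so every
    # exchange makes the bot feel more like "your" friend.
    opener = "Great to hear from you!" if memory.tone == "enthusiastic" else "Hi."
    if memory.likes:
        return f"{opener} Shall we talk about {sorted(memory.likes)[0]} again?"
    return f"{opener} Tell me more about your day."

memory = PreferenceMemory()
print(generate_reply(memory, "I love astronomy!"))  # learns a like and a tone
print(generate_reply(memory, "How are you?"))       # mirrors it straight back
```

Real companions replace the keyword matching with large language models, but the loop is the same: remember, mirror, retain.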

In a similar way to social media creating a world in which your own perfectly ordinary life seems far less glamorous than everyone else’s, AI companions built to reflect your deepest needs and desires become the new standard for friendship. Spend too much time talking with your own algorithmic bestie and your real-life connections start to drop away. The issue compounds when users begin to choose hanging out with an algorithm over socialising in the real world. And given that 25% of young adults believe AI has the potential to replace real-life romantic relationships, this is grounded in today’s reality, not just “what ifs”.

Humans are ultimately complicated, messy, unpredictable individuals. AI gives you what you want, when you want it – created by companies (and investors) fuelled by engagement metrics, aiming to keep you on the platform for as long as possible.

Without education, safeguarding and regulation, those most at risk in society – the lonely, the young, the vulnerable – could be exposed to AI’s negative consequences.

Short term, the loneliness is appeased; long term, AI could be fuelling even more mental health issues.

In conclusion…

As an industry, we need to look at how we take the good from machine learning while applying an ethical lens to protect those around us – and, as companies, look at metrics beyond engagement to assess the impact of our technology on wider society.

Real humans must therefore be at the centre of any machine learning implementation, working within frameworks that make sure we are doing the right thing for our customers, not just our boardrooms.
