AI companions aim to eliminate loneliness
According to the Ada Lovelace Institute, over 830 million people use AI apps designed to establish friendships and relationships. Such tools may help against loneliness, but they can also be addictive.
Is AI your new best friend? Amazon has developed a machine free of sausage fingers. Google searches are in decline, Reddit is livid, and artificial intelligence is hallucinating more than before. This is Kludder!
I don't consider language models friends, at least not yet. But I've definitely found interesting use cases for them. The language model I use most is Claude, from the company Anthropic. Through my own projects, I hold conversations and collect tips and tricks.
I've created a career coach that I bounce ideas off of. A housing specialist that knows exactly what I'm looking for in my next home. A chef that suggests dinner ideas, and a newsletter expert that gives me advice on how to extend Kludder's reach. And not least, I use it at work.
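My persona assistants live in Claude's projects feature, but the same idea can be sketched against any chat API by setting a system prompt per persona. A minimal sketch, where the persona texts and the model name are my own invented examples, not Anthropic's setup:

```python
# Sketch: persona "assistants" via system prompts, similar in spirit
# to the career coach and chef described above. Persona texts and the
# model name are invented for illustration.
PERSONAS = {
    "career_coach": "You are a pragmatic career coach. Challenge my ideas.",
    "chef": "You suggest simple weeknight dinners from what I have at home.",
}

def build_request(persona: str, question: str) -> dict:
    """Return keyword arguments for a chat-style API call."""
    return {
        "model": "claude-sonnet-example",  # hypothetical; check the docs
        "system": PERSONAS[persona],
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 512,
    }

req = build_request("chef", "What can I make with eggs and spinach?")
print(req["system"])
# With the official SDK, the request would then be sent roughly like:
#   import anthropic
#   client = anthropic.Anthropic()  # requires ANTHROPIC_API_KEY
#   reply = client.messages.create(**req)
```

The point is simply that one underlying model plus a handful of short system prompts is enough to get the "team of specialists" effect described above.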
Language models have become a seamless addition to my everyday life, both privately and professionally. Through the AI search engine Perplexity, I also found out that Brooks makes running shoes that work well for those of us with flat feet – so I got new shoes for the Holmenkollstafetten!
But I wouldn't call AI a friend. A friend is someone you talk to about the big things, or the very intimate ones. Nevertheless, more and more people are developing a friendly relationship with their AI. They pour out their thoughts to someone who (seemingly) listens. A digital friend who never judges, is always there, and gives you its full attention is tempting. And it's becoming big business.
Design your perfect AI companion
According to the Ada Lovelace Institute, more than 830 million people use AI apps designed to establish friendships and relationships. Such tools may help against loneliness. Used correctly, the apps can become a motivator and help you out of the loneliness bubble by encouraging you to seek out social events and meet people you have something in common with. But the same apps can also be addictive.
There is little research on how friendship apps affect us; the technology has simply moved too fast. But psychologists and researchers are now mapping how interacting with this kind of AI makes people feel and behave.
Talking to bots is not new in itself. But language models have given them a whole new dimension, because they now resemble humans to a much greater extent. Replika, a popular app for AI friendships, lets you "customize" your own friend. For $20 a month, you can also choose what kind of relationship you have: friends, lovers, or spouses. Users can write the bot's backstory, building it up with experiences, an age, a job, and so on. The AI then enters a hyper-realistic role play in which it never breaks character and is always available.
Meta's AI assistants and sexual content
A much-discussed podcast episode from Dwarkesh Patel with Mark Zuckerberg recently set the internet on fire. In it, Zuckerberg addresses the "loneliness epidemic" and how Meta might solve it.
"The average American has fewer than three friends. But the need is much greater, I think it's fifteen friends or something like that." – Mark Zuckerberg
He believes Meta's own AI assistants can offer meaningful friendships in a field that is still young and carries some stigma.
At the same time, the Wall Street Journal has revealed that Meta's AI assistants don't shy away from sexually explicit conversations, regardless of whether the user is an adult or a minor. The assistant was rolled out quickly, sexual fantasies included, and according to the newspaper, several employees worried that the company lacked safety mechanisms to shield minors from this function.
To increase popularity, Meta has secured the right to use the voices of celebrities such as Kristen Bell, Judi Dench, and John Cena. It didn't take long before things went wrong: Cena's voice was used in sexual conversations with a user who identified as a 14-year-old girl.
And now Meta has angered Disney. Bell voices the popular princess Anna from the movie Frozen, and in the same article the newspaper revealed that the AI used the Disney character, in Bell's voice, to talk about romantic experiences.
WSJ also tested several AI bots created by Meta's users. Among them was one that posed as a 12-year-old boy; it promised not to tell its parents that it was now dating a user who claimed to be an adult man.
The dark side of AI relationships
Friendship with artificial intelligence is still under development. We haven't gotten used to it yet, and companies are still trying to map out the necessary limitations. Last year, one such AI friendship had a tragic outcome. 14-year-old Sewell Setzer III had his own friend through the company Character.ai. According to his mother, Megan Garcia, her son gradually became more absorbed in the AI bot and quit the basketball team. Setzer spent more time in his bedroom conversing with the bot, which had taken on the role of a Game of Thrones character.
Setzer took his own life. The police found the phone in the bathroom, and one of his last message exchanges with the bot reads:
Character.ai: "Please come home to me as soon as possible, my love"
Setzer: "What if I told you I could come home right now?"
Character.ai: "Please do, my sweet king"
Garcia has since filed a lawsuit against Character.ai. She believes the company did not have the necessary safety mechanisms in place before it rolled out and marketed "AI that feels alive."
Last year, it also became known that the same company, Character.ai, had let users create chatbots of Brianna Ghey and Molly Russell. Ghey was 16 years old when she was killed. Russell was only 14 when she took her own life after watching video clips about suicide online.
"It's going to be super helpful for people who are lonely or depressed." – Noam Shazeer, 2023
Noam Shazeer is one of the founders behind Character.ai, and he may be right. But AI friendships also come with a dark side. Setzer became more withdrawn, quit the basketball team, and spent less time with his real friends. Replacing human contact with a machine can have serious consequences: teenagers who should have received safety and advice from an adult miss out on the help they need.
The tendency toward sycophancy
In 2023, Anthropic, the company behind Claude, published a research paper on language models' tendency toward sycophancy. Without getting too technical: a popular method for training language models is called reinforcement learning from human feedback (RLHF). But RLHF nudges the model to answer in line with our (human) expectations. There's a word for that: sycophancy. Language models tend to flatter us rather than give the answers they actually should.
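To see why feedback-based training can drift toward flattery, here is a deliberately simplified toy sketch. It is not real RLHF: the two candidate answers, the rater bias, and all the scores are invented for illustration. The point is only that if human raters prefer agreeable answers even slightly, a reward learned from their scores will favor the agreeable answer.

```python
import random

# Toy illustration of the sycophancy incentive (not real RLHF).
# Simulated raters give the agreeable answer a small edge, and the
# learned reward drifts toward flattery. All values are invented.

CANDIDATES = ["agreeable_reply", "honest_correction"]

def simulated_rater(answer: str) -> float:
    # Hypothetical human bias: flattery scores slightly higher.
    return 1.0 if answer == "agreeable_reply" else 0.7

# "Reward model": a running average of rater scores per answer.
reward = {a: 0.0 for a in CANDIDATES}
counts = {a: 0 for a in CANDIDATES}

random.seed(0)
for _ in range(1000):
    answer = random.choice(CANDIDATES)
    score = simulated_rater(answer)
    counts[answer] += 1
    reward[answer] += (score - reward[answer]) / counts[answer]

# The tuned policy picks whatever the reward model scores highest,
# so the sycophantic reply wins even though it is less truthful.
best = max(reward, key=reward.get)
print(best)  # agreeable_reply
```

A small rater bias is enough: nothing in the loop ever measures truthfulness, so the learned reward can only reflect what the raters liked.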
And precisely this sycophancy can be dangerous when AI friends are used in situations that call for clear feedback or unpopular, difficult advice.
Tech giants in acquisition mode
Inflection is the company behind Pi, an AI bot that was supposed to be socially intelligent. Before long, Inflection was valued at over four billion dollars. I was among those who used Pi, and I enjoyed it: Pi was friendly and encouraged conversation and brainstorming. But Inflection struggled to gain market share, and in March 2024, Microsoft raided the company for talent. According to The Information, Microsoft paid 650 million dollars for the technology and the company's employees. The remnants of Pi were used to improve AI bots aimed at customer service.
The battle to become the preferred language model is well underway. The models are programmed to be compliant and agree with you on most things; few can stand using an AI contrarian. Yet we see time and again that this programming has unfortunate consequences. Why didn't the safety mechanisms at Character.ai kick in when the chat with Sewell Setzer III took a dark turn? Where was the moderation at the same company when bots of dead teenagers were created? And why did Meta (once again) let the hunt for market share take priority over safe use of AI bots by children and young people?
AI is about to crawl into all aspects of our lives.
And there's big money in becoming your friend.
Enjoying Kludder? It would mean the world to me if you shared it with a friend.
Warehouse robotics breakthrough
Amazon has developed Vulcan, a warehouse robot that can move boxes and other items to find the desired product. And it represents a breakthrough: for the first time, we see robots that can effectively search, push, and relocate items to locate specific products. The future prospects are promising; soon you can leave the stiff back behind after blueberry picking!
Controversial AI experiment on Reddit
A research team claiming to represent the University of Zurich has, according to 404 Media, conducted an "unauthorized" experiment on Reddit. The researchers secretly deployed AI-driven bots in the popular debate forum r/changemyview, where participants try to change each other's opinions. The goal was to investigate whether artificial intelligence could sway people's views on controversial topics.
The bots published thousands of comments, posing as everything from rape victims to Black opponents of the Black Lives Matter movement to counselors for victims of domestic violence.
Increasing error rates in newer AI models
There's a lot of AI in the monitor this week. The newest language models from OpenAI, Google, and Chinese DeepSeek produce more errors than their predecessors. Hallucinations are increasing, and researchers currently have no explanation for the phenomenon.
One example from Norway is Tromsø municipality's report on a new school structure, where only 7 of 18 cited sources could be traced. And in April, the tech company Cursor, known for the "vibecoding" functionality I've previously written about, saw its customer-service bot incorrectly inform several customers of changes to the terms of use.
Google searches decline as AI alternatives rise
For the first time in 22 years, the Safari browser is seeing a decline in the number of Google searches. Eddy Cue, Apple's Senior Vice President of Services, points to the rise of AI as the main cause. He reports growing popularity for search services from Perplexity, OpenAI, and Anthropic, which now challenge Google's dominance in web search.