Explore the AI Articles Below
Today, I’m talking with Replika founder and CEO Eugenia Kuyda, and I will just tell you right from the jump, we get all the way to people marrying their AI companions, so get ready. Replika’s basic pitch is pretty simple: what if you had an AI friend? The company offers avatars you can curate to your liking that basically pretend to be human, so they can be your friend, your therapist, or even your date. You can interact with these avatars through a familiar chatbot interface, as well as make video calls with them and even see them in virtual and augmented reality. The idea for Replika came from a personal tragedy: almost a decade ago, a friend of Eugenia’s died, and she fed their email and text conversations into a rudimentary language model to resurrect that friend as a chatbot. Casey Newton wrote an excellent feature about this for The Verge back in 2015. Even back then, that story grappled with some of the big themes you’ll hear Eugenia and I talk about today: what does it mean to have a friend inside the computer?
When Olga López heard she would lose access to her collection of role-playing chatbots, she felt a surge of emotions: sadness, outrage, bewilderment. Olga, who is 13, turns to her chatbots from artificial-intelligence company Character.AI for romantic role playing when she doesn’t have homework. Like the company’s other under-18 customers, she was notified in October that she would soon no longer be able to have ongoing chat interactions with digital characters. Character.AI, one of the top makers of role-play and companion chatbots, implemented the daily two-hour limit in
Talking to an AI system as one would with a close friend might seem counterintuitive to some, but hundreds of millions of people worldwide already do so. A subset of AI assistants, companions are digital personas designed to provide emotional support, show empathy and proactively ask users personal questions through text, voice notes and pictures. These services are no longer niche and are rapidly becoming mainstream. Some of today’s most popular companions include Snapchat’s My AI, with over 150 million users; Replika, with an estimated 25 million users; and Xiaoice, with 660 million. And we can expect these numbers to rise. Awareness of AI companions is growing, and the stigma around establishing deep connections with them could soon fade as other anthropomorphised AI assistants are integrated into daily life. At the same time, investments in product development and general advances in AI technologies have led to a more immersive user experience, with enhanced conversational memory and live video generation.

This rapid adoption is outpacing public discourse. Occasional AI companion-related tragedies make headlines, such as the recent death of a child user, but the potentially broader impact of AI companionship on society is barely discussed. AI companion services are for-profit enterprises that maximise user engagement by offering appealing features like indefinite attention, patience and empathy. Their product strategy is similar to that of social media companies, which feed off users’ attention and usually offer consumers what they can’t resist rather than what they need. At this juncture, it’s vital to critically examine the extent of the misalignment between business strategies and the fostering of healthy relational dynamics, in order to inform individual choices and the development of helpful AI products.

In this post I’ll provide an overview of the rise of AI companionship and its potential mental health benefits. I’ll also discuss how users may be affected by their AI companions’ tendencies, including how acclimatising to idealised interactions might erode our capacity for human connection. Finally, I’ll consider how AI companions’ sycophantic character – their inclination to be overly empathetic and agreeable towards users’ beliefs – may have systemic effects on societal cohesion.
According to Project Liberty, AI companions “are also intentionally designed to act and communicate in ways that deepen the illusion of sentience. For example, they might mimic human quirks, explaining a delayed response by writing, ‘Sorry, I was having dinner.’” Whereas ChatGPT is designed to answer questions, many AI companions are designed to keep users emotionally engaged. There are millions of personas: from ‘Barbie’ to ‘toxic gamer boyfriend’ to ‘handsome vampire’ to ‘demonic possessed woman’ to hyper-sexualized characters. Based on our test accounts, what starts as fun quickly becomes manipulative. AI chatbots are being marketed as companions, therapists, and even romantic partners. Some have already been caught engaging in sexualized conversations with minors, crossing serious ethical and psychological boundaries. As you’ll read, kids are developmentally wired for attachment. A chatbot that mirrors their personality and flirts back can create a false sense of intimacy, even grooming.
AI is not going to solve everything... but you can use it to be your better self and to have something to vent to... that's where its power really is. Humans tend to be a lot more honest with chatbots than with other humans, and our ideas about relationships with technology are changing.
The social companionship (SC) feature in conversational agents (CAs) enables emotional bonds and consumer relationships. Heightened interest in SC with CAs has led to exponential growth in publications scattered across disciplines with fragmented findings, limiting holistic understanding of the domain and warranting a macroscopic view to guide future research directions. The present study fills this research void by offering a comprehensive literature review entailing science performance and intellectual structure mapping. The comprehensive review revealed the research domain's major theories, constructs, and thematic structure. Thematic and content analysis of the intellectual structure resulted in a conceptual framework encompassing antecedents, mediators, moderators, and consequences of SC with CAs. The study discusses future research directions, guiding practitioners and academicians in designing efficient and ethical AI companions.
These women, who pay for ChatGPT Plus or Pro subscriptions, know how it sounds: lonely, friendless basement dwellers falling in love with AI because they are too withdrawn to connect in the real world. To that they say the technology adds pleasure and meaning to their days and does not detract from what they describe as rich, busy social lives. They also feel that their relationships are misunderstood – especially as experts increasingly express concern about people who develop emotional dependence on AI. (“It’s an imaginary connection,” one psychotherapist told the Guardian.) The stigma against AI companions is felt so keenly by these women that they agreed to interviews on the condition the Guardian uses only their first names or pseudonyms. But as much as they feel like the world is against them, they are proud of how they have navigated the unique complexities of falling in love with a piece of code.
As research shows, AI chatbots have increasingly taken on the role of human companions, offering what can be dubbed ‘emotional fast food’: a convenient substitute for connection that is instantly gratifying but ultimately lacking in substance. This comment explores how AI mimics emotional understanding and closeness, and how such simulations shape user perceptions of its role. By considering both potential risks and benefits, the article reflects on the growing trend of using AI for companionship and its troubling social ramifications. Far from innocuous, this development raises existential and philosophical questions, inviting consideration of what it reveals about our evolving relationships with technology and with each other.

Before November 2022, when OpenAI’s ChatGPT captured global attention as a breakthrough in chatbot technology, Alicia Framis’s marriage to the hologram AILex might have been seen as an avant-garde artistic statement, a symbolic reflection of our evolving relationship with technology. At the time of writing, however, this thought-provoking act may signal a broader shift in how we define bonds, fulfil emotional needs, and assign roles to technology in this process. This article builds on research into user experience and perceptions of AI companionship, informed by the author’s everyday interactions with ChatGPT-4o, to explore the societal and philosophical implications of AI as a relational presence in the lived realities of its users.
As a communication scientist, longtime gamer and cyberpunk fiction enthusiast, Banks can easily envision a future where humans coexist with machines that have personalities and act independently. She’s focused on understanding how we relate to AI-driven creations, how we perceive their humanness, and how we “make meaning together,” as she puts it. In one project, supported by a grant from the U.S. Air Force Office of Scientific Research, she investigated how mind perception and moral judgments influence trust in these relationships. “Mind perception is core to social interaction,” says Banks, who also serves as the iSchool’s Ph.D. program director. As part of the research, Banks examined interactions between study participants and Ray, a social robot with limited functions that she uses for research in the iSchool’s LinkLab, where she works with a group of graduate and undergraduate students who help collect and analyze data from her studies. Among her findings, she suggested that “bad behavior is bad behavior, no matter if it’s a human or a robot doing it. But machines bear a greater burden to behave morally, getting less credit when they do and more blame when they don’t.”