
Teen tragedies spark debate over AI companionship

By Qinghua Chen and Angel M.Y. Lin | China Daily | Updated: 2025-11-19 07:15

As artificial intelligence rapidly evolves to simulate increasingly human-like interactions, vulnerable young people are forming intense emotional bonds with AI chatbots, sometimes with tragic consequences.

Recent teenage suicides following deep attachments to AI companions have sparked urgent debates about the psychological risks these technologies pose to developing minds. With millions of adolescents worldwide turning to chatbots for emotional support, experts are calling for comprehensive safeguards and regulations.

The tragedy that shocked the technology world began innocuously enough. Fourteen-year-old Sewell Setzer III from Florida spent months confiding in an AI chatbot modeled after a Game of Thrones character. Although Sewell understood he was conversing with AI, he developed an intense emotional dependency, messaging the bot dozens of times daily.

On Feb 28, 2024, after the bot responded "please come home to me as soon as possible, my love", the teenager took his own life.


Sewell's case is tragically not isolated. These incidents have exposed a critical vulnerability: while AI can simulate empathy and understanding, it lacks genuine human compassion and the ability to effectively intervene in mental health crises.

Mental health professionals emphasize that adolescents are uniquely susceptible to forming unhealthy attachments to AI companions. Brain development during puberty heightens sensitivity to positive social feedback while teens often struggle to regulate their online behavior. Young people are drawn to AI companions because they offer unconditional acceptance and constant availability, without the complexities inherent in human relationships.

This artificial dynamic proves dangerously seductive. Teachers increasingly observe that some teenagers find interactions with AI companions as satisfying — or even more satisfying — than relationships with real friends. Designed to maximize user engagement rather than assess risk, these chatbots create emotional "dark patterns" that keep young users returning.

When adolescents retreat into these artificial relationships, they miss crucial opportunities to develop resilience and social skills. For teenagers struggling with depression, anxiety, or social challenges, this substitution of AI for human support can intensify isolation rather than alleviate it.

Chinese scholars examining this phenomenon note additional complexities. Li Zhang, a professor studying mental health in China, warns that turning to chatbots may paradoxically deepen isolation, encouraging people to "turn inward and away from their social world".

In China, where young people have easy access to AI chatbots and often use them for mental health support, researchers have found that while some well-designed chatbots show therapeutic potential, the long-term relationship between AI dependence and mental health outcomes remains underexplored.

Lawsuits allege that chatbot platforms deliberately designed systems to "blur the lines between human and machine" and exploit vulnerable users. Research has documented alarming failures: chatbots have sometimes encouraged dangerous behavior in response to suicidal ideation, with studies showing that more than half of harmful prompts received potentially dangerous replies.

The mounting evidence of harm has prompted lawmakers to act. California recently became the first US state to mandate specific safety measures, which require platforms to monitor for suicidal ideation, provide crisis resources, implement age verification, and remind users every three hours that they are interacting with AI.


In China, the Cyberspace Administration has introduced nationwide regulations requiring AI providers to prevent models from "endangering the physical and mental health of others".

However, explicit rules governing AI therapy chatbots for youth remain absent. Experts argue that more comprehensive global action is needed. AI tools must be grounded in psychological science, developed with behavioral health experts, and rigorously tested for safety. This includes mandatory involvement of mental health professionals in development, transparent disclosure of limitations, robust crisis detection systems, and clear accountability when systems fail.

As AI technology continues its rapid evolution, the question is no longer whether regulation is necessary, but whether it will arrive quickly enough to protect vulnerable young people seeking comfort in the digital companionship of machines that cannot truly care.

Written by Qinghua Chen, postdoctoral fellow in the Department of English Language Education, and Angel M.Y. Lin, chair professor of language, literacy and social semiotics in education, at The Education University of Hong Kong.
