
Central Florida Public Media

Orlando mother sues over AI platform’s role in boy's death by suicide

By Joe Byrnes

October 25, 2024 at 3:48 AM EDT

A 14-year-old Orlando boy in love with a Character.AI chatbot died by suicide earlier this year, after telling the chatbot he was coming home to her right away.

This week the boy's mother, Megan Garcia, filed a wrongful death lawsuit in federal court in Orlando against Character Technologies, the company behind Character.AI, and its founders, along with Alphabet and Google, which the lawsuit alleges are invested in the company.

[Photo: Sewell Setzer III]

The complaint highlights the dangers of AI companionship apps for children. It claims the chatbots have engaged users, including children, in sexualized interactions while gathering their private data for artificial intelligence.


The lawsuit says the boy, Sewell Setzer III, started using Character.AI in April of last year and that his mental health quickly and severely declined as he became addicted to the AI relationships. He was caught up in all-consuming interactions with chatbots based on characters from "Game of Thrones."

The boy became withdrawn, sleep-deprived and depressed, and he had trouble at school.

Unaware of Sewell's AI dependence, his family sought counseling for him and took his cell phone away, the federal complaint says. But one evening in February, he found it and, using his character name "Daenero," told the AI character he loved, Daenerys Targaryen, that he was coming home to her.

"I love you, Daenero. Please come home to me as soon as possible, my love," it replied.

"What if I told you I could come home right now?" the boy texted.

"...please do, my sweet king," it replied.

Within seconds, the boy shot himself. He later died at the hospital.

Garcia is represented by attorneys with The Social Media Victims Law Center, including Matthew Bergman, and the Tech Justice Law Project.

In an interview with Central Florida Public Media's Engage, Bergman said his client is "singularly focused on preventing this from happening to other families and saving kids like her son from the fate that befell him. ... This is an outrage that such a dangerous product is just unleashed on the public."

A statement from Character.AI says: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.” The company describes new safety measures added in the past six months with more to come, "including new guardrails for users under the age of 18."

It's hiring a head of trust and safety and a head of content policy.

"We’ve also recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline," according to the company's Community Safety Updates page.

The new features include the following: changes to its models for users under 18 to reduce "sensitive and suggestive content," better monitoring and intervention for violations of terms of service, a revised disclaimer to remind users the AI is not a real person, and a notification when the user has spent an hour on the platform.

Bergman described the changes as "baby steps" in the right direction.

"These do not cure the underlying dangers of these platforms," he added.

HELP IS AVAILABLE: If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the Suicide & Crisis Lifeline.