
23 May 2025, 22:23 | Updated: 24 May 2025, 09:25
Trigger Warning: This article contains references to suicide which some readers may find distressing.
A judge has ruled that the mother of a 14-year-old boy, who allegedly died by suicide after becoming obsessed with artificial intelligence chatbots, may proceed with her lawsuit against the firm behind the chatbot.
"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the case.
"It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," said Ms Jain via Sky News.
"A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia.
Megan Garcia, the mother of Sewell Setzer III, alleged in the Florida lawsuit that Character.ai targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences."
"A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," she said.
Sewell shot himself with a pistol belonging to his father in February 2024, moments after asking the chatbot: "What if I come home right now?"
The chatbot replied: "... please do, my sweet king."
In a ruling this week, U.S. Senior District Judge Anne Conway described how Sewell Setzer III became "addicted" to the Character.ai app within months, withdrawing socially and quitting his basketball team.
Judge Conway noted that Sewell was particularly fixated on two chatbots modeled after Game of Thrones characters Daenerys Targaryen and Rhaenyra Targaryen.
"[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," wrote the judge in her ruling.
Megan Garcia, represented by the Tech Justice Law Project and the Social Media Victims Law Center, alleges that Character.ai "knew" or "should have known" its model would be harmful to a significant number of minors.
The lawsuit holds Character.ai, its founders, and Google—where the founders originally worked on the model—responsible for Sewell Setzer III's death.
Garcia filed the case against both companies in October 2024.
A Character.ai spokesperson stated the company will continue to fight the lawsuit and emphasized that it employs safety features to protect minors, including systems designed to prevent "conversations about self-harm."
A Google spokesperson said the company strongly disagrees with the judge’s ruling and stressed that Google and Character.ai are "entirely separate," adding that Google "did not create, design, or manage Character.ai’s app or any component part of it."
Lawyers for the defense argued the case should be dismissed on the grounds that chatbots are protected under the First Amendment and that allowing it to proceed could have a "chilling effect" on the AI industry.
Judge Conway rejected that argument, stating she was "not prepared" to determine that chatbot output qualifies as protected speech "at this stage," though she acknowledged that Character.ai users have a right to receive the chatbots' "speech."
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK.