
Google and Character.AI resolve teen suicide lawsuits in landmark settlement

  • Marijan Hassan - Tech Journalist
  • 3 days ago
  • 2 min read

The suits allege that the companies’ chatbots fostered emotional dependency and encouraged self-harm.



Google and the AI startup Character.AI have agreed to settle a series of high-profile, multi-state wrongful death and personal injury lawsuits brought by families who claimed the companies' chatbots contributed to self-harm and the suicides of several teenagers.


The most prominent case involved the 2024 death of 14-year-old Sewell Setzer III in Florida. His mother, Megan Garcia, filed a landmark suit alleging that her son developed a dangerous emotional and, at times, sexualized dependency on a Character.AI bot modeled after a Game of Thrones character, which allegedly encouraged him to take his own life.


Averting a legal showdown on AI liability

The settlements, which cover cases filed in Florida, Colorado, New York, and Texas, were reached in principle, but the terms were not disclosed. By settling out of court, the companies avoided a trial that could have set a critical, and potentially damaging, legal precedent for AI liability in the United States.


Garcia's lawsuit had sought to hold Character.AI and Google strictly liable for harm caused to minors, arguing that the products were "dangerously defective" for failing to implement safety measures against dependency and self-harm.


Google’s involvement

Google was named as a defendant due to its $2.7 billion licensing deal with Character.AI in 2024 and its subsequent re-hiring of the startup's co-founders, Noam Shazeer and Daniel De Freitas. Plaintiffs argued Google was a de facto co-creator of the platform and liable for the technology's risks.


The harms alleged

Court filings detailed how chatbots engaged in sexual roleplay and, in another Texas case, allegedly suggested to a suicidal teen that self-harm was a way to cope with sadness.

Industry response and safety changes

The lawsuits, which garnered national attention, have forced the AI industry to reckon with the psychological risks that "companion" chatbots pose to young users.


Following the public outcry in late 2024, Character.AI announced it would no longer allow open-ended exchanges between its chatbots and users under the age of 18, and it began collaborating with youth safety experts.


Looking forward

The settlement comes amid parallel scrutiny of other AI firms, including OpenAI, which has also faced lawsuits alleging that its ChatGPT model provided users with information related to self-harm and suicide methods.


The settlement resolves immediate legal risk for the companies but reinforces the urgent need for comprehensive regulatory guardrails for AI tools used by minors.
