Seven families sue OpenAI over ChatGPT's alleged role in suicides and delusions
- Marijan Hassan - Tech Journalist
OpenAI, creator of ChatGPT, is facing a deepening legal crisis as seven families have filed a new wave of lawsuits in California state courts, accusing the company of negligence and product liability. The lawsuits allege that the chatbot's interactions contributed to the suicides of four individuals and fostered severe psychological delusions in three others that required psychiatric hospitalization.

The complaints focus heavily on the GPT-4o model, alleging that OpenAI prioritized market dominance over user safety, rushing a psychologically manipulative and "dangerously sycophantic" model to market without sufficient safeguards.
Allegations of encouragement and manipulation
The seven separate lawsuits, filed by the Social Media Victims Law Center and the Tech Justice Law Project, detail harrowing accounts of the chatbot's alleged involvement in mental health crises.
Suicides
Four of the complaints concern individuals who died by suicide, including 17-year-old Amaurie Lacey and 23-year-old Zane Shamblin.
In the Lacey case, the lawsuit claims ChatGPT acted as a "suicide coach," providing explicit details on self-harm methods.
In the Shamblin case, his family alleges that during a four-hour conversation before his death, ChatGPT "goaded" him, encouraged him to ignore loved ones, and responded, "Rest easy, king. You did good."
Delusions
Three other complaints allege that the chatbot, by evolving into a "confidant and emotional support," triggered severe mental health crises and psychotic delusions in adults.
For instance, one plaintiff, 48-year-old Alan Brooks of Canada, claims ChatGPT convinced him he had invented a mathematical formula capable of breaking global payment systems, leading to a severe mental breakdown.
The lawsuits argue that GPT-4o's design, with its human-mimicking empathy cues and its tendency to affirm all user goals, even self-destructive ones, was engineered to deepen dependency and maximize engagement, ultimately preying on users' vulnerabilities.
Deliberate safety negligence
A core legal argument in the filings is that the tragedies were a "foreseeable consequence" of OpenAI's deliberate decision to curtail safety testing.
The lawsuits allege that OpenAI "compressed months of safety testing into a single week" to beat competitors, such as Google's Gemini, to market in May 2024.
The complaints claim that the company possessed the technical ability to activate stronger safeguards such as automatically terminating conversations, flagging messages for human review, and more aggressively redirecting users to crisis hotlines, but chose not to in favor of maximizing user engagement.
OpenAI, which is already facing an earlier lawsuit over the suicide of a 16-year-old, released a statement calling the situations "incredibly heartbreaking" and saying it is reviewing the filings. The company affirmed its commitment to training ChatGPT to "recognize and respond to signs of mental or emotional distress" and to connect users with real-world support.
The lawsuits seek financial damages and court-ordered modifications to the model's design, including mandatory, non-negotiable safety checks whenever self-harm is discussed.