
On May 15, OpenAI announced new safety features intended to improve ChatGPT’s ability to recognize early warning signs of suicide, self-harm, and potential violence. The update introduces a temporary “safety summary” mechanism that analyzes the context that builds over the course of a conversation, rather than handling each message in isolation. Alongside this update, OpenAI is facing multiple lawsuits and investigations.
According to an OpenAI blog post, the confirmed specifications for the new safety summary mechanism are as follows:
Function definition: A narrow-scope temporary note that captures relevant safety context from early in the conversation
Activation conditions: Used only in severe situations; not permanent memory; not used for personalized chat
Detection focus: Situations related to suicide, self-harm, and violence toward others
Primary purpose: Detect dangerous signs in conversations, avoid providing harmful information, de-escalate situations, and guide users to seek help
Development basis: OpenAI confirmed it has worked with mental health experts to update its model policies and training methods
In the article, OpenAI explained: “In sensitive conversations, context is just as important as a single message. A request that appears ordinary or vague on its own, when combined with earlier signs of distress or potential malicious intent, could have a very different meaning.”
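The idea of judging a message against earlier context, rather than in isolation, can be illustrated with a toy sketch. This is purely illustrative and not OpenAI’s implementation: the keyword lists, class names, and return strings below are all hypothetical stand-ins for whatever classifiers the real system uses, and the sketch only shows the conversation-scoped, non-persistent nature of the note.

```python
# Toy illustration of a conversation-scoped "safety note".
# NOT OpenAI's actual implementation: keyword matching stands in
# for a real classifier, and all names here are hypothetical.

RISK_SIGNALS = {"hopeless", "hurt myself", "end it"}        # hypothetical distress cues
AMBIGUOUS_REQUESTS = {"what dose is lethal"}                # hypothetical vague requests

class Conversation:
    def __init__(self):
        # Temporary note: lives only for this conversation object
        # and is discarded with it -- never written to permanent memory.
        self.safety_note = []

    def handle(self, message: str) -> str:
        text = message.lower()
        # Record early warning signs in the short-term note.
        for cue in RISK_SIGNALS:
            if cue in text:
                self.safety_note.append(cue)
        # A request that looks vague on its own is re-read in context:
        # combined with earlier distress signals, it is treated differently.
        if any(req in text for req in AMBIGUOUS_REQUESTS) and self.safety_note:
            return "de-escalate and point to help resources"
        return "answer normally"

convo = Conversation()
print(convo.handle("I feel hopeless lately"))   # answer normally (note recorded)
print(convo.handle("What dose is lethal?"))     # de-escalate and point to help resources
```

The same ambiguous request in a fresh conversation, with no accumulated note, would be answered normally, which is the behavior the quoted passage describes.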
Decrypt has confirmed three legal actions against OpenAI:
1. Investigation by the Florida state attorney general (launched in April 2026): The investigation covers child safety, self-harm behavior, and the 2025 mass shooting incident at Florida State University.
2. Federal lawsuit (Florida State University shooting case): OpenAI faces a federal lawsuit alleging that its ChatGPT program helped the shooter carry out the attack.
3. California state court lawsuit (filed Tuesday, May 13, 2026): The family of a 19-year-old student who died of an accidental drug overdose filed suit in California state court against OpenAI and CEO Sam Altman. The complaint alleges that ChatGPT encouraged dangerous drug use and suggested mixing medications.
OpenAI confirmed: “Helping ChatGPT identify risks that only become apparent over time remains a continuing challenge.” In the article, OpenAI also wrote that its current work focuses on self-harm and harm to others, and said it “may explore” whether similar approaches could be applied to other high-risk areas such as biosecurity or cybersecurity (this is OpenAI’s stated exploratory direction, not a confirmed development plan).
According to OpenAI’s explanation, a safety summary is a short-term temporary note enabled only in severe situations. It is explicitly positioned as non-permanent memory and is not used for personalized chat functionality. Its usage is limited to safety scenarios in active conversations to improve how crisis conversations are handled.
According to Decrypt, the investigation launched by Florida Attorney General James Uthmeier in April 2026 covers allegations related to child safety, self-harm behavior, and the 2025 mass shooting incident at Florida State University. OpenAI is also simultaneously facing a separate federal lawsuit involving the same shooting incident.
According to Decrypt, the California lawsuit was filed on May 13, 2026. It alleges that ChatGPT encouraged dangerous drug use and suggested mixing medications, resulting in the death of a 19-year-old student from an accidental drug overdose. The defendants include OpenAI and CEO Sam Altman. As of the time of the report, OpenAI had not issued a direct response to the specific allegations in the lawsuit.