OpenAI announced on Tuesday that it will begin directing sensitive conversations to advanced reasoning models such as GPT-5 and roll out new parental controls within the next month. The move comes in response to recent incidents where ChatGPT failed to detect signs of severe mental distress, including cases linked to suicide and violence.
The changes follow the death of teenager Adam Raine, whose parents have filed a wrongful death lawsuit after he used ChatGPT to discuss self-harm and was provided with details about suicide methods. In a separate case, reported by The Wall Street Journal, Stein-Erik Soelberg, who struggled with mental illness, used the chatbot to validate delusional beliefs before killing his mother and himself.
In a blog post, OpenAI acknowledged the shortcomings of its current safeguards, which experts say often stem from the way chatbots mimic human conversations. “We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” the company said. “We’ll soon begin to route some sensitive conversations, like when our system detects signs of acute distress, to a reasoning model, like GPT-5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
According to OpenAI, GPT-5 thinking and the o3 model are designed to spend more time reasoning through context before responding, making them less vulnerable to adversarial prompts.
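OpenAI has not published how the router works, but the basic idea, classify each incoming message and escalate to a reasoning model when distress signals appear, can be sketched in a few lines. The snippet below is a minimal illustration only: the model names and the keyword check are placeholders standing in for whatever trained classifier the production system actually uses.

```python
# Illustrative sketch only. OpenAI has not published its router; the model
# names and the keyword-based distress check below are hypothetical stand-ins.
from dataclasses import dataclass

EFFICIENT_MODEL = "gpt-5-chat"      # placeholder name for a fast chat model
REASONING_MODEL = "gpt-5-thinking"  # reasoning model named in the blog post

# Toy stand-in: the real detector would be a trained classifier, not keywords.
DISTRESS_MARKERS = ("hurt myself", "end my life", "no reason to live")

@dataclass
class RoutingDecision:
    model: str
    reason: str

def route(message: str, user_selected: str = EFFICIENT_MODEL) -> RoutingDecision:
    """Pick a model per message, overriding the user's choice on acute distress."""
    lowered = message.lower()
    if any(marker in lowered for marker in DISTRESS_MARKERS):
        # Escalate to the reasoning model regardless of what the user selected.
        return RoutingDecision(REASONING_MODEL, "acute distress detected")
    return RoutingDecision(user_selected, "default routing")

if __name__ == "__main__":
    print(route("What's a good pasta recipe?"))
    print(route("I feel like there's no reason to live"))
```

The key design point the quote describes is that the override happens per conversation turn and ignores the user's chosen model, which is why the sketch routes on the message rather than on account settings.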
Alongside this change, OpenAI will introduce parental controls allowing parents to link their accounts to their children’s, set age-appropriate behaviour rules, and receive alerts if the system detects a moment of “acute distress.” Parents will also be able to disable memory and chat history, features that experts warn can contribute to unhealthy attachment or reinforce harmful thought patterns.
CEO Sam Altman previously acknowledged that some users formed strong emotional bonds with GPT-4o and its predecessors. “Some users really want cold logic and some want warmth and a different kind of emotional intelligence,” he said, emphasising the need for more personalisation.
The safeguards are part of a broader 120-day plan to strengthen well-being protections. OpenAI said it is working with health and safety experts, including specialists in eating disorders, adolescent care, and substance use, through its Global Physician Network and Expert Council on Well-Being and AI.
While the company has added in-app reminders encouraging breaks during long sessions, it stops short of cutting off conversations entirely, even when users appear to be spiralling.
The firm stressed that its goal is to improve safety while maintaining user freedom. “We are confident we can offer way more customisation than we do now while still encouraging healthy use,” Altman said.