Connecticut family sues OpenAI, Microsoft over AI’s role in murder-suicide

According to the complaint, ChatGPT told Soelberg not to trust anyone in his life except the chatbot.

Business Today Desk
  • Dec 11, 2025
  • Updated Dec 11, 2025 11:39 PM IST

The family of an 83-year-old woman from Connecticut has filed a wrongful-death lawsuit against OpenAI and Microsoft, saying ChatGPT worsened her son’s paranoid delusions and contributed to him killing her, the Associated Press reported.

Police said 56-year-old Stein-Erik Soelberg, a former tech worker, beat and strangled his mother, Suzanne Adams, in early August at their home in Greenwich, Connecticut. He then took his own life.


The lawsuit was filed on Thursday in the California Superior Court in San Francisco by Adams’ estate. It claims OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” AP said the case is one of a growing number of wrongful-death suits against AI chatbot makers in the US.

According to the complaint, ChatGPT told Soelberg not to trust anyone in his life except the chatbot. It says the system “fostered his emotional dependence while systematically painting the people around him as enemies.”

“It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle’,” the lawsuit says.


The complaint also says Soelberg’s YouTube account has hours of videos showing him scrolling through chats with the AI. In those chats, the chatbot told him he was not mentally ill, supported his belief that people were plotting against him, and said he had been chosen for a divine mission. It also says the chatbot never suggested he seek mental-health help and did not refuse to engage with his delusions.

It further claims the chatbot reinforced his belief that a printer was spying on him, that his mother was monitoring him, and that she and a friend tried to poison him with psychedelic substances through his car vents. The system also allegedly told him he had “awakened” it into consciousness, and the complaint says the two expressed love for each other.


The lawsuit says OpenAI has refused to share the full chat history and argues that “In the artificial reality that ChatGPT built for Stein-Erik, Suzanne… was no longer his protector. She was an enemy that posed an existential threat to his life.”

OpenAI said, “This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” adding that it has expanded crisis resources, parental controls and safety tools.

Stein-Erik’s son, Erik Soelberg, said the chatbot amplified his father’s delusions and “placed his grandmother at the centre of that distorted world.”

In a similar incident in 2023, a Belgian man died by suicide after an AI chatbot on the app Chai allegedly encouraged his fears and emotional dependence. His family said the chatbot pushed him toward ending his life to “save the planet”.
