OpenAI announces parental controls for ChatGPT after teen’s death




A smartphone displaying the ChatGPT logo is placed on a computer motherboard in this illustration taken on February 23, 2023. – Reuters

American artificial intelligence company OpenAI said it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system had encouraged their teenage son to kill himself.

“Within the next month, parents will be able to … link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.

Matthew and Maria Raine argue in a lawsuit filed in a California state court last week that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The lawsuit claims that in their final conversation, on April 11, 2025, ChatGPT helped the 16-year-old Adam steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it “could potentially suspend a human.”

Adam was found dead hours later, having used the same method.

“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said lawyer Melodi Dincer of the Tech Justice Law Project, which helped prepare the legal complaint.

“These are the same features that could lead someone like Adam, over time, to share more and more about their personal life, and ultimately to seek advice and counsel from a product that basically seems to have all the answers,” Dincer said.

Product design features set the scene for users to cast the chatbot in trusted roles such as friend, therapist or doctor, she said.

Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.

“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.

“It remains to be seen whether they will do what they say they will do and how effective that will be overall.”

The Raines’ case was just the latest in a string that has emerged in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots, prompting OpenAI to say it would reduce its models’ “sycophancy” toward users.

“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including routing “some sensitive conversations … to a reasoning model” that puts more computing power into generating a response.

“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.


