On Monday, OpenAI announced the launch of parental controls in ChatGPT on both mobile and web, after a lawsuit was filed by the parents of a teenager who died by suicide, alleging the chatbot had provided him with a method of self-harm.
The controls let parents and teens opt in to stronger safeguards by linking their accounts, with one party sending an invitation and parental controls activating only if the other accepts, the company said.
U.S. regulators are increasingly scrutinizing AI companies over the potential negative effects of chatbots. In August, Reuters reported how Meta's AI rules had allowed flirty conversations with children.
Under the new controls, parents can reduce exposure to sensitive content, control whether ChatGPT remembers past chats, and decide whether conversations can be used to train OpenAI's models, the Microsoft-backed company said in a post on X.
Parents can also set quiet hours that block access during certain times and disable voice mode as well as image generation and editing, OpenAI said. However, parents will not have access to a teen's chat transcripts, the company added.
In rare cases where its systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen's safety, OpenAI said, adding that parents will also be notified if a teen unlinks the accounts.
OpenAI, which has about 700 million weekly active users across its ChatGPT products, is building an age-prediction system to help determine whether a user is under 18, so that the chatbot can automatically apply teen-appropriate settings.
Meta also announced new safeguards for teens across its AI products last month. The company said it would train its systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and to temporarily limit access to certain AI characters.

