(Reuters) – OpenAI is rolling out parental controls for ChatGPT on the web and on mobile devices, after a lawsuit filed by the parents of a teenager who died by suicide alleged that the artificial intelligence startup's chatbot coached him on self-harm methods.
The company said on Monday that the controls will allow parents and teenagers to link their accounts to provide stronger protections for teens.
US regulators have been stepping up scrutiny of AI companies over the potential negative impacts of chatbots. In August, Reuters reported that Meta's AI guidelines had allowed its chatbots to engage in "sensual" conversations with children.
With the new measures, parents will be able to reduce exposure to sensitive content, control whether ChatGPT remembers previous conversations, and decide whether conversations can be used to train OpenAI's models, the Microsoft-backed company said in a post on X.
Parents will also be able to set quiet hours that block access at certain times, and to disable voice mode as well as image generation and editing, OpenAI said. However, parents will not have access to transcripts of their teens' conversations, the company added.
In rare cases where its systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen's safety, OpenAI said.
Meta also announced new protections for teenagers in its AI products last month. The company said it would train its systems to avoid romantic conversations and discussions of self-harm with minors, and would temporarily restrict access to certain AI characters.
(Reporting by Jasphet Singh in Bengaluru)