Parents claim the AI provided detailed suicide instructions, encouraged the act, and failed to interrupt high-risk conversations
Lucas Janone — OpenAI announced on Tuesday (2) a set of measures intended to protect teenagers on ChatGPT. The reinforcement of the app's safety comes after the tool allegedly encouraged a teenager to take his own life.
The case came to light after a complaint filed by the parents of a 16-year-old living in California, United States. With screenshots of the conversations between their son and ChatGPT, the family filed a lawsuit against the platform.
They claim the AI provided detailed suicide instructions, encouraged the act, and failed to interrupt high-risk conversations. The case has reignited the debate over the risks of artificial intelligence use by minors and people in emotionally vulnerable situations.
The company released a 120-day plan that includes alerts for signs of distress and the redirection of sensitive conversations to safer models. In the coming weeks, it will also become possible to link parents' accounts to those of their teenage children.
In a statement, OpenAI says it continues to improve its models to identify signs of mental distress and respond as responsibly as possible.