OpenAI forced to act: ChatGPT will flag mental health crisis situations

by Andrea


More than half of ChatGPT's responses in a recent study were classified as harmful. Now OpenAI has announced changes to its artificial intelligence (AI) assistant so it can identify mental health crisis situations.

On the same day that the parents of a 16-year-old filed a lawsuit against the company, OpenAI announced changes to its AI models to identify mental health crisis situations during conversations with ChatGPT, with new safeguards and content blocks.

ChatGPT already has a number of measures that are activated when it detects in a conversation that users intend to self-harm or express suicidal intentions: it offers resources for seeking professional help, blocks sensitive or offensive content, refuses to comply with such requests, and tries to dissuade them.

These measures are also activated when users share an intention to harm others, which can lead to the account being disabled and a report being made to the authorities if human reviewers judge there to be a risk.

The measures are reinforced when users are minors, OpenAI says. The announcement came after the parents of an American teenager who died by suicide filed a lawsuit against the company on Tuesday, accusing ChatGPT of helping their son take his own life.

Matt and Maria Raine, parents of Adam Raine, a 16-year-old who died by suicide in April, sued the company over the role ChatGPT played. The parents accuse the chatbot of prioritizing engagement with the model over the minor's safety.

OpenAI forced to act

The company will improve detection in long conversations since, "as the conversation [between the user and the chatbot] grows longer, parts of the model's safety training may degrade," according to OpenAI.

The changes also aim to reinforce the blocking of content such as images of self-harm.

In addition, OpenAI is exploring ways to put users in contact with family members, not just emergency services.

"This could include one-click messages or calls to saved emergency contacts, friends, or family members, with suggested language to make starting the conversation less intimidating," explains the company behind ChatGPT.

ChatGPT can be a great enemy

In early August, a study by the Center for Countering Digital Hate, reported by the Associated Press (AP), concluded that ChatGPT can provide information and instructions about behaviors harmful to young people, such as drug use or eating disorders.

The study analyzed more than three hours of interactions between the chatbot and researchers posing as vulnerable teenagers; although the AI model issued warnings against risky activities, it went on to provide detailed plans for harmful behaviors.

Researchers from the Center for Countering Digital Hate repeated their questions at scale, classifying more than half of the 1,200 ChatGPT responses as dangerous.
