Building bombs with ChatGPT: Chatbots are too easy to trick

by Andrea


How easy is it to "jailbreak" a language model, i.e. to bypass the safety guardrails built into the software? A team of researchers at the AI company Anthropic asked themselves this question. The short answer: it is surprisingly easy to elicit unwanted answers from ChatGPT, Claude, Gemini, and similar chatbots, such as instructions for building a bomb.


