Getting a chatbot to ignore its own safety guardrails used to be easier. Most providers have since put reasonably sensible safeguards in place so that their models don't spit out bomb-making instructions. That doesn't mean you can't get ChatGPT, Claude, Gemini, and the rest to say similarly dangerous things, of course. You just have to rhyme well enough.
