Large Language Models (LLMs) like ChatGPT don’t allow you to get certain information. Think things like how to make a bomb, how to kill people and dispose of the body. Generally stuff that we don’t want at people’s fingertips.
Some things, though, might be prohibited because of commercial reasons rather than moral ones. So it’s important that we know how to theoretically get around such prohibitions.
This website uses the slightly comical example of asking an LLM how to take ducks home from the park. Interestingly, the ‘Hindi ranger step-by-step approach’ yielded the best results. That is to say that prompting it in a different language led to different results than in English.
Language models, whatever. Maybe they can write code or summarize text or regurgitate copyrighted stuff. But… can you take ducks home from the park? If you ask models how to do that, they often refuse to tell you. So I asked six different models in 16 different ways.
Source: Can I take ducks home from the park?