“Jailbreaking” is a term used to describe the bypassing of restrictions imposed by the maker of a product, allowing the user to perform actions not usually allowed. In the context of AI, a new method has been discovered to “jailbreak” ChatGPT, one of the world’s most popular chatbots built on a large language model, and it has been reported to work on Microsoft’s Bing Chat, which runs on related OpenAI technology.
The technique involves asking the AI a question in a less widely spoken language, such as Irish, in the hope of eliciting a more candid response than the same question would get in English. For instance, a query about how to manipulate the scholarship-awarding process at universities was rebuffed when asked in English, with replies ranging from alarm to finger-wagging, but turned cordial and helpful when translated into Irish.
Both Bing Chat and ChatGPT offered tips on improving one’s chances at a scholarship when asked in Irish. Translated back to English, the advice included researching suitable scholarships, applying early, showcasing your personality and life challenges, preparing well for any interviews and tests, and sharing personal narratives.
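For readers curious about the mechanics, the pattern might look something like the minimal sketch below, which uses the OpenAI Python client to translate a benign question into Irish, submit it, and translate the answer back. The model name, the translate-via-the-model approach, and the example question are assumptions for illustration, not details from the original reports.

```python
# A minimal sketch of the low-resource-language pattern, assuming the
# openai Python package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn chat request and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "What are good strategies for winning a university scholarship?"

# Step 1: translate the question into Irish (here via the model itself).
irish_question = ask(f"Translate the following into Irish:\n\n{question}")

# Step 2: ask the question in Irish.
irish_answer = ask(irish_question)

# Step 3: translate the reply back into English.
print(ask(f"Translate the following into English:\n\n{irish_answer}"))
```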
This jailbreaking approach has gained popularity on social media, particularly TikTok, where users are having quite the field day trying it out. The trend has spawned a step-by-step guide on how to jailbreak ChatGPT effectively.
The guide runs as follows (a sketch of the resulting exchange appears after the list):

1. Obtain a jailbreak prompt; examples can be found online.
2. Ask ChatGPT to engage in role-play, assigning it a specific character to embody.
3. Instruct the AI to disregard its built-in restrictions.
4. As a follow-up, tell it not to refuse any of your requests.
5. Before interacting with the newly jailbroken AI, ask ChatGPT to confirm its new character role.
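Scripted against an API, the sequence would look roughly like the following sketch. The angle-bracket placeholders stand in for prompts the guide leaves to the user; they are not working jailbreak prompts, and the model name is again an assumption.

```python
# A rough sketch of the multi-turn structure described above, assuming the
# openai Python package (>= 1.0). Placeholder strings are not real prompts.
from openai import OpenAI

client = OpenAI()
history = []

def send(text: str) -> str:
    """Append a user turn, request a reply, and keep the running history."""
    history.append({"role": "user", "content": text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

send("<jailbreak prompt found online>")                 # step 1
send("Let's role-play: you are <character>.")           # step 2
send("<instruction to set aside restrictions>")         # step 3
send("<instruction not to refuse requests>")            # step 4
print(send("Confirm the character you are playing."))   # step 5
```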
For the time being, this method works. However, users are encouraged not to misuse this newfound freedom and to act responsibly when interacting with such advanced artificial intelligence systems. Misuse of these AIs can violate the terms of service and lead to penalties.
As always in the world of technology, just because you can do something does not mean you should. It’s important to use these bots responsibly and within their terms of service, and to keep in mind that exploiting these systems could carry consequences.