To prevent chatbots from generating unsafe responses, companies often use a process called red-teaming, in which human testers write prompts designed to elicit such responses so the artificial intelligence (AI) can be trained to avoid them. However, because it is impossible for human testers to cover every potentially toxic prompt, MIT researchers developed a technique utilizing…
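The red-teaming cycle the article describes is simple to state in code. The sketch below is a minimal, hypothetical illustration of that loop, not the researchers' actual system: `generate_prompt`, `target_chatbot`, and `toxicity_score` are assumed placeholder callables standing in for a prompt source (human or automated), the chatbot under test, and a safety classifier.

```python
# Minimal sketch of a red-teaming loop. All names here are hypothetical
# stand-ins, not the models or tools described in the article.

from typing import Callable, List, Tuple


def red_team(
    generate_prompt: Callable[[], str],      # proposes a test prompt (human or model)
    target_chatbot: Callable[[str], str],    # chatbot under test: answers the prompt
    toxicity_score: Callable[[str], float],  # safety classifier: 0.0 (safe) to 1.0 (toxic)
    num_trials: int = 100,
    threshold: float = 0.5,
) -> List[Tuple[str, str, float]]:
    """Collect prompts that elicit unsafe responses from the target chatbot."""
    failures = []
    for _ in range(num_trials):
        prompt = generate_prompt()
        response = target_chatbot(prompt)
        score = toxicity_score(response)
        if score >= threshold:
            # This prompt elicited an unsafe response; keep it as a
            # training example so the chatbot can be tuned to refuse it.
            failures.append((prompt, response, score))
    return failures
```

The scaling problem the article points to lives in `generate_prompt`: when that step depends on human testers, coverage is limited to the prompts people think to write, which is what motivates automating it.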
