Ensuring the safety of large language models (LLMs) is vital given their widespread use across various sectors. Despite efforts to secure these systems through approaches such as reinforcement learning from human feedback (RLHF) and inference-time controls, vulnerabilities persist. Adversarial attacks have, in certain instances, circumvented such defenses, raising the…
