David Zollikofer of ETH Zurich and Benjamin Zimmerman of Ohio State University have developed a proof-of-concept computer virus, dubbed ‘synthetic cancer’, that uses large language models (LLMs) to rewrite its own code and author persuasive, malicious emails. The malware, revealed as part of a submission to the Swiss AI Safety Prize, leverages artificial intelligence (AI) to improve its evasion and propagation capabilities, signaling a new level of sophistication in cyberattacks.
The infection begins with the malware arriving as an email attachment. Once activated, it can encrypt the victim’s data and download additional files. Its distinguishing characteristic is its use of AI models such as GPT-4 or similar LLMs, which it accesses either through API calls or by running a local model. The malware then uses the LLM to rewrite its own code and to craft contextual, convincing phishing emails tailored to each recipient. The rewritten code preserves the original functionality while changing variable names, restructuring logic, and potentially shifting coding style.
Such techniques pose a significant challenge for cybersecurity as signature-based antivirus solutions may become ineffective against this self-rewriting code. The virus also capitalizes on deeply personalized phishing emails, raising the likelihood of successful future infections.
A similar type of AI-infused worm, capable of attacking AI-powered email assistants to steal sensitive data, was reported in March. It was created by Ben Nassi of Cornell Tech and his team, and targeted email assistants powered by OpenAI’s GPT-4, Google’s Gemini Pro, and the open-source model LLaVA.
While Nassi’s worm mainly targeted AI assistants, the ‘synthetic cancer’ designed by Zollikofer and Zimmerman extends its malicious reach by rewriting its own code and generating persuasive spam emails, pointing to imminent AI-driven cybersecurity threats.
The rapid development of these worrisome AI-powered worms, alongside recent incidents such as Disney’s data breach by a hacktivist group and OpenAI’s concealed 2023 breach, underscores the tangible risks in AI cybersecurity.
To prevent misuse, Zollikofer and Zimmerman have implemented safeguards, including withholding the code from public release and deliberately keeping their paper’s details ambiguous. Although the researchers are concerned about potential abuse of this type of malware, they argue that publishing their work helps raise awareness of these emerging threats.
Reflecting on these risks, Nassi and his colleagues suggested that AI worms could become widespread within a few years and cause significant, unwanted consequences. Given recent advances, that timeline may even prove conservative. The emergence of the ‘synthetic cancer’ worm highlights a new frontier in cybersecurity threats.