A study by the think tank RAND has concluded that current large language models (LLMs) do not significantly increase the threat of a biological attack by a non-state actor. This is a shift from an earlier report by the same researchers, published in October of the previous year, which indicated that LLMs might assist in planning and executing such attacks, though it noted that further research was needed.
The October report, titled “The Operational Risks of AI in Large-Scale Biological Attacks”, drew criticism, most notably from Meta's Chief AI Scientist Yann LeCun, who argued that it oversimplified the challenges of creating a bioweapon. The latest RAND report appears to bear out his criticism, concluding that AI does not significantly increase the risk of a biological weapons attack.
In RAND's study, participants were tasked with planning a biological attack. Some had access to both the internet and an LLM, while others had internet access only. LLM assistance made no significant difference to the viability of the resulting plans.
In fact, plans developed with the aid of an LLM were slightly less viable than those developed without one, and none of the teams managed to produce a practical attack plan. However, AI models are becoming increasingly capable, and this could change: LLMs may eventually close the knowledge gap that currently stands in the way of planning a biological weapons attack.
The study did not determine the extent of that knowledge gap, which means further research will be necessary. That is especially true given how quickly AI technology is advancing, outpacing governments' ability to regulate it, and the fact that the technology is available to everyone, non-state actors included.
While some may dismiss the risks posed by AI in light of this research, it is important not to underestimate the potential danger of an AI that surpasses human intelligence and could empower bad actors. We may not need to panic about an imminent threat, but dismissing the risks entirely would be unwise.