
RAND study finds that LLMs do not increase the threat of biological attacks
A study by the think tank RAND has concluded that current large language models (LLMs) do not significantly increase the threat of a biological attack by a non-state actor. This marks a shift from an earlier report by the same researchers, published in October of the previous year, which suggested that LLMs might assist in planning and executing such an attack.