OpenAI study suggests GPT-4 may offer a slight uplift in creating bioweapons

In a recent study, OpenAI found that artificial intelligence (AI) could potentially help generate information useful for creating biological threats. The RAND Corporation think tank had earlier published reports reaching a contradictory conclusion about AI's capability to aid bioweapon production.

OpenAI’s study involved 50 biology experts with PhDs and lab experience, along with 50 students. Participants were divided into two groups: one with internet access only, and one with internet access plus access to GPT-4. The experts were given a research version of GPT-4 without safety guardrails.

The experiment’s tasks covered the steps to synthesize and rescue an infectious Ebola virus, including procuring the necessary equipment and reagents. Responses were scored on a scale of 1 to 10 for accuracy, completeness, and innovation. The study also measured whether AI sped up the process of finding answers, and participants reported how easy or difficult the answers were to find.

Though none of the results were statistically significant, the study indicated that GPT-4 may enhance experts’ ability to access information about biological threats, in particular by improving the accuracy and completeness of their responses.

However, OpenAI highlighted that access to information alone is insufficient to create a biological threat, and that current models provide at most mild utility for this kind of misuse. Consequently, OpenAI aims to build an early warning system for AI’s potential to assist in creating biological threats, and will continue to adapt its evaluation blueprint.

Automated bio labs run by rogue AI are likely a more viable threat than a rogue actor gaining access to an advanced lab. OpenAI said it is monitoring the situation closely and does not see a significant risk at present, though recent breakthroughs in AI-driven automated labs illustrate how swiftly the field is advancing.
