A group of renowned scientists has launched a voluntary initiative to establish ethical standards, principles, and commitments for AI in protein design. The proposal addresses the potential misuse of AI tools capable of creating new proteins rapidly and effectively. AI could help solve significant global problems, such as pandemics and sustainable energy, but it also raises concerns about malicious use, including the production of novel bioweapons.
Proteins designed with AI tools such as DeepMind's AlphaFold system and methods developed at the University of Washington School of Medicine show high affinity and specificity. But these proteins could interact unexpectedly with other molecules in the body, causing unwanted side effects or triggering new diseases. Another significant concern is the potential for abuse of this technology: AI-designed proteins could be engineered to cause harm, for example by targeting a specific ethnic group or exploiting a unique genetic vulnerability.
David Baker, a computational biophysicist at the University of Washington and a central figure in the initiative, has raised the question of how AI in protein design should be regulated and what its dangers might be. The letter, signed by more than 100 scientists, states that the benefits of AI in protein design outweigh the risks but acknowledges the need for a proactive approach to risk management.
According to the letter, the anticipated advances in the field will require a proactive risk-management strategy to mitigate the potential harms of emerging AI technologies. The initiative sets out values and principles for responsible AI development in protein design, including safety, security, equity, global collaboration, openness, and responsibility. Signatories have voluntarily accepted a set of commitments based on these standards.
A crucial part of the initiative is enhancing DNA synthesis screening, an essential step in turning AI-designed proteins into physical molecules. Signatories commit to using DNA synthesis services only from providers that follow industry-standard biosecurity screening practices, which detect harmful biomolecules before they can be produced.
The scientists also agree to collaborate with stakeholders worldwide and to refrain from research likely to cause harm or enable misuse of their technologies. AI has already driven substantial progress in medicine and biotechnology, including the discovery of new antibiotics and anti-aging drugs, and these commitments aim to ensure its continued safe and beneficial use in these vital areas.