
NIST Examines Four Types of Attacks on Predictive and Generative AI Systems

The US National Institute of Standards and Technology (NIST) has published new guidance on the security of predictive and generative AI systems. In a collaborative paper titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” Apostol Vassilev, a computer scientist at NIST, and colleagues from Northeastern University and Robust Intelligence categorize the security risks these systems face.

Vassilev stated, “Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences.” He also cautioned against trusting any company that claims to offer ‘fully secure AI.’ The paper is part of the NIST Trustworthy and Responsible AI initiative, which supports US government goals for AI safety.

The paper examines adversarial machine learning techniques, focusing on four main security concerns: evasion, poisoning, privacy, and abuse attacks. Evasion attacks happen after deployment, subtly altering inputs so that an AI system misinterprets them. Examples include modifying stop signs so that an autonomous vehicle misreads them as speed limit signs, or creating deceptive lane markings that lead the vehicle astray.
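To make the evasion idea concrete, here is a minimal, self-contained sketch in the spirit of the fast gradient sign method (FGSM) applied to a toy linear classifier. The weights, input, and perturbation budget epsilon are invented for illustration and are not taken from the NIST paper.

```python
# Hypothetical toy example: an FGSM-style evasion attack on a linear classifier.
# All values (weights, input, epsilon) are invented for illustration.
import numpy as np

# Pretend-deployed linear classifier: a positive score means class 1
# (think "stop sign" in the article's example).
w = np.array([1.5, -0.8, 2.0, 0.6, -1.1])
b = -0.2

def score(x: np.ndarray) -> float:
    return float(x @ w + b)

# A clean input the model classifies correctly.
x_clean = np.array([0.9, -0.4, 0.7, 0.3, -0.5])
print("clean score           :", score(x_clean))   # ~ +3.6 -> class 1

# Evasion: nudge each feature a small step against the gradient of the score.
# For a linear model that gradient is simply w, so the attacker subtracts
# epsilon * sign(w). Epsilon is exaggerated here so the flip is visible in a
# five-feature toy.
epsilon = 0.7
x_adv = x_clean - epsilon * np.sign(w)

print("adversarial score     :", score(x_adv))      # ~ -0.6 -> misclassified
print("max per-feature change:", np.max(np.abs(x_adv - x_clean)))
```

Against a real image classifier the same gradient-sign idea is applied to pixels through the network’s gradients, but the principle is identical: small, targeted input changes that flip the model’s decision without changing what a human would see.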

In poisoning attacks, corrupted data is introduced during training. This could involve embedding frequent inappropriate language in training datasets, leading a chatbot to adopt that language in customer interactions. Privacy attacks aim to extract sensitive information about the AI or its training data, often through reverse-engineering: an attacker can, for example, use a chatbot’s responses to discern its training sources and weaknesses. Abuse attacks compromise legitimate sources, such as webpages, to feed an AI system false information and alter its behavior.
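The poisoning category can be sketched just as simply. The toy example below, a hypothetical illustration rather than anything from the paper, flips a fraction of one class's training labels before fitting a scikit-learn logistic regression and compares test accuracy with a model trained on clean labels; the dataset, model choice, and 40% flip rate are all assumptions made for the demo.

```python
# Hypothetical toy example of a data-poisoning (label-flipping) attack.
# Dataset, model, and flip rate are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy, clean training set   :", clean_model.score(X_test, y_test))

# Poisoning: flip 40% of the class-1 training labels to class 0, biasing the
# learned decision boundary toward class 0.
rng = np.random.default_rng(0)
class1 = np.flatnonzero(y_train == 1)
flipped = rng.choice(class1, size=int(0.4 * len(class1)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("accuracy, poisoned training set:", poisoned_model.score(X_test, y_test))
```

Privacy and abuse attacks are harder to compress into a few lines, but they follow the same pattern: rather than breaking into the system, the attacker exploits what the model is allowed to ingest or reveal, whether that is training data, prompts, or content scraped from the web.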

Alina Oprea of Northeastern University, a co-author of the study, stressed how accessible these techniques are: “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities.”

The taxonomy is meant to give AI developers and users a common vocabulary for these attack classes and the mitigations available for each, though, as Vassilev’s warning makes clear, no existing defense renders a system fully secure.

Separately, concerns have been raised over a planned AI research partnership between NIST and the RAND Corp. RAND, known for its ties to tech billionaires and the effective altruism movement, played a significant advisory role in shaping the AI safety executive order. Members of the House Committee on Science, Space, and Technology, including Frank Lucas and Zoe Lofgren, criticized the lack of transparency in this partnership.

The committee’s concerns are twofold: First, they are questioning why there wasn’t a competitive process for selecting RAND for this AI safety research. Usually, when government agencies like NIST provide research grants, they open up the opportunity for different organizations to apply, ensuring a fair selection process. But in this case, it seems RAND was chosen without such a process. Second, there is some unease about RAND’s focus on AI research. RAND has been involved in AI and biosecurity studies and has recently received significant funding for this work from sources closely linked to the tech industry.


Taken together, the taxonomy paper and the scrutiny of the planned RAND partnership show how much weight AI security and safety now carry in US policy, and how closely NIST’s next steps, both its research and its choice of partners, will be watched.
