NIST Unveils a Machine Learning Instrument to Evaluate Risks Associated with AI Models

The increased use of and reliance on artificial intelligence (AI) systems has brought both benefits and risks. In particular, AI systems are vulnerable to cyber-attacks with potentially harmful consequences: their construction is complex, their internal processes are opaque, and they are frequent targets of adversarial techniques such as evasion, poisoning, and oracle attacks. These systems therefore require strong mechanisms for assessing and mitigating such risks.

Traditionally, evaluations of AI security and reliability have focused on specific attacks or defenses without accounting for the wider range of possible threats. As a result, such evaluations are often irreproducible, non-traceable, and mutually incompatible, making it difficult to compare results across studies or applications. To address these challenges, the National Institute of Standards and Technology (NIST) has developed Dioptra, a wide-ranging software platform for assessing the trustworthiness of AI.

Dioptra supports the NIST AI Risk Management Framework by providing tools to estimate, observe, and record AI risks. The initiative promotes the development of valid, reliable, safe, secure, and transparent AI systems. By offering a standardized platform for evaluating AI reliability, Dioptra addresses the limitations of current evaluation approaches.

The platform is built on a microservices architecture, allowing it to be deployed at various scales. It provides a testbed API that handles user requests and interactions, with a Redis queue and Docker containers managing experiment jobs. Dioptra also features a plugin system for integrating existing Python packages and supporting further development. Its modular design accommodates different datasets, models, attacks, and defenses, enabling comprehensive assessments. Moreover, Dioptra ensures reproducibility and transparency by creating resource snapshots and tracking the complete history of experiments and their inputs. Its user-friendly web interface and multi-tenant deployment capabilities allow components to be shared and reused.
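The plugin-based design described above can be illustrated with a minimal, hypothetical sketch in Python. The registry, decorator, and task names below are illustrative assumptions, not Dioptra's actual API; they show only the general pattern of registering task functions and dispatching experiment jobs to them by name:

```python
# Hypothetical sketch of a task-plugin registry; all names here are
# illustrative and do not reflect Dioptra's real interfaces.
from typing import Any, Callable, Dict

PLUGIN_REGISTRY: Dict[str, Callable[..., Any]] = {}

def register_plugin(name: str) -> Callable:
    """Decorator that records a callable under a task name."""
    def decorator(func: Callable) -> Callable:
        PLUGIN_REGISTRY[name] = func
        return func
    return decorator

@register_plugin("evasion_attack")
def evasion_attack(model: Any, data: Any, epsilon: float = 0.1) -> dict:
    """Placeholder task: a real plugin would perturb `data` to evade `model`."""
    return {"task": "evasion_attack", "epsilon": epsilon}

def run_task(name: str, **kwargs: Any) -> dict:
    """Look up a registered task by name and execute it."""
    return PLUGIN_REGISTRY[name](**kwargs)

result = run_task("evasion_attack", model=None, data=None, epsilon=0.2)
print(result)  # {'task': 'evasion_attack', 'epsilon': 0.2}
```

In a full system, a worker process pulled from the Redis queue would resolve job descriptions to registered plugins this way, which is what lets new attacks or defenses be added without modifying the platform core.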

In summary, Dioptra, developed by NIST, enables thorough assessments of AI systems under a variety of conditions while promoting reproducibility and traceability. The platform improves awareness and mitigation of AI-related risks, making it a valuable tool for maintaining the reliability and security of AI across diverse applications. Further details on this NIST project are available in the accompanying documentation and blog.
