Several prominent Big Tech firms including Google, IBM, Intel, Microsoft, NVIDIA, PayPal, Amazon, Cisco and others have joined forces to form the Coalition for Secure AI (CoSAI). The open-source initiative, led by the OASIS global standards body, aims to establish standardized practices for safe AI development and deployment. Noticeably, Apple and Meta are not included in the initiative.
The Coalition aims to formulate comprehensive security measures to address several significant risks, including theft of the model, training data corruption, prompt injection of malicious inputs, scaled abuse prevention, membership inference attacks, and gradient inversion attacks. The initiative does not include topics such as misinformation, harmful or abusive content, bias, malware generation, or phishing content generation within its scope.
Individual Big Tech companies have already undertaken safety measures for AI development, such as Google’s Secure AI Framework (SAIF) and OpenAI’s alignment project. However, CoSAI represents the first unified forum to merge these independently developed best practices. The initiative may prove instrumental for smaller companies, like AI model producer Mistral, which may not have the resources for an in-house AI safety team.
Heather Adkins, Vice President and Cybersecurity Resilience Officer at Google, said, “CoSAI will help organizations, big and small, securely and responsibly integrate AI – helping them leverage its benefits while mitigating risks.” Nick Hamilton, Head of Governance, Risk, and Compliance at OpenAI, echoed similar sentiments, expressing his organization’s commitment to creating robust standards and secure AI practices. The Coalition will aim to provide a secure AI ecosystem that is beneficial for all.
The establishment of CoSAI signals the recognition of the potential of AI technologies by some of the world’s most prominent technology firms, and their commitment to ensuring its safe and responsible use.