The US Department of Homeland Security (DHS) has established an Artificial Intelligence Safety and Security Board, Secretary Alejandro Mayorkas announced. The board was created in response to President Biden's executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Its role is to develop recommendations for how critical-infrastructure stakeholders, such as pipeline and power grid operators and internet service providers, can use AI responsibly.
Mayorkas highlights AI's potential as a transformative technology but also acknowledges the risks it presents, insisting that those risks can be mitigated through best practices and deliberate, consistent action. He adds that the board will help the DHS stay ahead of evolving threats from hostile nation-state actors.
The DHS's 2024 Homeland Threat Assessment warns that nations such as China, Russia, and Iran could use AI tools to target US economic security and critical infrastructure. AI could enable large-scale, rapid, efficient, and otherwise hard-to-detect cyberattacks on crucial targets within the US. The new AI safety board is seen as a necessary step to counter these threats.
The 22-member board includes numerous high-profile names from the technology industry, among them CEOs Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic. Notably absent are Mark Zuckerberg and Elon Musk, both known advocates of open-source AI models. Mayorkas disclosed that he intentionally excluded social media companies from the board, though it remains unclear whether that decision stemmed from their platforms or their open-source AI strategies.
The board's composition hints at an existing leaning toward AI risk mitigation, which could feed into ongoing debates about open-source AI models. Some accuse companies, OpenAI ironically among them, of leveraging fears about AI to entrench their influence by keeping their models proprietary. In contrast, companies like Meta, xAI, and the French startup Mistral have opted for an open-source approach.
Despite the absence of prominent open-source advocates, the board features several individuals with a strong interest in AI safety, and their participation could shape AI risk discussions when the board convenes for the first time. The board's mandate is clear: to ensure the responsible and secure development and deployment of AI within the US. Its work could significantly influence the nation's approach to AI in a rapidly advancing digital landscape.