A committee of leaders and scholars from MIT has developed a series of policy briefs outlining a framework for the governance of artificial intelligence (AI) in the U.S. The goal of these papers is to strengthen U.S. leadership in AI, mitigate potential harm from new technologies, and explore how AI deployment can serve society.
The primary policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” argues that existing U.S. government entities that already oversee relevant fields can effectively regulate most AI tools. It emphasizes the need to determine the purpose of AI tools so that regulations can be tailored accordingly.
This project, overseen by experts from the MIT Schwarzman College of Computing, comes amid growing interest in AI and increased industry investment. At the same time, jurisdictions such as the European Union are working to establish AI regulations that categorize applications based on their associated risk levels.
A key part of this approach involves AI providers defining the purpose and intent of AI applications in advance. This would clarify which regulations and regulators apply to each AI tool. However, the authors note that AI systems often operate at multiple levels of a technology stack, which may require shared responsibility for problems arising from them.
To prevent misuse, the policy paper advocates for clear definitions of the purpose and intent of AI tools, including “guardrails.” This could also clarify whether companies or end users are accountable for specific problems.
While the proposed framework relies heavily on existing agencies, the policy paper suggests increased auditing of new AI tools and the consideration of creating a new government-approved “self-regulatory organization” akin to the Financial Industry Regulatory Authority.
Key legal matters requiring attention include copyright and other intellectual property issues related to AI, as well as areas where AI capabilities exceed human abilities, such as mass surveillance.
The policy papers also emphasize the need for more research on how to make AI beneficial to society in general. One paper explores the possibility that AI might be used to assist workers rather than replace them.
The authors of the policy papers believe academic institutions have a crucial role to play in understanding the interplay between technology and society, bridging the divide between those excited about AI and those urging robust regulation. As technology experts, they argue that appropriate oversight is essential for the responsible development and deployment of AI.