
A committee of MIT scholars and leaders has released a series of policy briefs that propose a framework for artificial intelligence (AI) governance in the United States. The proposed approach extends existing regulatory and liability regimes to manage AI effectively. The committee believes this could strengthen the country’s leadership position in AI while limiting the risks posed by new technologies and encouraging exploration of how AI could benefit society.

The main policy paper, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests that AI tools could be regulated by the existing U.S. government entities that already oversee the relevant domains. The paper highlights the importance of identifying the specific purpose of each AI tool: knowing what a tool is for would allow regulations to be tailored to that application.

“Looking at AI that way is the practical approach,” said Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who was part of the team that guided the project.

The committee’s project has resulted in multiple policy papers. Its approach contrasts with that of the European Union, which is presently trying to finalize AI regulations that assign levels of risk to certain types of applications. Either way, AI governance poses challenges, including how to regulate both general-purpose and special-purpose AI tools and how to address potential harms such as misinformation, deepfakes, and surveillance.

The policy brief states that a good regulatory regime should be able to identify what it calls a “fork in the toaster” situation: one in which an end user could reasonably be held accountable for the consequences of misusing a tool.

The policy framework builds on existing agencies but also calls for new oversight capacity. It recommends advances in the auditing of new AI tools and suggests that public standards for such audits could be set by a nonprofit entity or by a federal entity similar to the National Institute of Standards and Technology (NIST).

The policy brief also suggests the potential establishment of a “self-regulatory organization” (SRO). This agency, modeled along the lines of the Financial Industry Regulatory Authority (FINRA), could build up industry-specific knowledge, enabling it to respond and adapt to a rapidly evolving AI industry.

The policy papers also address pertinent legal questions raised by AI, such as intellectual property issues and “human plus” legal issues. The latter concern areas where AI can do things humans cannot, such as conducting large-scale surveillance or spreading fake news at scale.

The policy briefs also highlight the importance of government support for research on how to make AI beneficial to society.

“The nation’s going to need policymakers who think about social systems and technology together,” says Huttenlocher. David Goldston, director of the MIT Washington Office, adds that the policy papers are not anti-technology and do not seek to suppress AI; rather, they advocate for adequate oversight to accompany technological advances.
