
An ad hoc committee of leaders and scholars from MIT has developed a set of policy briefs proposing a framework for the governance of artificial intelligence (AI) in the United States. The briefs aim to strengthen U.S. leadership in AI and limit the potential harm from the new technologies, while promoting awareness of how deploying AI could benefit society.

The main policy brief, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” argues that AI tools can often be overseen by the existing U.S. government entities that already regulate the relevant sectors. It emphasizes the importance of identifying the purpose and intent of AI tools so that regulations can fit those specific applications. The aim is not to regulate AI in the abstract but to build on areas where human activity is already regulated and deemed high risk by society.

Regulating AI is not straightforward, however, owing to the layered structure of AI systems: a general-purpose language model, for example, may underpin a specific new tool. The paper acknowledges that the provider of a specific service would bear primary responsibility when problems arise, but argues that the builders of general-purpose tools should also be held accountable when their technologies contribute to those problems.

Clarifying the purpose and intent of AI tools, and establishing guardrails against misuse, would also help determine the extent to which companies or end users are liable for particular problems. A sound regulatory regime, the brief argues, should be able to identify cases where end users can reasonably be held responsible for knowing the harm that mishandling a tool might cause.

While it relies on existing agencies, the policy framework also proposes some new oversight capacity. It calls for advances in the auditing of new AI tools, whether initiated by the government, driven by end users, or arising from legal proceedings. The paper also suggests creating a new self-regulatory organization (SRO) specifically for AI, which could accumulate industry-specific knowledge and respond flexibly to a rapidly evolving AI landscape.

AI also raises some unique legal challenges. For instance, AI enables capabilities that go beyond what humans can do on their own, such as mass surveillance or the spread of fake news, which may require special legal consideration.

Moreover, the briefs underscore the need for more research on how to make AI beneficial to society, including the possibility of AI augmenting and aiding workers rather than replacing them.

The committee advocates balanced governance that permits technological advances while ensuring effective oversight. The interdisciplinary approach of the briefs reflects an ongoing effort to contribute to the nation’s understanding of the interplay between social systems and technology, and thereby to guide policymakers.

The committee comprises experts from fields including AI, economics, political science, entrepreneurship, information technology, and the cognitive sciences, reflecting a holistic approach to AI governance.
