
A committee of leaders and scholars from MIT has developed a series of policy briefs aimed at establishing a framework for the governance of artificial intelligence (AI) in the U.S. The briefs propose pragmatically extending existing regulatory and liability approaches to oversee AI.

The main paper, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests that AI tools can often be regulated by existing government bodies, and underscores the importance of identifying the purpose of each AI tool so that the appropriate regulations apply. The committee believes a practical approach to AI governance is to start with areas where human activity is already regulated.

The policy briefs also highlight the need for AI providers to clearly define the purpose and intent of their AI tools and to institute guardrails against misuse. This clarity would also help determine accountability for specific problems. The briefs propose extending existing regulations to cover AI through existing agencies and liability regimes. For instance, in the medical field, where strict licensing laws exist, using AI to prescribe medicine or make a diagnosis under the pretense of being a doctor should be treated as a violation of the law, just as it would be if a human had done so.

One significant challenge is assigning responsibility when AI systems form part of a “stack” of systems that together provide a service. The suggested approach is that if a component of the stack fails to perform as promised, its provider should share in the responsibility.

To ensure responsive and flexible oversight, the policy framework proposes creating new oversight capacities, and potentially a new government-approved “self-regulatory organization.” It also calls for advances in auditing new AI tools, with public standards established by either a nonprofit or a federal entity.

The papers also address intellectual property issues related to AI, which are already being litigated, as well as what they term “human plus” legal issues: cases where AI has capacities beyond human abilities. These might include mass-surveillance tools, which may require special legal consideration.

The key principle is that governance should enhance AI’s societal benefits and minimize its harms. Accordingly, the policy papers advocate further research on how AI can aid society and stress the role of academic institutions in supporting AI regulation. Amid heightened interest in AI, the papers argue that regulation should keep pace with technological advances, which the committee views as part of the responsible development and application of AI technology.
