A group of scholars and leaders from MIT has developed policy briefs to establish a governance framework for artificial intelligence (AI). The briefs are intended to assist U.S. policymakers, sustain the country’s leadership position in AI, limit potential risks from new technologies, and explore how AI can benefit society.

The primary policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” argues that existing U.S. government entities regulating relevant fields can often govern AI tools as well. It also highlights the importance of determining the purpose of AI tools so that regulations fit their actual uses.

The experts recommend starting with areas where human activity is already regulated and considered high-risk. To facilitate this, the team suggests requiring AI providers to define the purpose and intent of their applications in advance, which would help identify the regulations and regulators relevant to each AI tool.

The briefs also discuss “stack” systems, in which multiple layers of AI systems work together to deliver a service. In these cases, the provider of the specific end service would be primarily liable for any issues, but if a component system within the stack fails to perform as promised, the provider of that component may need to share responsibility.

One proposal calls for advances in auditing new AI tools, which could proceed along various paths: initiated by the government, driven by users, or stemming from legal liability proceedings. The paper also suggests the potential creation of a government-approved “self-regulatory organization” (SRO) specifically for AI, which could build domain-specific expertise and adapt as the AI industry rapidly changes and grows.

The policy briefs also note the need to address specific legal matters in the AI realm, such as copyright and intellectual property issues. Particular attention is paid to “human plus” legal issues, cases where AI exceeds human capabilities, such as mass-surveillance tools, which may require unique legal consideration.

The papers emphasize the importance of ensuring AI benefits society as a whole. For instance, the policy paper “Can We Have a Pro-Worker AI?” explores how AI might support workers rather than replace them, an approach that could lead to better long-term economic growth across society.

The interdisciplinary committee aims to broaden the lens of AI policymaking, incorporating a range of perspectives rather than limiting the discussion to technical considerations alone. The committee stresses that adequate regulation must keep pace with advances in AI technology and believes proper governance is integral to the responsible and beneficial use of AI. The group sees its role as bridging the gap between those optimistic about AI and those concerned about it.
