Leaders and scholars from MIT have released a series of policy papers proposing a framework for U.S. governance of artificial intelligence (AI) that seeks to extend existing regulatory and liability approaches. The primary aim is to support and enhance American leadership in the evolving AI sector while minimizing the potential risks and harms of the technology’s deployment.
The central recommendation of the main policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” is that AI tools can often be regulated by the existing U.S. government agencies that already oversee the relevant domains. The paper underscores, however, the importance of first identifying the purpose and intent of AI tools so that regulations can be tailored accordingly.
The policy papers also address growing global interest in AI and rising industry investment, as well as the European Union’s ongoing effort to finalize AI regulations that categorize applications by risk level, presenting the framework as an alternative to the EU approach.
Notably, the proposed framework does not rule out the potential need for some new oversight bodies. Instead, it emphasizes advancing audits of AI tools against established public standards, set either through a nonprofit entity or a federal authority akin to the National Institute of Standards and Technology (NIST). The papers also raise the possibility of creating a government-approved “self-regulatory organization” (SRO) specializing in AI governance.
Moreover, the policy framework encourages more research into socially beneficial applications of AI, exploring its potential to support rather than displace workers and thereby foster more broadly distributed long-term economic growth.
In addressing the regulation of AI, the policy papers stress the importance of clearly defining the purpose and intent of AI tools and establishing guardrails against misuse. They also highlight the complexities intrinsic to the task, such as intellectual property issues, and the need for legal adjustments in “human plus” situations, where AI tools can perform tasks beyond human capability, such as conducting mass surveillance or producing fake news at scale.
The committee behind these papers embodies a collaborative approach between technology optimists and those wary of the socio-economic implications of unregulated AI, seeking to ensure that technological progress is accompanied by appropriate regulation and oversight. By releasing the papers, the committee aims to promote responsible, mutually beneficial conduct by both the developers and the end users of AI technologies, and to narrow the divide between AI’s optimists and its skeptics by advocating regulation that complements technological advancement.