
A committee of scholars and leaders from MIT has released a series of policy briefs aiming to frame a strategy for the governance of artificial intelligence (AI). The target audience is primarily U.S. policymakers, and the goal is to regulate AI so as to ensure its safe use, limit potential harms, and encourage exploration of its societal benefits. The primary policy paper, “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” proposes extending existing regulatory and liability regimes to cover AI.

The papers stress identifying the purpose of AI tools, which would help tailor regulations to the appropriate applications. As Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, explains, the practical approach is to regulate AI the way other risky human activities with potential societal impacts have been overseen in the past.

However, AI governance poses unique challenges because it encompasses both general-purpose and special-purpose AI tools, with potential issues including misuse, misinformation, mass surveillance, and more. MIT’s involvement was deemed necessary given its pivotal role in AI technology development and its corresponding obligation to address the critical issues the technology raises.

The approach suggested by the policy papers is for AI providers to define the intent and purpose of their applications in detail, in advance. That way, the relevant regulatory body and rules for any given AI tool can be determined more easily. The papers also suggest that responsibility for AI tools should extend beyond the original service providers: companies or individuals that build additional applications on top of an AI tool should also be accountable for any issues that arise.

The papers also advocate ‘guardrails’ to prevent misuse of AI, specifying that a good regulatory regime should be able to identify when misuse and the resulting problems can reasonably be attributed to the end user. Finally, the papers discuss the need for public auditing of AI technologies and consider the creation of a government-approved “self-regulatory organization” (SRO) to oversee AI, similar to FINRA in the financial industry. Such an agency, armed with domain expertise, would be better equipped to keep pace with the rapidly evolving AI industry.

MIT’s strategy, articulated through these policy papers, is not only to ensure safe progress in AI but also to encourage further research into making AI beneficial to society as a whole. The papers argue that AI should buttress rather than replace human labour, enabling a better long-term economic trajectory whose benefits are broadly shared.

In conclusion, these policy papers try to bridge the divide between those excited about AI and those apprehensive about it. The creators of these technologies are themselves advocating governance of AI to ensure its responsible development and deployment. The role of academic institutions, drawing on their expertise in technology and its interplay with society, is emphasised as crucial to this venture.
