A committee of MIT leaders and scholars has released a set of policy briefs offering a framework for the governance of artificial intelligence (AI) to guide U.S. policymakers. The briefs arrive amid heightened interest in AI technology and significant new industry investment.
The aim of these papers is to strengthen U.S. leadership in AI while mitigating potential harm from the new technologies and encouraging exploration of how AI can benefit society. The main document, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests that AI tools can often be regulated by the existing U.S. government entities that already oversee the relevant domains.
The policy paper also emphasizes the need to clearly define the purpose and intent of AI applications, which in turn clarifies which regulations, and which regulators, are relevant. It further highlights the importance of ensuring that regulatory and liability regimes account for the “stacks” of systems that together deliver a given AI service.
The documents call for advances in the auditing of new AI tools, which could take several forms: initiated by government, driven by users, or arising from legal liability proceedings. They also propose the creation of a new, AI-focused self-regulatory organization (SRO), functioning along the same lines as the government-approved Financial Industry Regulatory Authority (FINRA).
In addition to recommending an SRO, the MIT papers suggest creating new oversight capacity where existing laws and regulations may not sufficiently cover new AI tools and capabilities. The policy briefs also encourage more research into how AI can be made socially beneficial, including how AI might augment and aid workers rather than replace them, which could lead to broadly shared long-term economic growth.
Above all, the briefs make clear that the law must adapt to AI, and not the other way around. As legal questions such as AI-related intellectual property disputes increasingly end up in litigation, it remains crucial that the law can adequately address the full range of AI's facets and applications. This means anticipating and preparing for future challenges while keeping legislation flexible enough to adapt to a rapidly evolving AI industry.