MIT has released a set of policy briefs offering lawmakers guidance on the governance of artificial intelligence (AI). The goal of these documents is to strengthen U.S. leadership in AI, minimize potential harm from misapplication, and promote the beneficial uses of AI in society.
The primary policy paper, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” suggests that existing regulatory bodies already oversee relevant domains. The document emphasizes the significance of defining the purpose of AI tools to create suitable regulations.
The policy framework suggests that AI could be regulated by existing laws and regulatory bodies. For example, because the U.S. has strict medical licensing laws, using AI to impersonate a doctor should be penalized in the same way as impersonating one without AI. Likewise, autonomous vehicles, another AI application, should be regulated much as traditional vehicles are.
The paper emphasizes the need for AI system developers to predefine the intended purpose of their products, which would help establish which existing regulations apply. The authors also warn that builders of general-purpose tools should bear a share of responsibility when their technology is implicated in specific harms.
In addition to leveraging existing agencies, the authors propose the creation of a new oversight body. This could follow the model of the Public Company Accounting Oversight Board (PCAOB), or be a federal entity similar to the National Institute of Standards and Technology (NIST). Another suggestion is a government-originated self-regulatory organization (SRO), like the Financial Industry Regulatory Authority (FINRA), which would enable quick adaptation to changes in the AI industry.
The paper identifies several specific legal issues, such as copyright and intellectual property, that will need to be addressed in the context of AI. Moreover, it highlights "human plus" legal issues — cases where AI is capable of more than any human, such as mass-surveillance tools — which may require special legal consideration.
The importance of further research on how AI can benefit society is also emphasized. For example, the briefs consider AI's potential to assist, rather than replace, workers in support of overall economic growth. Drawing on a variety of disciplinary fields for policymaking helps address the complex interactions between humans and machines.
Finally, the report says that academic institutions have a key role to play in offering expertise on the interaction between technology and society. It emphasizes the need for a balance: promoting AI advancement while ensuring appropriate regulations are in place. The committee advocates for AI oversight as part and parcel of responsible technological advancement.