A committee of leaders and scholars from the Massachusetts Institute of Technology (MIT) has released a series of policy papers outlining a regulatory framework for artificial intelligence (AI). The goal is to help the US maintain its leadership in AI while preventing potential harm from these new technologies and promoting their societal benefits.
The central policy paper proposes that existing government bodies could often regulate AI tools, since they already oversee the relevant domains. The paper stresses the importance of identifying the purpose of each AI tool so that regulations fit the software’s specific applications.
Moreover, the authors assert that the question is not whether the current regulatory approach is sufficient, but how to build on areas where high-risk human activity is already supervised.
The project, which includes numerous additional policy papers, arrives amid growing interest in AI and increased investment in the field. It also coincides with the European Union’s ongoing effort to finalize its AI regulations, which assign risk levels to different types of applications. Governance is challenging because regulators must address both general-purpose and application-specific AI tools, along with a range of potential problems, including misinformation, deepfakes, and surveillance.
One crucial step in establishing these regulatory and liability regimes is for AI providers to define the purpose and intent of their applications in advance. Examining new technologies on this basis would clarify which existing regulations, and which regulators, apply to a particular AI tool. The papers recognize, however, that AI systems often exist at multiple levels, in a “stack” of systems that together deliver a service, which can complicate assigning responsibility.
The policy paper also suggests new oversight mechanisms: advances in the auditing of AI tools and a new, government-approved “self-regulatory organization” (SRO) specifically for AI. Such a body could accumulate domain-specific knowledge, allowing it to remain flexible and responsive to a rapidly changing industry.
The policy papers also delve into intellectual property (IP), copyright, and other legal issues surrounding AI. They explore what the committee calls “human plus” legal issues, in which AI capabilities exceed human capacities, as with mass-surveillance tools; these may necessitate special legislative consideration.
Another strand of the papers promotes research on how to make AI more beneficial to society. Academic institutions are encouraged to provide input on the interplay between technology and society, a vital requirement for effective AI governance.
The committee aims to bridge the gap between those excited about AI and those concerned about it by advocating that adequate regulation accompany advances in the technology. In short, the authors encourage AI development while maintaining that governance and oversight are essential to the technology’s proper implementation.