MIT has released a series of policy briefs on the governance of artificial intelligence (AI), focused on extending current regulatory and liability practices. Intended to strengthen U.S. leadership in AI, the proposed policies aim to mitigate potential harm while promoting beneficial exploration of the technology. The primary paper argues that the existing government agencies that oversee relevant domains can also regulate AI tools, provided each tool's application is clearly defined.
Although the EU is finalizing its own AI regulations under a different approach, MIT maintains that regulating both specific and general-purpose AI tools is necessary to address risks such as misinformation and surveillance. Given its pivotal role in AI research, the institution feels obligated to help resolve these issues.
The policy proposes that existing laws should extend to AI: for example, laws against impersonating a doctor or unlawfully prescribing medicine should apply equally when AI is involved. By clearly defining each AI tool's purpose and intent, the applicable regulations and regulators can be identified. However, a layered system, with a specific service at the top and a general-purpose model underneath, could complicate governance, so the policy suggests that accountability for a failure should be shared across both layers.
The briefs also recommend new auditing practices for AI tools, initiated by the government, by users, or through legal liability proceedings. A new self-regulatory organization was proposed as well, one that could accumulate domain-specific knowledge and respond to the rapidly changing dynamics of AI.
Certain legal issues warrant special consideration, such as copyright and "human plus" scenarios in which AI capabilities exceed human abilities. The briefs also emphasize encouraging research on how AI advances can benefit society as a whole, with papers exploring the idea of AI supporting rather than replacing workers to achieve more balanced long-term economic growth.
The committee also stressed broadening policy-making perspectives rather than narrowing them to a few technical questions. It emphasized the need to balance technological advancement in AI against governance and oversight, so that AI is both beneficial and beneficially regulated.