
Key Laws and Frameworks Governing Artificial Intelligence (AI)

The rapid growth of artificial intelligence (AI) technology has led numerous countries and international organizations to develop frameworks that guide the development, application, and governance of AI. These AI governance laws address the challenges AI poses and aim to steer AI toward ethical use that protects human rights while fostering innovation.

One landmark example is the EU AI Act, a historic piece of legislation designed to promote AI innovation while ensuring AI safety. The Act takes a risk-based approach: it prohibits AI practices that pose an unacceptable risk to fundamental rights, restricts the use of biometric identification by law enforcement, and imposes clear obligations on high-risk AI systems to reduce potential harm.
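To make the Act's tiered structure concrete, here is a minimal Python sketch of its four risk categories. The tier names reflect the Act's published risk levels, but the example systems and the lookup table are illustrative assumptions only; real classification depends on legal analysis of the Act and its annexes.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict obligations (e.g., conformity assessment)"
    LIMITED = "permitted subject to transparency duties (e.g., disclosing a chatbot)"
    MINIMAL = "permitted with no additional obligations"

# Illustrative mapping only -- real classification requires legal analysis,
# not a lookup table.
EXAMPLE_SYSTEMS = {
    "social_scoring_system": AIActRiskTier.UNACCEPTABLE,
    "cv_screening_tool": AIActRiskTier.HIGH,
    "customer_service_chatbot": AIActRiskTier.LIMITED,
    "spam_filter": AIActRiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -- {tier.value}")
```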

The European Commission has also proposed an AI Liability Directive, which aims to give victims of AI-related harm the same level of protection as those harmed by traditional products.

Various countries have their own AI laws, such as Brazil's AI Bill, Canada's Artificial Intelligence and Data Act (AIDA), and the U.S. Executive Order on Trustworthy AI. These measures focus on protecting human rights, ensuring safe and reliable AI systems, safeguarding national security, and promoting economic growth and societal benefit.

Similarly, countries such as Indonesia, Mexico, Chile, and South Korea have proposed or enacted laws to regulate AI systems and ensure their ethical use, addressing issues such as compliance, data privacy, bias, and gender equality, among others.

China has enacted numerous laws regulating various aspects of AI, including algorithmic recommendation technology, deep synthesis technology, and generative AI services.

AI regulation has also emerged at the city level. For instance, New York City's Bias Audit Law (Local Law 144) prohibits the use of Automated Employment Decision Tools unless candidates receive the required notices and the tool has undergone an independent bias audit.
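To give a sense of what such an audit measures, here is a minimal Python sketch of the "impact ratio" metric used in New York City bias audits: each category's selection rate divided by the selection rate of the most-selected category. The sample numbers, group labels, and function name are invented for illustration and are not audit-ready code.

```python
def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Compute each group's selection rate relative to the highest group's rate."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical hiring data for illustration only.
applicants = {"group_a": 200, "group_b": 150, "group_c": 120}
selected = {"group_a": 60, "group_b": 30, "group_c": 30}

for group, ratio in impact_ratios(selected, applicants).items():
    # Ratios well below 1.0 flag groups selected at a lower rate.
    print(f"{group}: impact ratio = {ratio:.2f}")
```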

Beyond individual nations, several international bodies have proposed AI guidelines. The OECD AI Principles, adopted in May 2019, promote innovative and trustworthy use of AI that upholds democratic values and human rights.

Structured frameworks, such as Singapore's AI Verify framework, the NIST AI Risk Management Framework (AI RMF), and the UNESCO Recommendation on the Ethics of AI, provide organized guidance for addressing the risks associated with AI.

Standards bodies such as IEEE have proposed IEEE P2863, which covers organizational governance of AI, and IEEE P7003, which addresses algorithmic bias. Similarly, the international standard ISO/IEC 42001 specifies requirements for establishing, implementing, and maintaining an AI management system.

Countries and organizations worldwide are recognizing the need to regulate AI technologies, developing frameworks that emphasize ethical use and societal benefit while minimizing potential risks and upholding human rights and safety.
