We are living in an era where Artificial Intelligence (AI) is becoming increasingly embedded in our lives. The extensive integration of AI across various sectors has raised important questions about the need for greater transparency in how these AI systems are trained and the data they rely upon. This opacity around training processes and data has been linked to AI models producing inaccurate, biased, or unreliable outcomes, particularly in critical areas such as healthcare, cybersecurity, elections, and financial decisions.
In response to this need, lawmakers have recently introduced the AI Foundation Model Transparency Act, which aims to mandate the disclosure of crucial information by creators of foundation models. The Act seeks to ensure transparency around AI models' training data sources and operations, and would direct regulatory bodies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) to collaborate on clear rules for reporting training data transparency. Companies creating foundation models would be required to disclose the sources of their training data, how data is retained during the inference process, the limitations or risks associated with the model, and the model's alignment with established AI Risk Management Frameworks. They would also have to disclose the computational power used to train and operate the model.
The proposed Act also emphasizes transparency around training data as it relates to copyright. Numerous lawsuits alleging copyright infringement have arisen from the use of AI foundation models trained without proper disclosure of data sources. The Act seeks to mitigate these issues by requiring comprehensive reporting, reducing the risk that AI systems inadvertently infringe upon copyrights.
The reporting requirements proposed by the bill span a wide array of sectors where AI models are applied, from healthcare and cybersecurity to financial decisions and education. The bill mandates that AI developers report their efforts to test models for inaccurate or harmful outputs, helping ensure their reliability in critical areas affecting the public.
Excitingly, if passed, this Act would establish federal transparency requirements for the training data behind AI models, fostering responsible and ethical use of AI technology for the benefit of society. We are thrilled to witness the extensive efforts being made to ensure transparency in AI systems and look forward to the potential impact of the AI Foundation Model Transparency Act in enhancing accountability and trust.