
The Next Level in Transparency for Foundation Models: Advancements in the Foundation Model Transparency Index (FMTI)

Foundation models are central to AI's impact on the economy and society, and their transparency is imperative for accountability, understanding, and competition. Governments worldwide are introducing regulations, such as the US AI Foundation Model Transparency Act and the EU AI Act, to promote this transparency. The Foundation Model Transparency Index (FMTI), launched in 2023, evaluates the transparency of 10 major developers, including Meta, OpenAI, and Google, against 100 indicators. The initial FMTI v1.0 revealed surprising opacity, with an average score of 37/100, and highlighted substantial variability in disclosure across these companies.

The FMTI is built on a hierarchical taxonomy that mirrors the foundation model supply chain: upstream resources, the model itself, and downstream use. These three domains are broken down into 23 subdomains, which are in turn evaluated through 100 binary transparency indicators. Open model developers outperformed closed model developers on transparency, a pattern that could shape disclosure behavior in future assessments.
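To make the three-level structure concrete, here is a minimal Python sketch of how such a domain/subdomain/binary-indicator index could be represented and scored. The class names, labels, and indicator strings are illustrative placeholders, not the FMTI's actual taxonomy or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Subdomain:
    name: str
    # Each indicator is binary: True if the developer disclosed the item.
    indicators: dict[str, bool] = field(default_factory=dict)

@dataclass
class Domain:
    name: str
    subdomains: list[Subdomain] = field(default_factory=list)

def overall_score(domains: list[Domain]) -> int:
    """Overall score = count of satisfied binary indicators (max 100 in the FMTI)."""
    return sum(
        satisfied
        for d in domains
        for s in d.subdomains
        for satisfied in s.indicators.values()
    )

# Toy index mirroring the supply-chain domains; indicator names are invented.
index = [
    Domain("upstream", [Subdomain("data", {
        "data sources disclosed": True,
        "data licenses disclosed": False,
    })]),
    Domain("model", [Subdomain("capabilities", {
        "capabilities described": True,
    })]),
    Domain("downstream", [Subdomain("usage policy", {
        "permitted uses published": True,
    })]),
]

print(overall_score(index))  # -> 3 satisfied indicators in this toy example
```

Because every indicator is binary, the overall score is simply the count of disclosed items, which is what makes a 100-indicator index directly comparable across developers.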

Researchers from Stanford, MIT, and Princeton produced an updated version of the index, FMTI v1.1, to track how foundation model transparency evolved over a six-month period. The study retained the original 100 indicators but invited developers to self-report against them. This added layer of self-reporting was intended to improve the completeness, clarity, and scalability of the data, paving the way for more reliable assessments.

The FMTI v1.1 process comprised four stages: indicator collection, developer solicitation, information gathering, and scoring. Information gathering relied primarily on direct developer submissions, yielding more coherent and comprehensive data. Each submission was scored independently by two researchers to ensure fairness and consistency, and developers were then given a chance to contest their scores and supply additional information, allowing for more granular assessments and greater transparency.
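The dual-evaluation step can be pictured as a simple reconciliation pass over two raters' binary scores. This is a hedged sketch of the idea, assuming agreement is accepted as-is and disagreement is flagged for review; the function and indicator names are assumptions, not the FMTI's published scoring code.

```python
def reconcile(scores_a: dict[str, bool], scores_b: dict[str, bool]):
    """Merge two researchers' independent binary scores.

    Indicators where the raters agree are accepted as-is; disagreements
    are flagged for joint review and, per the process described above,
    potential developer rebuttal with supporting evidence.
    """
    merged: dict[str, bool] = {}
    disputed: list[str] = []
    for indicator, value in scores_a.items():
        if value == scores_b.get(indicator):
            merged[indicator] = value
        else:
            disputed.append(indicator)
    return merged, disputed

agreed, to_review = reconcile(
    {"data sources disclosed": True, "compute disclosed": False},
    {"data sources disclosed": True, "compute disclosed": True},
)
print(agreed)     # {'data sources disclosed': True}
print(to_review)  # ['compute disclosed']
```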

Fourteen developers participated in FMTI v1.1, submitting detailed transparency reports on their respective models. Initial scores varied widely, with most developers scoring below 65, indicating ample room for improvement. Open developers again generally outperformed closed developers, and overall transparency improved relative to the previous iteration of the index.
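A comparison like "open developers outperformed closed developers" reduces to grouping total scores by release strategy and averaging. The sketch below shows this computation with placeholder developer names and made-up numbers; these are not the actual FMTI v1.1 results.

```python
from statistics import mean

# Hypothetical scores for illustration only, NOT actual FMTI v1.1 data.
# Each developer is labeled by release strategy: "open" or "closed".
scores = {
    "dev_a": (72, "open"),
    "dev_b": (51, "closed"),
    "dev_c": (64, "open"),
    "dev_d": (43, "closed"),
}

def mean_score(kind: str) -> float:
    """Average score across developers with the given release strategy."""
    return mean(s for s, k in scores.values() if k == kind)

print(f"open:   {mean_score('open'):.1f}")    # 68.0
print(f"closed: {mean_score('closed'):.1f}")  # 47.0
```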

The societal impact of foundation models is growing, and so is attention from stakeholders. The Foundation Model Transparency Index's findings confirm that, despite positive change since 2023, there is still substantial room for improvement in the transparency of the AI ecosystem. By giving developers a venue to report on their transparency practices and holding them accountable, the index serves as a valuable resource for enhancing collective knowledge, useful to downstream developers, researchers, and journalists alike.
