American lawmakers introduce AI Foundation Model Transparency Act

The proposed AI Foundation Model Transparency Act directs the FTC and NIST to collaborate in establishing rules to enhance transparency in the reporting of training data
A representative image of AI. — Canva

US Representatives Anna Eshoo and Don Beyer have introduced a bill to regulate the disclosure of training data sources for foundation models.

The proposed AI Foundation Model Transparency Act directs the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) to collaborate in establishing rules to enhance transparency in reporting training data.

AI data reporting requirements for companies

The bill outlines stringent reporting obligations for companies developing foundation models. 

These include the mandatory disclosure of training data sources; details on how data is retained during inference; descriptions of the model's limitations and risks; alignment with NIST's AI Risk Management Framework; adherence to applicable federal standards; and information on the computational power used to train and run the model.

Safeguarding against misinformation and harm

In response to concerns about AI-generated misinformation, the legislation mandates that AI developers report efforts to "red team" models. 

This preventive measure aims to ensure that foundation models do not disseminate inaccurate or harmful information, particularly in sensitive domains such as medicine, cybersecurity, elections, policing, financial decisions, education, employment, and public services, as well as in uses affecting vulnerable populations such as children.

Addressing copyright concerns and legal precedents around AI

The bill recognises the growing importance of training data transparency in the context of copyright infringement lawsuits against AI companies. It specifically mentions legal cases involving Stability AI, Midjourney, and DeviantArt, as well as Getty Images' complaint against Stability AI.

The legislation underscores the need for regulation as public access to artificial intelligence expands, bringing with it instances of inaccurate, imprecise, or biased information.