In the past year, we have written several articles about legislation enacted regarding Generative AI (GenAI) systems. Earlier this year, we wrote about Colorado's Generative AI Copyright Disclosure Act, a bill focused on transparency when copyrighted materials are used in AI training datasets. Recently, we wrote about California's AB 2013, a law introducing new compliance obligations for developers of GenAI systems or services. Both pieces of U.S. legislation share the goal of promoting transparency, but their approaches, and the challenges they present, differ. Moreover, while these bills mark progress in addressing AI's impact on intellectual property and privacy, they are not as robust as the EU AI Act.
First, the Colorado Generative AI Copyright Disclosure Act aims to ensure that companies disclose their use of copyrighted works in training AI models. It requires detailed notifications to be submitted to the U.S. Copyright Office summarizing the copyrighted content used and where that material was acquired (e.g., the URL or web pages). It also establishes a public database cataloging these disclosures, which copyright owners can then monitor. However, the bill is relatively narrow in scope, concentrating almost exclusively on the permissible use of copyrighted material without addressing broader concerns such as data privacy or ethical AI practices.
In contrast, California's AB 2013 takes a broader approach. It requires developers to disclose not only the intellectual property used in training datasets but also whether any personal information, as defined by the California Consumer Privacy Act (CCPA), is being used to train GenAI systems or services. The law therefore extends its regulatory reach to privacy concerns by requiring companies to maintain detailed documentation and publish summaries indicating whether personal information was used to train a GenAI system or service. It goes further still, requiring that developers of GenAI systems developed or modified after January 1, 2022 retroactively compile and disclose what training datasets were used, a task that could be burdensome for companies with incomplete records. While AB 2013 attempts to balance transparency with privacy and IP protection, it too falls short in areas like AI ethics and broader governance frameworks.
In both cases, the U.S. laws fall short of the more comprehensive scope of the EU AI Act. That Act establishes a far more robust regulatory framework that begins with transparency and IP protection but, unlike the U.S. bills, also includes provisions addressing ethical GenAI deployment, risk management, and consumer protection. The EU Act categorizes GenAI systems by risk level, with high-risk systems subject to stringent transparency, oversight, and human-intervention requirements. It also emphasizes accountability in GenAI development, striving to ensure that safety, privacy, and fundamental rights are at the core of system design.
In short, while Colorado and California are the first states with bills directed at GenAI technology, both are limited in scope, addressing only specific concerns about the datasets used to train GenAI models. Neither bill addresses the broader concerns of GenAI technology the way the EU Act does. As GenAI continues to grow in usage, it will be crucial for U.S. lawmakers to consider broader regulations that reflect the multi-faceted challenges posed by this transformative technology.
