Sep 17, 2024

California has a new AI watermarking bill – here’s what it means for marketers

Dayna Lang

California is stepping into action, advancing a new AI watermarking bill. Bill AB 3211, also called the California Digital Content Provenance Standards Act, would require developers to build watermarking technology into their systems, marking generative AI images, video, and audio with identifiers.

This legislation comes as no surprise, with many members of the general public calling for regulation to temper fears over AI’s growing capabilities since the AI boom began last year. The new rules would make it possible to identify AI-created content via its metadata. These indicators, while invisible to the naked eye, would let users examine a file and distinguish generative AI images, video, and audio from content created by people.
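
To make the metadata idea concrete, here is a minimal sketch of what checking a file for an AI-provenance marker might look like, assuming the creating tool embedded the IPTC “trainedAlgorithmicMedia” digital source type in the file’s XMP metadata; the bill itself does not prescribe this exact mechanism, and the file name and helper function below are hypothetical.

```python
# Illustrative sketch only: assumes the image's XMP metadata carries the IPTC
# "trainedAlgorithmicMedia" digital source type that some tools use to flag
# AI-generated content. Real provenance systems parse and verify a signed
# manifest rather than doing a raw byte search.
from pathlib import Path

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's embedded metadata mentions a known AI marker."""
    data = Path(path).read_bytes()
    # XMP packets are stored as plain text inside JPEG/PNG files, so a simple
    # byte search is enough for a rough check.
    return b"trainedAlgorithmicMedia" in data

if __name__ == "__main__":
    print(has_ai_provenance_marker("ad_creative.jpg"))  # hypothetical file name
```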

The AI watermarking bill also regulates social media, requiring networks to clearly label AI-generated content that users post on their platforms.

Bill AB 3211 looks to label deepfakes 

California’s new legislation comes in the midst of growing social and political pressure to regulate AI and control deepfakes. Deepfakes are AI-generated images, video, or audio designed to fool audiences into believing they are authentic.

This content has been on the rise, leaving many worried about its potential to ruin reputations and manipulate political campaigns. Deepfakes are already being used in politics: Donald Trump recently shared AI-generated images of Taylor Swift appearing to endorse him, raising concerns about the legality of such images.

Bill AB 3211’s text reads: “Failing to appropriately label synthetic content created by GenAI technologies can skew election results, enable defamation, and erode trust in the online information ecosystem.” California is among the first states to advance such legislation, but it likely won’t be the last. As more negative impacts of AI come to light, more regulations will be put forward to curb abuse and prevent defamation.

Concerns about the rise of deepfakes are international, and the quagmire of ethical and legal questions raised by generative AI is prompting legislation around the world. The European Union’s AI Act, which entered into force in 2024, includes a clause mandating that organizations disclose the use of AI-generated content.

What do AI watermarking bills mean for advertisers? 

Marketing isn’t exempt from the deepfake conversation. While some brands embrace AI images (including deepfakes), others take a strong stance against them and voice their concern over the role AI plays in advertising. Dove is one such brand. On the 20th anniversary of its “Real Beauty” campaign, Dove pledged to never use AI-generated images of women in its advertising.

While deepfakes aren’t currently common in advertising, they do exist and their use could increase as AI tools improve and expand. If California’s newest AI watermarking bill passes, deepfakes will be easily identifiable through either watermarks or labels, making it easier for consumers to make informed choices. 

Brands would do well to get ahead of the curve, either committing not to use AI-generated imagery, audio, or video, or clearly labeling it before this bill passes. Avoiding transparency with audiences now could have adverse consequences later, as customers who learn of AI content after the fact may feel tricked or misled.

This AI watermarking bill is part of a larger cultural movement toward greater transparency, specifically around how and when AI is used. And many companies, even those at the forefront of AI development, favor greater visibility.

ChatGPT developer OpenAI supports bill AB 3211, with Chief Strategy Officer Jason Kwon saying, “New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content.”

Adobe is another tech giant throwing its support behind California’s efforts. The company’s director of policy and government relations, Anne Perkins, said, “Today’s bill provides an effective framework that allows good actors to be trusted while protecting people from harmful AI-generated deepfakes, which is especially critical in an election year when misinformation tends to swirl and the stakes are high.”

Until now, the matter of AI labeling has been left up to the companies building and using AI systems. California is aiming to change that. Bill AB 3211 may be the first in a long line of regulations mandating the labeling of AI content to protect users and public figures alike from the harms caused by deepfakes. Advertisers considering generative AI need to pay close attention so they can get ahead of the cultural shift driving this legislation and the regulations that are sure to follow.

 

Advertising made easy
Learn how illumin unlocks the power of journey advertising
Get started!

To see more from illumin, be sure to follow us on Twitter and LinkedIn where we share interesting news and insights from the worlds of ad tech and advertising.