Colorado law on disclosing AI-generated political ads raises free speech concern


As Democratic Attorney General Phil Weiser issued an advisory on Monday on Colorado’s new “deepfake” law on political messages, legal experts say the statute might raise First Amendment concerns.

Weiser’s two-page public advisory refers to House Bill 24-1147, which took effect July 1. It created new regulations and penalties for using artificial intelligence and deepfake-generated content in communications about candidates for elected office. The law requires anyone using AI to create election communications featuring images, videos or audio of candidates to include a disclaimer explaining the content isn’t real.

Candidates who have their appearance, actions or speech depicted in a deepfake can pursue legal prohibition of the distribution, dissemination, publication, broadcast, transmission or other display of the communication. The bill provides for compensatory and punitive damages and the possibility of criminal charges.

“Much false speech is constitutionally protected,” David Greene, senior staff attorney with the Electronic Frontier Foundation, said in an interview with The Center Square. “I don’t read this law as creating a category of speech that’s unprotected. But it’s a content-based law and will have to pass strict scrutiny because it is a restriction on otherwise protected speech.”

While the U.S. Congress hasn’t moved forward on legislation regarding the use of artificial intelligence in political campaigns, 40 states have introduced legislation targeting the production, distribution or publication of AI-generated content in political advertising. The California legislature last week passed a bill requiring AI-produced material to be easily identifiable and its creators traceable. A number of states, including Washington and Oregon, have government committees looking into the matter.

Jeffrey Schwab, senior counsel for the Liberty Justice Center, said the Colorado law is complicated, as it doesn’t prohibit deepfakes outright and provides exceptions for news organizations and for content containing satire or parody.

“I think in general it might be OK except for the disclosure that not only requires that it’s generated by AI, but also must say it is false,” Schwab said in an interview with The Center Square. “I think that’s where the statute could be in some First Amendment trouble. Whether or not it was generated by AI, it may or may not be false. It might be false in that the person didn’t say the exact words, but those words could be true.”

Schwab gave a hypothetical example of distribution of an AI image of President Joe Biden stating Democratic nominee Kamala Harris was the “border czar.”

“Even if that’s not something Joe Biden explicitly referred to Kamala Harris as, I think a pretty good case can be made the statement is true,” Schwab said. “But under the Colorado law, if someone were to generate AI of Biden saying Harris is the border czar, then the statute would apply. They would have to have a disclosure that says it’s not only generated by AI, but that it’s false.”

The law applies to communications to voters within 60 days of a primary election and 90 days of a general election. Weiser recommended Coloradans check political communications for disclosure of a deepfake and verify “through trusted sources” whether a questionable communication includes a deepfake.

“… while the law only applies to communications related to candidates for office, deepfakes can be used in many other ways to influence the opinions of voters, and in general voters should be mindful that bad actors will find ways not protected by this law to influence public opinion using deepfakes, especially on the internet,” according to the advisory.