By Marilyn Odendahl

Compared with what’s coming, the lies and conspiracy theories surrounding the 2020 election may soon be seen as child’s play.

The use of advancing technology to create political ads depicting candidates in fake situations they were never in or saying controversial things they never said is expected to become so rampant in 2024 that the United States has been described as entering its “first-ever artificial intelligence election.” In a January report entitled “Democracy on Edge in the Digital Age,” Common Cause California said if AI and disinformation are not addressed, they will pose a threat not only to this year’s election but also to democracy itself.

“We’ve entered a monumental election year … and we do so with our democracy already weakened with trust in institutions and in the media at all-time lows, with the truth under assault every day, and with deep voter participation disparity still leaving certain communities out of our electorate,” Jonathan Mehta Stein, executive director of California Common Cause, said in a recent webinar. “So in that challenging environment, we’re now entering what I think will become known as the first-ever generative AI election in which generative AI deep fakes will inundate our political discourse and voters may not know what images, audio or video they can trust.”

A handful of bills have been filed in Congress seeking to place limits on the use of AI technology in political communications, but many states are not waiting for federal action. Indiana has joined that growing list of states crafting new laws to regulate the use of AI technology in political campaigns.

House Bill 1133, authored by Rep. Julie Olthoff, R-Crown Point, does not ban “fabricated media.” Instead, it requires that political communications using AI that “conveys a materially inaccurate depiction of the individual’s speech, appearance or conduct” without the individual’s consent include a disclaimer to alert the public that the image, video or recording has been altered. If the disclaimer is absent, the injured candidate can bring a civil action to recover damages and attorney fees, and seek injunctive relief.

The bill picked up bipartisan support and passed through the Indiana General Assembly without any opposition. Tuesday, the House concurred with the Senate amendments and the measure is now headed to Gov. Eric Holcomb’s desk.

Testifying before the Senate Elections Committee, Olthoff pointed out that AI technology can do amazing things. The tools are inexpensive and readily available, so people need little expertise to create or manipulate images, video or recordings with the use of artificial intelligence.

“Despite what we enjoy from Hollywood, there’s a downside for political messaging, because people can no longer decipher what is real and what is fabricated,” Olthoff told the committee members. “So this bill provides some guidance.”

Sen. John Crane, R-Avon, said HB 1133 takes an important step in elevating election integrity and providing a means for the voter to get the information they need to make an informed decision. However, he noted, it was a limited step.

“I might argue that if they’re going to use AI in the first place to alter, they might not be inclined to divulge that,” Crane said, calling the bill a “gentleman’s agreement” that the political ad identify if AI has been used.

‘The problem isn’t the technology’

HB 1133 was among four bills introduced this legislative session trying to regulate fabricated or altered media in political messaging. Like Olthoff’s measure, the other bills do not outright prohibit the use of the technology, but they do require a disclaimer be attached to the communication.

Included in HB 1133 is the actual disclaimer language that has to be used on the political ads. The disclaimer must say, “Elements of this media have been digitally altered or artificially generated.”

Speaking to The Indiana Citizen, Olthoff said her bill was “more defensive” and just putting some guardrails on the use of artificial intelligence. A ban is not imposed, she said, because that might run afoul of the First Amendment.

“It’ll go to the (U.S.) Supreme Court,” Olthoff said, explaining what she thinks will happen if HB 1133 barred AI in political communications. “For all I know, there might be cases already at the Supreme Court about where’s the link between free speech and this.”

In response to Crane’s concern that HB 1133 does not have any teeth to stop nefarious actors, Olthoff replied that her bill gives some recourse to the candidate misrepresented in an AI-generated political message. The bill allows the candidate to file a civil lawsuit against the person who paid for the campaign communication, the person who sponsored the communication and the person who disseminated it.

Two of the bills – House Bill 1228 by Rep. Pat Boy, D-Michigan City, and Senate Bill 7 by Sen. Spencer Deery, R-West Lafayette – included criminal penalties. In SB 7, the potential charges ranged from a Class B misdemeanor for leaving off the disclaimer to a Class A misdemeanor and a Level 5 felony for an intent to cause violence or bodily harm with the fabricated media.

Both HB 1228 and SB 7 failed to gain any traction in the Statehouse with neither receiving a committee hearing. However, portions of SB 7 were merged into Olthoff’s bill so the disclaimer requirement was expanded to campaign ads that do “not explicitly show the candidate but works to discredit him or her,” Olthoff told the Senate committee.

Also when HB 1133 went to the Senate, the provisions were shifted from the campaign ad financial disclosure section of the Indiana Code into a new chapter. Olthoff explained to the Senate committee that under the new chapter, the AI protections will cover political communications for federal as well as state elections.

 Fred Cate, founding director and currently senior fellow of the Indiana University Center for Applied Cybersecurity Research, said while legislative action on this issue is a positive step, it will likely have little effect.

“Disclaimers don’t work,” Cate said. “We know they don’t work and we have lots of research that they don’t work.”

He pointed to the disclaimers in pharmaceutical advertisements, which draw little consumer attention. Instead, people are going to believe what they want, he said, and allowing fabricated media to be used in political messaging, so long as the disclaimer is attached, is “going to do very little good, possibly none at all.”

Even the potential to sue the individual behind the fabricated media may not be a deterrent. Cate noted the party creating or distributing the altered content will be difficult to locate, since they will not necessarily have a public-facing presence and “could just disappear when someone starts chasing them.” Or, he said, the political communication might be produced by China, Russia, or Iran, “so they’re not going to care about this AI law.”

The deliberative pace of the judicial process poses another hurdle, Cate said. Even if a candidate misrepresented by an AI-generated political ad is able to bring a civil action, the election might already be over – and the fabricated media might have worked in keeping the candidate from elected office – by the time the court issues a ruling.

Despite the difficulty, Cate said, federal and state governments have to do something to mitigate the falsehoods artificial intelligence can ignite in politics. He is concerned that, with the passage of HB 1133, Indiana lawmakers will be content that they did something and then not revisit the subject to tweak the law or write new regulations.

“It’s just going to be (a) tough one,” Cate said. “We’re going to get better at it, but the problem isn’t the technology. The problem is our skills and motivations and so forth. It’s just going to matter more now that we’re getting everyone their own little ‘nuclear bomb.’”

A start to the regulatory conversation

Rep. Blake Johnson, D-Indianapolis, authored House Bill 1283, a fabricated media bill that is very similar to Olthoff’s effort. His bill prohibits a “person who finances a campaign communication” from knowingly disseminating fabricated media without a disclaimer within 90 days of an election to injure a candidate or influence the outcome of the election.

HB 1283 was assigned to the House Elections and Apportionment Committee, but was never given a hearing.

Johnson acknowledged the difficulty in trying to regulate the use of AI, but said a state law requiring a disclaimer is the “most enforceable.” The technology is advancing rapidly and preventing the use of altered media in political messaging would be impossible, he said, so a notification coupled with a method for injunctive relief will at least let people know that what they are seeing or hearing is not necessarily real.

He served on an interim study committee last year that examined artificial intelligence, but the members could not agree on any recommendations on how the legislature could address the troubling aspects of fabricated communications. He said sample legislation with different language was drafted and, while the disclaimer is not the “ideal outcome,” that is where the legislators landed.

“This is so new and state governments are just now really starting to explore the ways to manage this,” Johnson said of AI in political communications. “So we’re looking for a solution and, as often is the case with a piece of legislation, this is chapter one. Let’s start the conversation and see what our best minds can bring to the table to improve it.”

Indiana is not alone in its approach.

Other states are also passing laws to curb the anticipated flood of fake videos, recordings and photos targeting a rival candidate in the weeks leading to an election. According to Common Cause California, Washington State and Michigan require “synthetic media” to carry a disclaimer. Also, deep fake laws in Texas and Minnesota come with criminal penalties.

Olthoff’s concerns about the balance between reining in AI-generated political ads and protecting free speech are shared by others. Common Cause California noted the courts have not yet tackled the issue, but as more states regulate AI content, a legal challenge is likely going to be filed soon.

Voters must become more discerning

Amid the warnings of electoral upheaval, Paul Ehlinger talks about the benefits of artificial intelligence.

As the founder of flamel.ai, a marketing company based in Covington, Kentucky, that uses AI tools to help small businesses develop content about their products and services on social media, Ehlinger admits he holds a utopian view of the future, where artificial intelligence enables people to work less and “enjoy life a little bit more,” while still being very productive.

The “vast opportunity” brought by AI is no different than the potential unleashed by the internet in terms of what the new technology can do for society, he said. Although having safeguards on the use of technology by political campaigns makes sense, he said, the risk of creating a harmful regulatory environment is high because AI is evolving very quickly and the government will always be trying to catch up.

Ehlinger said the government should look beyond regulation and instead try to counter deep fakes with investments. Public dollars should be invested in research and technology that is being built to detect fabricated content. For the AI laws currently being crafted to regulate political communication, he said the creator of the message should be held responsible. Echoing Cate, he said AI is a tool and whether it is used for good or nefarious purposes depends on who is using it.

“I can use a shovel to kill somebody or I can use a shovel to dig a garden,” Ehlinger said. “You can regulate how tools are used, but at the end of the day, if you’ve got a bad actor, they’re going to find a way to use legal and official tools for negative purposes.”

House Bill 1133 includes provisions allowing it to take effect as soon as the governor signs it, so the guardrails could be in place well in advance of the May primary. Even so, Common Cause California and Ehlinger point out voters will have to be more discerning about what they see and hear because AI tools will still be used to manipulate political messages.

Educating voters is among the series of steps Common Cause California is advocating to address deep fakes and disinformation in politics. Digital media literacy instruction should be expanded and civics education should be augmented.

Mehta Stein does not believe that Americans are really prepared for the first AI election.

“All people need to realize they need to be more intelligent, more skeptical consumers of news content in 2024. We all need to develop fact-checking skills. We all need to learn to investigate what we see before we believe it and definitely before we share it,” Mehta Stein said. “And … in a culture or atmosphere of uncertainty, when we don’t know what we can trust, it may be easy to believe whatever agrees with us and dismiss as fake anything that disagrees with us. We have to fight back against that instinct and hold strong to the truth.”

Dwight Adams, a freelance editor and writer based in Indianapolis, edited this article. He is a former content editor, copy editor and digital producer at The Indianapolis Star and IndyStar.com, and worked as a planner for other newspapers, including the Louisville Courier Journal.
