Everything You Need to Know About Alabama’s HB172

In today's fast-paced digital world, it's becoming harder to tell what's real and what's fake. Thanks to advanced artificial intelligence, it's now possible to create incredibly convincing images, audio, and videos that show people doing or saying things they never did. These are often called "deepfakes," and they pose a serious threat to our trust in information, our reputations, and even our democratic elections.

To tackle this growing problem, Alabama has enacted a new law: House Bill 172 (HB172). Signed into law on May 15, 2024, and effective starting October 1, 2024, HB172 specifically aims to prevent the malicious use of AI-generated deceptive media, especially in the crucial period leading up to an election. This article will break down everything you need to know about Alabama's HB172: what it targets, what's considered a violation, the penalties involved, and how it seeks to protect our elections.

What Are We Talking About? Understanding "Deepfakes" and AI

Before diving into the specifics of HB172, it's important to understand the technology it's addressing. "Deepfakes" are synthetic media created by artificial intelligence. Think of AI systems, particularly those known as generative adversarial networks (GANs), that can learn from vast amounts of data to create new, realistic content. This means they can generate a video of a politician giving a speech they never actually made, or an audio clip of someone confessing to something they didn't do.

The dangers of such deceptive media, especially in politics, are clear:

  • Damaging Reputations: Deepfakes can severely harm a person's public image and career.

  • Eroding Trust: If we can't trust what we see and hear, it breaks down our ability to have informed discussions and believe legitimate news.

  • Election Interference: Imagine a fake video of a candidate making a controversial statement just days before an election. This could spread quickly, confuse voters, and unfairly sway public opinion.

  • Spreading False Information: Deepfakes are powerful tools for widespread disinformation campaigns.

While older laws like those against defamation (slander or libel) offer some protection against false statements, deepfakes are uniquely challenging because they're so visually and audibly convincing. They spread rapidly, making it hard to correct the record quickly. HB172 is designed to be a more specific tool to combat this new form of deception, particularly when elections are at stake.

Breaking Down HB172: What Does the Law Say?

HB172 focuses on stopping the distribution of "materially deceptive media" created by AI, especially during election campaigns. Let's look at the key definitions and conditions.

The Main Rule: No Distributing Deceptive AI Media

At its core, HB172 states that a person cannot "distribute or enter into an agreement to distribute materially deceptive media." This means it's illegal to share or even agree to share this kind of content.

What is "Materially Deceptive Media"?

This is a crucial definition in the law. For media to be considered "materially deceptive," it must meet all three of these conditions:

  1. It shows someone doing or saying something they didn't actually do or say. This means the content is factually false regarding the depicted individual's actions or words.

  2. A regular person would mistakenly believe it's real. This sets an objective standard: would an average, reasonable viewer or listener genuinely be fooled into thinking the depicted individual actually engaged in that speech or conduct? This helps distinguish truly deceptive content from things like satire or obvious parody.

  3. It was created by AI. This law specifically targets media generated by artificial intelligence, not just any doctored photo or video edited by traditional means.

How Does HB172 Define "AI"?

The law uses a broad definition for "AI" to ensure it covers current and future technologies: "any artificial system or generative artificial intelligence system that performs tasks under varying and unpredictable circumstances without significant human oversight or that can learn from experience and improve performance when exposed to data sets." This includes AI systems that can operate somewhat independently and learn over time, which are exactly the types of AI that create sophisticated deepfakes. The phrase "without significant human oversight" is key, aiming at genuinely AI-driven creations.

When Is It a Violation? The Three Conditions

Simply creating or having "materially deceptive media" isn't enough for a violation. For criminal penalties to apply, all three of these conditions must be met:

  1. You Knew It Was Fake: The person distributing the media must know that "the media falsely represents someone." This "knowledge" requirement is important because it targets those intentionally spreading falsehoods, not someone who unknowingly shares something they believed was real.

  2. It's Close to an Election: The distribution must happen "within 90 days before an election." This highlights the law's focus on protecting the integrity of elections during a sensitive campaign period.

  3. You Intended a Specific Outcome: The person must "intend to distribute this and cause a particular result." This is often the hardest part to prove. "Particular result" isn't fully defined, but it could mean influencing votes, harming a candidate's reputation, or affecting the election in some other way. This ambiguity might be debated in court.

The Disclaimer Requirement: Transparency is Key

A very important part of HB172 is the requirement for a disclaimer: "The creator, sponsor, or purchaser must have a disclaimer informing viewers the media has been manipulated." This means that if AI-generated content is indeed manipulated, but it includes a clear notice to viewers, it might not be a violation. The goal here is transparency: allowing AI tools for creative purposes while ensuring the public isn't unknowingly deceived. A violation occurs if the deceptive media is distributed without such a disclaimer, among other conditions.
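Because both the definition of "materially deceptive media" and the violation conditions are conjunctive (every prong must be satisfied), the law's logic can be sketched as a set of boolean predicates. The sketch below is purely illustrative and is not legal advice; all field and function names are my own assumptions, not terms from the statute's text:

```python
from dataclasses import dataclass

@dataclass
class Media:
    depicts_false_conduct: bool      # shows someone doing/saying something they didn't
    reasonable_person_fooled: bool   # an average viewer would believe it is real
    ai_generated: bool               # created by an AI system
    has_disclaimer: bool             # carries a "this media has been manipulated" notice

def is_materially_deceptive(m: Media) -> bool:
    """All three definitional prongs must be met."""
    return (m.depicts_false_conduct
            and m.reasonable_person_fooled
            and m.ai_generated)

def is_violation(m: Media,
                 knew_it_was_fake: bool,
                 days_before_election: int,
                 intended_particular_result: bool) -> bool:
    """A violation requires the definition to be met, no disclaimer,
    plus all three conditions: knowledge, the 90-day window, and intent."""
    return (is_materially_deceptive(m)
            and not m.has_disclaimer
            and knew_it_was_fake
            and 0 <= days_before_election <= 90
            and intended_particular_result)

# Example: a knowing distribution ten days before the election, no disclaimer
fake = Media(True, True, True, has_disclaimer=False)
print(is_violation(fake, knew_it_was_fake=True,
                   days_before_election=10,
                   intended_particular_result=True))  # → True
```

Note how adding a disclaimer, or distributing outside the 90-day window, flips the result to `False` even for media that meets all three definitional prongs; this mirrors how each condition in the statute independently limits liability.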

Who Can Take Legal Action (Injunctive Relief)?

Beyond criminal charges, HB172 allows several parties to "seek injunctive relief." An injunction is a court order telling someone to do or stop doing something. In this case, it would likely be an order to remove the deceptive media or prevent it from being distributed further. The law empowers these groups to seek such relief:

  • The Attorney General: The state's top lawyer can step in to protect the public interest and election fairness.

  • The Depicted Person: If your image or voice is used deceptively, you can go directly to court to stop the harm to your reputation.

  • A Candidate for Office: Political candidates, who are often direct targets of such media during campaigns, can seek immediate action to protect their campaign and prevent election manipulation.

  • An Entity that Represents the Interests of Voters: This could include non-profit organizations or advocacy groups dedicated to ensuring fair elections. This acknowledges that deepfakes can harm the entire democratic process, not just individuals.

The ability to get an injunction quickly is very valuable because deceptive media can spread so fast. While criminal cases punish past actions, injunctions can prevent ongoing harm. However, there's always a concern that such provisions could be misused to silence legitimate political speech, so courts will need to be careful in applying them.

Challenges and Considerations

Like any law dealing with free speech, especially political speech, HB172 will face scrutiny under the First Amendment of the U.S. Constitution, which protects freedom of speech.

  • Free Speech vs. Election Integrity: Protecting elections is a "compelling government interest," which generally allows for some speech regulation. The question is whether HB172 is crafted precisely enough to achieve this without overreaching.

  • False Statements: Generally, the First Amendment doesn't protect false statements of fact, especially when made knowingly. HB172's requirement that the person "know" the media is false helps align it with existing legal principles.

  • Political Speech and "Particular Result": The focus on the 90-day election window and the intent to "cause a particular result" targets political speech specifically. Some might argue that even false statements about public figures are part of a vigorous political debate. The vague definition of "particular result" could make people hesitant to speak, fearing they might unknowingly violate the law.

  • Satire and Parody: The law's "reasonable viewer or listener" standard is crucial for distinguishing genuinely deceptive content from satire or parody, which are protected forms of expression. This will be a key area for interpretation.

  • Proving Intent: It's often very difficult to prove what someone "knew" or what their "intent" was, especially when content is shared widely and anonymously online.

  • Technology vs. Law: AI technology is constantly evolving. Detecting sophisticated deepfakes remains a challenge, and the speed at which they can spread makes enforcement difficult across state and international lines.

Conclusion

Alabama's HB172 is an important step in addressing the complex challenges posed by AI-generated deceptive media, particularly in the context of our elections. By clearly defining what constitutes "materially deceptive media" and AI, setting conditions for violations, and requiring disclaimers, the law aims to protect both individual reputations and the integrity of our democratic process.

However, the effectiveness of HB172 will depend on how it's interpreted and enforced. Questions around proving intent, navigating First Amendment protections for speech, and keeping up with rapidly advancing AI technology will all play a role.

With HB172 now in effect as of October 1, 2024, its real-world application will offer valuable lessons. Ultimately, combating deepfakes and disinformation requires more than just laws. It needs ongoing advancements in technology to detect manipulated content, and a strong focus on educating the public to critically evaluate the information they encounter. Our ability to distinguish fact from fiction will be crucial for the future of our democracy in the age of AI.

If you're interested, you can find a copy of the bill here and read it for yourself!
