What Is TRAIGA? The South’s Newest AI Legislation

The State of Texas has taken a proactive step with House Bill 149 (H.B. No. 149), officially known as the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). Signed into law on June 22nd, 2025, TRAIGA aims to establish a framework for the responsible development and deployment of AI systems across the state.

This article will serve as your guide to TRAIGA, explaining its purpose, key definitions, the new rules it sets for AI, and how Texas plans to oversee this rapidly expanding technological frontier.

What Is TRAIGA? A Framework for Responsible AI

TRAIGA is more than just a set of rules; it's a comprehensive attempt by Texas to balance innovation with protection in the AI space. Its core purposes are multi-faceted. The Act aims to:

  • Facilitate and advance the responsible development and use of artificial intelligence systems.

  • Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems.

  • Provide transparency regarding risks in the development, deployment, and use of AI.

  • Offer reasonable notice regarding how state agencies use or plan to use AI systems.

Essentially, Texas wants to ensure that AI flourishes in a way that benefits its citizens while minimizing potential harm and maintaining public trust.

Defining the Digital Frontier: Key Terms in TRAIGA

Understanding TRAIGA begins with its definitions, which lay the groundwork for how the law is applied.

An "Artificial intelligence system" is broadly defined as any machine-based system that, for any explicit or implicit objective, infers from its inputs how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments. This comprehensive definition aims to cover the wide spectrum of AI technologies.

The law primarily focuses on the "consumer," defined as an individual who is a resident of Texas acting solely in an individual or household context, excluding those acting in commercial or employment capacities. TRAIGA also introduces roles like "Developer" (a person who develops AI systems offered or provided in Texas) and "Deployer" (a person who deploys an AI system for use in Texas).

New Rules for AI: What TRAIGA Prohibits and Requires

TRAIGA establishes several critical duties and prohibitions for entities involved with AI systems.

If a governmental agency makes an AI system available that is intended to interact with consumers, that agency must disclose to the consumer, before or at the time of interaction, that they are interacting with an AI system. This disclosure must be clear, conspicuous, and written in plain language, avoiding "dark patterns" (user interfaces designed to manipulate choices). For health care services, this disclosure must be provided no later than the date the service is first rendered, or as soon as reasonably possible in emergencies.

Prohibitions on AI Use

TRAIGA explicitly forbids developing or deploying an AI system in a manner that intentionally aims to incite or encourage a person to:

  • Commit physical self-harm (including suicide).

  • Harm another person.

  • Engage in criminal activity.

Additionally, governmental entities are prohibited from using AI systems to evaluate or classify natural persons or groups based on social behavior or personal characteristics with the intent to assign a "social score" or similar valuation. This includes evaluations that result in detrimental treatment unrelated to observed behavior, disproportionate treatment, or infringement of constitutional or legal rights.

The bill also addresses the sensitive area of biometric data, which includes fingerprints, voiceprints, and retina scans used to identify individuals. Governmental entities generally cannot develop or deploy AI systems for uniquely identifying individuals using biometric data or for targeted/untargeted gathering of images from public sources without consent, if such gathering would infringe upon an individual's constitutional or legal rights.

Furthermore, TRAIGA explicitly states that a person may not develop or deploy an AI system with the sole intent to infringe, restrict, or impair an individual's rights guaranteed under the U.S. Constitution.

Addressing concerns about harmful content, the law prohibits developing or distributing AI systems with the sole intent of producing or aiding in the production or distribution of certain sexually explicit content or child pornography. It also prohibits intentionally developing or distributing AI systems that engage in text-based conversations simulating sexual conduct while impersonating a child under 18.

Data Security and Processor Duties

The bill amends the existing business and commerce code to clarify that processors (entities that process data on behalf of a controller) must assist controllers in meeting their data security obligations, including those related to personal data collected, stored, and processed by AI systems. This emphasizes the importance of data security within the AI ecosystem.

Enforcement and Penalties: What Happens if the Rules are Broken?

The Attorney General has exclusive authority to enforce TRAIGA and must maintain an online mechanism for consumers to submit complaints. Upon receiving a complaint, the Attorney General can conduct investigations, including requesting detailed information about an AI system's purpose, data usage, outputs, limitations, and safeguards.

Before bringing an action, the Attorney General must notify the alleged violator in writing. The person has 60 days to cure the violation, provide documentation of the cure, and show changes to internal policies to prevent recurrence. If the violation is cured within this period, no action will be taken.

However, if a violation is not cured, civil penalties can be significant:

  • For each "curable" violation or breach of a submitted statement, penalties range from $10,000 to $12,000.

  • For each "uncurable" violation, penalties range from $80,000 to $200,000.

  • For a continued violation, an additional $2,000 to $40,000 per day may be imposed.
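To illustrate how these ranges can compound, here is a minimal sketch of the arithmetic. The violation counts and the helper function are hypothetical, chosen only to show how exposure is bounded by the statutory ranges; actual penalty amounts are determined in an enforcement action, and this is not legal advice.

```python
# Illustrative only: hypothetical scenario using TRAIGA's statutory
# per-violation ranges. Courts set the actual amounts within each range.

CURABLE_RANGE = (10_000, 12_000)      # per curable violation (uncured)
UNCURABLE_RANGE = (80_000, 200_000)   # per uncurable violation
CONTINUING_RANGE = (2_000, 40_000)    # per day of continued violation


def penalty_bounds(curable=0, uncurable=0, continuing_days=0):
    """Return the (minimum, maximum) statutory exposure for a scenario."""
    low = (curable * CURABLE_RANGE[0]
           + uncurable * UNCURABLE_RANGE[0]
           + continuing_days * CONTINUING_RANGE[0])
    high = (curable * CURABLE_RANGE[1]
            + uncurable * UNCURABLE_RANGE[1]
            + continuing_days * CONTINUING_RANGE[1])
    return low, high


# Example: two uncured curable violations, one uncurable violation,
# plus ten days of continued violation.
low, high = penalty_bounds(curable=2, uncurable=1, continuing_days=10)
print(f"Exposure: ${low:,} to ${high:,}")  # Exposure: $120,000 to $624,000
```

Even a modest hypothetical scenario like this spans six figures, which is why the 60-day cure period matters so much in practice.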

The Attorney General can also seek injunctive relief to prevent further violations and recover attorney's fees and investigative expenses. Importantly, the law includes a rebuttable presumption that a person used reasonable care, and it provides defenses if another person misused the system or if the defendant discovered the violation through testing, user feedback, state agency guidelines, or substantial compliance with a recognized AI risk management framework such as the NIST AI Risk Management Framework.

In addition to civil penalties, state agencies can impose sanctions against licensed, registered, or certified persons found in violation of TRAIGA, including suspension or revocation of licenses and monetary penalties up to $100,000, upon recommendation from the Attorney General.

Beyond enforcement, TRAIGA establishes forward-looking mechanisms to foster responsible AI innovation:

Texas Artificial Intelligence Council

The bill creates the Texas Artificial Intelligence Council, administratively attached to the Department of Information Resources. This seven-member council, composed of experts appointed by the governor, lieutenant governor, and speaker of the house, has a broad mandate:

  • To ensure AI systems in Texas are ethical and serve the public interest.

  • To identify potential harm to public safety or individual freedoms from AI and recommend legislative changes.

  • To identify laws impeding AI innovation and suggest reforms.

  • To analyze opportunities for improving state government efficiency through AI.

  • To offer guidance on the ethical and legal use of AI systems.

  • To evaluate regulatory capture and potential censorship by technology companies using AI.

The Council can issue reports to the legislature, conduct studies, and provide training programs for state and local government agencies on AI use.

Artificial Intelligence Regulatory Sandbox Program

TRAIGA also establishes an AI Regulatory Sandbox Program, overseen by the Texas Department of Information Resources in consultation with the AI Council. This program allows individuals or entities to test innovative AI systems for a limited time (up to 36 months, with possible extensions) and on a limited basis without immediately needing full licenses or registrations that would otherwise apply.

The sandbox is designed to promote safe and innovative AI use across various sectors (healthcare, finance, education, public services) by providing clear guidelines while certain laws or regulations are waived or suspended during testing. This provides a safe space for controlled experimentation, though violations of the core duties and prohibitions (Subchapter B, Chapter 552) are not waived. Applicants must provide detailed descriptions of their AI systems, benefit assessments, mitigation plans for adverse consequences, and proof of federal compliance.

Conclusion: Texas Navigates the Future

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) positions Texas as a leader in navigating the complex landscape of AI regulation. By establishing clear definitions, consumer rights, developer and deployer responsibilities, and a robust enforcement mechanism, TRAIGA aims to create an environment where AI innovation can thrive responsibly.

The creation of the Texas AI Council and the Regulatory Sandbox further demonstrates a thoughtful approach to understanding, guiding, and encouraging AI development while prioritizing public safety and ethical considerations. As AI continues its rapid advancement, TRAIGA provides a crucial framework for Texas to manage its impact, ensuring that the benefits of this transformative technology are realized while its risks are mitigated.
