AI Compliance & Regulation · Guide

Texas TRAIGA: What the Responsible AI Governance Act Means for Your Business

February 12, 2026 · 12 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance · Board Member · CTO/CIO · IT Manager · Consultant · CHRO

The Texas Responsible AI Governance Act (TRAIGA) took effect January 1, 2026. It applies to any business serving Texas residents and introduces AI disclosure requirements, prohibited uses, and governance standards.


Key Takeaways

  1. Applies to any company doing business in Texas or serving Texas residents — effectively most US companies
  2. Plain-language disclosure required when individuals interact with AI instead of humans
  3. Prohibited uses: social scoring, biometric identification without consent, AI designed to cause harm
  4. Texas AI Advisory Council will issue best-practice guidelines and shape enforcement
  5. Government entities face additional inventory, assessment, and reporting requirements
  6. Federal preemption risk: December 2025 Trump Executive Order may override some provisions

What Is TRAIGA?

On June 22, 2025, Texas Governor Greg Abbott signed HB 149 into law, creating the Texas Responsible AI Governance Act. The law took effect on January 1, 2026, making it one of the broadest state-level AI governance statutes in the United States.

What sets TRAIGA apart from earlier regulatory efforts is its scope. NYC Local Law 144, which took effect in July 2023, applies only to AI-assisted hiring. The Colorado AI Act, effective June 30, 2026, targets high-risk decision-making. TRAIGA, by contrast, establishes general governance principles for AI systems used across all sectors. And because Texas is the second-most populous state in the country, with more than 30 million residents, any company doing business nationally should assume it falls within the law's reach.

Who Must Comply

TRAIGA casts a wide net. The law applies to any person or entity conducting business in Texas, any person or entity providing products or services to Texas residents, all Texas government entities that use AI systems, and all AI system developers whose tools are deployed in the state.

The practical implication is straightforward: if your company serves customers anywhere in the United States, TRAIGA compliance is almost certainly relevant to your operations.

Core Requirements

AI Disclosure Requirements

TRAIGA's most immediate operational requirement is disclosure. When a government entity or business uses an AI system to interact with individuals, it must provide plain-language notice that the individual is communicating with an AI system rather than a human being. This obligation covers chatbots, virtual assistants, AI-generated phone calls or voice interactions, AI-driven customer service, and any system where a person might reasonably believe they are speaking with another person.

The timing requirement is unambiguous: the notice must be delivered before or at the beginning of the interaction, not buried in post-interaction disclosures or terms of service.

Prohibited AI Uses

The law draws four explicit boundaries around what AI systems cannot do.

First, TRAIGA bans social scoring, the practice of using AI to assign scores to individuals based on social behavior, lifestyle choices, or personality traits in order to determine their access to services or quality of treatment. Second, the law prohibits biometric identification without consent, meaning AI-powered facial recognition, voice recognition, or similar systems cannot be deployed without the individual's informed consent. Third, TRAIGA outlaws AI systems specifically designed to cause harm, including those that encourage self-harm, violence, or other dangerous behavior. Fourth, the law bars deceptive AI, systems designed to mislead individuals in ways that produce material harm.

Government-Specific Requirements

Texas state government entities face additional obligations that go beyond the private sector baseline. They must conduct a comprehensive inventory of AI systems in use, assess risks associated with each system, implement safeguards to protect individual rights, provide clear channels for individuals to contest AI-driven government decisions, and report annually on AI usage and governance practices.

These requirements position Texas as one of the first states to impose a structured accountability framework on government AI deployment, a move that may influence other states considering similar legislation.

AI Advisory Council

TRAIGA also establishes the Texas AI Advisory Council, a body charged with advising the governor and legislature on AI policy, monitoring AI developments and emerging risks, recommending updates to state governance frameworks, issuing best practice guidelines for AI deployment, and publishing an annual report on the state of AI in Texas.

The Council's guidance will be particularly important because TRAIGA, like many first-generation AI laws, leaves certain enforcement details to be defined through subsequent regulatory action.

What This Means for Businesses

For Companies Deploying Customer-Facing AI

Organizations that use chatbots, virtual assistants, AI-generated communications, or automated decision systems touching Texas residents face three immediate priorities. They need to identify every AI touchpoint where customers or users interact with an AI system. They need to implement disclosure notices at each of those touchpoints before the interaction begins. And they need to review their AI systems against the prohibited use categories, paying particular attention to social scoring functionality and biometric identification deployed without consent.

For Companies Deploying Employee-Facing AI

While TRAIGA's primary focus is consumer-facing and government use, companies should not overlook internal applications. AI-based employee monitoring or evaluation tools, biometric access systems used in Texas offices, and AI tools that inform employment decisions affecting Texas-based employees all warrant careful review against the law's requirements.

For AI Developers

Companies that build AI tools or platforms used by Texas businesses or government entities carry their own compliance responsibilities. Their products should support disclosure requirements by making it straightforward for deployers to display the required notices. They should document intended uses and limitations clearly. And they should ensure their products do not enable any of the prohibited uses defined under TRAIGA.

Comparison with Other State AI Laws

Understanding TRAIGA in context requires looking at it alongside the other major state-level AI regulations.

Texas TRAIGA, effective January 1, 2026, applies broadly across all sectors, requires disclosure for all AI interactions, prohibits social scoring and biometric identification without consent, mandates impact assessments for government entities only, provides no private right of action, and relies on the AI Advisory Council for ongoing governance.

The Colorado AI Act, effective June 30, 2026, is more prescriptive in its requirements for high-risk AI, requires disclosure only for adverse decisions, mandates annual impact assessments for all deployers, and relies on Attorney General enforcement rather than an advisory body.

NYC Local Law 144, effective since July 5, 2023, is narrower in scope, covering only AI in hiring. It requires 10 days of advance disclosure, mandates annual bias audits, and is enforced by the Department of Consumer and Worker Protection.

None of the three laws provides a private right of action, meaning enforcement runs through government agencies rather than individual lawsuits.

How to Comply

Step 1: AI Inventory

The foundation of compliance is a thorough catalog of every AI system your company uses that interacts with Texas residents (customer service bots, virtual assistants, automated communications), makes decisions affecting Texas residents (lending, insurance, employment), or collects biometric data from Texas residents. Without this inventory, it is impossible to assess your exposure or prioritize remediation.
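The inventory step lends itself to a simple structured record per system. The sketch below is one illustrative way to model it; every field name and the `in_scope` triage rule are assumptions for this example, not terms defined by TRAIGA.

```python
from dataclasses import dataclass

# Illustrative record for one AI system in a compliance inventory.
# Field names are assumptions for this sketch, not statutory terms.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    interacts_with_individuals: bool    # triggers the disclosure requirement
    makes_consequential_decisions: bool # lending, insurance, employment, etc.
    collects_biometric_data: bool       # triggers the consent requirement
    serves_texas_residents: bool

def in_scope(record: AISystemRecord) -> bool:
    """Flag a system for TRAIGA review if it touches Texas residents
    through interaction, consequential decisions, or biometric data."""
    return record.serves_texas_residents and (
        record.interacts_with_individuals
        or record.makes_consequential_decisions
        or record.collects_biometric_data
    )

inventory = [
    AISystemRecord("support-chatbot", "Acme AI", True, False, False, True),
    AISystemRecord("internal-code-assistant", "Acme AI", False, False, False, False),
]

review_queue = [r.name for r in inventory if in_scope(r)]
# review_queue == ["support-chatbot"]
```

Even a lightweight catalog like this makes the later steps mechanical: disclosure work flows from the interaction flag, consent work from the biometric flag.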

Step 2: Disclosure Implementation

For each AI system that interacts with individuals, organizations must add clear, plain-language disclosure before the interaction begins. Effective language might read "You are communicating with an AI assistant" or "This service is powered by artificial intelligence." The critical requirement is visibility: disclosures cannot be hidden in fine print or buried in terms of service.
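One way to make the timing rule hard to violate is to build the disclosure into session start, so it is always the first thing the user sees. A minimal sketch, with the message text and function name as illustrative assumptions:

```python
# Plain-language notice delivered before the interaction begins,
# per TRAIGA's timing requirement. Wording here is illustrative.
AI_DISCLOSURE = "You are communicating with an AI assistant, not a human."

def start_chat_session(greeting: str) -> list[str]:
    """Open a chat transcript with the AI disclosure prepended,
    so no conversational message can precede it."""
    return [AI_DISCLOSURE, greeting]

messages = start_chat_session("Hi! How can I help you today?")
# messages[0] is always the disclosure, never the greeting.
```

Baking the notice into the session constructor, rather than leaving it to each bot flow, avoids the failure mode of a disclosure that exists somewhere in the UI but is not delivered at the start of the interaction.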

Step 3: Prohibited Use Review

Every AI system in the inventory should be reviewed against the four prohibited use categories. No social scoring of individuals. No biometric identification without consent. No systems designed to encourage harmful behavior. No deceptive AI that causes material harm. Companies with complex AI ecosystems should pay particular attention to third-party tools and vendor-provided systems that may contain prohibited functionality.

Step 4: Biometric Consent

Organizations that use biometric identification powered by AI, whether facial recognition, fingerprint scanning, or voice authentication, must implement informed consent mechanisms before collecting biometric data. This means clearly explaining what biometric data is collected and why, and providing opt-out alternatives where feasible.
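The consent requirement can be enforced in code as a gate that refuses to proceed unless the purpose has been explained and consent recorded. This is a sketch of the control flow only, not a complete consent workflow; the names are illustrative.

```python
class ConsentRequiredError(Exception):
    """Raised when biometric processing is attempted without informed consent."""

def capture_biometric(user_consented: bool, purpose_explained: bool) -> str:
    """Gate biometric capture behind informed consent: the purpose must be
    explained and consent affirmatively recorded before any data is collected.
    Illustrative sketch; a real workflow would also log and timestamp consent."""
    if not (purpose_explained and user_consented):
        raise ConsentRequiredError(
            "Explain what biometric data is collected and why, "
            "and obtain consent before proceeding."
        )
    return "biometric-capture-authorized"
```

Structuring consent as a precondition that raises an error, rather than a checkbox logged after the fact, makes the "no biometric identification without consent" prohibition testable in code review and audits.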

Step 5: Ongoing Monitoring

Compliance is not a one-time exercise. Organizations should monitor guidance from the Texas AI Advisory Council as it develops, track regulatory updates and any early enforcement actions, review AI systems periodically for continued compliance, and document their compliance efforts in a format that can be produced if regulators come calling.

What to Watch

TRAIGA is still in its early days, and several dimensions of the law remain in flux.

On enforcement, the statute does not specify detailed penalty amounts for all violations. How aggressively the state pursues noncompliance will depend significantly on the guidance the AI Advisory Council issues over the coming months. On federal preemption, the December 2025 Trump Executive Order on AI signaled a push toward federal AI policy that could preempt some state-level requirements. Whether future federal legislation narrows or expands TRAIGA's effective scope remains an open question. And on interpretation, the Advisory Council's forthcoming best practice guidelines will shape how the law's broad language translates into specific operational requirements.

Several other laws intersect with or complement TRAIGA's requirements. The Colorado AI Act imposes more prescriptive requirements for high-risk AI, including specific impact assessment obligations. NYC Local Law 144 offers a narrower but more established enforcement model for AI in hiring. The Illinois Biometric Information Privacy Act (BIPA) overlaps with TRAIGA's biometric identification requirements and carries significant private litigation risk. The EU AI Act provides the comprehensive risk-based framework that influenced TRAIGA's overall approach. And the Utah AI Policy Act contains similar disclosure requirements focused specifically on generative AI.

Common Questions

When did TRAIGA take effect?

TRAIGA was signed into law on June 22, 2025, and took effect on January 1, 2026. Companies operating in Texas or serving Texas residents should already be in compliance.

Does TRAIGA apply to companies based outside Texas?

Yes. TRAIGA applies to any person or entity conducting business in Texas or providing products or services to Texas residents. Given Texas's population of over 30 million, most companies operating nationally in the US need to consider TRAIGA compliance.

What are the penalties for noncompliance?

TRAIGA does not specify detailed penalty structures for all violations. Enforcement is expected to develop as the Texas AI Advisory Council issues guidance. However, violations of prohibited uses (social scoring, biometric identification without consent) could trigger enforcement under existing Texas consumer protection and privacy laws.

When is AI disclosure required?

You must disclose AI use when a person might reasonably believe they are communicating with a human. A clearly labeled automated system (like an IVR phone menu) may not require additional disclosure, but chatbots, virtual assistants, and AI-generated communications that simulate human conversation must include a disclosure.

Can we still use facial recognition or other biometric AI?

Yes, but only with the individual's informed consent. TRAIGA prohibits biometric identification without consent. You must clearly explain what biometric data is being collected and why, and obtain consent before using facial recognition, voice recognition, or other biometric AI systems.

Will federal law preempt TRAIGA?

The December 2025 Trump Executive Order on AI signals a push for federal AI standards that may preempt state laws. It remains to be seen how this will affect TRAIGA. Companies should comply with TRAIGA while monitoring federal developments, as federal legislation could override some state requirements.

References

  1. HB 149 — Responsible AI Governance Act (Bill Text). Texas Legislature (2025).
  2. Texas Signs Responsible AI Governance Act Into Law. Latham & Watkins (2025).
  3. The Texas Responsible AI Governance Act: What Your Company Needs to Know. Norton Rose Fulbright (2025).
  4. Texas Adopts the Responsible AI Governance Act. DLA Piper (2025).
  5. Texas Enacts Responsible AI Governance Act: What Companies Need to Know. Baker Botts (2025).
  6. Texas Responsible AI Governance Act Enacted. Wiley (2025).
  7. Privacy Legislation in Texas: What Happened in 2025 and What's Next. Holland & Knight (2026).
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs


Talk to Us About AI Compliance & Regulation

We work with organizations across Southeast Asia on AI compliance and regulation programs. Let us know what you are working on.