State AI Regulations in 2026: Colorado, Texas, California, and What's Coming

While the federal government debates comprehensive AI legislation, states have stopped waiting. Since 2024, three major state AI laws have been enacted: Colorado's AI Act, Texas's Responsible AI Governance Act, and California's SB 53. Each takes a different approach. Each creates distinct obligations. And for enterprises deploying AI across multiple states, the compliance surface has expanded dramatically.

This is not a theoretical problem. If your organization deploys high-risk AI systems in any of these jurisdictions, these laws affect your operations. The question is not whether state AI regulation will affect your business. It is how fast you can operationalize compliance across a fragmented regulatory landscape.

The Colorado AI Act (SB 24-205)

Colorado's AI Act, formally SB 24-205, was signed into law in May 2024 and is the most comprehensive state-level AI regulation in the United States. It establishes clear obligations for both developers (those who build AI systems) and deployers (those who use AI systems to make consequential decisions).

Important update: The original enforcement date of February 1, 2026 was delayed by SB 25B-004 to June 30, 2026. Organizations should use this additional time to build compliance programs rather than postpone preparation.

Who It Covers

The law targets high-risk AI systems: any AI system that makes, or is a substantial factor in making, a consequential decision about a consumer. Consequential decisions include determinations about education, employment, financial or lending services, essential government services, healthcare, housing, insurance, and legal services. If your AI system touches any of these eight domains in Colorado, the Colorado AI Act applies to you.
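To make that scoping test concrete, here is a minimal sketch in Python. The eight domains mirror the statute's list, but the function, its parameters, and the domain labels are hypothetical simplifications, and nothing here is legal advice.

```python
# Hypothetical sketch of the Colorado AI Act scoping test; not legal advice.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def is_high_risk_in_colorado(decision_domains: set[str],
                             substantial_factor: bool,
                             operates_in_colorado: bool) -> bool:
    """In scope if the system is a substantial factor in a consequential
    decision in one of the eight domains and operates in Colorado."""
    return (operates_in_colorado
            and substantial_factor
            and bool(decision_domains & CONSEQUENTIAL_DOMAINS))
```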

What Developers Must Do

Developers of high-risk AI systems must provide deployers with documentation sufficient to understand the system's capabilities and limitations. This includes:

  • A general description of the reasonably foreseeable uses and known harmful or inappropriate uses of the system
  • The type of data used to train the system
  • Known or reasonably foreseeable limitations, including any risks of algorithmic discrimination
  • The purpose of the system and its intended benefits

What Deployers Must Do

Deployers carry the heavier compliance burden. They must:

  1. Implement a risk management policy and program that governs the deployment of high-risk AI systems
  2. Complete an impact assessment for each high-risk AI system before deployment, and annually thereafter
  3. Provide consumers with notice that an AI system is being used to make a consequential decision about them
  4. Offer an opportunity to appeal and access a human reviewer when a consequential decision is adverse
  5. Disclose to the Colorado Attorney General within 90 days if the deployer discovers that algorithmic discrimination has occurred

The impact assessment requirement is particularly significant. Deployers must document the purpose of the AI system, how it was evaluated for risks, the categories of data processed, known limitations, and the measures taken to mitigate risks of algorithmic discrimination.
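Capturing those elements as a structured record makes the annual reassessment cycle easier to operationalize. The schema below is an illustrative sketch; the field names are ours, not the statute's.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record of the elements a Colorado deployer documents."""
    system_name: str
    purpose: str                          # what the system is for
    evaluation_summary: str               # how it was evaluated for risks
    data_categories: list[str]            # categories of data processed
    known_limitations: list[str]
    discrimination_mitigations: list[str]
    completed_on: date
    next_review_due: date                 # reassess at least annually
```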

Penalties

The Colorado Attorney General enforces the law under the Colorado Consumer Protection Act, so violations carry standard consumer protection penalties; there is no private right of action. The underlying duty is one of reasonable care: developers and deployers must protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.

The Texas Responsible AI Governance Act (HB 149)

The Texas Responsible Artificial Intelligence Governance Act, known as TRAIGA, was signed into law by Governor Abbott on June 22, 2025 (HB 149) and took effect on January 1, 2026. TRAIGA represents a distinctive approach to AI regulation: it focuses primarily on government agency use of AI while taking an intent-based rather than impact-based approach to private sector liability.

What Makes TRAIGA Different

Unlike Colorado's comprehensive deployer obligations, the final version of TRAIGA was significantly pared back from earlier proposals. The law focuses its strongest requirements on state agencies rather than private sector deployers. For the private sector, TRAIGA takes a lighter-touch approach centered on prohibitions and safe harbors rather than affirmative compliance mandates.

Key Prohibitions

TRAIGA prohibits the development or deployment of AI systems intended to:

  • Incite or encourage a person to commit physical self-harm
  • Incite or encourage harm to another person
  • Encourage engagement in criminal activity

The critical word is "intentionally." TRAIGA's liability framework is intent-based, meaning it requires proof of intentional misconduct rather than strict liability for discriminatory outcomes. This represents a fundamentally different philosophy from Colorado's impact-focused approach.

Government Agency Requirements

State agencies that deploy high-risk AI systems face more substantial obligations, including disclosure requirements when consumers are interacting with AI systems. Healthcare service providers must also disclose to patients when AI systems are used in treatment decisions.

The NIST AI RMF Safe Harbor

TRAIGA's most significant feature for enterprise compliance is its safe harbor provision. Organizations that substantially comply with the NIST AI Risk Management Framework or other recognized AI risk management standards gain protection against enforcement actions. Additionally, organizations that discover violations through internal testing, including adversarial testing and red-team exercises, benefit from safe harbor protections.

This provision creates a concrete business incentive for framework adoption. Organizations that invest in NIST AI RMF alignment are not just building better risk management: they are building legal protection.

Enforcement

The Texas Attorney General enforces TRAIGA with tiered civil penalties. Curable violations carry penalties of $10,000 to $12,000 per violation, while uncurable violations range from $80,000 to $200,000 per violation. Ongoing violations may incur penalties of up to $40,000 per day. There is no private right of action.

California SB 53: The Transparency in Frontier Artificial Intelligence Act

California's SB 53 was signed into law by Governor Newsom on September 29, 2025 and represents a targeted approach to AI regulation. Unlike Colorado and Texas, which address AI broadly, California focuses specifically on developers of frontier AI models: the organizations training the largest and most capable systems.

SB 53 was introduced after Governor Newsom vetoed the more ambitious SB 1047 in September 2024. The replacement takes a transparency-first approach rather than the prescriptive safety requirements of its predecessor.

Key Requirements

SB 53 establishes three primary obligations for developers of covered frontier models:

  1. Safety and security protocols: Developers must implement and maintain a safety and security protocol, including procedures for identifying and mitigating risks of critical harms. They must publish a plain-language summary of their safety framework and update it at least annually.

  2. Safety incident reporting: Developers must report safety incidents involving covered AI models to the California Office of Emergency Services. The Office of Emergency Services is required to establish a public reporting mechanism for frontier AI safety incidents. Developers must also confidentially submit summaries of internal risk assessments related to catastrophic risks.

  3. Whistleblower protections: SB 53 establishes explicit protections for employees who report safety concerns related to covered AI models. Developers cannot retaliate against employees who disclose information about AI safety risks to regulators or the public.

Who Qualifies as a "Large Frontier Developer"

The law applies to developers who train AI models using computing resources exceeding a defined threshold (currently 10^26 integer or floating-point operations). This captures the largest foundation model developers while exempting smaller organizations. Computing resources used for fine-tuning, reinforcement learning, or other material modifications count toward the threshold. The law also distinguishes between "frontier developers" and "large frontier developers" (those with annual revenue exceeding $500 million), with the latter facing additional obligations. The California Department of Technology can recommend updates to these definitions as technology evolves, subject to legislative adoption.
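For rough capacity planning, teams often estimate training compute with the common 6 × parameters × tokens heuristic and compare the result to the statutory threshold. The heuristic is a community rule of thumb for dense transformers, not anything SB 53 defines:

```python
SB53_THRESHOLD_FLOPS = 1e26  # statutory compute threshold

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Common 6*N*D approximation for dense transformer training compute.
    A rule of thumb, not a definition from SB 53. Remember that compute
    spent on fine-tuning or other material modifications also counts."""
    return 6.0 * n_parameters * n_tokens

# Example: a 400B-parameter model trained on 15T tokens
flops = estimated_training_flops(4e11, 1.5e13)  # ~3.6e25 FLOPs
print(flops >= SB53_THRESHOLD_FLOPS)            # False: under the threshold
```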

Enforcement

The California Attorney General enforces SB 53 with civil penalties of up to $1 million per violation, scaled to the severity of the offense.

Other State Laws to Watch

The three laws above represent the first wave, not the last. Several other state and local regulations are already in force or advancing rapidly.

Illinois AI Video Interview Act: Already in effect, this law requires employers who use AI to analyze video interviews to notify candidates, explain how the AI works, and obtain consent before analysis. It also requires employers to delete video recordings within 30 days of a candidate's request.

New York City Local Law 144: In force since 2023, this law requires employers and employment agencies using automated employment decision tools (AEDTs) to conduct annual bias audits and publish the results. It has served as a template for similar proposals in other jurisdictions.

Maryland and Vermont: Both states have introduced bills in their 2026 legislative sessions addressing AI in hiring, insurance underwriting, and consumer lending. These bills remain in committee but signal expanding state-level attention to AI governance.

Building a Multi-State Compliance Strategy

The fragmentation of state AI regulation creates a practical challenge: how do you comply with multiple, overlapping, and sometimes contradictory requirements without building a separate compliance program for each jurisdiction?

The answer is a framework-based approach. Rather than treating each state law as an isolated compliance project, organizations should build a unified governance infrastructure that satisfies the core requirements common across all jurisdictions.

Step 1: Map Your AI Inventory

Every major state law requires organizations to know what AI systems they deploy and where. Start by building a comprehensive inventory that captures each system's purpose, the data it processes, the decisions it influences, and the jurisdictions where it operates.
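A workable inventory can start as a set of structured records like the sketch below. The schema is illustrative; real inventories usually live in a governance platform or model registry.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an enterprise AI inventory; fields are illustrative."""
    name: str
    purpose: str
    decision_domains: list[str]  # e.g. ["employment", "lending"]
    data_categories: list[str]   # e.g. ["resume_text"]
    jurisdictions: list[str]     # e.g. ["CO", "TX", "CA"]
    owner: str                   # accountable team or individual

inventory = [
    AISystemRecord(
        name="resume-screener-v2",  # hypothetical system
        purpose="Rank inbound job applications",
        decision_domains=["employment"],
        data_categories=["resume_text"],
        jurisdictions=["CO", "NY"],
        owner="talent-platform-team",
    ),
]
```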

Step 2: Adopt NIST AI RMF as Your Baseline

The NIST AI Risk Management Framework provides the most widely recognized governance baseline. Texas explicitly rewards its adoption with a safe harbor provision. Colorado's impact assessment requirements map cleanly to NIST's risk identification and mitigation practices. California's transparency requirements align with NIST's documentation and communication guidance.

Adopting NIST AI RMF as your baseline creates a single governance framework that addresses the overlapping requirements of multiple state laws.
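As an illustration, that crosswalk can be captured explicitly. The NIST AI RMF core functions (GOVERN, MAP, MEASURE, MANAGE) are real; the pairings below reflect one reading of the laws, not an official mapping.

```python
# Illustrative crosswalk from state obligations to NIST AI RMF core
# functions; the pairings are our reading, not an official mapping.
STATE_TO_NIST_RMF = {
    "CO risk management program":     ["GOVERN"],
    "CO impact assessments":          ["MAP", "MEASURE"],
    "CO discrimination disclosure":   ["MEASURE", "MANAGE"],
    "TX safe harbor via framework":   ["GOVERN", "MANAGE"],
    "CA safety & security protocol":  ["GOVERN", "MANAGE"],
    "CA incident reporting":          ["MEASURE", "MANAGE"],
}
```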

Step 3: Implement Continuous Impact Assessments

Colorado requires impact assessments for deployers. Rather than treating these as one-time exercises, build them into your deployment pipeline. Every high-risk AI system should undergo an initial assessment before deployment and periodic reassessments as the system evolves, the data changes, or the regulatory landscape shifts.
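A minimal sketch of such a gate, assuming a hypothetical store that maps each system to the date of its last assessment:

```python
from datetime import date, timedelta

MAX_ASSESSMENT_AGE = timedelta(days=365)  # Colorado: reassess annually

def deployment_gate(system_name: str, assessments: dict[str, date]) -> None:
    """Block deployment unless a current impact assessment is on file.
    The `assessments` store is hypothetical."""
    last = assessments.get(system_name)
    if last is None:
        raise RuntimeError(f"{system_name}: no impact assessment on file")
    if date.today() - last > MAX_ASSESSMENT_AGE:
        raise RuntimeError(f"{system_name}: assessment is over a year old")
```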

Step 4: Build Documentation Infrastructure

Compliance across all three states requires extensive documentation: training data descriptions, evaluation results, risk mitigation measures, safety protocols, and audit trails. This documentation cannot be an afterthought. It must be generated automatically as part of your AI development and deployment workflow.
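One pattern is to emit an audit record automatically at deploy time. A minimal sketch, with a schema that is illustrative rather than mandated by any of the three statutes:

```python
import json
from datetime import datetime, timezone

def emit_audit_record(system_name: str, model_version: str,
                      eval_results: dict, mitigations: list[str]) -> str:
    """Serialize an audit-ready record as part of the deploy workflow.
    The schema is illustrative, not statutory."""
    record = {
        "system": system_name,
        "model_version": model_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "evaluation_results": eval_results,
        "risk_mitigations": mitigations,
    }
    return json.dumps(record, indent=2)
```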

Step 5: Establish Monitoring and Incident Response

California's incident reporting requirements demand monitoring infrastructure. Colorado's discrimination discovery disclosure requires ongoing fairness monitoring. Building these capabilities proactively positions your organization to meet both current requirements and the inevitable expansion of state-level incident reporting obligations.
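As a starting point, fairness monitoring can track a simple disparity metric and open an incident when it crosses a threshold. The metric below is demographic parity difference; the 0.10 threshold is our assumption, since none of the statutes set a numeric cutoff.

```python
def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in favorable-outcome rates across groups.
    Outcomes are 1 (favorable) or 0 (adverse)."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values() if v]
    return max(rates) - min(rates)

ALERT_THRESHOLD = 0.10  # illustrative; the statutes set no numeric cutoff

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
})
if gap > ALERT_THRESHOLD:
    print(f"Disparity gap {gap:.2f} exceeds threshold; open an incident")
```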

How Swept AI Helps

At Swept AI, we build the infrastructure that makes multi-state AI compliance operational, not aspirational. Our platform provides:

  • Automated impact assessments that map to Colorado and emerging state requirements
  • Continuous monitoring for algorithmic discrimination, drift, and safety incidents
  • Documentation generation that produces audit-ready records aligned with NIST AI RMF and state-specific requirements
  • Incident detection and reporting capabilities that support state reporting mandates

State AI regulation is not a temporary trend. It is the new operating environment for enterprise AI. The organizations that build governance infrastructure now will deploy with confidence. Those that wait will spend the next several years retrofitting compliance into systems that were never designed for it.

The states are not waiting for Washington. Neither should you.
