In the last two weeks of March 2026, governors in seven states signed 19 new AI laws. The year's total jumped from 6 to 25. Another 27 bills have passed both chambers and could reach governors' desks in the coming weeks. Across 45 states, legislators have introduced 1,561 AI-related bills. On March 20, the White House published a framework recommending that Congress preempt state AI laws. Congress has not acted. Every week without federal legislation, the compliance patchwork adds new obligations.
The March Acceleration
State legislatures pass laws in bursts. Many sessions run only a few months, and the end of March marks the final stretch. This year, legislature after legislature spent that stretch on the same priority: AI.
Utah Governor Spencer Cox signed nine AI bills, eight in the last two weeks of March. The bills cover AI literacy in public schools, deepfake criminalization, health insurance AI disclosure, and expanded government AI oversight. Cox has positioned himself as one of the most active AI regulatory voices among governors, treating AI governance as a signature issue.
Washington Governor Bob Ferguson signed two bills. HB 1170 requires AI providers with over one million monthly users to inform consumers when content has been modified using AI. HB 2225 requires chatbot operators to meet transparency standards and implement protections for minors.
In New York, Governor Hochul signed amendments to the RAISE Act, extending the state's framework for frontier AI developers to include 72-hour incident reporting.
The surge reflects political consensus, not scheduling coincidence. These bills were introduced months ago, debated through committee, and advanced by bipartisan coalitions. State legislatures across the country have decided AI governance is a defining priority of this session.
Four Themes Across the 19 Laws
The new laws cluster around four regulatory priorities. Each addresses an area where states believe federal action is too slow.
Transparency and content provenance. Multiple states now require disclosure when AI generates or modifies content. Washington's HB 1170 mandates "latent disclosures," provenance markers embedded in AI-generated content, for providers serving over one million monthly users. Utah's provenance bill adds similar requirements. Consumers have a right to know when they are interacting with AI-generated material, and the disclosure burden falls on the provider.
Harmful AI-generated content. Creating or distributing non-consensual AI-generated intimate images is now a criminal offense in Utah under HB 276. Utah's HB 289 addresses AI-generated child sexual abuse material with new criminal provisions. Idaho's S 1297 establishes regulations for conversational AI services. States are treating synthetic content as a distinct harm category with its own enforcement mechanisms.
Consumer-facing AI interactions. Washington's HB 2225 requires chatbot operators to disclose when consumers interact with AI, implement protections for minors, and establish safeguards against self-harm facilitation. The requirements are specific: what disclosures must say, when they must appear, and what consequences follow if they are missing.
Insurance and healthcare AI. A pattern is emerging in state insurance regulation: AI-assisted decisions about coverage and claims are consumer protection issues, subject to disclosure requirements and oversight. Utah's SB 319 increases disclosure requirements for AI used in health insurance preauthorization. Washington's SB 5395 restricts how health insurance carriers can use AI in prior authorizations.
Three Laws Enterprises Should Read Closely
Not all 25 laws carry the same weight. Three stand out for the scope of their obligations and the precedent they establish.
Colorado AI Act (SB 24-205): effective June 30, 2026. The first comprehensive state AI law in the country. It covers high-risk AI systems across eight consequential decision domains: employment, housing, financial services, healthcare, education, insurance, government services, and legal services. Developers must publish transparency statements and provide deployers with documentation on capabilities, limitations, and known risks. Deployers face the heavier burden: risk management policies aligned with the NIST AI Risk Management Framework, annual impact assessments for each high-risk system, consumer disclosure when AI influences consequential decisions, and three-year record retention. Compliance with NIST AI RMF or ISO/IEC 42001 creates a statutory safe harbor. The Colorado Attorney General has exclusive enforcement authority, with penalties up to $20,000 per violation. Jurisdictional reach extends to any company whose AI system affects Colorado consumers, regardless of headquarters. A company headquartered in Texas whose underwriting model evaluates a single Colorado applicant is subject to the full requirements. For the full breakdown: our state regulations guide and the insurance-specific compliance roadmap.
New York RAISE Act: 72-hour reporting. The Responsible AI Safety and Education Act requires developers of advanced frontier AI models to report critical safety incidents to the state within 72 hours of determining an incident occurred. Most federal reporting frameworks allow significantly more time. For organizations deploying frontier models in New York, this window demands incident detection and reporting infrastructure that most have not built. The RAISE Act also establishes broader safety obligations for advanced model developers, but the 72-hour reporting requirement is the provision with the sharpest operational teeth.
Utah's nine-bill package: breadth over depth. No single Utah bill matches Colorado's comprehensiveness, and that may be the point. Nine laws spanning AI literacy in K-12 education, criminal provisions for AI-generated abuse material, expanded state AI oversight, defamation law updates for AI-manipulated content, and health insurance preauthorization disclosure. Instead of one comprehensive framework, Utah is layering targeted regulations across every domain where AI touches people. For enterprises operating in Utah, compliance obligations are spread across multiple statutes rather than consolidated under a single law.
Federal Preemption Is Not Coming Soon
On March 20, 2026, the Trump administration published its "National Policy Framework for Artificial Intelligence Legislative Recommendations," urging Congress to adopt a "minimally burdensome national standard" and preempt state AI laws imposing "undue burdens."
Senator Blackburn's proposed TRUMP AMERICA AI Act would codify that preemption, though passage remains uncertain. Meanwhile, regulators in Colorado, New York, Utah, and Washington are training examiners and deploying oversight tools.
Companies that defer compliance planning while hoping for federal preemption are making a specific bet: that Congress will act on technology regulation faster than it has at any point in the past twenty years. Most compliance officers would not take that bet.
What to Build for a Multi-State Reality
Six months ago, an enterprise could reasonably focus on a single state's requirements. An HR technology company might have tracked Colorado's AI Act and planned accordingly. A health insurer might have focused on Utah's preauthorization disclosure. That approach no longer works when a company deploying AI across hiring, customer service, and claims processing faces overlapping obligations in multiple states simultaneously, with more arriving every legislative session.
Start with the AI system inventory. Map every AI system across all eight consequential decision domains Colorado defines: employment, housing, financial services, healthcare, education, insurance, government services, and legal services. Include vendor-embedded models, ML features inside larger platforms, and automated decision components nobody in the organization categorizes as AI. This inventory is the first thing every regulator asks for. Swept AI's evaluation framework produces this mapping as a byproduct of ongoing system evaluation.
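As a concrete starting point, the inventory can be as simple as one typed record per system. The sketch below is a minimal illustration in Python; the class and field names are our own assumptions for this post, not Swept AI's data model or any statutory schema.

```python
from dataclasses import dataclass, field
from enum import Enum

# The eight consequential decision domains named in Colorado SB 24-205.
class Domain(Enum):
    EMPLOYMENT = "employment"
    HOUSING = "housing"
    FINANCIAL_SERVICES = "financial_services"
    HEALTHCARE = "healthcare"
    EDUCATION = "education"
    INSURANCE = "insurance"
    GOVERNMENT_SERVICES = "government_services"
    LEGAL_SERVICES = "legal_services"

@dataclass
class AISystem:
    name: str
    owner: str                      # accountable team or individual
    vendor: str | None              # None for in-house models
    domains: set[Domain] = field(default_factory=set)
    states_affected: set[str] = field(default_factory=set)  # e.g. {"CO", "WA"}

    @property
    def high_risk(self) -> bool:
        # Under Colorado's framework, touching any consequential
        # decision domain puts a system in scope.
        return bool(self.domains)

# Vendor-embedded models count: a resume screener inside an ATS
# is in scope even if nobody internally calls it "AI."
screener = AISystem(
    name="resume-ranker",
    owner="talent-acquisition",
    vendor="ExampleATS",          # hypothetical vendor
    domains={Domain.EMPLOYMENT},
    states_affected={"CO", "UT", "WA"},
)
assert screener.high_risk
```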
Align to NIST AI RMF. Colorado grants a statutory safe harbor for organizations that build their governance programs on the NIST AI Risk Management Framework. Texas's TRAIGA rewards framework adoption with similar protections. A governance program built on NIST AI RMF satisfies the core requirements across every jurisdiction referencing a nationally recognized framework.
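One way to operationalize that single-framework strategy is a crosswalk that keys each control to an RMF core function and the jurisdictions it helps satisfy. The mapping below is an illustrative sketch, not an official NIST or state crosswalk; the control names and coverage sets are assumptions.

```python
# Illustrative crosswalk: controls keyed to the four NIST AI RMF
# core functions, each tagged with the jurisdictions it supports.
RMF_CROSSWALK = {
    "GOVERN": [
        ("risk-management-policy",   {"CO SB 24-205", "TX TRAIGA"}),
        ("record-retention-3yr",     {"CO SB 24-205"}),
    ],
    "MAP": [
        ("system-inventory",         {"CO SB 24-205", "TX TRAIGA"}),
    ],
    "MEASURE": [
        ("annual-impact-assessment", {"CO SB 24-205"}),
        ("bias-monitoring",          {"CO SB 24-205", "TX TRAIGA"}),
    ],
    "MANAGE": [
        ("incident-reporting",       {"CO SB 24-205", "NY RAISE"}),
        ("consumer-disclosure",      {"CO SB 24-205", "WA HB 2225"}),
    ],
}

def jurisdictions_covered(crosswalk: dict) -> set[str]:
    """Every jurisdiction touched by at least one control."""
    return {j for controls in crosswalk.values()
            for _, laws in controls for j in laws}

print(sorted(jurisdictions_covered(RMF_CROSSWALK)))
```

The point of the structure: each new state law becomes a new tag on existing controls, not a new program.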
Deploy continuous monitoring. Without monitoring infrastructure, your organization carries exposure for discrimination it could have detected but didn't. Colorado's "should have discovered" language means the reporting clock starts even when nobody is watching. Multiple states require incident reporting within 72 to 90 days of discovery, and New York's 72-hour window for frontier AI safety incidents demands even faster detection. Continuous supervision identifies risks before reporting obligations activate.
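The operational consequence is a deadline calculation that starts at discovery, not at convenience. A minimal sketch, assuming illustrative reporting windows that you should verify against each statute:

```python
from datetime import datetime, timedelta, timezone

# Illustrative windows only; confirm against the statutory text.
REPORTING_WINDOWS = {
    "NY RAISE (frontier safety incident)": timedelta(hours=72),
    "CO SB 24-205 (algorithmic discrimination)": timedelta(days=90),
}

def reporting_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """The clock starts at discovery -- or, under 'should have
    discovered' language, when monitoring ought to have caught it."""
    return {law: discovered_at + window
            for law, window in REPORTING_WINDOWS.items()}

detected = datetime(2026, 4, 2, 9, 30, tzinfo=timezone.utc)
for law, deadline in reporting_deadlines(detected).items():
    print(f"{law}: report by {deadline.isoformat()}")
```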
Build consumer notification workflows. Transparency requirements appear in nearly every new law. Consumers need to know when AI is involved in consequential decisions. They need explanations when outcomes are adverse. Many laws require appeal pathways with human review. Designing these workflows into AI deployments now costs a fraction of retrofitting them later.
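In practice this becomes a decision-to-notice mapping evaluated at the moment a consequential decision is issued. The logic below is a hypothetical sketch of that pattern, mirroring the common statutory elements rather than any single state's text:

```python
from dataclasses import dataclass

@dataclass
class ConsequentialDecision:
    consumer_id: str
    domain: str           # e.g. "insurance"
    adverse: bool
    ai_involved: bool

def required_notices(decision: ConsequentialDecision) -> list[str]:
    """Hypothetical notice logic: disclose AI involvement,
    explain adverse outcomes, offer a human-review appeal."""
    notices = []
    if decision.ai_involved:
        notices.append("ai-involvement-disclosure")
    if decision.adverse:
        notices.append("adverse-outcome-explanation")
        notices.append("human-review-appeal-pathway")
    return notices

denial = ConsequentialDecision("c-1042", "insurance",
                               adverse=True, ai_involved=True)
print(required_notices(denial))
```

Wiring this check into the decision pipeline itself, rather than bolting it onto downstream communications, is what keeps retrofit costs low.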
Establish documentation and audit trails. When a regulator sends an inquiry, the organizations that respond confidently are the ones generating evidence continuously, not the ones assembling documentation under deadline pressure. Colorado requires three years of record retention for impact assessments, monitoring logs, and incident reports. Other states will follow. Automated evaluation and Trust Reports produce this evidence as a byproduct of normal governance operations.
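One defensible design is an append-only evidence log with retention stamped on each record. The sketch below hash-chains entries so gaps or edits are detectable when a regulator asks for the trail; it illustrates the pattern under our own assumptions, not Swept AI's Trust Report format.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=3 * 365)  # Colorado's three-year floor

def log_evidence(log: list[dict], event_type: str, payload: dict) -> dict:
    """Append an evidence record chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else ""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,  # impact-assessment, monitoring, incident
        "payload": payload,
        "retain_until": (datetime.now(timezone.utc) + RETENTION).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash is computed over the record before the hash field is added.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

trail: list[dict] = []
log_evidence(trail, "impact-assessment",
             {"system": "resume-ranker", "result": "pass"})
log_evidence(trail, "monitoring",
             {"system": "resume-ranker", "drift": 0.02})
```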
The 19 laws signed in March are the baseline, not the peak. Twenty-seven more bills have passed both chambers. Over 1,500 remain in play across 45 states. The governance infrastructure you build for today's multi-state requirements serves every jurisdiction that acts next.
If your AI systems make decisions about consumers in any of these states, we can show you what multi-jurisdiction governance looks like in practice.
