The State AI Regulation Landscape in 2026
US state AI regulation has exploded into a complex patchwork of overlapping laws, with more than 15 states enacting or advancing AI-specific legislation by early 2026 - creating a compliance challenge that rivals the early days of state privacy law.
In the absence of comprehensive federal AI legislation, states have moved aggressively to regulate AI within their jurisdictions. The result is a fragmented regulatory landscape where a single enterprise AI system may be subject to different requirements in Colorado, California, New York, and Illinois simultaneously.
The pattern mirrors what happened with state privacy laws after California passed the CCPA in 2018. Within five years, more than a dozen states enacted their own privacy laws with varying requirements. AI regulation is following the same trajectory, but faster - and with higher stakes because AI-related harms (discrimination, safety failures, privacy violations) are more immediate and visible than traditional data privacy concerns.
For enterprises, the strategic question is not whether to comply with state AI laws, but how to build a compliance program efficient enough to satisfy multiple states without maintaining separate governance programs for each jurisdiction. The answer lies in anchoring your program on the most stringent requirements and using frameworks like the NIST AI RMF as a common baseline. The Areebi platform is designed for exactly this multi-jurisdictional challenge.
Colorado: The Benchmark State AI Law
The Colorado AI Act (SB 24-205) is the most comprehensive state AI law in the US, establishing a duty of care framework for high-risk AI systems that is likely to serve as the template for other states.
Enacted in May 2024, effective February 1, 2026, with enforcement beginning June 30, 2026, the Colorado AI Act targets "high-risk AI systems" that make or substantially contribute to consequential decisions in employment, education, financial services, healthcare, housing, insurance, and government services.
Key requirements include:
- A general duty of care to protect consumers from algorithmic discrimination
- Mandatory impact assessments for each high-risk AI system
- Consumer notification before AI-assisted consequential decisions
- Mechanisms for consumers to contest adverse AI decisions
- AG notification within 90 days of discovering algorithmic discrimination
- Safe harbor consideration for organizations implementing recognized AI risk frameworks (NIST AI RMF explicitly mentioned)
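To make these duties concrete, here is a minimal sketch, in Python, of how a governance team might track each high-risk system's obligations, including the 90-day attorney general notification clock. The record type, field names, and example system are invented for illustration; they are not terms from the statute.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical tracking record for a high-risk AI system under the
# Colorado AI Act. Field names are illustrative, not statutory terms.
@dataclass
class HighRiskSystemRecord:
    name: str
    decision_area: str                    # e.g., "employment", "housing"
    impact_assessment_done: bool          # mandatory per high-risk system
    consumer_notice_in_place: bool        # notice before consequential decisions
    appeal_mechanism_in_place: bool       # mechanism to contest adverse decisions
    discrimination_discovered_on: date | None = None

    def ag_notification_deadline(self) -> date | None:
        """AG must be notified within 90 days of discovering
        algorithmic discrimination."""
        if self.discrimination_discovered_on is None:
            return None
        return self.discrimination_discovered_on + timedelta(days=90)

    def open_gaps(self) -> list[str]:
        """List unmet obligations for this system."""
        gaps = []
        if not self.impact_assessment_done:
            gaps.append("impact assessment")
        if not self.consumer_notice_in_place:
            gaps.append("consumer notification")
        if not self.appeal_mechanism_in_place:
            gaps.append("appeal mechanism")
        return gaps

record = HighRiskSystemRecord(
    name="resume-screener-v2",            # invented example system
    decision_area="employment",
    impact_assessment_done=True,
    consumer_notice_in_place=False,
    appeal_mechanism_in_place=True,
    discrimination_discovered_on=date(2026, 3, 2),
)
print(record.open_gaps())                 # ['consumer notification']
print(record.ag_notification_deadline())  # 2026-05-31
```

In practice a record like this would live in a governance platform rather than a script, but the shape of the obligations is the same.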
For detailed compliance guidance, including an action plan with deadlines, see our Colorado AI Act compliance guide.
California: Multiple Targeted AI Laws
California has enacted multiple targeted AI laws covering training data transparency, AI-generated content watermarking, and AI in hiring - while Governor Newsom's veto of the broad SB 1047 AI safety bill signaled a preference for narrow, use-case-specific regulation.
Key California AI laws active or effective in 2026:
- AB 2013 (Training Data Transparency): Requires developers of generative AI systems to post detailed information about the datasets used to train their models, including sources, data types, and whether personal information or copyrighted material is included. Effective January 1, 2026. (A minimal disclosure record is sketched after this list.)
- SB 942 (AI Transparency Act): Requires providers of generative AI to implement content provenance mechanisms including watermarking and metadata for AI-generated content. Targets deepfakes and AI-generated misinformation.
- AB 1008 (AI in Hiring): Regulates the use of AI tools in employment decisions, requiring employers to notify candidates when AI is used in hiring processes and to provide information about how the AI system evaluates candidates.
- SB 1047 (vetoed): Governor Newsom vetoed this broad AI safety bill in September 2024, arguing it was premature and could stifle innovation. However, revised versions of safety-focused legislation continue to advance in the California legislature.
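For developers scoping AB 2013 exposure, the snippet below sketches a training-data disclosure record covering the categories the law asks to be published (sources, data types, personal information, copyrighted material). Field names and values are invented for illustration and are not the statute's terms.

```python
# Hypothetical AB 2013-style training data disclosure record.
# Field names and values are illustrative, not statutory terms.
disclosure = {
    "model": "acme-genai-v3",                        # invented model name
    "dataset_sources": ["licensed-corpus-x", "public-web-crawl-2024"],
    "data_types": ["text", "images"],
    "contains_personal_information": True,           # must be stated either way
    "contains_copyrighted_material": True,           # must be stated either way
    "published_at": "https://example.com/ai-disclosures",  # placeholder URL
}

# Simple completeness check before publication.
missing = [k for k, v in disclosure.items() if v in (None, "", [])]
assert not missing, f"Incomplete disclosure fields: {missing}"
```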
California's approach is significant because it often sets the de facto national standard - companies building compliance for California's requirements typically extend those practices nationally. Enterprises should treat California's AI laws as a baseline for national operations.
New York City: Automated Employment Decisions
NYC Local Law 144 requires employers using automated employment decision tools (AEDTs) to conduct annual independent bias audits and provide candidates with specific notices about AI use in hiring.
Active since July 2023, Local Law 144 is one of the longest-running AI-specific employment laws in the US (only Illinois's 2020 video interview law predates it) and provides a preview of enforcement challenges. Key requirements include:
- Annual bias audit conducted by an independent auditor
- Publication of bias audit results on the employer's website
- Notice to candidates at least 10 business days before AEDT use
- Information about the AEDT's data sources, type, and retention policy
- Option for candidates to request alternative selection processes
Penalties range from $500 for a first violation to $1,500 for subsequent violations, assessed per day per violation. While penalties may seem modest, the reputational risk and class action exposure from documented AI bias in hiring are substantially larger.
The law's requirement for published bias audit results creates public transparency that AI governance programs must account for. Organizations using AI in hiring should implement bias testing as part of their standard deployment process.
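As an illustration of what such bias testing measures, the sketch below computes selection rates and impact ratios, the core metrics that Local Law 144 bias audits report, from a toy dataset. Group names and counts are invented; the 80% flag is the federal four-fifths rule of thumb, not a threshold set by Local Law 144, which requires publishing the ratios rather than meeting a cutoff.

```python
# Minimal sketch of the impact-ratio math behind a Local Law 144-style
# bias audit. All counts are invented; a real audit must be conducted
# by an independent auditor over the categories the rules specify.
assessed = {"group_a": 400, "group_b": 250, "group_c": 150}  # candidates assessed
selected = {"group_a": 120, "group_b": 60,  "group_c": 30}   # candidates advanced

selection_rates = {g: selected[g] / assessed[g] for g in assessed}
best_rate = max(selection_rates.values())

# Impact ratio: each group's selection rate divided by the highest rate.
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in sorted(impact_ratios.items()):
    flag = "  <- review" if ratio < 0.8 else ""  # four-fifths rule of thumb
    print(f"{group}: rate {selection_rates[group]:.2f}, impact ratio {ratio:.2f}{flag}")
```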
Illinois: AI Video Interviews and Fairness
Illinois has two AI-specific employment laws: the AI Video Interview Act (effective since 2020) and the Artificial Intelligence Fairness Act, both targeting AI use in hiring and employment decisions.
The AI Video Interview Act requires employers using AI to analyze video interviews to:
- Notify applicants before the interview that AI will be used
- Explain how the AI works and what characteristics it evaluates
- Obtain applicant consent before using AI analysis
- Limit sharing of video recordings to only those necessary for evaluation
- Destroy recordings within 30 days of an applicant's request
The Artificial Intelligence Fairness Act expands protections beyond video interviews to broader use of AI in employment decisions, prohibiting the use of AI that results in discriminatory outcomes based on protected characteristics.
Illinois's Biometric Information Privacy Act (BIPA) also intersects with AI when systems process facial geometry, voiceprints, or other biometric data, carrying penalties of $1,000-$5,000 per violation. Organizations using AI for any biometric processing of Illinois residents face compound compliance obligations.
Virginia, Texas, and Emerging State AI Laws
Virginia and Texas are advancing AI governance legislation that would expand the patchwork further, while several other states have introduced AI bills targeting specific use cases including deepfakes, algorithmic pricing, and surveillance.
Virginia's proposed High-Risk Artificial Intelligence Developer and Deployer Act follows the Colorado model, establishing impact assessment requirements and consumer disclosure obligations for high-risk AI systems. If enacted, it would create another state with comprehensive AI governance requirements.
Texas has introduced legislation targeting AI in insurance underwriting, law enforcement use of facial recognition, and AI-generated content disclosure. Texas's approach is more sector-specific than Colorado's comprehensive framework.
Other states with active AI legislation include:
- Connecticut: AI in hiring transparency requirements
- Maryland: Restrictions on AI facial recognition in employment
- Washington: AI accountability and transparency proposals
- New Jersey: Automated decision-making notification requirements
The trend is clear: state AI regulation will continue to expand. Organizations that build compliance programs only for current laws will find themselves constantly catching up. A proactive, framework-based approach using the NIST AI RMF and Areebi's governance platform provides adaptability to new requirements as they emerge.
State AI Law Comparison Table
The following table compares key provisions of major US state AI laws to help enterprises identify their compliance obligations by state.
| State/City | Law | Scope | Key Requirements | Penalties | Effective |
|---|---|---|---|---|---|
| Colorado | SB 24-205 | High-risk AI in consequential decisions | Impact assessments, consumer disclosure, duty of care, AG notification | $20,000/violation | Feb 2026 (enforcement Jun 2026) |
| California | AB 2013 | Generative AI training data | Training data transparency disclosures | Varies | Jan 2026 |
| California | SB 942 | Generative AI content | AI content watermarking and provenance | Varies | 2025-2026 |
| California | AB 1008 | AI in hiring | Candidate notification, AI explanation | Varies | 2025 |
| NYC | Local Law 144 | Automated employment decisions | Annual bias audit, candidate notice, audit publication | $500-$1,500/violation/day | Jul 2023 |
| Illinois | AI Video Interview Act | AI analysis of video interviews | Notice, consent, explanation, recording destruction | Varies (BIPA: $1,000-$5,000 where biometrics involved) | Jan 2020 |
| Illinois | AI Fairness Act | AI in employment | Non-discrimination in AI employment decisions | Varies | 2025-2026 |
| Virginia | Proposed | High-risk AI systems | Impact assessments, consumer disclosure (proposed) | TBD | TBD |
| Texas | Multiple proposed | Insurance, law enforcement, content | Sector-specific AI requirements (proposed) | TBD | TBD |
The Federal Preemption Question
The Trump administration has signaled interest in preempting state AI laws with a lighter federal framework, but no preemption legislation has passed as of April 2026 - and enterprises should not delay compliance while waiting for it.
The arguments for federal preemption are familiar from the privacy law context: a single national standard reduces compliance costs, provides regulatory certainty, and keeps the strictest state from setting the de facto national standard. Industry groups have lobbied aggressively for preemption, arguing that the state patchwork threatens innovation.
However, several factors make near-term preemption unlikely:
- Political complexity: Designing a federal AI law that satisfies both innovation-focused Republicans and consumer-protection-focused Democrats has proven intractable
- State resistance: States that have invested in AI legislation will resist preemption, particularly if the federal standard is weaker than existing state laws
- Enforcement gap: Federal agencies lack the resources for nationwide AI enforcement, meaning state AGs would need to retain some role
- Precedent: Despite years of calls for federal privacy preemption, state privacy laws continue to proliferate. The same pattern is likely for AI regulation.
The pragmatic approach is to build compliance programs that satisfy the most stringent current requirements (Colorado) while maintaining the flexibility to adapt as new laws emerge. Areebi's compliance assessment evaluates your posture against all current and proposed state AI laws and helps you prioritize accordingly.
Building a Multi-State AI Compliance Strategy
The most efficient multi-state AI compliance strategy is to build a single governance program anchored on the most stringent state requirements, map controls to each applicable jurisdiction, and use framework-based compliance to future-proof against new laws.
- Determine your exposure: Identify which states you operate in, where your customers are located, and which state laws apply to your AI use cases. Most national enterprises will need to satisfy Colorado, California, and NYC requirements at minimum.
- Anchor on Colorado: The Colorado AI Act's comprehensive requirements (impact assessments, consumer disclosure, duty of care, safe harbor) represent the most stringent current baseline. Meeting Colorado's bar generally covers most other states' requirements.
- Implement NIST AI RMF: The NIST AI RMF provides the framework-based foundation that Colorado's safe harbor provision explicitly rewards. It also provides compliance leverage for future state laws that reference recognized frameworks.
- Automate compliance tracking: Manual compliance tracking across multiple jurisdictions is unsustainable; a minimal mapping sketch follows this list. Use Areebi's platform to maintain automated compliance mapping across all applicable state laws.
- Monitor legislative developments: Subscribe to legislative tracking for all states where you operate. New AI bills are introduced in nearly every legislative session.
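To illustrate the control mapping these steps describe, the sketch below compares one set of internal controls against the state requirements in scope and prints the gaps. All control IDs and requirement labels are invented for illustration.

```python
# Hypothetical mapping from internal governance controls to the state
# requirements each one satisfies. Labels are invented for illustration.
CONTROL_MAP = {
    "GOV-01 impact assessment": ["CO: impact assessment"],
    "GOV-02 consumer notice":   ["CO: pre-decision notice",
                                 "NYC LL144: 10-business-day notice"],
    "GOV-03 bias testing":      ["NYC LL144: annual bias audit"],
    "GOV-04 appeal mechanism":  ["CO: right to contest"],
}

# Requirements in scope, derived from where you operate and whom you affect.
REQUIRED = {
    "CO: impact assessment", "CO: pre-decision notice", "CO: right to contest",
    "NYC LL144: annual bias audit", "NYC LL144: 10-business-day notice",
    "CA AB 2013: training data disclosure",
}

covered = {req for reqs in CONTROL_MAP.values() for req in reqs}
print("Unmapped requirements:", sorted(REQUIRED - covered))
# Unmapped requirements: ['CA AB 2013: training data disclosure']
```

A real mapping would be many times larger, which is exactly why step 4 argues for automating it.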
The state AI law patchwork will continue to grow. Organizations that invest in adaptable, framework-based governance programs now will have a durable competitive advantage over those scrambling to comply with each new law individually. See Areebi's pricing for multi-jurisdictional compliance support.
Frequently Asked Questions
How many US states have AI laws?
As of early 2026, more than 15 states have enacted or are actively advancing AI-specific legislation. Colorado has the most comprehensive law (SB 24-205). California has multiple targeted AI laws. NYC, Illinois, and several other jurisdictions have laws targeting AI in specific contexts like hiring.
Will federal law preempt state AI laws?
No federal AI preemption legislation has passed as of April 2026. While the Trump administration has expressed interest in preemption, political complexity, state resistance, and the privacy law precedent suggest that state AI laws will continue to proliferate for the foreseeable future. Enterprises should not delay compliance while waiting for federal action.
Which state AI law is the most stringent?
The Colorado AI Act (SB 24-205) is currently the most comprehensive and stringent state AI law, with broad coverage of high-risk AI systems, mandatory impact assessments, consumer disclosures, and a general duty of care. Building compliance for Colorado generally satisfies most other state requirements.
Do state AI laws apply to out-of-state companies?
Yes. Most state AI laws apply based on the location of the affected individual, not the company. If your AI system makes decisions affecting residents of a particular state, you are generally subject to that state's AI law regardless of where your company is headquartered.
How do I comply with multiple state AI laws simultaneously?
Build a single governance program anchored on the most stringent requirements (currently Colorado), implement the NIST AI RMF for framework-based compliance leverage, and use automated tools like Areebi to map your controls against all applicable jurisdictions. This is more efficient than maintaining separate compliance programs for each state.
Related Resources
- Areebi Platform
- AI Governance Assessment
- NIST AI RMF Compliance
- Colorado AI Act Guide
- NIST AI RMF Implementation Guide
- AI Compliance Landscape 2026
- AI Governance vs Security
- Pricing
- EU AI Act Compliance
- Case Study: University System FERPA Compliance
- What Is AI Compliance
- What Is Algorithmic Discrimination
- What Is Automated Decision Making
About the Author
VP of Compliance & Trust, Areebi
Former compliance director at a Big Four consulting firm, with deep expertise in HIPAA, SOC 2, GDPR, and the EU AI Act.
Ready to govern your AI?
See how Areebi can help your organization adopt AI securely and compliantly.