How 2025 state legislative sessions grew the AI patchwork and what it means for startups

Nearly all state legislatures have adjourned for the year, and they took several notable steps impacting startups and AI, putting forward over 1,000 bills related to the technology. States considered, passed, or enacted various approaches to AI issues, ranging from broad “comprehensive” frameworks, to narrower use-specific rules, to developer obligations at the technical level. In the meantime, Congress tried to include a “pause” on individual state frameworks in a party-line package to prevent the proliferation of a patchwork of varying AI rules. That provision did not make it into the final bill, and the debate around it is likely to color other attempts to solve problems facing startups and the broader AI ecosystem. Regardless, it’s instructive for startups and policymakers to understand AI policy activity at the state level and the value of clarity and uniformity when it comes to rules for AI. 

Category-based frameworks

In 2024, Colorado became the first U.S. state to enact “comprehensive” AI legislation governing the development and deployment of AI systems. The Colorado law would create obligations for developers and deployers of AI, with those in “high-risk” areas (such as education, employment, health, housing, or finance) subject to the most stringent requirements, e.g., around risk management, annual impact assessments, and notices. Upon signing the bill in 2024, Governor Jared Polis (D) suggested it should be fixed by convening a task force to recommend updates. The legislature did not advance any fixes to the bill during its regular legislative session, and instead delayed the effective date from February to June 2026 during a special session in August. That delay allows the legislature to consider further amendments during the 2026 regular session.

In 2025, several states advanced frameworks paralleling Colorado’s AI Act, most notably Virginia, Connecticut, and Texas. The bills met different fates but illustrate the debates surrounding AI frameworks. The Virginia legislature passed the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) in February. Governor Glenn Youngkin (R) vetoed the bill, saying its “rigid framework [...] puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” 

Meanwhile in Connecticut—where a similar bill also died in 2024 over Governor Ned Lamont’s (D) threat of a veto—SB 2 failed to make it beyond Senate passage. During consideration of the bill, the state’s Chief Innovation Officer offered testimony that echoed many concerns of the startup ecosystem, telling legislators that “even the most carefully crafted regulations [...] can create an unintended chilling effect for businesses and innovators” because “compliance with the new regulations” will cost “substantial amounts of time and money” and create a “prohibitive barrier for those considering establishing operations [in] Connecticut,” or lead to “shifting their operations elsewhere.” He also underscored the importance of avoiding a “state-by-state approach” and the preference for “a uniform, predictable playing field for fair competition and innovation.”

Republican-controlled Texas surprised many stakeholders by putting forward a bill, the Texas Responsible AI Governance Act (TRAIGA), originally modeled on the Colorado Act—but going even further by allowing individuals to sue over alleged violations. (The Colorado AI Act is exclusively enforced by the state’s Attorney General, and violations are considered deceptive trade practices under Colorado consumer protection law.) Allowing a private right of action would open startups to the kind of costly, bad-faith litigation they already experience in other areas of the law. A significantly altered version of TRAIGA was introduced in March and signed into law by Governor Greg Abbott (R) in June, with narrow obligations for developers, an AI regulatory sandbox to promote experimentation, and exclusive Attorney General enforcement. 

In California, legislative and regulatory actions around the creation of category-based frameworks are worth startups’ attention. In September, the California Privacy Protection Agency—the agency created to implement and enforce the state’s privacy laws—finalized a long-running rulemaking on Automated Decisionmaking Technology. Those rules would apply to companies’ use of AI to “substantially replace” human decisionmaking in finance, housing, education, employment, or healthcare—scoping in many of the services startups create. According to agency estimates, implementing the rules will cost startups tens of thousands of dollars in both initial and ongoing costs. Economists have criticized the agency’s figures as underestimates. (The rulemaking package also includes costly independent cybersecurity audit requirements that will impact startups beginning in 2030.)

The California legislature considered a related bill, AB 1018 on Automated Decision Systems; while it only passed one chamber, it would have been problematic for startups and is likely to come back in 2026. The bill—owing to vaguer definitions—would likely apply to an even broader range of AI and software products than the other frameworks discussed in this section. It would have scoped in—and required developers to predict—fine-tuning of their AI tools, something that would likely restrict the availability of AI tools; limit iteration, improvement, and innovation; and undermine the competitiveness of the AI ecosystem. The bill would have required independent third-party audits by 2030, which (aside from there not being an audit ecosystem or long-settled industry standards) would create a costly barrier to entry for startups. Finally, the bill included enforcement by a mish-mash of agencies, including the Attorney General, city attorneys, and other state agencies for certain violations, which would breed uneven or even politicized enforcement and create uncertainty for businesses that are trying to follow the law.

Rules for foundation models

In 2025, two state legislatures—in California and New York—sent bills to their Governors’ desks that would regulate foundation model development in the name of AI safety. An earlier version of this type of proposal, California’s SB 1047, focused on “existential” risks potentially arising from the most advanced “frontier” AI models and gathered much attention in 2024. That bill would have regulated model development by requiring a safety determination at the outset and holding developers liable for failing to meet it—disincentivizing model development and the availability of open-source AI models. Even though the bill was aimed at only the most resource-heavy models, it would have rippled through the startup ecosystem, since those are the foundation models startups are leveraging to innovate in AI. Governor Gavin Newsom (D) vetoed the bill over these concerns. In 2025, California lawmakers passed a dialed-back version of SB 1047, SB 53, which Newsom seems poised to sign. That bill will require developers to share their safety plans every three months and includes protections for whistleblowers. It will add cost and uncertainty to model development, but not to the same magnitude that SB 1047 would have. 

The “Responsible AI Safety and Education” (RAISE) Act (S. 6953) in New York is similar in goals but hews closer to SB 1047 in posing problems for AI innovation and startups in particular. Like SB 1047 would have, the RAISE Act’s approach to regulation and liability will disincentivize model development and availability. The models regulated by the RAISE Act are suitable for a wide range of tasks, but end users often determine how a model will be used. The RAISE Act misguidedly holds developers of foundation models responsible for harms—rather than end users or malign actors. In response, developers will restrict the availability of their models, because they will not want to be liable for the actions of others that they do not control. It is unclear what action Governor Kathy Hochul (D) will take on RAISE, but New York has a unique “chapter amendment” process allowing her to sign bills while requesting changes from the legislature (a path she has favored over outright vetoes). 

Chatbots

In 2025, several states, including Utah, Nevada, and Illinois, enacted legislation responding to concerns about the use of AI for therapy, while New York enacted and California passed broader legislation related to chatbots. The rules are likely to impact those offering AI-enabled mental health tools, startups with chatbot-based user interfaces (think translation services, productivity assistants, educational tutors, etc.), and how general-purpose AI services are made available to the public.  

The three therapy-specific laws vary slightly in approach. The Nevada and Illinois laws generally prohibit the use of AI to provide or assist in mental healthcare outside of activities like scheduling. The Utah law meanwhile recognizes that there could be a beneficial role for AI, and clears a path through an affirmative defense against allegations of unlicensed practice. That defense requires providers to maintain transparency, testing, risk management, and oversight mechanisms. And in an important recognition of human imperfection, it requires that the mental health chatbot pose “no greater risk to a user than that posed to an individual in therapy with a licensed mental health therapist.”

The New York law could scope in therapy uses (a separate proposal that did not pass would have directly addressed the use of AI for licensed professions like mental healthcare), but applies more generally to “companion chatbots.” AI companies must disclose and regularly remind users that they are not interacting with a human, and they must detect if a user is expressing thoughts of suicide or self-harm and direct them to crisis services. The New York law was signed as part of the state budget and will go into effect on November 5, 2025.

The California legislature passed two bills responding to companion chatbots, SB 243 and AB 1064, which differ in important ways in their broader unintended consequences for AI services. SB 243 hews closer to the New York law, requiring chatbot providers to periodically notify users that the chatbot is not human, maintain protocols to prevent the model from outputting content related to suicidal ideation, and direct users to crisis services in the event they express such thoughts. The bill would also require additional steps for users that the service knows are minors, like instituting “reasonable measures” to prevent sexually explicit interactions or content.

Meanwhile, AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, is much stricter in application and defines companion chatbot in an even broader way that likely scopes in all chatbots. If signed, the bill would require companies to prevent those under 18 from accessing chatbots unless the chatbots are “not foreseeably capable” of “encouraging” a range of harms, including self-harm, suicidal ideation, violence, drug or alcohol use, or disordered eating. The chatbot must not make available content that is sexually explicit, illegal, or imitates a therapist. “Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety” is also not allowed, which might leave a provider in a no-win situation should a child ask where the tooth fairy gets their money or what will happen to a terminally-ill grandparent. Prioritizing factual accuracy would require a response that the tooth fairy does not exist and that a beloved family member will die, likely leading to emotional distress for the user, in tension with the safety requirement. The bill enables children and their parents or guardians to sue over alleged violations, meaning very high legal risk for any provider with a chatbot interface. The consequence of the bill will be for chatbot providers to institute strict age controls, like age verification, and ban those under 18 from using their products. Of course, that blanket restriction may guard against the enumerated harms, but it will also prevent students from using common tools to practice new languages, write computer code, or solve math problems. Governor Newsom has not yet acted on either chatbot bill.

“Right to compute”

Standing in contrast to regulatory frameworks, Montana, New Hampshire, and Idaho all introduced efforts to limit burdensome AI frameworks or ensure the “right to compute,” with Montana Governor Greg Gianforte (R) signing the Right to Compute Act in April. The act recognizes the “right to own and make use of technological tools, including computational resources” and requires that restrictions on those rights be “demonstrably necessary and narrowly tailored to fulfill a compelling government interest.”

Federal happenings related to state AI policy

By May, states had introduced over 1,000 bills related to AI, and Members of Congress took note—most notably with Republicans putting forward a proposed 10-year moratorium on the enforcement of unique state AI rules in their party-line budget reconciliation package. Rules associated with that special procedure require provisions to have a budgetary impact, so states’ eligibility for certain federal funds would have been tied to compliance with the moratorium. The proposal ignited a firestorm of opposition from Democrats and populist Republicans, and Sen. Marsha Blackburn’s (R-Tenn.) 11th-hour reneging on a deal she reached with Sen. Ted Cruz (R-Texas) to support the provision led Cruz to pull the moratorium from the broader package. 

The problems with a patchwork of varying state AI rules have continued to be a focus at the federal level, surfacing in several congressional hearings and in the Trump administration’s AI Action Plan. Cruz, Chair of the Senate Commerce Committee—which has primary jurisdiction over AI issues—laid out a framework at a hearing earlier this month that would seek to “Prevent a Patchwork of Burdensome AI Regulation” from states and abroad. Cruz has so far introduced only one part of his framework, a bill to set up a federal regulatory sandbox for AI. It’s unclear when other parts of the framework might be introduced and whether they’ll be able to overcome newly heightened partisanship around AI regulation. Whatever happens (or doesn’t) at the federal level, states are poised to continue with their own unique efforts, spelling an increasingly fractured landscape for startups to navigate. 

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.