Digesting Trump’s AI wishlist for Congress 

With states already putting forward over 1,600 AI-related proposals this year, startups are scrambling to understand which proposals might have legs and what they would mean for their companies. Last Friday, the Trump administration stepped back into the conversation by sharing recommendations on how it believes Congress should legislate around AI. That seven-part framework touches on key areas of the AI policy debate for startups—including suggesting measures to advance innovation and “preempting cumbersome state AI laws.”

The White House’s framework is born of Trump’s December executive order on AI that tasked members of his administration with developing legislative recommendations for a national AI framework. Though the framework released last week stops short of proposing actual legislative text for Congress to take up, it will shape the debate on Capitol Hill around AI. It centers on the administration’s “four C’s” of AI policy—children, communities, creators, and censorship—as well as innovation, workforce, and state preemption. The framework echoes many startup priorities for AI policy and contains some potential pitfalls where policymakers must tread carefully.

Things to like

Innovation

The framework’s fifth pillar focuses on “enabling innovation” and includes directives to Congress that will be helpful for startups. In particular, it calls on Congress to set up regulatory sandboxes for AI and to make AI-ready datasets available for use by industry and academia. Both of these ideas are poised to lower barriers to AI innovation for startups, as sandboxes reduce regulatory constraints in a controlled way, and access to data remains a cost center for AI innovation. The innovation section of the framework also cautions Congress against creating a new AI-specific regulator. Avoiding this pitfall ensures that the expertise federal entities are already bringing to bear is not undermined. Moreover, creating a new technology-specific regulator would likely prove short-sighted, and that regulator would potentially be subject to regulatory capture.

Preemption

The framework ends with a section on state preemption—a key priority for startups given the costs and practical difficulties associated with navigating a patchwork of varying rules. Startups encounter the challenges of varying state rules in many areas of the law—from employment to data privacy and cybersecurity—and those rules end up steering where they hire, how they scale, and the number of zeros on their legal bills. With states hyperactive on AI issues over the past few years, AI startups are facing another barrier to nationwide success.

Despite startups’ experiences and the straightforward solution of strong federal preemption, enacting it may prove an uphill task given previous attempts at preemption language in Republicans’ party-line bill and a must-pass defense bill. Those episodes catalyzed opposition to preemption, even among members previously open to it. The framework calls on Congress to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard […] not fifty discordant ones,” and then goes to great lengths to respond to past criticism of preemption, describing what should not be preempted, or what preemption does not mean. Importantly, the framework calls out that “states should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models,” a problem that has arisen in multiple states, including Colorado. Resolving the incongruous patchwork of state rules is paramount for startup competitiveness, and it will likely need to be paired with broader elements of a robust—but balanced—federal AI rulebook.

Workforce

What impact AI will have on the workforce is a top concern for many Americans, according to public opinion polling. The administration’s framework acknowledges this without devolving into overreactions to the new technology, calling for a study of the task-level impacts of AI use in the workplace to ground any potential policy responses.

The framework also addresses talent shortages in the AI space and preparing the workforce for AI. It suggests AI training not just in formal university settings, but also for youth, through apprenticeships, and in existing training programs. These all-of-the-above avenues for AI upskilling will help grow the AI talent pool.

Things to be careful about

Children

The Trump framework begins with and devotes significant real estate to discussion of “protecting children.” The section contains some commonsense suggestions with measures aimed at empowering parents, clarifying that “existing child privacy protections apply to AI systems,” and avoiding undermining certainty for startups by “setting ambiguous standards about permissible content,” or creating “open-ended liability that could give rise to excessive litigation.” 

However, the section also suggests Congress “establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” That language stops short of endorsing age verification and includes limiting language—e.g., calling out parental attestation as an example method and underscoring that the requirements should apply to services likely to be accessed by minors. That language is more limited than where Congress is poised to go—key committee Republicans have advanced age verification requirements. Conservative allies outside of Congress have pointed to the White House framework as an endorsement of broad age verification. Broad requirements to verify user age pose clear problems for startups, because they create new costs, add friction to growth, and threaten the privacy and security of their users.

Copyright

How policymakers approach issues at the intersection of copyright and AI will determine the types of entities that are able to participate in AI innovation. The framework’s approach to these issues contains both helpful and not-so-helpful elements. Helpfully, the administration lays out its view that the “training of AI models on copyrighted material does not violate copyright laws,” but stops short of calling for action to that end—saying the question should be left to the courts. While that is not directionally problematic, it ensures litigation over fair use—and the associated cost and legal uncertainty—will continue. A better outcome for startups would be policymakers making clear that model training—including with copyrightable works—is permissible.

The section diving into copyright issues also proposes exempting the creation of licensing cartels from antitrust laws. Clearing the way for licensing regimes makes little sense here. The administration asserts that copyrights are not infringed by AI training, so there would seem to be nothing to license (unless the administration is referring to licensing for model outputs, which it does not make clear). Regardless, greenlighting new cartels would be further problematic for low-resource startups because it would put them at an even larger power imbalance compared to rightsholders. This section of the framework also contemplates regulations for digital replicas. Existing bills in Congress pose clear problems for startups by creating liability for actions outside of their control, coupled with excessive penalties.

Censorship 

The framework includes a short section on censorship. While the goals as outlined should be relatively uncontroversial on their face—ensuring government does not infringe on the First Amendment rights of AI companies—the administration overall has a poor track record in this space. The president and administration officials have gone after media outlets and entertainers for speech they don’t like. And in the AI space specifically, officials have designated a leading AI lab, Anthropic, a supply-chain risk over its expression around a contract dispute.

The language the framework uses around prohibiting government interference based on “ideology” should also probably not be absolute. For example, a resume review AI tool should take extra precautions to make sure it is not returning lacrosse-playing candidates named Jared more favorably. The administration may be pushing against what it considers “DEI,” but civil rights enforcers should still make sure high-risk AI tools are appropriately accounting for training data that may reflect historical biases.

Communities

The administration’s framework folds several issues into its “communities” bucket: AI infrastructure issues, preventing AI fraud, national security considerations, and AI resources for small businesses. Many of these are logical and straightforward—steps should be taken to enhance law enforcement capabilities to address AI-related harms aimed at vulnerable populations, to ensure that agencies with AI expertise have the resources they need to evaluate the national security implications of models, and to help small businesses leverage AI to support their growth. The points on the data center infrastructure needed to power AI innovation include a helpful call to streamline permitting but also seem to imply codifying the administration’s ratepayer protection pledge, which could be more fraught.

That pledge—for AI companies to pay for power generation, connecting infrastructure, and rates separate from other utility customers—has been agreed to by all of the major AI players as a voluntary commitment and business-driven investment decision. In practice, that might mean higher costs as companies pay to generate power, pay to receive power, and pay potentially higher rates than other similarly heavy users of power, like factories—costs that will likely be passed on to users of AI models and services, like startups. Setting the pledge in law seems unlikely to be forward-looking, as this space will look much different in two years than it does today—and there may be new solutions to energy-related constraints for AI infrastructure.

Next steps

The administration’s framework marks a new step that should help reframe parts of the AI policy debate. But getting sufficient buy-in to turn it into legislation will prove difficult—even among members of the same party, much less reaching across the aisle. House Democrats formed their own “AI Commission” after the bipartisan House AI Task Force was not renewed for this Congress, and they responded skeptically to the administration’s framework. Senate Republicans have not been in lockstep on AI issues, with Sen. Marsha Blackburn (Tenn.)—who tanked Republicans' earlier attempt at AI preemption that was led by Commerce Committee Chair Ted Cruz (Texas)—putting forward her own competing AI package. And House Republicans have not always looked aligned either. Congress needs to overcome these hurdles and move forward with balanced legislation that tackles these AI issues and creates certainty needed for U.S. startups to succeed.
