The Big Story: Court risks setting precedent for staggering damages in AI copyright case
A recent court decision could open the door to devastating liability for AI companies, threatening the ability of model developers, including startups, to train large language models and the availability of those models to startups innovating at the application layer. This week, Engine and several other industry groups filed an amicus brief warning that a federal judge’s move to let millions of authors collectively sue Anthropic could chill AI innovation by setting a precedent that would let large groups weaponize copyright infringement claims carrying the potential for billions of dollars in damages.
While a district court ruled in late June that using copyrighted content in AI training data qualifies as fair use, the Ninth Circuit is now considering whether rightsholders, whose work may have been included in a dataset compiled by a third party, can collectively sue for infringement-related damages. The recent order allows up to seven million rightsholders to join together in a class-action lawsuit, raising the likelihood that rightsholders will seek damages against Anthropic. Under copyright law, a finding of willful infringement can carry statutory damages of up to $150,000 per work infringed; with up to seven million works at issue, that could total more than $1 trillion in potential liability. For a startup, even the possibility of damages on that scale can make training an AI model too risky. Startups building AI models need vast training datasets, and they cannot absorb hundreds of thousands of dollars, let alone hundreds of billions, in damages when they typically operate on just tens of thousands of dollars a month to cover all expenses. The mere risk of costly litigation and damages can pressure early-stage companies into preemptively settling a lawsuit, stalling development, or avoiding a market altogether.
Policymakers should keep a close eye on how these cases evolve and be prepared to implement meaningful guardrails that won’t stifle innovation or disproportionately harm startups. A balanced intellectual property framework, one that empowers startups to build, is essential to ensuring competition and innovation in AI.
Policy Roundup:
Trump indicates semiconductor tariffs are imminent. President Donald Trump said Wednesday his administration would place tariffs “of approximately 100%” on semiconductors as soon as next week using national security authority. Semiconductor chips are a key input in nearly all electronics, and high tariffs risk raising costs for hardware startups and for the infrastructure all startups rely upon, including cloud computing data centers, as we explore in a new blog post this week. The move comes as sweeping tariffs, some as high as 39 percent, took effect Thursday.
Colorado’s AI law to be debated during special session. Colorado’s sweeping AI regulation is under fire: Attorney General Phil Weiser (D) has warned the law could push innovation out of the state, and Governor Jared Polis (D) has called a special legislative session for mid-August where lawmakers could tweak or delay the law before it takes effect in February. Engine has warned that Colorado’s first-in-the-nation law will burden startups innovating with AI and could push them out of the state.
Trump fires labor statistics chief following weak employment numbers. After a lower-than-anticipated July jobs report and major downward revisions to May and June estimates, the Trump administration fired Bureau of Labor Statistics (BLS) Commissioner Erika McEntarfer, accusing the agency of partisanship. The BLS is seen as a nonpartisan source of economic data, and its findings often inform decisions by investors, government agencies, and financial institutions. Undermining trust in its reports could discourage investor confidence and destabilize the early-stage capital environment startups rely on.
California deepfake law blocked in Section 230 ruling. A federal judge blocked California’s election deepfake law this week, citing online intermediary liability limitations under Section 230. The California law would have prohibited Internet platforms from hosting user content containing AI-generated deepfakes during elections. In his ruling, the judge sidestepped complaints that the law infringed on users’ First Amendment rights and instead relied on Section 230, which shields online platforms from legal liability for content shared by their users and is a critical protection for startups that host user content.
Startup Roundup:
#StartupsEverywhere: Charlotte, North Carolina. After a career in the startup ecosystem, Alex Smereczniak knew he wanted to help others along their journey to entrepreneurship. Drawing on his experience franchising his own laundry business, Alex co-founded Franzy, a research and matching tool that helps individuals open franchises. We had the opportunity to chat with him about his product, the franchise industry, and more.