Anthropic is reported to have donated $20 million to Public First Action, creating a developer-backed coalition that directly challenges a $1.1 million campaign by investor-led super PAC Leading the Future over New York’s AI oversight law.

Split in AI political coalitions

This marks the first time a major AI developer appears to have financed targeted political advertising in defense of safety-focused legislation. Public First Action’s estimated $450,000 ad buy in support of New York City Council member Alex Bores in the NY-12 race runs counter to ads funded by Leading the Future, a super PAC reported to be backed by more than $100 million in contributions from investors such as Andreessen Horowitz, OpenAI co-founder Greg Brockman, Perplexity AI, and Palantir co-founder Joe Lonsdale. The clash lays bare a growing rift between developer-led safety advocates and investor-led industry groups on the proper role of state-level AI regulation.

Money and messaging

According to FEC filings, Public First Action’s $450,000 commitment trails Leading the Future’s cumulative $1.1 million in attack spending but is underpinned by a reported $20 million Anthropic donation. In its ads, Public First Action casts Bores as a “champion of transparency,” while Leading the Future labels the RAISE Act “overreaching” and “costly.” The divergent messaging underscores two pro-AI visions: one emphasizing public oversight and safety disclosures, the other advocating industry-led norms to preserve innovation incentives.

Governance stakes

The RAISE Act, which Gov. Kathy Hochul is reported to have signed in December 2025, requires large AI developers—defined in the bill text as those with over $100 million in compute spend or models linked to critical harms—to publish safety protocols and report serious incidents to New York’s Division of Homeland Security. Supporters describe it as a framework for safer deployment; opponents in Leading the Future warn of policy capture, high compliance costs, and legal challenges. Public First Action’s move signals that AI developers may now be willing to deploy substantial political capital to defend disclosure requirements in mass media arenas.

These funding contests carry reputational and regulatory risks. Continued ad battles may erode public trust if legislative outcomes appear tied to corporate war chests rather than public interest. Regulators could face intensified scrutiny or litigation aimed at rolling back or reshaping disclosure mandates—especially if ad-fueled controversies paint safety rules as politically motivated rather than evidence-based.

Signals to watch

  • FEC filings for both PACs over the next 60 days, indicating whether developer-backed spending extends beyond NY-12 or remains a single-race defense.
  • Official statements from Anthropic or Public First Action clarifying strategic intent, which could set precedents for future developer involvement in AI governance politics.
  • Polling shifts in the NY-12 race, where AI oversight messaging may sway undecided voters and inform subsequent campaign strategies on tech regulation.
  • Emergence of similar ad patterns in other state or federal contests where AI policy is contested, signaling a broader realignment of political influence in the tech sector.

Implications for industry dynamics

This episode underscores that the AI industry is no longer unified on regulatory philosophy. Developer-funded advocacy for transparency and safety can now collide with deep-pocketed investor coalitions favoring lighter-touch norms. As both sides refine political playbooks, the balance of power in shaping state and federal AI rules may hinge on which coalition sustains its presence across multiple races. Ultimately, this dynamic will shape not only who defines AI policy narratives but also how the public perceives accountability mechanisms for preventing model misuse and systemic harm.