On July 23, the Trump Administration rolled out its AI Action Plan, promising broad deregulation of AI. However, much of the regulatory policy the Plan will eventually advance remains unclear.

Bottom Line: 

The Trump Administration’s “AI Action Plan” is a surprisingly mixed bag. At its core is a vague, though firm, commitment to deregulation, something for which the AI industry has heavily lobbied.

Most interesting, however, is what it leaves out. In a surprising snub to the AI industry and Silicon Valley, the AI Action Plan entirely fails to address whether the use of copyrighted works as AI training data constitutes “fair use.”

Analysis

On July 23, the Trump Administration unveiled its long-awaited AI Action Plan, setting forth the administration’s agenda for American AI. This plan emphasizes deregulation and wide adoption of AI. At first glance, the Plan seems to be a veritable “wish list” for the AI industry, which has heavily lobbied the administration for favorable treatment. 

Yet, when viewed more carefully, the AI Action Plan is a mixed bag. Much of the Plan does, indeed, cater to the wishes of Silicon Valley, which has sought the administration’s protection from what it characterizes as excessively burdensome regulations and safety standards. Other portions seem tailored to conservative concerns about Big Tech’s purported bias against right-wing voices. And a substantial portion consists of largely unremarkable and uncontroversial provisions for AI literacy initiatives. Most interesting, however, is that the Plan entirely neglects one of the AI industry’s biggest concerns: lawsuits by intellectual property owners over the non-consensual use of their IP as training data by major AI firms. It also wholly neglects public concerns about AI safety.

Overall Organization

At a high level, the AI Action Plan is organized around three pillars:

  1. “Accelerate AI Innovation”
  2. “Build American AI Infrastructure”
  3. “Lead in International AI Diplomacy and Security”

Of these, the most space is devoted to Pillar I, which encompasses a broad range of subjects generally pertaining to deregulation, but also includes references to advancing AI research. A few mentions of limited, generic “safety” policies are also interspersed throughout this section. Pillar II deals primarily with upgrading the power grid to support AI development, as well as secure AI design and efficient federal incident response. Finally, Pillar III focuses primarily on expanding the influence of American technology and countering Chinese AI advances.

A Gift to Big Tech?

A substantial portion of the Plan is devoted to addressing the concerns of major AI developers. The very first section of the AI Action Plan is entitled “Remove Red Tape and Onerous Regulation,” and essentially instructs various arms of the federal government to review existing regulations and FTC orders and rescind those perceived as hindering the rapid development of AI. This, by far, is the AI Action Plan’s biggest gift to the tech industry. However, as discussed below, specifics are few and far between, potentially stoking significant uncertainty among technology firms.

State AI Regulation

Particularly noteworthy is the Plan’s stance on state-level AI regulation. Tech industry lobbyists have been pushing the White House to take federal action shielding AI developers from the inconvenience of having to comply with state-level regulations. Indeed, this was one of the primary points of OpenAI’s policy proposal, submitted in March during the Plan’s public comment period. Seemingly in response, the Trump Administration had previously endorsed a proposed moratorium on the enforcement of state-level AI regulation. However, this controversial provision encountered substantial resistance from Republican members of Congress and was ultimately removed from the One Big Beautiful Bill Act.

The AI Action Plan makes clear that the administration has not changed its stance on this issue, though it seems unlikely to attempt a repeat of the strategy pursued with the One Big Beautiful Bill Act. Instead, the Plan instructs the Office of Management and Budget to analyze AI-related discretionary funding, and:

“…work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

The first element of this is a backhanded effort to incentivize states toward less restrictive AI regulation, while the second seems to be a not-so-quiet exploration of a potential future legal strategy. The effectiveness of this move is questionable, however. Though clearly targeted at blue states such as California that are pursuing more aggressive AI regulation, it disregards the fact that AI investment requires large-scale, existing infrastructure. California remains America’s primary hub for tech innovation, and with so much server infrastructure already concentrated in-state, it is difficult to imagine that changes in federal discretionary funding will appreciably shift the landscape.

Additionally, the Plan presents a seemingly softened stance on state AI regulation, at least rhetorically. It explicitly notes that the aforementioned federal funding changes “should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” This, indeed, seems to be a concession to the states’ rights and child protection concerns that sank Congress’ attempted AI regulation moratorium. 

The target of the AI regulation language may, thus, instead be red-state skeptics. At present, 45 of the 50 states have at least one law on the books dealing with artificial intelligence. Most frequently, these laws ban AI-generated child sexual abuse material or extend existing “revenge porn” laws to cover AI-generated imagery. Rooted in the GOP base’s distrust of Big Tech, such laws were likely a major reason the congressional moratorium failed, and the administration’s softened rhetoric on state-level regulation may reflect that lesson.

A Role for the FCC?

The “deregulation” section of the Plan also contains a rather bizarre instruction for the FCC, which is directed to:

“evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.”

This, to be clear, is absolute nonsense. Interpreting the FCC as somehow having power over artificial intelligence would, in the words of Cody Venzke, senior policy counsel at the ACLU, “[extend] the Communications Act beyond all recognition.” Reading the Communications Act this way would go well beyond AI and turn the FCC into a regulator of the Internet in general. (The Communications Act was, after all, enacted in 1934, at a time when television was not yet widespread.)

Is This Really Good For Big Tech?

Many have criticized the AI Action Plan as a sellout to Big Tech. Yet, there is a compelling argument that the Trump Administration has unintentionally thrown the tech industry a very unwanted curveball in the form of regulatory uncertainty. Key deregulation provisions of the Plan, such as the directive to reallocate AI-related discretionary spending to states with less restrictive AI regulations, are remarkably unclear. What exactly qualifies as “not unduly restrictive” is entirely open to interpretation. This can, of course, be read as a deliberate move, with the vagueness meant to allow Trump to bestow AI-related discretionary spending on favored red-state governors (even if the infrastructure to support such spending does not exist). But the true reason could also be inept drafting, or an attempt to balance GOP anti-elite sentiment against a competing desire to cater to Big Tech. What is clear, however, is that the Plan’s deregulation provisions give AI firms and policymakers little certainty about the future course of federal regulation.

“Unbiased” AI

The AI Action Plan includes a brief, but significant, nod to the GOP base’s distrust of Big Tech. On Page 4, it states:

“Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”

The intent behind this language is clear – it aims to address the perceived left-leaning bias of Silicon Valley. What this might mean in practical terms, however, is anyone’s guess. If it aims to ensure that AI developers parrot the administration’s talking points, the odds of it shaping the overall market are low, as market forces incentivize developers to produce uncontroversial, generally neutral AI models. (As Elon Musk is learning, this is also a tendency supported by the fundamental nature of LLMs.) Regardless, this clause does stoke fears of politicizing AI adoption in order to further the administration’s agenda.

The Uncontroversial Side

It should be noted, however, that not everything in the AI Action Plan is particularly partisan or controversial. The Plan includes a raft of relatively mundane policies aimed at boosting AI literacy; indeed, these make up the bulk of its contents, at least in terms of page count. Also included under this category are a variety of initiatives for streamlining or advancing AI procurement across the federal government, as well as for promoting “AI centers of excellence” around the country as a means of accelerating the technology’s adoption in various sectors of the government and economy.

AI Diplomacy

Much of the Plan’s length is devoted to “AI Diplomacy,” characterized as the idea that American AI should be supported in order to become the global standard and thus further democratic values. One of the more significant aspects of this is a directive to lift export restrictions on AI technology that target many US allies. The central emphasis of this part of the Plan, however, is countering Chinese advances, and the Plan does little to engage with foreign AI regulation or with building global normative standards. This, in some respects, could be a significant step back from Vice President JD Vance’s previously aggressive rhetoric against foreign AI regulation.

What the Plan Leaves Out

The Elephant in the Room – IP Law

However, one cannot fully assess the AI Action Plan without addressing what it fails to discuss. The most significant omission is, by far, the ongoing clash between AI developers and intellectual property holders.

Protection from IP lawsuits has been a major “ask” for the AI industry. In March, OpenAI published its proposals for the AI Action Plan, which included language seeking to immunize AI developers from intellectual property lawsuits. OpenAI framed this as a necessary move to ensure continued American AI innovation in the face of Chinese advances:

“Applying the fair use doctrine to AI is not only a matter of American competitiveness —it’s a matter of national security.”

OpenAI’s desire for federal protection comes as the generative AI industry finds itself fighting a growing number of lawsuits from IP holders. The company is currently defending itself against a massive, slow-moving lawsuit filed by a coalition of publishers led by the New York Times. And, in June, Hollywood studio giants Disney and Universal Pictures joined forces to sue AI image generator Midjourney. The legal question at the core of these suits is whether the use of copyrighted material as training data constitutes “fair use,” and is thus shielded from infringement claims.

To date, there is no sign that this legal question is settled. In February, a federal judge found in Thomson Reuters v. Ross Intelligence that the use of copyrighted legal headnotes as training data for non-generative AI did not constitute fair use, granting partial summary judgment for the plaintiff IP holder. But, in June, a different judge in California’s Northern District granted summary judgment for the defendant on this question in Bartz v. Anthropic, finding that it was “fair use” for an AI company to use legally purchased books as AI training data.

SIDE NOTE:

It remains to be seen whether Bartz will prove persuasive. The judge in Bartz, Senior District Judge William Alsup (a Clinton appointee), seemed disproportionately impressed with the capabilities of the defendant’s AI technology. As Wiggin & Dana’s analysis of the case notes:

Judge Alsup seems to have been greatly influenced by the transformative aspect of the use, characterizing the technology as “exceedingly transformative,” “spectacularly so,” “quintessentially transformative,” and “among the most transformative many of us will see in our lifetimes.”

This, frankly, seems like a less than objective analysis, and it is questionable whether other judges will be similarly impressed. Indeed, at least one other judge from the same district and the same general partisan alignment as Alsup, Judge Vince Chhabria (an Obama appointee), rather pointedly criticized this analysis in his own opinion in a similar case, Kadrey v. Meta.

The fact that the White House declined to endorse the AI industry’s position that training on copyrighted works falls within the “fair use” protection is highly significant. Exactly what it signifies, however, is entirely unclear. On the whole, the AI Action Plan is highly friendly to the AI industry and caters extensively to its wishes. Yet, on this major point, it seems to have snubbed the industry completely.

It is possible that, on this issue, major IP holders such as Hollywood studios exerted enough counterpressure via lobbying to derail any effort by the Trump White House to involve itself. The entertainment lobby does have significant political clout, with over $74.7 million spent on lobbying and Super PAC donations in 2024 alone. So it is not unreasonable to suppose that the likes of Disney, Universal, and Warner Brothers were able to fight the AI industry to a draw.

Whatever the reason, this represents a significant “miss” for AI companies, which have, in most other respects, received their “wish list” from the AI Action Plan.

The Other Elephant in the Room – Safeguards

The AI Action Plan also leaves uncertain a second significant policy area: AI safeguards, which are almost entirely unaddressed. The sole mention of AI safety occurs in the field of national security and defense, where the Plan instructs that AI systems should utilize “safe-by-design” principles. Even there, the concern is couched in the idea that a foreign adversary might attempt to manipulate the technology; the possibility that the technology itself could fail goes practically unaddressed. The Plan hints at such failures only indirectly, in provisions directing federal agencies to develop “AI incident response” capabilities. But the nature of such incidents is left deliberately ambiguous, and much of the section’s language speaks to cybersecurity incidents rather than technical failures.

One of the few AI-related risks the Plan specifically mentions is the threat of synthetic media being used as spurious evidence in the legal system. Even here, though, the discussion is remarkably toothless, stopping at instructing the National Institute of Standards and Technology (NIST) to “consider” expanding its deepfake evaluation program into a “voluntary” watermarking system for AI-generated media. Buried at the end of the discussion of Pillar I, this seems to have been largely an afterthought for the Plan’s drafters, and it does little to address AI safety concerns.

Conclusions

The Trump Administration’s AI Action Plan advances relentlessly pro-AI, industry-friendly rhetoric, but without a great deal of substance behind it. While it promises broad deregulation and an emphasis on innovation, it remains wholly unclear how either will be achieved. Meanwhile, the Plan fails to address significant areas of AI policy, leaving them in regulatory limbo.


Matthew Sparks was a Justice Fellow at the Tech Institute 2023-2024.