{"id":8828,"date":"2025-11-11T18:37:51","date_gmt":"2025-11-11T18:37:37","guid":{"rendered":"https:\/\/www.law.georgetown.edu\/tech-institute\/?page_id=8828"},"modified":"2025-11-13T15:39:20","modified_gmt":"2025-11-13T15:39:20","slug":"trump-administration-unveils-ai-action-plan","status":"publish","type":"page","link":"https:\/\/www.law.georgetown.edu\/tech-institute\/research-insights\/insights\/trump-administration-unveils-ai-action-plan\/","title":{"rendered":"Trump Administration Unveils \u201cAI Action Plan\u201d"},"content":{"rendered":"<p><span style=\"font-weight: 400\">On July 23, the Trump Administration rolled out its AI Action Plan, promising broad loose regulation of AI. <\/span><span style=\"font-weight: 400\">However, much of the actual regulatory policy the plan will eventually advance remains unclear.<\/span><\/p>\n<p><b><i>Bottom Line:\u00a0<\/i><\/b><\/p>\n<p><i><span style=\"font-weight: 400\">The Trump Administration\u2019s \u201cAI Action Plan\u201d is a surprisingly mixed bag. At its core is a vague, though firm, commitment to deregulation, something for which the AI industry has heavily lobbied.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400\">Most interesting, however, is what it leaves out. In a surprising snub to the AI industry and Silicon Valley, the AI Action Plan entirely fails to mention the subject of whether training data count as \u201cfair use.\u201d\u00a0<\/span><\/i><\/p>\n<h2><b>Analysis<\/b><\/h2>\n<p><b>On July 23, the Trump Administration unveiled its long-awaited <\/b><a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/07\/Americas-AI-Action-Plan.pdf\"><b>AI Action Plan<\/b><b>,<\/b><\/a><b> setting forth the administration\u2019s agenda for American AI. <\/b><span style=\"font-weight: 400\">This plan emphasizes deregulation and wide adoption of AI. 
At first glance, the Plan seems to be a veritable \u201cwish list\u201d for the AI industry, which has heavily lobbied the administration for favorable treatment.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Yet, when viewed more carefully, the AI Action Plan is a mixed bag. Much of the Plan does, indeed, cater to the wishes of Silicon Valley, which has sought the administration\u2019s protection from what it characterizes as excessively burdensome regulations and safety standards. Other portions seem tailored to conservative concerns about Big Tech\u2019s purported bias against right-wing voices. And a substantial portion consists of largely unremarkable and uncontroversial provisions for AI literacy initiatives. Most interesting, however, is the fact that it entirely neglects one of the AI industry\u2019s biggest concerns &#8211; lawsuits by intellectual property owners over the non-consensual use of their IP as training data by major AI firms. It also wholly ignores public concerns about AI safety.<\/span><\/p>\n<h3><strong>Overall Organization<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">At a high level, the AI Action Plan is organized around three pillars:<\/span><\/p>\n<ol>\n<li><b>\u201cAccelerate AI Innovation\u201d<\/b><\/li>\n<li><b>\u201cBuild American AI Infrastructure\u201d<\/b><\/li>\n<li><b>\u201cLead in International AI Diplomacy and Security\u201d<\/b><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">Of these, the most space is devoted to Pillar I, which encompasses a broad range of subjects generally pertaining to deregulation, but also including references to advancing AI research. A few mentions of limited, generic &#8220;safety&#8221; policies are also interspersed throughout this section. Pillar II deals primarily with upgrading the power grid to support AI development, as well as secure AI design and efficient federal incident response. 
Finally, Pillar III focuses primarily on expanding the influence of American technology and countering Chinese AI advances.\u00a0<\/span><\/p>\n<h3><strong>A Gift to Big Tech?<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">A substantial portion of the Plan is devoted to addressing the concerns of major AI developers. The very first section of the AI Action Plan is entitled \u201cRemove Red Tape and Onerous Regulation,\u201d and essentially instructs various arms of the federal government to launch reviews of existing regulations and FTC orders in order to<\/span><b> rescind those that are perceived as hindering the rapid development of AI. <\/b><span style=\"font-weight: 400\">This, by far, is the AI Action Plan\u2019s biggest gift to the tech industry. However, as will be discussed later on, specifics are few and far between, potentially stoking significant uncertainty among technology firms.<\/span><\/p>\n<h3><strong>State AI Regulation<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">Particularly noteworthy is the Plan\u2019s stance on state-level AI regulation. Tech industry lobbyists have been pushing for the White House to take federal action to protect AI developers from the inconvenience of having to comply with state-level regulations. Indeed, this was one of the primary points of OpenAI\u2019s policy proposal, submitted in March as part of the AI Action Plan\u2019s drafting process. Seemingly in response, the Trump Administration had previously endorsed a proposed moratorium on the enforcement of state-level AI regulation. However, this controversial provision encountered substantial resistance from Republican members of Congress, and was ultimately removed from the One Big Beautiful Bill Act.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The AI Action Plan makes clear that, while the administration has not changed its stance on this issue, it seems unlikely to attempt a repeat of the strategy pursued with the One Big Beautiful Bill Act. 
Instead, the Plan instructs the Office of Management and Budget to analyze AI discretionary funding, and:<\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400\">\u201c\u2026work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they <\/span><b>consider a state\u2019s AI regulatory climate when making funding decisions and limit funding if the state\u2019s AI regulatory regimes may hinder the effectiveness<\/b><span style=\"font-weight: 400\"> of that funding or award.\u201d<\/span><\/p><\/blockquote>\n<p><span style=\"font-weight: 400\">The first element of this is a backhanded effort to incentivize states to adopt less restrictive AI regulation, while the second seems to be a not-so-quiet exploration of a potential future legal strategy. However, the effectiveness of this move is questionable &#8211; though this action is clearly targeted at blue states such as California, which are pursuing more aggressive AI regulation, it disregards the fact that AI investment requires large-scale, existing infrastructure. California remains America\u2019s primary hub for tech innovation, and with so much server infrastructure already concentrated in-state, it is difficult to imagine that changes in federal discretionary funding will appreciably shift the landscape.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Additionally, the Plan presents a seemingly softened stance on state AI regulation, at least rhetorically. 
It explicitly notes that the aforementioned federal funding changes <\/span><b>\u201cshould also not interfere with states\u2019 rights to pass prudent laws that are not unduly restrictive to innovation.\u201d <\/b><span style=\"font-weight: 400\">This, indeed, seems to be a concession to the states\u2019 rights and child protection concerns that sank Congress\u2019 attempted AI regulation moratorium.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">The target of the AI regulation language may, thus, instead be red-state skeptics. At present, 45 of the 50 states have at least one law on the books dealing with artificial intelligence. Most frequently, these laws ban AI-generated child sexual abuse material, or extend existing \u201crevenge porn\u201d laws to cover AI-generated imagery. Support for these laws, rooted in the GOP base\u2019s distrust of Big Tech, was likely a major part of what sank the moratorium, and the seeming moderation of the administration\u2019s rhetoric on state-level regulation may stem from this.<\/span><\/p>\n<h3><strong>A Role for the FCC?<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">The \u201cderegulation\u201d section of the plan also contains a rather bizarre instruction for the FCC, which is directed to: <\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400\">\u201cevaluate whether state AI regulations interfere with the agency\u2019s ability to carry out its obligations and authorities under the Communications Act of 1934.\u201d<\/span><\/p><\/blockquote>\n<p><span style=\"font-weight: 400\">This, to be clear, is absolute nonsense. 
Interpreting the FCC as somehow having power over artificial intelligence would, in the words of Cody Venzke, senior policy counsel at the ACLU, <\/span><a href=\"https:\/\/www.engadget.com\/ai\/everyones-a-loser-in-trumps-ai-action-plan-160023247.html\"><span style=\"font-weight: 400\">\u201c[extend] the Communications Act beyond all recognition.\u201d<\/span><\/a><span style=\"font-weight: 400\"> To interpret the Communications Act in this way would go well beyond AI, and turn the FCC into a regulator of the Internet in general. (To be clear, the Communications Act was enacted in 1934, at a time when <\/span><i><span style=\"font-weight: 400\">television<\/span><\/i><span style=\"font-weight: 400\"> was not yet widespread.)<\/span><\/p>\n<h3><strong>Is This <i>Really<\/i> Good For Big Tech?<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">Many have criticized the AI Action Plan as a sellout to Big Tech. Yet, there is a compelling argument to be made that the Trump Administration has unintentionally thrown the tech industry a <\/span><i><span style=\"font-weight: 400\">very unwanted <\/span><\/i><span style=\"font-weight: 400\">curveball in the form of regulatory uncertainty. Key deregulation provisions of the Plan, such as the directive to reallocate AI-related discretionary spending to states with less restrictive AI regulations, are incredibly unclear. What exactly <\/span><i><span style=\"font-weight: 400\">qualifies<\/span><\/i><span style=\"font-weight: 400\"> as \u201cnot unduly restrictive\u201d is entirely up to interpretation. 
This, of course, can be read as a deliberate move, with the vagueness meant to allow Trump to bestow AI-related discretionary spending on favored red-state governors (even if the infrastructure to support such spending does not exist). But the true reason could also very well be inept drafting, or a desire to balance GOP anti-elite sentiment against a competing impulse to cater to Big Tech. What is clear, however, is that the Plan\u2019s deregulation provisions, as structured, provide AI firms and policymakers with little certainty about the future course of federal regulation.<\/span><\/p>\n<h3><strong>&#8220;Unbiased&#8221; AI<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">The AI Action Plan includes a brief, but significant, nod to the GOP base&#8217;s distrust of Big Tech. On Page 4, it states:<\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400\">Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.\u00a0<\/span><\/p><\/blockquote>\n<p><span style=\"font-weight: 400\">The intent behind this language is clear &#8211; it aims to address the perceived left-leaning bias of Silicon Valley. What this might mean in practical terms, however, is anyone&#8217;s guess. If it aims to ensure that AI developers parrot the administration&#8217;s talking points, the odds of it shaping the overall market are low, as market forces incentivize developers to produce uncontroversial, generally neutral AI models. (As Elon Musk is learning, this is also a tendency supported by the fundamental nature of LLMs.) 
Regardless, this clause does stoke fears of politicizing AI adoption in order to further the administration&#8217;s agenda.<\/span><\/p>\n<h3><strong>The Uncontroversial Side<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">It should be noted, however, that not everything in the AI Action Plan is particularly partisan or controversial. Included in the plan is a raft of relatively mundane policies aimed at boosting AI literacy. This, indeed, is the bulk of the plan\u2019s contents, at least in terms of page count. Also included under this category is a variety of initiatives for streamlining or advancing AI procurement in different sectors of the government, as well as promoting \u201cAI centers of excellence\u201d across the country as a means of accelerating the technology\u2019s adoption throughout the government and economy.\u00a0<\/span><\/p>\n<h3><strong>AI Diplomacy<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">Much of the Plan\u2019s length is devoted to \u201cAI Diplomacy,\u201d characterized as the idea that American AI should be supported in order to become the global standard, and thus further democratic values. One of the more significant aspects of this is a directive to lift export restrictions on AI technology destined for many US allies. The central emphasis of this part of the Plan, however, is countering Chinese advances, and the Plan does little to engage with foreign AI regulation or to build global normative standards. 
This, in some respects, could be a significant step back from <\/span><a href=\"https:\/\/www.nytimes.com\/2025\/02\/11\/world\/europe\/vance-speech-paris-ai-summit.html\"><span style=\"font-weight: 400\">Vice President JD Vance\u2019s previously aggressive rhetoric against foreign AI regulation.<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<h2><strong>What the Plan Leaves Out<\/strong><\/h2>\n<h3><strong>The Elephant in the Room &#8211; IP Law<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">One cannot discuss the AI Action Plan without addressing what it <\/span><i><span style=\"font-weight: 400\">fails to discuss.<\/span><\/i><span style=\"font-weight: 400\"> The most significant point on this front is, by far, the ongoing clash between AI developers and intellectual property holders.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Protection from IP lawsuits has been a major \u201cask\u201d for the AI industry. In March, <\/span><a href=\"https:\/\/openai.com\/global-affairs\/openai-proposals-for-the-us-ai-action-plan\/\"><span style=\"font-weight: 400\">OpenAI published its proposals<\/span><\/a><span style=\"font-weight: 400\"> for the AI Action Plan, which included language that would seek to immunize AI developers from intellectual property lawsuits. 
OpenAI <\/span><a href=\"https:\/\/cdn.openai.com\/global-affairs\/ostp-rfi\/ec680b75-d539-4653-b297-8bcf6e5f7686\/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf\"><span style=\"font-weight: 400\">framed this<\/span><\/a><span style=\"font-weight: 400\"> as a necessary move to ensure continued American AI innovation in the face of Chinese advances:<\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400\">\u201cApplying the fair use doctrine to AI is not only a matter of American competitiveness \u2014it\u2019s a matter of national security.\u201d<\/span><\/p><\/blockquote>\n<p><span style=\"font-weight: 400\">OpenAI\u2019s desire for federal protection comes as the generative AI industry finds itself fighting a growing number of lawsuits from IP holders. The company is currently defending itself against a massive, slow-moving lawsuit filed by a coalition of publishers led by the <\/span><i><span style=\"font-weight: 400\">New York Times<\/span><\/i><span style=\"font-weight: 400\">. And, in June, Hollywood studio giants Disney and Universal Pictures joined forces to bring a lawsuit against AI image generator Midjourney. The legal question at the core of these suits is whether the use of copyrighted material as training data constitutes \u201cfair use,\u201d and is thus exempt from copyright liability.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">To date, there is no sign that this legal question is settled. On the one hand, in February, a federal judge found in <\/span><i><span style=\"font-weight: 400\">Thomson Reuters v. 
Ross Intelligence<\/span><\/i><span style=\"font-weight: 400\"> that the use of copyrighted legal headnotes as training data for <\/span><i><span style=\"font-weight: 400\">non-<\/span><\/i><span style=\"font-weight: 400\">generative AI <\/span><a href=\"https:\/\/www.jdsupra.com\/legalnews\/fair-use-falls-short-judge-bibas-5280387\/\"><span style=\"font-weight: 400\">did not constitute fair use,<\/span><\/a><span style=\"font-weight: 400\"> granting summary judgment for the plaintiff IP holder. But, in June, a different judge in California\u2019s Northern District granted a summary judgment motion for the defendant in <\/span><i><span style=\"font-weight: 400\">Bartz v. Anthropic<\/span><\/i><span style=\"font-weight: 400\">, finding that it was \u201cfair use\u201d for an AI company to use <\/span><i><span style=\"font-weight: 400\">legally purchased <\/span><\/i><span style=\"font-weight: 400\">books as AI training data.\u00a0<\/span><\/p>\n<p><b>SIDE NOTE:<\/b><\/p>\n<p><span style=\"font-weight: 400\">It remains to be seen whether <\/span><i><span style=\"font-weight: 400\">Bartz<\/span><\/i><span style=\"font-weight: 400\"> will prove persuasive or not. The judge in <\/span><i><span style=\"font-weight: 400\">Bartz<\/span><\/i><span style=\"font-weight: 400\">, Senior District Judge William Alsup (a Clinton appointee), seemed somewhat disproportionately impressed with the capabilities of the defendant\u2019s AI technologies. 
As <\/span><a href=\"https:\/\/www.wiggin.com\/publication\/bartz-v-anthropic-first-court-decision-on-fair-use-defense-in-llm-training\/\"><span style=\"font-weight: 400\">Wiggin &amp; Dana\u2019s analysis of the case<\/span><\/a><span style=\"font-weight: 400\"> notes: <\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400\">Judge Alsup seems to have been greatly influenced by the transformative aspect of the use, characterizing the technology as \u201cexceedingly transformative,\u201d \u201cspectacularly so,\u201d \u201cquintessentially transformative,\u201d and \u201camong the most transformative many of us will see in our lifetimes.\u201d<\/span><\/p><\/blockquote>\n<p><span style=\"font-weight: 400\">This, frankly, seems like a less than objective analysis, and it is questionable whether other judges will be similarly impressed. Indeed, at least one other judge from the same circuit and same general partisan alignment as Alsup, Judge Vince Chhabria (an Obama appointee), rather clearly criticized this analysis in his own opinion on a similar case.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The fact that the White House did <\/span><i><span style=\"font-weight: 400\">not<\/span><\/i><span style=\"font-weight: 400\"> offer support to the AI industry\u2019s position that the use of copyrighted training data falls within the \u201cfair use\u201d protection is highly significant. However, <\/span><i><span style=\"font-weight: 400\">exactly what it signifies<\/span><\/i><span style=\"font-weight: 400\"> is entirely unclear. On the whole, the AI Action Plan is highly friendly towards the AI industry\u2019s wishes, and caters to them extensively. 
Yet, on this major point, it seems to have completely snubbed them.<\/span><\/p>\n<p><span style=\"font-weight: 400\">It is possible that, on this issue, major IP holders such as Hollywood studios exerted enough counterpressure via lobbying to derail any effort by the Trump White House to involve itself. The entertainment lobby does have a significant amount of political clout, with <\/span><a href=\"https:\/\/www.opensecrets.org\/industries\/lobbying?cycle=2024&amp;ind=B02\"><span style=\"font-weight: 400\">over $74.7M spent on lobbying and Super PAC donations in 2024 alone.<\/span><\/a><span style=\"font-weight: 400\"> So, it is not unreasonable to suppose that, on this issue, the likes of Disney, Universal, and Warner Brothers were perhaps able to fight the AI industry to a draw.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Whatever the reason may be, however, this represents a significant \u201cmiss\u201d for AI companies that have, in most other respects, received most of their \u201cwish list\u201d from the AI Action Plan.\u00a0<\/span><\/p>\n<h3><strong>The Other Elephant in the Room &#8211; Safeguards<\/strong><\/h3>\n<p><span style=\"font-weight: 400\">The AI Action Plan also leaves uncertain a second significant policy area &#8211; that of AI safeguards, which are almost entirely unaddressed. The sole mention of AI safety occurs in the field of national security and defense, where the Plan instructs that AI systems should utilize \u201csafe-by-design\u201d principles. However, this is largely couched in the idea that a <\/span><i><span style=\"font-weight: 400\">foreign adversary<\/span><\/i><span style=\"font-weight: 400\"> might attempt to manipulate the technology &#8211; the concept that the technology itself could fail is practically left unaddressed. 
This is <\/span><i><span style=\"font-weight: 400\">indirectly<\/span><\/i><span style=\"font-weight: 400\"> hinted at in provisions directing federal agencies to develop \u201cAI incident response\u201d capabilities. But the nature of such incidents is left deliberately ambiguous, and the section contains a significant amount of language speaking to <\/span><i><span style=\"font-weight: 400\">cybersecurity<\/span><\/i><span style=\"font-weight: 400\"> incidents, rather than technical failures.<\/span><\/p>\n<p><span style=\"font-weight: 400\">One of the few AI-related risks that <\/span><i><span style=\"font-weight: 400\">is <\/span><\/i><span style=\"font-weight: 400\">specifically mentioned is the threat of synthetic media being used as spurious evidence in the legal system. However, the Action Plan\u2019s discussion of this topic is remarkably toothless, stopping at instructing the National Institute of Standards and Technology (NIST) to <\/span><i><span style=\"font-weight: 400\">\u201cconsider\u201d <\/span><\/i><span style=\"font-weight: 400\">expanding its deepfake evaluation program into a <\/span><i><span style=\"font-weight: 400\">\u201cvoluntary\u201d <\/span><\/i><span style=\"font-weight: 400\">watermarking system for AI-generated media. Buried at the end of the discussion of Pillar I, this seems to have been largely an afterthought for the Plan\u2019s drafters, and does little to address AI safety concerns.<\/span><\/p>\n<h2><strong>Conclusions<\/strong><\/h2>\n<p><span style=\"font-weight: 400\">The Trump Administration&#8217;s AI Action Plan advances relentlessly pro-AI, industry-friendly rhetoric, but without a great deal of substance behind it. While it promises broad deregulation and an emphasis on innovation, it remains wholly unclear <\/span><i><span style=\"font-weight: 400\">how<\/span><\/i><span style=\"font-weight: 400\"> this will be achieved. 
Additionally, the Plan fails to address significant sectors of AI policy, leaving them in regulatory limbo.<\/span><\/p>\n<p>Matthew Sparks was a Justice Fellow at the Tech Institute, 2023-2024.<\/p>\n","protected":false}}