{"id":8819,"date":"2025-11-11T18:23:51","date_gmt":"2025-11-11T18:23:11","guid":{"rendered":"https:\/\/www.law.georgetown.edu\/tech-institute\/?page_id=8819"},"modified":"2025-11-13T15:39:35","modified_gmt":"2025-11-13T15:39:35","slug":"growing-signs-of-ai-development-slowdown","status":"publish","type":"page","link":"https:\/\/www.law.georgetown.edu\/tech-institute\/research-insights\/insights\/growing-signs-of-ai-development-slowdown\/","title":{"rendered":"Growing Signs of AI Development Slowdown"},"content":{"rendered":"<p>Multiple reports have emerged in the last week stating that conventional \u201cupscaling\u201d &#8211; that is, the use of increasingly large datasets in an effort to produce better AI &#8211; is losing its effectiveness, and that concern has begun to mount within the AI industry. OpenAI insiders report that GPT-4\u2019s replacement, codenamed \u201cOrion,\u201d is not reliably performing better than GPT-4 itself, even as prominent AI researchers warn that upscaling is reaching its limit.<\/p>\n<p>This week, I\u2019ll be examining this story in greater depth, along with the history of the AI field.<\/p>\n<p><i><span style=\"font-weight: 400\"><strong>Bottom Line:<\/strong> Evidence is mounting that the current era of explosive, exponential AI growth may be coming to an end, as upscaling of datasets loses effectiveness as a means to produce better AI. 
Though this has yet to be reflected on Wall Street, this news may bring a potential \u201cAI bubble\u201d closer to deflation.<\/span><\/i><\/p>\n<p>&nbsp;<\/p>\n<h2>Analysis: Growing Signs of AI Development Slowdown<\/h2>\n<p>Several weeks ago, Reuters reported that top AI scientists, as well as sources within OpenAI, have <a href=\"https:\/\/www.reuters.com\/technology\/artificial-intelligence\/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11\/\">stated that the previously rapid progression of AI development is beginning to hit serious structural roadblocks.<\/a><span style=\"font-weight: 400\"> Subsequently, other industry sources have made similar reports. Tech outlet <\/span><i><span style=\"font-weight: 400\">The Information<\/span><\/i><span style=\"font-weight: 400\"> reported that, per OpenAI employees, the company\u2019s next-gen successor to the now two-year-old GPT-4, codenamed Orion, <\/span><a href=\"https:\/\/www.theinformation.com\/articles\/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows\"><span style=\"font-weight: 400\">is not reliably performing better than its predecessor at certain tasks, and that the rate of model improvement has decreased from previous iterations.<\/span><\/a><i><span style=\"font-weight: 400\">\u00a0Bloomberg<\/span><\/i><span style=\"font-weight: 400\"> has additionally reported that <\/span><a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-11-13\/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai\"><span style=\"font-weight: 400\">similar slowdowns are occurring with the next iteration of Google\u2019s \u201cGemini\u201d and Anthropic\u2019s \u201cClaude\u201d AI models,<\/span><\/a><span style=\"font-weight: 400\"> suggesting that<\/span><b> whatever is happening is not unique to any one LLM. <\/b><a href=\"https:\/\/www.forbes.com\/sites\/johnwerner\/2024\/11\/15\/the-big-ai-slowdown\/\"><span style=\"font-weight: 400\">Concerns have begun to spread within the industry,<\/span> <\/a><span style=\"font-weight: 400\">and the tech world more broadly, that a significant slowdown, or even bubble-burst, may be in the cards.<\/span><\/p>\n<h3><b>Tech Execs Remain Optimistic (At Least, Publicly)<\/b><\/h3>\n<p><span style=\"font-weight: 400\">In the face of these potential warning signs, Silicon Valley leadership is putting on a decidedly unconcerned face. 
<\/span><a href=\"https:\/\/www.axios.com\/2024\/11\/13\/ai-scaling-chatgpt-openai-plateau\"><span style=\"font-weight: 400\">OpenAI CEO Sam Altman remains adamant that bigger models will continue to be better, as stated in his manifesto \u201cThe Intelligence Age.\u201d<\/span><\/a><span style=\"font-weight: 400\"> Guests and speakers at the Web Summit last week opined that AI would <\/span><a href=\"https:\/\/fortune.com\/2024\/11\/19\/web-summit-ai-slowdown-no-sign-eye-on-ai\/\"><span style=\"font-weight: 400\">continue to grow as it had previously<\/span><\/a><span style=\"font-weight: 400\">, while former Google CEO Eric Schmidt asserted that there was <\/span><a href=\"https:\/\/www.businessinsider.com\/eric-schmidt-google-ceo-ai-scaling-laws-openai-slowdown-2024-11\"><span style=\"font-weight: 400\">\u201cno sign\u201d that data scaling principles had yet begun to fail.<\/span><\/a><span style=\"font-weight: 400\"> But, in private, OpenAI has<\/span><a href=\"https:\/\/techcrunch.com\/2024\/11\/09\/openai-reportedly-developing-new-strategies-to-deal-with-ai-improvement-slowdown\/\"><span style=\"font-weight: 400\"> formed an internal task force to try to find a way forward in the face of dwindling training data.\u00a0<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400\">All of this is compounded by a growing problem &#8211; AI is failing to prove as revolutionary and profit-generating as many had hoped. <\/span><a href=\"https:\/\/www.capitalbrief.com\/newsletter\/banks-hit-an-ai-plateau-7999d369-3f5f-402a-88e7-028308aabf42\/preview\/\"><span style=\"font-weight: 400\">Banks are increasingly finding that AI is failing to drive profit margins,<\/span><\/a><span style=\"font-weight: 400\"> while IT firms find that it is <\/span><a href=\"https:\/\/www.itpro.com\/technology\/artificial-intelligence\/generative-ai-was-supposed-to-make-life-easier-for-enterprises-instead-it-s-becoming-a-tax-with-no-clear-revenue-gains\"><span style=\"font-weight: 400\">a \u201ctax with no clear gains.\u201d<\/span><\/a><span style=\"font-weight: 400\"> Despite this, and reports that firms are struggling to find value in enterprise AI, <\/span><a href=\"https:\/\/venturebeat.com\/ai\/enterprise-ai-adoption-surges-as-organizations-shift-from-experimentation-to-implementation\/\"><span style=\"font-weight: 400\">corporate adoption of AI continues at a breakneck 
pace<\/span><\/a><span style=\"font-weight: 400\">, with managers and executives claiming a high level of confidence in AI\u2019s value. All the while, returns on investment for AI developers such as Microsoft <\/span><a href=\"https:\/\/futurism.com\/the-byte\/microsoft-losing-money-ai\"><span style=\"font-weight: 400\">remain distant.<\/span><\/a><\/p>\n<h3><b>Lack of Unused Training Data<\/b><\/h3>\n<p><span style=\"font-weight: 400\">One of the biggest challenges facing AI developers is the lack of unused, human-generated training data. Evidence is mounting that, <\/span><a href=\"https:\/\/www.nature.com\/articles\/d41586-024-02355-z\"><span style=\"font-weight: 400\">if allowed to consume AI-generated content, AI models\u2019 output will begin to degrade until the model eventually collapses outright, producing unintelligible content as the model loses any correspondence to reality.<\/span><\/a><span style=\"font-weight: 400\"> The rapid expansion of AI up to this point has been, in large part, driven by the application of the same types of LLM technology to progressively larger human-generated datasets. However, to put it simply, AI developers are running out of data that has not already been used. 
Human creative output is not enough to actually sustain the needs of the AI industry, posing a significant problem going forward.<\/span><\/p>\n<h3><b>We\u2019ve Been Here Before: Concerns of a \u201cThird AI Winter\u201d<\/b><\/h3>\n<p><span style=\"font-weight: 400\">AI has a surprisingly long history, and its <\/span><b>historical development is commonly divided into a series of cycles.<\/b><span style=\"font-weight: 400\"> These cycles are characterized by initial periods of expansion, as hype for the future promise of AI drives investment, which drives further advancement, generating more hype in a self-sustaining reaction until, eventually, the limits of the current generation of technology are reached, and the hype fades as businesses realize firsthand the limitations of the technology. The chain reaction then reverses, with pessimism driving cuts in funding and investment, which in turn drive reduced advancement, heightening pessimism about the technology and driving many AI development firms out of business. <\/span><b>This creates a period of relative technological stagnation known as an \u201cAI Winter,\u201d before the cycle eventually repeats itself.<\/b><\/p>\n<p><b>The first AI boom occurred in the 1960s and early 1970s, <\/b><span style=\"font-weight: 400\">when the first generation of electronic computers led to hopes that true artificial intelligence was right around the corner. Most of the theory around artificial intelligence now existed, much of it posited decades earlier by mathematicians like Alan Turing. Now, it seemed, the only obstacle was using the power of the silicon transistor and microelectronics to make these dreams of AI a reality. But, it was not to be. Machine translation, hailed as the wave of the future, turned out to be well beyond the reach of the era\u2019s computer technology. By the early 1970s, the failure of the technology to mature had prompted a series of crippling cuts in funding in the US and UK. 
The \u201cFirst AI Winter\u201d had arrived.\u00a0<\/span><\/p>\n<p><b>It took until the early 1980s for confidence in the technology to recover to the point of another boom, as hype grew that a new generation of computer systems would finally be able to realize the potential of AI.<\/b><span style=\"font-weight: 400\"> The late 1970s had seen the emergence of the personal computer, and with these machines now found in businesses all across the world, it seemed like the hardware challenges of the 1960s had finally been solved. <\/span><a href=\"https:\/\/aiws.net\/the-history-of-ai\/this-week-in-the-history-of-ai-at-aiws-net-the-market-for-specialised-ai-hardware-collapsed-in-1987\/\"><span style=\"font-weight: 400\">Investors were convinced that new \u201cexpert systems,\u201d coded in the Lisp programming language and designed to mimic the judgment of learned experts, would quickly find their way into the workplace and replace human expertise in a variety of fields.<\/span> <\/a><span style=\"font-weight: 400\">Leveraging the booming computer industry, startups like Symbolics and Lucid rapidly grew into multibillion-dollar businesses as they spun up production of Lisp-running workstations. In 1985, Symbolics would register the world\u2019s first \u201c.com\u201d domain name.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Amid all of this renewed optimism, two researchers, Roger Schank and Marvin Minsky, gave a talk at the 1984 meeting of the American Association for Artificial Intelligence, <\/span><b>coining the term \u201cAI Winter\u201d<\/b><span style=\"font-weight: 400\"> to describe the funding cuts of the mid-1970s. They gave a stark warning &#8211; that unrestrained hype would lead to this cycle repeating in the future. Minsky himself was one of the co-creators of the world\u2019s first \u201cexpert system.\u201d But, at the time, few listened &#8211; the Lisp machines, after all, were already making their way onto the market, and the future seemed impossibly bright.<\/span><\/p>\n<p><b>There was just one problem &#8211; the Lisp machines weren\u2019t actually useful.<\/b><span style=\"font-weight: 400\"> In practice, they were overpriced, difficult to use, and would produce unpredictable errors when confronted with novel situations. Their programming was brittle, and incapable of adapting to new circumstances. Making matters worse, companies like Sun Microsystems and IBM had just put cheap, general-purpose computer workstations on the market. These workstations could do the same things as the Lisp machines, but for a fraction of the price. Within three years, Schank and Minsky ended up being right, and the nascent, multibillion-dollar AI industry imploded. 
<\/span><b>This marked the beginning of the \u201cSecond AI Winter,\u201d lasting until the early 2010s.<\/b><span style=\"font-weight: 400\"> During this period, computer scientists tended to cloak their own products in euphemisms, <\/span><a href=\"https:\/\/www.nytimes.com\/2005\/10\/14\/technology\/14artificial.html?ei=5070&amp;en=11ab55edb7cead5e&amp;ex=1185940800&amp;adxnnl=1&amp;adxnnlx=1185805173-o7WsfW7qaP0x5\/NUs1cQCQ\"><span style=\"font-weight: 400\">trying to avoid the term \u201cAI\u201d and its connotations of failed promises.<\/span><\/a><\/p>\n<p><b>The industry is, without a doubt, in an AI \u201cspring.\u201d <\/b><span style=\"font-weight: 400\">The technology is booming, with advances (both positive and negative) coming at a bewildering rate. The question, therefore, is how long this can continue. The ultimate issue is whether the AI industry can innovate itself out of this cycle, by continuing to make sufficient advances to support current levels of hype and investment. (Additionally, it must find ways to make AI actually provide value for consumers &#8211; <\/span><a href=\"https:\/\/futurism.com\/the-byte\/ai-industry-money\"><span style=\"font-weight: 400\">as it stands, enterprise AI faces a serious problem with actually turning a profit.)<\/span><\/a><span style=\"font-weight: 400\"> In many respects, the current generation of AI has succeeded where the previous two generations have failed, in that it has delivered an actually usable product.\u00a0<\/span><\/p>\n<p><b>The problem, however, is that the pace of advancement up to this point is arguably unsustainable. 
<\/b><span style=\"font-weight: 400\">The fact that dataset upscaling has begun to falter as a formula for creating better AI is an ominous sign for the industry. If one applies the lens of the \u201cAI winter\u201d cycle, the message for the industry is clear &#8211; it must innovate neural networks on a fundamental level to make <\/span><i><span style=\"font-weight: 400\">better<\/span><\/i><span style=\"font-weight: 400\">, rather than simply <\/span><i><span style=\"font-weight: 400\">bigger<\/span><\/i><span style=\"font-weight: 400\">, AI, or face a \u201cThird AI Winter.\u201d\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>Matthew Sparks was a Justice Fellow at the Tech Institute 2023-2024.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Multiple reports have emerged in the last week stating that conventional \u201cupscaling\u201d &#8211; that is, the use of increasingly large datasets in an effort to produce better AI &#8211; is [&hellip;]<\/p>\n","protected":false},"author":19629,"featured_media":0,"parent":7881,"menu_order":5,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_price":"","_stock":"","_tribe_ticket_header":"","_tribe_default_ticket_provider":"","_tribe_ticket_capacity":"0","_ticket_start_date":"","_ticket_end_date":"","_tribe_ticket_show_description":"","_tribe_ticket_show_not_going":false,"_tribe_ticket_use_global_stock":"","_tribe_ticket_global_stock_level":"","_global_stock_mode":"","_global_stock_cap":"","_tribe_rsvp_for_event":"","_tribe_ticket_going_count":"","_tribe_ticket_not_going_count":"","_tribe_tickets_list":"[]","_tribe_ticket_has_attendee_info_fields":false,"footnotes":"","_tec_slr_enabled":"","_tec_slr_layout":""},"class_list":["post-8819","page","type-page","status-publish","hentry"],"acf":[],"ticketed":false,"_links":{"self":[{"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/pages\/8819","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.la
w.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/users\/19629"}],"replies":[{"embeddable":true,"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/comments?post=8819"}],"version-history":[{"count":8,"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/pages\/8819\/revisions"}],"predecessor-version":[{"id":8864,"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/pages\/8819\/revisions\/8864"}],"up":[{"embeddable":true,"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/pages\/7881"}],"wp:attachment":[{"href":"https:\/\/www.law.georgetown.edu\/tech-institute\/wp-json\/wp\/v2\/media?parent=8819"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}