GPT, Esquire: How the Nippon Case May Shape the Future of AI in Pro Se Litigation

April 15, 2026 by Kalina Pierga


The Nippon Lawsuit

On March 4, 2026, a lawsuit filed in the Northern District of Illinois opened a new chapter in the ongoing discourse surrounding artificial intelligence (“AI”) in legal practice. Nippon Life Insurance Company (“Nippon”) sued OpenAI for allegedly practicing law without a license.[1] Nippon claimed OpenAI’s ChatGPT platform induced a woman to file a frivolous lawsuit against Nippon that resulted in significant financial harm to the company.[2]

After her disability claim against Nippon was settled with prejudice in January 2024, former Nippon employee Graciela Dela Torre raised concerns with her attorney that the terms of her settlement agreement “resulted from potential errors or omissions of important facts and documentation.”[3] Dela Torre’s attorney disputed that these concerns were actionable. He explained that Dela Torre had signed an agreement releasing Nippon from further causes of action related to the dispute, and that the case had been dismissed with prejudice, meaning it could not be reopened.[4]

Dissatisfied, Dela Torre uploaded her attorney-client correspondence to ChatGPT, asking “whether she was being gaslighted.”[5] ChatGPT concluded she was.[6] Dela Torre fired her attorneys and attempted to reopen the suit pro se with the help of ChatGPT, complete with 21 motions, eight notices and statements, and a subpoena.[7]

Nippon’s complaint against OpenAI alleges that ChatGPT engaged in unauthorized practice of law, induced Dela Torre to commit tortious interference with her settlement contract, and aided Dela Torre’s abuse of judicial process when it advised and assisted in drafting Dela Torre’s filings.[8]

The Nippon lawsuit appears to be one of the first cases in which an AI company has been directly accused of engaging in unauthorized practice of law (“UPL”).[9] More broadly, Nippon’s theory of the case raises the question of what it means to “practice law” in the age of AI.

The Nippon case sits at the intersection of three trends shaping the future of the legal system: expanded use of generative AI tools in legal practice, persistent challenges faced by pro se litigants, and the legal profession’s evolving understanding of its own regulatory boundaries. This piece examines these intersections and the legal ethics challenges they present.

AI-Assisted Pro Se Litigation and Access to Justice

The Pro Se Dilemma

Pro se litigants, or individuals who represent themselves in court without a lawyer, face significant procedural and substantive challenges.[10] Though courts agree that pro se filings should be construed liberally in favor of the self-represented party, pro se litigants are still expected to comply with the same rules as licensed attorneys.[11] These expectations force people without formal legal education to play by the often-stringent, complex, and jargon-laden rules of the court system.

Pro se litigants account for a significant share of cases in courts nationwide. In 2025 alone, filings by pro se litigants constituted 50 percent of new cases in the federal appellate courts.[12] Many litigants rely on self-representation because formal legal representation is expensive and inaccessible—a problem widely described as the “access to justice” crisis.[13]

Generative AI has quickly become an attractive tool for pro se litigants.[14] Instead of relying solely on legal self-help guides, court-provided resources, or general internet inquiries, pro se litigants may use conversational AI systems to generate pleadings, develop legal theories, and even script oral arguments.[15]

In an interview with the Georgetown Journal of Legal Ethics, Jonah Perlin, Legal Ethics and AI scholar and Professor of Legal Practice at Georgetown University Law Center, summarized AI’s role in addressing difficulties faced by pro se litigants:

Pro se litigants, by definition, face many challenges. One challenge that I saw, especially when I was clerking, was an inability to make legal arguments in the way the court requires, to understand the rules of evidence, or to understand the expectations of a particular briefing schedule. AI can help support that… Litigants have long used [legal] form books to accomplish their goals through the legal system. The challenge is that AI can do it in a way that looks convincing, even if it is factually wrong. That’s not a bug of AI, but a feature. It’s a generative technology that produces new text that looks, algorithmically, like the most likely outcome for the input text, which is why you get hallucinated cases and theories of law.[16]

Professor Perlin captures the double-edged reality of generative AI use by pro se litigants: while AI tools offer unprecedented access to legal reasoning and drafting assistance, they also introduce new risks of error that pro se litigants are ill-equipped to detect. The central question for courts, and the legal community more broadly, is how to harness AI’s democratizing potential without allowing its limitations to further disadvantage those it stands to help.

Courts’ Responses to Pro Se AI Use

As pro se litigants’ use of AI has increased, courts have responded with varying degrees of tolerance.

In a March 2025 proceeding before a panel of New York State appellate judges, a pro se litigant attempted to present oral argument using an AI-generated avatar.[17] The litigant submitted a video in which a digitized, human-looking ‘attorney’ began presenting opening remarks on the appellant’s behalf. The judges immediately halted the proceeding and berated the litigant for failing to disclose his use of an AI ‘representative’ in advance.[18] A video clip of the incident has since gone viral online.[19]

Some courts readily sanction pro se litigants whose filings contain erroneous AI-generated content, while others have banned AI use altogether.[20] A Missouri appellate court, for example, penalized a pro se litigant to the tune of $10,000 for including AI-hallucinated case law in a “frivolous appeal.”[21] Another litigant in the Western District of Michigan was recently fined $2,900 for AI hallucinations.[22]

Other courts have extended grace, albeit limited, to pro se litigants who use AI. In January 2026, the Seventh Circuit declined to sanction a pro se plaintiff suspected of using AI-hallucinated case law in his filings.[23]

When vacating the plaintiff’s appeal and remanding it to the lower court, the Seventh Circuit commented more broadly on pro se AI use:

“While AI presents great overall promise, the experience so far in litigation has revealed instances of inaccurate factual and legal representations to courts,” the court wrote.[24] “As pro se litigants employ AI to assist with court filings, a basic reminder seems wise. Accuracy and honesty matter.”

The Seventh Circuit further asserted that pro se litigants, despite their lack of formal legal training, must still “shoulder responsibility” for presenting accurate, factual filings to the court.[25]

Legal scholars have considered solutions that go beyond case-by-case disciplinary sanctions.

“On balance, [AI] provides access to the legal system to people that would not have effective access otherwise,” said Professor Perlin. “But we as a legal system need to ask, what guardrails do we need to put on [AI use]? To me, this is not really an AI question. This is a question of making the current regulatory infrastructure for things like checking case citations work at a larger scale.”

Evolving Solutions to Fallible AI in Legal Filings

In 2024, the American Bar Association (“ABA”) issued Formal Opinion 512, its first ethics guidance addressing lawyers’ use of AI.[26] The opinion emphasizes that attorneys must understand the benefits and risks of AI systems, protect client confidentiality when using them, and carefully review any AI-generated output before relying on it in legal practice.[27]

But this guidance does not apply to self-represented individuals, who are not bound by the profession’s ethics rules. Formal regulatory frameworks guiding pro se AI use remain sparse, leaving courts, legal resource organizations, and attorneys themselves to mitigate the issues that pro se AI use presents.

Several jurisdictions have implemented supplemental programs to respond to these challenges. The Northern District of New York, for example, launched a “virtual assistant” chatbot that can answer specific user-input questions and guide users through legal procedures.[28] Similarly, the Alaska Court System partnered with LawDroid, a legal technology company, to develop an AI-powered legal assistance chatbot for pro se litigants.[29]

The Alaska Court System has also experimented with a new paradigmatic approach: shifting its focus from teaching pro se litigants to better equipping court staff.[30] Court clerks, librarians, and help center workers—who interface most often with pro se litigants—are instructed to advise self-represented individuals in a way that acknowledges AI use rather than condescending to those who rely on it.[31] Staff can then redirect litigants to reliable court-specific resources.

These solutions, while promising in the short term, require significant investment of already-slim court resources. Moving forward, developing scalable support systems for overburdened courts will be essential to ensuring that AI use by pro se litigants enhances, rather than undermines, meaningful access to justice.

Nippon: Unauthorized Practice of Law, or Something Else?

Consistent with ABA Model Rule of Professional Conduct 5.5, every U.S. jurisdiction restricts the practice of law to licensed attorneys admitted to the bar in the relevant jurisdiction.[32] The prohibition on UPL is meant to protect the public from incompetent legal advice and to preserve the integrity of the legal system.

The Nippon lawsuit raises the possibility that AI systems themselves could be accused of violating those rules, or that companies operating them might be liable for facilitating unauthorized legal services.

Eran Kahana, an advisory board member of the Stanford Artificial Intelligence Law Society and attorney specializing in artificial intelligence and intellectual property law, argues that ChatGPT crossed the threshold for UPL when it counseled Dela Torre.[33]

“The uncrossable threshold (UT)… separates the provision of legal information from unauthorized practice of law,” said Kahana, writing for the Stanford Law School Codex Blog. “It is about what a system is built to do and what it is built to refuse. OpenAI built a system with no such refusal architecture.”

AI’s functional mechanisms, driven by algorithmic responses to unique user inputs, muddle the question of what constitutes mere resource provision versus informed counsel. Generative AI systems produce responses based on statistical patterns in training data rather than professional judgment. Yet when an AI system answers a user’s legal question with specific guidance about a particular case, the difference between general information and personalized advice begins to blur.

Traditional attorney-client relationship doctrine sharpens analysis of ChatGPT’s conduct under UPL. In Togstad v. Vesely, Otto, Miller & Keefe, the Minnesota Supreme Court held that an attorney-client relationship is formed “whenever an individual seeks and receives legal advice from an attorney in circumstances in which a reasonable person would rely on such advice.”[34] Had ChatGPT been a person faced with Dela Torre’s request, rather than an AI system, its conduct would likely have been considered UPL under the Togstad standard. It did not merely provide general legal advice on which Dela Torre relied; it rendered a personalized assessment of her legal circumstances and actively contributed to drafting her filings.

The analysis is muddied, however, by the fact that ChatGPT’s responses are algorithmically generated even when they incorporate case-specific details supplied by the user. Courts and regulators have not yet clarified this distinction with respect to UPL, but scholars expect that a line will be drawn in due time.

“The Nippon Life case will likely force courts and regulators to define a safe harbor for AI legal applications,” writes Kahana.[35] “Neither Congress nor the ABA has produced one.”

Looking Ahead

The legal field is no stranger to technological shifts that upend practice norms. Email, the internet, cloud storage, and even voice assistants like Siri roused similar concerns about the integrity of legal practice and court filings.

“This is a fast-moving terrain,” said Professor Perlin. “So what is the role of legal scholars, law professors, and law students? To me, it’s understanding the history and doctrine to try to give us analogical hooks that we can use to make the best decisions we can for the future of legal practice. We need to be comfortable with change.”

As litigation like the Nippon case continues to arise, courts may begin to articulate clearer rules about how AI systems fit within existing doctrines such as the unauthorized practice of law. At the same time, the American Bar Association will likely continue exploring ways to harness AI’s potential benefits while minimizing its risks.

Generative AI is already reshaping the landscape of self-representation. Whether it ultimately expands access to justice, undermines professional safeguards, or produces some combination of both will depend on how regulators and courts navigate the delicate boundary between legal assistance and the practice of law.


[1] Complaint, Nippon Life Ins. Co. of Am. v. OpenAI Found., No. 26-cv-02449 (N.D. Ill. 2026) [hereinafter “Nippon Complaint”].

[2] Id. ¶¶ 1–2.

[3] Id. ¶ 48.

[4] Id. ¶ 49.

[5] Id. ¶ 50.

[6] Id.

[7] Complaint, Dela Torre v. Nippon Life Ins. Co. of Am., 2023 WL 2868058 (N.D. Ill. 2022) (No. 22-cv-07059); Mike Scarcella, OpenAI Hit with Lawsuit Claiming ChatGPT Acted as an Unlicensed Lawyer, Reuters (Mar. 5, 2026) https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/; Amanda Robert, OpenAI Sued for Practicing Law Without a License, ABA Journal News (Mar. 6, 2026) https://www.abajournal.com/news/article/openai-sued-for-practicing-law-without-a-license.

[8] Nippon Complaint ¶¶ 1, 51–56, 58–59.

[9] Scarcella, supra note 7.

[10] Mostafa Soliman, Pro Se Advocacy in the AI Era: Benefits, Challenges, and Ethical Implications, NYSBA (Feb. 10, 2026) https://nysba.org/pro-se-advocacy-in-the-ai-era-benefits-challenges-and-ethical-implications/.

[11] See, e.g., McNeil v. United States, 508 U.S. 106, 113 (1993); Faretta v. California, 422 U.S. 806 (1975); Burgs v. Sissel, 745 F.2d 526 (8th Cir. 1984).

[12] Judicial Business 2025, U.S. Courts, https://www.uscourts.gov/data-news/reports/statistical-reports/judicial-business-united-states-courts/judicial-business-2025.

[13] Nora Freeman Engstrom and David Freeman Engstrom, Justice For All? Why We Have an Access to Justice Gap in America–and What Can We Do About It?, Stanford Law School Blog (June 13, 2024) https://law.stanford.edu/2024/06/13/justice-for-all-why-we-have-an-access-to-justice-gap-in-america-and-what-can-we-do-about-it/.

[14] Fisher Phillips, The ChatGPT Plaintiff: How AI is Transforming Employment Litigation, Driving Up Defense Costs, and What In-House Counsel Can Do About It, Fisher Phillips (Feb. 26, 2026) https://www.fisherphillips.com/en/insights/insights/how-ai-is-transforming-employment-litigation; Brooke K. Brimo, How Should Legal Ethics Rules Apply When Artificial Intelligence Assists Pro Se Litigants?, 35 Geo. J. of Legal Ethics 549 (2022).

[15] Fisher Phillips, supra note 14; Soliman, supra note 10.

[16] Telephone Interview with Jonah Perlin, Professor of Law, Legal Practice, Georgetown University Law Center (Mar. 9, 2025).

[17] New York Appellate Division, First Department, Live Stream (YouTube, Mar. 26, 2025) https://www.youtube.com/watch?v=Ctv4ZQRZgbA&t=1170s; Larry Neumeister, An AI Avatar Tried to Argue a Case Before a New York Court. The Judges Weren’t Having It, NBC New York (Apr. 5, 2025) https://www.nbcnewyork.com/new-york-city/ny-man-uses-ai-in-court/6212925/.

[18] Neumeister, supra note 17; Shayla Colon, Man Employs A.I. Avatar in Legal Appeal, and Judge Isn’t Amused, New York Times (Apr. 4, 2025) https://www.nytimes.com/2025/04/04/nyregion/ai-lawyer-replica-new-york.html.

[19] Colon, supra note 18.

[20] Marco Poggio, Gen AI Shows Promise – And Peril – for Pro Se Litigants, Law360 (May 3, 2024) https://www.law360.com/pulse/articles/1812918/gen-ai-shows-promise-and-peril-for-pro-se-litigants.

[21] Rose Krebs, AI-Generated Fake Case Law Leads to Sanctions in Wage Suit, Law360 (Feb. 13, 2024) https://www.law360.com/pulse/articles/1797437/ai-generated-fake-case-law-leads-to-sanctions-in-wage-suit.

[22] Eugene Volokh, $2900 in Sanctions for AI Hallucinations in Filings by Self-Represented Litigant, Reason (Dec. 4, 2025) https://reason.com/volokh/2025/12/04/2900-in-sanctions-for-ai-hallucinations-in-filings-by-self-represented-litigant/#:~:text=Even%20if%20the%20Court%20were,carelessly%20filing%20AI%2Dgenerated%20documents.

[23] Sara Merken, US Appeals Court Warns Self-Represented Litigants Over AI Errors, Reuters (Jan. 21, 2026) https://www.reuters.com/legal/government/us-appeals-court-warns-self-represented-litigants-over-ai-errors-2026-01-21/; Maura Johnson, AI Use by Pro Se Litigants Presents Challenges for Courts, The Indiana Lawyer (Feb. 27, 2026) https://www.theindianalawyer.com/articles/ai-use-by-pro-se-litigants-presents-challenges-for-courts.

[24] Jones v. Kankakee Cnty. Sheriff’s Dep’t, 164 F.4th 967, 971 (7th Cir. 2026).

[25] Id.

[26] ABA Issues First Ethics Guidance on a Lawyer’s Use of AI Tools, American Bar Association (July 29, 2024) https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/.

[27] A.B.A. Comm. On Ethics and Prof’l Responsibility, Formal Op. 512 (2024).

[28] Pro Se Assistance Program, Northern District of New York Federal Court Bar Association https://ndnyfcba.org/pro-se-assistance-program/.

[29] Natalie Runyon, Chatbots for Justice: The Impact of AI-Driven Tech Tools for Pro Se Litigants, Reuters (Feb. 12, 2025) https://www.thomsonreuters.com/en-us/posts/ai-in-courts/chatbots-pro-se-litigants/.

[30] Rabihah Butler, When Courts Meet GenAI: Guiding Self-Represented Litigants Through the AI Maze, Reuters (Feb. 19, 2026) https://www.thomsonreuters.com/en-us/posts/ai-in-courts/guiding-self-represented-litigants/.

[31] Id.

[32] Model Rules of Prof’l Conduct R. 5.5; Carol A. Needham, The Application of Unauthorized Practice of Law Regulations to Attorneys Working in Corporate Law Departments, American Bar Association https://www.americanbar.org/groups/professional_responsibility/committees_commissions/commission-on-multijurisdictional-practice/mjp_cneedham/.

[33] Eran Kahana, Designed to Cross: Why Nippon Life v. Open AI Is a Product Liability Case, Stanford Law School Codex Blog (Mar. 7, 2026) https://law.stanford.edu/2026/03/07/designed-to-cross-why-nippon-life-v-openai-is-a-product-liability-case/.

[34] Togstad v. Vesely, Otto, Miller & Keefe, 291 N.W.2d 686, 693 n.4 (Minn. 1980).

[35] Kahana, supra note 33.