How Should Legal Ethics Rules Apply When Artificial Intelligence Assists Pro Se Litigants?
Each year, one out of every six Americans represents himself or herself in court without the assistance of a lawyer. Individuals represent themselves, or litigate pro se, for a variety of reasons, including their inability to afford a lawyer, their mistrust or dislike of lawyers, or their desire to advocate for themselves. The justice gap—the large difference between the number of people who want or need legal assistance and the number who receive it—is widely perceived as a failure of the United States legal system to provide equal justice under the law. Involuntary self-representation is especially prevalent in civil cases: no right to counsel exists for civil litigants, and, because most low-to-moderate-income families and individuals cannot afford legal services, approximately three out of every five people in civil cases go to court with no lawyer. Self-representation occurs in criminal cases to a lesser extent. The Sixth Amendment, applied to the states through the Fourteenth Amendment, provides criminal defendants the right to effective assistance of counsel both at trial and on their first appeal as of right. But there is no constitutional right to appeal, and criminal defendants also have an implicit constitutional right to represent themselves, so long as their choice is “‘free and intelligent.’”
In the early 2000s, the American Bar Association (“ABA”) began to recommend increased use of limited-scope representation (also known as unbundled legal services and discrete task representation) to mitigate the justice gap, guided by the idea that “in the great majority of situations some legal help is better than none.” Depending on the state, a lawyer and client might be allowed to agree that the lawyer will represent the client for only one or a few phases of the case and then withdraw, that the lawyer will represent the client in a specified forum only, or that the lawyer will help the client proceed pro se but will not represent the client in court.
In the future, limited-scope representation might also be provided by machines. There is enormous potential to further narrow the justice gap using technology, particularly software that uses artificial intelligence (“AI”) to provide legal services in a “one-to-many” format. A few software publishers have made self-help legal information programs available to the public. In the future, legal AI could be developed to assist pro se litigants in drafting pleadings, motions, briefs, and other documents; to advise them on their litigation strategy and likelihood of success; or to perform other litigation-related tasks.
The development of such programs would give rise to unprecedented ethical dilemmas. Is it legal for this technology to be made available to the public? Who should be permitted to create and market this technology? To what legal ethics rules should software publishers be subject? How should publishers be sanctioned if they cause harm to users? This Note, which is structured in question-and-answer format, seeks to advise courts, practitioners, and rulemaking bodies on the application of legal ethics rules to legal AI software intended for use by pro se litigants. In some cases, the existing rules or the rationales underlying them can be applied directly to legal AI to produce a reasonable result. But in other cases, it is not clear how, if at all, the existing rules should be applied. Where the answers are not readily apparent, consumer protection should be prioritized—legal ethics rules should promote affordable access to legal services, hold legal service providers accountable for the quality of their services, and provide redress to clients who have been harmed by them. And when these various interests conflict, they should be balanced in a reasonable manner.
This Note is divided into three Parts. Part I provides background information on the current state of legal AI, sets forth the assumptions about likely future developments in legal AI on which Part II is based, and discusses the importance of protecting the public as these new technologies are introduced. Part II applies legal ethics rules to legal AI software intended for use by pro se litigants. Part III concludes by suggesting avenues for future research.