AI & the Law… & what it means for legal education & lawyers

January 2, 2024

An AI image generator shows what it could look like if law school classes were taught by robots.

From ChatGPT to algorithms that ace the LSAT, artificial intelligence (AI) is roiling the legal world like perhaps no technology ever has – and this is just the beginning. Georgetown Law students, faculty and alumni are on the frontlines of efforts to come to grips with the baffling range of potential benefits as well as dangers raised by this new era.

“The launch of ChatGPT in November of 2022 was a moment as big as the introduction of the World Wide Web in the 1990s,” says Ed Walters, a Georgetown Law adjunct professor who has long taught a class on the “Law of Robots.” Just as browsers and the Web made the Internet accessible to ordinary people who didn’t necessarily know much about computers, he says, ChatGPT, a “chatbot” tool, brought AI to the mainstream. “It was the first time regular people could see artificial intelligence and relate to it in a way that they understood,” says Walters. Now the algorithmic floodgates have been flung wide open, leaving corporations, governments and practically every kind of institution scrambling to figure out how to adapt to the incoming tidal wave of AI.

AI ENTERS THE ACADEMY

Law schools are no exception. Last March, researchers showed that GPT-4, an upgraded version of the model that runs ChatGPT, could outperform most humans on the Uniform Bar Examination, sending a shiver through the ranks of administrators and educators tasked with evaluating students. In an effort to keep students from outsourcing their application essays or research papers to algorithms, some institutions such as the UC Berkeley School of Law have banned the use of generative AI models in exams and assignments.

At Georgetown Law, “we considered a complete ban but so far, have decided that was too broad an approach,” says Professor Daniel Wilf-Townsend, who chairs a committee tasked with, essentially, figuring out how the school should deal with AI. “If you can do a Google search while working on an assignment, then why not be able to do a search on Microsoft Bing, even though it also uses ChatGPT? We want there to be a sense that generative AI resources, especially as they get better, can be used by students in contexts where they’re already allowed to use whatever resources they find ready to hand. But that doesn’t mean that it’s no holds barred when it comes to exams, or plagiarism.”

There’s certainly no shortage of interest in the subject: Georgetown Law currently offers at least 17 courses addressing different aspects of AI. Professor Paul Ohm, whose undergraduate degree is in computer science, is teaching two of them. At present, the Law Center is leaving it up to individual professors to set their own policies on whether and how students may use AI, while maintaining existing rules about plagiarism and exams. Some instructors are forbidding their first-year students from using AI, figuring 1Ls need to learn the basics so that they will at least be able to tell if an AI-abetted paper is up to scratch.

Others are tentatively allowing some use of the technology. Wilf-Townsend plans to add at least one exercise to his upcoming seminar, “AI & the Law: Principles and Problems,” in which students will use language models to respond to reading materials. And Professor Frances DeLaurentis, director of the Georgetown Law Writing Center, is launching an upper-level class in which students will experiment with using AI as a writing aid – playing with different prompts, taking turns writing and editing with the algorithms. “It can be really helpful for brainstorming topics, and with writing that first draft, especially for students whose first language isn’t English,” she says. “I think the future is hybrid work produced by humans working with gen-AI.”

Alonzo Barber, L’06, who heads Microsoft’s U.S. Enterprise Commercial team, is already there. He had no teaching experience when he agreed last fall to lead a one-week course on “Legal Skills in an AI-Powered World” as one of this year’s Week One experiential electives, so he turned to ChatGPT for help. “I was like, this is my first time doing this adjunct thing. I don’t know what a curriculum should look like. So I type into ChatGPT, ‘Draft me a course description about the legal implications of AI and the law.’ It spit out three paragraphs and I was like, ‘This is pretty good!’” He reworked and refined that outline, of course, but says having that first draft done for him saved him hours of work.

Some students may well use the technology to cheat, but at this point stopping them is difficult. Tools that claim to be able to spot AI-generated text are unreliable, says Wilf-Townsend. And in any case, students have always cheated; in a way, AI might even help level the playing field. “AI puts kids who don’t have an Uncle Alito to call for help with their take-home exam on an equal footing with those who do,” says DeLaurentis.

AI JOINS A LAW FIRM

Beyond academia, Barber believes it’s crucial for legal professionals not only to learn how to use AI tools, but also to understand them – how they are built, their strengths, their weaknesses and the ways in which they can fail. Practically every lawyer in America has by now shuddered at the story of the ill-advised attorneys who had ChatGPT write a legal brief that they submitted to a New York federal court – only to find that the brief was filled with nonexistent case citations the bot had simply made up.

AI systems of all types are often plagued by more subtle shortcomings. Many AI-powered face recognition systems, for instance, are more prone to misidentify people of color than they are white people. That’s often because the data sets those systems were trained on included far more white faces. That imbalance makes those systems questionable tools for helping to decide whom to arrest for or convict of a crime. Many other AI systems are similarly biased as a result of flaws in the data they were trained on.

“You really want to think about those things, because our profession touches pretty much every corner of society,” says Barber – from criminal justice to legal issues in bank lending and employment. “These technologies will be implemented in all those areas, which makes it important that we as a legal community understand them.”

For some lawyers, the task is not only to understand the algorithms but also to defend them in court. Bennett Borden, L’04, chief data scientist at DLA Piper, is part of a team of lawyers and data scientists that helps the firm counsel most of the biggest generative AI companies. These unprecedented technologies are raising unprecedented legal questions. For example, generative AI companies have been sued by individuals who claim the platforms produced defamatory statements about them. “These cases are really quite novel,” says Borden. “They raise fundamental questions, like ‘Can you even be defamed by a computer?’”

FRIEND OR FOE?

An AI image generator shows what it might look like if dogs could practice law.

On the other hand, such technology could also help ordinary people use the law to their advantage. Bots can make it easier than ever to, say, fight an unfair eviction notice or contest a firing. “Generative AI should have an amazing democratizing and leveling effect on the practice of law and the judicial system,” says Borden. “It will make the creation of legal products and services easier, and therefore less expensive. So people who previously could not afford to bring a case are going to be able to do that more. And it should boost the capacity of civil rights organizations and pro bono groups to help more people.”

One of the biggest potential upsides of bringing AI into legal practice is that it could supercharge lawyers’ productivity. Algorithms can learn a company’s style and draft bespoke contracts in seconds, or summarize lengthy documents in the time it takes a human attorney to post a vacation shot on Instagram. Big firms are already integrating generative AI models into their practice – for example, London-based Allen & Overy has partnered with a startup on “Harvey,” a chatbot tool its staff can use to help with routine tasks like drafting memos and contracts.

General-purpose models like ChatGPT aren’t (yet) reliable enough for most kinds of legal work, but there are plenty of businesses offering AI tools specifically designed for legal professionals. In addition to his teaching at Georgetown, Ed Walters is an executive at one of those companies, vLex. Unlike models trained on the random cacophony of the whole Internet, his company’s VincentAI tool is trained on a database of some one billion legal documents. “You’re not getting answers from trolls on Reddit or comments on YouTube,” says Walters. Instead, he explains, users type in a natural language query and the tool provides an answer with links to relevant cases. “It’s like asking a junior lawyer or paralegal to read all the relevant cases, treatises and statutes and produce a memo. You still need a lawyer to then go and read those cases and decide if that’s the best way to argue. But research that might have taken a week, you can now start while you’re on the phone with a client, and have the answer by the end of the call.”

But if systems like VincentAI work as well as advertised, will companies even need those junior lawyers and paralegals anymore? And if first-year associates don’t get to learn under the tutelage of more experienced lawyers, how will they get the training they need to move up the career ladder? In short: Will lawyers lose their jobs to robots?

It’s a concern shared by many, and not just those in the legal field. (Freelance writers, for instance!) Walters, at least, isn’t one of them. “Everyone was afraid e-discovery would put junior lawyers out of work,” he says. “But there are more lawyers than ever now. And they’re happier, because they’re no longer stuck in warehouses reviewing boxes of documents.”

THE JURY IS STILL OUT

One thing is for sure: given all the ethical, social and legal perils AI presents, governments are going to have to get serious about regulating the technology. Miriam Vogel, L’01, President and CEO of the nonprofit EqualAI, sits on a committee that advises the Biden Administration on AI policy. She points out that existing laws do already provide some guardrails on how the technology is used. Race-based employment discrimination is illegal whether it’s perpetrated by a hiring manager or an algorithm, for instance. But AI raises all kinds of new issues that will require new rules.

Legislators are starting to tackle that challenge. Several states have passed laws forbidding law enforcement from using face recognition, and California requires companies to let customers know if they are talking to a chatbot. The European Union is expected to soon enact a sweeping package of rules governing how AI is used. “We can expect much more regulation in the EU, and that will impact anyone doing business there,” says Vogel. And in late October, President Biden issued an expansive executive order that obliges major AI companies to share information on the potential risks of their products with the government and directs federal agencies to set up safeguards around the technology.

It’s a start. But the government, like the legal world, and for that matter pretty much all of us, is still trying to catch up with a technology that is getting better and more powerful all the time.

“We’re at the toddler stage of generative AI,” says Borden. “It’s like when your two-year-old takes his first steps. It’s amazing. But he’s still not very good at walking, compared to an Olympic runner. When these systems start to run, and jump, and fly – that idea fills me with excitement and optimism, but it’s also where things get really scary.”

This article is the cover feature in the Fall 2023 issue of Georgetown Law Magazine.

Its author, Vince Beiser, is a journalist based in Vancouver, Canada. His work has appeared in Wired, The Atlantic, Harper’s, The Guardian and elsewhere. He is the author of the books “The World in a Grain: The Story of Sand and How It Transformed Civilization” and the forthcoming “Power Metal.”