Events

AI Governance Series

The AI Governance Series, an initiative of the Tech Institute’s Global TechNet Working Group and the Yale Information Society Project, convenes academics, activists, policymakers, and industry leaders to speak on and debate issues of technology law and digital governance around the world. Since its launch in April 2021, the series has hosted more than a dozen events, featuring U.S. Trade Representative Katherine Tai, Austrian privacy activist Max Schrems, U.S. Senator Ron Wyden, Taiwan’s Minister of Digital Affairs Audrey Tang, OpenAI executive Jason Kwon, and Justice Ricardo Cueva of Brazil’s Superior Court of Justice. Its most recent event, featuring Member of the European Parliament Brando Benifei, took place on July 9, 2024.

See the full list of past events and watch the recordings.

Global Perspectives on AI Governance

UNESCO, the OECD, President Biden’s Executive Order, and the EU AI Act all emphasize the importance of diverse perspectives in analyzing artificial intelligence’s unique and novel challenges. In April 2024, twelve lawyers in the Technology Law & Policy LL.M. program presented lightning talks on the critical question of how regulatory approaches to AI will differ across countries and regions, drawing on their experience working in Africa, Asia, Latin America, the Caribbean, and the United States. Topics included policy approaches to deepfakes in Colombia, the development of AI regulations in Japan, and the way U.S.-China relations shape AI policy approaches in Southeast Asia. Learn more about the event.

Government AI Hire, Use, Buy (HUB) Roundtable Series

In 2024, the Tech Institute is joining forces with the Georgetown University Beeck Center for Social Impact and Innovation and the Georgetown University Center for Security and Emerging Technology to host a series of invite-only roundtables examining the government’s use of artificial intelligence. These roundtables bring together participants from government, civil society, academia, and the private sector to examine the U.S. government’s role as an employer of AI talent, a user of AI systems, and a buyer of AI systems. The initiative is funded by a generous grant from the Rockefeller Foundation.

After each roundtable, high-level takeaways will be made public. The series will conclude with a final roundtable that synthesizes conclusions from the earlier events and offers actionable recommendations for the policy community. The results of this concluding roundtable will be presented in a final public webinar.

Read more about the roundtable series.

Summit on Emerging Technology Policy at Georgetown Law

In September 2023, the Tech Institute presented its inaugural Summit on Emerging Technology Policy. Artificial intelligence, perhaps the most significant emerging technology of our time, was discussed at every stage of the Summit, including the AI-focused panel discussion “Getting Serious About AI.” The panel, moderated by John Heflin (L’21) (Alston & Bird), featured Professor Amanda Levendowski (Georgetown Law), Lisa H. Macpherson (Public Knowledge), Sean Perryman (Uber), Professor Neel Sukhatme (Georgetown Law), and Miriam Vogel (L’01) (EqualAI; Chair, National AI Advisory Committee; Chair, Georgetown Law Alumni Board). Learn more about the Summit.

Tech Foundations for Government Staff

Tech Foundations for Government Staff, a multi-day program for federal agency and congressional staff during the summer recess, prepares policymakers to engage with timely tech policy issues. Lectures and panels cover how various technologies work, how they’re being deployed, the policy questions they raise, and how businesses, policymakers, and civil society are addressing those questions in the real world. Tech Foundations for Government Staff: Spotlight on AI took place during the August 2024 Congressional recess and featured three days of AI-centric programming on topics including social media algorithms, national security, intellectual property, election integrity, and U.S.-China competition. Read more about Tech Foundations for Government Staff.

Academic Offerings

With over 80 classes offered each year, Georgetown Law’s robust tech curriculum reflects the diverse and fast-changing array of issues facing lawyers and policymakers. During the 2024-2025 academic year, we are pleased to offer eight classes focused on artificial intelligence and how its rapid development is changing the field.

Georgetown Law Technology Review Symposium: Artificial Lawyering — Law in the Age of Artificial Intelligence

On January 30, 2024, the Georgetown Law Technology Review hosted its biennial symposium. The 2024 symposium, Artificial Lawyering — Law in the Age of Artificial Intelligence, convened legal academics, government technologists, practitioners, philosophers, librarians, and media experts seeking to answer important questions about how artificial intelligence will affect both the law and the legal profession. View the agenda.

Forthcoming Casebook: Artificial Intelligence Law

Faculty Co-Director Professor Paul Ohm’s forthcoming casebook on artificial intelligence law, co-authored with Professor Margot Kaminski of the University of Colorado Law School and Professor Andrew Selbst of the University of California, Los Angeles School of Law, will be a crucial and unprecedented teaching tool for burgeoning curricula on artificial intelligence law. Read updates on the casebook.

The casebook is based on the materials Professor Ohm used in his course Artificial Intelligence and Law, first taught at Georgetown Law in Fall 2023. That course examined the emerging legal frameworks surrounding machine learning and other forms of AI, with students analyzing artificial intelligence law in the United States and internationally.

Scholarship

Between Truth and Power

In 2019, Faculty Co-Director Professor Julie Cohen published Between Truth and Power: The Legal Constructions of Informational Capitalism, which explores the rise of online platforms and the resulting strain on existing legal regimes. She argues that platforms’ AI-powered “algorithmic processes for intermediating and filtering information flows have facilitated the emergence of new legal relations revolving around legal immunity, legal power, and the interplay between them.”

“How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem”

Faculty Advisor Professor Amanda Levendowski’s 2018 article in the Washington Law Review, “How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem,” examines bias in artificial intelligence through the lens of copyright doctrine, and argues that the fair use doctrine is capable of mitigating that bias, just as it has been used to address similar concerns in other technological fields.

“Playing with the Data”

David Lehr and Faculty Co-Director Professor Paul Ohm’s 2017 article in the UC Davis Law Review, “Playing with the Data: What Legal Scholars Should Learn About Machine Learning,” provides a primer on machine learning for legal scholars unfamiliar with the technology. Lehr and Ohm argue that lawyers and technologists will need to collaborate extensively as machine learning grows in power and prominence, and they attempt to create a shared vocabulary to foster that collaboration.

“The Racist Algorithm?”

In his article “The Racist Algorithm?,” published in the Michigan Law Review in 2017, Faculty Advisor Professor Anupam Chander argues that algorithms are no more inscrutable or bias-prone than the human decisionmakers they may replace. Rather than taking a race- and gender-blind approach to training automated decisionmakers, Chander advocates for “algorithmic affirmative action,” in which algorithms are trained with great intentionality and continually assessed for fairness across race and gender lines.

“Right to a Human in the Loop”

In 2017, Faculty Advisor Professor Meg Leta Jones published “Right to a Human in the Loop: Political Constructions of Computer Automation & Personhood from Data Banks to Algorithms” in Social Studies of Science. The article compares legal constructions of automated data processing in the European Union and the United States, finding that the European framework has developed a “right to a human in the loop,” while the American framework encourages complete automation.

“Machines Without Principals”

In his 2014 Washington Law Review article, “Machines Without Principals: Liability Rules and Artificial Intelligence,” Faculty Advisor Professor David Vladeck warns that existing legal structures are not equipped to assign liability in cases of “machines without principals” — that is, machines without identifiable human agents responsible for them — causing harm to people or property.