Past Events

February 15, 2024 | ChatGPT: Lessons and Reflections

Featuring Anupam Chander (Georgetown Law), Nikolas Guggenberger (University of Houston Law Center), Jason Kwon (SFS’99) (OpenAI), and Kyoko Yoshinaga (Keio University).

Reflecting on ChatGPT’s ubiquity in conversations across every industry, Jason Kwon, in conversation with Professors Anupam Chander, Nikolas Guggenberger, and Kyoko Yoshinaga, provided an in-depth analysis of the current state and future trajectory of AI technologies, focusing in particular on the deployment and societal integration of tools like OpenAI’s ChatGPT and its new Sora text-to-video tool. Kwon emphasized the critical importance of iterative deployment as a mechanism for public engagement and safety, allowing users to familiarize themselves with AI technologies while potential risks are addressed. He also shed light on OpenAI’s approach to training data, which aims to keep ethical considerations at the forefront and to manage sensitive information responsibly; this strategy, he asserted, not only reflects a commitment to ethical AI development but also fosters trust and transparency between AI innovators and the broader community. Kwon’s discussion extended to the challenges of making AI technologies adaptable across diverse legal and cultural landscapes, and he advocated for model customization that upholds local expectations and regulations while still pursuing the broader goal of accessible and beneficial AI for all.

Kwon also addressed complex issues such as copyright and competitive dynamics within the AI industry, emphasizing the need for a balanced ecosystem that encourages innovation while upholding ethical standards and societal values. He highlighted the potential of generative AI to catalyze a shift toward more informed and adaptive legal and cultural frameworks that can accommodate rapid advancements in AI while ensuring equitable benefits. The conversation also touched on the varied global reception of ChatGPT, illustrating the technology’s impact on different demographics and its potential to bridge infrastructural gaps in regions like Nigeria, in sharp contrast with the apprehension observed in some Western European countries. Looking toward the future of work, Kwon posited that AI’s role would be one of augmentation rather than replacement, improving the quality of work by automating mundane tasks and freeing human capacity for more creative and difficult challenges. This perspective not only reassures the workforce of the value of their roles in an AI-integrated future but also underscores the need for ongoing education and adaptation as AI technologies continue to evolve and reshape work and society at large.

June 29, 2023 | Exploring AI Accountability Policy with Russ Hanser

Featuring Anupam Chander (Georgetown Law), Nikolas Guggenberger (University of Houston Law Center), Russ Hanser (National Telecommunications and Information Administration), Mehtab Khan (Yale Information Society Project), Natalie Roisman (Georgetown Law Institute for Technology Law & Policy), and Neel Sukhatme (Georgetown Law).

Russ Hanser is Associate Administrator for Policy Analysis and Development at the National Telecommunications and Information Administration (“NTIA”), which is housed within the U.S. Department of Commerce. Associate Administrator Hanser spoke about NTIA’s work on AI both before and since the unveiling of ChatGPT, particularly the AI Accountability Policy Request for Comment issued in April 2023 to inform NTIA’s recommendations for promoting trustworthy AI.

Takeaways from his remarks included: As the President’s principal advisor on information policy issues, NTIA can offer valuable policy recommendations on how the government should treat AI, given its diverse legal and technological expertise. NTIA’s own values contribute toward ensuring that AI ultimately benefits the American people while preserving innovation and commercial advantage. In other words, NTIA seeks not to answer how to stop the new technology but rather to identify the frameworks that allow the most benefit from the technology while mitigating its harms. There is also a focus on national security from an international perspective, including how AI could be used to surveil and repress. Human rights are a further concern, including working toward AI transparency so that individuals receive an explanation of the basis on which important AI-assisted decisions are made.

The panel posed questions to clarify NTIA’s national security strategy, which focuses on curtailing transactions that would otherwise fall within the domain of commerce; the difficulty of creating any kind of horizontal risk assessment for general-purpose generative AI like ChatGPT; and the need for different transparency standards for different audiences, standards that are meaningful and useful for each audience. The panel noted that agencies are still learning how AI works: while there is a race to regulate the technology to prevent possible harms, there is also a fear of overregulating and stifling the benefits the technology may bring. Bearing this delicate balance in mind will be important for regulators and policymakers going forward.

Ultimately, NTIA’s broad review of AI helps pull together stakeholders from different sectors and groups in one place and ensures their considerations are accounted for going forward. Developing a framework for how to think about AI technology at this stage is significant as discussions develop with and among regulators and policymakers.

March 3, 2023 | Perspectives on Algorithmic Accountability with Senator Ron Wyden

Featuring Chinmayi Arun (Yale Information Society Project), Anupam Chander (Georgetown Law), Nikolas Guggenberger (University of Houston Law Center), Dennis D. Hirsch (The Ohio State University Moritz College of Law), Mehtab Khan (Yale Information Society Project), Paul Ohm (Georgetown Law), United States Senator Ron Wyden, and Kyoko Yoshinaga (Non-Resident Senior Fellow of the Georgetown Law Institute for Technology Law & Policy).

In this in-person event hosted by the AI Governance Series, Senator Ron Wyden, who helped draft Section 230 of the Communications Decency Act, emphasized the importance of ensuring fairness and effectiveness in AI technology.

Takeaways from Senator Wyden’s remarks included: When algorithmic AI systems are used in critical tools Americans rely on, such as for acquiring insurance or housing, it is important to find out exactly what these systems are and how they work in order to identify and remedy any biases that may be present. In other words, black-box automated systems require transparency in order to avoid discrimination. Senator Wyden’s experience with the NFL’s algorithm for concussion settlement payouts and Amazon’s algorithm for hiring illustrates the kinds of harm that biases in such algorithms can cause when they are not transparent. Senator Wyden, along with Congresswoman Yvette Clarke, drafted the Algorithmic Accountability Act to require companies to perform ongoing assessments that identify any negative impacts of their algorithms and to remedy them. Ultimately, accountability and transparency are key to ensuring algorithmic fairness.

The panel discussed with Senator Wyden his experience with Section 230 and the parallels between confronting the Internet when it was a new technology and confronting algorithms today. Wyden also noted that the biggest hurdle to tackling AI technology is its speed of growth compared to the slower-moving governmental response. This persistent gap means that the government will always be racing to catch up with any new harms that growth may produce.

The follow-up panel discussed Senator Wyden’s remarks and his bill, highlighting the importance of the bill’s agency-building and institution-building provisions in addressing new AI issues. Skepticism was also directed at the bill’s reliance on companies conducting self-assessments, as similar models in the data privacy space have historically proven ineffective. Fixing a flaw in an algorithm and fixing bias in the data an algorithm uses pose different challenges and call for different remedies. Companies will need a certain level of preparation to foster accountability, such as publishing a code of conduct for AI ethics, strictly following those guidelines, and responding quickly to any shortcomings in their AI systems. Humans should hold ultimate responsibility for their AI’s output. Companies have already self-regulated as a means of maintaining trust and positive reputations with consumers.

February 3, 2023 | Brazil’s Draft AI Bill

Featuring Anupam Chander (Georgetown Law), Ricardo Cueva (Superior Court of Justice, Brazil), Marcela Mattiuzo (Yale Information Society Project), Laura Mendes (University of Brasília), and Art Pericles (Yale Information Society Project).

Justice Ricardo Cueva of the Superior Court of Justice of Brazil and Professor Laura Mendes of the University of Brasília discussed Brazil’s Draft AI Bill, elaborating on both the problems it hopes to ameliorate and the challenges in addressing those problems.

Takeaways from Justice Cueva and Professor Mendes’ remarks included: The bill has a five-part structure: 1) Principles, 2) Rights, 3) Risk Categorization, 4) Governance Measures, and 5) Enforcement. In drafting the bill, experts from various disciplines testified in senatorial commission hearings, and over 100 written contributions voiced opinions. General principles alone were not sufficient, so the bill aims to provide comprehensive rules and standards to be enforced, with variation only in specific contexts such as autonomous vehicles; this follows the Brazilian tradition of consistently and continually regulating technology. Drawing on the White House Blueprint for an AI Bill of Rights in the United States, as well as the European Union’s ongoing debates, the drafters determined that the risks posed by AI vary enormously and thus should be classified so that rules and requirements can be applied as needed.

The bill takes a rights-based approach, defining rights that could be enforced via private rights of action, complementing enforcement by a possible future agency in charge of the general norms and rules. Such a comprehensive approach is necessary to address the discriminatory potential of AI systems arising from social and technical biases. By comparison, the United States has taken an interventionist approach, with agencies enforcing such rules. Professor Mendes explained that comprehensive regulation with central coordination seems the best way to achieve harmonized standards and to enforce rights regarding the impacts of AI in specific sectors.

The panel then discussed with Justice Cueva and Professor Mendes the various rights in the bill and how they are balanced against the bill’s risk-based approach. The bill strikes this balance by guaranteeing the rights whenever the technology impacts a person: the more intense the impact, the more intense the governmental measures to protect those rights. The respective roles of the provider and the operator (referred to as “AI agents” in the bill) and the imposition of liability are difficult questions, which the bill attempts to address with rules and norms governing the general applications supplied by the provider of AI tools. In theory, the provider should set some rules or limitations in order to limit the liability an operator may attempt to impose, and this will vary with context. Other audience questions highlighted how the bill tackles not just AI tools but algorithms as well, focusing on the models and their training rather than solely on the outputs of AI systems, and addressing intellectual property rights through transparency.

December 2, 2022 | Blueprint for an AI Bill of Rights

Featuring Anupam Chander (Georgetown Law), Nikolas Guggenberger (University of Houston Law Center), Mehtab Khan (Yale Information Society Project), Art Pericles (Yale Information Society Project), and Suresh Venkatasubramanian (Brown University).

October 21, 2022 | Management of Ethical AI: The State of the Art

Featuring Michael Akinwumi (NFHA), John Basl (Northeastern Ethics Institute), Ilana Golbin (PwC), Dennis D. Hirsch (The Ohio State University Moritz College of Law), and Irina Raicu (Markkula Center for Applied Ethics).

October 7, 2022 | A Conversation with Max Schrems

Featuring Anu Bradford (Columbia Law School), Anupam Chander (Georgetown Law), Nikolas Guggenberger (University of Houston Law Center), Mehtab Khan (Yale Information Society Project), and Max Schrems (NOYB — European Center for Digital Rights).

April 8, 2022 | Towards Human-Centric AI: The Japanese Model

Featuring Susumu Hirano (Chuo University), Hideaki Shiroyama (The University of Tokyo), Toshie Takahashi (Waseda University), and Kyoko Yoshinaga (Non-Resident Senior Fellow of the Georgetown Law Institute for Technology Law & Policy).

The panel on human-centric AI in a Japanese context, moderated by Georgetown’s Kyoko Yoshinaga, brought together a rich group of experts in public policy, media, law, and sociology to comment on Japan’s initiatives in AI governance. The panel covered a wide range of subjects, from Japan’s role in shaping the global AI regulation regime to human perceptions of robotics and fears surrounding it.

The panel began with comments from Professor Hideaki Shiroyama, Director of the Institute for Future Initiatives and Professor at the Graduate School of Public Policy, The University of Tokyo. Professor Shiroyama divided his talk into two major themes. The first was international strategy: Japan’s use of global forums such as the OECD and the G20 to develop, share, and propagate its vision for AI governance. He noted the existence of the Beijing AI Principles and drew on comparative data from Japan, Finland, and Ireland on familiarity with robotics in care settings. He added that Japan’s AI policy was, and is being, developed against the backdrop of its aging population and prevailing culture, and that this is reflected in all spheres, including its treatment of robotics, where a robot is regarded not merely as a human tool but as an entity in itself. The second theme was culture, which looked closely at the relationship between humans and AI and yielded a list of seven basic principles that society should attend to in order to utilize AI fully without fear of social imbalance. The highlight of Professor Shiroyama’s presentation was a striking image from a Japanese political campaign: a poster exhibiting pictures of two human candidates and a third, a robot, which was apparently how the third candidate wished to present himself: as a neutral, transparent entity without the vices of human nature.

Next in the speaker lineup was Professor Susumu Hirano, Professor and Dean of the Faculty of Global Informatics, Chuo University. Professor Hirano has served with the Japanese Cabinet as well as the Ministry of Internal Affairs and Communications, and he spoke of his experience directly shaping and contributing to numerous soft-law instruments (i.e., guidelines and principles) for AI under the Japanese government. The Japanese goal was to create a list of principles that would be globally applicable to the AI regime; these principles, he believes, should become akin to the OECD privacy guidelines that have been widely adopted worldwide. He added that the OECD has in fact prepared its own AI principles at Japan’s urging, and that they reflect Japanese guidelines and principles. He believes that global collaboration and interoperability are desirable for an efficient AI regime. The focus of his speech and work was soft law as the basis of Japan’s current AI governance: guidelines co-created with industry members that adhere to social principles drafted for human-centric AI, principles embraced by schools, government organizations, and corporations alike.

The third speaker was Professor Toshie Takahashi of the School of Culture, Media and Society and the Institute for AI and Robotics, Waseda University. Her work is premised on the broad question of how human happiness might be placed at the center of AI governance. She explained that narratives of AI in Japan differ from Western narratives, and the dichotomy between the two is clearly visible: Japan’s AI narrative tends toward the utopian, while the West embraces a more dystopian outlook. Professor Takahashi, however, focuses on the positives AI can add to society, such as new avenues for diversity and inclusion, and believes it can also help foster transnational dialogue between Japan and the wider world. At the same time, she cautioned against uses of AI that increase discrimination or create potential for addiction and overdependence. While acknowledging the potential for chaos, she added that an AI society must prioritize the sustainability of human society. She believes we now have the potential to achieve the Sustainable Development Goals, but only if we move from an AI-first or nation-first motto to a human-first one. She worries about the gap between the natural and social sciences in AI research and suggests that we should create AI for good together, using a cross-disciplinary approach. She spoke of two projects that seek to close this gap: ‘A Future with AI,’ in collaboration with the United Nations, which focuses on children’s and young adults’ responses to AI, and ‘Project GenZAI’ (meaning ‘now’), which explores ways of making human-prioritizing AI through the lens of Generation Z.

The panel was richly packed with information, and the final segment featured three major questions from moderator Yoshinaga: 1) Will soft law work well in regulating AI? 2) What are the differences between Japan’s and the EU’s human-centric AI approaches? 3) What cultural developments has the Japanese AI regime seen?

In response to the first, Professor Hirano stated firmly that soft law is indeed the way to go, since it is co-created by the government and corporations together, along with academics and consumer groups’ representatives (the so-called “multi-stakeholder approach”). Corporations are therefore naturally willing to adhere to what they have set out for themselves; in fact, he believes hard law may prove more bane than boon in these circumstances. The second question was answered by Professor Shiroyama, who pinned the difference between the EU and Japan on the EU’s choice of hard law versus Japan’s choice of soft law, adding that the two regimes of human-centric AI are each deeply embedded within their own cultures. Finally, Professor Takahashi spoke of the rise of robots and techno-animism in Japanese culture, citing statistics to show that people in Japan have accepted AI but do not want robots to look like them, nor do they want AI to behave impulsively as its human counterparts sometimes do. The panel closed with all speakers addressing Japan’s contribution to the global discourse, and with Professor Hirano’s specific emphasis on Japan’s history as a harmonious legal regime that can weave harmony among different national legal and regulatory structures around the world.

February 25, 2022 | The Geopolitics of European AI

Featuring Vigjilenca Abazi (Maastricht University), Nikolas Guggenberger (Yale Information Society Project), Przemyslaw Palka (Jagiellonian University), and Kirsten Rulf (Federal Chancellery of Germany).

Following the panel on “The Geopolitics of Chinese AI,” this session discussed Europe’s role in the world of AI, both in terms of software development and in creating a thriving regulatory framework around it. The key instruments under the spotlight were the European Commission’s Artificial Intelligence Act, proposed in April 2021, and the General Data Protection Regulation (GDPR), which took effect in 2018. The panel was moderated by Yale ISP’s Executive Director, Nikolas Guggenberger, and featured Vigjilenca Abazi, Assistant Professor of European Law at Maastricht University; Przemyslaw Palka, Assistant Professor of Law at Jagiellonian University; and Kirsten Rulf, Head of Unit for Digital and Technology Policy at the Federal Chancellery of Germany, who also goes by the title of ‘Nerd-in-Chief’!

The conversation began with an expression of solidarity with the people of Ukraine before Vigjilenca Abazi opened the panel in response to the moderator’s question about the future and direction of European AI. She emphasized the twin issues of compliance and transparency at the heart of the EU’s AI regulatory framework and noted the mismatch between the spirit and the letter of the law. Pointing to pre- and post-market risk assessment and to definitional vagueness, she outlined the drawbacks of Europe’s multi-tiered regulatory regime.

Przemyslaw Palka joined right after and added that he was skeptical of the AI Act’s ambition to govern the market. He was particularly apprehensive about the horizontal regulation, which divides AI systems by risk so that the rules apply only to “high-risk” systems; the irony is that “high risk” remains undefined in the legislation. Later in the discussion, Palka reiterated that it is untrue, or at least inaccurate, to say that regulation is bad for the Internet; rather, bad regulation is, and the AI Act, which is long and expensive yet shies away from making regulatory choices, is bad regulation. The most crucial questions pertaining to risk are left to the market, so it is private parties and law firms who end up defining and answering much of this component. Unlike its American counterpart, which focuses on the effects of regulation, Europe places disproportionate weight on enforcement issues while treating the social landscape as an ideal one.

Kirsten Rulf offered rich insight into the social and political dimensions of the regulatory regime’s successes and failures. She noted that views on AI oscillate between seeing it as a silver bullet and seeing it as a dangerous, discriminatory tool, though this does not stop people from using AI on their personal devices, which raises cultural and policy challenges as much as strictly legal ones. One function of the Act may simply be to signal to EU citizens. Much like the GDPR, the AI Act is a geopolitical tool as much as a legal one: with the Trade and Technology Council, negotiations on the EU-US Privacy Shield, and other forum discussions having lain dormant for years, the Act is the EU’s political move to control the narrative. Responding to Palka, she wondered aloud whether leaving decisions to the market is a flaw or a design principle.

The question of the GDPR’s extraterritoriality, also known as the Brussels effect, was discussed squarely by Abazi, who drew parallels between the concept of privacy in EU law today and that of human rights globally twenty years ago. She said that the modern European stance of refusing to sign trade agreements until privacy compliance is achieved is akin to the human rights compliance demands of earlier decades. Other critical questions posed to the panelists, addressed in the final segment, included regulatory pace (whether regulation can keep up with AI’s development and respond to social nuances) and how much the AI Act materially regulates the landscape, given that much of it is already addressed by Article 22 of the GDPR.

January 21, 2022 | The Geopolitics of Chinese AI

Featuring Anupam Chander (Georgetown Law), Simon Chesterman (National University of Singapore), Jeffrey Ding (George Washington University), Samm Sacks (Yale Law School Paul Tsai China Center), and Kendra Schaefer (Trivium China).

In this AI Governance Virtual Symposium Series session, Professor Anupam Chander moderated a discussion between Simon Chesterman, Jeffrey Ding, Kendra Schaefer, and Samm Sacks.

Professor Chander started by posing the following puzzle: if the US and China are in the midst of a geopolitical race to become the world’s AI superpower, why are both of them cracking down on big tech? Recent Chinese regulations on data and algorithms caused major Chinese internet companies to lose over $1 trillion in market value.

The discussion started with Kendra Schaefer, Partner and Head of Tech Policy Research at Trivium China, analyzing the recent wave of Chinese regulations directed at tech companies. These include data security, antitrust, IPOs, online content moderation, gaming, and education. To understand the extent and breadth of this wave, Schaefer discussed the regulation of recommendation algorithms, which recently came into force, calling it a “world-first” and an “example for Europe and the United States.” She argued that while some of these rules, such as those that require algorithms to promote “positive feelings” and “mainstream values,” are specific to China’s legal and political system, many others, such as those aimed to protect privacy, battle internet addiction, and increase user control, address nearly universal concerns.

The reason why the regulations of AI may make sense even from a market perspective, suggested Jeffrey Ding, Postdoctoral Fellow at Stanford’s Center for International Security and Cooperation, is because they are meant to promote AI development that is sustainable and legitimate. If one thinks of AI diffusion in the entire economy as a “long game” taking decades, then thoughtful and publicly salient regulation can be a “first step towards more sustained acceptance and more sustainable development of AI in China.”

Samm Sacks, a Senior Fellow at Yale Law School Paul Tsai China Center & New America, argued that the recent wave of regulation has its roots in the Cybersecurity Law passed in 2017. These laws and regulations exemplify a core tension between the government’s goal of increasing its control over data and the internet and its goal of maintaining the growth of the digital economy. This tension can be witnessed in the debates between Chinese agencies that are more security-focused and those that are more oriented toward the global economy.

Simon Chesterman, Dean of the Faculty of Law at National University of Singapore and Senior Director of AI Governance at AI Singapore, argued that the recent Chinese wave of regulations is motivated by an amalgam of at least three different goals. First, there is the political need to rein in technology companies because they “were getting a bit too powerful,” and a “bit too ahead of the curve in terms of their politics.” The second goal is to promote social goals. Examples include the limits on the number of hours children can play computer games and the crackdown on the for-profit after-school education system. Finally, there are purely economic reasons. What is striking, Chesterman suggests, is that China decided that promoting certain social and political goals is worth the loss of a trillion dollars in market value.

October 29, 2021 | AI’s Role in Addressing and Exacerbating Climate Change

Featuring Priya Donti (Climate Change AI), Sasha Luccioni (Mila — Quebec Artificial Intelligence Institute), journalist Jackie Snow, and Masaru Yarime (Hong Kong University of Science and Technology).

In this installment of the AI Governance Virtual Symposium Series, journalist Jackie Snow moderated a discussion between Priya Donti, Sasha Luccioni, and Masaru Yarime on the role of artificial intelligence in addressing and exacerbating climate change.

Priya Donti, Chair of Climate Change AI, surveyed the different ways AI applications can be used to mitigate climate change and support climate action, either by reducing greenhouse gas emissions or adapting to the results of climate change. These applications include information gathering, forecasting, improving operational efficiency, performing predictive maintenance, accelerating scientific experimentation, and approximating time-sensitive simulations. At the same time, AI can be used in systems that directly or potentially increase greenhouse gas emissions, such as emission-intensive industries, in addition to the substantial amounts of energy consumed by AI systems themselves.

Sasha Luccioni of the Mila Institute and Co-Founder of Climate Change AI discussed the “This Climate Does Not Exist” project, in which AI is used to generate images that simulate the appearance of climate events in user-chosen locations. As research suggests, by bringing these effects closer to home, such AI-produced personalized imagery can help viewers better grasp the urgency of climate change.

Professor Masaru Yarime of the Hong Kong University of Science and Technology noted the increasing involvement of AI systems in climate change-related technologies. After surveying these involvements, including promoting energy efficiency in information and communication systems, industry, transportation, and the household, Professor Yarime discussed how these use cases could fit into existing AI governance regimes.

September 24, 2021 | Classifying AI Systems and Understanding AI Accidents

Featuring Catherine Aiken (Georgetown Law Center for Security and Emerging Technology), April Falcon Doss (Georgetown Law Institute for Technology Law & Policy), and Helen Toner (Georgetown Law Center for Security and Emerging Technology).

In this semester’s first session of the AI Governance Virtual Symposium, moderated by April Falcon Doss, Executive Director of the Georgetown Institute for Technology Law and Policy, Catherine Aiken and Helen Toner from Georgetown’s Center for Security and Emerging Technology (CSET) presented their work on AI classification and AI accidents.

Catherine Aiken, the Director of Data Science and Research at CSET, discussed CSET’s work on developing AI classification frameworks for policymakers. Because AI is a general-purpose technology, different AI systems can have different meanings and implications for various regulatory frameworks. Responding to the challenge that AI’s multifacetedness poses to policymaking, CSET, in collaboration with the OECD, seeks to develop a user-friendly framework to classify AI systems uniformly along policy-relevant dimensions. To be successfully employed by policymakers, classifications need to be readily usable and understandable, characterize the elements most relevant for policy and governance, involve minimal administrative burdens, be attuned to other AI governance frameworks, and be reliably consistent across a range of users. In line with these key criteria, CSET has developed two alternative and complementary classification frameworks: one classifies AI systems according to their level of autonomy and impact, while the other looks at the context the system operates in, the kind of input it receives, the model it utilizes, and its output.

Helen Toner, Director of Strategy at CSET, discussed the work done at CSET on the emerging phenomenon of AI accidents. Regulatory frameworks risk lagging behind the rapid development of AI technology. To give the policy response sufficient time to adapt, CSET has been developing tools to foresee AI-related problems before they arise. Doing so involves drawing both on past non-AI technological accidents and on known weaknesses and vulnerabilities characteristic of AI systems, which stem from encounters with unexpected input, failures to devise appropriate specifications, and difficulties involving a system’s interpretability and its inability to assure users of its accuracy. Looking at these two sources, the research has identified five factors contributing to AI accidents: competitive pressure, system complexity, the speeds at which AI systems operate, untrained and distracted users, and cascading effects in multiple-instance systems. In responding to these weaknesses, policymakers can invest in AI safety R&D and in standards and testing capacities, work across borders to reduce accident risk, and facilitate information sharing.

June 18, 2021 | Watching Algorithms, The Role of Civil Society

Featuring Julia Angwin (The Markup), Iverna McGowan (Center for Democracy & Technology), David Robinson (Cornell College of Computing and Information Science), and Byron Tau (The Wall Street Journal).

In this year’s concluding session of the AI Governance Virtual Symposium, Byron Tau of the Wall Street Journal led Julia Angwin, Iverna McGowan, and David Robinson in a discussion on the role of civil society in keeping the use of algorithms in check.

Julia Angwin, Founder and Editor-in-Chief of The Markup, observed the role journalism plays in bringing to light the ways in which different AI use cases affect our lives, from hiring algorithms to those used in criminal proceedings. Despite their prevalence, algorithms are prone to introducing and amplifying biases, yet are often subject to little scrutiny. Furthermore, algorithms can be used to deflect accountability for failures for which comparable human decision-makers would be held responsible.

Iverna McGowan, the Europe Director of the Center for Democracy & Technology, discussed the EU’s recently published draft AI regulation. The proposed regulation tasks governmental agencies with determining the level of risk posed by various AI-based systems and regulates them according to their risk level. However, this risk-based approach should not come at the expense of a rights-based approach that puts AI’s potential effect on human rights at center stage. Civil society organizations have an essential role in ensuring that the use of AI lives up to human rights standards and the general principles of the rule of law, including transparency and fairness.

David Robinson of Cornell’s College of Computing and Information Science focused on the mechanisms that can be used in the service of AI governance. Examples of such mechanisms can be gleaned, for instance, from the process of organ allocation, which has been subject over the years to various regimes of public oversight and input. The debate over algorithms can act as a moral spotlight, focusing attention on specific aspects of fairness. However, there are also more general questions about the very use of algorithms in different circumstances.

May 28, 2021 | How Do We Regulate AI? Comparative Perspectives

Featuring Chinmayi Arun (Yale Information Society Project), Anupam Chander (Georgetown Law), Jessica Rich (Former Director of the Bureau of Consumer Protection at the Federal Trade Commission), and Lucilla Sioli (European Commission).

In the third session of the AI Governance Virtual Symposium, Professor Anupam Chander moderated a panel on comparative perspectives on AI regulation featuring Chinmayi Arun of the Information Society Project, Jessica L. Rich of the Institute for Technology Law and Policy, and Lucilla Sioli of the European Commission.

Discussing AI regulation in the global South, Chinmayi Arun noted the Western norms and imagery imposed by major multinational companies on the global majority through data collection choices and model development. This tension is exacerbated by the fact that some countries in the majority world are not democracies and others have weak regulators. Arun discussed how this results in technologies criticized in the minority world being embraced by countries in the majority world. Lastly, Arun touched on the tension between states’ questioning of certain technologies and international agreements guaranteeing the free flow of data.

Lucilla Sioli discussed the EU’s proposed use of the CE product-marking framework to regulate the placement of AI products on the European market. Sioli stressed that the purpose of the proposed regulation is not to regulate AI technology as such but rather to impose rules on the use of certain AI systems in specific contexts, scaled according to the system’s risk and sensitivity. In the proposed regulation, AI use cases range from mundane, low-risk systems to high-risk and outright prohibited use cases. This scaled regulation, Sioli noted, can help address the concerns of businesses reluctant to use AI out of fear of customer objections.
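The scaled, use-case-based logic Sioli described, in which obligations attach to how a system is used rather than to the technology itself, can be caricatured in a few lines. The tier names and example use cases below are illustrative assumptions, not the regulation’s actual taxonomy.

```python
# Hypothetical risk tiers, ordered from least to most restricted.
RISK_TIERS = ("minimal", "limited", "high", "prohibited")

# Illustrative mapping from use case to tier; the real proposal is
# far more detailed and context-sensitive.
USE_CASE_TIER = {
    "spam filtering": "minimal",
    "customer-service chatbot": "limited",
    "credit scoring": "high",
    "social scoring by public authorities": "prohibited",
}

def risk_tier(use_case: str) -> str:
    # Unknown use cases default to the cautious "high" tier.
    return USE_CASE_TIER.get(use_case, "high")

def may_enter_market(use_case: str) -> bool:
    """Only non-prohibited use cases may be placed on the market."""
    return risk_tier(use_case) != "prohibited"
```

Under this toy model, a customer-service chatbot clears the market-entry check, while a prohibited use case does not, capturing the idea that the same underlying technology faces different rules in different contexts.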

Presenting the perspective from the US, Jessica Rich, former Director of the FTC’s Bureau of Consumer Protection, discussed the many non-binding principles and standards addressing the use of AI and requiring transparency, truthfulness, and nondiscrimination, as well as the increasing number of legislative proposals on the subject. Although AI technology is not regulated in the US as a general matter, Rich noted that AI is incorporated into products and services that are themselves subject to comprehensive regulatory frameworks. However, further regulation is needed, Rich added, to ensure that corporations cannot escape accountability by assigning responsibility to an algorithm.

May 7, 2021 | AI Ethics and Corporate Responsibility

Featuring Yoko Arisaka (Sony Group Corporation), Alexandra Reeve Givens (Center for Democracy and Technology), Erika Brown Lee (Mastercard), and Jutta Williams (Twitter).

In the second session of the AI Governance Virtual Symposium, Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, moderated a panel on AI and corporate social responsibility featuring Yoko Arisaka, General Manager at Sony’s Legal Department, Erika Brown Lee, SVP and Assistant GC at Mastercard, and Jutta Williams, Staff Product Manager at Twitter.

Discussing Sony’s ethics activities, Yoko Arisaka emphasized corporations’ duty to promote creativity and sustainability. Ethics in AI requires constantly exploring the meaning of humanity and of what we are looking for as people. Though the challenges created by AI and machine learning cannot be resolved entirely, Arisaka underscored the importance of mitigating these risks and maximizing fairness. This requires transparency about the collection of personal data, providing individuals with accessible information on how AI uses their data. Discussing the particular challenges faced by companies with global operations, Arisaka noted the variance in different people’s sense of ethics. To meet this challenge, global companies must develop common ethics guidelines in dialogue with different groups: academia, industry, and the public sector.

Presenting Twitter’s Responsible Machine Learning Initiative, Jutta Williams discussed the need for public development and accountability in assessing algorithmic fairness. Williams described the initiative as resting on four pillars: taking responsibility for algorithmic decisions, equity and fairness of outcomes, transparency about decisions, and enabling user agency and algorithmic choice.

For Erika Brown Lee, corporations’ social responsibility concerning AI revolves around the question of trustworthiness, as users need to be able to trust that service providers are good stewards of their personal data. Ethical entities, Brown Lee stressed, are responsible for ensuring that individuals and their rights are honored. Individuals should own their data, control and understand how it is used, benefit from its use, and have a right to keep personal data private and secure.

April 2, 2021 | AI for Municipalities

Featuring Albert Fox Cahn (Surveillance Technology Oversight Project), Ann Cavoukian (former Privacy Commissioner of Ontario, Canada), Sheila Foster (Georgetown Law), and Ellen Goodman (Rutgers University School of Law).

Professor Sheila Foster moderated a panel on AI for municipalities, featuring Dr. Ann Cavoukian, Albert Fox Cahn, and Professor Ellen Goodman. Spending on smart cities, Professor Foster noted, is likely to reach more than $130 billion this year, with AI expected to play a substantial role in their operation. This development promises to reshape many facets of city life, but it also gives rise to multiple challenges, ranging from privacy and security to the lack of transparency.

Dr. Ann Cavoukian, Executive Director of the Global Privacy & Security by Design Center, discussed in her remarks the need to incorporate deidentification practices at the source of data collection in smart cities. AI is not magical, Dr. Cavoukian stressed, and transparency is essential to ensuring that privacy remains an inherent component of data gathering, as well as a safeguard against harmful and costly mistakes. Though generally optimistic about the potential benefits of smart cities, Dr. Cavoukian insisted that eliminating the hidden biases endemic to AI and preventing the misuse of collected data require the ongoing ability to “look under the hood” of the systems being used.

Albert Fox Cahn, Founder and Executive Director of the Surveillance Technology Oversight Project, offered a more cautious stance on the use of AI by municipalities, drawing attention to municipalities’ tendency to over-collect data, often beyond the collection’s original purpose. Public oversight of data collection and use, Mr. Fox Cahn warned, is hindered by the opacity of municipal procurement, compounded by additional layers of technological complexity. In this reality, it is not always clear precisely what benefits the technology promises to produce or how effectively it produces them. To further complicate matters, the presumed benefits and the harms these systems entail are often lopsidedly distributed, with vulnerable communities bearing the brunt of the costs while enjoying few of the gains. Even when municipalities attempt to minimize misuse by restricting the use of collected data to its original purpose, it is often difficult to prevent the data from being turned over to state and federal authorities.

Professor Ellen Goodman, Co-founder and Director of the Rutgers Institute for Information Policy and Law, focused in her discussion on the subject of trust, noting how the failure to separate relatively mundane uses of AI from sensitive use cases can undermine public trust across the board. Exacerbating this challenge is the general concern over the role of private companies in data collection and its potentially harmful effects on democratic accountability and public participation. To gain public trust, municipalities must provide shielded data storage for their residents, protected from commercial and other interests, by employing purpose limitations and privacy controls.

March 10, 2021 | Interview with Minister Audrey Tang on AI

Featuring A. Prince Albert III (Georgetown Law Institute for Technology Law & Policy), Anupam Chander (Georgetown Law), Nikolas Guggenberger (Yale Information Society Project), Audrey Tang (Taiwan Minister of Digital Affairs), and Kyoko Yoshinaga (Non-Resident Senior Fellow of the Georgetown Law Institute for Technology Law & Policy).

In the opening segment to the AI Governance Virtual Symposium, organized by the Information Society Project at Yale Law School and the Institute for Technology Law and Policy at Georgetown Law, Audrey Tang, the Digital Minister of Taiwan, discussed Taiwan’s approach to AI governance and her vision for the future of AI with Professor Anupam Chander, Nikolas Guggenberger, Kyoko Yoshinaga, and Antoine Prince Albert III.

In her inspiring remarks, Minister Tang suggested treating AI as a means of increasing democracy’s “bitrate,” using the technology to foster and facilitate interpersonal relationships. Discussing the Taiwanese approach, Minister Tang stressed the need for government investment in digital infrastructure, akin to that allocated to traditional tangible infrastructure. Without such investment, Minister Tang warned, governments would be forced to rely on private resources, which may not be compatible with democratic rule.

On the future of AI governance, Minister Tang discussed the importance of technological education to increasing digital competence and access to AI. Investments in digital competence are key to fostering democratic participation by transforming citizens into active media producers. Discussing the risks of reliance on AI, Minister Tang addressed the need to promptly respond to implicit biases by introducing transparency and robust feedback mechanisms. Likening the use of AI technology to fire, Minister Tang advocated introducing AI literacy at a young age, teaching children how to successfully and safely interact with AI, and implementing safety measures in public infrastructure.

Offering an optimistic view on the future of AI, Minister Tang suggested that we replace talk of AI singularity with the language of “plurality,” using AI to expand the scope of social values to include future generations and the environment. Doing so requires international cooperation in developing norms that would promote fruitful AI governance and increased opportunities for future generations.

About the AI Governance Series

Initially founded by Professor Anupam Chander, Assistant Professor Nikolas Guggenberger, and Project Associate Professor Kyoko Yoshinaga, the AI Governance series is now in its fourth year and represents a collaboration between the Georgetown Law Institute for Technology Law & Policy Global TechNet Working Group and the Yale Information Society Project. Thank you to the many individuals at Georgetown Law, Yale Law, and elsewhere who have contributed over the years!

A. Prince Albert III
Chinmayi Arun
Heather Branch
Hillary Brill
James Carey
April Falcon Doss
Max Dreitlein
Mary Pat Dwyer
Nik Eder
Clara Ferrari
Mehtab Khan
Daniell Maggen
John Eagle Miles
Artur Pericles L. Monteiro
Keshav Raghavan
Natalie Roisman
Samantha Simonsen
Esther Tetruashvily
Eoin Whitney