Regulating Generative AI in International Arbitration: A Transnational Governance Challenge
April 2, 2026 by Sakshi Yadav
Generative artificial intelligence (“GenAI” or “AI”) is no longer a distant or speculative development in legal practice. It is already reshaping how lawyers research, draft, and manage complex disputes. According to a Goldman Sachs report[1], 44% of legal tasks could theoretically be performed by AI, the highest among major professions except clerical roles. Today, many firms are already using AI for due diligence, contract analysis, and compliance.
From billing models to staffing patterns, the legal profession will change considerably in the coming years. This technological transformation will be especially evident in international litigation and, in particular, international arbitration, a system whose legitimacy rests on confidentiality, due process, and, at least at present, the exercise of independent human judgment.
International arbitration has long prided itself on efficiency, procedural flexibility, and party autonomy. Yet the increasing use of GenAI in arbitral proceedings raises questions that go well beyond technological capability. At its core, the challenge is regulatory: how should GenAI be governed in a dispute-resolution system that is transnational, decentralized, and largely self-regulated?
As arbitration practice increasingly intersects with GenAI-driven tools, the absence of clear, harmonized standards on disclosure, accountability, competence, and permissible use has become difficult to ignore. While GenAI offers significant potential to reduce costs and improve efficiency, its integration into arbitration exposes a regulatory gap that demands attention from arbitral institutions, policymakers, and the broader international legal community.
This piece aims to explore the regulatory challenges that GenAI poses for international arbitration, and to consider what a principled, transnational governance framework might look like in a private dispute resolution system that has, at least until now, depended on human agency and values.
The Appeal of GenAI in a Transnational Arbitration System: Promoting Efficiency and Saving Costs
International arbitration is uniquely positioned to absorb technological change. Unlike domestic litigation, it operates across jurisdictions, legal systems, and procedural cultures. Parties choose arbitration precisely because it offers flexibility, predictability, speed, and procedural autonomy: features that align closely with the efficiency GenAI promises.
In practice, GenAI tools are already being used by counsel to summarize voluminous records, generate chronologies, organize evidence, and assist with preliminary drafting. These applications are particularly attractive in cross-border disputes, where document production often spans multiple legal systems, languages, and regulatory environments. Some arbitrations involve tens of thousands of documents, and summarizing them manually is not only exhausting but also a poor allocation of legal time and human resources. GenAI has the potential to transform this process.
A useful parallel can be seen in the rise of electronic discovery (“e-discovery”), the use of software tools to search, filter, and analyze large volumes of digital documents in litigation and arbitration. In large, document-intensive disputes, e-discovery technologies have significantly improved efficiency and accuracy. GenAI-assisted review promises similar gains, allowing parties navigating complex international transactions to achieve substantial time and cost savings. The distinction between GenAI-assisted administrative support and human decision-making authority matters even more in arbitration because arbitrators, especially subject-matter experts, are often overwhelmed.
Many arbitrators juggle multiple matters at once, and their schedules leave little room for flexibility. In the 2025 International Arbitration Survey conducted by White & Case LLP and the School of International Arbitration, Queen Mary University of London, one arbitrator noted that “AI will revolutionize the way we work—what used to take hours now takes seconds,” while another pointed out that proper use of AI could make dispute resolution “faster and much more economical.”[2] AI can reduce the administrative burden so arbitrators can spend more time analyzing evidence and less time sorting through it.
Professional Competence Encompasses Technological Competence in a GenAI-Assisted Arbitral Practice
Beyond efficiency and cost reduction, the growing use of GenAI in arbitration implicates a subtler but equally important concern: professional competence. Competence in 2026 is not just researching, drafting, or arguing before courts and tribunals; it requires technological competence as well. A lawyer should be able to use the right research tools to present the client’s case effectively. Ignoring AI entirely could place a lawyer at a professional disadvantage. In the age of GenAI, clients expect counsel to know how to use AI efficiently and responsibly.
Another dimension is how client expectations are shifting. Many clients now explicitly ask whether counsel is using AI because it directly affects timelines and costs. A lawyer who refuses to adapt or to integrate the benefits of AI into their practice may inadvertently be doing the client a disservice because, at the end of the day, clients want competent, efficient, and cost-effective representation. Lawyers must be prepared to integrate AI appropriately.
AI, however, is not a replacement for lawyers. Any output AI produces must be reviewed, verified, and polished, just like a junior associate’s draft. In fact, the junior-associate comparison is a useful metaphor for understanding lawyers’ professional responsibility in using GenAI. If a partner receives a memo that is poorly researched or contains errors, they do not blame the existence of junior associates; they blame the supervision. That is exactly how AI should be viewed. The tool is not the problem; failing to check the tool’s output is.
Domestic guidance illustrates this shift. For example, commentary to American Bar Association Model Rule 1.1[3] clarifies that maintaining competence includes understanding the benefits and risks associated with relevant technology. While such guidance is jurisdiction-specific, the underlying principle resonates in international arbitration, where counsel and arbitrators routinely rely on sophisticated technological tools in high-stakes proceedings.
In the arbitral context, competence today requires more than technical proficiency in law and procedure. It also entails the ability to critically assess AI-generated outputs, recognize the risks of hallucinated authorities or biased analysis, safeguard confidential data, and ensure that human judgment remains central to decision making.
Uncritical reliance on GenAI does not merely raise technical concerns; it may undermine effective advocacy and, in extreme cases, compromise procedural fairness. Learning to use GenAI for the benefit of the client is therefore part of professional competence, especially in an increasingly technologically advanced age.
The difficulty, however, is that arbitration lacks a shared transnational understanding of what technological competence requires. While international bodies such as the International Bar Association have begun addressing artificial intelligence in dispute resolution, most visibly in the mediation context, equivalent arbitration specific guidance remains limited. This absence leaves practitioners to navigate AI use largely on an ad hoc basis, with uneven standards and uncertain accountability.
A Regulatory Vacuum in International Arbitration
Unlike domestic litigation, international arbitration does not operate within a single regulatory framework. There is no global arbitration regulator, no unified ethical code governing arbitrators, and no harmonized standard for procedural conduct beyond institutional rules. This structural reality complicates any effort to regulate GenAI.
At present, there is little consensus on fundamental questions such as:
- whether parties or arbitrators should be required to disclose the use of AI tools;
- what constitutes impermissible reliance on AI in reasoning or decision making;
- how confidentiality obligations apply when AI tools process sensitive data across borders; and
- who bears responsibility when AI-generated errors influence outcomes.
This regulatory silence is particularly concerning given arbitration’s finality. Arbitral awards are subject to very narrow and limited judicial review, and procedural errors introduced through improper or opaque AI use may not be visible on the face of the award. In a transnational system where enforcement mechanisms are already narrow, reliance on unregulated AI processes risks undermining due process without effective remedies. Although some institutions have begun issuing guidance on the use of AI in arbitration, such as the Silicon Valley Arbitration & Mediation Center’s Guidelines on the Use of Artificial Intelligence in Arbitration, these initiatives remain largely soft-law efforts and do not yet amount to a harmonized regulatory framework.
Confidentiality, Data Governance, and Cross-Border Risk
Confidentiality is often described as arbitration’s defining feature. Using third-party tools to handle client information is not new; lawyers have long done so with fax machines, printers, and similar technology. Indeed, even Microsoft Word’s spell checker is a form of AI-assisted tool. Yet GenAI complicates traditional understandings of confidentiality, especially in cross-border contexts. Many AI tools rely on cloud-based infrastructure, data processing across jurisdictions, and machine-learning models whose data retention and use practices may be opaque.
Some law firms have started using AI tools that do not expose client information to the outside world. The information and related research are siloed within the bounds of the law firm and its staff, helping to maintain the confidentiality of client-sensitive information.
For international arbitration, the gap between the technical complexity of AI systems and the limited technological training of many arbitration practitioners raises serious governance concerns. Confidential submissions, witness statements, and expert reports may simultaneously implicate multiple data protection regimes. Uncertainty about where data is stored, how it is processed, and who ultimately has access creates legal and ethical risks that existing arbitration rules were not designed to address.
According to a recent international arbitration survey, concerns about confidentiality and AI-related errors remain high among practitioners.[4] In the absence of clear guidance, responsibility falls largely on counsel and arbitrators to assess these risks. Yet many practitioners lack the technical expertise or institutional support necessary to do so effectively. This asymmetry underscores the need for policy-level responses rather than purely individualized decision making.
Human Judgment, Due Process, and Legitimacy
Perhaps the most fundamental concern raised by GenAI in arbitration relates to decision making authority. Arbitration derives its legitimacy from party consent and the expectation that disputes will be resolved by arbitrators exercising independent judgment. While AI can assist with organization and analysis, delegating substantive reasoning to automated systems risks eroding core procedural guarantees.
This concern is amplified in international arbitration because awards are final and enforceable across jurisdictions. If arbitrators rely excessively on AI-generated reasoning without understanding its limitations, parties may question whether the tribunal properly exercised its mandate. After all, the duty to decide is perhaps the primordial duty of arbitrators in whom parties repose trust because of their impartiality and independence. Existing enforcement frameworks provide little guidance on how courts should assess the legitimacy of AI-assisted awards.
Here, concerns about human judgment are inseparable from questions of competence. As AI tools increasingly assist with legal analysis, effective adjudication depends not only on discretion but also on the arbitrator’s ability to understand and critically evaluate AI-generated outputs. Without such technological competence, AI risks influencing or substituting human reasoning rather than supporting it.
Towards Transnational AI Governance in Arbitration
Given arbitration’s transnational nature, purely domestic regulation is unlikely to provide a complete solution. What is needed instead is a coordinated, principle-based approach that preserves arbitration’s flexibility while safeguarding fundamental procedural values.
Possible regulatory responses include:
- institutional guidelines clarifying permissible and impermissible uses of GenAI;
- disclosure norms that balance transparency with efficiency;
- data governance standards tailored to confidential arbitral proceedings; and
- training initiatives that treat AI literacy as an element of professional competence for arbitrators and counsel.
Soft law instruments, which have long been a feature of international arbitration, may offer a pragmatic starting point. Over time, consistent institutional practice could help establish shared expectations across jurisdictions without undermining party autonomy.
Conclusion: Efficiency Must Be Matched with Governance
GenAI has undeniable potential to reshape international arbitration by offering efficiency gains that align with the system’s core objectives. Yet technology alone cannot resolve the deeper governance questions that its use raises. In a dispute resolution system built on trust, consent, and legitimacy, the absence of clear regulatory guardrails is itself a risk.
The value of GenAI in arbitration ultimately depends not on what the technology can do, but on how it is used and supervised. Without thoughtful regulation and without treating AI literacy as a component of professional competence, GenAI risks undermining the very principles arbitration seeks to protect. With appropriate governance, however, it can become a tool that strengthens, rather than destabilizes, the future of international dispute resolution.
[1] Generative AI Could Radically Alter the Practice of Law, The Economist (June 6, 2023).
[2] White & Case LLP & School of International Arbitration, Queen Mary Univ. of London, 2025 International Arbitration Survey: Arbitration and AI (2025).
[3] ABA Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2020).
[4] Id.