A Brief Look at the Implications of Artificial Intelligence for ESG

August 7, 2024 by Guest Post from Corey Mirman (L'24)

In this student guest post, part 2 of his examination of ESG principles, Corey Mirman (L'24) considers the potential impact of AI technologies on both ESG investing and the ESG movement overall.

Artificial intelligence (AI) has been hailed as a tool that could make humans more productive and happier because of its potential to complete mundane tasks in a matter of seconds, and, at the same time, feared as something that could spell the end of the human race.[1] Either way, the rapidly growing use of AI creates profound legal, moral, and practical implications for the environmental, social, and governance (ESG) movement. This paper will discuss some of these implications for each of the E, S, and G in ESG, as well as the potential impact of AI on ESG investing.

 

Environmental Implications

The E in ESG typically refers to “[c]limate change, resource efficiency, pollution and waste management, biodiversity, and energy consumption.”[2] AI has the potential to create solutions for managing these issues—it can analyze electricity demand to “optimize the operation of renewable energy sources like wind and solar power” and “provide more accurate and detailed climate predictions.”[3] But at the same time, AI applications consume massive amounts of electricity—by 2027, global AI servers could consume about as much electricity as “Argentina, the Netherlands and Sweden each use in a year.”[4] In fact, many markets across the U.S. are at risk of running out of power.[5] “Northern Virginia [the data center capital of the world] needs the equivalent of several large nuclear power plants to serve all the new data centers planned and under construction. Texas, where electricity shortages are already routine on hot summer days, faces the same dilemma.”[6] Cooling AI servers also requires significant water consumption.[7] As a result, it is unlikely that the environmental benefits of AI will offset its electricity and water demands.[8] Even Sam Altman of OpenAI has said that a technological breakthrough is needed to keep up with AI’s energy use.[9]

AI’s massive energy consumption also raises questions about how big companies that have made zero-emissions pledges will meet their goals while still expanding their AI products and services.[10] These companies will have to find ways to make their data centers more energy efficient, accept that they will not meet their net-zero goals, or run the risk of being accused of greenwashing. But while making data centers more efficient will be expensive, and thus impact shareholder returns, doing so is not a purely “woke” ESG activity. In fact, given the limited supply of power, companies likely have a fiduciary duty to make their AI operations more energy efficient. “Settled law permits corporations and institutional investors to take into account ESG factors that are rationally related to the profitability of their businesses and investments, and if those factors are obviously relevant as a matter of business and investment risk, may require consideration of those factors as a matter of fiduciary duty.”[11] A country running out of power, threatening the continued innovation of AI businesses, poses a clear business risk, and these companies must act to make their operations more energy efficient.

Finally, while the anti-ESG movement has pushed back on many climate-change-related regulations, slowing down the explosive data center growth seems to have at least some bipartisan support—“[t]he top leaders of Georgia’s House and Senate, both Republicans, are championing a pause in data center incentives,” for example.[12]

 

Social Implications

The S in ESG typically refers to “[l]abor practices, human rights, community engagement, diversity and inclusion, and employee well-being.”[13] A major social implication of AI is its potential to eliminate jobs.[14] Business Roundtable’s 2019 Statement on the Purpose of a Corporation stated that its CEO signatories were committed to “[i]nvesting in our employees. This starts with compensating them fairly and providing important benefits. It also includes supporting them through training and education that help develop new skills for a rapidly changing world. We foster diversity and inclusion, dignity and respect.” Do the companies that signed this statement (and other companies) have an obligation to protect jobs from the threat of technology like AI? Or do they have an obligation to train their employees to use AI (i.e., develop skills for a rapidly changing world)? Or is their only obligation to maximize shareholder returns, in which case is it their duty to use AI to eliminate jobs, cut costs, and improve profit margins?

“[U]nder Delaware law, if the board believes that action benefiting stakeholders like workers or creditors has a rational relationship to the best interests of the stockholders, the business judgment rule protects the board from stockholders seeking to overturn their judgment in litigation.”[15] One could argue that using AI to eliminate a significant number of jobs poses a reputational risk damaging enough to warrant keeping human employees. Still, companies likely have no obligation to maintain human employment unless union labor contracts require it—companies lay off employees all the time despite the reputational risks. Ultimately, the efficiencies gained from increasing AI adoption will likely outweigh any reputational risks, but companies that are serious about ESG efforts should put in place policies regarding job automation. They must also ensure compliance with federal and state Worker Adjustment and Retraining Notification (WARN) laws and ensure that layoffs do not “disproportionately impact a protected group and lack sufficient business justification.”[16]

Employers using AI tools to evaluate prospective employees must also ensure that they are in compliance with existing employment laws such as Title VII of the Civil Rights Act and the Americans with Disabilities Act, as well as recent state and local AI-specific regulations like “New York City Local Law 144, which sets forth limitations and requirements for employers using automated employment decision tools (AEDTs) to screen candidates for hire or promotion.”[17]

 

Governance Implications

The G in ESG typically refers to “[b]oard structure, executive compensation, transparency, anti-corruption measures, and risk management.”[18] Some AI scholars have posited the “paperclip maximizer” thought experiment, which suggests that if humans tell an AI application to make as many paperclips as possible, then without human intervention or sufficient controls, the application will “end up taking over every natural resource and wiping out humanity just to achieve its goal of creating more and more paperclips.”[19] While this seems fantastical (and impractical given the electricity issues noted previously), it does raise important questions about the responsibilities AI product creators and end-users have to use AI ethically.

Under Delaware corporate law (and that of most other states), corporations have “the ability to adopt charter provisions exculpating directors from liability for even gross negligence,”[20] which empowers directors to take risks with AI. Still, the “affirmative obligation [of the duty of loyalty] has at its core the requirement that directors and officers act to promote the best interests of the corporation and its sustained profitability, within the limits of their legal discretion and their sense of ethics.”[21] Directors are obligated to ensure the corporation operates within the bounds of the law, and “[l]aw compliance…comes ahead of profit-seeking as a matter of the corporation’s mission.”[22] Delaware’s Caremark decision also obligates “fiduciaries to undertake active efforts to promote compliance with laws and regulations critical to the operations of the company.”[23]

In the U.S., governments have been slow to adopt AI regulations, but because AI will permeate so much of society, existing laws, such as those governing discrimination and product liability (among many others), apply to the use of AI. Thus, directors are obligated to ensure their companies’ use of AI operates within the bounds of existing law. Even if the paperclip thought experiment were a practical reality, directors would be obligated to ensure that their AI does not take over every natural resource and wipe out humanity, as this would violate numerous laws.

That said, ensuring their companies operate within the bounds of the law is the bare minimum for directors, and the business judgment rule gives directors the ability to go further in ensuring ethical use of AI. “The business judgment rule gives [directors] substantial room to create a corporate culture with higher standards of integrity, fairness, and ethics than the law demands if they believe that will increase the corporation’s value, enhance its reputation, or otherwise rationally advance the best interests of the corporation and its stockholders.”[24] Directors can and should ensure that their AI systems operate in an ethical manner, as failing to do so could hurt the company’s value and reputation and harm the interests of the corporation and its stockholders. Indeed, a number of tech leaders, including Elon Musk, who (along with their shareholders) could presumably benefit from the unfettered advancement of AI, have called for a moratorium on the development of powerful AI systems.[25] Unfortunately, AI’s profit-making potential seems to have made this moratorium unlikely to take hold, but at least companies have the legal cover to slow their AI adoption and development if they choose. At the very least, companies should ensure they have policies and internal controls in place to govern their use of AI.

If Congress is able to pass legislation regulating AI, it might consider a requirement similar to that of the Sarbanes-Oxley Act, which requires public companies to include in their annual reports a section on internal controls for financial reporting and an evaluation of how well those controls are working.[26] Senior corporate officers must also take personal responsibility for ensuring their financial statements are accurate and acknowledge their personal responsibility for their financial reporting internal controls.[27] A similar law regarding AI could require companies to disclose how and for what purposes they use AI, what internal controls they have in place, and how effective those controls are, and could require management to take personal responsibility for their companies’ AI use. This would go a long way toward ensuring that AI is used ethically and responsibly.

 

ESG Investing Implications

Perhaps one of the biggest impacts AI will have on ESG in the near term is in simplifying the disclosure and benchmarking processes. The European Union (EU) “requires all large companies and all listed companies…to disclose information on what they see as the risks and opportunities arising from social and environmental issues, and on the impact of their activities on people and the environment” through its Corporate Sustainability Reporting Directive, which is intended to help investors and other stakeholders evaluate companies in terms of their financial risk as it relates to climate change.[28] This directive also applies to U.S. companies doing business in Europe.[29] The U.S. Securities and Exchange Commission recently promulgated “rules to enhance and standardize climate-related disclosures by public companies and in public offerings,”[30] and there are numerous voluntary ESG reporting frameworks, such as the MSCI ratings, the ISS E&S QualityScore, the Dow Jones Sustainability Indices, and the Global Reporting Initiative, to name a few.[31]

Yet compiling the data—such as “vehicle mileage or weight of transported goods from third-party suppliers”[32]—needed for these disclosures can be challenging for companies, especially smaller companies with fewer resources.[33] Notably, research has shown that “[l]arger companies and those who interacted ‘frequently’, more than 10 times, with MSCI, were both more likely to have a high ESG rating.”[34] AI has the potential to make it easier for companies to collect data from various company locations and internal documents. Investors can also use AI to scan company filings, such as annual reports, to evaluate companies’ ESG risks and opportunities.

Still, because AI algorithms can be black boxes,[35] some argue that the use of AI to evaluate ESG risks “could make it more difficult for regulators and retail investors to make sense of already opaque methodologies.”[36] Others have argued that “no set of ESG metrics can capture the totality—or even majority—of a company’s social impact” and that stakeholders need narratives and context to understand what ESG metrics mean in terms of a company’s ESG impact.[37] Perhaps in the future AI will be able to solve this problem, but in its current state, AI is unlikely to be able to provide investors, regulators, and other stakeholders with a company’s full ESG picture. Still, AI’s potential to make it easier for companies to disclose ESG metrics will likely improve society’s ESG understanding, benchmarking processes, and general ESG accountability.
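To make the filing-scanning idea concrete, below is a deliberately minimal Python sketch of the simplest possible approach: counting illustrative ESG-related keywords in a document’s text. The categories and keyword lists are hypothetical assumptions for demonstration only; real ESG-analysis tools rely on far more sophisticated language models, whose opaque inner workings are precisely the “black box” concern raised above.

# A minimal, hypothetical sketch of AI-assisted ESG screening at its simplest:
# counting illustrative ESG-related keywords in a filing's text. The categories
# and keyword lists below are assumptions for demonstration, not any standard
# taxonomy; production tools use far more sophisticated language models.
from collections import Counter
import re

ESG_KEYWORDS = {
    "environmental": {"emissions", "climate", "renewable", "waste", "water"},
    "social": {"diversity", "labor", "safety", "community", "training"},
    "governance": {"board", "audit", "compliance", "ethics", "oversight"},
}

def esg_term_counts(filing_text):
    """Count occurrences of each illustrative ESG keyword in the text."""
    words = Counter(re.findall(r"[a-z]+", filing_text.lower()))
    return {
        category: {kw: words[kw] for kw in keywords if words[kw]}
        for category, keywords in ESG_KEYWORDS.items()
    }

if __name__ == "__main__":
    sample = ("The board expanded its climate oversight program while the "
              "company reduced emissions and improved workplace safety training.")
    for category, hits in esg_term_counts(sample).items():
        print(category, hits)

Even this toy version illustrates the gap critics identify: a keyword count says nothing about context or materiality, which is why narratives and qualitative disclosure remain essential to understanding a company’s ESG impact.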

 

Conclusion

The issues discussed in this paper are only some of the ways that AI will impact ESG.[38] AI could change society as we know it, and companies have a legal and moral obligation to ensure it is used responsibly and benefits not only their stockholders, but also their customers, employees, and communities. Ultimately, as Stephen Hawking said, “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know.”[39] Corporations must act to ensure it is the biggest event, not the worst.


[1] How Could AI Destroy Humanity? – The New York Times (nytimes.com).

[2] Guide to ESG Reporting Frameworks & Standards | Convene ESG (azeusconvene.com).

[3] How can artificial intelligence help tackle climate change? (greenly.earth).

[4] A.I. Could Soon Need as Much Electricity as an Entire Country – The New York Times (nytimes.com).

[5] Amid record high energy demand, America is running out of electricity – The Washington Post.

[6] Id.

[7] AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water (forbes.com).

[8] The Obscene Energy Demands of A.I. | The New Yorker.

[9] Id.

[10] Amid record high energy demand, America is running out of electricity – The Washington Post.

[11] Ignorance is Strength: Climate Change, Corporate Governance, Politics, and the English Language (harvard.edu).

[12] Amid record high energy demand, America is running out of electricity – The Washington Post.

[13] Guide to ESG Reporting Frameworks & Standards | Convene ESG (azeusconvene.com).

[14] “In 2022, 19% of American workers were in jobs that are the most exposed to AI, in which the most important activities may be either replaced or assisted by AI.” Which US workers are exposed to AI in their jobs? | Pew Research Center.

[15] Duty and Diversity by Chris Brummer & Leo E. Strine, Jr. (SSRN) at 78.

[16] Is AI coming for our jobs? And does it have to WARN us? | Reuters.

[17] AI and the Workplace: Employment Considerations | Insights | Skadden, Arps, Slate, Meagher & Flom LLP.

[18] Guide to ESG Reporting Frameworks & Standards | Convene ESG (azeusconvene.com).

[19] What Is the Paperclip Maximizer Problem and How Does It Relate to AI? (makeuseof.com).

[20] Duty and Diversity by Chris Brummer & Leo E. Strine, Jr. (SSRN) at 68.

[21] Id. at 70.

[22] Id. (internal citation omitted).

[23] Id. at 7.

[24] Id. at 77.

[25] Elon Musk and Others Call for Pause on A.I., Citing ‘Risks to Society’ – The New York Times (nytimes.com).

[26] What Are SOX Controls? Best Practices for SOX Compliance | AuditBoard.

[27] Id.

[28] Corporate sustainability reporting – European Commission (europa.eu).

[29] A Primer on the EU’s ESG Regulations | Corporate Finance Institute.

[30] SEC.gov | SEC Adopts Rules to Enhance and Standardize Climate-Related Disclosures for Investors.

[31] Guide to ESG Reporting Frameworks & Standards | Convene ESG (azeusconvene.com).

[32] Decoding ESG Reporting: Navigating The Puzzle With AI Assistance (forbes.com).

[33] Id.

[34] ESG Ratings: Whose Interests Do They Serve? – ProQuest, https://www.proquest.com/trade-journals/esg-ratings-whose-interests-do-they-serve/docview/2871725896/se-2.

[35] “AI black boxes refer to AI systems with internal workings that are invisible to the user. You can feed them input and get output, but you cannot examine the system’s code or the logic that produced the output.” Why We Need to See Inside AI’s Black Box | Scientific American.

[36] Id.

[37] No Stakeholder Left Behind: The Dangers of ESG Metrics | by Alex Edmans | Medium.

[38] Here is a more comprehensive list of the ways AI could benefit or harm ESG-related issues. Potential Opportunities and Risks AI Poses for ESG Performance | Barnes & Thornburg (btlaw.com).

[39] Stephen Hawking says AI could be ‘worst event’ in civilization (cnbc.com).