Regulatory Spotlight on Finance: SEC and NYDFS Actions on Cybersecurity and AI Risks

January 27, 2025 by Duruhan Aydinli

Financial Markets and Cybersecurity


C. Duruhan Aydinli - Research Assistant

Artificial Intelligence (“AI”) related cybersecurity risks and vulnerabilities are growing along with technological advancements. Recent steps taken by the New York State Department of Financial Services (“NYDFS”) and the U.S. Securities and Exchange Commission (“SEC”) reflect increasing regulatory attention to cybersecurity issues. In mid-October 2024, the NYDFS released detailed guidance for financial institutions on cybersecurity risks arising from AI, including strategies for mitigating and managing cyber risks unique to AI[1]. In late October 2024, the SEC signaled its intent to hold businesses responsible for their cybersecurity practices by announcing charges against four corporations for failing to report serious cybersecurity vulnerabilities[2].

This blog article explores how these technological changes and recent regulatory developments affect businesses in the financial markets, which are under growing pressure to accurately and transparently manage cyber threats.

The SEC’s Cybersecurity-Centered Enforcement Actions

In its recent enforcement actions, the SEC charged four current and former public companies, namely Unisys, Avaya, Check Point, and Mimecast, all of which were affected by the 2020 compromise of SolarWinds, a software provider that each of them relied on. The companies had unknowingly installed an update published by SolarWinds that, in fact, contained malware compromising their security systems. The SEC alleged that these four companies failed to disclose, or materially misled investors about, these substantial cybersecurity vulnerabilities, in violation of the Securities Act of 1933 and the Securities Exchange Act of 1934. While these charges signal elevated scrutiny of companies’ cyber-risk management and disclosure policies, they also underline the importance of adequately disclosing cybersecurity matters.

Key takeaways from the SEC’s action are as follows:

  • The Importance of Cybersecurity Transparency: Companies must provide regulators, enforcers, investors, and shareholders with up-to-date, accurate information on significant risks, such as those brought on by AI and sophisticated digital systems. Transparency enables effective enforcement and shields shareholders from financial harm caused by cybersecurity lapses. Accurate and transparent risk disclosures are therefore necessary, aligning with the SEC’s mission to protect market openness and integrity.
  • Accountability for Cybersecurity Failures: Businesses will be held responsible when they neglect to recognize and fix security flaws. As technology becomes more integrated into daily operations, businesses, particularly those involved in the financial markets, are required to have robust frameworks in place to manage the risks associated with AI.

The NYDFS’ Industry Guidance on Cybersecurity and AI Risks

The NYDFS’ guidance offers a more prescriptive approach, detailing strategies for managing AI-related cybersecurity threats at regulated financial institutions. According to the NYDFS, the increased reliance on AI has substantially impacted the cybersecurity landscape, as this technology gives cybercriminals new opportunities to commit crimes at a greater scale and speed. The Industry Guidance therefore addresses the unique vulnerabilities AI can introduce, such as susceptibility to adversarial attacks, data poisoning, and model manipulation. It emphasizes the need for finance companies to integrate AI-specific risk assessments, strengthen third-party vendor oversight, and establish dedicated governance frameworks for AI.

Although the Industry Guidance does not impose any new requirements beyond the obligations in the NYDFS’ cybersecurity regulation codified at 23 NYCRR Part 500[3], it sets out a framework for assessing and addressing the cybersecurity risks arising from AI. Based on the takeaways from the Industry Guidance, here are some critical areas where finance companies should strengthen their cybersecurity practices:

  • AI-Centric Risk Assessments and Measures: Finance companies should embed AI-related risk considerations within their overarching cybersecurity frameworks. This may include proactively identifying vulnerabilities unique to AI, such as susceptibility to adversarial attacks, data manipulation, and model output distortions, and then establishing customized controls to address these AI-specific risks, which demand a more refined approach.
  • Prioritized Risk Management and Elevated Governance: The NYDFS recommends establishing structured governance for AI risk management. Finance companies are encouraged to train their personnel in AI oversight, establish solid controls, and continuously monitor evolving cybersecurity threats. Third-party systems and vendors are also crucial to a secure digital ecosystem, but weaknesses in those systems can expose companies to substantial risks. Distinct and thorough vendor due diligence, along with attention to privacy, security, and confidentiality, will therefore play an important role.
  • Adherence to Established Cybersecurity Standards: The NYDFS also encourages alignment with widely accepted frameworks to set a robust foundation for AI-specific cybersecurity practices. This alignment extends beyond purely technical measures. For example, senior teams should understand concepts such as deepfakes, be prepared for social engineering attempts, and know what to do when faced with unusual requests. Such adherence fosters resilience by incorporating proven standards into AI risk management.

One Last Step: Biden’s Executive Order on Cybersecurity

On January 16, 2025, former President Biden issued an Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity[4]. While the financial sector was not directly within its scope, the order underlined the importance of enhancing cybersecurity across the federal structure, cooperating with the private sector, and ensuring a safe and secure cyber infrastructure.

Although further orders and actions can be expected from the Trump Administration, Biden’s Executive Order demonstrated the U.S. government’s situational awareness of cybersecurity matters. The Executive Order addressed transparency and safety requirements for software systems provided by third-party providers (this, in fact, resonates with the SolarWinds software compromise and its adverse effects on the sector, as discussed above).

The Executive Order further underlined long-standing priorities, such as the necessity of stronger authentication and digital verification measures, robust detection mechanisms, secure and encrypted communications, the use of AI in vulnerability detection, and overall federal-level readiness for advanced cyber threats.

Whatever becomes of the Executive Order under the Trump Administration, it set out a framework for all stakeholders, both service providers and their customers as well as federal agencies, on how the future might look. Without question, the finance sector and its cybersecurity obligations, especially in the age of AI, will remain among the primary subjects.

Conclusion

The recent actions by the SEC and NYDFS underscore an urgent shift in regulatory expectations for cybersecurity, especially as AI-driven technologies become more integrated into financial operations. Financial businesses are not only encouraged but now expected to adopt comprehensive strategies that address the unique cybersecurity challenges posed by AI. By embedding tech-focused risk assessments, enhancing vendor management protocols, and fostering a culture of proactive governance, companies can significantly bolster their resilience against emerging threats.

Early alignment with these regulatory guidelines offers more than compliance; it builds trust, safeguards client data, and reinforces market stability in an increasingly digital financial ecosystem. As AI continues to reshape the industry, those businesses prioritizing cybersecurity as a foundational aspect of their digital transformation will be better positioned to navigate this evolving landscape and confidently meet both investor and regulatory expectations.

[1] https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks

[2] https://www.sec.gov/newsroom/press-releases/2024-174

[3] https://www.dfs.ny.gov/system/files/documents/2023/03/23NYCRR500_0.pdf

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2025/01/16/executive-order-on-strengthening-and-promoting-innovation-in-the-nations-cybersecurity/ (Link not available as of the inauguration day of the Trump Administration, January 20th, 2025)