
Earlier this year, the New York Fed and Columbia University’s School of International and Public Affairs (SIPA) co-organized the sixth annual State-of-the-Field Conference on Cyber Risk to Financial Stability. Since 2017, this collaboration between the New York Fed and SIPA has sought to address three key questions: What are we learning about cyber risk to financial stability? What are we doing to improve resilience and stability? And what’s next?
This article features key points of discussion from the 2025 conference, which focused on the rising risks from artificial intelligence (AI), third-party risk, and the broader technological infrastructure of the financial system.
What Are We Learning?
Frederic Veron, the New York Fed’s Chief Information Officer, delivered opening remarks. He emphasized that growing technological interconnectivity heightens systemic risk, where the failure of one system could jeopardize an entire network. Using the CrowdStrike incident as an example, he noted that increasing shared reliance on third- and nth-party providers raises concentration risk, which amplifies potential vulnerabilities.
In the first panel, participants discussed ongoing research on AI risk, third-party risk, and the role of software companies and software vulnerabilities in cyber risk.
Participants said that while software vulnerabilities are a first-order driver of cyber risk, advancements in AI could shift dynamics in favor of defenders. Indeed, one study cited showed that AI could help identify zero-day vulnerabilities—or security issues that are unknown to developers—in widely used open-source packages. Nonetheless, because AI adoption could pose systemic risk to the financial system, human oversight remains essential, panelists said, adding that AI must be trained on quality, secure data.
Next, Federal Reserve Governor Michael S. Barr delivered keynote remarks and took part in a moderated discussion with Patricia Mosser of Columbia SIPA. Governor Barr discussed critical issues at the intersection of AI, cybersecurity, and financial stability. He identified AI-enabled fraud and deepfakes as emerging threats to the financial system and emphasized that banks are responsible for detecting fraud on either side of a transaction. While acknowledging AI’s potential to level the playing field for smaller institutions, he noted that the overall impact of the technology remains uncertain. He underscored the importance of layered defenses, stressing that a system is only as secure as its weakest link, and that banks remain accountable for the technologies they adopt.

At the same time, Governor Barr pointed to the transformative potential of generative AI to enhance productivity, drive economic growth, and support problem-solving across sectors. Regulators themselves must adopt advanced technologies to better understand and manage evolving risks, he said, adding that generative AI could support real-time assessments of cyber risk in the financial system.
What Are We Doing?
The second panel featured a discussion on current industry and regulatory responses to cyber risks. The use of AI tools holds significant promise for financial services, but there remains a critical need for human oversight, strong risk management, and caution around third-party use, panelists said.
While many firms are applying core risk management principles to AI and its emerging capabilities, use of the technology by third parties and in supply chains could amplify existing risks. Another security challenge is the cloud environment, which requires a multidimensional approach to protection and is complicated by mixed adoption by firms. And a cautious approach may be appropriate for critical infrastructure, because AI could amplify geopolitical risk factors across global markets, panelists said. Meanwhile, one positive use case for AI technology is internal products that are trained on data owned by financial institutions.
What’s Next? (AI and Third-Party Risk Management)
In the final session of the conference, panelists discussed AI’s impact on trust, data security, and third-party risks and highlighted the need for transparency and robust frameworks to manage these challenges. They explored how AI is reshaping industries, particularly finance, weighing both the opportunities the technology presents and the challenges it brings, such as the risk of bias, misinformation, and cybersecurity threats.
One participant said organizations must identify and quantify risks related to AI—whether regulatory, internal, financial, or cybersecurity risks—and have the right controls in place, which could include human involvement to ensure accuracy and accountability. Another said it’s critical to understand the impact of trust when utilizing AI tools, given the growing risks associated with misinformation, deepfakes, and the erosion of the social fabric. He added that financial firms could lead the charge in maintaining trust, as it is central to their business.
From a technologist’s perspective, AI deployment has a low barrier to entry, making it accessible, but the broader effect of AI-driven threats on society could be a significant loss of trust. In the area of cybersecurity, AI can compromise businesses by exacerbating phishing attacks, and regulators are focused on potential risks the technology poses to financial stability.
We would like to acknowledge Jason Healey, Erva Kan, Christine Elizabeth McNeill, and Patricia Mosser—our Columbia SIPA counterparts who contributed to this article.

Michael Junho Lee is a financial research economist in Money and Payments Studies in the New York Fed’s Research and Statistics Group.
Rinku Sinha is the program director for the Cyber Risk and Policy Department in the New York Fed’s Supervision Group.
The views expressed in this article are those of the contributing authors and do not necessarily reflect the position of the New York Fed or the Federal Reserve System.