Published Date: 10/10/2025
Global financial regulators are planning to ramp up their monitoring of artificial intelligence (AI) risks as banks and other financial institutions increasingly adopt these technologies. While banks are generally optimistic about the potential productivity gains from AI, regulators around the world have expressed concerns about its impact on financial stability.
The Financial Stability Board (FSB), the risk watchdog of the G20, outlined its next steps for authorities in a recent report. The report warns that if too many institutions rely on the same AI models and specialized hardware, it could lead to herd-like behavior. “This heavy reliance can create vulnerabilities if there are few alternatives available,” the board stated.
A separate study by the Bank for International Settlements (BIS), an umbrella group for central banks, emphasized the urgent need for central banks, financial regulators, and supervisory authorities to enhance their capabilities in relation to AI. “There is a need to upgrade their capabilities both as informed observers of the effects of technological advancements and as users of the technology itself,” the BIS noted.
The United States, China, and other countries are racing to lead the development of machine-learning technologies. The FSB report acknowledges AI's potential benefits but warns that it could amplify market stress, though it notes there is so far little empirical evidence that AI-driven market correlations significantly affect market outcomes.
Financial institutions are also at risk from AI-related cyberattacks and AI-driven fraud. The FSB report highlights these concerns and calls for increased vigilance.
Some regions have already taken the first steps towards regulating AI. For instance, the European Union’s Digital Operational Resilience Act (DORA) took effect in January 2025, marking a significant step in the regulatory landscape. The act aims to strengthen the financial sector’s resilience against cyber threats and technological disruptions.
In summary, while AI holds great promise for the financial industry, it also poses significant risks that need to be carefully managed. Regulators are stepping up their efforts to ensure that the benefits of AI can be realized without compromising financial stability.
Q: What is the Financial Stability Board (FSB)?
A: The Financial Stability Board (FSB) is the risk watchdog of the G20, tasked with monitoring and making recommendations about the global financial system to ensure its stability.
Q: Why are financial regulators concerned about AI?
A: Financial regulators are concerned about AI because it could lead to herd-like behavior if too many institutions rely on the same AI models, potentially creating vulnerabilities and amplifying market stress.
Q: What is the Digital Operational Resilience Act (DORA)?
A: The Digital Operational Resilience Act (DORA) is a European Union regulation that took effect in January 2025. It aims to strengthen the financial sector’s resilience against cyber threats and technological disruptions.
Q: What risks do financial institutions face with AI?
A: Financial institutions face risks such as AI-related cyberattacks and AI-driven fraud, which can compromise their security and operations.
Q: How are countries like the US and China involved in AI development?
A: The United States, China, and other countries are in a race to lead the development of revolutionary machine-learning technologies, aiming to leverage AI for various applications including financial services.