The financial sector has long been a major target for cyberattacks, accounting for nearly 20% of reported incidents over the past two decades and suffering direct losses of $12 billion, according to the IMF. As the use of AI grows, the threat of cyberattacks is expected to intensify, underlining the urgent need for stronger cybersecurity measures.
This is where the Digital Operational Resilience Act (DORA) comes into play. The regulation is designed to help the financial sector protect itself against cyber threats, with a strong focus on the responsible use of AI. With less than six months to go before the deadline, financial companies in Europe need to quickly prepare for DORA’s implementation to ensure their digital resilience and compliance.
The European Union has introduced various regulations to support the financial sector, starting with the General Data Protection Regulation (GDPR), which improved data security. The Digital Markets Act (DMA) and other measures followed, aiming to create fairer, more contestable digital markets for businesses. DORA’s goal is no different: to help the financial sector navigate the digital landscape securely.
As the clock ticks, it’s essential for businesses using AI in financial services to understand the threats posed by AI’s rapid growth and the role DORA plays in addressing these risks. Taking the time to prepare before the regulation is enforced is crucial.
How Serious Is the AI Threat?
While AI has the potential for positive impact, it can also be used maliciously. AI-powered cyberattacks are becoming a serious concern, with 74% of global security leaders citing AI-driven threats as a significant challenge. Financial institutions are increasingly encountering cybercriminals using AI in various ways, including exploiting software vulnerabilities and launching sophisticated phishing attacks. Even if not directly targeted, financial services companies must be vigilant about the potential for AI-driven market manipulation.
Many financial institutions are aware of the risks but have not yet adequately prepared their systems to counter these threats. This may be because they see AI-related risks as a future concern rather than an immediate one. For example, in early 2024, only 28% of auditors considered AI a major threat, with many expecting it to become a significant concern only in two to three years.
However, the evidence is already clear: a 2023 study found that 85% of security professionals linked the rise in cyberattacks to the use of generative AI. The UK’s National Cyber Security Centre (NCSC) has also warned that cybercriminals are using AI to lower the barriers for launching high-impact attacks like ransomware.
Understanding DORA’s Approach to Responsible AI
Recognizing the role AI plays in digital transformation, DORA emphasizes the responsible and secure use of AI to build trust, particularly in the financial sector. The regulation directly addresses AI risks through a comprehensive framework for responsible AI.
One key element of this framework is algorithmic risk management. Financial institutions must establish systems to identify, assess, and mitigate risks such as bias and lack of transparency in AI models. This can be done by diversifying training datasets and conducting regular fairness checks throughout the development process to reduce bias.
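To make that kind of fairness check concrete, here is a minimal sketch of a recurring check a model risk team might run: a demographic parity comparison of positive-prediction rates across groups. The record structure, the "group" attribute, and the 0.2 tolerance are all hypothetical and would need to reflect an institution's own data and risk appetite, not anything prescribed by DORA.

```python
# Minimal sketch: demographic parity gap across groups for a binary classifier.
# Field names ("group", "prediction") and the tolerance are illustrative only.

def demographic_parity_gap(records: list[dict]) -> float:
    """Return the largest difference in positive-prediction rates between groups."""
    rates = {}
    for group in {r["group"] for r in records}:
        subset = [r for r in records if r["group"] == group]
        rates[group] = sum(r["prediction"] for r in subset) / len(subset)
    return max(rates.values()) - min(rates.values())

# Example: flag the model for review if the gap exceeds an agreed tolerance.
scored = [
    {"group": "A", "prediction": 1}, {"group": "A", "prediction": 0},
    {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
]
if demographic_parity_gap(scored) > 0.2:  # tolerance set by the risk function
    print("Fairness gap above tolerance - escalate to model risk review")
```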
Building on GDPR’s principles, DORA also requires strong data governance practices to ensure the quality, integrity, and security of AI training data. This will help prevent biased or inaccurate outcomes and safeguard sensitive information.
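In practice, data governance of this kind often starts with automated quality and integrity checks on training data before a model is built. The sketch below illustrates the idea under simplified assumptions: records are plain dictionaries, the required fields are hypothetical, and the SHA-256 fingerprint simply gives a tamper-evident record of exactly which dataset was used.

```python
# Simplified sketch of quality and integrity checks on an AI training dataset.
# Field names and the report format are illustrative, not prescribed by DORA.
import hashlib
import json

REQUIRED_FIELDS = {"customer_id", "feature_vector", "label"}

def validate_training_data(records: list[dict]) -> dict:
    missing = sum(1 for r in records if not REQUIRED_FIELDS.issubset(r))
    unique = {json.dumps(r, sort_keys=True) for r in records}
    duplicates = len(records) - len(unique)
    # Content hash gives a tamper-evident fingerprint of the exact dataset used.
    fingerprint = hashlib.sha256(
        json.dumps(records, sort_keys=True).encode()
    ).hexdigest()
    return {"rows": len(records), "missing_fields": missing,
            "duplicates": duplicates, "sha256": fingerprint}

report = validate_training_data([
    {"customer_id": 1, "feature_vector": [0.2, 0.7], "label": 0},
    {"customer_id": 2, "feature_vector": [0.9, 0.1], "label": 1},
])
print(report)  # store alongside the model so the data lineage stays auditable
```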
Vendor oversight is another critical component. Financial institutions must assess the resilience of third-party AI providers to better track risks and ensure proper mitigation. DORA also emphasizes the importance of continuous monitoring of AI models to detect performance issues or security vulnerabilities. Logging all actions related to AI models—such as training data, model versions, and user interactions—ensures compliance and auditability, an area in which the financial services sector can lead by example.
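The logging point is easy to sketch as well. Below is a minimal, hypothetical example of structured audit logging for model activity, capturing the kinds of facts an auditor typically asks for: which model version acted, on which training data, when, and on whose request. The event types and field names are assumptions for illustration, not a mandated schema.

```python
# Minimal sketch of structured audit logging for AI model activity.
# Field names and event types are illustrative only.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_model_event(event: str, model_version: str, data_sha256: str,
                    user: str, details: dict | None = None) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                # e.g. "training_run", "prediction"
        "model_version": model_version,
        "training_data_sha256": data_sha256,
        "user": user,
        "details": details or {},
    }
    audit_logger.info(json.dumps(entry))

# Hypothetical usage: record a single prediction request end to end.
log_model_event("prediction", model_version="credit-risk-2.3.1",
                data_sha256="9f2c...", user="analyst-42",
                details={"request_id": "a1b2c3"})
```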
Preparing for DORA Compliance
While 70% of financial institutions are working towards DORA compliance, only 40% feel fully confident in their strategies. To be fully prepared, companies using AI in the financial sector must address key areas such as AI model bias, AI vendor oversight, data governance, and continuous monitoring of AI systems.
Even though DORA is focused on the European Union, adhering to the regulation will make it easier for financial companies to operate across borders. Its emphasis on tackling growing threats and encouraging innovation makes it essential for companies to comply.
Looking ahead, any future AI models will likely need to meet specific criteria, including:
- Explainability: AI models will need to provide clear explanations for the decisions they make, and developers will need to prioritize this from the outset (a minimal sketch follows this list).
- Bias Mitigation: Steps must be taken to avoid discriminatory outcomes by reducing bias.
- Human Oversight: Critical decisions should always involve human supervision.
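As a simple illustration of the first and third points, the sketch below reports each feature's contribution to a linear score alongside the decision, and routes borderline cases to a human reviewer. The model weights, feature names, and threshold are hypothetical; real credit or risk models would be more complex and typically rely on dedicated explainability tooling.

```python
# Minimal sketch: per-decision explainability for a linear scoring model,
# with borderline cases deferred to human review. Weights and features
# are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_as_customer": 0.2}
BIAS = 0.1
THRESHOLD = 0.0

def score_with_explanation(features: dict[str, float]) -> dict:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "refer to human reviewer",
        "score": round(score, 3),
        # Per-feature contributions give reviewers a traceable rationale.
        "explanation": {k: round(v, 3) for k, v in contributions.items()},
    }

print(score_with_explanation({"income": 0.8, "debt_ratio": 0.5, "years_as_customer": 3.0}))
```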
When selecting a partner to help implement AI, it’s important to ask how the company is preparing for DORA. Key questions include: Are they monitoring and logging usage in line with DORA? How secure is their platform? What measures are they taking to mitigate algorithmic risks?
The Importance of Compliance and Adoption
DORA represents a crucial step toward building a more resilient financial sector in Europe. As the deadline approaches, financial institutions should adopt a comprehensive approach to AI governance, prioritizing transparency, accountability, and responsible innovation. By investing in robust AI systems and aligning with DORA’s framework, financial companies can ensure their long-term success and contribute to a stronger, more secure sector.