
What the EU AI Act means for financial institutions


The countdown to compliance has started for the EU AI Act. Published on 12 July 2024, it’s one of the most significant pieces of recent legislation to be adopted by the European Parliament. The rules came into force on 1 August 2024, becoming applicable over a period of six to 36 months, with most provisions applying after two years.

So, what does the EU AI Act mean for financial institutions (FIs) and what do you need to know?


For FIs, the AI Act has significant implications, particularly for those relying on AI systems designated as high-risk, such as credit scoring models and risk assessment tools in the insurance sector.


AI’s potential applications in trade finance document processing, risk management, and the fight against financial crime are significant. However, the need for effective regulation that ensures trust in AI and safeguards fundamental rights cannot be overstated. The EU AI Act is an essential step in this direction, ensuring that as AI evolves, it does so safely and ethically.

Key points from the EU AI Act


In terms of scope, the law is far-reaching. It applies to AI systems placed on the market, put into use, or whose outputs affect individuals within the EU, including systems supplied by providers based outside the EU. And in terms of how it applies, the Act takes a risk-based approach, categorising AI applications by the level of risk they pose.


As a result, there are strict requirements, outlined below, for high-risk use cases. These include critical infrastructure, law enforcement and the administration of justice, among a list of eight areas. Understandably, this places tighter controls on AI that could affect life-and-death situations, liberty or public safety. With this in mind, there are also prohibited AI practices, such as those that manipulate behaviour or violate fundamental rights.


High-risk AI systems must meet high standards of data quality and governance. This ensures software isn’t trained on poor or incomplete data that could lead to adverse outcomes. There are also documentation and traceability requirements to ensure AI can be audited and its decisions explained. Clear information must be provided to users to assure transparency, and there must be human oversight, with mechanisms for human intervention in AI decisions. This is essential given AI could be used to support medical diagnoses or even offer evidence in legal cases.
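
To make the documentation, traceability and human-oversight requirements concrete, here is a minimal sketch of an auditable decision record with a hook for human intervention. Everything in it (the DecisionRecord structure, the field names, the JSON Lines log file) is an illustrative assumption, not anything prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One auditable record per AI decision: inputs, output, model version,
    and whether a human reviewed or overrode the outcome."""
    model_id: str
    model_version: str
    inputs: dict
    output: str
    explanation: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_reviewed: bool = False
    human_override: str | None = None

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    # Append-only JSON Lines file so every decision stays traceable.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: an automated decline is logged, then overridden after human review.
record = DecisionRecord(
    model_id="credit-scoring",   # hypothetical model name
    model_version="2.3.1",
    inputs={"income": 42_000, "existing_debt": 5_000},
    output="declined",
    explanation="debt-to-income ratio above threshold",
)
record.human_reviewed = True
record.human_override = "approved after manual affordability check"
log_decision(record)
```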


Additionally, the Act mandates the use of synthetic or anonymised data for detecting and correcting bias when training AI models. Synthetic data is artificially created rather than collected from real-world events, which means it can be designed to cover a wide range of scenarios and populations, helping to create AI systems that are fair and accurate. If these types of data can achieve the goal of reducing bias, AI system providers are required to use them instead of other methods.
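
As an illustration of how synthetic data can surface bias, the sketch below generates artificial applicants for two groups with identical income distributions and compares a toy credit model’s approval rates across them; because the data is identical by construction, any approval gap points at the model rather than the data. The model, the groups and the thresholds are all invented for the example:

```python
import random

random.seed(0)

# Hypothetical stand-in for a deployed credit model; in practice this
# would be the institution's own scoring system.
def credit_model(income: float, group: str) -> bool:
    # Deliberately biased toy rule so the disparity is visible.
    threshold = 30_000 if group == "A" else 35_000
    return income > threshold

# Synthetic applicants: both groups are drawn from the SAME income
# distribution, so any approval gap comes from the model alone.
def synthetic_applicants(n: int, group: str):
    return [(random.gauss(33_000, 5_000), group) for _ in range(n)]

def approval_rate(applicants) -> float:
    return sum(credit_model(income, group) for income, group in applicants) / len(applicants)

rate_a = approval_rate(synthetic_applicants(10_000, "A"))
rate_b = approval_rate(synthetic_applicants(10_000, "B"))
print(f"group A: {rate_a:.2%}, group B: {rate_b:.2%}")
print(f"approval gap: {abs(rate_a - rate_b):.2%}")  # a large gap flags bias
```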


Impact on financial institutions

For FIs, adhering to these requirements is crucial, and there are some important actions to take. Firstly, an AI governance body must be created if one doesn’t already exist. This body needs to be accountable for all use of AI throughout the business and should carefully evaluate systems to determine their classification under the Act.


For example, AI systems used for credit scoring or creditworthiness evaluation will likely be classified as high-risk given their significant impact on individuals’ access to financial resources. Conversely, the Act carves out AI systems used to detect financial fraud, which are not considered high-risk. This nuance allows FIs to continue innovating in fraud detection while ensuring compliance with the Act’s overarching principles.


Secondly, this group should assure human oversight of all systems to ensure AI is being used transparently, ethically, fairly, securely and with privacy controls. For example, when the software interacts with humans, it must disclose that it is AI. Furthermore, AI-generated content must be clearly labelled to avoid misrepresentation. Users must also be informed when AI uses emotion recognition or biometric categorisation.
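
For the disclosure duties, one simple pattern is to attach transparency metadata to every AI response. This is only a sketch assuming a hypothetical chatbot; the wording and field names are illustrative, not mandated text:

```python
from dataclasses import dataclass

@dataclass
class LabelledResponse:
    """Wraps model output with transparency metadata: an AI disclosure
    and a machine-readable flag marking the content as AI-generated."""
    text: str
    generated_by_ai: bool = True
    disclosure: str = "This response was generated by an AI system."

def respond(user_message: str) -> LabelledResponse:
    # Placeholder for the institution's actual chatbot or model call.
    answer = f"Thanks for your message about: {user_message}"
    return LabelledResponse(text=answer)

reply = respond("my loan application status")
print(reply.disclosure)
print(reply.text)
```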


Finally, any governance body should also ensure high quality of input data, to minimise biases and prevent discriminatory outcomes. It must also be fully transparent in its own work, recording its decisions and making these available when needed.
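
Input data quality checks of this kind can be automated before any training run. The sketch below uses invented example data to flag missing values and under-represented groups; the column names and thresholds are purely illustrative:

```python
import pandas as pd

# Invented example training data for a credit model.
df = pd.DataFrame({
    "income": [42_000, None, 35_000, 51_000],
    "existing_debt": [5_000, 2_000, None, 8_000],
    "region": ["north", "north", "north", "south"],
})

# Completeness: flag columns with missing values before training.
missing = df.isna().mean()
print("share of missing values per column:")
print(missing[missing > 0])

# Representation: a group that makes up only a sliver of the data is a
# warning sign the model may underperform for it.
shares = df["region"].value_counts(normalize=True)
print("under-represented groups:")
print(shares[shares < 0.3])  # 30% threshold is illustrative
```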


Enforcement

Without this level of AI oversight, an FI could find itself subject to fines. The European Artificial Intelligence Board will oversee implementation and coordination across member states. Meanwhile, each country will designate authorities to supervise and enforce the law. There will also be a public database for high-risk AI systems to ensure transparency and oversight.


Should breaches happen, penalties are designed to be effective, proportionate and dissuasive: fines can reach €35 million or 7 per cent of global annual turnover for prohibited practices, with lower tiers for other infringements. And if the legislation needs updating – which is likely given the speed at which AI is developing – there are mechanisms for regular reviews.


There will also be confidentiality and data exchange rules to protect privacy while enabling information sharing between each EU nation’s authorities.


EU AI Act: designing for the long term

General-purpose AI systems, like those performing image and speech recognition or pattern detection, are subject to dedicated rules that will be fleshed out through codes of practice and implementing acts. These systems, thanks to their versatility, may be used in various contexts and will require a proper flow of information throughout the AI value chain.


The AI Act’s flexible framework allows it to evolve with AI technology, ensuring that new developments are adequately regulated. FIs, in particular, must stay informed about these changes, as they will affect how AI can be used for efficiency and innovation in areas like document processing, risk management, and compliance.


EU AI Act: beneficial or detrimental?

While over-burdensome regulation can hinder innovation, the EU AI Act is a sorely needed set of guardrails for a technology that’s rapidly developing. On one hand, it aims to ensure AI systems are safe, protecting people’s rights. On the other, it should enhance innovation and investment thanks to the transparency and openness the Act demands, making it easier for developers to collaborate and improve the software.


However, the regulation must remain flexible and move at the same speed as the technology. Furthermore, policymakers must take note of feedback and collaborate with industry, especially sectors such as financial services where the rules have the greatest impact.

And while ethical considerations must always be at the heart of the regulation, this must be balanced with positive encouragement for the technology. In short, the Act must foster better AI, not merely banish the bad.


So, as the countdown to compliance continues, FIs should be positive. There’s undoubtedly work to do, but the outcome will be better for everyone: developers, businesses and consumers.


The information in this blog post is for general informational purposes only and does not constitute legal advice. The content is based on the author's interpretation and may not reflect the most current legal developments. Readers should seek professional legal advice for specific concerns. All rights to referenced third-party content remain with their respective authors.


Featured expert


David Salloum

Eastnets Group Legal Director
