EBA sets the rules: Artificial Intelligence must be explainable

Insights 21 December 2020

News from 2020: Artificial Intelligence technologies have been adopted by 64% of European banks

This is what stands out in the EBA's latest annual Risk Assessment Report. Over the past two years, EU banks have continued investing in artificial intelligence (AI) and big data analytics, and 12% of EU banks have moved from pilot testing and development to the implementation of AI tools in their processes.

Actually, this is hardly surprising news.

As expected, the pandemic has accelerated the EU's digital transformation plans, prompted by the ongoing rise of challenger banks and, more recently, the market entry of the so-called GAFA (Google, Apple, Facebook and Amazon). In 2020, budgetary changes to boost digital innovation and new technologies were reported by 60% of EU banks, and partnerships with Fintech companies remain the main route to the rapid development of advanced purpose-built solutions.

However, as demand grows, so does the need for a European regulatory framework for the adoption of AI in finance. The proposal for a new EU regulation is planned for 2021, but the European Commission outlined some of its guidelines a couple of years ago in the document Ethics guidelines for trustworthy AI, which anticipates the ethical issues the future regulation will focus on.

Moving towards trustworthy AI

As stated in the European guidelines, the development, deployment and use of AI systems should adhere to ethical principles such as respect for human autonomy and prevention of harm. AI systems should also be explainable, a requirement that gave rise to the notion of explainable AI.

The definition is simple: a model is explainable when it is possible to generate explanations that allow humans to understand how a result was reached or on what grounds it is based. But why did we feel the need to spell out such a seemingly trivial concept?

Because the behaviour of an AI model is often anything but transparent.

AI models can quickly become “black boxes”: opaque systems whose internal behaviour cannot be easily understood, and for which it is therefore hard to understand (and verify) how a certain conclusion or prediction was reached. This hampers corrective action when an error occurs – as it inevitably does – and affects the reliability of forecasting models. Understanding the behaviour of AI models is therefore paramount when such models are implemented in fields that profoundly affect people's lives, such as medicine, finance or even the automotive industry.

The explainability of an AI solution varies with the complexity of the underlying model and the learning method used. In the best case, a model is intrinsically explainable: its internal behaviour can be directly understood by a human. This is the case, for example, for both MORE and the ForST model, which are designed from the outset to provide reliable and easy-to-understand explanations of the predictions they produce.
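As a purely hypothetical illustration of intrinsic explainability (generic scikit-learn code, not the actual MORE or ForST methodology), consider a logistic regression whose coefficients can be read directly as the contribution of each feature to the score:

```python
# Hypothetical sketch of an intrinsically explainable model (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for, e.g., financial indicators.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient states how strongly (and in which direction) a feature
# moves the predicted log-odds: here, the explanation is the model itself.
for name, coef in zip([f"feature_{i}" for i in range(5)], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```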

However, there are several techniques for interpreting even black-box behaviour. With post-hoc interpretability techniques, the model is interpreted by selectively interrogating it to reveal some of its properties. Many approaches are based on surrogate models, i.e. simplified models trained to approximate the black box's predictions, from which either the general behaviour of the model can be deduced (global surrogate models) or local explanations of a single prediction can be derived (as in LIME, Local Interpretable Model-agnostic Explanations), as sketched below.
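To make the global-surrogate idea concrete, here is a minimal sketch, assuming scikit-learn and synthetic data (all names are illustrative): a shallow decision tree is trained to mimic a black-box classifier's predictions, and its rules serve as an approximate global explanation.

```python
# Minimal sketch of a global surrogate model (illustrative, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# 1. Train an opaque "black-box" model.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train a simple, human-readable surrogate on the black box's *predictions*,
#    not on the original labels: the surrogate mimics the model, not the data.
bb_predictions = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_predictions)

# 3. Fidelity: how closely does the surrogate reproduce the black box?
#    A low fidelity score means the explanation cannot be trusted.
fidelity = accuracy_score(bb_predictions, surrogate.predict(X))
print(f"Surrogate fidelity vs. black box: {fidelity:.2%}")

# 4. The surrogate's decision rules are the (approximate) global explanation.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(10)]))
```

LIME applies the same idea locally: it fits a simple model around a single prediction, using perturbed samples weighted by their proximity to the instance being explained.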

Today, the common belief that black boxes achieve a higher degree of accuracy thanks to the greater complexity of the models has fewer and fewer supporters. The transparency of explainable models is an advantage that comes on top of the level of accuracy that simpler models can achieve.

Explainable AI will gain consumers' trust

The need for explainability is greatest whenever decisions have a direct impact on customers, and in the coming years the explainability of these technologies will play a key role in the competition between challengers and incumbents.

The huge amount of data that GAFA have access to makes it particularly difficult for traditional players to bridge the technology gap. To win consumers' trust, incumbents should focus on the transparency of the methodologies they adopt, particularly in a sector that suffers from a lack of widespread financial literacy.

The adoption of explainable models also benefits financial institutions, where human beings are called upon to make decisions on the basis of the results the models produce. Operators should have sufficient means to understand why a particular result has been generated and to validate it. Transparency therefore does not end with the explainability of models: it also means making data, features, algorithms and training methods available for external scrutiny, and it is the basis for building a regulated and reliable financial system.