Machine learning, artificial intelligence

Luxembourg’s Financial Regulator Takes Another Step to Advance AI Adoption

For some time now, financial firms have been looking at how they can use artificial intelligence (AI) to improve their processes, automate rote functions, and create efficiencies in their operating models. But the industry still has a long way to go before it harnesses the power of this advanced technology. While financial services adoption of AI is still in its early days, it has caught the attention of global regulators. Most recently, Luxembourg’s financial regulator, the Commission de Surveillance du Secteur Financier (the CSSF), released a white paper outlining the “opportunities, risks, and recommendations for the financial sector” relating to AI.

The CSSF paper is non-binding (it is not regulation), but it marks one of the first times a financial regulator has released such extensive research on the topic. The aim is to provide a foundation for asset managers around the globe to engage with AI, gain a deeper understanding of how to implement it, and consider its implications.

Here, we extract key considerations presented by the CSSF paper – the opportunities, risks, and recommendations for asset managers:

Opportunities

According to the CSSF, many current use cases of AI – particularly in the financial sector – are “augmented intelligence” solutions. This means an AI program focuses on a limited number of intelligent tasks and is used to support, not replace, humans in decision making. AI for asset managers also includes virtual assistants that help clients with common or frequently asked questions (chatbots), automated loan decisions used in credit scoring, forensic solutions that perform in-depth fraud investigations, and machine learning tools that assist risk, compliance, and auditing tasks. The CSSF argues that one of the largest benefits for asset managers is freeing up their workforce for more complex or personal tasks, since AI can take over mundane, easily repeatable ones.

Easier access to new technologies has inspired a trend the CSSF calls “AI democratization.” Cloud services enable developers to apply machine learning algorithms to their programs and gain access to scalable computing power. In addition, the amounts of data now available and stored are far larger than anything collated before. These vast pools of data should enable greater adoption of AI and machine learning models. But as with any innovation within the regulated financial sector, the key to mass adoption will be solid legal and regulatory frameworks, the specifics of which are still to be defined.

Risks

According to the CSSF, while AI and automation can create efficiencies, prevent human error, and even reduce costs, there are also risks associated with removing human involvement.

Explainability

It comes as no surprise that for many financial regulators, transparency is at the core of what they expect from asset managers. For regulators, understanding how a product or operational structure can impact both the larger financial ecosystem and individual investors is of utmost importance. However, in some cases, AI can actually lead to a lack of transparency or reduced “explainability.” The CSSF explains that when AI is layered within a system, the program can sometimes provide an answer without easily showing how it arrived at that answer. This is referred to as a “black box” model – one whose internal path to a solution is not known.

When black box models are used and amended by computer programs, only a handful of people at the asset manager understand their inner workings. The CSSF emphasizes that a human must remain in control and be responsible for all outputs. Wide training within an organization can help mitigate this risk.

The EU’s GDPR also touches on this issue. The CSSF states: “In case the output of the machine learning model is a decision affecting physical persons, the person which is the subject of that decision has the right to be informed on how that decision was reached.”

To help create more transparency around an AI program’s underlying assumptions and algorithms, asset managers should understand the importance of generating explanations. The CSSF recommends embedding explainability into a program from the beginning to help satisfy this requirement. In addition, AI programs need to anticipate business continuity planning (BCP) events: managers need to be able to intervene – or continue a process using humans – should the program break or be compromised. In other words, the greater the number of people who understand what the AI program is doing, and the more those people address contingencies, the better its use will be perceived from a regulatory, risk, legacy planning, and BCP perspective.
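The CSSF does not prescribe a particular technique for this, but the idea of generating an explanation alongside every decision can be illustrated with a minimal sketch. The example below assumes a scikit-learn-style workflow in Python; the credit features, data, and decision threshold are entirely hypothetical.

```python
# Illustrative only: a hypothetical credit-decision model whose explanation is
# generated alongside each prediction, so an automated answer never stands alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "years_employed"]  # hypothetical inputs

# Tiny synthetic training set standing in for historical loan decisions.
X_train = np.array([[60, 0.2, 5], [25, 0.6, 1], [45, 0.3, 3], [30, 0.7, 2]])
y_train = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decide_with_explanation(applicant):
    """Return the decision together with each feature's contribution to it."""
    x = np.asarray(applicant, dtype=float)
    # For a linear model, coefficient * feature value is a direct, auditable
    # statement of how much each input pushed the score up or down.
    contributions = dict(zip(FEATURES, model.coef_[0] * x))
    score = float(model.predict_proba([x])[0, 1])
    return {"approved": score >= 0.5, "score": score, "drivers": contributions}

print(decide_with_explanation([40, 0.4, 2]))
```

A more complex model would need a dedicated explanation method, but the principle is the same: the explanation is produced at decision time, not reconstructed after the fact.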

Security

Whenever there is a new technology, there are also new attack techniques exploiting security vulnerabilities – AI is no exception. Some of these risks include:

  • Data quality and data governance
    • Finding the right data and connecting data sources to legacy systems
    • Ethical and societal concerns, such as data privacy, accountability, explainability, and auditability
  • External data sources
    • Outsourcing risk (i.e. does my provider have the same protection standards that I have?)
    • Systemic risk, where a particular AI solution may prove very successful and be adopted by many financial institutions, resulting in high dependency on a small concentration of service providers
  • The human factor
    • “No human in the loop,” referring to a lack of oversight that could allow an automated action to impact business processes
    • Challenges around building internal teams with the right skillset to create and properly use AI systems
    • Potentially difficult cultural shifts (i.e. fear of job losses, concerns around change)
    • Risk of AI programs picking up on and learning human biases from past decisions

Recommendations

One of the most significant recommendations from the CSSF echoes some of the themes expressed by the US Securities and Exchange Commission surrounding personal data. The CSSF recommends:

  • Challenging the need for personal data as input to an AI program
  • Restricting access to personal data
  • Hiring and involving a Data Protection Officer and compliance teams
  • Ensuring users can review clear explanations of any decision reached
  • Confirming individuals can change or erase data related to them
  • Applying data protection principles, such as data encryption (a minimal illustration follows this list)
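The CSSF does not specify how these principles should be implemented. As one hypothetical illustration of data minimization and pseudonymization, the sketch below keeps only the fields a model actually needs and replaces the direct identifier with a salted one-way hash; the field names and salt handling are simplified assumptions, not a prescribed approach.

```python
# Illustrative only: strip and pseudonymize personal data before it reaches a model.
# Field names are hypothetical; salt and key management are simplified for brevity.
import hashlib
import os

SALT = os.urandom(16)  # in practice, managed under the firm's data protection controls
MODEL_FEATURES = {"income", "debt_ratio", "years_employed"}  # the only inputs the model needs

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

def prepare_record(raw: dict) -> dict:
    """Keep a pseudonymous key plus the minimum features; drop everything else."""
    record = {"client_key": pseudonymize(raw["client_id"])}
    record.update({k: v for k, v in raw.items() if k in MODEL_FEATURES})
    return record

raw_record = {"client_id": "LU-12345", "name": "Jane Doe",
              "income": 60, "debt_ratio": 0.2, "years_employed": 5}
print(prepare_record(raw_record))  # no name and no raw identifier leave this function
```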

It’s important to remember that although AI can automate decisions, the asset manager is always responsible for those decisions and for any actions taken as a result of them.

Finally, AI stands to revolutionize the way managers solve certain problems and may enable more efficient predictive analytics. However, AI models tend to be developed from past observations, which are then used to predict new outcomes. Because its predictive power is limited to what it has learned from the past, the technology is not a magic crystal ball: disruptive events, like those seen during the last financial crisis, cannot be predicted, and models need to be updated when such events occur.
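The paper does not say how that updating should be triggered, but one simple, hypothetical way to operationalize it is a data-drift check that flags when live inputs no longer resemble the training data; the threshold and feature count below are assumptions for illustration only.

```python
# Illustrative only: a crude drift check that signals when a model trained on past
# data is being asked about conditions it has never seen, so retraining is due.
import numpy as np

def needs_retraining(train_data: np.ndarray, recent_data: np.ndarray,
                     threshold: float = 3.0) -> bool:
    """Flag retraining when any feature's recent mean drifts more than
    `threshold` training-set standard deviations from its training mean."""
    train_mean = train_data.mean(axis=0)
    train_std = train_data.std(axis=0) + 1e-9  # avoid division by zero
    drift = np.abs(recent_data.mean(axis=0) - train_mean) / train_std
    return bool((drift > threshold).any())

# A market shock shows up as inputs far outside the historical range.
historical = np.random.normal(loc=0.0, scale=1.0, size=(1000, 3))
stressed = np.random.normal(loc=8.0, scale=1.0, size=(50, 3))
print(needs_retraining(historical, stressed))  # True: the model should be refreshed
```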

In its conclusion, the CSSF paper reiterates the need for global asset managers and financial institutions that implement AI projects to address risks from the start and to continuously monitor the evolution of this and other types of technology. The CSSF has not stated that it will draft specific regulation based on this paper, but it does recommend that asset managers consider its recommendations carefully “to ensure a reliable implementation and business integration of the AI solution while maintaining a sound control environment.”

The CSSF paper is another crucial step on the journey towards a global regulatory framework for AI, a journey that is only beginning, but at an unrelenting pace.

This article was contributed by BBH Luxembourg’s Chief Risk Officer Mehtap Numanoglu Tasiopoulos.