FCA proposes AI transparency framework

The Financial Conduct Authority (FCA) and The Alan Turing Institute have proposed a high-level framework for thinking about artificial intelligence (AI) transparency in financial markets.

The framework, the product of a year-long collaboration between the regulator and the think tank, offers an initial approach to identifying transparency needs in relation to machine learning in financial markets.

Henrike Mueller, technical specialist in the Innovation Division at the FCA, and Florian Ostmann, who leads the public policy programme at The Alan Turing Institute, suggested that transparency can play a key role in the pursuit of responsible innovation.

A recent survey on machine learning published by the FCA and the Bank of England highlighted rapidly growing interest in AI across financial services; while the technology has the potential to enable positive transformations, it also raises important ethical and regulatory questions.

The FCA followed this last month by starting work to better understand how developments in AI and machine learning (ML) are driving change in financial markets, including business models, products, services and consumer engagement.

“Especially when they have a significant impact on consumers, AI systems must be designed and implemented in ways that are safe and ethical,” read the blog post. “From a public policy perspective, there is a role for government and regulators to help define what these objectives mean in practice.”

The Information Commissioner’s Office yesterday also launched its own consultation on the use of AI, with draft proposals on how to audit risk, governance and accountability in AI applications.

“One important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems,” the pair wrote. “Providing information may, for instance, address concerns about a particular AI system’s performance, reliability and robustness; discrimination and unfair treatment; data management and privacy; or user competence and accountability.”

For instance, transparency may enable customers to understand and, where appropriate, challenge the basis of particular outcomes. The post gives the example of an unfavourable loan decision based on an algorithmic creditworthiness assessment that relied on factually incorrect information.

“Information about the factors that determine outcomes may also enable customers to make informed choices about their behaviour with a view to achieving favourable outcomes,” stated Ostmann and Mueller. “An illustration for this rationale would be the value to customers of knowing that credit scores depend on the frequency of late payments.”

The post noted that many common concerns raise process-related questions. Information about the quality of the data that was used in developing an algorithmic decision-support tool, for example, can play an important role in addressing concerns about bias.
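To make the idea of process-related information concrete, here is a minimal sketch of the kind of data-quality summary a firm might produce for the training set behind a decision-support tool; the field names, groups and checks are hypothetical illustrations, not drawn from the FCA and Turing Institute post.

```python
# Hypothetical data-quality summary for the training data behind a
# decision-support tool. Records, field names and checks are
# illustrative assumptions only.

from collections import Counter

# Toy training records: (age_band, outcome_label, record_complete)
records = [
    ("18-30", "approve", True),
    ("18-30", "decline", True),
    ("31-50", "approve", True),
    ("31-50", "approve", False),  # record with missing fields
    ("51+",   "decline", True),
]

def data_quality_report(rows):
    """Summarise completeness and group representation -- the kind of
    process-related information that can help address bias concerns."""
    complete = sum(1 for _, _, ok in rows if ok)
    groups = Counter(band for band, _, _ in rows)
    print(f"complete records: {complete}/{len(rows)}")
    for band, count in groups.items():
        share = count / len(rows)
        print(f"  group {band}: {count} records ({share:.0%} of data)")

data_quality_report(records)
```

Publishing even simple statistics like these would tell stakeholders how representative and complete the underlying data is, without requiring any disclosure of the model itself.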

“Rather than narrowly focusing on questions of model transparency, a balanced perspective on transparency needs will thus be based on a broader assessment of possible transparency measures that involve model-related as well as process-related information,” the pair explained.

The post suggested that decision-makers may find it helpful to develop a ‘transparency matrix’ that, for a particular use case, maps different types of relevant information against different types of relevant stakeholders.

This matrix can then be used to structure a systematic assessment of transparency interests: considering each stakeholder type in turn, identifying its reasons for caring about transparency, and evaluating the case for making each type of information in the matrix accessible to that stakeholder.
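As a rough sketch of how such a matrix might be represented in code, the snippet below maps stakeholder types to the information types deemed accessible to them, alongside each stakeholder's rationale; the stakeholder groups, information categories and rationales are illustrative assumptions rather than content from the post.

```python
# Hypothetical 'transparency matrix' for a single use case: stakeholder
# types mapped against types of relevant information. All categories
# below are illustrative, not taken from the FCA/Turing framework.

INFORMATION_TYPES = [
    "model logic",             # model-related: how inputs map to outputs
    "performance metrics",     # model-related: accuracy, robustness
    "training data quality",   # process-related: provenance, bias checks
    "governance and sign-off", # process-related: accountability trail
]

transparency_matrix = {
    "customer": {
        "accessible": ["model logic", "performance metrics"],
        "rationale": "understand and, where appropriate, challenge outcomes",
    },
    "regulator": {
        "accessible": INFORMATION_TYPES,  # full visibility assumed
        "rationale": "supervise safety, fairness and accountability",
    },
    "internal auditor": {
        "accessible": ["training data quality", "governance and sign-off"],
        "rationale": "verify that development processes address bias",
    },
}

def assess(stakeholder: str) -> None:
    """Walk one row of the matrix: why this stakeholder cares, and
    which information types the matrix makes accessible to them."""
    entry = transparency_matrix[stakeholder]
    print(f"{stakeholder}: {entry['rationale']}")
    for info in INFORMATION_TYPES:
        mark = "yes" if info in entry["accessible"] else "no"
        print(f"  {info}: {mark}")

for stakeholder in transparency_matrix:
    assess(stakeholder)
```

Iterating over the rows in this way mirrors the assessment the post describes: one stakeholder type at a time, one information type at a time.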

Ostmann and Mueller concluded that the opportunities and risks associated with the use of AI models depend on context and vary from use case to use case.

“In the absence of a one-size-fits-all approach to AI transparency, a systematic framework can assist in identifying transparency needs and deciding how best to respond to them, bringing into focus the respective roles of process-related and model-related information in demonstrating trustworthiness and contributing to beneficial innovation.”
