FCA proposes AI transparency framework

The Financial Conduct Authority (FCA) and The Alan Turing Institute have proposed a high-level framework for thinking about artificial intelligence (AI) transparency in financial markets.

The initial framework is the product of a year-long collaboration between the regulator and the think tank, and focuses on transparency needs in relation to machine learning in financial markets.

Henrike Mueller, technical specialist in the Innovation Division at the FCA, and Florian Ostmann, who leads the public policy programme at The Alan Turing Institute, suggested that transparency can play a key role in the pursuit of responsible innovation.

A recent survey on machine learning published by the FCA and the Bank of England highlighted rapidly growing interest in AI across financial services; while the technology has the potential to enable positive transformations, it also raises important ethical and regulatory questions.

The FCA followed this last month by starting work to better understand how developments in AI and machine learning (ML) are driving change in financial markets, including business models, products, services and consumer engagement.

“Especially when they have a significant impact on consumers, AI systems must be designed and implemented in ways that are safe and ethical,” read the blog post. “From a public policy perspective, there is a role for government and regulators to help define what these objectives mean in practice.”

The Information Commissioner’s Office also launched its own consultation on the use of AI yesterday, with draft proposals on how to audit risk, governance and accountability in AI applications.

“One important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems,” the pair wrote. “Providing information may, for instance, address concerns about a particular AI system’s performance, reliability and robustness; discrimination and unfair treatment; data management and privacy; or user competence and accountability.”

Transparency may, for instance, enable customers to understand and, where appropriate, challenge the basis of particular outcomes; the post gave the example of an unfavourable loan decision based on an algorithmic creditworthiness assessment that relied on factually incorrect information.

“Information about the factors that determine outcomes may also enable customers to make informed choices about their behaviour with a view to achieving favourable outcomes,” stated Ostmann and Mueller. “An illustration for this rationale would be the value to customers of knowing that credit scores depend on the frequency of late payments.”

The post noted that many common concerns raise process-related questions. Information about the quality of the data that was used in developing an algorithmic decision-support tool, for example, can play an important role in addressing concerns about bias.

“Rather than narrowly focusing on questions of model transparency, a balanced perspective on transparency needs will thus be based on a broader assessment of possible transparency measures that involve model-related as well as process-related information,” the pair explained.

The post suggested that decision-makers may find it helpful to develop a ‘transparency matrix’ that, for a particular use case, maps different types of relevant information against different types of relevant stakeholders.

The matrix can then be used to structure a systematic assessment of transparency interests: considering each stakeholder type in turn, identifying its reasons for caring about transparency, and evaluating the case for making each type of information in the matrix accessible to that stakeholder.
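To make the idea concrete, the sketch below shows one way such a matrix might be represented in code. The stakeholder types, information types and rationales are hypothetical illustrations chosen for this example, not categories drawn from the FCA and Turing Institute framework itself.

```python
# A minimal sketch of a 'transparency matrix': a mapping from
# (stakeholder type, information type) pairs to the rationale for
# disclosure. All categories below are hypothetical examples.
from dataclasses import dataclass, field

# Information types span both model-related and process-related information.
INFO_TYPES = [
    "model logic",            # model-related
    "input features",         # model-related
    "training data quality",  # process-related
    "validation process",     # process-related
]

STAKEHOLDERS = ["customer", "regulator", "internal auditor"]

@dataclass
class TransparencyMatrix:
    """Maps each (stakeholder, information type) cell to a disclosure rationale."""
    cells: dict = field(default_factory=dict)

    def set_need(self, stakeholder: str, info_type: str, rationale: str) -> None:
        # Record why a given information type matters to a given stakeholder.
        self.cells[(stakeholder, info_type)] = rationale

    def needs_for(self, stakeholder: str) -> dict:
        # Support the one-by-one assessment the post describes: pull out
        # all information types relevant to a single stakeholder type.
        return {info: why for (who, info), why in self.cells.items()
                if who == stakeholder}

matrix = TransparencyMatrix()
matrix.set_need("customer", "input features",
                "enables challenging outcomes based on incorrect information")
matrix.set_need("regulator", "training data quality",
                "addresses concerns about bias in decision-support tools")
print(matrix.needs_for("customer"))
```

Walking the matrix stakeholder by stakeholder in this way mirrors the assessment process the post describes, with each populated cell recording the case for making that information accessible.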

Ostmann and Mueller concluded that the opportunities and risks associated with the use of AI models depend on context and vary from use case to use case.

“In the absence of a one-size-fits-all approach to AI transparency, a systematic framework can assist in identifying transparency needs and deciding how best to respond to them, bringing into focus the respective roles of process-related and model-related information in demonstrating trustworthiness and contributing to beneficial innovation.”
