UK financial watchdog the Financial Conduct Authority (FCA) has called for the creation of a strong regulatory framework around the use of artificial intelligence in financial services.
In response to the results of the UK government’s consultation on a pro-innovation approach to AI regulation, the FCA has published its own update setting out its focus on how firms can safely and responsibly adopt the technology, as well as on understanding what impact AI innovations are having on consumers and markets.
The announcement comes ahead of a government deadline of 30 April, which calls on UK regulators to outline their strategic approach to AI.
In the update’s foreword, Jessica Rusu, the FCA’s chief data, information and intelligence officer, notes that in the two years since the FCA published its Discussion Paper on AI, the technology has been “propelled to the forefront of agendas across the economy with unprecedented speed” and that “AI could significantly transform the way [financial institutions] serve their customers and clients.”
The report goes on to outline the regulator’s approach to AI and responds to the five key principles laid out by the government: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Looking ahead, the FCA said that it plans to deepen its understanding of AI deployment in UK financial markets in an effort to ensure that “any potential future regulatory adaptations are proportionate to the risks, whilst creating a framework for beneficial innovation”.
The FCA added that it is currently involved in diagnostic work on the deployment of AI across UK financial markets, and that it is running a third edition of its machine learning survey jointly with the Bank of England. The FCA said it is also collaborating with the Payment Systems Regulator (PSR) to consider the implications of AI across payment systems.
To ensure AI is used safely and responsibly, the FCA said it is assessing opportunities to “pilot new types of regulatory engagement as well as environments in which the design and impact of AI on consumers and markets can be tested and assessed without harm materialising”.
As for its own use of AI, the regulator said that the technology can help identify fraud and bad actors, noting that it uses web scraping and social media tools able to detect, review and triage potential scam websites. The regulator said that it plans to invest further in these technologies, and that it is exploring additional use cases, including using natural language processing (NLP) to aid triage decisions, assessing AI’s ability to generate synthetic data, and using large language models (LLMs) to analyse and summarise text.
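The FCA does not describe its tooling in detail, but a minimal sketch of what a “detect, review and triage” pipeline for suspect websites could look like is shown below. Everything here is a hypothetical illustration: the scam-signal keywords, the thresholds, and the placeholder URL are assumptions, not the regulator’s actual methods.

```python
# Toy illustration of a detect-review-triage pipeline for potential scam
# websites. All keywords, thresholds and URLs are hypothetical examples.
import re
import urllib.request

# Hypothetical phrases often associated with investment-scam pages.
SCAM_SIGNALS = ["guaranteed returns", "risk-free", "act now", "limited offer"]


def fetch_text(url: str) -> str:
    """Download a page and strip HTML tags to get rough visible text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return re.sub(r"<[^>]+>", " ", html).lower()


def triage(url: str) -> str:
    """Count scam-signal matches on the page and assign a review bucket."""
    text = fetch_text(url)
    hits = sum(signal in text for signal in SCAM_SIGNALS)
    if hits >= 3:
        return "escalate"  # strong match: pass to investigators
    if hits >= 1:
        return "review"    # weak match: queue for human review
    return "dismiss"       # no match: no further action


if __name__ == "__main__":
    print(triage("https://example.com"))  # placeholder URL
```

A production system would of course rely on trained models rather than a keyword count, but the structure (scrape, score, route to a human review queue) reflects the detect-review-triage workflow the update describes.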
The report concludes that while AI can “make a significant contribution to economic growth,” this will require “a strong regulatory framework that adapts, evolves and responds to the new challenges and risks that technology brings”.
Reacting to the FCA’s response, Karim Haji, global and UK head of financial services at KPMG, said that balancing the opportunities and risks of AI remains a priority for the sector.
“The regulation of AI will continue to be a big issue for the financial services sector this year,” Haji commented. “A recent poll we conducted found 81 per cent of sector leaders ranked policies aimed at balancing the opportunities with the risks of AI as important when it comes to government policy ahead of a general election.”