FCA regulatory framework struggling with same AI challenges as FIs, says cross-cutting policy head

The Financial Conduct Authority (FCA) is struggling with some of the same challenges in rolling out AI as British financial institutions, the regulator's head of department for cross-cutting policy and strategy has said.

Speaking on a panel at FTT Festival London focused on the ethical and responsible use of AI in financial services, Alex Smith highlighted the challenge the FCA faces in understanding what is happening across its regulatory environment, given the huge volume of information it receives from thousands of firms, ranging from small local businesses to major corporations.

“Every single company in this room is facing the challenge of assessing how to effectively use AI in their organisation,” he told the audience. “This is certainly also happening within the regulatory framework of the FCA, and the promising initial results obtained from some proofs of concept suggest that this technology definitely has a future.”

The head of strategy was questioned on how regulators are currently using AI.

He said: “We are actively exploring how to integrate AI into our work, much like other regulators around the world.”

He added that the UK regulator is focusing on integrating AI into processes that require significant human input, such as its authorisation process, to boost efficiency and support improved decision-making.

“The intention is not to replace staff with AI, but to make these processes faster and more effective,” Smith said.

He also pointed out that the financial services watchdog is using AI to help analyse data so that it can have a clearer picture of what is happening in the market.

“We wonder how best to process and make sense of all this intelligence, and there are hopes for using AI in new ways to help,” Smith added.

He also stressed the importance of accountability when it comes to using AI.

“Even if a company outsources its AI systems, it cannot outsource responsibility for how those systems are used,” he said.

Smith added that this responsibility must be clearly documented within the organisation which has deployed the technology.

He also emphasised that, as a regulatory authority, the FCA shares responsibility for ensuring that AI is deployed correctly and for working with financial services firms to achieve this goal.
