The UK’s financial watchdog has announced plans to launch a new AI Lab as part of its strategy to support financial services firms on their ongoing journeys with the evolving technology.
During a speech at an event marking the Financial Conduct Authority’s (FCA) 10th anniversary, chief data, information and intelligence officer Jessica Rusu said the move would support AI-related insights, discussions, and case studies.
She went on to say that the new Lab will help the organisation “deepen [its] understanding of potential AI risks and opportunities in a collaborative environment where regulators and industry can engage candidly and openly."
Following its collaboration with the Digital Regulation Cooperation Forum (DRCF) on the AI & Digital Hub, which offers free and informal advice on cross-regulatory queries, the regulator plans to work more closely with the organisation on its new initiative.
The DRCF brings together the Competition and Markets Authority, the Information Commissioner's Office, Ofcom, and the FCA.
Rusu also shared that the Lab will be made up of four key components: AI Spotlight, AI Sprint, AI Input Zone, and the Supercharged Sandbox.
AI Spotlight aims to provide a space for firms to share real-world examples of how they are using AI and to share emerging AI solutions that will "lead to industry growth".
The AI Policy Sprint will bring together the "brightest minds" in industry, academia, regulation, technology and consumer representatives to focus on how to enable the safe adoption of AI in financial services. The inaugural AI Sprint will take place in January 2025.
The AI Input Zone is an online feedback platform designed to give stakeholders a chance to input their opinions on the future of AI in UK financial services.
The Supercharged Sandbox will involve the running of AI-focused TechSprints and expand upon the FCA's existing Digital Sandbox infrastructure through greater computing power, improved datasets, and increased AI testing capabilities.
"AI will revolutionise financial services, providing solutions to improve consumer financial inclusion, help prevent market abuse, and support the delivery of new products and services," continued the organisation's chief data officer. "And whilst we are just starting to see AI’s benefits emerge, we are clear that those benefits do not come without risks.
"As a regulator, we must play a critical role in ensuring AI is deployed in a way that is safe, fair and in the best interests of consumers and the market as a whole. Even some of the world’s most effusive backers of AI recognise the importance of ensuring the risks of AI are mitigated, as we all work to realise the undoubtedly enormous benefits the technology has to offer."
Speaking at FStech's The Future of AI in Financial Services conference last month, Ed Towers, head of advanced analytics and data science at the FCA, discussed the organisation’s approach to regulating AI and its own use of the technology.
“We seek to create an environment that facilitates the beneficial adoption of AI; it is important to build public trust,” Towers explained. “At the FCA, we’re enabling an environment for the safe and responsible use of AI in financial services.”
He added: “We regulate the outcomes, assess the risks and how it can impact market integrity. We are working with other regulators and our foreign peers to understand the impact of AI, as AI cuts across regulatory boundaries and jurisdictions.”
In April 2024, the FCA published its AI Update, in which it reiterated its principles-based and outcomes-focused approach to AI, including how many of the associated risks can be mitigated within its existing outcomes-based regulatory frameworks, such as the Consumer Duty and the Senior Managers & Certification Regime.