UK weighs standardised testing regime for bank AI models

The UK government is reportedly considering introducing standardised testing for artificial intelligence models used by banks following regulatory concerns about inadequate oversight of their deployment.

The proposal – first reported by the Financial Times – was put forward last month by Harriet Rees, chief information officer at Starling Bank, and submitted to the Department for Science, Innovation and Technology as policymakers assess how to strengthen safeguards around widely used AI systems. Rees, who also serves as a government financial services AI “champion”, argued that independent evaluation would address gaps in current practices.

Rees said that banks currently rely on their own internal checks without any shared benchmark. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she said.

The proposal follows warnings from the Bank of England’s Prudential Regulation Authority, which told lenders at meetings in October that their monitoring of AI models was “not frequent enough”, according to presentation materials from the sessions. Regulators have increasingly focused on how banks oversee third-party technologies embedded in critical operations.

Rees told the FT that a centralised approach could reduce duplication and establish consistent standards across the sector. “Given our reliance on US models, it would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she said.

There is currently no legal requirement in the UK for AI models to be assessed before being deployed in regulated industries. While companies such as OpenAI and Anthropic have voluntarily submitted systems for review by the government’s AI Security Institute, these assessments focus on frontier risks rather than routine commercial use in banking.

Rees said an independent testing regime would act as a “fail-safe” rather than replacing firms’ own controls, and cautioned against assigning responsibility to a sector-specific regulator given the cross-industry use of general-purpose AI. She described the AI Security Institute as the “most obvious body” to lead such work and said discussions with its director-general Ollie Ilott had been positive, adding: “They agreed that there was nothing else out there like this today.”

A government spokesperson, however, indicated that ministers are not currently planning to expand the institute’s remit. “The AI Security Institute is focused on frontier-AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson told the paper.
