How the UK’s anti-fraud initiatives could drive scammers further towards GenAI

FStech News Editor Alexandra Leonards explores how scam techniques are evolving with the advent of generative AI and whether the UK’s upcoming mandatory reimbursement scheme will push fraudsters even closer to the technology.

The financial services market is no stranger to the idea that scammers are often one step ahead when it comes to the latest technology. So, it’s likely no surprise that the advancement of artificial intelligence (AI) is proving a double-edged sword for the industry.

While the broad concept of generative AI (GenAI) – and more specifically large language models (LLMs) – gives financial institutions the potential to build new tools that could address everything from improving efficiency and customer experience to combating fraud and financial crime, the industry’s biggest foes also have access to the technology (with the added bonus of not operating within the bounds of regulation).

Initiatives like the mandatory reimbursement scheme – coming into force next year – and recently announced measures that will see social media giants take more responsibility for online fraud could, alongside advancing technologies like GenAI, begin to offset the advantage criminal gangs hold over banks and other stakeholders through their ability to exploit the technology without limits and with near impunity. But history suggests that when one type of fraud comes under threat, criminals act fast and another quickly crops up in its place.

“Scammers are constantly adapting and are extremely adept at exploiting new ways to trick the unsuspecting public into parting with their money,” warns Baz Thompson, head of fraud and investigations at Metro Bank.

He describes the fight against fraud as a “constant battle”, explaining that while banks have strengthened their defences to detect and prevent fraud, criminals are increasingly targeting individual customers directly because they are often the easier target.

And more recently, as with other technologies and tools, financial services providers have seen criminals move quickly to adopt generative AI as part of their methodology.

Adapting with GenAI

“As the use of ChatGPT and other advanced language models (ALMs) have become widespread, we’ve seen an increase in fraudsters exploiting these tools in a variety of ways,” says Caroline Birchinall, head of fraud strategy at Visa. “This has included sophisticated phishing lures that take advantage of ALMs’ ability to produce written requests free of grammar or spelling errors that are harder to identify as fraud or creating and mimicking realistic speech to impersonate financial institutions.”

Metasploit, a well-known penetration testing framework that can be used to send phishing emails en masse, has been around for years, but a common giveaway for fraudsters using such tools has been poor grammar and typos. With WormGPT, a newer tool that uses language models to generate realistic emails without those tell-tale mistakes, phishing messages have become increasingly convincing.

Voice scams

Visa has also seen signs that AI-powered voice scams are becoming more common. The payments processor has recorded examples of fraudsters harvesting voice clips posted on social media by the children of potential victims, using them as training data to make their voice-mimicking AI models more convincing.

The company said it is seeing this in the resurgence of “Hi Mum” scams, whereby criminals pose as a friend or relative of a victim to gain their trust before asking for money. According to the Australian Competition and Consumer Commission’s Scamwatch, around 95 per cent of all reported scams of this kind result in a loss.

Meanwhile, the same deepfake technology could pose a risk to banks and other organisations that use voice as a form of authentication. While some developers of voice authentication may have programmes that can spot the difference between a synthetic and a real voice, detection is likely to become harder as the technology develops.

The emergence of large language models has also driven the creation of malware designed to evade detection, which can be used to fuel ransomware attacks, warns Birchinall. She adds that the technology has helped criminals write malicious code, identify vulnerabilities, craft phishing pages, and learn how to hack.

Additionally, ALMs have been used to create bots that bombard a victim’s device with SMS notifications to bypass security controls and gain access to their account. The ability of ChatGPT-style models to hold human-like conversations can also be exploited by scammers, with unsuspecting victims fooled into thinking they are speaking to a real person.

Visual deepfakes

Ignatius Adjei, partner, fraud and financial crime at KPMG UK, says that deepfakes are giving fraudsters a head start with greater multi-vector attacks. Before, there was only text, he explains; now there are voice, images, and video.

High-profile cases such as an incident reported by the South China Morning Post, in which six people were arrested in Hong Kong after creating doctored images for loan scams targeting banks and money lenders, demonstrate how impactful AI deepfake technology can be.

But Adjei says that in terms of volume, this type of fraud is not something he is seeing on a large scale in the UK. At the moment, it takes more effort to put together a deepfake and there are other, easier ways to defraud people, he explains.

Authorised Push Payment (APP) fraud, for example – where a fraudster tricks someone into sending a payment to an account outside of their control – continues to be the biggest fraud threat in the UK.

Synthetic identity fraud

The same GenAI-powered multi-vector attacks that Adjei describes could also drive an increase in synthetic identity fraud, which is already becoming a prevalent issue in the United States.

In this technique, criminals combine real information, such as addresses or social security numbers, with fictitious details to create a fake identity, which they then use to apply for credit cards.

“The Federal Reserve now considers Synthetic ID fraud – where a malicious actor creates a fake persona to apply for a loan, open an account, or make a purchase – the fastest-growing financial crime in the U.S., costing banks billions of dollars per year,” warns Jonathan Anastasia, executive vice president, cyber and intelligence, Mastercard. “To prevent synthetic ID fraud, businesses need to validate that multiple identity elements are valid and linked to a genuine person.”

With GenAI enabling fraudsters to create increasingly realistic text, images, audio, and video, the technology could be used to fabricate seemingly authentic customers. These fake customers are more difficult to detect and could be approved to make transactions, send and receive money, and even take out credit.

To address this, Mastercard uses machine learning-powered digital identity technology to validate a person by analysing multiple data points, such as name, email, and device IP address. These tools also use biometrics to analyse behaviour, for example by assessing how a user types or holds their phone to distinguish between a person and a bot.
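
To make that idea concrete, below is a minimal sketch in Python of how identity and behavioural signals might be combined into a single risk score. It is purely illustrative: the signals, weights, and threshold are invented for this article, and a toy rule-based scorer stands in for what would in practice be a machine learning model.

```python
# Illustrative only: a toy identity risk scorer. The signals, weights and
# threshold are invented for this sketch; real systems such as Mastercard's
# use machine learning over far richer data.
from dataclasses import dataclass


@dataclass
class IdentitySignals:
    email_matches_name: bool    # does the email plausibly belong to the stated name?
    ip_matches_address: bool    # does the device IP geolocate near the given address?
    typing_looks_human: bool    # does keystroke timing look human rather than scripted?
    account_age_days: int       # how long this identity has been observed


def risk_score(s: IdentitySignals) -> float:
    """Return a 0-1 risk score; higher means more likely synthetic."""
    score = 0.0
    if not s.email_matches_name:
        score += 0.30
    if not s.ip_matches_address:
        score += 0.25
    if not s.typing_looks_human:
        score += 0.30
    if s.account_age_days < 30:  # brand-new identities carry extra risk
        score += 0.15
    return min(score, 1.0)


applicant = IdentitySignals(email_matches_name=False, ip_matches_address=True,
                            typing_looks_human=False, account_age_days=5)
print(f"risk: {risk_score(applicant):.2f}")  # 0.75 -> refer for manual review
```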

Meanwhile, Visa’s Risk Operation Centres use AI-enabled capabilities alongside always-on experts to proactively detect and prevent billions of dollars in attempted fraud. The company is able to provide a risk score to each transaction and identify anomalies that may be related to synthetic fraud.
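
As a rough illustration of transaction-level anomaly scoring – not Visa’s actual system – an unsupervised model such as scikit-learn’s IsolationForest can flag payments that look unlike the bulk of historical activity. The features and data below are synthetic.

```python
# Illustrative only: flag anomalous transactions with an unsupervised model.
# The features and training data are synthetic; real systems score far
# richer signals in real time.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount (GBP), hour of day, days since account opened
normal = np.column_stack([
    rng.normal(60, 20, 500),    # typical spend
    rng.normal(14, 3, 500),     # daytime activity
    rng.normal(900, 200, 500),  # long-established accounts
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large transfer at 3am from a week-old account
suspect = np.array([[4500.0, 3.0, 7.0]])
print(model.predict(suspect))        # [-1] -> flagged as anomalous
print(model.score_samples(suspect))  # lower score -> higher risk
```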

Mandatory reimbursement

While fraudsters are cleverly adapting their methods to incorporate the latest AI developments, new schemes in the UK have been designed to reduce their impact.

In July, the Payment Systems Regulator (PSR) announced that financial institutions and payments providers will move from a voluntary reimbursement scheme to a mandatory one for APP fraud.

The new rules, which officially come into force on 7 October 2024, will require banks and payments firms “in most cases” to reimburse in-scope customers who fall victim to this type of fraud, with the sending and receiving payment firms splitting the cost 50:50 – so if a victim is reimbursed £1,000, each firm bears £500.

KPMG UK’s Ignatius Adjei says that the scheme is fostering a “prevent or pay” culture, adding that the move will see even more investment into protection against fraud.

“There’s a huge amount of awareness – including through changes to regulation like the Consumer Duty,” he explains. “There’s increased pressure on making sure you get the right and fair outcome.”

Social media

Similarly, the introduction of a new Online Fraud Charter is promising to widen the pool of responsibility when it comes to fraud.

Under the new Charter, announced in November, BigTech firms and social media platforms operating in the UK, including Amazon and Facebook, have pledged to take further action against the fraud taking place on their platforms.

Figures from UK Finance for 2022 show that around three-quarters of online fraud starts on social media, while 80 per cent of APP fraud cases start online.

"Social media firms are active players in the fraud ecosystem, and I think they're going to be playing a stronger role in terms of the prevention and detection of fraud,” says Adjei. “With greater collaboration between these different institutions, and with the regulatory backdrop and digitalisation, it's only going to increase that change in the prevention which is becoming cross-industry – I think that is going evolve in the next three to five years.”

The government says that the Charter, which it has described as the “first of its kind in the world”, will see eleven of the world’s largest tech companies clamp down on fraud through a new set of measures.

The companies, which also include eBay, Google, Instagram, LinkedIn, Match Group, Microsoft, Snapchat, TikTok, and YouTube, have pledged to verify new advertisers and "promptly" remove any fraudulent content. There will also be increased levels of verification on peer-to-peer marketplaces, while people using online dating services will have the opportunity to prove they are who they say they are.

The move has unsurprisingly been welcomed by the banking industry, with many financial institutions in the UK having long called for companies in the tech sector to take more responsibility for the fraud that is increasingly starting on their websites.

A shift towards GenAI scams

While greater focus on fraud from social media platforms and further investment in detection and prevention by banks and other stakeholders could begin to better tackle APP fraud, it could also mean the UK starts to face some of the more complex GenAI-driven fraud that is growing elsewhere in the world.

“Because of the investment and the need to reimburse, those easier frauds that are being committed are going to become harder to do,” says KPMG UK’s Adjei. “As things like AI become more accessible to others, I think there will be a move towards those kinds of areas because if you shut off one avenue, they'll move to another very quickly.”

Ultimately, if companies get much better at preventing things like purchase scams, fraudsters will look for other ways to worm their way in. For example, if companies make it harder to register devices or add debit cards or coupons to e-wallets like Apple Pay or Google Pay, fraudsters could instead attempt to take over an account using voice cloning or deepfakes.

“I think the challenge for financial institutions – and this is something they are investing in and doing more on in this space – is having a more holistic approach in terms of where they see these various attacks and being able to join the dots,” adds Adjei. “So, if they are a fraudster making an attack from an audio channel or from telephony, and then they see a transaction being made via account-to-account, they can join the dots between the two and actually say this ‘looks higher risk’.

“I think that this will continue to be a challenge, certainly for some of the larger banks, as they get the holistic view of these multi-vector attacks.”
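
A simple sketch of what “joining the dots” might look like in code – correlating events for the same customer across channels and flagging a payment that closely follows a suspicious voice-channel contact. The event schema and the 24-hour window are invented for illustration.

```python
# Illustrative only: escalate risk when a payment follows a suspicious call
# for the same customer. The event schema and 24-hour window are invented.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)

events = [
    {"customer": "C1", "channel": "telephony", "type": "suspicious_call",
     "at": datetime(2024, 1, 10, 9, 15)},
    {"customer": "C1", "channel": "payments", "type": "account_to_account",
     "at": datetime(2024, 1, 10, 17, 40)},
]


def correlated_risk(events, customer):
    """Yield payments that follow a suspicious call within WINDOW."""
    calls = [e["at"] for e in events
             if e["customer"] == customer and e["type"] == "suspicious_call"]
    for e in events:
        if e["customer"] == customer and e["type"] == "account_to_account":
            if any(timedelta(0) <= e["at"] - c <= WINDOW for c in calls):
                yield e


for payment in correlated_risk(events, "C1"):
    print("higher risk:", payment)  # the 17:40 transfer follows the 09:15 call
```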

Additionally, UK Finance has warned that the reimbursement scheme could lead to an increase in bad actors posing as consumers to take advantage of the initiative – a form of first-party fraud.

Often the key is how quickly firms can adapt and react to the latest scam method by modifying controls in time. That’s easy enough if it involves tweaking a rule. But with the rapid development of AI and ever-evolving scam techniques, companies may need to change a whole system – and that’s where things become far more complicated.

It’s likely to be the organisations that look one step ahead – anticipating what is coming next from fraudsters as they try to circumvent each new hurdle, whether that be a clampdown from BigTech or more investment in tackling APP fraud – that will begin to find themselves on a more level playing field with the criminals.


