The Commonwealth Bank of Australia (CBA) has made its AI model for identifying digital payment transactions featuring offensive messages freely available to any bank worldwide.
The model, now available on the code-hosting platform GitHub, is designed to identify digital payment transactions that include harassing, threatening or offensive messages, a practice the bank refers to as “technology-facilitated abuse”.
CBA first rolled out abuse transaction monitoring in 2020; around 400,000 transactions are now blocked annually by an automatic filter that prevents offensive language from being used in transaction descriptions in its app.
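To illustrate the idea behind such a filter, a toy version in Python might look like the sketch below. The denylisted terms, tokenisation and blocking logic are placeholder assumptions for illustration, not CBA's actual implementation.

```python
import re

# Minimal sketch of a denylist filter of the kind the article describes.
# DENYLIST holds hypothetical placeholder terms, not CBA's real word list.
DENYLIST = {"exampleslur", "examplethreat"}

def is_blocked(description: str) -> bool:
    """Return True if a transaction description contains a denylisted term."""
    tokens = re.findall(r"[a-z']+", description.lower())
    return any(token in DENYLIST for token in tokens)

print(is_blocked("rent for march"))       # False: a normal description passes
print(is_blocked("pay up exampleslur"))   # True: this description would be blocked
```

In a real payment flow, a check like this would run before the transaction settles, rejecting the payment or the message rather than merely logging it.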
CBA group customer advocate Angela MacMillan explained that the bank developed the technology after its research found that one in four Australian adults had experienced financial abuse from a partner.
“Sadly, we see that perpetrators use all kinds of ways to circumvent existing measures such as using the messaging field to send offensive or threatening messages when making a digital transaction,” she said. “By using this model, we can scan unusual transactional activity and identify patterns and instances deemed to be high risk so that the bank can investigate these and take action.”
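MacMillan's description suggests a pattern-level check layered on top of the word filter: rather than matching individual terms, it looks for unusual transactional activity. The sketch below shows one hypothetical heuristic of that kind, flagging bursts of low-value payments that exist mainly to carry a message; the tuple layout, thresholds and heuristic itself are assumptions for illustration, not CBA's published model.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative sketch of pattern-based flagging, assuming each transaction
# is a (sender, recipient, amount, timestamp, description) tuple with a
# datetime timestamp. Thresholds are placeholder assumptions.
LOW_VALUE = 1.00                    # payments this small often exist only to carry a message
BURST_WINDOW = timedelta(hours=24)  # look-back window for repeated payments
BURST_COUNT = 5                     # messaged payments within the window that trigger review

def flag_high_risk(transactions):
    """Yield (sender, recipient) pairs showing bursts of low-value messaged payments."""
    recent = defaultdict(list)
    flagged = set()
    for sender, recipient, amount, ts, description in sorted(transactions, key=lambda t: t[3]):
        if amount > LOW_VALUE or not description.strip():
            continue  # only consider low-value payments that carry a message
        key = (sender, recipient)
        recent[key] = [t for t in recent[key] if ts - t <= BURST_WINDOW] + [ts]
        if len(recent[key]) >= BURST_COUNT and key not in flagged:
            flagged.add(key)
            yield key  # queue the pair for human investigation
```

As the article notes, flagged activity of this kind is investigated by the bank rather than acted on automatically.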
CBA said the model detects around 1,500 cases a year that it deems high-risk.
“By sharing our source code and model with any bank in the world, it will help financial institutions have better visibility of technology-facilitated abuse,” MacMillan said. “This can help to inform action the bank may choose to take to help protect customers.”
In August CBA launched a police referral pilot designed to set new standards for how banks report tech-facilitated abuse to law enforcement.
At the time, the bank said the pilot built on its existing use of AI to identify and stop abuse in transaction descriptions, and that it was working with police in New South Wales (NSW) to create a new process that allows it to report abuse with victims’ consent.