By AI Trends Staff
The use of AI to fight financial fraud—internally and externally—is a hot topic.
“AI is the future of fraud management, irrespective of the system you are using,” stated Svetlana Belyalova, head of operational risk management at Rosbank, Societe Generale Group, during a recent webcast hosted by Risk.net. “It brings a lot of value in both data management and decision-making.”
A firm’s maturity and operational processes for fraud management are key to selecting the technology that will be right for it, suggested Belyalova. Firms that had taken a siloed approach, fitting technology to a particular type of fraud, now want to take a more holistic approach and tap the AI capabilities of their fraud systems.
“What we really need to know better is how to manage these AI capabilities in our real-time environment—how to make them more effective, and how to make these systems learn from our [ever-evolving] day-to-day situations,” she stated.
Whereas AI capabilities might once have been “nice to have” among the tools financial institutions use to fight fraud, today “AI is becoming a must-have for analysts to decide whether transactions are fraudulent,” stated Amir Shachar, lead fraud research data scientist at NICE Actimize of Raanana, Israel, a supplier of software to combat financial crime and ensure compliance. NICE, for Neptune Intelligence Computer Engineering, was founded by seven former Israeli army colleagues.
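To make the idea concrete, here is a minimal sketch of how a model-driven score can feed an analyst’s fraudulent/legitimate decision. This is an illustration only, not NICE Actimize’s actual system: the features, weights, and threshold are invented stand-ins for what a trained model would learn from historical transactions.

```python
import math

def fraud_score(amount, hour, is_new_payee):
    """Return a probability-like fraud score in [0, 1] via logistic scoring."""
    # Hand-set weights standing in for coefficients a trained model would learn.
    z = 0.002 * amount              # larger transfers score higher
    z += 1.5 if is_new_payee else 0.0
    z += 0.8 if hour < 6 else 0.0   # small-hours activity is riskier
    z -= 4.0                        # bias term keeps typical activity low-risk
    return 1 / (1 + math.exp(-z))

def triage(score, threshold=0.5):
    """Route high-scoring transactions to an analyst for review."""
    return "review" if score >= threshold else "approve"

# A routine daytime payment clears; a large small-hours transfer to a
# new payee is routed to an analyst.
print(triage(fraud_score(amount=50, hour=14, is_new_payee=False)))    # approve
print(triage(fraud_score(amount=3000, hour=3, is_new_payee=True)))    # review
```

In a real-time deployment, the scoring function would be a trained model and the threshold would be tuned against the bank’s tolerance for false positives versus missed fraud.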
It is early days in the banking industry for fighting fraud with new technologies. Some early adopters have implemented advanced platforms incorporating AI, and others are still depending on older systems and existing processes. The group head of operational risk at Allied Irish Bank, Charles Forde, encouraged early adopters to talk about what is and is not working, so other banks can learn and derive best practices.
This would not be only about what technologies are being used, but also the approaches and operating models employed. “I think there’s still a big variance in different firms in how the technologies are being applied, and in the operating model,” he stated. “In some firms it’s primarily all in the first line. In some, the concentration of knowledge is in the second line. Ultimately, this activity should sit next to the business that it is supporting, regardless of what type of business you’re in.”
Bank Fraud Seen Costing At Least $7.1 Billion Annually
Sizing the cost of bank fraud is challenging. The Association of Certified Fraud Examiners’ (ACFE) 2018 Report to the Nations found that total losses caused by fraud exceed $7.1 billion. However, these are only known losses. The ACFE states that this figure does not come close to representing the total amount of fraud losses, and that the true global cost of fraud is probably “magnitudes higher” due to undetected and indirect costs.
KPMG’s 2019 Global Banking Fraud Survey, with responses from 43 banks worldwide, found that 52% of banks were not monitoring the total cost of fraud risk management, according to a recent report from fcase, a data aggregation hub supporting fraud management services, based in London.
A fraud risk management model is a framework outlining all processes related to how fraud can be identified, assessed, mitigated, monitored, and reported to senior management.
An effective fraud risk management model needs to build risk awareness, accountability, and transparency into how fraud is actively managed by banks and financial institutions, the report suggests. According to Deloitte, it enables organizations to have controls that first prevent fraud from taking place, then detect fraud as soon as it occurs, and finally respond effectively to fraud incidents.
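The prevent/detect/respond control loop described above can be sketched as a simple pipeline. The control names, limits, and actions below are illustrative assumptions, not Deloitte’s or any bank’s actual controls.

```python
def prevent(txn, velocity_limit=5):
    """Preventive control: stop the transaction before it executes
    (here, a simple velocity check on recent activity)."""
    return txn["count_last_hour"] <= velocity_limit

def detect(txn, amount_limit=10_000):
    """Detective control: flag suspicious activity as soon as it occurs."""
    return txn["amount"] > amount_limit

def respond(txn):
    """Responsive control: escalate a detected incident for investigation."""
    return {"case_id": txn["id"], "action": "freeze_and_report"}

def process(txn):
    """Run a transaction through the prevent -> detect -> respond loop."""
    if not prevent(txn):
        return "blocked"
    if detect(txn):
        return respond(txn)["action"]
    return "cleared"

print(process({"id": 1, "count_last_hour": 2, "amount": 500}))     # cleared
print(process({"id": 2, "count_last_hour": 9, "amount": 500}))     # blocked
print(process({"id": 3, "count_last_hour": 1, "amount": 50_000}))  # freeze_and_report
```

The point of the framework is that each layer catches what the previous one misses: preventive controls reduce exposure up front, detective controls surface what slips through, and responsive controls limit the damage and feed lessons back into the first two.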
The Association of Certified Fraud Examiners (ACFE) states that for a fraud risk management approach to function well, it must be proactive rather than reactive.
KPMG found major differences in which internal parties were responsible for setting the fraud risk tolerance for the organization, with 52% saying it was done by their Board/Risk Committee. “This shows there is still a lot to work on,” the report states. “With fraud activity increasing at a rapid pace, costing banks and financial institutions billions every year, the right fraud risk management operating models can help manage the damage created by fraudsters.”
AI Seen As Worthwhile Investment for Combating Fraud by Surveyed Banks
The use of AI and machine learning to combat fraud and money laundering was seen as a worthwhile investment in a survey, conducted by analyst company Ovum, of banks that had invested in AI. Over 80% believed their AI investment had generated a return, according to a report on the blog of FICO, the data analytics company based in San Jose.
AI is also being employed by the attackers. “While we’re meeting to discuss how to tackle fraud and financial crime, elsewhere the criminals are holding their own conferences to plan their attacks,” stated Julie Conroy, director of the Fraud and AML practice at Aite Group, market researchers based in Boston, at a recent conference from Finovate, a conference company focused on banking and financial technology.
Conroy pointed out that fraud and money laundering are financing some of the worst crimes society faces, including human trafficking, terrorism, and the operations of drug cartels.
Banks investing in data science teams also need to provide them with the tools to operationalize the work they have done, suggested Doug Clare, who oversees FICO’s fraud and compliance solutions, at the Finovate conference. “Banks need to pivot quickly on their experience of the financial crime they are seeing and get the models they develop into operation fast,” he said. “Without investment in the right platforms they can’t do that.”
The AI in use by banks must be explainable as well. “Organizations that deploy AI and machine learning to detect fraud and money laundering must therefore take care that the models they use are not ‘black box’,” stated Sarah Rutherford, Senior Director, FICO, and author of the recent blog post.
AI models are not infallible. As FICO Chief Analytics Officer Scott Zoldi stated in his post “Bank of England Validates Need for Explainable AI,” the sheer size and complexity of these models make it difficult to explain their operating processes to people. Zoldi outlined techniques being developed to make AI models explainable.
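One common explainability technique is to derive “reason codes” from the per-feature contributions of a glass-box score, so an analyst can see which signals drove a flag. The sketch below is a generic illustration of that idea, not FICO’s proprietary method; the feature names and weights are invented.

```python
# Invented weights standing in for a trained, interpretable (linear) model.
WEIGHTS = {"amount_zscore": 1.2, "new_device": 0.9, "foreign_ip": 0.7}

def score_with_reasons(features, top_n=2):
    """Return the score plus the features that contributed most to it."""
    # Each feature's contribution is simply weight * value, so the score
    # decomposes exactly and the explanation is faithful to the model.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return score, reasons

score, reasons = score_with_reasons(
    {"amount_zscore": 3.0, "new_device": 1, "foreign_ip": 0})
print(round(score, 1), reasons)  # 4.5 ['amount_zscore', 'new_device']
```

For the large, non-linear models Zoldi describes, the decomposition is no longer exact, which is why dedicated techniques are needed to approximate each feature’s contribution after the fact.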
Read the source articles in Risk.net, from fcase and the FICO blog.