Financial Abuse Specialists to Embrace Generative AI by 2025


A global survey conducted by the Association of Certified Fraud Examiners (ACFE) and SAS reveals strong interest in generative artificial intelligence among fraud prevention specialists. However, earlier editions of the survey suggest that actual adoption of advanced technologies for fraud prevention still lags expectations.

Generative artificial intelligence has captured the public's imagination, and the hopes attached to its possibilities permeate nearly every aspect of our lives. It is therefore not surprising that the latest ACFE and SAS study found that 83% of financial fraud control specialists anticipate implementing the technology within the next two years.

ACFE and SAS have released the third edition of their global survey, the 2024 Anti-Fraud Technology Benchmarking Report, which tracks key trends in the evolution of financial fraud detection methods since 2019. The latest edition reflects the responses of nearly 1,200 ACFE members surveyed at the end of 2023. The survey's findings include:

Interest in artificial intelligence (AI) and machine learning (ML) is higher than ever. Nearly one in five fraud prevention specialists (18%) reports using these technologies, and another 32% anticipate implementing AI/ML within the next two years, the highest figures since the survey began. If those plans hold, the use of AI/ML in financial fraud prevention programs will nearly triple by the end of next year.
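The "nearly triple" projection follows directly from the survey figures: if the 32% who plan to adopt AI/ML actually do so, adoption would rise from 18% to 50% of respondents. A quick sketch of that arithmetic, using the percentages reported above:

```python
# Figures from the 2024 Anti-Fraud Technology Benchmarking Report
current_adoption = 0.18   # respondents already using AI/ML
planned_adoption = 0.32   # respondents expecting to adopt within two years

projected = current_adoption + planned_adoption
growth_factor = projected / current_adoption

print(f"Projected adoption: {projected:.0%}")      # 50%
print(f"Growth factor: {growth_factor:.2f}x")      # 2.78x, i.e. nearly triple
```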

However, actual use of AI and ML lags behind these expectations. Despite strong interest, adoption of these technologies for detecting and preventing fraud has grown by only 5 percentage points since 2019, far below the adoption rates anticipated in the 2019 and 2022 surveys (25% and 26%, respectively).

While the use of many analytical techniques remains stable, the application of biometrics and robotics in financial fraud prevention programs continues to grow. Use of physical biometrics has increased by 14 percentage points since 2019 and is now reported by 40% of respondents. Twenty percent of participants report using robotics, including robotic process automation (RPA), up from 9% in 2019. Adoption of these technologies is particularly high in banking and financial services, where half of respondents (51%) use physical biometric data and a third (33%) use robotics.

“Generative artificial intelligence tools can be extremely dangerous if they fall into the wrong hands,” said ACFE President John Gill. “Three out of five organizations anticipate increasing their budgets for financial fraud prevention technologies in the next two years. How they invest these funds can give them an edge in the technological arms race with cybercriminals. It’s a tough fight, considering that, unlike fraudsters, organizations face the additional challenge of using these technologies ethically and in compliance with regulations.”

“Strong interest in advanced analytics techniques contrasted with much lower adoption rates speaks to the complexity of scaling the AI and analytics lifecycle,” said Stu Bradley, Senior Vice President, Fraud and Security Intelligence at SAS. “It’s also a reminder of the importance of choosing the right technology partner. AI and machine learning are not simple plug-and-play applications. But the benefits are more easily realized by introducing comprehensive solutions across the entire risk management framework on a single, AI-based platform. This is the approach we have taken with our cloud-native solution, SAS Viya, which operates in any programming language.”

The report further explores the future of GenAI – will it boom or bust? The ultimate resolution of this tension between high enthusiasm for advanced analytical techniques and the reality of implementing them in complex organizational environments remains to be seen. The role of caution cannot be overstated as responsible innovation demands asking not just “Can we use these technologies?” but also “Should we use them?” and managing the consequences of those decisions ethically and effectively.
