The National Bank of Poland reported over 326,000 instances of fraud involving payment cards and bank transfer orders in 2023. The total value of these transactions amounted to 327 million zloty (around 81 million US dollars at 2021 exchange rates). With each passing year, the range of threats that banks and their customers are exposed to keeps growing, and both consumers and businesses fall victim to fraudsters. There is, however, a glimmer of hope in this darkness: artificial intelligence (AI), which can help financial institutions minimize risk and fight criminals.
In many areas, banks are already making extensive use of AI and machine learning algorithms. These help automate and streamline processes such as offer personalization, document analysis, credit risk assessment, and customer service through interactive chatbots. A KPMG study reveals that as many as 76% of financial institutions worldwide intend to use AI-based solutions for detecting and preventing fraud as well.
Fraud comes in many forms: identity theft and taking out loans in someone else's name, phishing for customer data, unauthorized use of credit cards, taking over bank accounts with previously stolen credentials, and forging documents. These are just some of the tactics employed by scammers.
Most often, consumers and businesses only find out about the theft of their funds after the fact, when the money disappears from their bank accounts. Financial institutions, for their part, cannot manually verify every transaction and transfer of funds. Generative artificial intelligence, however, shows great potential in this area.
Thanks to their ability to analyze vast amounts of financial data almost instantly, generative AI models can be used to identify anomalies in payment transactions. They enable real-time monitoring of customer behavior and comparison of each payment against the user's typical patterns. Generative AI solutions can also check the content of notifications and messages sent "on behalf of the bank" before they reach the recipient, and verify the authenticity of the identity documents used. This makes it possible to detect potentially fraudulent actions and block them before any theft occurs.
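To make the idea of anomaly detection on payment data more concrete, here is a minimal sketch in Python using scikit-learn's IsolationForest. The features (amount and hour of day) and all the numbers are purely illustrative assumptions, not a description of any bank's actual pipeline.

```python
# Minimal sketch: flag payments that deviate from a customer's typical behaviour.
# Assumes scikit-learn is available; features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of one customer's card payments: (amount in zloty, hour of day)
normal_history = np.column_stack([
    rng.normal(120, 40, 500),        # typical purchase amounts
    rng.normal(14, 3, 500) % 24,     # mostly daytime purchases
])

# Fit the model on the customer's typical behaviour
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

# Score incoming payments as they arrive
incoming = np.array([
    [95.0, 13.0],      # routine purchase
    [8200.0, 3.0],     # large night-time payment: likely flagged
])
flags = model.predict(incoming)   # -1 = anomaly, 1 = normal
for tx, flag in zip(incoming, flags):
    status = "FLAG FOR REVIEW" if flag == -1 else "ok"
    print(f"amount={tx[0]:.2f} hour={tx[1]:.0f} -> {status}")
```

In a production setting this scoring step would sit in the payment pipeline, with flagged transactions routed to additional checks rather than being blocked outright.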
For artificial intelligence to operate accurately, it needs to be trained on a huge amount of data. Machine learning models built on historical records of financial fraud can predict its recurrence with a high degree of accuracy; when faced with new types of threats, however, their precision may drop. It is therefore crucial that the data set on which an AI algorithm is trained be as comprehensive and up to date as possible.
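The following small sketch, on entirely synthetic and illustrative data, shows why this matters: a classifier trained only on one historical fraud pattern tends to miss a new one.

```python
# Sketch: a model trained on yesterday's fraud pattern largely misses a new one.
# Assumes scikit-learn; all distributions below are made up for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def legit(n):        # ordinary payments: moderate amounts, daytime hours
    return np.column_stack([rng.normal(100, 30, n), rng.normal(14, 3, n)])

def known_fraud(n):  # historical fraud pattern: very large amounts
    return np.column_stack([rng.normal(5000, 500, n), rng.normal(14, 3, n)])

def novel_fraud(n):  # new pattern: small amounts at unusual night-time hours
    return np.column_stack([rng.normal(90, 20, n), rng.normal(3, 1, n)])

# Train only on historical data: legitimate payments plus the known fraud pattern
X = np.vstack([legit(1000), known_fraud(100)])
y = np.array([0] * 1000 + [1] * 100)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# The model recognises the pattern it has seen, but largely misses the new one
print("detection rate, known pattern:", clf.predict(known_fraud(200)).mean())
print("detection rate, novel pattern:", clf.predict(novel_fraud(200)).mean())
```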
Where real data is lacking, a good approach is to use synthetic data instead. It is created using computer simulations, which can produce high-quality material for training AI algorithms. With such data, millions of potential financial fraud scenarios can be run, and the fraud detection system being developed becomes more precise and effective.
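A very simple version of such a simulation can be sketched with hand-written rules, as below. Real generators are far richer; the fields, rates, and file name here are assumptions made purely for illustration.

```python
# Sketch: generate a labelled synthetic transaction data set from simple rules.
# Uses only the Python standard library; every rule and value is illustrative.
import csv
import random

random.seed(7)

def simulate_transaction():
    """Simulate one labelled payment record; a small share follows a fraud script."""
    if random.random() < 0.02:                      # simulated fraud scenario
        amount = round(random.uniform(2000, 9000), 2)
        hour = random.choice([1, 2, 3, 4])          # unusual night-time hours
        foreign = 1
        label = "fraud"
    else:                                           # ordinary customer behaviour
        amount = round(random.gauss(120, 40), 2)
        hour = random.randint(8, 21)
        foreign = 0 if random.random() < 0.95 else 1
        label = "legit"
    return [amount, hour, foreign, label]

# Write a synthetic data set that can be shared and used for model training
with open("synthetic_transactions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["amount", "hour", "foreign", "label"])
    for _ in range(10_000):
        writer.writerow(simulate_transaction())
```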
Simulations can be conducted in the financial industry using Generative Adversarial Networks (GANs). A GAN uses two neural networks: one generates data that mimics real patterns, while the other compares it with real data and tries to spot the forgeries. Meanwhile, an Agent Based Model (ABM) simulates interactions between specific user profiles according to defined rules; the generated data is then analyzed to draw conclusions and make predictions about specific groups and communities.
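A minimal GAN for synthetic transaction features might look like the sketch below, assuming PyTorch is available. The three-feature layout (amount, hour, merchant risk) and the stand-in "real" data are hypothetical; a real system would train on anonymized historical transactions.

```python
# Sketch of a GAN that learns to generate synthetic transaction feature vectors.
# Assumes PyTorch; feature layout and the sample_real() stand-in are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 3   # e.g. normalized amount, hour of day, merchant risk score
NOISE_DIM = 8

# Generator: maps random noise to a synthetic transaction feature vector
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
# Discriminator: scores how "real" a feature vector looks
discriminator = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def sample_real(batch_size):
    """Stand-in for a batch of real, anonymized transaction features."""
    amount = torch.randn(batch_size, 1) * 0.5 + 1.0
    hour = torch.rand(batch_size, 1)
    risk = torch.rand(batch_size, 1) * 0.2
    return torch.cat([amount, hour, risk], dim=1)

for step in range(2000):
    real = sample_real(64)
    fake = generator(torch.randn(64, NOISE_DIM))

    # 1) Train the discriminator to separate real from generated samples
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Generated samples can then augment or replace sensitive training data
synthetic_batch = generator(torch.randn(10, NOISE_DIM)).detach()
print(synthetic_batch)
```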
A KPMG report indicates that within just three months in 2023, the proportion of entities in the financial sector implementing generative AI grew significantly, from 5% in March to 49% in June. This rapid adoption does not mean, however, that financial companies are unaware of the challenges linked to generative AI. Almost seven out of ten respondents (69%) cite information privacy as their most important concern. In addition, 46% believe that data transparency is an issue requiring immediate regulatory action.
Anonymizing data by means of synthetic data is financial institutions' response to these challenges, providing high-quality, anonymized information on which AI can be trained. Gartner analysts estimate that by the end of 2024, 60% of all training data for artificial intelligence will be generated synthetically.
Source: https://managerplus.pl/oszustwa-finansowe-na-rekordowym-poziomie-czy-ai-powstrzyma-przestepcow-14961