Search results
Abhishek Gupta, Dwijendra Nath Dwivedi and Ashish Jain
Abstract
Purpose
Transaction monitoring systems set up by financial institutions are among the most widely used ways to track money laundering and terrorist financing activities. While effective to a large extent, these systems generate very high false positives. With evolving patterns of financial transactions, they also need an effective mechanism for scenario fine-tuning. The purpose of this paper is to highlight a quantitative method for optimizing scenarios in a money laundering context. While anomaly detection and unsupervised learning can identify false negatives and reveal new patterns, for existing scenarios businesses generally rely on judgment- or data-analysis-based fine-tuning of thresholds. The objective of such exercises is to enhance the productivity rate.
Design/methodology/approach
In this paper, the authors propose an approach based on linear/non-linear optimization for threshold fine-tuning. This traditional operations research technique has often been applied to optimization problems. The current problem of threshold fine-tuning for scenarios has two key features that warrant linear optimization. First, scenario-based suspicious transaction reporting (STR) cases overlap heavily at the overall customer level, i.e. more than one scenario captures the same customer with differing degrees of abnormal behavior. This implies that scenarios can be better coordinated to catch more non-overlapping customers. Second, different customer segments exhibit differing degrees of transaction behavior; hence, segmenting and then reducing slack (redundant catches of suspects) can result in a better productivity rate (defined as productive alerts divided by total alerts) in a money laundering context.
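The coordination idea described above can be sketched as a toy search over threshold settings. Everything below is invented for illustration (scenario names, alert counts, customer IDs); the paper itself uses linear/non-linear programming over real alert data, whereas this sketch brute-forces the same trade-off: pick one threshold per scenario so that every known STR customer is still caught while total alert volume is minimized.

```python
from itertools import product

# Hypothetical toy data: for each scenario, each candidate threshold setting
# yields (alerts generated, set of STR customers caught). Tighter thresholds
# fire fewer alerts but may miss suspicious customers.
candidates = {
    "rapid_movement": [(100, {"c1", "c2", "c3"}), (60, {"c1", "c3"}), (30, {"c1"})],
    "structuring":    [(80, {"c2", "c3", "c4"}), (50, {"c3", "c4"}), (20, {"c4"})],
    "high_risk_geo":  [(40, {"c4", "c5"}), (25, {"c5"}), (10, set())],
}

def optimize_thresholds(candidates, required_strs):
    """Exhaustively search threshold combinations that keep every known
    STR customer covered while minimizing the total alert volume."""
    names = list(candidates)
    best = None
    for combo in product(*(candidates[n] for n in names)):
        caught = set().union(*(strs for _, strs in combo))
        if not required_strs <= caught:
            continue  # this combination would drop a known STR capture
        alerts = sum(a for a, _ in combo)
        if best is None or alerts < best[0]:
            best = (alerts, dict(zip(names, combo)))
    return best

alerts, chosen = optimize_thresholds(candidates, {"c1", "c2", "c3", "c4", "c5"})
print(alerts)  # minimum total alerts with full STR coverage
```

Because scenarios overlap in the customers they catch, the cheapest combination lets one scenario's threshold relax where another already covers the same customers; at realistic scale this same coverage constraint would be handed to an LP/MILP solver rather than enumerated.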
Findings
The results show that implementing the optimization technique improves the productivity rate through two drivers. First, the team learns the best possible combination of thresholds across scenarios for maximizing STR coverage: fine-tuned thresholds cover the suspected transactions better than traditional approaches do. Second, redundancy/slack margins on thresholds are reduced, thereby improving the overall productivity rate. The experiments, focused on six scenario combinations, resulted in a 5.4% reduction in alerts and a 1.6% reduction in unique customers for the same number of STR captures.
Originality/value
The authors propose an approach based on linear/non-linear optimization for threshold fine-tuning, as very little work has been done on optimizing the scenarios themselves, even though scenario-based monitoring is the most widely used practice in enterprise-wide anti-money laundering solutions. The results show that by adding a layer of mathematical optimization, financial institutions can save a few million dollars without compromising their STR capture capability. This will hopefully go a long way in leveraging artificial intelligence to make financial institutions more efficient in controlling financial crimes and save some hard-earned dollars.
Abhishek Gupta, Dwijendra Nath Dwivedi, Jigar Shah and Ashish Jain
Abstract
Purpose
Good-quality input data is critical to developing a robust machine learning model for identifying possible money laundering transactions. McKinsey, during one of the ACAMS conferences, attributed the struggles of artificial intelligence use cases in compliance partly to data quality. Concerns are often raised about the data quality of predictors such as wrong transaction codes, industry classification, etc. However, there has not been much discussion of the most critical variable in machine learning, the definition of the event, i.e. the date on which the suspicious activity report (SAR) is filed.
Design/methodology/approach
The team analyzed the transaction behavior of four major banks across Asia and Europe. Based on the findings, the team created a synthetic database of 2,000 SAR customers mimicking the time of investigation and case closure. In this paper, the authors focus on one very specific area of data quality: the definition of the event, i.e. the SAR/suspicious transaction report.
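A minimal sketch of what such a synthetic database might contain is shown below. The field names, dates, and the investigation lag drawn from the 15-day-to-10-month range noted in the research limitations are all assumptions for illustration; the point is that each record carries two candidate event dates, the date of the suspicious activity and the much later SAR filing date, and the gap between them is the prediction span the paper is concerned with.

```python
import random
from datetime import date, timedelta

random.seed(7)  # reproducible illustration

def make_synthetic_sars(n):
    """Generate hypothetical SAR records with an activity date and a
    later filing date reflecting investigation and case closure time."""
    rows = []
    for i in range(n):
        activity = date(2023, 1, 1) + timedelta(days=random.randrange(200))
        # investigation lag assumed between 15 days and ~10 months
        filing = activity + timedelta(days=random.randrange(15, 300))
        rows.append({"customer": f"cust_{i}",
                     "activity_date": activity,
                     "sar_filed": filing})
    return rows

rows = make_synthetic_sars(5)
# Labeling the training event at the activity date rather than the filing
# date is what shrinks the lag the model must predict across.
lags = [(r["sar_filed"] - r["activity_date"]).days for r in rows]
print(min(lags), max(lags))
```

Defining the model's target event on `activity_date` instead of `sar_filed` removes the investigation lag from the label, which is the data-quality choice the paper argues matters most.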
Findings
The analysis of a few of the banks in Asia and Europe suggests that this alone can improve the effectiveness of the model and reduce the prediction span, i.e. the time lag between a money laundering transaction taking place and its prediction as an alert for investigation.
Research limitations/implications
The analysis drew on existing experience of situations where the time between alert and case closure is long (anywhere from 15 days to 10 months). The team could not quantify the impact of this finding owing to the lack of actual cases observed so far.
Originality/value
The key finding from the paper suggests that, under the current event definition, money launderers appear either to increase or to reduce their level of activity in the most recent quarter. This is not true of their real behavior: they typically show a spike in activity through various means during money laundering. This, in turn, impacts the quality of the insights the model is trained on. The authors believe that once financial institutions start speeding up investigations of high-risk cases, the scatter plot of SAR behavior will change significantly, leading to better capture of money laundering behavior and a faster and more precise "catch" rate.