Insurance fraud detection is a challenging problem, given the variety of fraud patterns and the relatively small proportion of known frauds in typical samples. While building detection models, the savings from loss prevention need to be balanced against the cost of false alerts. Machine learning techniques allow for improving predictive accuracy, enabling loss control units to achieve higher coverage with low false positive rates. In this paper, multiple machine learning techniques for fraud detection are presented and their performance on various data sets is examined. The impact of feature engineering, feature selection and parameter tweaking is explored with the objective of achieving superior predictive performance.
1.0 Introduction
Insurance fraud covers the range of improper activities that an individual may commit in order to achieve a favorable outcome from the insurance company. These could range from staging the incident and misrepresenting the situation, including the relevant actors and the cause of the incident, to exaggerating the extent of the damage caused.
Potential situations could include:
The insurance industry has grappled with the challenge of insurance claim fraud from the very start. On one hand, there is the challenge of the impact on customer satisfaction through delayed payouts or prolonged investigation during a period of stress, along with the costs of investigation and pressure from insurance industry regulators. On the other hand, improper payouts hurt profitability and encourage similar delinquent behavior from other policy holders.
According to the FBI, the insurance industry in the USA consists of over 7,000 companies that collectively collect over $1 trillion in premiums annually. The FBI also estimates the total cost of insurance fraud (non-health insurance) to be more than $40 billion annually.
It must be noted that insurance fraud is not a victimless crime: the losses due to fraud impact all involved parties through increased premium costs, a trust deficit during the claims process and impacts to process efficiency and innovation.
Hence the insurance industry has an urgent need to develop capabilities that can help identify potential frauds with a high degree of accuracy, so that other claims can be cleared rapidly while the identified cases are scrutinized in detail.
2.0 Why Machine Learning in Fraud Detection?
The traditional approach to fraud detection is based on developing heuristics around fraud indicators. Based on these heuristics, a decision on fraud would be made in one of two ways. In certain scenarios, rules would be framed that define whether the case needs to be sent for investigation. In other cases, a checklist would be prepared with scores for the various indicators of fraud; an aggregation of these scores, along with the value of the claim, would determine whether the case needs to be sent for investigation. The criteria for determining indicators and the thresholds would be tested statistically and periodically recalibrated.
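As a hedged illustration of such a checklist-based approach (not any insurer's actual rules), the indicator names, weights, claim-value scaling and referral threshold below are hypothetical examples only:

```python
# Illustrative checklist-style fraud score; all indicators, weights and
# the referral threshold are hypothetical examples, not actual rules.

FRAUD_INDICATOR_WEIGHTS = {
    "late_police_report": 3,
    "claim_soon_after_policy_start": 4,
    "prior_claims_history": 2,
    "no_witnesses": 1,
}

def referral_score(claim: dict) -> float:
    """Aggregate the indicator scores, weighted by the claim value."""
    indicator_score = sum(
        weight for name, weight in FRAUD_INDICATOR_WEIGHTS.items() if claim.get(name)
    )
    # Larger claims get proportionally more scrutiny (hypothetical scaling).
    value_factor = 1.0 + claim["claim_amount"] / 100_000.0
    return indicator_score * value_factor

def needs_investigation(claim: dict, threshold: float = 6.0) -> bool:
    return referral_score(claim) >= threshold

claim = {"late_police_report": True, "no_witnesses": True, "claim_amount": 50_000}
print(needs_investigation(claim))  # True for this example
```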
The challenge with the above approaches is that they rely very heavily on manual intervention, which leads to the following limitations:
These limitations are difficult to overcome from a traditional statistics perspective. Hence, insurers have started looking at leveraging machine learning capability. The intent is to present a variety of data to the algorithm without judgement about the relevance of the data elements; based on the identified frauds, the machine develops a model, using a variety of algorithmic techniques, that can then be tested on these known frauds.
3.0 Exercise Objectives
The objective of this exercise is to explore various machine learning techniques to improve the accuracy of detection in imbalanced samples. The impact of feature engineering, feature selection and parameter tweaking is examined with the aim of achieving superior predictive performance.
As a procedure, the data is split into three different segments: training, testing and cross-validation. The algorithm is trained on a partial set of data and its parameters tweaked on the testing set; performance is then examined on the cross-validation set. The high-performing models are then tested on various random splits of the data to ensure consistency in results.
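A minimal sketch of this three-way split, assuming a feature matrix X and binary fraud labels y are already prepared (hypothetical variable names); stratification keeps the low fraud incidence similar across segments:

```python
from sklearn.model_selection import train_test_split

# First carve out a 20% cross-validation (hold-out) segment, then split the
# remainder into training and testing sets; stratify on the label so the
# rare fraud class is represented in every segment.
X_rest, X_cv, y_rest, y_cv = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=42
)
# Result: 60% training, 20% testing, 20% cross-validation.
```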
The exercise was conducted on Apollo™ – Wipro's Anomaly Detection Platform, which applies a combination of pre-defined rules and predictive machine learning algorithms to identify outliers in data. It is built on open source technology with a library of pre-built algorithms that enable rapid deployment, and it can be customized and managed. This Big Data platform comprises three layers, as indicated below.
Three layers of Apollo's architecture:
Data Handling
Detection Layer
Outcomes
The exercise described above was performed on four different insurance datasets, whose names cannot be disclosed for reasons of confidentiality.
Data descriptions for the datasets are given below.
4.0 Data Set Description
4.1 Introduction to Datasets
Table 1: Features of various datasets
4.2 Detailed Description of Datasets
Overall Features:
The insurance datasets can be classified into different categories of details, such as policy details, claim details, party details, vehicle details, repair details and risk details. The attributes listed in the datasets include categorical attributes (e.g. Vehicle Style, Gender, Marital Status, License Type and Injury Type), date attributes (e.g. Loss Date, Claim Date and Police Notified Date) and numerical attributes (e.g. Repair Amount, Sum Insured and Market Value).
For better data exploration, the data is divided and explored from the perspectives of both the insured party and the third party. After performing Exploratory Data Analysis (EDA) on all the datasets, some key insights are listed below.
Dataset – 1:
Dataset – 2:
For Insured:
For Third Party:
Dataset – 3:
Dataset – 4:
4.3 Challenges Faced in Detection
Fraud detection in insurance has always been challenging, primarily because of what data scientists call class imbalance: the incidence of fraud is far lower than the total number of claims, and each fraud is unique in its own way. Heuristics can always be applied to improve the quality of prediction, but due to the ever-evolving nature of fraudulent claims, intuitive scorecards and checklist-based approaches have performance constraints.
Another challenge encountered in machine learning is the handling of missing values and categorical values. Missing data arises in almost all serious statistical analyses. A useful strategy for handling missing values is multiple imputation: instead of filling in a single value for each missing value, Rubin's (1987) multiple imputation procedure replaces each missing value with a set of plausible values that represent the uncertainty about the right value to impute.
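A minimal sketch in the spirit of multiple imputation, using scikit-learn's IterativeImputer with posterior sampling; this approximates, but is not identical to, Rubin's procedure, and the paper does not specify which imputation implementation was used:

```python
import numpy as np
# IterativeImputer is still flagged experimental in scikit-learn.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def multiple_imputations(X: np.ndarray, n_imputations: int = 5):
    """Return several plausible completions of X: each draw samples from the
    posterior predictive of the fitted imputation model instead of filling a
    single 'best' value for every missing entry."""
    imputed_sets = []
    for seed in range(n_imputations):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        imputed_sets.append(imputer.fit_transform(X))
    return imputed_sets

# Downstream, a model would be fit on each completed dataset and the
# resulting estimates pooled.
```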
The other challenge is handling categorical attributes. This arises with statistical models that can handle only numerical attributes, so each categorical attribute is transformed into multiple numerically encoded attributes. For example, the gender variable is expanded into two columns, say male (with value 1 for yes and 0 for no) and female. This is required only if the model involves the calculation of distances (Euclidean, Mahalanobis or other measures) and not if the model is tree-based.
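As a small sketch of this encoding with pandas, using hypothetical column names and values:

```python
import pandas as pd

# Hypothetical claims frame with a few of the categorical attributes
# mentioned above; column names and values are illustrative only.
claims = pd.DataFrame({
    "Gender": ["Male", "Female", "Male"],
    "InjuryType": ["Whiplash", "Fracture", "Whiplash"],
    "RepairAmount": [1200.0, 5400.0, 800.0],
})

# One-hot encode the categorical attributes: Gender becomes two 0/1
# columns (Gender_Male, Gender_Female), as described in the text.
encoded = pd.get_dummies(claims, columns=["Gender", "InjuryType"])
print(encoded.columns.tolist())
```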
A specific challenge with Dataset – 4 is that it is not feature rich and suffers from multiple quality issues. The attributes given in the dataset are summarized in nature, so it is not very useful to engineer additional features from them. Predicting on that dataset is challenging, and all the models failed to perform on it. Similar to the CoIL Challenge [4] insurance data set, the incident rate is so low that, from a statistical view of prediction, the given fraudulent training samples are too few to learn from with confidence; fraud detection platforms usually process millions of training samples. The other fraud detection datasets, however, lend credibility to the paper's results.
4.4 Data Errors
A partial set of the data errors found in the datasets is listed below:
5.0 Feature Engineering and Selection
5.1 Feature Engineering
Success in machine learning algorithms is dependent on how the data is represented. Feature engineering is a process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model performance on unseen data. Domain knowledge is critical in identifying which features might be relevant and the exercise calls for close interaction between a loss claims specialist and a data scientist.
The process of feature engineering is iterative, as indicated in the figure below.
Fig 1: Feature engineering process
Importance of Feature Engineering:
Some Engineered Features:
5.2 Feature Selection:
Of all the attributes listed in the dataset, those that are relevant to the domain and boost model performance are picked and used; attributes that degrade model performance are removed. This entire process is called Feature Selection or Feature Elimination.
Feature selection generally acts as a filter, removing features that reduce model performance.
Wrapper methods are a feature selection approach in which different feature combinations are selected and evaluated using a predictive model. The combination of features selected is then used as input to the machine learning model and trained as the final model.
Forward Selection: Beginning with zero features, the model adds one feature at each iteration and checks the model performance. The set of features that results in the highest performance is selected. The evaluation process is repeated until the best set of features, i.e. the one that most improves model performance, is obtained. In the machine learning models discussed here, greedy forward search is used.
Backward Elimination: Beginning with all features, the model removes one feature at each iteration and checks the model performance. As with forward selection, the set of features that results in the highest model performance is selected. This has proven to be the better method when working with trees.
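A minimal sketch of the greedy wrapper search described in the two paragraphs above, using scikit-learn's SequentialFeatureSelector; the estimator, scoring metric and variable names are illustrative assumptions, not the paper's exact setup:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Greedy forward search; direction="backward" gives backward elimination.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select="auto",
    direction="forward",
    scoring="recall",   # illustrative choice, favoring fraud coverage
    cv=5,
)
selector.fit(X_train, y_train)          # hypothetical training data
X_train_selected = selector.transform(X_train)
```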
Dimensionality Reduction (PCA): Principal Component Analysis (PCA) is used to translate the given higher-dimensional data into a lower-dimensional representation. PCA reduces the number of dimensions by selecting the components that explain most of the dataset's variance (in this case, 99% of the variance). A convenient way to see how many components explain the maximum variance is to plot the cumulative explained variance against the number of components.
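A short sketch of retaining the components that explain 99% of the variance, assuming the same hypothetical X_train as above:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize first so no single high-magnitude attribute dominates, then
# keep the smallest number of components explaining 99% of the variance.
X_scaled = StandardScaler().fit_transform(X_train)
pca = PCA(n_components=0.99)
X_reduced = pca.fit_transform(X_scaled)
print(pca.n_components_, pca.explained_variance_ratio_.sum())
```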
5.3 Impact of Feature Selection:
Forward Selection: Adding a feature may increase or decrease the model score. Using forward selection, data scientists can be sure that features that tend to degrade model performance (noisy features) are not considered. It is also useful for selecting the features that lift model performance the most.
Backward Elimination: In general, backward elimination takes more time than forward selection because it starts with all features and successively eliminates the feature that compromises model performance. This technique tends to perform better when the model is tree-based, since more features provide more candidate nodes for the trees to split on, which can improve accuracy.
Dimensionality Reduction (PCA): It is often helpful to use a dimensionality-reduction technique such as PCA prior to performing machine learning because:
6.0 Model Building and Comparison
The model building activity involves construction of machine learning algorithms that can learn from a dataset and make predictions on unseen data. Such algorithms operate by building a model from historical data in order to make predictions or decisions on the new unseen data. Once the models are built, they are tested on various datasets and the results that are considered as performance criterion are calculated and compared.
6.1 Modelling
Fig 2: Model Building and Testing Architecture
The following steps summarize the process of model development:
Separate models are generated for each fraud type; these self-calibrate over time using feedback, so that they adapt to new data and changes in user behavior.
Multiple models are built and tested on the above datasets. Some of them are:
Logistic Regression: Logistic regression measures the relationship between a dependent variable and one or more independent variables by estimating probabilities using a logit function. Within the generalized linear model (GLM) framework, a binomial prediction is performed instead of ordinary regression. The model of logistic regression, however, is based on assumptions that are quite different from those of linear regression. In particular, the predicted values are probabilities, restricted to [0, 1] through the logit link, and the GLM predicts the probability of a particular outcome, which is binary.
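A minimal sketch of such a binomial GLM using statsmodels (an illustrative choice; the paper does not name its implementation), with hypothetical X_train, y_train and X_test variables and an illustrative 0.5 cut-off:

```python
import statsmodels.api as sm

# Fit a GLM with a binomial family (logit link) on a numeric design matrix
# and 0/1 fraud labels.
X_design = sm.add_constant(X_train)
logit_glm = sm.GLM(y_train, X_design, family=sm.families.Binomial()).fit()

# Predicted values are probabilities in [0, 1]; a cut-off turns them into
# fraud / non-fraud decisions.
fraud_probability = logit_glm.predict(sm.add_constant(X_test))
flagged = fraud_probability > 0.5
```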
Fig 3: Modified MVG
Boosting: Boosting is an ensemble learning technique used to convert weak learners into strong ones. Standard boosting does not work well with imbalanced datasets, so two new hybrid implementations were developed.
Fig 4: Functionality of Boosting
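The two hybrid boosting implementations are not detailed here. As a hedged, generic illustration of boosting on an imbalanced sample (not the paper's hybrid variants), the rare fraud class can be re-weighted before fitting a standard boosted ensemble:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils.class_weight import compute_sample_weight

# Give the rare fraud class extra weight so the boosted trees do not
# simply learn to predict the majority class. X_train, y_train and
# X_test are hypothetical prepared datasets.
weights = compute_sample_weight(class_weight="balanced", y=y_train)
booster = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
booster.fit(X_train, y_train, sample_weight=weights)
fraud_scores = booster.predict_proba(X_test)[:, 1]
```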
Bagging using Adjusted Random Forest: Single decision trees are likely to suffer from high variance or high bias (depending on how they are tuned). Here, the random forest classifier is tweaked to a great extent in order to work well with imbalanced datasets; unlike a normal random forest, this Adjusted (or Balanced) Random Forest is capable of handling class imbalance.
The Adjusted Random Forest algorithm is shown below:
The advantage of the Adjusted Random Forest is that it is less prone to overfitting, as it performs tenfold cross-validation at every iteration. The functionality of ARF is represented in the diagram below.
Fig 5: Functionality of Bagging using Random Forest
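The sketch below is an illustrative stand-in for the ARF idea, not the exact algorithm above: scikit-learn's class_weight="balanced_subsample" re-weights classes within every bootstrap sample so each tree gives the rare fraud class comparable influence. The tenfold cross-validation the paper describes at each iteration is not shown.

```python
from sklearn.ensemble import RandomForestClassifier

# Class-balanced bagging of decision trees; variable names are hypothetical.
arf_like = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced_subsample",  # rebalance within each bootstrap sample
    oob_score=True,                     # out-of-bag estimate of generalization
    random_state=42,
)
arf_like.fit(X_train, y_train)
print(arf_like.oob_score_)
```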
6.2 Model Performance Criteria
Model performance can be evaluated using different measures. Those used in this paper are:
6.2.1 Using Contingency Matrix or Error Matrix
The error matrix is a specific table layout that allows a tabular view of the performance of an algorithm.
The two important measures Recall and Precision can be derived using the error matrix.
Recall: Fraction of positive instances that are retrieved, i.e. the coverage of the positives.
Precision: Fraction of retrieved instances that are positives.
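In terms of the error matrix counts, with TP, FP and FN denoting true positives, false positives and false negatives respectively, the two measures are:

$$\text{Recall} = \frac{TP}{TP + FN} \qquad\qquad \text{Precision} = \frac{TP}{TP + FP}$$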
Both should be considered together to get a unified view of the model and to balance the inherent trade-off between them. To gain control over these measures, a cost function has been used, which is discussed in section 6.2.3.
6.2.2 Using ROC Curves
Another way of representing model performance is through Receiver Operating Characteristic (ROC) curves. These curves are based on the score output by the model: the true positive rate (TPR) is plotted against the false positive rate (FPR) as the classification cut-off is varied. The operating point changes as the cut-off is moved to suit the output requirement; for example, if a higher TPR comes only at a high cost in FPR, the cut-off may be set at a higher value to maximize specificity (the true negative rate), and vice versa.
The summary measure used here is the Area Under the Curve (AUC), i.e. the area under the plotted ROC curve.
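As a minimal sketch, assuming y_test holds the true labels of a held-out segment and fraud_scores the model's predicted probabilities (both names hypothetical):

```python
from sklearn.metrics import roc_curve, roc_auc_score

# ROC: true positive rate vs. false positive rate as the cut-off varies.
fpr, tpr, thresholds = roc_curve(y_test, fraud_scores)

# AUC: the area under the plotted ROC curve, a single summary number.
auc = roc_auc_score(y_test, fraud_scores)
print(f"AUC = {auc:.3f}")
```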
6.2.3 Additional Measure
In modelling, a cost function is used to gain control over recall and precision, since increasing precision results in a drop in recall and vice versa. This measure is used in the feature selection process: the beta (ß) value is set to 5, and the iterative feature selection process (either forward or backward) calculates the performance measures at each step, using the cost function as the comparative measure (i.e. whether including a feature results in an improvement or not). Increasing the beta value places more weight on recall.
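The cost function referred to here is the standard F-beta measure, consistent with the F5 scores reported later in Section 7:

$$F_\beta = (1 + \beta^2)\cdot\frac{\text{Precision}\cdot\text{Recall}}{\beta^2\cdot\text{Precision} + \text{Recall}}$$

With ß = 5, recall is treated as five times as important as precision, which suits fraud detection where coverage of true frauds is the priority.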
6.3 Model Comparison
6.3.1 Using Contingency Matrix
The performance of the models is given below:
Table 2: Performance of models on Dataset 2
Key note: All the models performed better on this dataset (with nearly ninety times improvement over the incident rate). Relatively, MMVG has better recall and the other models have better precision.
Key note: The ensemble models performed better, with high values of recall and precision. Comparatively, ARF has high precision and high recall (i.e. 1,666 times better than the incident rate), while MMVG has better recall but poor precision.
Key note: MRU (with four hundred and sixty times improvement over a random guess) and MMVG (with one hundred and thirty-three times improvement over the incident rate and reasonably good recall) are the better performers, while logistic regression is a poor performer.
For Dataset - 4
Table 3: Performance of models on Dataset 4
Key note: Overall issues with the dataset make prediction challenging. All the models perform very similarly, and none is able to achieve higher precision while keeping coverage high.
6.3.1.1 Receiver Operating Characteristic (ROC) Curves
Adjusted Random Forest out-performed the other models, and Modified MVG was the poorest performer.
All the models performed equally.
Boosting as well as Adjusted Random Forest performed well, and Logistic Regression was the poorest performer.
All the models performed reasonably well except Modified MVG.
Table 4: Area under Curve (AUC) for the above ROC
6.3.2 Model Performances for Different Values in Cost Function Fß
Fig 6: Model Performances for Different Beta Values
To demonstrate the effective usage of the cost function measure, results are shown for different ß values (1, 5 and 10), modelled on Dataset 1. As evident from the results, there is a correlation between the ß value and the recall percentage. Typically this also results in static or decreased precision (given the implied trade-off). This manifests across models, except for AMO at a ß value of 10.
7.0 Model Verdict
In this analysis, many factors were identified that allow for an accurate distinction between fraudulent and honest claimants, i.e. that predict the existence of fraud in the given claims. The machine learning models performed at varying levels for the different input datasets. By considering the average F5 score, model rankings are obtained, i.e. a higher average F5 score is indicative of a better-performing model.
Table 5: Performance of Various Models
The analysis indicates that the Modified Random Under-Sampling and Adjusted Random Forest algorithms perform best. However, it cannot be assumed that this order of predictive quality would be replicated for other datasets. As observed in the dataset samples, most models perform well on feature-rich datasets (e.g. Dataset 1); likewise, in cases with significant quality issues and limited feature richness, model performance degrades (e.g. Dataset 4).
Key Takeaways:
Predictive quality depends more on data than on algorithm:
Much research indicates that the quality and quantity of available data has a greater impact on predictive accuracy than the quality of the algorithm. In this analysis, given the data quality issues, most algorithms performed poorly on Dataset 4; on the better datasets, the range of performance was correspondingly better.
Poor performance from logistic regression and relatively poor performance by Modified MVG:
Logistic regression is more of a statistical model than a machine learning model, and it fails to handle highly skewed datasets. This is a challenge in predicting insurance fraud, as the datasets will typically be highly skewed given that incidents are rare. MMVG is built on the assumption that the input data follows a Gaussian distribution, which might not be the case. It also cannot handle categorical variables directly; these are converted to binary equivalents, which creates dependent variables. Research also indicates a bias induced by categorical variables with many potential values (which lead to a large number of binary variables).
Outperformance by Ensemble Classifiers:
Boosting and bagging are both ensemble techniques: instead of learning a single classifier, several are trained and their predictions are aggregated. Research indicates that an aggregation of weak classifiers can out-perform the predictions from a single strong classifier.
Loss of Intuition with Ensemble Techniques:
A key challenge is the loss of interpretability: the final combined classifier is not a single tree but a weighted collection of multiple trees, so the visual clarity of a single classification tree is lost. However, this issue is common to other classifiers such as SVMs (support vector machines) and NNs (neural networks), where model complexity inhibits intuition. A relatively minor issue is that, on large datasets, there is significant computational cost in building the classifier, given the iterative approach to feature selection and parameter tweaking. However, since model development is not a frequent activity, this is not a major concern.
8.0 Conclusion
The machine learning models discussed and applied to the datasets were able to identify most of the fraudulent cases with a low false positive rate, i.e. with reasonable precision. This enables loss control units to focus on new fraud scenarios and to ensure that the models adapt to identify them. Certain datasets had severe challenges around data quality, resulting in relatively poor levels of prediction.
Given the inherent characteristics of the various datasets, it would be impractical to define a priori the optimal algorithmic techniques or recommended feature engineering for best performance. However, based on the models' performance in back-testing and their ability to identify new frauds, it is reasonable to suggest that this set of models offers a sound suite to apply in the area of insurance claims fraud. The models would then be tailored to the specific business context and user priorities.
R Guha
Head - Corporate Business Development
He drives strategic initiatives at Wipro with focus around building productized capability in Big Data and Machine Learning. He leads the Apollo product development and its deployment globally. He can be reached at: guha.r@wipro.com
Shreya Manjunath
Product Manager, Apollo Platforms Group
She leads the product management for Apollo and liaises with customer stakeholders and across Data Scientists, Business domain specialists and Technology to ensure smooth deployment and stable operations. She can be reached at: shreya.manjunath@wipro.com
Kartheek Palepu
Associate Data Scientist, Apollo Platforms Group
He researches in Data Science space and its applications in solving multiple business challenges. His areas of interest include emerging machine learning techniques and reading data science articles and blogs. He can be reached at: kartheek.palepu@wipro.com