October 7, 2019 by Greg Meckbach
Artificial intelligence can make it nearly impossible for fraudsters to make a false claim appear real, the founder of an A.I. vendor suggests.
If an auto insurance claim is fraudulent, A.I. software could pick up on that as long as it has enough data about other fraudulent – and legitimate – claims, said Gary Saarenvirta, founder and CEO of Toronto-based Daisy Intelligence.
Daisy’s software is designed to detect “outliers,” or things that are out of the ordinary.
Insurers tend not to share their full definition of a “normal” claim. As a result, it is virtually impossible for someone to manufacture a claim that also sits in the middle of what the insurer considers normal in every way, Saarenvirta told Canadian Underwriter in an interview. That is because claims have many attributes. One example is the ratio of the bodily injury cost to the vehicle damage portion of the claim, at a given speed.
An artificial intelligence system could have data on different accidents involving the same type of vehicle, same time of year, and same geographic area. “There should be a common set of description as to what the damage was, what the claim elements are, what the facts of the case are,” said Saarenvirta.
Say, for example, that four people rent a car at the airport and they load it up with the maximum insurance coverage. They drive it five miles from the airport and they have an accident. It’s also the first day of policy coverage.
That does not mean it is fraud for sure, but it could “ring a lot of alarm bells” for the insurer, Saarenvirta suggested.
When using A.I. in detecting abnormal claims, an insurer could look at literally thousands of variables. “You can’t manufacture a false record or a false claim that looks normal in every possible way compared to many different like groups,” said Saarenvirta. “That is the fundamental base of our theory of risk.”
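Peer-group outlier detection of the kind Saarenvirta describes can be sketched in a few lines. This is an illustrative simplification, not Daisy's actual method; the attribute names, peer values, and z-score threshold below are all assumptions for illustration:

```python
# Illustrative sketch: flag attributes of a claim that deviate sharply
# from a peer group of comparable claims (same vehicle type, season, region).
from statistics import mean, stdev

def outlier_attributes(claim, peer_claims, z_threshold=3.0):
    """Return the attributes of `claim` that lie more than
    `z_threshold` standard deviations from the peer-group mean."""
    flagged = []
    for attr in claim:
        peer_values = [c[attr] for c in peer_claims]
        mu, sigma = mean(peer_values), stdev(peer_values)
        if sigma > 0 and abs(claim[attr] - mu) / sigma > z_threshold:
            flagged.append(attr)
    return flagged

# Hypothetical peer group: similar accidents for the same type of vehicle.
peers = [
    {"bodily_injury": 12_000, "vehicle_damage": 8_000},
    {"bodily_injury": 15_000, "vehicle_damage": 9_500},
    {"bodily_injury": 11_000, "vehicle_damage": 7_800},
    {"bodily_injury": 13_500, "vehicle_damage": 8_700},
]
suspicious = {"bodily_injury": 90_000, "vehicle_damage": 8_200}
print(outlier_attributes(suspicious, peers))  # ['bodily_injury']
```

A real system would compare each claim against many different "like groups" at once, as Saarenvirta describes, rather than a single peer set.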
A.I. should go beyond conventional predictive analytics to eliminate ‘false positives’ – claims flagged as suspicious that are actually legitimate, said Saarenvirta.
Say a predictive analytics model is 90% accurate and it looks at one million claims. If 1% of claims are fraudulent, then 10,000 claims out of that million will be fraudulent, said Saarenvirta. But because the model is only 90% accurate, 1,000 of those fraudulent claims will go undetected. Moreover, 10% of the 990,000 legitimate claims, or 99,000, will be falsely flagged as fraudulent.
“That means your predictive analytics will generate a wild goose chase for your investigators,” said Saarenvirta. “So you need to do something way more sophisticated than statistical prediction and deep learning to eliminate all of those false positives.”
Say a claim is described by 1,000 attributes, and a fabricated value for any one of those attributes has only a 1% chance of matching what would be true in a legitimate claim. If each of the 1,000 fabricated attributes independently has only a 1% chance of looking legitimate, then the chance that the claim as a whole passes as legitimate is effectively zero.
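The force of this argument is easy to check: multiplying a 1% probability across 1,000 attributes (assuming, for illustration, that the attributes are independent) gives a number so small it underflows ordinary double-precision floating point:

```python
# Joint probability that all 1,000 fabricated attributes look legitimate,
# assuming each independently has a 1% chance of matching a real claim.
p_single = 0.01
n_attributes = 1000
p_all = p_single ** n_attributes  # 10^-2000, far below double precision
print(p_all)  # 0.0
```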
One variable could be the contents. Say there are two different claims in which the client says the car was broken into and contents were stolen. In one case, the client has a $500 car that was broken into, with a set of very expensive, customized golf clubs inside; this could be something out of the ordinary. By contrast, the other claim, also involving theft of expensive golf clubs from a car, might come from a Lexus owner who lives on the Bridle Path.
That description alone does not prove fraud. It could be that the person driving the cheap car spent all his money on golf clubs and bought the car on the cheap, while the Lexus owner who says his car was broken into actually doesn’t play golf, said Saarenvirta.
“That’s why you have to look at many, many things so you don’t make a false judgement too easily.”