December 4, 2019 by Greg Meckbach
Concerned about whether “black box” underwriting methods based on artificial intelligence are good for your clients? So is Canada’s federal insurance regulator.
“We get a lot of questions around AI and what are we to do about it,” said Neville Henderson, assistant superintendent of the Office of the Superintendent of Financial Institutions.
“When you get into artificial intelligence there are some real black boxes out there and it’s hard to figure out what is going on. It’s easy to say ‘we will look at the model and see what is happening’ but they are often deceptive,” Henderson said Nov. 28 during KPMG Canada’s 28th annual Insurance Conference. He was commenting on the risk of systemic discrimination in financial services resulting from decisions based on computer algorithms.
“We really are a trust and verify regulator. We will believe you. It’s a great system, I am sure, but you’ve got to show us,” Henderson said during a “fireside chat” with Chris Cornell, KPMG Canada’s national sector leader for insurance. “But some of those proprietary systems won’t let us look at them, and that’s fine, but someone’s going to look at them and we have to have an independent review to satisfy us that the system does what it says it does and it’s accurate and companies don’t do something that could be systemically discriminatory.”
Cornell had asked Henderson about open banking and whether OSFI is watching the issue of insurers sharing data with one another.
The idea behind machine learning and AI is that a computer program is fed large volumes of data and learns about relationships among variables, John Hull, a professor at the University of Toronto’s Rotman School of Management, said earlier this year at the Property and Casualty Insurers’ Risk Management Conference, produced by the Global Risk Institute and Insurance Bureau of Canada.
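Hull’s description can be illustrated with a toy example (this is an illustrative sketch only, not any insurer’s actual model): given observed data pairs, a program “learns” the relationship between the variables by fitting parameters that minimize prediction error. Here, ordinary least squares in pure Python recovers the slope and intercept of a linear relation from the data alone.

```python
# Toy illustration of "learning from data": ordinary least squares
# recovers the parameters of a linear relationship between two
# variables purely from observed (x, y) pairs.

def fit_line(xs, ys):
    """Learn the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data generated from y = 2x + 1; the program recovers those values
# without ever being told them.
xs = [1, 2, 3, 4, 5]
ys = [2 * x + 1 for x in xs]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # → 2.0 1.0
```

The “black box” concern arises when this idea is scaled up: with thousands of variables and opaque model architectures, the learned relationships can no longer be read off as a simple slope and intercept.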
At KPMG Canada’s 2019 Insurance Conference, Henderson said he has attended a number of meetings on insurers’ use of data, including one last year in Washington, D.C. At that time, the National Association of Insurance Commissioners brought in some American consumer groups.
“They tried to get a sense of how companies cleansed the data so they went out and they managed to obtain some blocks. They went back to the original people and checked out the data. They found at least 40% had errors,” said Henderson. “Some of that was just spelling errors and those don’t matter very much but a number of the others were significant errors that would have resulted in an increase in cost for either life insurance or a property and casualty policy or a mortgage insurance policy, so that is a big concern.”
Some commercial cyber clients have been rejected outright for insurance when the carrier used AI to decide whether to accept the risk, Michael Loeters, senior vice president of commercial insurance and risk management at Prolink, told Canadian Underwriter in a separate interview.
“I think that insurers will start to realize that the black boxes are actually kicking out a lot of business that maybe does have a few issues but an underwriter in the olden days used to just underwrite to those issues” such as adjusting the terms, deductibles or limits, Loeters said at the time.
Some insurers are using AI to price risks for very small groups of insureds, Henderson said Nov. 28 at the KPMG Canada Insurance Conference.
“The more granular the company gets, the less credible the data so there are big question marks about how accurate the pricing is for those products,” said Henderson.
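Henderson’s point echoes the classical actuarial notion of credibility: the weight a group’s own claims experience deserves grows with the amount of data behind it. A minimal sketch, using the standard limited-fluctuation (“square-root”) credibility factor from actuarial textbooks (the 1,082-claim full-credibility standard is a common textbook value, not a figure from the article):

```python
import math

# Limited-fluctuation credibility: the weight Z given to a group's own
# claims experience is sqrt(n / n_full), capped at 1. Slicing insureds
# into ever-smaller groups shrinks n, so Z falls and the pricing rests
# on thinner evidence -- Henderson's "less credible data" concern.

FULL_CREDIBILITY_CLAIMS = 1082  # common textbook standard (P=90%, k=5%)

def credibility(n_claims: int) -> float:
    """Weight given to a group's own experience, between 0 and 1."""
    return min(1.0, math.sqrt(n_claims / FULL_CREDIBILITY_CLAIMS))

for n in (1082, 100, 10):
    print(n, round(credibility(n), 2))
```

A group with the full 1,082 claims gets weight 1.0; one with only 10 claims gets roughly 0.1, meaning about 90% of its price would have to lean on broader, less granular data.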