September 18, 2019 by Jason Contant
Insurers should consider self-regulating machine learning practices before regulations are imposed on them, a speaker suggested last week at the Connected Insurance Canada conference in Toronto.
Moderator Stephen Applebaum, principal at Insurance Solutions Group, asked panelists if regulators understand machine learning and their perspectives on using machine learning in insurance, particularly since it’s difficult to explain the concept.
Koosha Golmohammadi, director of advanced analytics at Manulife, acknowledged that there are issues about the explanation of machine learning models. Machine learning, a subset of artificial intelligence, allows systems to automatically learn and improve without being explicitly programmed.
“This is new in Canada,” Golmohammadi said. “Regulators are trying to look at this. I think insurers should be at the forefront of this; try to self-regulate before the regulator comes after them.”
Another panelist, Hans Riedl, senior vice president of claims at Economical Insurance, suggested that in the P&C space, regulation of machine learning has been restrictive, much as it has been for telematics. In some Canadian jurisdictions, including Ontario, telematics data can only be used to offer discounts; premiums cannot be surcharged.
In the United States, some telematics usage-based insurance (UBI) programs are finally getting regulatory approval to surcharge premiums for poor driving performance after 10 years of “no way,” Applebaum noted.
“It’s always some market somewhere does it and everybody goes and studies that market and if it’s working, it’s great,” Riedl said. With the recent change in Ontario to the Financial Services Regulatory Authority, the provincial regulator “appears to be more open to thinking about these things,” he said of machine learning. “Because in the end, I think the regulators want to find ways to have affordable insurance available that gives good coverage to customers.”
Riedl said he believes the onus is on insurers to explain why machine learning in insurance might be better for the consumer, to alleviate some of the regulator’s concerns about where it might go. “Issues like, well, if we are using machine learning, how do you know that there’s not racial profiling going on?” he said. “That’s a big issue for the government. So, you have to bridge that gap, explain how it works.”
The third panelist, Steve Brandt, senior vice president of sales with administration software company Vitech, agreed that the “self-regulating conversation is the right one to be having. Certainly, it’s the one I’d want to be having if I were an insurance company, but at some point along the line, regulators are going to figure it out because people will abuse the situation and there’ll be those moments of truth that will define these situations for us.”
Brandt likened it to social media and privacy concerns, where regulators around the globe are struggling with what is acceptable when so much data is being collected. “You’re starting to see that come to roost and I think some of the machine learning and these tools that we are using now in the insurance industry, we run the risk of getting into those types of conversations with regulators because it’s so easy to abuse those situations for profit.”