Canadian Underwriter

AI’s new risk: How brokers and insurers can protect clients

February 28, 2022   by Alyssa DiSabatino



Data automation has created an emerging risk: AI can develop unintended biases from its own data, yielding unfair results that can harm a client’s business.

Although it’s not the only risk associated with AI, the potential for a machine to learn bias from its data is a real concern for insurers. AI bias can come from a few sources, says Chantal Sathi, founder and president of Cornerstone AI and its debiasing software, BiasFinderAI.

“Bias can come when you’re training the AI model to process information,” she says. “The algorithms are detecting patterns and statistics to give you results.” But if the statistics are skewed one way or another, the AI will pick up on it and continue to learn from and present skewed data.  
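The mechanism Sathi describes can be seen in a minimal sketch (with made-up numbers): a naive model that simply learns outcome rates from skewed historical data will reproduce that skew in its predictions rather than correct for it.

```python
# Hypothetical example: a toy "model" trained on skewed historical
# hiring data. The group labels and counts are invented for illustration.
from collections import defaultdict

# Skewed history: group A was hired 80% of the time, group B only 20%.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": learn the hire rate per group purely from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

hire_rate = {g: h / n for g, (h, n) in counts.items()}
print(hire_rate)  # {'A': 0.8, 'B': 0.2} -- the skew is learned, not corrected
```

Nothing in the training step flags the imbalance; the statistics are simply absorbed, which is why Sathi stresses auditing the data going into a model, not just the model itself.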

For example, one study found Google showed ads promising help landing high-income jobs to women less often than to men.

“Bias can also come in the way that these algorithms are coded,” Sathi explains. “[It] can also happen at the end, when you’re looking at all of the outputs — meaning the results that these machines compute. It also depends on the way that data is being interpreted and used…A [human] data analyst may interpret it one way when actually it’s being read [by a machine] in a completely different manner.” 

Sathi, who spoke at a January RIMS RiskTech webinar, says companies are at risk of embedding bias in AI technology.   

When a company or a business doesn’t engage in AI technology practices that reduce these biases, “you start to infringe on fairness, accuracy, transparency, explainability, and even cybersecurity, data trust and privacy,” she adds. 

One broker suggests ways for the industry to approach finding coverage for a client’s AI technologies, while addressing the potential risk that bias poses.  

“To be honest, it doesn’t actually matter if it was the AI or any other part of the codebase that led to the gender bias,” says Nick Kidd, director of business at Mitchell & Whale Insurance, when asked about the potential for AI to create bias through job recruiting software. “The fact is, there could be a liability exposure which needs to be addressed.”

Kidd says this is a well-known exposure that insurers address in the recruitment industry. “If an underwriter were looking at this risk…maybe they would have foreseen that, generally speaking, there’s an exposure around any kind of bias in recruitment decisions. So probably, that’s considered and priced in somewhere.” 

But the risk of bias doesn’t just come from AI, he explains.  


“Maybe the software would have even more gender bias if it weren’t for the AI component?” Kidd speculates. “The fact is, this is an exposure of that software, regardless of what components it’s built with.” 

To overcome these challenges, insurers and brokers are urged to work with their clients to employ AI best practices, ensure fairness and dispel bias. Sathi recommends insurers create “variable checklists” when finding coverage for AI software producers.  

“What are the codes of the algorithms, how are we creating these results?…What is the training data that’s gone into these models?…Who is auditing and checking each part of the development lifecycle? Those are strategic things that insurance companies have to start to look for,” she says.  

Kidd says a good broker is needed to discern potential risks that could arise when insuring AI technology. Perhaps ironically, he recommends that clients avoid finding coverage online based on AI recommendations. “The value of having a broker with good judgment in the process is going to be really key,” he says. 

“Potentially, the AI in those online engines is not going to spot some of the exposures that need to be positioned to insure. So, we would definitely use this as another good layer of reasoning [for] why I think experience and know-how is going to be key in the mix for protecting clients properly.”  

