In an ever-evolving digital world, artificial intelligence — a technology that mimics human cognition by learning from experience, identifying patterns and deriving insights — is becoming widely adopted by companies.
In fact, AI adoption skyrocketed in the 18 months leading up to September 2021, Harvard Business Review reports. And in one PwC survey, a quarter of respondents reported widespread adoption of AI in 2021, up from 18% the previous year.
So, that raises the question: how are insurers covering this evolving technology?
According to one expert, AI doesn’t usually qualify for standalone coverage because it is not exactly “a thing in and of itself to insure.”
“AI coverage is usually encompassed in another form of a client’s coverage,” says Nick Kidd, director of business insurance at Mitchell & Whale (which is rebranding as Mitch in late March). “It’s very rare someone is just insuring AI. They’re insuring their company and all its exposures, and the reality is AI is usually a component of something bigger.”
If a loss were to occur, it could be difficult to attribute it specifically to the AI.
In fact, Kidd says it would be “virtually impossible” to solely insure the AI part of a product because it often works in tandem with other parts of the product.
“AI doesn’t exist in a vacuum. It is part of a product or service or somewhere in the chain of developing that product or service – and we’re looking to insure that product or service rather than just the AI,” Kidd says.
So, if AI products don’t qualify for standalone coverage, where do they fit in insurance policies?
Ruby Rai, cyber practice leader at Marsh Canada, says AI coverage is a technology risk, not just a cyber risk. “Artificial intelligence is just like any technology,” she says.
However, AI could qualify for different types of coverage depending on how it’s used. “Liability keeps shifting right across the chain as you utilize artificial intelligence,” Rai says.
Rai gives the example of telehealth tools or medical chatbots, which patients can use on digital devices to access health care services and manage their health.
“[Say the bot is] responding to an inquiry and [it] gives the wrong advice. Is that a failure of technology? Or is it medical malpractice?” she asks.
“Sometimes it can be technology errors and omissions … if technology was hacked or maliciously impacted, the resultant impact on individuals [or] on data is where cyber or extortion [would come in]. But then if someone’s hurt, takes the wrong dosage, or wrong medicine … that’s where you have physical or bodily injury, so general liability will come in,” Rai speculates.
Kidd says Mitchell & Whale works through a series of questions to find the right coverage for a client, including:
Who are the intended users?
What are they using it for?
What are the consequences of failure?
What, if any, critical functions are exposed that could lead to bodily injury, property damage or financial loss?
What does the user agreement look like and what limitations are there on liability?
What are the qualifications and/or track record of the company?
Do they outsource work and to whom, and what protections do they get?
“When insuring a business, our focus is to understand its full operations and the various liabilities arising from it,” he says.