
Why there needs to be a refresh of policy triggers for this E&O coverage


November 29, 2019   by Jason Contant



There needs to be a complete refresh and rethink of policy triggers for healthcare liability insurance to make sure the coverage is fit for the digital age, a specialist insurer said.

Traditional medical malpractice insurance, a type of errors and omissions (E&O) coverage that protects physicians and other healthcare professionals against claims alleging their negligence caused injury, has a medical or healthcare incident as its policy trigger. But with the increasing adoption of technology-enabled solutions and artificial intelligence (AI) within healthcare, the lines of liability are starting to blur.

“Globally, AI is raising profound questions around medical responsibility. Normally, when something goes wrong, we can easily trace the blame,” Tim Boyce, healthcare practice leader at CFC Underwriting, told Canadian Underwriter Wednesday. “For example, if you got a misdiagnosis from a physician, you could easily trace it back to them. If it’s a faulty medical device and it gives an incorrect reading, harming a patient, you can easily go back to the manufacturer. But we don’t actually know in truth what this would mean for AI at the moment.”

And with traditional policies, if something went wrong with a piece of technology or AI, the policy wouldn’t necessarily provide affirmative coverage, Boyce said.

There is also an absence of case law. “We think there is going to be, and has been already, a huge amount of confusion and ambiguity as to where liability sits when there is AI within healthcare systems. What may well have been correct for a medical malpractice policy for the last 100 years might not be correct for the next two decades because of the adoption of technology-enabled solutions within healthcare.”

Looking forward, Boyce predicts a broader coverage trigger that includes technology activities. This would likely be combined with a regulatory framework and training to set a high-level benchmark, and to ensure the quality of the data that goes into the AI is assessed before widespread adoption.

From a Canadian perspective, the Canadian Medical Protective Association has said the buck ultimately should stop with the physician and not with the machine, Boyce reported, “which I would question in some regards.

“How that would work is, if you had a doctor that sits within their practice and they are using AI-powered technology to analyze CT scans, then ultimately the buck would stop with them if that piece of technology had failed,” he said. “In theory, what should happen is that when the physician follows the standard of care, even if they are utilizing AI, they generally won’t be held liable, because they are operating in the same course of action they would have been whether or not the AI was present.”

Boyce recommends that healthcare providers ask their carriers how they would respond if something went wrong with a piece of technology or AI. And brokers should be posing questions to their medical malpractice or E&O carriers about how they would respond in such a case.


