December 2, 2019 by Jason Contant
We’ve all heard of ‘silent cyber’: potential cyber exposures contained within traditional property and liability policies that may not explicitly include or exclude cyber risks. But what about a ‘silent assassin’ in a policy?
“Technology-related areas are a silent assassin for medical providers and their insurance policies,” Tim Boyce, healthcare practice leader with London, UK-based CFC Underwriting, told Canadian Underwriter in an interview last week.
Boyce said with traditional coverage for medical malpractice, there is usually a specific medical or healthcare incident trigger (by a person or faulty device, for example). But with the increasing adoption of technology-enabled solutions and artificial intelligence (AI) within healthcare, sometimes it’s hard to trace liability to a specific person or device.
Boyce was responding to a question about what happens if there is a silent cyber-type occurrence in a medical malpractice claim, meaning technology wasn’t specifically excluded as a trigger.
“I would say it’s a silent assassin. We’ve got some real-world examples,” he said. “An insurer could quite easily deny a claim and say the medical provider hasn’t done anything wrong here and … they have followed the same standard of care they would have had the piece of technology been there to assist them. They could easily deny a claim, quite easily, because you could say the policy doesn’t trigger and that’s why I think there needs to be a uniform update.”
Boyce was referring to what he calls a complete “refresh and rethink” of policy triggers for healthcare liability insurance such as medical malpractice – a type of errors and omissions insurance that protects physicians and other healthcare professionals from allegations that their negligence caused injury.
“We think there is going to be, and has been already, a huge amount of confusion and ambiguity as to where liability sits when there is AI within healthcare systems,” he said. “What may well have been correct for a medical malpractice policy for the last 100 years might not be correct for the next two decades because of the adoption of technology-enabled solutions within healthcare.”
Even the term ‘medical malpractice’ is outdated, Boyce suggested. “I think we need an update on that. Its definition in the truest form is touching and seeing a patient, but now you’ve got assisted wearable devices [healthcare professionals] could potentially be using, opening up to product liability claims. You’ve got the technology they are using; you’ve got electronic medical records.
“I think the better term is bodily injury,” Boyce said, because that refers to anything that could potentially affect a patient, whether it’s something that goes wrong with the provision of healthcare, the technology, a cyber event or system failure.
Like other sectors such as retail and banking, healthcare is undergoing what Boyce calls a “technology revolution.” But the difference is that when a piece of healthcare technology fails, it’s not just a loss of money or product; somebody could potentially die.
Boyce also referred to a recent Lloyd’s mandate that all policies clearly state whether they will provide affirmative coverage for cyber risks, a move intended to bring clarity to silent cyber.
“I think healthcare liability markets will have to wake up to that as well,” he said. This could include updating policy form triggers to cover something that goes wrong with technology, or affirmatively covering things like cyber events. “I think this is one thing that needs a widespread adoption in the market globally.”