Data analysis may start to focus more on modelling the process than the data itself, speakers suggested last week at the Canadian Insurance Financial Forum (CIFF).
Although modelling of financial catastrophes began decades ago, there is now the ability to “model human behaviour, to model the man-made cat or to model terrorism, to model even casualty cats,” Kevin Huang, founder and CEO of Toronto-based Huang & Associates Analytics, said during a panel discussion called Data Analytics, Data Science, Big Data.
“I believe this is a trend that will continue,” Huang said at the conference, held at the Metro Toronto Convention Centre. “My belief in the future is more around modelling the process than data. We can really find a way to model the entire process.”
Huang noted that if “perfect data” is available, it probably makes little difference whether one models the data or the process. “But if you have a very complicated process and you have lack of data, you have scarcity of data, then modelling the process is the way to go so you can complement your lack of data with your better understanding of the process,” he said. “That’s my personal opinion.”
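Huang’s point can be illustrated with a minimal sketch (not from the panel, and with purely illustrative parameters): when only a handful of loss years have been observed, fitting a distribution directly to the data is unreliable, but a simple frequency–severity process model built from assumptions about how losses arise can still generate a full loss distribution.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical process model: annual catastrophe losses driven by
# event frequency (Poisson) and event severity (lognormal).
# All parameter values below are illustrative assumptions, not industry figures.
FREQ_MEAN = 2.0                # assumed expected number of events per year
SEV_MU, SEV_SIGMA = 1.0, 0.8   # assumed lognormal severity parameters ($M)

def sample_poisson(lam: float) -> int:
    """Knuth's algorithm for sampling a Poisson-distributed event count."""
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1

def simulate_annual_loss() -> float:
    """One simulated year: total loss from a random number of events."""
    n_events = sample_poisson(FREQ_MEAN)
    return sum(random.lognormvariate(SEV_MU, SEV_SIGMA) for _ in range(n_events))

# With scarce historical data (say, five observed years), empirical tail
# estimates are noisy; the process model can generate as many simulated
# years as needed to estimate quantities like tail quantiles.
simulated = sorted(simulate_annual_loss() for _ in range(10_000))
mean_loss = statistics.mean(simulated)
p99 = simulated[int(0.99 * len(simulated))]  # approximate 99th percentile
print(f"mean annual loss ~ {mean_loss:.2f}, 99th percentile ~ {p99:.2f}")
```

The simulated years stand in for the data the modeller lacks; the quality of the answer then depends on how well the assumed frequency and severity structure reflects the real process, which is exactly the trade-off Huang describes.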
Wei Pan, manager of advanced analytics with TD Insurance, agreed that “it’s more focused now on the process. The modeller not only has to look at the raw data, they have to look at the whole process. I think the modelling process is really, really important,” he argued.
For casualty cats, modelling the process also has to be the focus, “simply because the casualty events that we have seen in the past are extremely limited, much more limited than the catastrophic events,” added Jeff Turner, senior vice president and managing director, Toronto, with Beach & Associates, which creates alternative reinsurance solutions. “Your casualty losses are changing all the time. Modelling the process or at least thinking about the potential for new systemic issues that arise is important.”
Another topic of discussion at the session, moderated by Walter Fransen, senior vice president and chief actuary with Liberty International Underwriters, was the issue of too much versus too little data. Jonathan Frost, senior vice president of GC Analytics with Guy Carpenter, told conference attendees that in the reinsurance space, “the quantity of data is always a challenge and I think that’s not really necessarily going to change. Reinsurance is always kind of in this – existing without the quantity of data it wants; reinsurance is kind of a unique space in terms of trying to acquire more data.”
Turner agreed that there is a “scarcity of data” and in certain cases, there may not be enough data to come up with credible results. “There’s a ton of information that’s out there that isn’t necessarily being captured by primary insurers that would be helpful for the reinsurers,” he added.
The opposite is true in the actuarial space, Huang said, where the biggest challenge “nowadays is actually the explosion of data rather than just new methodologies. The amount of data available to us actuaries has been unprecedented.”
He even proposed a “centralized place” where industry partners could team up with a provider “who can digitize every single piece of information on the planet.” Global users could then search for and extract the data, providing the source a certain amount of money per search, for example. “It would be much more efficient, better quality and faster,” he suggested.
Although Huang said he would prefer both quality and quantity of data, “if I have to choose between the two, I would probably prefer quality over quantity. If you have lots of garbage, you are going to produce lots of garbage.”