Canadian Underwriter

Within the Margins

July 1, 2016   by Greg Meckbach, Associate Editor


Buildings that are not constructed to code, man-made events such as terrorist attacks, the infrequency of market-shaking earthquakes and the lack of detailed hurricane data before the mid-1800s are among the factors contributing to uncertainty in the models that property and casualty reinsurers currently use to determine their capitalization strength.

Of course, some risks – such as cyber attacks and terrorism – have more modelling uncertainty than natural disasters, reinsurers suggest. For example, a product manager for one catastrophe modelling provider says that despite there being much better data for terrorism loss models today than 15 years ago, the potential for a terrorist using a nuclear, biological or chemical weapon is top of mind for some reinsurers since such an incident could threaten the entire industry.

Cat models are based on science, loss experience and engineering judgment, says Andrew Castaldi, senior vice president and head of catastrophe perils, Americas for Swiss Re. “All models have a degree of uncertainty in them,” Castaldi says plainly. “There are statistical models and with any statistic, there will be some type of uncertainty, and there is also some uncertainty in the scientific component,” he points out.


“Modelling uncertainty is both an issue and an opportunity for everyone in the reinsurance industry,” says Paul Nunn, head of natural catastrophe risk modelling for Paris-based SCOR. “Many complex third-party Cat models are in use and these models are changing frequently as new data and science emerges,” Nunn explains.

“Understanding the uncertainty in underlying science, modelling approaches and results requires specialist expertise and time, both of which are in very limited supply. While vendors do reveal some of this uncertainty, there is still much that is not quantified – this is of bigger concern to us,” he suggests.

Castaldi cites another concern. “As good as the models that we build are, they can’t read a policy form,” he cautions. “The models are designed on loss history from the past and that loss history is based upon policy wordings, deductibles, sub-limits and everything else that happened years ago, decades ago, in some cases,” he adds.

So if policy wording changes, “in effect, you really should be modifying your modelled output to track the policy form that you have,” Castaldi explains.

A broader form, he ventures, “would mean that your model results are probably understated. If you contract the form, more than likely, your model results might be overstated. I think that is one of the most important things that people tend to miss.”
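Castaldi’s point can be sketched in a few lines of code. The following is a hypothetical, simplified financial-terms step (real Cat models apply far richer structures, such as sub-limits, coinsurance and reinstatements); the function name and figures are illustrative only:

```python
def gross_loss(ground_up, deductible, limit):
    """Apply simple policy terms to a modelled ground-up loss.

    Hypothetical sketch only: the deductible is subtracted first,
    then the result is capped at the policy limit.
    """
    return min(max(ground_up - deductible, 0.0), limit)

# The same modelled event under two policy forms:
event_loss = 250_000.0
narrow = gross_loss(event_loss, deductible=50_000, limit=100_000)
broad = gross_loss(event_loss, deductible=10_000, limit=500_000)
# A broader form produces a larger gross loss from the identical hazard,
# so model output calibrated to the old, narrower form is understated.
```
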

Brian Schneider, senior director of insurance at Fitch Ratings Inc., suggests that Cat modelling “is a big factor” in determining a reinsurer’s capitalization strength. “Model uncertainty comes from a variety of sources – accuracy of the underlying insured data, robustness of the data event set of the model itself, assumptions as to claims inflation from demand surge and the unique aspects of the actual event itself,” Schneider says.


A paper released in late April by Standard & Poor’s Financial Services LLC notes that there is “significant modelling uncertainty” when estimating a reinsurer’s exposure to catastrophe. Cat models created by third-party vendors “reflect the vendor’s own view of catastrophe risk,” states How We Capture Catastrophe Modeling Uncertainty in (Re)insurance Ratings.

“Reinsurers may not fully agree with these views, and some have even developed their own models for some perils. In fact, regulators have encouraged reinsurers not to blindly use the vendor models, but to review the workings of the models and identify and address any elements that do not reflect their view of the risk,” the paper notes.

In rating reinsurance firms, S&P researchers “take a negative view if a reinsurer does not vet and validate the vendor models to establish their suitability for the reinsurer’s own view of the risk and catastrophe exposure,” it adds.

Castaldi reports that Swiss Re relies almost entirely on its own models. “Our whole risk appetite, capital costing system, accumulation, everything is tied to our own internal tools,” he notes.

“The only reason we license an external vendor tool is because of their data format,” he goes on to say.

For its part, SCOR “operates a full internal model for capital setting in the Solvency II sense,” writes Nunn.

Solvency II is a European Union (EU) directive that applies to some property and casualty insurers. Among other things, it requires insurers to hold enough capital to cover, with a confidence level of 99.5%, the market-consistent losses that may occur over the next year resulting from changes in the market values of assets held by insurers, according to information posted on the EU’s website.
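As a rough illustration of the 99.5% requirement (a loss level exceeded, on average, only once in 200 years), the following hypothetical sketch simulates one-year outcomes and reads off the empirical 99.5th percentile; the toy loss distribution is invented purely for illustration and resembles no actual capital model:

```python
import random

random.seed(0)

def var_99_5(annual_losses):
    """Empirical 99.5th-percentile loss from a list of simulated years."""
    ordered = sorted(annual_losses)
    index = int(0.995 * len(ordered))  # position of the 99.5% quantile
    return ordered[min(index, len(ordered) - 1)]

# Toy loss model: most years are benign, rare years are severe.
simulated = [random.expovariate(1 / 10.0) ** 2 for _ in range(100_000)]
capital = var_99_5(simulated)  # the 1-in-200-year loss level
```
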

As part of its internal capital model, SCOR adjusts and incorporates third-party models, “where these have been formally adopted in the business,” Nunn says. “Internally, SCOR develops Cat models to supplement where no third-party model is available or fit-for-purpose,” he reports.

“We use models as one tool, amongst others, to inform decision-making,” he says. “Modelling uncertainty can be a major issue where firms are overly influenced by their modelling results.”


There are “two main sources of uncertainty that permeate through modelled losses,” says Tom Sabbatelli, product manager with the model product management team at Risk Management Solutions (RMS) Inc. “One part of that is the data and the assumptions that are being used by a modelling company, and the other part is going to be based on the data quality and availability that is controlled within each individual company.”

A Cat model depends on exposure data; for instance, “characteristics that dictate” how a building would respond to a peril such as wind or ground shaking, Sabbatelli reports.

For that, the four “primary characteristics” of a building are the number of storeys, construction type, occupancy and year it was built. “Those are generally well-captured by a majority of companies in North America,” Sabbatelli says.

“Cat models are trying to predict losses to an insurer or reinsurer’s portfolio, so you need to bring the right information into the model if you want to get the right output,” says Cagdas Kafali, vice president of research at AIR Worldwide, a unit of Verisk Analytics Inc.

For property risk, a modelling firm would “want to know the use case,” Kafali explains. “Is it a hospital? Is it a restaurant? Is it a concrete structure or a masonry one? Does it have a basement? So there is a lot of input that we need to bring into the Cat model and some insurance companies are doing better than others in collecting the data.”

Structure information like use, occupancy, material, height and construction year is generally incorporated into Cat models in the United States and Canada, Kafali notes, but adds that this may not be the case in other markets, such as Asia. If this type of information is missing or incorrect, however, “then the model is making an assumption,” he notes.
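The fallback Kafali describes can be sketched as follows; the field names and regional default values here are hypothetical, not drawn from any actual vendor model:

```python
# Hypothetical defaults a model might assume when primary exposure
# characteristics are missing from an insurer's submission.
REGIONAL_DEFAULTS = {"construction": "wood_frame", "stories": 1,
                     "occupancy": "residential", "year_built": 1980}

def complete_exposure(record):
    """Fill missing primary characteristics with assumed defaults,
    and flag which fields the model had to guess."""
    filled = dict(record)
    assumed = []
    for field, default in REGIONAL_DEFAULTS.items():
        if filled.get(field) is None:
            filled[field] = default
            assumed.append(field)
    filled["assumed_fields"] = assumed  # where the model is assuming
    return filled

building = {"construction": "masonry", "stories": None,
            "occupancy": None, "year_built": 1975}
completed = complete_exposure(building)
# completed["assumed_fields"] → ["stories", "occupancy"]
```

Every assumed field widens the uncertainty of the final loss estimate, which is why better-collected data narrows it.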

“Asia is where we, as an industry, need to really, really ramp up data collection,” contends Kafali.

The data there tends to be “at a very aggregated level, so they may know they have this amount of total exposure in a given province, but they don’t know where those are located. So without knowing where they are located, you are going to have a hard time estimating your risk from typhoons where flooding may be important,” he says.

Some insurers “go the extra mile” to capture secondary characteristics of a building, things like roof shape, roof material, local tree density and special shutter usage that “could exacerbate or mitigate damage,” Sabbatelli reports.

The issue is that this sort of information can be more difficult to collect, he points out. For example, “if the right nails weren’t used on a house and, therefore, it wasn’t built to code precisely, a Cat model wouldn’t pick up on that because that’s happening on such a small scale,” Sabbatelli explains.

Consider when RMS was updating its hurricane model: experts reported that in Texas, some roofs were not actually being built to code. As such, “roofs were failing at lower wind speeds than expected,” Sabbatelli reports.

“The model is meant to produce losses for the expectations, but if homes are not being built to the expectations, then it’s very difficult for the Cat model to represent that,” he points out.

A Cat model “produces mean losses, but, ultimately, this is useless without addressing some of the uncertainty around that because each location is going to have individual characteristics that set it apart from the average,” Sabbatelli cautions. “Even if we had a perfect data set, that would still produce a certain level of uncertainty that would need to be quantified,” he suggests.
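Sabbatelli’s point about uncertainty around the mean can be illustrated with a hypothetical sketch that samples a distribution around a reported mean loss rather than using the mean alone; the figures, and the choice of a zero-truncated normal, are assumptions for illustration only:

```python
import random

random.seed(1)

def sample_event_loss(mean_loss, coeff_of_variation, n=10_000):
    """Sample plausible per-location outcomes around a modelled mean loss.

    Hypothetical sketch of 'secondary uncertainty': individual locations
    vary around the mean, so we draw from a spread, truncated at zero
    because a loss cannot be negative.
    """
    sigma = mean_loss * coeff_of_variation
    return [max(random.gauss(mean_loss, sigma), 0.0) for _ in range(n)]

samples = sample_event_loss(mean_loss=1_000_000, coeff_of_variation=0.5)
# The sample average sits near the reported mean, but individual outcomes
# spread widely -- the variation that per-location characteristics induce.
```
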

For some perils, Sabbatelli notes, the historical record of frequency and severity is shorter than for others.

For example, south of the border, the National Oceanic and Atmospheric Administration’s HURDAT2 database has information as far back as 1851.

“160 years’ worth of data represents about the longest historical record we have for any natural catastrophe peril, but if you speak to statisticians, they may say that 160 years is not necessarily a long enough sample to work with,” he says.

“While we have a certain length of record of data for earthquakes, they have far less frequency, whereas hurricanes are happening far more frequently,” Sabbatelli adds.
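The statisticians’ concern can be made concrete with a hypothetical sketch: assuming hurricane landfalls arrive as a Poisson process, a 160-year record still leaves a material confidence band around the estimated annual rate. The event count used below is a rough illustrative figure, not a number quoted from HURDAT2:

```python
import math

def rate_with_ci(total_events, years):
    """Annual event rate and a rough 95% confidence interval,
    assuming Poisson arrivals and a normal approximation."""
    rate = total_events / years
    se = math.sqrt(total_events) / years  # std error of the rate
    return rate, rate - 1.96 * se, rate + 1.96 * se

# Illustrative count of U.S. hurricane landfalls over a 160-year record.
rate, lo, hi = rate_with_ci(total_events=290, years=160)
# Even with 160 years, the band spans roughly +/- 12% of the estimate;
# rarer perils, such as major earthquakes, have far fewer events and so
# carry far wider bands.
```
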

Kafali says there is plenty of information from both insurers and government agencies, such as the U.S. Geological Survey, on catastrophes in the U.S.

Fitch Ratings’ Brian Schneider notes “certain regions can have more modelling uncertainty to the extent that the frequency of events is lower and the catalogue of events is more limited,” which is “particularly the case for earthquakes.”

And since perils like flood and severe convective storm “happen on a much smaller scale” than earthquake and hurricane, Sabbatelli notes, he suggests it is difficult for insurers to model individual thunderstorms and tornadoes unless they use computers with “huge” processing capacity.

“You can’t run these millions and millions of thunderstorms at a high scale because it could cause the computer to crash. So you have to find a middle ground over which you can measure the larger-scale aspects that drive severe convective storms while also trying to maintain some level of detail within individual events,” he explains.

“Over time, as computing power has become more sophisticated and the modellers have had more events to study, the models have continued to improve and have proven to be extremely useful,” Schneider points out.


In addition to natural catastrophes, RMS also models losses from terrorist attacks. These models are based, in part, on computational fluid dynamics to “simulate how blast hazard waves propagate, deflect off buildings and damage different kinds of structures,” explains Chris Folkman, the company’s senior director of product management.

“We have less data on large-scale acts of terror available to us compared to the climate perils,” Folkman notes. “For things like wind storms and hurricanes, we have millions of sensor readings and wind speed readings, and it’s very well-established science with a lot of data behind it,” he adds.

That said, the industry has “much better data” on malicious disasters today than it did September 11, 2001, when al-Qaeda operatives hijacked four civilian airliners, crashing two into the World Trade Center in New York City and one into the Pentagon outside of Washington, D.C.

Inflation-adjusted insured losses resulting from those attacks were about US$42.9 billion, Congressman Gregory Meeks noted in 2014 during a debate on a bill to extend the U.S. Terrorism Risk Insurance Act. After 9/11, “terrorism risk insurance quickly became either unavailable or very, very expensive and unaffordable,” Meeks said at the time.

The Terrorism Risk Insurance Program Reauthorization Act, which was signed into law in January 2015, extends the act to the end of 2020. In short, the act requires commercial property insurers in the U.S. to offer terrorism coverage, with the federal government sharing losses under certain conditions.

A 2014 report by the U.S. President’s Working Group, The Long-Term Availability and Affordability of Insurance for Terrorism Risk, noted that while risk models for terrorist attacks “have become more advanced” since 2002, “commenters report that such models are still of relatively limited utility, particularly in terms of developing pricing for the risk of large-scale attacks with a sufficient degree of confidence.”

One recent tragedy – which at press time was under investigation as a possible terrorist attack – was the shooting in the early morning hours of June 12 at the Pulse Nightclub in Orlando, Florida. As of late June, the Orlando Police Department reported, 50 people, including the perpetrator, U.S. citizen Omar Mir Seddique Mateen, had died and another 53 had been injured.

“There are strong indications of radicalization by this killer, and a potential inspiration by foreign terrorist organizations,” U.S. Federal Bureau of Investigation director James Comey told reporters June 13.

Mateen “made clear his affinity, at the time of the attack,” for the Islamic State of Iraq and the Levant, Comey reported.

But Mateen “also appeared to claim solidarity with the perpetrators of the Boston Marathon bombing, and solidarity with a Florida man who died as a suicide bomber in Syria for al Nusra Front, a group in conflict with Islamic State,” he added.

“What happened in Orlando was absolutely terrible, but it remains to be seen what the overall insured loss will be from that,” Folkman says. “Compared to a natural catastrophe peril or something like that, in a way that could threaten solvency, I don’t think the Orlando attack is going to reach that level.”

What could potentially reach that level are incidents in which terrorists use chemical, biological, radiological or nuclear (CBRN) weapons. That possibility “really keeps a lot of reinsurers up at night, because a CBRN event could threaten not just the solvency of one reinsurer, it could potentially threaten the entire industry,” Folkman comments.


The presence of CBRN material, as well as explosives, “creates a significant risk of diversion or exploitation by terrorists or criminals,” Public Safety Canada notes on its website.

Folkman reports that indications are that mustard gas has been used recently in Syria. “What’s even more concerning, and this was recently highlighted by (U.S.) President (Barack) Obama at a nuclear summit in April, is there seems to be a renewed interest by a number of threat groups in revitalizing their research and development programs for CBRN attacks and more ambitious forms of terrorism,” he adds.

Those types of “severe attack scenarios” were mentioned in RMS’s recent paper, Terrorism Insurance & Risk Management in 2015: Five Critical Questions.

For example, modelled property damage loss from a five-kiloton nuclear detonation in Chicago (which the paper projects will kill 300,000 people) is US$323 billion while the modelled property damages from a similar attack in London (projected to kill about 190,000 people) is US$69 billion. The modelled property damage loss in New York City was US$12 billion from a sarin gas attack (with 2,000 fatalities) and US$127 billion from a cesium-137 dirty bomb (projected to kill few people).

Attacks such as these “still carry a low probability of success,” RMS suggests in the paper.

“Typically speaking, the uncertainty levels around CBRN events are very large compared to the uncertainty associated with conventional terrorism,” Folkman reports. “One of the reasons is that these are very, very rare events that have very high severity associated with them. The CBRN events also depend on things like wind conditions for successful dispersion and the technical barriers around assembling a successful weapon tend to be very high. That adds to the overall uncertainty level compared to something like a straightforward bomb blast,” he explains.

Folkman reports RMS has modelled approximately 90,000 events, ranging from the smallest, a car bomb, up to truck bombs and aircraft impacts. “We try to quantify risk and our ability to quantify the risk depends on the amount and the quality of the data that we have.”

The risks from terrorism and cyber events “are much less well-understood and the model methodologies are correspondingly less well-tested,” SCOR’s Paul Nunn notes. “While scenarios can be designed and used to understand how portfolios behave over time, representing the frequency of attacks, in the face of global security efforts to intercept, remains a key challenge.”

Risks such as terrorism and cyber “have more uncertainty in that there is more difficulty in modelling the frequency of events as these are man-made and not driven by science as are natural catastrophes,” points out Fitch Ratings’ Brian Schneider.

“It could be relatively easy for us to (simulate) a certain bomb in a building and figure out what type of damage occurs to that building or the surrounding buildings,” says Swiss Re’s Andrew Castaldi. “But the likelihood of that happening is very, very hard to predict in a model because we cannot read the human aspect,” he adds.

The likelihood of that bomb being detonated in a building “could be influenced not only by the person planting the bomb, but also by the government’s activity in trying to prevent it,” he says.

“People have to realize no model is ever going to be perfect,” Castaldi points out. “It’s a model; it’s not reality, but it shows you relativity. For example, a certain area might have greater exposure to a given hazard than another area and you would see the losses from the area being higher there. It does give you an idea of accumulation. It’s a very good knowledge system,” he suggests.

Castaldi regards Cat modelling as “one of the most significant steps forward that the property insurance industry has made in the last 25 years.”

Still, AIR Worldwide’s Cagdas Kafali notes, models will always carry some uncertainty. “We can reduce it if we inject more knowledge and more data into the modelling process,” he notes.

“There is a part of the uncertainty that we will never be able to remove. That’s the whole idea of risk. That’s why there is insurance,” he adds.