Canadian Underwriter
Feature

Chronicle of Perils Foretold


February 1, 2006   by David Gambrill



Hurricanes Katrina, Rita and Wilma damaged a lot of lives and property when they hit the southern U.S. in late 2005. Apparently, they also damaged the insurance industry’s confidence in the catastrophe modeling process.

Last year, people in the insurance industry became accustomed to hearing computer models produce massive damage estimates: Katrina losses at between US$40 billion and US$60 billion; Rita damages at between US$3 billion and US$5 billion; and Wilma damages at between US$5 billion and US$10 billion.

Critics of cat modeling note these aggregate loss totals masked the impact of the storms on individual insurers. In one example known to Aon Re, after one of a record number of hurricanes hit U.S. shores last season, an insurer openly wondered why his company’s balance sheet showed $120 million in claims when cat modeling had forecast the company’s damage exposure at only $37 million.

The concerns went beyond those of individual insurers: financial ratings agencies also publicly voiced concerns about companies relying exclusively on cat modeling. In a September 2005 special report, Standard & Poor’s observed: “Katrina’s unexpected mix of wind and flood damage reminded the industry that its experience and knowledge of large loss events is limited so catastrophe modeling, while to some extent a substitute for experience, has its limitations.” The problem, S&P noted, is “underestimating the damages through computer modeling means insurers and reinsurers will, as a result, underprice their insurance product.”

“That raises the issue of where to raise the capital when it turns out the losses are greater than anticipated.”

Discussions about modeling are not exclusive to hurricanes or the U.S. Ellen Moore, the CEO of Chubb Insurance Canada, raised the topic at a January 2006 lunch meeting of the Insurance Brokers of Toronto & Region. Moore raised the subject in the context of her speech, ‘The Changing Environment Due to Natural Catastrophes,’ in which she referenced the massive property damage following severe storms in 2005 in Toronto, Montréal, Calgary and other areas.

“One of the areas the insurers have under review at this point in time are the various risk models that have been used to help predict storm damage,” Moore told her audience. “You know, when we look at that event that happened on Aug. 19 in the GTA (Greater Toronto Area), and actually all of the storms that I’ve mentioned [including two in Montréal and one in Calgary], it was not loss due to wind damage, which is where all of our predictability models are concentrating. It’s in losses due predominantly to flooding, or seepage of groundwater… I think for most insurers, people these days are spending a lot of money on home entertainment centers that are often unfortunately below ground. So that’s where a lot of this loss activity is coming from. So we need better underwriting guidelines around some of those geographies. The double-jeopardy for us happens to be that massive flooding is not well-anticipated in those models and we’ll see what we come up with.”

Certainly, technology is already in the works to provide insurers with a more nuanced analysis of specific perils. Canadian insurers can look south of the border to see new or forthcoming models for estimating damage caused by floods, winter storms and brushfires, among other perils (see below). But to be fair to cat models, the criticisms may neglect the interaction between the technology and the insurers who use it – and the significant impact that interaction has on the damage estimates. As AIR Worldwide, Aon Re and others noted when the models came under fire, computer modeling of catastrophic loss damages is a process, not just a technological panacea.

EXPERIENCE VS. EXPOSURE

What is the process? First, it is worth looking into the underlying assumptions of the cat models to understand what they do. To begin with, it should be recognized that using exposure models to predict cat losses differs from using traditional experiential modeling techniques, as Dr. Guru Rao, the head of Aon Re’s catastrophe modeling unit in Chicago, notes.

In an experiential model, loss experience from the past is projected into the future to estimate what might happen over a certain period of time. This approach is particularly successful when an insurer has a huge, broad base of data collected over a long period. In auto insurance, for example, insurers have quite a bit of historical data at their disposal – including information about the makes of cars, accident statistics, drivers’ records, etc. – which gives them a basis on which to make reliable estimates about the future.

The same approach does not work well for modeling catastrophes. For one thing, catastrophes don’t happen very often. Rao notes that between 1933 and 1965, Florida witnessed 11 major hurricanes (defined as Category 3, 4, or 5 storms). Over the next 38 years, from 1966 to 2003, Florida saw only one major hurricane (Hurricane Andrew in 1992). Then, in 2004 and 2005, the state was affected by eight hurricanes, six of which made landfall there. The data set is simply not large enough to use an experiential model for predicting catastrophic events.

The length of time between catastrophic events can also contribute to the challenge of predicting catastrophic damage losses. Consider, for example, the number of people who have moved to Florida and established new businesses there over the past 100 years. Such development has radically altered catastrophe exposure in the state. Had Hurricane Andrew made landfall in 1892 rather than 1992, for example, it would have done far less property damage.

For these and other reasons, catastrophe models are based on exposure modeling. Geocoding technologies allow an insurer to find out, given the street address, exactly where a house is located on the earth in terms of latitude and longitude. Once insurers know this, they can input data into models that are programmed with synthetic events – thousands of different earthquakes, tornadoes, hailstorms, winter storms, and most recently, terrorist attacks. These synthetic events are based on historical events, but supplemented with science and engineering knowledge. Today, an insurer has what Rao calls “a virtual world of hurricanes and earthquakes that you can run on detailed policy information – not as of yesterday, not as of last year or 10 years ago.” When insurers run the synthetic events of different landfall frequencies against their property locations or risks, the models produce potential losses for various events – for example, Category 3, Category 4, or Category 5 storms.
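To make the process concrete, the core loop of an exposure-based model can be sketched in a few lines of Python. This is only a toy illustration: the class names, damage function and figures are hypothetical and do not reflect the internals of any commercial cat model.

```python
# Illustrative sketch only: a toy exposure-based loss calculation.
# All names (Exposure, SyntheticEvent, damage_ratio) are hypothetical,
# not the API of any commercial catastrophe model.
from dataclasses import dataclass

@dataclass
class Exposure:
    latitude: float        # from geocoding the street address
    longitude: float
    replacement_cost: float
    construction: str      # e.g. "wood frame", "masonry"

@dataclass
class SyntheticEvent:
    category: int          # e.g. Saffir-Simpson category for a hurricane
    annual_rate: float     # how often an event like this is expected to occur

    def damage_ratio(self, exp: Exposure) -> float:
        # A real model derives this from hazard intensity at the site plus
        # engineering vulnerability curves; here it is a crude placeholder.
        return min(1.0, 0.05 * self.category)

def expected_annual_loss(portfolio, catalog):
    """Sum event losses across a synthetic catalog, weighted by frequency."""
    total = 0.0
    for event in catalog:
        event_loss = sum(event.damage_ratio(e) * e.replacement_cost
                         for e in portfolio)
        total += event.annual_rate * event_loss
    return total

# Example usage with made-up numbers.
portfolio = [Exposure(43.65, -79.38, 300_000, "wood frame")]
catalog = [SyntheticEvent(category=3, annual_rate=0.02),
           SyntheticEvent(category=5, annual_rate=0.002)]
print(expected_annual_loss(portfolio, catalog))
```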

And this is where insurers need to keep in mind that cat modeling is a process, Rao says. “The model is obviously a key part of the process, but what goes into the model is just as important,” he says. “You’ve heard of the term, ‘Garbage in, garbage out’? The primary changes have to be in how the models are developed and how they are used.”

For example, cat models rely heavily on exposure characteristics – the location of the exposures, the type of construction, the type of occupancy, the year of construction, the number of stories – and they require this specific information in order to predict losses accurately. “If you don’t supply that information correctly to the cat models, they are going to come back with very poor estimates,” Rao says.

REPLACEMENT COST

One of the most important pieces of information to be input into the models is the estimated replacement cost of the building: how much would the building cost to rebuild from scratch if it were totally destroyed? Built into the model is a measure of how much of a projected loss will go to the insured, how much will go to the insurance company, and how much will go to the reinsurer. “If you are off by 20% on the replacement cost value, everybody else could be 20% or even 100% off, depending on how the policies are structured,” Rao says.

Rao posits the following fictitious scenario, which is exaggerated to prove a point:

Assume an insurer has a policy on a building with an estimated $200,000 replacement cost. Half of that value is deductible, so the policyholder retains the first $100,000 of any loss and the insurance company covers the rest. Details about the property are run through a hurricane model, and the model predicts the building will sustain 50% damage. In this scenario, all of that loss is retained by the insured, because half of the damage to the $200,000 house ($100,000) falls entirely within the deductible; the insurance company pays nothing. However, if the replacement cost of the house is actually $300,000 rather than $200,000, then all of a sudden the insurance company is going to pay $50,000. (Half of $300,000 is $150,000, of which $100,000 is deductible and therefore retained by the insured.)
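Worked through in code, the sensitivity to the replacement-cost assumption is easy to see. This is a minimal sketch of Rao's exaggerated scenario using the figures above; the helper function is purely illustrative and not how any policy system is actually structured.

```python
# A minimal sketch of Rao's scenario; the figures come from the example above.
def insurer_share(replacement_cost, damage_ratio, deductible):
    """Ground-up loss minus the amount the policyholder retains."""
    ground_up_loss = damage_ratio * replacement_cost
    return max(0.0, ground_up_loss - deductible)

deductible = 100_000  # half of the original $200,000 replacement-cost estimate

# Replacement cost modeled at $200,000: the loss sits entirely in the deductible.
print(insurer_share(200_000, 0.5, deductible))  # 0.0

# The same house actually worth $300,000: the insurer suddenly owes $50,000.
print(insurer_share(300_000, 0.5, deductible))  # 50000.0
```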

“You can see [how replacement costs] can play a huge role in whether loss estimates are over or under what they should be,” Rao says. “The modeling companies are saying: ‘We have found out through claims analysis from 2004-05 that there are many policies in which, when you model them, they are $1, but when you actually paid out the losses, they are $1.50. You’re modeling the wrong information.'”

For this reason, Rao believes it would be better for the insurance industry to share more information with cat modelers, who represent a variety of disciplines – actuaries, engineers, scientists, seismologists, meteorologists, wind engineers, hydrologists and computer programmers.

“I think down the road what’s going to happen is partnership between insurance companies and modeling companies,” Rao says. “Insurance companies have a reluctance to share a lot of information with the modeling companies – they need to keep their policy data confidential and claims data confidential. For some companies, that is intelligence they can use to better underwrite the risk. But what has happened is, in sort of a vacuum, the modeling companies have developed models based on their understanding of how claims are administered, how buildings are damaged and [an assessment of] all the loss issues from catastrophes. While some of that can be done in isolation…there are some realities that you can only get by assessing hard data from insurance companies, or by seeing what’s happening, how things come together.”

Insurers, for example, are starting to tell cat modelers of the need to improve the estimation of phenomena like demand surge. The stormy 2004-05 season illustrated how replacement costs can be skewed by demand surge in ways the models do not predict. Demand surge is the hyper-inflation that happens when demand for goods and services exceeds the supply. After the events in New Orleans and Florida, for example, building supply stores had to work hard to keep a steady supply of lumber and drywall, and out-of-town contractors commanded higher prices for their labour. Demand surge therefore increased replacement costs in ways the cat models did not account for. “In general, the models underestimated the damage to certain commercial class losses more so than for residential classes,” Rao says. “The models that estimate demand surge, storm surge, business interruption, and tree damage have to be improved.”
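One simple way to picture the adjustment modelers are being asked to make is a loading factor that grows with the size of the event. The sketch below is purely illustrative; the thresholds and factors are assumptions for the example, not figures from any modeling firm.

```python
# Illustrative only: applying a crude demand-surge loading to modeled losses.
# The thresholds and factors below are assumptions, not published figures.
def surge_factor(industry_loss_billions):
    """The larger the event, the more repair costs tend to inflate."""
    if industry_loss_billions < 5:
        return 1.0
    if industry_loss_billions < 20:
        return 1.1
    return 1.3

def surged_loss(modeled_loss, industry_loss_billions):
    return modeled_loss * surge_factor(industry_loss_billions)

# A $37M modeled loss in a $40B industry event grows to $48.1M after the loading.
print(surged_loss(37_000_000, 40))
```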

The last step in improving the cat modeling process is to know what is included in the model and what is not, and to make appropriate adjustments to loss estimates. Losses from loss adjustment expenses, debris removal, pollution, looting, liability, and inland flooding (which could be a huge part of the losses in New Orleans) are not included in hurricane cat models. For the most part, hurricane cat models estimate losses from wind damage only.

NEXT GENERATION CAT MODELS

Certainly the ‘black box’ aspect of cat models is becoming more colorful. Recently, the models have seen refinements in the estimation of damage due to flooding, winter storms, brushfires, storm surge and even terrorism.

In the wake of the major flooding in New Orleans, EQECAT is preparing a new flood model, to be released in the U.S. this year. Discussions are in the works about adapting the model for use in Canada. According to Tom Larsen, senior vice president of EQECAT, the model accounts for variables like the speed of rushing water based on known slopes, the risk of basement flooding due to property elevations, and storm surges like the ones that destroyed Florida beach homes.

The model also accounts for new exposures such as the leveling of land to build new parking lots and housing developments. Larsen says EQECAT is working with insurers, using the flood model to show them how the risks have changed.

Canadian insurers should also be interested in AIR Worldwide’s development of a new winter storm model, which predicts damages arising from winter storms such as the 1998 ice storm in eastern Ontario and Québec. That same storm affected the northeastern U.S., the model’s developer, Dr. Peter Daley of AIR Worldwide, noted. Daley said the winter storm model – available only in the U.S., as it is based solely on U.S. geography patterns – introduces new technology called “numerical weather prediction,” or NWP.

NWP is what meteorologists use to forecast the weather. It’s a three-dimensional, mathematical model of an event, starting at the earth’s surface and rising to a certain height in the atmosphere.

The vertical measurement, unique to NWP models, calculates wind speeds and temperatures above ground level in order to predict the form and amount of precipitation – whether snow, rain or ice. NWP differs from traditional earthquake and hurricane models, which are two-dimensional models focusing on wind speeds and events at ground level.
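A rough way to see why the vertical dimension matters is that the same surface conditions can produce very different precipitation depending on the temperatures aloft. The rule-of-thumb classifier below is a simplified illustration only, not the scheme AIR's model actually uses.

```python
# Toy illustration of why the vertical temperature profile matters.
# Thresholds are simplified assumptions, not any vendor's actual NWP scheme.
def precipitation_type(surface_temp_c, max_temp_aloft_c):
    if max_temp_aloft_c <= 0:
        return "snow"                 # the whole column stays below freezing
    if surface_temp_c <= 0:
        return "freezing rain / ice"  # melts aloft, refreezes near the surface
    return "rain"

print(precipitation_type(-2, 3))   # freezing rain / ice: the ice-storm case
print(precipitation_type(-5, -3))  # snow
```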

“Until now, estimating likely losses from winter storms has been very challenging,” John DeMartini, principal and senior vice-president of Towers Perrin’s reinsurance business, said. “With the release of AIR’s new model, we will be able to incorporate a scientific assessment of winter storm risk into our actuarial analyses, similar to other perils.”

A forthcoming U.S. brushfire model might also be of interest to Canadian insurers. This model would predict damages arising out of events such as the 852 forest fires that raged throughout B.C. in 2003, which cost the provincial government $157 million to extinguish. Steven Jakubowski, CEO of Impact Forecasting, said the brushfire model is built on U.S. territory, but Canadian insurers have expressed an interest.

Like the flood model, the brushfire model takes into account variables such as elevation when considering fire loss. For example, fire on steep hillsides might be harder for emergency crews to reach, thus increasing the property damage. The model also considers weather patterns and tree types in different regions of the U.S. At high-altitude locations, for example, there is more likely to be snow cover on the trees, and the resulting moisture will slow down the speed of a fire. On the other hand, Jakubowski notes, oil slicks on the ground in Texas might increase property damages due to wild brushfires.

The above is part of a trend in which the cat modelers are introducing new technologies to stay ahead of the curve in predicting property damage in a ‘Brave New World’ of catastrophic damage.

