Canadian Underwriter

Catastrophe Modeling: Shifting Perceptions

June 1, 2003   by Eric Gobble and Don Windeler at RMS


Canadian property and casualty insurers face a wide range of potential catastrophe perils when underwriting and managing a book of business: earthquakes, fires, hailstorms, tornados, floods, and ice storms all pose enough risk to merit analysis using catastrophe models or geographic risk accumulation systems. Recent developments are bringing new capabilities to the market, both in terms of catastrophe management systems and the science of catastrophe risk. The earthquake peril offers a good example of how these changes will impact Canadian property writers in the coming years.

“Cat models” have traditionally been used by insurers, brokers, and reinsurers to understand the magnitude of earthquake risk, and to structure and price risk transfer to reinsurance markets. Companies also benchmark capital requirements based on the models and report potential PML losses to rating agencies and regulators such as the Office of the Superintendent of Financial Institutions (OSFI). These applications continue to be central to ‘best practices’ for earthquake risk management, but a new generation of cat models is expanding the view of earthquake risk management in two ways: by incorporating improved scientific research and site-specific risk analysis, and by linking the portfolio risk assessment to front-line underwriting systems.
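The PML figures reported to rating agencies and regulators are typically read off an exceedance curve built from a cat model's simulated event set. A minimal sketch of that calculation, using an invented toy event loss table (the rates and losses below are illustrative only, not output of any real model):

```python
# Hypothetical sketch: reading a probable maximum loss (PML) off an
# occurrence exceedance curve built from a simulated event loss table.
# All rates and losses below are invented for illustration.

def pml(event_losses, return_period):
    """Loss at the given return period.

    event_losses: list of (annual_rate, loss) pairs, one per modeled event.
    return_period: e.g. 250 for a 1-in-250-year PML.
    """
    target_rate = 1.0 / return_period
    # Walk events from largest to smallest loss, accumulating annual rates:
    # the exceedance rate of a loss level is the summed rate of all events
    # producing at least that loss.
    cumulative_rate = 0.0
    for rate, loss in sorted(event_losses, key=lambda e: -e[1]):
        cumulative_rate += rate
        if cumulative_rate >= target_rate:
            return loss
    return 0.0  # portfolio never reaches the target exceedance rate

# Toy event loss table: (annual rate of occurrence, insured loss in $M)
table = [
    (0.010, 50.0),    # moderate event
    (0.004, 400.0),   # large event
    (0.002, 1200.0),  # rare, severe event
    (0.050, 5.0),     # frequent small event
]
print(pml(table, 250))  # 400.0 -- loss exceeded at the 1-in-250-year rate
```

Real cat models work with hundreds of thousands of simulated events and distinguish occurrence from aggregate exceedance, but the reported PML is conceptually this lookup.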


Scientists learn something new each time a significant earthquake occurs, and these new insights can have a downstream impact on cat model risk estimates. A relevant case in point is the Nisqually, Washington, earthquake of February 28, 2001. This magnitude 6.8 earthquake caused insured losses of approximately US$300 million in Washington state, but its implications for risk modeling extend to nearby areas of British Columbia such as Vancouver and Victoria.

Earthquakes like Nisqually (referred to by scientists as ‘intermediate-depth’ earthquakes) comprise a major proportion of the regional earthquake hazard. Previous events occurred in the Puget Sound region in 1949 and 1965, and catastrophe models for British Columbia reflected the best understanding of risk from such earthquakes. But when Nisqually occurred in 2001, modern strong-motion instrumentation yielded a wealth of new data on ground motions. The new data showed that ground motions were generally lower than those predicted by the most widely used ground motion attenuations (formulas that estimate how energy drops off away from an earthquake source).
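To make the idea of an attenuation relation concrete, here is a sketch using the generic functional form common to many published relations; the coefficients are invented for illustration and are not those of any real attenuation model.

```python
import math

# Illustrative ground motion attenuation relation of the generic form
#   ln(PGA) = a + b*M - c*ln(sqrt(R^2 + h^2))
# where M is magnitude, R is distance from the source, and h is a depth
# term. Coefficients a, b, c, h below are INVENTED for illustration.

def peak_ground_accel(magnitude, distance_km, a=-1.5, b=0.5, c=1.1, h=10.0):
    """Predicted peak ground acceleration (in g) at a given distance."""
    r = math.sqrt(distance_km ** 2 + h ** 2)
    return math.exp(a + b * magnitude - c * math.log(r))

# Energy drops off with distance: the same magnitude 6.8 event produces
# far weaker predicted shaking 150 km away than 20 km away. Comparing a
# relation's predictions against recorded motions at many distances is
# how discrepancies like those seen after Nisqually are identified.
near = peak_ground_accel(6.8, 20.0)
far = peak_ground_accel(6.8, 150.0)
```

Revising an attenuation relation shifts predicted shaking everywhere the model uses it, which is why the Nisqually recordings matter for cities well outside Washington state.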

These discrepancies were greatest for sites far away from the epicenter, such as Vancouver and Victoria, which are somewhat distant from the areas of Washington state where intermediate-depth earthquakes are most common. As a result, incorporating the revised attenuations tends to reduce overall risk estimates for the earthquake peril in British Columbia’s major cities.


The data on earthquake hazard in Canada is also evolving. Experience from numerous historic earthquakes has shown a correlation between the level of ground shaking and the properties of the top 5-30 meters of soil and rock. Buildings on young, unconsolidated sediments will tend to shake much harder than those on hard rock. As shown in the illustration for Victoria, these materials can vary significantly over a short distance.
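The effect of near-surface geology is often captured as a site amplification factor applied to a reference ground motion. A minimal sketch of that adjustment; the class names echo common practice, but the factors below are invented for illustration, not taken from any building code or model.

```python
# Hedged sketch: scaling a reference ground motion by local site
# condition. The amplification factors are INVENTED for illustration.

SITE_AMPLIFICATION = {
    "hard_rock": 0.8,      # shaking reduced relative to the reference site
    "firm_soil": 1.0,      # reference site condition
    "soft_sediment": 1.8,  # young, unconsolidated deposits amplify shaking
}

def site_adjusted_shaking(reference_pga, soil_class):
    """Scale a reference peak ground acceleration (in g) by soil class."""
    return reference_pga * SITE_AMPLIFICATION[soil_class]

# Two buildings at the same distance from the source can experience very
# different shaking if they sit on different near-surface materials.
rock = site_adjusted_shaking(0.20, "hard_rock")      # ~0.16 g
soft = site_adjusted_shaking(0.20, "soft_sediment")  # ~0.36 g
```

This is why higher-resolution geologic maps matter to underwriters: when soil conditions vary block by block, as in Victoria, the map resolution directly determines how finely otherwise similar risks can be differentiated.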

How can this new generation of catastrophe models help the insurance industry? First, by deploying these powerful tools and results to both portfolio managers and frontline underwriters. Portfolio management and underwriting can be viewed as a continuous feedback loop. Portfolio managers measure overall risk for the book of business to identify accumulations and diversification strategies, and then these strategies are carried out as underwriters select and price risks. The portfolio changes over time, and the process of measuring, monitoring, and adjusting underwriting guidelines begins again.

Updated cat models are incorporating enhanced data and new modeling techniques, giving portfolio managers a more accurate understanding of the underlying risk. For underwriters, there has been a huge expansion in the resolution and availability of earthquake hazard data. As higher-resolution and better quality geologic maps become available, their inclusion in a model enhances the underwriter’s ability to differentiate otherwise similar risks.

New models of building response can further differentiate risk based on building height and material, and the implications for damage in various types of earthquake shaking. Finally, advancements in computing technology, including web-based interfaces and increases in data storage capacity, allow underwriters to access more sophisticated information at the “point-of-entry”, and feed underwriting data back to portfolio managers as part of the risk assessment process.

Changes in catastrophe models offer an opportunity for improved decision making. Participants throughout the market either run catastrophe models or look at results produced from them. So it is important for all of these parties to understand how models are changing and how the results will affect the way that business is placed and the prices that are charged. Companies that embrace these innovations in catastrophe modeling science and computing will be a step ahead in tomorrow’s marketplace.