Canadian Underwriter

Living with the Black Swan


May 1, 2012, by Bill Keogh and Paul Thenhaus



Over the past two decades, the insurance industry has come to embrace the value of catastrophe (CAT) models to assess and manage risk, and with good reason. An essential part of a robust risk management process, models bring increased discipline and transparency, allowing industry professionals to make objective risk assessments of insured loss relative to different perils and regions. They have also led to better management of portfolios, enabling users to develop consistent business plans, thereby lowering the risk of capital loss, or worse, insolvency following major catastrophes. Considering that decisions are made and billions of dollars potentially gained or lost based on the results of CAT models, it is critical to fully understand what they can and cannot do in order to make sound business decisions.

In the past, the only way to estimate a probable loss was with a limited number of statistical studies of past losses from various historical events. Not surprisingly, the degree of uncertainty was quite large. Thankfully, modeling methodology has improved, and we now have a sophisticated analytical framework with which to explore and attempt to quantify the uncertainty associated with physical hazards. However, despite our best efforts, we can only reduce, not eliminate, uncertainty. Why is that?
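To make that analytical framework concrete, here is a minimal sketch in Python, using purely illustrative assumptions rather than parameters from any actual CAT model: simulate many years of hypothetical event occurrence and severity, then read a return-period loss off the resulting exceedance distribution.

import random

# Illustrative assumptions only: events arrive as a Poisson process and each
# event's insured loss follows a lognormal severity distribution.
RATE = 0.2             # assumed annual frequency of damaging events
MU, SIGMA = 18.0, 1.2  # assumed lognormal severity parameters
YEARS = 100_000        # number of simulated years

def simulate_annual_loss():
    # Poisson arrivals generated from exponential inter-event times within one year.
    t, total = random.expovariate(RATE), 0.0
    while t < 1.0:
        total += random.lognormvariate(MU, SIGMA)
        t += random.expovariate(RATE)
    return total

annual_losses = sorted(simulate_annual_loss() for _ in range(YEARS))

# Annual loss exceeded with a 1-in-250-year (0.4%) probability.
idx = int(YEARS * (1 - 1 / 250))
print(f"Approximate 250-year return-period loss: {annual_losses[idx]:,.0f}")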

With all the brainpower collaborating on model development, from geologists to mathematicians, it is not for lack of expertise. The answer comes down to the element of surprise. Consider the unprecedented M 9.0 Tohoku earthquake and tsunami that struck Japan’s northeast coastline just a year ago, an event marked by unique and unexpected circumstances. Until then, the commonly accepted scientific view was that an M 8.2 earthquake was the largest possible in this region. What’s more, the largest losses from this event came from the surprise perils of tsunami and flood.

The uncertainty associated with such events reflects not only the approximation of the earthquake’s effects, but also variables such as the number and types of structures being analyzed, the relationship between structural response and damage, and the data being analyzed and how they are modeled.

As much as we strive to reduce uncertainty, it will always be there—uncertainty in time (we can only estimate the frequency of occurrence), in space (we do not know exactly where an event will occur), in intensity and in the spatial distribution of that intensity. While we know what has happened in the past and can envision what might happen in the future, we cannot know for certain which events will occur.

Unknowns

A statement made by former US Secretary of Defense Donald Rumsfeld in reference to military threats to the US succinctly captures the nature of CAT modeling: “There are known knowns, known unknowns, and unknown unknowns.”

CAT models strive to capture the known knowns and the known unknowns of the hazard they try to estimate: the current state of knowledge and the degree of uncertainty in that knowledge. For example, we do not know for certain how every building type responds to a specific ground motion or wind speed. The events that have occurred are a rich source of data; they inform us about a given peril in a given region and provide insight into the nature of extreme and rare hazards.

Known unknowns refer to events that we can infer from what has already happened. They are the limits of current knowledge: we know what we do know and what we don’t. For example, while there have not been any intense land-falling hurricanes in northern Florida for the past 100 years, or a megaquake of M 8.5 or higher in the Pacific Northwest region, that doesn’t mean it cannot happen, just that it has not happened.

As we have seen, and will continue to see on occasion, there are some low-probability, high-consequence events that simply cannot be anticipated. These unknown unknowns are the “Black Swans” of CAT modeling: we can do nothing about them with current knowledge.

This is why we need models. They build a robust set of possibilities informed by, but not constrained to, what has already happened. We continue to learn from every event, and, as science evolves, so do CAT models. The stakes are high for model developers to continue researching ways to capture as many sources of uncertainty as possible—failure to do so will lead to a systematic understatement of risk.

Changing Technology

With each significant event that has occurred, especially those with surprises, we have seen an increased use of technology in the application of CAT models. With regard to megathrust earthquake hazard, for example, the most significant technological advancement is the rapidly increasing coverage of the Global Positioning System (GPS) worldwide. Originally developed exclusively for military applications, GPS has expanded over the last 20 years into civilian and research applications that have greatly contributed to knowledge of the deformation of the surface of our planet. Stress accumulation along fault zones is reflected in slow strain accumulation, or deformation, of the earth’s surface. Megathrust fault regions of the circum-Pacific Ring of Fire, where tectonic plates collide and are thrust over one another, have some of the highest strain accumulation rates worldwide, and those faults remain locked between the slip episodes that cause giant earthquakes. However, the accumulating stress on the megathrust fault during this interseismic period is reflected in the uplift and warping of the leading edge of the overriding tectonic plate.

Such deformation is being continuously monitored by GPS measurements in circum-Pacific regions such as Japan, Chile and the Cascadia subduction zone, which encompasses Vancouver Island, the BC Lower Mainland and the US Pacific Northwest (PNW). GPS deformation models in the PNW are being used to explore the nature of surface deformation that will accompany the next giant earthquake in this region, which is estimated to be an M 8.9 event.

Off the northeastern coast of Honshu, Japan, continuous GPS measurements since 1995 showed the steady build-up of strain in the region of the 2011 Tohoku earthquake. Unknown at the time was that the region of highest strain accumulation off the east coast of Honshu outlines precisely the epicentral region of the 2011 Tohoku earthquake. A second region of high strain accumulation off the east coast of Hokkaido remains an ominous reminder of a potential future quake.

Following the 2010 M 8.8 Maule, Chile earthquake, detailed analysis of geodetic data from a dense GPS network revealed that the earthquake did not relieve all of the stored stress on the megathrust rupture segment and that up to five metres of potential slip remained, sufficient to produce a magnitude 7.5 to 8.0 “aftershock” in the future.
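The link between residual slip and magnitude in such an analysis follows from the standard seismic moment relation: moment equals rigidity times rupture area times average slip, and moment magnitude is Mw = (2/3) log10(M0) - 6.07, with M0 in newton-metres. The short sketch below reproduces the arithmetic under assumed values; the rigidity and rupture areas are illustrative, not figures from the cited Maule study.

import math

RIGIDITY = 3.0e10   # assumed crustal rigidity, in pascals
SLIP = 5.0          # residual slip, in metres, as cited above

def moment_magnitude(area_km2, slip_m, rigidity=RIGIDITY):
    # Seismic moment M0 = rigidity * rupture area * average slip, in newton-metres.
    m0 = rigidity * (area_km2 * 1.0e6) * slip_m
    return (2.0 / 3.0) * math.log10(m0) - 6.07

# Illustrative rupture areas for the unbroken patch (assumptions, not measurements).
for area in (2_000, 10_000):
    print(f"area {area:>6} km^2 -> Mw ~ {moment_magnitude(area, SLIP):.1f}")

With these assumed inputs the result spans roughly Mw 7.6 to 8.0, consistent with the range quoted above.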

Modeling Challenges

GPS is one example of how changing technology and the rapid accumulation of data worldwide are providing unprecedented scientific insight into the physics of fault rupture. Such improvements include more refined time-dependent models of great earthquake occurrence in megathrust regions of the circum-Pacific, and improved knowledge of the potential size of these earthquakes, so that surprises, or unknown unknowns such as the Tohoku quake, can be reduced in the future.
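As a rough illustration of what a time-dependent recurrence model does, the sketch below conditions a lognormal renewal model on the time elapsed since the last great earthquake. Every number in it is an assumption chosen for illustration, not a published estimate for any particular subduction zone.

import math

MEAN_RECURRENCE = 500.0  # assumed mean interval between great earthquakes, years
SHAPE = 0.5              # assumed lognormal shape (aperiodicity)
T_ELAPSED = 312.0        # assumed years elapsed since the last rupture
WINDOW = 50.0            # forecast window, years

# Choose the lognormal location so the distribution's mean equals MEAN_RECURRENCE.
LOC = math.log(MEAN_RECURRENCE) - 0.5 * SHAPE ** 2

def cdf(x):
    # Lognormal CDF via the error function.
    return 0.5 * (1.0 + math.erf((math.log(x) - LOC) / (SHAPE * math.sqrt(2.0))))

# Probability of rupture in the next WINDOW years, given no rupture so far.
p = (cdf(T_ELAPSED + WINDOW) - cdf(T_ELAPSED)) / (1.0 - cdf(T_ELAPSED))
print(f"Conditional probability of rupture in the next {WINDOW:.0f} years: {p:.1%}")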

Further challenges focus on the phenomenon of correlation (the tendency, or lack thereof, of distributions to move in lockstep) within the components of extreme events, such as hazard and vulnerability. Correlation modeling affects models’ performance, and either overestimating or underestimating correlation can prove problematic for anyone trying to quantify the risk of ruin, or making pricing or reinsurance purchasing decisions.

As we see it, correlation modeling must include a reasonable, simple rule for quantifying outcome distributions, paired with straightforward calculation methods, while also allowing for differing correlations between loss distribution components (such as occupancy, location and structural characteristics) and for variability in loss data by peril and region.
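One simple way to picture such a rule is a Gaussian copula: correlated standard normal draws are mapped through each component’s own loss distribution, so the marginals stay fixed while a single parameter controls how much the components move in lockstep. The sketch below is a minimal example under assumed lognormal marginals and an assumed correlation; it is not any vendor’s actual method, but it shows how the correlation assumption moves the aggregate tail.

import math
import random

RHO = 0.6              # assumed correlation between two loss components
MU, SIGMA = 15.0, 1.0  # assumed lognormal marginal parameters
TRIALS = 200_000       # simulated trials

def aggregate_tail(rho, quantile=0.99):
    # Sum two lognormal loss components whose underlying normals have correlation rho.
    totals = []
    for _ in range(TRIALS):
        z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * z2
        totals.append(math.exp(MU + SIGMA * x1) + math.exp(MU + SIGMA * x2))
    totals.sort()
    return totals[int(TRIALS * quantile)]

print(f"99th percentile aggregate loss, independent: {aggregate_tail(0.0):,.0f}")
print(f"99th percentile aggregate loss, rho = {RHO}:  {aggregate_tail(RHO):,.0f}")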

Scientific theories developed in hindsight need to be tested and applied prospectively to assist our understanding of future risk. This process involves large uncertainties, and it is the modeler’s challenge to assess them in a way that represents the best science while providing meaningful improvement to hazard and loss models. Pairing technological ease of use with a more robust approach will give brokers more reliable access to insight and data, allowing for better assessment of risk.

Bill Keogh is president of EQECAT and Paul Thenhaus is senior geologist at EQECAT.

____________________________________________________________________________________

Copyright 2012 Rogers Publishing Ltd. This article first appeared in the March 2012 edition of Canadian Insurance Top Broker magazine.


