
Treaty Underwriters: Pricing & Volatility


July 1, 2002, by Rob Finnie, vice president of CCR



Treaty underwriters are rare, and rarity does not necessarily imply that we are odd, but consider these examples of our underwriting behavior: There is a saying that “accountants add, but actuaries multiply”. I would extend that with “and underwriters cannot do either”. How else do you explain an underwriter who will happily write business below cost?

Plus, in keeping with the business philosophies of the current times, reinsurance underwriters value “partnerships” and “long-term relationships” with our clients. We are sure that partnerships exist, even when we consistently lose money with the same partners year after year. The third example: getting reinsurance underwriters to agree on anything is like “herding cats”. Each of us has a mind of our own, based on a deeply held conviction that everyone else is either irrelevant or wrong.

This article is an attempt to shed some light on our odd underwriting behavior. If I miss something that you consider to be of crucial importance, I offer my apologies in advance. This is intended as an “hors-d’oeuvre”, and not a “full menu”.

In the year 2002, we all agree that the soft market for treaty reinsurance has ended and rates must rise. As we watch losses climbing steadily, and as the world seems more unpredictable every day, our head offices are asking for explanations. If we are lucky, they are also seeking solutions that do not involve our immediate personal departure.

The task that treaty underwriters now face is to price our business so we can make a profit for our companies. We also have to trust our pricing well enough to use it as the governing factor in our risk acceptance decisions. “Commercial decisions” (writing business at a loss, for some larger future objective) just do not sit well with shareholders and head offices anymore. Here is the problem: treaty underwriters predict the future and assign a dollar value to it.

Humans, as individuals, will continue to do “dumb”, even “dangerous” things, which makes predicting less of a “science”. Humans in groups will translate specific behaviors into trends without any conscious thought to the inherent underlying deviations (for example, “drinking and driving” trends). There are all sorts of social trends in play at any one time, for insureds, insurers, brokers, lawyers, and even politicians. Some are significant to reinsurance and some are probably not. I have wondered if a night school course in “advanced witchcraft” might improve my predictions, but so far the company has not allowed it on the expense account. I’m therefore stuck with more conventional methods.

SCIENCE OF RISK

We use statistics and computer “models” to help us make our predictions. There are two basic ways of looking at the future. We can base our predictions on “experience” or “exposure”. Experience rating looks at the premiums and losses of a treaty over a long period of time, like 10 years, and projects the trend in loss costs into the future. Exposure rating looks at industry results and compares these statistics to the treaty in question. Exposure rating looks at what could have happened, based on statistical trends.

At first glance, experience rating seems to make more sense, being based on actual events in a specific insurance portfolio. However, like some other “common sense” solutions, it is too simplistic and in the long run wrong. The exposure method is based on strong statistical models, but the models are vulnerable to the usual glitches and false assumptions that often make statistics meaningless.

EXPERIENCE RATING

Experience rating models have been around for decades, and although there is no set format or universally accepted commercial package, the components are generally the same. Here’s how it works:

We need a list of gross incurred losses for the last five to 10 years. In most cases, the farther back we can go, the more secure our figures are. Given only this much information, we can figure out an “average loss cost” (ALC) per year (total incurred losses divided by the number of years);

If we also have a list of the premiums that correspond to the claims years, we can calculate a “burn cost” (BC). The main advantage over an ALC calculation is that it adjusts for the size of the underlying portfolio. Without this adjustment, a growing portfolio will always look better in the earlier years, because there are generally fewer claims. This is the classic BC, and it is a benchmark number in a lot of underwriters’ minds (total losses divided by total premiums);

If we are working on an “excess of loss” (XS) treaty, we need to figure out what portion of the losses would be covered in each layer. We do that calculation first, and then figure out our ALC and BC. The layer BC should never be greater than the reinsurance rate. If it is, the treaty is a long-term money-loser. (A rough sketch of these calculations appears after this list.)
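
The article describes this arithmetic only in words, so here is a minimal sketch of it in Python. All premiums, losses, and the layer are invented purely for illustration:

```python
# Sketch of the experience-rating arithmetic described above.
# All figures are invented; a real analysis uses 5 to 10 years of
# actual gross incurred losses and the matching subject premiums.

# Ten years of gross incurred losses (from the ground up).
losses = [1_200_000, 850_000, 2_400_000, 600_000, 1_900_000,
          750_000, 3_100_000, 1_400_000, 900_000, 2_200_000]

# Subject premiums for the same ten years.
premiums = [10_000_000, 11_000_000, 12_500_000, 13_000_000, 14_000_000,
            15_500_000, 16_000_000, 17_500_000, 18_000_000, 19_000_000]

# Average loss cost (ALC): total incurred losses divided by the number of years.
alc = sum(losses) / len(losses)

# Classic burn cost (BC): total losses divided by total premiums.
bc = sum(losses) / sum(premiums)

# For an excess-of-loss treaty, allocate each loss to the layer first,
# then recompute the ALC and BC on the layered losses.
attachment = 1_000_000          # hypothetical $2M xs $1M layer
limit = 2_000_000
layer_losses = [min(max(loss - attachment, 0), limit) for loss in losses]
layer_alc = sum(layer_losses) / len(layer_losses)
layer_bc = sum(layer_losses) / sum(premiums)

print(f"ALC: {alc:,.0f}   BC: {bc:.2%}")
print(f"Layer ALC: {layer_alc:,.0f}   Layer BC: {layer_bc:.2%}")
# If the layer BC exceeds the reinsurance rate being quoted,
# the treaty is a long-term money-loser, as noted above.
```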

Sound simple so far? The classic BC may indeed be widely used as a benchmark, but only as a floor: it is extremely unlikely that next year’s losses will be less than the BC, and virtually guaranteed that they will be higher. Here are some reasons why (a sketch of the corresponding adjustments follows the list):

Claims inflation. It is more costly to settle claims now than it was in the past. This generally does not track with economic inflation because only specific sectors are involved, and there is a social aspect to claims settlements, particularly if we are looking at claims that take years to be settled, like auto claims.

Primary rate changes. Rate adjustments alter the premium side of the equation in ways that are completely unrelated to loss frequencies and severities. This problem could be avoided if we could compare losses with historical exposures, but this is not commonly available.

Systematic changes in loss frequency and severity. These are usually caused by changes in legislation or in the legal environment. Ontario auto losses over the last 10 years are a classic example.

Loss development over time. The recent losses on the incurred claims list are not what they seem…they have some growing to do.

“IBNR”. The four most dreaded letters in reinsurance. There are losses out there that we have not heard about yet.
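
None of these adjustments has a single agreed formula. The sketch below shows one hedged way of layering them onto a raw burn cost; the trend rate, rate-change factors, development factors, and IBNR load are all invented for illustration and would come from trend studies, rate-change histories, and development triangles in practice.

```python
# Hedged sketch of adjusting a raw burn cost for the problems listed above.
# Every factor here is invented for illustration.

years = [1997, 1998, 1999, 2000, 2001]
losses = [800_000, 950_000, 700_000, 1_200_000, 400_000]        # gross incurred
premiums = [9_000_000, 9_500_000, 10_000_000, 10_500_000, 11_000_000]

claims_trend = 0.06          # annual claims inflation (economic plus social)
rate_changes = {1997: 1.15, 1998: 1.10, 1999: 1.05, 2000: 1.02, 2001: 1.00}
ldf = {1997: 1.02, 1998: 1.05, 1999: 1.12, 2000: 1.30, 2001: 1.75}  # development to ultimate
ibnr_load = 1.05             # losses not yet reported at all (varies by year in practice)
treaty_year = 2003

adj_losses, onlevel_premiums = [], []
for yr, loss, prem in zip(years, losses, premiums):
    trended = loss * (1 + claims_trend) ** (treaty_year - yr)   # claims inflation
    developed = trended * ldf[yr] * ibnr_load                   # development plus IBNR
    adj_losses.append(developed)
    onlevel_premiums.append(prem * rate_changes[yr])            # primary rate changes

raw_bc = sum(losses) / sum(premiums)
adjusted_bc = sum(adj_losses) / sum(onlevel_premiums)
print(f"Raw burn cost: {raw_bc:.2%}   Adjusted burn cost: {adjusted_bc:.2%}")
```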

THE “LUCK FACTOR”

Experience rating was the standard for a long time and is still a major item in the reinsurance underwriter’s toolbox. It has a few drawbacks that are very difficult or impossible to compensate for as adjustments to the BC. The most fundamental and pervasive is “luck”.

Sometimes insurance companies are lucky in their claims experience. We like to assume that the “law of large numbers” (LLN) applies to the insurance portfolios that we reinsure. I think it does, but in a statistical way: it applies on average, yet even large personal lines portfolios show a range of results over the years. The LLN is overruled by the statistics that govern the probability of rare events. Sometimes we are simply lucky.

If there is no reliable claims history, as with new portfolios, or if there are too few losses to be statistically useful in our ultimate incurred development and IBNR adjustments, the experience rate method falls apart. This is probably the single most annoying drawback of experience rating. It was this problem that prompted the development of exposure rating.

EXPOSURE RATING

Exposure rating has developed significantly in the last decade, and because it is newer and more complicated than experience rating, there are even fewer standards and accepted procedures. Here is how it works. Exposure rating requires two main ingredients: statistics that describe the portfolio, and a set of industry loss information.

The industry loss information is too detailed and cumbersome for underwriters, so it is generally “pre-loaded” into the rating model by actuaries. Underwriters add the portfolio information and the model calculates an expected loss cost (ELC). Historical data does not enter into the calculations at all.

There are four sets of portfolio information required for exposure rating:

A list of exposures. This is commonly in the form of “limits profiles”, with the insurer’s portfolio broken down according to the individual policy limits. Some exhibits divide up the portfolio premiums, while others show the number of policies in each policy limit category.

A breakdown of the portfolio into “classes”. The underwriter chooses the statistical curves in the model that describe the portfolio. There are very few mono-line reinsurance treaties, so this usually means multiple selections.

A portfolio size indicator. This is usually the premium volume for each class of business we have defined. These figures are for the treaty year being assessed.

Underlying portfolio loss ratios. The model usually needs these to simulate individual losses from the embedded actuarial figures.

There are several other adjustments commonly available in exposure models, such as the investment rate (for present-value calculations on premiums and paid losses), the expected frequency of claims exceeding their policy limits (along with loss adjustment expenses and so on), and the way that losses attach to the treaty (either by date of loss occurrence or by policy inception date).

The model takes all the information listed above and runs it through a series of calculations to arrive at an ELC for the next underwriting year. Some models just provide the single, most likely loss cost. Other models go one step further into “stochastic modeling”, so there is a range of possible loss costs, with a probability of occurrence attached.
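
The article does not describe any specific model, but the core exposure-rating idea can be sketched as follows. The exposure curve, limits profile, loss ratio, and layer here are all invented for illustration; real models embed actuarial curves chosen by class of business.

```python
# Sketch of the exposure-rating idea: no treaty loss history, just a
# limits profile, an expected loss ratio, and an "exposure curve" giving
# the share of ground-up losses that falls below a fraction of the policy
# limit. The curve and all figures are invented.

def exposure_curve(x: float) -> float:
    """Share of ground-up losses retained below fraction x of the limit
    (an invented concave curve, purely illustrative)."""
    return min(1.0, x ** 0.4)

# Limits profile: (policy limit, subject premium written at that limit).
limits_profile = [(1_000_000, 6_000_000),
                  (2_000_000, 5_000_000),
                  (5_000_000, 4_000_000)]

expected_loss_ratio = 0.65                  # underlying portfolio loss ratio
attachment, layer = 1_000_000, 2_000_000    # hypothetical $2M xs $1M treaty

elc = 0.0
for limit, premium in limits_profile:
    expected_losses = premium * expected_loss_ratio
    below_attachment = exposure_curve(min(attachment / limit, 1.0))
    below_exit = exposure_curve(min((attachment + layer) / limit, 1.0))
    elc += expected_losses * (below_exit - below_attachment)  # share in the layer

subject_premium = sum(premium for _, premium in limits_profile)
print(f"Expected loss cost (ELC) to the layer: {elc:,.0f} "
      f"({elc / subject_premium:.2%} of subject premium)")
```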

COMPLICATED = ACCURACY?

Exposure model calculations are much more complicated than the experience rating calculations, which can be summarized as the simple equations mentioned above for each variant form. The beauty of exposure rating models is that we can make all of these statistical calculations happen with very little input from ourselves. Most experience rating can be done with a pencil and paper. I would not dream of trying to exposure rate without a computer.

Of course, all this power can be a dangerous thing for us. We can be lulled into thinking that because it is complicated, it is more accurate. In fact, it is much easier for little mistakes to turn into absurdly incorrect conclusions.

One of the common mistakes with exposure rating is to assume that the company we are analyzing is “average”. This simple assumption produces extremely bad underwriting. In fact, it turns the pricing process upside down. If the real underlying portfolio performs better than the industry average, this misuse of the exposure model will produce pricing that is too high.

Even worse, if the portfolio is actually performing poorly in comparison with the average, the model will indicate a price that is insufficient to cover the real losses. Eventually, we will pay losses as they really are, and not on the industry average. At the same time, we will have turned away good business that the model has mistakenly assessed as under-priced. I would guess that most reinsurance underwriters make this mistake once in a while, but if it happens too often it can be fatal.
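
A tiny numeric illustration of this trap, with invented figures: the model prices off an assumed industry-average loss ratio, while the claims are paid at the portfolio’s actual loss ratio.

```python
# Invented numbers showing the "assume the company is average" trap.
subject_premium = 20_000_000
layer_share = 0.10               # share of ground-up losses reaching the layer
industry_loss_ratio = 0.65       # what the misused model assumes
actual_loss_ratio = 0.80         # what this portfolio really runs at

model_price = subject_premium * industry_loss_ratio * layer_share   # 1,300,000
real_cost = subject_premium * actual_loss_ratio * layer_share       # 1,600,000

print(f"Model price: {model_price:,.0f}   Real expected cost: {real_cost:,.0f}")
# A worse-than-average portfolio gets under-priced; reverse the two loss
# ratios and a better-than-average portfolio gets over-priced instead.
```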

AFTER SEPT. 11

When September 11 happened, the world’s risk perspective changed. This paradigm shift also has implications for treaty reinsurance, but the new reality is superimposed on a familiar business environment. We have the same clients and the same tools that we had before the tragedy. Our clients are in the same business of providing insurance coverage for their policyholders. It will take some time before we know all the insurance and reinsurance implications of September 11.

However, it is already clear that neither experience rating nor exposure rating was of any use in predicting the loss to reinsurers. Both models draw on the past to predict the future. Unprecedented losses, by definition, are not part of the equation. In the aftermath of September 11, we now have a terrible measure of loss severity, but no frequency data. Where do we go from here? Can we continue to use the same tools?

The experience method is really not much use. Most Canadian treaties did not sustain a loss from September 11, so we have no severity data for them. There has never been a similar loss to Canadian treaties, so there is no frequency data either. This method is grounded in reality, so no loss equals “no problem”.

The exposure method has the advantage of being geared to predicting what could happen and what probably will happen in the future. If we can estimate a frequency, we can incorporate a “severity divided by frequency” estimate (in effect, spreading the expected severity over its return period) into our industry loadings, and we can apply it selectively according to each insurance portfolio’s lines of business and policy limits. There are lots of other issues around measuring terrorism exposures, and I do not know anyone who has solved this puzzle yet. But we may already have the tools.
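
As a hedged sketch of that idea, the fragment below turns an assumed severity and a guessed return period into an annual loading and applies it according to an assumed exposure mix; every figure is invented, since no credible frequency data existed.

```python
# Invented figures only: turning a severity estimate and a guessed return
# period into an annual terrorism loading, applied by line of business.
estimated_severity = 50_000_000      # treaty loss if the event happens
return_period_years = 250            # guess: one such event per 250 years

annual_loading = estimated_severity / return_period_years   # 200,000 per year

# Apply the loading only where the exposure actually sits, e.g. weighted by
# the share of the portfolio in the exposed lines and policy limits.
exposure_weights = {"commercial property": 0.6,
                    "personal property": 0.1,
                    "casualty": 0.3}
treaty_loading = annual_loading * exposure_weights["commercial property"]

print(f"Terrorism loading added to the ELC: {treaty_loading:,.0f}")
```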

THE “ODD BUNCH”

I remain convinced that treaty underwriters are odd, or at least we appear odd to most people because we have this responsibility to make judgments based on slippery social “science-type” data. Our tools are not exact and every submission is a unique puzzle.

Reinsurance underwriting requires an awareness of the limitations of our tools. There are plenty of examples in reinsurance companies where underwriting judgment was suspended and the rating models dictated some very bad decisions. We still need underwriting judgment because human brains can synthesize much more data than a computer model.

An effective underwriter uses the rating tools and then does a serious “reasonability check” that involves a good measure of non-linear and creative thinking. This can look quite odd, spending hours quantifying insurance portfolios and then questioning the results. Predicting the future for “fun and profit” is indeed an odd profession.


