There is insufficient evidence to support a determination of past actual frequency of remote events!

Go figure.  The Institute and Faculty of Actuaries seems to have just discovered that humans are involved in risk modeling.  Upon noticing that, they immediately issued the following warning:

RISK ALERT
MODEL MANIPULATION

KEY MESSAGE
There are a number of risks associated with the use of models:
members must exercise care when using models to ensure that the rationale for selection of a particular model is sound and, in applying that model, it is not inappropriately used solely to provide evidence to support predetermined or preferred outcomes.

They warn particularly about the deliberate manipulation of models to get the desired answer.

There are two broad reasons why a human might select a model.  In both cases, they select the model to get the answer that they want.

  1. The human might have an opinion about the correct outcome from the model.  An outcome that does not concur with their opinion is considered to be WRONG and must be corrected.  See the RISKVIEWS discussion of Plural Rationality for the range of different opinions that are likely.  Humans actually do believe quite a wide range of different things.  And if we restrict the management of insurance organizations to people with a narrow range of beliefs, that will have results similar to restricting planting to a single strain of wheat.  Cheap bread most years and none in some!
  2. The human doesn’t care what the right answer might be.  They want a particular range of results to support other business objectives.  Usually these folks believe that the concern of the model – a very remote loss – is not important to the management of the business.  Note that most people work in the insurance business for 45 years or less.  So the idea that they should be concerned with a 1 in 200 year loss seems absurd to many.  If they apply a little statistics knowledge, they might say that there is an 80% chance that there will not be a 1 in 200 year loss during their career (see the sketch after this list).  Their Borel point is probably closer to a 1 in 20 level, where there is a 90% chance that such a loss will happen at least once in their career.

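As a quick check on that arithmetic, here is a minimal sketch, assuming independent years and a flat 45 year career as in the point above (the function name is just illustrative):

```python
# Probability of seeing (or not seeing) a "1 in N year" loss over a career,
# assuming independent years with a constant annual probability of 1/N.

def prob_no_loss(return_period_years: float, career_years: int = 45) -> float:
    """Chance that a 1-in-N-year loss never occurs during the career."""
    return (1.0 - 1.0 / return_period_years) ** career_years

# 1 in 200 year loss: roughly an 80% chance of never seeing one in 45 years.
print(f"No 1-in-200 loss in 45 years: {prob_no_loss(200):.1%}")              # ~79.8%

# 1 in 20 year loss: roughly a 90% chance of seeing at least one in 45 years.
print(f"At least one 1-in-20 loss in 45 years: {1 - prob_no_loss(20):.1%}")  # ~90.1%
```
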
They also suggest that there needs to be “evidence to support outcomes”.  RISKVIEWS has always wondered what evidence might support prediction of remote outcomes in the future.  For the most part, there is insufficient evidence to support a determination of past actual frequency of the same sort of remote events.  And over time things change, so past frequency isn’t always indicative of future likelihood, even if the past frequency were known.

One insurer, where management was skeptical of the whole idea of “principles based” assessment of remote losses, decided to use a two-pronged approach.  For their risk management, they focused on 95th percentile, 1 in 20 year losses.  There was some hope that they could validate these values through observed data.  For their capital management, they used the rating agency standard for their desired rating level.

Banks, with their VaR approach, have gone to an extreme in this regard.  Their loss horizon is in days and their calibration period is less than two years.  Validation is easy.  But this misses the possibility of extremes.  Banks only managed risks that had recently happened and ignored the possibility that things could get much worse, even though most of the risks that they were measuring went through multi-year cycles of boom and bust.
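
To make that concrete, here is a minimal sketch of the short-window historical VaR described above, under purely illustrative assumptions (a 500 trading day window, a 99% level and simulated returns, not any actual bank’s model).  The estimate can only reflect losses that occurred inside the calibration window, so two calm years produce a reassuringly small number.

```python
import numpy as np

rng = np.random.default_rng(0)

def historical_var(returns: np.ndarray, level: float = 0.99) -> float:
    """1-day VaR by historical simulation: the loss exceeded only (1 - level) of the time."""
    return -np.quantile(returns, 1.0 - level)

# Illustrative daily returns: ~2 calm years (500 trading days) with no crisis in sample.
calm_window = rng.normal(loc=0.0, scale=0.01, size=500)

# The same calm data plus a handful of crisis days from an earlier cycle of boom and bust.
crisis_days = rng.normal(loc=-0.05, scale=0.03, size=10)
longer_history = np.concatenate([crisis_days, calm_window])

print(f"99% 1-day VaR from 2 calm years only: {historical_var(calm_window):.2%}")
print(f"99% 1-day VaR including crisis days : {historical_var(longer_history):.2%}")
```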

At one time, banks usually extrapolated with the normal distribution to determine potential extreme losses.  The problem is Fat Tails.  Many, possibly all, real-world risks have remote losses that are larger than what the normal distribution predicts.  Perhaps we should generalize and say that the normal distribution might be fine for predicting things that happen with high frequency and are near the mean in value, but some degree of Fat Tails must be recognized to come closer to the potential for extreme losses.
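
A hedged illustration of that point: if a normal distribution and a fat-tailed Student-t are calibrated to the same standard deviation, the t still produces a noticeably larger 1 in 200 loss.  The choice of 5 degrees of freedom below is purely illustrative.

```python
from scipy import stats

sigma = 1.0      # both distributions calibrated to the same standard deviation
p = 1.0 / 200.0  # 1 in 200 tail probability
df = 5           # illustrative degrees of freedom for the fat-tailed alternative

# Normal 1 in 200 loss (lower-tail quantile, expressed as a positive loss).
normal_loss = -stats.norm.ppf(p, loc=0.0, scale=sigma)

# Student-t rescaled so its variance is also sigma^2: Var(t_df) = df / (df - 2).
t_scale = sigma * ((df - 2) / df) ** 0.5
t_loss = -stats.t.ppf(p, df, loc=0.0, scale=t_scale)

print(f"1 in 200 loss, normal   : {normal_loss:.2f}")  # about 2.58 standard deviations
print(f"1 in 200 loss, Student-t: {t_loss:.2f}")       # about 3.1, noticeably larger
```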

For a discussion of Fat Tails and a metric for assessing them (the Coefficient of Risk), try this: Fatness of Tails in Risk Models.

What is needed to make risk measurement effective is standards for results, not moralizing about process.  The standards for results need to be stated in terms of some Tail Fatness metric such as the Coefficient of Risk.  Then modelers can be challenged to either follow the standards or justify their deviations.  Can they come up with a reasonable argument for why their company’s risk has thinner tails than the standard?
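
As a sketch of how such a standard for results might be applied, the snippet below checks a modeled loss distribution against a minimum tail fatness ratio.  The definition used here, an extreme quantile divided by the standard deviation, is only an assumption for illustration; the Fatness of Tails post linked above defines the actual Coefficient of Risk.

```python
import numpy as np

def tail_fatness_ratio(losses: np.ndarray, level: float = 0.999) -> float:
    """Assumed illustrative metric: an extreme-quantile loss divided by the standard deviation."""
    return np.quantile(losses, level) / np.std(losses)

def meets_standard(losses: np.ndarray, minimum_ratio: float) -> bool:
    """The challenge from the post: meet the standard or justify a thinner tail."""
    return tail_fatness_ratio(losses) >= minimum_ratio

# Example: a model's simulated losses checked against a hypothetical published minimum.
# For reference, a normal distribution scores about 3.09 on this assumed metric.
rng = np.random.default_rng(1)
model_losses = rng.standard_t(df=4, size=100_000)  # illustrative fat-tailed model output
print(f"Tail fatness ratio: {tail_fatness_ratio(model_losses):.2f}")
print(f"Meets a hypothetical minimum of 3.5: {meets_standard(model_losses, 3.5)}")
```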

 

 


One Comment on “There is insufficient evidence to support a determination of past actual frequency of remote events!”

  1. Robert Arvanitis Says:

    Willful manipulation of models is a fact of human nature, and requires psychological methods to detect and counter.
    ————
    Heedless abuse of models is a greater problem. When everyone trades to the same model, we get failures like Hurricane Andrew in insurance (silos of the Underwriting and Investment exhibit) and banking – 2008 subprime crisis (blind use of the Li copula).
    ————
    More basic: I disagree we lack sufficient data for remote risks.
    First, there are no “black swans,” only unrecognized correlations. (A bold assertion; perhaps it will stimulate a valuable effort to prove me wrong!)
    Second, there are always analogies, metaphors and shifts in perspective to employ.
    Example – coffee rust is a danger to certain underdeveloped nations, yet little is known about the rust etiology. Perhaps we know little about that particular pathogen, but we have 4 billion years of nature’s own simulation games, offense and defense, on which to pattern models. And that is just one of the four or five modes of thinking by which we can hem in and measure the risk, in order to set a price. Never mind that we can further integrate hedge and financing to protect poorer nations and support their development.

