Go figure. The Institute and Faculty of Actuaries seems to have just discovered that humans are involved in risk modeling. Upon noticing that, they immediately issued the following warning:
RISK ALERT
MODEL MANIPULATION
KEY MESSAGE
There are a number of risks associated with the use of models:
members must exercise care when using models to ensure that the rationale for selection of a particular model is sound and, in applying that model, it is not inappropriately used solely to provide evidence to support predetermined or preferred outcomes.
They warn particularly about the deliberate manipulation of models to get the desired answer.
There are two broad reasons why a human might select a model. In both cases, they select the model to get the answer that they want.
- The human might have an opinion about the correct outcome from the model. An outcome that does not concur with their opinion is considered to be WRONG and must be corrected. See the RISKVIEWS discussion of Plural Rationality for the range of different opinions that are likely. Humans actually do believe quite a wide range of different things. And if we restrict the management of insurance organizations to people with a narrow range of beliefs, the result will be similar to restricting planting to a single strain of wheat: cheap bread most years and none in some!
- The human doesn’t care what the right answer might be. They want a particular range of results to support other business objectives. Usually these folks believe that the concern of the model, a very remote loss, is not important to the management of the business. Note that most people work in the insurance business for 45 years or less, so the idea that they should be concerned with a 1 in 200 year loss seems absurd to many. If they apply a little statistics knowledge, they might say that there is an 80% chance that there will not be a 1 in 200 year loss during their career. Their Borel point is probably closer to the 1 in 20 level, where there is a 90% chance that such a loss will happen at least once in their career (the arithmetic is sketched below).
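For the record, the arithmetic behind those two probabilities is just compounding annual survival probabilities over a 45 year career. A minimal check in Python, assuming independence between years:

```python
# Chance of NOT seeing a loss of annual probability p over a 45-year career,
# and of seeing it at least once (assumes years are independent).
career = 45

p_200 = 1 / 200
p_20 = 1 / 20

no_200 = (1 - p_200) ** career               # ~0.80: likely never see a 1-in-200 loss
at_least_one_20 = 1 - (1 - p_20) ** career   # ~0.90: likely to see a 1-in-20 loss

print(f"P(no 1-in-200 loss in a career) = {no_200:.0%}")
print(f"P(at least one 1-in-20 loss)    = {at_least_one_20:.0%}")
```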
They also suggest that there needs to be “evidence to support outcomes”. RISKVIEWS has always wondered what evidence could support a prediction of remote future outcomes. For the most part, there is not even enough evidence to determine the past actual frequency of the same sort of remote events. And things change over time, so past frequency is not necessarily indicative of future likelihood, even if the past frequency were known.
One insurer, where management was skeptical of the whole idea of “principles based” assessment of remote losses, decided to use a two-pronged approach. For its risk management, it focused on 95th percentile, 1 in 20 year losses; there was some hope that those values could be validated against observed data. For its capital management, it used the rating agency standard for its desired rating level.
Banks, with their VaR approach, have gone to an extreme in this regard. Their loss horizon is measured in days and their calibration period is less than two years, so validation is easy. But this misses the possibility of extremes. Banks managed only the risks that had recently materialized and ignored the possibility that things could get much worse, even though most of the risks they were measuring went through multi-year cycles of boom and bust.
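To make the point concrete, here is a minimal sketch of a historical VaR calculation of the kind described above. The function name, window length, and simulated data are illustrative assumptions, not any particular bank's method; the point is that a short calibration window makes the model blind to anything that happened before the window.

```python
import numpy as np

def historical_var(returns, confidence=0.99, window=500):
    """One-day historical VaR from the most recent `window` daily returns.

    A ~500 trading day window (about two years) means any loss that last
    occurred more than two years ago is invisible to the model.
    """
    recent = returns[-window:]                              # only the recent past is used
    return -np.percentile(recent, 100 * (1 - confidence))   # loss at the 1% tail

# Illustrative only: a calm recent history yields a small VaR even if the
# full history contains a crash that the window no longer "sees".
rng = np.random.default_rng(0)
calm = rng.normal(0.0005, 0.01, 500)   # two calm years of daily returns
print(f"99% 1-day VaR: {historical_var(calm):.2%}")
```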
At one time, banks usually extrapolated from the normal distribution to determine potential extreme losses. The problem is Fat Tails. Many, possibly all, real world risks have remote losses that are larger than what the normal distribution predicts. Perhaps we should generalize: the normal distribution might be fine for predicting high-frequency outcomes near the mean, but some degree of Fat Tails must be recognized to come closer to the potential for extreme losses.
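A small illustration of the gap, an assumption-laden sketch rather than a calibration: compare the 1 in 200 year loss implied by a normal distribution with that of a fat-tailed Student-t scaled to the same standard deviation.

```python
from scipy.stats import norm, t

# Both distributions are scaled to the same standard deviation, so the
# only difference at the 99.5th percentile comes from tail fatness.
# Parameters are illustrative, not calibrated to any real risk.
sigma = 1.0
df = 4                                     # low degrees of freedom => fat tails
t_scale = sigma * ((df - 2) / df) ** 0.5   # scale so the t has std dev sigma

q = 0.995                                  # 1-in-200 level
normal_loss = norm.ppf(q, scale=sigma)
fat_loss = t.ppf(q, df, scale=t_scale)
print(f"Normal 1-in-200 loss:    {normal_loss:.2f}")   # ~2.58
print(f"Student-t 1-in-200 loss: {fat_loss:.2f}")      # ~3.26, noticeably larger
```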
For a discussion of Fat Tails and a metric for assessing them (Coefficient of Risk), try this: Fatness of Tails in Risk Models.
What is needed to make risk measurement effective is standards for results, not moralizing about process. The standards for results need to be stated in terms of some Tail Fatness metric such as Coefficient of Risk. Then modelers can be challenged to either follow the standards or justify their deviations: can they come up with a reasonable argument for why their company's risk has thinner tails than the standard?
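The actual definition of Coefficient of Risk is in the linked post. As a stand-in, the hypothetical sketch below uses a simple tail-fatness ratio, an extreme quantile divided by a moderate one, to show how such a standard could be checked mechanically; all names and numbers are illustrative assumptions.

```python
import numpy as np

def tail_fatness_ratio(losses, body_q=0.95, tail_q=0.995):
    """Hypothetical tail-fatness metric: an extreme quantile divided by a
    moderate one. (The actual Coefficient of Risk is defined in the linked
    RISKVIEWS post; this ratio is only a stand-in for illustration.)
    """
    return np.quantile(losses, tail_q) / np.quantile(losses, body_q)

# A model could be challenged against a standard: e.g. if the standard
# ratio for the risk class were 1.6 and the model produced 1.2, the
# modeler would have to justify the thinner tail. Numbers are illustrative.
rng = np.random.default_rng(1)
model_losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
print(f"Model tail-fatness ratio: {tail_fatness_ratio(model_losses):.2f}")
```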