There are two ways to assess risk: quantitative and qualitative. But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.
In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not. The difference is as simple as that. The result of a quantitative assessment would be a number such as $53 million. The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.
But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC. The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy. Well, that just will not work if you have four risks that total $400 million and three others that are two "very riskys" and one "not so risky". How much capital is enough for two "very riskys"? Perhaps you need a qualitative amount of surplus to provide for that, something like "a good amount".
RISKVIEWS believes that when the NAIC says "Quantitative" and "Qualitative" it means to describe two approaches to developing a quantity. For ease, we will call these two approaches Q1 and Q2.
The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company's capital standard provides for. It is interesting to RISKVIEWS that very few participants in, or observers of, this risk quantification regularly recognize that the process has a major step that is much less quantitative and scientific than the others.
The Q1 approach starts and ends with numbers and has mathematical steps in between. But the most significant step in the process is largely judgmental. So at its heart, the “quantitative” approach is “qualitative”. That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points. In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria. But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.
These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely, if ever, any data.
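To make the model-choice point concrete, here is a minimal sketch (not from the NAIC material; the fitted values are hypothetical) of how two models that agree on the middle of the distribution can disagree sharply in the tail:

```python
from statistics import NormalDist
import math

# Hypothetical central-data summary: both candidate models are
# calibrated to the same observed mean and standard deviation.
m, s = 1.0, 0.5
z = NormalDist().inv_cdf(0.999)  # standard normal 1-in-1000 point, ~3.09

# Model choice 1: normal distribution
q_normal = m + z * s

# Model choice 2: lognormal matched to the same mean and sd
sig2 = math.log(1 + (s / m) ** 2)
mu = math.log(m) - sig2 / 2
q_lognormal = math.exp(mu + math.sqrt(sig2) * z)

print(round(q_normal, 2), round(q_lognormal, 2))  # ~2.55 vs ~3.85
```

Both models would look equally good against the observed center of the data; the subjective choice between them moves the 1-in-1000 loss by roughly 50%.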
There are only a few broad possibilities for this subjective decision…
- Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
- Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average. Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
- Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.
The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible. Phenomena that fall into this category are usually not a concern for risk analysis; by definition, they are never subject to any contagion.
The second category, Moderate, is appropriate for moderate sized aggregations of large loss events. Within this class, there are two possibilities: Low or no contagion and moderate to high contagion. The math is much simpler if no contagion is assumed.
But unfortunately, for risks that include any significant amount of human choice, contagion has been observed. And this contagion has been variable and unpredictable. Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum. When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend. This process is called a "bubble". When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.
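One simple way to see why contagion wrecks the "simpler math" is to treat it as correlation between individual losses. This is a sketch with made-up figures, not a calibration:

```python
import math

def portfolio_sd(n, unit_sd, rho):
    # Standard deviation of a sum of n identically distributed losses
    # with pairwise correlation rho:
    #   Var = n * sd^2 + n * (n - 1) * rho * sd^2
    return math.sqrt(n * unit_sd ** 2 + n * (n - 1) * rho * unit_sd ** 2)

# 10,000 small independent losses: diversification works (the Benign case)
independent = portfolio_sd(10_000, 1.0, 0.0)   # exactly 100.0

# The same 10,000 losses with modest contagion (rho = 0.1):
# the spread of the total is ~30x larger
contagious = portfolio_sd(10_000, 1.0, 0.1)    # ~3164
```

With zero correlation the portfolio spread grows like the square root of the number of risks; with even a small positive correlation it grows almost linearly, which is why an unexamined zero-contagion assumption understates risk so badly.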
The modelers who wanted to use the zero-contagion models call this "Fat Tails". It is seen as an unusual model only because it was so common to use the zero-contagion model with its simpler math.
RISKVIEWS suggests that when communicating that the Moderate model is the approach being used, the degree of contagion assumed should be specified. An assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that involve humans, and that it therefore seriously understates potential risk.
The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent. This applies, for example, to insurance losses due to major earthquakes. And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion. The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer's specific set of coverages.
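A toy sketch of that two-step structure (all parameters are hypothetical illustrations, not taken from any real catastrophe model):

```python
import random

random.seed(42)  # reproducible toy run

def simulate_annual_losses(n_years=100_000):
    losses = []
    for _ in range(n_years):
        # Step 1: does an extreme event occur this year? (rare: ~1-in-50)
        if random.random() < 0.02:
            # Step 2: event-specific aggregate loss for this insurer's
            # particular set of coverages (heavy-tailed severity)
            losses.append(random.lognormvariate(0.0, 1.0))
        else:
            losses.append(0.0)
    return losses

losses = simulate_annual_losses()
# Most simulated years show no loss at all (Benign-looking), but the
# occasional event year carries essentially all of the risk.
```

The point of the sketch is the shape, not the numbers: a long run of quiet years punctuated by rare, highly contagious loss years.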
It just so happens that in a Moderate model, the 1-in-1000-year loss is about 3 standard deviations worse than the mean. So if we express the 1-in-1000-year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model.
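The "about 3 standard deviations" figure can be checked directly, and the same calculation shows where a fatter-tailed model would land on such a scale (a sketch; the lognormal parameter below is an arbitrary illustration):

```python
from statistics import NormalDist
import math

# Normal ("Moderate") model: the 1-in-1000 loss in standard-deviation units
z = NormalDist().inv_cdf(0.999)   # ~3.09 sigma above the mean

# A lognormal with log-sd 1.0: its own 1-in-1000 loss, measured in its
# own standard deviations, sits far further out on the same scale
s = 1.0
mean = math.exp(s * s / 2)
sd = math.sqrt(math.exp(s * s) - 1) * math.exp(s * s / 2)
multiple = (math.exp(s * z) - mean) / sd   # ~9.4 sigma
```

So "3 sigma" and "9 sigma" become two points on the kind of simple riskiness scale described here, each directly tied to a specific model assumption.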
So in the end, the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation, where standard deviation is a measure of the average spread of the observed data. This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale. There will need to be an educational step, which can consist largely of placing existing models on the scale. People are quite used to working with the Richter scale for earthquakes. This is nothing more than a similar scale for risks. But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed-upon assumptions.
* * * * * *
So now we turn to the "Qualitative" determination of the risk value. Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where, for some reason, we do not think that we know enough to actually know the standard deviation. Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero. So we cannot use the multiple-of-standard-deviation method discussed above. Or, to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.
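Mechanically, the Q2 approach then looks just like Q1 with a judgmental input. All of the numbers below are made up for illustration:

```python
def capital_estimate(expected_loss, sd, riskiness_multiple):
    # 1-in-1000 loss expressed as mean plus a chosen multiple of the
    # standard deviation. Under Q1 the sd comes from data; under Q2
    # it comes from judgment. Same arithmetic either way.
    return expected_loss + riskiness_multiple * sd

# A risk judged "Moderate" (a 3-sigma multiple), with a judgmental
# sd of $4m around an expected loss of $10m:
capital_estimate(10.0, 4.0, 3.0)  # -> 22.0, i.e. $22 million
```

The only difference between the two approaches is which of the inputs is estimated from history and which is estimated by judgment.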
* * * * * *
So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value. In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.
Not as much difference as one might have guessed!