If 200 insurance companies are meeting Solvency II capital requirements, should we expect that one of them will fail each year?
Do we really have any idea of the answer to that question?
Or can we admit that calculating a 1/200 capital requirement is not really the same as knowing how much capital it takes to prevent failures at that rate?
Calculating a 1/200 capital requirement is about creating capital requirements that are related to the level of risk of the insurer. Calculating a 1/200 capital requirement is about trying to make the relationship of the capital level to the risk level consistent for different insurers with all different types of risk. Calculating a 1/200 capital requirement is about having a regulatory requirement that is reasonably close to the actual level of capital held by insurers presently.
It actually cannot be about knowing the actual likelihood of very large losses, because it is unlikely that we will ever know with any degree of certainty what the actual size of the 1/200 losses might be.
We agree on methods for extrapolating losses from observed frequency levels. So perhaps we know, with some confidence, what a 1/20 loss might be, and we use “scientific” methods to extrapolate to a 1/200 value. These scientific assumptions are about the relationship between the 1/20 loss that we might know with some confidence and the 1/200 loss. Instead of just making an assumption about the relationship between the 1/20 and the 1/200 loss, we make an intermediate assumption and let that assumption drive the ultimate answer. That intermediate assumption is usually an assumption about the statistical relationship between frequency and severity. By making that complicated assumption and letting it drive the ultimate values, we are able to obscure our lack of real knowledge about the likelihood of extreme values. By making complicated assumptions about something that we do not know, we also make sure that the discussion stays out of the hands of folks who might not fully understand the mathematics.
For the simplest such assumption, i.e. that of a Gaussian or Normal Distribution, the relationships are something like this:
- For a risk with a coefficient of variation of 70% (i.e. the mean = 10/7 of the standard deviation), the 1/200 loss is approximately 530% of the 1/20 loss
- For a risk with a coefficient of variation of 100% (i.e. the mean = the standard deviation), the 1/200 loss is approximately 250% of the 1/20 loss
- For a risk with a coefficient of variation of 150% (i.e. the mean = 2/3 of the standard deviation), the 1/200 loss is approximately 200% of the 1/20 loss
- For a risk with a coefficient of variation of 200% (i.e. the mean = 1/2 of the standard deviation), the 1/200 loss is approximately 180% of the 1/20 loss
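The ratios above can be reproduced directly from the Normal quantiles. A minimal sketch, where the loss at probability p is taken to be z_p·σ − μ, i.e. the shortfall when the return falls z_p standard deviations below its mean:

```python
from statistics import NormalDist

# z-scores for the 1-in-20 and 1-in-200 tail of a Normal distribution
z20 = NormalDist().inv_cdf(1 - 1 / 20)    # ~1.645
z200 = NormalDist().inv_cdf(1 - 1 / 200)  # ~2.576

def loss_ratio(cv):
    """Ratio of the 1/200 loss to the 1/20 loss when returns are
    Normal with mean mu and standard deviation sigma = cv * mu.
    The loss at probability p is z_p * sigma - mu."""
    mu = 1.0
    sigma = cv * mu
    return (z200 * sigma - mu) / (z20 * sigma - mu)

for cv in (0.70, 1.00, 1.50, 2.00):
    print(f"CV {cv:.0%}: 1/200 loss is {loss_ratio(cv):.0%} of the 1/20 loss")
# prints roughly 530%, 244%, 195%, 181% -- matching the list above
```

Note how sensitive the ratio is at low coefficients of variation: between CV = 100% and CV = 70% the multiple more than doubles.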
The graph above shows the backward-looking standard deviation/mean ratio of S&P 500 annual returns for each of the previous 21 twenty-year periods. So based upon that data, the 1/200 loss might be somewhere between 530% and 180% of the worst result in a 20-year period.

And in this case, we base this upon the assumption that the returns are normally distributed; we simply re-estimated the parameters as the observations changed.

What this suggests is that the fitted distribution is not at all stable when estimated from only 20 observations. So using this approach to extrapolate losses at more remote frequencies looks like it will have severe issues with parameter risk.
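That instability can be illustrated with a small simulation. This is only a sketch, with hypothetical numbers (a "true" Normal return distribution with mean 8% and standard deviation 16%, not figures from the text), re-fitting the parameters over and over to samples of 20 annual observations:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
z20 = NormalDist().inv_cdf(1 - 1 / 20)    # ~1.645
z200 = NormalDist().inv_cdf(1 - 1 / 200)  # ~2.576

# Hypothetical "true" distribution of annual returns (assumed for illustration)
TRUE_MEAN, TRUE_SD = 0.08, 0.16

ratios = []
for _ in range(10_000):
    # Fit mean and standard deviation to just 20 observations,
    # the way a backward-looking 20-year window would.
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(20)]
    mu, sigma = mean(sample), stdev(sample)
    loss_20 = z20 * sigma - mu     # fitted 1/20 loss
    loss_200 = z200 * sigma - mu   # fitted 1/200 loss
    if loss_20 > 0:                # skip rare degenerate fits
        ratios.append(loss_200 / loss_20)

ratios.sort()
lo, hi = ratios[len(ratios) // 20], ratios[-(len(ratios) // 20)]
print(f"5th-95th percentile of fitted 1/200-to-1/20 ratios: {lo:.0%} to {hi:.0%}")
```

Every sample here comes from one and the same distribution, yet the fitted 1/200-to-1/20 multiple wanders across a wide band; that wandering is the parameter risk described above.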
You can look at every single sub-model and find that there is huge parameter risk.
So the conclusion should be that the 1/200 standard is a convention, rather than a claim that such a calculation might be reliable.