## Who are we kidding?

When we say that we are “measuring” the 1/200 risk of loss from our activities?
For most risks, we do not have even one observation of a full 200-year period.
What we have instead is an extrapolation, built on the assumption that some mathematical formula relates the 1/200-year loss to something that we do have confidence in.

Let’s look at some numbers. I am testing the idea that we might be able to know the 1/10 loss if we have 50 years of observations. Our process is to rank the 50 annual losses and take the 45th smallest (equivalently, the 5th largest), so that 5 of the 50 years show a loss at least that severe. We find that loss is \$10 million.

Now if we build a model where the annual probability of losing \$10 million or more is 10%, and we run that model for 100 simulated 50-year periods, we get a histogram like this:

So in this test, with an underlying annual probability of 10%, the frequency of 50-year periods with exactly 5 observations of losses of \$10 million or larger is only 22%!
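The test above can be reproduced with a small simulation. This is a minimal sketch, not the author's actual model: it assumes each year independently produces a loss of \$10 million or more with probability `p`, simulates many 50-year periods, and counts how often exactly 5 such losses appear.

```python
import random

def count_exceedances(p, years=50):
    """Count the years in one simulated period with a loss at or above the threshold."""
    return sum(1 for _ in range(years) if random.random() < p)

def freq_exactly_5(p, trials=20_000, years=50):
    """Fraction of simulated periods containing exactly 5 threshold losses."""
    hits = sum(1 for _ in range(trials) if count_exceedances(p, years) == 5)
    return hits / trials

random.seed(0)
print(f"{freq_exactly_5(0.10):.1%}")  # hovers near the exact binomial value of ~18.5%
```

With only 100 runs, the estimate itself is quite noisy; a larger simulation settles near the exact binomial answer of about 18.5%.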

When I repeat the test with a frequency assumption of 15% or of 6.67%, exactly 5 observations turn up with a frequency of about 10% in each case.
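The repeat test can also be done exactly, with no simulation, via the binomial probability mass function (a sketch, again assuming independent years and a fixed annual exceedance probability):

```python
from math import comb

def prob_exactly_k(p, k=5, years=50):
    """Binomial probability of exactly k threshold losses in `years` years."""
    return comb(years, k) * p**k * (1 - p)**(years - k)

# The three frequency assumptions discussed: 10%, 15%, and 6.67% (= 1/15)
for p in (0.10, 0.15, 1 / 15):
    print(f"p = {p:.2%}: P(exactly 5 in 50 years) = {prob_exactly_k(p):.1%}")
```

This gives roughly 18.5% for p = 10%, 10.7% for p = 15%, and 12.5% for p = 6.67%: all three assumptions make the observed 5-in-50 outcome unremarkable.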

So given 50 years of observations and 5 occurrences, it seems quite possible that the underlying likelihood is 50% higher or one-third lower than the 10% that the data suggests.

Try to imagine the math of getting a 1/200 loss pick correct.  What might the confidence interval be around that number?
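To put a number on it, the same binomial arithmetic can be applied to a 1/200-year event (again a sketch assuming independent years): with a true annual probability of 0.5%, a 50-year record usually contains no observation of the loss at all.

```python
from math import comb

def prob_k_events(p, k, years=50):
    """Binomial probability of exactly k exceedances in `years` years."""
    return comb(years, k) * p**k * (1 - p)**(years - k)

p = 1 / 200  # a true 1-in-200-year annual exceedance probability
for k in range(3):
    print(f"P({k} exceedances in 50 years) = {prob_k_events(p, k):.1%}")
```

Roughly 78% of 50-year records show the event zero times and another 20% show it exactly once, so the historical record alone can pin down almost nothing about the 1/200 loss.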

Who are we kidding?