Just Stop IT! Right Now. And Don’t Do IT again.
IT is a medieval, or possibly pre-medieval, practice for evaluating risks: assigning a single frequency and severity pair to each risk and calling that a risk evaluation.
In the mid-1700s Daniel Bernoulli wrote:
EVER SINCE mathematicians first began to study the measurement of risk there has been general agreement on the following proposition: Expected values are computed by multiplying each possible gain by the number of ways in which it can occur, and then dividing the sum of these products by the total number of possible cases where, in this theory, the consideration of cases which are all of the same probability is insisted upon. If this rule be accepted, what remains to be done within the framework of this theory amounts to the enumeration of all alternatives, their breakdown into equi-probable cases and, finally, their insertion into corresponding classifications.
Many modern writers attribute this process to Bernoulli, but it is the very first sentence of his “Exposition of a New Theory for Measuring Risk,” published in 1738. He presents the idea as so common in his time that he does not cite an original author. His purpose is not to prove that this basic idea is correct, but to propose a new methodology for implementing it.
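Bernoulli's rule is easy to state in code. A minimal sketch (a hypothetical worked example, not taken from his paper): weight each possible gain by the number of equi-probable cases in which it occurs, sum, and divide by the total number of cases.

```python
from fractions import Fraction

def expected_value(gains_and_counts):
    """Bernoulli's rule: sum of (gain x number of equi-probable cases
    in which it occurs), divided by the total number of cases.

    gains_and_counts: list of (gain, number_of_cases) pairs.
    """
    total_cases = sum(count for _, count in gains_and_counts)
    weighted = sum(Fraction(gain) * count for gain, count in gains_and_counts)
    return weighted / total_cases

# Illustrative case: a fair six-sided die that pays its face value.
print(expected_value([(face, 1) for face in range(1, 7)]))  # 7/2
```

Using exact fractions keeps the arithmetic faithful to the "enumeration of equi-probable cases" framing rather than introducing floating-point probabilities.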
It is hard to say how the single-pair idea (i.e., that a risk can be represented by a single frequency/severity pair of values) has crept into basic modern risk assessment practice, but it has. And it is firmly established. Yet in 1738, Bernoulli already knew that each risk has many possible gain amounts. NOT A SINGLE PAIR.
But let me ask you this…
How did you pick the particular pair of values that you use to characterize any of your risks?
You see, as far as RISKVIEWS can tell, Bernoulli was correct – each and every risk has an infinite number of such pairs that are valid. So how did you pick the one that you use?
Take, for example, the risk of a fire. There are an infinite number of possible fires that could happen. Some are more likely and some less likely. Some would do lots of damage, some only a little. The likelihood of a fire is not actually always related to the damage: some highly unlikely fires might be very small and cause little damage. Hopefully, you do not face the situation of a likely, high-damage fire. But for any single risk, all by itself, you could make up a frequency/severity heat map with many points on the chart.
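That point can be shown with a short simulation. This is a sketch under invented assumptions: one hypothetical fire risk whose loss severity is lognormal (most fires small, a few very large). The parameters are illustrative, not calibrated to anything real.

```python
import random

random.seed(1)

# Hypothetical severity model for ONE fire risk: lognormal losses,
# so most simulated fires are small and a few are very large.
# mu and sigma are illustrative assumptions.
losses = [random.lognormvariate(10, 1.5) for _ in range(100_000)]

# The same single risk yields a valid (likelihood, severity) pair at
# every damage threshold -- each row below is a point on a heat map.
points = {}
for threshold in (10_000, 50_000, 250_000, 1_000_000):
    points[threshold] = sum(l > threshold for l in losses) / len(losses)
    print(f"P(loss > {threshold:>9,}) = {points[threshold]:.3f}")
```

Plotting those pairs for this one risk already fills a chart with points; the one-point-per-risk convention has to throw all but one of them away.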
So RISKVIEWS asks again, how do you pick which point from that chart to be the one single point for your main risk report and heat map?
And those heat maps that you are so fond of…
Do you realize that the points on the heat map are not rationally comparable? That is because there is no single criterion that most risk managers use to pick the pairs they plot. For values to be comparable, they need to have been selected by applying the exact same criterion. But usually the actual criteria for choosing the pairs are not clearly articulated.
So here you stand: you have a risk register populated with these bogus statistics. What can you do to move toward a more rational view of your risks?
You can start to reveal to people that you are aware that your risks are NOT fully measured by that single statistic. Try revealing some additional statistics about each risk on your risk register:
- The likelihood of zero (or an inconsequentially low) loss from each risk in any one year
- The likelihood of a loss of 1% of earnings or more
- The expected loss at a 1% likelihood (the 1-in-100-year expected loss)
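All three statistics can be estimated from a simulated loss distribution. A minimal sketch, using an invented annual-loss model and a hypothetical earnings figure, and reading "expected loss at a 1% likelihood" as the 99th-percentile annual loss:

```python
import random

random.seed(42)
EARNINGS = 1_000_000  # hypothetical annual earnings, for the 1%-of-earnings test

# Hypothetical annual-loss model for one risk: most years have no loss;
# the rest draw a lognormal severity. Purely illustrative assumptions.
def simulate_annual_losses(n_years=100_000, p_event=0.3):
    return [random.lognormvariate(9, 1.2) if random.random() < p_event else 0.0
            for _ in range(n_years)]

losses = sorted(simulate_annual_losses())

# 1. Likelihood of zero loss in any one year
p_zero = sum(l == 0.0 for l in losses) / len(losses)

# 2. Likelihood of losing 1% of earnings or more
p_material = sum(l >= 0.01 * EARNINGS for l in losses) / len(losses)

# 3. Expected loss at a 1% likelihood (the 1-in-100-year loss),
#    taken here as the 99th percentile of the sorted annual losses
loss_1_in_100 = losses[int(0.99 * len(losses))]

print(f"P(no loss)          = {p_zero:.2f}")
print(f"P(loss >= 1% earn.) = {p_material:.3f}")
print(f"1-in-100-year loss  = {loss_1_in_100:,.0f}")
```

Because every risk on the register is summarized by the same three statistics, computed by the same rule, the resulting values can be compared and plotted against one another.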
Try plotting those values to show how the risks on your risk register compare. Create a heat map that plots the likelihood of zero loss against the expected loss at a 1% likelihood.
Those values are then comparable.
So stop IT. Stop misinforming everyone about your risks. Stop using frequency severity pairs to represent your risks.