Much risk management literature talks about identifying the frequency and severity of risks.

There are several issues with this suggestion. It is a fairly confused way of saying that there needs to be a probabilistic measure of the risk.

However, most classes of risk – things like market, credit, natural catastrophe, legal, or data security – will not have a single pair of numbers that represents them. Instead they will have a series of pairs of probabilities and loss amounts.

The word frequency adds another confusion. Frequency refers to observations; it is a backward-looking view of the risk. What is really needed is likelihood – a forward-looking probability.

For some risks, all we will ever have is an ever-changing frequency.

So what do we do? With some data in hand and a view of the underlying nature of the risk, we form a likelihood assumption. With that assumption, we can then develop an actual gain and loss distribution that gives our best picture of the risk reward trade-offs.

For example, consider three sets of 20 observations each of some phenomenon (the individual observations are not reproduced here; the three sets contain 3, 2, and 1 ones respectively).

In this example, the 1s represent the incidence of major loss experiences. There are at least four ways that these observations might be interpreted.

- One analyst might note that the three sets average 2 losses per 20 observations (6 losses in all 60, or a 10% frequency), so that is what they will use to project the forward likelihood of this problem.
- Another analyst might say that they want to be sure that they account for the worst case, so they will focus on the first set of observations and use a 15% likelihood assumption.
- A third analyst will focus on the trend and make a likelihood assumption below 5%.
- The fourth analyst will say that there is just not enough consistent information to form a reliable likelihood assumption.

Then the next 20 observations come up all zeros. How do the four analysts update their likelihood assumptions?

In fact, this illustration was developed with random numbers generated from a binomial distribution with a 5% likelihood.
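As a sketch, observations like these can be generated by drawing Bernoulli trials at the stated 5% likelihood. The seed below is arbitrary, so the draws will not match the original illustration:

```python
import random

random.seed(42)  # arbitrary seed for repeatability; the original illustration used different draws

LIKELIHOOD = 0.05  # the true per-trial probability behind the illustration

# Three sets of 20 binary observations; 1 marks a major loss experience
sets = [[1 if random.random() < LIKELIHOOD else 0 for _ in range(20)]
        for _ in range(3)]

for i, s in enumerate(sets, 1):
    print(f"Set {i}: {s}  ({sum(s)} losses, observed frequency {sum(s)/20:.0%})")
```

Running this repeatedly with different seeds shows just how much the observed frequency bounces around even though the underlying likelihood never changes.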

The math to determine the probability of each frequency observation from 20 trials with a 5% likelihood is simple:

- 0 – 36%
- 1 – 38%
- 2 – 19%
- 3 – 6%
- 4 – 1%
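These figures come straight from the binomial probability mass function; a minimal sketch using only the Python standard library:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k losses in n trials with per-trial likelihood p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of observing k major losses in 20 trials at a 5% likelihood
for k in range(5):
    print(f"{k} - {binom_pmf(k, 20, 0.05):.0%}")
# prints 36%, 38%, 19%, 6%, 1% for k = 0..4, matching the list above
```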

To be responsible in setting your likelihood assumptions, you should be fully aware of the actual distributions of possibilities based upon the frequency observations that you have to work with. So under the true 5% likelihood, the first set of observations (3 losses) was a 6% probability outcome, the second (2 losses) a 19% outcome, and the third (1 loss) a 38% outcome.

That is when the actual likelihood is known. Usually it is not. But you can build this sort of table for each candidate likelihood assumption.

Here we actually had 60 observations. The same sort of table can be built for the 60 trials under different assumptions of likelihood:
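One way to build such a table is to evaluate the same binomial formula over 60 trials for each candidate assumption. The assumptions chosen below (5%, 10%, 15%) are illustrative, picked to match the analysts' positions above:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k losses in n trials with per-trial likelihood p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

N = 60  # total observations
assumptions = [0.05, 0.10, 0.15]  # illustrative candidate likelihoods

print("losses  " + "   ".join(f"p={p:.0%}" for p in assumptions))
for k in range(10):
    # probability of observing exactly k losses out of 60 under each assumption
    row = "   ".join(f"{binom_pmf(k, N, p):4.0%}" for p in assumptions)
    print(f"{k:>6}  {row}")
```

For instance, the 6 losses actually observed have roughly a 17% probability under a 10% assumption but only about a 5% probability under the true 5% likelihood, which is why frequency observations alone can so easily mislead.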

This type of thinking will only make sense to the first analyst above. The other three will not be swayed. But for that first analyst, some more detailed reflection can help them to better understand that their assumptions of likelihood are just that – assumptions, not facts.