## Frequency vs. Likelihood

Much of the risk management literature talks about identifying the frequency and severity of risks.

There are several issues with this suggestion. It is a fairly confused way of saying that there needs to be a probabilistic measure of the risk.

However, most classes of risks – things like market, credit, natural catastrophe, legal, or data security – will not have a single pair of numbers that represents them. Instead they will have a series of pairs of probabilities and loss amounts.

The word frequency adds another confusion. Frequency refers to observations; it is a backwards-looking view of the risk. What is really needed is likelihood – a forward-looking probability.

For some risks, all we will ever have is an ever-changing frequency.

So what do we do? With some data in hand and a view of the underlying nature of the risk, we form a likelihood assumption. With that assumption, we can then develop an actual gain and loss distribution that gives our best picture of the risk reward trade-offs.
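As a sketch of that last step, the snippet below combines a likelihood assumption with a severity assumption to simulate a loss distribution. The specific numbers – a 5% likelihood and an exponential severity with a $1 million mean – are illustrative assumptions of mine, not figures from the post:

```python
import random

def simulate_losses(p_event, severity_mean, n_periods=10_000, seed=1):
    """Simulate a one-period loss distribution from a likelihood assumption."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_periods):
        # One Bernoulli trial per period: did a major loss occur?
        if rng.random() < p_event:
            # Severity of the loss, assumed exponential purely for illustration.
            losses.append(rng.expovariate(1.0 / severity_mean))
        else:
            losses.append(0.0)
    return losses

losses = simulate_losses(p_event=0.05, severity_mean=1_000_000)
loss_rate = sum(1 for x in losses if x > 0) / len(losses)
print(f"share of periods with a major loss: {loss_rate:.1%}")
```

Percentiles of the simulated losses then give the picture of the risk-reward trade-offs referred to above.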

For example, consider three sets of 20 observations of some phenomenon, each observation either a 0 or a 1.

In this example, the 1s represent the incidence of major loss experiences: the first set contains three 1s, the second two, and the third one. There are at least four ways that these observations might be interpreted.

- One analyst might say that the 60 observations average 2 ones per set of 20 (a 10% frequency), so that is what they will use to project the forward likelihood of this problem.
- Another analyst might say that they want to be sure that they account for the worst case, so they will focus on the first set of observations and use a 15% likelihood assumption.
- A third analyst will focus on the trend and make a likelihood assumption below 5%.
- The fourth analyst will say that there is just not enough consistent information to form a reliable likelihood assumption.

Then the next 20 observations come up all zeros. How do the four analysts update their likelihood assumptions?
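One formal way to answer that updating question – offered here as an illustration of mine, not as the author's prescription – is a Bayesian beta-binomial update, in which each new batch of observations shifts the estimated likelihood:

```python
def posterior_mean(alpha, beta, ones, zeros):
    """Posterior mean of the likelihood under a Beta(alpha, beta) prior
    after observing `ones` 1s and `zeros` 0s (beta-binomial conjugacy)."""
    return (alpha + ones) / (alpha + beta + ones + zeros)

# Start from a flat Beta(1, 1) prior.  The 60 observations above
# contained 6 ones in total (3 + 2 + 1).
before = posterior_mean(1, 1, ones=6, zeros=54)
# Then 20 more observations arrive, all zeros.
after = posterior_mean(1, 1, ones=6, zeros=74)
print(f"estimated likelihood: {before:.1%} before, {after:.1%} after")
```

Under this scheme the estimate drifts down with each all-zero batch but never snaps to any analyst's fixed number.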

In fact, this illustration was developed with random numbers generated from a binomial distribution with a 5% likelihood.
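That generating process can be reproduced in a few lines of Python. The seed is arbitrary, so the counts of 1s will generally differ from the original illustration:

```python
import random

rng = random.Random(2011)  # arbitrary seed; reproducible, but the draws
                           # will not match the original illustration

# Three sets of 20 yes/no observations, each a Bernoulli trial with a
# true 5% likelihood -- the same setup the illustration was built from.
sets = [[1 if rng.random() < 0.05 else 0 for _ in range(20)]
        for _ in range(3)]

for i, s in enumerate(sets, 1):
    print(f"set {i}: {sum(s)} ones (observed frequency {sum(s) / 20:.0%})")
```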

It is simple math to determine that the probabilities of the possible frequency observations from 20 trials with a likelihood of 5% are:

- 0 – 36%
- 1 – 38%
- 2 – 19%
- 3 – 6%
- 4 – 1%
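That table can be checked directly from the binomial probability mass function; a minimal sketch:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k events in n independent trials,
    each with probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of each frequency count in 20 trials at a 5% likelihood.
for k in range(5):
    print(f"{k} - {binom_pmf(k, 20, 0.05):.0%}")
```

The printed percentages round to the 36%, 38%, 19%, 6%, and 1% in the table above.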

To be responsible in setting your likelihood assumptions, you should be fully aware of the actual distributions of possibilities based upon the frequency observations that you have to work with. So the first set of observations (three 1s) was an outcome with a 6% probability, the second (two 1s) a 19% probability, and the third (one 1) a 38% probability.

That is when we know the actual likelihood; usually you do not. But you can build this sort of table for each possible assumption for likelihood.

Here we actually had 60 observations. The same sort of table can be built for the 60 trials, and for different assumptions of likelihood.
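A sketch of that table from the binomial pmf. The second loop also asks how probable the observed total of 6 ones in 60 trials would be under each analyst's assumed likelihood (5%, 10%, and 15%):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k events in n trials with probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of each total count of 1s in 60 trials at the true 5% likelihood.
for k in range(8):
    print(f"{k} - {binom_pmf(k, 60, 0.05):.0%}")

# The example's 60 observations contained 6 ones in all.  How probable is
# that outcome under each analyst's assumed likelihood?
for p in (0.05, 0.10, 0.15):
    print(f"assumed {p:.0%}: P(6 ones in 60 trials) = {binom_pmf(6, 60, p):.0%}")
```

Note that the observed total of 6 ones is most probable under the 10% assumption – which is exactly the first analyst's all-data average, even though the true likelihood here was 5%.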

This type of thinking will only make sense to the first analyst above. The other three will not be swayed. But for that first analyst, some more detailed reflection can help them to better understand that their assumptions of likelihood are just that: assumptions, not facts.



**Comment** – June 26, 2011 at 8:32 am

Nice clarification of a difficult issue, using a simple example to show the heart of the question.

Let’s do what humans have always done, as shaped by evolutionary pressure – and that is to use comparables and analogies (Bayes).

First, do we have any similar data from another venue?

(Example – female mortality is different from males, but male mortality can offer the shape of such curves.)

Second, can I find external experience to shape the thinking? (Example – Guess the proportion of red jelly beans in a jar? Notice there are five colors. Understand human nature and guess that the preparer lazily just combined five different bags…)

Third, anecdotes, folklore, all that captured “literary” wisdom. (Example – Lighter-than-air ships crossed the Atlantic many times without incident in the early 20th century. The actuaries would deem it perfectly safe. But tell your grandmother; she’d reply, “Wait! You have 7 million cubic feet of flammable hydrogen just a few yards above your head?! Crazy!”)

Finally, simulations. Set up rough models, let them run. Here we are looking for previously unrecognized linkages and correlations. (Examples – Bhopal had three supposedly independent safeties, but all three were linked by poorly trained and understaffed maintenance, so all three failed together. Likewise Fukushima. The foundation withstood the earthquake, and the tidal wall WOULD HAVE withstood the tsunami, except the earthquake ALSO lowered the ground level by a meter, so the wall was swamped.)

Know the models and assumptions which are necessarily implicit in any analysis. That may not prevent errors, but it shows you where to begin to look…