Comparing Eagles and Clocks
Original Title: Replacing Disparate Frequency/Severity Pairs. Quite catchy, eh?
But this message is important. Several times, RISKVIEWS has railed against the use of Frequency/Severity estimates as a basis for risk management, most recently in Just Stop IT! Right Now. And Don’t Do IT again.
But finally, someone asked…
What would you do instead to fix this?
And RISKVIEWS had to put up or shut up.
But the fix was not long in coming to mind. And not even slightly complicated or difficult.
Standard practice is to identify a High/Medium/Low (HML) rating for both Frequency and Severity for each risk. But RISKVIEWS does not know any way to compare a low frequency, high impact risk with a medium frequency, medium impact risk. Some people do compare the risks by rating frequency and severity on a numerical scale and then adding or multiplying the two values for each risk to get a “consistent” factor. However, that factor is frankly meaningless. Like multiplying the number of carrots by the number of cheese slices in your refrigerator.
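To see why, here is a minimal sketch (the 1–3 scale and the example risks are illustrative assumptions, not from the original post) showing how the multiplied scores collide:

```python
# Minimal sketch: ordinal H/M/L ratings mapped to 1-3 and multiplied.
# The scale and the example risks are illustrative assumptions.
SCALE = {"Low": 1, "Medium": 2, "High": 3}

risks = {
    # name: (frequency rating, severity rating)
    "Data-entry errors":  ("High", "Low"),      # frequent, small losses
    "Hurricane exposure": ("Low", "High"),      # rare, devastating losses
    "Vendor outage":      ("Medium", "Medium"),
}

for name, (freq, sev) in risks.items():
    score = SCALE[freq] * SCALE[sev]
    print(f"{name:20} freq={freq:6} sev={sev:6} score={score}")
```

The frequent-but-small risk and the rare-but-devastating risk both score 3, even though they call for completely different risk management responses. The product of two ordinal labels has no common unit.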
But it is very easy to fix.
The fix is this…
For each risk, develop two values. The first is the loss expected over a 5-year period under normal volatility. The second is the loss that is possible under extreme but not impossible conditions – what Lloyd’s calls a Realistic Disaster.
These two values each represent a different aspect of each risk, and each can be compared across all of the risks. That is, you can rank the risks according to how large a loss is expected under normal volatility and how large a loss is possible under a Realistic Disaster.
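A minimal sketch of what this looks like in practice (the risk names and dollar figures are invented for illustration):

```python
# Each risk carries two values in the same unit (here, $ millions of loss):
# expected loss over 5 years under normal volatility, and the loss
# possible under a Realistic Disaster. Figures are illustrative only.
risks = {
    # name: (normal volatility 5-year loss, Realistic Disaster loss)
    "Equity market":  (40, 250),
    "Catastrophe":    (5, 400),
    "Credit default": (25, 120),
    "Operational":    (15, 60),
}

# Because each column is in a common unit, ranking within a column is meaningful.
by_normal   = sorted(risks, key=lambda r: risks[r][0], reverse=True)
by_disaster = sorted(risks, key=lambda r: risks[r][1], reverse=True)

print("Ranked by normal volatility: ", by_normal)
print("Ranked by Realistic Disaster:", by_disaster)
```

Note that the two rankings need not agree: in this made-up example, catastrophe risk sits at the bottom of the normal volatility ranking but at the top of the Realistic Disaster ranking.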
Now, if you are concerned that this approach only looks at financial risks, you can go right ahead and compare the impact of each risk on some other, non-financial factor, under both normal volatility and a Realistic Disaster. The same sort of comparison works for any other factor that you like.
If you do this carefully enough, you are likely to find that some risks are more of a problem under normal volatility and others under Realistic Disasters. You will also find that some risks that you have spent lots of time on under the Disparate Frequency/Severity Pairs method are just not at all significant when you look at them consistently alongside the other risks.
So you need to compare risk estimates where one aspect is held the same. Like comparing two bikes, or two birds. But you cannot compare a bird and a clock.
And once you have those insights, you can more effectively allocate your risk management efforts!
Comment (August 18, 2015 at 3:20 pm):
I disagree.
Actuaries have for years dealt with incommensurate risks – a pound of mortality, a yard of credit, a gallon of interest rate, an hour of liquidity.
The common denominator is volatility, expressed as exposure to ruin. On occasion specified directly; more often via proxies like capital.
I note that frequency × severity is a basic tool, whether for car insurance or credit (default times loss-given-default). Auto claims, with relatively high frequency, bear a smaller risk charge than rare but devastating earthquake losses.
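For instance (the figures below are invented, not from this comment), two books of business can have identical expected losses yet very different tail behavior:

```python
# Illustrative figures only: same frequency x severity, different tails.
auto_freq, auto_sev   = 0.10,  5_000    # 10% claim rate, $5k average claim
quake_freq, quake_sev = 0.001, 500_000  # 0.1% chance, $500k loss

print(auto_freq * auto_sev)    # 500.0 expected loss per exposure
print(quake_freq * quake_sev)  # 500.0 the same expected loss...
# ...but the earthquake book attracts the larger risk charge,
# because its losses arrive rarely and all at once.
```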
By the way, the rating agencies have never reconciled their alternative meanings for ratings and “expectation of loss.”
A traditional issuer rating might be expressed as a 1% likelihood of default. But structured finance ratings are expressed as a high likelihood of, say, a 1% loss in the underlying pool of assets. So how’s that working out?