Archive for the ‘risk assessment’ category

How to Show the Benefits of Risk Management

January 2, 2015

From Harry Hall at

Sometimes we struggle to illustrate the value of risk management. We sense we are doing the right things. How can we show the benefits?

Some products such as weight loss programs are promoted by showing a “before picture” and an “after picture.” We are sold by the extraordinary improvements.

The “before picture” and “after picture” are also a powerful way to make known the value of risk management.

We have risks in which no strategies or actions have been executed. In other words, we have a “before picture” of the risks. When we execute appropriate response strategies such as mitigating a threat, the risk exposure is reduced. Now we have the “after picture.”

Let’s look at one way to create pictures of our risk exposure for projects, programs, portfolios, and enterprises.

Say Cheese

The first step to turning risk assessments into pictures is to assign risk levels.

Assume that a Project Manager is using a qualitative rating scale of 1 to 10, 10 being the highest, to rate Probability and Impact. The Risk Score is calculated by multiplying Probability x Impact. Here is an example of a risk table with a level of risk and the corresponding risk score range.

Level of Risk     Risk Score
Very Low          1 – 20
Low               21 – 39
Medium            40 – 59
High              60 – 79
Very High         80 – 100

Figure 1: Qualitative Risk Table
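For readers who want to automate the scoring, here is a minimal Python sketch of the Figure 1 logic; the function names and the exact (inclusive) band boundaries are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of the Figure 1 scoring: Probability and Impact are each
# rated 1-10, the Risk Score is their product, and the score maps to a level.
def risk_score(probability: int, impact: int) -> int:
    """Qualitative risk score on a 1-100 scale."""
    assert 1 <= probability <= 10 and 1 <= impact <= 10
    return probability * impact

def risk_level(score: int) -> str:
    """Map a risk score to the levels in Figure 1."""
    if score <= 20:
        return "Very Low"
    if score <= 39:
        return "Low"
    if score <= 59:
        return "Medium"
    if score <= 79:
        return "High"
    return "Very High"

score = risk_score(probability=7, impact=9)
print(score, risk_level(score))   # 63 High
```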

Looking Good

Imagine a Project Manager facilitates the initial risk identification and assessment. The initial assessment results in fifteen Urgent Risks – eight “High” risks and seven “Very High” risks.

Figure 2: Number of Risks before Execution of Risk Response Strategies

We decide to act on the Urgent Risks alone and leave the remaining risks in our Watch List. The team develops risk response strategies for the Urgent Risks such as ways to avoid and mitigate threats.

Figure 3: Number of Risks after Execution of Risk Response Strategies

After the project team executes the strategies, the team reassesses the risks. We see a drop in the number of Urgent Risks (lighter bars). The team has reduced the risk exposure and improved the potential for success.
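Here is a minimal matplotlib sketch of the Figure 2/Figure 3 comparison. Only the before-counts of eight “High” and seven “Very High” risks come from the example above; every other count is invented for illustration.

```python
# A minimal sketch of a before/after risk exposure chart (Figures 2 and 3).
import matplotlib.pyplot as plt
import numpy as np

levels = ["Very Low", "Low", "Medium", "High", "Very High"]
before = [3, 5, 6, 8, 7]  # 8 High, 7 Very High from the example; rest invented
after = [6, 7, 8, 3, 1]   # reassessment after risk responses (invented)

x = np.arange(len(levels))
plt.bar(x - 0.2, before, width=0.4, label="Before responses")
plt.bar(x + 0.2, after, width=0.4, label="After responses")
plt.xticks(x, levels)
plt.ylabel("Number of risks")
plt.title("Risk exposure before and after response strategies")
plt.legend()
plt.show()
```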

How to Illustrate Programs, Portfolios, or Enterprises

Now, imagine a Program Manager managing four projects in a program. We can roll up the risks of the four projects into a single view. Figure 4 below illustrates the comparison of the number of risks before and after the execution of the risk strategies.

Figure 4: Number of Program Risks before and after Execution of Risk Response Strategies

Of course, we can also illustrate risks in a like manner at a portfolio level or an enterprise level (i.e., Enterprise Risk Management).
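Rolling up to the program view is just a matter of summing the per-level counts across projects. A minimal sketch, with hypothetical project names and counts:

```python
# A minimal sketch of a program-level roll-up: sum per-level risk counts
# across the projects in the program. All names and counts are hypothetical.
from collections import Counter

projects = {
    "Project A": {"High": 3, "Very High": 2},
    "Project B": {"High": 1, "Very High": 0},
    "Project C": {"High": 2, "Very High": 3},
    "Project D": {"High": 2, "Very High": 2},
}

program_view = Counter()
for counts in projects.values():
    program_view.update(counts)

print(dict(program_view))   # {'High': 8, 'Very High': 7}
```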

Tip of the Day

When you ask team members to rate risks, it is important to specify whether they are assessing the “before picture” (i.e., inherent risks), the “after picture” (i.e., residual risks), or both. Inherent risks are risks to the project in the absence of any strategies/actions that might alter the risk. Residual risks are risks remaining after strategies/actions have been taken.

Question: What types of charts or graphics do you use to illustrate the value of risk management?

Top 10 RISKVIEWS Posts of 2014 – ORSA Heavily Featured

December 29, 2014

RISKVIEWS believes that this may be the best top 10 list of posts in the history of this blog.  Thanks to our readers whose clicks resulted in their selection.

  • Instructions for a 17 Step ORSA Process – Own Risk and Solvency Assessment is here for Canadian insurers, coming in 2015 for the US, and required in Europe for 2016. At least 10 other countries have also adopted ORSA and are moving towards full implementation. This post leads you to 17 other posts that give a detailed view of the various parts of a full ORSA process and report.
  • Full Limits Stress Test – Where Solvency and ERM Meet – This post suggests a link between your ERM program and your stress tests for ORSA that is highly logical, but not generally practiced.
  • What kind of Stress Test? – Risk managers need to do a better job communicating what they are doing. Much communication about risk models and stress tests is fairly mechanical and technical. This post suggests some plain English terminology to describe the stress tests to non-technical audiences such as boards and top management.
  • How to Build and Use a Risk Register – A first RISKVIEWS post from a new regular contributor, Harry Hall. Watch for more posts along these lines from Harry in the coming months. And catch Harry on his blog,
  • ORSA ==> AC – ST > RCS – You will notice a recurring theme in 2014 – ORSA. That topic has taken up much of RISKVIEWS time in 2014 and will likely take up even more in 2015 and after as more and more companies undertake their first ORSA process and report. This post is a simple explanation of the question that ORSA is trying to answer that RISKVIEWS has used when explaining ORSA to a board of directors.
  • The History of Risk Management – Someone asked RISKVIEWS to do a speech on the history of ERM. This post and the associated new permanent page are the notes from writing that speech. Much more here than could fit into a 15 minute talk.
  • Hierarchy Principle of Risk Management – There are thousands of risks faced by an insurer that do not belong in their ERM program. That is because of the Hierarchy Principle. Many insurers who have followed someone’s urging that ALL risks need to be included in ERM belatedly find out that no one in top management wants to hear from them or to let them talk to the board. A good dose of the Hierarchy Principle will fix that, though it will take time. Bad first impressions are difficult to fix.
  • Risk Culture, Neoclassical Economics, and Enterprise Risk Management – A discussion of the different beliefs about how business and risk work. The difference between the beliefs taught in MBA and Finance programs and the beliefs about risk that underpin ERM makes it difficult to justify spending time and money on risk management.
  • What CEO’s Think about Risk – A discussion of three different aspects of decision-making as practiced by top management of companies, and of how the gap between those practices and the decision-making processes taught to quants can make quants less effective when trying to explain their work and conclusions.
  • Decision Making Under Deep Uncertainty – Explores the concepts of Deep Uncertainty and Wicked Problems. Of interest if you have any risks that you find yourself unable to clearly understand or if you have any problems where all of the apparent solutions are strongly opposed by one group of stakeholders or another.

Economic Capital for Banking Industry

December 22, 2014

Everything you ever wanted to know but were afraid to ask.

For the last seventeen years I have hated conversations with board members around economic capital. It is perfectly acceptable to discuss Market risk, Credit risk or interest rate mismatches in isolation, but the minute you start talking about the Enterprise, you enter a minefield.

The biggest hole in that ground is produced by correlations. The smartest board members know exactly which buttons to press to shoot your model down. They don’t do it out of malice but they won’t buy anything they can’t accept, reproduce or believe.

Attempt to explain Copulas or the stability of historical correlations in the future, and your board presentation will head south. Don’t take my word for it. Try it next time. It is not a reflection on the board; it is a simple manifestation of the disconnect that exists today between the real world of Enterprise risk and applied statistical modeling. And when it comes to banking regulation and economic capital for the banking industry, the disconnect is only growing larger.

Frustrated with the state of modeling in this space, we started working on an alternate model for economic capital three years ago. The key trigger was the shift to shortfall and probability-of-ruin models in bank regulation, as well as Taleb’s assertions about how risk results should be presented to ensure informed decision making. While the proposed model was a simple extension of the same principles on which value at risk is based, we felt that some of our tweaks and hacks delivered on our end objective – meaningful, credible conversations with the board around economic capital estimates.

Enterprise models for estimating economic capital simply extend the regulatory value at risk (VaR) model. The theory focuses on anchoring expectations. If institutional risk expectations max out at 97.5%, then 99.9% can represent unexpected risk. The appealing part of this logic is that the anchors can shift as more points become visible in the underlying risk distribution. In the simplest and crudest of forms, here is what economic capital models suggest:

“While regulatory capital models compensate for expected risk, economic capital should account for unexpected risk. The difference between the two estimates is the amount you need to put aside as economic capital.”

The plus point of this approach is that it ensures that economic capital requirements will always exceed regulatory capital requirements. It removes the possibility of arbitrage that occurs when this condition doesn’t hold. The downside is the estimation of dependence between business lines. The variations that we proposed short-circuited the correlation debate. They also recommended using accounting data – data that the board had already reconciled and signed off on.
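As a purely illustrative sketch of that quantile arithmetic (a simulated loss series stands in for real data here; the series below builds its estimates from accounting data instead):

```python
# A minimal sketch of the anchoring idea: if expected risk maxes out at the
# 97.5th percentile (the regulatory anchor) and unexpected risk at the
# 99.9th, economic capital is the gap between the two loss quantiles.
# The fat-tailed loss series here is simulated, purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
losses = rng.standard_t(df=4, size=10_000) * 10_000_000  # hypothetical losses

expected_anchor = np.percentile(losses, 97.5)    # regulatory capital anchor
unexpected_anchor = np.percentile(losses, 99.9)  # economic capital anchor
economic_capital = unexpected_anchor - expected_anchor

print(f"Economic capital buffer: {economic_capital:,.0f}")
```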


Without further ado, here is the series that presents our alternate model for estimating economic capital for the banking industry. Discuss, dissect, modify, suggest. We would love to hear your feedback.

Economic Capital – An alternate Model

Can we use the accounting data series and skip copulas and correlation modeling for business lines altogether? Take a look to find the answer.


Economic Capital Case Study – setting the context

We use publicly available data from Goldman Sachs, JP Morgan Chase, Citibank, Wells Fargo & Barclays Bank for the years 2002 to 2014 to calculate the economic capital buffers in place at these 5 banks. Three different approaches are used: two centered on capital adequacy, one using the regulatory Tier 1 leverage ratio.


Economic Capital Models – The appeal of using accounting data

Why does accounting data work? What is the business case for using accounting data for economic capital estimation? How does the modeling work?


Calculating Economic Capital – Using worst case losses

Our first model uses worst case loss. If you are comfortable with value at risk terminology, this is the historical simulation approach to economic capital estimation. We label it model one.
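As a generic illustration of the historical simulation idea (not necessarily the exact calculation in the linked post), the worst observed change in an accounting series can serve as the loss estimate:

```python
# A minimal sketch: take an accounting series (hypothetical quarterly
# earnings, in millions) and use the worst observed period-on-period
# decline as the worst case loss.
quarterly_earnings = [310, 280, -120, 150, 90, -450, 60, 220, 175, -80]

changes = [b - a for a, b in zip(quarterly_earnings, quarterly_earnings[1:])]
worst_case_loss = min(changes)   # most negative change in the series

print(f"Worst observed decline: {worst_case_loss} million")
```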


Calculating Economic Capital – Using volatility

Welcome to the variance covariance model for economic capital estimation. The results will surprise you.  Presenting model two.
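Again as a generic illustration (not necessarily the exact calculation in the linked post), the variance-covariance approach scales the volatility of changes in the same kind of series by a normal quantile:

```python
# A minimal sketch: estimate the 99.9% loss as a normal multiple of the
# volatility of changes in a hypothetical accounting series (in millions).
import statistics
from scipy.stats import norm

quarterly_earnings = [310, 280, -120, 150, 90, -450, 60, 220, 175, -80]
changes = [b - a for a, b in zip(quarterly_earnings, quarterly_earnings[1:])]

sigma = statistics.stdev(changes)
loss_99_9 = norm.ppf(0.999) * sigma   # about 3.09 standard deviations

print(f"99.9% variance-covariance loss estimate: {loss_99_9:,.0f} million")
```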


Calculating Economic Capital – Using Leverage ratio

We figured it was time that we moved from capital adequacy to leverage ratios.  Introducing model three.


Too Much Risk

August 18, 2014

Risk Management is all about avoiding taking Too Much Risk.

And when it really comes down to it, there are only a few ways to get into the situation of taking too much risk.

  1. Misunderstanding the risk involved in the choices made and to be made by the organization
  2. Misunderstanding the risk appetite of the organization
  3. Misunderstanding the risk taking capacity of the organization
  4. Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity

So Risk Management needs to concentrate on preventing these four situations.  Here are some thoughts regarding how Risk Management can provide that.

1. Misunderstanding the risk involved in the choices made and to be made by an organization

This is the most common driver of Too Much Risk. There are two major forms of misunderstanding: misunderstanding the riskiness of individual choices, and misunderstanding the way that risk from each choice aggregates. Both of these drivers were strongly in evidence in the run-up to the financial crisis. The risk of each individual mortgage backed security was not seriously investigated by most participants in the market. And the aggregation of the risk from the mortgages was underestimated as well. In both cases, there was some rationalization for the misunderstanding, and the misunderstanding was apparent to most only in hindsight. That is typical of misunderstood risks: those who are later found to have made the wrong decisions about risk were most often acting on their beliefs about the risks at the time.

This problem is particularly common for firms with no history of consistently and rigorously measuring risks. Those firms usually have very experienced managers who have been selecting their risks for a long time and who may work from rules of thumb. Those firms suffer this problem most when new risks are encountered, when the environment changes making their experience less valid, and when there is turnover of their experienced managers. Firms that use a consistent and rigorous risk measurement process also suffer from model-induced risk blindness. The best approach is to combine analysis with experienced judgment.

2.  Misunderstanding the risk appetite of the organization

This is common for organizations where the risk appetite has never been spelled out. All firms have risk appetites; it is just that in many, many cases, no one knows what they are in advance of a significant loss event. So misunderstanding the unstated risk appetite is fairly common. But actually, the most common problem with unstated risk appetites is underutilization of risk capacity. Because the risk appetite is unknown, some ambitious managers will push to take as much risk as possible, but the majority will be overcautious and take less risk to make sure that things are “safe”.

3.  Misunderstanding the risk taking capacity of the organization

This misunderstanding affects both companies that do state their risk appetites and companies that do not. For those that do state their risk appetite, this problem comes about when the company assumes that it has contingent capital available but does not fully understand the contingencies. The most important contingency is the usual one regarding money – no one wants to give money to someone who really, really needs it. The preference is to give money to someone who has lots of money and who is sure to repay. For those that do not state a risk appetite, each person who has authority to take on risks makes their own estimate of the risk appetite based upon their own estimate of the risk taking capacity. It is likely that some will view the capacity as huge, especially in comparison to their decision. So most often the problem is not misunderstanding the total risk taking capacity, but instead mistaking the available risk capacity.

4.  Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity of the organization

A well established risk management system will have solved the above problems. However, that does not mean that the problems are over. In most companies, there are rewards for success in terms of current compensation and promotions. But in a business about risk taking, it is usually difficult to distinguish luck from talent and good execution. So there is a great temptation for managers to deliberately ignore the risk evaluation, the risk appetite and the risk taking capacity of the firm. If the excess risk that they then take produces excess losses, the firm may take a large loss. But if the excess risk taking does not result in an excess loss, there may be outsized gains reported, and the manager may be seen as a highly successful person who saw an opportunity that others did not. This dynamic will create a constant friction between the Risk staff and those business managers who have found the opportunity that they believe will propel their career forward.

So get to work, risk managers.

Make sure that your organization

  1. Understands the risks
  2. Articulates and understands the risk appetite
  3. Understands the aggregate and remaining risk capacity at all times
  4. Keeps careful track of risks and risk taking to be sure to stop any managers who might want to ignore the risk, the risk appetite and the risk taking capacity

Insurers need to adapt COSO/ISO Risk Management to achieve ERM

July 29, 2014

Both the COSO and ISO risk management frameworks describe many excellent practices.  However, in practice, insurers need to make two major changes from the typical COSO/ISO risk management process to achieve real ERM.

  1. RISK MEASUREMENT – Both COSO and ISO emphasize what RISKVIEWS calls the Risk Impressions approach to risk measurement. That means asking people what their impression is of the frequency and severity of each risk. Sometimes they get real fancy and also ask for an impression of Risk Velocity. RISKVIEWS sees two problems with this for insurers. First, impressions of risk are notoriously inaccurate. People are just not very good at making subjective judgments about risk. Second, the frequency/severity pair idea does not actually represent reality. The idea properly applies to very specific incidents, not to risks, which are broad classes of incidents. Each possible incident that makes up the class that we call a risk has a different frequency/severity pair. There is no single pair that represents the class. Insurers’ risks differ in one major way from the risks of non-financial firms: insurers almost always buy and sell the risks that make up 80% or more of their risk profile. That means that to make those transactions they should be making an estimate of the expected value of ALL of those frequency and severity pairs. No insurance company that expects to survive for more than a year would consider setting its prices based upon something as lacking in reality testing as a single frequency and severity pair. So an insurer should apply the same discipline to measuring its risks as it does to setting its prices. After all, risk is the business that it is in.
  2. HIERARCHICAL RISK FOCUS – Neither COSO nor ISO demands that the risk manager run to their board or senior management and proudly expect them to sit still while the risk manager expounds upon the 200 risks in their risk register. But a depressingly large number of COSO/ISO shops do exactly that. Then they wonder why they never get a second chance in front of top management and the board. Neither COSO nor ISO provides strong enough guidance regarding the Hierarchical principle that is one of the key ideas of real ERM. COSO and ISO both start with a bottom-up process for identifying risks. That means that many people at various levels in the company get to make input into the risk identification process. This is the fundamental way that COSO/ISO risk management ends up with risk registers of 200 risks. COSO and ISO do not, however, offer much if any guidance regarding how to make that into something that can be used by top management and the board.

     In RISKVIEWS’ experience, the 200-item list needs to be sorted into no more than 25 broad categories. Those categories then need to be considered the Risks of the firm, and the list of 200 items the Riskettes. Top management should have a say in the development of that list. It should be their choice of names for the 25 Risks. The 25 Risks then need to be divided into three groups. The top 5 to 7 Risks are the first-rank risks that are the focus of discussions with the Board. Those should be the Risks that are most likely to cause a financial or other major disruption to the firm. Besides focusing on those first-rank risks, the board should make sure that management is attending to all of the 25 Risks. The remaining 18 to 20 Risks can then be divided into two ranks. Top management should focus on the first- and second-rank risks, and make sure that the risk owners are attending to the third-rank risks. Top management, usually through a risk committee, needs to regularly review these risk assignments and promote and demote risks as the company’s exposure and the risk environment change.

     Now, if you are a risk manager who has recently spent a year or more constructing the list of 200 Riskettes, you are doubtless wondering what use will be made of all that hard work. Under the Hierarchical principle of ERM, the process described above is repeated down the org chart. The risk committee will appoint a risk owner for each of the 25 Risks, and that risk owner will work with their list of Riskettes. If their Riskette list is longer than 10, they might want to create a priority structure, ranking the risks as is done for the board and top management. But if the initial risk register was done properly, then the Riskettes will be separate because there is something about them that requires something different in their monitoring or their risk treatment. So the risk register and Riskettes will be a valuable and actionable way to organize their responsibilities as risk owner, even if the list is never again shown to top management and the board.

These two ideas do not contradict the main thrust of COSO and ISO but they do represent a major adjustment in approach for insurance company risk managers who have been going to COSO or ISO for guidance.  It would be best if those risk managers knew in advance about these two differences from the COSO/ISO approach that is applied in non-financial firms.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, RISKVIEWS told of a scale, a Coefficient of Riskiness (COR). The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name-calling stage to an actual discussion about riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of potential loss), and where the reasons for the losses for each individual risk were statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility. It does not matter what number is the average or what number is the standard deviation.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
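That 83% figure can be checked with a couple of lines of code, under the assumption (consistent with the COR of 3 above) that the loss distribution is roughly normal and the average loss is small relative to the volatility:

```python
# A quick check of the 83% figure: the 1-in-200 (99.5th percentile) loss as
# a fraction of the 1-in-1000 (99.9th percentile) Maximum Likely Loss,
# assuming a roughly normal distribution with a small average loss.
from scipy.stats import norm

z_1_in_200 = norm.ppf(0.995)    # about 2.58 standard deviations
z_1_in_1000 = norm.ppf(0.999)   # about 3.09 standard deviations

print(f"{z_1_in_200 / z_1_in_1000:.0%}")   # 83%
```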

But that is not the end of the discussion of capital. Many of the portfolios of risks held by an insurer are not so well behaved. Those portfolios are not similar and separate. They are dissimilar in the likelihood of loss for individual exposures, and dissimilar in the possible amount of loss. One way of looking at those dissimilarities is that the variability of rate and of size results in a large number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.
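A minimal sketch of that binomial analysis, assuming the COR is measured as the number of standard deviations between the mean and the 99.9th percentile loss of the pool:

```python
# A minimal sketch of the binomial analysis: the COR as the number of
# standard deviations between the mean and the 99.9th percentile loss
# for a pool of n independent risks with incidence p.
from scipy.stats import binom

def cor(n: int, p: float, tail: float = 0.999) -> float:
    mean = n * p
    std = (n * p * (1 - p)) ** 0.5
    return (binom.ppf(tail, n, p) - mean) / std

for n in (50, 100):
    print(f"n = {n:3d}, p = 1/1000: COR = {cor(n, 0.001):.1f}")
# n = 50 comes out near 9, and n = 100 near 6, matching the text.
```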

[Figure: COR for pools of risks with an average incidence of 1 in 1000]
There is a different story for a risk with average incidence of 1 in 100.  COR is less than 6 for a pool as small as 25 exposures and the COR gets down to as low as 3.5.

[Figure: COR for pools of risks with an average incidence of 1 in 100]
In producing these graphs, RISKVIEWS noticed that COR is largely a function of the number of expected claims. So the following graph shows COR plotted against the number of expected claims, for low expected numbers of claims. (A high number of expected claims produces a COR very close to 3, so those cases are not very interesting.)

[Figure: COR plotted against the number of expected claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater. And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5. When we look at risks where losses are expected less frequently, the COR can get much higher. Values of COR above 5 start showing up when the expected number of losses per year is in the range of 0.2, and for expected losses below 0.1 per year the COR is higher still.


What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us? It has told us that fat tails can come from the small portfolio effect. For a large portfolio of similar and separate risks, the tails are highly likely to be normal, with a COR of 3. For risks with a small number of exposures, the COR, and therefore the tail, might be as much as 50% fatter, with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with expected losses less frequent than one per year can have much fatter tails still, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.


Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC. The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy. Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”. How much capital is enough for two “very riskys”? Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” they mean to describe two approaches to developing a quantity. For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for. It is interesting to RISKVIEWS that very few participants or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely if ever any data.

There are only a few possibilities for these subjective decisions, in broad terms…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average. Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible. Phenomena that fall into this category are usually not the concern of risk analysis, precisely because they are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed. And this contagion has been variable and unpredictable. Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum. When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend. This process is called a “bubble”. When past history suggests an unfavorable trend, human contagion also overplays the trend, and markets for risks crash.

The modelers who wanted to use the zero-contagion models call this “Fat Tails”. It is seen as an unusual model only because it was so common to use the zero-contagion model with the simpler math.

RISKVIEWS suggests that when communicating that the approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified, and an assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent. This applies to insurance losses due to major earthquakes, for example. And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion. The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we use that 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for riskiness of a model:

[Scale: riskiness of a model expressed as the multiple of standard deviations between the mean and the 1-in-1000 loss]
So in the end the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data. This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale. There will need to be an educational step, which can be largely in terms of placing existing models on the scale. People are quite used to working with the Richter Scale for earthquakes. This is nothing more than a similar scale for risks. But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed-upon assumptions.

*                  *                *               *             *                *

So now we go to the “Qualitative” determination of the risk value. Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where for some reason we do not think that we know enough to actually know the standard deviation. Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero. So we cannot use the multiple-of-standard-deviation method discussed above. Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

