Archive for the ‘Statistics’ category

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk: quantitative and qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” they mean to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that the process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely, if ever, any data.

In broad terms, there are only a few possibilities for that subjective decision…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from the average.  Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities: low or no contagion, and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called a “bubble”.  When past history suggests an unfavorable trend, human contagion also overplays the trend, and markets for risks crash.

The modelers who wanted to use the zero-contagion models call this “fat tails”.  It is seen as an unusual model only because the zero-contagion model, with its simpler math, was so commonly used.
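
To see why the contagion assumption matters, here is a minimal simulation sketch, with made-up numbers rather than anyone’s actual model: the same book of exposures is run once with fully independent losses and once with an occasional common shock that raises every exposure’s loss probability at the same time.  The contagion run pushes the 1 in 1000 year aggregate loss much further out, in standard deviation terms, than the independent run.

```python
import numpy as np

rng = np.random.default_rng(0)
n_exposures, n_sims = 1000, 200_000
p_base = 0.02                        # baseline loss probability per exposure (illustrative)

# Zero-contagion assumption: every exposure suffers losses independently.
independent = rng.binomial(n_exposures, p_base, size=n_sims)

# Contagion assumption: in occasional "shock" years a common driver raises
# every exposure's loss probability at once (all values are illustrative).
shock_year = rng.random(n_sims) < 0.05          # roughly a 1-in-20-year contagion event
p_year = np.where(shock_year, 0.10, p_base)
contagion = rng.binomial(n_exposures, p_year)

for name, losses in (("zero contagion", independent), ("with contagion", contagion)):
    mean, sd = losses.mean(), losses.std()
    q999 = np.quantile(losses, 0.999)           # 1-in-1000-year aggregate loss
    print(f"{name:14s}: mean={mean:6.1f}  sd={sd:5.1f}  "
          f"1-in-1000 loss sits {(q999 - mean) / sd:.1f} sd above the mean")
```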

RISKVIEWS suggests that when communicating that the Moderate model is the chosen approach, the degree of contagion assumed should be specified.  An assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that involve humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s particular set of coverages.
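
A toy version of that two-step structure might look like the following sketch, where the event frequency, the severity distribution, and the insurer’s share of each event are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 100_000

# Step 1: event generation - how many major events occur each year and how
# large the ground-up industry loss from each one is (Poisson frequency,
# lognormal severity; every parameter here is made up for illustration).
events_per_year = rng.poisson(0.2, size=n_years)          # ~1 major event every 5 years
insurer_loss = np.zeros(n_years)

for year in np.nonzero(events_per_year)[0]:
    industry = rng.lognormal(mean=3.0, sigma=1.0, size=events_per_year[year])
    # Step 2: event-specific "contagion" - the share of each event that lands
    # on this particular insurer's set of coverages.
    share = rng.beta(2, 18, size=events_per_year[year])    # averages ~10% of industry loss
    insurer_loss[year] = (industry * share).sum()

print(f"mean annual loss   : {insurer_loss.mean():.2f}")
print(f"1-in-250-year loss : {np.quantile(insurer_loss, 1 - 1 / 250):.2f}")
```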

It just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we express the 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model:

[Scale: riskiness of a model expressed as the 1 in 1000 year loss in multiples of the standard deviation]

So in the end, the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  On these terms, the discussion can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can largely consist of placing existing models on the scale.  People are quite used to working with the Richter scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be tied directly to models, so that the models are REALLY working from broadly agreed upon assumptions.
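
As a rough illustration of such a scale, the snippet below expresses the 1 in 1000 loss as “mean plus k standard deviations” for a few textbook distribution shapes.  The shapes are illustrative stand-ins, not anyone’s official models: a Normal shape lands near 3 on the scale, and heavier-tailed shapes land well above it.

```python
from scipy import stats

# Express the 1-in-1000-year loss as "mean + k standard deviations" for a few
# candidate shapes, each using its own mean and standard deviation.
p = 0.999
candidates = {
    "Normal (a Moderate-type shape)": stats.norm(),
    "Student-t, 4 df (fatter tail)":  stats.t(df=4),
    "Lognormal, sigma=1 (skewed)":    stats.lognorm(s=1.0),
}

for name, dist in candidates.items():
    k = (dist.ppf(p) - dist.mean()) / dist.std()
    print(f"{name:31s}: 1-in-1000 loss = mean + {k:.1f} sd")
```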

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where we for some reason do not think we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Ignoring a Risk

October 31, 2013

Ignoring is perhaps the most common approach to large but infrequent risks.

Most people think of a 1 in 100 year event as something so rare that it will never happen.

But just take a second and look at the mortality risk of a life insurer.  Each insured has on average around a 1–2 in 1000 likelihood of death in any one year.  However, life insurers do not plan for zero claims.  They plan for 1–2 in 1000 of their policies to have a death claim in any one year.  No one thinks it odd that something with a 1–2 in 1000 likelihood happens hundreds of times in a year.  No one goes around scoffing at the validity of the model or the likelihood estimate because such a rare event has happened.

But somehow, that seemingly totally simple minded logic escapes most people when dealing with other risks.  They scoff at how silly it is that so many 1 in 100 events happen in a year.  Of course, they say, such estimates of likelihood MUST be wrong.

So they go forth ignoring the risk and ignoring the attempts at estimating the expected frequency of loss.  The cost of ignoring a low frequency risk is zero in most years.

And of course, any options for transferring such a risk will have both an expected frequency and an uncertainty charge built in, which makes those options much too expensive.

The big difference is that a large life insurer takes on hundreds of thousands, and in the largest cases millions, of exposures to these 1–2 in 1000 risks.  Of course, the law of large numbers turns these individually ultra low frequency risks into a predictable claims pattern, in many cases one with a fairly tight distribution of possible claims.
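
A quick back-of-the-envelope sketch of that law of large numbers effect, using an invented portfolio size and mortality rate:

```python
from math import sqrt

n_policies = 500_000       # a large life insurer's in-force count (illustrative)
q = 0.0015                 # 1.5 in 1000 annual probability of a death claim per insured

expected_claims = n_policies * q
sd_claims = sqrt(n_policies * q * (1 - q))      # binomial standard deviation

print(f"expected claims    : {expected_claims:,.0f}")
print(f"standard deviation : {sd_claims:,.0f}")
print(f"claims almost surely land between roughly "
      f"{expected_claims - 3 * sd_claims:,.0f} and {expected_claims + 3 * sd_claims:,.0f}")
```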

But because they are ignored, no one tries to count how many of those 1 in 100 risks we are exposed to.  Yet the statistics of 20 or 50 or 100 totally unrelated 1 in 100 risks are exactly the same as the life insurance math.

With 100 totally unrelated independent 1 in 100 risks, the chance of one or more turning into a loss in any one year is 63%!
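
That figure is just the complement of nothing at all happening.  A one-line check, also covering the 20 and 50 risk cases mentioned above:

```python
# Chance that at least one of k independent 1-in-100 risks produces a loss this year.
for k in (20, 50, 100):
    print(f"{k:3d} risks -> {1 - 0.99 ** k:.0%} chance of at least one loss")
```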

And the most common reaction to the experience of a 1 in 100 event happening is to decide that the statistics are all wrong!

After Superstorm Sandy, NY Governor Cuomo told President Obama that NY “has a 100-year flood every two years now.”  Cuomo had been governor for less than two full years at that point.

The point is that organizations must go against the natural human impulse to separately decide to ignore each of their “rare” risks and realize that the likelihood of experiencing one of these rare events is not so small; what is uncertain is which one it will be.

A Posteriori Creation

September 29, 2010

The hunters had come back to the village empty handed after a particularly difficult day. They talked through the evening around the fire about what had happened. They needed to make sense out of their experience, so that they could go back out tomorrow and feel that they knew how the world worked well enough to risk another hunt. This day, they were able to convince themselves that what had happened was similar to another day many years ago and that it was an unusually bad day, but driven by natural forces that they could expect and plan for in the future.

Other days, they could not reconcile an unusually bad day and they attributed their experience to the wrath of one or another of their gods.

Risk managers still do the same thing.  They have given this process a fancy name, Bayesian inference.  The very bad days, we now call Black Swans instead of an act of the gods.

Where we have truly advanced is in our ability to claim that we can reverse this process.  We claim that we can create the stories in advance of the experience and thereby provide better protection.

But we fail to realize that underneath, we are still those hunters.  We tell the stories to make ourselves feel better, to feel safe enough to go back out the next day.  Once we have gone through the process of a posteriori creation of the framework, the past events fit neatly into a framework that did not really exist when those same events were in the future.

If you do not believe that, think about how many risk models have had to be significantly recalibrated in the last 10 years.
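
For readers who want to see the fancy name in action, here is a minimal sketch of Bayesian updating of a belief about how often very bad years occur, with invented numbers.  Note how quickly a couple of bad years move the estimate, which is the recalibration described above:

```python
# Minimal Bayesian update of a belief about how often "very bad years" occur.
# A Beta prior on the annual probability is updated year by year.
# All numbers are invented for illustration.

alpha, beta = 1.0, 99.0            # prior roughly centered on a 1-in-100 bad year
print(f"prior mean probability of a bad year    : {alpha / (alpha + beta):.3f}")

observations = [1, 0, 0, 1, 0]     # 1 = bad year, 0 = ordinary year
for bad_year in observations:
    alpha += bad_year
    beta += 1 - bad_year

print(f"posterior mean after {len(observations)} observed years: {alpha / (alpha + beta):.3f}")
```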

To correct for this, we need to go against 10,000 or more years of human experience.  The correction can be summed up with the line from the movie The Fly,

Be afraid.  Be very afraid.

There is another answer.  That answer is

Be smart.  Be very smart.

That is because it is not always the best or even a very good strategy to be very afraid.  Only sometimes.  So you need to become smart enough to:

  1. Know when it is really important to mistrust the models and to be very afraid
  2. Have built up the credibility and trust so that you are not ignored.

While you are doing that, be careful with the a posteriori creations.  The better people get at explaining away the bad days, the harder it will be for you to convince them that a really bad day is at hand.

How Many Dependencies did you Declare?

September 12, 2009

Correlation is a statement of historical fact.  The measurement of correlation does not necessarily give any indication of future tendencies unless there is a clear interdependency or lack thereof.  That is especially true when we seek to calculate losses at probability levels that are far outside the size of the historical data set.  (If you want to calculate a 1/200 loss and have 50 years of data, you have 25% of one observation.)
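
A small sketch of that parenthetical point, using simulated history: with only 50 annual observations, the empirical 99.5th percentile is essentially just the worst year on record, and anything beyond it comes from the fitted model rather than from the data.

```python
import numpy as np

rng = np.random.default_rng(2)
history = rng.normal(size=50)      # 50 years of simulated annual losses, in sd units

print(f"empirical 99.5th percentile        : {np.quantile(history, 0.995):.2f}")
print(f"worst single observation           : {history.max():.2f}")
print(f"1-in-200 loss from a fitted Normal : "
      f"{history.mean() + 2.576 * history.std(ddof=1):.2f}")
# With 50 points, the empirical estimate is just the worst year on record;
# everything beyond it is supplied by the model choice, not the data.
```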

Using historical correlations without understanding the actual interdependencies can result in drastic problems.

An example is sub-prime mortgages.  One of the key differences between what actually happened and the models used prior to the collapse of these markets is that historical correlations were used to drive the sub-prime models.  The correlations were between regions.  Historically, there had been low correlations between mortgage default rates in different regions of the US.  Unfortunately, those correlations were an artifact of defaults driven by regional unemployment, and unemployment is not the only factor that affects defaults.  The mortgage market had changed drastically from the period over which the defaults were measured.  Mortgage lending practices changed in most of the larger markets.  The prevalence of modified payment mortgages meant that the relationship between mortgages and income was changing as the payments shifted.  In addition, the amount of mortgage granted compared to income also shifted drastically.

So the long term low regional correlations were no longer applicable to the new mortgage market, because the market had changed.  The historical correlation was still a true fact, but it did not have much predictive power.

And it makes some sense to talk about interdependency rising in extreme events.  Just like in the subprime situation, there are drivers of risks that shift into new patterns because systems exceed their carrying capacity.

Everything that is dependent on confidence in the market may not be correlated most of the time, but that interdependency will show through when confidence is shaken.  In addition to confidence, financial market instruments may also be dependent on the level of liquidity in the markets.  Is confidence in the market a stochastic variable in the risk models?  It should be – it is one of the main drivers of the level of correlation of otherwise unrelated activities.
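
Here is a minimal sketch of treating confidence as a stochastic variable, with invented numbers: two otherwise unrelated activities share an occasional confidence shock, so their correlation looks modest over the whole history, near zero in calm periods, and much higher precisely when confidence is shaken.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Confidence is itself a stochastic variable: usually fine, occasionally shaken.
confidence_shaken = rng.random(n) < 0.02
shock = np.where(confidence_shaken, rng.normal(-4.0, 1.5, n), 0.0)

# Two otherwise unrelated activities that both depend on market confidence.
a = rng.normal(size=n) + shock
b = rng.normal(size=n) + shock

print(f"correlation over the whole history : {np.corrcoef(a, b)[0, 1]:.2f}")
print(f"correlation in calm periods        : "
      f"{np.corrcoef(a[~confidence_shaken], b[~confidence_shaken])[0, 1]:.2f}")
print(f"correlation when confidence shaken : "
      f"{np.corrcoef(a[confidence_shaken], b[confidence_shaken])[0, 1]:.2f}")
```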

So before jumping to using correlations, we must seek to understand dependencies.

Good Data, Models, Instincts and Statistics

September 2, 2009

Guest Post from Jawwad Farid

http://alchemya.com/wordpress2/

Risk and transaction systems differ in many ways. But they both suffer from a common ailment: the need for good data and working models. On a risk platform the integrity of the data set is dependent on the underlying transaction platform and the quality of data feeds. Keeping the incoming stream of information clean and ensuring that the historical data set remains pure is a full time job. The resources allocated to this problem show how committed an organization is to its risk systems and how much it relies on them.

In organizations still ruled by the compliance driven check list mindset, you will find that it is sufficient to simply generate risk reports. It is sufficient because no one really looks at the results, and when they do, in most cases they have no idea how to interpret them, much less work with the numbers to understand the challenges they represent for that organization’s future.

The same problem haunts the modeling domain. It is not sufficient to have a model in place. It is just as important to understand how it works and how it will fail. But once again, as long as a model exists and as long as it produces something on a periodic basis, most Boards in the region feel they have met the necessary and sufficient condition for risk management.

Is there anything that we can do to change this mindset and fix this problem?

One could start with the confusion at Board level between Risk and the underlying transaction. A market risk platform is a very different animal from the underlying treasury transaction. The common ground however is the pricing model and market behavior, the uncommon factor is the trader’s instinct and his gut. Where risk and the transaction systems clash is on the uncommon ground. Instincts versus statistics!

The instinct and gut effect is far more prominent on the credit side. Relationships and strategic imperatives drive the credit business. Analytics and models drive the credit risk side. The credit business is “name” based, dominated by subjective factors that assess relationships, one at a time. There is some weight assigned to sector exposure and concentration limits at the portfolio level, but the primary “lend”, “no lend” call is still relationship based. The credit risk side on the other hand is scoring, behavior and portfolio based. A payment delay is a payment delay, a default is a default. While the softer side can protect the underlying relationship and possibly increase the chances of recovery and help attain “current” status more quickly, the job of a risk system is to document and highlight exceptions and project their impact on the portfolio. A risk system focuses on the trend. While it is interested in the cause of the underlying event, the interest is purely mathematical; there is no human side.

I asked earlier if there is anything we can do to change. To begin with, Boards need to spend more time and allocate more resources to the risk debate. Data, models and reports are not enough. They need to be poked, challenged, stressed, understood, grown and invested in. Two hours once a quarter for a Board Risk Committee meeting is not sufficient time to dissect the effectiveness of your risk function. You may as well close your eyes and ignore it.

But before you do that remember hell hath no fury like a risk scorned.