Archive for the ‘Tail Risk’ category

Setting your Borel Point

July 28, 2014

What is a Borel Risk Point, you ask?  Emile Borel once said

“Events with a sufficiently small probability never occur”.

Your Borel Risk Point (BRP) is your definition of “sufficiently small probability” that causes you to ignore unlikely risks.

Chances are, your BRP is set at much too high a level of likelihood.  You see, when Borel said that, he was thinking of something like a 1 in 1 million likelihood.  Human nature, with survival instincts tuned to getting through the day, would have us ignoring things that are not likely to happen this week.

Even insurance professionals will often want to ignore risks that are as common as 1 in 100 year events, treating them as if they will never happen.

And in general, the markets allow us to get away with that.  If a serious adverse event happens, the unprepared generally are excused if it is something as unlikely as a 1 in 100 event.

That works until another factor comes into play.  That other factor is the number of potential 1 in 100 events that we are exposed to.  Because if you are exposed to fifty 1 in 100 events, you are still pretty unlikely to see any particular event, but very likely to see some such event.
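That arithmetic is easy to check.  A minimal sketch, assuming the fifty events are independent and each has exactly a 1 in 100 chance per year:

```python
# Probability of seeing at least one of n independent 1-in-100 events.
# Assumes independence and an exact 1% chance per event -- a simplification.

def prob_at_least_one(n: int, p: float = 0.01) -> float:
    """P(at least one of n independent events, each with probability p)."""
    return 1 - (1 - p) ** n

# Any single event remains unlikely...
print(f"One event:    {prob_at_least_one(1):.1%}")
# ...but seeing some event among fifty is roughly a 40% proposition.
print(f"Fifty events: {prob_at_least_one(50):.1%}")
```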

Governor Andrew Cuomo of New York State reportedly told President Obama,

New York “has a 100-year flood every two years now.”

Solvency II has Europeans all focused on the 1 in 200 year loss.  RISKVIEWS would suggest that is still too high a likelihood for a good Borel Risk Point for insurers.  RISKVIEWS would argue that insurers need a higher BRP because of the business that they are in.  For example, life insurers' primary product (which is life insurance, at least in some parts of the world) pays for individual risks (unexpected deaths) that occur at an average rate of less than 1 in 1000.  How does an insurance company look its customers in the eye and say that they need to buy protection against a 1 in 1000 event from a company that only has a BRP of 1 in 200?

So RISKVIEWS suggests that insurers set their BRP somewhere just above 1 in 1000.  That might sound aggressive, but it is pretty close to the Secure Risk Capital standard.  With a risk capital standard of 1 in 1000, you can also use the COR instead of a model to calculate the capital you need.

Is it rude to ask “How fat is your tail?”

July 23, 2014

In fact, not only is it not rude, the question is central to understanding risk models.  The Coefficient of Riskiness (COR) allows us for the first time to talk about this critical question.


You see, “normal” sized tails have a COR of three. If everything were normal, then risk models wouldn’t be all that important. We could just measure volatility and multiply it by 3 to get the 1 in 1000 result. If you instead want the 1 in 200 result, you would multiply the 1 in 1000 result by 83%.
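Those rule-of-thumb numbers are easy to verify against the normal distribution.  A quick check using Python's standard library (the exact 1 in 1000 multiple is 3.09, which the text rounds to 3):

```python
# Check the "normal tail" rules of thumb against standard normal quantiles.
from statistics import NormalDist

z = NormalDist()            # standard normal: mean 0, standard deviation 1
q1000 = z.inv_cdf(0.999)    # 1 in 1000 quantile, about 3.09 sd
q200 = z.inv_cdf(0.995)     # 1 in 200 quantile, about 2.58 sd

print(f"1 in 1000 multiple: {q1000:.2f}")
print(f"1 in 200 multiple:  {q200:.2f}")
print(f"ratio:              {q200 / q1000:.0%}")   # the 83% used in the text
```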

Amazing maths fact – 3 is always the answer.

But everything is not normal. Everything does not have a COR of 3. So how fat are your tails?

RISKVIEWS looked at an equity index model. That model was carefully calibrated to match up with very long term index returns (using Robert Shiller’s database). The fat tailed result there has a COR of 3.5. With that model the 2008 S&P 500 total return loss of 37% is a 1 in 100 loss.

So if we take that COR of 3.5 and apply it to the conveniently available experience of 1971 to 2013, the mean return is 12% and the volatility is about 18%. Using the simple COR approach, we estimate the 1 in 1000 loss as 50% (3.5 times the volatility, less the average return). To get the 1 in 200 loss, we take 83% of that and get a 42% loss.
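That estimate is simple arithmetic, easy to reproduce.  A minimal sketch (the mean, volatility, COR, and the 83% ratio are the values quoted above; note that 3.5 × 18% − 12% is exactly 51%, close to the quoted 50% since the 18% volatility is itself approximate):

```python
# Estimate tail losses from mean, volatility, and COR, as described above.

def tail_loss(mean: float, vol: float, cor: float) -> float:
    """Estimated 1-in-1000 loss: COR times volatility, less the mean return."""
    return cor * vol - mean

loss_1000 = tail_loss(mean=0.12, vol=0.18, cor=3.5)
loss_200 = 0.83 * loss_1000      # the 83% normal-tail ratio from the text

print(f"1 in 1000 loss: {loss_1000:.0%}")   # 51%, quoted as "50%" above
print(f"1 in 200 loss:  {loss_200:.0%}")    # about 42%
```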

RISKVIEWS suggests that the COR can be an important part of Model Validation.

Looking at the results above for the stock index model, the question becomes: why is 3.5 the correct COR for the index?  We know that in 2008 the stock market actually dropped 50% from high point to low point within a 12 month period that was not a calendar year.  If we go back to Shiller's database, which tracks the index values monthly (with extensions estimated for 50 years before the actual index was first defined), we find that there are approximately 1500 12 month periods.  RISKVIEWS recognizes that these are not independent observations, but to answer this particular question, these actually are the right data points.  And looking at that data, a 50% drop in a 12 month period is around the 1000th worst 12 month period.  So a model with a 3.5 COR is pretty close to an exact fit with the historical record.

And what if you have an opinion about the future riskiness of the stock market?  You can vary the volatility assumption if you think that the current market, with high speed trading and globally, instantaneously interlinked markets, will be more volatile than the past 130 years that Shiller's data covers.  You can also adjust the future mean.  You might at least want to replace the arithmetic mean quoted above of 12% with the historic geometric mean of 10.6%, since we are not really talking about holding stocks for just one year.

And you can have an opinion about the riskiness of stocks in the future.  A COR of 3.5 means that the tail at the 1 in 1000 point is 3.5 / 3, or 116.7%, of the normal tail.  That is hardly an obese tail.

The equity index model that we started with here has a 1 in 100 loss value of 37%, which was the 2008 calendar year total return for the S&P 500. If we want to know what we would get with tails that are twice as fat, we can, with the concept of COR, look at a COR of 4.0 instead of 3.5. That would put the 1 in 1000 loss 9 points worse, at 59%, and the 1 in 200 loss 7 points worse, at 49%.

Those answers are not exact. But they are reasonable estimates that could be used in a validation process.

Non-technical management can look at the COR for each model and can participate in a discussion of the reasonability of the fat in the tails for each and every risk.

RISKVIEWS believes that the COR can provide a basis for that discussion. It can be like the Richter scale for earthquakes or the Saffir-Simpson scale for hurricanes. Even though people in general do not know the science underlying either scale, they do believe that they understand what the scale means in terms of severity of experience. With exposure, the COR can take that place for risk models.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, RISKVIEWS told of a scale, the Coefficient of Riskiness.  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name calling stage to an actual discussion about riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of the potential loss), and where the causes of the losses for each individual risk are statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number is the average or what number is the standard deviation.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
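Those capital levels are simple multiples of the Maximum Likely Loss, so they can be sketched in a few lines.  In this sketch the level names and multiples are the ones described above, while the mean loss and volatility are made-up illustration inputs:

```python
# Capital levels as multiples of the Maximum Likely Loss (mean + 3 x volatility).
# The mean loss (100) and volatility (20) below are made-up illustration inputs.

def maximum_likely_loss(mean_loss: float, vol: float) -> float:
    """Mean loss plus three times volatility, for a homogeneous independent pool."""
    return mean_loss + 3 * vol

mll = maximum_likely_loss(mean_loss=100.0, vol=20.0)

capital_levels = {
    "Viable (1 in 200)": 0.83 * mll,   # about 83% of the MLL
    "Secure": 1.00 * mll,              # approximately 100% of the MLL
    "Robust": 1.10 * mll,              # roughly 10% above the MLL
}
for level, capital in capital_levels.items():
    print(f"{level}: {capital:.0f}")
```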

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and they are dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.
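Those COR values can be reproduced with nothing more than the binomial distribution.  A minimal sketch, where COR is taken as the distance from the mean claim count to its 99.9th percentile, measured in standard deviations (the exact values depend on how the discrete percentile is defined):

```python
# COR for a pool of n independent risks, each with annual loss probability p.
# COR = (99.9th percentile of the claim count - mean) / standard deviation.
from math import comb, sqrt

def cor(n: int, p: float, level: float = 0.999) -> float:
    mean = n * p
    sd = sqrt(n * p * (1 - p))
    cdf = 0.0
    for k in range(n + 1):                       # walk up the binomial CDF
        cdf += comb(n, k) * p**k * (1 - p) ** (n - k)
        if cdf >= level:
            return (k - mean) / sd               # first count at or past the level
    raise ValueError("level not reached")

print(f"{cor(100, 0.001):.1f}")  # about 6 for a pool of 100 risks at 1 in 1000
print(f"{cor(50, 0.001):.1f}")   # close to 9 for a pool of only 50
print(f"{cor(25, 0.01):.1f}")    # about 5.5 for 25 risks at 1 in 100
```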

 

[Chart: COR by pool size for a risk with average incidence of 1 in 1000]

 

There is a different story for a risk with an average incidence of 1 in 100.  The COR is less than 6 for a pool as small as 25 exposures, and it gets down to as low as 3.5.

[Chart: COR by pool size for a risk with average incidence of 1 in 100]

In producing these graphs, RISKVIEWS noticed that COR is largely a function of the number of expected claims.  So the following graph shows COR plotted against the number of expected claims, for low expected numbers of claims.  (High expected claim counts produce a COR that is very close to 3, so they are not very interesting.)

[Chart: COR by number of expected claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses is in the range of 0.2, and for expected losses around 0.1 the COR is higher still.

[Chart: COR by number of expected claims, for expected claims below one per year]

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with very fat tails are those with expected losses less frequent than one per year; they can have much fatter tails, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” they mean to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely if ever any data.

In broad terms, there are only a few possibilities for these subjective decisions…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  By definition, these phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called a “bubble”.  When past history suggests an unfavorable trend, human contagion also overplays the trend, and markets for risks crash.

The modelers who wanted to use the zero-contagion models call this “fat tails”.  It is seen as an unusual model only because the zero-contagion model, with its simpler math, was so commonly used.

RISKVIEWS suggests that when communicating that the approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified.  An assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that involve humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we express the 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model:

[Chart: Riskiness scale, as multiples of standard deviation]

So in the end the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation (where standard deviation is a measure of the average spread of the observed data).  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with the Richter scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where for some reason we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Why some think that there is No Need for Storm Shelters

May 22, 2013

The BBC featured a story about the dearth of storm shelters in the area hit last week by tornadoes.

Why so few storm shelters in Tornado Alley hotspot?

The story goes on to discuss the fact that Americans, especially in red states like Oklahoma, strongly prefer keeping the government out of the business of providing things like storm shelters, allowing that to be an individual option.  It then reports that few individuals opt to spend their own money on shelters.

The answer might well be in the numbers…

Below, from the National Oceanic and Atmospheric Administration (NOAA) is a list of the 25 deadliest tornadoes in US history:

1. Tri-State (MO, IL, IN) – March 18, 1925 – 695 deaths
2. Natchez, MS – May 6, 1840 – 317 deaths
3. St. Louis, MO – May 27, 1896 – 255 deaths
4. Tupelo, MS – April 5, 1936 – 216 deaths
5. Gainesville, GA – April 6, 1936 – 203 deaths
6. Woodward, OK – April 9, 1947 – 181 deaths
7. Joplin, MO – May 22, 2011 – 158 deaths
8. Amite, LA, Purvis, MS – April 24, 1908 – 143 deaths
9. New Richmond, WI – June 12, 1899 – 117 deaths
10. Flint, MI – June 8, 1953 – 116 deaths
11. Waco, TX – May 11, 1953 – 114 deaths
12. Goliad, TX – May 18, 1902 – 114 deaths
13. Omaha, NE – March 23, 1913 – 103 deaths
14. Mattoon, IL – May 26, 1917 – 101 deaths
15. Shinnston, WV – June 23, 1944 – 100 deaths
16. Marshfield, MO – April 18, 1880 – 99 deaths
17. Gainesville, GA – June 1, 1903 – 98 deaths
18. Poplar Bluff, MO – May 9, 1927 – 98 deaths
19. Snyder, OK – May 10, 1905 – 97 deaths
20. Comanche, IA & Albany, IL – June 3, 1860 – 92 deaths
21. Natchez, MS – April 24, 1908 – 91 deaths
22. Worcester, MA – June 9, 1953 – 90 deaths
23. Starkville, MS to Waco, AL -April 20, 1920 – 88 deaths
24. Lorain & Sandusky, OH – June 28, 1924 – 85 deaths
25. Udall, KS – May 25, 1955 – 80 deaths

Looks scary and impressively dangerous.  Until you look more carefully at the dates.  Most of those events are OLD.  In fact, if you look at this as a histogram, you see something interesting…

[Chart: Deadliest US tornadoes by decade]

You see from this chart why there are so few storm shelters.  Between the 1890s and the 1950s, there were at least two very deadly tornadoes per decade.  Enough to keep people scared.  But in the five decades from the 1960s through the 2000s, not a single event made the list.  50 years is a long time to go between times when someone somewhere in the US needed a storm shelter to protect them from a very deadly storm.
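The pattern behind that chart can be recovered directly from the NOAA list above.  A short sketch that tallies the 25 deadliest tornadoes by decade (years transcribed from the list):

```python
# Tally the 25 deadliest US tornadoes (years from the NOAA list above) by decade.
from collections import Counter

years = [1925, 1840, 1896, 1936, 1936, 1947, 2011, 1908, 1899, 1953, 1953,
         1902, 1913, 1917, 1944, 1880, 1903, 1927, 1905, 1860, 1908, 1953,
         1920, 1924, 1955]

by_decade = Counter(year // 10 * 10 for year in years)

# Crude text histogram: one star per event in each decade.
for decade in sorted(by_decade):
    print(f"{decade}s: {'*' * by_decade[decade]}")
```

Every decade from the 1890s through the 1950s shows at least two entries, while the 1960s through the 2000s show none at all.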

This is not to say that there have not been storms in the past 50 years.  The chart below, from the Washington Post, shows the losses from tornadoes for that same 50 year period, and the numbers are not small.

It is RISKVIEWS’ guess that in the face of smaller, less deadly but still destructive storms, people are much more likely to attribute their own good outcome to some innate talent that they have and the losers do not.  Sort of like the folks who have had one or several good experiences at the slot machines and who believe that they have a talent for gambling.

Another reason is that almost 45% of storm fatalities are folks who live in trailers.  They often will not even have the option to build their own storm shelter.  That is probably something that could be addressed by regulations regarding the zoning of trailer parks.

Proper risk management can only be done in advance.  The risk management second guessing that is done after the fact helps to create a tremendous drag on society.  We are forced into spending money to prevent recurrence of the last disaster, regardless of whether that expenditure makes any sense at all on the basis of frequency and severity of the potential adverse events or not.

We cannot see the future as clearly as we can see the past.  We can only prepare for some of the possible futures. 

The BBC article stands on the side of that discussion that looks back after the fact and finds fault with whoever did not properly see the future exactly as clearly as they are now able to see the past.

A simple recent example of this is the coverage of the Boston Marathon bombers.  Much has been made of the fact that there were warnings about one or more members of the family before the event.  But no one has chosen to mention how many others there were about whom similar, or even much more dire, warnings were given but who never committed bombings.  It seems quite likely that the warnings about these people were dots in a stream of hundreds of thousands of similar warnings.

Getting Paid for Risk Taking

April 15, 2013

Consideration for accepting a risk needs to be at a level that will sustain the business and produce a return that is satisfactory to investors.

Investors usually want additional return for extra risk.  This is one of the most misunderstood ideas in investing.

“In an efficient market, investors realize above-average returns only by taking above-average risks.  Risky stocks have high returns, on average, and safe stocks do not.”

Baker, M. Bradley, B. Wurgler, J.  Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly

But their study found that stocks in the top quintile of trailing volatility had a real return of -90%, vs. a real return of 1000% for the stocks in the bottom quintile.

But the thinking is wrong.  Excess risk does not produce excess return.  The cause and effect are wrong in the conventional wisdom.  The original statement of this principle may have been

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
Alfred Marshall 1890 Principles of Economics

Marshall has it right.  There are only “hopes” of great gains.  There is no invisible hand that forces higher risks to return higher gains.  Some of the higher risk investment choices are simply bad choices.

Insurers’ opportunity to make “great gains” out of “risks of great losses” comes when they are determining what consideration, or price, they will require to accept a risk.  Most insurers operate in competitive markets that are not completely efficient.  Individual insurers do not usually set the price in the market, but there is a range of prices at which insurance is purchased in any time period.  Certainly the process that an insurer uses to determine the price that makes a risk acceptable to accept is a primary determinant of the profits of the insurer.  If that price contains a sufficient load for the extreme risks that might threaten the existence of the insurer, then over time, the insurer has the ability to hold and maintain sufficient resources to survive some large loss situations.

One common goal conflict that leads to problems with pricing is the conflict between sales and profits.  In insurance, as in many businesses, it is quite easy to increase sales by lowering prices.  In most businesses, it is very difficult to keep up that strategy for very long, as the lower profits or outright losses from inadequate prices are quickly realized.  In insurance, the premiums are paid in advance, sometimes many years in advance of when the insurer must provide the promised insurance benefits.  If provisioning is tilted towards the point of view that supports the consideration, the pricing deficiencies will not be apparent for years.  So insurance is particularly susceptible to the tension between volume of business and margins for risk and profits, and since sales is a more fundamental need than profits, the margins often suffer.

As just mentioned, insurers simply do not know for certain what the actual cost of providing an insurance benefit will be.  They cannot know it with the degree of certainty that businesses in other sectors can know their cost of goods sold.  The appropriateness of pricing will often be validated in the market.  Follow-the-leader pricing can lead a herd of insurers over the cliff.  The whole sector can get pricing wrong for a time.  Until, sometimes years later, the benefits are collected and their true cost is known.

“A decade of short sighted price slashing led to industry losses of nearly $3 billion last year.”  Wall Street Journal June 24, 2002

Pricing can also go wrong at the individual case level.  The “Winner’s Curse” sends business to the insurer who most underestimates the riskiness of a particular risk.

There are two steps to reflecting risk in pricing.  The first step is to capture the expected loss properly.  Most of the discussion above relates to this step and the major part of pricing risk comes from the possibility of missing that step as has already been discussed.  But the second step is to appropriately reflect all aspects of the risk that the actual losses will be different from expected.  There are many ways that such deviations can manifest.

The following is a partial listing of the risks that might be examined:

  • Type A Risk—Short-Term Volatility of cash flows in 1 year
  • Type B Risk—Short-Term Tail Risk of cash flows in 1 year
  • Type C Risk—Uncertainty Risk (also known as parameter risk)
  • Type D Risk—Inexperience Risk relative to full multiple market cycles
  • Type E Risk—Correlation to a top 10
  • Type F Risk—Market value volatility in 1 year
  • Type G Risk—Execution Risk regarding difficulty of controlling operational losses
  • Type H Risk—Long-Term Volatility of cash flows over 5 or more years
  • Type J Risk—Long-Term Tail Risk of cash flows over 5 or more years
  • Type K Risk—Pricing Risk (cycle risk)
  • Type L Risk—Market Liquidity Risk
  • Type M Risk—Instability Risk regarding the degree to which the risk parameters are stable

See “Risk and Light” or “The Law of Risk and Light”.

There are also many different ways that risk loads are specifically applied to insurance pricing.  Three examples are:

  • Capital Allocation – Capital is allocated to a product (based upon the provisioning) and the pricing then needs to reflect the cost of holding the capital.  The cost of holding capital may be calculated as the difference between the risk free rate (after tax) and the hurdle rate for the insurer.  Some firms alternately use the difference between the investment return on the assets backing surplus (after tax) and the hurdle rate.  This process assures that the pricing will support achieving the hurdle rate on the capital that the insurer needs to hold for the risks of the business.  It does not reflect any margin for the volatility in earnings that the risks assumed might create, nor does it necessarily include any recognition of parameter risk or general uncertainty.
  • Provision for Adverse Deviation – Each assumption is adjusted to provide for worse experience than the mean or median loss.  The amount of stress may be at a predetermined confidence interval (Such as 65%, 80% or 90%).  Higher confidence intervals would be used for assumptions with higher degree of parameter risk.  Similarly, some companies use a multiple (or fraction) of the standard deviation of the loss distribution as the provision.  More commonly, the degree of adversity is set based upon historical provisions or upon judgement of the person setting the price.  Provision for Adverse Deviation usually does not reflect anything specific for extra risk of insolvency.
  • Risk Adjusted Profit Target – Using either or both of the above techniques, a profit target is determined and then that target is translated into a percentage of premium or of assets to make for a simple risk charge when constructing a price indication.
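To make the first of those approaches concrete, here is a minimal sketch of a capital allocation risk load.  Every number and name below is a made-up illustration; only the structure (cost of capital as allocated capital times the spread between the hurdle rate and the after-tax risk free rate) follows the description above:

```python
# Capital-allocation risk load: the price must cover the cost of holding
# the capital allocated to the product.  All inputs are illustrative.

def capital_cost_load(allocated_capital: float,
                      hurdle_rate: float,
                      risk_free_after_tax: float) -> float:
    """Annual cost of holding the allocated capital."""
    return allocated_capital * (hurdle_rate - risk_free_after_tax)

expected_loss = 1_000_000    # expected claims for the block of business
expenses = 150_000
load = capital_cost_load(allocated_capital=2_000_000,
                         hurdle_rate=0.12,
                         risk_free_after_tax=0.03)

price = expected_loss + expenses + load
print(f"risk load: {load:,.0f}")
print(f"price:     {price:,.0f}")
```

Note that, as the text says, this load targets the hurdle rate on capital; it carries no explicit margin for earnings volatility or parameter risk.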

The consequences of failing to recognize an aspect of risk in pricing will likely be that the firm will accumulate larger than expected concentrations of business with higher amounts of that risk aspect.  See “Risk and Light” or “The Law of Risk and Light”.

To get Consideration right you need to (1) regularly get a second opinion on price adequacy, either from the market or from a reliable experienced person; (2) constantly update your view of your risks in the light of emerging experience and market feedback; and (3) recognize that high sales are a possible market signal of underpricing.

This is one of the seven ERM Principles for Insurers

During a Crisis – A Lesson from Fire Fighters

December 10, 2012


The fire cycle: “The action-cycle of a fire from birth to death follows a certain pattern.  The fire itself may vary in proportion from insignificance to conflagration, but regardless of its proportions, origin, propagation or rate of progression, the cycle or pattern of controlling it includes these phases:

1. the period between discovery and the transmittal of the alarm or alerting of the fire forces;

2. the period between receipt of alarm by the fire service and arrival of firemen at the scene of the fire; and, finally,

3. the period between arrival on the fire ground and final extinguishment of the fire itself.”

It is important in fire fighting to make sure that the right things happen during each phase and that each step takes as little time as possible.  For the first phase, that means having fire detection equipment in place and working properly that produces a signal that will be noticed and conveyed to the fire forces.  In the second phase, the fire fighters need to be organized to respond appropriately to the alarm.  And the third phase includes the process of diagnosing the situation and taking the necessary steps to put out the fire.

That is a good process model for risk managers to contemplate.  Ask yourself and your staff:

  1. This is about the attitude and preparedness of company staff to accept that there may be a problem.  How long will it be before we know when an actual crisis hits the company?  How do our alarms work?  Are they all in functioning order?  Or will those closest to the problems delay notifying you of a potential problem?  Sometimes with fires and company crises, an alarm sounds and it is immediately turned off.  The presumption is that everything is normal and the alarm must be malfunctioning.  Or perhaps that the alarm is correct, but that it is calibrated to be too sensitive and there is not a significant problem.  As risk manager, you should urge everyone to err on the side of reporting every possible situation.  Better to have some extra responses than to have events, like fires, rage completely out of control before calling for help.
  2. This is about the preparedness of risk management staff to begin to respond to a crisis.  One problem that many risk management programs face is that their main task seems to be measuring and reporting risk positions.  If that is what people believe is their primary function, then the risk management function will not attract any action oriented people.  If that is the case in your firm, then you as risk manager need to determine who are the best people to recruit as responders and build a rapport with them in advance of the next crisis so that when it happens, you can mobilize their help.  If the risk staff is all people who excel at measuring, then you also need to define their roles in an emergency – and have them practice those roles.  No matter what, you do not want to find out who will freeze in a crisis during the first major crisis of your tenure.  And freezing (rather than panic) is by far the most common reaction.  You need to find those few people whose reaction to a crisis is to go into a totally focused, active survival mode.
  3. This is about being able to properly diagnose a crisis and to execute the needed actions.  Fire Fighters need to determine the source of the blaze, wind conditions, evacuation status and many other things to make their plan for fighting the fire.  They usually need to form that plan quickly, mobilize and execute the plan effectively, making both the planned actions and the unplanned modifications happen as well as can be done.  Risk managers need to perform similar steps.  They need to understand the source of the problem, the conditions around the problem that are outside of the firm and the continuing involvement of company employees, customers and others.  While risk managers usually do not have to form their plan in minutes as fire fighters must, they do have to do so quickly.  Especially when there are reputational issues involved, swift and sure initial actions can make a world of difference.  And execution is key.  Getting this right means that the risk manager needs to know, in advance of a crisis, what sorts of actions can be taken in a crisis and that the company staff has the ability to execute.  There is no sense planning to take actions that require the physical prowess of Navy Seals if your staff are a bunch of ordinary office workers.  And recognizing the limitations of the rest of the world is important also.  If your crisis affects many others, they may not be able to provide the help from outside that you may have planned on.  If the crisis is unique to you, you need to recognize that some will question getting involved in something that they do not understand but that may create large risks for their organizations.


What Do Your Threats Look Like?

December 6, 2012

Severe and intense threats are usually associated with dramatic weather events, terrorist attacks, earthquakes, nuclear accidents and such like.  When one of these types of threats is thought to be imminent, people will often cooperate with a cooperative ERM scheme, if one is offered.  But when the threat actually happens, there are four possible responses: cooperation with the disaster plan, becoming immobilized and ignoring the disaster, panic, and anti-social advantage taking.  Disaster planning sometimes goes no further than developing a path for people with the first response.  A full disaster plan would need to take into account all four reactions.  Plans would be made to deal with the immobilized and panicked people and to prevent the damage from the anti-social.  In businesses, a business continuity or disaster plan would fall into this category of activity.

When businesses do a first assessment, risks are often displayed in four quadrants: Low Likelihood/Low Severity; Low Likelihood/High Severity; High Likelihood/Low Severity; and High Likelihood/High Severity.  It is extremely difficult to survive if your risks are High Likelihood/High Severity, so few businesses find that they have risks in that quadrant.  The severe threats that businesses face are therefore usually Low Likelihood/High Severity.

Highly Cooperative mode of Risk Management means that everyone is involved in risk management because you need everyone to be looking out for the threats.  This falls apart quickly if your threats are not Severe and Intense because people will question the need for so much vigilance.

Highly Complex threats usually come from the breakdown of a complex system of some sort that you are counting upon.  For an insurer, this usually means that events that they thought had low interdependency end up with a high correlation.  Or else a new source of large losses emerges from an existing area of coverage.  Other complex threats that threaten the life insurance industry include the interplay of financial markets and competing products, such as happened in the 1980s, when money market funds threatened to suck all of the money out of insurers, or in the 1990s, when variable products decimated the more traditional guaranteed minimum return products.

In addition, financial firms all create their own complex threat situations because they tend to be exposed to a number of different risks.  Keeping track of the magnitude of several different risk types and their interplay is itself a complex task.  Without very complex risk evaluation tools and the help of trained professionals, financial firms would be flying blind.  But these risk evaluation tools themselves create a complex threat.

Highly Organized mode of Risk Management means that there are many very different specialized roles within the risk management process.  There may be different teams doing risk assessment, risk mitigation and assurance for each separate threat.  This can only make sense when the rewards for taking these risks are large, because this mode of risk management is very expensive.

Highly Unpredictable Threats are common during times of transition when a system is reorganizing itself.  “Uncertain” has been the word most often used in the past several years to describe the current environment.  We just are not sure what will be hitting us next.  Neither the type, timing, frequency nor severity of these threats is known in advance.

Businesses operating in less developed economies will usually see this as their situation.  Governments change, regulations change, the economy dips and weaves, access to resources changes abruptly, wars and terrorism are real threats.

Highly Adaptable mode of Risk Management means that you are ready to shift among the other three modes at any time and operate in a different mode for each threat.  The highly adaptable mode of risk management also allows for quick decisions to abandon the activity that creates the threat at any time.  But taking up new activities with other unique threats is less of a problem under this mode.  Firms operating under the highly adaptive mode usually make sure that their activities do not all lead to a single threat and that they are highly diversified.

Benign Threats are things that will never do more than partially reduce earnings.  Small stuff.  Not good news, but not bad enough to lose any sleep over.

Low Cooperation mode of Risk Management means that individuals within their firm can be separately authorized to undertake activities that expand the threats to the firm.  The individuals will all operate under some rules that put boundaries around their freedom, but most often these firms police these rules after the action, rather than with a process that prevents infractions.  At the extreme of low cooperation mode of risk management, enforcement will be very weak.

For example, many banks have been trying to get by with a low cooperation mode of ERM.  Risk Management is usually separate and adversarial.  The idea is to allow the risk takers the maximum degree of freedom.  After all, they make the profits of the bank.  The idea of VaR is purely to monitor earnings fluctuations.  The risk management systems of banks had not even been looking for any possible Severe and Intense Threats.  As their risk shifted from simple “Credit” or “Market” to very complex instruments that had elements of both with highly intricate structures, there was not enough movement to the highly organized mode of risk management within many banks.  Without the highly organized risk management, the banks were unable to see the shift of those structures from highly complex threats to severe and intense threats.  (Or the risk staff saw the problem, but were not empowered to force action.)  The low cooperation mode of risk management was not able to handle those threats and the banks suffered large losses or simply collapsed.

Do we always underprice tail risk?

April 23, 2011

What in the world might underpricing mean when referring to a true tail risk? Adequacy of pricing assumes that someone actually can know the correct price.

But imagine something that has a true likelihood of 5% in any one period.  Now imagine 100 periods of randomly generated results.

Then, for each of three 100-period trials, look at rolling 20-year windows.  The tables below show the frequency distribution of the observed event rate across the 80 windows in each trial.


20-Year Observed Frequency (Trial 1, out of 80 windows)
0%: 45
5%: 24
10%: 12
15%: 0
20%: 0

20-Year Observed Frequency (Trial 2, out of 80 windows)
0%: 9
5%: 28
10%: 24
15%: 8
20%: 7

20-Year Observed Frequency (Trial 3, out of 80 windows)
0%: 50
5%: 11
10%: 20
15%: 0
20%: 0

If the “tail risks” are 1/20 events and you do not have any information other than observations of experience, then this is the sort of result you will get.  The observed frequency will jump around.

If that is the situation, how would anyone get the price “correct”?

But suppose that you then set a price for this tail risk. Let’s just say you picked 15% because that is what your competitor is doing.

And you have a very patient set of investors. They will judge you by 5 year results. So then we plot the 5 year results.

And you see that the profits are quite a wild ride.
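That wild ride can be sketched with another small simulation: charge a 15% rate against a true 5% event, and average the results over consecutive 5-year windows.  The unit loss severity, seed and horizon are illustrative assumptions of mine:

```python
import random

# Price a true 1-in-20 risk at a 15% rate (unit loss severity assumed)
# and look at profit averaged over consecutive 5-year windows.
RATE, TRUE_PROB, YEARS, WINDOW = 0.15, 0.05, 100, 5

random.seed(7)  # arbitrary seed
profits = [RATE - (1.0 if random.random() < TRUE_PROB else 0.0)
           for _ in range(YEARS)]
five_year = [sum(profits[i:i + WINDOW]) / WINDOW
             for i in range(0, YEARS, WINDOW)]
print([round(p, 2) for p in five_year])
```

Even though the expected profit per period is a healthy 0.10, any given 5-year window can swing from steady gains to a deep loss, which is what the patient investors would actually see.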

Now in the insurance sector, what seems to happen is that when there are runs of good results people tend to cut rates to pick up market share. And when the profits run to losses, people tend to raise rates to make up for losses.

So again we are stymied from knowing what is the correct rate since the market goes up and down with a lag to experience.

Is the result a tendency to underprice?  You be the judge.

Leave Something on the Table

April 19, 2011

What was the difference between the banks and insurers with high tech risk management programs that did extremely poorly in the GFC from those with equally high tech risk management programs who did less poorly?

One major difference was the degree to which they believed in their models.  Some firms used their models to tell them exactly where the edge of the cliff was so that they could race at top speed right at the edge of the cliff.  What they did not realize was that they did not know, nor could they know, the degree to which the edge of that cliff was sturdy enough to take their weight.  Their intense reliance on their models, most often models that focused like a laser on the single most important measure of risk, left other risks in the dark.  And those other risks undermined the edge of the cliff.

Others with equally sophisticated models were not quite so willing to believe that it was perfectly safe right at the edge of the cliff.  They were aware that there were things that they did not know.  Things that they were not able to measure.  Risks in the dark.  They took the information from their models about the edge of the cliff and they decided to stay a few steps away from that edge.

They left something on the table.  They did not seek to maximize their risk adjusted returns.  Maximizing risk adjusted return in the ultimate sense involves identifying the opportunity with the highest risk adjusted return and taking advantage of that opportunity to the maximum extent possible, then looking to deploy remaining resources to the second highest risk adjusted return, and so on.

The firms who had smaller losses in the crisis did not seek to maximize their risk adjusted return.

They did not maximize their participation in the opportunity with the highest risk adjusted return.  They spread their investments around among a variety of opportunities: some in the highest risk adjusted return choice and other amounts in lesser, but still acceptable, return opportunities.

So when it came to pass that everyone found that their models were totally in error regarding the risk in that previously top opportunity, they were not so concentrated in that possibility.

They left something on the table and therefore had something left at the end of that round of the game.

Where to Draw the Line

March 22, 2011

“The unprecedented scale of the earthquake and tsunami that struck Japan, frankly speaking, were among many things that happened that had not been anticipated under our disaster management contingency plans.”  Japanese Chief Cabinet Secretary Yukio Edano.

In the past 30 days, there have been 10 earthquakes of magnitude 6 or higher.  In the past 100 years, there have been over 80 earthquakes of magnitude 8.0 or greater.  The Japanese are reputed to be the most prepared for earthquakes.  And also to experience the most earthquakes of any settled region on the globe.  By some counts, Japan experiences 10% of all earthquakes that are on land and 20% of all severe earthquakes.

But where should they, or anyone making risk management decisions, draw the line in preparation?

In other words, what amount of safety are you willing to pay for in advance, and for what magnitude of loss event are you willing to live with the consequences?

That amount is your risk tolerance.  You will do what you need to do to manage the risk – but only up to a certain point.

That is because too much security is too expensive, too disruptive.

You are willing to tolerate the larger loss events because you believe them to be sufficiently rare.

In New Zealand, that cost/risk trade-off thinking allowed them to set a standard for retrofitting existing structures at 1/3 of the standard for new buildings.  But they also allowed a 20-year transition.  Not as much of an issue now; many of the older buildings, at least in Christchurch, are gone.

But experience changes our view of frequency.  We actually change the loss distribution curve in our minds that is used for decision making.

Risk managers need to be aware of these shifts.  We need to recognize them.  We want to say that these shifts represent shifts in risk appetite.  But we need to also realize that they represent changes in risk perception.  When our models do not move as risk perception moves, the models lose fundamental credibility.

In addition, when modelers do things like what some of the cat modeling firms are doing right now, that is moving the model frequency when people’s risk perceptions are not moving at all, they also lose credibility for that.

So perhaps you want scientists and mathematicians creating the basic models, but someone who is familiar with the psychology of risk needs to learn an effective way to integrate those changes in risk perceptions (or lack thereof) with changes in models (or lack thereof).

The idea of moving risk appetite and tolerance up and down as management gets more or less comfortable with the model estimations of risk might work.  But you are still then left with the issue of model credibility.

What is really needed is a way to combine the science/math with the psychology.

Market consistent models come the closest to accomplishing that.  The pure math/science folks see the herding aspect of market psychology as a miscalibration of the model.  But they are just misunderstanding what is being done.  What is needed is an ability to create adjustments to risk calculations that are applied to non-traded risks that allow for the combination of science & math analysis of the risk with the emotional component.

Then the models will accurately reflect how and where management wants to draw the line.

Sins of Risk Measurement

February 5, 2011
Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line: balancing what is highly unlikely against what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain with increasing size of the potential loss numbers.  In other words, the larger the potential loss, the less certain you can be about how likely it might be.  Downplaying uncertainty is usually a sin of omission.  It is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements is implicitly treated as if they were all calculated totally consistently.  However, in fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability, and using tools that have totally different degrees of certitude built into them.  In the end, this will encourage decisions to take on whatever risks we underestimate the most through this process.

3.  Validating to Confirmation.  When we validate risk models, it is common to stop the validation process when we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted and if validation fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased toward finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective Parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen because the experience was good or bad, to deliberately influence the outcome.

5.  Hiding behind Math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what that entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying Wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that their risk calculations are also associated with potential gains will be thought a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

Radical Collaboration

June 8, 2010

There are situations that require collaboration if they are going to be resolved in a manner that produces the largest combined benefit or the smallest combined loss.  This is not the “greatest good for the greatest number” objective of socialism – it is simple efficiency.  Collaborative results can be greater than competitive results.  It is the reason that a sports team where everyone is playing the same strategy does better than the team where each individual seeks to do their personal best regardless of what everyone else is doing.

There are also situations where the application of individual and separate and uncoordinated actions will result in a sub optimal conclusion and where the famous Invisible Hand points in the wrong direction.

You see, the reason why the Invisible Hand ever works is because by the creative destruction of wrong turns, the individual actions find a good way to proceed and eventually all resources are marshaled in following that optimal way of proceeding.  But for the Invisible Hand to be efficient, the destruction part of creative destruction needs to be small relative to the creative part.  For the Collaborative effort to be efficient, the collaboration needs to result in selection of an efficient approach without the need for destruction through a collaborative decision making process.  For the Collaborative effort to be necessary, the total cost of the risk management effort needs to exceed the amount that single firms could afford.

Remember the story of the Iliad. It is the story of armies that worked entirely on the Invisible Hand principle. Each warrior decided on his own what he would do, how and when he would fight.  It was the age of Heroes.

The story of the success of Alexander and later the Roman armies was the success of armies that were collaborative.  The age of Heroes was over.  The efficiency of individual Heroes each finding their own best strategy and tactics was found to be inferior to the collaborative efforts of a group of soldiers who were all using the same strategy and tactics in coordination.

There are many situations in risk management where some sort of collaboration needs to be considered.

The Gulf Oil leak situation seems like it might be one of those.  BP is now admitting that it did not have the resources available or even the expertise to do what needs to be done.  And perhaps, this leak is a situation where the collective cost of their failure is much higher than society’s tolerance for this sort of loss.  But the frequency of this sort of problem has to date been so very low that having BP provide those capabilities may not have made economic sense.

However, there are hundreds of wells in the Gulf.  With clear hindsight, the cost of developing and maintaining the capacity to deal with this sort of emergency could have been borne jointly by all of the drillers in the Gulf.

There are many situations in risk management where collaboration would produce much better results than separate actions.  Mostly in cases where a common threat faces many where to overcome the threat would take more resources than any one could muster.

Remember the situation with LTCM?  No one bank could have helped LTCM alone, they would have gone down with LTCM.  But by the forced collaborative action, a large group of banks were able to keep the situation from generating large losses.  Now this action rankles many free marketeers, but it is exactly the sort of Radical Collaboration that I am talking about.  It did not involve any direct government funding.  It used the balance sheets of the group of banks to stabilize the situation and allow for an orderly disposition of LTCM’s positions.  In the end, I believe that it was reported that the banks did not end up taking a loss either.  (That was mostly an artifact of depressed market prices at the time of the rescue, I would guess.)

The exact same sort of thinking does NOT seem to have been tried with Lehman.  If Paulson could not find a single firm to rescue Lehman, he was not going to do anything.  But looking back and remembering LTCM, Paulson could have arranged an LTCM style rescue for Lehman.  In hindsight, that, even with government guarantees to sweeten the pot, would have been better than the financial carnage that ensued.

Perhaps Paulson was one of the free marketeers that hated the LTCM “bailout”.  But in the end, he trampled the free market much worse than his predecessors did with LTCM when he bailed out AIG without even giving any thought to terms of the bailout.

Collaboration might have seemed radical to Paulson.  But it is sometimes needed for risk management.

Holding Sufficient Capital

May 23, 2010

From Jean-Pierre Berliet

The companies that withstood the crisis and are now poised for continuing success have been disciplined about holding sufficient capital. However, the issue of how much capital an insurance company should hold beyond requirements set by regulators or rating agencies is contentious.

Many insurance executives hold the view that a company with a reputation for using capital productively on behalf of shareholders would be able to raise additional capital rapidly and efficiently, as needed to execute its business strategy. According to this view, a company would be able to hold just as much “solvency” capital as it needs to protect itself over a one year horizon from risks associated with the run off of in-force policies plus one year of new business. In this framework, the capital need is calculated to enable a company to pay off all its liabilities, at a specified confidence level, at the end of the one year period of stress, under the assumption that assets and liabilities are sold into the market at then prevailing “good prices”. If more capital were needed than is held, the company would raise it in the capital market.
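The "solvency" calculation described above can be stylized as: capital equals the loss at the chosen confidence level, less the expected loss already provisioned for.  A minimal sketch, where the loss distribution, confidence level and parameters are all illustrative assumptions and not actual Solvency II mechanics:

```python
import random
import statistics

# Stylized one-year solvency capital: capital = loss at the chosen
# confidence level, less the expected loss already provisioned for.
random.seed(42)  # arbitrary seed
CONFIDENCE = 0.995  # e.g. a 1-in-200 standard

# Hypothetical one-year loss distribution (lognormal, arbitrary parameters)
losses = sorted(random.lognormvariate(mu=4.0, sigma=0.5) for _ in range(100_000))
expected_loss = statistics.fmean(losses)
var_level = losses[int(CONFIDENCE * len(losses)) - 1]
solvency_capital = var_level - expected_loss

print(f"expected loss:  {expected_loss:.1f}")
print(f"99.5% loss:     {var_level:.1f}")
print(f"capital needed: {solvency_capital:.1f}")
```

The going-concern objection in the next paragraph amounts to saying that this calculation is too optimistic: in a crisis the liquidation prices behind `losses` would be worse, and the fresh capital assumed to be available at the end of the year might not be raisable at all.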

Executives with a “going concern” perspective do not agree. They observe first that solvency capital requirements increase with the length of the planning horizon. Then, they correctly point out that, during a crisis, prices at which assets and liabilities can be sold will not be “good times” prices upon which the “solvency” approach is predicated. Asset prices are likely to be lower, perhaps substantially, while liability prices will be higher. As a result, they believe that the “solvency” approach, such as the Solvency II framework adopted by European regulators, understates both the need for and the cost of capital. In addition, these executives remember that, during crises, capital can become too onerous or unavailable in the capital market. They conclude that, under a going concern assumption, a company should hold more capital, as an insurance policy against many risks to its survival that are ignored under a solvency framework.

The recent meltdown of debt markets made it impossible for many banks and insurance companies to shore up their capital positions. It prompted federal authorities to rescue AIG, Fannie Mae and Freddie Mac. The “going concern” view appears to have been vindicated.

Directors and CEOs have a fiduciary obligation to ensure that their companies hold an amount of capital that is appropriate in relation to risks assumed and to their business plan. Determining just how much capital to hold, however, is fraught with difficulties because changes in capital held have complex impacts about which reasonable people can disagree. For example, increasing capital reduces solvency concerns and the strength of a company’s ratings while also reducing financial leverage and the rate of return on capital that is being earned; and conversely.

Since Directors and CEOs have an obligation to act prudently, they need to review the process and analyses used to make capital strategy decisions, including:

  • Economic capital projections, in relation to risks assumed under a going concern assumption, with consideration of strategic risks and potential systemic shocks, to ensure company survival through a collapse of financial markets during which capital cannot be raised or becomes exceedingly onerous
  • Management of relationships with leading investors and financial analysts
  • Development of reinsurance capacity, as a source of “off balance sheet” capital
  • Management of relationships with leading rating agencies and regulators
  • Development of “contingent” capital capacity.

The integration of risk, capital and business strategy is very important to success. Directors and CEOs cannot let actuaries and finance professionals dictate how this is to happen, because they and the risk models they use have been shown to have important blind spots. In their deliberations, Directors and CEOs need to remember that models cannot reflect credibly the impact of strategic risks. Models are bound to “miss the point” because they cannot reflect surprises that occur outside the boundaries of the closed business systems to which they apply.

©Jean-Pierre Berliet   Berliet Associates, LLC (203) 972-0256  jpberliet@att.net

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report has been posted to the IAA website this week.

It raises a point that should by now be fairly obvious to everyone: you just cannot manage risks without looking at them from multiple angles.

Here are 8 different angles on risk that are discussed in the report, along with my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   –  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there was a significant number of subprime mortgages where the gross mortgage payments were higher than the gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  –  Some firms did all of their analysis on an economic basis and kept saying that they were fine even as their reported financials showed them dying.  They should have known in advance the risk that the accounting view would differ from their analysis.
  3. REGULATORY MEASURE OF RISK  –  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right,” you need to know how important others, including your regulator, will see things.  Better to have a discussion with the regulator long before a problem arises.  You are just not credible when, in the middle of what the regulator sees as a crisis, you argue that the regulatory view is off target.
  4. SHORT TERM VS. LONG TERM RISKS  –  While it is really nice that everyone has agreed to focus on a one year view of risks, for situations that may well extend beyond one year it can be vitally important to know how the risk might impact the firm over a multi year period.
  5. KNOWN RISKS AND EMERGING RISKS  –  The fact that your risk model did not include anything for volcano risk is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  –  Again, while an agreement on a 1 in 200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  –  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  –  This is another way of saying to focus on the economic vs the accounting.
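Point 7 above is worth making concrete.  Below is a minimal sketch (a made-up two-risk example, not taken from the CARE report): diversification lowers the 1-in-200 loss of the combined portfolio, but the absolute worst case is unchanged.

```python
import random

random.seed(0)
N = 100_000

# Two independent risks, each losing 100 with probability 1% (else 0).
a = [100 * (random.random() < 0.01) for _ in range(N)]
b = [100 * (random.random() < 0.01) for _ in range(N)]

def var(losses, p=0.995):
    """Loss at the p-th percentile of the simulations -- here the 1-in-200 loss."""
    return sorted(losses)[int(p * len(losses))]

sum_of_vars = var(a) + var(b)                       # stand-alone 1-in-200 losses, added
portfolio_var = var([x + y for x, y in zip(a, b)])  # 1-in-200 loss of the combined book

print(sum_of_vars, portfolio_var)   # 200 100 -- diversification lowers the percentile loss
print(max(a) + max(b))              # 200     -- but the worst possible total is unchanged
```

Diversification spreads the percentile loss, but if both risks hit in the same year, the full 200 is still on the table.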

Read the report to get the more measured and complete view prepared by the 15 actuaries from US, UK, Australia and China who participated in the working group to prepare the report.

Comprehensive Actuarial Risk Evaluation

Much Worse than Anticipated

May 5, 2010

Arianna Huffington recently pointed out that, time and time again, the crises we face turn out to be much worse than we thought they would be.

And she has a good point there.  One that is important for risk managers to contemplate, and one that relates to the question we are so often asked after a major loss…

Why did your risk model get that wrong?

There is a correct answer, but it is one that we can never successfully use.

In situations where major risks are being underestimated widely in the market place, the risk managers who correctly size the worst risks can run into two responses:

  1. Their firm believes their evaluation of the risk and exits the exposure as rapidly as they can.
  2. Their firm does not believe their evaluation and will only believe a risk evaluation that gives a similar (under) estimation of the risk as the rest of the market.

It is a survival of the underestimators.

And this doesn’t just apply to risk managers and risk models.  Who do you think buys a house on a flood plain?  Someone who has a clear and realistic view of the risk, or someone who vastly underestimates the risk?  The underestimator will outbid the realist every time.
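That flood plain auction can be sketched in a few lines (all numbers assumed for illustration): each bidder mis-estimates the flood cost, the highest bid comes from the lowest estimate, and so the winning owner systematically underestimates the true risk.

```python
import random

random.seed(1)

TRUE_RISK_COST = 100.0   # assumed expected flood cost of the house
HOUSE_VALUE = 1000.0     # assumed value ignoring flood risk

winning_estimates = []
for _ in range(10_000):
    # Ten bidders, each mis-estimating the flood cost with random error.
    estimates = [random.gauss(TRUE_RISK_COST, 40.0) for _ in range(10)]
    # Each bids HOUSE_VALUE minus perceived risk, so the lowest estimate wins.
    winning_estimates.append(min(estimates))

avg_winner = sum(winning_estimates) / len(winning_estimates)
print(round(avg_winner, 1))   # far below the true cost of 100
```

This is the survival of the underestimators in miniature: the selection happens at the point of sale, before the flood ever arrives.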

So after a flood, go around to those flooded out and ask if they expected this, and most will tell you that it is “much worse than we thought it would be”.

Many “emerging risks” and “black swans” are such because most people had underestimated the size of the risk or its likelihood.

And one way to think of it is to go back to Knight and realize that all profits are simply rewards for bearing uncertainty.  So when we find ourselves earning profits where we cannot figure out the uncertainty that drives them, maybe we should go back and figure it out.

The solution is not to curl up in a ball, nor is it to just ignore all risks that pose these potential major threats.  The solution is to take our best shot at really evaluating the risks and make our decisions, eyes wide open, to the possibility that things might just be Much Worse than Anticipated.

Maybe we need to regularly add a column to our risk reports, just to the right of the column labeled Risk, this one labeled “Worse Case”.

Many insurers with cat risk exposures will report the 1/250 loss potential that is the focus of rating agencies, but alongside that show a 1/500 loss potential to remind management of just how much worse it might get.
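A sketch of how those return-period figures come out of a simulation (using a made-up Pareto loss distribution, not any real book of business):

```python
import random

random.seed(2)

# Hypothetical annual cat losses in $m: Pareto-tailed, purely illustrative.
sims = sorted(10.0 * random.paretovariate(1.5) for _ in range(100_000))

def return_period_loss(sims, years):
    """Annual loss exceeded about once every 'years' years."""
    return sims[int(len(sims) * (1 - 1.0 / years))]

loss_250 = return_period_loss(sims, 250)
loss_500 = return_period_loss(sims, 500)
print(round(loss_250), round(loss_500))   # the 1/500 figure is markedly larger
```

With any heavy-tailed loss distribution, the step from 1/250 to 1/500 is large, which is exactly why the second column is worth showing to management.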

Some people complain that risk managers are just too pessimistic.  But to me this sort of practice just seems to be acting as an adult and facing our risks honestly.  Not with the intention that we stop taking risks.  Instead hoping that we stop experiencing losses that are MUCH WORSE THAN ANTICIPATED.

Volcano Risk 2

April 20, 2010

Top 10 European Volcanos in terms of people nearby and potential losses from an eruption:

Volcano – Country – Affected population – Value of residences at risk

  1. Vesuvius – Italy – 1,651,950 – $66.1bn
  2. Campi Flegrei – Italy – 144,144 – $7.8bn
  3. La Soufrière Guadeloupe – Guadeloupe, France – 94,037 – $3.8bn
  4. Etna – Italy – 70,819 – $2.8bn
  5. Agua de Pau – Azores, Portugal – 34,307 – $1.4bn
  6. Soufrière Saint Vincent – Saint Vincent, Caribbean – 24,493 – $1bn
  7. Furnas – Azores, Portugal – 19,862 – $0.8bn
  8. Sete Cidades – Azores, Portugal – 17,889 – $0.7bn
  9. Hekla – Iceland – 10,024 – $0.4bn
  10. Mt Pelée – Martinique, France – 10,002 – $0.4bn

http://www.strategicrisk.co.uk/story.asp?source=srbreaknewsRel&storycode=384008

Volcano Risk

April 20, 2010

Remarks from Giovanni Bisignani (International Air Transport Association) at the Press Breakfast in Paris

The Volcano

There was one risk that we could not forecast. That is the volcanic eruption which has crippled the aviation sector.  First in Europe, but we saw increasing global implications.  The scale of this crisis is now greater than 9/11 when US air space was closed for three days.  In lost revenue alone, this is costing the industry at least $200 million a day.  On top of that, airlines face added costs of extra fuel for re-routing and passenger care – hotel, food and telephone calls.

For Europe’s carriers – the most seriously impacted – this could not have come at a worse time.  As just mentioned, we already expected the region to have the biggest losses this year.  For each day that planes don’t fly the losses get bigger.  We are now into our fifth day of closed skies.  Let me restate that safety is our number one priority. But it is critical that we place greater urgency and focus on how and when we can safely re-open Europe’s skies.

We are far enough into this crisis to express our dissatisfaction on how governments have managed the crisis:

  • With no risk assessment
  • No consultation
  • No coordination
  • And no leadership

In the face of a crisis that some have estimated has already cost the European economy billions of Euros, it is incredible that it has taken five days for Europe’s transport ministers to organize a conference call.

What must be done?

International guidance is weak. The International Civil Aviation Organization (ICAO) is the specialized UN agency for aviation. ICAO has guidance on information dissemination but no clear process for opening or closing airspace. Closing airspace should be the responsibility of the national regulator with the support of the air navigation service provider.  They rely on information from meteorological offices and Volcanic Ash Advisory Centers.

Europe has a unique system.  The region’s decisions are based on a theoretical model for how the ash spreads.  This means that governments have not taken their responsibility to make clear decisions based on fact.  Instead, it has been the air navigation service providers who announced that they would not provide service. These decisions have been taken without adequately consulting the operators—the airlines. This is not an acceptable system, particularly when the consequences for safety and the economy are so large.

I emphasize that safety is our top priority. But we must make decisions based on the real situation in the sky, not on theoretical models. The chaos, inconvenience and economic losses are not theoretical. They are enormous and growing. I have consulted our member airlines who normally operate in the affected airspace. They report missed opportunities to fly safely.  One of the problems with the European system is that the situation is seen as black or white. If there is the possibility of ash then the airspace is closed.  And it remains closed until the possibility disappears with no assessment of the risk.

We have seen volcanic activity in many parts of the world but rarely combined with airspace closures and never at this scale. When Mount St. Helens erupted in the US in 1980, we did not see large scale disruptions because the decisions to open or close airspace were risk managed with no compromise on safety.

Today I am calling for urgent action to safely prepare for re-opening airspace based on risk and fact.  I have personally asked ICAO President Kobeh and Secretary General Benjamin to convene an urgent extra-ordinary meeting of the ICAO Council later today. The first purpose would be to define government responsibility for the decisions to open or close airspace in a coordinated and effective way based on fact—not theory.

Airlines have run test flights to assess the situation.  The results have not shown any irregularities and the data is being passed to governments and air navigation service providers to help with their assessment. Governments must also do their own testing. European states must focus on ways to re-open the airspace based on this real data and on appropriate operational procedures to maintain safety.  Such procedures could include special climb and descent procedures, day time flying, restrictions to specific corridors, and more frequent boroscopic inspections of engines.

We must move away from blanket closures and find ways to flexibly open airspace. Risk assessments should be able to help us to re-open certain corridors if not entire airspaces.  I have also urged Eurocontrol to also take this up. I urge them to establish a volcano contingency center capable of making coordinated decisions.  There is a meeting scheduled for this afternoon that I hope will result in a concrete action plan.

Longer-term, I have also asked the ICAO Council to expedite procedures to certify at what levels of ash concentration aircraft can operate safely.  Today there are no standards for ash concentration or particle size that aircraft can safely fly through. The result is zero tolerance. Any forecast ash concentration results in airspace closure. We are calling on aircraft and engine manufacturers to certify levels of ash that are safe.

Summary

1. Safety is our number one priority
2. Governments must reopen airspace based on data that tell us it is safe. If not all airspace, at least some corridors
3. Governments must improve the decision-making process with facts—not theory
4. Governments must communicate better, consulting with airlines and coordinating among stakeholders
5. And longer-term, we must find a way to certify the tolerance of aircraft for flying in these conditions

You might wonder about your own Volcano Risk.  Check out an explanation of what is covered by State Farm.

Finally, I got a question from the press about companies that I knew of that had prepared specifically for this event.  One more example of how the press misses the point.  ERM is not about guessing the future correctly.

For something as unique as this event, the best any company could have been expected to do would have been to anticipate the broad class of events that would cause extended disruptions of flights, to test the impact of such a disruption on their business operations, and to make decisions about the contingency plans that they might put in place to prepare for such disruptions.

Moral Hazard

January 13, 2010

Kevin Dowd has written a fine article titled “Moral Hazard and the Financial Crisis” for the Cato Journal.  Some of his very well articulated points include:

  • Moral Hazard comes from the ability of individuals to benefit from gains without having an equal share in losses.  (I would add that this has almost nothing to do with government bailouts.  It exists fully in the compensation of most executives of most firms in most economies.)
  • Bad risk models (Gaussian) that ignore abnormal market conditions.
  • Ignoring the fact that others in the market all have the same risk management strategy, and that the strategy does not work for the entire market at once.
  • Mark to Model where model is extremely sensitive to assumptions. 
  • Using models that were not designed for that purpose. 
  • Assumption of continuously liquid markets. 
  • Risk management system too rigid, resulting in easy gaming by traders. 
  • “the more sophisticated the [risk management] system, the more unreliable it might be.”
  • Senior management was out of control.  (and all CEOs are paid as if they were above average!)
  • Fundamental flaw in Limited Liability system.  No one has incentive to put a stop to this.  Moral Hazard is baked into the system.

Unfortunately, there are two flaws that I see in his paper. 

First, he misses the elephant in the room.  The actual exposure of the financial system to mortgage loan losses ended up at over 400% of the amount of mortgages.  Had that multiplication of risk, created under the guise of risk spreading, not happened, the global financial crisis would have been simply a large loss for the banking sector and other investors.  But with the secret amplification of risk that happened through the CDO/CDS over the counter trades, the mortgage crisis became a depression sized loss, exceeding the capital of many large banks.

So putting all of the transactions out in the open might have gone a long way toward allowing someone to react intelligently to the situation.  Figuring out a way to limit the amount of the synthetic securities would probably be a good idea as well.  Moral Hazard is a term from insurance that is important to this situation.  Insurable interest is another.

The second flaw of the paper is the standard Cato line that regulation should be eliminated.  In this case, it is totally outrageous to suggest that the market would have applied any discipline.  The market created the situation, operating largely outside of regulations. 

So while I liked most of the movie, I hated the ending. 

We really do need a Systemic Risk Regulator.  And somehow, we need to create a system so that 50 years from now when that person is sitting on a 50 year track record of no market meltdowns, they will still have enough credibility to act against the mega bubble of those days.

Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2,300 hits, more than twice as many as the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management, or at least were not intending to.

The list of quotes has recently hit its 100th posting (with somewhat more than 100 quotes, since a number of the posts have multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.  Douglas Adams
  2. “when the map and the territory don’t agree, always believe the territory” Gause and Weinberg – describing Swedish Army Training
  3. When you find yourself in a hole, stop digging.-Will Rogers
  4. “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair” Douglas Adams
  5. “A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all.”– George F. Kennan, (1954)
  6. “THERE ARE IDIOTS. Look around.” Larry Summers
  7. the only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat  Donald Van Deventer
  8. Philip K. Dick “Reality is that which, when you stop believing in it, doesn’t go away.”
  9. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.  Albert Einstein
  10. “Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand.”  Sherlock Holmes (A. Conan Doyle)
  11. The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable. Warren Buffett
  12. “A good rule of thumb is to assume that “everything matters.” Richard Thaler
  13. “The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them.”  Avinash Persaud
  14. There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.
    W Shakespeare Hamlet, scene v
  15. When Models turn on, Brains turn off  Til Schuermann

You might have other favorites.  Please let us know about them.

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had for doing anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulator to require Stress tests so that they will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios, extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available if they thought of it at all.  Did not consider that in an extreme situation that results in the losses of magnitude similar to Economic Capital, that capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers get less of the gains because of their slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of their gains, Me Too losses are a sizable fraction, and Risk Avoiders often lose money.  The Innovators have all left the banks.  Banks are just the Me Toos and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression – Funny how the words change)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE preferred stock holders was the actual start of the panic.  Lehman a week later was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Are We “Due” for an Interest Rate Risk Episode?

November 11, 2009

In the last ten years, we have had major problems from Credit, Natural Catastrophes and Equities all at least twice.  Looking around at the risk exposures of insurers, it seems that we are due for a fall on Interest Rate Risk.

And things are very well positioned to make that a big time problem.  Interest rates have been generally very low for much of the past decade (in fact, most observers think that low interest rates have caused many of the other problems – perhaps not the nat cats).  This has challenged the minimum guaranteed rates of many insurance contracts.

Interest rate risk management has focused primarily on lobbying regulators to allow lower minimum guarantees.  Active ALM is practiced by many insurers, but by no means all.

Rates cannot get much lower.  The full impact of the historically low current risk free rates (are we still really using that term – can anyone really say that anything is risk free any longer?) has been shielded from some insurers by the historically high credit spreads.  As the economy recovers and credit spreads contract, rates could go slightly lower for corporate credit.

But keeping rates from exploding as the economy comes back to health will be very difficult.  With sky high unemployment, it is difficult to predict that the monetary authorities will act in time to avoid overheating and a sharp rise in interest rates.

Calibration of ALM systems will be challenged if there is an interest rate spike.  Many Economic Capital models are calibrated to show a 2% rise in interest rates as a 1/200 event.  It seems highly likely that rates could rise 2% or 3% or 4% or more.  How well prepared will those firms be who have been doing disciplined ALM with a model that tops out at a 2% rise?  Or will the ALM actuaries be the next ones talking of a 25 standard deviation event?
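To see how badly such a calibration can top out, take the 1/200 figure literally as the 99.5th percentile of a normal distribution (an assumption for illustration; real models differ).  The implied chances of larger moves are absurdly remote:

```python
from statistics import NormalDist

# Assume a 2% rate rise is the 99.5th percentile (1-in-200) move of a
# normal distribution -- purely for illustration of the calibration.
sigma = 2.0 / NormalDist().inv_cdf(0.995)    # about 0.78 percentage points

def implied_return_period(rise):
    """Years between rate rises of this size, under the normal calibration."""
    tail = 1.0 - NormalDist(0.0, sigma).cdf(rise)
    return 1.0 / tail

print(round(sigma, 2))
print(round(implied_return_period(3.0)))   # a 3% rise: once in tens of thousands of years
print(round(implied_return_period(4.0)))   # a 4% rise: once in millions of years
```

A model that treats a 3% or 4% rise as a practical impossibility will offer no guidance at all when one actually happens.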

Is there any way that we can justify calling the next interest rate spike a Black Swan?

Black Swan Free World (6)

October 13, 2009

On April 7 2009, the Financial Times published an article written by Nassim Taleb called Ten Principles for a Black Swan Free World. Let’s look at them one at a time…

6. Do not give children sticks of dynamite, even if they come with a warning . Complex derivatives need to be banned because nobody understands them and few are rational enough to know it. Citizens must be protected from themselves, from bankers selling them “hedging” products, and from gullible regulators who listen to economic theorists.

It is my opinion that many bubbles come about after a completely incorrect valuation model or approach becomes widely adopted.  Today, we have an advantage over observers from prior decades: in this decade we have experienced two bubbles.  In the case of the internet bubble, the valuation model was attributing value to clicks or eyeballs.  It had drifted away from any connection between free cashflow and value.  As valuations soared, people who held internet investments had more to invest in the next sensation, driving that part of the bubble further.  The internet stocks became more and more like Ponzi schemes.  In fact, Hyman Minsky described bubbles as Ponzi finance.

In the home real estate bubble, valuation again drifted away from traditional metrics, the archaic and boring loan to value and coverage ratio pair.  It was much more sophisticated and modern to use copulas and, instead of evaluating the quality of the credit, to rely on the credit ratings of structured securities of loans.

George Soros has said that the current financial crisis might just be the final end of a fifty year mega credit bubble.  If he is right, then we will have quite a long slow ride out of the crisis.

There are two aspects of derivatives that I think were ignored in the run up to the crisis.  The first is the leverage aspect of derivatives.  A CDS is equivalent to a long position in a corporate bond and a short position in a risk free bond.  But few observers and even fewer principals considered CDS as containing additional leverage equal to the full notional amount of the bond covered.  And leverage magnifies risk.  Worse than that.

Leverage takes the cashflows and divides them between reliable cashflows and unreliable cashflows, and sells the reliable cashflows to someone else so that more unreliable cashflows can be obtained.
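A rough sketch of the numbers involved (all figures assumed): a seller of protection on $100m notional collects a thin premium while carrying nearly the full notional as potential loss.

```python
# All figures are assumed, purely for illustration.
notional = 100.0          # $m of bonds covered by the CDS
annual_premium = 1.0      # $m per year (a 100bp spread, assumed)
recovery_rate = 0.40      # assumed recovery on the reference bonds

# Economically the protection seller is long the corporate bond and
# short the risk free bond, with the full notional at stake:
worst_case_loss = notional * (1 - recovery_rate)

print(worst_case_loss)                      # 60.0 ($m)
print(worst_case_loss / annual_premium)     # 60 years of premium income
```

Viewed this way, ignoring the notional amount means ignoring almost all of the risk.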

The second misunderstood aspect of derivatives is the amount of money that can be lost and the speed at which it can be lost.  This misunderstanding has caused many, including most market participants, to believe that posting collateral is a sufficient risk provision.  In fact, 999 days out of 1000 the collateral will be sufficient.  However, on that other day, the collateral is only a small fraction of the money needed.  For the institutions that hold large derivative positions, there needs to be a large reserve against that odd really bad day.
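A small simulation of that 999-out-of-1000 point, with a made-up heavy-tailed loss distribution (nothing here is calibrated to real markets):

```python
import random

random.seed(3)

# Made-up heavy-tailed daily losses on a derivative book (Pareto tail,
# not calibrated to anything real).
losses = sorted(random.paretovariate(1.1) for _ in range(100_000))

collateral = losses[int(0.999 * len(losses))]   # covers 999 days out of 1,000
worst_day = losses[-1]

print(round(collateral), round(worst_day))
print(round(worst_day / collateral, 1))   # the odd really bad day dwarfs the collateral
```

When the tail is heavy, collateral set at the 999-in-1,000 level is not a small shortfall away from the worst day; it is a multiple away.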

So when you look at the two really big, really bad things about derivatives that were ignored by the users, Taleb’s description of children with dynamite seems apt.

But how should we be dealing with the dynamite?  Taleb suggests keeping the public away from derivatives.  I am not sure I understand how or where the public was exposed directly to derivatives, even in the current crisis.

Indirectly the exposure was through the banks.  And I strongly believe that we should be making drastic changes in what different banks are allowed to do and what capital must be held against derivatives.  The capital should reflect the real leverage as well as the real risk.  The myth that has been built up, that the notional amount of a derivative is not an important statistic and that only the market value and movements in market value matter, is a dangerous story that must be eliminated.  Derivatives that can be replicated by very large positions in securities must carry the exact same capital as the direct security holdings.  Risks that can change overnight into large losses must carry reserves against those losses that are a function of the loss potential, not just a function of benign changes in market values and collateral.

In insurance regulatory accounting, there is a concept called a non-admitted asset.  That is something that accountants might call an asset but that is not permitted to be counted by the regulators.  Dealings that banks have with unregulated financial operations should be considered non-admitted assets.  Transferring something off the books to an unregulated entity just will not count.

So I would make it extremely expensive for banks to get anywhere near the dynamite.  Or to deal with anyone who has any dynamite.

Black Swan Free World (5)

Black Swan Free World (4)

Black Swan Free World (3)

Black Swan Free World (2)

Black Swan Free World (1)

Black Swan Free World (4)

October 3, 2009

On April 7 2009, the Financial Times published an article written by Nassim Taleb called Ten Principles for a Black Swan Free World. Let’s look at them one at a time…

4. Do not let someone making an “incentive” bonus manage a nuclear plant – or your financial risks. Odds are he would cut every corner on safety to show “profits” while claiming to be “conservative”. Bonuses do not accommodate the hidden risks of blow-ups. It is the asymmetry of the bonus system that got us here. No incentives without disincentives: capitalism is about rewards and punishments, not just rewards.

For many years, money managers were paid out of the revenue from a small management fee charged on assets.  The good performing funds attracted more funds and therefore had more gross revenue.  Retail mutual funds usually charged a flat rate.  Institutional funds charged a sliding scale that went down as a percentage of assets as the amount of assets went up.  Since mutual fund expenses were relatively flat, that meant that the larger funds could generate quite substantial profits.

Then hedge funds came along fifty years ago and established the pattern of incentive compensation of 20% of profits fairly early.  In addition, the idea of the fund using leverage was an early innovation of hedge funds.

Another innovation was the custom that the hedge fund manager’s gains would stay in the fund so that the incentives were aligned.  But think about how that works.  The investor puts up $1 million.  The fund gains 20%, so of the $200k profit the manager gets $40k and the investor gets $160k.  Then the fund drops 50%, and the investor’s account is now worth $580k – he is down $420k.  The manager’s stake is down to $20k, but still up by that $20k.  The investor is creamed but the manager is still ahead.  It seems that the incentives need realignment.
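The arithmetic above can be sketched in a few lines (a minimal illustration, assuming a two-year fund, a 20% incentive fee on gains only, and the fee staying invested in the fund; all figures are hypothetical):

```python
def investor_vs_manager(initial, year1_return, year2_return, incentive=0.20):
    """Track investor and manager stakes when the manager's incentive fee
    stays invested in the fund (fee charged on gains only, no clawback)."""
    gain = initial * year1_return
    fee = max(gain, 0.0) * incentive    # manager's cut of the year-1 profit
    investor = initial + gain - fee     # investor's stake after the fee
    manager = fee                       # manager's stake left in the fund
    # Year 2: both stakes ride the fund's return; a loss year pays no fee.
    investor *= 1 + year2_return
    manager *= 1 + year2_return
    return investor, manager

inv, mgr = investor_vs_manager(1_000_000, 0.20, -0.50)
# investor ends at 580_000 (down 420_000); manager keeps 20_000 despite the loss
```

The asymmetry is the point: the manager participates in the upside but shares only proportionally, not symmetrically, in the downside.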

Taleb may be thinking of a major issue with hedge funds – valuation of illiquid investments.  Hedge funds often make purchases of totally illiquid investments.  Each quarter, the manager makes an estimate of what they are worth.  The manager gets paid based upon those estimates.  However, with the recent downturn, even funds that have not shown significant losses have had significant redemptions.  When these funds have redemptions, the liquid assets are sold to pay off the departing investors.  The departing investors’ shares are valued using the estimated values of the illiquid assets, and the remaining fund becomes more and more concentrated in illiquid assets.

If the fund manager had been optimistic about the value of the illiquid assets, or simply did not anticipate the shift in demand that has occurred with the financial crisis, there may well be a major problem brewing for the last investors out the door: the double whammy of depressed prices for the illiquid assets and of redemptions paid out at valuations for those assets that are now known to be optimistic.

And overpayment of the one-sided performance bonuses to the manager was supported by those optimistic valuations.

Black Swan Free World (3)

Black Swan Free World (2)

Black Swan Free World (1)

The Yin and Yang of Risk

October 2, 2009
Guest Post By Chris Mandel

One thing I’ve discovered in the last year is that extremes seem to be the rule of thumb these days.

The obvious example is the more significant aspects of the current financial crisis: huge amounts of mortgage defaults; unfathomable aggregations of loss in credit default swaps; inordinate destruction of market confidence and the resulting 50 percent portfolio reductions in its wake; etc.

In recent years it has been reflected in the more traditional insurable risk realm with record-setting natural catastrophe seasons and increasingly severe terrorism events. The fundamental insurance concept of pooling and sharing risk for profitable diversification is threatened. Even the expected level of loss is growing increasingly unexpected in actual results.

Examining the risk discipline and its evolving practice, I see management by extremes beginning to subsume the norm. So here are some examples of how this looks.

Reams of Data–Little Data Intelligence: We have tons of “risk” related data but limited ability to interpret it and use it in order to head off losses that were ostensibly preventable or at least reducible.

Continued at Risk and Insurance

How Many Dependencies did you Declare?

September 12, 2009

Correlation is a statement of historical fact.  The measurement of correlation does not necessarily give any indication of future tendencies unless there is a clear interdependency or lack thereof.  That is especially true when we seek to calculate losses at probability levels that are far outside the size of the historical data set.  (If you want to calculate a 1/200 loss and have 50 years of data, you have 25% of 1 observation)
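The parenthetical can be made concrete (a quick sketch, assuming independent years and a constant 1-in-200 annual probability):

```python
# Expected number of observations of a 1-in-200-year event in 50 years of data,
# and the chance that the data set contains such an event at all:
p, years = 1 / 200, 50
expected = p * years                    # 0.25 of one observation
at_least_once = 1 - (1 - p) ** years    # roughly a 22% chance
```

In other words, the typical data set simply does not contain the event whose probability the model is being asked to estimate.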

Using historical correlations in the absence of an understanding of the actual interdependencies can result in drastic problems.

An example is subprime mortgages.  One of the key differences between what actually happened and the models used prior to the collapse of these markets is that the models were driven by historical correlations – correlations between regions.  Historically, there had been low correlations between mortgage default rates in different regions of the US.  Unfortunately, those correlations were an artifact of regional unemployment-driven defaults, and unemployment is not the only factor that affects defaults.  The mortgage market had changed drastically from the period over which the defaults were measured.  Mortgage lending practices changed in most of the larger markets.  The prevalence of modified payment mortgages meant that the relationship between mortgage payments and income was shifting over time.  In addition, the amount of mortgage granted compared to income also shifted drastically.

So the long-term low regional correlations were no longer applicable to the new mortgage market, because the market had changed.  The historical correlation was still a true fact, but it did not have much predictive power.

And it makes some sense to talk about interdependency rising in extreme events.  Just like in the subprime situation, there are drivers of risks that shift into new patterns because systems exceed their carrying capacity.

Everything that is dependent on confidence in the market may not correlate in most times, but that interdependency will show through when confidence is shaken.  In addition to confidence, financial market instruments may also be dependent on the level of liquidity in the markets.  Is confidence in the market a stochastic variable in the risk models?  It should be – it is one of the main drivers of levels of correlation of otherwise unrelated activities.
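A small simulation illustrates the point (a sketch with made-up parameters: two risks that are independent in calm markets but share a confidence shock 5% of the time):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Calm regime: the two risks move independently.
returns = rng.standard_normal((n, 2))
# Stress regime (5% of periods): a common confidence shock hits both at once.
stressed = rng.random(n) < 0.05
shock = np.abs(rng.standard_normal(n)) * 3
returns[stressed, 0] -= shock[stressed]
returns[stressed, 1] -= shock[stressed]

overall = np.corrcoef(returns.T)[0, 1]                  # modest full-sample correlation
tail = returns[:, 0] <= np.quantile(returns[:, 0], 0.05)
in_tail = np.corrcoef(returns[tail].T)[0, 1]            # much higher in the tail
```

The full-sample correlation looks reassuringly low, but conditional on being in the worst 5% of outcomes, the two risks move together – which is exactly when it matters.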

So before jumping to using correlations, we must seek to understand dependencies.

Models & Manifesto

September 1, 2009

Have you ever heard anyone say that their car got lost? Or that they got into a massive pile-up because it was a 1-in-200-year event that someone drove on the wrong side of a highway? Probably not.

But statements similar to these have been made many times since mid-2007 by CEOs and risk managers whose firms have lost great sums of money in the financial crisis. And instead of blaming their cars, they blame their risk models. In the 8 February 2009 Financial Times, Goldman Sachs’ CEO Lloyd Blankfein said “many risk models incorrectly assumed that positions could be fully hedged . . . risk models failed to capture the risk inherent in off-balance sheet activities,” clearly placing the blame on the models.

But in reality, it was, for the most part, the modellers, not the models, that failed. A car goes where the driver steers it, and a model evaluates the risks it is designed to evaluate, using the data the model operator feeds into it. In fact, isn’t it the leadership of these enterprises that is really responsible for not clearly assessing the limitations of these models prior to mass usage for billion-dollar decisions?

But humans, who to varying degrees all have a limit to their capacity to juggle multiple inter-connected streams of information, need models to assist with decision-making at all but the smallest and least complex firms.

These points are all captured in the Financial Modeler’s Manifesto from Paul Wilmott and Emanuel Derman.

But before you use any model you did not build yourself, I suggest that you ask the model builder if they have read the manifesto.

If you do build models, I suggest that you read it before and after each model building project.

The Black Swan Test

August 31, 2009

Many commentators have suggested that firms need to do stress tests to examine their vulnerability to adverse situations that are not within the data set used to parameterize their risk models. In the article linked below, I suggest the adoption of a terminology to describe stress tests and also a methodology that can be adopted by any risk model user to test and communicate the stability of model results. This method can be called a Black Swan test. The terminology would be to set one Black Swan equal to the most adverse data point. A one Black Swan stress test would be a test of a repeat of the worst event in the data set. A two Black Swan stress test would be a test of experience twice as adverse as the worst data point.

So for credit losses for a certain class of bonds, if the worst loss in the historical period was 2 percent, then a 1BLS stress test would be a 2 percent loss, a 2BLS stress test a 4 percent loss, etc.

Article

Further, the company could state their resiliency in terms of Black Swans. For example:

Tests show that the company can withstand a 3.5BLS stress test for credit and a 4.2BLS for equity risk and a simultaneous 1.7BLS credit and equity stress.

Similar terminology could be used to describe a test of model stability. A 1BLS model stability test would be performed by adding a single additional point to the data used to parameterize the model. So a 1BLS model stability test would involve adding a single data point equal to the worst point in the data set. A 2BLS test would be adding a data point that is twice as bad as the worst point.
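The terminology above reduces to a one-line rule of thumb (a minimal sketch; the 2 percent worst historical credit loss is the article's own example, and the function name is my own):

```python
def bls_stress(worst_historical_loss, n_swans):
    """Loss level for an n-Black-Swan stress test:
    n times the worst loss in the historical data set."""
    return n_swans * worst_historical_loss

worst = 0.02                      # worst historical loss for the bond class: 2%
one_bls = bls_stress(worst, 1)    # repeat of the worst event: a 2% loss
two_bls = bls_stress(worst, 2)    # twice as adverse: a 4% loss
```

The same function scaled to fractional multiples would express resiliency statements like "the company can withstand a 3.5BLS credit stress."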

Multi dimensional Risk Management

August 28, 2009

Many ERM programs are one dimensional. They look at VaR or they look at Economic Capital. The multi-dimensional risk manager considers volatility, ruin, and everything in between. They consider not only types of risk that are readily quantifiable, but also those that may be extremely difficult to measure. The following is a partial listing of the risks that a multi-dimensional risk manager might examine:
o Type A Risk – Short-term volatility of cash flows in one year
o Type B Risk – Short-term tail risk of cash flows in one year
o Type C Risk – Uncertainty risk (also known as parameter risk)
o Type D Risk – Inexperience risk relative to full multiple market cycles
o Type E Risk – Correlation to a top 10
o Type F Risk – Market value volatility in one year
o Type G Risk – Execution risk regarding difficulty of controlling operational losses
o Type H Risk – Long-term volatility of cash flows over five or more years
o Type J Risk – Long-term tail risk of cash flows over 5 five years or more
o Type K Risk – Pricing risk (cycle risk)
o Type L Risk – Market liquidity risk
o Type M Risk – Instability risk regarding the degree that the risk parameters are stable

Many of these types of risk can be measured using a comprehensive risk model, but several are not necessarily directly measurable. But the multi-dimensional risk manager realizes that you can get hurt by a risk even if you cannot measure it.

VaR is not a Bad Risk Measure

August 24, 2009

VaR has taken a lot of heat in the current financial crisis. Some go so far as to blame the financial crisis on VaR.

But VaR is a good risk measure. The problem is with the word RISK. You see, VaR has a precise definition; RISK does not. There is no way that you could possibly measure an ill-defined idea like RISK with a precise measure.

VaR is a good measure of one aspect of RISK. It measures volatility of value under the assumption that the future will be like the recent past. If everyone understands that that is what VaR does, then there is no problem.

Unfortunately, some people thought that VaR measured RISK period. What I mean is that they were led to believe that VaR was the same as RISK. In that context VaR (and any other single metric) is a failure. VaR is not the same as RISK.

That is because RISK has many aspects. Here is one partial list of the aspects of risk:

Type A Risk – Short Term Volatility of cash flows in 1 year
Type B Risk – Short Term Tail Risk of cash flows in 1 year
Type C Risk – Uncertainty Risk (also known as parameter risk)
Type D Risk – Inexperience Risk relative to full multiple market cycles
Type E Risk – Correlation to a top 10
Type F Risk – Market value volatility in 1 year
Type G Risk – Execution Risk regarding difficulty of controlling operational losses
Type H Risk – Long Term Volatility of cash flows over 5 or more years
Type J Risk – Long Term Tail Risk of cash flows over 5 years or more
Type K Risk – Pricing Risk (cycle risk)
Type L Risk – Market Liquidity Risk
Type M Risk – Instability Risk regarding the degree that the risk parameters are stable
(excerpted from Risk & Light)

VaR measures Type F risk only.
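That precise definition is easy to state in code (a sketch of plain historical-simulation VaR; the return series here is made up for illustration):

```python
import numpy as np

def historical_var(returns, level=0.99):
    """Historical-simulation VaR: the loss exceeded with probability
    (1 - level), assuming the future is drawn from the recent past."""
    return -np.quantile(returns, 1.0 - level)

rng = np.random.default_rng(7)
daily_returns = rng.normal(0.0005, 0.01, 1000)   # hypothetical return history
var_99 = historical_var(daily_returns)           # a Type F measure, nothing more
```

Note what the calculation does not see: parameter uncertainty (Type C), regime changes (Type D), liquidity (Type L), or parameter instability (Type M). The number is only as good as the assumption that the sample represents the future.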

