Archive for the ‘Tail Risk’ category

Setting your Borel Point

July 28, 2014

What is a Borel Risk Point you ask?  Emile Borel once said

“Events with a sufficiently small probability never occur”.

Your Borel Risk Point (BRP) is your definition of “sufficiently small probability” that causes you to ignore unlikely risks.

Chances are, your BRP is set at much too high a level of likelihood.  You see, when Borel said that, he was thinking of something like a 1 in 1 million likelihood.  Human nature, with survival instincts tuned to getting us through day-to-day life, would have us ignoring anything that is not likely to happen this week.

Even insurance professionals will often want to ignore risks that are as common as 1 in 100 year events, treating them as if they will never happen.

And in general, the markets allow us to get away with that.  If a serious adverse event happens, the unprepared are generally excused if it was something as unlikely as a 1 in 100 event.

That works until another factor comes into play.  That other factor is the number of potential 1 in 100 events that we are exposed to.  Because if you are exposed to fifty 1 in 100 events, you are still pretty unlikely to see any particular one of them, but quite likely to see some such event.
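
The arithmetic behind that last point is easy to check.  A minimal sketch in Python, assuming the fifty events are independent and each has a 1 in 100 chance in any given year:

p_single = 0.01                      # each event: 1 in 100 in a given year
n_events = 50                        # number of separate exposures
p_none = (1 - p_single) ** n_events  # about 0.605
print(1 - p_none)                    # about 0.395 -- nearly a 40% chance of seeing some such event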

Governor Andrew Cuomo of New York State reportedly told President Obama,

New York “has a 100-year flood every two years now.”

Solvency II has Europeans all focused on the 1 in 200 year loss.  RISKVIEWS would suggest that is still too high a likelihood for a good Borel Risk Point; insurers need a more stringent BRP because of the business that they are in.  For example, life insurers' primary product (which is life insurance, at least in some parts of the world) pays for individual risks (unexpected deaths) that occur at an average rate of less than 1 in 1000.  How does an insurance company look its customers in the eye and say that they need to buy protection against a 1 in 1000 event from a company that only has a BRP of 1 in 200?

So RISKVIEWS suggests that insurers have a BRP somewhere just above 1 in 1000.  That might sound aggressive, but it is pretty close to the Secure Risk Capital standard.  With a Risk Capital Standard of 1 in 1000, you can also use the COR instead of a model to calculate your capital needed.

Is it rude to ask “How fat is your tail?”

July 23, 2014

In fact, not only is it not rude, the question is central to understanding risk models.  The Coefficient of Riskiness (COR) allows us for the first time to talk about this critical question.


You see, “normal” sized tails have a COR of three. If everything were normal, then risk models wouldn’t be all that important. We could just measure volatility and multiply it by 3 to get the 1 in 1000 result. If you instead want the 1 in 200 result, you would multiply the 1 in 1000 result by 83%.

Amazing maths fact – 3 is always the answer.
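
For anyone who wants to check the "normal" benchmark, a quick sketch using scipy (an illustration, not part of the original post):

from scipy.stats import norm
z_1_in_1000 = norm.ppf(0.999)    # about 3.09 standard deviations
z_1_in_200 = norm.ppf(0.995)     # about 2.58 standard deviations
print(z_1_in_200 / z_1_in_1000)  # about 0.83 -- the 83% mentioned above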

But everything is not normal. Everything does not have a COR of 3. So how fat are your tails?

RISKVIEWS looked at an equity index model. That model was carefully calibrated to match up with very long term index returns (using Robert Shiller’s database). The fat tailed result there has a COR of 3.5. With that model the 2008 S&P 500 total return loss of 37% is a 1 in 100 loss.

So if we take that COR of 3.5 and apply it to the experience of 1971 to 2013 that happens to be handy, the mean return is 12% and the volatility is about 18%. Using the simple COR approach, we estimate the 1 in 1000 loss as 50% (3.5 times the volatility subtracted from the average). To get the 1/200 loss, we can take 83% of that and we get a 42% loss.
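
That back-of-envelope arithmetic can be written out directly.  A minimal sketch using the rounded figures from the text, so the results differ slightly from those quoted:

mean_return = 0.12    # arithmetic mean return, 1971 to 2013
volatility = 0.18     # annual standard deviation
for cor in (3.5, 4.0):
    loss_1_in_1000 = cor * volatility - mean_return
    loss_1_in_200 = 0.83 * loss_1_in_1000
    print(cor, round(loss_1_in_1000, 2), round(loss_1_in_200, 2))
# COR 3.5 gives roughly a 51% and a 42% loss; COR 4.0 (used further down) roughly 60% and 50%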

RISKVIEWS suggests that the COR can be an important part of Model Validation.

Looking at the results above for the stock index model, the question becomes: why is 3.5 the correct COR for the index?  We know that in 2008, the stock market actually dropped 50% from high point to low point within a 12 month period that was not a calendar year.  If we go back to Shiller's database, which actually tracks the index values monthly (with extensions estimated for 50 years before the actual index was first defined), we find that there are approximately 1500 12 month periods.  RISKVIEWS recognizes that these are not independent observations, but to answer this particular question, they are the right data points.  And looking at that data, a 50% drop in a 12 month period is around the 1000th worst 12 month period.  So a model with a 3.5 COR is pretty close to an exact fit with the historical record.

And what if you have an opinion about the future riskiness of the stock market?  You can vary the volatility assumption if you think that the current market, with high speed trading and globally and instantaneously interlinked markets, will be more volatile than the past 130 years that Shiller's data covers.  You can also adjust the future mean.  You might at least want to replace the arithmetic mean of 12% quoted above with the historic geometric mean of 10.6%, since we are not really talking about holding stocks for just one year.  And you can have an opinion about the riskiness of stocks in the future.  A COR of 3.5 means that the tail at the 1 in 1000 point is 3.5 / 3, or about 117%, of the normal tail.  That is hardly an obese tail.

The equity index model that we started with here has a 1 in 100 loss value of 37%. That was the 2008 calendar year total return for the S&P 500. If we want to know what we would get with tails that are twice as fat, the concept of COR lets us simply look at a COR of 4.0 instead of 3.5 (the excess over the normal 3 doubles from 0.5 to 1.0). That would put the 1 in 1000 loss 9% worse, at 59%, and the 1 in 200 loss 7% worse, at 49%.

Those answers are not exact. But they are reasonable estimates that could be used in a validation process.

Non-technical management can look at the COR for each model and participate in a discussion of the reasonability of the fat in the tails for each and every risk.

RISKVIEWS believes that the COR can provide a basis for that discussion. It can be like the Richter scale for earthquakes or the Saffir-Simpson scale for hurricanes. Even though people in general do not know the science underlying either scale, they do believe that they understand what the scale means in terms of severity of experience. With exposure, the COR can take that place for risk models.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, RISKVIEWS told of a scale, the Coefficient of Riskiness (COR).  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name calling stage to an actual discussion about Riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of potential loss), and where the reasons for the losses for each individual risk were statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number is the average or what number is the standard deviation.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).
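
A quick way to see where the "three times the volatility" benchmark comes from is to look at a very large pool of identical, independent risks.  A sketch with an illustrative 1 in 1000 incidence and a unit loss per claim:

from scipy.stats import binom
n, p = 1_000_000, 0.001                   # one million independent exposures
mean = n * p                              # 1,000 expected claims
sd = (n * p * (1 - p)) ** 0.5             # about 31.6
worst_1_in_1000 = binom.ppf(0.999, n, p)  # 99.9th percentile claim count
print((worst_1_in_1000 - mean) / sd)      # about 3.1 -- the maximum likely loss is roughly mean + 3 sd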

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.
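
A minimal sketch of that kind of binomial calculation (the exact values depend on how the discrete 99.9th percentile is taken, so they will not match the plotted curves exactly):

from scipy.stats import binom

def cor(n, p):
    # COR = (99.9th percentile claim count - expected claims) / standard deviation
    mean = n * p
    sd = (n * p * (1 - p)) ** 0.5
    return (binom.ppf(0.999, n, p) - mean) / sd

print(cor(100, 0.001))  # about 6   (0.1 expected claims)
print(cor(50, 0.001))   # about 8.7 (0.05 expected claims)
print(cor(25, 0.01))    # about 5.5 (0.25 expected claims -- the 1 in 100 case discussed below)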

 

[Chart: COR for pools of risks with a 1 in 1000 incidence]

 

There is a different story for a risk with average incidence of 1 in 100.  COR is less than 6 for a pool as small as 25 exposures and the COR gets down to as low as 3.5.

[Chart: COR for pools of risks with a 1 in 100 incidence]

In producing these graphs, RISKVIEWS notices that COR is largely a function of the number of expected claims.  So the following graph shows COR plotted against the number of expected claims, for low expected claim counts.  (A high expected number of claims produces a COR that is very close to 3, so those cases are not very interesting.)

[Chart: COR vs. number of expected claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses is in the range of 0.2, and when it falls toward 0.1 the COR is higher still.

[Chart: COR for risks with less than one expected loss per year]

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with very fat tails are those with expected losses less frequent than one per year; they can have much fatter tails, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two "very riskys" and one "not so risky".  How much capital is enough for two "very riskys"?  Perhaps you need a qualitative amount of surplus to provide for that, something like "a good amount".

RISKVIEWS believes that when the NAIC says "Quantitative" and "Qualitative" they mean to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company's capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely, if ever, any data.
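
A small illustration of how much that model choice matters in the tail.  This is a sketch with simulated data, not the author's model: a normal and a Student-t distribution can fit the middle of the same data about equally well, yet extrapolate to very different 1 in 1000 losses.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = stats.t.rvs(df=4, loc=0.08, scale=0.12, size=500, random_state=rng)  # made-up return history

mu, sigma = stats.norm.fit(returns)    # thin-tailed model choice
df, loc, scale = stats.t.fit(returns)  # fat-tailed model choice

print("normal 1-in-1000 loss:", -stats.norm.ppf(0.001, mu, sigma))
print("t      1-in-1000 loss:", -stats.t.ppf(0.001, df, loc, scale))
# The fitted t distribution puts the 1 in 1000 loss far deeper than the normal does.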

In broad terms, there are only a few possibilities for this subjective decision…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This is the process that creates "bubbles".  When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this "Fat Tails".  It is seen as an unusual model only because it was so common to use the zero contagion model, with its simpler maths.

RISKVIEWS suggests that when communicating that the modeling approach is the Moderate model, the degree of contagion assumed should be specified, and an assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer's specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we express that 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model:

[Riskiness scale graphic]

So in the end the choice is to insert an opinion about the steepness of the ramp up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.

There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with the Richter scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.
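
As a rough illustration of how such a scale might be used (the labels and break points here are illustrative guesses, since the original scale appeared only as a graphic):

mean_loss, sd = 10.0, 4.0  # illustrative observed statistics, in $ millions
scale = {"Benign": 3, "Moderate": 4, "Highly risky": 6}  # hypothetical scale points
for label, multiple in scale.items():
    print(label, mean_loss + multiple * sd)  # implied 1 in 1000 loss for each point on the scale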

*                  *                *               *             *                *

So now we go to the "Qualitative" determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where, for some reason, we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Why some think that there is No Need for Storm Shelters

May 22, 2013

The BBC featured a story about the dearth of storm shelters in the area hit last week by tornadoes.

Why so few storm shelters in Tornado Alley hotspot?

The story goes on to discuss the fact that Americans, especially in red states like Oklahoma, strongly prefer keeping the government out of the business of providing things like storm shelters, leaving that as an individual option.  It then reports that few individuals opt to spend their money on shelters.

The answer might well be in the numbers…

Below, from the National Oceanic and Atmospheric Administration (NOAA) is a list of the 25 deadliest tornadoes in US history:

1. Tri-State (MO, IL, IN) – March 18, 1925 – 695 deaths
2. Natchez, MS – May 6, 1840 – 317 deaths
3. St. Louis, MO – May 27, 1896 – 255 deaths
4. Tupelo, MS – April 5, 1936 – 216 deaths
5. Gainesville, GA – April 6, 1936 – 203 deaths
6. Woodward, OK – April 9, 1947 – 181 deaths
7. Joplin, MO – May 22, 2011 – 158 deaths
8. Amite, LA, Purvis, MS – April 24, 1908 – 143 deaths
9. New Richmond, WI – June 12, 1899 – 117 deaths
10. Flint, MI – June 8, 1953 – 116 deaths
11. Waco, TX – May 11, 1953 – 114 deaths
12. Goliad, TX – May 18, 1902 – 114 deaths
13. Omaha, NE – March 23, 1913 – 103 deaths
14. Mattoon, IL – May 26, 1917 – 101 deaths
15. Shinnston, WV – June 23, 1944 – 100 deaths
16. Marshfield, MO – April 18, 1880 – 99 deaths
17. Gainesville, GA – June 1, 1903 – 98 deaths
18. Poplar Bluff, MO – May 9, 1927 – 98 deaths
19. Snyder, OK – May 10, 1905 – 97 deaths
20. Comanche, IA & Albany, IL – June 3, 1860 – 92 deaths
21. Natchez, MS – April 24, 1908 – 91 deaths
22. Worcester, MA – June 9, 1953 – 90 deaths
23. Starkville, MS to Waco, AL -April 20, 1920 – 88 deaths
24. Lorain & Sandusky, OH – June 28, 1924 – 85 deaths
25. Udall, KS – May 25, 1955 – 80 deaths

Looks scary and impressively dangerous.  Until you look more carefully at the dates.  Most of those events are OLD.  In fact, if you look at this as a histogram, you see something interesting…

[Histogram: deadliest US tornadoes by decade]

You see from this chart why there are so few storm shelters.  Between the 1890s and 1950s, there were at least two very deadly tornadoes per decade.  Enough to keep people scared.  But until the Joplin tornado in 2011, there had been more than 50 years without a single event of that deadliness.  Fifty years is a long time to go between times when someone somewhere in the US needed a storm shelter to protect them from a very deadly storm.
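
The histogram is easy to reproduce from the NOAA list above.  A quick tally by decade, with the years copied from that list (a sketch, not the original chart):

from collections import Counter
years = [1925, 1840, 1896, 1936, 1936, 1947, 2011, 1908, 1899, 1953, 1953, 1902, 1913,
         1917, 1944, 1880, 1903, 1927, 1905, 1860, 1908, 1953, 1920, 1924, 1955]
by_decade = Counter(10 * (year // 10) for year in years)
for decade in sorted(by_decade):
    print(decade, "#" * by_decade[decade])  # every decade from the 1890s to the 1950s shows 2 or more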

This is not to say that there have not been storms in the past 50 years.  The chart below, from the Washington Post, shows the losses from tornadoes over that same 50 year period, and the numbers are not small.

It is RISKVIEWS' guess that, in the face of smaller, less deadly but still destructive storms, people are much more likely to attribute their own good outcome to some innate talent that they have and that the losers do not.  Sort of like the folks who have had one or several good experiences at the slot machines and believe that they have a talent for gambling.

Another reason is that almost 45% of storm fatalities are folks who live in trailers.  They often will not even have the option to build their own storm shelter.  That is probably something that could be addressed by regulations regarding the zoning of trailer parks.

Proper risk management can only be done in advance.  The risk management second guessing that is done after the fact helps to create a tremendous drag on society.  We are forced into spending money to prevent recurrence of the last disaster, regardless of whether that expenditure makes any sense at all on the basis of frequency and severity of the potential adverse events or not.

We cannot see the future as clearly as we can see the past.  We can only prepare for some of the possible futures. 

The BBC article stands on the side of that discussion that looks back after the fact and finds fault with whoever did not properly see the future exactly as clearly as they are now able to see the past.

A simple recent example of this is the coverage of the Boston Marathon bombers.  Much has been made of the fact that there were warnings about one or more members of the family before the event.  But no one has chosen to mention how many other people there were similar, or even much more dire, warnings about who never committed any bombing.  It seems quite likely that the warnings about these people were dots in a stream of hundreds of thousands of similar warnings.

Getting Paid for Risk Taking

April 15, 2013

Consideration for accepting a risk needs to be at a level that will sustain the business and produce a return that is satisfactory to investors.

Investors usually want additional return for extra risk.  This is one of the most misunderstood ideas in investing.

“In an efficient market, investors realize above-average returns only by taking above-average risks.  Risky stocks have high returns, on average, and safe stocks do not.”

Baker, M. Bradley, B. Wurgler, J.  Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly

But their study found that stocks in the top quintile of trailing volatility had real return of -90% vs. a real return of 1000% for the stocks in the bottom quintile.

But the thinking is wrong.  Excess risk does not produce excess return.  The cause and effect are wrong in the conventional wisdom.  The original statement of this principle may have been

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
Alfred Marshall 1890 Principles of Economics

Marshall has it right.  There are only "hopes" of great gains.  There is no invisible hand that forces higher risks to return higher gains.  Some of the higher risk investment choices are simply bad choices.

Insurers' opportunity to make "great gains" out of "risks of great losses" comes when they are determining the consideration, or price, that they will require to accept a risk.  Most insurers operate in competitive markets that are not completely efficient.  Individual insurers do not usually set the price in the market, but there is a range of prices at which insurance is purchased in any time period.  Certainly the process that an insurer uses to determine the price at which a risk is acceptable is a primary determinant of the profits of the insurer.  If that price contains a sufficient load for the extreme risks that might threaten the existence of the insurer, then over time the insurer has the ability to hold and maintain sufficient resources to survive some large loss situations.

One common goal conflict that leads to problems with pricing is the conflict between sales and profits.  In insurance, as in many businesses, it is quite easy to increase sales by lowering prices.  In most businesses, it is very difficult to keep up that strategy for very long, as the lower profits or outright losses from inadequate prices are quickly realized.  In insurance, the premiums are paid in advance, sometimes many years in advance of when the insurer must provide the promised insurance benefits.  If provisioning is tilted towards the point of view that supports the consideration, the pricing deficiencies will not be apparent for years.  So insurance is particularly susceptible to the tension between volume of business and margins for risk and profits, and since sales is a more fundamental need than profits, the margins often suffer.

As just mentioned, insurers simply do not know for certain what the actual cost of providing an insurance benefit will be.  Not with the degree of certainty that businesses in other sectors can know their cost of goods sold.  The appropriateness of pricing will often be validated in the market.  Follow-the-leader pricing can lead a herd of insurers over the cliff.  The whole sector can get pricing wrong for a time.  Until, sometimes years later, the benefits are collected and their true cost is known.

“A decade of short sighted price slashing led to industry losses of nearly $3 billion last year.”  Wall Street Journal June 24, 2002

Pricing can also go wrong on an individual case level.  The "Winner's Curse" sends business to the insurer who most underimagines the riskiness of a particular risk.

There are two steps to reflecting risk in pricing.  The first step is to capture the expected loss properly.  Most of the discussion above relates to this step, and the major part of pricing risk comes from the possibility of missing that step, as has already been discussed.  But the second step is to appropriately reflect all aspects of the risk that actual losses will be different from expected.  There are many ways that such deviations can manifest.

The following is a partial listing of the risks that might be examined:

  • Type A Risk – Short-Term Volatility of cash flows in 1 year
  • Type B Risk – Short-Term Tail Risk of cash flows in 1 year
  • Type C Risk – Uncertainty Risk (also known as parameter risk)
  • Type D Risk – Inexperience Risk relative to full multiple market cycles
  • Type E Risk – Correlation to a top 10
  • Type F Risk – Market value volatility in 1 year
  • Type G Risk – Execution Risk regarding difficulty of controlling operational losses
  • Type H Risk – Long-Term Volatility of cash flows over 5 or more years
  • Type J Risk – Long-Term Tail Risk of cash flows over 5 years or more
  • Type K Risk – Pricing Risk (cycle risk)
  • Type L Risk – Market Liquidity Risk
  • Type M Risk – Instability Risk regarding the degree to which the risk parameters are stable

See "Risk and Light" or "The Law of Risk and Light".

There are also many different ways that risk loads are specifically applied to insurance pricing.  Three examples are:

  • Capital Allocation – Capital is allocated to a product (based upon the provisioning) and the pricing then needs to reflect the cost of holding that capital.  The cost of holding capital may be calculated as the difference between the risk free rate (after tax) and the hurdle rate for the insurer.  Some firms alternately use the difference between the investment return on the assets backing surplus (after tax) and the hurdle rate.  This process assures that the pricing will support achieving the hurdle rate on the capital that the insurer needs to hold for the risks of the business.  It does not reflect any margin for the volatility in earnings that the risks assumed might create, nor does it necessarily include any recognition of parameter risk or general uncertainty.  (A rough numerical sketch of this approach follows this list.)
  • Provision for Adverse Deviation – Each assumption is adjusted to provide for worse experience than the mean or median loss.  The amount of stress may be at a predetermined confidence interval (such as 65%, 80% or 90%).  Higher confidence intervals would be used for assumptions with a higher degree of parameter risk.  Similarly, some companies use a multiple (or fraction) of the standard deviation of the loss distribution as the provision.  More commonly, the degree of adversity is set based upon historical provisions or upon the judgment of the person setting the price.  Provision for Adverse Deviation usually does not reflect anything specific for the extra risk of insolvency.
  • Risk Adjusted Profit Target – Using either or both of the above techniques, a profit target is determined, and then that target is translated into a percentage of premium or of assets to make for a simple risk charge when constructing a price indication.
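
A rough numerical sketch of the Capital Allocation approach in the first bullet above.  The figures are made up, and the cost-of-capital formula is one common reading of that bullet rather than a formula quoted from the post:

allocated_capital = 40.0    # capital allocated to the product, in $ millions
hurdle_rate = 0.12          # insurer's hurdle rate
risk_free_after_tax = 0.03  # after-tax risk free rate
cost_of_capital = allocated_capital * (hurdle_rate - risk_free_after_tax)  # 3.6
expected_claims_and_expenses = 100.0                                       # in $ millions
print(expected_claims_and_expenses + cost_of_capital)  # 103.6 -- price that funds the hurdle rate on capital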

The consequences of failing to recognize an aspect of risk in pricing will likely be that the firm will accumulate larger than expected concentrations of business with higher amounts of that risk aspect.  See "Risk and Light" or "The Law of Risk and Light".

To get Consideration right you need to (1) regularly get a second opinion on price adequacy, either from the market or from a reliable, experienced person; (2) constantly update your view of your risks in the light of emerging experience and market feedback; and (3) recognize that high sales is a possible market signal of underpricing.

This is one of the seven ERM Principles for Insurers.

During a Crisis – A Lesson from Fire Fighters

December 10, 2012


The fire cycle: “The action-cycle of a fire from birth to death follows a certain pattern.  The fire itself may vary in proportion from insignificance to conflagration, but regardless of its proportions, origin, propagation or rate of progression, the cycle or pattern of controlling it includes these phases:

1. the period between discovery and the transmittal of the alarm or alerting of the fire forces;

2. the period between receipt of alarm by the fire service and arrival of firemen at the scene of the fire; and, finally,

3. the period between arrival on the fire ground and final extinguishment of the fire itself."

It is important in fire fighting to make sure that the right things happen during each phase and that each step takes as little time as possible.  For the first phase, that means having fire detection equipment in place and working properly, producing a signal that will be noticed and conveyed to the fire forces.  In the second phase, the fire fighters need to be organized to respond appropriately to the alarm.  And the third phase includes the process of diagnosing the situation and taking the necessary steps to put out the fire.

That is a good process model for risk managers to contemplate.  Ask yourself and your staff:

  1. This is about the attitude and preparedness of company staff to accept that there may be a problem.  How long will it be before we know that an actual crisis has hit the company?  How do our alarms work?  Are they all in functioning order?  Or will those closest to the problems delay notifying you of a potential problem?  Sometimes with fires and company crises, an alarm sounds and it is immediately turned off.  The presumption is that everything is normal and the alarm must be malfunctioning.  Or perhaps that the alarm is correct, but that it is calibrated to be too sensitive and there is not a significant problem.  As risk manager, you should urge everyone to err on the side of reporting every possible situation.  Better to have some extra responses than to have events, like fires, rage completely out of control before calling for help.
  2. This is about the preparedness of risk management staff to begin to respond to a crisis.  One problem that many risk management programs face is that their main task seems to be measuring and reporting risk positions.  If that is what people believe is their primary function, then the risk management function will not attract any action oriented people.  If that is the case in your firm, then you as risk manager need to determine who are the best people to recruit as responders and build a rapport with them in advance of the next crisis, so that when it happens, you can mobilize their help.  If the risk staff is all people who excel at measuring, then you also need to define their roles in an emergency – and have them practice those roles.  No matter what, you do not want to find out who will freeze in a crisis during the first major crisis of your tenure.  And freezing (rather than panic) is by far the most common reaction.  You need to find those few people whose reaction to a crisis is to go into a totally focused, active survival mode.
  3. This is about being able to properly diagnose a crisis and to execute the needed actions.  Fire fighters need to determine the source of the blaze, wind conditions, evacuation status and many other things to make their plan for fighting the fire.  They usually need to form that plan quickly, mobilize and execute the plan effectively, making both the planned actions and the unplanned modifications happen as well as can be done.  Risk managers need to perform similar steps.  They need to understand the source of the problem, the conditions around the problem that are outside of the firm, and the continuing involvement of company employees, customers and others.  While risk managers usually do not have to form their plan in minutes as fire fighters must, they do have to do so quickly.  Especially when there are reputational issues involved, swift and sure initial actions can make a world of difference.  And execution is key.  Getting this right means that the risk manager needs to know, in advance of a crisis, what sorts of actions can be taken in a crisis and whether the company staff has the ability to execute them.  There is no sense planning to take actions that require the physical prowess of Navy SEALs if your staff are a bunch of ordinary office workers.  And recognizing the limitations of the rest of the world is important also.  If your crisis affects many others, they may not be able to provide the help from outside that you may have planned on.  If the crisis is unique to you, you need to recognize that some will question getting involved in something that they do not understand but that may create large risks for their organizations.

 

