Key Ideas of ERM

Posted July 24, 2014 by riskviews
Categories: Chief Risk Officer, Enterprise Risk Management, Risk Culture, Risk Management System


For a set of activities to be called ERM, they must satisfy ALL of these Key Ideas…

  1. Transition from Evolved Risk Management to planned ERM
  2. Comprehensive – includes ALL risks
  3. Measurement – on a consistent basis allows ranking and…
  4. Aggregation – adding up the risks to know total
  5. Capital – comparing sum of risks to capital – can apply security standard to judge
  6. Hierarchy – decisions about risks are made at the appropriate level in the organization – which means information must be readily available

Risk management activities that do not satisfy ALL Key Ideas may well be good and useful things that must be done, but they are not, by themselves, ERM.

Many activities that seek to be called ERM do not really satisfy ALL Key Ideas.  The most common “fail” is item 2, Comprehensive.  When a risk is left out of consideration, that is the same as measuring it at zero.  So no matter how difficult a risk is to measure, it is extremely important to really, really be Comprehensive.

But it is quite possible to “fail” on any of the other Key Ideas.

The Transition idea usually “fails” when the longest-standing traditional risk management practices are not challenged to come up to the ERM standards that are being applied to other risks and risk management activities.

Measurement “fails” when the tails of the risk model are not of the correct “fatness” and risks are therefore significantly undervalued.

Aggregation “fails” when too much independence of risks is assumed.  Most often ignored is the interdependence caused by common counterparties.

Capital “fails” when the security standard is based upon a very partial risk model and not on a completely comprehensive risk model.

Hierarchy “fails” when top management and/or the board do not personally take responsibility for ERM.  The CRO should not be an independent advocate for risk management; the CRO should be the agent of the power structure of the firm.

In fact, Hierarchy failure is the other most common reason for ERM to fail.

Is it rude to ask “How fat is your tail?”

Posted July 23, 2014 by riskviews
Categories: Enterprise Risk Management, Tail Risk


In fact, not only is it not rude, the question is central to understanding risk models.  The Coefficient of Riskiness (COR) allows us for the first time to talk about this critical question.


You see, “normal” sized tails have a COR of three. If everything were normal, then risk models wouldn’t be all that important. We could just measure volatility and multiply it by 3 to get the 1 in 1000 result. If you instead want the 1 in 200 result, you would multiply the 1 in 1000 result by 83%.

Amazing maths fact – 3 is always the answer.
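As a quick check of that arithmetic, here is a small sketch (added here in Python with scipy; it is not from the original post): the 1 in 1000 point of a normal distribution sits at about 3.09 standard deviations and the 1 in 200 point at about 2.58, which is where the 3 and the 83% come from.

    from scipy.stats import norm

    # 1 in 1000 and 1 in 200 points of a normal distribution, in standard deviations
    z_1_in_1000 = norm.ppf(0.999)    # about 3.09, the "3" in the post
    z_1_in_200 = norm.ppf(0.995)     # about 2.58

    print(z_1_in_1000)               # ~3.09
    print(z_1_in_200 / z_1_in_1000)  # ~0.83, i.e. the 83% factor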

But everything is not normal. Everything does not have a COR of 3. So how fat are your tails?

RISKVIEWS looked at an equity index model. That model was carefully calibrated to match up with very long term index returns (using Robert Shiller’s database). The fat tailed result there has a COR of 3.5. With that model the 2008 S&P 500 total return loss of 37% is a 1 in 100 loss.

So if we take that COR of 3.5 and apply it to the 1971 to 2013 experience that happens to be handy, the mean return is 12% and the volatility is about 18%.  Using the simple COR approach, we estimate the 1 in 1000 loss as 50% (3.5 times the volatility, less the average return).  To get the 1/200 loss, we can take 83% of that and get a 42% loss.
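A minimal sketch of that calculation (Python; the function name and the 83% ratio default are just for illustration, while the 12%, 18% and 3.5 figures are from the paragraph above):

    def cor_loss(mean, vol, cor, ratio_1_in_200=0.83):
        """Estimated 1 in 1000 and 1 in 200 losses, expressed as positive numbers."""
        loss_1_in_1000 = cor * vol - mean              # COR times volatility, less the mean return
        loss_1_in_200 = ratio_1_in_200 * loss_1_in_1000
        return loss_1_in_1000, loss_1_in_200

    print(cor_loss(0.12, 0.18, 3.5))   # roughly (0.51, 0.42) -- the ~50% and 42% above
    print(cor_loss(0.12, 0.18, 4.0))   # roughly (0.60, 0.50) -- compare the 59% and 49% discussed below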

RISKVIEWS suggests that the COR can be an important part of Model Validation.

Looking at the results above for the stock index model, the question becomes: why is 3.5 the correct COR for the index?  We know that in 2008, the stock market actually dropped 50% from high point to low point within a 12 month period that was not a calendar year.  If we go back to Shiller’s database, which actually tracks the index values monthly (with extensions estimated for 50 years before the actual index was first defined), we find that there are approximately 1500 12 month periods.  RISKVIEWS recognizes that these are not independent observations, but to answer this particular question, these actually are the right data points.  And looking at that data, a 50% drop in a 12 month period sits at around the 1 in 1000 level.  So a model with a 3.5 COR is pretty close to an exact fit with the historical record.

And what if you have an opinion about the future riskiness of the stock market?  You can vary the volatility assumption if you think that the current market, with high speed trading and globally, instantaneously interlinked markets, will be more volatile than the past 130 years that Shiller’s data covers.  You can also adjust the future mean.  You might at least want to replace the arithmetic mean of 12% quoted above with the historic geometric mean of 10.6%, since we are not really talking about holding stocks for just one year.  And you can have an opinion about the riskiness of stocks in the future.  A COR of 3.5 means that the tail at the 1 in 1000 point is 3.5 / 3, or about 116.7%, of the normal tail.  That is hardly an obese tail.
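For anyone who wants to repeat that empirical check, a rough sketch is below (Python with pandas; the file name and column name are hypothetical placeholders for a monthly index series like Shiller’s, and dividends are ignored here even though the post works with total returns):

    import pandas as pd

    # Monthly index levels -- "shiller_monthly.csv" and the "price" column are
    # hypothetical placeholders, not the actual Shiller file layout.
    prices = pd.read_csv("shiller_monthly.csv")["price"]

    # Overlapping rolling 12 month returns (not independent observations,
    # the same caveat the post makes).
    returns_12m = prices.pct_change(periods=12).dropna()

    print(len(returns_12m))                   # roughly 1500 periods in the post
    print(returns_12m.sort_values().head(5))  # the handful of worst 12 month drops
    print((returns_12m <= -0.50).sum())       # how many periods were 50% drops or worse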

The equity index model that we started with here has a 1 in 100 loss value of 37%.  That was the 2008 calendar year total return for the S&P 500.  If we want to know what we would get with tails that are twice as fat (twice as far above the normal COR of 3), we can look at a COR of 4.0 instead of 3.5.  That would put the 1 in 1000 loss at 9 points worse, or 59%.  That would make the 1 in 200 loss 7 points worse, or 49%.

Those answers are not exact. But they are reasonable estimates that could be used in a validation process.

Non-technical management can look at the COR for each model and can participate in a discussion of the reasonability of the fat in the tails for each and every risk.

RISKVIEWS believes that the COR can provide a basis for that discussion. It can be like the Richter scale for earthquakes or the Saffir-Simpson scale for hurricanes. Even though people in general do not know the science underlying either scale, they do believe that they understand what the scale means in terms of severity of experience. With exposure, the COR can take that place for risk models.

Chicken Little or Coefficient of Riskiness (COR)

Posted July 21, 2014 by riskviews
Categories: Enterprise Risk Management, risk assessment, Tail Risk


Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has its own complex stew of risks.  Its riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, Riskviews told of a scale, a Coefficient of Riskiness.  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name calling stage to an actual discussion about Riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of potential loss), and where the causes of the losses for each individual risk are statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number the average is or what number the standard deviation is.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have it chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
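A small sketch of those capital levels (Python; the function names and the illustrative loss figures are mine, while the 83%, 100% and 110% multiples come from the paragraph above):

    def maximum_likely_loss(avg_loss, volatility, cor=3.0):
        """Average loss plus three times volatility (losses as positive numbers)."""
        return avg_loss + cor * volatility

    def capital_levels(avg_loss, volatility):
        mll = maximum_likely_loss(avg_loss, volatility)
        return {
            "Viable (1 in 200)": 0.83 * mll,   # 83% of the Maximum Likely Loss
            "Secure": 1.00 * mll,              # approximately 100% of it
            "Robust": 1.10 * mll,              # roughly 10% higher
        }

    # Illustrative numbers only, not from the post: average loss 10, volatility 20.
    print(capital_levels(10.0, 20.0))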

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and they are dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.

 

[Figure: COR by pool size for a risk with an average incidence of 1 in 1000]

 

There is a different story for a risk with an average incidence of 1 in 100.  The COR is less than 6 even for a pool as small as 25 exposures, and it gets down to as low as 3.5.

[Figure: COR by pool size for a risk with an average incidence of 1 in 100]
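The calculation behind those two graphs can be sketched directly (Python with scipy; the function binomial_cor is my own illustration of the binomial approach described above, defining COR as the 99.9th percentile claim count less the mean, divided by the standard deviation):

    import math
    from scipy.stats import binom

    def binomial_cor(n_exposures, incidence, pctile=0.999):
        """COR = (99.9th percentile claim count - mean count) / standard deviation."""
        mean = n_exposures * incidence
        sd = math.sqrt(n_exposures * incidence * (1 - incidence))
        q = binom.ppf(pctile, n_exposures, incidence)
        return (q - mean) / sd

    print(binomial_cor(100, 0.001))   # about 6, as in the 1 in 1000 incidence graph
    print(binomial_cor(50, 0.001))    # close to 9
    print(binomial_cor(25, 0.01))     # under 6, as in the 1 in 100 incidence graph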

In producing these graphs, RISKVIEWS notices that COR is largely a function of the number of expected claims.  The following graph shows COR plotted against the number of expected claims for low expected numbers of claims.  (High expected claim counts produce a COR that is very close to 3, so they are not very interesting.)

[Figure: COR versus expected number of claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where the losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses is in the range of 0.2, and for expected losses below 0.1 the values are even higher.

[Figure: COR for risks with expected losses less frequent than one per year]

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, and major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with expected losses less frequent than one per year can have much fatter tails, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

Posted July 14, 2014 by riskviews
Categories: Enterprise Risk Management, Modeling, risk assessment, Statistics, Tail Risk, Volatility


There are two ways to assess risk:  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” it means to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data and analysis driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes and the tail, particularly the adverse tail of the distribution where the risk calculations actually take place and where there is rarely if ever any data.
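To make that concrete, here is a hedged illustration (my own, in Python; it is not anyone’s actual model choice): fit the middle of the same data with a thin-tailed model and with a fat-tailed model, and see how far apart the extrapolated 1 in 1000 outcomes land.

    import numpy as np
    from scipy.stats import norm, t

    rng = np.random.default_rng(0)
    data = rng.standard_t(df=4, size=500)   # synthetic "observations" for the middle of the distribution

    mu, sigma = data.mean(), data.std(ddof=1)

    # Choice 1: a normal model with matched mean and volatility.
    normal_q = norm.ppf(0.001, loc=mu, scale=sigma)

    # Choice 2: a Student-t fit to the same data (one common fat-tailed choice).
    df_hat, loc_hat, scale_hat = t.fit(data)
    t_q = t.ppf(0.001, df_hat, loc=loc_hat, scale=scale_hat)

    # Both models agree reasonably well in the middle of the distribution;
    # the extrapolated 1 in 1000 outcomes can differ materially.
    print(normal_q, t_q)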

There are only a couple of possibilities for that subjective decision, in broad terms…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly higher than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called “bubbles”.  When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this “Fat Tails”.  It is seen as an unusual model only because it was so common to use the zero contagion model with the simpler maths.

RISKVIEWS suggests that when communicating that the modeling approach uses the Moderate model, the degree of contagion assumed should be specified, and an assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we use that 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for riskiness of a model:

[Figure: simple riskiness scale based on the 1 in 1000 year loss as a multiple of standard deviations]

So in the end, the choice is to insert an opinion about the steepness of the ramp up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with the Richter scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.

*                  *                *               *             *                *

So now we go to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where we for some reason do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

ORSA ==> AC – ST > RCS

Posted June 30, 2014 by riskviews
Categories: Economic Capital, Enterprise Risk Management, ORSA, Stress Test


The Own Risk and Solvency Assessment (or Forward Looking Assessment of Own Risks based on ORSA principles) initially seems daunting.  But the simple formula in the title of this post provides a guide to what is really going on.

  1. To perform an ORSA, an insurer must first decide upon its own Risk Capital Standard.
  2. The insurer needs to develop the capacity to project the financial and risk exposure statistics forward for several years under a range of specified conditions.
  3. Included in the projection capacity is the ability to determine (a) the amount of capital required under its own risk capital standard and (b) the projected amount of capital available.
  4. The insurer needs to select a range of Stress Tests that will be used for the projections.
  5. If, under a projection based upon a Stress Test, the available capital exceeds the Risk Capital Standard, then that Stress Test is a pass:  AC – ST > RCS
  6. If, under a projection based upon a Stress Test, the available capital is less than the Risk Capital Standard, then that Stress Test is a fail and requires an explanation of intended management actions:  AC – ST < RCS  ==> MA

RISKVIEWS suggests that Stress Tests should be chosen so that the company can demonstrate that it can pass (AC – ST > RCS) the tests under a wide range of scenarios AND, in addition, that one or several of the Stress Tests are severe enough to produce a fail (AC – ST < RCS  ==> MA) condition, so that the company can demonstrate that management has conceptualized the actions that would be needed in extreme loss situations.
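A minimal sketch of that pass/fail logic (Python; the scenario names and capital figures are illustrative only, not from the post):

    def orsa_check(available_capital, stress_losses, risk_capital_standard):
        """For each stress test, apply AC - ST and compare against RCS."""
        results = {}
        for name, loss in stress_losses.items():
            remaining = available_capital - loss            # AC - ST
            passed = remaining > risk_capital_standard      # AC - ST > RCS
            results[name] = "pass" if passed else "fail ==> management actions (MA)"
        return results

    stresses = {"Normal volatility": 50, "Historical worst case": 150, "Future worst case": 400}
    print(orsa_check(available_capital=500, stress_losses=stresses, risk_capital_standard=300))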

RISKVIEWS also guesses that an insurer that picks a low Risk Capital Standard and Normal Volatility Stress Tests will get push back from the regulators reviewing the ORSA.

RISKVIEWS also guesses that an insurer that picks a high Risk Capital Standard will fail some or all of the more severe Stress Tests.

Furthermore, RISKVIEWS predicts that many insurers will fail the real Future Worst Case Stress Tests.  Only firms that hold themselves to a Robust Risk Capital Standard are likely to have sufficient capital to potentially maintain solvency.  In RISKVIEWS’ opinion, these Future Worst Case Stress Tests are useful mainly as the starting point for a Reverse Stress Test process.  In financial markets, we have experienced a real-life worst case stress with the 2008 Financial Crisis and the following events.  Imagining insurance worst case scenarios that are as adverse as those events seems useful to promoting insurer survival.  Imagining events that are much worse than those – which is what is meant by the Future Worst Case Scenario idea – seems to be overkill.  But, in fact, the history of adverse events in the recent past seems to indicate that each new major loss is at least twice the previous record.

A Reverse Stress Test is a process under which an insurer would determine the adverse scenario that drives the insurer to insolvency.  Under the NAIC ORSA, Reverse Stress Tests are required, but it is not specified whether those tests should be based upon a condition of failing to meet the insurer’s own Risk Capital Standard or the regulator’s solvency standard.  RISKVIEWS would recommend that both types of tests be performed.
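In its very simplest single-scenario form, that reverse stress question can be sketched as below (Python; the capital figures and both standards are illustrative assumptions, not values from the post):

    def reverse_stress_loss(available_capital, standard):
        """The loss size at which AC - loss just falls to the chosen standard."""
        return available_capital - standard

    available_capital = 500     # illustrative
    own_rcs = 300               # the insurer's own Risk Capital Standard (illustrative)
    regulatory_minimum = 150    # the regulator's solvency standard (illustrative)

    print(reverse_stress_loss(available_capital, own_rcs))             # breach of the insurer's own standard
    print(reverse_stress_loss(available_capital, regulatory_minimum))  # breach of the regulatory standard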

This discussion is the heart of the ORSA.  The full ORSA requires many other elements.  See the recent post INSTRUCTIONS FOR A 17 STEP ORSA PROCESS for the full discussion.

What kind of Stress Test?

Posted June 25, 2014 by riskviews
Categories: Enterprise Risk Management, Stress Test


What kind of future were you thinking of when you constructed your stress tests?  Here are six different visions of the stressed future that have been the basis for stress tests.

  • Historical Worst Case – Worst experience in the past 20 – 25 years
  • Normal Variability – Stress falls within expected range for a normal five year period
  • Adverse Environment Variability – Stress falls within expected range for a five year period that includes general deterioration such as recession or major weather/climate deviation
  • Future Realistic Disaster – Worst experience that is reasonably expected in the future (even if it has never happened)
  • Adverse Environment Disaster – Worst experience that is reasonably expected in the future if the future is significantly worse than the past
  • Future Worst Case – Maximum plausible loss that could occur even if you believe that likelihood is extremely remote

Here is a long list of stress scenarios that comes from the exposure draft of the NAIC document for ORSA reviewers:

1. Credit

• Counterparty exposure (loss of specified amount to reinsurer, derivatives party, supplier)
• Equity securities (40%/50% drop, no growth in stocks in 3 years)
• General widening of credit spreads (increase in defaults)
• Other risk assets

2. Market

• 300 basis point pop up in interest rates
• Prolonged low interest rates (10 year treasury of 1%)
• Material drop in GDP & related impacts
• Stock market crash or specific extreme condition (Great Depression)
• Eurozone collapse
• U.S. Treasury collapse
• Foreign currency shocks (e.g. percentages)
• Municipal bond market collapse
• Prolonged multiple market downturn (e.g. 2008/2009 crisis/or 1987 stock market drop-or 50% drop in equities, 150bp of realized credit losses)

3. Pricing/Underwriting

• Significant drop in sales/premiums due to varying reasons
• Impact of 20% reduction in mortality rates on annuities
• Material product demonstrates specific losses (e.g. 1 in 20 year events)
• Severe pandemic (e.g. Avian bird flu based upon World Health Organization mortality assumption)
• California and New Madrid earthquakes, biological, chemical or nuclear terrorist attacks in locations of heaviest coverage (consider a specified level of industry losses)
• Atlantic hurricane (consider a specified level of industry losses previously unseen/may consider specified levels per different lines of coverage) in different areas (far northeast, northeast, southeast, etc.)
• U.S. tornado over major metropolitan area with largest exposure
• Japanese typhoon/earthquake (consider a specified level of industry losses previously unseen)
• Major aviation/marine collision
• Dirty bomb attack
• Drop in rating to BB

4. Reserving

• Specified level of adverse development (e.g. 30%)
• Regulatory policy change requires additional reserves (e.g. 30%)

5. Liquidity

• Catastrophe results in material immediate claims of 3X normalized amounts
• Call on any existing debt
• Material spike in lapses (e.g. 3X normal rates)
• Drop in rating to BB

6. Operational

• Loss of systems for 30 days
• Terrorist act
• Cybercrime
• Loss of key personnel
• Specified level of fraud within claims

7. Legal

• Material adverse finding on pending claim
• Worst historical 10 year loss is multiplied at varying levels

8. Strategic

• Product distribution breakup

9. Reputational

• PR crisis
• Drop in rating to BB

These seem to RISKVIEWS to fall into all six of the categories.  Many of these scenarios would fall into the “Normal Variability” category for some companies and into the Historical Worst Case category for others.  A few are in the area of “Future Worst Case” – such as the U.S. Treasury collapse.

RISKVIEWS suggests that when doing Stress Testing, you should decide what sort of Stress you are intending.  You may not agree with RISKVIEWS’ categories, but you should have your own categories.  It might be a big help to the reader of your Stress Test report to know which sort of stress you think that you are testing.  They may or may not agree with you on which category your Stress Scenario falls into, and that would be a valuable, revealing discussion.

Risk Capital Standard

Posted June 23, 2014 by riskviews
Categories: Economic Capital, Enterprise Risk Management, Regulatory Risk, Solvency II


Insurers in the US and Canada are required to state their own internal Risk Capital Standard in their ORSA Summary Report.  From RISKVIEWS’ observations of actual insurer actions over the years, insurers have actually operated with four levels of Risk Capital Standards:

  • Solvency – enough capital to avoid take-over by regulators
  • Viable – enough capital to avoid reaching Solvency level with “normal” volatility
  • Secure – enough capital to satisfy sophisticated commercial buyers that you will pay claims in most situations
  • Robust – enough capital to maintain a Secure level of capital after a major loss

In many cases, this is not necessarily a clear conscious decision, but insurers do seem to pick one of those four levels and stick with it.

Insurers operating at the Solvency level are usually in constant contact with their regulator.  They are almost always very small insurers who are operating on the verge of regulatory takeover.  They operate in markets where there is no concern on the part of their customers for the security of their insurer.  Sometimes these insurers are government sponsored and are permitted to operate at this level for as long as they are able, because the government is unwilling to provide enough capital and the company is not able to charge enough premiums to build up additional capital, possibly because of government restrictions on rates.  This group of insurers is very small at most times.  Any adverse experience will mean the end of the line for these companies.

Many insurers operate at the Viable level.  These insurers are usually operating in one or several personal/individual insurance lines where their customers are not aware of or are not sensitive to security concerns.  Most often these insurers write short term coverages such as health insurance, auto insurance or term insurance.  These insurers can operate in this manner for decades or until they experience a major loss event.  They do not have capital for such an event, so there are three possible outcomes:  insolvency and breakup of the company, continued operation at the Solvency level of capital without recovery, or continued operation with gradual recovery of capital back to the Viable level.

The vast bulk of the insurance industry operates at the Secure level of capital.  Companies with a Secure capital level are able to operate in commercial/group lines of business, reinsurance or the large-amount individual products where there is a somewhat knowledgeable assessment of security as a part of the due diligence process of the insurance buyer.  With capital generally at the level of a major loss plus the Viable capital level, these companies can usually withstand a major loss event on paper, but if their business model is dependent upon those products and niches where high security is required, a major loss will likely put them out of business because of a loss of confidence of their customer base.  After a large loss, some insurers have been able to shift to operating with a Viable capital level and gradually rebuild their capital to regain the Secure position and re-engage with their original markets.  But most commonly, a major loss causes these insurers to allow themselves to be acquired so that they can get value for the infrastructure that supports their high end business model.

A few insurers and reinsurers have the goal of retaining their ability to operate in their high end markets in the event of a major loss by targeting a Robust capital level.  These insurers are holding capital that is at least as much as a major loss plus the Secure capital level.  In some cases, these groups are the reinsurers who provide risk relief to other Robust insurers and to the more cautious insurers at the Secure level.  Other firms in this group include larger old mutual insurers who are under no market pressure to shed excess capital to improve Return on Capital.  These firms are easily able to absorb moderate losses without significant damage to their level of security and can usually retain at least the Secure level of capital after a major loss event.  If that major loss event is a systematic loss, they are able to retain their market leading position.  However, if they sustain a major loss that is less broadly shared, they might end up losing their most security conscious customers.  Risk management strategy for these firms should focus on avoiding such an idiosyncratic loss.  However, higher profits are often hoped for from concentrated, unique (re)insurance deals, which is usually the temptation that leads to these firms falling from grace.

One of the goals of Solvency II in Europe has been to outlaw operating an insurer at the Solvency or Viable levels of capital.  This choice presents two problems:

  • It has led to the problem regarding the standard capital formula.  As noted above, the Solvency level is where most insurers would choose to operate.  Making this the regulatory minimum capital means that the standard formula must be near perfectly correct, a daunting task even without the political pressures on the project.  Regulators’ tendency would be to make all approximations rounding up.  That is likely to raise the cost of the lines of insurance that are most affected by the rounding.
  • It is likely to send many insurers into the arms of the regulators for resolution in the event of a significant systematic loss event.  There is not ever going to be regulatory capacity to deal with resolution of a large fraction of the industry, nor is resolution likely to be needed (since many insurers have been operating in Europe just fine with a Viable level of capital for many years).  It is therefore likely that the response to such an event will be to adjust the minimum capital requirement in one way or another, perhaps allowing several years for insurers to regain the “minimum” capital requirement.  Such actions will undermine the degree to which insurers who operate in markets that have traditionally accepted a Viable capital level will take the capital requirement completely seriously.

It is RISKVIEWS’ impression that the Canadian regulatory minimum capital is closer to the Viable level, while the US RBC action level is at the Solvency level.

It is yet to be seen whether the US eventually raises the RBC requirement to the Viable level or if Canada raises its MCCSR to the Secure level because of pressure to comply with the European experiment.

If asked, RISKVIEWS would suggest that the US and Canada wait until (a) the Europeans actually implement Solvency II (which is not expected to be fully in force for many years after initial implementation due to phase-in rules) and (b) the European industry experiences a systematic loss event.  RISKVIEWS is not likely to be asked, however.

It is RISKVIEWS’ prediction that the highly theoretical ideas that drive Solvency II will need major adjustment and that those adjustments will need to be made at the time when there is a major systematic loss event.  So the ultimate nature of Solvency II will remain a complete mystery until then.

