Archive for the ‘risk assessment’ category

Too Much Risk

August 18, 2014

Risk Management is all about avoiding taking Too Much Risk.

And when it really comes down to it, there are only a few ways to get into the situation of taking too much risk.

  1. Misunderstanding the risk involved in the choices made and to be made by the organization
  2. Misunderstanding the risk appetite of the organization
  3. Misunderstanding the risk taking capacity of the organization
  4. Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity

So Risk Management needs to concentrate on preventing these four situations.  Here are some thoughts on how Risk Management can do that.

1. Misunderstanding the risk involved in the choices made and to be made by an organization

This is the most common driver of Too Much Risk.  There are two major forms of misunderstanding:  misunderstanding the riskiness of individual choices and misunderstanding the way that risk from each choice aggregates.  Both of these drivers were strongly in evidence in the run up to the financial crisis.  The risk of each individual mortgage backed security was not seriously investigated by most participants in the market.  And the aggregation of the risk from the mortgages was misestimated as well.  In both cases, there was some rationalization for the misunderstanding, and the misunderstanding was apparent to most only in hindsight.

That is most common for misunderstood risks.  Those who are later found to have made the wrong decisions about risk were most often acting on their beliefs about the risks at the time.  This problem is particularly common for firms with no history of consistently and rigorously measuring risks.  Those firms usually have very experienced managers who have been selecting their risks for a long time and who may work from rules of thumb.  Those firms suffer this problem most when new risks are encountered, when the environment changes and makes their experience less valid, and when there is turnover of their experienced managers.  Firms that use a consistent and rigorous risk measurement process also suffer from model induced risk blindness.  The best approach is to combine analysis with experienced judgment.
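
As a minimal illustration of the aggregation point (the volatilities and the single-correlation shortcut below are assumptions for the sketch, not anything from the original post), the same three positions can look far less risky under an assumed independence than under strong correlation:

```python
import math

# Hypothetical standalone loss volatilities (standard deviations, $ millions)
# for three lines of business -- illustrative numbers only.
sigmas = [40.0, 55.0, 70.0]

def aggregate_sigma(sigmas, rho):
    """Aggregate volatility assuming a single pairwise correlation rho."""
    total = 0.0
    for i, s_i in enumerate(sigmas):
        for j, s_j in enumerate(sigmas):
            corr = 1.0 if i == j else rho
            total += corr * s_i * s_j
    return math.sqrt(total)

for rho in (0.0, 0.5, 1.0):
    print(f"pairwise correlation {rho:.1f}: aggregate volatility = {aggregate_sigma(sigmas, rho):6.1f}")
# Independence (rho = 0.0) gives about 98; full correlation (rho = 1.0) gives 165.
# The positions are identical in each case; only the aggregation view changes.
```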

2.  Misunderstanding the risk appetite of the organization

This is common for organizations where the risk appetite has never been spelled out.  All firms have risk appetites; it is just that in many, many cases, no one knows what they are in advance of a significant loss event.  So misunderstanding the unstated risk appetite is fairly common.  But actually, the most common problem with unstated risk appetites is underutilization of risk capacity.  Because the risk appetite is unknown, some ambitious managers will push to take as much risk as possible, but the majority will be overcautious and take less risk to make sure that things are “safe”.

3.  Misunderstanding the risk taking capacity of the organization

This misunderstanding affects both companies that do state their risk appetites and companies that do not.  For those that do state their risk appetite, this problem comes about when the company assumes that it has contingent capital available but does not fully understand the contingencies.  The most important contingency is the usual one regarding money – no one wants to give money to someone who really, really needs it.  The preference is to give money to someone who has lots of money and is sure to repay.  For those that do not state a risk appetite, each person who has authority to take on risks makes their own estimate of the risk appetite based upon their own estimate of the risk taking capacity.  It is likely that some will view the capacity as huge, especially in comparison to their decision.  So most often the problem is not misunderstanding the total risk taking capacity, but instead misjudging the available risk capacity.

4.  Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity of the organization

A well established risk management system will have solved the above problems.  However, that does not mean that the firm’s problems are over.  In most companies, there are rewards for success in terms of current compensation and promotions.  But it is usually difficult to distinguish luck from talent and good execution in a business about risk taking.  So there is a great temptation for managers to deliberately ignore the risk evaluation, the risk appetite and the risk taking capacity of the firm.  If the excess risk that they then take produces excess losses, then the firm may take a large loss.  But if the excess risk taking does not result in an excess loss, then there may be outsized gains reported, and the manager may be seen as a highly successful person who saw an opportunity that others did not.  This dynamic will create a constant friction between the Risk staff and those business managers who have found the opportunity that they believe will propel their career forward.

So get to work, risk managers.

Make sure that your organization

  1. Understands the risks
  2. Articulates and understands the risk appetite
  3. Understands the aggregate and remaining risk capacity at all times
  4. Keeps careful track of risks and risk taking to be sure to stop any managers who might want to ignore the risk, the risk appetite and the risk taking capacity

Insurers need to adapt COSO/ISO Risk Management to achieve ERM

July 29, 2014

Both the COSO and ISO risk management frameworks describe many excellent practices.  However, in practice, insurers need to make two major changes from the typical COSO/ISO risk management process to achieve real ERM.

  1. RISK MEASUREMENT – Both COSO and ISO emphasize what RISKVIEWS calls the Risk Impressions approach to risk measurement.  That means asking people what their impression is of the frequency and severity of each risk.  Sometimes they get real fancy and also ask for an impression of Risk Velocity.  RISKVIEWS sees two problems with this for insurers.  First, impressions of risk are notoriously inaccurate.  People are just not very good at making subjective judgments about risk.  Second, the frequency/severity pair idea does not actually represent reality.  The idea properly applies to very specific incidents, not to risks, which are broad classes of incidents.  Each possible incident that makes up the class that we call a risk has a different frequency/severity pair.  There is no single pair that represents the class.  Insurers’ risks are different in one major way from the risks of non-financial firms.  Insurers almost always buy and sell the risks that make up 80% or more of their risk profile.  That means that to make those transactions they should be making an estimate of the expected value of ALL of those frequency and severity pairs (see the short sketch after this list).  No insurance company that expects to survive for more than a year would consider setting its prices based upon something as lacking in reality testing as a single frequency and severity pair.  So an insurer should apply the same discipline to measuring its risks as it does to setting its prices.  After all, risk is the business that it is in.
  2. HIERARCHICAL RISK FOCUS – Neither COSO nor ISO demands that the risk manager run to their board or senior management and proudly expect them to sit still while the risk manager expounds upon the 200 risks in their risk register.  But a depressingly large number of COSO/ISO shops do exactly that.  Then they wonder why they never get a second chance in front of top management and the board.  However, neither COSO nor ISO provides strong enough guidance regarding the Hierarchical principle that is one of the key ideas of real ERM.  COSO and ISO both start with a bottom-up process for identifying risks.  That means that many people at various levels in the company get to make input into the risk identification process.  This is the fundamental way that COSO/ISO risk management ends up with risk registers of 200 risks.  COSO and ISO do not, however, offer much if any guidance regarding how to make that into something that can be used by top management and the board.  In RISKVIEWS’ experience, the 200 item list needs to be sorted into no more than 25 broad categories.  Then those categories need to be considered the Risks of the firm and the list of 200 items considered the Riskettes.  Top management should have a say in the development of that list.  The names of the 25 Risks should be their choices.  The 25 Risks then need to be divided into three groups.  The top 5 to 7 Risks are the first rank risks that are the focus of discussions with the Board.  Those should be the Risks that are most likely to cause a financial or other major disruption to the firm.  Besides focusing on those first rank risks, the board should make sure that management is attending to all of the 25 Risks.  The remaining 18 to 20 Risks then can be divided into two ranks.  Top management should then focus on the first and second rank risks.  And they should make sure that the risk owners are attending to the third rank risks.  Top management, usually through a risk committee, needs to regularly look at these risk assignments and promote and demote risks as the company’s exposure and the risk environment change.  Now, if you are a risk manager who has recently spent a year or more constructing the list of the 200 Riskettes, you are doubtless wondering what use would be made of all that hard work.  Under the Hierarchical principle of ERM, the process described above is repeated down the org chart.  The risk committee will appoint a risk owner for each of the 25 Risks, and that risk owner will work with their list of Riskettes.  If their Riskette list is longer than 10, they might want to create a priority structure, ranking the risks as is done for the board and top management.  But if the initial risk register was done properly, then the Riskettes will be separate because there is something about them that requires something different in their monitoring or their risk treatment.  So the risk register and Riskettes will be a valuable and actionable way to organize their responsibilities as risk owner, even if it is never again shown to top management and the board.
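
As a minimal illustration of the risk measurement point in item 1 (the loss distribution below is made up for the sketch; nothing here is prescribed by COSO, ISO or the NAIC), compare the expected value over the full set of frequency/severity outcomes with the view from a single “representative” pair:

```python
# Hypothetical annual outcome distribution for one insured risk: each row is a
# class of incident with its probability and severity -- illustrative only.
outcomes = [
    (0.890,         0.0),   # no loss
    (0.080,    20_000.0),   # attritional loss
    (0.025,   150_000.0),   # large loss
    (0.005, 1_000_000.0),   # severe loss
]

expected_loss = sum(p * sev for p, sev in outcomes)
print(f"Expected annual loss over all outcomes: {expected_loss:,.0f}")

# A single "representative" pair -- say the large-loss row that might end up on
# a heat map -- tells a very different story about the same risk.
p_single, sev_single = outcomes[2]
print(f"Single-pair view: {p_single:.1%} x {sev_single:,.0f} = {p_single * sev_single:,.0f}")
```

An insurer pricing this risk needs the whole table; any one pair picked from it is an arbitrary summary.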

These two ideas do not contradict the main thrust of COSO and ISO but they do represent a major adjustment in approach for insurance company risk managers who have been going to COSO or ISO for guidance.  It would be best if those risk managers knew in advance about these two differences from the COSO/ISO approach that is applied in non-financial firms.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, Riskviews told of a scale, a Coefficient of Riskiness.  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name calling stage to an actual discussion about Riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of potential loss), and where the reasons for the losses of each individual risk were statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number the average is or what number the standard deviation is.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for about 83% of this Maximum Likely Loss (for a normal distribution, the 1 in 200, or 99.5th percentile, point sits roughly 2.6 standard deviations above the mean, versus roughly 3.1 for the 99.9th percentile).  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
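
A minimal sketch of that arithmetic, assuming the Maximum Likely Loss is taken as the mean plus three times the standard deviation and the Viable, Secure and Robust percentages are applied to it as described (the loss figures are invented):

```python
# Illustrative observed moments of an annual loss distribution ($ millions).
mean_loss = 100.0   # observed average annual loss
sigma     = 40.0    # observed volatility (standard deviation)

maximum_likely_loss = mean_loss + 3.0 * sigma   # roughly the 99.9th percentile for a normal

capital_levels = {
    "Viable (about 1 in 200)": 0.83 * maximum_likely_loss,
    "Secure":                  1.00 * maximum_likely_loss,
    "Robust":                  1.10 * maximum_likely_loss,
}

print(f"Maximum Likely Loss: {maximum_likely_loss:.0f}")
for name, capital in capital_levels.items():
    print(f"{name:>24}: {capital:.0f}")
```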

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.

 

[Figure: COR versus pool size for a risk with average incidence of 1 in 1000]

 

There is a different story for a risk with average incidence of 1 in 100.  COR is less than 6 for a pool as small as 25 exposures and the COR gets down to as low as 3.5.

[Figure: COR versus pool size for a risk with average incidence of 1 in 100]

In producing these graphs, RISKVIEWS notices that COR is largely a function of the number of expected claims.  So the following graph shows COR plotted against the number of expected claims, for low expected numbers of claims.  (High numbers of expected claims produce a COR that is very close to 3, so they are not very interesting.)

[Figure: COR plotted against the number of expected claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses falls to around 0.2 per year, and at around 0.1 expected losses per year the COR is higher still.

[Figure: COR versus number of expected claims for risks with expected losses less frequent than one per year]

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.
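
For readers who want to reproduce the shape of those results, here is a minimal sketch.  It assumes the COR is computed as the 99.9th percentile loss minus the mean, divided by the standard deviation, for a pool of identical, independent exposures with one unit of loss per claim; the exact definition and figures behind the plots above may differ.

```python
from scipy.stats import binom

def coefficient_of_riskiness(n_exposures, incidence, q=0.999):
    """COR for a pool of identical, independent exposures, each producing one
    unit of loss when it claims.  Defined here (an assumption) as
    (q-th percentile - mean) / standard deviation of the claim count."""
    dist = binom(n_exposures, incidence)
    return (dist.ppf(q) - dist.mean()) / dist.std()

for n in (50, 100, 500, 1_000, 10_000, 100_000):
    print(f"incidence 1/1000, pool of {n:>6}: COR = {coefficient_of_riskiness(n, 0.001):.1f}")
# As the pool grows and the expected number of claims rises, the COR drifts
# down toward the normal-distribution value of about 3; small pools (few
# expected claims) push it well above that.
```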

Risks with expected losses less frequent than one per year can have much fatter tails, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are rated as two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” it means to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification recognize that the process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely if ever any data.

In broad terms, there are only a few possibilities for that subjective decision…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly higher than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called “bubbles”.  When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this “Fat Tails”.  It is seen as an unusual model only because it was so common to use the zero contagion model with its simpler math.

RISKVIEWS suggests that when communicating that the modeling approach uses the Moderate model, the degree of contagion assumed should be specified, and an assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we use that 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for riskiness of a model:

[Figure: riskiness scale expressed as multiples of standard deviation]

So in the end the choice is to insert an opinion about the steepness of the ramp up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  On these terms, the discussion can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely a matter of placing existing models on the scale.  People are quite used to working with the Richter Scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.
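
A minimal sketch of how such a scale could be tied to models (the distributions below are arbitrary examples, not the models or cut-offs behind the scale above): for each candidate model, compute how many standard deviations the 1 in 1000 year loss sits above the mean and place that multiple on the scale.

```python
from scipy.stats import norm, lognorm

def riskiness_multiple(dist, q=0.999):
    """Number of standard deviations the 1-in-1000 outcome sits above the mean."""
    return (dist.ppf(q) - dist.mean()) / dist.std()

# Candidate loss models with similar bodies but very different tails -- made up
# purely to show how the model choice moves a risk along the scale.
candidates = {
    "Normal (thin tail)":      norm(loc=100, scale=25),
    "Lognormal (sigma=0.25)":  lognorm(s=0.25, scale=100),
    "Lognormal (sigma=0.75)":  lognorm(s=0.75, scale=100),
}

for name, dist in candidates.items():
    print(f"{name:>24}: 1-in-1000 loss is {riskiness_multiple(dist):.1f} standard deviations above the mean")
# The normal model lands near 3 on such a scale; the heavier lognormal tails
# land progressively higher -- the "steepness of the ramp up" described above.
```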

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where, for some reason, we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Just Stop IT! Right Now. And Don’t Do IT again.

June 16, 2014

IT is a medieval, or possibly pre-medieval, practice for evaluating risks:  the assignment of a single Frequency and Severity pair to each risk, and calling that a risk evaluation.

In the mid-1700s, Daniel Bernoulli wrote:

EVER SINCE mathematicians first began to study the measurement of risk there has been general agreement on the following proposition: Expected values are computed by multiplying each possible gain by the number of ways in which it can occur, and then dividing the sum of these products by the total number of possible cases where, in this theory, the consideration of cases which are all of the same probability is insisted upon. If this rule be accepted, what remains to be done within the framework of this theory amounts to the enumeration of all alternatives, their breakdown into equi-probable cases and, finally, their insertion into corresponding classifications.

Many modern writers attribute this process to Bernoulli, but this is the very first sentence of his “Exposition of a New Theory for Measuring Risk”, published in 1738.  He suggests that the idea was so common in his time that he did not need to cite an original author.  His work is not to prove that this basic idea is correct, but to propose a new methodology for implementing it.

It is hard to say how the single pair idea (i.e. that a risk can be represented by a single frequency/severity pair of values) has crept into basic modern risk assessment practice, but it has.  And it is firmly established.  But in 1738, Bernoulli knew that each risk has many possible gain amounts.  NOT A SINGLE PAIR.

But let me ask you this…

How did you pick the particular pair of values that you use to characterize any of your risks?

You see, as far as RISKVIEWS can tell, Bernoulli was correct – each and every risk has an infinite number of such pairs that are valid.  So how did you pick the one that you use?

Take, for example, the risk of a fire.  There are an infinite number of possible fires that could happen.  Some are more likely and some less likely.  Some would do lots of damage, some only a little.  The likelihood of a fire is not actually always related to the damage.  Some highly unlikely fires might be very small and do little damage.  Hopefully, you do not have the situation of a likely, high damage fire.  But for any single risk, all by itself, you could make up a frequency/severity heat map with many points on the chart.

[Figure: frequency/severity heat map of many possible incidents for a single risk]
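
A minimal sketch of that chart (the fire frequency and severity model below are invented for illustration): slice a single fire risk into severity bands and every band produces its own, equally valid, frequency/severity pair.

```python
import math

# Hypothetical single "fire" risk: 0.2 fires expected per year, with the
# severity of any one fire following a lognormal distribution.
ANNUAL_FREQUENCY = 0.2
MU, SIGMA = 11.0, 1.4   # lognormal parameters for the severity of one fire

def severity_cdf(x):
    """Probability that one fire causes damage of x or less."""
    return 0.5 * (1.0 + math.erf((math.log(x) - MU) / (SIGMA * math.sqrt(2.0))))

bands = [(1, 50_000), (50_000, 250_000), (250_000, 1_000_000), (1_000_000, None)]

print("One risk, many valid frequency/severity pairs:")
for lo, hi in bands:
    prob_in_band = (severity_cdf(hi) if hi else 1.0) - severity_cdf(lo)
    frequency = ANNUAL_FREQUENCY * prob_in_band      # fires per year in this band
    label = f"{lo:,.0f}+" if hi is None else f"{lo:,.0f}-{hi:,.0f}"
    print(f"  severity {label:>22}: frequency {frequency:.4f} per year")
```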

So RISKVIEWS asks again, how do you pick which point from that chart to be the one single point for your main risk report and heat map?

And those heat maps that you are so fond of…

Do you realize that the points on the heat map are not rationally comparable?  That is because there is no single criterion that most risk managers use to pick the pairs.  To be comparable, the values need to have been selected by applying exactly the same criteria.  But usually the actual criteria for choosing the pairs are not clearly articulated.

So here you stand:  you have a risk register that is populated with these bogus statistics.  What can you do to move toward a more rational view of your risks?

You can start to reveal to people that you are aware that your risks are NOT fully measured by that single statistic.  Try revealing some additional statistics about each risk on your risk register:

  • The Likelihood of zero loss (or an inconsequentially low amount) from each risk in any one year
  • The Likelihood of a loss of 1% of earnings or more
  • The expected loss at a 1% likelihood (or 1 in 100 year expected loss)

Try plotting those values and show how the risks on your risk register compare.  Create a heat map that plots likelihood of zero loss against expected loss at a 1% likelihood.

Those values are then comparable.
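
Here is a minimal sketch of producing those three comparable statistics for two register entries (the loss models, the earnings figure and the thresholds are assumptions for the sketch, not data from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
N_YEARS  = 20_000          # simulated years per risk
EARNINGS = 50_000_000      # annual earnings, for the 1%-of-earnings threshold

def simulate_annual_losses(frequency, sev_mu, sev_sigma):
    """Annual aggregate losses: Poisson event counts with lognormal severities."""
    counts = rng.poisson(frequency, N_YEARS)
    return np.array([rng.lognormal(sev_mu, sev_sigma, c).sum() for c in counts])

risks = {
    "Fire (rare, large)":            simulate_annual_losses(0.3, 12.0, 1.5),
    "Attritional (frequent, small)": simulate_annual_losses(40.0, 8.0, 1.0),
}

for name, losses in risks.items():
    p_zero      = np.mean(losses <= 0)                   # likelihood of no loss in a year
    p_big       = np.mean(losses >= 0.01 * EARNINGS)     # likelihood of losing 1%+ of earnings
    loss_1in100 = np.quantile(losses, 0.99)              # 1-in-100 year loss
    print(f"{name:>30}: P(no loss) = {p_zero:5.1%}, "
          f"P(loss >= 1% of earnings) = {p_big:5.1%}, "
          f"1-in-100 loss = {loss_1in100:,.0f}")
```

The contrast is exactly what a single pair hides:  one risk rarely produces any loss but can produce a very large one, while the other loses something every year but never very much.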

So stop IT.  Stop misinforming everyone about your risks.  Stop using frequency severity pairs to represent your risks.

 

Can’t skip measuring Risk and still call it ERM

January 15, 2014

Many insurers are pushing ahead with ERM at the urging of new executives, boards, rating agencies and regulators.  Few of those firms who have resisted ERM for many years have a history of measuring most of their risks.

But ERM is not one of those liberal arts like the study of English Literature.  In Eng Lit, you may set up literature classification schemes, read materials, organize discussion groups and write papers.  ERM can have those elements, but the heart of ERM is Risk Measurement:  comparing risk measures to expectations and to prior period measures.  If a company does not have Risk Measurement, then it does not have ERM.

That is the tough side of this discussion.  The other side is that there are many ways to measure risks, and most companies can implement several of them for each risk without the need for massive projects.

Here are a few of those measures, listed in order of increasing sophistication (a small sketch of two of the simpler ones follows the list):

1. Risk Guesses (AKA Qualitative Risk Assessment)
– Guesses, feelings
– Behavioral Economics Biases
2. Key Risk Indicators (KRI)
– Risk is likely to be similar to …
3. Standard Factors
– AM Best,  S&P, RBC
4. Historical Analysis
– Worst Loss in past 10 years as pct of base (premiums, assets).
5. Stress Tests
– Potential loss from historical or hypothetical scenario
6. Risk Models
– If the future is like the past …
– Or if the future is different from the past in this way …
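
A minimal sketch of two of the simpler measures, items 4 and 5 above (the loss history, premium base and stress scenario are made up for illustration):

```python
# Hypothetical ten-year loss history and premium base, $ millions.
annual_losses  = [12, 18, 9, 45, 14, 11, 80, 16, 22, 13]
annual_premium = [100, 105, 110, 112, 118, 121, 125, 130, 133, 140]

# 4. Historical analysis: worst loss in the past 10 years as a pct of premium.
loss_ratios = [loss / prem for loss, prem in zip(annual_losses, annual_premium)]
worst_ratio = max(loss_ratios)
print(f"Worst year's losses: {worst_ratio:.0%} of that year's premium")

# 5. Stress test: potential loss from a hypothetical scenario -- here, a repeat
#    of the worst year scaled up 50% and applied to the current premium base.
stressed_loss = worst_ratio * 1.5 * annual_premium[-1]
print(f"Hypothetical stress scenario loss: {stressed_loss:.0f}")
```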

More discussion of Risk Measurement on WillisWire:

     Part 2 of a 14 part series
And on RISKVIEWS:
Risk Assessment –  55 other posts relating to risk measurement and risk assessment.

Provisioning – Packing for your trip into the future

April 26, 2013

There are two levels of provisioning for an insurer:  reserves and risk capital.  The two are intimately related.  In fact, in some cases, insurers will spend more time and care in determining the correct number for the sum of the two, called the Total Asset Requirement (TAR) by some, than in determining either piece on its own.

Insurers need a realistic picture of future obligations long before the future is completely clear.  This is a key part of the feedback mechanism.  The results of the first year of business are the most important indication of business success for non-life insurance.  That view of results depends largely upon the integrity of the reserve value.  This feedback information affects performance evaluation, pricing for the next year, risk analysis, capital adequacy analysis and capital allocation.

The other part of provisioning is risk capital.  Insurers also need to hold capital for less likely swings in potential losses.  This risk capital is the buffer that provides for the payment of policyholder claims in a very high proportion of imagined circumstances.  The insurance marketplace, the rating agencies and insurance regulatory bodies all insist that the insurer holds a high buffer for this purpose.

In addition, many valuable insights into the insurance business can be gained from careful analysis of the data that is input to the provisioning process for both levels of provisioning.

However, reserves are most often set to be consistent with considerations (the premiums charged).  Swings between adequate and inadequate pricing are tightly linked to swings in reserves.  When reserves are optimistically set, capital levels may reflect the same bias.  This means that inadequate prices can ripple through to cause deferred recognition of actual claims costs as well as under provisioning at both levels.  This is more evidence that consideration is key to risk management.

There is often pressure for small and smooth changes to reserves and risk capital, but new information and analysis produce jumps in insight, both about expectations for emerging losses and about the methodologies for estimating reserves and capital.  The business pressures may threaten to overwhelm the best analysis efforts here.  The analytical team that prepares the reserve and capital estimates needs to be aware of and prepared for this eventuality.  One good way to prepare is to make sure that management and the board are fully aware of the weaknesses of the modeling approach, so that they are more prepared for the inevitable model corrections.

Insurers need to have a validation process to make sure that the sum of reserves and capital is an amount that provides the degree of security that is sought.  Modelers must allow for variations in risk environment as well as the impact of risk profile, financial security and risk management systems of the insurer in considering the risk capital amount.  Changes in any of those elements may cause abrupt shifts in the amount of capital needed.

The Total Asset Requirement should be determined without regard to where the reserves have been set so that risk capital level does not double up on redundancy or implicitly affirm inadequacy of reserves.
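
A minimal numeric sketch of that point (illustrative figures only): if the Total Asset Requirement is validated independently, the risk capital is simply what remains after the reserves, so an optimistic or conservative reserve pick changes the split but not the total security.

```python
# Illustrative figures, $ millions.
total_asset_requirement = 1_200.0   # set by the validation process, independent of reserves

for basis, reserves in [("optimistic", 700.0), ("best estimate", 800.0), ("conservative", 900.0)]:
    risk_capital = total_asset_requirement - reserves
    print(f"{basis:>14} reserves {reserves:6.0f} -> risk capital {risk_capital:6.0f} "
          f"(TAR unchanged at {total_asset_requirement:.0f})")
```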

The capital determined through the Provisioning process will usually be the key element of the Risk Portfolio process.  That means that accuracy in the subtotals within the models is just as important as the overall total.  The common practice of tolerating offsetting inadequacies in the models may totally distort company strategic decision making.

This is one of the seven ERM Principles for Insurers.

