
Risk Measurement & Reporting

October 18, 2021

Peter Drucker is reported to have once said “what gets measured, gets managed.” That truism of modern management applies to risk as well as it does to other, more commonly measured things like sales, profits and expenses.

Regulators take a similar view; what gets measured should get managed. ORSA frameworks aim to support prospective solvency by giving management a clear view of their on-going corporate risk positions.

This in turn should reduce the likelihood of large unanticipated losses if timely action can be taken when a risk limit is breached.

From a regulatory perspective, each identified risk should have at least one measurable metric that is reported upwards, ultimately to the board.

The Need to Measure Up

Many risk management programs build up extensive risk registers but are stymied by this obvious next step – that of measuring the risks that have been identified.

Almost every CEO can cite the company’s latest figures for sales, expenses and profits, but very few know what the company’s risk position might be.

Risks are somewhat more difficult to measure than profits due to the degree to which they depend upon opinions.

Insurance company profits are already seen as opaque by many non-industry observers because profits depend on more than just sales and expenses: profits depend upon claims estimates, which are based on current (and often incomplete) information about those transactions.

Risk, on the other hand, is all about things that might happen in the future: specifically, bad things that might happen in the future.

A risk measure reflects an opinion about the size of the exposure to future losses. All risk measures are opinions; there are no facts about the future. At least not yet.

Rationalizing Risk

There are, however, several ways that risk can be measured to facilitate management in the classical sense that Drucker was thinking of.

That classic idea is the management control cycle, where management sets a plan and then monitors emerging experience in comparison to that plan.

To achieve this objective, risk measures need to be consistent from period to period. They need to increase when volume of activity increases, but they also need to reflect changes in the riskiness of activities as time passes and as the portfolio of the risk taker changes.

Good risk measures provide a projected outcome; but in some cases, such calculations are not available and risk indicators must be used instead.

Risk indicators measure something that is closely related to the risk and so can be expected to vary similarly to an actual risk measure, if one were available.

For insurers, current state-of-the-art risk measures are based upon computer models of the risk taking activities.

With these models, risk managers can determine a broad range of possible outcomes for a risk taking activity and then define the risk measure as some subset of those outcomes.

Value at Risk

The most common such measure is called value at risk (VaR). If the risk model is run with a random element (usually called a Monte Carlo or stochastic model), the 99% VaR is the 99th result when 100 outcomes are ranked from best to worst, or the 990th out of 1,000; in other words, the loss that is exceeded in only 1% of the modeled scenarios.

Contingent Tail Expectation

This value might represent the insurer’s risk capital target. A similar risk measure is the contingent tail expectation (CTE), which is also called the tail value at risk (TVaR).

The 99% CTE is the average of all the values that are worse than the 99% VaR. You can think of these two values in this manner: if a company holds capital at the 99% VaR level, then the 99% CTE minus the 99% VaR is the average amount of loss to policyholders should the company become insolvent.
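As a rough illustration (not taken from the original post), here is a minimal sketch of how a 99% VaR and 99% CTE might be pulled out of a set of simulated outcomes. The loss figures, the lognormal loss generator and the var_cte helper are all hypothetical stand-ins for a real stochastic model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stochastic model: 10,000 simulated annual losses (in $ millions).
# A lognormal generator is used here only as a stand-in for a real risk model.
losses = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)

def var_cte(losses, level=0.99):
    """Return (VaR, CTE) at the given confidence level from simulated losses."""
    var = np.quantile(losses, level)      # loss exceeded in (1 - level) of scenarios
    cte = losses[losses >= var].mean()    # average of the losses beyond the VaR
    return var, cte

var99, cte99 = var_cte(losses, 0.99)
print(f"99% VaR: {var99:,.1f}   99% CTE: {cte99:,.1f}")
# If capital is held at the 99% VaR, then (CTE - VaR) approximates the average
# shortfall to policyholders in the scenarios where that capital is exhausted.
```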

Rating agencies, and increasingly regulators, require companies to provide results of risk measures from stochastic models of natural catastrophes.

Stochastic models are also used to estimate other risk exposures, including underwriting risk from other lines of insurance coverage and investment risk.

In addition to stochastic models, insurers also model possible losses under single well-defined adverse scenarios. The results are often called stress tests.

Regulators are also increasingly calling for stress tests to provide risk measures that they feel are more easily understood and compared among companies.

Key Risk Indicators

Most other risks, especially strategic and operational risks, are monitored by key risk indicators (KRIs). For these risks, good measures are not available and so we must rely on indicators.

For example, an economic downturn could pose risk to an insurer’s growth strategy. While it may be difficult to measure the likelihood of a downturn or the extent to which it would impair growth, the insurer can use economic forecasts as risk indicators.

Of course, simply measuring risk is insufficient. The results of the measurement must be communicated to people who can and will use the risk information to appropriately steer the future activity of the company.

Risk Dashboard

Simple charts of numbers are sufficient in some cases, but the state of the art approach to presenting risk measurement information is the risk dashboard.

With a risk dashboard, several important charts and graphs are presented on a single page, like the dashboard of a car or airplane, so that the user can see important information and trends at a glance.

The risk dashboard is often accompanied by the charts of numbers, either on later pages of a hard copy or on a click-through basis for on-screen risk dashboards.

[Image: dashboard example]


Is it rude to ask “How fat is your tail?”

July 23, 2014

In fact, not only is it not rude, the question is central to understanding risk models.  The Coefficient of Riskiness (COR) allows us for the first time to talk about this critical question.


You see, “normal” sized tails have a COR of three. If everything were normal, then risk models wouldn’t be all that important. We could just measure volatility and multiply it by 3 to get the 1 in 1000 result. If you instead want the 1 in 200 result, you would multiply the 1 in 1000 result by 83%.
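For readers who want to see where the 3 and the 83% come from, here is a quick check using standard normal quantiles (my own sketch, not from the original post):

```python
from scipy.stats import norm

q_1in1000 = norm.ppf(0.999)   # ~3.09 standard deviations
q_1in200  = norm.ppf(0.995)   # ~2.58 standard deviations

print(round(q_1in1000, 2), round(q_1in200, 2), round(q_1in200 / q_1in1000, 2))
# 3.09 2.58 0.83 -> the 1-in-200 loss is about 83% of the 1-in-1000 loss
#                   (measured in standard deviations) when tails are normal.
```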

Amazing maths fact – 3 is always the answer.

But everything is not normal. Everything does not have a COR of 3. So how fat are your tails?

RISKVIEWS looked at an equity index model. That model was carefully calibrated to match up with very long term index returns (using Robert Shiller’s database). The fat tailed result there has a COR of 3.5. With that model the 2008 S&P 500 total return loss of 37% is a 1 in 100 loss.

So if we take that COR of 3.5 and apply it to the experience of 1971 to 2013 that happens to be handy, the mean return is 12% and the volatility is about 18%. Using the simple COR approach, we estimate the 1 in 1000 loss as 50% (3.5 times the volatility subtracted from the average). To get the 1/200 loss, we can take 83% of that and we get a 42% loss.
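The arithmetic in that paragraph can be sketched in a few lines. This simply restates the post's own numbers; nothing new is calculated here.

```python
mean_return = 0.12   # arithmetic mean return, 1971-2013
volatility  = 0.18   # annual standard deviation
cor         = 3.5    # Coefficient of Riskiness for the equity index model

loss_1in1000 = cor * volatility - mean_return   # ~0.51, i.e. roughly a 50% loss
loss_1in200  = 0.83 * loss_1in1000              # ~0.42, i.e. roughly a 42% loss

print(f"1-in-1000 loss ~ {loss_1in1000:.0%}, 1-in-200 loss ~ {loss_1in200:.0%}")
```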

RISKVIEWS suggests that the COR can be an important part of Model Validation.

Looking at the results above for the stock index model, the question becomes: why, then, is 3.5 the correct COR for the index? We know that in 2008, the stock market actually dropped 50% from high point to low point within a 12 month period that was not a calendar year. If we go back to Shiller’s database, which actually tracks the index values monthly (with extensions estimated for 50 years before the actual index was first defined), we find that there are approximately 1500 12 month periods. RISKVIEWS recognizes that these are not independent observations, but to answer this particular question, these actually are the right data points. And looking at that data, a 50% drop in a 12 month period is around the 1000th worst 12 month period. So a model with a 3.5 COR is pretty close to an exact fit with the historical record.

And what if you have an opinion about the future riskiness of the stock market? You can vary the volatility assumptions if you think that the current market, with high speed trading and globally, instantaneously interlinked markets, will be more volatile than the past 130 years that Shiller’s data covers. You can also adjust the future mean. You might at least want to replace the arithmetic mean quoted above of 12% with the historic geometric mean of 10.6%, since we are not really talking about holding stocks for just one year. And you can have an opinion about the riskiness of stocks in the future. A COR of 3.5 means that the tail at the 1 in 1000 point is 3.5 / 3, or 116.6%, of the normal tail. That is hardly an obese tail.

The equity index model that we started with here has a 1 in 100 loss value of 37%. That was the 2008 calendar total return for the S&P 500. If we want to know what we would get with tails that are twice as fat, with the concept of COR, we can look at a COR of 4.0 instead of 3.5. That would put the 1 in 1000 loss at 9% worse or 59%. That would make the 1 in 200 loss 7% worse or 49%.

Those answers are not exact. But they are reasonable estimates that could be used in a validation process.

Non-technical management can look at the COR for each model and participate in a discussion of the reasonability of the fat in the tails for each and every risk.

RISKVIEWS believes that the COR can provide a basis for that discussion. It can be like the Richter scale for earthquakes or the Saffir-Simpson scale for hurricanes. Even though people in general do not know the science underlying either scale, they do believe that they understand what the scale means in terms of severity of experience. With exposure, the COR can take that place for risk models.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, Riskviews told of a scale, a Coefficient of Riskiness.  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name calling stage to an actual discussion about Riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including size of potential loss) and where the reasons for the losses for each individual risk were statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number is the average or what number is the standard deviation.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
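As a sanity check on the “average plus three times volatility” idea, here is a small simulation sketch of a large pool of similar, independent risks. This is my own illustration: the pool size, claim probability, unit severity and the 83% rule of thumb echoed in the comments are all assumptions, not figures from the post.

```python
import numpy as np

rng = np.random.default_rng(7)

n_risks, n_sims = 10_000, 50_000
p_loss = 0.05                      # hypothetical chance each small risk has a claim

# Total annual loss for the pool in each simulation (one unit of loss per claim).
totals = rng.binomial(n_risks, p_loss, size=n_sims).astype(float)

mean, vol = totals.mean(), totals.std()
q999 = np.quantile(totals, 0.999)              # "maximum likely loss" at 99.9%
cor = (q999 - mean) / vol
print(f"COR ~ {cor:.2f}")                      # close to 3 for a large, homogeneous pool

mll = mean + 3 * vol                           # average plus three times volatility
viable = 0.83 * mll                            # the post's rule of thumb for a 1-in-200 level
print(f"MLL = {mll:,.0f}, Viable (1-in-200) ~ {viable:,.0f}")
```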

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.

 

[Plot: COR vs. number of risks in the pool, for a risk with a 1-in-1000 average incidence]

 

There is a different story for a risk with average incidence of 1 in 100.  COR is less than 6 for a pool as small as 25 exposures and the COR gets down to as low as 3.5.

[Plot: COR vs. number of risks in the pool, for a risk with a 1-in-100 average incidence]

In producing these graphs, RISKVIEWS notices that COR is largely a function of the number of expected claims.  So the following graph shows COR plotted against the number of expected claims for low expected numbers of claims.  (A high expected number of claims produces a COR that is very close to 3, so those cases are not very interesting.)

[Plot: COR vs. expected number of claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where the losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses is in the range of 0.2 per year, and for expected losses below 0.1 the COR is higher still.

[Plot: COR for risks with expected losses well below one per year]
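The COR figures in these plots can be approximated directly from the binomial distribution. The sketch below is my own reconstruction, assuming one unit of loss per claim and a 99.9th percentile quantile convention; the exact figures in the original graphs may differ slightly.

```python
from scipy.stats import binom

def cor(n_risks, p_incidence, level=0.999):
    """COR = (99.9th percentile - mean) / standard deviation for a binomial pool."""
    dist = binom(n_risks, p_incidence)
    return (dist.ppf(level) - dist.mean()) / dist.std()

print(round(cor(50, 0.001), 1))    # roughly 9 for a pool of 50 risks at 1-in-1000
print(round(cor(100, 0.001), 1))   # roughly 6 for a pool of 100 risks at 1-in-1000
print(round(cor(25, 0.01), 1))     # under 6 for a pool of 25 risks at 1-in-100
```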

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with expected losses less frequent than one per year can have much fatter tails, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” they mean to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes and the tail, particularly the adverse tail of the distribution where the risk calculations actually take place and where there is rarely if ever any data.

There are only a few possibilities for these subjective decisions, in broad terms…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly higher than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern for risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called a “bubble”.  When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this “Fat Tails”.  It is seen to be an unusual model only because it was so common to use the zero contagion model with the simpler maths.

RISKVIEWS suggests that when communicating that the  approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified and an assumption of zero contagion should be accompanied with a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans and therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1 in 1000 year loss is about 3 standard deviations worse than the mean.  So if we use that 1 in 1000 year loss as a multiple of standard deviations, we can easily talk about a simple scale for riskiness of a model:

[Image: simple riskiness scale based on multiples of the standard deviation]

So in the end the choice is to insert an opinion about the steepness of the ramp up between the mean and an extreme loss, in terms of multiples of the standard deviation, where standard deviation is a measure of the average spread of the observed data.  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with a Richter Scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.

*                  *                *               *             *                *

So now we go to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where we for some reason do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple of standard deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

What if there are no clocks?

March 17, 2014

RISKVIEWS recently told someone that the idea of a Risk Control Cycle was quite simple.  In fact, it is just as simple as making an appointment and keeping it.

But what if you are in a culture that has no clocks?

[Photo: railway platform clock]

Imagine how difficult the conversation might be about an appointment for 9:25 tomorrow morning.

That is the situation for companies that want to learn about adopting a risk control cycle but have no tradition of measuring risk.

The companies who have dutifully followed a regulatory imperative to install a capital model may think that they have a risk measurement system.  But that system is like a clock that they only look at once per month.  Not very helpful for making and keeping appointments.

Risk control needs to be done with risk measures that are available frequently.  That probably will mean that the risk measure that is most useful for risk control might not be as spectacularly accurate as a capital model.  The risk control process needs a quick measure of risk that can be available every week or at least every month.  Information at the speed of your business decision making process.

But none of us are really in a culture where there are no clocks.  Instead, we are in cultures where we choose not to put any clocks up on the walls.  We choose not to set times for our appointments.

I found that if you have a goal, that you might not reach it. But if you don’t have one, then you are never disappointed. And I gotta tell ya… it feels phenomenal.

from the movie Dodgeball

Stress to reduce Distress

February 12, 2014

Distress lurks. Just out of sight. Perhaps around a corner, perhaps down the road just past your view.
For some their rule is “out of sight, out of mind”. For them, worry and preparation can start when, and if, distress comes into sight.

But risk managers see it as our jobs to look for and prepare for distress. Whether it is in sight or not. Especially because some sorts of distress come on so very quickly and some methods of mitigation take effect so slowly.
Stress testing is one of the most effective tools, both for imagining the potential magnitude of distresses and, almost more importantly, for developing compelling stories to communicate about that distress potential.

This week, Willis Wire is featuring a piece about Stress Testing in the “ERM Practices” series:

ERM Practices:  Stress Testing

RISKVIEWS has featured many posts related to Stress Testing:

RISKVIEWS Archive of Posts related to Stress Testing

You need good Risk Sense to run an insurance company

January 16, 2014

It seems to happen all too frequently.

A company experiences a bad loss and the response of management is that they were not aware that the company had such a risk exposure.

For an insurance company, that response just isn’t good enough.  And most of the companies where management has given that sort of answer were not insurers.

At an insurance company, managers all need to have a good Risk Sense.

Risk Sense is the ability to form a good first order estimate of the riskiness of all of their activities.

Some of the companies who have resisted spending the time, effort and money to build good risk models are the companies whose management already has an excellent Risk Sense.  Management does not see the return for spending all that is required to get what is usually just the second digit.

By the way, if you think that your risk model provides reliable information beyond that second digit, you need to spend more time on model validation.

To have a reliable Risk Sense, you need to have reliable risk selection and risk mitigation processes.  You need to have some fundamental understanding of the risks that are out there in the areas in which you do business.  You also need to  be constantly vigilant about changes to the risk environment that will require you to adjust your perception of risk as well as your risk selection and mitigation practices.

Risk Sense is not at all a “gut feel” for the risk.  It is instead more of a refined heuristic.  (See Evolution of Thinking.)  The person with Risk Sense has the experience and knowledge to fairly accurately assess risk based upon the few really important facts about the risks that they need to get to a conclusion.

The company that needs a model to do basic risk assessment, i.e. that does not have executives who have a Risk Sense, can be highly fragile.  That is because risk models can be highly fragile.  Good model building actually requires plenty of risk sense.

The JP Morgan Chase experiences with the “London Whale” were a case of little Risk Sense and staff who exploited that weakness to try to get away with excessive risk taking.  They relied completely on a model to tell them how much risk that they were taking.  No one looked at the volume of activity and had a usual way to create a good first order estimate of the risk.  The model that they were using was either inaccurate for the actual situation that they were faced with or else it was itself gamed.

A risk management system does not need to work quite so hard when executives have a reliable Risk Sense.  If an executive can look at an activity report and apply their well honed risk heuristics, they can be immediately informed of whether there is an inappropriate risk build up or not.  They need control processes that will make sure that the risk per unit of activity is within regular bounds.  If they start to have approved activities that involve situations with much higher levels of risk per unit of activity, then their activity reports need to separate out the more risky activities.

Models are too fragile to be the primary guide to the level of risk.  Risk taking organizations like insurers need Risk Sense.

Can’t skip measuring Risk and still call it ERM

January 15, 2014

Many insurers are pushing ahead with ERM at the urging of new executives, boards, rating agencies and regulators.  Few of those firms who have resisted ERM for many years have a history of measuring most of their risks.

But ERM is not one of those liberal arts like the study of English Literature.  In Eng Lit, you may set up literature classification schemes, read materials, organize discussion groups and write papers.  ERM can have those elements, but the heart of ERM is Risk Measurement.  Comparing those risk measures to expectations and to prior period measures.  If a company does not have Risk Measurement, then they do not have ERM.

That is the tough side of this discussion, the other side is that there are many ways to measure risks and most companies can implement several of them for each risk without the need for massive projects.

Here are a few of those measures, listed in order of increasing sophistication:

1. Risk Guesses (AKA Qualitative Risk Assessment)
– Guesses, feelings
– Behavioral Economics Biases
2. Key Risk Indicators (KRI)
– Risk is likely to be similar to …
3. Standard Factors
– AM Best,  S&P, RBC
4. Historical Analysis
– Worst loss in past 10 years as pct of base (premiums, assets); see the sketch after this list.
5. Stress Tests
– Potential loss from historical or hypothetical scenario
6. Risk Models
– If the future is like the past …
– Or if the future is different from the past in this way …
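As a sketch of measure 4 above (historical analysis), here is one way the “worst loss in the past 10 years as a percentage of premium” calculation might look. The loss and premium history is entirely made up for illustration.

```python
# Hypothetical 10-year history (in $ millions) for one line of business.
losses   = [42, 55, 38, 120, 47, 61, 44, 95, 58, 50]
premiums = [100, 105, 110, 112, 115, 120, 124, 130, 135, 140]

loss_ratios = [l / p for l, p in zip(losses, premiums)]
worst_ratio = max(loss_ratios)             # worst observed year as a pct of premium

# Simple historical risk measure: worst observed loss ratio applied to the current book.
current_premium = 145
historical_risk_measure = worst_ratio * current_premium
print(f"Worst loss ratio: {worst_ratio:.0%}; implied loss on current book: {historical_risk_measure:.0f}")
```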

More discussion of Risk Measurement on WillisWire:

     Part 2 of a 14 part series
And on RISKVIEWS:
Risk Assessment –  55 other posts relating to risk measurement and risk assessment.

Reviewing the Risk Environment

January 14, 2014

The new US Actuarial Standards of Practice 46 and 47 suggest that the actuary needs to assess the risk environment as a part of risk evaluation and risk treatment professional services. The result of that evaluation should be considered in that work.

An assessment of the risk environment would probably be a good idea, even if the risk manager is not a US actuary.

But what does it mean to assess the risk environment?  One example of a risk environment assessment can be found on the OCC website.  They prepare a report titled “Semiannual Risk Perspective”.

This report could be a major source of information, especially for Life Insurers, about the risk environment.  And for Non-Life carriers, the outline can be a good road map of the sorts of things to review regarding their risk environment.

Part I: Operating Environment

  • Slow U.S. Economic Growth Weighs on Labor Market
  • Sluggish European Growth Also Likely to Weigh on U.S. Economic Growth in Near Term
  • Treasury Yields Remain Historically Low
  • Housing Metrics Improved
  • Commercial Real Estate Vacancy Recovery Uneven Across Property Types

Part II: Condition and Performance of Banks.

A. Profitability and Revenues: Improving Slowly

  • Profitability Increasing
  • Return on Equity Improving, Led by Larger Banks
  • Fewer Banks Report Losses
  • Noninterest Income Improving for Large and Small Banks
  • Trading Revenues Return to Pre-Crisis Levels
  • Counterparty Credit Exposure on Derivatives Continues to Decline
  • Low Market Volatility May Understate Risk
  • Net Interest Margin Compression Continues

B. Loan Growth Challenges

  • Total Loan Growth: C&I Driven at Large Banks; Regionally Uneven for Small Banks
  • Commercial Loan Growth Led by Finance and Insurance, Real Estate, and Energy
  • Residential Mortgage Runoff Continues, Offsetting Rising Demand for Auto and Student Loans

C. Credit Quality: Continued Improvement, Although Residential Real Estate Lags

  • Charge-Off Rates for Most Loan Types Drop Below Long-Term Averages
  • Shared National Credit Review: Adversely Rated Credits Still Above Average Levels
  • Significant Leveraged Loan Issuance Accompanied by Weaker Underwriting
  • New Issuance Covenant-Lite Leveraged Loan Volume Surges
  • Commercial Loan Underwriting Standards Easing
  • Mortgage Delinquencies Declining, but Remain Elevated
  • Auto Lending Terms Extending

Part III: Funding, Liquidity, and Interest Rate Risk

  • Retention Rate of Post-Crisis Core Deposit Growth Remains Uncertain
  • Small Banks’ Investment Portfolios Concentrated in Mortgage Securities
  • Commercial Banks Increasing Economic Value of Equity Risk

Part IV: Elevated Risk Metrics

  • VIX Index Signals Low Volatility
  • Bond Volatility Rising but Near Long-Term Average
  • Financials’ Share of the S&P 500 Rising but Remains Below Average
  • Home Prices Rising
  • Commercial Loan Delinquencies and Losses Decline to Near or Below Average
  • Credit Card Delinquencies and Losses Near Cyclical Lows

Part V: Regulatory Actions

  • Banks Rated 4 or 5 Continue to Decline
  • Matters Requiring Attention Gradually Decline
  • Enforcement Actions Against Banks Slow in 2013

For those who need a broader perspective, the IMF regularly publishes a report called the World Economic Outlook.  That report is much longer but more specifically focused on the general level of economic activity.  Here are the main chapter headings:

Chapter 1. Global Prospects and Policies

Chapter 2. Country and Regional Perspectives

Chapter 3. Dancing Together? Spillovers, Common Shocks, and the Role of Financial and Trade Linkages

Chapter 4. The Yin and Yang of Capital Flow Management: Balancing Capital Inflows with Capital outflows

The IMF report also includes forecasts, such as the following:

[IMF forecast table]

 

Provisioning – Packing for your trip into the future

April 26, 2013

There are two levels of provisioning for an insurer.  Reserves and Risk Capital.  The two are intimately related.  In fact, in some cases, insurers will spend more time and care in determining the correct number for the sum of the two, called Total Asset Requirement (TAR) by some.

Insurers need a realistic picture of future obligations long before the future is completely clear. This is a key part of the feedback mechanism.  The results of the first year of business are the most important indication of business success for non-life insurance.  That view of results depends largely upon the integrity of the reserve value.  This feedback information affects performance evaluation, pricing for the next year, risk analysis, capital adequacy analysis and capital allocation.

The other part of provisioning is risk capital.  Insurers also need to hold capital for less likely swings in potential losses.  This risk capital is the buffer that provides for the payment of policyholder claims in a very high proportion of imagined circumstances.  The insurance marketplace, the rating agencies and insurance regulatory bodies all insist that the insurer holds a high buffer for this purpose.

In addition, many valuable insights into the insurance business can be gained from careful analysis of the data that is input to the provisioning process for both levels of provisioning.

However, reserves are most often set to be consistent with considerations (the premiums charged).  Swings of adequate and inadequate pricing are tightly linked to swings in reserves.  When reserves are optimistically set, capital levels may reflect the same bias.  This means that inadequate prices can ripple through to cause deferred recognition of actual claims costs as well as under provisioning at both levels.  This is more evidence that consideration is key to risk management.

There is often pressure for small and smooth changes to reserves and risk capital but information flows and analysis provide jumps in insights both as to expectations for emerging losses as well as in terms of methodologies for estimation of reserves and capital.  The business pressures may threaten to overwhelm the best analysis efforts here.  The analytical team that prepares the reserves and capital estimates needs to be aware of and be prepared for this eventuality.  One good way to prepare for this is to make sure that management and the board are fully aware of the weaknesses of the modeling approach and so are more prepared for the inevitable model corrections.

Insurers need to have a validation process to make sure that the sum of reserves and capital is an amount that provides the degree of security that is sought.  Modelers must allow for variations in risk environment as well as the impact of risk profile, financial security and risk management systems of the insurer in considering the risk capital amount.  Changes in any of those elements may cause abrupt shifts in the amount of capital needed.

The Total Asset Requirement should be determined without regard to where the reserves have been set so that risk capital level does not double up on redundancy or implicitly affirm inadequacy of reserves.

The capital determined through the Provisioning process will usually be the key element to the Risk Portfolio process.  That means that accuracy in the sub totals within the models is just as important as the overall total.  The common practice of tolerating offsetting inadequacies in the models may totally distort company strategic decision making.

This is one of the seven ERM Principles for Insurers.

Spreadsheets are not the problem

February 18, 2013

The media have latched on to a story.

Microsoft’s Excel Might Be The Most Dangerous Software On The Planet

The culprit in the 2012 JP Morgan trading loss has been exposed.  Spreadsheets are to blame!

The only problem with this answer is that it is simply incorrect.  It is blaming the bad result on the last step in the process.  Like the announcers for a football game who blame the last play of the game for the outcome.  It really wasn’t missing that one last ditch scoring effort that made the difference.  It was how the two teams played the whole game.

And for situations like the JP Morgan trading loss, the spreadsheet was one of the last steps in the process.

But the fundamental problem was that they were allowing someone in the bank to take very large risks that no one could understand directly.  Risks for which no one had a rule of thumb telling them that they were nearing a situation where, on any bad day, they could lose billions.

That is pretty fundamental to a risk taking business.  To understand your risks.  And if you have no idea whatsoever of how much risk that you are taking without running that position through a model, then you are in big trouble.

That does not mean that models shouldn’t be used to evaluate risk.  The problem is the need to use a model in the heat of battle, when there is no time to check for the kinds of mistakes that tripped up JP Morgan.  The models should be used in advance of going to market and rules of thumb, or heuristics for those who like the academic labels, need to be developed.

The model should be a tool for building understanding of the business, not as a substitute for understanding the business.

Humans have developed very powerful skills to work with heuristics over tens of thousands of years.  Models should feed into that capability, not be used to totally override it.

Chances are that the traders at JP Morgan did have heuristics for the risk and knew that they were arbitraging their own risk management process.  They may not have known why their gut told them that there was more risk than the model showed, but they are likely to have known that there was more risk there.

The risk managers are the ones who most need to have those heuristics.  And management needs to set down clear rules, for the situations where the risk models are later found to be in error, that protect the bank rather than the traders’ bonuses.

No, spreadsheets are not the problem.

The problem is the idea that you can be in a business that neither top management nor risk management has any “feel” for.

Is this just MATH that you do to make yourself feel better?

November 19, 2012

Megyn Kelly asked that of Karl Rove on Fox TV on election night about his prediction of Ohio voting.

But does most risk analysis fall into this category as well?

How many companies funded the development of economic capital models with the express intent of achieving lower capital requirements?  How many of those companies encouraged the use of “MATH that you do to make yourself feel better” (MTYDTMYFB)?

Model validation is now one of the hot topics in Risk model land.  Why? Is it because modelers stopped checking when they got the answer that was wanted, rather than working at it until they got it right?  If the latter were the case, then there would be zero additional work to do to validate a model.  That validation work would already be done.  MTYDTMYFB

The Use Test is quite a challenge for many.  First part of the challenge is to produce an example of a situation where they did modeling of a major risk decision before that decision was finalized.  Or are the models only brought into play after all of the decisions are made?  MTYDTMYFB

There are many other examples of MTYDTMYFB.   Many years ago when computers were relatively new and dot matrix printers were the sign of high tech, it was possible to write a program to print out a table of numbers that had been developed somewhere else.  The fact that they appeared on 11 x 14 computer paper from a dot matrix printer gave those numbers a sheen of credibility.  Some managers were willing to believe then that computers were infallible.

But in fact, computers, and math, are about as infallible as gravity and about as selective.  Gravity will be a big help if you need to get something from a higher place to a lower place.  But it will be quite a hindrance if you need to do the opposite.  Math and computers are quite good at some things, like analyzing large amounts of data and finding patterns that may or may not really exist.

Math and computers need to be used with judgement, skepticism and experience.  Especially when approaching the topic of risk.

Statistics works like gravity helping us take things downhill when you are seeking to estimate the most likely value of some uncertain event.  That is because each additional piece of data helps you to hone in on the average of the distribution of possibilities.  Your confidence in your prediction of the most likely value should improve.

But when you are looking at risk, you are usually looking for an estimate for extremely unlikely adverse results.  The principles of statistics are just like the effect of gravity on moving heavy things uphill.  They work against you.

Take correlation, for example.  The chart above can be easily reproduced by anyone with a spreadsheet program.  RISKVIEWS simply created two columns of random numbers that each contained 1000 numbers.  The correlation of these two series for all 1000 numbers is zero to several decimal places.  This chart is created by measuring the correlation of subsets of that 1000 that contained 10 values.
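Here is a rough sketch of that experiment: two independent random series of 1,000 values, cut into 100 subsets of 10, with the correlation measured in each subset. This is my own reconstruction of what the post describes, not the original spreadsheet, so the exact range of results will differ.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)

print(f"Full-series correlation: {np.corrcoef(x, y)[0, 1]:.3f}")   # essentially zero

# Correlations of 100 non-overlapping subsets of 10 values each.
subset_corrs = [np.corrcoef(x[i:i+10], y[i:i+10])[0, 1] for i in range(0, 1000, 10)]
print(f"Subset correlations range from {min(subset_corrs):.2f} to {max(subset_corrs):.2f}")
# With only 10 observations, "correlations" of +/- 0.5 or more can easily show up,
# even though the underlying series are completely independent.
```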

What this shows is how easy it is to get any answer that you want.  MTYDTMYFB

Unintended Consequences – Distortion of Decisions

October 7, 2012

Central bankers have tools to help the economy, but for the most part, those tools all have the effect of lowering interest rates.

But there are consequences of overriding the market to change the price of something.  The consequences are that every decision that uses the information from the affected market prices will be distorted.

Interest rates are a price for deferral of receiving cash.  Low interest rates signal that there is very little risk to deferral of receiving cash.  So one only has to pay a little extra to pay later rather than now.

This is helpful in stimulating consumption.  People without the money right now can promise to pay later with low penalty for the deferral.

But is the risk from the deferral really lower?  The interest rates are very low because the central bank is overwhelming the market demand.  Not because anyone really believes that deferral of receipt of cash is low risk.

But anyone who simply uses the market interest rates is having their decision distorted.  They are open to taking deferral risk without expecting to be reasonably compensated for that risk.

To purists who believe that the only usable value is the market price, this is the only real information.

But if you want to make good decisions about transactions that stretch out over a long time, you might want to consider making your own adjustment for the risk of deferral.

Where is the Metric for Diversity?

June 18, 2012

“What gets measured, gets managed.” – Peter Drucker


It seems that while diversification is widely touted as the fundamental principle behind insurance and behind risk management in general, there is no general measure of diversity. So based upon Drucker’s rule of thumb RISKVIEWS would say that we all fail to manage diversity.

A measure of diversity would tell us when we take more similar risks and when we are taking more distinct risks.  But we do not even look.

This may well be another part of good financial management that has been stolen by the presumptions of financial economics.  Financial economics PRESUMES that we all have full diversification.  It tells us that we cannot get paid for our lack of diversification.

But those presumptions are untested and untestable, at least as long as we fail to even measure diversity.

Correlation is the best measure that we have and it is barely used.  For the most part, correlation is used mainly to look at macro portfolio effects on Economic Capital Models.  And it is not a particularly good measure of diversity anyway.  It actually only measures a certain type of statistical comovement of data.  For example, below is a chart that shows that equity market comovement is increasing.

But have the activities of the largest companies in those markets been converging?  Or is this picture just an artifact of the continuing Euro crisis? In either case, if we were looking at a measure of diversity, rather than just comovement, we might have an idea whether this chart makes any sense or not.

Many believe that they are protected by indexing.  That an index is automatically diverse.  But there is little guarantee of that.  Particularly for a market-value weighted index.  In fact, a market-value weighted index is almost guaranteed to have less diversity just when it is needed most.

For a clear indication of that, look at the TSX index during the internet bubble, when Nortel represented 35% of the index!  Concentration increases risk.  In this case, the results were disastrous for any indexers. While Nortel stock rose in the Dot Com mania, buyers of the TSX index were holding a larger and larger fraction of their investment in a single stock.
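As a very rough sketch of the kind of thing a diversity metric might track (my own illustration, not a measure proposed in the post), here is a concentration check on a hypothetical set of index weights, using the inverse Herfindahl index as an “effective number of holdings”. The Nortel-like 35% weight is an assumption chosen only to echo the example above.

```python
def effective_holdings(weights):
    """Inverse Herfindahl index: how many equally weighted names the index behaves like."""
    return 1.0 / sum(w * w for w in weights)

# Hypothetical index: one dominant name at 35%, with the remaining 65%
# spread evenly over 65 other names, versus a fully balanced 66-name index.
concentrated = [0.35] + [0.65 / 65] * 65
balanced     = [1.0 / 66] * 66

print(round(effective_holdings(concentrated), 1))   # ~8 effective holdings
print(round(effective_holdings(balanced), 1))       # 66 effective holdings
```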

We badly need a metric for diversity.

 

Risk Evaluation Standard

April 4, 2012

The US Actuarial Standards Board (ASB) recently approved the proposed ASOP, Risk Evaluation in Enterprise Risk Management, as an exposure draft.

In March 2011, discussion drafts on the topics of risk evaluation and risk treatment were issued. The ERM Task Force of the ASB reviewed the comments received and based on those comments, began work on the development of exposure drafts in those areas.

The proposed ASOP on risk evaluation provides guidance to actuaries when performing professional services with respect to risk evaluation systems, including designing, implementing, using, and reviewing those systems. The comment deadline for the exposure draft is June 30, 2012. An exposure draft on risk treatment is scheduled to be reviewed in summer 2012.

ASB Approves ERM Risk Evaluation Exposure Draft: Risk Evaluation in Enterprise Risk Management.  Comment deadline: June 30, 2012

Who are we kidding?

September 14, 2011

When we say that we are “measuring” the 1/200 risk of loss of our activities?
For most risks, we do not even have one observation of a 200 year period.
What we have instead is an extrapolation based upon an assumption that there is a mathematical formula that relates the 1/200 year loss to something that we do have confidence in.

Let’s look at some numbers.  I am testing the idea that we might be able to know what the 1/10 loss would be if we have 50 years of observations.  Our process is to rank the 50 years from best to worst and look at the 45th loss (the loss that was exceeded in 5 of the 50 years).  We find that loss is $10 million.

Now if we build a model where our probability of losing $10 million or more is 10% and we run that model 100 times, we get a histogram like this:

So in this test, with an underlying probability of 10%, the frequency of 50 year periods with 5 observations of losses of $10 million or larger is only 22%!

When I repeat the test with a likelihood assumption of 15% or of 6.67%, I get exactly 5 observations in about 10% of the simulated 50 year periods in each case.

So given 50 years of observations and 5 occurrences, it seems that it is quite possible that the underlying likelihood might be 50% higher or 1/3 lower.
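A quick check of that experiment, shown below as my own sketch: rather than re-running the post's 100 simulations, it uses the exact binomial probabilities of seeing exactly 5 large losses in 50 years under each assumed likelihood, so the figures differ a little from the simulated ones quoted above.

```python
from scipy.stats import binom

# Probability of seeing exactly 5 large losses in 50 years, under each assumed likelihood.
for p in (0.10, 0.15, 1 / 15):
    print(f"likelihood {p:.1%}: P(exactly 5 in 50 years) = {binom.pmf(5, 50, p):.0%}")
# All three assumptions put roughly 10-20% probability on the observed outcome,
# so 5 losses in 50 years cannot distinguish between them with any confidence.
```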

Try to imagine the math of getting a 1/200 loss pick correct.  What might the confidence interval be around that number?

Who are we kidding?

Actuarial Risk Management Volunteer Opportunity

August 11, 2011

Actuarial Review of Enterprise Risk Management Practices –

A Working Group formed by The Enterprise and Financial Risks Committee of the IAA has started working on a white paper to be titled: “Actuarial Review of Enterprise Risk Management Practices”.  We are seeking volunteers to assist with writing, editing and research.

This project would set out a systematic process for actuaries to use when evaluating risk management practices.  Actuaries in Australia are now called on to certify the risk management practices of insurers, and the initial reaction of some actuaries was that they were somewhat unprepared to do that.  This project would produce a document that could be used by actuaries and could be the basis for actuaries to propose to take on a similar role in other parts of the world.

Recent events have shown that otherwise comparable businesses can differ greatly in the effectiveness of their risk management practices. Many of these differences appear to be qualitative in character and centered on management processes. Actuaries can take a role to offer opinions on process quality and on possible avenues for improvement.

More specifically, recent events seem likely to increase emphasis on what the supervisory community calls Pillar 2 of prudential supervision – the review of risk and solvency governance. In Solvency II in Europe, a hot topic is the envisaged requirement for an ‘Own Risk and Solvency Assessment’ by firms, and many are keen to see actuaries have a significant role in advising on this. The International Association of Insurance Supervisors has taken up the ORSA requirement as an Insurance Core Principle and encourages all regulators to adopt it as part of their regulatory structure.  It seems an opportune time to pool knowledge.

The plan is to write the paper over the next six months and to spend another six months on comment & exposure prior to finalization.  If we get enough volunteers the workload for each will be small.   This project is being performed on a wiki which allows many people to contribute from all over the world.  Each volunteer can make as large or as small a contribution as their experience and energy allows.  People with low experience but high energy are welcome as well as people with high experience.

A similar working group recently completed a white paper titled the CARE report.  http://www.actuaries.org/CTTEES_FINRISKS/Documents/CARE_EN.pdf  You can see what the product of this sort of effort looks like.

Further information is available from Mei Dong, or David Ingram

==============================================================

David Ingram, CERA, FRM, PRM
+1 212 915 8039
(daveingram@optonline.net )

FROM 2009

ERM BOOKS – Ongoing Project – Volunteers still needed

A small amount of development work has been done to create the framework for a global resource for ERM Readings and References.

http://ermbooks.wordpress.com

Volunteers are needed to help to make this into a real resource.  Over 200 books, articles and papers have been identified as possible resources ( http://ermbooks.wordpress.com/lists-of-books/ )
Posts to this website give a one paragraph summary of a resource and identify it within several classification categories.  15 examples of posts with descriptions and categorizations can be found on the site.
Volunteers are needed to (a) identify additional resources and (b) write 1 paragraph descriptions and identify classifications.
If possible, we are hoping that this site will ultimately contain information on the reading materials for all of the global CERA educational programs.  So help from students and/or people who are developing CERA reading lists is solicited.
Participants will be given author access to the ermbooks site.  Registration with wordpress at www.wordpress.com is needed prior to getting that access.
Please contact Dave Ingram if you are interested in helping with this project.


Frequency vs. Likelihood

June 26, 2011

Much risk management literature talks about identifying the frequency and severity of risks.

There are several issues with this suggestion.  It is a fairly confused way of saying that there needs to be a probabilistic measure of the risk.

However, most classes of risks – things like market, credit, natural catastrophe, legal, or data security will not have a single pair of numbers that represent them.  Instead they will have a series of pairs of probabilities and loss amounts.

The word frequency adds another confusion.  Frequency refers to observations.  It is a backwards looking approach to the risk.  What is really needed is likelihood – a forward looking probability.

For some risks, all we will ever have is an ever changing frequency.

So what do we do?  With some data in hand and a view of the underlying nature of the risk, we form a likelihood assumption.  With that assumption, we can then develop an actual gain and loss distribution that gives our best picture of the risk reward trade-offs.

For example, the following are three sets of observations of some phenomenon.

In this example, the 1s represent the incidence of major loss experiences.  There are at least four ways that these observations might be interpreted.

  1. One analyst might say that the average of all 60 observations is 2 (or a 10% frequency) so that is what they will use to project the forward likelihood of this problem.
  2. Another analyst might say that they want to be sure that they account for the worst case, so they will focus on the first set of observations and use a 15% likelihood assumption.
  3. A third analyst will focus on the trend and make a likelihood assumption below 5%.
  4. The fourth analyst will say that there is just not enough consistent information to form a reliable likelihood assumption.

Then the next 20 observations come up all zeros.  How do the four analysts update their likelihood assumptions?

In fact, this illustration was developed with random numbers generated from a binomial distribution with a 5% likelihood.

The math to determine the probability of each possible number of loss observations from 20 trials with a likelihood of 5% is simple:

      • 0 – 36%
      • 1 – 38%
      • 2 – 19%
      • 3 –  6%
      • 4 –  1%

To be responsible in setting your likelihood assumptions, you should be fully aware of the actual distribution of possibilities behind the frequency observations that you have to work with. So the first set of observations, with 3 loss events, had a 6% likelihood of occurring; the second, with 2, had a 19% likelihood; and the third, with 1, had a 38% likelihood.

That is when we know the actual likelihood.  Usually we do not.  But you can build this sort of table for each possible likelihood assumption.

Here we actually had 60 observations.  The same sort of table can be built for the 60 trials under different assumptions of likelihood, as sketched below:
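
To make the arithmetic concrete, here is a minimal sketch in Python (using scipy, which the post does not mention) that reproduces the 20-trial table above and builds the same sort of table for the 60 trials under several assumed likelihoods. The 6-loss total is the count implied by the 10% frequency cited earlier; all variable names are illustrative.

```python
# Minimal sketch: reproduce the binomial probability tables discussed above.
# Assumes loss events follow a Binomial(n, p) process, as the post states;
# scipy and all variable names are illustrative choices, not from the post.
from scipy.stats import binom

def frequency_table(n_trials, likelihood, max_count=4):
    """Probability of observing 0..max_count loss events in n_trials."""
    return {k: binom.pmf(k, n_trials, likelihood) for k in range(max_count + 1)}

# Table for 20 trials at a 5% likelihood (the 36% / 38% / 19% / 6% / 1% list above)
for k, p in frequency_table(20, 0.05).items():
    print(f"{k} losses in 20 trials: {p:.0%}")

# The same sort of table for 60 trials: probability of the 6 observed losses
# (the 10% frequency cited above) under several assumed likelihoods
for likelihood in (0.025, 0.05, 0.10, 0.15):
    p6 = binom.pmf(6, 60, likelihood)
    print(f"P(6 losses in 60 trials | likelihood {likelihood:.1%}) = {p6:.1%}")
```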

This type of thinking will only make sense to the first analyst above.  The other three will not be swayed.  But for that first analyst, some more detailed reflection can help them to better understand that their assumptions of likelihood are just that – assumptions, not facts.

Echo Chamber Risk Models

June 12, 2011

The dilemma is a classic – in order for a risk model to be credible, it must be an Echo Chamber – it must reflect the starting prejudices of management. But to be useful – and worth the time and effort of building it – it must provide some insights that management did not have before building the model.

The first thing that may be needed is to realize that the big risk model cannot be the only tool for risk management.  The Big Risk Model, also known as the Economic Capital Model, is NOT the Swiss Army Knife of risk management.  This Echo Chamber issue is only one reason why.

It is actually a good thing that the risk model reflects the beliefs of management and therefore gets credibility.  The model can then perform the one function that it is actually suited for.  That is to facilitate the process of looking at all of the risks of the firm on the same basis and to provide information about how those risks add up to make up the total risk of the firm.

That is very, very valuable to a risk management program that strives to be Enterprise-wide in scope.  The various risks of the firm can then be compared one to another.  The aggregation of risk can be explored.

All based on the views of management about the underlying characteristics of the risks. That functionality allows a quantum leap in the ability to understand and consistently manage the risks of the firm.

Before this capability existed, the risks of a firm were managed totally separately.  Some risks were highly restricted and others were allowed to grow in a mostly uncontrolled fashion.  With a credible risk model, management needs to face the inconsistencies embedded in the firm's historical risk management.

Some firms look into this mirror and see their problems and immediately make plans to rationalize their risk profile.  Others lash out at the model in a shoot the messenger fashion.  A few will claim that they are running an ERM program, but the new information about risk will result in absolutely no change in risk profile.

It is difficult to imagine that a firm that had no clear idea of aggregate risk and the relative size of the components thereof would find absolutely nothing that needs adjustment.  Often it is a lack of political will within the firm to act upon the new risk knowledge.

For example, when major insurers started to create the economic capital models in the early part of this century, many found that their equity risk exposure was very large compared to their other risks and to their business strategy of being an insurer rather than an equity mutual fund.  Some firms used this new information to help guide a divestiture of equity risk.  Others delayed and delayed even while saying that they had too much equity risk.  Those firms were politically unable to use the new risk information to reduce the equity position of the group.  More than one major business segment had heavy equity positions and they could not choose which to tell to reduce.  They also rejected the idea of reducing exposure through hedging, perhaps because there was a belief at the top of the firm that the extra return of equities was worth the extra risk.

This situation is not at all unique to equity risk.   Other firms had the same experience with Catastrophe risks, interest rate risks and Casualty risk concentrations.

A risk model that was not an Echo Chamber model would not have been any use at all in the situations above. The differences between management beliefs and the assumptions of a non-Echo Chamber model would result in it being left out of the discussion entirely.

Other methods, such as stress tests, can be used to bring in alternate views of the risks.

So an Echo Chamber is useful, but only if you are willing to listen to what you are saying.

Risk Assessment is always Opinion

June 10, 2011

Risk Assessment is most often done with very high tech models.

There is a cycle for risk models though.  The cycle starts with a simple model and progresses to ever more sophisticated models.  The ability to calculate risk at any time of the day or night becomes an achievable goal.

But just as the models get to be almost perfect, something often happens.  People start to doubt the model.  Then it shifts from sporadic doubt to rampant disbelief.  Then the process reaches its final stage and the model is totally ignored and abandoned.

What is the cause of that cycle?  It is caused by the fact that the process of modeling is always built on an opinion.  But as the model gets more and more sophisticated, the modelers forget the basic opinion.  They come to feel that the sophistication makes the model a machine that is capable of producing ultimate truth.

But folks who are not involved in the modeling, who are not drawn into the process of creating greater and greater refinement of the risk assessments, will judge the model by the degree to which it helps with managing the business – by the results of management judgments that are informed by the models.

The models will of course be perfectly fine when they deal with events that occur within one standard deviation of the mean.  Those events happen fairly frequently and there will be plenty of data to calibrate the frequency for those events.

But that is not where the real risk is located – within one standard deviation.

Real risk is most often found at least 2 standard deviations out.

Nassim Taleb has indicated that it is important to notice that the most significant risks are always out so far in the distribution that there is never enough data to properly calibrate the model.

But Taleb would only be correct if the important information about a risk is the PAST frequency.

That is not correct.  The important thing about risks is the FUTURE frequency.  The future frequency is unknowable.

But you can have an opinion about that frequency.

  1. Your opinion could be that the future will be just like the past.
  2. Your opinion could be that the future will be worse than the past.
  3. Your opinion could be that the future will be better than the past.
  4. Your opinion could be that you do not know the future.

You may form that opinion based on the opinion that seems to be implied by the market prices, or by listening to experts.

The folks with opinion 1 tend to build the models.  They can collect the data to calibrate their models.  But the idea that the future will be just like the past is simply their OPINION.  They do not know.

The folks with opinion 2 tend to try to avoid risks.  They do not need models to do that.

The folks with opinion 3 tend to take risks that they think are overpriced from the folks with opinions 1 & 2.  Models get in their way.

The folks with opinion 4 do not believe in models.

So the people who have opinion 1 look around and see that everyone who makes models believes that the future will be just like the past and they eventually come to believe that it is the TRUTH, not just an OPINION.

They come to believe that people with opinions 2,3,4 are all misguided.

But in fact, sometimes the future is riskier than the past.  Sometimes it is less risky.  And sometimes, it is just too uncertain to tell (like right now).

And sometimes the future is just like the past.  And the models work just fine.

A Cure for Overconfidence

May 30, 2011

“FACTS FROM THE INTERNET”

  • 86% of a group of college students say that they are better looking than their classmates
  • 19% of people think that they belong to the richest 1% of the population
  • 82% of people say they are in the top 30% of Safe Drivers
  • 80% of students think they will finish in the top half of their class
  • In a confidence-intervals task, where subjects had to judge quantities such as the total egg production of the U.S. or the total number of physicians and surgeons in the Boston Yellow Pages, they expected an error rate of 2% when their real error rate was 46%.
  • 68% of lawyers in civil cases believe that their side will win
  • 81% of new business owners think their business will succeed, but also say that 61% of the businesses like theirs will fail

But on the other hand,

  • A test of 25,000 predictions by weather forecasters found no overconfidence

We all know what is different about weather forecasters.  They make predictions regularly with confidence intervals attached AND they always get feedback about how good their forecasts actually were.

So the Overconfidence effect, which is seen by psychologists as one of the most reliable biases in decision making, is merely the effect of undertraining in developing opinions about confidence intervals.

This conclusion leads directly to a very important suggestion for risk managers.  Of course risk managers are trying to act like weather forecasters.  But they are often faced with an audience who are overconfident – they believe that their ability to manage the risks of the firm will result in much better outcomes than is actually likely.

But the example of weather forecasters seems to show that the ability to realistically forecast confidence intervals can be learned through a feedback process.  Risk managers should make sure that, in advance of every forecast period, the model's forecasts of frequency and severity of losses are widely known, and then, at the end of every forecast period, show how actual experience does or does not confirm the forecast.

Many risk models allow for a prediction of the likelihood of every single exact dollar gain or loss that is seen to be possible.  So at the end of each period, when the gain or loss for that period is known, the risk manager should make a very public review of the likelihoods that were predicted for the level of gain or loss that actually occurred.
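
As a sketch of that public review step, the snippet below shows one way to report the likelihood that the model assigned to a result at least as bad as the one that actually occurred. The simulated gain/loss distribution and the realized loss here are entirely made-up numbers, not anything from the post.

```python
# Sketch of the public review step: how likely did the model say a result this
# bad (or worse) was?  The simulated P&L and the realized loss are made up.
import numpy as np

def review_forecast(simulated_outcomes, actual_result):
    """Model-implied probability of a gain/loss as bad as or worse than actual."""
    sims = np.asarray(simulated_outcomes)
    return float(np.mean(sims <= actual_result))

rng = np.random.default_rng(0)
simulated = rng.normal(loc=50.0, scale=200.0, size=100_000)  # hypothetical P&L simulations
actual = -310.0                                              # hypothetical realized loss

print(f"The model assigned a {review_forecast(simulated, actual):.1%} chance "
      f"to a result this bad or worse.")
```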

This sort of process is performed by the cat modelers.  After every major storm, they go through a very public process of discovering what the model said was the likelihood of the size loss that the storm produced.

The final step is to decide whether or not to recalibrate the model as a result of the storm.

Overconfidence can be cured by experience.

Getting Independence Right

May 11, 2011

Independence of the risk function is very important.  But often, the wrong part of the risk function is made independent.

It is the RISK MEASUREMENT AND REPORTING part of the risk function that needs to be independent.  If this part of the risk function is not independent of the risk takers, then you have the Nick Leeson risk – the risk that once you start to lose money that you will delay reporting the bad news to give yourself a little more time to earn back the losses, or the Jérôme Kerviel risk that you will simply understate the risk of what you are doing to allow you to enhance return on risk calculations and avoid pesky risk limits.

When Risk Reporting is independent, then the risk reports are much less likely to be fudged in the favor of the risk takers.  They are much more likely to simply and factually report the risk positions.  Then the risk management system either reacts to the risk information or not, but at least it has the correct information to make the decision on whether to act or not.

Many discussions of risk management suggest that there needs to be independence between the risk taking and the entire risk management function.  This is a model for risk disaster, but a model that is very common in banking.  Under this type of independence there will be a steady war.  A war that the risk management folks are likely to lose.  The risk takers are in charge of making money and the independent risk management folks are in charge of preventing that.  The risk takers, since they bring in the bacon, will always be much more popular with management than the risk managers, who add to costs and detract from revenue.

Instead, the actual risk management needs to be totally integrated within the risk taking function.  This will be resisted by any risk takers who have had a free ride to date.  With integration, the risk takers can decide what would be the least destructive way to stay within their risk limits.  In a system of independent risk management, by contrast, the risk managers are responsible for monitoring limit breaches and taking actions to unwind over-limit situations, and in many cases there are quite heated arguments around those unwinding transactions.

Under the reporting only independence model, the risk taking area would have responsibility for taking the actions needed to stay within limits and resolving breaches to limits.  (Most often those breaches are not due to deliberate violations of limits, but to market movements that cause breaches to limits to grow out of previously hedged positions.)

Ultimately, it would be preferable if the risk taking area would totally own their limits and the process to stay within those limits.

However, if the risk measurement and reporting is independent, then limit breaches are reported, and the decision about what to do with any risk taking area that is not owning its limits becomes a top management decision, rather than a risk manager decision that sometimes gets countermanded by top management.

Where to Draw the Line

March 22, 2011

“The unprecedented scale of the earthquake and tsunami that struck Japan, frankly speaking, were among many things that happened that had not been anticipated under our disaster management contingency plans.”  Japanese Chief Cabinet Secretary Yukio Edano.

In the past 30 days, there have been 10 earthquakes of magnitude 6 or higher.  In the past 100 years, there have been over 80 earthquakes magnitude 8.0 or greater.  The Japanese are reputed to be the most prepared for earthquakes.  And also to experience the most earthquakes of any settled region on the globe.  By some counts, Japan experiences 10% of all earthquakes that are on land and 20% of all severe earthquakes.

But where should they, or anyone making risk management decisions, draw the line in preparation?

In other words, what amount of safety are you willing to pay for in advance, and for what magnitude of loss event are you willing to simply live with the consequences?

That amount is your risk tolerance.  You will do what you need to do to manage the risk – but only up to a certain point.

That is because too much security is too expensive, too disruptive.

You are willing to tolerate the larger loss events because you believe them to be sufficiently rare.

In New Zealand, that cost/risk trade-off thinking allowed them to set a standard for retrofitting existing structures at 1/3 of the standard for new buildings.  But they also allowed a 20-year transition.  Not as much of an issue now; many of the older buildings, at least in Christchurch, are gone.

But experience changes our view of frequency.  We actually change the loss distribution curve in our minds that is used for decision making.

Risk managers need to be aware of these shifts.  We need to recognize them.  We want to say that these shifts represent shifts in risk appetite.  But we need to also realize that they represent changes in risk perception.  When our models do not move as risk perception moves, the models lose fundamental credibility.

In addition, when modelers do what some of the cat modeling firms are doing right now – moving the model frequency when people’s risk perceptions are not moving at all – they also lose credibility.

So perhaps you want scientists and mathematicians creating the basic models, but someone who is familiar with the psychology of risk needs to learn an effective way to integrate those changes in risk perceptions (or lack thereof) with changes in models (or lack thereof).

The idea of moving risk appetite and tolerance up and down as management gets more or less comfortable with the model estimations of risk might work.  But you are still then left with the issue of model credibility.

What is really needed is a way to combine the science/math with the psychology.

Market consistent models come the closest to accomplishing that.  The pure math/science folks see the herding aspect of market psychology as a miscalibration of the model.  But they are just misunderstanding what is being done.  What is needed is an ability to create adjustments to risk calculations that are applied to non-traded risks that allow for the combination of science & math analysis of the risk with the emotional component.

Then the models will accurately reflect how and where management wants to draw the line.

Assessing Risk Capacity Utilization

March 7, 2011

by Jean-Pierre Berliet

In practice, the risk tolerance constraints (i.e. maximum expected default probability at the company’s target rating) of rating agencies determine the minimum amount of capital that a company needs to secure the rating it needs to execute its strategic plan on a going concern basis. When a company’s available capital is higher than this minimum, the company uses a fraction of its risk capacity, equal to the ratio of this minimum amount to available capital. If available capital is lower than this capital requirement, the ratio becomes greater than one; the company is overextended and needs to take corrective action.

In this discussion, we are deliberately refraining from using “economic capital” as a measure of capital utilization, capital availability, risk capacity or risk capacity utilization because the term “economic” can have several distinct meanings that create confusion. We focus on measures of capital and risk capacity that can provide robust guideposts for making decisions about management and deployment of a company’s risk capacity, in relation to its available capital.

The figure below displays how the proposed risk capacity and capital concepts interact with each other and provide a framework for an insurance company to assess the adequacy of its risk capacity and its capital as well as determine its risk appetite. It sets out a framework that links the principal uses of an insurance company’s total available capital, to strategic drivers of capital and risk capacity utilization. Under this framework, a company needs to:

  • Set aside a “strategic reserve” intended to fund unforeseen opportunities (e.g. acquisitions) that is deducted from available capital to determine available risk capacity
  • Determine the risk capital requirement needed to execute its growth strategy, including i) the modeled strategic risk capital requirement derived from analysis of its prospective risk profile and ii) a “safety buffer” ensuring that its strategy can be executed, at a high level of confidence set by its Board of Directors, in spite of:
    • Catastrophic loss or investment scenarios that might cause downgrading by rating agencies, or cause RBC adequacy to decline so far that regulators would be required to intervene
    • Understatement of the modeled strategic risk capital requirement caused by “model risk”, or by risks that are inherently difficult to model appropriately (e.g. systemic risk, operational risks that increase insurance or investment losses, parameter risks, unstable correlations)
    • Extreme circumstances that can reduce and sometimes eliminate benefits from diversification across lines or across insurance and investment activities
    • The risk that a company might not be able to raise capital from investors on terms acceptable to shareholders when needed to restore its capital position

In the wake of the financial crisis, many companies are trying to determine how large a capital safety buffer (including off balance sheet contingent capital) they should have to absorb losses caused by catastrophic events while containing the negative impact of additional capital on profitability metrics, especially their return on shareholders’ equity. In practice, they would like to hold enough capital to ensure that their insurance strength rating would remain at or above the level needed to sustain the confidence of customers and regulators over a suitably long time period (e.g. ten years) at a high confidence level, while avoiding declines in returns that might reduce their valuation multiples.

As shown by the figure, a company’s capital requirement represents its risk capacity utilization under its strategic plan as well as its risk appetite.  When this capital requirement, including the suitable safety buffer discussed above, is smaller than the company’s risk capacity (as shown on the figure), the company has “excess capital”. It has an option to deploy some or all of its excess capital productively or return it to investors. When this capital requirement, including the safety buffer, is greater than the company’s risk capacity, a company is overextended and needs to take action to reduce planned capacity utilization or raise additional capital to increase its risk capacity.
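
A minimal sketch of the capacity arithmetic described above follows, with purely hypothetical figures; the terms follow the framework in the text rather than any standard calculation.

```python
# Hypothetical illustration of the capacity arithmetic described above.
def assess_capacity(available_capital, strategic_reserve,
                    modeled_risk_capital, safety_buffer):
    risk_capacity = available_capital - strategic_reserve        # after the strategic reserve
    capital_requirement = modeled_risk_capital + safety_buffer   # needed to execute the plan
    utilization = capital_requirement / risk_capacity            # > 1 means overextended
    excess_capital = risk_capacity - capital_requirement         # deploy or return if positive
    return utilization, excess_capital

utilization, excess = assess_capacity(available_capital=10_000, strategic_reserve=1_000,
                                      modeled_risk_capital=6_500, safety_buffer=1_500)
print(f"Risk capacity utilization: {utilization:.0%}; excess capital: {excess}")
```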

The mutual dependency of a company’s risk profile, risk capacity utilization, risk capacity, capital available and risk appetite and the irreducible uncertainty of financial results of insurance activities create a context in which management needs to ensure that risk capacity management and strategy management are aligned with the return expectations and risk concerns of shareholders.

Jean-Pierre Berliet

(203) 247-6448

jpberliet@att.net

February 14, 2011

Note: This article is abstracted from the “Risk Management and Business Strategy in P/C Insurance Companies” briefing paper published by Advisen (www.advisen.com) and available at the Corner Store.

Sins of Risk Measurement

February 5, 2011
Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line, separating what is highly unlikely from what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain with increasing size of the potential loss numbers.  In other words, the larger the potential loss, the less certain you can be about how likely it might be.  Downplaying uncertainty is usually a sin of omission; it is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements are implicitly treated as if they were all calculated totally consistently.  However,  in fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability and using tools that have totally different degrees of certitude built into them.  In the end, this will encourage decisions to take on whatever risks that we underestimate the most through this process.

3.  Validate to Confirmation.  When we validate risk models, it is common to stop the validation process when we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted and if validation fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased to finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective Parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen, because its experience was good or bad, to deliberately influence the outcome.

5.  Hiding behind Math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what that entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying Wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that their risk calculations are also associated with potential gains will be thought to be a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

Risk Environment

January 10, 2011

It seems that there are three approaches to how to look at the riskiness of the future when assessing risk of a specific exposure:

  1. Look at the “long term” frequency and severity and look at risk based upon assuming that the near term future is a “typically” risky period.
  2. Look at the market’s current idea of near term future riskiness.  This is evident in terms of items such as implied volatility.
  3. Focus on “Expert Opinion” of the risk environment.

There are proponents of each approach.  That is because there are strengths and weaknesses for each approach.

Long Term Approach

The long term view of the risk environment helps to make sure that the company takes into account “all” of the risk that could be inherent in its risk positions.  The negative of this approach is that, most of the time, it will not represent the risk environment that will be faced in the immediate future.

Market View

The market view of risk does definitely give an immediate view of the risk environment.  It is thought by proponents to be the only valid approach to getting such a view.  However, the market implied risk view may be a little too short term for some purposes.  And when trying to look at the longer term risk environment through market implied factors, very large inaccuracies may creep into the view.  That is because factors other than the view of risk are embedded in market implied measures; they are not so large for very short term periods, but they grow to predominate over longer time periods.

Expert Opinion

Expert opinion can also reflect the current risk environment and is potentially adaptable to the desired time frame.  However, the main complaint with Expert Opinion is that there is no specific way to know whether an expert opinion of the risk environment is equivalent between one point of time and another.  One explanation for the uncertainty in that can be found in the changing risk attitudes that are described by the Theory of Plural Rationalities. Experts may have methods to overcome the changing waves of risk attitudes that they are personally exposed to, but it is hard to believe that they can escape that basic human cycle entirely.

Risk environment is important in setting risk strategies and adjusting risk tolerances and appetites.

Using the Long Term approach at all times and effectively ignoring the different risk environments is going to be as effective as crossing a street using long term averages for the amount of traffic.

Intrinsic Risk

November 26, 2010

If you were told that someone had flipped a coin 60 times and had found that heads were the results 2/3 of the time, you might have several reactions.

  • You might doubt whether the coin was a real coin or whether it was altered.
  • You might suspect that the person who got that result was doing something other than a fair flip.
  • You might doubt whether they are able to count or whether they actually counted.
  • You might doubt whether they are telling the truth.
  • You might start to calculate the likelihood of that result with a fair coin.

Once you take that last step, you find that the story is highly unlikely, but definitely not impossible.  In fact, my computer tells me that if I lined up 225 people and had them all flip a coin 60 times, there is a fifty-fifty chance  that at least one person will get that many heads.
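
For readers who want to reproduce that kind of figure, here is a minimal sketch using scipy. Since the post does not say whether "that many heads" means exactly 40 or at least 40 out of 60, both are computed; 225 is the group size quoted in the text.

```python
# Sketch: check the coin-flip figures quoted above.  "That many heads" could
# mean exactly 40 or at least 40 out of 60, so both are shown for 225 people.
from scipy.stats import binom

n_flips, n_people = 60, 225
cases = {
    "exactly 40": binom.pmf(40, n_flips, 0.5),
    "40 or more": binom.sf(39, n_flips, 0.5),   # P(X >= 40)
}
for label, p in cases.items():
    chance_someone = 1 - (1 - p) ** n_people
    print(f"P({label} heads in 60 flips) = {p:.3%}; "
          f"chance at least one of {n_people} people sees it = {chance_someone:.0%}")
```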

So how should you evaluate the risk of getting 40 heads out of 60 flips?  Should you do calculations based upon the expected likelihood of heads based upon an examination of the coin?  You look at it and see that there are two sides and a thin edge.  You assess whether it seems to be uniformly balanced.  Then you conclude that you are fairly certain of the inherent risk of the coin flipping process.

Your other choice to assess the risk is to base your evaluation on the observed outcomes of coin flips.  This means that the central limit theorem should help us to eventually get the right number.  But if your first observation is the person described above, it will take quite a few additional observations before you find out what the central limit theorem has to tell you.

The point is that a purely observation-based approach will not always give you the best answer.  It is good to make sure that you understand something about the intrinsic risk.

If you are still not convinced of this, ask the turkey.  Taleb uses that turkey story to explain a Black Swan.  But if you think about it, many Black Swans are nothing more than ignorance of intrinsic risk.

Risk Regimes

November 18, 2010

Lately, economists talk of three phases of the economy: boom, bust and “normal”. These could all be seen as risk regimes, and these regimes exist for many different risks.

There is actually a fourth regime and for many financial markets we are in that regime now. I would call that regime “Uncertain”. In July, Bernanke said that the outlook for the economy was “unusually uncertain”.

So these regimes would be:

  • Boom – high drift, low vol
  • Bust – negative drift, low vol
  • Normal – moderate drift, moderate vol
  • Uncertain – unknown drift and unknown vol (both with a high degree of variability)

So managing risk effectively requires that you know the current risk regime.

There is no generic ERM that works in all risk regimes.  And there is no risk model that is valid in all risk regimes.

Risk Management is done NOW to impact your current risk positions and to manage your next-period potential losses.

So think about four risk models, not about how to calibrate one model to incorporate experience from all four regimes.  The one model will ALWAYS be fairly wrong, at least with four different models, you have a chance to be approximately right some of the time.
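
As a sketch of the "four models, not one" idea, the snippet below keeps a separate, regime-specific return model for each regime. The drift and volatility parameters are purely illustrative assumptions, not taken from the post.

```python
# Sketch of "four models, not one": a separate next-period return model per
# regime.  The drift and volatility figures are illustrative assumptions only.
import numpy as np

REGIME_PARAMS = {                 # (annual drift, annual volatility)
    "boom":      (0.10, 0.08),
    "bust":      (-0.08, 0.10),
    "normal":    (0.05, 0.15),
    "uncertain": (0.00, 0.35),    # unknown drift/vol proxied here by a wide volatility
}

def simulate_next_period(regime, n_sims=10_000, seed=0):
    """Simulate next-period returns under the model for the current regime."""
    drift, vol = REGIME_PARAMS[regime]
    rng = np.random.default_rng(seed)
    return rng.normal(drift, vol, size=n_sims)

returns = simulate_next_period("uncertain")
print(f"1-in-200 loss under the 'uncertain' model: {np.percentile(returns, 0.5):.1%}")
```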

Risk Managers do not know the Future any Better than Anyone Else

September 17, 2010

Criticisms of risk managers for not anticipating some emerging future are overdone.  When a major unexpected loss happens, everyone missed it.

Risk managers do not have any special magic ball.  The future is just as dim to us as to everyone else.

Sometimes we forget that.  Our methods seem to be peering into the future.

But that is not really correct.  We are not looking into the future.  Not only do we not know the future, we do not even know the likelihood of various future possibilities, the probability distribution of the future.

That does not make our work a waste of time, however.

What we should be doing with our models is to write down clearly that view of the future that we use to base our decisions upon.

You see, everyone who makes a decision must have a picture of the future possibilities that they are using to weigh the possibilities and make that decision.  Most people cannot necessarily articulate that picture with any specificity.  Management teams try to make sure that they are all working with similar visions of the future so that the sum of all their decisions makes sense together.

But one of the innovations of the new risk management discipline is to provide a very detailed exposition of that picture of the future.

Unfortunately, many risk managers are caught up in the mechanics of creating the model and they fail to recognize the extreme importance of this aspect of their work.  Risk Managers need to make sure that the future that is in their model IS the future that management wants to use to base their decisions upon.  The Risk Manager needs to understand whether he/she is the leader or the follower in the process of identifying that future vision.

If the leader, then there needs to be an explicit discussion where the other top managers affirm that they agree with the future suggested by the Risk Manager.

If the follower, then the risk manager will first need to say back to the rest of management what they are hearing to make sure that they are all on the same page.  They might still want to present alternate futures, but they need to be prepared to have those visions heavily discounted in decision making.

The Risk Managers who do not understand this process go forward developing their models based upon their best vision of the future and are frustrated when management does not find their models to be very helpful.  Sometimes, the risk manager presents their models as if they DO have some special insight into the future.

My vision of the future is that this path will not succeed.

Simplicity Fails

September 16, 2010

Usually the maxim KISS (Keep it Simple Stupid) is the way to go.

But in Risk Management, just the opposite is true. If you keep it simple, you will end up being eaten alive.

That is because risk is constantly changing. At any time, your competitors will try to change the game, taking the better risks and – if you keep it simple and stand still – leaving you with the worst risks.

If you keep it simple and focused and identify the ONE MOST IMPORTANT RISK METRIC and focus all of your risk management systems around controlling risk as defined by that one metric, you will eventually end up accumulating more and more of some risk that fails to register under that metric.  See Risk and Light.

The solution is not to get better at being Simple, but to get good at managing complexity.

That means looking at risk through many lenses, and then focusing on the most important aspects of risk for each situation.  That may mean that you will need to have different risk measures for different risks.  Something that is actually the opposite of the thrust of the ERM movement towards the homogenization of risk measurement.  There are clearly benefits of having one common measure of risk that can be applied across all risks, but some folks went too far and abandoned their risk specific metrics in the process.

And there needs to be a process of regularly going back to what you had decided were the most important risk measures and making sure that there has not been some sort of regime change that means you should be adding some new risk measure.

So, you should try Simple at your own risk.

It’s simple.  Just pick the important strand.

Did you accept your data due to Confirmation Bias?

August 15, 2010

Confirmation bias (also called confirmatory bias or myside bias) is a tendency for people to favor information that confirms their preconceptions or hypotheses regardless of whether the information is true. As a result, people gather evidence and recall information from memory selectively, and interpret it in a biased way. The biases appear in particular for emotionally significant issues and for established beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and/or recall have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a stronger weighting for data encountered early in an arbitrary series) and illusory correlation (in which people falsely perceive an association between two events or situations).

From wikipedia

Today’s New York Times tells a story of Japanese longevity data.  Japan has long thought itself to be the home of a large fraction of the world’s oldest people.  The myth of self was that the Japanese lifestyle was healthier than that of any other people and the longevity was a result.

A google search on “The secret of Japanese Longevity” turns up 400,000 web pages that extol the virtues of Japanese diet and lifestyle.  But the news story says that as many as 281 of these extremely old Japanese folks cannot be found.  The efforts to find them revealed numerous cases of fraud and neglect.  This investigation started after they found that the man who had been on their records as the longest lived male had actually been dead for over 20 years!  Someone had been cashing his pension checks for those years and neglecting to report the death.

The secret of Japanese Longevity may well be just bad data.

But the bad data was accepted because it confirmed the going in belief – the belief that Japanese lifestyle was healthier.

The same sort of bad data fed the Sub Prime crisis.  Housing prices were believed to never go down.  So data that confirmed that belief was readily accepted.  Defaults on Sub Prime mortgages were thought to fall within a manageable range and data that confirmed that belief was accepted.

Data that started to appear in late 2006 that indicated that those trends were not going to be permanent and in fact that they were reversing was widely ignored.  One of the most common aspects of confirmation bias is to consider non-confirming data as unusable in some way.

We try to filter out noise and work only with signal.  But sometimes, the noise is a signal all its own.  And a very important signal to risk managers.

A Friedman Model

July 29, 2010

Friedman freed the economists with his argument that economic theories did not need to be tied to realistic assumptions:

a theory cannot be tested by comparing its “assumptions” directly with “reality.” Indeed, there is no meaningful way in which this can be done. Complete “realism” is clearly unattainable, and the question whether a theory is realistic “enough” can be settled only by seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories. Yet the belief that a theory can be tested by the realism of its assumptions independently of the accuracy of its predictions is widespread and the source of much of the perennial criticism of economic theory as unrealistic. Such criticism is largely irrelevant, and, in consequence, most attempts to reform economic theory that it has stimulated have been unsuccessful.

Milton Friedman, 1953, Some Implications for Economic Issues

Maybe Friedman fully understood the implications of what he suggested.  But it seems likely that many of the folks who used this argument to justify their models and theories took it to extremes.

You see, the issue relates to the question of how you test that the theory predictions are realistic.  Because it is quite easy to imagine that a theory could make good predictions during a period of time when the missing or unrealistic assumptions are not important because they are constant or are overwhelmed by the importance of other factors.

The alternate idea that a model has both realistic inputs and outputs is more encouraging.  The realistic inputs will be a more stringent test of the model’s ability to make predictions that take into account the lumpiness of reality.  A model with unrealistic assumptions or inputs does not give that.

Friedman argued that since it was impossible for a theory (or model) to be totally realistic, realism could not be a criterion for accepting a theory.

That is certainly an argument that cannot be logically refuted.

But he fails to mention an important consideration.  All theories and models need to be re-validated.  His criterion of “seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories” can be true for some period of time and then not true under different conditions.

So users of theories and models MUST be constantly vigilant.

And they should be aware that, since their test of model validity is purely empirical, the model or theory may no longer be valid as things change that are not included in its partial reality.

So a Friedman Model is one that lacks some fundamental realism in its inputs but gives answers that give “good enough” predictions.  Users of Friedman models should beware.

Biased Risk Decisions

June 18, 2010

The information is all there.  We have just wrapped it in so many technical terms that perhaps we forget what it is referring to.

Behavioral Finance explains exactly how people tend to make decisions without models.  They call them Biases and Heuristics.

This link is to one of my absolute favorite pages on the entire internet.  LIST OF COGNITIVE BIASES Take a look.  See if you can find the ways that you made your last 10 major business decisions there.

Now models are the quants’ way to overcome these biases.  Quants believe that they can build a model that keeps the user from falling into some of the more emotional cognitive biases, such as the anchoring effect.  With a model, for example, anchoring is avoided because the modeler very carefully gives equal weight to many data points instead of more weight to the most recent data point.

But what the quants fail to recognize is that models strengthen some of the biases.  For example, models and modelers often fall under the Clustering illusion, finding patterns and attributing statistical distributions to data recording a phenomenon that has just finished one phase and is about to move on to another.

Models promote the hindsight bias.  No matter how surprising an event is at the time, within a few years, the data recording the impact of the event is incorporated into the data sets and the modelers henceforth give the impression that the model is now calibrated to consider just such an event.

And in the end, the model is often no more than a complicated version of the biases of the modeler, an example of the Confirmation Bias where the modeler has constructed a model that confirms their going in world view, rather than representing the actual world.

So that is the trade-off, between biased decisions with a model and biased decisions without a model.  What is a non-modeling manager to do?

I would suggest that they should go to that wikipedia page on biases and learn about their own biases and also sit down with that list with their modeler and get the modeler to reveal their biases as well.

Fortunately or unfortunately, things in most financial firms are very complicated.  It is almost impossible to get it right balancing all of the moving parts that make up the entirety of most firms without the help of a model.  But if the decision maker understands their own biases as well as the biases of the model, perhaps they can avoid more of them.

Finally, Jos Berkemeijer asks what must a modeler know if they are also the decision maker.  I would suggest that such a person needs desperately to understand their own biases.  They can get a little insight into this from traditional peer review.  But I would suggest even more than that they need to review the wiki list of biases with their peer reviewer and hope that the peer reviewer feels secure enough to be honest with them.

Risk Velocity

June 17, 2010

By Chris Mandel

Understand the probability of loss, adjusted for the severity of its impact, and you have a sure-fire method for measuring risk.

Sounds familiar and seems on point; but is it? This actuarial construct is useful and adds to our understanding of many types of risk. But if we had these estimates down pat, then how do we explain the financial crisis and its devastating results? The consequences of this failure have been overwhelming.

Enter “risk velocity,” or how quickly risks create loss events. Another way to think about the concept is in terms of “time to impact,” a military phrase, a perspective that implies proactively assessing when the objective will be achieved. While relatively new in the risk expert forums I read, I would suggest this is a valuable concept to understand, and more so to apply.

It is well and good to know how likely it is that a risk will manifest into a loss. Better yet to understand what the loss will be if it manifests. But perhaps the best way to generate a more comprehensive assessment of risk is to estimate how much time there may be to prepare a response or make some other risk treatment decision about an exposure. This allows you to prioritize more rapidly developing exposures for action. Dynamic action is at the heart of robust risk management.
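
One way to apply this, sketched below with entirely hypothetical exposures and numbers, is to carry a time-to-impact estimate alongside likelihood and severity and sort the fastest-moving exposures to the top of the action list.

```python
# Sketch: carry a time-to-impact estimate alongside likelihood and severity,
# then act on the fastest-moving exposures first.  All values are invented.
from dataclasses import dataclass

@dataclass
class RiskExposure:
    name: str
    likelihood: float       # annual probability of the loss event
    severity: float         # expected loss if it occurs, in $ millions
    days_to_impact: float   # estimated time available to respond

    @property
    def expected_loss(self) -> float:
        return self.likelihood * self.severity

exposures = [
    RiskExposure("data breach", 0.10, 40.0, 5),
    RiskExposure("casualty reserve deterioration", 0.20, 60.0, 365),
    RiskExposure("counterparty default", 0.05, 80.0, 30),
]

# Fastest-moving exposures first; break ties by expected loss.
for e in sorted(exposures, key=lambda e: (e.days_to_impact, -e.expected_loss)):
    print(f"{e.name}: expected loss ${e.expected_loss:.1f}m, "
          f"{e.days_to_impact:.0f} days to impact")
```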

After all, expending all of your limited resources on identification and assessment really doesn’t buy you much but awareness. In fact awareness, from a legal perspective, creates another element of risk, one that can be quite costly if reasonable action is not taken in a timely manner. Not every exposure will result in this incremental risk, but a surprising number do.

Right now, there’s a substantial number of actors in the financial services sector who wish they’d understood risk velocity and taken some form of prudent action that could have perhaps altered the course of loss events as they came home to roost; if only.

More at Risk and Insurance

Not Complex Enough

June 10, 2010

Things changed and the models did not adapt.  But I am saying that is mostly because the models had no place to put the information.

With 20-20 hindsight, perhaps the models would have been better if, instead of boiling everyone in one pot, you separated out folks into 5 or 10 pots.  Put the flippers into a separate pot.  Put the doctors into another pot.  (Did folks really believe that the no doc mortgages represented 10 times as many doctors as previously?)  What about the no doc loans to contractors?  Wasn’t there a double risk there?  Put the people with LTV>100% in another pot.  Then model your 20% drop in prices.
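
A minimal sketch of that "separate pots" idea follows; the segments, balances, default rates and severity are invented for illustration, not estimates of the actual market.

```python
# Sketch of the "separate pots" idea: stress each borrower segment with its own
# default assumption for a 20% price drop, rather than one pooled assumption.
# Segments, balances, default rates and loss severity are all invented.
segments = {
    # name: (balance in $bn, assumed default rate after a 20% price drop)
    "flippers":            (50,  0.40),
    "no-doc 'doctors'":    (30,  0.25),
    "no-doc contractors":  (20,  0.30),
    "LTV > 100%":          (40,  0.35),
    "prime":               (360, 0.03),
}
severity = 0.45  # assumed loss given default after the price decline

total_balance = sum(bal for bal, _ in segments.values())
weighted_default = sum(bal * dr for bal, dr in segments.values()) / total_balance
stressed_loss = sum(bal * dr * severity for bal, dr in segments.values())

print(f"Balance-weighted default rate across the pots: {weighted_default:.1%}")
print(f"Stressed loss from the 20% price drop: ${stressed_loss:.0f}bn")
```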

And there was also no model of what the real estate market would do if there were 500,000 more houses than buyers.  Or any attempt to understand whether there were too many houses or not.

And the whole financial modeling framework has never had the ability to reflect the spirals that happen.

The models are just not complex enough for the world we live in.

Many are taught to look at a picture like the view above of the situation in Afghanistan and immediately demand that the picture be simplified.  To immediately conclude that if we draw a picture that complicated then it MUST be because we do not really understand the situation.  However, complexity like the above may be a sign that the situation is really being understood and that the model might just be complex enough to work as things change.

The idea that we will change the world so that the models work is tragically wrong headed.   But that is exactly the thinking that is behind most of the attempts at “reforming” the financial markets.  The thinking is that our models accurately describe the world when it is “normal” and that when our models are wrong it is because the world is “abnormal”.  So the conclusion is that we should be trying to keep the world in the normal range.

But the way that our models always fail is when the world makes a change, a non-linearity in the terminology of modelers.  The oft used analogy is the non-linearity that ruined the businesses of the buggy whip manufacturers.  They had a great model of demand for their product that showed how there was more demand every spring so that they put on extra shifts in the winter and rolled out the new models every April.

Then one April, the bottom fell out of their market.  That was because not only did those pesky horseless carriages cut into their business, but the very folks who bought the cars were the people who had always been sure sales for new buggy whips each and every year – that early adopter set who just had to have the latest model of buggy whip.

So we must recognize that these troubling times when the models do not work are frequently because the world is fundamentally changing and the models were simply not complex enough to capture the non-linearities.

What’s the Truth?

May 21, 2010

There has always been an issue with TRUTH with regard to risk.  At least there is when dealing with SOME PEOPLE. 

The risk analyst prepares a report about a proposal that shows the new proposal in a bad light.  The business person who is the champion of the proposal questions the TRUTH of the matter.  An unprepared analyst can easily get picked apart by this sort of attack.  If it becomes a true showdown between the business person and the analyst, in many companies, the business person can find a way to shed enough doubt on the TRUTH of the situation to win the day. 

The preparation needed by the analyst is to understand that there is more than one TRUTH to the matter of risk.  I can think of at least four points of view.  In addition, there are many, many different angles and approaches to evaluating risk.  And since risk analysis is about the future, there is no ONE TRUTH.  The preparation needed is to understand ALL of the points of view as well many of the different angles and approaches to analysis of risk. 

The four points of view are:

  1. Mean Reversion – things will have their ups and downs but those will cancel out and this will be very profitable. 
  2. History Repeats – we can understand risk just fine by looking at the past. 
  3. Impending Disaster – anything you can imagine, I can imagine something worse.
  4. Unpredictable – we can’t know the future so why bother trying. 

Each point of view will have totally different beliefs about the TRUTH of a risk evaluation.  You will not win an argument with someone who has one belief by marshalling facts and analysis from one of the other beliefs.  And most confusing of all, each of these beliefs is actually the TRUTH at some point in time. 

For periods of time, the world does act in a mean reverting manner.  When it does, make sure that you are buying on the dips. 

Other times, things do bounce along within a range of ups and downs that are consistent with some part of the historical record.  Careful risk taking is in order then. 

And as we saw in the fall of 2008 in the financial markets there are times when every day you wake up and wish you had sold out of your risk positions yesterday. 

But right now, things are pretty unpredictable with major ups and downs coming with very little notice.  Volatility is again far above historical ranges.  Best to keep your exposures small and spread out. 

So understand that with regard to RISK, TRUTH is not quite so easy to pin down. 

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report (CARE_EN) has been posted to the IAA website this week.

It raises a point that must be fairly obvious to everyone that you just cannot manage risks without looking at them from multiple angles.

Or at least it should now be obvious. Here are 8 different angles on risk that are discussed in the report and my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   –  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there were a significant amount of sub prime mortgages where the Gross mortgage payments were higher than the Gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  –  Some firms did all of their analysis on an economic basis and kept saying that they were fine as their reported financials showed them dying.  They should have known in advance of the risk of accounting that was different from their analysis.
  3. REGULATORY MEASURE OF RISK  –  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right”, you need to know how important others, including your regulator, will be seeing things.  Better to have a discussion with the regulator long before a problem arises.  You are just not as credible telling the regulator, in the middle of what seems to be a crisis, that the regulatory view is off target.
  4. SHORT TERM VS. LONG TERM RISKS  –  While it is really nice that everyone has agreed to focus in on a one year view of risks, for situations that may well extend beyond one year, it can be vitally important to know how the risk might impact the firm over a multi year period.
  5. KNOWN RISK AND EMERGING RISKS  –  the fact that your risk model did not include anything for volcano risk, is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  –  Again, while an agreement on a 1 in 200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  –  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  –  This is another way of saying to focus on the economic vs the accounting.

Read the report to get the more measured and complete view prepared by the 15 actuaries from US, UK, Australia and China who participated in the working group to prepare the report.

Comprehensive Actuarial Risk Evaluation

Dangerous Words

April 27, 2010

One of the causes of the Financial Crisis that is sometimes cited is an inappropriate reliance on complex financial models.  In our defense, risk managers have often said that users did not take the time to understand the models that they relied upon.

And I have said that in some sense, blaming the bad decisions on the models is like a driver who gets lost blaming it on the car.

But we risk managers and risk modelers do need to be careful with the words that we use.  Some of the most common risk management terminology is guilty of being totally misleading to someone who has no risk management training – who simply relies upon their understanding of English.

One of the fundamental steps of risk management is to MEASURE RISK.

I would suggest that this very common term is potentially misleading and risk managers should consider using it less.

In common usage, you could say that you measure a distance between two points or measure the weight of an object.  Measurement usually refers to something completely objective.

However, when we “measure” risk, it is not at all objective.  That is because Risk is actually about the future.  We cannot measure the future.  Or any specific aspect of the future.

While I can measure my height and weight today, I cannot now measure what it will be tomorrow.  I can predict what it might be tomorrow.  I might be pretty certain of a fairly tight range of values, but that does not make my prediction into a measurement.

So by the very words we use to describe what we are doing, we sought to instill a degree of certainty and reliability that is impossible and unwarranted.  We did that perhaps as mathematicians who are used to starting a problem by defining terms.  So we start our work by defining our calculation as a “measurement” of risk.

However, non-mathematicians are not so used to defining A = B at the start of the day and then remembering thereafter that whenever they hear someone refer to A, that they really mean B.

We also may have defined our work as “measuring risk” to instill in it enough confidence from the users that they would actually pay attention to the work and rely upon it.  In which case we are not quite as innocent as we might claim on the over reliance front.

It might be difficult to retreat now, however. Try telling management that you do not now, nor have you ever, measured risk. And see what happens to your budget.

Volcano Risk 2

April 20, 2010

Top 10 European Volcanos in terms of people nearby and potential losses from an eruption:

Volcano (Country) – Affected population – Value of residences at risk

1. Vesuvius (Italy) – 1,651,950 – $66.1bn
2. Campi Flegrei (Italy) – 144,144 – $7.8bn
3. La Soufrière Guadeloupe (Guadeloupe, France) – 94,037 – $3.8bn
4. Etna (Italy) – 70,819 – $2.8bn
5. Agua de Pau (Azores, Portugal) – 34,307 – $1.4bn
6. Soufrière Saint Vincent (Saint Vincent, Caribbean) – 24,493 – $1bn
7. Furnas (Azores, Portugal) – 19,862 – $0.8bn
8. Sete Cidades (Azores, Portugal) – 17,889 – $0.7bn
9. Hekla (Iceland) – 10,024 – $0.4bn
10. Mt Pelée (Martinique, France) – 10,002 – $0.4bn

http://www.strategicrisk.co.uk/story.asp?source=srbreaknewsRel&storycode=384008

Why the valuation of RMBS holdings needed changing

January 18, 2010

Post from Michael A Cohen, Principal – Cohen Strategic Consulting

Last November’s decision by the National Association of Insurance Commissioners (NAIC) to appoint PIMCO Advisory to assess the holdings of non-agency residential mortgage-backed securities (RMBS) signaled a marked change in attitude towards the major ratings agencies. This move by the NAIC — the regulatory body for the insurance industry in the US, comprising the insurance commissioners of the 50 states – was aimed at determining the appropriate amount of risk-adjusted capital to be held by US insurers (more than 1,600 companies in both the life and property/casualty segments) for RMBS on their balance sheets.

Why did the NAIC act?

A number of problems had arisen from the way RMBS held by insurers had historically been rated by some of the rating agencies that are “nationally recognized statistical rating organizations” (NRSROs), though it is important to note that not all NRSROs had engaged in this particular rating activity.

RMBS had been assigned (much) higher ratings than they seem to have deserved at the time, albeit with the benefit of hindsight. The higher ratings also led to lower capital charges for entities holding these securitizations (insurers, in this example) in determining the risk-adjusted capital they needed to hold for regulatory standards.
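
To make that mechanism concrete, here is a minimal sketch in Python of how a rating drives the capital charge on a holding. The charge factors and the $100M holding are hypothetical placeholders for illustration only, not the actual NAIC risk-based capital charges.

```python
# Purely illustrative: the rating assigned to a holding drives the capital
# charge, so an over-generous rating translates directly into less required
# capital. These factors are hypothetical, NOT actual NAIC charges.
HYPOTHETICAL_CHARGE_BY_RATING = {
    "AAA": 0.004,
    "A": 0.013,
    "BBB": 0.046,
    "BB": 0.100,
    "B": 0.230,
}

holding = 100_000_000  # a hypothetical $100M par holding of RMBS

for rating in ("AAA", "BB"):
    required_capital = holding * HYPOTHETICAL_CHARGE_BY_RATING[rating]
    print(f"Rated {rating}: required capital of roughly ${required_capital:,.0f}")
```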

Consequently, these insurance organizations were ultimately viewed to be undercapitalized for their collective investment risks. These higher ratings also led to lower prices for the securitizations, which meant that the purchasers were ultimately getting much lower risk-adjusted returns than had been envisaged (and in many cases losses) for their purchases.

The analysis that was performed by the NRSROs has been strenuously called into question by many industry observers during the financial crisis of the past two years, for two primary reasons:

  • The level of analytical due diligence was weak, and the default statistics used to evaluate these securities did not reflect the actual level of stress in the marketplace. As a consequence, ratings were issued at levels higher than the underlying analytics supported, in part to placate the purchasers of the ratings, and a number of industry insiders observed that this was done.
  • Once the RMBS marketplace came under extreme stress, the rating agencies determined that the risk charges for these securities should increase severalfold, materially increasing the amount of risk-adjusted capital that insurers holding RMBS needed and ultimately jeopardizing those companies’ financial strength ratings.

Flaws in rating RMBS

Rating agencies have historically been paid for their rating services by those entities to which they assign ratings (that reflect claims paying, debt paying, principal paying, etc. abilities). Industry observers have long viewed this relationship as a potential conflict of interest, but, because insurers and buyers had not been materially harmed by this process until recently, the industry practice of rating agencies assigning ratings to companies who were paying them for the service was not strenuously challenged.

Further, since the rating agencies can increase their profit margins by increasing their overall rating fees while maintaining their expenses in the course of performing rating analysis, it follows that there is an incentive to increase the volume of ratings issued by the staff, which implies less time being spent on a particular analysis. Again, until recently, the rated entities and the purchasers of rated securities and insurance policies did not feel sufficiently harmed to challenge the process.


Turn VAR Inside Out – To Get S

November 13, 2009

S

Survival.  That is what you really want to know.  When the Board meeting ends, the last thing that they should hear is management assuring them that the company will still be in business when the next meeting is due to be held.

S

But survival is really not defined in terms of bankruptcy, or even regulatory take-over.  If your firm is in the assurance business, the company does not need to go nearly that far.  There is usually a point, which might be quite remote from bankruptcy, where the firm loses the confidence of the market and is no longer able to do business.  And good managers know exactly where that point lies.

S

So S is the likelihood of avoiding that point of no return.  It is a percentage.  Some might cry that no one will understand a percentage.  That they need dollars to understand.  But VAR includes a percentage as well.  Just because no one says the percentage does not mean it is not there.  It actually means that no one is even bothering to try to help people understand what VAR is.  The VAR number is really one part of a three-part sentence:

“The 99% VAR over one year is $67.8 M.”  By itself, VAR does not tell you whether the firm is in trouble.  If the VAR doubles from one period to the next, is the firm in trouble?  The answer cannot be determined without further information.
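
As a minimal sketch (not anyone's production model), here is how that three-part sentence can be read off a run of Monte Carlo outcomes; the normal distribution and the dollar figures are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical one-year P&L scenarios in $M (negative values are losses).
pnl = rng.normal(loc=25.0, scale=40.0, size=10_000)

# The 99% VaR is the loss exceeded in only 1% of scenarios,
# i.e. the 1st percentile of the P&L distribution, stated as a positive number.
var_99 = -np.percentile(pnl, 1)

# The full three-part sentence: confidence level, horizon and amount.
print(f"The 99% VaR over one year is ${var_99:,.1f} M")
```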

S

Survival is the probability that, given the real risks of the firm and the real capital of the firm, the firm will not sustain a loss large enough to put an end to its business model.  If your S is 80%, then there is about a 50% chance that your firm will not survive three years! But if your S is 95%, then there is a 50-50 chance that your firm will last at least 13 years.  This arithmetic is why a firm that makes long term promises, like an insurer, needs to have a very high S.  An S of 95% does not really seem high enough.
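
Here is a minimal sketch of the arithmetic behind those horizons, assuming the one-year survival probability S is the same every year and years are independent, so the chance of surviving n years is S raised to the power n.

```python
import math

# Median survival horizon: the number of years n at which S**n falls to 50%.
for S in (0.80, 0.95, 0.985):
    median_years = math.log(0.5) / math.log(S)
    print(f"S = {S:.1%}: roughly a 50-50 chance of surviving another {median_years:.1f} years")
```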

S

Survival is something that can be calculated with the existing VAR model.  Instead of focusing on an arbitrary probability, the calculation focuses on the loss that management feels is enough to put them out of business.  S can be recalculated after a proposed share buy-back or payment of dividends.  S responds to management actions and assists management decisions.
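
A minimal sketch of that calculation, assuming you already have the simulated one-year losses that feed the VaR model and a management view of the point-of-no-return loss. The lognormal loss distribution and all of the figures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated one-year losses in $M from the (hypothetical) existing VaR model.
losses = rng.lognormal(mean=3.0, sigma=1.0, size=50_000)

def survival_probability(losses, point_of_no_return):
    """Share of scenarios in which the loss stays below the point of no return."""
    return float(np.mean(losses < point_of_no_return))

point_of_no_return = 150.0   # the loss management believes would end the business model, $M
print(f"S before the buy-back: {survival_probability(losses, point_of_no_return):.1%}")

# A proposed $30M share buy-back shrinks the buffer; recalculate S.
print(f"S after the buy-back:  {survival_probability(losses, point_of_no_return - 30.0):.1%}")
```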

If your board asks how much risk you are taking, try telling them the firm has a 98.5% Survival probability.  That might actually make more sense to them than saying that the firm might lose as much as $523 M at a 99% confidence level over one year.

So turn your VAR inside out – to get S 

Myths of Market Consistent Valuation

October 31, 2009

    Guest Post from Elliot Varnell

    Myth 1: An arbitrage free model will by itself give a market consistent valuation.

    An arbitrage-free model which is calibrated to deep and liquid market data will give a market consistent valuation. An arbitrage-free model which ignores available deep and liquid market data does not give a market consistent valuation. Having said this, there is no tight definition of what constitutes deep and liquid market data, and therefore no tight definition of what constitutes market consistent valuation. For example, a very relevant question is whether calibrating to historic volatility can be considered market consistent if there is a marginally liquid market in options. CEIOPS CP39, published in July 2009, appears to leave open the question of which volatility could be used, while CP41 requires that a market be deep, liquid and transparent and that these properties be permanent.

    Myth 2: A model calibrated to deep and liquid market data will give a Market Consistent Valuation.

    A model calibrated to deep and liquid market data will only give a market consistent valuation if the model is also arbitrage free. If a model ignores arbitrage-free dynamics, it could still be calibrated to replicate certain prices. However, this would not be a sensible framework for marking to model the prices of other assets and liabilities, as is required for the valuation of many participating life insurance contracts. Having said this, the implementations of some theoretically arbitrage-free models are not themselves fully arbitrage free, due to issues such as discretisation, although they can be designed so that any departures from arbitrage-free pricing are not noticeable within the level of materiality of the overall technical provision calculation.

    Myth 3: Market Consistent Valuation gives the right answer.

    Market consistent valuation does not give the right answer, per se, but an answer conditional on the model and the calibration parameters. The valuation is only as good as these underlying assumptions. One thing we can be sure of is that the model will be wrong in some way. This is why understanding and documenting the weakness of an ESG model and its calibration is as important as the actual model design and calibration itself.

    Myth 4: Market Consistent Valuation gives the amount that a 3rd party will pay for the business.

    Market Consistent Valuation (as calculated using an ESG) gives a value based on pricing at the margin. As with many financial economic models, the model is designed to provide a price based on a small-scale transaction, ignoring trading costs and market illiquidity. The assumption is made that the marginal price of the liability can be applied to the entire balance sheet. Separate economic models are typically required to account for micro-market features; for example the illiquidity of markets or the trading and frictional costs inherent in following an (internal) dynamic hedge strategy. Micro-market features can be most significant in the most extreme market conditions; for example a 1-in-200 stress event.

    Even allowing for the micro-market features, a transaction price will also account (most likely in a much less quantitative manner than using an ESG) for hard-to-value assets (e.g. franchise value) and hard-to-value liabilities (e.g. contingent liabilities).

    Myth 5: Market Consistent Valuation is no more accurate than Discounted Cash Flow techniques using long term subjective rates of return.

    The previous myths could suggest that market consistent valuation is in some way devalued or not useful. This is certainly the viewpoint of some actuaries, especially in the light of the recent financial crisis. However, it could be argued that market consistent valuation, if done properly, gives a more economically meaningful value than traditional DCF techniques and provides better disclosure. It does this by breaking the problem down into clear assumptions about what economic theory is being applied and clear statements of what other assumptions are being made. By breaking down the models and assumptions, weaknesses are more readily identified and economic theory can be applied.


The Glass Box Risk Model

October 19, 2009

I learned a new term today “The Glass Box Risk Model” from a post by Donald R. van Deventer,

Glass Boxes, Black Boxes, CDOs and Grocery Lists

You can read what he has to say about it.  I just wanted to pass along the term “Glass Box.”

A Glass Box Risk Model is one that is exactly the opposite of a Black Box.  With a Black Box Model, you have no idea what is going on inside.  With a Glass Box, you can see everything inside.

Something is needed, however, in addition to transparency, and that is clarity.  To use the physical metaphor further, the glass box could easily be crammed with so, so much complicated stuff that it is only transparent in name.  The complexity acts as a shroud that keeps real transparency from happening.

I would suggest that this argues for separability of the parts of the risk model.  The more different things one tries to cram into a single model, the less likely it is to be separable or truly transparent.

That probably argues against any of the elegance that modelers sometimes prize.  More code is probably preferable to less if that makes things easier to understand.

For example, I give away my age, but I stopped being a programmer about the time when actuaries took up APL.  But I heard from everyone who ever tried to assign maintenance of an APL program to someone other than the developer that APL was a totally elegant but totally opaque programming language.

But I would suggest that the Glass Box should be the ideal for which we strive with our models.

Custard Cream Risk – Compared to What???

September 26, 2009

It was recently revealed that the Custard Cream is the most dangerous biscuit.


But this illustrates the issue with stand-alone risk analysis.  Compared to what?  Last spring, there was quite a bit of concern raised when it was reported that 18 people had died from Swine Flu.  That sounds VERY BAD.  But compared to what?  Later stories revealed that seasonal flu is on average responsible for 30,000 deaths per year in the US.  That breaks down to an average of 82 per day, or more during the flu season if you reflect the fact that there is little flu in the summer months.  No one was ever willing to say whether the 18 deaths were in addition to the 82 expected per day or just a part of that total.

The chart below suggests that Swine Flu is significantly less deadly than the seasonal flu.  However, what it fails to reveal is that Swine Flu is highly transmissible because there is very little immunity in the population.  So even with a very low fatality rate per infection, a very high infection rate means that expectations now are for more than twice as many deaths from Swine Flu as from the seasonal flu.

[Chart: disease fatalities]

For many years, being aware of this issue, I tried to make a comparison whenever I presented a risk assessment.  Most commonly, I used a comparison to the risk in a common stock portfolio: was the risk I was assessing more or less risky than the stocks?  I would compare the average return and the standard deviation of returns as well as the tail risk.  If appropriate, I would make that comparison for one year as well as for many years.
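
A minimal sketch of that side-by-side comparison, with made-up return series standing in for the stock benchmark and the risk being assessed; the point is the shape of the comparison, not the particular numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
stock_returns = rng.normal(0.08, 0.18, size=10_000)   # hypothetical annual stock returns
assessed_risk = rng.normal(0.05, 0.10, size=10_000)   # hypothetical returns of the activity being assessed

def summarize(name, r):
    # Average return, volatility and a simple tail measure (the 1-in-100 worst year).
    print(f"{name:>14}: mean {r.mean():6.1%}  std dev {r.std():6.1%}  "
          f"1-in-100 worst year {np.percentile(r, 1):6.1%}")

summarize("stocks", stock_returns)
summarize("assessed risk", assessed_risk)
```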

But I now realize that was not the best choice.  Experience in the past year reveals that many people did not really have a good idea of how risky the stock market is.  Many risk models would have pegged the 37% drop in the S&P in 2008 as a 1-in-250-year event or worse, even though there have now been calendar-year losses of a similar size three times in the last 105 years, and more if you look within calendar years.

[Chart: S&P calendar-year returns, 1825–2008]

The chart above was made before the end of the year.  By the end of the year, 2008 fell back into the 30% to 40% loss column.  But if your hypothesis had been that a loss that large was a 1-in-200-year event, the likelihood of exactly one occurrence in a 105-year period is only about 31%.  It is much more likely that you would see none (about 59%).  Two occurrences would happen only about 8% of the time, and three or more only about 1.5% of the time.  So it seems that a 1-in-200 return period hypothesis has roughly a 98.5% likelihood of being incorrect.  If you assume a return period of 1-in-50 years, the three observations would be roughly a 75th percentile event.
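
For the record, here is a minimal sketch of the binomial arithmetic behind those percentages.

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k occurrences in n independent years, each with probability p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 105, 1 / 200
p0, p1, p2 = (binom_pmf(k, n, p) for k in range(3))
print(f"P(no years that bad)      = {p0:.0%}")                # about 59%
print(f"P(exactly one such year)  = {p1:.0%}")                # about 31%
print(f"P(exactly two such years) = {p2:.0%}")                # about 8%
print(f"P(three or more)          = {1 - (p0 + p1 + p2):.1%}")  # about 1.5%
```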

So that is a fundamental issue in communicating risk.  Is there really some risk that we know well enough to use as a standard of comparison?

The article on Custard Creams was brought to my attention by Johann Meeke.  He says that he will continue to live dangerously with his biscuits.

