Archive for the ‘Modeling’ category

Determining Risk Capital

February 5, 2022

Knowing the amount of surplus an insurer needs to support risk is fundamental to enterprise risk management (ERM) and to the own risk and solvency assessment (ORSA).

With the increasing focus on ERM, regulators, rating agencies, and insurance and reinsurance executives are more focused on risk capital modeling than ever before.

Risk – and the economic capital associated with it – cannot actually be measured as you can measure your height. Risk is about the future.

To measure risk, you must measure it against an idea of the future. A risk model is the most common tool for comparing one idea of the future against others.

Types of Risk Models

There are many ways to create a model of risk to provide quantitative metrics and derive a figure for the economic capital requirement.

Each approach has inherent strengths and weaknesses; the trade-offs are between factors such as implementation cost, complexity, run time, ability to represent reality, and ease of explaining the findings. Different types of models suit different purposes.

Each of the approaches described below can be used for purposes such as determining economic capital need, capital allocation, and making decisions about risk mitigation strategies.

Some methods may fit a particular situation, company, or philosophy of risk better than others.

Factor-Based Models

Here the concept is to define a relatively small number of risk categories; for each category, we require an exposure metric and a measure of riskiness.

The overall risk can then be calculated by multiplying “exposure × riskiness” for each category, and adding up the category scores.
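
As a sketch, the "exposure × riskiness" arithmetic looks like this. The categories and factors below are illustrative only; real formulas such as the NAIC RBC or the Solvency II Standard Formula use many more inputs and charges.

```python
# Sketch of a factor-based capital calculation. Categories and factor
# values are hypothetical, chosen only to illustrate the arithmetic.

exposures = {               # exposure metric per risk category ($M)
    "underwriting": 500.0,  # e.g. net written premium
    "reserves": 800.0,      # held reserves
    "assets": 1200.0,       # invested assets
    "credit": 300.0,        # reinsurance recoverables
}
factors = {                 # riskiness charge per unit of exposure
    "underwriting": 0.25,
    "reserves": 0.15,
    "assets": 0.05,
    "credit": 0.10,
}

def factor_based_capital(exposures, factors):
    """Overall risk: sum of exposure x riskiness across categories."""
    return sum(exposures[c] * factors[c] for c in exposures)

print(factor_based_capital(exposures, factors))  # 335.0
```

The transparency is visible in the code: each category's contribution can be read off directly, which is a large part of why regulators favor this form.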

Because factor-based models are transparent and straightforward to apply, they are commonly used by regulators and rating agencies.

The NAIC Risk-Based Capital and the Solvency II Standard Formula are calculated in this way, as is A.M. Best’s BCAR score and S&P’s Insurance Capital Model.

Stress Test Models

Stress tests can provide valuable information about how a company might hold up under adversity. As a stand-alone measure or as an adjunct to factor-based methods, stress tests can provide concrete indications that reflect company-specific features without the need for complex modeling. A robust stress testing regime might reflect, for example:

  • Worst company results experienced in the last 20 years
  • Worst results observed across the peer group in the last 20 years
  • Worst results across the peer group in the last 50 years (or, 20% worse than stage 2)
  • Magnitude of stress-to-failure

Stress test models focus on the severity of possible adverse scenarios. While the framework used to create the stress scenario may allow rough estimates of likelihood, this is not the primary goal.
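
A minimal sketch of such a stress ladder, with entirely hypothetical figures:

```python
# Stress-ladder sketch following the stages above; every figure here
# is hypothetical. Results are stated as losses in $M.

own_worst_20y = 120.0    # worst company result, last 20 years
peer_worst_20y = 180.0   # worst peer-group result, last 20 years
peer_worst_50y = 240.0   # worst peer-group result, last 50 years
surplus = 400.0          # current surplus

stage_1 = own_worst_20y
stage_2 = max(stage_1, peer_worst_20y)
# Stage 3: the 50-year peer worst, or 20% worse than stage 2 if larger.
stage_3 = max(peer_worst_50y, 1.20 * stage_2)
stress_to_failure = surplus   # the loss magnitude that exhausts surplus

print(stage_2, stage_3, stress_to_failure)  # 180.0 240.0 400.0
```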

High-Level Stochastic Models

Stochastic models enable us to analyze both the severity and likelihood of possible future scenarios. Such models need not be excessively complex. Indeed, a high-level model can provide useful guidance.

Categories of risk used in a high-level stochastic model might reflect the main categories from a factor-based model already in use; for example, the model might reflect risk sources such as underwriting risk, reserve risk, asset risk, and credit risk.

A stochastic model requires a probability distribution for each of these risk sources. This might be constructed in a somewhat ad-hoc way by building on the results of a stress test model, or it might be developed using more complex actuarial analysis.

Ideally, the stochastic model should also reflect any interdependencies among the various sources of risk. Timing of cash flows and present value calculations may also be included.
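
A high-level stochastic model of this kind can be sketched in a few lines. The marginal distributions, the correlation matrix, and the 99.5% confidence level below are all illustrative assumptions, not prescriptions.

```python
# Minimal high-level stochastic model: four risk sources with lognormal
# marginals, aggregated through correlated normal drivers (a Gaussian
# copula). All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# (mu, sigma) of the underlying normal for each lognormal risk source.
marginals = {
    "underwriting": (4.0, 0.5),
    "reserve":      (4.2, 0.4),
    "asset":        (3.8, 0.7),
    "credit":       (3.0, 0.6),
}
names = list(marginals)
mus = np.array([marginals[n][0] for n in names])
sigmas = np.array([marginals[n][1] for n in names])

# Interdependency among the sources: correlation of the normal drivers.
corr = np.array([
    [1.0, 0.5, 0.2, 0.2],
    [0.5, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.3],
    [0.2, 0.2, 0.3, 1.0],
])

# Correlated standard normals -> correlated lognormal losses.
z = rng.multivariate_normal(np.zeros(4), corr, size=n_sims)
losses = np.exp(mus + sigmas * z)
total = losses.sum(axis=1)

# Economic capital as the 99.5% VaR in excess of the mean outcome.
var_995 = np.quantile(total, 0.995)
economic_capital = var_995 - total.mean()
print(economic_capital > 0)  # True
```

Swapping in distributions calibrated from a stress test or from actuarial analysis changes only the `marginals` and `corr` inputs; the aggregation logic stays the same.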

Detailed Stochastic Models

Some companies prefer to construct a more detailed stochastic model. The level of detail may vary; in order to keep the model practical and facilitate quality control, it may be best to avoid making the model excessively complicated, but rather develop only the level of granularity required to answer key business questions.

Such a model may, for example, sub-divide underwriting risk into several lines of business and/or profit centers, and associate to each of these units a probability distribution for both the frequency and the severity of claims. Naturally, including more granular sources of risk makes the question of interdependency more complicated.
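
A frequency-severity sketch along these lines, with hypothetical line-of-business parameters:

```python
# Frequency-severity sketch of a detailed stochastic model: each line
# of business gets Poisson claim counts and lognormal claim sizes.
# All line-of-business parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

# line name: (annual claim frequency, severity mu, severity sigma)
lines = {
    "property":  (120, 2.0, 1.0),
    "liability": (40, 2.5, 1.4),
}

def simulate_line(freq, mu, sigma, n_sims, rng):
    """Aggregate annual loss: a Poisson number of lognormal claims."""
    counts = rng.poisson(freq, size=n_sims)
    totals = np.zeros(n_sims)
    for i, n in enumerate(counts):
        totals[i] = rng.lognormal(mu, sigma, size=n).sum()
    return totals

# Lines are independent here; a fuller model would add interdependency.
total = sum(simulate_line(f, m, s, n_sims, rng) for f, m, s in lines.values())
print(total.shape, total.min() >= 0)  # (10000,) True
```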

Multi-Year Strategic Models with Active Management

In the real world, business decisions are rarely made in a single-year context. It is possible to create models that simulate multiple, detailed risk distributions over a multi-year time frame.

And it is also possible to build in “management logic,” so that the model responds to evolving circumstances in a way that approximates what management might actually do.

For example, if a company sustained a major catastrophic loss, in the ensuing year management might buy more reinsurance to maintain an adequate A.M. Best rating, rebalance the investment mix, and reassess growth strategy.

Simulation models can approximate this type of decision making, though of course the complexity of the model increases rapidly.

Key Questions and Decisions

Once a type of risk model has been chosen, there are many different ways to use this model to quantify risk capital. To decide how best to proceed, insurer management should consider questions such as:

  • What are the issues to be aware of when creating or refining our model?
  • What software offers the most appropriate platform?
  • What data will we need to collect?
  • What design choices must we make, and which selections are most appropriate for us?
  • How best can we aggregate risk from different sources and deal with interdependency?
  • There are so many risk metrics that can be used to determine risk capital – Value at Risk, Tail Value at Risk, Probability of Ruin, etc. – what are their implications, and how can we choose among them?
  • How should this coordinate with catastrophe modeling?
  • Will our model actually help us to answer the questions most important to our firm?
  • What are best practices for validating our model?
  • How should we allocate risk capital to business units, lines of business, and/or insurance policies?
  • How should we think about the results produced by our model in the context of rating agency capital benchmarks?
  • Introducing a risk capital model may create management issues – how can we anticipate and deal with these?
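
To make the metrics question above concrete, here is a minimal comparison of three of the metrics mentioned (Value at Risk, Tail Value at Risk, Probability of Ruin) on one simulated loss distribution; the distribution and the surplus figure are illustrative assumptions.

```python
# Three common risk metrics on one simulated loss distribution.
# The lognormal parameters and the surplus figure are illustrative.
import numpy as np

rng = np.random.default_rng(2)
losses = rng.lognormal(mean=3.0, sigma=0.8, size=200_000)

alpha = 0.99
var = np.quantile(losses, alpha)      # Value at Risk at 99%
tvar = losses[losses >= var].mean()   # Tail Value at Risk (mean of worst 1%)
surplus = 100.0
p_ruin = (losses > surplus).mean()    # Probability of Ruin at that surplus

# TVaR always sits at or beyond VaR at the same confidence level,
# which is why it is the more conservative choice of metric.
print(tvar > var)  # True
```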

In answering these questions, it is important to consider the intended applications. Will the model be used to establish or refine risk appetite and risk tolerance?

Will modeled results drive reinsurance decisions, or affect choices about growth and merger opportunities? Does the company intend to use risk capital for performance management, or ratemaking?

Will the model be used to complete the NAIC ORSA, or inform rating agency capital adequacy discussions?

The intended applications, along with the strengths and weaknesses of the various modeling approaches and range of risk metrics, should guide decisions throughout the economic capital model design process.

Top 10 RISKVIEWS Posts of 2014 – ORSA Heavily Featured

December 29, 2014

RISKVIEWS believes that this may be the best top 10 list of posts in the history of this blog.  Thanks to our readers whose clicks resulted in their selection.

  • Instructions for a 17 Step ORSA Process – Own Risk and Solvency Assessment is here for Canadian insurers, coming in 2015 for US and required in Europe for 2016. At least 10 other countries have also adopted ORSA and are moving towards full implementation. This post leads you to 17 other posts that give a detailed view of the various parts to a full ORSA process and report.
  • Full Limits Stress Test – Where Solvency and ERM Meet – This post suggests a link between your ERM program and your stress tests for ORSA that is highly logical, but not generally practiced.
  • What kind of Stress Test? – Risk managers need to do a better job communicating what they are doing. Much communication about risk models and stress tests is fairly mechanical and technical. This post suggests some plain-English terminology to describe stress tests to non-technical audiences such as boards and top management.
  • How to Build and Use a Risk Register – A first RISKVIEWS post from a new regular contributor, Harry Hall. Watch for more posts along these lines from Harry in the coming months. And catch Harry on his blog, http://www.pmsouth.com
  • ORSA ==> AC – ST > RCS – You will notice a recurring theme in 2014 – ORSA. That topic has taken up much of RISKVIEWS time in 2014 and will likely take up even more in 2015 and after, as more and more companies undertake their first ORSA process and report. This post is a simple explanation, which RISKVIEWS has used when explaining ORSA to a board of directors, of the question that ORSA is trying to answer.
  • The History of Risk Management – Someone asked RISKVIEWS to do a speech on the history of ERM. This post and the associated new permanent page are the notes from writing that speech. Much more here than could fit into a 15 minute talk.
  • Hierarchy Principle of Risk Management – There are thousands of risks faced by an insurer that do not belong in their ERM program. That is because of the Hierarchy Principle. Many insurers who have followed someone’s urging that ALL risks need to be included in ERM belatedly find out that no one in top management wants to hear from them or to let them talk to the board. A good dose of the Hierarchy Principle will fix that, though it will take time. Bad first impressions are difficult to fix.
  • Risk Culture, Neoclassical Economics, and Enterprise Risk Management – A discussion of the different beliefs about how business and risk work. The difference between the beliefs taught in MBA and finance programs and the beliefs about risk that underpin ERM makes it difficult to reconcile spending time and money on risk management.
  • What CEO’s Think about Risk – A discussion of three different aspects of decision making as practiced by top management, and of how the gap between that practice and the decision-making processes taught to quants can make quants less effective when trying to explain their work and conclusions.
  • Decision Making Under Deep Uncertainty – Explores the concepts of Deep Uncertainty and Wicked Problems. Of interest if you have any risks that you find yourself unable to clearly understand or if you have any problems where all of the apparent solutions are strongly opposed by one group of stakeholders or another.

Economic Capital for Banking Industry

December 22, 2014

Everything you ever wanted to know but were afraid to ask.

For the last seventeen years I have hated conversations with board members around economic capital. It is perfectly acceptable to discuss Market risk, Credit risk, or interest rate mismatches in isolation, but the minute you start talking about the Enterprise, you enter a minefield.

The biggest hole in that ground is produced by correlations. The smartest board members know exactly which buttons to press to shoot your model down. They don’t do it out of malice but they won’t buy anything they can’t accept, reproduce or believe.

Attempt to explain Copulas or the future stability of historical correlations and your board presentation will head south. Don’t take my word for it; try it next time. It is not a reflection on the board, it is a simple manifestation of the disconnect that exists today between the real world of Enterprise risk and applied statistical modeling. And when it comes to banking regulation and economic capital for the banking industry, the disconnect is only growing larger.

Frustrated with the state of modeling in this space, three years ago we started working on an alternate model for economic capital. The key triggers were the shift to shortfall and probability-of-ruin models in bank regulation, as well as Taleb’s assertions about how risk results should be presented to ensure informed decision making. While the proposed model is a simple extension of the same principles on which value at risk is based, we felt that some of our tweaks and hacks delivered on our end objective: meaningful, credible conversations with the board around economic capital estimates.

Enterprise models for estimating economic capital simply extend the regulatory value at risk (VaR) model. The theory focuses on anchoring expectations. If institutional risk expectations max out at 97.5%, then 99.9% can represent unexpected risk. The appealing part of this logic is that the anchors can shift as more points become visible in the underlying risk distribution. In the simplest and crudest of forms, here is what economic capital models suggest:

“While regulatory capital models compensate for expected risk, economic capital should account for unexpected risk. The difference between the two estimates is the amount you need to put aside as economic capital.”
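
The crude rule quoted above can be sketched numerically. This is a minimal illustration on a simulated loss distribution; the parameters are ours, not from the series.

```python
# The quoted rule in code: anchor "expected" risk at the 97.5th
# percentile and "unexpected" risk at the 99.9th; economic capital is
# the gap between the two. The loss distribution is illustrative.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=4.0, sigma=0.6, size=500_000)

expected_anchor = np.quantile(losses, 0.975)    # regulatory-style anchor
unexpected_anchor = np.quantile(losses, 0.999)  # unexpected-risk anchor
economic_capital = unexpected_anchor - expected_anchor

# Because 99.9% >= 97.5% on the same distribution, the add-on is
# non-negative by construction.
print(economic_capital > 0)  # True
```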

The plus point of this approach is that it ensures that economic capital requirements always exceed regulatory capital requirements, removing the possibility of arbitrage that occurs when this condition doesn’t hold. The downside is the estimation of dependence between business lines. The variations we proposed short-circuit the correlation debate. They also recommend using accounting data: data the board has already reconciled and signed off on.

Without further ado, here is the series that presents our alternate model for estimating economic capital for the banking industry. Discuss, dissect, modify, suggest. We would love to hear your feedback.

Economic Capital – An alternate Model

Can we use the accounting data series and skip copulas and correlation modeling for business lines altogether? Take a look to find the answer.

Economic Capital Case Study – setting the context

We use publicly available data from Goldman Sachs, JP Morgan Chase, Citibank, Wells Fargo & Barclays Bank from the years 2002 to 2014 to calculate the economic capital buffers in place at these five banks. Three different approaches are used: two centered around capital adequacy, one using the regulatory Tier 1 leverage ratio.

Economic Capital Models – The appeal of using accounting data

Why does accounting data work? What is the business case for using accounting data for economic capital estimation? How does the modeling work?

Calculating Economic Capital – Using worst case losses

Our first model uses worst-case loss. If you are comfortable with value at risk terminology, this is the historical simulation approach to economic capital estimation. We label it model one.
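
As a sketch of what "model one" might look like in its simplest form; the annual P&L figures below are invented, not any bank's actual data.

```python
# Worst-case-loss sketch ("model one"): read the economic capital
# buffer straight off the worst result in an accounting data series.
# The annual P&L figures are invented, not any bank's actual data.
import numpy as np

pnl = np.array([210, 180, -340, 95, 260, -120, 310, 175, -60, 240])  # $M

worst_case_loss = -pnl.min()         # largest historical annual loss
economic_capital = worst_case_loss   # buffer to survive a repeat of the worst year
print(economic_capital)  # 340
```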

Calculating Economic Capital – Using volatility

Welcome to the variance-covariance model for economic capital estimation. The results will surprise you. Presenting model two.
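
In its crudest form, a variance-covariance calculation like "model two" might look like this; the z-score and the P&L data are illustrative assumptions, not the series' actual calibration.

```python
# Variance-covariance sketch ("model two"): economic capital as a
# normal z-score multiple of the volatility of the accounting series.
# The z-score and the P&L data are illustrative assumptions.
import numpy as np

pnl = np.array([210, 180, -340, 95, 260, -120, 310, 175, -60, 240])  # $M

z_999 = 3.09              # approx. 99.9th percentile of a standard normal
sigma = pnl.std(ddof=1)   # sample volatility of annual results
economic_capital = z_999 * sigma
print(round(float(economic_capital), 1))
```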

Calculating Economic Capital – Using Leverage ratio

We figured it was time to move from capital adequacy to leverage ratios. Introducing model three.

Too Much Risk

August 18, 2014

Risk Management is all about avoiding taking Too Much Risk.

And when it really comes down to it, there are only a few ways to get into the situation of taking too much risk.

  1. Misunderstanding the risk involved in the choices made and to be made by the organization
  2. Misunderstanding the risk appetite of the organization
  3. Misunderstanding the risk taking capacity of the organization
  4. Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity

So Risk Management needs to concentrate on preventing these four situations.  Here are some thoughts regarding how Risk Management can provide that.

1. Misunderstanding the risk involved in the choices made and to be made by an organization

This is the most common driver of Too Much Risk. There are two major forms of misunderstanding: misunderstanding the riskiness of individual choices, and misunderstanding the way that risk from each choice aggregates. Both of these drivers were strongly in evidence in the run-up to the financial crisis. The risk of each individual mortgage-backed security was not seriously investigated by most participants in the market, and the aggregation of the risk from the mortgages was misestimated as well. In both cases there was some rationalization for the misunderstanding, which became apparent to most only in hindsight. And that is most common for misunderstood risks: those later found to have made the wrong decisions about risk were most often acting on their beliefs about the risks at the time.

This problem is particularly common for firms with no history of consistently and rigorously measuring risks. Those firms usually have very experienced managers who have been selecting their risks for a long time and who may work from rules of thumb. Such firms suffer this problem most when new risks are encountered, when the environment changes and makes their experience less valid, and when there is turnover among their experienced managers. Firms that use a consistent and rigorous risk measurement process can instead suffer from model-induced risk blindness. The best approach is to combine analysis with experienced judgment.

2.  Misunderstanding the risk appetite of the organization

This is common in organizations where the risk appetite has never been spelled out. All firms have risk appetites; it is just that in many, many cases no one knows what they are in advance of a significant loss event. So misunderstanding the unstated risk appetite is fairly common. But actually, the most common problem with unstated risk appetites is underutilization of risk capacity. Because the risk appetite is unknown, some ambitious managers will push to take as much risk as possible, but the majority will be overcautious and take less risk to make sure that things are “safe”.

3.  Misunderstanding the risk taking capacity of the organization

This misunderstanding affects both companies that state their risk appetites and companies that do not. For those who do state a risk appetite, the problem arises when the company assumes it has contingent capital available but does not fully understand the contingencies. The most important contingency is the usual one regarding money: no one wants to give money to someone who really, really needs it; the preference is to give money to someone who has lots of money and is sure to repay. For those who do not state a risk appetite, each person with authority to take on risks makes their own estimate of the risk appetite based on their own estimate of the risk-taking capacity. It is likely that some will view that capacity as huge, especially in comparison to their own decision. So most often the problem is not misunderstanding the total risk-taking capacity, but rather misjudging the available risk capacity.

4.  Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity of the organization

A well-established risk management system will have solved the above problems. However, that does not mean the problems are over. In most companies there are rewards for success, in terms of current compensation and promotions, but it is usually difficult to distinguish luck from talent and good execution in a business about risk taking. So there is a great temptation for managers to deliberately ignore the risk evaluation, the risk appetite, and the risk-taking capacity of the firm. If the excess risk they take produces excess losses, the firm may suffer a large loss. But if the excess risk taking does not result in an excess loss, there may be outsized gains reported and the manager may be seen as a highly successful person who saw an opportunity that others did not. This dynamic creates a constant friction between the Risk staff and those business managers who have found the opportunity that they believe will propel their career forward.

So get to work, risk managers.

Make sure that your organization

  1. Understands the risks
  2. Articulates and understands the risk appetite
  3. Understands the aggregate and remaining risk capacity at all times
  4. Keeps careful track of risks and risk taking to be sure to stop any managers who might want to ignore the risk, the risk appetite and the risk taking capacity

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC. The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy. Well, that just will not work if you have four risks that total $400 million and three others that are rated two “very riskys” and one “not so risky”. How much capital is enough for two “very riskys”? Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” they mean to describe two approaches to developing a quantity. For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for. It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that the process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely, if ever, any data.

In broad terms, there are only a few possibilities for these subjective decisions:

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average. Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible. Phenomena that fall into this category are usually not the concern of risk analysis, precisely because they are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that involve any significant amount of human choice, contagion has been observed. And this contagion has been variable and unpredictable. Even more unfortunately, contagion has a major impact on risks at both ends of the spectrum. When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend; this process is called a “bubble”. When past history suggests an unfavorable trend, human contagion also overplays the trend, and markets for the risks crash.

The modelers who wanted to use zero-contagion models call this “Fat Tails”. It is seen as an unusual model only because it was so common to use the zero-contagion model with the simpler math.

RISKVIEWS suggests that when communicating that the modeling approach is the Moderate model, the degree of contagion assumed should be specified. An assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent. This applies, for example, to insurance losses from major earthquakes. And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion. The complex models used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

It just happens that in a Moderate model, the 1-in-1000-year loss is about 3 standard deviations worse than the mean. So if we express the 1-in-1000-year loss as a multiple of standard deviations, we can easily construct a simple scale for the riskiness of a model:

[Figure: riskiness scale expressed as multiples of standard deviation]

So in the end, the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation (where standard deviation is a measure of the average spread of the observed data). This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale. There will need to be an educational step, which can consist largely of placing existing models on the scale. People are quite used to working with the Richter scale for earthquakes; this is nothing more than a similar scale for risks. But in addition to being descriptive and understandable, once agreed, it can be tied directly to the models, so that the models are REALLY working from broadly agreed-upon assumptions.
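
The scale idea can be sketched in code. This is a minimal illustration with hypothetical band boundaries; the post does not fix exact cut-offs.

```python
# Sketch of the proposed riskiness scale: a model's 1-in-1000 loss
# expressed as a multiple of its standard deviation. The band
# boundaries below are hypothetical, chosen only to illustrate.
import numpy as np

def riskiness(losses):
    """Multiple of standard deviation at the 1-in-1000 loss level."""
    return (np.quantile(losses, 0.999) - losses.mean()) / losses.std()

def scale_label(multiple):
    if multiple < 2.0:
        return "Benign"
    if multiple < 4.0:
        return "Moderate"      # ~3 sd, as for a normal-like model
    return "Highly risky"

rng = np.random.default_rng(4)
normal_like = rng.normal(100, 10, size=200_000)
heavy_tail = rng.lognormal(3.0, 1.2, size=200_000)

print(scale_label(riskiness(normal_like)))  # Moderate
print(scale_label(riskiness(heavy_tail)))   # Highly risky
```

Placing each existing model on the scale is then a matter of running its simulated output through `riskiness`, which is the educational step described above.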

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value. Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where, for some reason, we do not think we know enough to actually know the standard deviation. Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero. So we cannot use the multiple-of-standard-deviation method discussed above. Or, to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

You need good Risk Sense to run an insurance company

January 16, 2014

It seems to happen all too frequently.

A company experiences a bad loss and the response of management is that they were not aware that the company had such a risk exposure.

For an insurance company, that response just isn’t good enough.  And most of the companies where management has given that sort of answer were not insurers.

At an insurance company, managers all need to have a good Risk Sense.

Risk Sense is a good first order estimate of the riskiness of all of their activities. 

Some of the companies that have resisted spending the time, effort, and money to build good risk models are the companies whose management already has an excellent Risk Sense. Management does not see the return from spending all that is required to get what is usually just the second digit.

By the way, if you think that your risk model provides reliable information beyond that second digit, you need to spend more time on model validation.

To have a reliable Risk Sense, you need to have reliable risk selection and risk mitigation processes. You need to have some fundamental understanding of the risks that are out there in the areas in which you do business. You also need to be constantly vigilant about changes to the risk environment that will require you to adjust your perception of risk as well as your risk selection and mitigation practices.

Risk Sense is not at all a “gut feel” for the risk.  It is instead more of a refined heuristic.  (See Evolution of Thinking.)  The person with Risk Sense has the experience and knowledge to fairly accurately assess risk based upon the few really important facts about the risks that they need to get to a conclusion.

The company that needs a model to do basic risk assessment, i.e. that does not have executives who have a Risk Sense, can be highly fragile.  That is because risk models can be highly fragile.  Good model building actually requires plenty of risk sense.

The JP Morgan Chase experience with the “London Whale” was a case of little Risk Sense, and of staff who exploited that weakness to try to get away with excessive risk taking. They relied completely on a model to tell them how much risk they were taking. No one looked at the volume of activity and applied a simple rule of thumb to create a good first order estimate of the risk. The model they were using was either inaccurate for the actual situation they faced, or else it was itself gamed.

A risk management system does not need to work quite so hard when executives have a reliable Risk Sense.  If an executive can look at an activity report and apply their well honed risk heuristics, they can be immediately informed of whether there is an inappropriate risk build up or not.  They need control processes that will make sure that the risk per unit of activity is within regular bounds.  If they start to have approved activities that involve situations with much higher levels of risk per unit of activity, then their activity reports need to separate out the more risky activities.

Models are too fragile to be the primary guide to the level of risk.  Risk taking organizations like insurers need Risk Sense.

The biggest Risk is that the rules keep changing

December 27, 2013

RISKVIEWS played the board game Risk Legacy with the family yesterday.  We were playing for the 8th time.  This game is a version of the board game Risk where the rules are changed by the players after each time playing the game.  Most often, the winner is the person who most quickly adapts to the new rules.  Once the other players see how the rules can be exploited, they can adapt to defend against that particular strategy, but at the same time, the rules have changed again, presenting a new way to win.

This game provides a brilliant metaphor for the real world and the problems faced by business and risk managers in constantly having to adapt both to avoid losing and to find the path to winning.  The biggest risk is that the rules keep changing.  But unlike the game, where the changes are public and happen only once per game, in the real world, the changes to the rules are often hidden and can happen at any time.

Regulators are forced to follow a path very much like the Risk Legacy game of making public changes on a clear timetable, but  competitors can change their prices or their products or their distribution strategy at any time.  Customers can change their behaviors, sometimes drastically, most often gradually without notice.  Even the weather seems to change, but we are not really sure how much.

Meanwhile, risk managers have been forced into a universe of their own design with the movement towards heavy metal complex risk models.  Those models are most often based upon the premise that when it comes to risk, things will not change; that the future will be much like the past.  In fact, even inquiring about changes may be difficult, and may therefore be discouraged as a poor use of limited resources.

But risk can be thought of as the tail of the cat.  The exact path of the cat is unpredictable.  The rules for what a cat is trying to accomplish at any point in time keep changing.  Not constantly changing, but changing nonetheless without warning.  So imagine trying to model the path of the cat.  Now shift to the tail of the cat representing the risk.  The tail has a much wider and more unpredictable path than the body of the cat.

That is not to suggest that the path of the tail (the risk) is wildly unpredictable.  But keeping up with the tail requires much more than simply extrapolating the path of the cat from the recent past.  It requires keeping up with the ever changing path of the cat.  And the tail movement will often represent the possibilities for changes in the future path.

Some risk models and risk management programs are created with recognition of the likelihood that the rules will change, sometimes even between the time that the model assumptions are set and when the model results are presented.  In those programs, the models are valued for their insights into the nature of risk, but of risk as it was in the recent past, with the recognition that the risk to come will be somewhat different because the rules will change.

Delusions about Success and Failure

April 8, 2013

In his book, The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers, author Phil Rosenzweig discusses the following nine delusions about success:

1. Halo Effect: Tendency to look at a company’s overall performance and make attributions about its culture, leadership, values, and more.

2. Correlation and Causality: Two things may be correlated, but we may not know which one causes which.

3. Single Explanations: Many studies show that a particular factor leads to improved performance. But since many of these factors are highly correlated, the effect of each one is usually less than suggested.

4. Connecting the Winning Dots: If we pick a number of successful companies and search for what they have in common, we’ll never isolate the reasons for their success, because we have no way of comparing them with less successful companies.

5. Rigorous Research: If the data aren’t of good quality, the data size and research methodology don’t matter.

6. Lasting Success: Almost all high-performing companies regress over time. The promise of a blueprint for lasting success is attractive but unrealistic.

7. Absolute Performance: Company performance is relative, not absolute. A company can improve and fall further behind its rivals at the same time.

8. The Wrong End of the Stick: It may be true that successful companies often pursued highly focused strategies, but highly focused strategies do not necessarily lead to success.

9. Organizational Physics: Company performance doesn’t obey immutable laws of nature and can’t be predicted with the accuracy of science – despite our desire for certainty and order.


A good risk manager will notice that each of these delusions has a flip side that applies to risk analysis and risk management.

a.  Bad results <> Bad Culture – there are many possible reasons for poor results.  Culture is one possible reason for bad results, but far from the only one.

b.  Causation and Correlation – actually this one need not be flipped.  Correlation is the most misunderstood statistic.  Risk managers would do well to study and understand what valuable and reliable uses there are for correlation calculations.  They are very likely to find few.

c.  Single explanations – are sometimes completely wrong (see b. above); they can be the most important of several causes, the correct and only reason for a loss, or a correct but secondary reason.  Scapegoating is a process of identifying a single explanation and quickly moving on, often without much effort to determine which of the four possibilities above applies to the scapegoat.  Scapegoats are sometimes chosen to make the loss event appear non-repeatable, therefore requiring no further remedial action.

d.  Barn door solutions – looking backwards and finding the activities that seemed to lead to the worst losses at the companies that failed can provide valuable insights or it can lead to barn door solutions that fix past problems but have no impact on future situations.

e.  Data Quality – same exact issue applies to loss analysis.  GIGO

f.  Regression to the mean – may be how you describe what happens to great performing companies, but for most firms, entropy is the force that they need to be worried about.  A firm does not need to sport excellent performance to experience deteriorating results.

g.  Concentration risk – should be what a risk manager sees when strategy is too highly concentrated.

h.  Uncertainty prevails – precision does not automatically come from expensive and complicated models.

Spreadsheets are not the problem

February 18, 2013

The media have latched on to a story.

Microsoft’s Excel Might Be The Most Dangerous Software On The Planet

The culprit in the 2012 JP Morgan trading loss has been exposed.  Spreadsheets are to blame!

The only problem with this answer is that it is simply incorrect.  It blames the bad result on the last step in the process, like announcers for a football game who blame the last play for the outcome.  It really wasn’t the failed last-ditch scoring effort that made the difference; it was how the two teams played the whole game.

And for situations like the JP Morgan trading loss, the spreadsheet was one of the last steps in the process.

But the fundamental problem was that they were allowing someone in the bank to take very large risks that no one could understand directly.  Risks for which no one had a rule of thumb to tell them that they were nearing a situation where, on any bad day, they could lose billions.

That is pretty fundamental to a risk taking business: to understand your risks.  And if you have no idea whatsoever of how much risk you are taking without running the position through a model, then you are in big trouble.

That does not mean that models shouldn’t be used to evaluate risk.  The problem is the need to use a model in the heat of battle, when there is no time to check for the kinds of mistakes that tripped up JP Morgan.  The models should be used in advance of going to market and rules of thumb, or heuristics for those who like the academic labels, need to be developed.

The model should be a tool for building understanding of the business, not as a substitute for understanding the business.

Humans have developed very powerful skills to work with heuristics over tens of thousands of years.  Models should feed into that capability, not be used to totally override it.

Chances are that the traders at JP Morgan did have heuristics for the risk and knew that they were arbitraging their own risk management process.  They may not have known why their gut told them that there was more risk than the model showed, but they are likely to have known that there was more risk there.

The risk managers are the ones who most need to have those heuristics.  And management needs to set down clear rules for the situations where the risk models are later found to be in error, rules that protect the bank rather than the traders’ bonuses.

No, spreadsheets are not the problem.

The problem is the idea that you can be in a business that neither top management nor risk management has any “feel” for.

An ERM Carol

December 22, 2012

You awake with a start.  There is an eerie presence in your bedroom.  A voice says “Come with me!”

You see yourself, many years ago, starting out in your career.  With an interest in risk, you feel lucky to have landed a position in an insurance company.  You are encouraged when you hear your boss say “it’s all about risk and reward”.  But it didn’t take you long to find out that while there were daily, weekly, monthly, quarterly, annual and special reports about the rewards that the company was experiencing, there was not one single report about risk.  You confront your manager about this and he tells you that “risk isn’t something that you measure, it is in your gut.  You just know when something is risky.”  He advised that once you were more experienced, you too would be able to tell when something was risky or not.

You drift back to sleep when a second voice calls you to “Behold!”.  You see yourself a manager in an insurance company:

You are being told that risk is very important.  Your company takes risk management very seriously.  Several years ago, the company spent millions to build a state of the art Economic Capital Model.  Now, all plans and all performance are viewed in terms of the amount of risk associated with each and every activity.  And you hate the whole thing!

To you, this has become a technocratic nightmare.  Your performance is judged by a computer using an algorithm that seems to be spewing forth somewhat random values.  It seems like your promotions and bonuses are being determined by a slot machine, but a slot machine with no window to see what is happening inside.

The high priests of risk operate the model.  But they are too busy to actually explain what is going on in a manner that could help the business.

So if somehow, you are lucky enough to get to the top, that will be the last day for that complex risk model.

And you pull the covers up over your head.  This is too much like a workday.  You need your sleep.  But before long, a third voice wakes you again.   “This way…”

You are on the hot seat.  The board wants to know how the company was able to get into such a problem.  Didn’t you see that there were such enormous build ups of exposures to that risky indoor snow experience sector?  The frostbite claims were double what they were last year.  Dividends will have to be eliminated.  And we probably need to turn down the corporate air conditioners.  No longer could the offices be kept at a tolerable 31 degrees.  Next summer would be unbearable.  Your only defense is that your gut told you that there was little risk and big rewards in the indoor snow business.  But that is not how it went.  They end the meeting by letting you go.  The inglorious end to your career as a risk manager. 

You wake up shouting that it was not your fault.  And you see the light coming in the window.  You turn on the TV to find that all this happened in one night.  You get dressed and go back into the office.  You are finishing up your staff meeting and you direct your attention to your risk management staff.

“Starting today, I want you to spend more of your time making your models more transparent and their findings more actionable.  I am tired of risk being something that comes at us after the fact to tell us that something was wrong.  We need to focus on leading indicators that all of the managers can use in real time to manage the business.  You can still use that fancy model that you all so love, but I only want to hear about the model when it actually explains something about the business that I can use next quarter to do a better job of managing my risk and reward.”

And with that, we ended the meeting and all went to our holiday party.  Next year will be interesting…..

Is this just MATH that you do to make yourself feel better?

November 19, 2012

Megyn Kelly asked that of Karl Rove on Fox TV on election night about his prediction of Ohio voting.

But does most risk analysis fall into this category as well?

How many companies funded the development of economic capital models with the express aim of achieving lower capital requirements?  How many of those companies encouraged the use of “MATH that you do to make yourself feel better” – MTYDTMYFB?

Model validation is now one of the hot topics in Risk model land.  Why?  Is it because modelers stopped checking when they got the answer that was wanted, rather than working at it until they got it right?  If the latter were the case, there would be zero additional work needed to validate a model; that validation work would already be done.  MTYDTMYFB

The Use Test is quite a challenge for many.  First part of the challenge is to produce an example of a situation where they did modeling of a major risk decision before that decision was finalized.  Or are the models only brought into play after all of the decisions are made?  MTYDTMYFB

There are many other examples of MTYDTMYFB.   Many years ago when computers were relatively new and dot matrix printers were the sign of high tech, it was possible to write a program to print out a table of numbers that had been developed somewhere else.  The fact that they appeared on 11 x 14 computer paper from a dot matrix printer gave those numbers a sheen of credibility.  Some managers were willing to believe then that computers were infallible.

But in fact, computers, and math, are about as infallible as gravity and about as selective.  Gravity will be a big help if you need to get something from a higher place to a lower place.  But it will be quite a hindrance if you need to do the opposite.  Math and computers are quite good at some things, like analyzing large amounts of data and finding patterns that may or may not really exist.

Math and computers need to be used with judgement, skepticism and experience.  Especially when approaching the topic of risk.

Statistics works like gravity helping us move things downhill when we are seeking to estimate the most likely value of some uncertain event.  That is because each additional piece of data helps you home in on the average of the distribution of possibilities.  Your confidence in your prediction of the most likely value should improve.

But when you are looking at risk, you are usually looking for an estimate for extremely unlikely adverse results.  The principles of statistics are just like the effect of gravity on moving heavy things uphill.  They work against you.

Take correlation, for example.  The chart above can be easily reproduced by anyone with a spreadsheet program.  RISKVIEWS simply created two columns of random numbers, each containing 1,000 values.  The correlation of the two full series is zero to several decimal places.  The chart is created by measuring the correlation of subsets of those 1,000 values, taken 10 at a time.
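That experiment is easy to reproduce outside a spreadsheet as well; a minimal sketch with NumPy (the chart itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Two independent series of 1,000 random numbers: true correlation is zero.
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)
print(f"Full-series correlation: {np.corrcoef(x, y)[0, 1]:.4f}")

# Now measure the correlation of each consecutive 10-value subset.
subset_corrs = [
    float(np.corrcoef(x[i:i + 10], y[i:i + 10])[0, 1])
    for i in range(0, 1000, 10)
]
print(f"100 subset correlations range from {min(subset_corrs):.2f} "
      f"to {max(subset_corrs):.2f}")
```

The 100 subset correlations scatter widely around zero, routinely past ±0.5 in both directions, even though the underlying series are completely independent.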

What this shows is how easy it is to get any answer that you want.  MTYDTMYFB

The Danger of Optimization

November 21, 2011

RISKVIEWS was recently asked “How do insurers Optimize Risk and Reward?”

The response was “That is dangerous. Why do you want to know that?” You see, a guru must always answer a question with a question. And in this case, RISKVIEWS was being treated as a guru.

Optimizing risk and reward is dangerous because it is done with a model.  Not all things that use a model are dangerous.  But Optimizing is definitely dangerous.

One definition of optimizing is

“to make as perfect as possible.”

Most often, optimization means taking maximum possible advantage of the diversification effect.  You will often hear someone talking about the ability to add risk without adding capital.  Getting a free ride on risk.

There are two reasons that optimizing ends up being dangerous…

  1. The idea of adding risk without adding capital is a misunderstanding.  Adding risk always adds risk.  It may well not add to a specific measure of risk because of either size or correlation or both, but the risk is there.  The idea that adding a risk that has low correlation with the firm’s predominant risk is a free ride will sooner or later seep into the minds of the people who ultimately set the prices.  They will start to think that it is just fine to give away some or all of the risk premium, and eventually to give up most of the risk margin, because there is thought to be no added risk.  This free risk idea will also lead to taking on too much of that uncorrelated risk.  More than one insurer has looked at acquiring a large amount of the uncorrelated risk where the price for the acquisition only makes sense with a diminished risk charge.  But with the acquisition, the risk becomes a major concentration of loss potential, and suddenly the risk charge is substantial.
  2. In almost all cases, the best looking opportunities, based on the information that you are getting out of the model, are the places where the model is in error – where the model is missing one or more of the real risks.  Those opportunities will look to have unusually fat risk premiums.  To the insurer with the incorrect model, those look like extra margin.  This is exactly what happened with the super senior tranches of sub prime mortgage securities.  If you believed the standard assumption that house prices would never go down, there was no risk in the super senior tranches, but they paid 5 – 10 bps more than a risk free bond.
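The first point can be illustrated with the usual two-risk square-root aggregation formula (the capital amounts here are hypothetical): a small uncorrelated risk adds almost nothing to measured capital, but the same risk grown into a concentration carries a substantial charge.

```python
from math import sqrt

def aggregate_capital(c1, c2, rho):
    # Two-risk square-root aggregation, as used in factor-based capital formulas.
    return sqrt(c1 * c1 + 2 * rho * c1 * c2 + c2 * c2)

base = 100.0  # stand-alone capital for the firm's predominant risk

# A small uncorrelated risk looks almost free at the margin...
small_add = aggregate_capital(base, 10.0, rho=0.0) - base
print(f"Adding 10 of uncorrelated risk costs {small_add:.1f} of capital")

# ...but grown into a concentration, the same risk carries a real charge.
large_add = aggregate_capital(base, 100.0, rho=0.0) - base
print(f"Adding 100 of uncorrelated risk costs {large_add:.1f} of capital")
```

The first addition costs about 0.5 of capital, the second about 41.4, which is the seep-in danger in miniature: the marginal charge near zero never stays near zero as the position grows.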

The reliance on a model for optimization is dangerous.

That does not mean that the model is always dangerous.  The model only becomes dangerous when undue reliance is placed upon the exact accuracy of the model, without regard for model error and/or parameter uncertainty.

The proper use of the model is Risk Steering.  The model helps to determine which risks should be held steady, which risks would be good to grow (as long as the environment stays the same as what the model assumes) and which risks to reduce.

Who are we kidding?

September 14, 2011

When we say that we are “measuring” the 1/200 risk of loss of our activities?
For most risks, we do not even have one complete observation of a 200-year period.
What we have instead is an extrapolation based upon an assumption that there is a mathematical formula that relates the 1/200 year loss to something that we do have confidence in.

Let’s look at some numbers.  I am testing the idea that we might be able to know what the 1/10 loss would be if we have 50 years of observations.  Our process is to rank the 50 annual losses and look at the 5th largest (the 45th of 50, ranked from smallest).  We find that loss is $10 million.

Now if we build a model where our probability of losing $10 million or more is 10% and we run that model 100 times, we get a histogram like this:

So in this test, with an underlying probability of 10%, the frequency of 50-year periods with exactly 5 observations of losses of $10 million or larger is only 22%!

When I repeat the test with a frequency assumption of 15% or of 6.67%, I get exactly 5 observations about 10% of the time in each case.

So given 50 years of observations and 5 occurrences, it seems quite possible that the underlying likelihood is 50% higher, or one third lower, than the 10% estimate.
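The test described above can be sketched with NumPy.  With only 100 runs the estimates are noisy; a larger run settles near the exact binomial probabilities, which tell the same story:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
trials = 100_000  # far more than 100 runs, to stabilise the estimates

# For each assumed annual frequency of a $10M+ loss, simulate many
# 50-year periods and record how often exactly 5 such losses appear.
freqs = {}
for p in (0.10, 0.15, 1 / 15):
    counts = rng.binomial(n=50, p=p, size=trials)
    freqs[p] = float(np.mean(counts == 5))
    print(f"P(exactly 5 losses in 50 years | annual p = {p:.1%}) = {freqs[p]:.1%}")
```

The three frequencies come out close together (roughly 18%, 11% and 13%), so 5 observed losses in 50 years really cannot distinguish between a 10% annual frequency and one that is 50% higher or a third lower.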

Try to imagine the math of getting a 1/200 loss pick correct.  What might the confidence interval be around that number?

Who are we kidding?

The World is not the Same – After

September 12, 2011

In reality, there is no accurate way to calibrate a risk model right after a major loss event. That is because there is always a good chance that the world will change as a result of the experiences of the event.

In Japan, the rebuilding after the losses from the earthquake/tsunami will not replace what was there.  The buyers of the products that were manufactured in Japan who were disrupted by the event have all found alternatives.  And they have learned from the event to diversify their suppliers, or at least to deal with a supplier who has diversified exposure to risk.  The Japan after the event will not be the same Japan as before.

A market or an industry, a company or a people rarely go back to doing things exactly the same way after a major crisis.

They may become much more conservative about the risk that caused the crisis.  They may just move on, like New Orleans which is now less than one third its pre-Katrina size.  They may adopt many new rules and regulations like Sarbanes-Oxley or Dodd-Frank.  Or they may finally start listening to their risk managers or even hire new CROs.

If you want to have a model that includes the year after a crisis, then you will need to study past crises and the reactions to those events.  That may mean building ripple effects of the crisis into the model, not just another random year.  Because regardless of what the theories say, the world displays multi-year effects.  Events are not over simply because the model turns to another time slot.


High Risk Adjusted Returns and Risk Management – 10 Key ERM Questions from an Investor – The Answer Key (5)

July 20, 2011

Riskviews was once asked by an insurance sector equity analyst for 10 questions that they could ask company CEOs and CFOs about ERM.  Riskviews gave them 10 but they were trick questions.  Each one would take an hour to answer properly.  Not really what the analyst wanted.

Here they are:

  1. What is the firm’s risk profile?
  2. How much time does the board spend discussing risk with management each quarter?
  3. Who is responsible for risk management for the risk that has shown the largest percentage rise over the past year?
  4. What outside the box risks are of concern to management?
  5. What is driving the results that you are getting in the area with the highest risk adjusted returns?
  6. Describe a recent action taken to trim a risk position?
  7. How does management know that old risk management programs are still being followed?
  8. What were the largest positions held by the company in excess of the risk limits in the last year?
  9. Where have your risk experts disagreed with your risk models in the past year?
  10. What are the areas where you see the firm being able to achieve better risk adjusted returns over the near term and long term?

They never came back and asked for the answer key.  Here it is:

5.  In the sub prime market prior to the crisis, investors were buying AAA securities but getting a little more yield.  Since they were AAA rated, the capital required was minimal.  So the return on equity could be attractive.  Unless you held that story up to the light and freely admitted that your bank profits were bolstered by exploiting the fact that the market and the regulators had different opinions of the creditworthiness of the sub prime securities.

So, if the banks had answered honestly, they would have been saying that their profits were coming from regulatory arbitrage.  There are only three possible outcomes from this situation.  First, the market could wise up and the excess profits would disappear.  Second, the regulators could wise up and suddenly the banks would find themselves needing lots more capital.  And third, the market persists in its opinion of higher risk and it turns out the market is correct.  But since under this third option the bank is playing the regulators for fools, as the risks stay the same or grow ever larger, the banks take more and more advantage of the stupid regulators.  They pretend to their board that the bank is safe because they are holding the capital that the regulators require.  The banks take more and more risk – the compulsion to grow and grow earnings in the face of the shrinking spreads for everything with “normal” risk is an immutable imperative that requires banks to multiply their risk.

So one of the possible reasons that Risk Adjusted Return is high is that the risk adjustment is based upon regulatory requirements – not on an actual assessment of risk.  And there are three possible outcomes of playing the regulatory arb game that are unfavorable.

Another reason for higher risk adjusted returns is a competitive advantage.  Investors should be happy to hear about a competitive advantage.  They should also do their own assessment about how permanent that advantage might be.

From the point of view of assessing an ERM system, the answer to this question should reveal how seriously that management takes the idea of risk management.  High and unexpected returns are as good a signal as any of higher risk.  In fact, in the financial markets, high returns are almost always a symptom of higher risk.

Tranching Expectations

July 18, 2011

Stochastic Monte Carlo simulations (SMCS) of insurer activities have been used to create nearly continuous distribution curves of expected gains and losses.  Economic Capital models are often aggregations of these separate SMCS models to create a similar distribution of total group gains and losses.  There are several primary characteristics of the results of these models:

  • They produce a nearly infinite number of numerical results giving an incredibly rich vision of the possibilities for the results of the activities of the firm.
  • That tidal wave of numeric results is very difficult to digest and make sense of.
  • While the modelers may have developed methods for validating the models, it is extremely difficult for general management – who are supposed to be the primary users – to “validate” the models, and therefore very difficult for them to trust the models.

What would validation mean for the general managers?  It would mean that they would have confidence that the experiences that they have of the company’s risks and their expectations of future experience would be consistent with the model.

Ultimately, managers should have the same sort of reliance on the model that they have on the speedometer in their car or the clock on their wall.  The same sort of confidence that a cook might have in the thermometer on the oven.  They get that confidence not by having an expert show them a report; they get it by experience.  The speedometer tells them that they are driving within the speed limit, and they go past a police speed trap and are not stopped for speeding.  They leave for work with enough time to get there and they arrive on time.  The cook puts the cake in the oven and it does not burn and it cooks appropriately in the time expected.  Not just once, but over and over again.

The problem is trickier for the SMCS model.  The output is really a set of likelihoods.  But when that output is presented as the infinite stream of numbers, there is no intuitive way to validate it naturally against experience.  And also, when the output is presented as a single remote number, like a 1 in 200 loss, it is also nearly impossible to validate naturally, by experience.

Tranching a security means splitting up the cashflows in some particular, predetermined way.  The idea of tranching can be used to help with promoting the natural validation process for a SMCS model.  The future possibilities can be tranched into 6 or 8 natural stories.  Then the model results can be sorted into the scenarios that match up with the stories.  The model output can be characterized as predictions of likelihood for the stories.  Here are some possible stories:

  • Highly favorable results – Bonuses are maximized
  • Favorable results – Bonuses are above expected/average
  • Somewhat unfavorable results – Bonuses are paid but below average/expected
  • Unfavorable results – no bonuses are paid
  • Highly unfavorable results – Layoffs and/or executive firings
  • Critical Loss – Company has to drastically change activity – may go into runoff
  • Disaster – company insolvent – seized by the regulator

Management can participate in defining the range of results that frames each story.  Then the model can provide its prediction of the likelihood of each tranche.  Management can also provide their prediction of the likelihood of each tranche.  The stories do not have to be compensation related.  Riskviews has found that if the bonus program of a company was thoughtfully constructed, there are likely to be other stories of company experience that define the same ranges of results.
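The sorting step can be sketched as follows (the simulated results and tranche boundaries here are hypothetical stand-ins; a real economic capital model would supply the result distribution):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Stand-in for economic capital model output: 10,000 simulated annual
# results in $M, where negative numbers are losses.
results = rng.normal(loc=50, scale=120, size=10_000)

# Hypothetical tranche boundaries ($M), agreed with management in advance.
stories = [
    ("Highly favorable - max bonuses",        150,  np.inf),
    ("Favorable - above-average bonuses",      50,  150),
    ("Somewhat unfavorable - below-average",    0,   50),
    ("Unfavorable - no bonuses",             -100,    0),
    ("Highly unfavorable - layoffs",         -250, -100),
    ("Critical loss - possible runoff",      -400, -250),
    ("Disaster - insolvency",             -np.inf, -400),
]

# Sort every simulated outcome into exactly one story and report likelihoods.
likelihoods = {}
for name, lo, hi in stories:
    likelihoods[name] = float(np.mean((results > lo) & (results <= hi)))
    print(f"{name:38s} {likelihoods[name]:6.1%}")
```

Seven likelihoods that sum to one are something management can compare directly against their own predictions, which is the whole point of the tranching.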

A seriously valuable and interesting discussion might result.

Riskviews was once an executive manager for a business unit within an insurer.  The insurer’s risk model was used to produce a projection of what might happen to that business in an adverse market.  Riskviews’ response was to ask on which day of that crisis the modeler was predicting that Riskviews would go into a coma, because the business decisions that were projected would never have happened while Riskviews was conscious.

These stories can be used to promote the validation process by the managers.  At the conclusion of each period, the modelers and the managers can review the actual experience in terms of the stories.  Then they can all decide if the experience validated the model or if it provided experience that suggests recalibrating the model.

Now if this process happens once per year, then it will take a very long time for that natural validation to take place.  Probably longer than the tenure of any single management team.  And as the management team turns over, the validation process is likely going to need more time.  Therefore, it is highly recommended that this process be repeated quarterly.  And perhaps repeated for each of the sub models of the economic capital model.

The way that the cook or the driver or the commuter got to rely on their tools was by repeated experiences.  The same sort of repeated experience is needed to validate the SMCS model in the minds of the management users.

Echo Chamber Risk Models

June 12, 2011

The dilemma is a classic – in order for a risk model to be credible, it must be an Echo Chamber – it must reflect the starting prejudices of management. But to be useful – and worth the time and effort of building it – it must provide some insights that management did not have before building the model.

The first thing that may be needed is to realize that the big risk model cannot be the only tool for risk management.  The Big Risk Model, also known as the Economic Capital Model, is NOT the Swiss Army Knife of risk management.  This Echo Chamber issue is only one reason why.

It is actually a good thing that the risk model reflects the beliefs of management and therefore gets credibility.  The model can then perform the one function that it is actually suited for.  That is to facilitate the process of looking at all of the risks of the firm on the same basis and to provide information about how those risks add up to make up the total risk of the firm.

That is very, very valuable to a risk management program that strives to be Enterprise-wide in scope.  The various risks of the firm can then be compared one to another.  The aggregation of risk can be explored.

All based on the views of management about the underlying characteristics of the risks. That functionality allows a quantum leap in the ability to understand and consistently manage the risks of the firm.

Before this capability existed, each of the firm's risks was managed totally separately.  Some risks were highly restricted while others were allowed to grow in a mostly uncontrolled fashion.  With a credible risk model, management needs to face the inconsistencies embedded in the historical risk management of the firm.

Some firms look into this mirror, see their problems, and immediately make plans to rationalize their risk profile.  Others lash out at the model in shoot-the-messenger fashion.  A few will claim that they are running an ERM program, but the new information about risk will result in absolutely no change in risk profile.

It is difficult to imagine that a firm that had no clear idea of aggregate risk and the relative size of the components thereof would find absolutely nothing that needs adjustment.  Often it is a lack of political will within the firm to act upon the new risk knowledge.

For example, when major insurers started to create the economic capital models in the early part of this century, many found that their equity risk exposure was very large compared to their other risks and to their business strategy of being an insurer rather than an equity mutual fund.  Some firms used this new information to help guide a divestiture of equity risk.  Others delayed and delayed even while saying that they had too much equity risk.  Those firms were politically unable to use the new risk information to reduce the equity position of the group.  More than one major business segment had heavy equity positions and they could not choose which to tell to reduce.  They also rejected the idea of reducing exposure through hedging, perhaps because there was a belief at the top of the firm that the extra return of equities was worth the extra risk.

This situation is not at all unique to equity risk.   Other firms had the same experience with Catastrophe risks, interest rate risks and Casualty risk concentrations.

A risk model that was not an Echo Chamber model would not have been any use at all in the situations above.  The differences between management beliefs and the assumptions of a non-Echo Chamber model would result in it being left out of the discussion entirely.

Other methods, such as stress tests, can be used to bring in alternate views of the risks.

So an Echo Chamber is useful, but only if you are willing to listen to what you are saying.

Risk Assessment is always Opinion

June 10, 2011

Risk Assessment is most often done with very high tech models.

There is a cycle for risk models though.  The cycle starts with a simple model and progresses to ever more sophisticated models.  The ability to calculate risk at any time of the day or night becomes an achievable goal.

But just as the models get to be almost perfect, something often happens.  People start to doubt the model.  Then it shifts from sporadic doubt to rampant disbelief.  Then the process reaches its final stage and the model is totally ignored and abandoned.

What is the cause of that cycle?  It is caused by the fact that the process of modeling is always built on an opinion.  But as the model gets more and more sophisticated, the modelers forget the basic opinion.  They come to feel that the sophistication makes the model a machine that is capable of producing ultimate truth.

But folks who are not involved in the modeling, who are not drawn in to the process of creating greater and greater refinement to the risk assessments, will judge the model by the degree to which it helps with managing the business.  By the results of management judgements that are informed by the models.

The models will of course be perfectly fine when they deal with events that occur within one standard deviation of the mean.  Those events happen fairly frequently and there will be plenty of data to calibrate the frequency for those events.

But that is not where the real risk is located – within one standard deviation.

Real risk is most often found at least 2 standard deviations out.
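To put rough numbers on that, assume for illustration that outcomes are normally distributed (itself just one opinion about the shape of the future). Then about 68% of outcomes fall within one standard deviation and under 5% fall beyond two, so the region where the real risk lives is exactly where observations are scarcest. A quick sketch:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

within_1_sd = normal_cdf(1) - normal_cdf(-1)
beyond_2_sd = 2.0 * (1.0 - normal_cdf(2))  # both tails

print(f"within 1 sd: {within_1_sd:.1%}")  # ~68.3%
print(f"beyond 2 sd: {beyond_2_sd:.1%}")  # ~4.6%
```

A fat-tailed distribution would push even more of the risk beyond the range that the data can calibrate, which is Taleb's point below.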

Nassim Taleb has indicated that it is important to notice that the most significant risks are always out so far in the distribution that there is never enough data to properly calibrate the model.

But Taleb would only be correct if the important information about a risk is the PAST frequency.

That is not correct.  The important thing about risks is the FUTURE frequency.  The future frequency is unknowable.

But you can have an opinion about that frequency.

  1. Your opinion could be that the future will be just like the past.
  2. Your opinion could be that the future will be worse than the past.
  3. Your opinion could be that the future will be better than the past.
  4. Your opinion could be that you do not know the future.

You may form that opinion based on the opinion that seems to be implied by the market prices, or by listening to experts.

The folks with opinion 1 tend to build the models.  They can collect the data to calibrate their models.  But the idea that the future will be just like the past is simply their OPINION.  They do not know.

The folks with opinion 2 tend to try to avoid risks.  They do not need models to do that.

The folks with opinion 3 tend to take on risks that they think are overpriced from the folks with opinions 1 &amp; 2.  Models get in their way.

The folks with opinion 4 do not believe in models.

So the people who have opinion 1 look around and see that everyone who makes models believes that the future will be just like the past and they eventually come to believe that it is the TRUTH, not just an OPINION.

They come to believe that people with opinions 2,3,4 are all misguided.

But in fact, sometimes the future is riskier than the past.  Sometimes it is less risky.  And sometimes, it is just too uncertain to tell (like right now).

And sometimes the future is just like the past.  And the models work just fine.

Volume Variances and Rate Variances

June 8, 2011

There is only one reason why you might think that you really need to frequently use a complex stochastic model to measure your risks.  That would be because you do not know how risky your activities might be at any point in time.

Some risks are the type where you might not know what you got when you wrote the risk.  This happens at underwriting.

Some risks are the type that do not stay the same over time.  This could be reserve risk on long tailed coverages or any risk on any naked position that is extended over time.

Others require constant painstaking adjustment to hedging or other offsets.  Hedged positions or ALM systems fall into this category.

These are all rate variances.  The rate of risk per unit of activity is uncontrolled.

Volume variances are usually easy to see.  They are evidenced by different volumes of activities.  You might easily see that you have more insurance risk because you wrote more insurance coverages.
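The distinction can be made concrete with the standard two-way variance decomposition, treating total risk as exposure times riskiness (a sketch; the function name and the numbers are purely illustrative):

```python
def decompose_variance(vol0, rate0, vol1, rate1):
    """Split the change in total risk (volume x rate) into a
    volume variance and a rate variance."""
    total_change = vol1 * rate1 - vol0 * rate0
    volume_variance = (vol1 - vol0) * rate0  # more/less activity at the old riskiness
    rate_variance = vol1 * (rate1 - rate0)   # drift in riskiness, applied to new volume
    assert abs(total_change - (volume_variance + rate_variance)) < 1e-9
    return volume_variance, rate_variance

# E.g. premium volume grows 100 -> 120 while risk per unit drifts 0.05 -> 0.08
vv, rv = decompose_variance(100, 0.05, 120, 0.08)
print(vv, rv)  # 1.0 of added risk from volume, 3.6 from rate
```

The volume piece is visible in ordinary activity reports; it is the rate piece, the drift in riskiness per unit, that the complex model is being asked to detect.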

But uncontrolled rate variances seem to be a particularly scary situation.

It seems that the entire purpose of risk management is to reduce the degree to which there might be uncontrolled rate variances.

So the need for a complex model seems to be proof that the risk management is inadequate.

A good underwriting system should make it so that you do know the risk you are writing – whether it is higher or lower than expected.

For the risks that might change over time, if you have no plans other than to stay long, then you are using the model to tell you how much to change your plans because of a decision to write and then not further manage long tailed risks.  The existence of a model does not make that practice actual risk management.  It seems like the tail wagging the dog.  Much better to develop management options for those long tailed risks.  Has anyone done any risk reward analysis on the decision to keep the long tailed exposure, looking at the opportunity risk that you will at some time in the future need to do less profitable business because of this strategy?

For the risks that are managed via hedging and/or ALM, what is needed is a good system for making sure that the retained risk never ever exceeds the risk tolerance.  Making sure that there never is a rate variance.

The complex risk model does not seem to be a need for firms unless they suspect that they have these serious flaws in their risk management program or thinking AND they believe that they are able to control their model risk better than their actual risk.

The entire concept seems suspect.

Riskviews would suggest that if you think that your firm has uncontrolled rate variances, then you should not sleep until you get them under control.

Then you will not need a complex model.

Major Regime Change – The Debt Crisis

May 24, 2011

A regime change is a corner that you cannot see around until you get to it.  It is when many of the old assumptions no longer hold.  It is the start of a new set of patterns.  Regime changes are not necessarily bad, but they are disruptive.  Many of the things that made people and companies successful under the old regime will no longer work.  But there will be completely new things that will now work.

The current regime has lasted for over 50 years.  Over that time, debt went all in one direction – UP.  Most other financial variables went up and down over that time, but their variability was in the context of a money supply that was generally growing somewhat faster than the economy.

Increasing debt funds some of the growth that has fueled the world economies over that time.

But that was a ride that could not go on forever.  At some point the debt servicing gets to be too high in comparison to the capacity of the economy.  The economy long ago passed through the stage of hedge lending (see Financial Instability), where activities are able to afford payments on their debt as well as repayment of principal.  The economy is now in the stage of Speculative Finance, where activities are able to afford payments on the debt, but not the repayment of principal.  The efforts to pay down debt will tell us whether it is possible to reverse course on that.  If one looks ahead to the massive pensions crisis that looms in the moderate term, then you would likely judge that the economy is in Ponzi Financing land, where the economy can afford neither the debt servicing nor the repayment of principal.
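Minsky's three stages can be expressed as a simple coverage test (a sketch; the function name and the cash flow figures are invented for illustration):

```python
def minsky_stage(cash_flow, interest_due, principal_due):
    """Classify a borrower per Minsky's Financial Instability Hypothesis."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # can service interest and repay principal
    if cash_flow >= interest_due:
        return "speculative"  # can pay interest, must roll over principal
    return "ponzi"            # must borrow even to pay the interest

print(minsky_stage(120, 40, 60))  # hedge
print(minsky_stage(70, 40, 60))   # speculative
print(minsky_stage(30, 40, 60))   # ponzi
```

The post's argument is that the economy as a whole has slid down this classification, from the first branch toward the last.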

All this seems to be pointing towards a regime change regarding the level of debt and other forward obligations in society.  With that regime change, the world economy may shift to a regime of long term contraction in the amount of debt or else a sudden contraction (default) followed by a long period of massive caution and reduced lending.

Riskviews does not have a prediction for when this will happen or what other things will change when that regime change takes place.  But risk managers are urged to take into account that any models that are calibrated to historical experience may well mislead the users.  And market consistent models may also mislead for long term decision making (or is that will continue to mislead for long term decision making – how else to characterize a spot calculation) until the markets come to incorporate the impact of a regime change.

This may be felt in terms of further extension of the uncertainty that has dogged some markets since the financial crisis or in some other manner.

However it materializes, we will be living in interesting times.

Modeling Uncertainty

March 31, 2011

The message that Windows displays when you are copying a large number of files gives a good example of an uncertain environment.  That process recently took over 30 minutes, and over the course of that time the message box kept flashing completely different information about the time remaining.  Over the course of one minute in the middle of that process the readings were:

8 minutes remaining

53 minutes remaining

45 minutes remaining

3 minutes remaining

11 minutes remaining

It is not true that the answer is random.  But with the process that Microsoft has chosen to apply to the problem, the answer is certainly unknowable.  For an expected value to vary by such a range over a very short period of time – that is what I would expect a model reflecting uncertainty to look like.
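A naive remaining-time estimator that extrapolates from the instantaneous transfer rate alone behaves exactly this way: small fluctuations in the current rate swing the "expected value" wildly. A sketch (the file size and the rates are invented to produce readings of the same order as those above):

```python
def naive_eta(bytes_left, recent_rate):
    # Extrapolate remaining time from the instantaneous rate only
    return bytes_left / recent_rate

bytes_left = 3_000_000_000  # 3 GB still to copy
# Instantaneous rates observed over one minute (bytes/sec)
for rate in (6_250_000, 940_000, 1_110_000, 16_600_000, 4_550_000):
    print(f"{naive_eta(bytes_left, rate) / 60:.0f} minutes remaining")
```

A more stable estimate would smooth the rate over a longer window, but when the underlying process really is unstable, no amount of smoothing recovers a trustworthy mean.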

An uncertain situation could be one where you cannot tell the mean or the standard deviation because there does not seem to be any repeatable pattern to the experience.

Those uncertain times are when the regular model – the one with the steady mean and variance – does not seem to give any useful information.

The flip side of the uncertain times, and of the model with unsteady mean and variance that represents those times, is the person who expects that things will be unpredictable.  That person will be surprised if there is an extended period of time when experience follows a stable pattern – good, bad, or even a stable pattern centered around zero with gains and losses.  In any of those situations, the competitors of that uncertainty-expecting person will be able to use their models to run their businesses and to reap profits from what their models tell them about the world and their risks.

The uncertainty-expecting person is not likely to trust a model to give them any advice about the world.  Their model would not have cycles of predictable length.  They would not expect the world even to conform to a model with the volatile mean and variance of their expectation, because they expect that they would probably get the volatility of the mean and variance wrong.

That is just the way that they expect it will happen.   A new Black Swan every morning.

Correction, not every morning, that would be regular.  Some mornings.

Regime Change

February 18, 2011

In risk modeling, the idea of regime change is a mathematical expression.  A change from one mathematical rule to another.

But in the world, Regime Change can have a totally different meaning.  Like what is happening in Egypt.

When someone sits atop a government for 30 years, it is easy to assume that next week they will still be on top.

Until that is no longer true.

When there is a regime change, it happens because the forces that were in a stable equilibrium shift in some way so that they can no longer support a continuation of the past equilibrium.  In hindsight, it is possible to see that shift.  But the shift is often not so obvious in advance.

Again, as when the Soviet Union fell apart, the intelligence services were seemingly taken by surprise.

But is there really any difference between the two types of regime change?  Is it any easier to actually notice an impending regime change on a modeled risk than an impending political risk?

Why are we so bad at seeing around corners?

In the area of public health, it is well known that diseases follow a standard path called an S curve.  That is the path of a curve plotting the number of people infected by a disease over time.  The path has a slight upward slope at first then the slope gets much, much steeper and eventually it slows down again.

When a new disease is noticed, some observers who come upon information about the disease during that middle period during the rapid upward slope will extrapolate and predict that the disease incidence will grow to be much higher than it ever gets.

The reason for the slowdown in the rate of growth is that diseases are most often self-limiting, because people do not usually get the disease twice.  Diseases are spread by contact between a carrier and an uninfected person.  In the early stages of a disease, the people who make the most contacts with others are the most likely to become infected and themselves become carriers.  Eventually, they all lose the ability to be carriers and become immune, and the number of times that infected carriers come into contact with uninfected persons starts to drop.  Eventually, such contacts become rare.

It is relatively easy to build a model of the progression of a disease.  We know what parameters are needed.  We can easily estimate those that we cannot measure exactly and can correct our estimates as we make observations.
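As a minimal sketch of such a model, a discrete-time SIR (susceptible-infected-recovered) recursion reproduces the S curve; the parameter values here are illustrative, not calibrated to any real disease:

```python
def sir_curve(beta=0.3, gamma=0.1, s=0.99, i=0.01, days=150):
    """Discrete-time SIR model.  Returns the cumulative fraction
    ever infected at each day -- this traces the S curve."""
    curve = [1.0 - s]
    for _ in range(days):
        new_infections = beta * s * i    # contacts between carriers and uninfected
        s -= new_infections
        i += new_infections - gamma * i  # gamma: rate at which carriers become immune
        curve.append(1.0 - s)            # everyone who has ever been infected
    return curve

curve = sir_curve()
print(f"day 30: {curve[30]:.2f}, day 60: {curve[60]:.2f}, day 150: {curve[150]:.2f}")
```

The curve rises slowly, steepens, then flattens as carriers become immune. The regime change is built into the model from the start.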

We start out with a model of a disease that assumes that the disease is not permanent.

We plan for regime change.

Perhaps that is what we need for the rest of our models.  We should start out by assuming that no pattern that we observe is permanent.  That each regime carries the seeds of its own destruction.

If we start out with that assumption, we will look to build the impermanence of the regime into our models and look for the signs that will show that whatever guesses we had to make initially about the path of the next regime change can be improved.

Because when we build a model that does not include that assumption, we do not even think about what might cause the next regime change.  We do not make any preliminary guesses.  The signs that the next change is coming are totally ignored.

In the temperate zones where four very different seasons are the norm, the signs of the changes of seasons are well known and widely noticed.

The signs of the changes in regimes of risks can be well known and widely noticed as well, but only if we start out with a model that allows for regime changes.

Liquidity Risk Management for a Bank

February 9, 2011

A framework for estimating liquidity risk capital for a bank

From Jawwad Farid

Capital estimation for Liquidity Risk Management is a difficult exercise. It comes up as part of the internal liquidity risk management process as well as the internal capital adequacy assessment process (ICAAP). This post and the liquidity risk management series that can be found at the Learning Corporate Finance blog suggest a framework for ongoing discussion based on the work done by our team with a number of regional banking customers.

By definition banks take a small return on assets (1% – 1.5%) and use leverage and turnover to scale it to a 15% – 18% return on equity. When market conditions change and a bank becomes the subject of a name crisis and a subsequent liquidity run, the same process becomes the basis for a death chant for the bank.  We try to de-lever the bank by selling assets and paying down liabilities, and the process quickly turns into a fire sale driven by the speed at which word gets out about the crisis.
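The arithmetic in the first sentence is worth making explicit. A deliberately crude sketch, ignoring funding costs and using illustrative figures:

```python
def roe(return_on_assets, leverage):
    # Simplest form of the scaling: ROE = ROA x (assets / equity).
    # Real banks also net out the cost of their funding; this omits it.
    return return_on_assets * leverage

# A 1.2% return on assets levered ~13x scales to roughly 15-16% on equity
print(f"{roe(0.012, 13):.1%}")  # 15.6%
```

The same leverage that scales the return up also scales the speed at which a fire sale consumes the equity base.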

Figure 1 Increasing Cash Reserves

Reducing leverage by distressed asset sales to generate cash is one of the primary defense mechanisms used by the operating teams responsible for shoring up cash reserves. Unfortunately every slice of value lost to the distressed sale process is a slice out of the equity pool or capital base of the bank. An alternate mechanism that can protect capital is using the interbank Repurchase (Repo) contract to use liquid or acceptable assets as collateral, but that too is dependent on the availability of un-encumbered liquid securities on the balance sheet as well as the availability of counterparty limits. Both can quickly disappear in times of crisis. The last and final option is the central bank discount window, the use of which may provide temporary relief but serves as a double-edged sword by further feeding the name and reputational crisis.  While a literature review on the topic also suggests cash conservation approaches through a re-alignment of businesses and a restructuring of resources, these last two solutions assume that the bank in question would actually survive the crisis to see the end of the re-alignment and re-structuring exercise.

Liquidity Reserves: Real or a Mirage

A questionable assumption that often comes up when we review Liquidity Contingency Plans is the availability or usage of Statutory Liquidity and Cash Reserves held for our account with the Central Bank.  You can only touch those assets when your franchise and license are gone and the bank has been shut down. This means that if you want to survive the crisis with your banking license intact, there is a very good chance that the 6% core liquidity you had factored into your liquidation analysis would NOT be available to you as a going concern in times of a crisis. That liquidity layer has been reserved by the central bank as the last defense for depositor protection, and no central bank is likely to permit abuse of that layer.

Figure 2 Liquidity Risk and Liquidity Run Crisis

As the Bear Stearns case study below illustrates, the typical liquidity crisis begins with a negative event that can take many shapes and forms. The resulting coverage and publicity leads to pressure not just on the share price but also on the asset portfolio carried on the bank's balance sheet, as market players take defensive cover by selling their own inventory or make aggressive bets by short selling the securities in question. Somewhere in this entire process rating agencies finally wake up and downgrade the issuer across the board, leading to a reduction or cancellation of counterparty lines.  Even when lines are not cancelled, given the write down in value witnessed in the market, calls for margin and collateral start coming in and further feed liquidity pressures.

What triggers a name crisis that leads to the vicious cycle that can destroy the inherent value in a 90 year old franchise in less than 3 months?  Typically a name crisis is triggered by a change in market conditions that impacts a fundamental business driver for the bank. The change in market conditions triggers either a large operational loss or a series of operational losses, at times related to a correction in asset prices, at others resulting in a permanent reduction in margins and spreads.  When this is declared and becomes public knowledge, and what the bank does to restore confidence, drive what happens next. One approach used by management teams is to defer the news as much as possible by creative accounting or accounting hand waving, which simply changes the nature of the crisis from an asset price or margin related crisis to a much more serious regulatory or accounting scandal with similar end results.

Figure 3 What triggers a name crisis?

The problem however is that market players have a very well established defensive response to a name crisis after decades of bank failures. This implies that once you hit a crisis, the speed with which you generate cash, lock in a deal with a buyer and get rid of questionable assets determines how much value you will lose to the market driven liquidation process. The only failsafe here is the ability of the local regulator and lender of last resort to keep the lifeline of counterparty and interbank credit lines open.  As was observed at the peak of the crisis in North America, the UK and a number of Middle Eastern markets, this ability to keep markets open determines how low prices will go, the magnitude of the fire sale and the number of banks that actually go under.

Figure 4 Market response to a Name Crisis and the Liquidity Run cycle.

The above context provides a clear roadmap for building a framework for liquidity risk management. The ending position or the end game is a liquidity driven asset sale. A successful framework would simply jump the gun and get to the asset sale before the market does. The only reason why you would not jump the gun is if you have cash, a secured contractually bound commitment for cash, a white knight or any other acceptable buyer for your franchise and an agreement on the sale price and shareholders’ approval for that sale in place.  If you are missing any of the above, your only defense is to get to the asset sale before the market does.

The problem with the above assertion is the responsiveness of the Board of directors and the Senior executive team to the seriousness of the name crisis. The most common response by both is a combination of the following

a)     The crisis is temporary and will pass. If there is a need we will sell later.

b)    We can’t accept these fire sale prices.

c)     There must be another option. Please investigate and report back.

This happens especially when the liquidity policy process was run as a compliance checklist and did not run its full course at the board and executive management level.  If a full blown liquidity simulation had been run for the board and the senior management team, and if they had seen for themselves the consequences of speed as well as delay, such reactions would not happen. The board and the senior team must understand that illiquid assets are the equivalent of high explosives and delay in asset sale is analogous to a short fuse. When you combine the two with a name crisis you will blow up the bank irrespective of its history or the power of its franchise. When the likes of Bear, Lehman, Merrill, AIG and Morgan failed, your bank and your board are not going to see through the crisis to a different and pleasant fate.


Global Convergence of ERM Requirements in the Insurance Industry

January 27, 2011

Role of Own Risk and Solvency Assessment in Enterprise Risk Management

Insurance companies tend to look backwards to see if there was enough capital for the risks that were present then. It is important for insurance companies to be forward looking and assess whether enough capital is in place to take care of risks in the future. Though it is mandatory for insurance firms to comply with solvency standards set by regulatory authorities, what is even more important is the need for top management to be responsible for certifying solvency. Performing an Own Risk and Solvency Assessment (ORSA) is the key for the insurance industry.

  • Global Convergence of ERM Regulatory requirements with NAIC adoption of ORSA regulations
  • Importance of evaluating Enterprise Risk Management for ORSA
  • When to do an ORSA and what goes in an ORSA report?
  • Basic and Advanced ERM Practices
  • ORSA Plan for Insurers
  • Role of Technology in Risk Management

Join this MetricStream webinar

Date: Wednesday February 16, 2011
Time: 10 am EST | 4 pm CET | 3pm GMT
Duration: 1 hour

Eggs and Baskets

December 1, 2010

Andrew Carnegie once famously said

“Put all your eggs in one basket, and then watch that basket.”

It seems impossible on first thought to see that as a view consistent with risk management.  But Carnegie was phenomenally successful.  Is it possible that he did that while flouting risk management?

Garry Kasparov – World Chess Champ (22 years) put it this way…

“You have to rely on your intuition.  My intuition was wrong very few times.”

George Soros has said that he actually gets an ache in his back when the market is about to turn, indicating that he needs to abruptly change his strategy.

Soros, Kasparov, Carnegie are not your run of the mill punters.  They each had successful runs for many years.

My theory of their success is that the intuition of Kasparov actually does take into account much more than the long hard careful consideration of a middling chess master.  Carnegie and Soros also knew much more about their markets than any other person alive in their time.

While they may not have consciously been following the rules, they were actually incorporating all of the drivers of those rules into their decisions.  Most of those rules are actually “heuristics” or shortcuts that work as long as things are what they have been but are not of much use when things are changing.  In fact, those rules may be what is getting one into trouble during shifts in the world.

Risk models embody an implicit set of rules about how the markets work.  Those models fail when the market fails to conform to the rules embedded in the model.  That is when things change, when your thinking needs to transcend the heuristics.

So where does that leave the risk manager?

The insights of the ultra successful types that are cited above can be seen to refute the risk management approach, OR they can be seen as a goal for risk managers.

The basket that Carnegie was putting all of his eggs into was steel.  His insight about steel was correct, but his statement about eggs and baskets is not particularly applicable to situations less transformational than steel.  It is the logic that many applied during the dot com boom, much to their regret in 2001/2002.

The risk manager should look at statements and positions like those above as levels of understanding to strive for.  If the risk manager's work starts and remains a gigantic mass of data and risk positions without ever reaching any insights about the underlying nature of the risks that are at play, then something is missing.

Perhaps the business that the risk manager works for is one that by choice and risk tolerance insists on plodding about the middle of the pack in risk.

But the way that the risk manager can add the most value is by providing the insights about the baskets that can handle more eggs.  And by starting to have intuitions about risks that are reliable, perhaps even accompanied by unmistakable physical side effects.

Turkey Risk

November 25, 2010

On Thanksgiving Day here in the US, let us recall Nassim Taleb’s story about the turkey.  For 364 days the turkey saw no risk whatsoever, just good eats.  Then one day, the turkey became dinner.

For some risks, studying the historical record and making a model from experience just will not give useful results.

And, remembering the experience of the turkey, a purely historical basis for parameterizing risk models could get you cooked.

Happy Thanksgiving.

Risk Regimes

November 18, 2010

Lately, economists talk of three phases of the economy: boom, bust and “normal”. These could all be seen as risk regimes.  And these regimes exist for many different risks.

There is actually a fourth regime and for many financial markets we are in that regime now. I would call that regime “Uncertain”. In July, Bernanke said that the outlook for the economy was “unusually uncertain”.

So these regimes would be:

  • Boom – high drift, low vol
  • Bust – negative drift, low vol
  • Normal – moderate drift, moderate vol
  • Uncertain – unknown drift and unknown vol (both with a high degree of variability)
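The four regimes above can be sketched as parameter sets for a one-period return model (all numbers purely illustrative; in the Uncertain regime the drift and vol are themselves drawn at random each period):

```python
import random

REGIMES = {
    "boom":   dict(drift=0.10, vol=0.05),   # high drift, low vol
    "bust":   dict(drift=-0.10, vol=0.05),  # negative drift, low vol
    "normal": dict(drift=0.04, vol=0.12),   # moderate drift, moderate vol
}

def annual_return(regime, rng):
    if regime == "uncertain":
        # Unknown drift and vol: the parameters themselves are variable
        drift = rng.uniform(-0.15, 0.15)
        vol = rng.uniform(0.05, 0.40)
    else:
        drift, vol = REGIMES[regime]["drift"], REGIMES[regime]["vol"]
    return rng.gauss(drift, vol)

rng = random.Random(0)
for regime in ("boom", "bust", "normal", "uncertain"):
    sample = [annual_return(regime, rng) for _ in range(10_000)]
    print(f"{regime:9s} mean return {sum(sample) / len(sample):+.3f}")
```

A single model calibrated to a pooled history of all four regimes will get the drift and the vol wrong in every individual regime, which is the point of the closing paragraph of this post.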

So managing risk effectively requires that you know the current risk regime.

There is no generic ERM that works in all risk regimes.  And there is no risk model that is valid in all risk regimes.

Risk management is done NOW to impact your current risk positions and to manage your next period’s potential losses.

So think about four risk models, not about how to calibrate one model to incorporate experience from all four regimes.  The one model will ALWAYS be fairly wrong, at least with four different models, you have a chance to be approximately right some of the time.

A Posteriori Creation

September 29, 2010

The hunters had come back to the village empty handed after a particularly difficult day. They talked through the evening around the fire about what had happened. They needed to make sense out of their experience, so that they could go back out tomorrow and feel that they knew how the world worked well enough to risk another hunt. This day, they were able to convince themselves that what had happened was similar to another day many years ago and that it was an unusually bad day, but driven by natural forces that they could expect and plan for in the future.

Other days, they could not reconcile an unusually bad day and they attributed their experience to the wrath of one or another of their gods.

Risk managers still do the same thing.  They have given this process a fancy name, Bayesian inference.  The very bad days, we now call Black Swans instead of an act of the gods.

Where we have truly advanced is in our ability to claim that we can reverse this process.  We claim that we can create the stories in advance of the experience and thereby provide better protection.

But we fail to realize that underneath, we are still those hunters.  We tell the stories to make ourselves feel better, to feel safe enough to go back out the next day.  Once we have gone through the process of a posteriori creation of the framework, the past events fit neatly into a framework that did not really exist when those same events were in the future.

If you do not believe that, think about how many risk models have had to be significantly recalibrated in the last 10 years.

To correct for this, we need to go against 10,000 or more years of human experience.  The correction can be summed up with the line from the movie The Fly,

Be afraid.  Be very afraid.

There is another answer.  That answer is

Be smart.  Be very smart.

That is because being very afraid is not always the best, or even a very good, strategy.  Only sometimes.  So you need to become smart enough to:

  1. Know when it is really important to mistrust the models and to be very afraid.
  2. Have built up the credibility and trust so that you are not ignored.

While you are doing that, be careful with the a posteriori creations.  The better people get at explaining away the bad days, the harder it will be for you to convince them that a really bad day is at hand.

Risk Managers do not know the Future any Better than Anyone Else

September 17, 2010

Criticisms of risk managers for not anticipating some emerging future are overdone.  When a major unexpected loss happens, everyone missed it.

Risk managers do not have any special magic ball.  The future is just as dim to us as to everyone else.

Sometimes we forget that.  Our methods seem to be peering into the future.

But that is not really correct.  We are not looking into the future.  Not only do we not know the future, we do not even know the likelihood of various future possibilities, the probability distribution of the future.

That does not make our work a waste of time, however.

What we should be doing with our models is writing down clearly the view of the future upon which we base our decisions.

You see, everyone who makes a decision must have a picture of the future possibilities that they are using to weigh the possibilities and make that decision.  Most people cannot necessarily articulate that picture with any specificity.  Management teams try to make sure that they are all working with similar visions of the future so that the sum of all their decisions makes sense together.

But one of the innovations of the new risk management discipline is to provide a very detailed exposition of that picture of the future.

Unfortunately, many risk managers are caught up in the mechanics of creating the model and they fail to recognize the extreme importance of this aspect of their work.  Risk Managers need to make sure that the future that is in their model IS the future that management wants to use to base their decisions upon.  The Risk Manager needs to understand whether he/she is the leader or the follower in the process of identifying that future vision.

If the leader, then there needs to be an explicit discussion where the other top managers affirm that they agree with the future suggested by the Risk Manager.

If the follower, then the risk manager will first need to say back to the rest of management what they are hearing to make sure that they are all on the same page.  They might still want to present alternate futures, but they need to be prepared to have those visions heavily discounted in decision making.

The Risk Managers who do not understand this process go forward developing their models based upon their best vision of the future and are frustrated when management does not find their models to be very helpful.  Sometimes, the risk manager presents their models as if they DO have some special insight into the future.

My vision of the future is that such a path will not succeed.

Why All Risk Models Understate Risk?

August 10, 2010

There are three types of reasons:  mechanical, psychological and market.

Mechanical Reasons

Parameter Risk – all of the parameters of risk models are uncertain.  That fact is usually ignored.

Residual Risk – there are two parts to this one: within the range of the data and outside the range.  Within the range, the process of modeling always produces smoother results than the actual observations, which understates risk.  Outside the range, the method might overstate or understate risk, possibly by orders of magnitude.

Randomness – many of the risks that we model with random variables are not at all random.  They are causal, but we do not know how to follow the causal chain to its conclusion.  The reality of these risks will involve many more discontinuities than are usually included in our continuous risk models.
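The parameter-risk point can be illustrated with a small simulation. This is a hypothetical sketch: the normal loss model, the point estimate `mu_hat`, and its standard error `mu_se` are invented for illustration, not drawn from any real model.

```python
import random

# Hypothetical sketch: a tail quantile computed with a parameter treated as
# known vs. with the parameter itself uncertain. All numbers are invented.
random.seed(1)
N = 100_000
sigma = 10.0
mu_hat, mu_se = 100.0, 5.0  # assumed point estimate of the mean and its standard error

# Ignoring parameter risk: every draw uses the point estimate as if it were truth.
fixed = sorted(random.gauss(mu_hat, sigma) for _ in range(N))

# Acknowledging parameter risk: each draw first samples a plausible mean.
uncertain = sorted(random.gauss(random.gauss(mu_hat, mu_se), sigma)
                   for _ in range(N))

q = int(0.99 * N)  # index of the 99th-percentile loss
print(f"99th percentile, parameter treated as known: {fixed[q]:.1f}")
print(f"99th percentile, parameter uncertain:        {uncertain[q]:.1f}")
```

The second figure comes out reliably larger; treating the estimated parameter as if it were the true one understates the tail.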

Psychological

Humans are hard-wired to have a better memory of good times than of bad times.  This manifests itself in many of the biases chronicled by psychologists.

Many of those biases boil down to the fact that we all tend to see the world as we want it to be, rather than as it is.

Market

Because of the above, the market tends to underprice risk.  You often do not get paid enough for the real risk, you get paid for the risk in the model.  Those few who look past the models and come the closest to understanding the real risk will simply not play.  So the markets are dominated by folks with models that understate risk.

The place to play is identified clearly above.  Did you notice?

A Friedman Model

July 29, 2010

Friedman freed the economists with his argument that economic theories did not need to be tied to realistic assumptions:

a theory cannot be tested by comparing its “assumptions” directly with “reality.” Indeed, there is no meaningful way in which this can be done. Complete “realism” is clearly unattainable, and the question whether a theory is realistic “enough” can be settled only by seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories. Yet the belief that a theory can be tested by the realism of its assumptions independently of the accuracy of its predictions is widespread and the source of much of the perennial criticism of economic theory as unrealistic. Such criticism is largely irrelevant, and, in consequence, most attempts to reform economic theory that it has stimulated have been unsuccessful.

Milton Friedman, 1953, The Methodology of Positive Economics

Maybe Friedman fully understood the implications of what he suggested.  But it seems likely that many of the folks who used this argument to justify their models and theories took it to extremes.

You see, the issue relates to the question of how you test that the theory's predictions are realistic.  It is quite easy to imagine that a theory could make good predictions during a period when the missing or unrealistic assumptions do not matter, because they are constant or are overwhelmed by the importance of other factors.

The alternative idea, that a model should have both realistic inputs and realistic outputs, is more encouraging.  Realistic inputs are a more stringent test of the model's ability to make predictions that take into account the lumpiness of reality.  A model with unrealistic assumptions or inputs does not give that.

Friedman argued that since it is impossible for a theory (or model) to be totally realistic, realism cannot be a criterion for accepting a theory.

That is certainly an argument that cannot be logically refuted.

But he fails to mention an important consideration: all theories and models need to be re-validated.  His criterion of “seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories” can be true for some period of time and then not true under different conditions.

So users of theories and models MUST be constantly vigilant.

And they should be aware that, since their test of model validity is purely empirical, the model or theory may no longer be valid once things change that are not included in its partial reality.

So a Friedman Model is one that lacks some fundamental realism in its inputs but nonetheless yields “good enough” predictions.  Users of Friedman models should beware.

Murphy was a Risk Manager!

July 6, 2010

Perhaps you have heard the saying…

If anything can go wrong, it will.

Widely known as Murphy’s Law.  Well, you may not know it but Murphy was actually a risk manager.

The originator of Murphy’s Law was an engineer named Captain Ed Murphy.  He was responsible for safety testing for the Air Force and later for several private engineering firms.

He was a reliability engineer.  And in his mind, the statement that became known as Murphy’s Law was just his way of describing how you had to think to design stress tests.

He had just experienced the failure of a device that he had designed because of incorrect wiring.  At the time, he may have blamed the problem on the people who installed his device, but later, he came to realize that he should have anticipated the possibility of confusion over which lead to connect to what and made provision for the wiring error in his design.

His original design required that the installer have perfect knowledge of his intentions.  Instead he should have assumed that the installer would be completely ignorant of what was in his head.

Does that sound like a word of caution for the designers of risk models?

Will future operators of your risk model need to fully understand what you had intended?  Or should you anticipate that they doubtless will not?

I had that experience.  Fifteen years after I had completed a risk model for a company and in the process taken some shortcuts that made perfect sense to me, I was told that the firm was still using my model, but they suddenly noticed that it was giving very troubling signals, signals that turned out to be almost completely incorrect.

Those shortcuts had moved further and further away from the truth.  I had some realization that the model needed regular recalibration, but I had failed to make that completely clear to the people who inherited the model from me and they certainly had not thought it important to pass along my verbal instructions to the people who inherited it from them.

So remember Murphy’s Law and this little story about how Murphy came to originally say what became known as his law.  It could happen to you.

Risk Managers MUST be Humble

July 3, 2010

Once you think of it, it seems obvious.  Risk Managers need humility.

If you are dealing with any killer physical risk, there are two types of people who work close to that risk, the humble and the dead.

Being humble means that you never lose sight of the fact that RISK may at any time rise up in some new and unforeseen way and kill you or your firm.

Risk managers should read the ancient Greek story of Icarus.

Risk managers without humility will suffer the same fate.

Humility means remembering that you must do every step in the risk management process, every time.  The World Cup goalkeeper Robert Green, who let an easy shot bounce off his hands and into the goal, presumed that he did not need to consciously attend to the mundane task of catching the ball.  He could let his reflexes do that while his mind moved on to finding the perfect place to put the ball next.

But he had forgotten his primary loss prevention task and was focusing on his secondary offense advancement task.

The risk managers with humility will be ever watchful.  They will be looking for the next big unexpected risk.  They will not be out there saying how well they are managing the risks; they will be more concerned about the risks that they are unprepared for.

Risk managers who are able to say that they have done all that can be done, who have taken all reasonable precautions, who can help their firm to find the exact right level and mix of risks to optimize the risk reward of the firm are at serious risk of having the wax holding their feathers melt away and of falling to earth.

Biased Risk Decisions

June 18, 2010

The information is all there.  We have just wrapped it in so many technical terms that perhaps we forget what it is referring to.

Behavioral Finance explains exactly how people tend to make decisions without models.  They call them Biases and Heuristics.

This link is to one of my absolute favorite pages on the entire internet: LIST OF COGNITIVE BIASES.  Take a look.  See if you can find the ways that you made your last 10 major business decisions there.

Now models are the quants' way to overcome these biases.  Quants believe that they can build a model that keeps the user from falling into some of the more emotional cognitive biases, such as the anchoring effect.  With a model, for example, anchoring is avoided because the modeler very carefully gives equal weight to many data points instead of more weight to the most recent data point.
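The equal-weight point can be made concrete with a toy comparison. Everything here is invented for illustration: the loss history, the 50% anchor weight, and the function names are assumptions, not anything from the behavioral finance literature.

```python
# Toy comparison (all numbers invented): a model's equal-weight estimate
# vs. an "anchored" judgment that puts half its weight on the latest point.

def equal_weight(observations):
    """The modeler's estimate: every data point counts the same."""
    return sum(observations) / len(observations)

def anchored(observations, anchor_weight=0.5):
    """An anchored judgment: heavy weight on the most recent observation."""
    earlier = observations[:-1]
    return (anchor_weight * observations[-1]
            + (1 - anchor_weight) * equal_weight(earlier))

history = [4.0, 5.0, 6.0, 5.0, 12.0]  # invented losses; the latest is a spike
print(equal_weight(history))  # 6.4
print(anchored(history))      # 8.5 -- the recent spike dominates the estimate
```

The model's discipline of weighting all the data equally is exactly what protects it from the anchoring that drives the second estimate.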

But what the quants fail to recognize is that models strengthen some of the biases.  For example, models and modelers often fall under the Clustering Illusion, finding patterns in, and attributing statistical distributions to, data recording a phenomenon that has just finished one phase and is about to move on to another.

Models promote the hindsight bias.  No matter how surprising an event is at the time, within a few years, the data recording the impact of the event is incorporated into the data sets and the modelers henceforth give the impression that the model is now calibrated to consider just such an event.

And in the end, the model is often no more than a complicated version of the biases of the modeler, an example of the Confirmation Bias, where the modeler has constructed a model that confirms their going-in world view rather than representing the actual world.

So that is the trade-off, between biased decisions with a model and biased decisions without a model.  What is a non-modeling manager to do?

I would suggest that they go to that Wikipedia page on biases and learn about their own biases, and also sit down with that list with their modeler and get the modeler to reveal their biases as well.

Fortunately or unfortunately, things in most financial firms are very complicated.  It is almost impossible to get it right balancing all of the moving parts that make up the entirety of most firms without the help of a model.  But if the decision maker understands their own biases as well as the biases of the model, perhaps they can avoid more of them.

Finally, Jos Berkemeijer asks what a modeler must know if they are also the decision maker.  I would suggest that such a person desperately needs to understand their own biases.  They can get a little insight into this from traditional peer review.  But I would suggest even more than that: they need to review the wiki list of biases with their peer reviewer and hope that the peer reviewer feels secure enough to be honest with them.

Not Complex Enough

June 10, 2010

Things changed and the models did not adapt.  But I am saying that is mostly because the models had no place to put the information.

With 20-20 hindsight, perhaps the models would have been better if, instead of boiling everyone in one pot, you separated folks into 5 or 10 pots.  Put the flippers into a separate pot.  Put the doctors into another pot.  (Did folks really believe that the no-doc mortgages represented 10 times as many doctors as previously?)  What about the no-doc loans to contractors?  Wasn't there a double risk there?  Put the people with LTV > 100% in another pot.  Then model your 20% drop in prices.
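The "pots" idea can be sketched numerically. The cohorts, balances, loan-to-values, and stressed default rates below are all invented for illustration; the point is only that a blended pool can hide the cohorts where the stress actually bites.

```python
# Hypothetical sketch of the "separate pots" idea: the same 20% price drop
# stressed on segmented cohorts vs. one blended pool. All figures invented.

# (balance, loan-to-value, default rate under stress) per cohort
pots = {
    "prime":    (500.0, 0.70, 0.02),
    "no-doc":   (200.0, 0.90, 0.15),
    "flippers": (100.0, 0.95, 0.30),
    "ltv>100%": ( 50.0, 1.05, 0.50),
}
drop = 0.20  # 20% house-price decline

def stress_loss(balance, ltv, default_rate, drop):
    """Loss = defaulted balance times the share not covered by stressed collateral."""
    collateral = balance / ltv * (1 - drop)
    severity = max(0.0, balance - collateral) / balance
    return balance * default_rate * severity

segmented = sum(stress_loss(b, l, d, drop) for b, l, d in pots.values())

# One-pot version: balance-weighted average LTV and default rate.
total = sum(b for b, _, _ in pots.values())
avg_ltv = sum(b * l for b, l, _ in pots.values()) / total
avg_pd = sum(b * d for b, _, d in pots.values()) / total
blended = stress_loss(total, avg_ltv, avg_pd, drop)

print(f"segmented stress loss: {segmented:.1f}")
print(f"blended stress loss:   {blended:.1f}")
```

With these invented numbers, the blended pool's average collateral still covers the average loan after the 20% drop, so the one-pot model reports no stress loss at all, while the segmented version shows losses concentrated in the riskiest pots.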

And there was also no model of what the real estate market would do if there were 500,000 more houses than buyers.  Or any attempt to understand whether there were too many houses or not.

And the whole financial modeling framework has never had the ability to reflect the spirals that happen.

The models are just not complex enough for the world we live in.

Many are taught to look at a picture like the view above of the situation in Afghanistan and immediately demand that the picture be simplified, to conclude immediately that if we draw a picture that complicated, it MUST be because we do not really understand the situation.  However, complexity like the above may be a sign that the situation is really being understood and that the model might just be complex enough to work as things change.

The idea that we will change the world so that the models work is tragically wrong-headed.  But that is exactly the thinking behind most of the attempts at “reforming” the financial markets.  The thinking is that our models accurately describe the world when it is “normal”, and that when our models are wrong it is because the world is “abnormal”.  So the conclusion is that we should be trying to keep the world in the normal range.

But the way that our models always fail is when the world makes a change, a non-linearity in the terminology of modelers.  The oft-used analogy is the non-linearity that ruined the businesses of the buggy whip manufacturers.  They had a great model of demand for their product that showed how there was more demand every spring, so they put on extra shifts in the winter and rolled out the new models every April.

Then one April, the bottom fell out of their market.  That was because not only did those pesky horseless carriages cut into their business, but the very folks who bought the cars were the people who had always been sure sales for new buggy whips each and every year: that early adopter set who just had to have the latest model of buggy whip.

So we must recognize that these troubling times when the models do not work are frequently because the world is fundamentally changing and the models were simply not complex enough to capture the non-linearities.

Holding Sufficient Capital

May 23, 2010

From Jean-Pierre Berliet

The companies that withstood the crisis and are now poised for continuing success have been disciplined about holding sufficient capital. However, the issue of how much capital an insurance company should hold beyond requirements set by regulators or rating agencies is contentious.

Many insurance executives hold the view that a company with a reputation for using capital productively on behalf of shareholders would be able to raise additional capital rapidly and efficiently, as needed to execute its business strategy. According to this view, a company would be able to hold just as much “solvency” capital as it needs to protect itself over a one year horizon from risks associated with the run off of in-force policies plus one year of new business. In this framework, the capital need is calculated to enable a company to pay off all its liabilities, at a specified confidence level, at the end of the one year period of stress, under the assumption that assets and liabilities are sold into the market at then prevailing “good prices”. If more capital were needed than is held, the company would raise it in the capital market.
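The one-year confidence-level calculation described above can be sketched as a simple quantile of a simulated loss distribution. The lognormal severity, the profit offset, and the 99.5% level (the Solvency II convention) are illustrative assumptions, not Berliet's numbers.

```python
import random

# Illustrative sketch of one-year "solvency" capital: hold enough capital to
# absorb the one-year loss at a chosen confidence level. The loss
# distribution below is invented for the example.
random.seed(42)

def solvency_capital(loss_draws, confidence=0.995):
    """Capital = the simulated loss quantile at the given confidence level."""
    ordered = sorted(loss_draws)
    idx = int(confidence * len(ordered)) - 1
    return ordered[idx]

# One year of simulated net losses: lognormal severity less expected profit.
draws = [random.lognormvariate(3.0, 0.8) - 25.0 for _ in range(50_000)]
print(f"99.5% one-year capital: {solvency_capital(draws):.1f}")
```

The going-concern objection is precisely that the distribution fed into such a calculation assumes assets and liabilities can be traded at "good times" prices at the end of the year.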

Executives with a “going concern” perspective do not agree. They observe first that solvency capital requirements increase with the length of the planning horizon. Then, they correctly point out that, during a crisis, prices at which assets and liabilities can be sold will not be “good times” prices upon which the “solvency” approach is predicated. Asset prices are likely to be lower, perhaps substantially, while liability prices will be higher. As a result, they believe that the “solvency” approach, such as the Solvency II framework adopted by European regulators, understates both the need for and the cost of capital. In addition, these executives remember that, during crises, capital can become too onerous or unavailable in the capital market. They conclude that, under a going concern assumption, a company should hold more capital, as an insurance policy against many risks to its survival that are ignored under a solvency framework.

The recent meltdown of debt markets made it impossible for many banks and insurance companies to shore up their capital positions. It prompted federal authorities to rescue AIG, Fannie Mae and Freddie Mac. The “going concern” view appears to have been vindicated.

Directors and CEOs have a fiduciary obligation to ensure that their companies hold an amount of capital that is appropriate in relation to risks assumed and to their business plan. Determining just how much capital to hold, however, is fraught with difficulties, because changes in capital held have complex impacts about which reasonable people can disagree. For example, increasing capital reduces solvency concerns and strengthens a company's ratings, while also reducing financial leverage and the rate of return on capital that is being earned; and conversely.

Since Directors and CEOs have an obligation to act prudently, they need to review the process and analyses used to make capital strategy decisions, including:

  • Economic capital projections, in relation to risks assumed under a going concern assumption, with consideration of strategic risks and potential systemic shocks, to ensure company survival through a collapse of financial markets during which capital cannot be raised or becomes exceedingly onerous
  • Management of relationships with leading investors and financial analysts
  • Development of reinsurance capacity, as a source of “off balance sheet” capital
  • Management of relationships with leading rating agencies and regulators
  • Development of “contingent” capital capacity.

The integration of risk, capital and business strategy is very important to success. Directors and CEOs cannot let actuaries and finance professionals dictate how this is to happen, because they and the risk models they use have been shown to have important blind spots. In their deliberations, Directors and CEOs need to remember that models cannot reflect credibly the impact of strategic risks. Models are bound to “miss the point” because they cannot reflect surprises that occur outside the boundaries of the closed business systems to which they apply.

©Jean-Pierre Berliet   Berliet Associates, LLC (203) 972-0256  jpberliet@att.net

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report has been posted to the IAA website this week.

It raises a point that must be fairly obvious to everyone that you just cannot manage risks without looking at them from multiple angles.

Or at least it should now be obvious. Here are 8 different angles on risk that are discussed in the report and my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   –  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there were a significant number of subprime mortgages where the gross mortgage payments were higher than the gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  –  Some firms did all of their analysis on an economic basis and kept saying that they were fine even as their reported financials showed them dying.  They should have known in advance the risk that the accounting would differ from their analysis.
  3. REGULATORY MEASURE OF RISK  –  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right”, you need to know how others, including your regulator, will be seeing things.  Better to have a discussion with the regulator long before a problem arises.  You are just not as credible, in the middle of what seems to the regulator to be a crisis, saying that the regulatory view is off target.
  4. SHORT TERM VS. LONG TERM RISKS  –  While it is really nice that everyone has agreed to focus on a one-year view of risks, for situations that may well extend beyond one year, it can be vitally important to know how the risk might impact the firm over a multi-year period.
  5. KNOWN RISKS AND EMERGING RISKS  –  The fact that your risk model did not include anything for volcano risk is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  –  While an agreement on a 1-in-200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  –  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  –  This is another way of saying to focus on the economic vs the accounting.

Read the report to get the more measured and complete view prepared by the 15 actuaries from US, UK, Australia and China who participated in the working group to prepare the report.

Comprehensive Actuarial Risk Evaluation

Will History Repeat?

May 10, 2010

In the 1980s, a dozen or more firms in the US and Canadian Life Insurance sector created and used what were commonly called required surplus systems.  Dale Hagstrom wrote a paper, published in 1981, titled Insurance Company Growth.  That paper described the process that many firms used of calculating what Dale called Augmented Book Profits.  An Augmented Book Profit later came to be called Distributable Earnings in insurance company valuations.  If you download that paper, you will see on page 40 my comments on Dale's work, where I state that my employer was using the method described by Dale.

In 1980, in the first work to which I was able to affix my newly minted MAAA, I documented the research into the risks of Penn Mutual Life Insurance Company that resulted in the recommendation of the Required Surplus, what we would now call the economic capital of the firm.  By the time that Dale's paper was published in 1981, I had documented a small book of memos that described how the company would use a capital budgeting process to look at the capital utilized by each line of business and each product.  I was the scribe; the ideas came mostly from the Corporate Actuary, Henry B. Ramsey.  We created a risk- and profit-adjusted new business report that allowed us to show that with each new product innovation, our agents immediately shifted sales into the most capital-intensive or least profitable product.  It also showed that more and more capital was being used by the line with the most volatile short-term profitability.  Eventually, the insights about risk and return caused a shift in product design and pricing that resulted in a much more efficient use of capital.

Each year throughout the 1980s, we improved upon the risk model, refining the methods of calculating each risk.  Whenever the company took on a new risk, a committee was formed to develop the new required surplus calculation for that risk.

In the middle of the decade, one firm, Lincoln National, published the exact required surplus calculation process used by their firm in the actuarial literature.

By the early 1990’s, the rating agencies and regulators all had their own capital requirements built along the same lines.

AND THEN IT HAPPENED.

Companies quickly stopped allocating resources to the development and enhancement of their own capital models.  By the mid-1990’s, most had fully adopted the rating agency or regulatory models in the place of their own internal models.

When a new risk came around, everyone looked into how the standard models would treat the new risk.  It was common to find that the leading writers of a new risk were taking the approach that if the rating agency and regulatory capital models did not assess any capital to the new risk, then there was NO RISK TO THE FIRM.

Companies wrote more and more of risks such as the guaranteed minimum benefits for variable annuities and did not assess any risk capital to those risks.  It took the losses of 2001/2002 for firms to recognize that there really was risk there.

Things are moving rapidly in the direction of a repeat of that same exact mistake.  With the regulators and rating agencies more and more dictating the calculations for internal capital models and prescribing the ERM programs that are needed, things are headed towards the creation of a risk management regime that focuses primarily on the management of regulatory and rating agency perception of risk management, and away from the actual management of risks.

This is not what anyone in the risk management community wants.  But once the regulatory and rating agency visions of economic capital and ERM systems are fully defined, the push will start to limit activity in risk evaluation and risk management to just what is in those visions – away from the true evaluation of and management of the real risks of the firm.

It will be clear that it is more expensive to pursue the elusive and ever changing “true risk” than to satisfy the fixed and closed ended requirements that anyone can read.  Budgets will be slashed and people reassigned.

Will History Repeat?

Dangerous Words

April 27, 2010

One of the causes of the Financial Crisis that is sometimes cited is an inappropriate reliance on complex financial models.  In our defense, risk managers have often said that users did not take the time to understand the models that they relied upon.

And I have said that in some sense, blaming the bad decisions on the models is like a driver who gets lost blaming it on the car.

But we risk managers and risk modelers do need to be careful with the words that we use.  Some of the most common risk management terminology is guilty of being totally misleading to someone who has no risk management training – who simply relies upon their understanding of English.

One of the fundamental steps of risk management is to MEASURE RISK.

I would suggest that this very common term is potentially misleading and risk managers should consider using it less.

In common usage, you could say that you measure a distance between two points or measure the weight of an object.  Measurement usually refers to something completely objective.

However, when we “measure” risk, it is not at all objective.  That is because Risk is actually about the future.  We cannot measure the future.  Or any specific aspect of the future.

While I can measure my height and weight today, I cannot now measure what it will be tomorrow.  I can predict what it might be tomorrow.  I might be pretty certain of a fairly tight range of values, but that does not make my prediction into a measurement.

So by the very words we use to describe what we are doing, we seek to instill a degree of certainty and reliability that is impossible and unwarranted.  We do that perhaps as mathematicians who are used to starting a problem by defining terms.  So we start our work by defining our calculation as a “measurement” of risk.

However, non-mathematicians are not so used to defining A = B at the start of the day and then remembering thereafter that whenever they hear someone refer to A, that they really mean B.

We also may have defined our work as “measuring risk” to instill in it enough confidence from the users that they would actually pay attention to the work and rely upon it.  In which case we are not quite as innocent as we might claim on the over-reliance front.

It might be difficult to retreat now, however.  Try telling management that you do not now, nor have you ever, measured risk.  And see what happens to your budget.

Skating Away on the Thin Ice of the New Day

April 23, 2010

The title of an old Jethro Tull song.  It sounds like the theme song for the economy today!

Now we all know.  The correlations that we used for our risk models were not reliable in the one instance where we really wanted an answer.

In times of stress, correlations go to one.

That has finally, after only four or five examples with the exact same result, become accepted wisdom.
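The effect is easy to quantify in a toy setting. Assuming two equal normally distributed risks (an illustration I am supplying, not a claim about any real portfolio), the capital saved by pooling them shrinks to nothing as their correlation approaches one:

```python
import math

# Toy illustration: the capital benefit of pooling two equal normal risks.
# Standalone capital is 2*sigma; pooled capital is sigma*sqrt(2 + 2*rho).

def diversification_benefit(sigma, rho):
    """Capital saved by pooling two equal risks with correlation rho."""
    pooled = sigma * math.sqrt(2 + 2 * rho)
    return 2 * sigma - pooled

for rho in (0.0, 0.5, 0.9, 1.0):
    print(f"rho = {rho:.1f}: benefit = {diversification_benefit(100.0, rho):6.2f}")
```

At rho = 1 the benefit is exactly zero, which is why correlations that drift toward one in stress leave a "diversified" portfolio holding the same risk as a concentrated one.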

But does that mean that Diversification is dead as a strategy?

I would argue that it certainly puts a hurt on diversification as a strategy for finding risk-free returns, which is how it was being (mis)used in the subprime markets.

But Diversification should still reign as the king of risk management strategies.  It needs to be real diversification, though.  Not tiny diversification that is observable only under a mathematical microscope.  Real Diversification is where risks have completely different drivers.  Not slightly different statistical histories.

So in Uncertain Times, and these days must be labeled Uncertain Times (or the thin ice age), diversification is the best risk management strategy.  Along with its mirror image twin, avoidance of concentrations.
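The difference between real and merely statistical diversification shows up directly in the standard square-root capital aggregation formula: as the correlation between two risks approaches one, the diversification benefit vanishes. A minimal sketch (the 100-unit standalone capital figures are hypothetical, chosen only for illustration):

```python
import math

def aggregate_capital(c1, c2, rho):
    """Standard two-risk aggregation: sqrt(c1^2 + c2^2 + 2*rho*c1*c2)."""
    return math.sqrt(c1 ** 2 + c2 ** 2 + 2 * rho * c1 * c2)

standalone = 100 + 100  # simple sum: no diversification credit at all
for rho in (0.0, 0.5, 1.0):
    combined = aggregate_capital(100, 100, rho)
    print(f"rho={rho:.1f}: combined={combined:6.1f}, benefit={standalone - combined:5.1f}")
```

At rho = 0 the combined requirement is about 141 and the benefit is nearly 59 units; at rho = 1 ("in times of stress") the combined requirement is the full 200 and the benefit is exactly zero. A diversification strategy priced off calm-period correlations is exposed to precisely this collapse.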

The banks had given up on diversification as a risk strategy.  Instead they believed that they were making risk free returns by taking lots and lots of concentrated risk that they were either fully hedging or moving the risk off their balance sheets very quickly.

Both ideas failed.  Hedging failed when the counterparty was Lehman Brothers.  It succeeded when the counterparty was any of the other institutions that were bailed out, but there was an extended period of severe uncertainty about that before the bailouts were finally put into place.  Moving the risks off the balance sheet failed in two ways.  First it failed because they were really playing hot potato without admitting it.  When the music stopped, someone was holding the potato.  And some banks were holding many potatoes.  It also failed because some banks had been offloading the risks to hedge funds and other investors to whom they were lending the funds to finance the purchase.  When the CDOs soured, the loans secured by the CDOs were underwater and the CDOs came back onto the bank balance sheets.

The banks that were hurt the least were the banks who were not so very concentrated in just one major risk.

The cost of the simple diversification strategy is that those banks with real diversification showed lower returns during the build up of the bubble.

So that is the risk-reward trade-off of real diversification: it will often produce lower returns than mathematical diversification, but it will also show lower losses in proportion to total revenue than a strategy that concentrates in the most profitable risk choices according to a model that is tuned to the accounting or performance bonus system.

Diversification is the risk management strategy for the Thin Ice Age.

LIVE from the ERM Symposium

April 17, 2010

(Well not quite LIVE, but almost)

The ERM Symposium is now 8 years old.  Here are some ideas from the 2010 ERM Symposium…

  • Survivor Bias creates support for bad risk models.  If a model underestimates risk there are two possible outcomes – good and bad.  If bad, then you fix the model or stop doing the activity.  If the outcome is good, then you do more and more of the activity until the result is bad.  This suggests that model validation is much more important than just a simple-minded tick-the-box exercise.  It is a life and death matter.
  • BIG is BAD!  Well maybe.  Big means large political power.  Big will mean that the political power will fight for parochial interests of the Big entity over the interests of the entire firm or system.  Safer to not have your firm dominated by a single business, distributor, product, region.  Safer to not have your financial system dominated by a handful of banks.
  • The world is not linear.  You cannot project the macro effects directly from the micro effects.
  • Due Diligence for mergers is often left until the very last minute and given an extremely tight time frame.  That will not change, so more due diligence needs to be a part of the target pre-selection process.
  • For merger of mature businesses, cultural fit is most important.
  • For newer businesses, retention of key employees is key
  • Modelitis = running the model until you get the desired answer
  • Most people when asked about future emerging risks, respond with the most recent problem – prior knowledge blindness
  • Regulators are sitting and waiting for a housing market recovery to resolve problems that are hidden by accounting in hundreds of banks.
  • Why do we think that any bank will do a good job of creating a living will?  What is their motivation?
  • We will always have some regulatory arbitrage.
  • Left to their own devices, banks have proven that they do not have a survival instinct.  (I have to admit that I have never, ever believed for a minute that any bank CEO has ever thought for even one second about the idea that their bank might be bailed out by the government.  They simply do not believe that they will fail. )
  • Economics has been dominated by a religious belief in the mantra “markets good – government bad”
  • Non-financial businesses are opposed to putting OTC derivatives on exchanges because exchanges will only accept cash collateral.  If they are hedging physical asset prices, why shouldn’t those same physical assets be good collateral?  Or are they really arguing to be allowed to do speculative trading without posting collateral? Probably more of the latter.
  • It was said that systemic problems come from risk concentrations.  Not always.  They can come from losses and lack of proper disclosure.  When folks see some losses and do not know who is hiding more losses, they stop doing business with everyone.  None do enough disclosure and that confirms the suspicion that everyone is impaired.
  • Systemic risk management plans needs to recognize that this is like forest fires.  If they prevent the small fires then the fires that eventually do happen will be much larger and more dangerous.  And someday, there will be another fire.
  • Sometimes a small change in the input to a complex system will unpredictably result in a large change in the output.  The financial markets are complex systems.  The idea that the market participants will ever correctly anticipate such discontinuities is complete nonsense.  So markets will always be efficient, except when they are drastically wrong.
  • Conflicting interests for risk managers who also wear other hats is a major issue for risk management in smaller companies.
  • People with bad risk models will drive people with good risk models out of the market.
  • Inelastic supply and inelastic demand for oil is the reason why prices are so volatile.
  • It was easy to sell the idea of starting an ERM system in 2008 & 2009.  But will firms who need that much evidence of the need for risk management forget why they approved it when things get better?
  • If risk function is constantly finding large unmanaged risks, then something is seriously wrong with the firm.
  • You do not want to ever have to say that you were aware of a risk that later became a large loss but never told the board about it.  Whether or not you have a risk management program.

The Use Test – A Simple Suggestion

February 23, 2010

Many are concerned about what the “Use Test” will be. Will it be a pop quiz or will companies be allowed to study?

Well, I have a suggestion for a simple and, I believe, fairly foolproof test. That would be for top management (not risk management or modeling staff) to be able to hold a conversation about their risk profile each year.

Now the first time that they can demonstrate that would not be the “Use Test”. It would be the second or third time that would constitute the test.

The conversation would be simple. It would involve explaining the risk profile of the firm – why the insurer is taking each of the major risks, what they expect to get out of that risk exposure, and how they are making sure that the potential losses that they experience are not worse than represented by their risk model. This discussion should include recognition of gross risk before offsets as well as net retained risk.

After the first time, the discussion would include an explanation of the reasons for the changes in the risk profile – did the profile change because the world shifted or did it change due to a deliberate decision on the part of management to take more or less or to retain more or less of a risk.

Finally a third part of the discussion would be to identify the experience of the past year in terms of its likelihood as predicted by the model and the degree to which that experience caused the firm to recalibrate its view of each risk.

To pass the test, management would merely need to have a complete story that is largely consistent from year to year.

Those who fail the test would be making large changes to their model calibration and their story from year to year – stretching to make it look like the model information was a part of management decisions.

Some firms that might have passed before the crisis, but should have failed, were firms that in successive years told the same story of good intentions with no action on reducing outsized risks.

For firms who are really using their models, there will be no preparation time needed for this test. Their story for this test will be the story of their firm’s financial management.

Ideally, I would suggest that the test be held publicly at an investor call.

Making Sense of Immanent Failure

February 2, 2010

In the recent paper from the Said School, “Beyond the Financial Crisis”, the authors use the phrase “inability to make sense of immanent failure” to describe one of the aspects that led up to the financial crisis.

That matches up well with Jared Diamond’s ideas about Why Civilizations Fail.

And perfectly describes the otherwise baffling Chuck Prince quote about dancing.

I imagine that it is a problem that is more common with people who believe that they have really done their homework.  They have looked under every rock and they do not see the rock falling out of the sky.  It is not that they are failures.  In most times their extreme diligence will pay off handsomely.  There is just one sort of time period when they will not benefit appropriately from their careful work.

That is when there is a REGIME CHANGE.  Also called a SURPRISE.  All of the tried and true signals are green.  But the intersection is uncharacteristically clogged.

A major task for risk managers is to look for those regime changes – those times when the risk models no longer fit – and at that point to CHANGE MODELS.  That is different from recalibrating the same old model.  That means applying Bayesian thinking not just to the parameters of the model but to the model selection as well.
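Applying that thinking to model selection, rather than to parameters alone, can be made concrete with posterior weights over a discrete set of candidate models. The sketch below is only illustrative: the “calm” and “stressed” normal models and all the data are made up, and a real implementation would use richer model families.

```python
import math

def log_lik(xs, mu, sigma):
    """Log-likelihood of the data under a normal(mu, sigma) model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def model_weights(xs, models, priors):
    """Posterior P(model | data) over a discrete model set: prior * likelihood."""
    logs = [math.log(p) + log_lik(xs, mu, s)
            for (mu, s), p in zip(models, priors)]
    m = max(logs)                      # subtract max for numerical stability
    ws = [math.exp(l - m) for l in logs]
    total = sum(ws)
    return [w / total for w in ws]

calm, stressed = (0.0, 1.0), (0.0, 4.0)   # two hypothetical regimes
quiet_data = [0.3, -0.5, 0.8, -0.2]
wild_data = [5.0, -6.0, 4.5, -3.8]
print(model_weights(quiet_data, [calm, stressed], [0.5, 0.5]))
print(model_weights(wild_data, [calm, stressed], [0.5, 0.5]))
```

With quiet observations the posterior weight sits almost entirely on the calm model; once the observations stop fitting, the weight shifts decisively to the stressed model. That shift, not a tweak to the calm model's parameters, is the signal to change models.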

It is not a failure when a new model must be chosen.  It is a normal and natural state of affairs.  Changing models is what I will call “Rational Adaptability”.

The reason why it will not work to simply recalibrate the old model is that the model with combined calibration for several regimes would be too broad to give appropriate guidance in different regimes.

You ride in a car on highways, a boat on water and a plane in the air.  Multi-purpose vehicles exist but they are never as efficient in any environment as the specialized vehicle.

So the risk manager needs to make sense of immanent failure and practice rational adaptability.

Get out of the car when you are wet up to the doors and get into a boat!

Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2300 hits, more than twice the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management, or at least they were not intending to talk about risk management.

The list of quotes has recently hit its 100th posting (with something more than 100 quotes, since a number of the posts have multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.  Douglas Adams
  2. “when the map and the territory don’t agree, always believe the territory” Gause and Weinberg – describing Swedish Army Training
  3. When you find yourself in a hole, stop digging. – Will Rogers
  4. “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair” Douglas Adams
  5. “A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all.”– George F. Kennan, (1954)
  6. “THERE ARE IDIOTS. Look around.” Larry Summers
  7. the only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat  Donald Van Deventer
  8. Philip K. Dick “Reality is that which, when you stop believing in it, doesn’t go away.”
  9. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.  Albert Einstein
  10. “Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand.”  Sherlock Holmes (A. Conan Doyle)
  11. The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable. Warren Buffett
  12. “A good rule of thumb is to assume that “everything matters.” Richard Thaler
  13. “The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them.”  Avinash Persaud
  14. There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.
    W Shakespeare Hamlet, scene v
  15. When Models turn on, Brains turn off  Til Schuermann

You might have other favorites.  Please let us know about them.

New Decade Resolutions

January 1, 2010

Here are New Decade Resolutions for firms to adopt who are looking to be prepared for another decade

  1. Attention to risk management by top management and the board.  The past decade has been just one continuous lesson that losses can happen from any direction. This is about the survival of the firm.  Survival must not be delegated to a middle manager.  It must be a key concern for the CEO and board.
  2. Action oriented approach to risk.  Risk reports are made to point out where and what actions are needed.  Management expects to and does act upon the information from the risk reports.
  3. Learning from own losses and from the losses of others.  After a loss, the firm should learn not just what went wrong that resulted in the loss, but how they can learn from their experience to improve their responses to future situations both similar and dissimilar.  Two different areas of a firm shouldn’t have to separately experience a problem to learn the same lesson. Competitor losses should present the exact same opportunity to improve rather than a feeling of smug superiority.
  4. Forward-looking risk assessment. Painstaking calibration of risk models to past experience is only valuable for firms that own time machines.  Risk assessment needs to be calibrated to the future. 
  5. Skeptical of common knowledge. The future will NOT be a repeat of the past.  Any risk assessment that is properly calibrated to the future is only one of many possible results.  Look back on the past decade’s experience and remember how many times risk models needed to be recalibrated.  That recalibration experience should form the basis for healthy skepticism of any and all future risk assessments.
  6. Drivers of risks will be highlighted and monitored.  Key risk indicators are not just an idea for operational risks that are difficult to measure directly.  Key risk indicators should be identified and monitored for all important risks.  They need to include leading and lagging indicators, drawing on information that is both internal and external to the firm. 
  7. Adaptable. Both risk measurement and risk management will not be designed after the famously fixed Ligne Maginot that spectacularly failed the French in 1940.  The ability needs to be developed and maintained to change focus of risk assessment and to change risk treatment methods on short notice without major cost or disruption. 
  8. Scope will be clear for risk management.  I have personally favored a split between risk of failure of the firm strategy and risk of losses within the firm strategy, with only the latter within the scope of risk management.  That means that anything that is potentially loss making except failure of sales would be in the scope of risk management. 
  9. Focus on  the largest exposures.  All of the details of execution of risk treatment will come to naught if the firm is too concentrated in any risk that starts making losses at a rate higher than expected.  That means that the largest exposures need to be examined and re-examined with a “no complacency” attitude.  There should never be a large exposure that is too safe to need attention.   Big transactions will also get the same kind of focus on risk. 

Risk Management Changed the Landscape of Risk

December 9, 2009

The use of derivatives and risk management processes to control risk was very successful in changing the risk management Landscape.

But that change has been in the same vein as the changes to forest management practices that saw us eliminating the small forest fires only to find that the only fires that we then had were the fires that were too big to control.  Those giant forest fires were out of control from the start and did more damage than 10 years of small fires.

The geography of the world from a risk management view is represented by this picture:

The ball represents the state of the world.  Taking a risk is represented by moving the ball one direction or the other.  If the ball goes over the top and falls down the sides, then that is a disaster.

So risk managers spend lots of time trying to measure the size of the valley and setting up processes and procedures so that the firm does not get up to the top of the valley onto one of the peaks, where a good stiff wind might blow the firm into the abyss.

The tools for risk management, things like derivatives with careful hedging programs now allowed firms to take almost any risk imaginable and to “fully” offset that risk.  The landscape was changed to look like this:

Managers believed that the added risk management bars could be built as high as needed so that any imagined risk could be taken.  In fact, they started to believe that the possibility of failure was not even real.  They started to think of the topology of risk looking like this:

Notice that in this map, there is almost no way to take a big enough risk to fall off the map into disaster.  So with this map of risk in mind, company managers loaded up on more and more risk.

But then we all learned that the hedges were never really perfect.  (There is no profit possible with a perfect hedge.)  And in addition, some of the hedge counterparties were firms who jumped right to the last map without bothering to build up the hedging walls.

And we also learned that there was actually a limit to how high the walls could be built.  Our skill in building walls had limits.  So it was important to have kept track of the gross amount of risk before the hedging.  Not just the small net amount of risk after the hedging.

Now we need to build a new view of risk and risk management.  A new map.  Some people have drawn their new map like this:

They are afraid to do anything.  Any move, any risk taken might just lead to disaster.

Others have given up.  They saw the old map fail and do not know if they are ever again going to trust those maps.

They have no idea where the ball will go if they take any risks.

So we risk managers need to go back to the top map again and revalidate our map of risk and start to convince others that we do know where the peaks are and how to avoid them.  We need to understand the limitations to the wall building version of risk management and help to direct our firms to stay away from the disasters.

Economic Risk Capital

December 1, 2009

Guest Post from Chitro Majumdar

Economic capital models can be complex, embodying many component parts, and it may not be immediately obvious that a complex model works satisfactorily. Moreover, a model may embody assumptions about relationships between variables or about their behaviour that may not hold in all circumstances (e.g. under periods of stress). We have developed an algorithm for Dynamic Financial Analysis (DFA) that enables the creation of a comprehensive framework to manage Enterprise Risk’s Economic Risk Capital. DFA is used in a company’s capital budgeting decision process to evaluate the launch of a new invention and predict the impact of the strategic decision on the balance sheet over the planning horizon. DFA provides a strategy for enterprise risk management in order to avoid undesirable outcomes, which could be disastrous.

“The Quants know better than anyone how their models can fail. The surest way to replicate this adversity is to trust the models blindly while taking large-scale advantage of situations where they seem to provide ERM strategies that would yield results too superior to be true”

Dynamic Financial Analysis (DFA) is the most advanced modelling process in today’s property and casualty industry, allowing us to develop financial forecasts that integrate the variability and interrelationships of the critical factors affecting our results. In a DFA model, the company’s relevant random variables are organized according to a categorization of risks, and the model generates solvency tests in which the financial position of the company is evaluated from the perspective of the customers. The central idea is to quantify, in probabilistic terms, whether the company will be able to meet its commitments in the future.  DFA supports the capital budgeting decision process of a company launching a new invention by predicting the impact of the strategic decision on the balance sheet over a horizon of a few years.

The validation of economic capital models is at a very preliminary stage. There is a wide range of validation techniques, each of which provides corroboration for (or against) only some of the desirable properties of a model. Moreover, validation techniques are powerful in some areas, such as risk sensitivity, but not in others, such as overall absolute accuracy or accuracy in the tail of the loss distribution. It is advisable that validation processes be designed alongside the development of the models rather than chronologically following the model building process. Certain industry validation practices are weak, with improvements needed in benchmarking, industry-wide exercises, back-testing, profit and loss analysis, and stress testing. For validation we adhere to the method described below.

Calculation of risk measures

In their internal use of risk measures, banks need to determine an appropriate confidence level for their economic capital models. It generally does not coincide with the 99.9% confidence level used for credit and operational risk under Pillar 1 of Basel II or with the 99% confidence level for general and specific market risk. Frequently, the link between a bank’s target rating and the choice of confidence level is interpreted as the amount of economic capital necessary to prevent the bank from eroding its capital buffer at a given confidence level. According to this view, which can be interpreted as a going concern view, capital planning is seen more as a dynamic exercise than a static one, in which banks want to hold a capital buffer “on top” of their regulatory capital and where it is the probability of eroding such a buffer (rather than all available capital) that is linked to the target rating. This would reflect the expectation (by analysts, rating agencies and the market) that the bank operates with capital that exceeds the regulatory minimum requirement.

Apart from considerations about the link to a target rating, the choice of a confidence level might differ based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of creditors, rating agencies and regulators in that they are used to determine the amount of capital required to minimise bankruptcy risk. On the other hand, lower confidence levels are used for management purposes in order to allocate capital to business lines and/or individual exposures and to identify those exposures that are critical for profit objectives in a normal business environment. Another interesting aspect of the internal use of different risk measures is that the choice of risk measure and confidence level heavily influences relative capital allocations to individual exposures or portfolios.
In short, the farther out in the tail of a loss distribution, the more relative capital gets allocated to concentrated exposures. As such, the choice of the risk measure as well as the confidence level can have a strategic impact since some portfolios might look relatively better or worse under risk-adjusted performance measures than they would based on an alternative risk measure.
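The interaction between confidence level and concentration can be seen in a toy Monte Carlo sketch (all parameters hypothetical): a concentrated book exposed to one rare large loss versus a diversified book that spreads the same total variance and the same expected jump loss across ten independent lines. Moving from 99% to 99.9% VaR, the concentrated book's capital requirement grows far more than the diversified book's.

```python
import random

def var(losses, conf):
    """Empirical value-at-risk: the conf-level quantile of simulated losses."""
    s = sorted(losses)
    return s[min(int(conf * len(s)), len(s) - 1)]

random.seed(7)
N = 100_000
JUMP_P = 0.005  # probability of a rare large loss event on a line

def line_loss(sigma, jump_size):
    loss = random.gauss(0, sigma)
    if random.random() < JUMP_P:
        loss += jump_size
    return loss

# Concentrated book: one line carrying all the volatility and one big jump risk
concentrated = [line_loss(1.0, 20.0) for _ in range(N)]
# Diversified book: ten independent lines with the same total variance and the
# same total expected jump loss, spread across the lines
diversified = [sum(line_loss(1 / 10 ** 0.5, 2.0) for _ in range(10))
               for _ in range(N)]

for conf in (0.99, 0.999):
    print(conf, round(var(concentrated, conf), 2), round(var(diversified, conf), 2))
```

At 99% the two books need roughly comparable capital, because the rare jump sits beyond the 99th percentile; at 99.9% the concentrated book's VaR is several times the diversified book's. This is the strategic impact described above: which portfolio "looks better" depends heavily on how far out in the tail the measure reaches.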


Chitro Majumdar CSO – R-square RiskLab


More details: http://www.riskreturncorp.com

You may have missed these . . .

November 22, 2009

Riskviews was dormant from April to July 2009 and restarted as a forum for discussions of risk and risk management.  You may have missed some of these posts from shortly after the restart…

Crafting Risk Policy and Processes

From Jawwad Farid

Describes different styles of Risk Policy statements and warns against creating unnecessary bottlenecks with overly restrictive policies.

A Model Defense

From Chris Mandel

Suggests that risk models are just a tool of risk managers and therefore cannot be blamed.

No Thanks, I have enough “New”

Urges thinking of a risk limit for “new” risks.

The Days After – NEVER AGAIN

Tells how firms who have survived a near death experience approach their risk management.

Whose Loss is it?

Asks about who gets what shares of losses from bad loans and suggests that shares have drifted over time and should be reconsidered.

How about a Risk Diet?

Discusses how an aggregate risk limit is better than silo risk limits.

ERM: Law of Unintended Consequences

From Neil Bodoff

Suggests that accounting changes will have unintended consequences.

Lessons from a Bull Market that Never Happened

Translates lessons learned from the 10 year bull market that was predicted 10 years ago from investors to risk managers.

Choosing the Wrong Part of the Office

From Neil Bodoff

Suggests that by seeking to be risk managers, actuaries are choosing the wrong part of the office.

Random Numbers

Some comments on how random number generators might be adapted to better reflect the variability of reality.

Reflexivity of Risk

November 19, 2009

George Soros says that financial markets are reflexive.  He means that the participants in the system influence the system.  Market prices reflect not just fundamentals, but investors’ expectations.

The same thing is true of risk systems.  This can be illustrated by a point that is frequently made by John Adams.  Seat belts are widely thought to be good safety devices.  However, Adams points out that aggregate statistics of traffic fatalities do not indicate any improvement whatsoever in safety.  He suggests that because of the real added safety from the seat belts, people drive more recklessly, counteracting the added safety with added risky behavior.

That is one of the problems that beset firms who adopted, and were very strong believers in, their sophisticated ERM systems.  Some of those firms used their ERM systems to enable them to take more and more risk.  In effect, they were using the ERM system to tell them where the edge of the cliff was and they then proceeded to drive along the extreme edge at a very fast speed.

What they did not realize was that the cliff was undercut in some places – it was not such a steady place to put all of your weight.

Stated more directly, the risk system caused a feeling of safety that encouraged more risk taking.

What was lost was the understanding of uncertainty.  Those firms were perfectly safe from risks that had happened before and perhaps from risks that were anticipated by the markets.  The highly sophisticated systems were pretty accurate at measuring those risks.  However, they were totally unprepared for the risks that were new.  Mark Twain once said that history does not repeat itself, but it rhymes.  Risk is the same, only worse.

Non-Linearities and Capacity

November 18, 2009

I bought my current house 11 years ago.  The area where it is located was then in the middle of a long drought.  There was never any rain during the summer.  Spring rains were slight and winter snow in the mountains that fed the local rivers was well below normal for a number of years in a row.  The newspapers started to print stories about the levels of the reservoirs – showing that the water was slightly lower at the end of each succeeding summer.  One year they even outlawed watering the lawns and everyone’s grass turned brown.

Then, for no reason that was ever explained, the drought ended.  Rainy days in the spring became common and one week it rained for six days straight.

Every system has a capacity.  When the capacity of a system is exceeded, there will be a breakdown of the system of some type.  The breakdown will be a non-linearity of performance of the system.

For example, the ground around my house has a capacity for absorbing and running off water.  When it rained for six days straight, that capacity was exceeded, and some of the water showed up in my basement.   The first time that happened, I was shocked and surprised.  I had lived in the house for 5 years and there had never been a hint of water in the basement. I cleaned up the effects of the water and promptly forgot about it. I put it down to a 1 in 100 year rainstorm.  In other parts of town, streets had been flooded.  It really was an unusual situation.
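The basement is a textbook piecewise non-linearity: nothing at all happens until capacity is exceeded, and then every additional unit of rain becomes a unit of damage. A toy sketch of that threshold behavior (the capacity figure is hypothetical):

```python
def basement_inflow(rainfall, absorb_capacity):
    """Below capacity the ground absorbs everything; above it, all the
    excess runs off (here, into the basement). A simple piecewise
    non-linearity: zero response, then a linear one."""
    return max(0.0, rainfall - absorb_capacity)

# Response is flat at zero right up to the threshold, then climbs linearly
for rain in (10, 40, 49, 50, 60, 90):
    print(rain, basement_inflow(rain, 50.0))
```

Years of observing rainfall between 10 and 49 tell you nothing about the damage at 60; that is why systems that have never been stressed past capacity look deceptively safe.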

Then it happened again the very next spring, this time after just three days of very, very heavy rain.  The flooding in the local area was extreme.  People were driven from their homes and the high school gymnasium was turned into a shelter for a week or two.

It appeared that we all had to recalibrate our models of rainfall possibilities.  We had to realize that the system we had for dealing with rainfall was being exceeded regularly and that these wetter springs were going to continue to exceed the system.  During the years of drought, we had built more and more in low lying areas and, in ways that we might not have understood at the time, we altered the overall capacity of the system by paving over ground that would have absorbed the water.

For me, I added a drainage system to my basement.  The following spring, I went into my basement during the heaviest rains and listened to the pump taking the water away.

I had increased the capacity of that system.  Hopefully the capacity is now higher than the amount of rain that we will experience in the next 20 years while I live here.

Financial firms have capacities.  Management generally tries to make sure that the capacity of the firm to absorb losses is not exceeded by losses during their tenure.  But just like I underestimated the amount of rain that might fall in my home town, it seems to be common that managers underestimate the severity of the losses that they might experience.

Writers of liability insurance in the US underestimated the degree to which the courts would assign blame for the use of a substance that was once thought to be largely benign but turned out to be highly dangerous.

In other cases, though, it was the system capacity that was misunderstood.  Investors misestimated the capacity of internet firms to productively absorb new cash.  Just a few years earlier, the capacity of Asian economies to absorb investors' cash had been overestimated as well.

Understanding the capacity of large sectors or entire financial systems to absorb additional money and put it to work productively is particularly difficult.  There are no rules of thumb to tell what the capacity of a system is in the first place.  Then to make it even more difficult, the addition of cash to a system changes the capacity.

Think of it this way: there is a neighborhood in a city with very few stores.  Given the income and spending of the people living there, an urban planner estimates that the area has capacity for 20 stores.  So, with the encouragement of the city government and private investors, a 20-store shopping center is built on an underused property in that neighborhood.

What happens next is that those 20 stores employ 150 people, and for most of those people the new job is a substantial increase in income.  In addition, everyone in the neighborhood saves money by not having to travel to do all of their shopping.  Some just save money, and all save time.  A few use that extra time to work longer hours, increasing their income further.  A new survey by the urban planner a year after the stores open shows that the capacity for stores in the neighborhood is now 22.

However, entrepreneurs see the success of the 20 stores and convert other properties into 10 more stores.  The capacity temporarily grows to 25, but eventually half of the now 30 stores in the neighborhood go out of business.

This sort of simple micro economic story is told every year in university classes.
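The feedback loop in the store example can be reduced to a few lines of arithmetic.  This is a hypothetical sketch, using only the numbers from the story above, to show how adding to a system changes the system's own capacity:

```python
# Sketch of the store-capacity feedback loop (numbers from the story).
capacity = 20       # planner's initial estimate of store capacity
stores = 20         # stores built in the new shopping center

# New jobs and saved travel time raise neighborhood income,
# which in turn raises capacity: adding to the system changed it.
capacity = 22       # planner's resurvey one year later

# Entrepreneurs react to the visible success and overshoot.
stores += 10        # 10 more stores open, 30 in total
capacity = 25       # the extra activity temporarily lifts capacity

# The excess cannot be sustained: half of the 30 stores fail.
survivors = stores // 2
print(capacity, stores, survivors)   # 25 30 15
```

The point of the sketch is that capacity is not a fixed number to be measured once; each addition of money or activity moves the target.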


It clearly applies to macroeconomics as well – to large systems as well as small.  Another word for these situations where system capacity is exceeded is systemic risk.  The term is misleading.  Systemic risk is not a particular type of risk, like market or credit risk.  Systemic risk is the risk that the system will become overloaded and start to behave in a severely non-linear manner.  One severe non-linear behavior is shutting down.  That is what interbank lending did in 2008.

In 2008, many knew that the capacity of the banking system had been exceeded.  They knew that because they knew that their own bank's capacity had been exceeded, and they knew that the other banks had been involved in the same sort of business.  There is a name for the risks that hit everyone in a market: systematic risks.  Systemic risks are usually systematic risks that grow so large that they exceed the capacity of the system.  The third broad category, specific risks, is not an issue – unless a firm whose specific risk exceeds its capacity is "too big to fail."  Then, suddenly, specific risk can become systemic risk.

So everyone just watched when the sub-prime systematic risk became a systemic risk to the banking sector, and watched the specific risk at AIG lead to the largest single-firm bailout in history.

Many have proposed the establishment of a systemic risk regulator.  That person would be charged with identifying growing systematic risks that could become large enough to cause systemic problems.  They would then be responsible for taking, or urging, actions intended to defuse a systematic risk before it becomes a systemic one.

A good risk manager has a systemic risk job as well.  The good risk manager needs to pay attention to exactly the same things: to watch out for systematic risks that are growing to a level that might overwhelm the capacity of the system.  The risk manager's responsibility is then to urge their firm to withdraw from holding any of that systematic risk.  Stories tell us that happened at JP Morgan and at Goldman.  Other stories tell us that it didn't happen at Bear or Lehman.

So the moral of this is that you need to watch not just your own capacity but everyone else's as well, if you do not want stories told about you.

