Archive for the ‘Assumptions’ category

Variety of Decision Making

July 20, 2022

Over the past several years, an anthropologist (Thompson), a control engineer (Beck) and an actuary (Ingram) have formed an unlikely collaboration that has resulted in countless discussions among the three of us along with several published (and posted) documents.

Our work was first planned in 2018. One further part of what was planned is still under development — the application of these ideas to economic thinking. This is previewed in document (2) below, where it is presented as Institutional Evolutionary Economics.

Here are abstracts and links to the existing documents:

  1. Model Governance and Rational Adaptability in Enterprise Risk Management, January 2020, AFIR-ERM section of the International Actuarial Association. The problem context here is what has been called the “Insurance Cycle”. In this cycle we recognize four qualitatively different risk environments, or seasons of risk. We address the use of models for supporting an insurer’s decision making for enterprise risk management (ERM) across all four seasons of the cycle. In particular, the report focuses expressly on: first, the matter of governance for dealing with model risk; and, second, model support for Rational Adaptability (RA) at the transitions among the seasons of risk. This latter examines what may happen around the turning points in the insurance cycle (any cycle, for that matter), when the risk of a model generating flawed foresight will generally be at its highest.
  2. Modeling the Variety of Decision Making, August 2021, Joint Risk Management Section. The four qualitatively different seasons of risk call for four distinctly different risk-coping decision rules. And if exercising those strategies is to be supported and informed by a model, four qualitatively different parameterizations of the model are also required. This is the variety of decision making that is being modeled. In addition, we propose and develop in this work a first blueprint for a fifth decision-making strategy, which we refer to as the adaptor. It is a strategy for assisting the process of RA in ERM and navigating adaptively through all the seasons of risk, insurance cycle after insurance cycle. What is more, the variety of everyday risk-coping decision rules and supporting models can be replaced by a single corresponding rule and model whose parameters vary (slowly) with time, as the model tracks the seasonal business and risk transitions.
  3. The Adaptor Emerges, December 2021, The Actuary Magazine, Society of Actuaries. The adaptor strategy focuses on strategic change: on the chops and changes among the seasons of risk over the longer term. The attention of actuaries coping with everyday risk is necessarily focused on the short term. When the facts change qualitatively, as indeed they did during the pandemic, mindsets, models, and customary everyday rules must be changed. Our adaptor indeed emerged during the pandemic, albeit coincidentally, since such was already implied in RA for ERM.
  4. An Adaptor Strategy for Enterprise Risk Management, April 2022, Risk Management Newsletter, Joint Risk Management Section. In our earlier work (2009-13), something called the “Surprise Game” was introduced and experimented with. In it, simulated businesses are obliged to be surprised and shaken into eventually switching their risk-coping decision strategies as the seasons of risk undergo qualitative seasonal shifts and transitions. That “eventually” can be much delayed, with poor business performance accumulating all the while. In control engineering, the logic of the Surprise Game is closely similar to something called cascade control. We show how the adaptor strategy is akin to switching the “autopilot” in the company driving seat of risk-coping, but ideally much more promptly than waiting (and waiting) for any eventual surprise to dawn on the occupant of the driving seat.
  5. An Adaptor Strategy for Enterprise Risk Management (Part 2), July 2022, Risk Management Newsletter, Joint Risk Management Section. Rather than its switching function, the priority of the adaptor strategy should really be that of nurturing the human and financial resources in the makeup of a business — so that the business can perform with resilience, season in, season out, economic cycle after economic cycle. The nurturing function can be informed and supported by an adaptor “dashboard”. For example, the dashboard can be designed to alert the adaptor to the impending loss or surfeit of personnel skilled in implementing any one of the four risk-coping strategies of RA for ERM. We cite evidence of such a dashboard from both the insurance industry and an innovation ecosystem in Linz, Austria.
  6. Adaptor Exceptionalism: Structural Change & Systems Thinking, March 2022, RISKVIEWS. Here we link Parts 1 and 2 of the Risk Management Newsletter article ((4) and (5) above). When we talk of “when the facts change, we change our mindsets”, we are essentially talking about structural change in a system, most familiarly, the economy. One way of grasping the essence of this, hence the essence of the invaluable (but elusive) systemic property of resilience, is through the control engineering device of a much simplified model of the system with a parameterization that changes relatively slowly over time — the adaptor model of document (2) above, in fact. This work begins to show how the nurturing function of the adaptor strategy is so important for the achievement of resilient business performance.
  7. Adaptor Strategy: Foresight, May 2022, RISKVIEWS. This is a postscript to the two-part Newsletter article and, indeed, its linking technical support material of document (6). It identifies a third possible component of an adaptor strategy: that of deliberately probing the uncertainties in business behaviour and its surrounding risk environment. This probing function derives directly from the principle of “dual adaptive control” — something associated with systems such as guided missiles. Heaven forbid: that such should be the outcome of a discussion between the control engineer, the actuary, and the anthropologist!

Still to be completed is the full exposition of Institutional Evolutionary Economics that is previewed in Section 1 of Modeling the Variety of Decision Making (Item 2 above).

Top 10 RISKVIEWS Posts of 2014 – ORSA Heavily Featured

December 29, 2014

RISKVIEWS believes that this may be the best top 10 list of posts in the history of this blog.  Thanks to our readers whose clicks resulted in their selection.

  • Instructions for a 17 Step ORSA Process – Own Risk and Solvency Assessment is here for Canadian insurers, coming in 2015 for US and required in Europe for 2016. At least 10 other countries have also adopted ORSA and are moving towards full implementation. This post leads you to 17 other posts that give a detailed view of the various parts to a full ORSA process and report.
  • Full Limits Stress Test – Where Solvency and ERM Meet – This post suggests a link between your ERM program and your stress tests for ORSA that is highly logical, but not generally practiced.
  • What kind of Stress Test? – Risk managers need to do a better job communicating what they are doing. Much communications about risk models and stress tests is fairly mechanical and technical. This post suggests some plain English terminology to describe the stress tests to non-technical audiences such as boards and top management.
  • How to Build and Use a Risk Register – A first RISKVIEWS post from a new regular contributor, Harry Hall. Watch for more posts along these lines from Harry in the coming months. And catch Harry on his blog, http://www.pmsouth.com
  • ORSA ==> AC – ST > RCS – You will notice a recurring theme in 2014 – ORSA. That topic has taken up much of RISKVIEWS time in 2014 and will likely take up even more in 2015 and after as more and more companies undertake their first ORSA process and report. This post is a simple explanation, which RISKVIEWS has used when explaining ORSA to a board of directors, of the question that ORSA is trying to answer.
  • The History of Risk Management – Someone asked RISKVIEWS to do a speech on the history of ERM. This post and the associated new permanent page are the notes from writing that speech. Much more here than could fit into a 15 minute talk.
  • Hierarchy Principle of Risk Management – There are thousands of risks faced by an insurer that do not belong in their ERM program. That is because of the Hierarchy Principle. Many insurers who have followed someone’s urging that ALL risks need to be included in ERM belatedly find out that no one in top management wants to hear from them or to let them talk to the board. A good dose of the Hierarchy Principle will fix that, though it will take time. Bad first impressions are difficult to fix.
  • Risk Culture, Neoclassical Economics, and Enterprise Risk Management – A discussion of the different beliefs about how business and risk work. The difference between the beliefs taught in MBA and finance programs and the beliefs about risk that underpin ERM makes it difficult to justify spending time and money on risk management.
  • What CEO’s Think about Risk – A discussion of three different aspects of decision making as practiced by top management of companies, and of how the decision-making processes that are taught to quants can make quants less effective when trying to explain their work and conclusions.
  • Decision Making Under Deep Uncertainty – Explores the concepts of Deep Uncertainty and Wicked Problems. Of interest if you have any risks that you find yourself unable to clearly understand or if you have any problems where all of the apparent solutions are strongly opposed by one group of stakeholders or another.

Free Download of Valuation and Common Sense Book

December 19, 2013

RISKVIEWS recently got the material below in an email.  This material seems quite educational and also somewhat amusing.  The authors keep pointing out how widely actual detailed practice varies from any single theory in the academic literature.

For example, the chart below plots the Required Equity Premium by publication date of each book.

[Chart: Required Equity Premium by publication date]

You get a strong impression from reading this book that all of the concepts of modern finance are extremely plastic and/or ill defined in practice. 

RISKVIEWS wonders if that is in any way related to the famous Friedman principle that economics models need not be at all realistic.  See post Friedman Model.

===========================================

Book “Valuation and Common Sense” (3rd edition).  May be downloaded for free

The book has been improved in its 3rd edition. Main changes are:

  1. Tables (with all calculations) and figures are available in excel format in: http://web.iese.edu/PabloFernandez/Book_VaCS/valuation%20CaCS.html
  2. We have added questions at the end of each chapter.
  3. 5 new chapters:

Chapter – Downloadable at:

32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

The book explains the nuances of different valuation methods and provides the reader with the tools for analyzing and valuing any business, no matter how complex. The book has 326 tables, 190 diagrams and more than 180 examples to help the reader. It also has 480 readers’ comments of previous editions.

The book has 36 chapters. Each chapter may be downloaded for free at the following links:

Chapter – Downloadable at:

     Table of contents, acknowledgments, glossary http://ssrn.com/abstract=2209089
1   Company Valuation Methods http://ssrn.com/abstract=274973
2   Cash Flow is a Fact. Net Income is Just an Opinion http://ssrn.com/abstract=330540
3   Ten Badly Explained Topics in Most Corporate Finance Books http://ssrn.com/abstract=2044576
4   Cash Flow Valuation Methods: Perpetuities, Constant Growth and General Case http://ssrn.com/abstract=743229
5   Valuation Using Multiples: How Do Analysts Reach Their Conclusions? http://ssrn.com/abstract=274972
6   Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories http://ssrn.com/abstract=256987
7   Three Residual Income Valuation Methods and Discounted Cash Flow Valuation http://ssrn.com/abstract=296945
8   WACC: Definition, Misconceptions and Errors http://ssrn.com/abstract=1620871
9   Cash Flow Discounting: Fundamental Relationships and Unnecessary Complications http://ssrn.com/abstract=2117765
10 How to Value a Seasonal Company Discounting Cash Flows http://ssrn.com/abstract=406220
11 Optimal Capital Structure: Problems with the Harvard and Damodaran Approaches http://ssrn.com/abstract=270833
12 Equity Premium: Historical, Expected, Required and Implied http://ssrn.com/abstract=933070
13 The Equity Premium in 150 Textbooks http://ssrn.com/abstract=1473225
14 Market Risk Premium Used in 82 Countries in 2012: A Survey with 7,192 Answers http://ssrn.com/abstract=2084213
15 Are Calculated Betas Good for Anything? http://ssrn.com/abstract=504565
16 Beta = 1 Does a Better Job than Calculated Betas http://ssrn.com/abstract=1406923
17 Betas Used by Professors: A Survey with 2,500 Answers http://ssrn.com/abstract=1407464
18 On the Instability of Betas: The Case of Spain http://ssrn.com/abstract=510146
19 Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2191044
20 A solution to Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2217604
21 Valuation of an Expropriated Company: The Case of YPF and Repsol in Argentina http://ssrn.com/abstract=2176728
22 1,959 valuations of the YPF shares expropriated to Repsol http://ssrn.com/abstract=2226321
23 Internet Valuations: The Case of Terra-Lycos http://ssrn.com/abstract=265608
24 Valuation of Internet-related companies http://ssrn.com/abstract=265609
25 Valuation of Brands and Intellectual Capital http://ssrn.com/abstract=270688
26 Interest rates and company valuation http://ssrn.com/abstract=2215926
27 Price to Earnings ratio, Value to Book ratio and Growth http://ssrn.com/abstract=2212373
28 Dividends and Share Repurchases http://ssrn.com/abstract=2215739
29 How Inflation destroys Value http://ssrn.com/abstract=2215796
30 Valuing Real Options: Frequently Made Errors http://ssrn.com/abstract=274855
31 119 Common Errors in Company Valuations http://ssrn.com/abstract=1025424
32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

I would very much appreciate any of your suggestions for improving the book.

Best regards,
Pablo Fernandez

Getting Paid for Risk Taking

April 15, 2013

Consideration for accepting a risk needs to be at a level that will sustain the business and produce a return that is satisfactory to investors.

Investors usually want additional return for extra risk.  This is one of the most misunderstood ideas in investing.

“In an efficient market, investors realize above-average returns only by taking above-average risks.  Risky stocks have high returns, on average, and safe stocks do not.”

Baker, M., Bradley, B., and Wurgler, J., “Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly”

But their study found that stocks in the top quintile of trailing volatility had a real return of -90%, versus a real return of 1,000% for the stocks in the bottom quintile.

But the thinking is wrong.  Excess risk does not produce excess return.  The cause and effect are wrong in the conventional wisdom.  The original statement of this principle may have been

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
Alfred Marshall, Principles of Economics (1890)

Marshall has it right.  There are only “hopes” of great gains.  There is no invisible hand that forces higher risks to return higher gains.  Some of the higher risk investment choices are simply bad choices.

An insurer’s opportunity to make “great gains” out of “risks of great losses” comes when it determines the consideration, or price, that it will require to accept a risk.  Most insurers operate in competitive markets that are not completely efficient.  Individual insurers do not usually set the price in the market, but there is a range of prices at which insurance is purchased in any time period.  Certainly the process that an insurer uses to determine the price at which a risk is acceptable is a primary determinant of the insurer’s profits.  If that price contains a sufficient load for the extreme risks that might threaten the existence of the insurer, then, over time, the insurer has the ability to hold and maintain sufficient resources to survive some large loss situations.

One common goal conflict that leads to problems with pricing is the conflict between sales and profits.  In insurance, as in many businesses, it is quite easy to increase sales by lowering prices.  In most businesses it is very difficult to keep up that strategy for very long, because lower profits or outright losses from inadequate prices show up quickly.  In insurance, the premiums are paid in advance, sometimes many years in advance of when the insurer must provide the promised insurance benefits.  If provisioning is tilted towards the point of view that supports the consideration, the pricing deficiencies will not be apparent for years.  So insurance is particularly susceptible to the tension between volume of business and margins for risk and profits, and since sales is a more fundamental need than profits, the margins often suffer.

As just mentioned, insurers simply do not know for certain what the actual cost of providing an insurance benefit will be, not with the degree of certainty that businesses in other sectors can know their cost of goods sold.  The appropriateness of pricing will often be validated in the market.  Follow-the-leader pricing can lead a herd of insurers over the cliff.  The whole sector can get pricing wrong for a time, until, sometimes years later, the benefits are collected and their true cost is known.

“A decade of short sighted price slashing led to industry losses of nearly $3 billion last year.”  Wall Street Journal June 24, 2002

Pricing can also go wrong on an individual case level.  The “Winner’s Curse” sends business to the insurer who most underimagines the riskiness of a particular risk.

There are two steps to reflecting risk in pricing.  The first step is to capture the expected loss properly.  Most of the discussion above relates to this step, and the major part of pricing risk comes from the possibility of getting that step wrong, as has already been discussed.  But the second step is to appropriately reflect all aspects of the risk that the actual losses will be different from expected.  There are many ways that such deviations can manifest.

The following is a partial listing of the risks that might be examined:

• Type A Risk – Short-Term Volatility of cash flows in 1 year
• Type B Risk – Short-Term Tail Risk of cash flows in 1 year
• Type C Risk – Uncertainty Risk (also known as parameter risk)
• Type D Risk – Inexperience Risk relative to full multiple market cycles
• Type E Risk – Correlation to a top 10
• Type F Risk – Market value volatility in 1 year
• Type G Risk – Execution Risk regarding difficulty of controlling operational losses
• Type H Risk – Long-Term Volatility of cash flows over 5 or more years
• Type J Risk – Long-Term Tail Risk of cash flows over 5 years or more
• Type K Risk – Pricing Risk (cycle risk)
• Type L Risk – Market Liquidity Risk
• Type M Risk – Instability Risk regarding the degree to which the risk parameters are stable

See “Risk and Light” or “The Law of Risk and Light”.

There are also many different ways that risk loads are specifically applied to insurance pricing.  Three examples are:

  • Capital Allocation – Capital is allocated to a product (based upon the provisioning) and the pricing then needs to reflect the cost of holding that capital.  The cost of holding capital may be calculated as the difference between the hurdle rate for the insurer and the risk free rate (after tax), applied to the allocated capital.  Some firms alternately use the difference between the hurdle rate and the investment return on the assets backing surplus (after tax).  This process assures that the pricing will support achieving the hurdle rate on the capital that the insurer needs to hold for the risks of the business.  It does not reflect any margin for the volatility in earnings that the risks assumed might create, nor does it necessarily include any recognition of parameter risk or general uncertainty.  (A small numeric sketch of this approach and the next follows this list.)
  • Provision for Adverse Deviation – Each assumption is adjusted to provide for worse experience than the mean or median loss.  The amount of stress may be at a predetermined confidence interval (Such as 65%, 80% or 90%).  Higher confidence intervals would be used for assumptions with higher degree of parameter risk.  Similarly, some companies use a multiple (or fraction) of the standard deviation of the loss distribution as the provision.  More commonly, the degree of adversity is set based upon historical provisions or upon judgement of the person setting the price.  Provision for Adverse Deviation usually does not reflect anything specific for extra risk of insolvency.
  • Risk Adjusted Profit Target – Using either or both of the above techniques, a profit target is determined and then that target is translated into a percentage of premium or of assets to make for a simple risk charge when constructing a price indication.
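
To make the first two approaches concrete, here is a minimal numeric sketch in Python. All of the figures (hurdle rate, risk-free rate, loss distribution, standard deviation multiple, allocated capital) are hypothetical illustrations, not recommendations.

```python
# Minimal sketch of two of the risk-load approaches above (illustrative numbers only).

def capital_allocation_charge(allocated_capital, hurdle_rate, risk_free_after_tax):
    """Cost of holding allocated capital:
    (hurdle rate - after-tax risk-free rate) x allocated capital."""
    return allocated_capital * (hurdle_rate - risk_free_after_tax)

def provision_for_adverse_deviation(mean_loss, loss_std_dev, std_dev_multiple=0.5):
    """Expected loss plus a margin expressed as a multiple (or fraction)
    of the standard deviation of the loss distribution."""
    return mean_loss + std_dev_multiple * loss_std_dev

# Hypothetical product: expected loss 600, loss standard deviation 150,
# allocated capital 1,000, 12% hurdle rate, 3% after-tax risk-free rate.
risk_charge = capital_allocation_charge(1000.0, hurdle_rate=0.12, risk_free_after_tax=0.03)
loaded_loss = provision_for_adverse_deviation(600.0, 150.0)

print(f"Capital-allocation risk charge: {risk_charge:.0f}")  # 90
print(f"Loss assumption with PAD:       {loaded_loss:.0f}")  # 675
```

Either figure would then be built into the price indication alongside expenses and the expected loss itself.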

The consequence of failing to recognize an aspect of risk in pricing will likely be that the firm accumulates larger than expected concentrations of business with higher amounts of that risk aspect.  See “Risk and Light” or “The Law of Risk and Light”.

To get Consideration right you need to (1) regularly get a second opinion on price adequacy, either from the market or from a reliable, experienced person; (2) constantly update your view of your risks in the light of emerging experience and market feedback; and (3) recognize that high sales volume is a possible market signal of underpricing.

This is one of the seven ERM Principles for Insurers

Embedded Assumptions are Blind Spots

October 28, 2012

Embedded assumptions are dangerous. That is because we are usually unaware of them and almost never concerned about whether they are still true.

One embedded assumption is that looking backwards, at the last year end, will get us to a conclusion about the financial strength of a financial firm.

We have always done that.  Solvency assessments are always about the past year end.

But the last year end is over.  We already know that the firm has survived that time period.  What we really need to know is whether the firm will have the resources to withstand the next period.  Instead, we assess the risks that the firm had at the last year end, without regard to whether the firm is actually still exposed to those risks, when what we really need to know is whether the firm will survive the risks that it is going to be exposed to in the future.

We also apply standards for assessing solvency that are constant.  However, the ability of a firm to take on additional risk quickly varies significantly in different markets.  In 2006, financial firms were easily able to grow their risks at a high rate.  Credit and capital were readily available and standards for the amount of actual cash or capital that a counterparty would expect a financial firm to have were particularly low.

Another embedded assumption is that we can look at risk based upon the holding period of a security or an insurance contract.  What we fail to recognize is that even if every insurance contract lasts for only a short time, an insurer who regularly renews those contracts is exposed to risk over time in almost exactly the same way as someone who writes very long term contracts.  The same holds for securities.  A firm that typically holds positions for less than 30 days seems to have very limited exposure to losses that emerge over much longer periods.  But if that firm tends to trade among similar positions and maintains a similar level of risk in a particular class of risk, then they are likely to be all in for any systematic losses from that class of risks.  They are likely to find that exiting a position once those systematic losses start is costly, difficult and maybe impossible.
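
A small simulation can illustrate the point. This is only a sketch with made-up parameters: both a buy-and-hold book and a book that rotates into a different but similar position every month carry the same exposure to the risk class's systematic factor, and both take roughly the same loss when that factor turns down.

```python
import numpy as np

# Illustrative only: rolling short-dated positions within one risk class
# accumulates the same systematic losses as holding one long-dated position.
rng = np.random.default_rng(2012)
months = 60
systematic = rng.normal(0.0, 0.03, months)   # common factor for the risk class
systematic[24:30] -= 0.10                    # a six-month systematic downturn

# Strategy A: hold a single position for the full five years.
hold = systematic + rng.normal(0.0, 0.01, months)   # plus idiosyncratic noise
# Strategy B: switch to a different, similar position every month
# (new name, same class, so the same systematic factor with fresh noise).
roll = systematic + rng.normal(0.0, 0.01, months)

print("Cumulative return, buy-and-hold :", round(np.prod(1 + hold) - 1, 3))
print("Cumulative return, monthly roll :", round(np.prod(1 + roll) - 1, 3))
# Both books show essentially the same drawdown through months 25-30, because
# the exposure to the class never actually left the balance sheet.
```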

There are embedded assumptions all over the place.  Banks have the embedded assumption that they have zero risk from their liabilities.  That works until some clever bank figures out how to create some risk there.

Insurers had the embedded assumption that variable products had no asset related risk.  That embedded assumption led insurers to load up with highly risky guarantees for those products.  Even after the 2001 dot com crash drove major losses and a couple of failures, companies still had the embedded assumption that there was no risk in the M&E fees.  They hedged away their guarantee risk and kept all of their fee risk because they had an embedded assumption that there was no risk there.  In fact, variable annuity writers faced massive DAC write-offs when the stock markets tanked.  There was a blind spot that kept them from seeing this risk.

Many commentators have mentioned the embedded assumption that real estate always rose in value.  In fact, the actual embedded assumption was that there would not be a nationwide drop in real estate values.  This was backed up by over 20 years of experience.  In fact, everyone started keeping detailed electronic records right after the last time there was an across-the-board drop in home prices.

The blind spot caused it to take longer than it should have for many to notice that prices actually were falling nationally.  Each piece of evidence was fit in and around the blind spots.

So a very important job for the risk manager is to be able to identify all of the embedded assumptions / blind spots that prevail in the firm and set up processes to continually assess whether there is a danger lurking right there – hiding in a blind spot.

You Must Abandon All Presumptions

August 5, 2011

If you really want to have Enterprise Risk Management, then you must at all times abandon all presumptions. You must make sure that all of the things needed to successfully manage risks are being done, and done now, not just at some time in the distant past.

A pilot of an aircraft will spend over an hour checking things directly and reviewing other people’s checks.  The pilot will review:

  • the route of flight
  • weather at the origin, destination, and en route
  • the mechanical status of the airplane
  • mechanical issues that may have been improperly logged
  • the items that may have been fixed just prior to the flight, to make certain that those systems work
  • the flight computer
  • the outside of the airplane for obvious defects that may have been overlooked
  • the paperwork
  • the fuel load
  • the takeoff and landing weights, to make sure that they are within limits for the flight

Most of us do not do anything like this when we get into our cars to drive.  Is this overkill?  You decide.

When you are expecting to fly somewhere and there is a last-minute delay because of something that seems like it should already have been taken care of, that is likely because the pilot found something that someone might normally PRESUME was ok but was not.

Personally, as someone who takes lots and lots of flights, RISKVIEWS thinks that this is a good process.  One that RISKVIEWS would recommend to be used by risk managers.

THE NO PRESUMPTION APPROACH TO RISK MANAGEMENT

Here are the things that the Pilot of the ERM program needs to check before taking off on each flight.

1.  Risks need to be diversified.  There is no risk management if a firm is just taking one big bet.

2.  The firm needs to be sure of the quality of the risks that it takes.  This implies that multiple ways of evaluating risks are needed to maintain quality, or to be aware of changes in quality.  There is no single source of information about quality that is adequate.

3.  A control cycle is needed regarding the amount of risk taken.  This implies measurements, appetites, limits, treatment actions, reporting, and feedback (a minimal sketch of one pass through such a cycle appears after this list).

4.  The pricing of the risks needs to be adequate, at least for risks that are traded, if you are in the risk business as insurers are.  For risks that are not traded, the benefit of taking the risk needs to exceed the cost in terms of potential losses.

5.  The firm needs to manage its portfolio of risks so that it can take advantage of the opportunities that are often associated with its risks.  This involves risk reward management.

6.   The firm needs to provision for its retained risks appropriately, in terms of set asides (reserves) for expected losses and capital for excess losses.
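
As referenced in item 3, here is a minimal sketch of one pass through a risk control cycle. The risk names, limits, and measured exposures are hypothetical; a real cycle would also feed the report back into appetite-setting and treatment decisions.

```python
# Hypothetical sketch: measure exposures, compare to limits, flag treatment, report.
RISK_LIMITS = {"equity": 100.0, "credit": 250.0, "catastrophe": 75.0}  # illustrative

def control_cycle(measured_exposures):
    """Compare this period's measured exposures to limits and flag breaches."""
    report = []
    for risk, exposure in measured_exposures.items():
        limit = RISK_LIMITS[risk]
        action = "BREACH - reduce, hedge, or escalate" if exposure > limit else "within limit"
        report.append((risk, exposure, limit, action))
    return report

measured = {"equity": 120.0, "credit": 200.0, "catastrophe": 60.0}  # this period
for risk, exposure, limit, action in control_cycle(measured):
    print(f"{risk:12s} exposure {exposure:6.1f} vs limit {limit:6.1f}: {action}")
```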

A firm ultimately needs all six of these things.  Things like a CRO, or risk committees or board involvement are not on this list because those are ways to get these six things.

The Risk Manager needs to take a NO PRESUMPTIONS approach to checking these things.  Many of the problems of the financial crisis can be traced back to presumptions that one or more of these six things were true without any attempt to verify.

Frequency vs. Likelihood

June 26, 2011

Much risk management literature talks about identifying the frequency and severity of risks.

There are several issues with this suggestion.  It is a fairly confused way of saying that there needs to be a probabilistic measure of the risk.

However, most classes of risks – things like market, credit, natural catastrophe, legal, or data security – will not have a single pair of numbers that represents them.  Instead they will have a series of pairs of probabilities and loss amounts.

The word frequency adds another confusion.  Frequency refers to observations.  It is a backwards looking approach to the risk.  What is really needed is likelihood – a forward looking probability.

For some risks, all we will ever have is an ever changing frequency.

So what do we do?  With some data in hand and a view of the underlying nature of the risk, we form a likelihood assumption.  With that assumption, we can then develop an actual gain and loss distribution that gives our best picture of the risk reward trade-offs.

For example, consider three sets of 20 observations of some phenomenon (60 observations in all).

In this example, the 1s represent the incidence of major loss experiences.  There are at least four ways that these observations might be interpreted.

  1. One analyst might say that the 60 observations average two loss events per set of 20 (a 10% frequency), so that is what they will use to project the forward likelihood of this problem.
  2. Another analyst might say that they want to be sure that they account for the worst case, so they will focus on the first set of observations and use a 15% likelihood assumption.
  3. A third analyst will focus on the trend and make a likelihood assumption below 5%.
  4. The fourth analyst will say that there is just not enough consistent information to form a reliable likelihood assumption.

Then the next 20 observations come up all zeros.  How do the four analysts update their likelihood assumptions?

In fact, this illustration was developed with random numbers generated from a binomial distribution with a 5% likelihood.

The math to determine the probability of each possible frequency observation from 20 trials with a 5% likelihood is simple:

      • 0 – 36%
      • 1 – 38%
      • 2 – 19%
      • 3 –  6%
      • 4 –  1%

To be responsible in setting your likelihood assumptions, you should be fully aware of the actual distribution of possibilities for the frequency observations that you have to work with. So the first set of observations (three loss events) had a 6% probability of occurring, the second (two events) a 19% probability, and the third (one event) a 38% probability.

That is when we know the actual likelihood.  Usually you do not.  But you can look at this sort of table for each possible likelihood assumption.

Here we actually had 60 observations.  The same sort of table can be built for the 60 trials and for different assumptions of likelihood, as in the sketch below.
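
As a minimal sketch (standard library only), the table above and a 60-trial version can be reproduced with the binomial probability formula. The candidate likelihoods below are just illustrative choices.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k loss events in n independent trials,
    each with likelihood p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The 20-trial table above, for an assumed 5% likelihood.
for k in range(5):
    print(f"{k} events in 20 trials: {binom_pmf(k, 20, 0.05):.0%}")

# The same sort of table for all 60 observations, across several candidate
# likelihood assumptions (the example implies six events in 60 in total).
for p in (0.02, 0.05, 0.10, 0.15):
    row = "  ".join(f"{k}:{binom_pmf(k, 60, p):.0%}" for k in range(9))
    print(f"assumed likelihood {p:.0%} -> P(count): {row}")
```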

This type of thinking will only make sense for the first analyst above.  The other three will not be swayed.  But for that first analyst, some more detailed reflection can help them to better understand that their assumptions of likelihood are just that, assumptions; not facts.

Echo Chamber Risk Models

June 12, 2011

The dilemma is a classic – in order for a risk model to be credible, it must be an Echo Chamber – it must reflect the starting prejudices of management. But to be useful – and worth the time and effort of building it – it must provide some insights that management did not have before building the model.

The first thing that may be needed is to realize that the big risk model cannot be the only tool for risk management.  The Big Risk Model, also known as the Economic Capital Model, is NOT the Swiss Army Knife of risk management.  This Echo Chamber issue is only one reason why.

It is actually a good thing that the risk model reflects the beliefs of management and therefore gets credibility.  The model can then perform the one function that it is actually suited for.  That is to facilitate the process of looking at all of the risks of the firm on the same basis and to provide information about how those risks add up to make up the total risk of the firm.

That is very, very valuable to a risk management program that strives to be Enterprise-wide in scope.  The various risks of the firm can then be compared one to another.  The aggregation of risk can be explored.

All based on the views of management about the underlying characteristics of the risks. That functionality allows a quantum leap in the ability to understand and consistently manage the risks of the firm.

Before firms created this capability, each of a firm's risks was managed totally separately.  Some risks were highly restricted and others were allowed to grow in a mostly uncontrolled fashion.  With a credible risk model, management needs to face the inconsistencies embedded in the historical risk management of the firm.

Some firms look into this mirror and see their problems and immediately make plans to rationalize their risk profile.  Others lash out at the model in a shoot the messenger fashion.  A few will claim that they are running an ERM program, but the new information about risk will result in absolutely no change in risk profile.

It is difficult to imagine that a firm that had no clear idea of aggregate risk and the relative size of the components thereof would find absolutely nothing that needs adjustment.  Often it is a lack of political will within the firm to act upon the new risk knowledge.

For example, when major insurers started to create the economic capital models in the early part of this century, many found that their equity risk exposure was very large compared to their other risks and to their business strategy of being an insurer rather than an equity mutual fund.  Some firms used this new information to help guide a divestiture of equity risk.  Others delayed and delayed even while saying that they had too much equity risk.  Those firms were politically unable to use the new risk information to reduce the equity position of the group.  More than one major business segment had heavy equity positions and they could not choose which to tell to reduce.  They also rejected the idea of reducing exposure through hedging, perhaps because there was a belief at the top of the firm that the extra return of equities was worth the extra risk.

This situation is not at all unique to equity risk.   Other firms had the same experience with Catastrophe risks, interest rate risks and Casualty risk concentrations.

A risk model that was not an Echo Chamber model would not be of any use at all in the situations above. The differences between management beliefs and the model assumptions of a non Echo Chamber model would result in it being left out of the discussion entirely.

Other methods, such as stress tests, can be used to bring in alternate views of the risks.

So an Echo Chamber is useful, but only if you are willing to listen to what you are saying.

Major Regime Change – The Debt Crisis

May 24, 2011

A regime change is a corner that you cannot see around until you get to it.  It is when many of the old assumptions no longer hold.  It is the start of a new set of patterns.  Regime changes are not necessarily bad, but they are disruptive.  Many of the things that made people and companies successful under the old regime will no longer work.  But there will be completely new things that will now work.

The current regime has lasted for over 50 years.  Over that time, debt went all in one direction – UP.  Most other financial variables went up and down over that time, but their variability was in the context of a money supply that was generally growing somewhat faster than the economy.

Increasing debt funded some of the growth that fueled the world economies over that time.

But that was a ride that could not go on forever.  At some point in time the debt servicing gets to be too high in comparison to the capacity of the economy.  The economy long ago passed through the stage of hedge lending (see Financial Instability), where activities are able to afford both the payments on their debt and the repayment of principal.  The economy is now in the stage of Speculative Finance, where activities are able to afford the payments on the debt, but not the repayment of principal.  The efforts to pay down debt will tell us whether it is possible to reverse course on that.  If one looks ahead to the massive pensions crisis that looms in the medium term, then you would likely judge that the economy is in Ponzi Financing land, where the economy can afford neither the debt servicing nor the repayment of principal.

All this seems to be pointing towards a regime change regarding the level of debt and other forward obligations in society.  With that regime change, the world economy may shift to a regime of long term contraction in the amount of debt or else a sudden contraction (default) followed by a long period of massive caution and reduced lending.

Riskviews does not have a prediction for when this will happen or what other things will change when that regime change takes place.  But risk managers are urged to take into account that any models that are calibrated to historical experience may well mislead the users.  And market consistent models may also mislead for long term decision making (or is it that they will continue to mislead for long term decision making – how else to characterize a spot calculation?) until the markets come to incorporate the impact of a regime change.

This may be felt in terms of further extension of the uncertainty that has dogged some markets since the financial crisis or in some other manner.

However it materializes, we will be living in interesting times.

Systemic Risk, Financial Reform, and Moving Forward from the Financial Crisis

April 22, 2011

A second series of essays from the actuarial profession about the financial crisis.  Download them  HERE.

A Tale of Two Density Functions
By Dick Joss

The Systemic Risk of Risk Capital (Or the "No Matter What" Premise)
By C. Frytos & I. Chatzivasiloglou

Actuaries and Assumptions
By Jonathan Jacobs

Managing Financial Crises, Today and Beyond
By Vivek Gupta

What Did We Learn from the Financial Crisis?
By Shibashish Mukherjee

Financial Reform: A Legitimate Function of Government
By John Wiesner

The Economy and Self-Organized Criticality
By Matt Wilson

Systemic Risk Arising from a Financial System that Required Growth in a World with Limited Oil Supply
By Gail Tverberg

Managing Systemic Risk in Retirement Systems
By Minaz Lalani

Worry About Your Own Systemic Risk Exposures
By Dave Ingram

Systemic Risk as Negative Externality
By Rick Gorvett

Who Dares Oppose a Boom?
By David Merkel

Risk Management and the Board of Directors–Suggestions for Reform
By Richard Leblanc

Victory at All Costs
By Tim Cardinal and Jin Li

The Financial Crisis: Why Won't We Use the F(raud) Word?
By Louise Francis

Perfect Sunrise – A Warning Before the Perfect Storm
By Max Rudolph

Strengthening Systemic Risk Regulation
By Alfred Weller

It's Securitization Stupid
By Paul Conlin

I Want You to Feel Your Pain
By Krzysztof Ostaszewski

Federal Reform Bill and the Insurance Industry
By David Sherwood

What’s Next?

March 25, 2011

Turbulent Times are Next.

At BusinessInsider.com, a feature from Guillermo Felices tells of 8 shocks that are about to slam the global economy.

#1 Higher Food Prices in Emerging Markets

#2 Higher Interest Rates and Tighter Money in Emerging Markets

#3 Political Crises in the Middle East

#4 Surging Oil Prices

#5 An Increase in Interest Rates in Developed Markets

#6 The End of QE2

#7 Fiscal Cuts and Sovereign Debt Crises

#8 The Japanese Disaster

How should ideas like these impact on ERM systems?  Is it at all reasonable to say that they should not? Definitely not.

These potential shocks illustrate the need for the ERM system to be reflexive.  The system needs to react to changes in the risk environment.  That would mean that it needs to reflect differences in the risk environment in three possible ways:

  1. In the calibration of the risk model.  Model assumptions can be adjusted to reflect the potential near term impact of the shocks.  Some of the shocks are certain and could be thought to impact on expected economic activity (Japanese disaster) but have a range of possible consequences (changing volatility).  Other shocks, which are much less certain (end of QE2 – because there could still be a QE3) may be difficult to work into model assumptions.
  2. With Stress and Scenario Tests – each of these shocks, as well as combinations of the shocks, could be stress or scenario tests.  Riskviews suggests that developing a handful of fully developed scenarios, each combining 3 or more of these shocks, would be the most useful (a rough sketch of generating such combinations follows this list).
  3. In the choices of Risk Appetite.  The information and stress/scenario tests should lead to a serious reexamination of risk appetite.  There are several reasonable reactions – to simply reduce risk appetite in total, to selectively reduce risk appetite, to increase efforts to diversify risks, or to plan to aggressively take on more risk as some risks are found to have much higher reward.
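
As a rough sketch of the second point, combinations of three or more shocks can be generated mechanically and ranked before being fleshed out by hand into full scenarios. The impact figures below are placeholders, and simply adding impacts ignores the interactions that a fully developed scenario would describe.

```python
from itertools import combinations

# Placeholder portfolio impacts for each shock (illustrative only).
shocks = {
    "EM food prices": -0.02,
    "EM tightening": -0.03,
    "Middle East crises": -0.04,
    "Oil price surge": -0.05,
    "DM rate increases": -0.03,
    "End of QE2": -0.02,
    "Fiscal cuts / sovereign debt": -0.06,
    "Japanese disaster": -0.02,
}

# All three-shock combinations, ranked by (naively additive) combined impact.
scenarios = sorted(
    (sum(shocks[name] for name in combo), combo)
    for combo in combinations(shocks, 3)
)

print(f"{len(scenarios)} candidate three-shock scenarios; the five most severe:")
for impact, combo in scenarios[:5]:
    print(f"  {impact:+.0%}  " + " + ".join(combo))
```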

The last strategy mentioned above (aggressively take on more risk) might not be thought of by most as a risk management strategy.  But think of it this way: the strategy could be stated as an increase in the minimum target reward for risk.  Since things are expected to be riskier, the firm decides that it must get paid more for risk taking, staying away from lower paid risks.  This actually makes quite a bit MORE sense than keeping the same risks, expecting the same reward for those risks, and just taking less risk, which might be the most common strategy selected.

The final consideration is compensation.  How should the firm be paying people for their performance in a riskier environment?  How should the increase in market risk premium be treated?

See Risk adjusted performance measures for starters.

More discussion on a future post.

Where to Draw the Line

March 22, 2011

“The unprecedented scale of the earthquake and tsunami that struck Japan, frankly speaking, were among many things that happened that had not been anticipated under our disaster management contingency plans.”  Japanese Chief Cabinet Secretary Yukio Edano.

In the past 30 days, there have been 10 earthquakes of magnitude 6 or higher.  In the past 100 years, there have been over 80 earthquakes of magnitude 8.0 or greater.  The Japanese are reputed to be the most prepared for earthquakes.  And also to experience the most earthquakes of any settled region on the globe.  By some counts, Japan experiences 10% of all earthquakes that are on land and 20% of all severe earthquakes.

But where should they, or anyone making risk management decisions, draw the line in preparation?

In other words, what amount of safety are you willing to pay for in advance, and for what magnitude of loss event are you willing to say that you will simply live with the consequences?

That amount is your risk tolerance.  You will do what you need to do to manage the risk – but only up to a certain point.

That is because too much security is too expensive, too disruptive.

You are willing to tolerate the larger loss events because you believe them to be sufficiently rare.

In New Zealand, that cost/risk trade off thinking allowed them to set a standard for retrofitting existing structures at 1/3 of the standard for new buildings.  But they also allowed a 20 year transition.  That is not as much of an issue now; many of the older buildings, at least in Christchurch, are gone.

But experience changes our view of frequency.  We actually change the loss distribution curve in our minds that is used for decision making.

Risk managers need to be aware of these shifts.  We need to recognize them.  We want to say that these shifts represent shifts in risk appetite.  But we need to also realize that they represent changes in risk perception.  When our models do not move as risk perception moves, the models lose fundamental credibility.

In addition, when modelers do things like what some of the cat modeling firms are doing right now – moving the model frequency when people’s risk perceptions are not moving at all – they also lose credibility for that.

So perhaps you want scientists and mathematicians creating the basic models, but someone who is familiar with the psychology of risk needs to learn an effective way to integrate those changes in risk perceptions (or lack thereof) with changes in models (or lack thereof).

The idea of moving risk appetite and tolerance up and down as management gets more or less comfortable with the model estimations of risk might work.  But you are still then left with the issue of model credibility.

What is really needed is a way to combine the science/math with the psychology.

Market consistent models come the closest to accomplishing that.  The pure math/science folks see the herding aspect of market psychology as a miscalibration of the model.  But they are just misunderstanding what is being done.  What is needed is an ability to create adjustments to risk calculations that are applied to non-traded risks that allow for the combination of science & math analysis of the risk with the emotional component.

Then the models will accurately reflect how and where management wants to draw the line.

Risk Management Success

March 8, 2011

Many people struggle with clearly identifying how to measure the success of their risk management program.

But what they really are struggling with is either a lack of clear objectives or unobtainable objectives.

Because if there are clear and obtainable objectives, then measuring success means comparing performance to those objectives.

The objectives need to be framed in terms of the things that risk management concentrates upon – that is likelihood and severity of future problems.

The objectives need to be obtainable with the authority and resources that are given to the risk manager.  A risk manager who is expected to produce certainty about losses needs to either have unlimited authority or unlimited budget to produce that certainty.

The most difficult part of judging the success of a risk management program is when those programs are driven by assessments of risk that end up being totally insufficient.  But again, the real answer to this issue is authority and budget.  If the assumptions of the model are totally under the risk manager’s control, then the risk manager would be prudent to incorporate significant amounts of margin, either into the model or into the processes that use the model, to cover model risk.  But then the risk manager is incented to make the model as conservative as their imagination can make it.  The result will be no business – it will all look too risky.

So a business can only work if the model assumptions are the joint responsibility of the risk manager and the business users.

But there are objectives for a risk management program that can be clear and obtainable.  Here are some examples:

  1. The Risk Management program will be compliant with regulatory and/or rating agency requirements
  2. The Risk Management program will provide the information and facilitate the process for management to maintain capital at the most efficient level for the risks of the firm.
  3. The Risk Management program will provide the information and facilitate the process for management to maintain profit margins for risk (pricing in insurance terms) at a level consistent with corporate goals.
  4. The Risk Management program will provide the information and facilitate the process for management to maintain risk exposures to within corporate risk tolerances and appetites.
  5. The Risk Management program will provide the information and facilitate the process for management and the board to set and update goals for risk management and return for the organization as well as risk tolerances and appetites at a level and form consistent with corporate goals.
  6. The Risk Management program will provide the information and facilitate the process for management to avoid concentrations and achieve diversification that is consistent with corporate goals.
  7. The Risk Management program will provide the information and facilitate the process for management to select strategic alternatives that optimize the risk adjusted returns of the firm over the short and long term in a manner that is consistent with corporate goals.
  8. The Risk Management program will provide information to the board and for public distribution about the risk management program and about whether company performance is consistent with the firm goals for risk management.

Note that the firm’s goals for risk management are usually not exactly the same as the risk management program’s goals.  The responsibility for achieving the risk management goals is shared by the management team and the risk management function.

Goals for the risk management program that are stated like the following are the sort that are clear, but unobtainable without unlimited authority and/or budget as described above:

X1  The Risk Management program will assure that the firm maintains profit margins for risk at a level consistent with corporate goals.

X2  The Risk Management program will assure that the firm maintains risk exposures to within corporate risk tolerances and appetites so that losses will not occur that are in excess of corporate goals.

X3  The Risk Management program will assure that the firm avoids concentrations and achieves diversification that is consistent with corporate goals.

X4  The Risk Management program will assure that the firm selects strategic alternatives that optimize the risk adjusted returns of the firm over the short and long term in a manner that is consistent with corporate goals.

The worst case situation for a risk manager is to hold a position in a firm where there are no clear risk management goals for the organization (item 4 above) and where they are judged against one of the X goals, but which one they will be judged upon is not determined in advance.

Unfortunately, this is exactly the situation that many, many risk managers find themselves in.

Regime Change

February 18, 2011

In risk modeling, the idea of regime change is a mathematical expression.  A change from one mathematical rule to another.

But in the world, Regime Change can have a totally different meaning.  Like what is happening in Egypt.

When someone sits atop a government for 30 years, it is easy to assume that next week they will still be on top.

Until that is no longer true.

When there is a regime change, it happens because the forces that were in a stable equilibrium shift in some way so that they can no longer support a continuation of the past equilibrium.  In hindsight, it is possible to see that shift.  But the shift is often not so obvious in advance.

Again, as when the Soviet Union fell apart, the intelligence services were seemingly taken by surprise.

But is there really any difference between the two types of regime change?  Is it any easier to actually notice an impending regime change on a modeled risk than an impending political risk?

Why are we so bad at seeing around corners?

In the area of public health, it is well known that diseases follow a standard path called an S curve.  That is the path of a curve plotting the number of people infected by a disease over time.  The path has a slight upward slope at first then the slope gets much, much steeper and eventually it slows down again.

When a new disease is noticed, some observers who come upon information about the disease during that middle period of rapid upward slope will extrapolate and predict that the disease incidence will grow to be much higher than it ever actually gets.

The rate of growth of the disease slows down because diseases are most often self limiting: people do not usually get the disease twice.  Diseases are spread by contact between a carrier and an uninfected person.  In the early stages of a disease, the people who make the most contacts with others are the most likely to become infected and themselves become carriers.  Eventually, they all lose the ability to be carriers and become immune, and the number of times that infected carriers come into contact with uninfected persons starts to drop.  Eventually, such contacts become rare.

It is relatively easy to build a model of the progression of a disease.  We know what parameters are needed.  We can easily estimate those that we cannot measure exactly and can correct our estimates as we make observations.
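
A minimal sketch of such a model, a simple SIR-type simulation with made-up parameters, shows the S curve and the self-limiting slowdown directly:

```python
# Minimal SIR-style sketch of disease spread (illustrative parameters only).
# Cumulative infections trace the S curve: slow start, steep middle, then a
# slowdown as the pool of people who can still be infected shrinks.
population = 10_000
susceptible, infected, recovered = population - 10.0, 10.0, 0.0
contact_rate, recovery_rate = 0.3, 0.1    # per day, hypothetical

cumulative = []
for day in range(200):
    new_infections = contact_rate * infected * susceptible / population
    new_recoveries = recovery_rate * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries
    cumulative.append(population - susceptible)   # everyone ever infected

for day in (0, 25, 50, 75, 100, 150, 199):
    print(f"day {day:3d}: cumulative infections {cumulative[day]:8.0f}")
```

The early exponential regime ends because the susceptible pool shrinks – the model carries the seeds of its own regime change, which is exactly the property argued for below.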

We start out with a model of a disease that assumes that the disease is not permanent.

We plan for regime change.

Perhaps that is what we need for the rest of our models.  We should start out by assuming that no pattern that we observe is permanent.  That each regime carries the seeds of its own destruction.

If we start out with that assumption, we will look to build the impermanence of the regime into our models and look for the signs that will show that whatever guesses we had to make initially about the path of the next regime change can be improved.

Because when we build a model that does not include that assumption, we do not even think about what might cause the next regime change.  We do not make any preliminary guesses.  The signs that the next change is coming are totally ignored.

In the temperate zones where four very different seasons are the norm, the signs of the changes of seasons are well known and widely noticed.

The signs of the changes in regimes of risks can be well known and widely noticed as well, but only if we start out with a model that allows for regime changes.

Sins of Risk Measurement

February 5, 2011

Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line.  Balancing what is highly unlikely against what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain with increasing size of the potential loss numbers.  In other words, the larger the potential loss, the less certain you can be about how certain it might be.  Downplaying uncertainty is usually a sin of omission.  It is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less that they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements is implicitly treated as if they were all calculated totally consistently.  However, in fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability, and using tools that have totally different degrees of certitude built into them.  In the end, this encourages decisions to take on whatever risks we have underestimated the most through this process.

3.  Validating to Confirmation.  When we validate risk models, it is common to stop the validation process as soon as we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted and, if it fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased toward finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective Parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen because the experience was good or bad, to deliberately influence the outcome (see the sketch after this list).

5.  Hiding behind Math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what that entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying Wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that their risk calculations are also associated with potential gains will be thought to be a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.
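
To make sin 4, selective parameterization, concrete, here is a hedged sketch of the same rough value-at-risk calculation run on two different calibration windows.  The return series are invented for illustration; nothing here is calibrated to real data.

```python
# Illustration of sin 4, selective parameterization: the same rough VaR formula,
# two different calibration windows.  All returns below are invented for illustration.
import statistics
from math import sqrt

benign_window  = [0.02, -0.01, 0.015, 0.005, -0.008, 0.012,
                  0.01, -0.005, 0.018, -0.012, 0.008, 0.003]   # calm monthly returns
adverse_window = [-0.08, 0.04, -0.12, 0.06, -0.15, 0.03,
                  -0.09, 0.05, -0.11, 0.07, -0.06, 0.02]       # crisis monthly returns

def rough_var_99(monthly_returns):
    """Very rough 99% one-year VaR: 2.33 x annualized volatility, zero mean assumed."""
    sigma_annual = statistics.stdev(monthly_returns) * sqrt(12)
    return 2.33 * sigma_annual

print(f"VaR calibrated to the benign window:  {rough_var_99(benign_window):.0%}")
print(f"VaR calibrated to the adverse window: {rough_var_99(adverse_window):.0%}")
# Whoever chooses the calibration window has, in effect, already chosen the answer.
```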

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

The Year in Risk – 2010

January 3, 2011

It is very difficult to strike the right note looking backwards and talking about risk and risk management.  The natural tendency is to talk about the right and wrong “picks”.  The risks that you decided not to hedge or reinsure that did not develop losses and the ones that you did offload that did show losses.

But if we did that, we would be falling into exactly the same trap that makes it almost impossible to keep support for risk management over time.  Risk Management will fail if it becomes all about making the right risk “picks”.

There are other important and useful topics that we can address.  One of those is the changing risk environment over the year. In addition, we can try to assess the prevailing views of the risk environment throughout the year.


VIX is an interesting indicator of the prevailing market view of risk throughout the year.  VIX is an indicator of the price of insurance against market volatility.  The price goes up when the market believes that future volatility will be higher, or alternately when the market is simply highly uncertain about the future.

“Uncertain” was the word used most throughout the year to describe the economic situation.  But one insight that you can glean from looking at VIX over a longer time period is that volatility in 2010 was not historically high.

If you look at the world in terms of long term averages, a single regime view of the world, then you see 2010 as an above average year for volatility.  But if instead of a single regime world, you think of a multi regime world, then 2010 is not unusual for the higher volatility regimes.

So for stocks, the VIX indicates that 2010 was a year when market opinions were for a higher volatility risk environment.  Which is about the same as the opinion in half of the past 20 years.

That is what everyone believed.

Here is what happened:

Month        Return
January       -3.8%
February       2.8%
March          5.8%
April          1.3%
May           -8.3%
June          -5.2%
July           6.8%
August        -5.3%
September      8.7%
October        3.5%
November      -0.4%
December       6.0%

Average        1.0%
Std Dev        5.6%

That looks pretty volatile.  And comparing to the past several years, we see that 2010 was just a little less volatile, in realized terms, than 2008 and 2009.  So we are still in a regime of high volatility.

So we can conclude that 2010 was a year of both high expected and high actual volatility.

If an exercise like this is repeated each year for each important risk, insights into the possibilities for both expected and actual risk levels can eventually be formed, and strategies and tactics developed for different combinations.
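
Here is a hedged sketch of what that annual exercise might look like for equity volatility, using the monthly returns from the table above.  The long-run comparison level is an assumed placeholder, not a measured history.

```python
# Sketch of the yearly "expected vs. actual volatility" exercise, using the 2010
# monthly returns from the table above.  The long-run comparison level is an
# assumed placeholder, not a measured history.
import statistics

returns_2010 = [-0.038, 0.028, 0.058, 0.013, -0.083, -0.052,
                0.068, -0.053, 0.087, 0.035, -0.004, 0.060]    # January..December

monthly_vol = statistics.stdev(returns_2010)
annualized_vol = monthly_vol * 12 ** 0.5
assumed_long_run_vol = 0.15      # illustrative "single regime" long-run level

print(f"2010 realized monthly vol: {monthly_vol:.1%} (annualized ~{annualized_vol:.0%})")
if annualized_vol > assumed_long_run_vol:
    print("Relative to the assumed long-run level, 2010 sits in the higher-volatility regime.")
else:
    print("Relative to the assumed long-run level, 2010 sits in the lower-volatility regime.")
```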

The other thing that we should do when we look back at a year is to note how the year looked in the artificial universe of our risk model.

For example, when many folks looked back at 2008 stock market results in early 2009, many risk managers had to admit that their models told them that 2008 was a 1 in 250 to 1 in 500 year.  That did not quite seem right, especially since losses of that size had occurred two or three times in the past 125 years.

What many risk managers decided to do was to drop the (usually unstated) assumption that things had permanently changed and that the long term experience with those large losses was not relevant.  Once they did that, the risk models were recalibrated and 2008 became something like a 1 in 75 to 1 in 100 year event.

For the stock market, the 15.1% total return for 2010 was not unusual and caused no concern for recalibration.

But there are many other risks, particularly when you look at general insurance risks, that had higher than expected claims.  Some were frequency driven and some were severity driven.  Here is a partial list:

  • Queensland flood
  • December snowstorms (Europe & US)
  • Earthquakes (Haiti, Chile, China, New Zealand)
  • Iceland Volcano

Munich Re estimates that 2010 will go down as the sixth worst year for amount of general insurance claims paid for disasters.

Each insurer and reinsurer can look at their losses and see, in the aggregate and for each peril separately, what their models would assign as likelihood for 2010.

The final topic for the year in risk is Systemic Risk.  2010 will go down as the year that we started to worry about Systemic Risk.  Regulators, both in the US and globally are working on their methods for inoculating the financial markets against systemic risk.  Firms around the globe are honing their arguments for why they do not pose a systemic threat so that they can avoid the extra regulation that will doubtless befall the firms that do.

Riskviews fervently hopes that those who do work on this are very open minded.  As Mark Twain once said,

“History does not repeat itself, but it does rhyme.”

And for Systemic Risk, my hope is that the resources, and the necessary drag from additional regulation, are applied not merely to preventing an exact repeat of recent events, but with recognition of the possibility of rhyming, as well as of what I would think is the most likely systemic issue: that financial innovation will bring us an entirely new way to bollocks up the system next time.

Happy New Year!

Turkey Risk

November 25, 2010

On Thanksgiving Day here in the US, let us recall Nassim Taleb’s story about the turkey.  For 364 days the turkey saw no risk whatsoever, just good eats.  Then one day, the turkey became dinner.

For some risks, studying the historical record and making a model from experience just will not give useful results.

And, remembering the experience of the turkey, a purely historical basis for parameterizing risk models could get you cooked.

Happy Thanksgiving.

Risk Managers do not know the Future any Better than Anyone Else

September 17, 2010

Criticisms of risk managers for not anticipating some emerging future are overdone.  When a major unexpected loss happens, everyone missed it.

Risk managers do not have any special magic ball.  The future is just as dim to us as to everyone else.

Sometimes we forget that.  Our methods seem to be peering into the future.

But that is not really correct.  We are not looking into the future.  Not only do we not know the future, we do not even know the likelihood of various future possibilities, the probability distribution of the future.

That does not, however, make our work a waste of time.

What we should be doing with our models is to write down clearly the view of the future upon which we base our decisions.

You see, everyone who makes a decision must have a picture of the future possibilities that they are using to weigh the possibilities and make that decision.  Most people cannot necessarily articulate that picture with any specificity.  Management teams try to make sure that they are all working with similar visions of the future so that the sum of all their decisions makes sense together.

But one of the innovations of the new risk management discipline is to provide a very detailed exposition of that picture of the future.

Unfortunately, many risk managers are caught up in the mechanics of creating the model and they fail to recognize the extreme importance of this aspect of their work.  Risk Managers need to make sure that the future that is in their model IS the future that management wants to use to base their decisions upon.  The Risk Manager needs to understand whether he/she is the leader or the follower in the process of identifying that future vision.

If the leader, then there needs to be an explicit discussion where the other top managers affirm that they agree with the future suggested by the Risk Manager.

If the follower, then the risk manager will first need to say back to the rest of management what they are hearing to make sure that they are all on the same page.  They might still want to present alternate futures, but they need to be prepared to have those visions heavily discounted in decision making.

The Risk Managers who do not understand this process go forward developing their models based upon their best vision of the future and are frustrated when management does not find their models to be very helpful.  Sometimes, the risk manager presents their models as if they DO have some special insight into the future.

My vision of the future is that that path will not succeed.

Filters are Sometimes Blinders

September 10, 2010

We saw a graph recently that tried to show how the stock market is totally disconnected from GDP. It showed that stock market growth was not correlated with GDP growth and has perhaps even been negatively correlated on a decade by decade basis over the last 100 years or so.

http://www.businessinsider.com/chart-of-the-day-economy-stock-market-unrelated-2010-9

This made me think of something that we did years ago when I was working on trying to compare the performance of stockholder owned insurers to that of mutual insurers. Eventually we figured that it might make more sense to compare the total return on total capital of stockholder owned companies to the return on capital of mutuals. The division of ownership between bondholders and stockholders is artificial and not important to this comparison.

It makes me wonder what the chart above might look like if the value of the companies is represented by the value of the stocks PLUS the value of the bonds.

Just an example of how we have sometimes been taught to filter out some very important information. It is sometimes very hard to see outside of the filters that “everyone” has been taught to use.

Why All Risk Models Understate Risk?

August 10, 2010

There are three types of reasons:  mechanical, psychological and market.

Mechanical Reasons

Parameter Risk – all of the parameters of risk models are uncertain.  That fact is usually ignored.

Residual Risk – there are two parts to this one: within the range of the data and outside the range.  Within the range, the process of modeling always produces smoother results than the actual observed results.  This understates risk.  Outside the range, the method might overstate or understate risk, possibly by orders of magnitude.

Randomness – many of the risks that we model with random variables are not at all random.  They are causal, but we do not know how to follow the causal chain to its conclusion.  The reality of these risks will involve many more discontinuities than are usually included in our continuous risk models.
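
As a small hedged illustration of the parameter risk point above, the sketch below compares a tail estimate that treats a fitted volatility as exact with one that averages over the uncertainty in that estimate.  The threshold, volatility, and weights are invented for illustration.

```python
# Sketch of the parameter risk point: ignoring parameter uncertainty understates
# the tail.  All numbers are invented for illustration.
from statistics import NormalDist

loss_threshold = 0.30            # size of loss we care about
estimated_sigma = 0.10           # volatility estimated from a short data sample

# Treating the volatility estimate as exactly right:
p_exact = 1 - NormalDist(0, estimated_sigma).cdf(loss_threshold)

# Admitting that the estimate is uncertain: average the tail probability
# over a few plausible values of sigma (the spread below is assumed).
plausible_sigmas = [0.08, 0.10, 0.12, 0.15]
weights = [0.25, 0.40, 0.25, 0.10]
p_uncertain = sum(w * (1 - NormalDist(0, s).cdf(loss_threshold))
                  for w, s in zip(weights, plausible_sigmas))

print(f"Tail probability, parameters treated as known:    {p_exact:.4%}")
print(f"Tail probability, parameter uncertainty admitted: {p_uncertain:.4%}")
# The second figure is several times larger: uncertainty about sigma fattens the tail.
```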

Psychological

Humans are hard wired to have a better memory of good times than bad times.  This manifests itself in many of the biases chronicled by psychologists.

Many of those biases boil down to the fact that we all tend to see the world as we want it to be, rather than as it is.

Market

Because of the above, the market tends to underprice risk.  You often do not get paid enough for the real risk; you get paid for the risk in the model.  Those few who look past the models and come closest to understanding the real risk will simply not play.  So the markets are dominated by folks with models that understate risk.

The place to play is identified clearly above.  Did you notice?

Regime Change

July 30, 2010

If something happens more or less the same way for any extended period of time, the normal human reaction is to consider that phenomenon constant and to largely filter it out.  We do not then even try to capture new information about changes to that phenomenon because our senses tell us that the input is “pure noise” with no signal.  Hence the famous story about boiling frogs.  Which may or may not be actually true about frogs, but it definitely reveals something about the way that humans take in information about the world.

But things can and do actually change.  Even things that are more or less the same for a very long time.

In the book, “This Time It’s Different”, the authors state that

“The median inflation rates before World War I were well below those of the more recent period: 0.5% per annum for 1500 – 1799 and 0.71% for 1800 – 1913, in contrast with 5% for 1914 – 2006.”

Imagine that.  Inflation averaged below 0.75% for about 300 years.  Since there is no history of extended periods of negative inflation, to get an average that low there must have been a very low standard deviation as well.  Under such a calibration, inflation at a level of 3 or 4% would look like a one in a million situation.  So intelligent financial analysts before WWI must have thought that they could make plans without any concern for inflation.
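
A small hedged sketch of how that pre-WWI mindset might translate into numbers: under a normal model for annual inflation with a mean near the figures quoted above, the implied chance of 4% inflation collapses as the assumed standard deviation shrinks.  The sigma values below are illustrative assumptions, not estimates.

```python
# How a "one in a million" judgment can happen: under a low-mean normal calibration,
# the implied chance of 4% inflation collapses as the assumed sigma shrinks.
# The mean is taken from the figures quoted above; the sigma values are assumptions.
from statistics import NormalDist

mean_inflation = 0.006
for sigma in (0.005, 0.010, 0.020):
    p = 1 - NormalDist(mean_inflation, sigma).cdf(0.04)
    print(f"sigma = {sigma:.1%}: P(inflation > 4%) is about 1 in {1/p:,.0f}")
# A model calibrated only to the old regime assigns almost no weight to what
# became the normal state of affairs after 1914.
```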

But in the years following WWI, governments found a new way to default on their debts, especially their internal debts.  Reinhart and Rogoff point out that almost all of the discussion by economists regarding sovereign default is about external debt.  But they show that internal debt is very important to the situations of sovereign defaults.  Countries with high levels of internal debt and low external debt will usually not default, but countries with high levels of both internal and external debt will often default.

So as we contemplate the future of the aging western economies, we need to be careful that we do not exclude the regime changes that could occur.  And which regime changes that we should be concerned about becomes clearer when we look at all of the entitlements to retirees as debt (is there any effective difference between debt and these obligations?).  When we do that we see that there are quite a few western nations with very, very large internal debt.  And many of those countries have indexed much of that debt, taking the inflation option off of the table.

Reinhart and Rogoff also point out that sovereign default is usually not about ability to pay; it is about willingness to make the sacrifices that repayment of debt would entail.

So Risk Managers need to think about possible drastic regime changes, in addition to the seemingly highly unlikely scenario that the future will be more or less like the past.

Crippling Epistemology

July 17, 2010

Google the term crippled epistemology and you get lots of articles and blog posts about extremists and fanatics and also some blog posts BY the extremists and fanatics.

Crippled epistemology means that someone cannot see the truth.

Daniel Patrick Moynihan is reported to have said, “Everyone is entitled to his own opinion, but not his own facts.”

But there are just too many facts.  Any one person cannot attend to ALL of the facts.  They must filter the facts, choose the facts that are more important.  We all filter the facts that we pay attention to.

But sometimes, those filters become too strong.  Things went along in a certain pattern for a length of time, so we filtered out of our consideration many of those things that either failed to evidence any variability or that had totally predictable variability.

Those filters take on the aspect of a crippling epistemology.  Our approach to knowledge keeps us from understanding what is actually happening.

Sounds pretty esoteric.  But in fact it is one of the most important issues in risk management.

We need to have systems that work on a real time basis to provide the information that drives our risk decisions.  But we must be careful that that expensive and impressive risk information system does not actually obscure the information that we really need.

The investors in sub prime mortgages prior to 2007 had developed an epistemology, an approach to their knowledge of the markets.  Ultimately that epistemology crippled them, because it did not allow them to see the real underlying weakness of that market.

So a very important step to be performed periodically by risk managers is an Epistemology Review: making sure that the risk systems actually are capturing the needed information about the risks of the firm.

Biased Risk Decisions

June 18, 2010

The information is all there.  We have just wrapped it in so many technical terms that perhaps we forget what it is referring to.

Behavioral Finance explains exactly how people tend to make decisions without models.  They call them Biases and Heuristics.

This link is to one of my absolute favorite pages on the entire internet.  LIST OF COGNITIVE BIASES Take a look.  See if you can find the ways that you made your last 10 major business decisions there.

Now models are the quants’ way to overcome these biases.  Quants believe that they can build a model that keeps the user from falling into some of the more emotional cognitive biases, such as the anchoring effect.  With a model, for example, anchoring is avoided because the modeler very carefully gives equal weight to many data points instead of more weight to the most recent data point.
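
A tiny sketch of that contrast, with made-up loss figures: an equal-weighted estimate versus one that, like an anchored human, leans heavily on the most recent observation.  The loss history and the decay factor are invented for illustration.

```python
# Equal weighting vs. recency weighting of the same (invented) annual loss history.
annual_losses = [12, 9, 15, 11, 48, 10, 13, 8, 14, 7]    # most recent year last

equal_weighted = sum(annual_losses) / len(annual_losses)

decay = 0.5                                              # assumed recency-weighting factor
weights = [decay ** (len(annual_losses) - 1 - i) for i in range(len(annual_losses))]
recency_weighted = sum(w * x for w, x in zip(weights, annual_losses)) / sum(weights)

print(f"Equal-weighted loss estimate:   {equal_weighted:.1f}")
print(f"Recency-weighted loss estimate: {recency_weighted:.1f}")
# The recency-weighted view anchors on the calm recent years and nearly
# forgets the one bad year; the equal-weighted model resists that pull.
```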

But what the quants fail to recognize is that models strengthen some of the biases.  For example, models and modelers often fall under the Clustering illusion, finding patterns and attributing statistical distributions to data recording phenomena that have just finished one phase and are about to move on to another.

Models promote the hindsight bias.  No matter how surprising an event is at the time, within a few years, the data recording the impact of the event is incorporated into the data sets and the modelers henceforth give the impression that the model is now calibrated to consider just such an event.

And in the end, the model is often no more than a complicated version of the biases of the modeler, an example of the Confirmation Bias, where the modeler has constructed a model that confirms their going-in world view rather than representing the actual world.

So that is the trade-off, between biased decisions with a model and biased decisions without a model.  What is a non-modeling manager to do?

I would suggest that they should go to that wikipedia page on biases and learn about their own biases and also sit down with that list with their modeler and get the modeler to reveal their biases as well.

Fortunately or unfortunately, things in most financial firms are very complicated.  It is almost impossible to get it right balancing all of the moving parts that make up the entirety of most firms without the help of a model.  But if the decision maker understands their own biases as well as the biases of the model, perhaps they can avoid more of them.

Finally, Jos Berkemeijer asks what a modeler must know if they are also the decision maker.  I would suggest that such a person needs desperately to understand their own biases.  They can get a little insight into this from traditional peer review.  But I would suggest even more than that: they need to review the wiki list of biases with their peer reviewer and hope that the peer reviewer feels secure enough to be honest with them.

Winners and Losers

June 14, 2010

Sometimes quants who get involved with building new economic capital models have the opinion that their work will reveal the truth about the risks of the group and that the best approach is to just let the truth be told and let the chips fall where they may.

Then they are completely surprised that their project has enemies within management.  And that those enemies are actively at work undermining the credibility of the model.  Eventually, the modelers are faced with a choice of adjusting the model assumptions to suit those enemies or having the entire project discarded because it has failed to get the confidence of management.

But that situation is actually totally predictable.

That is because it is almost a sure thing that the first comprehensive and consistent look at the group’s risks will reveal winners and losers.  And if this really is a new way of approaching things, one or more of the losers will come as a complete surprise to many.

The easiest path for the managers of the new loser business is to undermine the model.  And it is completely natural to find that they will usually be completely skeptical of this new model that makes their business look bad.  It is quite likely that they do not think that their business takes too much risk or has too little profits in comparison to their risk.

I saw this first, in its most primitive form, in the late 1970’s when the life insurer where I worked shifted from an approach that allocated all capital in proportion to reserves to one that recognized insurance risk and investment risk as two separate factors.  Term insurance products were suddenly found to be drastically underpriced.  Of course, the manager of that product became an instant enemy of the new approach and was able to find many reasons why capital shouldn’t be allocated to insurance risk.
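
A toy numerical sketch of that kind of shift, with all figures invented: allocate the same total capital first in proportion to reserves only, and then to separate insurance-risk and investment-risk factors.

```python
# Toy sketch of the capital allocation shift described above; all figures are invented.
products = {
    #             reserves, insurance-risk capital, investment-risk capital
    "term life":  (50,  40,  5),
    "whole life": (800, 20, 80),
    "annuities":  (650,  5, 65),
}

total_capital = 215.0
total_reserves = sum(r for r, _, _ in products.values())
total_risk = sum(i + v for _, i, v in products.values())

print(f"{'product':<12}{'by reserves':>14}{'by risk factors':>18}")
for name, (reserves, ins_risk, inv_risk) in products.items():
    by_reserves = total_capital * reserves / total_reserves
    by_risk = total_capital * (ins_risk + inv_risk) / total_risk
    print(f"{name:<12}{by_reserves:>14.0f}{by_risk:>18.0f}")
# The term product suddenly carries far more capital under the risk-factor view
# than its small reserves would suggest, which is why its pricing looked so wrong.
```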

The same sorts of issues had been experienced by firms when they first adopted nat cat models and shifted from a volatility risk focus to a ruin risk focus.

What needs to be done to defuse these sorts of issues is to take steps to separate the message from the messenger.  There are two main ways to accomplish this:

  1. The message about the new level of risks needs to be delivered long before the model is completed.  This cannot wait until the model is available and the exact values are completely known.  Management should be exposed to broad approximations of the findings of the model at the earliest possible date.  And the rationale for the levels of the risk needs to be revealed and discussed and agreed long before the model is completed.
  2. Once the broad levels of the risk  are accepted and the problem areas are known, a realistic period of time should be identified for resolving these newly identified problems.   And appropriate resources allocated to developing the solution.  Too often the reaction is to keep doing business and avoid attempting a solution.

That way, the model can take its rightful place as a bringer of light to the risk situation, rather than the enemy of one or more businesses.

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report has been posted to the IAA website this week.

It raises a point that must be fairly obvious to everyone: that you just cannot manage risks without looking at them from multiple angles.

Or at least it should now be obvious. Here are 8 different angles on risk that are discussed in the report and my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   –  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there were a significant number of sub prime mortgages where the gross mortgage payments were higher than the gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  –  Some firms did all of their analysis on an economic basis and kept saying that they were fine even as their reported financials showed them dying.  They should have known in advance the risk that the accounting picture would differ from their analysis.
  3. REGULATORY MEASURE OF RISK  –  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right,” you need to know how important others, including your regulator, will see things.  Better to have a discussion with the regulator long before a problem arises.  You are just not as credible, in the middle of what seems to the regulator to be a crisis, saying that the regulatory view is off target.
  4. SHORT TERM VS. LONG TERM RISKS  –  While it is really nice that everyone has agreed to focus in on a one year view of risks, for situations that may well extend beyond one year, it can be vitally important to know how the risk might impact the firm over a multi year period.
  5. KNOWN RISK AND EMERGING RISKS  –  the fact that your risk model did not include anything for volcano risk, is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  –  Again, while an agreement on a 1 in 200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  –  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  –  This is another way of saying to focus on the economic vs the accounting.

Read the report to get the more measured and complete view prepared by the 15 actuaries from the US, UK, Australia and China who participated in the working group that prepared it.

Comprehensive Actuarial Risk Evaluation

Your Mother Should Know

April 29, 2010

Something as massive as the current financial crisis is much too large to have one or two or even three simple drivers.  There were many, many mistakes made by many different people.

My mother, who was never employed in the financial world,  would have cautioned against many of those mistakes.

When I was 16, I had some fine arguments with my mother about the girls that I was dating. My mother did not want me dating any girls that she did not want me to marry.

That was absolutely silly, I argued. I was years and years away from getting married. That was a concern for another time. My mother knew that in those days, “shotgun marriages” were common, a sudden unexpected change that triggered a long-term commitment. Well, as it happened, even without getting a shotgun involved, five years later I got married to a girl that I started dating when I was 16.

There are two different approaches to risk that firms in the risk-taking business use. One approach is to assume that they can and will always be able to trade away risks at will. The other approach is to assume that any risks will be held by the firm to maturity. If the risk managers of the firms with the risk-trading approach had listened to their mothers, they would have treated those traded risks as if they might one day hold them to maturity. In most cases, the risk traders can easily offload their risks at will. Using that approach, they can exploit little bits of risk insight to trade ahead of market drops. But when the news reveals a sudden unexpected adverse turn, the trading away option often disappears. In fact, using the trading option will often result in locking in more severe losses than what might eventually occur. And in the most extreme situations, trading just freezes up and there is not even the option to get out with an excessive loss.

So the conclusion here is that, at some level, every entity that handles risks should be assessing what would happen if they ended up owning the risk that they thought they would only have temporarily. This would have a number of consequences. First of all, it could well stop the idea of high speed trading of very, very complex risks. If these risks are too complex to evaluate fully during the intended holding period, then perhaps it would be better for all if the trading just did not happen so very quickly. In the case of the recent subprime-related issues, banks often had very different risk analysis requirements for trading books of risks vs. their banking book of risks. The banking (credit mostly) risks required intense due diligence or underwriting.  The trading book only had to be run through models, where the assignment of assumptions was not required to be based upon internal analysis.

From 2008 . . .

Risk Management: The Current Financial Crisis, Lessons Learned and Future Implications

Lots more great stuff there.  Check it out.

Dangerous Words

April 27, 2010

One of the causes of the Financial Crisis that is sometimes cited is an inappropriate reliance on complex financial models.  In our defense, risk managers have often said that users did not take the time to understand the models that they relied upon.

And I have said that in some sense, blaming the bad decisions on the models is like a driver who gets lost blaming it on the car.

But we risk managers and risk modelers do need to be careful with the words that we use.  Some of the most common risk management terminology is guilty of being totally misleading to someone who has no risk management training – who simply relies upon their understanding of English.

One of the fundamental steps of risk management is to MEASURE RISK.

I would suggest that this very common term is potentially misleading and risk managers should consider using it less.

In common usage, you could say that you measure a distance between two points or measure the weight of an object.  Measurement usually refers to something completely objective.

However, when we “measure” risk, it is not at all objective.  That is because Risk is actually about the future.  We cannot measure the future.  Or any specific aspect of the future.

While I can measure my height and weight today, I cannot now measure what it will be tomorrow.  I can predict what it might be tomorrow.  I might be pretty certain of a fairly tight range of values, but that does not make my prediction into a measurement.

So by the very words we use to describe what we are doing, we sought to instill a degree of certainty and reliability that is impossible and unwarranted.  We did that perhaps as mathematicians who are used to starting a problem by defining terms.  So we start our work by defining our calculation as a “measurement” of risk.

However, non-mathematicians are not so used to defining A = B at the start of the day and then remembering thereafter that whenever they hear someone refer to A, that they really mean B.

We also may have defined our work as “measuring risk” to instill in it enough confidence from the users that they would actually pay attention to the work and rely upon it.  In which case we are not quite as innocent as we might claim on the over reliance front.

It might be difficult to retreat now, however.  Try telling management that you do not now, nor have you ever, measured risk.  And see what happens to your budget.

LIVE from the ERM Symposium

April 17, 2010

(Well not quite LIVE, but almost)

The ERM Symposium is now 8 years old.  Here are some ideas from the 2010 ERM Symposium…

  • Survivor Bias creates support for bad risk models.  If a model underestimates risk there are two possible outcomes – good and bad.  If bad, then you fix the model or stop doing the activity.  If the outcome is good, then you do more and more of the activity until the result is bad.  This suggests that model validation is much more important than just a simple-minded tick-the-box exercise.  It is a life and death matter.
  • BIG is BAD!  Well maybe.  Big means large political power.  Big will mean that the political power will fight for parochial interests of the Big entity over the interests of the entire firm or system.  Safer to not have your firm dominated by a single business, distributor, product, region.  Safer to not have your financial system dominated by a handful of banks.
  • The world is not linear.  You cannot project the macro effects directly from the micro effects.
  • Due Diligence for mergers is often left until the very last minute and given an extremely tight time frame.  That will not change, so more due diligence needs to be a part of the target pre-selection process.
  • For merger of mature businesses, cultural fit is most important.
  • For newer businesses, retention of key employees is key
  • Modelitis = running the model until you get the desired answer
  • Most people when asked about future emerging risks, respond with the most recent problem – prior knowledge blindness
  • Regulators are sitting and waiting for a housing market recovery to resolve problems that are hidden by accounting in hundreds of banks.
  • Why do we think that any bank will do a good job of creating a living will?  What is their motivation?
  • We will always have some regulatory arbitrage.
  • Left to their own devices, banks have proven that they do not have a survival instinct.  (I have to admit that I have never, ever believed for a minute that any bank CEO has ever thought for even one second about the idea that their bank might be bailed out by the government.  They simply do not believe that they will fail. )
  • Economics has been dominated by a religious belief in the mantra “markets good – government bad”
  • Non-financial businesses are opposed to putting OTC derivatives on exchanges because exchanges will only accept cash collateral.  If they are hedging physical asset prices, why shouldn’t those same physical assets be good collateral?  Or are they really arguing to be allowed to do speculative trading without posting collateral? Probably more of the latter.
  • It was said that systemic problems come from risk concentrations.  Not always.  They can come from losses and a lack of proper disclosure.  When folks see some losses and do not know who is hiding more losses, they stop doing business with everyone.  No one discloses enough, and that confirms the suspicion that everyone is impaired.
  • Systemic risk management plans need to recognize that this is like forest fires.  If they prevent the small fires, then the fires that eventually do happen will be much larger and more dangerous.  And someday, there will be another fire.
  • Sometimes a small change in the input to a complex system will unpredictably result in a large change in the output.  The financial markets are complex systems.  The idea that the market participants will ever correctly anticipate such discontinuities is complete nonsense.  So markets will always be efficient, except when they are drastically wrong.
  • Conflicting interests for risk managers who also wear other hats is a major issue for risk management in smaller companies.
  • People with bad risk models will drive people with good risk models out of the market.
  • Inelastic supply and inelastic demand for oil is the reason why prices are so volatile.
  • It was easy to sell the idea of starting an ERM system in 2008 & 2009.  But will firms who need that much evidence of the need for risk management forget why they approved it when things get better?
  • If risk function is constantly finding large unmanaged risks, then something is seriously wrong with the firm.
  • You do not want to ever have to say that you were aware of a risk that later became a large loss but never told the board about it.  Whether or not you have a risk management program.

Burn out, Fade Away …or Adapt

February 27, 2010

When I was a kid in the 1960’s, I was sick and tired of how much time on TV and movies was taken up with stories of WWII.  Didn’t my parents’ generation get it?  WWII was ancient history.  It was done.  Move on.  Join the real world that was happening now.

From that statement, you can tell that I am a Boomer.  But I am already sick and tired of how much ink and TV and movies and Web time is devoted to the passing of the world as the Boomers remember the golden age of our youth.  Gag me.  Am I going to have to hear this the entire rest of my life?  Get over it.  Move on.  Live in the current world.

Risk managers need to carefully convey that message to the folks who run their companies as well.  Whatever way the world was in the “Glory Days” of the CEO or Business Unit manager’s career, things are different.  Business is different.  Risks are different.  Strategies and companies must adapt.  Adapt, Burn Out or Fade Away are the choices.  Better to Adapt.

I saw this happen once before in my career.  Interest rates steadily rose from the late 1940’s through the early 1980’s.  A business strategy that emphasized amassing cash, locking in a return promise and investing it in interest bearing instruments could show a steady growth in profits almost every single year without too much difficulty.  Then suddenly in the mid-1980’s that didn’t work anymore.  Interest rates went down more than up for a decade and have since stayed low.  Firms either adapted, burned out or faded away.

We have just concluded a (thankfully) brief period of massive financial destruction and are in an uncertain period.  When we come out of this uncertainty, some of the long held strategies of firms will not work.  Risks will be different.

The risk manager needs to be one of the voices that helps to make sure that this is recognized.

In addition, the risk manager needs to recognize that one or many of the risk models that were used to assess risk in past periods will no longer work well.  The risk manager needs to stand ready to adapt or fade away.

And the models need to be calibrated to the new world, not the old.  Calibrating to include the worst of the recent past might seem like prudent risk management, but it may well not be realistic.  If the world reverts to a reasonable growth pattern, the next such event may well not happen for 75 years.  Does your firm really need to avoid exposures to the sorts of things that lost money in 2008 for 75 years?  Or would that mean forgoing most of the business opportunities of that period?

Getting the correct answers to those questions will mean the difference between growth, burning out, or fading away for your firm.

Making Sense of Immanent Failure

February 2, 2010

In the recent paper from the Saïd Business School, “Beyond the Financial Crisis,” the authors use the phrase “inability to make sense of immanent failure” to describe one of the aspects that led up to the financial crisis.

That matches up well with Jared Diamond’s ideas about Why Civilizations Fail.

And perfectly describes the otherwise baffling Chuck Prince quote about dancing.

I imagine that it is a problem that is more common with people who believe that they have really done their homework.  They have looked under every rock and they do not see the rock falling out of the sky.  It is not that they are failures.  In most times their extreme diligence will pay off handsomely.  There is just one sort of time period when they will not benefit appropriately from their careful work.

That is when there is a REGIME CHANGE.  Also called a SURPRISE.  All of the tried and true signals are green.  But the intersection is uncharacteristically clogged.

A major task for risk managers is to look for those regime changes, those times when the risk models no longer fit, and at that point to CHANGE MODELS.  That is different from recalibrating the same old model.  That means applying Bayesian thinking not just to the parameters of the model but to the model selection as well.

It is not a failure when a new model must be chosen.  It is a normal and natural state of affairs.  Changing models is what I will call “Rational Adaptability”.

The reason why it will not work to simply recalibrate the old model is that the model with combined calibration for several regimes would be too broad to give appropriate guidance in different regimes.
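
A minimal sketch of what Bayesian thinking applied to model selection could look like: two candidate regime models for a return series, with posterior model probabilities updated observation by observation.  The two models and the observed returns are invented for illustration.

```python
# Sketch of Bayesian model selection between two regime models.
# The two models and the observed returns are invented for illustration.
from statistics import NormalDist

calm_model   = NormalDist(mu=0.005, sigma=0.02)    # "normal times" model
crisis_model = NormalDist(mu=-0.01, sigma=0.08)    # "crisis regime" model

prob_calm, prob_crisis = 0.5, 0.5                  # prior model probabilities
observed = [0.004, 0.010, -0.003, -0.090, -0.120, 0.070, -0.150]

for x in observed:
    # Bayes: posterior model weight is proportional to prior weight times likelihood.
    w_calm = prob_calm * calm_model.pdf(x)
    w_crisis = prob_crisis * crisis_model.pdf(x)
    prob_calm, prob_crisis = w_calm / (w_calm + w_crisis), w_crisis / (w_calm + w_crisis)
    print(f"after return {x:+.1%}: P(calm) = {prob_calm:.2f}, P(crisis) = {prob_crisis:.2f}")
# Rational Adaptability, in this sketch, means switching the working model once the
# posterior weight shifts decisively, not just re-estimating a single blended model.
```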

You drive a car on highways, ride a boat on water and fly a plane in the air.  Multi-purpose vehicles exist, but they are never as efficient in any environment as the specialized vehicle.

So the risk manager needs to make sense of immanent failure and practice rational adaptability.

Get out of the car when you are wet up to the doors and get into a boat!

All Things Being Equal

January 26, 2010

is a phrase that is left out more often than left in when it is actually a key and seldom true assumption behind an argument.

If you are talking about risk and risk models, that phrase should be a red flag.  If the phrase is actually stated, the risk manager should immediately challenge it.  Because when a major risk becomes a loss or threatens to become a loss, very rarely are all things equal.

Most, and possibly all, major loss situations have ripple effects.  These ripple effects may be direct or they may be because they affect people who then in turn take actions that cause other unusual things to happen.

Here is a map of how the World Economic Forum thinks that the major risks of the world are interconnected:

Another example of a problem with the “All Things Being Equal” assumption is the discussion of inflation.  Few people remember to say it, but when they worry that additional money in the system due to direct Fed actions or stimulus spending will cause inflation, that would be true ALL THINGS BEING EQUAL.  But in fact, they are not equal, or even close to equal.

What is different is the amount of money that was in the system prior to the crisis, other than the money from the Fed and the Stimulus.  The losses suffered by the banks, the shrinkage of loans and the inability of consumers and businesses to get loans all REDUCE the amount of money in the economy.  So by no stretch of the imagination are all things equal.

So the old rule about government spending being inflationary is only true ALL THINGS BEING EQUAL.

That does not, however, mean that there is not a difficult task ahead for the Fed to try to discern how fast the total money supply catches up with the economy so that they can reel back the money that they have put in.  But the problem with that idea is that because of the amount of economic activity that has been totally privatized, the Fed does not necessarily have the information to do that directly.

So ALL THINGS BEING EQUAL, they will have to try anyway by looking at the pick up in activity from the parts of the economy that they do have information about.

Meanwhile, folks like the NIF are looking to help to improve the information flow so that proper management of the money supply is possible from direct information.

Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2300 hits, more than twice the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management, at least they were not intending to talk about risk management.

The list of quotes has recently hit its 100th posting (with somewhat more than 100 quotes, since a number of the posts include multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.  Douglas Adams
  2. “when the map and the territory don’t agree, always believe the territory” Gause and Weinberg – describing Swedish Army Training
  3. When you find yourself in a hole, stop digging.-Will Rogers
  4. “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair” Douglas Adams
  5. “A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all.”– George F. Kennan, (1954)
  6. “THERE ARE IDIOTS. Look around.” Larry Summers
  7. the only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat  Donald Van Deventer
  8. Philip K. Dick “Reality is that which, when you stop believing in it, doesn’t go away.”
  9. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.  Albert Einstein
  10. “Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand.”  Sherlock Holmes (A. Conan Doyle)
  11. The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable. Warren Buffett
  12. “A good rule of thumb is to assume that “everything matters.” Richard Thaler
  13. “The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them.”  Avinash Persaud
  14. There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.
    W Shakespeare Hamlet, scene v
  15. When Models turn on, Brains turn off  Til Schuermann

You might have other favorites.  Please let us know about them.

Risk Management Changed the Landscape of Risk

December 9, 2009

The use of derivatives and risk management processes to control risk was very successful in changing the landscape of risk.

But that change has been in the same vein as the changes to forest management practices that saw us eliminating the small forest fires only to find that the only fires that we then had were the fires that were too big to control.  Those giant forest fires were out of control from the start and did more damage than 10 years of small fires.

The geography of the world from a risk management view is represented by this picture:

The ball represents the state of the world.  Taking a risk is represented by moving the ball one direction or the other.  If the ball goes over the top and falls down the sides, then that is a disaster.

So risk managers spend lots of time trying to measure the size of the valley and setting up processes and procedures so that the firm does not get up to the top of the valley onto one of the peaks, where a good stiff wind might blow the firm into the abyss.

The tools for risk management, things like derivatives with careful hedging programs now allowed firms to take almost any risk imaginable and to “fully” offset that risk.  The landscape was changed to look like this:

Managers believed that the added risk management bars could be built as high as needed so that any imagined risk could be taken.  In fact, they started to believe that the possibility of failure was not even real.  They started to think of the topology of risk looking like this:

Notice that in this map, there is almost no way to take a big enough risk to fall off the map into disaster.  So with this map of risk in mind, company managers loaded up on more and more risk.

But then we all learned that the hedges were never really perfect.  (There is no profit possible with a perfect hedge.)  And in addition, some of the hedge counterparties were firms who jumped right to the last map without bothering to build up the hedging walls.

And we also learned that there was actually a limit to how high the walls could be built.  Our skill in building walls had limits.  So it was important to have kept track of the gross amount of risk before the hedging.  Not just the small net amount of risk after the hedging.
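
A small hedged sketch of the gross-versus-net point, with positions invented for illustration: the net number looks tiny right up until a hedge counterparty fails.

```python
# Gross vs. net exposure when a hedge counterparty fails; all figures are invented.
underlying_exposure = 1_000.0
hedges = {"counterparty A": -600.0,     # offsetting positions bought from hedge counterparties
          "counterparty B": -380.0}

net_exposure = underlying_exposure + sum(hedges.values())
gross_exposure = underlying_exposure    # what you hold if the hedges stop performing

print(f"Net exposure reported day to day: {net_exposure:,.0f}")
print(f"Gross exposure before hedging:    {gross_exposure:,.0f}")

failed = "counterparty A"               # assume one hedge counterparty defaults
exposure_if_failed = underlying_exposure + sum(v for k, v in hedges.items() if k != failed)
print(f"Exposure if {failed} fails:      {exposure_if_failed:,.0f}")
# The wall is only as high as the weakest counterparty holding it up.
```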

Now we need to build a new view of risk and risk management.  A new map.  Some people have drawn their new map like this:

They are afraid to do anything.  Any move, any risk taken might just lead to disaster.

Others have given up.  They saw the old map fail and do not know if they are ever again going to trust those maps.

They have no idea where the ball will go if they take any risks.

So we risk managers need to go back to the top map again and revalidate our map of risk and start to convince others that we do know where the peaks are and how to avoid them.  We need to understand the limitations to the wall building version of risk management and help to direct our firms to stay away from the disasters.

Non-Linearities and Capacity

November 18, 2009

I bought my current house 11 years ago.  The area where it is located was then in the middle of a long drought.  There was never any rain during the summer.  Spring rains were slight and winter snow in the mountains that fed the local rivers was well below normal for a number of years in a row.  The newspapers started to print stories about the levels of the reservoirs – showing that the water was slightly lower at the end of each succeeding summer.  One year they even outlawed watering the lawns and everyone’s grass turned brown.

Then, for no reason that was ever explained, the drought ended.  Rainy days in the spring became common and one week it rained for six days straight.

Every system has a capacity.  When the capacity of a system is exceeded, there will be a breakdown of the system of some type.  The breakdown will be a non-linearity of performance of the system.
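
Here is a minimal sketch of that kind of threshold non-linearity before the real-world example that follows; the capacity figure and the rainfall amounts are invented for illustration.

```python
# Sketch of a capacity threshold: nothing visible happens until capacity is exceeded,
# then the overflow appears all at once.  The capacity and rainfall figures are invented.
ABSORPTION_CAPACITY = 4.0    # inches of rain per day the ground can absorb or run off

def overflow(rain_inches):
    """Water the system cannot handle; zero until capacity is exceeded."""
    return max(0.0, rain_inches - ABSORPTION_CAPACITY)

for rain in [1.0, 2.5, 3.9, 4.1, 6.0, 9.0]:
    print(f"rain {rain:>4.1f} in -> overflow {overflow(rain):.1f} in")
# Below capacity, more rain changes nothing that anyone notices; just past capacity,
# each additional inch shows up directly as damage.
```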

For example, the ground around my house has a capacity for absorbing and running off water.  When it rained for six days straight, that capacity was exceeded and some of the water showed up in my basement.  The first time that happened, I was shocked and surprised.  I had lived in the house for 5 years and there had never been a hint of water in the basement.  I cleaned up the effects of the water and promptly forgot about it.  I put it down to a 1 in 100 year rainstorm.  In other parts of town, streets had been flooded.  It really was an unusual situation.

Then it happened again the very next spring, this time after just 3 days of very, very heavy rain.  The flooding in the local area was extreme.  People were driven from their homes, and the high school gymnasium was turned into a shelter for a week or two.

It appeared that we all had to recalibrate our models of rainfall possibilities.  We had to realize that the system we had for dealing with rainfall was being exceeded regularly and that these wetter springs were going to continue to exceed it.  During the years of drought, we had built more and more in low lying areas and, in ways that we might not have understood at the time, we altered the overall capacity of the system by paving over ground that would have absorbed the water.

For me, I added a drainage system to my basement.  The following spring, I went into my basement during the heaviest rains and listened to the pump taking the water away.

I had increased the capacity of that system.  Hopefully the capacity is now higher than the amount of rain that we will experience in the next 20 years while I live here.

Financial firms have capacities.  Management generally tries to make sure that the capacity of the firm to absorb losses is not exceeded by losses during their tenure.  But just like I underestimated the amount of rain that might fall in my home town, it seems to be common that managers underestimate the severity of the losses that they might experience.

Writers of liability insurance in the US underestimated the degree to which the courts would assign blame for use of a substance that was once thought to be largely benign but turned out to be highly dangerous.

In other cases, though, it was the system capacity that was misunderstood.  Investors misestimated the capacity of internet firms to productively absorb new cash from investors.  Just a few years earlier, the capacity of Asian economies to absorb investors’ cash had been overestimated as well.

Understanding the capacity of large sectors or entire financial systems to absorb additional money and put it to work productively is particularly difficult.  There are no rules of thumb to tell what the capacity of a system is in the first place.  Then to make it even more difficult, the addition of cash to a system changes the capacity.

Think of it this way, there is a neighborhood in a city where there are very few stores.  Given the income and spending of the people living there, an urban planner estimates that there is capacity for 20 stores in that area.  So with encouragement of the city government and private investors, a 20 store shopping center is built in an underused property in that neighborhood.  What happens next is that those 20 stores employ 150 people and for most of those people, the new job is a substantial increase in income.  In addition, everyone in the neighborhood is saving money by not having to travel to do all of their shopping.  Some just save money and all save time.  A few use that extra time to work longer hours, increasing their income.  A new survey by the urban planner a year after the stores open shows that the capacity for stores in the neighborhood is now 22.  However, entrepreneurs see the success of the 20 stores and they convert other properties into 10 more stores.  The capacity temporarily grows to 25, but eventually, half of the now 30 stores in the neighborhood go out of business.

This sort of simple micro economic story is told every year in university classes.


It clearly applies to macroeconomics as well, to large systems as well as small.  Another word for these situations where system capacity is exceeded is systemic risk.  The term is misleading.  Systemic risk is not a particular type of risk, like market or credit risk.  Systemic risk is the risk that the system will become overloaded and start to behave in a severely non-linear manner.  One severe non-linear behavior is shutting down.  That is what interbank lending did in 2008.

In 2008, many knew that the capacity of the banking system had been exceeded.  They knew that because they knew that their own bank’s capacity had been exceeded.  And they knew that the other banks had been involved in the same sort of business as they had.  There is a name for the risks that hit everyone who is in a market: systematic risks.  Systemic risks are usually systematic risks that grow so large that they exceed the capacity of the system.  The third broad category of risk, specific risk, is not an issue, unless a firm whose specific risk exceeds its capacity is “too big to fail”.  Then suddenly specific risk can become systemic risk.

So everyone just watched when the sub prime systematic risk became a systemic risk to the banking sector.  And watched the specific risk to AIG lead to the largest single-firm bailout in history.

Many have proposed the establishment of a systemic risk regulator.  That person would be in charge of identifying growing systematic risks that could become large enough to create systemic problems.  They would then be responsible for taking, or urging, actions intended to defuse the systematic risk before it becomes a systemic risk.

A good risk manager has a systemic risk job as well.  The good risk manager needs to pay attention to the exact same things – to watch out for systematic risks that are growing to a level that might overwhelm the capacity of the system.  The risk manager’s responsibility is then to urge their firm to withdraw from holding any of the systematic risk.  Stories tell us that happened at JP Morgan and at Goldman.  Other stories tell us that didn’t happen at Bear or Lehman.

So the moral of this is that you need to watch not just your own capacity but everyone else’s capacity as well if you do not want stories told about you.

Many Deadly Sins of Risk Management

November 16, 2009

Compiled by Anton Kobelev at www.inarm.org

Communication Breakdown

  • CEO thinks that risk management is the CRO’s job;
  • Not listening to your CRO – having him too low down the management chain;
  • Hiring a CEO who “doesn’t want to hear bad news”;
  • Not linking the Board tolerance for risk to the risk management practices of the company;
  • Having the CRO report to the CFO instead of to the CEO or Board, i.e., not having a system of checks and balances in place regarding risk practices;
  • The board not leading the risk management charge;
  • Not communicating the risk management goals;
  • Not driving the risk management culture down to the lower levels of the organization;

Ignorance is not Bliss

  • Not doing your own risk evaluations;
  • Not expecting the unexpected;
  • Overreacting to risks that turn out to be harmless;
  • Don’t shun the risk you understand, only to jump into a risk you don’t understand;
  • Failure to pay attention to actual risk exposure in the context of risk appetite;
  • Uncritically using an outsider’s view of how much capital the firm should hold;

Cocksureness

  • Believing your risk model;
  • The opinion held by the majority is not always the right one;
  • There can be several logical but contradictory explanations for one sequence of events, and logical doesn’t mean true;
  • We do not have perfect information about the future, or even the past and present;
  • Don’t use old normal assumptions to model in the new normal;
  • Arrogance of quantifying the unquantifiable;
  • Not believing your risk model –  waiting until you have enough evidence to prove the risk is real;

Not Seeing the Big Picture

  • Making major changes without heavy involvement of Risk Management;
  • Conflict of interest: not separating risk taking and risk management;
  • Disconnection of strategy and risk management: Allocating capital blindly without understanding the risk-adjusted value creation;
  • One of the biggest mistakes has to be thinking that you can understand the risks of an enterprise just by looking at the components of risk and “adding them up” – the complex interactions between factors are what lead to real enterprise risk;
  • Looking at risk using one single measure;
  • Measuring and reporting risks is the same as managing risks;
  • Risk can always be measured;

Fixation on Structure

  • Thinking that ERM is about meetings and org charts and capital models and reports;
  • Think and don’t check boxes;
  • Forgetting that we are here to protect the organization against risks;
  • Don’t let an ERM process become a tick-box exercise;
  • Not taking a whole company view of risk management;

Nearsightedness

  • Failing to seize historic opportunities for reform, post crisis;
  • Failure to optimize the corporate risk-return profile by turning risk into opportunity where appropriate;
  • Don’t be a stop sign.  Understand the risks AND REWARDS of a proposal before venturing an opinion;
  • Talking about ERM but never executing on anything;
  • Waiting until ratings agencies or regulatory requirements demand better ERM practices before doing anything;
  • There is no obstacle so difficult that, with sufficient thought, it cannot be turned into an opportunity;
  • No opportunity is so assured that, with insufficient thought, it cannot be turned into a disaster;
  • Do not confuse trauma with learning;
  • Using a consistent discipline to search for opportunities where you are paid to accept risk in the context of the entire entity will move you toward an optimized position. Just as important is using that discipline to avoid “opportunities” where this is not the case.
    • undertake positive NPV projects
    • risk comes along with these projects and should be priced in the NPV equation
    • the price of risk is the lesser of the external cost of disposal (e.g., hedging) or the cost of retention “in the context of the entire entity”;
    • also hidden in these words is the need to look at the marginal impact on the entity of accepting the risk. Am I better off after this decision than I was before? A silo NPV may not give the same answer for all firms/individuals;
  • What is important is the optimization journey, understanding it as a goal we will never achieve;

More Skin in the Game

  • Misalign the incentives;
  • Most people will act based on their financial incentives, and that certainly happened (and continues to happen) over the past couple of years. Perhaps we could include one saying that no one is peer reviewing financial incentives to make sure they don’t increase risk elsewhere in the system;
  • Not tying risk management practices to compensation;
  • Not aligning risk management goals with compensation;

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulator to require Stress tests so that they will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios, extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available if they thought of it at all.  Did not consider that in an extreme situation that results in the losses of magnitude similar to Economic Capital, that capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers follow innovators, getting less of the gains because of slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of gains, Me Too Followers’ losses are a sizable fraction and Risk Avoiders often lose money.  Innovators have all left the banks.  Banks are just the Me Too Followers and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression – Funny how the words change)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE Preferred stock holders was the actual start of the panic.  Lehman a week later was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Diversification Causes Correlations

November 3, 2009

The Bond insurers diversified out of their niche of municipal bonds into real estate backed securities, and suddenly these two markets that previously seemed to have low correlation were highly correlated as the sub prime crisis brought down the Bond Insurers and their problems rippled into the Muni market.

(I say seemed uncorrelated, but of course they are highly dependent since a high fraction of municipal incomes comes from taxes relating to real estate values.  That is a major problem with the statistical idea of correlation – statistical approaches must never be used uncritically.)

But the point of the first paragraph above is that interdependencies do not have to come from the fundamentals of two markets – that is, from common drivers of risk.  Interdependencies, especially of market prices, can and often do come from common ownership of securities from different markets.  The practice of holding positions in seemingly unrelated risks or markets is generally thought to create better risk-adjusted results because of diversification.

But the perverse truth is that like many things in real economics (not book economics) the more people use this rule, the less likely it is that it will work.

There are several reasons for this:

  • When a particularly large organization diversifies, their positions in every market will be large.  For anyone to get the most benefit from diversification, they need to have positions in each diversifying risk that are similar in size.  Since even the largest firms had to have started somewhere, they will have a primary business that is very large and so will seek to take very large positions in the diversifying markets to get that diversifying benefit.  So there ends up being some very significant specific risk of a sudden change in correlation if that large firm runs into trouble.  These events only ever happen once to a firm, so there are never, ever any historical correlations to be found.  But if you want to avoid this diversification pitfall, it pays to pay attention to where the largest firms operate and be cautious in assuming diversification benefits where THEY are the correlating factor.
  • When large numbers of firms use the same correlation factors (think Solvency II), then they will tend to all try to get into the same diversifying lines of business where they can get the best diversification benefits.  This results both in the specific risk factor mentioned above and in pricing pressure on those markets.  Those risks with “good” diversification will tend to price down to their marginal cost, which will be net of the diversification benefit.  The customers will end up getting the advantage of diversification.
  • Diversification is commonly believed to eliminate risk.  This is decidedly NOT TRUE.  No risk is destroyed via diversification.  All of the losses that were going to happen do happen, unaffected by diversification.  What diversification hopes to accomplish is to make these losses relatively less important and more affordable, because some risk taking activities are likely to be showing gains while others are showing losses.  So people who thought that, because they were diversified, they had less risk were willing to go out and take more risk.  This effect causes more of the stampede-for-the-exits behavior when times get tough and the losses that were NOT destroyed by diversification occur (see the sketch after this list).
  • The theory of a free lunch with diversification encourages firms who are inexperienced with managing a risk to take on that risk because their diversification analysis says that it is “free”.  These firms will often help to drive down prices for everyone, sometimes to the point that they do not make money from their “diversification play” even in good years.  Guess what?  All that fancy correlation math does not work as advertised if the expected earnings from a “diversifying risk” are negative.  There is no diversification from a losing operation because it has no gains to offset the losses of other risks.
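Here is a minimal Python sketch of the point flagged above that diversification never destroys losses, and that a money-losing “diversifier” offers no offset.  The two simulated lines of business, their return distributions, and the 50/50 weighting are all assumptions for illustration.

```python
import random
import statistics

random.seed(1)

def simulate(n_years: int = 10000, mean_b: float = 0.04):
    """Two lines of business with independent results; the losses are never destroyed."""
    losses_a = losses_b = 0.0
    portfolio = []
    for _ in range(n_years):
        a = random.gauss(0.04, 0.10)      # familiar franchise business
        b = random.gauss(mean_b, 0.10)    # "diversifying" business
        losses_a += min(a, 0.0)           # every loss still happens...
        losses_b += min(b, 0.0)
        portfolio.append(0.5 * (a + b))   # ...diversification only averages them
    return losses_a + losses_b, portfolio

total_losses, port = simulate()
print("total losses across both businesses:", round(total_losses, 1))
print("volatility of the 50/50 portfolio:  ", round(statistics.pstdev(port), 3))

# With a "diversifier" that loses money on average, there are no gains to offset:
_, losing_port = simulate(mean_b=-0.02)
print("portfolio mean with a losing diversifier:", round(statistics.mean(losing_port), 4))
```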

Myths of Market Consistent Valuation

October 31, 2009

    Guest Post from Elliot Varnell

    Myth 1: An arbitrage free model will by itself give a market consistent valuation.

    An arbitrage-free model which is calibrated to deep and liquid market data will give a market consistent valuation. An arbitrage-free model which ignores available deep and liquid market data does not give a market consistent valuation. Having said this, there is not a tight definition of what constitutes deep and liquid market data, therefore there is no tight definition of what constitutes market consistent valuation. For example a very relevant question is whether calibrating to historic volatility can be considered market consistent if there is a marginally liquid market in options. CEIOPS CP39, published in July 2009, appears to leave open the question of which volatility could be used, while CP41 requires that a market is deep and liquid, transparent and that these properties are permanent.

    Myth 2: A model calibrated to deep and liquid market data will give a Market Consistent Valuation.

    A model calibrated to deep and liquid market data will only give a market consistent valuation if the model is also arbitrage free. If a model ignores arbitrage-free dynamics then it could still be calibrated to replicate certain prices. However, this would not be a sensible framework for marking to model the prices of other assets and liabilities, as is required for the valuation of many participating life insurance contracts. Having said this, the implementations of some theoretically arbitrage-free models are not themselves always fully arbitrage free, due to issues such as discretisation, although they can be designed so that any departure from arbitrage-freeness is not noticeable within the level of materiality of the overall technical provision calculation.

    Myth 3: Market Consistent Valuation gives the right answer.

    Market consistent valuation does not give the right answer, per se, but an answer conditional on the model and the calibration parameters. The valuation is only as good as these underlying assumptions. One thing we can be sure of is that the model will be wrong in some way. This is why understanding and documenting the weakness of an ESG model and its calibration is as important as the actual model design and calibration itself.

    Myth 4: Market Consistent Valuation gives the amount that a 3rd party will pay for the business.

    Market Consistent Valuation (as calculated using an ESG) gives a value based on pricing at the margin. As with many financial economic models the model is designed to provide a price based on a small scale transaction, ignoring trading costs, and market illiquidity. The assumption is made that the marginal price of the liability can be applied to the entire balance sheet. Separate economic models are typically required to account for micro-market features; for example the illiquidity of markets or the trading and frictional costs inherent from following an (internal) dynamic hedge strategy. Micro-market features can be most significant in the most extreme market conditions; for example a 1-in-200 stress event.

    Even allowing for the micro-market features, a transaction price will account (most likely in a much less quantitative manner than using an ESG) for the hard-to-value assets (e.g. franchise value) or hard-to-value liabilities (e.g. contingent liabilities).

    Myth 5: Market Consistent Valuation is no more accurate than Discounted Cash Flow techniques using long term subjective rates of return.

    The previous myths could have suggested that market consistent valuation is in some way devalued or not useful. This is certainly the viewpoint of some actuaries, especially in the light of the recent financial crisis. However it could be argued that market consistent valuation, if done properly, gives a more economically meaningful value than traditional DCF techniques and provides better disclosure than traditional DCF. It does this by breaking down the problem into clear statements of what economic theory is being applied and what assumptions are being made. By breaking down the models and assumptions, weaknesses are more readily identified and economic theory can be applied.
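As a toy illustration of Myths 1 and 2 (an arbitrage-free model calibrated to a quoted market price), here is a minimal Python sketch that backs out the Black-Scholes volatility implied by a hypothetical option quote.  The spot, strike, rate, and quoted price are assumptions made only for illustration, and this aside is not part of the guest post above.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s: float, k: float, t: float, r: float, vol: float) -> float:
    """Black-Scholes call price: an arbitrage-free model of the option value."""
    d1 = (log(s / k) + (r + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def implied_vol(market_price: float, s: float, k: float, t: float, r: float) -> float:
    """Calibrate the model volatility to a (hypothetical) market quote by bisection."""
    lo, hi = 1e-4, 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical market data: spot 100, strike 100, 1 year, 3% rate, quoted call at 9.50
vol = implied_vol(9.50, s=100.0, k=100.0, t=1.0, r=0.03)
print(f"calibrated volatility: {vol:.4f}")
# Valuing other options with this calibrated, arbitrage-free model is "market consistent"
# only to the extent that the quote really comes from a deep and liquid market.
```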


Understanding and Balance

October 27, 2009

Everything needs to balance.  A requirement that management understand the model creates an equal and opposite obligation on the part of the modelers to really explain the assumptions that are embedded in the model and the characteristics that the model will exhibit over time.

This means that the modelers themselves have to actually understand the assumptions of the model – not just the mechanical assumptions that support the mechanical calculations of the model, but the fundamental underlying assumptions about why the sort of model chosen is a reliable way to represent the world.

For example, one of the aspects of models that is often disturbing to senior management is the degree to which the models require recalibration.  That need for recalibration is an aspect of the fundamental nature of the model.  And I would be willing to guess that few modelers have fully described that aspect in their explanation of their model, or explained why it exists and why it is a necessary aspect of the model.

That is just an example.  We modelers need to understand all of these fundamental points where models are simply baffling to senior management users and work to overcome the gap between what is being explained and what needs to be explained.

We are focused on the process.  Getting the process right.  If we choose the right process and follow it correctly, then the result should be correct.

But the explanations that we need are about why the choice of the process made sense in the first place.  And more importantly, how, now that we have followed the process for so long that we barely remember why we chose it, do we NOW believe that the result is correct.

What is needed is a validation process that gets to the heart of the fundamental questions about the model that are not yet known!  Sound frustrating enough?

The process of using risk models appropriately is an intellectual journey.  There is a need to step past the long-ingrained approach to projections and models that puts models in the place of fortune tellers.  The next step is to begin to find value in a what-if exercise.  Then there is the giant leap of the stochastic scenario generator.  Many major conceptual and practical leaps are needed to move from (a) getting a result that is not reams and reams of complete nonsense, to (b) getting a result that gives some insight into the shape of the future, to (c) realizing that once you actually have the model right, it starts to work like all of the other models you have ever worked with, with a vast amount of confirmation of what you already know (now that you have been doing this for a couple of years) along with an occasional insight that was totally unavailable without the model.

But while you have been taking this journey of increasing insight, you cross over and become one of those who you previously thought talked mostly in riddles and dense jargon.

But to be fully effective, you need to be able to explain all of this to someone who has not taken the journey.

The first step is to understand that in almost all cases they do not give a flip about your model and the journey you went through to get it to work.

The next step is to realize that they are often grounded in an understanding of the business.  For each person in your management team, you need to understand which part of the business they are grounded in and convince them that the model captures what they understand about the part of the business that they know.

Then you need to satisfy those whose grounding is in the financials.  For those folks, we usually do a process called static validation – show that if we set the assumptions of the model to the actual experience of last year, the model actually reproduces last year’s financial results.

Then you can start to work on an understanding of the variability of the results.  Where on the probability spectrum was last year – both for each element and for the aggregate result.

That one is usually troublesome.  For 2008, it was particularly troublesome for any firms that owned any equities.  Most models would have placed 2008 stock market losses almost totally off the charts.
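Here is a minimal Python sketch, with made-up numbers, of the check described above: place last year’s actual result on the model’s simulated distribution.  The normal return assumption and the 2008-scale loss are illustrative, not taken from any particular model.

```python
import random

random.seed(0)

# Hypothetical model output: 10,000 simulated one-year equity returns
# (normal with 8% mean, 16% volatility, a typical pre-2008 style assumption).
simulated = [random.gauss(0.08, 0.16) for _ in range(10_000)]

def percentile_of(actual: float, sims: list[float]) -> float:
    """Fraction of simulated outcomes at least as bad as the actual result."""
    return sum(1 for x in sims if x <= actual) / len(sims)

actual_2008 = -0.38   # roughly a 2008-scale equity loss, for illustration
p = percentile_of(actual_2008, simulated)
print(f"model says {p:.4%} of scenarios were at least this bad")
# A result this far out in the tail is exactly the kind of finding that should
# prompt the discussion described above.
```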

But in the end, it is better to have the discussion.  It will give the management users a healthy skepticism for the model and more of an appreciation for the uses and limitations of the entire modeling methodology.

These discussions should lead to understanding and balance.  Enough understanding that there is a balanced view of the model.  Not total reliance and not total skepticism.

No Thanks, I have enough “New”

September 24, 2009

It seems sad when 75 year old businesses go bust.  They had something that worked for several generations of managers, employees and investors.  And now they are gone.  How could that be?

There are two ways that old businesses can come to their demise.  They can do it because they stick to what they know and their product or service  (usually) slowly goes out of fashion.  Usually slowly, because all but the most ossified large successful companies can adapt enough to keep going for quite some time, even when faced by competition with a better business model/product or service.  Think of the US auto industry slowly declining for 40 years.

The second way is a quick demise. This usually happens after the old company chooses to completely embrace something completely new.  If their historic business is in decline, many large old firms are on the look out for that new transformational thing.  The mistake that they sometimes make is to be in much too much of a hurry. They want to apply their size advantage to the new thing and start getting economies of scale in addition to early adopter advantages.

The failure rate of new business is very, very high.  A big business that jumps to putting a large amount of its resources into the new business will be transforming a solid longstanding business effectively into a start-up.  But rarely do the big businesses in restart mode deliver anything like start-up returns.  So investors bear the risks of the start-up with returns only slightly higher than long-term averages.

This is a clear example of when the CEO needs to be the risk manager.  The established firm needs to have a limit for “New” businesses.  The plan for the new business should reflect an orderly transition between the franchise business and what MAY become the new franchise.  This requires the CEO to have a time frame in mind that is appropriate for a business that may have existed before he/she was born and that, if the risks are managed well, should exist long after they are gone.

There are good underlying reasons why the “New” needs to be limited for a company with long term survival plans.  “New” involves several risks that a well established firm may have mastered a generation ago and have relegated to the corporate unconscious.

The first is execution risk.  The established firm will doubtless be excellent at execution of its franchise business.  But the “New” will doubtless require different execution.  An example of this from the insurance industry: when US life insurers started into equity-linked products, many of them experienced severe execution problems.  Their traditional products involved collecting cash and putting it into their general fund.  They only provided annual information to their customers, if any.  Their administrative systems and procedures were set up within an environment that was not particularly time sensitive.  The money was in the right place, so their accounting could catch up “whenever”.  With the new equity-linked products, exacting execution was important.  Money was not left in the general fund of the insurer but needed to be transferred to the investment manager within three days of receipt.  So insurers adapted to this new world by getting to the accounting and cash transfers “whenever” but crediting the customer with the performance of their chosen equity fund within the legal 3-day limit.  This worked out fine while small timing delays created some small gains and some small losses for the insurers.  But the extended bull market of the late 1990s made for a repeated loss, because the delay of processing and cash transfer meant that the insurer was commonly backdating to a lower purchase price for the shares than what they paid.  Some large old insurers who had jumped into this new world with both feet were losing millions to this simple execution risk.  In addition, those who were slow to fix things got hit on the way down as well.  When the Internet bubble popped, there were many, many calls for customer funds to be taken out of the equity funds.  Slow processing meant that they paid out at a higher rate than what they received from their delayed transactions with the investment funds.

The insurers had a well established set of operational procedures that actually put them at a disadvantage compared to start-ups in the same business.
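Here is a small hypothetical arithmetic sketch of that execution risk: the customer is credited with units at the receipt-date price, while the insurer actually buys the units days later at a higher price.  The premium and unit prices are invented for illustration.

```python
# Hypothetical illustration of the 3-day processing lag described above.
premium = 1_000_000.00       # customer cash received on day 0
nav_day0 = 25.00             # unit price credited to the customer (day of receipt)
nav_day3 = 25.40             # unit price actually paid when cash is transferred

units_credited = premium / nav_day0          # units the customer is owed
cost_to_insurer = units_credited * nav_day3  # what it costs to buy those units late

print(f"loss from the lag: {cost_to_insurer - premium:,.2f}")
# In a sustained bull market this small per-transaction slippage repeats on every
# deposit; on the way down, delayed redemptions create the mirror-image loss.
```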

The second is the “unknown” risk.  A firm that has been operating for many years is often very familiar with the risks of its franchise business.  In fact, their approach to risk management for that business may well be so ingrained, that it is no longer considered a high priority.  It just happens.  And the risk management systems that have been in place may work well with little active top management attention.  These organizations are usually not very well positioned to be able to notice and prepare for the new unknown risks that the new business will have.

The third is the “Unknowable”.  For a new activity, product or business, you just cannot tell what the periodicity of loss events or the severity of those events will be.  That was one of the mistakes in the sub prime market.  The mortgage market has about a 15-year periodicity.  Since a large percentage of people operating in the sub prime space were not in that market the last time there was a downturn, they had no personal experience with the normal cycle of losses in the mortgage market.  Then there was the unknowable impact of the new mortgage products and the drastic expansion into sub prime.  It was just unknowable what would be the periodicity and severity of losses in the “new” mortgage market.

So the point is that what was observed about prior “new” things can be learned and extrapolated to future “new” things.

But the solution is not to never do anything “new”, it is to keep the “new” reasonable in proportion to the rest of the organization, to put limits on “new” just like there are limits on any other major aspect of risk.

How Many Dependencies did you Declare?

September 12, 2009

Correlation is a statement of historical fact.  The measurement of correlation does not necessarily give any indication of future tendencies unless there is a clear interdependency or lack thereof.  That is especially true when we seek to calculate losses at probability levels that are far outside the size of the historical data set.  (If you want to calculate a 1/200 loss and have 50 years of data, you have 25% of 1 observation)
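Here is a minimal Python sketch of that “25% of one observation” point: with 50 annual data points, a naive 1-in-200 loss estimate is driven entirely by the single worst observation and jumps around from one equally plausible history to another.  The loss distribution used to generate the histories is an arbitrary assumption.

```python
import random

random.seed(42)

def one_in_200_estimate(history: list[float]) -> float:
    """Naive empirical tail estimate: with 50 points this is just the worst year seen."""
    return sorted(history)[0]

# Draw many alternative 50-year histories from the same (assumed) return distribution
# and see how much the "1-in-200" estimate moves around.
estimates = []
for _ in range(1000):
    history = [random.gauss(0.05, 0.12) for _ in range(50)]
    estimates.append(one_in_200_estimate(history))

estimates.sort()
print("range of '1-in-200' estimates across equally plausible histories:")
print(f"  mildest : {estimates[-1]:.3f}")
print(f"  median  : {estimates[500]:.3f}")
print(f"  harshest: {estimates[0]:.3f}")
```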

Using historical correlations in the absence of understanding the actual interdependencies can result in drastic problems.

An example is the sub primes.  One of the key differences between what actually happened and the models used prior to the collapse of these markets is that historical correlations were used to drive the models for sub primes.  The correlations were between regions.  Historically, there had been low correlations between mortgage default rates in different regions of the US.  Unfortunately, those correlations were an artifact of regional unemployment-driven defaults, and unemployment is not the only factor that affects defaults.  The mortgage market had changed drastically from the period over which the defaults were measured.  Mortgage lending practices changed in most of the larger markets.  The prevalence of modified payment mortgages meant that the relationship between mortgages and income was changing as the payments shifted.  In addition, the amount of mortgage granted compared to income also shifted drastically.

So the long term low regional correlations were no longer applicable to the new mortgage market, because the market had changed.  The historical correlation was still a true fact, but it did not have much predictive power.
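Here is a minimal simulated illustration (not sub prime data) of how a correlation measured in one regime says little about the next: two regions’ default rates are nearly independent while driven by local unemployment, then move together once a common nationwide driver takes over.  All parameters are assumptions.

```python
import random
import statistics

random.seed(7)

def correlation(xs: list[float], ys: list[float]) -> float:
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Old regime: defaults driven by independent regional unemployment shocks.
old_a = [0.02 + random.gauss(0, 0.005) for _ in range(200)]
old_b = [0.02 + random.gauss(0, 0.005) for _ in range(200)]

# New regime: a common nationwide driver (lending standards, house prices)
# dominates both regions.
common = [random.gauss(0, 0.02) for _ in range(200)]
new_a = [0.04 + c + random.gauss(0, 0.005) for c in common]
new_b = [0.04 + c + random.gauss(0, 0.005) for c in common]

print("correlation measured on the old regime:", round(correlation(old_a, old_b), 2))
print("correlation in the changed market:     ", round(correlation(new_a, new_b), 2))
# A model parameterized with the first number badly understates how the second
# environment behaves.
```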

And it makes some sense to talk about interdependency rising in extreme events.  Just like in the subprime situation, there are drivers of risks that shift into new patterns because systems exceed their carrying capacity.

Everything that is dependent on confidence in the market may not correlate in most times, but that interdependency will show through when confidence is shaken.  In addition to confidence, financial market instruments may also be dependent on the level of liquidity in the markets.  Is confidence in the market a stochastic variable in the risk models?  It should be – it is one of the main drivers of levels of correlation of otherwise unrelated activities.

So before jumping to using correlations, we must seek to understand dependencies.

Are You Sure About That?

September 6, 2009

Most risk models consist of a series of best guesses for the size of each risk. Some of the risks are very well known. The risk models here have relatively little uncertainty. They are mostly models of volatility, where there is a long history of past volatility and good reason to expect future volatility to be similar. Others of the risks have little or no track record. The volatility assumptions in these models are based on extensions of information from other situations. There may be very high degrees of uncertainty in the parameters for these models. However, many of the folks who build the models believe for various reasons that reflecting parameter uncertainty is too cautious an approach to the risk model and adds so much to the risk evaluation that it makes the risk model unusable. The numbers from both types of risk are usually just added together or presented on the same page with no distinction between their credibility. So it seems that the users of risk models are faced with two choices – to have risk models that reflect high potential risk for new and untested risks and therefore stifle participation in new business opportunities and risk models that sometimes drastically understate the risks.

The alternative is to keep track of many different aspects of risk and pay attention to all of them.  See Multidimensional risk.

Then everyone can know that the economic capital or any other comprehensive risk measurement does NOT reflect the degree of uncertainty, but that another report gives information about uncertainty.

The report on uncertainty might look at each of the risks and give an indication of the level of uncertainty of each of the values in the economic capital.  So it might say that 75% of economic capital comes from risks with low uncertainty, 20% moderate and 5% high uncertainty.

Even more revealing, profits could be analyzed in the same manner.  That might help to show how much of profits are coming from activities with higher uncertainty – a dangerous situation that should trigger a high degree of concern among management.
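Here is a minimal Python sketch of the kind of companion report described above: tag each risk’s capital and profit contribution with a judgmental uncertainty level and summarize.  The risk names, amounts, and tags are hypothetical.

```python
from collections import defaultdict

# Hypothetical risk inventory: (risk, economic capital, profit, uncertainty tag)
risks = [
    ("equity market",    300, 40, "low"),
    ("interest rate",    250, 30, "low"),
    ("mortality",        200, 25, "low"),
    ("new product line",  80, 35, "high"),
    ("catastrophe",      120, 15, "moderate"),
    ("credit",            50, 10, "moderate"),
]

def breakdown(index: int) -> dict[str, float]:
    """Share of a column (1 = capital, 2 = profit) by uncertainty level."""
    totals, grand = defaultdict(float), 0.0
    for row in risks:
        totals[row[3]] += row[index]
        grand += row[index]
    return {level: amt / grand for level, amt in totals.items()}

print("economic capital by uncertainty:", {k: f"{v:.0%}" for k, v in breakdown(1).items()})
print("profit by uncertainty:          ", {k: f"{v:.0%}" for k, v in breakdown(2).items()})
# A large share of profit coming from "high" uncertainty risks is the warning
# sign described above.
```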


Good data, Models, Instincts and statistics

September 2, 2009

Guest Post from Jawwad Farid

http://alchemya.com/wordpress2/

Risk and transaction systems differ in many ways. But they both hinge on the same two things – good data and working models. On a risk platform the integrity of the data set is dependent on the underlying transaction platform and the quality of data feeds. Keeping the incoming stream of information clean and ensuring that the historical data set remains pure is a full time job. The resources allocated to this problem show how committed and reliant an organization is to its risk systems.

In organizations still ruled by the compliance-driven checklist mindset, you will find that it is sufficient to simply generate risk reports. It is sufficient because no one really looks at the results, and when they do, in most cases they may not have any idea how to interpret them or, even worse, how to work with the numbers to understand the challenges they represent for that organization’s future.

The same problem haunts the modeling domain. It is not sufficient to have a model in place. It is just as important to understand how it works and how it will fail. But once again as long as a model exists and as long as it produces something on a periodic basis, most Boards in the region feel they have met the necessary and sufficient condition for risk management.

Is there anything that we can do to change this mindset and fix this problem?

One could start with the confusion at Board level between Risk and the underlying transaction. A market risk platform is a very different animal from the underlying treasury transaction. The common ground however is the pricing model and market behavior, the uncommon factor is the trader’s instinct and his gut. Where risk and the transaction systems clash is on the uncommon ground. Instincts versus statistics!

The instinct and gut effect is far more prominent on the credit side. Relationships and strategic imperatives drive the credit business. Analytics and models drive the credit risk side. The credit business is “name” based, dominated by subjective factors that assess the relationship, one at a time. There is some weight assigned to sector exposure and concentration limits at the portfolio level but the primary “lend”, “no lend” call is still relationship based. The credit risk side on the other hand is scoring, behavior and portfolio based. A payment delay is a payment delay, a default is a default. While the softer side can protect the underlying relationship and possibly increase the chances of recovery and help attain “current” status more quickly, the job of a risk system is to document and highlight exceptions and project their impact on the portfolio. A risk system focuses on the trend. While it is interested in the cause of the underlying event, the interest is purely mathematical; there is no human side.

I asked earlier if there is anything we can do to change. To begin with, Boards need to spend more time and allocate more resources to the risk debate. Data, models and reports are not enough. They need to be poked, challenged, stressed, understood, grown and invested in. Two hours once a quarter for a Board Risk Committee meeting is not sufficient time to dissect the effectiveness of your risk function. You may as well close your eyes and ignore it.

But before you do that remember hell hath no fury like a risk scorned.

Models & Manifesto

September 1, 2009

Have you ever heard anyone say that their car got lost? Or that they got into a massive pile-up because it was a 1-in-200-year event that someone drove on the wrong side of a highway? Probably not.

But statements similar to these have been made many times since mid-2007 by CEOs and risk managers whose firms have lost great sums of money in the financial crisis. And instead of blaming their cars, they blame their risk models. In the 8 February 2009 Financial Times, Goldman Sachs’ CEO Lloyd Blankfein said “many risk models incorrectly assumed that positions could be fully hedged . . . risk models failed to capture the risk inherent in off-balance sheet activities,” clearly placing the blame on the models.

But in reality, it was, for the most part, the modellers, not the models, that failed. A car goes where the driver steers it and a model evaluates the risks it is designed to evaluate and uses the data the model operator feeds into the model. In fact, isn’t it the leadership of these enterprises that are really responsible for not clearly assessing the limitations of these models prior to mass usage for billion-dollar decisions?

But humans, who to varying degrees all have a limit to their capacity to juggle multiple inter-connected streams of information, need models to assist with decision-making at all but the smallest and least complex firms.

These points are all captured in the Financial Modeler’s Manifesto from Paul Wilmott and Emanuel Derman.

But before you use any model you did not build yourself, I suggest that you ask the model builder if they have read the manifesto.

If you do build models, I suggest that you read it before and after each model building project.

The Black Swan Test

August 31, 2009

Many commentators have suggested that firms need to do stress tests to examine their vulnerability to adverse situations that are not within the data set used to parameterize their risk models.  In the article linked below, I suggest the adoption of a terminology to describe stress tests and also a methodology that can be adopted by any risk model user to test and communicate a test of the stability of model results.  This method can be called a Black Swan test.  The terminology would be to set one Black Swan equal to the most adverse data point.  A one Black Swan stress test would be a test of a repeat of the worst event in the data set.  A two Black Swan stress test would be a test of experience twice as adverse as the worst data point.

So for credit losses for a certain class of bonds, if the historical period worst loss was 2 percent, then a 1BLS stress test would be a 2 percent loss, a 4 percent loss a 2BLS stress test, etc.

Article

Further, the company could state their resiliency in terms of Black Swans. For example:

Tests show that the company can withstand a 3.5BLS stress test for credit and a 4.2BLS for equity risk and a simultaneous 1.7BLS credit and equity stress.

Similar terminology could be used to describe a test of model stability. A 1BLS model stability test would be performed by adding a single additional point to the data used to parameterize the model. So a 1BLS model stability test would involve adding a single data point equal to the worst point in the data set. A 2BLS test would be adding a data point that is twice as bad as the worst point.
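Here is a minimal Python sketch of the terminology above: a stress expressed as a multiple of the worst historical data point, and a stability test performed by refitting after appending such a point.  The loss history and the crude mean-and-standard-deviation refit are illustrative assumptions.

```python
import statistics

# Hypothetical annual credit-loss history (fractions of the portfolio).
history = [0.002, 0.004, 0.020, 0.007, 0.001, 0.011, 0.005, 0.009]

worst = max(history)                 # one Black Swan = the worst observed loss

def bls_stress(n_bls: float) -> float:
    """An n-Black-Swan stress: n times as adverse as the worst data point."""
    return n_bls * worst

print(f"1 BLS stress loss: {bls_stress(1):.1%}")   # 2.0% here
print(f"2 BLS stress loss: {bls_stress(2):.1%}")   # 4.0%

def refit(data: list[float]) -> tuple[float, float]:
    """Crude parameterization: mean and standard deviation of the loss history."""
    return statistics.mean(data), statistics.stdev(data)

def bls_stability(n_bls: float) -> tuple[float, float]:
    """Refit after appending a point n times as bad as the worst observation."""
    return refit(history + [n_bls * worst])

print("parameters as fitted:        ", tuple(round(x, 4) for x in refit(history)))
print("after a 1 BLS stability test:", tuple(round(x, 4) for x in bls_stability(1)))
print("after a 2 BLS stability test:", tuple(round(x, 4) for x in bls_stability(2)))
# Large jumps in the fitted parameters from one extra point signal an unstable model.
```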

Multi dimensional Risk Management

August 28, 2009

Many ERM programs are one dimensional. They look at VaR or they look at Economic Capital. The multi-dimensional risk manager considers volatility, ruin, and everything in between. They consider not only types of risk that are readily quantifiable, but also those that may be extremely difficult to measure. The following is a partial listing of the risks that a multidimensional risk manager might examine:
o Type A Risk – Short-term volatility of cash flows in one year
o Type B Risk – Short-term tail risk of cash flows in one year
o Type C Risk – Uncertainty risk (also known as parameter risk)
o Type D Risk – Inexperience risk relative to full multiple market cycles
o Type E Risk – Correlation to a top 10
o Type F Risk – Market value volatility in one year
o Type G Risk – Execution risk regarding difficulty of controlling operational losses
o Type H Risk – Long-term volatility of cash flows over five or more years
o Type J Risk – Long-term tail risk of cash flows over five years or more
o Type K Risk – Pricing risk (cycle risk)
o Type L Risk – Market liquidity risk
o Type M Risk – Instability risk regarding the degree that the risk parameters are stable

Many of these types of risk can be measured using a comprehensive risk model, but several are not necessarily directly measurable.  But the multidimensional risk manager realizes that you can get hurt by a risk even if you cannot measure it.
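One possible way to keep the unmeasurable dimensions on the same page as the modeled ones is a simple risk register that tags each type.  This is only a sketch: the letters follow the list above, while the “measurable” flags and example readings are assumptions, not part of the original list.

```python
from dataclasses import dataclass

@dataclass
class RiskDimension:
    code: str              # letter from the list above
    name: str
    model_measured: bool   # can a comprehensive risk model quantify it directly?
    current_reading: str   # a number if measured, otherwise a judgment

# A few illustrative entries (flags and readings are assumptions):
register = [
    RiskDimension("A", "Short-term volatility of cash flows", True,  "sigma = 4.2% of revenue"),
    RiskDimension("C", "Uncertainty (parameter) risk",        False, "high for the new line of business"),
    RiskDimension("L", "Market liquidity risk",               False, "watchlist: two thinly traded holdings"),
]

for r in register:
    tag = "model" if r.model_measured else "judgment"
    print(f"Type {r.code} [{tag:8}] {r.name}: {r.current_reading}")
# Keeping the unmeasurable dimensions alongside the modeled ones is the point:
# you can get hurt by a risk even if you cannot measure it.
```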

Know your embedded assumptions

August 27, 2009

An implicit assumption in the way that many practitioners use financial models is that their planned activity is marginal to the market.  If you ask the manager of a large mutual fund about that assumption, they will generally laugh out loud.  They are well aware that their trades must be made carefully to avoid moving the market price.  Often they will build up a position over a period of time based upon the normal flow of trading in a security.  That is a very micro example of non-marginality.

What happened with the sub-prime mortgage market was a drastic shift in activity that was clearly not marginal.  When the volume of sub prime mortgages rose 10 fold, there were two major changes that occurred.  First, the sub prime mortgages were no longer going to a marginally more creditworthy subset of the folks who would technically fall into the sub prime class; they were going to anyone in that class.  Any prior experience factors that were observed of the highly select sub prime folks would not apply to the average sub prime folks.  So what was true on the margin is not true in general.  The second marginal issue is the change in the real estate market that was driven by the non-marginal amount of new sub prime buyers who came into the market.  On the way up, this expansion in the number of folks who could buy houses helped to drive the late stages of the price run up because of that increased demand.  That increase in price fed into the confidence of the market participants who were feeding money into the market.

Risk managers should always be aware that marginal analysis can produce incorrect results.  They should follow my mother’s caution: “what if everybody did that?”

