Archive for the ‘Volatility’ category

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk: quantitative and qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” it means to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that the process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes and the tail, particularly the adverse tail of the distribution where the risk calculations actually take place and where there is rarely if ever any data.

In broad terms, there are only a few possibilities for these subjective decisions…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis, precisely because they are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to over play that trend.  This process is called “bubbles”.  When past history suggests an unfavorable trend, human contagion also over plays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this “Fat Tails”.  It is seen as an unusual model only because it was so common to use the zero contagion model with the simpler maths.
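As a rough illustration (my own sketch, not from the original post, and with entirely made-up numbers), a small simulation shows how even occasional contagion, here modeled as a market-wide shock that raises every policy's claim rate at once, fattens the tail of an otherwise benign aggregate loss distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

def aggregate_claims(trials=50000, policies=1000, base_p=0.01,
                     shock_prob=0.0, shocked_p=0.10):
    """Simulate total claim counts across many trial years.

    With probability shock_prob, a market-wide shock (contagion) raises
    every policy's claim rate from base_p to shocked_p for that year."""
    shocked = rng.random(trials) < shock_prob
    p = np.where(shocked, shocked_p, base_p)
    return rng.binomial(policies, p)

independent = aggregate_claims(shock_prob=0.0)   # zero contagion model
contagion = aggregate_claims(shock_prob=0.05)    # occasional common shock

q_ind = np.quantile(independent, 0.999)
q_con = np.quantile(contagion, 0.999)
print(f"99.9th percentile, no contagion: {q_ind:.0f} claims")
print(f"99.9th percentile, contagion:    {q_con:.0f} claims")
```

The zero contagion model puts the 1-in-1000 year near 21 claims; adding a 5% chance of a common shock pushes it past 100, even though most years look identical in the two models.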

RISKVIEWS suggests that when communicating that the approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified.  An assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1-in-1000-year loss is about 3 standard deviations worse than the mean.  So if we express that 1-in-1000-year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model:

[Figure: a simple scale of model riskiness, expressed as the 1-in-1000-year loss in multiples of the standard deviation]

So in the end the choice is to insert an opinion about the steepness of the ramp up between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with the Richter Scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.
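To make the "multiples of standard deviation" idea concrete: under a normal (zero contagion) model the 1-in-1000-year outcome sits near 3.1 standard deviations from the mean, while a fatter-tailed model such as a Student-t puts it much further out.  A quick check using scipy (my illustration; the choice of a t with 4 degrees of freedom is arbitrary):

```python
from scipy import stats

# 1-in-1000 adverse outcome under a standard normal model:
normal_mult = stats.norm.ppf(0.999)       # about 3.09 standard deviations

# Same percentile under a Student-t with 4 degrees of freedom,
# rescaled so that its standard deviation is also 1:
df = 4
t_sd = (df / (df - 2)) ** 0.5             # sd of an unscaled t(4) variable
t_mult = stats.t.ppf(0.999, df) / t_sd    # about 5 standard deviations

print(f"normal: {normal_mult:.2f} sd,  t(4): {t_mult:.2f} sd")
```

Same data in the middle of the distribution, very different opinions about the ramp up to the extreme loss: that difference is exactly what a scale like the one above would make visible to management.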

*                  *                *               *             *                *

So now we go to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where for some reason we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple-of-standard-deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Insanity is . . .

May 11, 2014

Albert Einstein is famously quoted as saying that

Insanity is doing the same thing over and over and expecting different results.

Of course, risk management is based upon an assumption that if you do something that is risky over and over again, you will get different results.  And that through careful management of the possibilities, you can, on the average, over time, get acceptable results, both in terms of sufficient gains and limited losses.

But there is also an embedded assumption in that statement that is hidden.  The statement should include the standard caveat “all else being equal”.

But in fact, all else is NEVER equal.  At least not out in the world where things count.  Not for things that involve people.  Out in the real world, one can count on the same result from the same actions, but only for a while.

All else never stays the same in the situations where people are involved because people rarely continue to follow rules like the rules of physics.  People keep changing what they do.

Consider, for example, the ideas of Hyman Minsky regarding the changing approach to credit.  People just do not leave things alone.  With credit, people are constantly seeking to get just a little more juice out of the lemon.

Free Download of Valuation and Common Sense Book

December 19, 2013

RISKVIEWS recently got the material below in an email.  This material seems quite educational and also somewhat amusing.  The authors keep pointing out the extreme variety of actual detailed approaches that depart from any single theory in the academic literature.

For example, the figure below plots Required Equity Premium by publication date of book.

[Figure: Required Equity Premium by publication date of book]

You get a strong impression from reading this book that all of the concepts of modern finance are extremely plastic and/or ill defined in practice. 

RISKVIEWS wonders if that is in any way related to the famous Friedman principle that economics models need not be at all realistic.  See post Friedman Model.

===========================================

Book “Valuation and Common Sense” (3rd edition).  May be downloaded for free

The book has been improved in its 3rd edition. Main changes are:

  1. Tables (with all calculations) and figures are available in excel format in: http://web.iese.edu/PabloFernandez/Book_VaCS/valuation%20CaCS.html
  2. We have added questions at the end of each chapter.
  3. 5 new chapters:

Chapters

Downloadable at:

32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

The book explains the nuances of different valuation methods and provides the reader with the tools for analyzing and valuing any business, no matter how complex. The book has 326 tables, 190 diagrams and more than 180 examples to help the reader. It also has 480 readers’ comments of previous editions.

The book has 36 chapters. Each chapter may be downloaded for free at the following links:

Chapters

Downloadable at:

     Table of contents, acknowledgments, glossary http://ssrn.com/abstract=2209089
1   Company Valuation Methods http://ssrn.com/abstract=274973
2   Cash Flow is a Fact. Net Income is Just an Opinion http://ssrn.com/abstract=330540
3   Ten Badly Explained Topics in Most Corporate Finance Books http://ssrn.com/abstract=2044576
4   Cash Flow Valuation Methods: Perpetuities, Constant Growth and General Case http://ssrn.com/abstract=743229
5   Valuation Using Multiples: How Do Analysts Reach Their Conclusions? http://ssrn.com/abstract=274972
6   Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories http://ssrn.com/abstract=256987
7   Three Residual Income Valuation Methods and Discounted Cash Flow Valuation http://ssrn.com/abstract=296945
8   WACC: Definition, Misconceptions and Errors http://ssrn.com/abstract=1620871
9   Cash Flow Discounting: Fundamental Relationships and Unnecessary Complications http://ssrn.com/abstract=2117765
10 How to Value a Seasonal Company Discounting Cash Flows http://ssrn.com/abstract=406220
11 Optimal Capital Structure: Problems with the Harvard and Damodaran Approaches http://ssrn.com/abstract=270833
12 Equity Premium: Historical, Expected, Required and Implied http://ssrn.com/abstract=933070
13 The Equity Premium in 150 Textbooks http://ssrn.com/abstract=1473225
14 Market Risk Premium Used in 82 Countries in 2012: A Survey with 7,192 Answers http://ssrn.com/abstract=2084213
15 Are Calculated Betas Good for Anything? http://ssrn.com/abstract=504565
16 Beta = 1 Does a Better Job than Calculated Betas http://ssrn.com/abstract=1406923
17 Betas Used by Professors: A Survey with 2,500 Answers http://ssrn.com/abstract=1407464
18 On the Instability of Betas: The Case of Spain http://ssrn.com/abstract=510146
19 Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2191044
20 A solution to Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2217604
21 Valuation of an Expropriated Company: The Case of YPF and Repsol in Argentina http://ssrn.com/abstract=2176728
22 1,959 valuations of the YPF shares expropriated to Repsol http://ssrn.com/abstract=2226321
23 Internet Valuations: The Case of Terra-Lycos http://ssrn.com/abstract=265608
24 Valuation of Internet-related companies http://ssrn.com/abstract=265609
25 Valuation of Brands and Intellectual Capital http://ssrn.com/abstract=270688
26 Interest rates and company valuation http://ssrn.com/abstract=2215926
27 Price to Earnings ratio, Value to Book ratio and Growth http://ssrn.com/abstract=2212373
28 Dividends and Share Repurchases http://ssrn.com/abstract=2215739
29 How Inflation destroys Value http://ssrn.com/abstract=2215796
30 Valuing Real Options: Frequently Made Errors http://ssrn.com/abstract=274855
31 119 Common Errors in Company Valuations http://ssrn.com/abstract=1025424
32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

I would very much appreciate any of your suggestions for improving the book.

Best regards,
Pablo Fernandez

Getting Paid for Risk Taking

April 15, 2013

Consideration for accepting a risk needs to be at a level that will sustain the business and produce a return that is satisfactory to investors.

Investors usually want additional return for extra risk.  This is one of the most misunderstood ideas in investing.

“In an efficient market, investors realize above-average returns only by taking above-average risks.  Risky stocks have high returns, on average, and safe stocks do not.”

Baker, M., Bradley, B., and Wurgler, J., “Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly”

But their study found that stocks in the top quintile of trailing volatility had a real return of -90% vs. a real return of 1000% for the stocks in the bottom quintile.

But the thinking is wrong.  Excess risk does not produce excess return.  The cause and effect are wrong in the conventional wisdom.  The original statement of this principle may have been

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
Alfred Marshall 1890 Principles of Economics

Marshall has it right.  There are only “hopes” of great gains.  There is no invisible hand that forces higher risks to return higher gains.  Some of the higher risk investment choices are simply bad choices.

Insurers’ opportunity to make “great gains” out of “risks of great losses” comes when they are determining what consideration, or price, they will require to accept a risk.  Most insurers operate in competitive markets that are not completely efficient.  Individual insurers do not usually set the price in the market, but there is a range of prices at which insurance is purchased in any time period.  Certainly the process that an insurer uses to determine the price that makes a risk acceptable to accept is a primary determinant of the profits of the insurer.  If that price contains a sufficient load for the extreme risks that might threaten the existence of the insurer, then over time the insurer has the ability to hold and maintain sufficient resources to survive some large loss situations.

One common goal conflict that leads to problems with pricing is the conflict between sales and profits.  In insurance as in many businesses, it is quite easy to increase sales by lowering prices.  In most businesses, it is very difficult to keep up that strategy for very long, as the lower profits or losses from inadequate prices are quickly realized.  But in insurance, the premiums are paid in advance, sometimes many years in advance of when the insurer must provide the promised insurance benefits.  If provisioning is tilted towards the point of view that supports the consideration, the pricing deficiencies will not be apparent for years.  So insurance is particularly susceptible to the tension between volume of business and margins for risk and profits, and since sales is a more fundamental need than profits, the margins often suffer.

As just mentioned, insurers simply do not know for certain what the actual cost of providing an insurance benefit will be.  Not with the degree of certainty that businesses in other sectors can know their cost of goods sold.  The appropriateness of pricing will often be validated in the market.  Follow-the-leader pricing can lead a herd of insurers over the cliff.  The whole sector can get pricing wrong for a time.  Until, sometimes years later, the benefits are collected and their true cost is known.

“A decade of short sighted price slashing led to industry losses of nearly $3 billion last year.”  Wall Street Journal June 24, 2002

Pricing can also go wrong on an individual case level.  The “Winner’s Curse” sends business to the insurer who most underestimates the riskiness of a particular risk.

There are two steps to reflecting risk in pricing.  The first step is to capture the expected loss properly.  Most of the discussion above relates to this step, and the major part of pricing risk comes from the possibility of missing that step.  The second step is to appropriately reflect all aspects of the risk that the actual losses will be different from expected.  There are many ways that such deviations can manifest.
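The two steps above can be sketched as a toy pricing formula (entirely illustrative numbers of my own): the premium is the expected loss from step one, plus a deviation margin for step two, here taken as a judgmental multiple of the standard deviation of the loss distribution.

```python
# Toy two-step risk pricing: expected loss plus a deviation margin.
# All figures are hypothetical, for illustration only.

expected_loss = 1_000_000   # step 1: best-estimate cost of claims
loss_std_dev = 250_000      # spread of outcomes around that estimate
risk_multiple = 0.5         # judgmental loading, e.g. half a standard deviation
expenses = 150_000

risk_margin = risk_multiple * loss_std_dev
premium = expected_loss + risk_margin + expenses

print(f"risk margin: {risk_margin:,.0f}")
print(f"indicated premium: {premium:,.0f}")
```

Everything that can go wrong in step two is hiding inside `loss_std_dev` and `risk_multiple`: if either understates the ways actual losses can differ from expected, the premium will look adequate right up until it isn't.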

The following is a partial listing of the risks that might be examined:

• Type A Risk—Short-Term Volatility of cash flows in 1 year
• Type B Risk—Short-Term Tail Risk of cash flows in 1 year
• Type C Risk—Uncertainty Risk (also known as parameter risk)
• Type D Risk—Inexperience Risk relative to full multiple market cycles
• Type E Risk—Correlation to a top 10
• Type F Risk—Market value volatility in 1 year
• Type G Risk—Execution Risk regarding difficulty of controlling operational losses
• Type H Risk—Long-Term Volatility of cash flows over 5 or more years
• Type J Risk—Long-Term Tail Risk of cash flows over 5 years or more
• Type K Risk—Pricing Risk (cycle risk)
• Type L Risk—Market Liquidity Risk
• Type M Risk—Instability Risk regarding the degree to which the risk parameters are stable

See “Risk and Light” or “The Law of Risk and Light”.

There are also many different ways that risk loads are specifically applied to insurance pricing.  Three examples are:

  • Capital Allocation – Capital is allocated to a product (based upon the provisioning) and the pricing then needs to reflect the cost of holding the capital.  The cost of holding capital may be calculated as the difference between the risk free rate (after tax) and the hurdle rate for the insurer.  Some firms alternately use the difference between the investment return on the assets backing surplus (after tax) and the hurdle rate.  This process assures that the pricing will support achieving the hurdle rate on the capital that the insurer needs to hold for the risks of the business.  It does not reflect any margin for the volatility in earnings that the risks assumed might create, nor does it necessarily include any recognition of parameter risk or general uncertainty.
  • Provision for Adverse Deviation – Each assumption is adjusted to provide for worse experience than the mean or median loss.  The amount of stress may be at a predetermined confidence interval (Such as 65%, 80% or 90%).  Higher confidence intervals would be used for assumptions with higher degree of parameter risk.  Similarly, some companies use a multiple (or fraction) of the standard deviation of the loss distribution as the provision.  More commonly, the degree of adversity is set based upon historical provisions or upon judgement of the person setting the price.  Provision for Adverse Deviation usually does not reflect anything specific for extra risk of insolvency.
  • Risk Adjusted Profit Target – Using either or both of the above techniques, a profit target is determined and then that target is translated into a percentage of premium or assets to make for a simple risk charge when constructing a price indication.
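The first bullet above, the capital allocation approach, can be sketched with hypothetical numbers (my illustration; the allocated capital, hurdle rate, and after-tax risk-free rate are all assumptions):

```python
# Capital allocation risk load, per the Capital Allocation bullet above.
# All rates and amounts are hypothetical.

premium = 1_000_000
allocated_capital = 400_000    # capital held for this product's risks
hurdle_rate = 0.10             # investors' required return on that capital
risk_free_after_tax = 0.03     # after-tax return the capital earns on its own

# Pricing must recover the shortfall between what the capital earns
# and what investors require on it:
cost_of_capital = allocated_capital * (hurdle_rate - risk_free_after_tax)
load_pct = cost_of_capital / premium

print(f"cost of capital charge: {cost_of_capital:,.0f}")
print(f"as a % of premium: {load_pct:.1%}")
```

Note what this 2.8%-of-premium load does and does not cover: it assures the hurdle rate is achieved on the capital held, but, as the bullet says, it carries no explicit margin for earnings volatility, parameter risk, or general uncertainty.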

The consequence of failing to recognize an aspect of risk in pricing will likely be that the firm will accumulate larger than expected concentrations of business with higher amounts of that risk aspect.  See “Risk and Light” or “The Law of Risk and Light”.

To get Consideration right you need to (1) regularly get a second opinion on price adequacy, either from the market or from a reliable experienced person; (2) constantly update your view of your risks in the light of emerging experience and market feedback; and (3) recognize that high sales are a possible market signal of underpricing.

This is one of the seven ERM Principles for Insurers

Is there a “Normal” Level for Volatility?

August 10, 2011

Much of modern Financial Economics is built upon a series of assumptions about the markets.  One of those assumptions is that the markets are equilibrium seeking.  If that were the case, it would seem that it would be possible to determine the equilibrium level, because things would be constantly tugging towards that level.
But look at volatility as represented by the VIX…

The above graph shows the VIX for 30 years.  It is difficult to see an equilibrium level in this graph.

What is Volatility?  It is actually a mixture of two main things, as well as anything else that the model forgets.  It is a forced value that balances the prices of equity options with risk free rates via the Black Scholes formula.

The two main items that are cooked into the volatility number are the market’s expectation of the future variability of returns on the stock market, and the risk premium or margin of error that the market wants to be paid for taking on the uncertainty of future transactions.
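The mechanics of that “forced value” can be sketched by backing an implied volatility out of the Black-Scholes formula.  (This is a simplified illustration with made-up option terms; the actual VIX is computed from a model-free weighted strip of option prices, not a single Black-Scholes inversion.)

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisect for the volatility that reproduces the observed option price.

    Works because the call price is monotonically increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: price an option at 20% vol, then recover the vol from the price.
price = bs_call(S=100, K=100, T=0.5, r=0.01, sigma=0.20)
print(f"implied vol: {implied_vol(price, 100, 100, 0.5, 0.01):.4f}")
```

The “forcing” is visible in the inversion: whatever the market pays for the option, some sigma will rationalize it, and that single number absorbs both the expected variability and the risk premium that the post describes.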

Looking at the decidedly smoother plot of annual values…


There does not seem to be any evidence that the actual variability of prices is unsteady.  It has been in the range of 20% since it drifted up from the range of 10%.  If there were going to be an equilibrium, this chart seems to show where it might be.  But the first chart shows that the market trades all over the place on volatility, not even necessarily around the level of the experienced volatility.

And much of that is doubtless the uncertainty, or risk premium.  The second graph does show that experienced volatility has drifted to twice the level that it was in the early 1990’s.  There is no guarantee that it will not double again.  The markets keep changing.  There is no reason to rely on these historical analyses.  Stories that the majority of trades today are computer driven very short term positions taken by hedge funds suggest that there is no reason whatsoever to think that the market of the next quarter will be in any way like the market of ten or twenty years ago.  If anything, you would guess that it will be much more volatile.  Those trading schemes make their money off of price movements, not stability.
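“Experienced” (realized) volatility of the kind plotted in the smoother graph is typically computed as the annualized standard deviation of daily log returns.  A sketch with simulated returns (my illustration; the 20% true volatility and normal returns are assumptions, chosen to match the level the post mentions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one year (252 trading days) of daily log returns
# with a true annual volatility of 20%.
true_annual_vol = 0.20
daily_vol = true_annual_vol / np.sqrt(252)
returns = rng.normal(0.0, daily_vol, size=252)

# Realized volatility: sd of daily returns, annualized by sqrt(252).
realized_vol = returns.std(ddof=1) * np.sqrt(252)
print(f"realized annual volatility: {realized_vol:.1%}")
```

Comparing a series of numbers like this one against the traded VIX is what separates the two ingredients above: the gap between them is, roughly, the uncertainty premium.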

So is there a normal level for volatility?  Doubtless not.   At least not in this imperfect world.  

Modeling Uncertainty

March 31, 2011

The message that Windows gives when you are copying a large number of files is a good example of an uncertain environment.  That process recently took over 30 minutes, and over the course of that time the message box was constantly flashing completely different information about the time remaining.  Over the course of one minute in the middle of that process, the readings were:

8 minutes remaining

53 minutes remaining

45 minutes remaining

3 minutes remaining

11 minutes remaining

It is not true that the answer is random.  But with the process that Microsoft has chosen to apply to the problem, the answer is certainly unknowable.  For an expected value to vary over a very short period of time by such a range – that is what I would think a model reflecting uncertainty would look like.
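Microsoft's actual estimator is not public, but the behavior above is what you would expect from a naive approach: divide the bytes remaining by the instantaneous transfer rate, so every momentary slowdown or speedup swings the estimate wildly.  A sketch of that guess next to a smoothed version (all transfer rates invented for illustration):

```python
# Naive vs. smoothed remaining-time estimates for a hypothetical file copy.
# The per-second transfer rates are made up; the point is the swing
# in the naive estimate, not the specific numbers.

total_mb = 3000.0
rates = [100, 12, 15, 180, 60, 10, 150, 90, 8, 120]  # MB/s, one per second

copied = 0.0
smoothed_rate = rates[0]
naive_estimates, smoothed_estimates = [], []
for rate in rates:
    copied += rate
    remaining = total_mb - copied
    # Naive: extrapolate the last instant's speed forward.
    naive_estimates.append(remaining / rate)
    # Smoothed: exponential moving average of the observed rate.
    smoothed_rate = 0.8 * smoothed_rate + 0.2 * rate
    smoothed_estimates.append(remaining / smoothed_rate)

spread = max(naive_estimates) - min(naive_estimates)
spread_smoothed = max(smoothed_estimates) - min(smoothed_estimates)
print(f"naive estimates swing by {spread:.0f}s, smoothed by {spread_smoothed:.0f}s")
```

Even the smoothed estimate is only as good as the assumption that the recent past predicts the near future, which is exactly the assumption that fails in a truly uncertain environment.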

An uncertain situation could be one where you cannot tell the mean or the standard deviation because there does not seem to be any repeatable pattern to the experience.

Those uncertain times are when the regular model – the one with the steady mean and variance – does not seem to give any useful information.

The flip side of the uncertain times and the model with unsteady mean and variance that represents those times is the person who expects that things will be unpredictable.  That person will be surprised if there is an extended period of time when experience follows a stable pattern, either good or bad or even a stable pattern centered around zero with gains and losses.  In any of those situations, the competitors of that uncertain expecting person will be able to use their models to run their businesses and to reap profits from things that their models tell them about the world and their risks.

The uncertainty expecting person is not likely to trust a model to give them any advice about the world.  Their model would not have cycles of predictable length.  They would not expect the world to even conform to a model with the volatile mean and variance of their expectation, because they expect that they would probably get the volatility of the mean and variance wrong.

That is just the way that they expect it will happen.   A new Black Swan every morning.

Correction, not every morning, that would be regular.  Some mornings.

What’s the Truth?

May 21, 2010

There has always been an issue with TRUTH with regard to risk.  At least there is when dealing with SOME PEOPLE. 

The risk analyst prepares a report about a proposal that shows the new proposal in a bad light.  The business person who is the champion of the proposal questions the TRUTH of the matter.  An unprepared analyst can easily get picked apart by this sort of attack.  If it becomes a true showdown between the business person and the analyst, in many companies, the business person can find a way to shed enough doubt on the TRUTH of the situation to win the day. 

The preparation needed by the analyst is to understand that there is more than one TRUTH to the matter of risk.  I can think of at least four points of view.  In addition, there are many, many different angles and approaches to evaluating risk.  And since risk analysis is about the future, there is no ONE TRUTH.  The preparation needed is to understand ALL of the points of view as well many of the different angles and approaches to analysis of risk. 

The four points of view are:

  1. Mean Reversion – things will have their ups and downs but those will cancel out and this will be very profitable. 
  2. History Repeats – we can understand risk just fine by looking at the past. 
  3. Impending Disaster – anything you can imagine, I can imagine something worse.
  4. Unpredictable – we can’t know the future so why bother trying. 

Each point of view will have totally different beliefs about the TRUTH of a risk evaluation.  You will not win an argument with someone who has one belief by marshalling facts and analysis from one of the other beliefs.  And most confusing of all, each of these beliefs is actually the TRUTH at some point in time. 

For periods of time, the world does act in a mean reverting manner.  When it does, make sure that you are buying on the dips. 

Other times, things do bounce along within a range of ups and downs that are consistent with some part of the historical record.  Careful risk taking is in order then. 

And as we saw in the fall of 2008 in the financial markets there are times when every day you wake up and wish you had sold out of your risk positions yesterday. 

But right now, things are pretty unpredictable with major ups and downs coming with very little notice.  Volatility is again far above historical ranges.  Best to keep your exposures small and spread out. 

So understand that with regard to RISK, TRUTH is not quite so easy to pin down. 

Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2300 hits, more than twice the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management, at least they were not intending to talk about risk management.

The list of quotes has recently hit its 100th posting (with something more than 100 quotes, since a number of the posts have multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. "Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so." - Douglas Adams
  2. "When the map and the territory don't agree, always believe the territory." - Gause and Weinberg, describing Swedish Army training
  3. "When you find yourself in a hole, stop digging." - Will Rogers
  4. "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair." - Douglas Adams
  5. "A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all." - George F. Kennan (1954)
  6. "THERE ARE IDIOTS. Look around." - Larry Summers
  7. "The only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat." - Donald Van Deventer
  8. "Reality is that which, when you stop believing in it, doesn't go away." - Philip K. Dick
  9. "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted." - Albert Einstein
  10. "Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand." - Sherlock Holmes (A. Conan Doyle)
  11. "The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable." - Warren Buffett
  12. "A good rule of thumb is to assume that 'everything matters.'" - Richard Thaler
  13. "The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them." - Avinash Persaud
  14. "There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy." - W. Shakespeare, Hamlet, scene v
  15. "When Models turn on, Brains turn off." - Til Schuermann

You might have other favorites.  Please let us know about them.

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulators to require stress tests so that the tests will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios, extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available if they thought of it at all.  Did not consider that in an extreme situation that results in the losses of magnitude similar to Economic Capital, that capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers follow innovators, getting less of the gains because of slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators' losses are a small fraction of gains, Me Too losses are a sizable fraction, and Risk Avoiders often lose money.  Innovators have all left the banks.  Banks are just the Me Toos and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression – Funny how the words change)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants: the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE preferred stock holders was the actual start of the panic.  Lehman, a week later, was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Are We “Due” for an Interest Rate Risk Episode?

November 11, 2009

In the last ten years, we have had major problems from Credit, Natural Catastrophes and Equities all at least twice.  Looking around at the risk exposures of insurers, it seems that we are due for a fall on Interest Rate Risk.

And things are very well positioned to make that a big time problem.  Interest rates have been generally very low for much of the past decade (in fact, most observers think that low interest rates have caused many of the other problems – perhaps not the nat cats).  This has challenged the minimum guaranteed rates of many insurance contracts.

Interest rate risk management has focused primarily around lobbying regulators to allow lower minimum guarantees.  Active ALM is practiced by many insurers, but by no means all.

Rates cannot get much lower.  The full impact of the historically low current risk-free rates (are we still really using that term – can anyone really say that anything is risk free any longer?) has been shielded from some insurers by the historically high credit spreads.  As the economy recovers and credit spreads contract, the rates could go slightly lower for corporate credit.

But keeping rates from exploding as the economy comes back to health will be very difficult.  With sky-high unemployment, it is difficult to believe that the monetary authorities will act in time to avoid overheating and a sharp rise in interest rates.

Calibration of ALM systems will be challenged if there is an interest rate spike.  Many Economic Capital models are calibrated to show a 2% rise in interest rates as a 1-in-200 event.  It seems highly likely that rates could rise 2% or 3% or 4% or more.  How well prepared will those firms be who have been doing disciplined ALM with a model that tops out at a 2% rise?  Or will the ALM actuaries be the next ones talking of a 25 standard deviation event?
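The calibration problem above can be made concrete with a quick back-of-the-envelope sketch (a hypothetical illustration, not any firm's actual model): if a normal distribution is fitted so that a 2% rate rise is exactly a 1-in-200 event, how implausible does the same model make larger moves?

```python
from statistics import NormalDist

# Assumed toy model: annual rate changes are normal with mean 0,
# and sigma is backed out so that a 2% rise is a 1-in-200 event.
std_normal = NormalDist()
z_200 = std_normal.inv_cdf(1 - 1 / 200)   # z-score of a 1-in-200 event, ~2.576
sigma = 2.0 / z_200                       # implied annual sigma, in % points

for rise in (2.0, 3.0, 4.0):
    p = 1 - std_normal.cdf(rise / sigma)
    print(f"P(rise > {rise}%) = {p:.2e}  (~1 in {1 / p:,.0f} years)")
```

Under that calibration a 3% rise is roughly a 1-in-18,000-year event and a 4% rise is rarer than 1 in a million years, which is exactly why a model that "tops out" at 2% gives no useful guidance about the spikes discussed here.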

Is there any way that we can justify calling the next interest rate spike a Black Swan?

UNRISK (2)

September 30, 2009

From Jawwad Farid

UNRISK Part 2 – Understanding the distribution

(Part One)

UNR1

Before you completely write this post off as statistical gibberish, and for those of you who were fortunate enough not to get exposure to the subject, let's just see what the distribution looks like.

UNR2

Not too bad! What you see above is a simple slotting of credit scores across a typical credit portfolio. For the month of June, the scores range from 1 to 12, with 1 good and 12 evul. The axis on the left hand side shows how much we have bet per score / grade category. We collect the scores, then sort them, then bunch them in clusters and then simply plot the results in a graph (in statistical terms, we call it a histogram). Draw the histogram for a data set enough times and the shape of the distribution will begin to speak to you. In this specific case you can see that the scoring function is reasonably effective, since it's doing a good job of classifying and recording relationships, at least as far as scores represent reasonable credit behavior.
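The slotting described above is easy to reproduce. Here is a minimal sketch with an entirely made-up portfolio (the grade scale and exposure amounts are assumptions for illustration):

```python
import random
from collections import Counter

# Hypothetical credit portfolio: each loan gets a grade (1 = best,
# 12 = worst) and an exposure amount. The "histogram" is simply
# total exposure slotted by grade.
random.seed(42)
portfolio = [
    (min(12, max(1, round(random.gauss(4, 2)))),  # grade, clamped to 1..12
     random.uniform(10, 100))                     # exposure bet on the loan
    for _ in range(1000)
]

exposure_by_grade = Counter()
for grade, exposure in portfolio:
    exposure_by_grade[grade] += exposure

# Crude text histogram: one '#' per 200 units of exposure
for grade in sorted(exposure_by_grade):
    bar = "#" * int(exposure_by_grade[grade] / 200)
    print(f"grade {grade:2d}: {exposure_by_grade[grade]:9.0f}  {bar}")
```

With a real portfolio you would replace the simulated loans with actual grade/exposure pairs; the slot-and-plot step stays the same.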

So how do you understand the distribution? Within the risk function there are multiple dimensions that this understanding may take.

The first is effectiveness. For instance, the first snapshot of a distribution that we saw was effective. This one isn't.

Why? Let’s treat that as your homework assignment. (Hint: the first one is skewed in the direction it should be skewed in, this one isn’t).

The second is behavior over time. So far you have only seen the distribution at a given instance, a snapshot. Here is how it changes over time.

UNR3

Notice anything? Homework assignment number two. (Hint: 10, 11 and 12 are NPL, Classified, Non performing, delinquent loans. Do you see a trend?)

The third is dissection across products and customer segments. Heading into an economic cycle where profitability and liquidity are going to be under pressure, which exposure would you cut? Which one is going to keep you awake at night? How did you get here in the first place? Assignment number three.

UNR4

Can you stop here? Is this enough? Well no.

UNR5

This is where my old nemesis, the moment generating function, makes an evul comeback. Volatility (or vol) is the second moment. That is a fancy risqué (pun intended) way of saying it is the standard deviation of your data set. You can treat the volatility of the distribution as a static parameter, or treat it with more respect, dive a little deeper and see how it trends over time. What you see above is a simple tracking series that plots 60-day volatility over a period of time for 8 commodity groups together.

See vol. See vol run… (My apologies to my old friend Spot and the HBS EGS Case)

If you are really passionate about the distribution and half as crazy as I am, you could also delve into relationships across parameters as well as try and assess lagged effects across dimensions.

UNR6

The graph above shows how volatility for different interest rates moves together and the one below shows the same phenomenon for a selection of currency pairs. When you look at the volatility of commodities, interest rates and currencies, do you see what I see? Can you hear the distribution? Is it speaking to you now?

Nope. I think you need to snort some more unrisk! Homework assignment number four. (Hint: Is there a relationship, a delayed and lagged effect, between the volatility of the three groups? If yes, where and who does it start with?)

UNR7

So far so good! This is what most of us do for a living. Where we fail is in the next step.

You can understand the distribution as much as you want, but it will only make sense to the business side when you translate it into profitability. If you can’t communicate your understanding or put it to work by explaining it to the business side in the language they understand, all of your hard work is irrelevant. A distribution is a wonderful thing only if you understand it. If you don’t, you might as well be praising the beauty of Jupiter’s moon under Saturn’s light in Greek to someone who has only seen Persian landscapes and speaks Pushto.

To bring profitability in, you need to integrate all of the above dimensions with profitability. Where do you start? Taking the same example of the credit portfolio above, you start with what we call the transition matrix. Remember the distribution plot across time from above.

UNR8
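A grade transition matrix is built by lining up the same loans at two dates and counting the moves between grades, then normalizing each row. A minimal sketch (the June/December grades here are invented for illustration):

```python
from collections import defaultdict

def transition_matrix(grades_t0, grades_t1):
    """Row-normalized frequencies of moves from grade i to grade j
    between two snapshots of the same loans, in the same order."""
    counts = defaultdict(lambda: defaultdict(int))
    for g0, g1 in zip(grades_t0, grades_t1):
        counts[g0][g1] += 1
    matrix = {}
    for g0, row in counts.items():
        total = sum(row.values())
        matrix[g0] = {g1: c / total for g1, c in row.items()}
    return matrix

# Hypothetical: the same seven loans graded in June and in December
june = [1, 2, 2, 3, 3, 3, 10]
dec  = [1, 2, 3, 3, 4, 10, 12]
m = transition_matrix(june, dec)
print(m[3])  # where did June's grade-3 loans end up?
```

Rows that leak probability mass toward grades 10-12 (the NPL bucket from the earlier homework) are exactly the trend the distribution-over-time plot was hinting at.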

This has appeared previously in Jawwad's excellent blog.

Are You Sure About That?

September 6, 2009

Most risk models consist of a series of best guesses for the size of each risk. Some of the risks are very well known; the risk models here have relatively little uncertainty. They are mostly models of volatility, where there is a long history of past volatility and good reason to expect future volatility to be similar. Other risks have little or no track record. The volatility assumptions in these models are based on extensions of information from other situations, and there may be very high degrees of uncertainty in the parameters for these models. However, many of the folks who build the models believe, for various reasons, that reflecting parameter uncertainty is too cautious an approach and adds so much to the risk evaluation that it makes the risk model unusable. The numbers from both types of risk are usually just added together or presented on the same page with no distinction between their credibility. So it seems that the users of risk models are faced with two choices: risk models that reflect high potential risk for new and untested risks and therefore stifle participation in new business opportunities, or risk models that sometimes drastically understate the risks.

The alternative is to keep track of many different aspects of risk and pay attention to all of them.  See Multidimensional risk.

Then everyone can know that the economic capital or any other comprehensive risk measurement does NOT reflect the degree of uncertainty, but that another report gives information about uncertainty.

The report on uncertainty might look at each of the risks and give an indication of the level of uncertainty of each of the values in the economic capital.  So it might say that 75% of economic capital comes from risks with low uncertainty, 20% moderate and 5% high uncertainty.
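Such a report is straightforward to assemble once each capital component is tagged with an uncertainty level. A minimal sketch (the risk names, tags, and amounts below are invented for illustration):

```python
# Hypothetical economic-capital components, each tagged with an
# uncertainty level by the modelers.
capital_by_risk = {
    "equity":      ("low", 150.0),
    "interest":    ("low", 120.0),
    "credit":      ("moderate", 60.0),
    "catastrophe": ("moderate", 40.0),
    "new_product": ("high", 30.0),
}

total = sum(amount for _, amount in capital_by_risk.values())
shares = {}
for level, amount in capital_by_risk.values():
    shares[level] = shares.get(level, 0.0) + amount

for level in ("low", "moderate", "high"):
    print(f"{level:9s}: {shares[level] / total:6.1%} of economic capital")
```

The same tagging, applied to profit sources instead of capital, produces the profit-by-uncertainty view suggested in the next paragraph.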

Even more revealing, profits could be analyzed in the same manner.  That might help to show how much of profits are coming from activities with higher uncertainty – a dangerous situation that should trigger a high degree of concern among management.


Models & Manifesto

September 1, 2009

Have you ever heard anyone say that their car got lost? Or that they got into a massive pile-up because it was a 1-in-200-year event that someone drove on the wrong side of a highway? Probably not.

But statements similar to these have been made many times since mid-2007 by CEOs and risk managers whose firms have lost great sums of money in the financial crisis. And instead of blaming their cars, they blame their risk models. In the 8 February 2009 Financial Times, Goldman Sachs’ CEO Lloyd Blankfein said “many risk models incorrectly assumed that positions could be fully hedged . . . risk models failed to capture the risk inherent in off-balance sheet activities,” clearly placing the blame on the models.

But in reality, it was, for the most part, the modellers, not the models, that failed. A car goes where the driver steers it and a model evaluates the risks it is designed to evaluate and uses the data the model operator feeds into the model. In fact, isn’t it the leadership of these enterprises that are really responsible for not clearly assessing the limitations of these models prior to mass usage for billion-dollar decisions?

But humans, who to varying degrees all have a limit to their capacity to juggle multiple inter-connected streams of information, need models to assist with decision-making at all but the smallest and least complex firms.

These points are all captured in the Financial Modeler’s Manifesto from Paul Wilmott and Emanuel Derman.

But before you use any model you did not build yourself, I suggest that you ask the model builder if they have read the manifesto.

If you do build models, I suggest that you read it before and after each model building project.

Multi dimensional Risk Management

August 28, 2009

Many ERM programs are one dimensional. They look at VaR or they look at Economic Capital. The multidimensional risk manager considers volatility, ruin, and everything in between. They consider not only types of risk that are readily quantifiable, but also those that may be extremely difficult to measure. The following is a partial listing of the risks that a multidimensional risk manager might examine:
o Type A Risk – Short-term volatility of cash flows in one year
o Type B Risk – Short-term tail risk of cash flows in one year
o Type C Risk – Uncertainty risk (also known as parameter risk)
o Type D Risk – Inexperience risk relative to full multiple market cycles
o Type E Risk – Correlation to a top 10
o Type F Risk – Market value volatility in one year
o Type G Risk – Execution risk regarding difficulty of controlling operational losses
o Type H Risk – Long-term volatility of cash flows over five or more years
o Type J Risk – Long-term tail risk of cash flows over five years or more
o Type K Risk – Pricing risk (cycle risk)
o Type L Risk – Market liquidity risk
o Type M Risk – Instability risk regarding the degree that the risk parameters are stable

Many of these types of risk can be measured using a comprehensive risk model, but several are not necessarily directly measurable. But the multidimensional risk manager realizes that you can get hurt by a risk even if you cannot measure it.

VaR is not a Bad Risk Measure

August 24, 2009

VaR has taken a lot of heat in the current financial crisis. Some go so far as to blame the financial crisis on VaR.

But VaR is a good risk measure. The problem is with the word RISK. You see, VaR has a precise definition; RISK does not. There is no way that you could possibly measure an idea as ill defined as RISK with a precise measure.

VaR is a good measure of one aspect of RISK. It measures volatility of value under the assumption that the future will be like the recent past. If everyone understands that is what VaR does, then there is no problem.
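That "future like the recent past" assumption is visible right in the simplest VaR calculation. A minimal sketch of one-day historical VaR (the toy P&L history is invented for illustration):

```python
def historical_var(pnl_history, confidence=0.99):
    """Loss amount not exceeded with the given confidence,
    read straight off the empirical P&L sample."""
    losses = sorted(-p for p in pnl_history)   # losses as positive numbers
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

# Toy history of 100 daily P&L observations
pnl = [-5, 2, 1, -1, 3, -8, 0, 4, -2, 1] * 10
print(f"99% one-day VaR: {historical_var(pnl)}")
```

Note that the number produced can never exceed the worst day in the lookback window, which is precisely why VaR says nothing about futures that do not resemble the recent past.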

Unfortunately, some people thought that VaR measured RISK period. What I mean is that they were led to believe that VaR was the same as RISK. In that context VaR (and any other single metric) is a failure. VaR is not the same as RISK.

That is because RISK has many aspects. Here is one partial list of the aspects of risk:

Type A Risk – Short Term Volatility of cash flows in 1 year
Type B Risk – Short Term Tail Risk of cash flows in 1 year
Type C Risk – Uncertainty Risk (also known as parameter risk)
Type D Risk – Inexperience Risk relative to full multiple market cycles
Type E Risk – Correlation to a top 10
Type F Risk – Market value volatility in 1 year
Type G Risk – Execution Risk regarding difficulty of controlling operational losses
Type H Risk – Long Term Volatility of cash flows over 5 or more years
Type J Risk – Long Term Tail Risk of cash flows over 5 years or more
Type K Risk – Pricing Risk (cycle risk)
Type L Risk – Market Liquidity Risk
Type M Risk – Instability Risk regarding the degree that the risk parameters are stable
(excerpted from Risk & Light)

VaR measures Type F risk only.

