Archive for the ‘Volatility’ category

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk: quantitative and qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that, something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” it means to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that the process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely, if ever, any data.

In broad terms, there are only a few possibilities for that subjective decision…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly worse than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis, precisely because they are never subject to any contagion.

The second category, Moderate, is appropriate for moderate sized aggregations of large loss events.  Within this class, there are two possibilities:  Low or no contagion and moderate to high contagion.  The math is much simpler if no contagion is assumed.
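Why the math is simpler without contagion can be sketched by treating contagion as pairwise correlation between risks.  This is a rough illustration, not from the original post; the function name and the correlation figure are made up for the example:

```python
from math import sqrt

def aggregate_sd(n: int, sigma: float, rho: float) -> float:
    """Standard deviation of the sum of n identical risks, each with
    standard deviation sigma and pairwise correlation rho."""
    # Var(sum) = n*sigma**2 + n*(n - 1)*rho*sigma**2
    return sigma * sqrt(n + n * (n - 1) * rho)

# 100 risks with standard deviation 1 each:
independent = aggregate_sd(100, 1.0, 0.0)   # no contagion: grows like sqrt(n) -> 10
contagious = aggregate_sd(100, 1.0, 0.25)   # moderate contagion: roughly 50.7
perfect = aggregate_sd(100, 1.0, 1.0)       # full contagion: grows like n -> 100
```

With zero correlation the aggregate risk grows with the square root of the number of risks; with contagion it grows much faster, which is why assuming it away understates the tail.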

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to over play that trend.  This process is called “bubbles”.  When past history suggests an unfavorable trend, human contagion also over plays the trend and markets for risks crash.

The modelers who wanted to use the zero-contagion models call this “Fat Tails”.  It is seen as an unusual model only because the zero-contagion model, with its simpler math, was so commonly used.

RISKVIEWS suggests that when communicating that the approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified, and an assumption of zero contagion should be accompanied by a disclaimer: past experience has proven that assumption to be highly inaccurate when applied to situations that involve humans, and it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies, for example, to insurance losses due to major earthquakes.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1-in-1000-year loss is about 3 standard deviations worse than the mean.  So if we express that 1-in-1000-year loss as a multiple of standard deviations, we can easily talk about a simple scale for the riskiness of a model:

Scale

So in the end the choice is to insert an opinion about the steepness of the ramp-up between the mean and an extreme loss, in terms of multiples of the standard deviation (where standard deviation is a measure of the average spread of the observed data).  On these terms, this is a discussion that can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can largely consist of placing existing models on the scale.  People are quite used to working with the Richter Scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed upon assumptions.
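The “about 3 standard deviations” figure can be sanity-checked with nothing but the Python standard library.  A minimal sketch: the normal case below matches the Moderate model, while the Laplace comparison is an illustrative choice of fatter-tailed distribution, not part of the scale itself:

```python
from math import log, sqrt
from statistics import NormalDist

# 1-in-1000-year loss as a multiple of the standard deviation,
# under a normal ("Moderate") model:
normal_mult = NormalDist().inv_cdf(0.999)   # about 3.09 standard deviations

# The same quantile under a fatter-tailed Laplace distribution
# (scale b, standard deviation b*sqrt(2); its upper quantile for
# p > 0.5 is -b*log(2*(1 - p))), expressed in standard deviations:
laplace_mult = -log(2 * (1 - 0.999)) / sqrt(2)   # about 4.39 standard deviations
```

The same historical standard deviation, paired with a steeper opinion about the tail, puts the 1-in-1000-year loss more than a full standard deviation further out.  That is exactly the judgment the scale is meant to surface.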

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where, for some reason, we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so the past standard deviation is zero.  In that case we cannot use the multiple-of-standard-deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!


Insanity is . . .

May 11, 2014

Albert Einstein is famously quoted as saying that

Insanity is doing the same thing over and over and expecting different results.

Of course, risk management is based upon an assumption that if you do something that is risky over and over again, you will get different results.  And that through careful management of the possibilities, you can, on the average, over time, get acceptable results, both in terms of sufficient gains and limited losses.

But there is also an embedded assumption in that statement that is hidden.  The statement should include the standard caveat “all else being equal”.

But in fact, all else is NEVER equal.  At least not out in the world where things count.  Not for things that involve people.  Out in the real world, one can count on the same result from the same actions, but only for a while.

All else never stays the same in the situations where people are involved because people rarely continue to follow rules like the rules of physics.  People keep changing what they do.

Consider, for example, the ideas of Hyman Minsky regarding the changing approach to credit.  People just do not leave things alone.  With credit, people are constantly seeking to get just a little more juice out of the lemon.

Free Download of Valuation and Common Sense Book

December 19, 2013

RISKVIEWS recently got the material below in an email.  This material seems quite educational and also somewhat amusing.  The authors keep pointing out the extreme variety of actual detailed approaches, diverging from any single theory in the academic literature.

For example, the following chart shows a plot of Required Equity Premium by publication date of book.

Equity Premium

You get a strong impression from reading this book that all of the concepts of modern finance are extremely plastic and/or ill defined in practice. 

RISKVIEWS wonders if that is in any way related to the famous Friedman principle that economics models need not be at all realistic.  See post Friedman Model.

===========================================

Book “Valuation and Common Sense” (3rd edition).  May be downloaded for free

The book has been improved in its 3rd edition. Main changes are:

  1. Tables (with all calculations) and figures are available in excel format in: http://web.iese.edu/PabloFernandez/Book_VaCS/valuation%20CaCS.html
  2. We have added questions at the end of each chapter.
  3. 5 new chapters:

Chapters

Downloadable at:

32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

The book explains the nuances of different valuation methods and provides the reader with the tools for analyzing and valuing any business, no matter how complex. The book has 326 tables, 190 diagrams and more than 180 examples to help the reader. It also has 480 readers’ comments of previous editions.

The book has 36 chapters. Each chapter may be downloaded for free at the following links:

Chapters

Downloadable at:

     Table of contents, acknowledgments, glossary http://ssrn.com/abstract=2209089
1   Company Valuation Methods http://ssrn.com/abstract=274973
2   Cash Flow is a Fact. Net Income is Just an Opinion http://ssrn.com/abstract=330540
3   Ten Badly Explained Topics in Most Corporate Finance Books http://ssrn.com/abstract=2044576
4   Cash Flow Valuation Methods: Perpetuities, Constant Growth and General Case http://ssrn.com/abstract=743229
5   Valuation Using Multiples: How Do Analysts Reach Their Conclusions? http://ssrn.com/abstract=274972
6   Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories http://ssrn.com/abstract=256987
7   Three Residual Income Valuation Methods and Discounted Cash Flow Valuation http://ssrn.com/abstract=296945
8   WACC: Definition, Misconceptions and Errors http://ssrn.com/abstract=1620871
9   Cash Flow Discounting: Fundamental Relationships and Unnecessary Complications http://ssrn.com/abstract=2117765
10 How to Value a Seasonal Company Discounting Cash Flows http://ssrn.com/abstract=406220
11 Optimal Capital Structure: Problems with the Harvard and Damodaran Approaches http://ssrn.com/abstract=270833
12 Equity Premium: Historical, Expected, Required and Implied http://ssrn.com/abstract=933070
13 The Equity Premium in 150 Textbooks http://ssrn.com/abstract=1473225
14 Market Risk Premium Used in 82 Countries in 2012: A Survey with 7,192 Answers http://ssrn.com/abstract=2084213
15 Are Calculated Betas Good for Anything? http://ssrn.com/abstract=504565
16 Beta = 1 Does a Better Job than Calculated Betas http://ssrn.com/abstract=1406923
17 Betas Used by Professors: A Survey with 2,500 Answers http://ssrn.com/abstract=1407464
18 On the Instability of Betas: The Case of Spain http://ssrn.com/abstract=510146
19 Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2191044
20 A solution to Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2217604
21 Valuation of an Expropriated Company: The Case of YPF and Repsol in Argentina http://ssrn.com/abstract=2176728
22 1,959 valuations of the YPF shares expropriated to Repsol http://ssrn.com/abstract=2226321
23 Internet Valuations: The Case of Terra-Lycos http://ssrn.com/abstract=265608
24 Valuation of Internet-related companies http://ssrn.com/abstract=265609
25 Valuation of Brands and Intellectual Capital http://ssrn.com/abstract=270688
26 Interest rates and company valuation http://ssrn.com/abstract=2215926
27 Price to Earnings ratio, Value to Book ratio and Growth http://ssrn.com/abstract=2212373
28 Dividends and Share Repurchases http://ssrn.com/abstract=2215739
29 How Inflation destroys Value http://ssrn.com/abstract=2215796
30 Valuing Real Options: Frequently Made Errors http://ssrn.com/abstract=274855
31 119 Common Errors in Company Valuations http://ssrn.com/abstract=1025424
32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

I would very much appreciate any of your suggestions for improving the book.

Best regards,
Pablo Fernandez

Getting Paid for Risk Taking

April 15, 2013

Consideration for accepting a risk needs to be at a level that will sustain the business and produce a return that is satisfactory to investors.

Investors usually want additional return for extra risk.  This is one of the most misunderstood ideas in investing.

“In an efficient market, investors realize above-average returns only by taking above-average risks.  Risky stocks have high returns, on average, and safe stocks do not.”

Baker, M., Bradley, B., and Wurgler, J., “Benchmarks as Limits to Arbitrage: Understanding the Low-Volatility Anomaly”

But their study found that stocks in the top quintile of trailing volatility had a real return of -90% vs. a real return of 1000% for the stocks in the bottom quintile.

But the thinking is wrong.  Excess risk does not produce excess return.  The cause and effect are wrong in the conventional wisdom.  The original statement of this principle may have been

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
Alfred Marshall 1890 Principles of Economics

Marshall has it right.  There are only “hopes” of great gains.  There is no invisible hand that forces higher risks to return higher gains.  Some of the higher risk investment choices are simply bad choices.

Insurers’ opportunity to make “great gains” out of “risks of great losses” comes when they are determining what consideration, or price, they will require to accept a risk.  Most insurers operate in competitive markets that are not completely efficient.  Individual insurers do not usually set the price in the market, but there is a range of prices at which insurance is purchased in any time period.  Certainly the process that an insurer uses to determine the price at which a risk is acceptable is a primary determinant of the insurer’s profits.  If that price contains a sufficient load for the extreme risks that might threaten the existence of the insurer, then over time the insurer has the ability to hold and maintain sufficient resources to survive some large loss situations.

One common goal conflict that leads to problems with pricing is the conflict between sales and profits.  In insurance, as in many businesses, it is quite easy to increase sales by lowering prices.  In most businesses, it is very difficult to keep up that strategy for very long, because the lower profits or outright losses from inadequate prices are quickly realized.  In insurance, the premiums are paid in advance, sometimes many years in advance of when the insurer must provide the promised insurance benefits.  If provisioning is tilted towards the point of view that supports the consideration, the pricing deficiencies will not be apparent for years.  So insurance is particularly susceptible to the tension between volume of business and margins for risk and profits, and since sales is a more fundamental need than profits, the margins often suffer.

In addition, insurers simply do not know for certain what the actual cost of providing an insurance benefit will be.  Not with the degree of certainty that businesses in other sectors can know their cost of goods sold.  The appropriateness of pricing will often be validated in the market.  Follow-the-leader pricing can lead a herd of insurers over the cliff.  The whole sector can get pricing wrong for a time.  Until, sometimes years later, the benefits are collected and their true cost is known.

“A decade of short sighted price slashing led to industry losses of nearly $3 billion last year.”  Wall Street Journal June 24, 2002

Pricing can also go wrong at the individual case level.  The “Winner’s Curse” sends business to the insurer who most underestimates the riskiness of a particular risk.

There are two steps to reflecting risk in pricing.  The first step is to capture the expected loss properly.  Most of the discussion above relates to this step, and the major part of pricing risk comes from the possibility of missing that step.  But the second step is to appropriately reflect all aspects of the risk that the actual losses will be different from expected.  There are many ways that such deviations can manifest.

The following is a partial listing of the risks that might be examined:

• Type A Risk—Short-Term Volatility of cash flows in 1 year
• Type B Risk—Short-Term Tail Risk of cash flows in 1 year
• Type C Risk—Uncertainty Risk (also known as parameter risk)
• Type D Risk—Inexperience Risk relative to full multiple market cycles
• Type E Risk—Correlation to a top 10
• Type F Risk—Market value volatility in 1 year
• Type G Risk—Execution Risk regarding difficulty of controlling operational losses
• Type H Risk—Long-Term Volatility of cash flows over 5 or more years
• Type J Risk—Long-Term Tail Risk of cash flows over 5 or more years
• Type K Risk—Pricing Risk (cycle risk)
• Type L Risk—Market Liquidity Risk
• Type M Risk—Instability Risk regarding the degree that the risk parameters are stable

See “Risk and Light” or “The Law of Risk and Light”.

There are also many different ways that risk loads are specifically applied to insurance pricing.  Three examples are:

  • Capital Allocation – Capital is allocated to a product (based upon the provisioning) and the pricing then needs to reflect the cost of holding the capital.  The cost of holding capital may be calculated as the difference between the risk free rate (after tax) and the hurdle rate for the insurer.  Some firms alternately use the difference between the investment return on the assets backing surplus (after tax) and the hurdle rate.  This process assures that the pricing will support achieving the hurdle rate on the capital that the insurer needs to hold for the risks of the business.  It does not reflect any margin for the volatility in earnings that the risks assumed might create, nor does it necessarily include any recognition of parameter risk or general uncertainty.
  • Provision for Adverse Deviation – Each assumption is adjusted to provide for worse experience than the mean or median loss.  The amount of stress may be at a predetermined confidence interval (Such as 65%, 80% or 90%).  Higher confidence intervals would be used for assumptions with higher degree of parameter risk.  Similarly, some companies use a multiple (or fraction) of the standard deviation of the loss distribution as the provision.  More commonly, the degree of adversity is set based upon historical provisions or upon judgement of the person setting the price.  Provision for Adverse Deviation usually does not reflect anything specific for extra risk of insolvency.
  • Risk Adjusted Profit Target – Using either or both of the above techniques, a profit target is determined, and then that target is translated into a percentage of premium or assets to make for a simple risk charge when constructing a price indication.
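The first approach, Capital Allocation, can be sketched in a few lines.  This is a minimal illustration of the arithmetic described above; the function name and all of the dollar figures and rates are hypothetical:

```python
def capital_cost_load(allocated_capital: float,
                      hurdle_rate: float,
                      risk_free_after_tax: float) -> float:
    """Annual dollar risk load so the product earns the hurdle rate
    on the capital allocated to it (cost of holding capital)."""
    return allocated_capital * (hurdle_rate - risk_free_after_tax)

# Hypothetical figures: $10m allocated capital, 12% hurdle, 3% after-tax risk free
load = capital_cost_load(10_000_000, 0.12, 0.03)   # roughly $900,000 per year
premium = 25_000_000
load_pct_of_premium = load / premium               # roughly 3.6% of premium
```

As noted above, this load compensates for holding capital but says nothing about earnings volatility or parameter risk; those would need one of the other approaches layered on top.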

The consequence of failing to recognize an aspect of risk in pricing will likely be that the firm accumulates larger than expected concentrations of business with higher amounts of that risk aspect.  See “Risk and Light” or “The Law of Risk and Light”.

To get Consideration right you need to (1) regularly get a second opinion on price adequacy, either from the market or from a reliable experienced person; (2) constantly update your view of your risks in the light of emerging experience and market feedback; and (3) recognize that unusually high sales are a possible market signal of underpricing.

This is one of the seven ERM Principles for Insurers

Is there a “Normal” Level for Volatility?

August 10, 2011

Much of modern Financial Economics is built upon a series of assumptions about the markets. One of those assumptions is that the markets are equilibrium seeking. If that were the case, it would seem possible to determine the equilibrium level, because things would constantly be tugging towards that level.
But look at Volatility as represented by the VIX…

The above graph shows the VIX for 30 years.  It is difficult to see an equilibrium level in this graph.

What is Volatility?  It is actually a mixture of two main things, as well as anything else that the model forgets.  It is a forced value: the number that balances the prices of equity options against risk-free rates in the Black-Scholes formula.

The two main items that are cooked into the volatility number are the market’s expectations of the future variability of returns on the stock market, and the risk premium, or margin of error, that the market wants to be paid for taking on the uncertainty of future transactions.
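That “forced value” can be sketched directly: implied volatility is the sigma at which the Black-Scholes price matches the observed option price.  A minimal bisection using only the Python standard library; the inputs in the round trip are illustrative, not market data:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf   # standard normal CDF

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """The sigma that forces the model price to match the observed
    option price (bisection; the call price is increasing in sigma)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Round trip: price an at-the-money call at 20% volatility, then recover it
p = bs_call(100, 100, 1.0, 0.02, 0.20)
sigma = implied_vol(p, 100, 100, 1.0, 0.02)   # about 0.20
```

Nothing in this calculation separates the expectation component from the risk premium component; the single number absorbs both, which is the point made above.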

Looking at the decidedly smoother plot of annual values…


There does not seem to be any evidence that the actual variability of prices is unsteady.  It has been in the range of 20% since it drifted up from the range of 10%.  If there were going to be an equilibrium, this chart seems to show where it might be.  But the earlier chart shows that the market trades all over the place on volatility, not even necessarily around the level of the experienced volatility.

And much of that is doubtless the uncertainty, or risk premium.  The second graph does show that experienced volatility has drifted to twice the level that it was in the early 1990s.  There is no guarantee that it will not double again.  The markets keep changing.  There is no reason to rely on these historical analyses.  Stories that the majority of trades today are computer-driven, very short-term positions taken by hedge funds suggest that there is no reason whatsoever to think that the market of the next quarter will be in any way like the market of ten or twenty years ago.  If anything, you would guess that it will be much more volatile.  Those trading schemes make their money off of price movements, not stability.
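For reference, the “experienced volatility” discussed here is conventionally computed as the annualized standard deviation of log returns.  A minimal sketch; the 252-trading-day year and the toy price series are assumptions for illustration:

```python
from math import log, sqrt
from statistics import stdev

def realized_vol(prices, periods_per_year=252):
    """Annualized historical ("experienced") volatility from closing
    prices: the standard deviation of log returns, scaled to a year."""
    log_returns = [log(b / a) for a, b in zip(prices, prices[1:])]
    return stdev(log_returns) * sqrt(periods_per_year)

flat = realized_vol([100] * 10)          # a flat series has zero volatility
choppy = realized_vol([100, 102] * 5)    # alternating 2% moves do not
```

Comparing a number like this with the VIX gives a rough read on how much of the traded volatility is expectation and how much is risk premium.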

So is there a normal level for volatility?  Doubtless not.   At least not in this imperfect world.  

Modeling Uncertainty

March 31, 2011

The message that Windows gives when you are copying a large number of files provides a good example of an uncertain environment.  That process recently took over 30 minutes, and over the course of that time, the message box was constantly flashing completely different information about the time remaining.  Over the course of one minute in the middle of that process the readings were:

8 minutes remaining

53 minutes remaining

45 minutes remaining

3 minutes remaining

11 minutes remaining

It is not true that the answer is random.  But with the process that Microsoft has chosen to apply to the problem, the answer is certainly unknowable.  For an expected value to vary over a very short period of time by such a range – that is what I would think a model reflecting uncertainty would look like.
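One plausible reading of the dialog’s behavior is that it divides the remaining work by the instantaneous transfer rate, so every momentary slowdown swings the estimate wildly.  The sketch below is speculative, not Microsoft’s actual algorithm; the rates and smoothing factor are made up to mimic the readings above:

```python
def remaining_estimates(total_work, rates, alpha=0.3):
    """Estimated time remaining after each period: the naive estimate
    (remaining work / last observed rate) versus one based on an
    exponentially smoothed rate."""
    naive, smoothed = [], []
    avg = rates[0]
    left = total_work
    for r in rates:
        left -= r                             # work done this period
        avg = alpha * r + (1 - alpha) * avg   # exponential moving average of rate
        naive.append(left / r)
        smoothed.append(left / avg)
    return naive, smoothed

# Wildly varying transfer rates, like the copy dialog above
naive, smoothed = remaining_estimates(1000, [50, 5, 40, 4, 45, 6])
```

Even heavy smoothing only steadies the display; it does not make the underlying quantity knowable, which is the point of the example.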

An uncertain situation could be one where you cannot tell the mean or the standard deviation because there does not seem to be any repeatable pattern to the experience.

Those uncertain times are when the regular model – the one with the steady mean and variance – does not seem to give any useful information.

The flip side of the uncertain times, and of the model with unsteady mean and variance that represents those times, is the person who expects that things will be unpredictable.  That person will be surprised if there is an extended period of time when experience follows a stable pattern, whether good or bad or even a stable pattern centered around zero with gains and losses.  In any of those situations, the competitors of that uncertainty-expecting person will be able to use their models to run their businesses and to reap profits from what their models tell them about the world and their risks.

The uncertainty expecting person is not likely to trust a model to give them any advice about the world.  Their model would not have cycles of predictable length.  They would not expect the world to even conform to a model with the volatile mean and variance of their expectation, because they expect that they would probably get the volatility of the mean and variance wrong.

That is just the way that they expect it will happen.   A new Black Swan every morning.

Correction, not every morning, that would be regular.  Some mornings.

What’s the Truth?

May 21, 2010

There has always been an issue with TRUTH with regard to risk.  At least there is when dealing with SOME PEOPLE. 

The risk analyst prepares a report about a proposal that shows the new proposal in a bad light.  The business person who is the champion of the proposal questions the TRUTH of the matter.  An unprepared analyst can easily get picked apart by this sort of attack.  If it becomes a true showdown between the business person and the analyst, in many companies, the business person can find a way to shed enough doubt on the TRUTH of the situation to win the day. 

The preparation needed by the analyst is to understand that there is more than one TRUTH to the matter of risk.  I can think of at least four points of view.  In addition, there are many, many different angles and approaches to evaluating risk.  And since risk analysis is about the future, there is no ONE TRUTH.  The preparation needed is to understand ALL of the points of view as well as many of the different angles and approaches to the analysis of risk. 

The four points of view are:

  1. Mean Reversion – things will have their ups and downs but those will cancel out and this will be very profitable. 
  2. History Repeats – we can understand risk just fine by looking at the past. 
  3. Impending Disaster – anything you can imagine, I can imagine something worse.
  4. Unpredictable – we can’t know the future so why bother trying. 

Each point of view will have totally different beliefs about the TRUTH of a risk evaluation.  You will not win an argument with someone who has one belief by marshalling facts and analysis from one of the other beliefs.  And most confusing of all, each of these beliefs is actually the TRUTH at some point in time. 

For periods of time, the world does act in a mean reverting manner.  When it does, make sure that you are buying on the dips. 

Other times, things do bounce along within a range of ups and downs that are consistent with some part of the historical record.  Careful risk taking is in order then. 

And as we saw in the fall of 2008 in the financial markets there are times when every day you wake up and wish you had sold out of your risk positions yesterday. 

But right now, things are pretty unpredictable with major ups and downs coming with very little notice.  Volatility is again far above historical ranges.  Best to keep your exposures small and spread out. 

So understand that with regard to RISK, TRUTH is not quite so easy to pin down. 

