Archive for the ‘Data’ category

Management by Onside Kick

June 6, 2016

Many American football fans can recall a game when their team drove the ball 80 or more yards in the waning moments to pull within a touchdown of the team that had been dominating them. Then the team called for the onside kick, recovered the ball, and charged to a win within a few more plays.

But according to NFL stats, that onside kick succeeds only 20% of the time in the waning minutes of the game.

Mid-game onside kicks, the ones that come as surprises, work 60% of the time.

But it is mostly the successful onside kicks that make the highlight reel. RISKVIEWS guesses that on the highlights, those kicks succeed 80% or more of the time.

And if you look back on the games of the teams that make it to the Super Bowl, they probably were successful the few times that they called that play.
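The highlight-reel effect is easy to see in a small selection-bias simulation. In this sketch, the 20% success rate is the NFL stat cited above, while the two "chance of making the highlights" probabilities are assumptions for illustration: if successes are replayed far more often than failures, a 20% play looks like an 80% play on the reel.

```python
import random

random.seed(42)

TRUE_SUCCESS = 0.20    # late-game onside kick success rate (the NFL stat above)
P_SHOW_SUCCESS = 0.90  # assumption: chance a successful kick makes the highlights
P_SHOW_FAILURE = 0.05  # assumption: chance a failed kick makes the highlights

shown_successes = 0
shown_total = 0
for _ in range(100_000):
    success = random.random() < TRUE_SUCCESS
    shown = random.random() < (P_SHOW_SUCCESS if success else P_SHOW_FAILURE)
    if shown:
        shown_total += 1
        shown_successes += success

# Success rate as seen on the highlight reel: about 0.18 / 0.22, or 82%
print(shown_successes / shown_total)
```

The true rate never changed; only the sampling did.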

What does that mean for risk managers?

Be careful where you get your statistics. Big data is now very popular, and winners use Big Data, so many conclude that it will give better indications. But make sure that your data inputs are not drawn from highlight reels or from the records of a company's best year.

Many firms use default data collected by rating agencies, for example, to parameterize their credit models. But the rating agencies would point out that the data comes from rated companies only. This makes little difference for rated bonds, which are rated from issue to maturity or default. But if you want to build a default model of insurers or reinsurers, you need to know that many insurers and some reinsurers will drop their rating if it falls below a level where it hurts their business. So below a certain level, ratings transition statistics for insurers are more like highlight reels.

Some models of dynamic hedging strategies were, in effect, taking the mid-game success rates and assuming that they would apply in bad times. But as with the onside kick, things worked very differently.

So realize that a business strategy, and especially a risk mitigation strategy, may work differently when things have gone badly wrong.

And an onside kick is nothing more than putting the ball in play and praying that something good will happen.

Free Download of Valuation and Common Sense Book

December 19, 2013

RISKVIEWS recently received the material below in an email.  It seems quite educational and also somewhat amusing.  The authors keep pointing out how widely actual detailed practice varies from any single theory in the academic literature.

For example, the exhibit below plots the Required Equity Premium by publication date of the book recommending it.

Equity Premium

You get a strong impression from reading this book that all of the concepts of modern finance are extremely plastic and/or ill defined in practice. 

RISKVIEWS wonders if that is in any way related to the famous Friedman principle that economic models need not be realistic at all.  See the post Friedman Model.

===========================================

The book “Valuation and Common Sense” (3rd edition) may be downloaded for free.

The book has been improved in its 3rd edition. Main changes are:

  1. Tables (with all calculations) and figures are available in excel format in: http://web.iese.edu/PabloFernandez/Book_VaCS/valuation%20CaCS.html
  2. We have added questions at the end of each chapter.
  3. 5 new chapters:

Chapters / Downloadable at:

32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

The book explains the nuances of different valuation methods and provides the reader with the tools for analyzing and valuing any business, no matter how complex. The book has 326 tables, 190 diagrams and more than 180 examples to help the reader. It also includes 480 readers’ comments on previous editions.

The book has 36 chapters. Each chapter may be downloaded for free at the following links:

Chapters / Downloadable at:

     Table of contents, acknowledgments, glossary http://ssrn.com/abstract=2209089
1   Company Valuation Methods http://ssrn.com/abstract=274973
2   Cash Flow is a Fact. Net Income is Just an Opinion http://ssrn.com/abstract=330540
3   Ten Badly Explained Topics in Most Corporate Finance Books http://ssrn.com/abstract=2044576
4   Cash Flow Valuation Methods: Perpetuities, Constant Growth and General Case http://ssrn.com/abstract=743229
5   Valuation Using Multiples: How Do Analysts Reach Their Conclusions? http://ssrn.com/abstract=274972
6   Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories http://ssrn.com/abstract=256987
7   Three Residual Income Valuation Methods and Discounted Cash Flow Valuation http://ssrn.com/abstract=296945
8   WACC: Definition, Misconceptions and Errors http://ssrn.com/abstract=1620871
9   Cash Flow Discounting: Fundamental Relationships and Unnecessary Complications http://ssrn.com/abstract=2117765
10 How to Value a Seasonal Company Discounting Cash Flows http://ssrn.com/abstract=406220
11 Optimal Capital Structure: Problems with the Harvard and Damodaran Approaches http://ssrn.com/abstract=270833
12 Equity Premium: Historical, Expected, Required and Implied http://ssrn.com/abstract=933070
13 The Equity Premium in 150 Textbooks http://ssrn.com/abstract=1473225
14 Market Risk Premium Used in 82 Countries in 2012: A Survey with 7,192 Answers http://ssrn.com/abstract=2084213
15 Are Calculated Betas Good for Anything? http://ssrn.com/abstract=504565
16 Beta = 1 Does a Better Job than Calculated Betas http://ssrn.com/abstract=1406923
17 Betas Used by Professors: A Survey with 2,500 Answers http://ssrn.com/abstract=1407464
18 On the Instability of Betas: The Case of Spain http://ssrn.com/abstract=510146
19 Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2191044
20 A solution to Valuation of the Shares after an Expropriation: The Case of ElectraBul http://ssrn.com/abstract=2217604
21 Valuation of an Expropriated Company: The Case of YPF and Repsol in Argentina http://ssrn.com/abstract=2176728
22 1,959 valuations of the YPF shares expropriated to Repsol http://ssrn.com/abstract=2226321
23 Internet Valuations: The Case of Terra-Lycos http://ssrn.com/abstract=265608
24 Valuation of Internet-related companies http://ssrn.com/abstract=265609
25 Valuation of Brands and Intellectual Capital http://ssrn.com/abstract=270688
26 Interest rates and company valuation http://ssrn.com/abstract=2215926
27 Price to Earnings ratio, Value to Book ratio and Growth http://ssrn.com/abstract=2212373
28 Dividends and Share Repurchases http://ssrn.com/abstract=2215739
29 How Inflation destroys Value http://ssrn.com/abstract=2215796
30 Valuing Real Options: Frequently Made Errors http://ssrn.com/abstract=274855
31 119 Common Errors in Company Valuations http://ssrn.com/abstract=1025424
32 Shareholder Value Creation: A Definition http://ssrn.com/abstract=268129
33 Shareholder value creators in the S&P 500: 1991 – 2010 http://ssrn.com/abstract=1759353
34 EVA and Cash value added do NOT measure shareholder value creation http://ssrn.com/abstract=270799
35 Several shareholder returns. All-period returns and all-shareholders return http://ssrn.com/abstract=2358444
36 339 questions on valuation and finance http://ssrn.com/abstract=2357432

I would very much appreciate any of your suggestions for improving the book.

Best regards,
Pablo Fernandez

Sins of Risk Measurement

February 5, 2011

Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line: separating what is highly unlikely from what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain as the size of the potential loss grows.  In other words, the larger the potential loss, the less certain you can be about how likely it might be.  Downplaying uncertainty is usually a sin of omission; it is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements is implicitly treated as if they were all calculated totally consistently.  In fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability, using tools that have totally different degrees of certitude built into them.  In the end, this encourages decisions to take on whatever risks we most underestimate through this process.

3.  Validate to Confirmation.  When we validate risk models, it is common to stop the validation process when we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted and if validation fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased to finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective Parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen because the experience was good or bad, to deliberately influence the outcome.

5.  Hiding behind Math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what that entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying Wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that the risk calculations are also associated with potential gains will be thought a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.
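Sin 3 above, validating to confirmation, can be quantified with a toy example. Suppose a model claims a 1% exceedance rate when the true rate is 3%, and a backtest "passes" if at most 2 exceedances show up in 100 observations (all numbers here are assumptions for illustration). Run honestly once, the broken model fails more often than not; re-sampled and re-tested until it passes, it "validates" almost every time.

```python
import random

random.seed(7)

ACTUAL_RATE = 0.03  # assumed true exceedance rate (the model claims 1%)
N_OBS = 100
PASS_CUTOFF = 2     # the backtest passes if exceedances <= 2

def backtest_passes():
    exceedances = sum(random.random() < ACTUAL_RATE for _ in range(N_OBS))
    return exceedances <= PASS_CUTOFF

TRIALS = 10_000

# Honest validation: one backtest per model
honest = sum(backtest_passes() for _ in range(TRIALS)) / TRIALS

# Validate-to-confirmation: re-sample and re-test up to 5 times, stop at first pass
persistent = sum(any(backtest_passes() for _ in range(5)) for _ in range(TRIALS)) / TRIALS

print(honest)      # roughly 0.42: the miscalibrated model usually fails one honest test
print(persistent)  # roughly 0.93: retried until it passes, it almost always "validates"
```

Nothing about the model improved between the two numbers; only the tester's persistence did.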

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

Intrinsic Risk

November 26, 2010

If you were told that someone had flipped a coin 60 times and had found that heads were the results 2/3 of the time, you might have several reactions.

  • You might doubt whether the coin was a real coin or whether it was altered.
  • You might suspect that the person who got that result was doing something other than a fair flip.
  • You might doubt whether they are able to count or whether they actually counted.
  • You might doubt whether they are telling the truth.
  • You might start to calculate the likelihood of that result with a fair coin.

Once you take that last step, you find that the story is highly unlikely, but definitely not impossible.  In fact, my computer tells me that if I lined up 225 people and had them all flip a coin 60 times, there is a fifty-fifty chance that at least one person will get exactly that many heads.
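The arithmetic behind that claim can be checked directly with the exact binomial distribution. The chance that one fair-coin flipper gets 40 heads in 60 flips is tiny, but across 225 independent flippers the chance that someone hits exactly that count is indeed close to even:

```python
from math import comb

FLIPS, HEADS, PEOPLE = 60, 40, 225

# Exact chance one fair-coin flipper gets exactly 40 heads in 60 flips
p_exactly = comb(FLIPS, HEADS) / 2**FLIPS

# For comparison: the chance of 40 or more heads
p_at_least = sum(comb(FLIPS, k) for k in range(HEADS, FLIPS + 1)) / 2**FLIPS

# Chance that at least one of the 225 flippers gets exactly 40 heads
someone = 1 - (1 - p_exactly) ** PEOPLE

print(p_exactly)   # about 0.0036
print(p_at_least)  # about 0.0067
print(someone)     # about 0.56 -- roughly the fifty-fifty chance described above
```

Highly unlikely for any one person; nearly a coin flip across a crowd.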

So how should you evaluate the risk of getting 40 heads out of 60 flips?  Should you base your calculations on the expected likelihood of heads from an examination of the coin itself?  You look at it and see that there are two sides and a thin edge.  You assess whether it seems to be uniformly balanced.  Then you conclude that you are fairly certain of the inherent risk of the coin flipping process.

Your other choice is to base your evaluation of the risk on the observed outcomes of coin flips.  The law of large numbers should mean that this eventually gets you to the right number.  But if your first observation is the person described above, it will take quite a few additional observations before the averages settle down to what the theory promises.

The point is that a purely observation-based approach will not always give you the best answer.  It is good to make sure that you also understand something about the intrinsic risk.

If you are still not convinced of this, ask the turkey.  Taleb uses that turkey story to explain a Black Swan.  But if you think about it, many Black Swans are nothing more than ignorance of intrinsic risk.

Did you accept your data due to Confirmation Bias?

August 15, 2010

Confirmation bias (also called confirmatory bias or myside bias) is a tendency for people to favor information that confirms their preconceptions or hypotheses regardless of whether the information is true. As a result, people gather evidence and recall information from memory selectively, and interpret it in a biased way. The biases appear in particular for emotionally significant issues and for established beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and/or recall have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a stronger weighting for data encountered early in an arbitrary series) and illusory correlation (in which people falsely perceive an association between two events or situations).

From wikipedia

Today’s New York Times tells a story about Japanese longevity data.  Japan has long thought of itself as home to a large fraction of the world’s oldest people.  The national self-image was that the Japanese lifestyle was healthier than that of any other people, and the longevity was the result.

A Google search on “The secret of Japanese Longevity” turns up 400,000 web pages that extol the virtues of Japanese diet and lifestyle.  But the news story says that as many as 281 of these extremely old Japanese people cannot be found.  The efforts to find them revealed numerous cases of fraud and neglect.  The investigation started after officials found that the man who had been on the records as the longest-lived male had actually been dead for over 20 years!  Someone had been cashing his pension checks all those years and neglecting to report the death.

The secret of Japanese Longevity may well be just bad data.

But the bad data was accepted because it confirmed the going in belief – the belief that Japanese lifestyle was healthier.

The same sort of bad data fed the Sub Prime crisis.  Housing prices were believed to never go down.  So data that confirmed that belief was readily accepted.  Defaults on Sub Prime mortgages were thought to fall within a manageable range and data that confirmed that belief was accepted.

Data that started to appear in late 2006, indicating that those trends were not permanent and were in fact reversing, was widely ignored.  One of the most common aspects of confirmation bias is to treat non-confirming data as somehow unusable.

We try to filter out noise and work only with signal.  But sometimes, the noise is a signal all its own.  And a very important signal to risk managers.

Common Terms for Severity

June 1, 2010

In the US, firms are required to disclose their risks.  This has led to an exercise that is particularly useless.  Firms obviously spend very little time on what they publish under this part of their financial statements.  Most firms seem to use boilerplate language and a list of risks that is as long as possible.  It is clearly a totally compliance-based CYA activity.  The only way that a firm could “lose” under this system is to fail to disclose something that later creates a major loss.  So it is best to mention everything under the sun.  But when you look across a sector at these lists, you find a startling degree to which the risks differ.  That is because there is absolutely no standard for what counts as a risk or, if something is a risk, for how significant it is.  The idea of risk severity is totally missing.

Bread Box

 

What would help would be a common set of terms for Severity of losses from risks.  Here is a suggestion of a scale for discussing loss severity for an individual firm: 

  1. A Loss that is a threat to earnings.  This level of risk could result in a loss that would seriously impair or eliminate earnings.
  2. A Loss that could result in a significant reduction of capital.  This level of risk would result in a loss that would eliminate earnings and, in addition, eat into capital, reducing it by 10% to 20%.
  3. A Loss that could result in a severe reduction of business activity.  For insurers, this would be called “going into run-off.”  It means that the firm is not insolvent, but it is unable to continue doing new business.  This state often lasts for several years as the old liabilities of the insurer are slowly paid off as they become due.  Usually the firm in this state has some capital, but not enough to make any customers comfortable trusting it with future risks.
  4. A Loss that would result in the insolvency of the firm.
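As a sketch, the firm-level scale might be encoded as a simple classifier. Only the 10%-to-20% capital reduction band in item 2 comes from the scale itself; the run-off and insolvency cutoffs below are illustrative assumptions, not part of the original scale.

```python
def loss_class(loss, earnings, capital):
    """Map a loss to the firm-level severity classes above (illustrative thresholds)."""
    if loss >= earnings + capital:
        return 4                  # insolvency: the loss exhausts earnings and capital
    excess = loss - earnings      # the part of the loss that eats into capital
    if excess >= 0.5 * capital:   # assumed run-off trigger for this sketch
        return 3
    if excess >= 0.1 * capital:   # the 10%-20% capital reduction band of Class 2
        return 2
    return 1                      # a threat to earnings only

# A firm with 100 of earnings and 500 of capital:
print(loss_class(80, 100, 500))   # 1
print(loss_class(180, 100, 500))  # 2
print(loss_class(400, 100, 500))  # 3
print(loss_class(700, 100, 500))  # 4
```

The value of any such encoding is not the exact cutoffs but that everyone in the firm uses the same ones.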

Then in addition, for an entire sector or sub sector of firms: 

  1. Losses that significantly reduce earnings of the sector.  A few firms might have capital reductions.
  2. Losses that significantly impair capital for the sector.  A few firms might be run out of business from these losses.
  3. Losses that could cause a significant number of firms in the sector to be run out of business.  The remainder of the sector still has capacity to pick up the business of the firms that go into run-off.  A few firms might be insolvent. 
  4. Losses that are large enough that the sector no longer has the capacity to do the business that it had been doing.  There is a forced reduction in activity in the sector until capacity can be replaced, either internally or from outside funds.  A large number of firms are either insolvent or will need to go into run-off. 

These can be referred to as Class 1, Class 2, Class 3, Class 4 risks for a firm or for a sector.  

Class 3 and Class 4 Sector risks are Systemic Risks.  

Care should be taken to make sure that everyone understands that risk drivers such as equity markets or CDS can produce Class 1, Class 2, Class 3 or Class 4 losses for a firm or for a sector in a severe enough scenario.  There is no such thing as classifying a risk as always falling into one Class.  However, it is possible that at a point in time, a risk may be small enough that it cannot produce a loss that is more than a Class 1 event.

For example, at a point in time (perhaps 2001), US sub prime mortgages were not a large enough class to rise above a Class 1 loss for any firms except those whose sole business was in that area.  By 2007, Sub Prime mortgage exposure was large enough that Class 4 losses were created for the banking sector.  

Looking at Sub Prime mortgage exposure in 2006, a bank should have been able to determine that sub primes could create a Class 1, Class 2, Class 3 or even Class 4 loss in the future.  The banks could have determined the situations that would have led to losses in each Class for their firm and determined the likelihood of each situation, as well as the degree of preparation needed for the situation.  This activity would have shown the startling growth of the sub prime mortgage exposure from a Class 1 to a Class 2 through Class 3 to Class 4 in a very short time period.  

Similarly, the prudential regulators could theoretically have done the same activity at the sector level.  Only in theory, because the banking regulators do not at this time collect the information needed to do such an exercise.  There is a proposal in the financial regulation legislation to collect such information.  See CE_NIF.

The Yin and Yang of Risk

October 2, 2009
Guest Post By Chris Mandel

One thing I’ve discovered in the last year is that extremes seem to be the rule of thumb these days.

The obvious examples are the most significant aspects of the current financial crisis: huge numbers of mortgage defaults; unfathomable aggregations of loss in credit default swaps; inordinate amounts of market confidence destroyed, with the resulting 50 percent portfolio reductions in the wake; etc.

In recent years it has been reflected in the more traditional insurable risk realm with record-setting natural catastrophe seasons and increasingly severe terrorism events. The fundamental insurance concept of pooling and sharing risk for profitable diversification is threatened. Even the expected level of loss is growing increasingly unexpected in actual results.

Examining the risk discipline and its evolving practice, I see management by extremes beginning to subsume the norm. So here are some examples of how this looks.

Reams of Data–Little Data Intelligence: We have tons of “risk” related data but limited ability to interpret it and use it in order to head off losses that were ostensibly preventable or at least reducible.

Continued at Risk and Insurance

