Math Wins

The emerging US election results show that the math-based forecasters like Nate Silver were extremely accurate in predicting the outcome of the election, while the gut-based pundits were totally off base. See NYT.

This is the same comparison that psychologists have been making for 50 years between clinical judgement and statistical reasoning.  See The Evolution of Thinking.

The pundits making gut predictions seem to have been completely fooled by the Confirmation Bias.  They gave credibility only to information that matched their preferred conclusion.

Risk managers need to take care.

This does not mean that the statistical risk models must be right.

That is because risk models are fundamentally based upon opinions.  That makes them a natural tool of the Confirmation Bias.

Risk models are not models of “what is” as much as they are models of “what will be”.  They always reflect one or more biases:

  • A bias that the future is predictable.  That has not particularly been the case for the past 4 years or so.  The future has been decidedly unpredictable.  Uncertain is the word that we read over and over.  Companies with highly complex models have had less of an advantage over companies without them than they did during the Great Moderation.
  • The bias that the future will be just like the past.  This bias manifested itself as a totally disastrous blindness to the risks that led to the Great Recession, or to the Fukushima reactor disaster.  It was thought that something could not happen if it had not happened before.  (The sketch after this list shows how that assumption gets baked into a common risk calculation.)
  • The bias that the market reflects all available information.  The market value of subprime mortgage CDOs in 2006, when mortgage defaults first started happening, just does not confirm this bias.  And at least half of all corporate defaults happen in a cliff, not a gradual decline.
  • The bias that things will be much worse than everyone else thinks.  (This is the position of folks like Nassim Taleb and Nouriel Roubini.  They can predict disaster every week and be right occasionally.  But this is not a useful position for risk managers to take in general.  Chicken Little was right, but just once.)
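
To make the second bullet concrete, here is a minimal sketch, with made-up numbers, of a historical-simulation VaR calculation.  The “future will be just like the past” assumption is built in, because the loss distribution is read straight off the observed sample.  The historical_var helper and the simulated returns are purely illustrative.

    import numpy as np

    # Hypothetical daily portfolio returns; in practice this would be a
    # multi-year history of actual P&L observations.
    rng = np.random.default_rng(seed=0)
    returns = rng.normal(loc=0.0003, scale=0.01, size=1000)

    def historical_var(returns, confidence=0.99):
        """One-day historical-simulation VaR: the loss exceeded on only
        (1 - confidence) of the observed days.  The model implicitly assumes
        the future loss distribution is the empirical past distribution."""
        losses = -returns
        return np.percentile(losses, confidence * 100)

    print(f"99% one-day VaR: {historical_var(returns):.4%} of portfolio value")

A scenario that never appears in the lookback window contributes nothing to that percentile, which is exactly the blindness described in the second bullet.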

So risk managers need to be careful about taking too much comfort from the win for statistics in the Presidential race.

Comments on “Math Wins”

  1. riskviews Says:

    That is an impression. Perhaps an exaggeration. I tried to find some facts. I suggest that you look at Moody’s Default and Recovery Rate study for 2009. On page 11, there is a chart that shows the mean and median ratings prior to default for issues that defaulted. Half of the defaults were B3 or better six months prior to default.

    I also looked for statistics on CDS spreads. One study said that CDS spreads were good for predicting defaults 20 days in advance. But they were only able to get data on less than 15% of the defaults in their study period.

    My impression about the 50% was formed from working inside fixed income asset management shops, where actual defaults always seemed to be a shock. But that is probably skewed, because they sought to sell out of positions that were heading to default. So when they were able to form a clear opinion, they sold; otherwise they were surprised. 50% was my generous estimate of the split.

    What nobody seems to have studied is the bonds that didn’t default, to see whether the information that looks even slightly predictive (ratings or spreads) gives many false positives as well as good indications. A toy version of that check is sketched below.

    In general this topic is called Jump Risk. Let me know if you find something clearer.
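
    A toy sketch of that check, with entirely made-up ratings and outcomes (not Moody’s figures): tabulate how often a “rated below B3 six months earlier” flag fires for issuers that did and did not default, and look at the false-alarm rate as well as the hit rate.

        # Hypothetical data: (Moody's rating six months earlier, defaulted within the year?)
        bonds = [
            ("Caa1", True), ("Caa3", True), ("B3", True), ("Ba2", True),
            ("Caa2", False), ("B1", False), ("Baa3", False), ("A2", False),
        ]

        # Moody's rating scale from best to worst.
        ORDER = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
                 "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3",
                 "B1", "B2", "B3", "Caa1", "Caa2", "Caa3", "Ca", "C"]

        def flagged(rating):
            # Signal fires if the rating was worse than B3 six months earlier.
            return ORDER.index(rating) > ORDER.index("B3")

        tp = sum(1 for r, d in bonds if flagged(r) and d)        # defaults the signal caught
        fn = sum(1 for r, d in bonds if not flagged(r) and d)    # "cliff" defaults it missed
        fp = sum(1 for r, d in bonds if flagged(r) and not d)    # false alarms
        tn = sum(1 for r, d in bonds if not flagged(r) and not d)

        print(f"hit rate {tp / (tp + fn):.0%}, false alarm rate {fp / (fp + tn):.0%}")

    On real data the interesting question is whether the false-alarm rate is low enough for a signal like that to be worth acting on.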

  2. Dermot Says:

    On the third bullet, "at least half of all corporate defaults happen in a cliff, not a gradual decline", do you have a reference?
    (Not doubting the assertion, but I am interested in knowing more.)

