Archive for the ‘Equity Risk’ category

Is there a “Normal” Level for Volatility?

August 10, 2011

Much of modern Financial Economics is built upon a series of assumptions about the markets. One of those assumptions is that the markets are equilibrium seeking. If that were the case, it would seem possible to determine the equilibrium level, because everything would constantly be tugged toward that level.
But look at Volatility as represented by the VIX…

The above graph shows the VIX for 30 years.  It is difficult to see an equilibrium level in this graph.

What is Volatility?  It is actually a mixture of two main things, plus anything else that the model forgets.  It is the value that is forced to balance the prices of equity options against risk-free rates using the Black-Scholes formula.

The two main items that are cooked into the volatility number are, first, market expectations of the future variability of returns on the stock market and, second, the risk premium, or margin for error, that the market wants to be paid for taking on the uncertainty of future transactions.
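As a rough sketch of how that number gets backed out in practice, the snippet below inverts the Black-Scholes call price with a root finder; all of the inputs (spot, strike, option price, rate, term) are hypothetical, chosen only for illustration.

```python
# Sketch: back an implied volatility out of a (hypothetical) option price by
# inverting the Black-Scholes call formula with a root finder.
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(spot, strike, t, r, sigma):
    """Black-Scholes price of a European call, no dividends."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm.cdf(d1) - strike * exp(-r * t) * norm.cdf(d2)

def implied_vol(option_price, spot, strike, t, r):
    """Find the sigma that makes the model price match the observed price."""
    return brentq(lambda s: bs_call_price(spot, strike, t, r, s) - option_price, 1e-4, 5.0)

# purely illustrative inputs
print(implied_vol(option_price=9.35, spot=100.0, strike=100.0, t=0.5, r=0.02))
```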

Looking at the decidedly smoother plot of annual values…


There does not seem to be any evidence that the actual variability of prices is unsteady.  It has stayed in the range of 20% since it drifted up from the range of 10%.  If there were going to be an equilibrium, this chart seems to show where it might be.  But the VIX chart above shows that the market trades all over the place on volatility, not even necessarily around the level of the experienced volatility.
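For reference, the "annual values" in a chart like that are typically realized volatilities – the standard deviation of daily returns, annualized.  A minimal sketch of that calculation, assuming a pandas series of daily closing prices (the synthetic data below is only a stand-in):

```python
# Sketch: realized volatility by calendar year from daily closing prices.
import numpy as np
import pandas as pd

def realized_vol_by_year(prices: pd.Series) -> pd.Series:
    daily_returns = prices.pct_change().dropna()
    # annualize the standard deviation of daily returns (~252 trading days/year)
    return daily_returns.groupby(daily_returns.index.year).std() * np.sqrt(252)

# toy usage on synthetic data (a real calculation would use actual index prices)
dates = pd.bdate_range("2009-01-01", "2010-12-31")
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.012, len(dates)))), index=dates)
print(realized_vol_by_year(prices))
```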

And much of that is doubtless the uncertainty, or risk premium.  The second graph does show that experienced volatility has drifted to twice the level that it was in the early 1990’s.  There is no guarantee that it will not double again.  The markets keep changing.  There is no reason to rely on these historical analyses.  Stories that the majority of trades today are computer-driven, very short-term positions taken by hedge funds suggest that there is no reason whatsoever to think that the market of the next quarter will be in any way like the market of ten or twenty years ago.  If anything, you would guess that it will be much more volatile.  Those trading schemes make their money off of price movements, not stability.

So is there a normal level for volatility?  Doubtless not.   At least not in this imperfect world.  


The Year in Risk – 2010

January 3, 2011

It is very difficult to strike the right note looking backwards and talking about risk and risk management.  The natural tendency is to talk about the right and wrong “picks” – the risks that you decided not to hedge or reinsure that did not develop losses, and the ones that you did offload that did show losses.

But if we did that, we would be falling into exactly the same trap that makes it almost impossible to keep support for risk management over time.  Risk Management will fail if it becomes all about making the right risk “picks”.

There are other important and useful topics that we can address.  One of those is the changing risk environment over the year. In addition, we can try to assess the prevailing views of the risk environment throughout the year.


VIX is an interesting indicator of the prevailing market view of risk throughout the year.  VIX is an indicator of the price of insurance against market volatility.  The price goes up when the market believes that future volatility will be higher, or alternatively when the market is simply highly uncertain about the future.

Uncertain was the word used most often throughout the year to describe the economic situation.  But one insight that you can glean from looking at VIX over a longer time period is that volatility in 2010 was not historically high.

If you look at the world in terms of long term averages, a single regime view of the world, then you see 2010 as an above average year for volatility.  But if instead of a single regime world, you think of a multi regime world, then 2010 is not unusual for the higher volatility regimes.
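One simple way to make the multi-regime view concrete is to fit a two-component mixture to the history of VIX levels and let the data sort the years into a low and a high volatility regime.  The sketch below assumes a short, purely illustrative series of yearly average VIX levels; real values would come from the published CBOE history.

```python
# Sketch: let a two-component mixture sort yearly average VIX levels into regimes.
# The values below are placeholders for illustration, not actual CBOE data.
import numpy as np
from sklearn.mixture import GaussianMixture

yearly_avg_vix = np.array([13.4, 12.8, 17.5, 25.6, 24.4, 32.7, 22.5, 17.5, 20.5])  # illustrative only
X = yearly_avg_vix.reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
regime = gm.predict(X)                 # 0/1 regime label for each year
print("regime means:", gm.means_.ravel())
print("year labels: ", regime)
```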

So for stocks, the VIX indicates that 2010 was a year when market opinions were for a higher volatility risk environment.  Which is about the same as the opinion in half of the past 20 years.

That is what everyone believed.

Here is what happened:

Month        Return
December       6.0%
November      -0.4%
October        3.5%
September      8.7%
August        -5.3%
July           6.8%
June          -5.2%
May           -8.3%
April          1.3%
March          5.8%
February       2.8%
January       -3.8%
Average        1.0%
Std Dev        5.6%

That looks pretty volatile.  And comparing to the past several years, we see below that 2010 was just a little less actually volatile than 2008 and 2009.  So we are still in a regime of high volatility.

So we can conclude that 2010 was a year of both high expected and high actual volatility.
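The summary statistics in the table, and what they imply on an annualized basis, are easy to check.  The sketch below reproduces the 1.0% average and 5.6% standard deviation from the monthly figures quoted above and annualizes the latter to roughly 19%.

```python
# Reproduce the summary statistics from the monthly returns in the table above.
import numpy as np

monthly_returns = np.array([-3.8, 2.8, 5.8, 1.3, -8.3, -5.2,
                            6.8, -5.3, 8.7, 3.5, -0.4, 6.0])   # Jan..Dec 2010, in %

print(round(monthly_returns.mean(), 1))                        # ~1.0  (average, %)
print(round(monthly_returns.std(ddof=1), 1))                   # ~5.6  (monthly std dev, %)
print(round(monthly_returns.std(ddof=1) * np.sqrt(12), 1))     # ~19.3 (annualized volatility, %)
```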

If an exercise like this is repeated each year for each important risk, eventually insights into the possibilities for both expected and actual risk levels can be formed, and strategies and tactics developed for different combinations.

The other thing that we should do when we look back at a year is to note how the year looked in the artificial universe of our risk model.

For example, when folks looked back at 2008 stock market results in early 2009, many risk managers had to admit that their models told them that 2008 was a 1-in-250 to 1-in-500 year event.  That did not quite seem right, especially since losses of that size had occurred two or three times in the past 125 years.

What many risk managers decided to do was to drop the (usually unstated) assumption that things had permanently changed and that the long-term experience with those large losses was not relevant. Once they did that, the risk models were recalibrated and 2008 became something like a 1-in-75 to 1-in-100 year event.

For the stock market, the 15.1% total return in 2010 was not unusual and caused no concern for recalibration.

But there are many other risks, particularly when you look at general insurance risks, that had higher than expected claims.  Some were frequency driven and some were severity driven.  Here is a partial list:

  • Queensland flood
  • December snowstorms (Europe & US)
  • Earthquakes (Haiti, Chile, China, New Zealand)
  • Iceland Volcano

Munich Re estimates that 2010 will go down as the sixth worst year for amount of general insurance claims paid for disasters.

Each insurer and reinsurer can look at their losses and see, in the aggregate and for each peril separately, what their models would assign as likelihood for 2010.

The final topic for the year in risk is Systemic Risk.  2010 will go down as the year that we started to worry about Systemic Risk.  Regulators, both in the US and globally, are working on their methods for inoculating the financial markets against systemic risk.  Firms around the globe are honing their arguments for why they do not pose a systemic threat so that they can avoid the extra regulation that will doubtless befall the firms that do.

Riskviews fervently hopes that those who do work on this are very open minded.  As Mark Twain once said,

“History does not repeat itself, but it does rhyme.”

And for Systemic Risk, my hope is that the resources, and the inevitable drag from additional regulation, are applied not just to preventing an exact repeat of the recent events, but with recognition of the possibility of rhyming, as well as of what I would think is the most likely systemic issue – that financial innovation will bring us an entirely new way to bollocks up the system next time.

Happy New Year!

It’s All Relative

November 7, 2010

Another way to differentiate risks and loss situations is to distinguish between systematic losses and losses where your firm ends up in the bottom quartile of worst losses.

You can get to that by way of having a higher concentration of a risk exposure than your peers.  Or else you can lose more in proportion to your exposure than your peers.

The reason it can be important to distinguish these situations is that there is some forgiveness from the market, from your customers and from your distributors if you lose money when everyone else is losing it.  But there is little sympathy for the firm that manages to lose much more than everyone else.

And worst of all is to lose money when no one else is losing it.

So perhaps you might want to go through each of your largest risk exposures and imagine how any of these three scenarios might hit you.

  • One company had a loss of 50% of capital during the credit crunch of the early 1990’s.  Their largest credit exposure was over 50% of capital and it went south.  Average recoveries were 60% to 80% in those days, but this default had a 10% recovery.  That 60% to 80% was an average, not a guaranteed recovery amount.  Most companies lost less than 5% of capital in that year.
  • Another company lost well over 25% of capital during the dot com bust.  They had concentrated in variable annuities.  No fancy guarantees, just guaranteed death benefits.  But their clientele was several years older than their average competitors.  And the difference in mortality rate was enough that they had losses that were much larger than their competitors, who were also not so concentrated in variable annuities.
  • In explaining why their claims for Hurricane Katrina were about 50% higher as a percent of their expected total claims, one insurer found that they had failed to reinsure a large commercial customer whose total loss from the hurricane made up almost 75% of the excess.  Had they followed their own retention rules on that one case, the excess would have been reduced by half.

So go over your risks.  Create scenarios for each major risk category that might send your losses far over the rest of the pack.  Then look for what needs to be done to prevent those extraordinary losses.

Another Fine Mess

May 9, 2010

High speed trading ran amok on Thursday, May 6.  It sounds like exactly the same thing that led to the 1987 market crash.  There never was an explanation in 1987 and there most likely will not be one now.

Why not?  Because it is not in the interest of the people who are in a position to know the answer to tell anyone.

Look, the news says that this high speed trading is 75% of the volume of trading on the exchanges. That means that it is probably close to 75% of the exchanges’ revenue.

Most likely, the answer is that this sort of crash has always been possible at any time of any day with computers sending in orders by the thousands per minute. The people who programmed the computers just do not have enough imagination to anticipate the possibility that no one would want to take the other side of their trade.

Of course this would be much less likely if someone actually looked at what was going on, but that would eliminate 90% of that volume.  Back before we handed all of the work to computers, the floor brokers who were the market makers would take care of these situations.

The exchange, which is benefiting from all of this volume, should perhaps take some responsibility for maintaining an orderly market.  Or else someone else should.  The problem is that there needs to be someone with deep pockets and the ability to discern the difference between a temporary lack of buyers or sellers and a real market rout.

Oh, that was the definition of the old market makers – perhaps we eliminated that job too soon.  But people resented paying anything to those folks during the vast majority of the time when their services were not needed.

The problem most likely is that there is not a solution that will maintain the revenue to the exchanges, because if you brought back the market makers and paid them enough to make the very high risk that they were taking worth their while, that would cut into the margins of both the exchanges and the high-speed traders.

Just one more practice that is beneficial to the financial sector but destructive to the economy.  After the 1929 crash, many regular people stayed out of the markets for almost 50 years.  It seems that every year, we are learning one more way that the deck is stacked against the common man.

In poker, when you sit down at the table, it is said that you should look around and determine who is the chump at the table.  If you cannot tell, then you are the chump.

As we learn about more and more of these practices that are employed in the financial markets to extract extra returns for someone, it seems more and more likely that those of us who are not involved in those activities are the chumps.

Understanding and Balance

October 27, 2009

Everything needs to balance.  A requirement that management understand the model creates an equal and opposite obligation on the part of the modelers to really explain the assumptions that are embedded in the model and the characteristics that the model will exhibit over time.

This means that the modelers themselves have to actually understand the assumptions of the model – not just the mechanical assumptions that support the mechanical calculations of the model, but the fundamental underlying assumptions about why the sort of model chosen is a reliable way to represent the world.

For example, one of the aspects of models that is often disturbing to senior management is the degree to which the models require recalibration.  That need for recalibration is an aspect of the fundamental nature of the model.  And I would be willing to guess that few modelers have, in their explanations, fully described that aspect of their model or explained why it exists and why it is necessary.

That is just an example.  We modelers need to understand all of these fundamental points where models are simply baffling to senior management users and work to overcome the gap between what is being explained and what needs to be explained.

We are focused on the process.  Getting the process right.  If we choose the right process and follow it correctly, then the result should be correct.

But the explanations that we need are about why the choice of the process made sense in the first place.  And more importantly, how, now that we have followed the process for so long that we barely remember why we chose it, do we NOW believe that the result is correct.

What is needed is a validation process that gets to the heart of the fundamental questions about the model that are not yet known!  Sound frustrating enough?

The process of using risk models appropriately is an intellectual journey.  There is a need to step past the long-ingrained approach to projections and models that puts models in the place of fortune tellers.  The next step is to begin to find value in a what-if exercise.  Then there is the giant leap of the stochastic scenario generator.  Many major conceptual and practical leaps are needed to move from (a) getting a result that is not reams and reams of complete nonsense, to (b) getting a result that gives some insight into the shape of the future, to (c) realizing that once you actually have the model right, it starts to work like all of the other models you have ever worked with, with vast amounts of confirmation of what you already know (now that you have been doing this for a couple of years) along with an occasional insight that was totally unavailable without the model.

But while you have been taking this journey of increasing insight, you cross over and become one of those whom you previously thought talked mostly in riddles and dense jargon.

But to be fully effective, you need to be able to explain all of this to someone who has not taken the journey.

The first step is to understand that in almost all cases they do not give a flip about your model and the journey you went through to get it to work.

The next step is to realize that they are often grounded in an understanding of the business.  For each person in your management team, you need to understand which part of the business they are grounded in and convince them that the model captures what they understand about the part of the business that they know.

Then you need to satisfy those whose grounding is in the financials.  For those folks, we usually do a process called static validation – we show that if we set the assumptions of the model to the actual experience of last year, the model actually reproduces last year’s financial results.
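A minimal sketch of what that static validation check might look like in code; the `run_model` interface, the line items and the 2% tolerance are all hypothetical, standing in for whatever the actual model produces.

```python
# Sketch of a static validation: run the model on last year's actual assumptions
# and check that it reproduces last year's reported results. The `run_model`
# callable, the line items and the 2% tolerance are hypothetical placeholders.

def static_validation(run_model, actual_assumptions, actual_results, tolerance=0.02):
    modeled = run_model(actual_assumptions)          # deterministic run on realized experience
    report = {}
    for item, actual in actual_results.items():
        gap = abs(modeled[item] - actual) / abs(actual)
        report[item] = {"modeled": modeled[item], "actual": actual,
                        "relative_gap": gap, "within_tolerance": gap <= tolerance}
    return report

# toy usage with a stand-in "model"
print(static_validation(run_model=lambda assumptions: {"net_income": 102.0},
                        actual_assumptions={"equity_return": 0.05},
                        actual_results={"net_income": 100.0}))
```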

Then you can start to work on an understanding of the variability of the results.  Where on the probability spectrum was last year – both for each element and for the aggregate result.
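Locating last year on the probability spectrum then amounts to ranking the actual outcome within the model's simulated distribution.  A sketch, using a made-up normal distribution as a stand-in for real model output and a 2008-style 37% equity loss as the actual:

```python
# Sketch: where did last year's actual result fall in the model's distribution?
# The simulated output below is a stand-in normal distribution, not a real model.
import numpy as np

rng = np.random.default_rng(1)
simulated_returns = rng.normal(loc=0.08, scale=0.18, size=100_000)  # hypothetical simulated annual equity returns
actual_return = -0.37                                               # roughly the 2008 S&P 500 experience

percentile = np.mean(simulated_returns <= actual_return)
print(f"actual result sits at about the {percentile:.2%} point of the modeled distribution")
```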

That one is usually troublesome.  For 2008, it was particularly troublesome for any firms that owned any equities.  Most models would have placed 2008 stock market losses almost totally off the charts.

But in the end, it is better to have the discussion.  It will give the management users a healthy skepticism for the model and more of an appreciation for the uses and limitations of the entire modeling methodology.

These discussions should lead to understanding and balance.  Enough understanding that there is a balanced view of the model.  Not total reliance and not total skepticism.

Black Swan Free World (5)

October 9, 2009

On April 7, 2009, the Financial Times published an article written by Nassim Taleb called Ten Principles for a Black Swan Free World. Let’s look at them one at a time…

5. Counter-balance complexity with simplicity. Complexity from globalisation and highly networked economic life needs to be countered by simplicity in financial products. The complex economy is already a form of leverage: the leverage of efficiency. Such systems survive thanks to slack and redundancy; adding debt produces wild and dangerous gyrations and leaves no room for error. Capitalism cannot avoid fads and bubbles: equity bubbles (as in 2000) have proved to be mild; debt bubbles are vicious.

Complexity gets away from us very, very quickly.  And at the same time, we may spend so much time worrying about the complexity, building very complex models to deal with the complexity, that we lose sight of the basics.  So Complexity can hurt us both coming and going.

So why do we insist on Complexity?  That at least is simple.  Most complexity exists to provide differentiation between financial products that otherwise would be pure commodities.  The excuse is that the Complex products are needed to match up with the risks of a complex world.  Another, even less admirable reason for the complexity is to create something that sounds like a simple risk relief product but that costs the seller much less to provide, by carving out the parts of the risk relief that are more expensive but less desirable or less well understood by the customer.

Generally, customers who are buying risk relief products like insurance or hedges have a simple objective.  If they have a loss they want something that will make a payment that will offset the loss.  Complexity comes in when the risk relief products are customized to potentially better meet customer needs (according to the sales literature).

Taleb suggests that complexity also hides leverage.  That is very definitely the case.  For example, a CDS can be replicated by a long position in a credit and a short position in a treasury.  A short position in a treasury is finance speak for a loan at a better rate than you can actually get.  And a loan is leverage.  The amount of the leverage is the full notional amount of the CDS.  Fans of derivatives will scoff at the idea that the notional amount is of any interest to anyone, but in this case at least, anyone who wants to know how much leverage the buyer of a CDS has needs to add in the full notional amount of all of the CDS.

Debt bubbles are vicious because of the feedback loop in debt.  If one borrows money to purchase an asset and the asset increases in value, then you can use that increased value as collateral to increase the debt and purchase more of the asset.  The increase in demand for the asset causes prices to rise and so it goes.
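That feedback loop can be made concrete with a toy simulation; every parameter below (the loan-to-value limit, the price impact of new buying) is invented purely for illustration, not taken from any real market.

```python
# A toy simulation of the debt-fueled feedback loop described above.
# The loan-to-value limit and price-impact coefficient are invented for illustration.

price = 100.0        # asset price
units_held = 1.0     # amount of the asset held by the leveraged buyer
debt = 0.0
max_ltv = 0.8        # assumed lender limit: debt up to 80% of collateral value
impact = 0.05        # assumed price impact per unit of new buying

for step in range(1, 9):
    collateral = units_held * price
    new_borrowing = max(max_ltv * collateral - debt, 0.0)   # headroom created by higher prices
    new_units = new_borrowing / price                       # spent buying more of the asset
    debt += new_borrowing
    units_held += new_units
    price *= 1 + impact * new_units                         # extra demand pushes the price up
    print(f"step {step}: price {price:7.1f}  debt {debt:9.1f}  loan-to-value {debt / (units_held * price):.0%}")
```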

But ultimately the reason that many economists have a hard time identifying bubbles (other than that they do not believe bubbles really ever exist) is that they do not know the capacity of any asset market to absorb additional investment.  Clearly in the example above, if there is a fixed amount of the asset that becomes subject to a debt bubble, it will very, very quickly run into a bubble situation.  But if the asset is a business, or more likely a sector, it is not so easy to know exactly when the capacity of that sector to efficiently use additional capital is reached.


Custard Cream Risk – Compared to What???

September 26, 2009

It was recently revealed that the Custard Cream is the most dangerous biscuit.

[Image: Custard Cream biscuit]

But this illustrates the issue with stand-alone risk analysis.  Compared to what?  Last spring, there was quite a bit of concern raised when it was reported that 18 people had died from Swine Flu.  That sounds VERY BAD.  But Compared to What?  Later stories revealed that seasonal flu is on average responsible for 30,000 deaths per year in the US.  That breaks down to an average of 82 per day, or more during the flu season if you reflect the fact that there is little flu in the summer months.  No one was ever willing to say whether the 18 deaths were in addition to the 82 expected or just a part of that total.

The chart below suggests that Swine Flu is significantly less deadly than the seasonal flu.  However, what it fails to reveal is that Swine Flu is highly transmissible because there is very little immunity in the population.  So even with a very low fatality rate per infection, with a very high infection rate, expectations now are for more than twice as many deaths from the Swine Flu as from the seasonal flu.

[Chart: comparative disease fatality rates]

For many years, being aware of this issue, I tried to make a comparison whenever I presented a risk assessment.  Most commonly, I used a comparison to the risk in a common stock portfolio.  Was the risk I was assessing more or less risky than the stocks?  I would compare the average return and the standard deviation of returns, as well as the tail risk.  If appropriate, I would make that comparison for one year as well as for many years.
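A sketch of that kind of side-by-side comparison, with simulated return distributions standing in for both the stock benchmark and the risk being assessed:

```python
# Sketch: compare a risk against a common-stock benchmark on average return,
# standard deviation and tail risk. Both series are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(42)
stock_returns = rng.normal(0.08, 0.18, 100_000)   # hypothetical equity portfolio returns
assessed_risk = rng.normal(0.05, 0.10, 100_000)   # hypothetical returns of the risk being assessed

for name, r in [("stocks", stock_returns), ("assessed risk", assessed_risk)]:
    print(f"{name:14s} mean {r.mean():6.1%}   stdev {r.std():6.1%}   1-in-100 tail {np.percentile(r, 1):7.1%}")
```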

But I now realize that was not the best choice.  Experience in the past year reveals that many people did not really have a good idea of how risky the stock market is.  Many risk models would have pegged the 37% drop in the S&P in 2008 as a 1-in-250-year event or worse, even though there have now been similar levels of loss three times in the last 105 years on a calendar-year basis, and more if you look within calendar years.

[Chart: distribution of stock market annual returns, 1825–2008]

The chart above was made before the end of the year.  By the end of the year, 2008 fell back into the -30% to -40% return column.  But if your hypothesis had been that a loss that large was a 1-in-200-year event, the likelihood of exactly one occurrence in a 105-year period is only about 31%.  It would be much more likely to see none (about 60%).  Two occurrences would happen only about 8% of the time, and three or more only about 1% of the time.  So it seems that a 1-in-200 return period hypothesis has about a 99% likelihood of being incorrect.  If you instead assume a return period of 1-in-50 years, the three observations become roughly a 75th percentile event.
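Those likelihoods follow from a simple binomial calculation – 105 independent years, each with a 1-in-200 (or 1-in-50) chance of a loss that size.  A quick check:

```python
# Check the return-period arithmetic: number of calendar-year losses of that size
# in 105 years, if the true annual probability is 1/200 or 1/50.
from scipy.stats import binom

years = 105
for annual_prob in (1 / 200, 1 / 50):
    print(f"assumed annual probability {annual_prob:.3f}:")
    for k in range(3):
        print(f"  exactly {k} occurrences: {binom.pmf(k, years, annual_prob):5.1%}")
    print(f"  three or more:         {1 - binom.cdf(2, years, annual_prob):5.1%}")
```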

So that is a fundamental issue in communicating risk.  Is there really some risk that we really know – so that we can use it as a standard of comparison?

The article on Custard Creams was brought to my attention by Johann Meeke.  He says that he will continue to live dangerously with his biscuits.

