Archive for October 2009

Myths of Market Consistent Valuation

October 31, 2009

    Guest Post from Elliot Varnell

    Myth 1: An arbitrage free model will by itself give a market consistent valuation.

    An arbitrage-free model which is calibrated to deep and liquid market data will give a market consistent valuation. An arbitrage-free model which ignores available deep and liquid market data does not give a market consistent valuation. Having said this, there is no tight definition of what constitutes deep and liquid market data, and therefore no tight definition of what constitutes market consistent valuation. For example, a very relevant question is whether calibrating to historic volatility can be considered market consistent if there is a marginally liquid market in options. CEIOPS's CP39, published in July 2009, appears to leave open the question of which volatility could be used, while CP41 requires that a market be deep, liquid and transparent, and that these properties be permanent.
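    As a concrete illustration of what calibrating to option market data involves, here is a minimal Python sketch that backs an implied volatility out of a quoted option price under Black-Scholes. All prices, rates and tolerances are illustrative assumptions, not figures from any CEIOPS paper:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=2.0, tol=1e-8):
    # bisection works because the call price is increasing in sigma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# an at-the-money one-year option quoted at 9.41, spot 100, rate 3%
print(round(implied_vol(9.41, S=100, K=100, T=1.0, r=0.03), 3))  # roughly 0.20
```

    The same idea scales up to calibrating a full ESG: the model's free parameters are adjusted until model prices reproduce the deep and liquid market quotes.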

    Myth 2: A model calibrated to deep and liquid market data will give a Market Consistent Valuation.

    A model calibrated to deep and liquid market data will only give a market consistent valuation if the model is also arbitrage free. If a model ignores arbitrage-free dynamics then it could still be calibrated to replicate certain prices. However, this would not be a sensible framework for marking to model the prices of other assets and liabilities, as is required for the valuation of many participating life insurance contracts. Having said this, the implementations of some theoretically arbitrage-free models are not always fully arbitrage free themselves, due to issues such as discretisation, although they can be designed so that any residual arbitrage is not material at the level of the overall technical provision calculation.
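    One simple way to see whether a calibrated model has drifted away from arbitrage-free pricing is to test its outputs against a model-independent relationship such as put-call parity. A minimal sketch, where the model prices and the materiality threshold are illustrative assumptions:

```python
from math import exp

def parity_gap(call, put, S, K, T, r):
    """Put-call parity: C - P should equal S - K*exp(-rT) in any
    arbitrage-free model.  A material gap signals model arbitrage."""
    return (call - put) - (S - K * exp(-r * T))

# illustrative model outputs for spot 100, strike 100, 1 year, 3% rate
gap = parity_gap(call=9.41, put=6.45, S=100, K=100, T=1.0, r=0.03)
materiality = 0.01 * 100  # e.g. 1% of spot as a materiality threshold
print(abs(gap) <= materiality)  # → True
```

    A discretised implementation might show a small nonzero gap; the design question is whether that gap stays below the materiality level of the technical provision calculation.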

    Myth 3: Market Consistent Valuation gives the right answer.

    Market consistent valuation does not give the right answer, per se, but an answer conditional on the model and the calibration parameters. The valuation is only as good as these underlying assumptions. One thing we can be sure of is that the model will be wrong in some way. This is why understanding and documenting the weakness of an ESG model and its calibration is as important as the actual model design and calibration itself.

    Myth 4: Market Consistent Valuation gives the amount that a third party will pay for the business.

    Market Consistent Valuation (as calculated using an ESG) gives a value based on pricing at the margin. As with many financial economic models, the model is designed to provide a price based on a small-scale transaction, ignoring trading costs and market illiquidity. The assumption is made that the marginal price of the liability can be applied to the entire balance sheet. Separate economic models are typically required to account for micro-market features; for example the illiquidity of markets or the trading and frictional costs inherent in following an (internal) dynamic hedge strategy. Micro-market features can be most significant in the most extreme market conditions; for example a 1-in-200 stress event.

    Even allowing for the micro-market features, a transaction price will also account (most likely in a much less quantitative manner than using an ESG) for hard-to-value assets (e.g. franchise value) and hard-to-value liabilities (e.g. contingent liabilities).

    Myth 5: Market Consistent Valuation is no more accurate than Discounted Cash Flow techniques using long term subjective rates of return.

    The previous myths could have suggested that market consistent valuation is in some way devalued or not useful. This is certainly the viewpoint of some actuaries, especially in the light of the recent financial crisis. However it could be argued that market consistent valuation, if done properly, gives a more economically meaningful value than traditional DCF techniques and provides better disclosure. It does this by breaking the problem down into clear statements of what economic theory is being applied and what assumptions are being made. By breaking down the models and assumptions, weaknesses are more readily identified and economic theory can be applied.

RISK USA Conference – October 2009

October 29, 2009

Many, many good questions and good ideas at the RISK USA conference in New York.  Here is a brief sampling:

  • Risk managers are spending more time showing different constituencies that they really are managing risk.
  • May want to change the name to “Enterprise Uncertainty Management”
  • Two risk managers explained how their firms did withdraw from the mortgage market prior to the crisis and what sort of thinking by their top management supported that strategy
  • Now is the moment for risk management – we are being asked for our opinion on a wide range of things – we need to have good answers
  • Availability of risk management talent is an issue.  At both the operational level and the board level. 
  • Risk managers need to move to doing more explaining after better automating the calculating
  • Group think is one of the major barriers to good risk management
  • Regulators tend to want to save too many firms.  Need to have a middle path that allows a different sort of resolution of a troubled firm than bankruptcy.
  • Collateral will not be a sufficient solution to risks of derivatives.  Collateral covers only 30 – 50% of risk
  • No one has ever come up with a theory for the level of capital for financial firms.  Basel II is based upon the idea of keeping capital at about the same level as Basel I. 
  • Disclosure of Stress tests of major banks last Spring was a new level of transparency. 
  • Banking is risky. 
  • Systemic Risk Regulation is impossibly complicated and doomed to failure. 
  • Systemic Risk Regulation can be done.  (Two different speakers)
  • In Q2 2007, the Fed said that the sub-prime crisis is contained.  (let’s put them in charge)
  • Having a very good system for communicating was key to surviving the crisis.  Risk committees met 3 times per day 7 days per week in fall 2008. 
  • Should have worked out in advance what to do after environmental changes shifted exposures over limits
  • One firm used ratings plus 8 additional metrics to model their credit risk
  • Need to look through holdings in financial firms to their underlying risk exposures – one firm got rid of all direct exposure to sub-prime but retained a large exposure to banks with large sub-prime exposure
  • Active management of counterparties and information flow to decision makers of the interactions with counter parties provided early warning to problems
  • Several speakers said that largest risk right now is regulatory changes
  • One speaker said that the largest Black Swan was another major terrorist attack
  • Next major systemic risk problem will be driven primarily by regulators/exchanges
  • Some of structured markets will never come back (CDO squareds)
  • Regret is needed to learn from mistakes
  • No one from major firms actually went physically to the hottest real estate markets to get an on-the-ground sense of what was happening there, instead of relying solely on models – it would have made a big difference. 

Discussions of these and other ideas from the conference will appear here in the near future.

Understanding and Balance

October 27, 2009

Everything needs to balance.  A requirement that management understand the model creates an equal and opposite obligation on the part of the modelers to really explain the assumptions that are embedded in the model and the characteristics that the model will exhibit over time.

This means that the modelers themselves have to actually understand the assumptions of the model – not just the mechanical assumptions that support the mechanical calculations of the model.  But the fundamental underlying assumptions about why the sort of model chosen is a reliable way to represent the world.

For example, one of the aspects of models that is often disturbing to senior management is the degree to which the models require recalibration.  That need for recalibration is an aspect of the fundamental nature of the model.  And I would be willing to guess that few modelers have in their explanation of their model fully described that aspect of their model and explained why it exists and why it is a necessary aspect of the model.

That is just an example.  We modelers need to understand all of these fundamental points where models are simply baffling to senior management users and work to overcome the gap between what is being explained and what needs to be explained.

We are focused on the process.  Getting the process right.  If we choose the right process and follow it correctly, then the result should be correct.

But the explanations that we need are about why the choice of the process made sense in the first place.  And more importantly, how, now that we have followed the process for so long that we barely remember why we chose it, do we NOW believe that the result is correct.

What is needed is a validation process that gets to the heart of the fundamental questions about the model that are not yet known!  Sound frustrating enough?

The process of using risk models appropriately is an intellectual journey.  There is a need to step past the long-ingrained approach to projections and models that puts models in the place of fortune tellers.  The next step is to begin to find value in a what-if exercise.  Then there is the giant leap of the stochastic scenario generator.  Many major conceptual and practical leaps are needed to move from (a) getting a result that is not reams and reams of complete nonsense to (b) getting a result that gives some insight into the shape of the future to (c) realizing that once you actually have the model right, it starts to work like all of the other models you have ever worked with, with a vast amount of confirmation of what you already know (now that you have been doing this for a couple of years) along with an occasional insight that was totally unavailable without the model.

But while you have been taking this journey of increasing insight, you cross over and become one of those whom you previously thought talked mostly in riddles and dense jargon.

But to be fully effective, you need to be able to explain all of this to someone who has not taken the journey.

The first step is to understand that in almost all cases they do not give a flip about your model and the journey you went through to get it to work.

The next step is to realize that they are often grounded in an understanding of the business.  For each person in your management team, you need to understand which part of the business they are grounded in and convince them that the model captures what they understand about the part of the business that they know.

Then you need to satisfy those whose grounding is in the financials.  For those folks, we usually do a process called static validation – show that if we set the assumptions of the model to the actual experience of last year, the model actually reproduces last year’s financial results.
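A minimal sketch of that static validation step, using a toy model and made-up figures (the model, field names and 1% tolerance are all illustrative assumptions, not any firm's actual process):

```python
def static_validation(model, actual_assumptions, actual_results, tol=0.01):
    """Re-run the model with last year's realized assumptions and check
    that it reproduces last year's reported financials within tolerance."""
    projected = model(actual_assumptions)
    report = {}
    for item, actual in actual_results.items():
        report[item] = abs(projected[item] - actual) <= tol * abs(actual)
    return report

# toy model: premium income less claims, driven by a realized loss ratio
toy_model = lambda a: {"underwriting_gain": a["premium"] * (1 - a["loss_ratio"])}

checks = static_validation(
    toy_model,
    actual_assumptions={"premium": 500.0, "loss_ratio": 0.72},
    actual_results={"underwriting_gain": 140.0},
)
print(checks)  # {'underwriting_gain': True}
```

A real static validation would run line by line through the income statement and balance sheet, but the logic is the same: realized inputs in, reported outputs back.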

Then you can start to work on an understanding of the variability of the results.  Where on the probability spectrum was last year – both for each element and for the aggregate result.

That one is usually troublesome.  For 2008, it was particularly troublesome for any firms that owned any equities.  Most models would have placed 2008 stock market losses almost totally off the charts.
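To see why, here is a rough calculation under an illustrative pre-crisis calibration (the 8% mean and 16% volatility are assumptions for the sketch; the roughly -37% figure is the approximate S&P 500 total return for 2008):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # probability of a return at or below x under a normal model
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 0.08, 0.16   # illustrative pre-crisis equity calibration
p = normal_cdf(-0.37, mu, sigma)
print(f"model probability of a year at least that bad: {p:.4%}")
# roughly 0.25%, i.e. rarer than a 1-in-200 year under this model
```

Under this sketch, 2008 sits beyond the 1-in-200 level that the capital calculation is meant to cover, which is exactly the discussion that a management team needs to have about tail assumptions.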

But in the end, it is better to have the discussion.  It will give the management users a healthy skepticism for the model and more of an appreciation for the uses and limitations of the entire modeling methodology.

These discussions should lead to understanding and balance.  Enough understanding that there is a balanced view of the model.  Not total reliance and not total skepticism.

Black Swan Free World (8)

October 26, 2009

On April 7 2009, the Financial Times published an article written by Nassim Taleb called Ten Principles for a Black Swan Free World. Let’s look at them one at a time…

8. Do not give an addict more drugs if he has withdrawal pains. Using leverage to cure the problems of too much leverage is not homeopathy, it is denial. The debt crisis is not a temporary problem, it is a structural one. We need rehab.

George Soros has said that he believes that the GFC is the beginning of the unwinding of a fifty year credit buildup.  Clearly there was too much leverage.  But does anyone know what the right amount of leverage for a smoothly functioning capitalist system should be?

There is always a problem after a bubble.  Many people keep comparing things to how they were at the very height of the bubble.  Stock valuations are compared to the height of the market.  Employment is compared to the point where the most people had jobs.  But these are often not the right comparisons.  If in the month of May, for 30 days, I had an outstanding offer for my house of $300,000 and on one day a person flew in from far away and offered $3 million, and if I never made that sale, do I forever after compare the offering price for selling the house to $3 Million?

People talk about a “New Normal”.  Possibly, the new normal means nothing more than returning to the long term trend line.  Going back to where things would now be if everything had stayed rational.

That may seem sensible, but this new normal may be a very different economy than the overheated and overleveraged one that we had.

Taleb suggests that the only possible transition from excessive debt is cold turkey.  If Soros is right and we are going to transition to a new normal that is more like 50 years ago than 5 years ago, then there will be a long bout of DTs.

What we are seeing in the way of debt is the substitution of government debt for private debt.  While Taleb is probably too harsh, the Fed does need to be careful.   Careful not to go too far with the government debt.  The Fed should be acting like the football player who passes ahead of the teammate, not to where they are standing right now.  The amount of debt that they should be shooting for is a level that will make sense when the banks fully recover and again take up lending “like normal”.  That will keep enough money flowing in the economy to soften the slowdown to the economy from the contraction of bank lending.

However, if the Fed is shooting to put us back where we were at the peak, then we are in trouble and Taleb’s warning holds.  I would restate his warning as “Using too much leverage to combat the problem of too much leverage…” But using the right amount of leverage is just what is needed.

But that does mean learning to live with much less leverage.  It means that we need to better understand how much leverage is the right amount.  And we need to stop blaming the Chinese because they hold so many dollars and want to lend them to us.  We need to develop a structural solution to the global imbalance that the Chinese balances are a symptom of.

Like some of our other problems, the purely market based solutions will not work.  China is not playing by the market based rule book.  They are a mercantilist economy that is taking advantage of the global market economy systems.  We need to stop whining about that and develop strategies that work for everyone.

Black Swan Free World (7)

Black Swan Free World (6)

Black Swan Free World (5)

Black Swan Free World (4)

Black Swan Free World (3)

Black Swan Free World (2)

Black Swan Free World (1)

Cultural Theory of Risk

October 24, 2009

Back in 1984 an anthropologist, Mary Douglas, wrote about her theory for why people chose to form and continue to associate with groups.  She postulated that the way that people thought about RISK was a primary driver.

Cultural Theory describes four views of RISK:

Individualists see the world as mean reverting.  Any risk that they take will be offset by later gains.

Egalitarians see the world in a delicate balance where any risky behaviors might throw off that balance and result in major problems.

Authoritarians see the world as dangerous but manageable.  Some risk can be taken but must be tightly controlled.

Fatalists see the world as unpredictable.  No telling what the result might be from risk taking.

The dynamics of human behavior are influenced by these four groups.  People shift between the four groups because they find the environment either validating their belief or failing to validate their belief.

Cultural Theory also sees that there are broadly four different risk regimes in the world.  The four groups exist because the risk regime that validates their view of risk will exist some of the time.

These four regimes are:

Normal Risk – when the ups and downs of the world fall within the expected ranges.

Low Risk – when everything seems to be working out well for the risk takers and the dips are quickly followed by jumps.

High Risk – when the world is on the edge of disaster and hard choices must be made very carefully.

High Loss – when the risks have all turned to losses and survival does not seem certain.

There are huge implications of these ideas for risk managers.  Risk management, as currently practiced, is a process that is designed by Authoritarians for the Normal Risk regime.  The Global Financial Crisis has shown that current risk management fails when faced with the other regimes.

One solution would be to redesign risk management to be a broader idea that can both use the skills of those other three views of risk, adapting to the other three regimes of risk.

This idea is discussed in more detail here and in a forthcoming series of articles in Wilmott Magazine.

Which ERM are you talking about?

October 23, 2009

If you ask managers, and if they give an honest answer, the large majority of them will say that they do not really know what Enterprise Risk Management really means.

One major reason for that confusion is that there really are three different and largely separable ideas of ERM that are being performed in organizations and discussed by experts.  Much confusion results because of these highly mixed messages.  The three types of ERM are:

Loss Controlling – practiced in most non-financial firms and also the traditional risk management of financial firms.  This type of ERM has the objective of minimizing losses.

Risk Trading – practiced in firms like banks and insurers who see their business as risk taking.  The ERM of risk trading focuses primarily on the pricing of those risks.  Modern ERM grew out of the trading desks of banks.

Risk Steering – is an ideal that exists much more prevalently in books about ERM and in articles by consultants than in reality.  Risk steering concentrates on using risk and reward information for strategic decision making.

Some firms seek to do all three.  Most are looking to do just one of the three.  Writers on ERM usually do not clearly distinguish between the very different activities that are needed to support the three different types of ERM or they might exclude one of the three completely from their discussion.

So some of the confusion about ERM arises from this confusing discussion.  Most confusing to everyone else are the bankers.  They are focused almost solely on risks that can be traded on a real-time basis – their Risk Trading.  They are so in love with that idea that people like Alan Greenspan have suggested that all risks would be better managed by turning them into traded risks.

Unfortunately, what we have seen is almost the opposite of that.  Many risks can only be managed by a Loss Controlling process and the way that banks have abandoned Loss Controlling in favor of Risk Trading has proven disastrous for all of us.

For a discussion of how this idea impacts on actuaries seeking to practice in ERM see this post.

Coverage and Collateral

October 22, 2009

I thought that I must be just woefully old fashioned. 

In my mind the real reason for the financial crisis was that bankers lost sight of what it takes to operate a lending business. 

There are really only two simple factors that MUST be the first level of screening of borrowers:

1.  Coverage

2.  Collateral

And banks stopped looking at both.  No surprise that their loan books are going sour.  There is no theory on earth that will change those two fundamentals of lending. 

The amount of coverage, which means the amount of income available to make the loan payments, is the primary factor in creditworthiness.  Someone must have the ability to make the loan payments. 

The amount of collateral, which means the assets that the lender can take to offset any loan loss upon failure to repay, is a risk management technique that insulates the lender from “expected” losses. 
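Those two screens can be written down almost directly. A sketch with hypothetical thresholds (the 1.25x coverage requirement and 80% loan-to-value cap are illustrative assumptions, not figures from the post):

```python
def passes_first_screen(income, debt_service, collateral_value, loan_amount,
                        min_coverage=1.25, max_ltv=0.80):
    """First-level borrower screen: coverage and collateral.
    Thresholds are illustrative, not underwriting standards."""
    coverage = income / debt_service        # debt service coverage ratio
    ltv = loan_amount / collateral_value    # loan-to-value
    return coverage >= min_coverage and ltv <= max_ltv

# borrower with 1.5x coverage and 72% loan-to-value passes the screen
print(passes_first_screen(income=60_000, debt_service=40_000,
                          collateral_value=250_000, loan_amount=180_000))
```

Everything else in credit analysis refines this screen; nothing replaces it.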

Thinking has changed over the last 10 – 15 years with the idea that there was no need for collateral; instead the lender could securitize the loan and atomize the risk, thereby spreading the specific risk to many, many parties and making it inconsequential to each party.  Instead of collateral, the borrower would be charged for the cost of that securitization process. 

Funny thing about accounting.  If the lender does something very conservative (in terms of current standards) and requires collateral that would take up the first layer of loss, then there will be no impact on P&L of this prudence. 

If the lender does not require collateral, then this charge that the borrower pays will be reported as profit!  The bank has taken on more risk and therefore can show more profit! 

EXCEPT, in the year(s) when the losses hit! 

What this shows is that there is a HUGE problem with how accounting systems treat risks that have a frequency longer than the accounting period!  In all cases of such risks, the accounting system allows this up and down accounting.  Profits are recorded for all periods except when the loss actually hits.  This accounting treatment actually STRONGLY ENCOURAGES taking on risks with a longer frequency. 

What I mean by longer frequency risks is risks that are expected to show a loss, say, once every 5 years.  These risks will show profits in four years and a loss in the fifth.  Let’s say that the loss every 5 years is expected to be 10% of the loan; then the charge might be 3% per year in place of collateral.  So the bank collects the 3% and shows results of 3%, 3%, 3%, 3%, (7%).  The bank pays out bonuses of about 50% of gains, so it pays 1.5%, 1.5%, 1.5%, 1.5%, 0.  The net result to the bank is 1.5%, 1.5%, 1.5%, 1.5%, (7%) for a cumulative result of (1%).  And that is when everything goes exactly as planned! 
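The arithmetic in that example is easy to check in a few lines, using the post's own figures: a 3% annual charge, a 10% loss in year five, and bonuses at 50% of each year's gain (none in a loss year):

```python
charge, loss, bonus_rate = 0.03, 0.10, 0.50

results, bonuses = [], []
for year in range(1, 6):
    gain = charge - (loss if year == 5 else 0.0)  # 3,3,3,3,(7) in %
    bonus = bonus_rate * gain if gain > 0 else 0.0
    bonuses.append(bonus)
    results.append(gain - bonus)                  # net to the bank

print([round(r, 3) for r in results])  # [0.015, 0.015, 0.015, 0.015, -0.07]
print(round(sum(results), 3))          # -0.01 -> cumulative (1%) to the bank
print(round(sum(bonuses), 3))          # 0.06  -> 6% paid out to employees
```

So over the full cycle the employees collect 6% of the loan while the shareholders lose 1%, even when every assumption is realized exactly.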

Who is looking out for the shareholders here?  Clearly the deck is stacked very well in favor of the employees! 

What it took to make this look o.k. was an assumption of independence for the loans.  If the losses are atomized and spread around, eliminating specific risk, then there would be a small amount of these losses every year; the negative net result shown above would NOT happen because every year the losses would be netted against the gains and the cumulative result would be positive. 

Note however, that twice above it says that the SPECIFIC risk is eliminated.  That leaves the systematic risk.  And the systematic risk has exactly the characteristic shown by the example above.  Systematic risk is the underlying correlation of the loans in an adverse economy. 

So at the very least, collateral should be resurrected and required to the tune of the systematic losses. 

Coverage… well that seems so obvious it does not need discussion.  But if you need some, try this.

