Archive for the ‘Value at Risk’ category

Assumptions Embedded in Risk Analysis

April 28, 2010

The picture below from Doug VanDemeter’s blog gives an interesting take on the embedded assumptions in various approaches to risk analysis and risk treatment.

But what I take from this is a realization that many firms have activity in one, two, or three of those boxes, while the only box that does not assume away a major part of reality is generally empty.

In reality, most financial firms experience market, credit and liability risks all at the same time, and most expect to keep receiving future cashflows from both past activities and future activities.

But most firms have chosen to measure and manage their risk by assuming that one, two, or even three of those things are not a concern, selectively putting on blinders to major aspects of their risks: first blinding the right eye, then the left, then not looking up, and finally not looking down.

Some of these processes were designed that way in earlier times when computational power would not have allowed anything more.  For many firms, their affairs are so complicated and their future so uncertain that it is simply impractical to incorporate everything into one all-encompassing risk assessment and treatment framework.

At least that is the story that folks are most likely to use.

But the fact that their activity is too complicated for them to model does not seem to set off any flashing red signal that they may not really understand their risk.

So look at Doug’s picture and see which are the embedded assumptions in each calculation – the ones I am thinking of are the labels on the OTHER rows and columns.

For Credit VaR – the embedded assumption is that there is no market risk and that there are no new assets or liabilities (the business is in sell-off mode).

For Interest rate VaR – the embedded assumption is that there is no credit risk and no new assets or liabilities (the business is in sell-off mode).

For ALM – the embedded assumption is that there is no credit risk and that the business is in run-off mode.

Those are the real embedded assumptions.  We should own up to them.
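The embedded-assumption matrix described above can be sketched as a simple lookup table. This is a hypothetical illustration only; the measure names and labels are just the ones used in this post, not any standard taxonomy.

```python
# Hypothetical sketch of the assumption matrix described above; the labels
# are just the ones used in this post, not any standard taxonomy.
EMBEDDED_ASSUMPTIONS = {
    "Credit VaR":        {"ignores": ["market risk"], "business mode": "sell-off"},
    "Interest rate VaR": {"ignores": ["credit risk"], "business mode": "sell-off"},
    "ALM":               {"ignores": ["credit risk"], "business mode": "run-off"},
}

def blind_spots(measure):
    """Return the risks a measure assumes away and the business mode it assumes."""
    entry = EMBEDDED_ASSUMPTIONS[measure]
    return entry["ignores"], entry["business mode"]

for measure in EMBEDDED_ASSUMPTIONS:
    ignored, mode = blind_spots(measure)
    print(f"{measure}: assumes no {', '.join(ignored)}; business in {mode} mode")
```

Owning up to the assumptions starts with writing them down next to each number that gets reported.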


Take CARE in evaluating your Risks

February 12, 2010

Risk management is sometimes summarized as a short set of simply stated steps:

  1. Identify Risks
  2. Evaluate Risks
  3. Treat Risks

There are much more complicated expositions of risk management.  For example, the AS/NZ Risk Management Standard makes 8 steps out of that. 

But I would contend that those three steps are the really key steps. 

The middle step “Evaluate Risks” sounds easy.  However, there can be many pitfalls.  A new report [CARE] from a working party of the Enterprise and Financial Risks Committee of the International Actuarial Association gives an extensive discussion of the conceptual pitfalls that might arise from an overly narrow approach to Risk Evaluation.

The heart of that report is a discussion of eight either/or choices that are often made in evaluating risks:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE 
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS         
  3. REGULATORY MEASURE OF RISK    
  4. SHORT TERM VS. LONG TERM RISKS          
  5. KNOWN RISK AND EMERGING RISKS        
  6. EARNINGS VOLATILITY VS. RUIN    
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO       
  8. CASH VS. ACCRUAL 

The main point of the report is that for a comprehensive evaluation of risk, these are not choices.  Both paths must be explored.

Economic Risk Capital

December 1, 2009

Guest Post from Chitro Majumdar

Economic capital models can be complex, embodying many component parts, and it may not be immediately obvious that a complex model works satisfactorily. Moreover, a model may embody assumptions about relationships between variables, or about their behaviour, that may not hold in all circumstances (e.g. under periods of stress). We have developed an algorithm for Dynamic Financial Analysis (DFA) that enables the creation of a comprehensive framework to manage Enterprise Risk’s Economic Risk Capital. DFA is used in the capital budgeting decision process of a company launching a new venture, to predict the impact of the strategic decision on the balance sheet over the planning horizon. DFA provides a strategy for Enterprise Risk Management designed to avoid undesirable outcomes, which could be disastrous.

“The Quants know better than anyone how their models can fail. The surest way to replicate this adversity is to trust the models blindly while taking large-scale advantage of situations where they seem to provide ERM strategies that would yield results too superior to be true”

Dynamic Financial Analysis (DFA) is the most advanced modelling process in today’s property and casualty industry, allowing us to develop financial forecasts that integrate the variability and interrelationships of the critical factors affecting results. In a DFA model, the company’s relevant random variables are organized around a categorization of risks, which drives solvency testing in which the financial position of the company is evaluated from the perspective of its customers. The central idea is to quantify, in probabilistic terms, whether the company will be able to meet its commitments in the future.  DFA is used in the capital budgeting decision process of a company launching a new venture, predicting the impact of the strategic decision on the balance sheet over a horizon of a few years.
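As a purely illustrative sketch of that solvency-testing idea, here is a toy DFA-style Monte Carlo projection. Every figure below (asset returns, volatility, liability growth, the starting balance sheet) is invented, and a real DFA model would project many more interrelated factors.

```python
import random

def dfa_solvency_probability(assets, liabilities, years=5, n_paths=10_000, seed=1):
    """Toy DFA-style solvency test: simulate surplus paths and estimate the
    probability that assets stay ahead of liabilities over the horizon."""
    rng = random.Random(seed)
    solvent_paths = 0
    for _ in range(n_paths):
        a, l = assets, liabilities
        ruined = False
        for _ in range(years):
            a *= 1 + rng.gauss(0.05, 0.10)  # invented asset return: 5% mean, 10% vol
            l *= 1.03                        # invented liability growth: 3% per year
            if a < l:                        # commitments can no longer be met
                ruined = True
                break
        if not ruined:
            solvent_paths += 1
    return solvent_paths / n_paths

print(f"P(meets commitments over 5 years): {dfa_solvency_probability(120.0, 100.0):.1%}")
```

The output is exactly the quantity the paragraph above describes: the probability, in the model, that the company can meet its commitments over the horizon.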

 

The validation of economic capital models is at a very preliminary stage. There exists a wide range of validation techniques, each of which provides corroboration for (or against) only some of the desirable properties of a model. Moreover, validation techniques are powerful in some areas, such as risk sensitivity, but not in others, such as overall absolute accuracy or accuracy in the tail of the loss distribution. It is advisable that validation processes be designed alongside the development of the models, rather than chronologically following the model building process. Certain industry validation practices are weak, with improvements needed in benchmarking, industry-wide exercises, back-testing, profit and loss analysis, and stress testing, supplemented by other advanced simulation models. For validation we adhere to the method of calculation described below.

 

Calculation of risk measures

In their internal use of risk measures, banks need to determine an appropriate confidence level for their economic capital models. It generally does not coincide with the 99.9% confidence level used for credit and operational risk under Pillar 1 of Basel II, or with the 99% confidence level for general and specific market risk. Frequently, the link between a bank’s target rating and the choice of confidence level is interpreted as the amount of economic capital necessary to prevent the bank from eroding its capital buffer at a given confidence level. According to this view, which can be interpreted as a going-concern view, capital planning is seen as a dynamic rather than a static exercise: banks want to hold a capital buffer “on top” of their regulatory capital, and it is the probability of eroding that buffer (rather than all available capital) that is linked to the target rating. This reflects the expectation (by analysts, rating agencies and the market) that the bank operates with capital that exceeds the regulatory minimum requirement.

Apart from considerations about the link to a target rating, the choice of a confidence level might differ based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of creditors, rating agencies and regulators, in that they are used to determine the amount of capital required to minimise bankruptcy risk. On the other hand, lower confidence levels are used for management purposes, in order to allocate capital to business lines and/or individual exposures and to identify those exposures that are critical for profit objectives in a normal business environment.

Another interesting aspect of the internal use of different risk measures is that the choice of risk measure and confidence level heavily influences relative capital allocations to individual exposures or portfolios. In short, the farther out in the tail of a loss distribution, the more relative capital gets allocated to concentrated exposures. As such, the choice of risk measure and confidence level can have a strategic impact, since some portfolios might look relatively better or worse under risk-adjusted performance measures than they would under an alternative risk measure.
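That tail-allocation point can be illustrated with a small simulation. The sketch below (all numbers invented) compares a diversified book, built from many small independent losses, against a concentrated book driven by one rare, large event, and shows how the concentrated book’s share of standalone capital jumps as the confidence level moves farther into the tail.

```python
import random

def standalone_var(losses, confidence):
    """Empirical VaR: the loss quantile at the given confidence level."""
    ordered = sorted(losses)
    return ordered[int(confidence * len(ordered)) - 1]

rng = random.Random(42)
n = 100_000
# Diversified book: sum of many small independent losses (roughly normal).
diversified = [sum(rng.random() for _ in range(30)) for _ in range(n)]
# Concentrated book: similar typical loss, but driven by one rare large
# event (0.1% chance of an extra 500 loss).
concentrated = [15.0 + (500.0 if rng.random() < 0.001 else 0.0) for _ in range(n)]

shares = {}
for conf in (0.99, 0.9997):
    var_d = standalone_var(diversified, conf)
    var_c = standalone_var(concentrated, conf)
    shares[conf] = var_c / (var_c + var_d)
    print(f"confidence {conf}: concentrated book's share of capital = {shares[conf]:.0%}")
```

At 99% the rare event is invisible, so the concentrated book actually looks slightly less risky than the diversified one; at 99.97% it dominates the capital allocation, which is the strategic-impact point made above.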

 

Chitro Majumdar CSO – R-square RiskLab

 

 

More details: http://www.riskreturncorp.com

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulators to require stress tests so that the tests will be taken seriously.
  • Most banks ran stress tests that were far from extreme risk scenarios; truly extreme scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available if they thought of it at all.  Did not consider that in an extreme situation that results in the losses of magnitude similar to Economic Capital, that capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • The market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers come after them, getting less of the gains because of their slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of their gains, Me Too losses are a sizable fraction, and Risk Avoiders often lose money.  The Innovators have all left the banks.  Banks are just the Me Toos and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when the market reaches equilibrium at below full production capacity.  (Isn’t that a Depression?  Funny how the words change.)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE preferred stock holders was the actual start of the panic.  Lehman, a week later, was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Turn VAR Inside Out – To Get S

November 13, 2009

S

Survival.  That is what you really want to know.  When the Board meeting ends, the last thing that they should hear is management assuring them that the company will be in business still when the next meeting is due to be held.

S

But S really is not about bankruptcy, or even regulatory take-over.  If your firm is in the assurance business, things do not need to go that far.  There is usually a point, which might be pretty far removed from bankruptcy, where the firm loses the confidence of the market and is no longer able to do business.  And good managers know exactly where that point lies.

S

So S is the likelihood of avoiding that point of no return.  It is a percentage.  Some might cry that no one will understand a percentage, that people need dollars to understand.  But VAR includes a percentage as well.  Just because no one says the percentage does not mean it is not there.  It actually means that no one is even bothering to try to help people understand what VAR is.  The VAR number is really one part of a three-part sentence:

The 99% VAR over one year is $67.8 M.  By itself, VAR does not tell you whether the firm is in trouble.  If the VAR doubles from one period to the next, is the firm in trouble?  The answer cannot be determined without further information.

S

Survival is the probability that, given the real risks of the firm and the real capital of the firm, the firm will NOT sustain a loss large enough to put an end to its business model.  If your S is 80%, then there is about a 50% chance that your firm will not survive three years!  But if your S is 95%, then there is a 50-50 chance that your firm will last at least 13 years.  This arithmetic is why a firm that makes long-term promises, like an insurer, needs to have a very high S.  An S of 95% does not really seem high enough.
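The arithmetic behind those figures is just compounding of the annual survival probability, assuming S is constant and independent across years:

```python
import math

def survival_half_life(annual_s):
    """Years n until cumulative survival falls to 50%, assuming a constant,
    independent annual survival probability S: solve S**n = 0.5 for n."""
    return math.log(0.5) / math.log(annual_s)

for s in (0.80, 0.95, 0.985):
    print(f"S = {s:.1%}: 50-50 survival horizon is about {survival_half_life(s):.1f} years")
```

S = 80% gives about 3.1 years and S = 95% about 13.5 years, the 50-50 horizons quoted above; even at S = 98.5% the 50-50 horizon is only around 46 years.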

S

Survival is something that can be calculated with the existing VAR model.  Instead of focusing on an arbitrary probability, the calculation focuses on the loss that management feels would be enough to put them out of business.  S can be recalculated after a proposed share buy-back or payment of dividends.  S responds to management actions and assists management decisions.
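A sketch of how S could be read off the scenario output an existing VAR model already produces. The loss distribution, capital figures, and point of no return below are all invented for illustration.

```python
import random

def survival_probability(losses, capital, point_of_no_return):
    """S = fraction of loss scenarios that leave the firm above the level
    management judges to be the end of its business model."""
    fatal_loss = capital - point_of_no_return
    return sum(1 for loss in losses if loss < fatal_loss) / len(losses)

# Invented loss scenarios standing in for the output of an existing VAR model.
rng = random.Random(7)
losses = [max(0.0, rng.gauss(50.0, 100.0)) for _ in range(100_000)]

s_now = survival_probability(losses, capital=400.0, point_of_no_return=100.0)
s_after = survival_probability(losses, capital=350.0, point_of_no_return=100.0)
print(f"S today: {s_now:.1%}; S after a 50 share buy-back: {s_after:.1%}")
```

Rerunning the same calculation with the post-buy-back capital is exactly the kind of "S responds to management actions" check described above.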

If your board asks how much risk you are taking, try telling them the firm has a 98.5% Survival probability.  That might actually make more sense to them than saying that the firm might lose as much as $523 M at a 99% confidence interval over one year.

So turn your VAR inside out – to get S 

VAR is not a Good Risk Measure

November 6, 2009

Value at Risk (VAR) has taken a lot of heat lately, and deservedly so.

VAR, as banks are required to calculate it, relies solely on recent past data for calibration.  The use of “recent” data means that following any period of low losses, the VAR measure will show low risk.  That is just not the case.  It fails to recognize the longer-term volatility that might exist.  In other words, if there are problems with a periodicity longer than the usual one-year time frame of VAR, then VAR will ignore them most of the time and overemphasize them some of the time – like the stopped clock that is right twice a day, except that VAR might never be right.

Risk models can be calibrated to history, long term or short term; to future expectations, long term or short term; or to assumptions consistent with market prices, either spot or over some period of time.  Of those six choices, VAR is calibrated from one of the less useful ones.

What VAR does is answer the question: what would the 1/100 loss have been had I held the current risk positions over the past year?  The advantage of the definition chosen is that you can be sure of consistency.  However, that is only a consistently useful result if you always believe that the world will remain exactly as risky as it was in the past year.
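That question is exactly what a historical-simulation VAR computes. A minimal sketch, using an invented daily return history in place of real market data:

```python
import random

def historical_var(position_value, daily_returns, confidence=0.99):
    """One-day historical-simulation VaR: apply each past return to the
    current position and take the loss at the chosen quantile."""
    pnl = sorted(position_value * r for r in daily_returns)
    worst_index = int((1 - confidence) * len(pnl))  # e.g. worst 1% of 250 days
    return -pnl[worst_index]

rng = random.Random(0)
past_year = [rng.gauss(0.0, 0.01) for _ in range(250)]  # invented return history
print(f"99% one-day VaR on a 1M position: {historical_var(1_000_000, past_year):,.0f}")
```

Note that the only input is the past year of returns: rerun the same calculation after a calm year and the very same positions report a much lower VAR, which is the calibration problem just described.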

If you believe in equilibrium, then of course the next year will be very similar to the last.  So risk management is a very small task relating to keeping things in line with past variability.  However, the world and the markets do not evidence a fundamental belief in equilibrium.  Some years are riskier than others.

So VAR is not a Good Risk Measure if it is taken alone.  If used with other, more forward-looking risk measures, it can be part of a suite of information that can lead to good risk management.

On the other hand, if you divorce the idea of VAR from the actual implementation of VAR in the banks, then you can conclude that VAR is not a Bad Risk Measure.

Need to Shift the Defense . . . and the ERM

October 1, 2009

Sports analogies are so easy.

ERM is like the defense in football.  You would no more think of fielding a football team without a defensive squad than you would think of running a financial firm without ERM.  On the football field, if a team went out without any defensive players, they would doubtless be scored upon over and over again.

A financial firm without an ERM program would experience losses that were higher than what they wanted.

The ERM program can learn something from the football defenders.  The defenders, even when they do show up, cannot get by doing the exact same thing over and over again.  The other team’s offense would quickly figure out that they were entirely predictable and take them apart.  The defenders need to shift and compensate for changes in the environment and in the play of the other team.

Banks with compliance oriented static ERM programs found this out in the financial crisis.  Their ERM program consisted of the required calculation of VaR using the required methods.  If you look at what happened in the crisis, many banks did not show any increase in VaR almost right up until the markets froze.  That is because the clever people at the origination end of the banks knew exactly how the ERM folks were going to calculate the VaR and they waltzed their fancy new CDO products right around the static defense of the ERM crew at the bank.

They knew that the ERM squad would not look into the quality of the underlying credit that went into the CDOs as long as those CDOs had the AAA stamp of approval from the rating agencies.  The ERM models worked very well off of the ratings and the banks had drastically cut back on their staff of credit analysts anyway.

They also knew that the spot on the gain and loss curve where the VaR would be calculated was fixed in advance.  As long as their new creation passed the VaR test at that one point, nobody was going to look any further.
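That one-point blind spot is easy to reproduce. In the stylized example below (all numbers invented), a position with a 0.5% chance of a catastrophic loss sails through a 99% VaR test, because the bad outcome sits just beyond the single point on the gain and loss curve where VaR is measured.

```python
def var_at(losses, confidence):
    """Empirical VaR read off at a single fixed confidence level."""
    ordered = sorted(losses)
    return ordered[int(confidence * len(ordered)) - 1]

# Stylized position: a small gain in 99.5% of scenarios, a catastrophic
# loss in the remaining 0.5% (all numbers invented).
n = 10_000
losses = [-1.0] * n              # a loss of -1 is a gain of 1
for i in range(int(0.005 * n)):  # 50 scenarios out of 10,000
    losses[i] = 1000.0

print("99% VaR:", var_at(losses, 0.99))   # the one test the position had to pass
print("worst-case loss:", max(losses))    # the tail nobody measured
```

The 99% VaR comes out as a gain, so the position passes; the loss that matters lives entirely in the unexamined last half-percent of the distribution.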

So what would the football coach do if his defense kept doing the same thing over and over while the other team ran around them all game?  Would the coach decide to play the next season without a defense?  Or would he retrain and restaff the defense with new players who would move around, adapt, and shift to different strategies as the game went along?

And that is what ERM needs to do.  ERM needs to make sure that it does not get stuck in a rut, because any predictable rut will not work for long.  The marketplace, and perhaps some within their own companies, will find a way around them and defeat their purpose.

