Archive for the ‘VaR’ category

What Do Your Threats Look Like?

December 6, 2012

Severe and intense threats are usually associated with dramatic weather events, terrorist attacks, earthquakes, nuclear accidents and the like.  When one of these threats is thought to be imminent, people will often go along with a cooperative ERM scheme, if one is offered.  But when the threat actually happens, there are four possible responses: cooperating with the disaster plan, becoming immobilized and ignoring the disaster, panicking, and anti-social advantage taking.  Disaster planning sometimes goes no further than developing a path for people with the first response.  A full disaster plan would need to take all four reactions into account.  Plans would be made to deal with the immobilized and the panicked and to prevent damage from the anti-social.  In businesses, a business continuity or disaster plan falls into this category of activity.

When businesses do a first assessment, risks are often displayed in four quadrants: Low Likelihood/Low Severity; Low Likelihood/High Severity; High Likelihood/Low Severity; and High Likelihood/High Severity.  It is extremely difficult to survive with High Likelihood/High Severity risks, so few businesses find that they have risks in that quadrant.  The High Severity risks that businesses do carry are therefore usually Low Likelihood.

Highly Cooperative mode of Risk Management means that everyone is involved in risk management because you need everyone to be looking out for the threats.  This falls apart quickly if your threats are not Severe and Intense because people will question the need for so much vigilance.

Highly Complex threats usually come from the breakdown of a complex system of some sort that you are counting upon.  For an insurer, this usually means that events they thought had low interdependency end up highly correlated.  Or else a new source of large losses emerges from an existing area of coverage.  Other complex threats to the life insurance industry include the interplay of financial markets and competing products, such as in the 1980s when money market funds threatened to suck all of the money out of insurers, or in the 1990s when variable products decimated the more traditional guaranteed minimum return products.

In addition, financial firms all create their own complex threat situations because they tend to be exposed to a number of different risks.  Keeping track of the magnitude of several different risk types and their interplay is itself a complex task.  Without very complex risk evaluation tools and the help of trained professionals, financial firms would be flying blind.  But these risk evaluation tools themselves create a complex threat.

Highly Organized mode of Risk Management means that there are many very different specialized roles within the risk management process.  A firm may have different teams doing risk assessment, risk mitigation and assurance for each separate threat.  This can only make sense when the rewards for taking these risks are large, because this mode of risk management is very expensive.

Highly Unpredictable Threats are common during times of transition when a system is reorganizing itself.  “Uncertain” has been the word most often used in the past several years to describe the current environment.  We just are not sure what will be hitting us next.  Neither the type of threat, the timing, frequency or severity is known in advance of these unpredictable threats.

Businesses operating in less developed economies will usually see this as their situation.  Governments change, regulations change, the economy dips and weaves, access to resources changes abruptly, wars and terrorism are real threats.

Highly Adaptable mode of Risk Management means that you are ready to shift among the other three modes at any time and operate in a different mode for each threat.  The highly adaptable mode of risk management also allows for quick decisions to abandon the activity that creates the threat at any time.  But taking up new activities with other unique threats is less of a problem under this mode.  Firms operating under the highly adaptive mode usually make sure that their activities do not all lead to a single threat and that they are highly diversified.

Benign Threats are things that will never do more than partially reduce earnings.  Small stuff.  Not good news, but not bad enough to lose any sleep over.

Low Cooperation mode of Risk Management means that individuals within the firm can be separately authorized to undertake activities that expand the threats to the firm.  The individuals all operate under some rules that put boundaries around their freedom, but most often these firms police those rules after the fact, rather than with a process that prevents infractions.  At the extreme of the low cooperation mode of risk management, enforcement will be very weak.

For example, many banks have been trying to get by with a low cooperation mode of ERM.  Risk Management is usually separate and adversarial.  The idea is to allow the risk takers the maximum degree of freedom.  After all, they make the profits of the bank.  The idea of VaR is purely to monitor earnings fluctuations.  The risk management systems of banks had not even been looking for any possible Severe and Intense Threats.  As their risk shifted from simple “Credit” or “Market” risk to very complex instruments that had elements of both, with highly intricate structures, there was not enough movement to the highly organized mode of risk management within many banks.  Without the highly organized risk management, the banks were unable to see the shift of those structures from highly complex threats to severe and intense threats.  (Or the risk staff saw the problem, but were not empowered to force action.)  The low cooperation mode of risk management was not able to handle those threats and the banks suffered large losses or simply collapsed.

The End of ERM

October 16, 2012

In essence, if ERM is to be implemented in a way which helps an entity get to where it wants to go, it needs to have a bias toward action which many applications currently lack.   “The End of Enterprise Risk Management”  David Martin and Michael Power

In 2007, Martin and Power argued that the regulatory-based, COSO-based Enterprise Risk Management programs provided the illusion of control without actually achieving anything.  Now, if you are an executive of a firm and you believe that things are being done just fine, thank you very much, then an ineffective ERM program is just what you want.  But if you really want ERM, then something else is needed.  Martin and Power suggest that the activities of ERM are focused far too much on activities that do not result in actions to actually change the risks of the firm.  This is a favorite topic of RISKVIEWS as well.  See Beware the Risk Management Entertainment System.

RISKVIEWS always tells managers who are interested in developing ERM systems that if some part of an ERM program cannot be clearly linked to decisions to take actions that would not have been taken without ERM, then they are better off without that part of ERM. 

Martin and Power go on to suggest that ERM that uses just one risk measure (usually VAR) is difficult to get right because of limitations of VAR.  RISKVIEWS would add that an ERM program that uses only one risk measure, no matter what that measure is, will be prone to problems.  See Law of Risk and Light. 

It is very nice to find someone who says the same things that you say.  Affirming.  But even better to read something that you haven’t said.  And Martin and Power provide that. 

Finally, there is a call for risk management that is Reflexive – that reacts to the environment.  Most ERM systems do not have this Reflexive element.  Risk limits are set and risk positions are monitored, most often assuming a static environment.  The static environment presumption in a risk management system works if you are operating in an environment that changes fairly infrequently.  In fact, it works best if the frequency of change in your environment is less than the frequency of your updates to the risk factors that you use – that is, if your update includes studying the environment and making environment-driven changes. 

RISKVIEWS has worked in ERM systems that were based upon risk assessment based upon “eternal” risk factors.  Eternal Risk factors are assumed to be good “for all time”.  The US RBC factors are such.  Those factors are changed only when there is a belief that the prior factors were inadequate in representing the full range of risk “for all time”. 

But firms would be better off looking at their risks in the light of a changing risk environment.  Plural Rationality theory suggests that there are four different risk environments.  If a company adopts this idea, then it needs to look for signs that the environment is shifting and, when it seems likely to be shifting, to consider how to change its risk acceptance and risk mitigation in the light of the expected new risk environment.  The idea of repeatedly catching this wave and correctly shifting course is called Rational Adaptability.

So RISKVIEWS also strongly agrees with Martin and Power that a risk management system needs to be reflexive. 

In “The End of ERM” Martin and Power really mean the end of static ERM that is not action oriented and not reflexive with the environment.  With that RISKVIEWS can heartily agree.

Assumptions Embedded in Risk Analysis

April 28, 2010

The picture below from Doug VanDemeter’s blog gives an interesting take on the embedded assumptions in various approaches to risk analysis and risk treatment.

But what I take from this is a realization that many firms have activity in one or two or three of those boxes, but the only box that does not assume away a major part of reality is generally empty.

In reality, most financial firms do experience market, credit and liability risks all at the same time and most firms do expect to be continuing to receive future cashflows both from past activities and from future activities.

But most firms have chosen to measure and manage their risk by assuming that one or two or even three of those things are not a concern – selectively putting on blinders to major aspects of their risks: first blinding their right eye, then their left, then not looking up and finally not looking down.

Some of these processes were designed that way in earlier times when computational power would not have allowed anything more.  For many firms their affairs are so very complicated and their future is so uncertain that it is simply impractical to incorporate everything into one all encompassing risk assessment and treatment framework.

At least that is the story that folks are most likely to use.

But the fact that their activity is too complicated for them to model does not seem to send them any flashing red signal that it is possible that they really do not understand their risk.

So look at Doug’s picture and see which are the embedded assumptions in each calculation – the ones I am thinking of are the labels on the OTHER rows and columns.

For Credit VaR – the embedded assumption is that there is no Market Risk and that there are no new assets or liabilities (business is in sell-off mode).

For Interest rate risk VaR – the embedded assumption is that there is no credit risk and no new assets or liabilities (business is in sell-off mode).

For ALM – the embedded assumption is that there is no credit risk and business is in run-off mode.

Those are the real embedded assumptions.  We should own up to them.

Take CARE in evaluating your Risks

February 12, 2010

Risk management is sometimes summarized as a short set of simply stated steps:

  1. Identify Risks
  2. Evaluate Risks
  3. Treat Risks

There are much more complicated expositions of risk management.  For example, the AS/NZ Risk Management Standard makes 8 steps out of that. 

But I would contend that those three steps are the really key steps. 

The middle step “Evaluate Risks” sounds easy.  However, there can be many pitfalls.  A new report [CARE] from a working party of the Enterprise and Financial Risks Committee of the International Actuarial Association gives an extensive discussion of the conceptual pitfalls that might arise from an overly narrow approach to Risk Evaluation.

The heart of that report is a discussion of eight different either/or choices that are often made in evaluating risks:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE 
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS         
  3. REGULATORY MEASURE OF RISK    
  4. SHORT TERM VS. LONG TERM RISKS          
  5. KNOWN RISK AND EMERGING RISKS        
  6. EARNINGS VOLATILITY VS. RUIN    
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO       
  8. CASH VS. ACCRUAL 

The main point of the report is that for a comprehensive evaluation of risk, these are not choices.  Both paths must be explored.

Economic Risk Capital

December 1, 2009

Guest Post from Chitro Majumdar

Economic capital models can be complex, embodying many component parts, and it may not be immediately obvious that a complex model works satisfactorily. Moreover, a model may embody assumptions about relationships between variables or about their behaviour that may not hold in all circumstances (e.g. under periods of stress). We have developed an algorithm for Dynamic Financial Analysis (DFA) that enables the creation of a comprehensive framework to manage Enterprise Risk’s Economic Risk Capital. DFA is used in the capital budgeting decision process of a company launching a new invention, to predict the impact of the strategic decision on the balance sheet over the horizon. DFA gives a strategy for Enterprise Risk Management in order to avoid undesirable outcomes, which could be disastrous.

“The Quants know better than anyone how their models can fail. The surest way to replicate this adversity is to trust the models blindly while taking large-scale advantage of situations where they seem to provide ERM strategies that would yield results too superior to be true”

Dynamic Financial Analysis (DFA) is the most advanced modelling process in today’s property and casualty industry, allowing us to develop financial forecasts that integrate the variability and interrelationships of critical factors affecting our results. Through DFA modelling, the company’s relevant random variables are modelled based on a categorization of risks, which generates solvency testing in which the financial position of the company is evaluated from the perspective of the customers. The central idea is to quantify in probabilistic terms whether the company will be able to meet its commitments in the future.  DFA is used in the capital budgeting decision process of a company launching a new invention and predicting the impact of the strategic decision on the balance sheet over a horizon of a few years.
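A minimal sketch of the DFA idea described above – not the author's actual algorithm – would simulate a simplified balance sheet over a multi-year horizon and count the scenarios in which the company can still meet its commitments.  All parameters below are illustrative assumptions.

```python
import numpy as np

# Minimal DFA-style Monte Carlo: project a toy balance sheet over several
# years and estimate the probability that capital stays positive, i.e. that
# the company can meet its commitments in every year of the horizon.
# Every number here is a placeholder assumption, not a calibrated model.

rng = np.random.default_rng(42)

n_scenarios, horizon = 10_000, 5
capital0 = 100.0                 # starting capital
premiums = 80.0                  # annual premium income
expense_ratio = 0.30             # expenses as a share of premiums
asset_return = (0.04, 0.08)      # mean and volatility of investment return
loss_mu, loss_sigma = np.log(55.0), 0.35   # lognormal annual claims

capital = np.full(n_scenarios, capital0)
solvent = np.ones(n_scenarios, dtype=bool)

for _ in range(horizon):
    returns = rng.normal(*asset_return, n_scenarios)
    claims = rng.lognormal(loss_mu, loss_sigma, n_scenarios)
    capital = capital * (1 + returns) + premiums * (1 - expense_ratio) - claims
    solvent &= capital > 0       # fail any scenario whose capital ever goes negative

print(f"P(solvent over {horizon} years) = {solvent.mean():.3f}")
```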


The validation of economic capital models is at a very preliminary stage. There exists a wide range of validation techniques, each of which provides corroboration for (or against) only some of the desirable properties of a model. Moreover, validation techniques are powerful in some areas such as risk sensitivity but not in other areas such as overall absolute accuracy or accuracy in the tail of the loss distribution. It is advisable that validation processes are designed alongside the development of the models rather than chronologically following the model building process. There is a wide range of validation processes and each one provides evidence for only some of the desirable properties of a model. Certain industry validation practices are weak, with improvements needed in benchmarking, industry-wide exercises, back-testing, profit and loss analysis and stress testing, followed by other advanced simulation models. For validation we adhere to the calculation method described below.


Calculation of risk measures

In their internal use of risk measures, banks need to determine an appropriate confidence level for their economic capital models. It generally does not coincide with the 99.9% confidence level used for credit and operational risk under Pillar 1 of Basel II or with the 99% confidence level for general and specific market risk. Frequently, the link between a bank’s target rating and the choice of confidence level is interpreted as the amount of economic capital necessary to prevent the bank from eroding its capital buffer at a given confidence level. According to this view, which can be interpreted as a going concern view, capital planning is seen more as a dynamic exercise than a static one, in which banks want to hold a capital buffer “on top” of their regulatory capital and where it is the probability of eroding such a buffer (rather than all available capital) that is linked to the target rating. This would reflect the expectation (by analysts, rating agencies and the market) that the bank operates with capital that exceeds the regulatory minimum requirement. Apart from considerations about the link to a target rating, the choice of a confidence level might differ based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of creditors, rating agencies and regulators in that they are used to determine the amount of capital required to minimise bankruptcy risk. On the other hand, banks use lower confidence levels for management purposes in order to allocate capital to business lines and/or individual exposures and to identify those exposures that are critical for profit objectives in a normal business environment. Another interesting aspect of the internal use of different risk measures is that the choice of risk measure and confidence level heavily influences relative capital allocations to individual exposures or portfolios. In short, the farther out in the tail of a loss distribution, the more relative capital gets allocated to concentrated exposures. As such, the choice of the risk measure as well as the confidence level can have a strategic impact, since some portfolios might look relatively better or worse under risk-adjusted performance measures than they would based on an alternative risk measure.
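The point that the choice of risk measure and confidence level drives relative capital allocation can be illustrated with a small, hypothetical simulation (not drawn from any bank's model): compare a diversified, thin-tailed exposure with a concentrated, heavy-tailed one under VaR and expected shortfall at two confidence levels.

```python
import numpy as np

# Illustrative only: the concentrated, heavy-tailed book takes a larger share
# of total stand-alone capital the farther out in the tail you measure, and a
# larger share under expected shortfall than under VaR at the same level.

rng = np.random.default_rng(0)
n = 1_000_000

diversified = rng.normal(10.0, 5.0, n)                      # thin-tailed losses
concentrated = rng.lognormal(mean=1.8, sigma=1.0, size=n)   # heavy-tailed losses

def var(losses, level):
    return np.quantile(losses, level)

def expected_shortfall(losses, level):
    q = np.quantile(losses, level)
    return losses[losses >= q].mean()

for level in (0.99, 0.999):
    for name, measure in (("VaR", var), ("ES", expected_shortfall)):
        d, c = measure(diversified, level), measure(concentrated, level)
        share = c / (c + d)
        print(f"{name} @ {level:.1%}: concentrated share of capital = {share:.1%}")
```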


Chitro Majumdar CSO – R-square RiskLab

More details: http://www.riskreturncorp.com

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulator to require Stress tests so that they will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios; extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available if they thought of it at all.  Did not consider that in an extreme situation that results in the losses of magnitude similar to Economic Capital, that capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers follow the innovators, getting less of the gains because of slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of gains, Me Too losses are a sizable fraction and Risk Avoiders often lose money.  Innovators have all left the banks.  Banks are just the Me Too and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that it (a) pays for trades that have not closed as if they had closed and (b) pays for luck without adjustment for the possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression – Funny how the words change)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE Preferred stock holders was the actual start of the panic.  Lehman, a week later, was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Turn VAR Inside Out – To Get S

November 13, 2009

S

Survival.  That is what you really want to know.  When the Board meeting ends, the last thing that they should hear is management assuring them that the company will still be in business when the next meeting is due to be held.

S

But S is not really about bankruptcy, or even regulatory take-over.  If your firm is in the assurance business, then the company does not necessarily need to go that far.  There is usually a point, which might be pretty far removed from bankruptcy, where the firm loses the confidence of the market and is no longer able to do business.  And good managers know exactly where that point lies.  

S

So S is the likelihood of avoiding that point of no return.  It is a percentage.  Some might cry that no one will understand a percentage.  That they need dollars to understand.  But VAR includes a percentage as well.  Just because no one says the percentage, that does not mean it is not there.  It actually means that no one is even bothering to try to help people understand what VAR is.  The VAR number is really one part of a three-part sentence:

The 99% VAR over one year is $67.8 M.  By itself, VAR does not tell you whether the firm is in trouble.  If the VAR doubles from one period to the next, is the firm in trouble?  That cannot be determined without further information.

S

Survival is the probability that, given the real risks of the firm and the real capital of the firm, the firm will not sustain a loss large enough to put an end to its business model.  If your S is 80%, then there is about a 50% chance that your firm will not survive three years!  But if your S is 95%, then there is a 50-50 chance that your firm will last at least 13 years.  This arithmetic is why a firm that makes long term promises, like an insurer, needs to have a very high S.  An S of 95% does not really seem high enough.
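The arithmetic behind those figures, assuming S is an independent one-year survival probability (an assumption, not a statement from the post), is just compounding; a quick check:

```python
from math import log

# Median survival horizon: the number of years n for which S**n = 0.5,
# i.e. n = ln(0.5) / ln(S), treating S as the same every year.
for S in (0.80, 0.95, 0.985):
    median_horizon = log(0.5) / log(S)
    print(f"S = {S:.1%}: ~{median_horizon:.0f} year median survival horizon")
```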

S

Survival is something that can be calculated with the existing VAR model.  Instead of focusing on an arbitrary probability, the calculation focuses on the loss that management feels is enough to put them out of business.  S can be recalculated after a proposed share buy back or payment of dividends.  S responds to management actions and assists management decisions.
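A hedged sketch of that calculation, using a placeholder loss distribution and an assumed point-of-no-return threshold (neither is from the post), shows how S responds directly to a capital action:

```python
import numpy as np

# Turn a VaR-style loss model into S: instead of reading off the 99th
# percentile, count how often losses stay below the "point of no return".
# The loss model and thresholds are illustrative assumptions only.

rng = np.random.default_rng(7)
annual_losses = rng.lognormal(mean=3.5, sigma=0.6, size=500_000)  # placeholder model

def survival_probability(losses, loss_capacity):
    """Share of scenarios in which the loss stays below the point of no return."""
    return float(np.mean(losses < loss_capacity))

capacity_before = 120.0                    # loss the firm can absorb and still do business
capacity_after = capacity_before - 20.0    # e.g. after a share buy back or dividend

print("S before buyback:", round(survival_probability(annual_losses, capacity_before), 4))
print("S after buyback: ", round(survival_probability(annual_losses, capacity_after), 4))
```

The same machinery that produces a VaR figure – the simulated loss distribution – produces S; only the question asked of it changes.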

If your board asks how much risk you are taking, try telling them the firm has a 98.5% Survival probability.  That might actually make more sense to them than saying that the firm might lose as much as $523 M at a 99% confidence interval over one year.

So turn your VAR inside out – to get S 

VAR is not a Good Risk Measure

November 6, 2009

Value at Risk (VAR) has taken a lot of heat lately, and deservedly so.

VAR, as banks are required to calculate it, relies solely on recent past data for calibration.  The use of “recent” data means that following any period of low losses, the VAR measure will show low risk.  That is just not the case.  It fails to recognize the longer term volatility that might exist.  In other words, if there are problems with a periodicity longer than the usual one-year time frame of VAR, then VAR will ignore them most of the time and over emphasize them some of the time – like the stopped clock that is right twice a day, except that VAR might never be right.

Risk models can be calibrated to history, long term or short term, or to future expectations, either long term or short term or they can be calibrated to assumptions consistent with market prices, either spot or over some period of time.  Of those six choices, VAR is calibrated from one of the less useful possible choices.

What VAR does is answer the question: what would the 1/100 loss have been had I held the current risk positions over the past year?  The advantage of the definition chosen is that you can be sure of consistency.  However, that is only a consistently useful result if you always believe that the world will remain exactly as risky as it was in the past year.
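For concreteness, here is a minimal historical-simulation sketch of that question.  The positions and the returns matrix are invented placeholders, not any bank's data, and the calculation is the generic textbook version rather than any regulatory specification.

```python
import numpy as np

# Historical-simulation VaR: apply the past year's daily market moves to
# today's positions and read off the 99th-percentile loss.  In practice the
# returns matrix would come from observed market data; here it is simulated.

rng = np.random.default_rng(1)
n_days, n_assets = 250, 3
past_returns = rng.normal(0.0, 0.01, (n_days, n_assets))  # stand-in for last year's moves
positions = np.array([50.0, 30.0, 20.0])                   # current holdings ($M)

pnl = past_returns @ positions          # daily P&L of today's book under past moves
var_99 = -np.quantile(pnl, 0.01)        # 1-in-100 daily loss, given last year's data

print(f"99% one-day historical-simulation VaR: ${var_99:.2f}M")
```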

If you believe in equilibrium, then of course the next year will be very similar to the last.  So risk management is a very small task relating to keeping things in line with past variability.  However, the world and the markets do not evidence a fundamental belief in equilibrium.  Some years are riskier than others.

So VAR is not a Good Risk Measure if it is taken alone.  If used with other, more forward-looking risk measures, it can be part of a suite of information that can lead to good risk management.

On the other hand, if you divorce the idea of VAR from the actual implementation of VAR in the banks, then you can conclude that VaR is not a Bad Risk Measure.

Need to Shift the Defense . . . and the ERM

October 1, 2009

Sports analogies are so easy.

ERM is like the defense in football.  You would no more think of fielding a football team without a defensive squad than you would think of running a financial firm without ERM.  On the football field, if a team went out without any defensive players, they would doubtless be scored upon over and over again.

A financial firm without an ERM program would experience losses that were higher than what they wanted.

The ERM program can learn something from the football defenders.  The defenders, even when they do show up, cannot get by doing the exact same thing over and over again.  The offense of the other team would quickly figure out that they were entirely predictable and take them apart.  The defenders need to shift and compensate for changes in the environment and in the play of the other team.

Banks with compliance oriented static ERM programs found this out in the financial crisis.  Their ERM program consisted of the required calculation of VaR using the required methods.  If you look at what happened in the crisis, many banks did not show any increase in VaR almost right up until the markets froze.  That is because the clever people at the origination end of the banks knew exactly how the ERM folks were going to calculate the VaR and they waltzed their fancy new CDO products right around the static defense of the ERM crew at the bank.

They knew that the ERM squad would not look into the quality of the underlying credit that went into the CDOs as long as those CDOs had the AAA stamp of approval from the rating agencies.  The ERM models worked very well off of the ratings and the banks had drastically cut back on their staff of credit analysts anyway.

They also knew that the spot on the gain and loss curve where the VaR would be calculated was fixed in advance.  As long as their new creation passed the VaR test at that one point, nobody was going to look any further.

So what would the football coach do if their defense kept doing the same thing over and over while the other team ran around them all game?  Would the coach decide to play the next season without a defense?  Or would he retrain and restaff his defense with new players who would move around and adapt and shift to different strategies as the game went along.

And that is what ERM needs to do.  ERM needs to make sure that it does not get stuck in a rut.  Because any predictable rut will not work for long.  The marketplace and perhaps some within their own companies will  find a way around them and defeat their purpose.

UNRISK (2)

September 30, 2009

From Jawwad Farid

UNRISK Part 2 – Understanding the distribution

(Part One)

[Figure UNR1]

Before you completely write this post off as statistical gibberish, and for those of you who were fortunate enough not to get exposure to the subject, let’s just see what the distribution looks like.

[Figure UNR2: histogram of exposure by credit score (grades 1–12) for the month of June]

Not too bad! What you see above is a simple slotting of credit scores across a typical credit portfolio. For the month of June, the scores run from 1 to 12, with 1 good and 12 evul. The axis on the left hand side shows how much we have bet per score / grade category. We collect the scores, then sort them, then bunch them into clusters and then simply plot the results in a graph (in statistical terms, we call it a histogram). Draw the histogram for a data set enough times and the shape of the distribution will begin to speak to you. In this specific case you can see that the scoring function is reasonably effective, since it is doing a good job of classifying and recording relationships, at least as far as scores represent reasonable credit behavior.
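A rough sketch of that slotting exercise, with a randomly generated portfolio standing in for real scores and exposures, might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Bucket exposures by credit score (1 = good, 12 = worst) and plot exposure
# per grade.  The scores and exposure amounts below are simulated placeholders.

rng = np.random.default_rng(3)
scores = rng.choice(np.arange(1, 13), size=2_000,
                    p=np.array([2, 4, 8, 12, 16, 18, 14, 10, 7, 5, 3, 1]) / 100)
exposures = rng.uniform(10_000, 250_000, size=scores.size)

exposure_by_grade = [exposures[scores == g].sum() for g in range(1, 13)]

plt.bar(range(1, 13), exposure_by_grade)
plt.xlabel("Credit score / grade (1 = good, 12 = worst)")
plt.ylabel("Exposure per grade")
plt.title("Exposure by credit grade (illustrative)")
plt.show()
```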

So how do you understand the distribution? Within the risk function there are multiple dimensions that this understanding may take.

The first is effectiveness. For instance the first snapshot of a distribution that we saw was effective. This one isn’t?

Why? Let’s treat that as your homework assignment. (Hint: the first one is skewed in the direction it should be skewed in, this one isn’t).

The second is behavior over time. So far you have only seen the distribution at a given instance, a snapshot. Here is how it changes over time.

[Figure UNR3: the credit score distribution tracked over time]

Notice anything? Homework assignment number two. (Hint: 10, 11 and 12 are NPL, Classified, Non performing, delinquent loans. Do you see a trend?)

The third is dissection across products and customer segments. Heading into an economic cycle where profitability and liquidity is going to be under pressure, which exposure would you cut? Which one is going to keep you awake at night? How did you get here in the first place? Assignment number three.

[Figure UNR4: the distribution dissected by product and customer segment]

Can you stop here? Is this enough? Well no.

[Figure UNR5: 60-day volatility tracked over time for 8 commodity groups]

This is where my old nemesis, the moment generating function, makes an evul comeback. Volatility (or vol) is the second moment. That is a fancy risqué (pun intended) way of saying it is the standard deviation of your data set. You can treat the volatility of the distribution as a static parameter, or treat it with more respect, dive a little deeper and see how it trends over time. What you see above is a simple tracking series that plots 60-day volatility over a period of time for 8 commodity groups together.
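A minimal version of that tracking series – with a simulated price series standing in for real commodity data – can be built with a rolling window:

```python
import numpy as np
import pandas as pd

# Rolling 60-day volatility of daily returns, tracked through time.
# The price series is simulated; in practice you would load commodity
# (or rate, or FX) prices instead of generating them.

rng = np.random.default_rng(11)
dates = pd.bdate_range("2008-01-01", periods=500)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.015, dates.size))), index=dates)

returns = prices.pct_change()
rolling_vol = returns.rolling(window=60).std() * np.sqrt(250)  # annualised 60-day vol

print(rolling_vol.dropna().tail())
```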

See vol. See vol run… (My apologies to my old friend Spot and the HBS EGS Case)

If you are really passionate about the distribution and half as crazy as I am, you could also delve into relationships across parameters as well as try and assess lagged effects across dimensions.

[Figure UNR6: co-movement of volatility across different interest rates]

The graph above shows how volatility for different interest rates moves together, and the one below shows the same phenomenon for a selection of currency pairs. When you look at the volatility of commodities, interest rates and currencies, do you see what I see? Can you hear the distribution? Is it speaking to you now?

Nope. I think you need to snort some more unrisk! Home work assignment number four. (Hint: Is there a relationship, a delayed and lagged effect between the volatility of the three groups? If yes, where and who does it start with?)

[Figure UNR7: co-movement of volatility across a selection of currency pairs]

So far so good! This is what most of us do for a living. Where we fail is in the next step.

You can understand the distribution as much as you want, but it will only make sense to the business side when you translate it into profitability. If you can’t communicate your understanding or put it to work by explaining it to the business side in the language they understand, all of your hard work is irrelevant. A distribution is a wonderful thing only if you understand it. If you don’t, you might as well be praising the beauty of Jupiter’s moon under Saturn’s light in Greek to someone who has only seen Persian landscapes and speaks Pushto.

To bring profitability in, you need to integrate all the above dimensions into profitability. Where do you start? Taking the same example of the credit portfolio above, you start with what we call the transition matrix (a small sketch of how such a matrix can be built follows below). Remember the distribution plot across time from above.

[Figure UNR8: transition matrix built from the credit score distributions across time]
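A simple sketch of how such a transition matrix might be built from two score snapshots – both randomly generated here for illustration, not taken from the post:

```python
import numpy as np
import pandas as pd

# Credit-grade transition matrix: compare each borrower's grade in two
# consecutive snapshots and count how often grade i moves to grade j,
# then row-normalise the counts into transition probabilities.

rng = np.random.default_rng(5)
n_accounts, n_grades = 5_000, 12
june = rng.integers(1, n_grades + 1, n_accounts)
july = np.clip(june + rng.integers(-1, 3, n_accounts), 1, n_grades)  # mostly sticky, slight drift

counts = pd.crosstab(pd.Series(june, name="from"), pd.Series(july, name="to"))
transition_matrix = counts.div(counts.sum(axis=1), axis=0)

print(transition_matrix.round(2))
```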

This has appeared previously in Jawwad’s excellent blog.

The Cheeky, the Funky and the Dummy Monkey… (2)

September 25, 2009

From Stelios Ioannides (risk manager)

Continuation of earlier post.

Who is to blame?

OK, enough – I agree with you: this is an exaggeration of the situation, or the situations, that we are currently experiencing, but reality can be quite close.  What happened in the credit sub-prime crisis can only be justified, in my opinion, by such “monkey” logic.  At the end of the day, it’s about designing products, valuing risk (appropriately), and getting on board the “right” clients with a “desirable”, for our purposes, profile.  Who is doing that?  And how?  The industry failed spectacularly on that.  It allowed this “monkey” concept to grow and to gain potential.

Who cares about Value at Risk or CTE and the associated graphics, if there is no clue at all about how these “interesting” numbers were derived in the first place?  Using a number without knowing its source is like having a map with numbers but no street names.  You do not know where you are; you might know (vaguely) where you are heading, but there is absolutely no way you can reach your destination.

Having some well defined risk measures is just a well accepted methodology that justifies capital intensive and risk sensitive decisions at the big scale.  So if you are applying it wrongly, things can fail dramatically, at a huge scale, causing chaos.  And of course, when things go wrong the funky or the dummy monkeys will be blamed…  These are the ones that will lose their jobs.  The cheeky ones stay alive and are the ones that will be hiring again soon.

The way forward

Understanding the details and being familiar with the fundamentals is crucial in this arena.  “Understanding” is about having the right combination of skills and applying these fundamentals.  It is also about being able to realize how decisions executed in interrelated contexts like pricing, capital reserving and hedging (just to mention a few) might be derived from one’s work.

Knowledge exists, technology exists and, in my opinion, it is a pity that people still stick to the old practices.  There is a strong need to refresh or at least fine tune these well established “ways of doing things”.  In no situation should we act like “robots” that mechanically do things.  We need monkeys that own and are responsible for their piece of work, regardless of how small it is.

If we fail to do that then the “disease” might propagate into other industries, and in that case, of course, the consequences might be scary (to say the least).  We have spent millions or even billions on initiatives like Basel, but we have to make sure that some basic, common sense and ethical rules are being obeyed at all levels.

The risk industry calls for better quality and transparency, and people should sooner or later realize that sharing knowledge and information and aligning interests and objectives would benefit, in most cases, all parties (on the same side) involved in the project or deal.  The way assumptions are derived is crucial.  At the same time, being able to control the behavior of clients is of paramount importance.  How is this achieved?  One way is possibly by proper underwriting and classification.

Conclusions

We are working in various dimensions; we are dealing with risk free worlds, real worlds, real market assumptions, marking to market and so on and so forth.  Concepts like “Stress Testing” are gaining momentum and potential.  In our daily work we have to face concepts like implied volatilities, volatility surfaces, “short-selling” (it took me a while to get this right, honestly) and a few even more complicated terms that I do not even want to mention here.  The list of these complicated terms is endless and growing fast.

In any case, people have the duty to use these concepts in a consistent and ethical way.  Sticking to basic and rather simplistic approaches to problem solving is not wise in the fast world we are living in.

We have the duty to teach the new generations how to synthesize skills and knowledge and to judge impartially and ethically.  I personally believe that the future belongs to the people that have the courage to ask the right questions and the patience to apply the fundamentals … it is the duty of each one of us to find out what that really means.

Perhaps we could elaborate more on that, but due to lack of time I cannot.  Hopefully this won’t be the case when I have to deliver a super important risk project in the future.  What am I?  Well, something in between the dummy and the funky monkey (hopefully closer to the latter or, better, the former?)…

The Cheeky, the Funky and the Dummy Monkey…

September 21, 2009

Guest Post from Stelios Ioannides (risk manager)

In the risk management field, various players are involved; quite a few are more sophisticated, others are more intelligent and some others are better informed than the rest. It is a fact that each of these players (as happens in a variety of disciplines) has a different vision and understanding of what is meant by “risk management”. Most importantly, a few of them might be passionate about pure modeling and quantitative work, and at the other extreme, a few others might really hate their risk related work, as they view it as a very, very boring task. They still continue to do it, though, out of necessity or due to lack of alternative options. As you can understand, the way these different people apply the concepts of risk management is quite different.

In this short piece of opinion, I will try to present the current “crisis” situation, trying to understand how we ended up like this; I will focus mainly on three kinds of professionals, or alternatively on three “Monkey” beings, that are directly or indirectly related to this interesting and fast paced arena.

Using Sophisticated Risk Measures…

There has been a debate about the usage and applicability of metrics like Value at Risk (VaR) and similar risk sensitive measures. Very important people support these measures, but on the other side there is a group of equally intelligent and prestigious practitioners and academics who basically scrap such “dumb” initiatives. Who is right? And who is wrong?

I think that metrics like VaR are quite useful as long as their user knows the fundamentals, the assumptions and what is essentially happening behind the scenes. If you blindly trust such measures without asking the right questions or without challenging your assumptions, I think you run a high risk for various reasons. Let’s see how our professionals (all “male” monkeys, for simplicity) behave in this complex and chaotic world.

The Dummy Monkey…

This kind of professional never or rarely asks questions. He blindly trusts the risk software that he is using in order to perform his job. Work can be hectic, as he might need to elaborate and complete loads of calculations on a daily basis. He is neither interested in nor cares about risk management concepts or best practices. The important thing for him is to prepare the report with the numbers or the information being asked for, and that’s it.  The consequences of his work are unknown to him. He is not necessarily aware of the decisions that will be taken (such decisions will be based on the work that HE eventually has produced). In the majority of cases he is not aware of how the models were built or who was involved in the development of the models or software.  In that respect, he cannot improve or correct things. He is just capable of typing various inputs into the right boxes (hopefully) and deriving some automated reports that in most cases mean nothing to him.  So who can build the models then? What is really going on here?!

The Funky Monkey…

The intellectual curiosity of this kind of professional urges him to study and work hard.  This monkey is very clever and gifted. He works restlessly and builds fantastic models. The thing is, these models might be wrong and, very possibly, these models do not necessarily reflect reality. But who cares?  That is fine, though. As long as these “reliable enough” (who gives the approval, who validates?) financial models can be used easily by the dummy (user) monkeys, that is fine. Who cares about reflecting reality and getting the fundamentals 100% right? The thing is to have a more or less acceptable and “accurate” tool (or better, framework) that behaves as he (the model creator) wishes. But what happens when these “funky” beings are wrong? Because they can get perfectly wrong – they work hard, long hours and alone… – who guarantees that somewhere or somehow a mistake was not made (everybody can get confused every now and then, right?)? Do we have the right, objective and independent control measures in place? What happens if not?

Funky monkeys get hired by the Cheeky Monkeys; they get paid good money…

The Cheeky Monkey…

This “being” is the risk management professional at the very high level. Quite powerful and important, he dedicates loads of time to executing risk management decisions. He is not merely interested in how answers are derived, who derived them, or even who designed the application, model or framework. As long as a clear and straightforward audit trail (well, not necessarily) accompanies such results, then it is perfectly fine. The only thing that matters is that such risk measures are used as indicators and reference for his performance bonus.

More stuff on this worth-examining profile? Well, I cannot say much, as I have not reached that level…

(To be Continued)

Models & Manifesto

September 1, 2009

Have you ever heard anyone say that their car got lost? Or that they got into a massive pile-up because it was a 1-in-200-year event that someone drove on the wrong side of a highway? Probably not.

But statements similar to these have been made many times since mid-2007 by CEOs and risk managers whose firms have lost great sums of money in the financial crisis. And instead of blaming their cars, they blame their risk models. In the 8 February 2009 Financial Times, Goldman Sachs’ CEO Lloyd Blankfein said “many risk models incorrectly assumed that positions could be fully hedged . . . risk models failed to capture the risk inherent in off-balance sheet activities,” clearly placing the blame on the models.

But in reality, it was, for the most part, the modellers, not the models, that failed. A car goes where the driver steers it, and a model evaluates the risks it is designed to evaluate, using the data the model operator feeds into it. In fact, isn’t it the leadership of these enterprises that is really responsible for not clearly assessing the limitations of these models prior to mass usage for billion-dollar decisions?

But humans, who to varying degrees all have a limit to their capacity to juggle multiple inter-connected streams of information, need models to assist with decision-making at all but the smallest and least complex firms.

These points are all captured in the Financial Modeler’s Manifesto from Paul Wilmott and Emanuel Derman.

But before you use any model you did not build yourself, I suggest that you ask the model builder if they have read the manifesto.

If you do build models, I suggest that you read it before and after each model building project.

Multi dimensional Risk Management

August 28, 2009

Many ERM programs are one dimensional. They look at VaR or they look at Economic Capital. The Multi-dimensional Risk manager considers volatility, ruin, and everything in between. They consider not only types of risk that are readily quantifiable, but also those that may be extremely difficult to measure. The following is a partial listing of the risks that a multidimensional risk manager might examine:
o Type A Risk – Short-term volatility of cash flows in one year
o Type B Risk – Short-term tail risk of cash flows in one year
o Type C Risk – Uncertainty risk (also known as parameter risk)
o Type D Risk – Inexperience risk relative to full multiple market cycles
o Type E Risk – Correlation to a top 10
o Type F Risk – Market value volatility in one year
o Type G Risk – Execution risk regarding difficulty of controlling operational losses
o Type H Risk – Long-term volatility of cash flows over five or more years
o Type J Risk – Long-term tail risk of cash flows over five years or more
o Type K Risk – Pricing risk (cycle risk)
o Type L Risk – Market liquidity risk
o Type M Risk – Instability risk regarding the degree that the risk parameters are stable

Many of these types of risk can be measured using a comprehensive risk model, but several are not necessarily directly measurable. But the multi-dimensional risk manager realizes that you can get hurt by a risk even if you cannot measure it.

VaR is not a Bad Risk Measure

August 24, 2009

VaR has taken a lot of heat in the current financial crisis. Some go so far as to blame the financial crisis on VaR.

But VaR is a good risk measure. The problem is with the word RISK. You see, VaR has a precise definition; RISK does not. There is no way that you could possibly measure an idea as ill defined as RISK with a precise measure.

VaR is a good measure of one aspect of RISK. It measures volatility of value under the assumption that the future will be like the recent past. If everyone understands that is what VaR does, then there is no problem.

Unfortunately, some people thought that VaR measured RISK period. What I mean is that they were led to believe that VaR was the same as RISK. In that context VaR (and any other single metric) is a failure. VaR is not the same as RISK.

That is because RISK has many aspects. Here is one partial list of the aspects of risk:

Type A Risk – Short Term Volatility of cash flows in 1 year
Type B Risk – Short Term Tail Risk of cash flows in 1 year
Type C Risk – Uncertainty Risk (also known as parameter risk)
Type D Risk – Inexperience Risk relative to full multiple market cycles
Type E Risk – Correlation to a top 10
Type F Risk – Market value volatility in 1 year
Type G Risk – Execution Risk regarding difficulty of controlling operational losses
Type H Risk – Long Term Volatility of cash flows over 5 or more years
Type J Risk – Long Term Tail Risk of cash flows over 5 years or more
Type K Risk – Pricing Risk (cycle risk)
Type L Risk – Market Liquidity Risk
Type M Risk – Instability Risk regarding the degree that the risk parameters are stable
(excerpted from Risk & Light)

VaR measures Type F risk only.

