Archive for the ‘Value at Risk’ category

Determining Risk Capital

February 5, 2022

Knowing the amount of surplus an insurer needs to support risk is fundamental to enterprise risk management (ERM) and to the own risk and solvency assessment (ORSA).

With the increasing focus on ERM, regulators, rating agencies, and insurance and reinsurance executives are more focused on risk capital modeling than ever before.

Risk – and the economic capital associated with it – cannot actually be measured as you can measure your height. Risk is about the future.

To measure risk, you must measure it against an idea of the future. A risk model is the most common tool for comparing one idea of the future against others.

Types of Risk Models

There are many ways to create a model of risk to provide quantitative metrics and derive a figure for the economic capital requirement.

Each approach has inherent strengths and weaknesses; the trade-offs are between factors such as implementation cost, complexity, run time, ability to represent reality, and ease of explaining the findings. Different types of models suit different purposes.

Each of the approaches described below can be used for purposes such as determining economic capital need, capital allocation, and making decisions about risk mitigation strategies.

Some methods may fit a particular situation, company, or philosophy of risk better than others.

Factor-Based Models

Here the concept is to define a relatively small number of risk categories; for each category, we require an exposure metric and a measure of riskiness.

The overall risk can then be calculated by multiplying “exposure × riskiness” for each category, and adding up the category scores.
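As a minimal sketch of that arithmetic (the categories, exposure bases, and risk factors below are purely illustrative, not those of any actual regulatory or rating agency formula):

```python
# Factor-based capital sketch. All categories, exposures, and
# factors are illustrative assumptions, not calibrated values.
exposures = {
    "underwriting": 500.0,  # net written premium ($M)
    "reserves": 800.0,      # held loss reserves ($M)
    "assets": 1200.0,       # invested assets ($M)
    "credit": 300.0,        # reinsurance recoverables ($M)
}
factors = {
    "underwriting": 0.25,
    "reserves": 0.20,
    "assets": 0.10,
    "credit": 0.05,
}

# Overall risk: exposure x riskiness per category, summed.
charges = {k: exposures[k] * factors[k] for k in exposures}
required_capital = sum(charges.values())
print(charges)
print(f"Total required capital: {required_capital:.0f}")
```

In practice, formulas such as the NAIC RBC also apply a covariance adjustment rather than taking a straight sum, but the exposure-times-factor core is the same.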

Because factor-based models are transparent and straightforward to apply, they are commonly used by regulators and rating agencies.

The NAIC Risk-Based Capital and the Solvency II Standard Formula are calculated in this way, as are A.M. Best’s BCAR score and S&P’s Insurance Capital Model.

Stress Test Models

Stress tests can provide valuable information about how a company might hold up under adversity. As a stand-alone measure or as an adjunct to factor-based methods, stress tests can provide concrete indications that reflect company-specific features without the need for complex modeling. A robust stress testing regime might reflect, for example:

  • Worst company results experienced in last 20 years
  • Worst results observed across peer group in last 20 years
  • Worst results across peer group in last 50 years (or, 20% worse than stage 2)
  • Magnitude of stress-to-failure

Stress test models focus on the severity of possible adverse scenarios. While the framework used to create the stress scenario may allow rough estimates of likelihood, this is not the primary goal.

High-Level Stochastic Models

Stochastic models enable us to analyze both the severity and likelihood of possible future scenarios. Such models need not be excessively complex. Indeed, a high-level model can provide useful guidance.

Categories of risk used in a high-level stochastic model might reflect the main categories from a factor-based model already in use; for example, the model might reflect risk sources such as underwriting risk, reserve risk, asset risk, and credit risk.

A stochastic model requires a probability distribution for each of these risk sources. This might be constructed in a somewhat ad-hoc way by building on the results of a stress test model, or it might be developed using more complex actuarial analysis.

Ideally, the stochastic model should also reflect any interdependencies among the various sources of risk. Timing of cash flows and present value calculations may also be included.
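A minimal sketch of such a high-level stochastic model follows; the distributions, parameters, and correlation matrix are assumptions for illustration only, and a production model would likely use heavier-tailed marginals or a copula for the dependency structure.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Illustrative annual loss means and volatilities for four high-level
# risk sources: underwriting, reserve, asset, and credit risk.
means = np.array([100.0, 150.0, 80.0, 30.0])
vols = np.array([30.0, 40.0, 25.0, 15.0])

# Interdependency via an assumed correlation matrix on normal shocks.
corr = np.array([
    [1.0, 0.5, 0.1, 0.2],
    [0.5, 1.0, 0.1, 0.2],
    [0.1, 0.1, 1.0, 0.3],
    [0.2, 0.2, 0.3, 1.0],
])
cov = corr * np.outer(vols, vols)
losses = rng.multivariate_normal(means, cov, size=n_sims)
total = losses.sum(axis=1)

# Economic capital need: 99th-percentile total loss in excess of the mean.
capital = np.quantile(total, 0.99) - total.mean()
print(f"99% capital need: {capital:.1f}")
```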

Detailed Stochastic Models

Some companies prefer to construct a more detailed stochastic model. The level of detail may vary; in order to keep the model practical and facilitate quality control, it may be best to avoid making the model excessively complicated, but rather develop only the level of granularity required to answer key business questions.

Such a model may, for example, sub-divide underwriting risk into several lines of business and/or profit centers, and associate to each of these units a probability distribution for both the frequency and the severity of claims. Naturally, including more granular sources of risk makes the question of interdependency more complicated.

Multi-Year Strategic Models with Active Management

In the real world, business decisions are rarely made in a single-year context. It is possible to create models that simulate multiple, detailed risk distributions over a multi-year time frame.

And it is also possible to build in “management logic,” so that the model responds to evolving circumstances in a way that approximates what management might actually do.

For example, if a company sustained a major catastrophic loss, in the ensuing year management might buy more reinsurance to maintain an adequate A.M. Best rating, rebalance the investment mix, and reassess growth strategy.

Simulation models can approximate this type of decision making, though of course the complexity of the model increases rapidly.

Key Questions and Decisions

Once a type of risk model has been chosen, there are many different ways to use this model to quantify risk capital. To decide how best to proceed, insurer management should consider questions such as:

  • What are the issues to be aware of when creating or refining our model?
  • What software offers the most appropriate platform?
  • What data will we need to collect?
  • What design choices must we make, and which selections are most appropriate for us?
  • How best can we aggregate risk from different sources and deal with interdependency?
  • There are so many risk metrics that can be used to determine risk capital – Value at Risk, Tail Value at Risk, Probability of Ruin, etc. – what are their implications, and how can we choose among them?
  • How should this coordinate with catastrophe modeling?
  • Will our model actually help us to answer the questions most important to our firm?
  • What are best practices for validating our model?
  • How should we allocate risk capital to business units, lines of business, and/or insurance policies?
  • How should we think about the results produced by our model in the context of rating agency capital benchmarks?
  • Introducing a risk capital model may create management issues – how can we anticipate and deal with these?

In answering these questions, it is important to consider the intended applications. Will the model be used to establish or refine risk appetite and risk tolerance?

Will modeled results drive reinsurance decisions, or affect choices about growth and merger opportunities? Does the company intend to use risk capital for performance management, or ratemaking?

Will the model be used to complete the NAIC ORSA, or inform rating agency capital adequacy discussions?

The intended applications, along with the strengths and weaknesses of the various modeling approaches and range of risk metrics, should guide decisions throughout the economic capital model design process.


Risk Measurement & Reporting

October 18, 2021

Peter Drucker is reported to have once said, “what gets measured, gets managed.” That truism of modern management applies to risk as well as it does to other more commonly measured things like sales, profits, and expenses.

Regulators take a similar view; what gets measured should get managed. ORSA frameworks aim to support prospective solvency by giving management a clear view of their ongoing corporate risk positions.

This in turn should reduce the likelihood of large unanticipated losses if timely action can be taken when a risk limit is breached.

From a regulatory perspective, each identified risk should have at least one measurable metric that is reported upwards, ultimately to the board.

The Need to Measure Up

Many risk management programs build up extensive risk registers but are stymied by this obvious next step – that of measuring the risks that have been identified.

Almost every CEO can cite the company’s latest figures for sales, expenses and profits, but very few know what the company’s risk position might be.

Risks are somewhat more difficult to measure than profits due to the degree to which they depend upon opinions.

Insurance company profits are already seen as opaque by many non-industry observers because profits depend on more than just sales and expenses: profits depend upon claims estimates, which are based on current (and often incomplete) information about those transactions.

Risk, on the other hand, is all about things that might happen in the future: specifically, bad things that might happen in the future.

A risk measure reflects an opinion about the size of the exposure to future losses. All risk measures are opinions; there are no facts about the future. At least not yet.

Rationalizing Risk

There are, however, several ways that risk can be measured to facilitate management in the classical sense that Drucker was thinking of.

That classic idea is the management control cycle, where management sets a plan and then monitors emerging experience in comparison to that plan.

To achieve this objective, risk measures need to be consistent from period to period. They need to increase when volume of activity increases, but they also need to reflect changes in the riskiness of activities as time passes and as the portfolio of the risk taker changes.

Good risk measures provide a projected outcome; but in some cases, such calculations are not available and risk indicators must be used instead.

Risk indicators measure something that is closely related to the risk and so can be expected to vary similarly to an actual risk measure, if one were available.

For insurers, current state-of-the-art risk measures are based upon computer models of the risk taking activities.

With these models, risk managers can determine a broad range of possible outcomes for a risk taking activity and then define the risk measure as some subset of those outcomes.

Value at Risk

The most common such measure is called value at risk (VaR). If the risk model is run with a random element, usually called a Monte Carlo or stochastic model, the 99% VaR is approximately the 99th-ranked result in a run of 100 outcomes sorted from best to worst (only one result is worse), or the 990th out of 1,000 (only ten are worse).

Contingent Tail Expectation

This value might represent the insurer’s risk capital target. A similar risk measure is the contingent tail expectation (CTE), which is also called the tail value at risk (TVaR).

The 99% CTE is the average of all the values that are worse than the 99% VaR. You can think of these two values in this manner: if a company holds capital at the 99% VaR level, then the 99% CTE minus the 99% VaR is the average amount of loss to policyholders should the company become insolvent.
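A minimal sketch of both measures on simulated losses (the loss distribution and its parameters are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative loss model: 100,000 simulated annual losses ($M).
losses = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)

level = 0.99
var_99 = np.quantile(losses, level)      # 99% VaR: only 1% of outcomes are worse
cte_99 = losses[losses > var_99].mean()  # 99% CTE / TVaR: average of that worst 1%

print(f"99% VaR: {var_99:.1f}")
print(f"99% CTE: {cte_99:.1f}")
# If capital is held at the 99% VaR, this difference is the average
# loss borne by policyholders in the insolvency scenarios.
print(f"Average shortfall beyond VaR: {cte_99 - var_99:.1f}")
```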

Rating agencies, and increasingly regulators, require companies to provide results of risk measures from stochastic models of natural catastrophes.

Stochastic models are also used to estimate other risk exposures, including underwriting risk from other lines of insurance coverage and investment risk.

In addition to stochastic models, insurers also model possible losses under single well-defined adverse scenarios. The results are often called stress tests.

Regulators are also increasingly calling for stress tests to provide risk measures that they feel are more easily understood and compared among companies.

Key Risk Indicators

Most other risks, especially strategic and operational risks, are monitored by key risk indicators (KRIs). For these risks, good measures are not available and so we must rely on indicators.

For example, an economic downturn could pose risk to an insurer’s growth strategy. While it may be difficult to measure the likelihood of a downturn or the extent to which it would impair growth, the insurer can use economic forecasts as risk indicators.

Of course, simply measuring risk is insufficient. The results of the measurement must be communicated to people who can and will use the risk information to appropriately steer the future activity of the company.

Risk Dashboard

Simple charts of numbers are sufficient in some cases, but the state-of-the-art approach to presenting risk measurement information is the risk dashboard.

With a risk dashboard, several important charts and graphs are presented on a single page, like the dashboard of a car or airplane, so that the user can see important information and trends at a glance.

The risk dashboard is often accompanied by the charts of numbers, either on later pages of a hard copy or on a click-through basis for on-screen risk dashboards.

Dashboard Example

Assumptions Embedded in Risk Analysis

April 28, 2010

The picture below from Doug VanDemeter’s blog gives an interesting take on the embedded assumptions in various approaches to risk analysis and risk treatment.

But what I take from this is a realization that many firms have activity in one or two or three of those boxes, but the only box that does not assume away a major part of reality is generally empty.

In reality, most financial firms do experience market, credit and liability risks all at the same time and most firms do expect to be continuing to receive future cashflows both from past activities and from future activities.

But most firms have chosen to measure and manage their risk by assuming that one or two or even three of those things are not a concern – selectively putting on blinders to major aspects of their risks: first blinding their right eye, then their left, then not looking up, and finally not looking down.

Some of these processes were designed that way in earlier times, when computational power would not have allowed anything more.  For many firms, their affairs are so very complicated and their future is so uncertain that it is simply impractical to incorporate everything into one all-encompassing risk assessment and treatment framework.

At least that is the story that folks are most likely to use.

But the fact that their activity is too complicated for them to model does not seem to send them any flashing red signal that it is possible that they really do not understand their risk.

So look at Doug’s picture and see which are the embedded assumptions in each calculation – the ones I am thinking of are the labels on the OTHER rows and columns.

For Credit VaR – the embedded assumption is that there is no Market Risk and that there are no new assets or liabilities (business is in sell-off mode).

For Interest risk VaR – the embedded assumption is that there is no credit risk and there are no new assets or liabilities (business is in sell-off mode).

For ALM – the embedded assumption is that there is no credit risk and business is in run-off mode.

Those are the real embedded assumptions.  We should own up to them.

Take CARE in evaluating your Risks

February 12, 2010

Risk management is sometimes summarized as a short set of simply stated steps:

  1. Identify Risks
  2. Evaluate Risks
  3. Treat Risks

There are much more complicated expositions of risk management.  For example, the AS/NZ Risk Management Standard makes 8 steps out of that. 

But I would contend that those three steps are the really key steps. 

The middle step “Evaluate Risks” sounds easy.  However, there can be many pitfalls.  A new report [CARE] from a working party of the Enterprise and Financial Risks Committee of the International Actuarial Association gives an extensive discussion of the conceptual pitfalls that might arise from an overly narrow approach to Risk Evaluation.

The heart of that report is a discussion of eight different either/or choices that are often made in evaluating risks:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE 
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS         
  3. REGULATORY MEASURE OF RISK    
  4. SHORT TERM VS. LONG TERM RISKS          
  5. KNOWN RISK AND EMERGING RISKS        
  6. EARNINGS VOLATILITY VS. RUIN    
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO       
  8. CASH VS. ACCRUAL 

The main point of the report is that for a comprehensive evaluation of risk, these are not choices.  Both paths must be explored.

Economic Risk Capital

December 1, 2009

Guest Post from Chitro Majumdar

Economic capital models can be complex, embodying many component parts, and it may not be immediately obvious that a complex model works satisfactorily. Moreover, a model may embody assumptions about relationships between variables, or about their behaviour, that may not hold in all circumstances (e.g. under periods of stress). We have developed an algorithm for Dynamic Financial Analysis (DFA) that enables the creation of a comprehensive framework to manage Enterprise Risk’s Economic Risk Capital. DFA is used in the capital budgeting decision process of a company to launch a new invention and predict the impact of the strategic decision on the balance sheet over the planning horizon. DFA provides a strategy for Enterprise Risk Management that aims to avoid undesirable outcomes, which could be disastrous.

“The Quants know better than anyone how their models can fail. The surest way to replicate this adversity is to trust the models blindly while taking large-scale advantage of situations where they seem to provide ERM strategies that would yield results too superior to be true”

Dynamic Financial Analysis (DFA) is the most advanced modelling process in today’s property and casualty industry, allowing us to develop financial forecasts that integrate the variability and interrelationships of critical factors affecting our results. In DFA modeling, the company’s relevant random variables are chosen based on a categorization of risks, and the model generates solvency testing in which the financial position of the company is evaluated from the perspective of the customers. The central idea is to quantify in probabilistic terms whether the company will be able to meet its commitments in the future.  DFA is used in the capital budgeting decision process of a company launching a new invention and predicting the impact of the strategic decision on the balance sheet over a horizon of a few years.

 

The validation of economic capital models is at a very preliminary stage. There exists a wide range of validation techniques, each of which provides corroboration for (or against) only some of the desirable properties of a model. Moreover, validation techniques are powerful in some areas, such as risk sensitivity, but not in others, such as overall absolute accuracy or accuracy in the tail of the loss distribution. It is advisable that validation processes be designed alongside the development of the models rather than chronologically following the model building process. Certain industry validation practices are weak, with improvements needed in benchmarking, industry-wide exercises, back-testing, profit and loss analysis, and stress testing, followed by more advanced simulation methods. For validation we adhere to the calculation method described below.

 

Calculation of risk measures

In their internal use of risk measures, banks need to determine an appropriate confidence level for their economic capital models. It generally does not coincide with the 99.9% confidence level used for credit and operational risk under Pillar 1 of Basel II or with the 99% confidence level for general and specific market risk. Frequently, the link between a bank’s target rating and the choice of confidence level is interpreted as the amount of economic capital necessary to prevent the bank from eroding its capital buffer at a given confidence level. According to this view, which can be interpreted as a going-concern view, capital planning is seen as a dynamic exercise rather than a static one, in which banks want to hold a capital buffer “on top” of their regulatory capital, and it is the probability of eroding that buffer (rather than all available capital) that is linked to the target rating. This reflects the expectation (by analysts, rating agencies and the market) that the bank operates with capital that exceeds the regulatory minimum requirement.

Apart from considerations about the link to a target rating, the choice of a confidence level might differ based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of creditors, rating agencies and regulators, in that they are used to determine the amount of capital required to minimise bankruptcy risk. On the other hand, banks use lower confidence levels for management purposes, in order to allocate capital to business lines and/or individual exposures and to identify those exposures that are critical for profit objectives in a normal business environment.

Another interesting aspect of the internal use of different risk measures is that the choice of risk measure and confidence level heavily influences relative capital allocations to individual exposures or portfolios. In short, the farther out in the tail of a loss distribution, the more relative capital gets allocated to concentrated exposures. As such, the choice of the risk measure as well as the confidence level can have a strategic impact, since some portfolios might look relatively better or worse under risk-adjusted performance measures than they would based on an alternative risk measure.
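That tail-allocation effect can be made concrete with a small simulation. In this sketch (both loss distributions are assumptions chosen for illustration), capital is allocated by co-TVaR, i.e. by each exposure’s average contribution to total losses beyond the total VaR, and the concentrated, heavy-tailed exposure takes a larger share as the confidence level moves farther out into the tail.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Two illustrative exposures with equal expected loss: one thin-tailed
# and diversified, one heavy-tailed and concentrated (assumed shapes).
diversified = rng.normal(100.0, 10.0, n)
concentrated = rng.lognormal(4.0, 1.0, n)
concentrated *= 100.0 / concentrated.mean()  # rescale to the same mean
total = diversified + concentrated

def co_tvar_shares(level):
    """Allocate tail capital by each exposure's average contribution
    to total losses beyond the total VaR at the given level."""
    tail = total > np.quantile(total, level)
    alloc = np.array([diversified[tail].mean(), concentrated[tail].mean()])
    return alloc / alloc.sum()

for level in (0.99, 0.999):
    d, c = co_tvar_shares(level)
    print(f"{level:.1%} TVaR shares -> diversified: {d:.1%}, concentrated: {c:.1%}")
```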

 

Chitro Majumdar CSO – R-square RiskLab

 

 

More details: http://www.riskreturncorp.com

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact, several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulators to require stress tests so that they will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios; extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available, if they thought of it at all.  Did not consider that in an extreme situation producing losses of a magnitude similar to Economic Capital, capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Toos follow innovators, getting less of the gains because of slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of gains, Me Too losses are a sizable fraction, and Risk Avoiders often lose money.  Innovators have all left the banks.  Banks are just the Me Toos and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression?  Funny how the words change.)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE Preferred stock holders was the actual start of the panic.  Lehman, a week later, was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)
  • Models turn on, Brains turn off.

Turn VAR Inside Out – To Get S

November 13, 2009

S

Survival.  That is what you really want to know.  When the Board meeting ends, the last thing the directors should hear is management assuring them that the company will still be in business when the next meeting is due to be held.

S

But it really is not in terms of bankruptcy, or even regulatory take-over.  If your firm is in the assurance business, then the company does not necessarily need to go that far.  There is usually a point, one that might be pretty far removed from bankruptcy, where the firm loses the confidence of the market and is no longer able to do business.  And good managers know exactly where that point lies.

S

So S is the likelihood of avoiding that point of no return.  It is a percentage.  Some might cry that no one will understand a percentage; that they need dollars to understand.  But VAR includes a percentage as well.  Just because no one says the percentage, that does not mean it is not there.  It actually means that no one is even bothering to try to help people understand what VAR is.  The VAR number is really one part of a three-part sentence:

The 99% VAR over one year is $67.8 M.  By itself, VAR does not tell you whether the firm is in trouble.  If the VAR doubles from one period to the next, is the firm in trouble?  The answer to that cannot be determined without further information.

S

Survival is the probability that, given the real risks of the firm and the real capital of the firm, the firm will not sustain a loss large enough to put an end to its business model.  If your S is 80%, then there is about a 50% chance that your firm will not survive three years! But if your S is 95%, then there is a 50-50 chance that your firm will last at least 13 years.  This arithmetic is why a firm that makes long term promises, like an insurer, needs to have a very high S.  An S of 95% does not really seem high enough.
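The arithmetic behind those figures is just compounding of the annual survival probability; a quick check (the 98.5% line anticipates the example further below):

```python
import math

# Median survival horizon implied by an annual survival probability S:
# the number of years n at which S**n falls to 50%.
def median_horizon_years(annual_s: float) -> float:
    return math.log(0.5) / math.log(annual_s)

for s in (0.80, 0.95, 0.985):
    print(f"S = {s:.1%}: 50-50 odds of surviving {median_horizon_years(s):.1f} years")
```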

S

Survival is something that can be calculated with the existing VAR model.  Instead of focusing on an arbitrary probability, the calculation instead focuses on the loss that management feels is enough to put them out of business.  S can be recalculated after a proposed share buyback or payment of dividends.  S responds to management actions and assists management decisions.
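A sketch of that inversion, using illustrative simulated annual results and an assumed point-of-no-return loss (both are stand-ins for a firm’s actual VAR-model output and management’s own judgment):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the loss distribution an existing VAR model already
# produces: 100,000 simulated one-year losses ($M).
annual_losses = rng.lognormal(mean=4.0, sigma=0.8, size=100_000)

# Management's point of no return: the loss that ends the business
# model (an assumed figure, typically well short of full capital).
point_of_no_return = 250.0

S = np.mean(annual_losses < point_of_no_return)
print(f"One-year survival probability S: {S:.1%}")

# Rerunning with a lower threshold shows how S responds to a
# proposed capital action such as a buyback or dividend.
S_after = np.mean(annual_losses < point_of_no_return - 50.0)
print(f"S after a 50 capital return: {S_after:.1%}")
```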

If your board asks how much risk you are taking, try telling them the firm has a 98.5% Survival probability.  That might actually make more sense to them than saying that the firm might lose as much as $523 M at a 99% confidence interval over one year.

So turn your VAR inside out – to get S 

VAR is not a Good Risk Measure

November 6, 2009

Value at Risk (VAR) has taken a lot of heat lately, and deservedly so.

VAR, as banks are required to calculate it, relies solely on recent past data for calibration.  The use of “recent” data means that following any period of low losses, the VAR measure will show low risk.  That is just not the case.  It fails to recognize the longer term volatility that might exist.  In other words, if there are problems that have a periodicity longer than the usual one-year time frame of VAR, then VAR will ignore them most of the time and overemphasize them some of the time – like the stopped clock that is right twice a day, except that VAR might never be right.

Risk models can be calibrated to history (long term or short term), to future expectations (long term or short term), or to assumptions consistent with market prices (spot or over some period of time).  Of those six choices, VAR is calibrated from one of the less useful ones.

What VAR does is answer the question: what would the 1/100 loss have been had I held the current risk positions over the past year?  The advantage of the definition chosen is that you can be sure of consistency.  However, that is only a consistently useful result if you always believe that the world will remain exactly as risky as it was in the past year.
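A sketch of that calculation, often called historical simulation (the return history and position sizes below are random stand-ins for real market data and a real book):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins: one year (250 trading days) of daily returns for three
# asset classes, and today's position sizes ($M).
past_returns = rng.normal(0.0, 0.01, size=(250, 3))
positions = np.array([400.0, 250.0, 150.0])

# Revalue today's positions over each historical day; this answers
# "what would my loss have been had I held this book last year?"
daily_pnl = past_returns @ positions

var_99 = -np.quantile(daily_pnl, 0.01)  # the roughly 1-in-100 daily loss
print(f"99% one-day historical-simulation VaR: {var_99:.1f}")
```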

If you believe in equilibrium, then of course the next year will be very similar to the last.  So risk management is a very small task relating to keeping things in line with past variability.  However, the world and the markets do not evidence a fundamental belief in equilibrium.  Some years are riskier than others.

So VAR is not a Good Risk Measure if it is taken alone.  If used with other, more forward-looking risk measures, it can be part of a suite of information that can lead to good risk management.

On the other hand, if you divorce the idea of VAR from the actual implementation of VAR in the banks, then you can conclude that VAR is not a Bad Risk Measure.

Need to Shift the Defense . . . and the ERM

October 1, 2009

Sports analogies are so easy.

ERM is like the defense in football.  You would no more think of fielding a football team without a defensive squad than you would think of running a financial firm without ERM.  On the football field, if a team went out without any defensive players, they would doubtless be scored upon over and over again.

A financial firm without an ERM program would experience losses that were higher than what they wanted.

The ERM program can learn something from the football defenders.  The defenders, even when they do show up, cannot get by doing the exact same thing over and over again.  The offense of the other team would quickly figure out that they were entirely predictable and take them apart.  The defenders need to shift and compensate for changes in the environment and in the play of the other team.

Banks with compliance oriented static ERM programs found this out in the financial crisis.  Their ERM program consisted of the required calculation of VaR using the required methods.  If you look at what happened in the crisis, many banks did not show any increase in VaR almost right up until the markets froze.  That is because the clever people at the origination end of the banks knew exactly how the ERM folks were going to calculate the VaR and they waltzed their fancy new CDO products right around the static defense of the ERM crew at the bank.

They knew that the ERM squad would not look into the quality of the underlying credit that went into the CDOs as long as those CDOs had the AAA stamp of approval from the rating agencies.  The ERM models worked very well off of the ratings and the banks had drastically cut back on their staff of credit analysts anyway.

They also knew that the spot on the gain and loss curve where the VaR would be calculated was fixed in advance.  As long as their new creation passed the VaR test at that one point, nobody was going to look any further.

So what would the football coach do if his defense kept doing the same thing over and over while the other team ran around it all game?  Would the coach decide to play the next season without a defense?  Or would he retrain and restaff his defense with new players who would move around and adapt and shift to different strategies as the game went along?

And that is what ERM needs to do.  ERM needs to make sure that it does not get stuck in a rut, because any predictable rut will not work for long.  The marketplace, and perhaps some within their own companies, will find a way around them and defeat their purpose.

UNRISK (2)

September 30, 2009

From Jawwad Farid

UNRISK Part 2 – Understanding the distribution

(Part One)

UNR1

Before you completely write this post off as statistical gibberish, and for those of you who were fortunate enough not to get exposure to the subject, let’s just see what the distribution looks like.

UNR2

Not too bad! What you see above is a simple slotting of credit scores across a typical credit portfolio. For the month of June, the scores range from 1 to 12, with 1 good and 12 evul. The axis on the left hand side shows how much we have bet per score / grade category. We collect the scores, then sort them, then bunch them in clusters, and then simply plot the results in a graph (in statistical terms, we call it a histogram). Draw the histogram for a data set enough times and the shape of the distribution will begin to speak to you. In this specific case you can see that the scoring function is reasonably effective, since it is doing a good job of classifying and recording relationships, at least as far as scores represent reasonable credit behavior.
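For anyone who wants to reproduce the picture, a small sketch (the scores and exposure amounts are randomly generated stand-ins for a real portfolio):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(11)

# Stand-ins for a real book: a credit score (1 good .. 12 bad) and an
# exposure amount for each of 5,000 accounts.
scores = np.clip(rng.poisson(4, size=5_000) + 1, 1, 12)
exposures = rng.uniform(10, 100, size=5_000)

# Total amount bet per score/grade category: the histogram in the post.
totals = np.bincount(scores, weights=exposures, minlength=13)[1:]

plt.bar(range(1, 13), totals)
plt.xlabel("Credit score (1 = good, 12 = bad)")
plt.ylabel("Exposure per grade")
plt.show()
```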

So how do you understand the distribution? Within the risk function there are multiple dimensions that this understanding may take.

The first is effectiveness. For instance, the first snapshot of a distribution that we saw was effective. This one isn’t.

Why? Let’s treat that as your homework assignment. (Hint: the first one is skewed in the direction it should be skewed in, this one isn’t).

The second is behavior over time. So far you have only seen the distribution at a given instance, a snapshot. Here is how it changes over time.

UNR3

Notice anything? Homework assignment number two. (Hint: 10, 11 and 12 are NPL, Classified, Non performing, delinquent loans. Do you see a trend?)

The third is dissection across products and customer segments. Heading into an economic cycle where profitability and liquidity are going to be under pressure, which exposure would you cut? Which one is going to keep you awake at night? How did you get here in the first place? Assignment number three.

UNR4

Can you stop here? Is this enough? Well no.

UNR5

This is where my old nemesis, the moment generating function, makes an evul comeback. Volatility (or vol) is the second moment. That is a fancy risqué (pun intended) way of saying it is the standard deviation of your data set. You can treat the volatility of the distribution as a static parameter, or treat it with more respect, dive a little deeper, and see how it trends over time. What you see above is a simple tracking series that plots 60-day volatility over a period of time for 8 commodity groups together.

See vol. See vol run… (My apologies to my old friend Spot and the HBS EGS Case)
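A sketch of how such a tracking series might be computed (the price series below are random-walk stand-ins for real commodity market data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(13)

# Random-walk stand-ins for daily prices of three commodity groups.
dates = pd.bdate_range("2008-01-01", periods=500)
shocks = rng.normal(0.0, 0.015, size=(500, 3))
prices = pd.DataFrame(100 * np.exp(np.cumsum(shocks, axis=0)),
                      index=dates, columns=["energy", "metals", "grains"])

# 60-day rolling volatility of daily log returns, annualized.
log_returns = np.log(prices).diff()
rolling_vol = log_returns.rolling(60).std() * np.sqrt(252)
print(rolling_vol.dropna().tail())
```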

If you are really passionate about the distribution and half as crazy as I am, you could also delve into relationships across parameters as well as try and assess lagged effects across dimensions.

UNR6

The graph above shows how volatility for different interest rates moves together, and the one below shows the same phenomenon for a selection of currency pairs. When you look at the volatility of commodities, interest rates and currencies, do you see what I see? Can you hear the distribution? Is it speaking to you now?

Nope. I think you need to snort some more unrisk! Home work assignment number four. (Hint: Is there a relationship, a delayed and lagged effect between the volatility of the three groups? If yes, where and who does it start with?)

UNR7

So far so good! This is what most of us do for a living. Where we fail is in the next step.

You can understand the distribution as much as you want, but it will only make sense to the business side when you translate it into profitability. If you can’t communicate your understanding, or put it to work by explaining it to the business side in the language they understand, all of your hard work is irrelevant. A distribution is a wonderful thing only if you understand it. If you don’t, you might as well be praising the beauty of Jupiter’s moon under Saturn’s light in Greek to someone who has only seen Persian landscapes and speaks Pushto.

To bring profitability in, you need to integrate all of the above dimensions into a profitability view. Where do you start? Taking the same example of the credit portfolio above, you start with what we call the transition matrix. Remember the distribution plot across time from above.

UNR8
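A sketch of how such a transition matrix might be estimated from two consecutive score snapshots (the snapshots here are randomly generated stand-ins for real monthly portfolio data):

```python
import numpy as np

rng = np.random.default_rng(17)

# Stand-ins for two consecutive monthly score snapshots of the same
# 10,000 accounts (grades 1 good .. 12 bad), with a mild drift to worse.
june = rng.integers(1, 13, size=10_000)
july = np.clip(june + rng.integers(-1, 3, size=10_000), 1, 12)

# Count moves from each June grade to each July grade.
counts = np.zeros((12, 12))
np.add.at(counts, (june - 1, july - 1), 1)

# Row-normalize: entry [i, j] estimates P(grade j+1 next | grade i+1 now).
transition = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition, 2))
```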

This has appeared previously in Jawwad’s excellent blog.

The Cheeky, the Funky and the Dummy Monkey… (2)

September 25, 2009

From Stelios Ioannides (risk manager)

Continuation of earlier post.

Who is to blame?

OK, enough. I agree with you: this is an exaggeration of the situation, or the situations, that we are currently experiencing, but reality can be quite close. What happened in the credit sub-prime crisis can only be justified, in my opinion, by such “monkey” logic. At the end of the day, it’s about designing products, appropriately valuing risk, and getting on board the “right” clients with a profile that is “desirable” for our purposes. Who is doing that?  And how? The industry failed spectacularly on that. It allowed this “monkey” concept to grow and to gain ground.

Who cares about Value at Risk or CTE and the associated graphics, if there is no clue at all about how these “interesting” numbers were derived in the first place? Using a number without knowing its source is like having a map with numbers but no street names. You do not know where you are; you might know (vaguely) where you are heading; but there is absolutely no way you can reach your destination.

Having some well defined risk measures is just a well accepted methodology that justifies capital-intensive and risk-sensitive decisions at a large scale. So if you are applying it wrongly, things can fail dramatically, at a huge scale, causing chaos. And of course, when things go wrong, the funky or the dummy monkeys will be blamed… These are the ones that will lose their jobs. The cheeky ones stay alive and are the ones that will be hiring again soon.

The way forward

Understanding the details and being familiar with the fundamentals is crucial in this arena. “Understanding” is about having the right combination of skills and applying these fundamentals. It is also about being able to realize how decisions that might be executed in interrelated contexts like pricing, capital reserving and hedging (just to mention a few) might be driven by one’s work.

Knowledge exists, technology exists, and in my opinion it is a pity that people still stick to the old practices.  There is a strong need to refresh, or at least fine-tune, these well established “ways of doing things”. In no situation should we act like “robots” that mechanically do things.  We need monkeys that are owners of, and responsible for, their piece of work, regardless of how small it is.

If we fail to do that, then the “disease” might propagate into other industries, and in that case, of course, the consequences might be scary (to say the least).  We have spent millions or even billions on initiatives like Basel, but we have to make sure that some basic, common sense and ethical rules are being obeyed at all levels.

The risk industry calls for better quality and transparency, and people should sooner or later realize that sharing knowledge and information and aligning interests and objectives would benefit, in most cases, all parties (of the same side) involved in a project or deal. The way assumptions are derived is crucial. At the same time, being able to control the behavior of clients is of paramount importance.  How is this achieved? One way is possibly by proper underwriting and classification.

Conclusions

We are working in various dimensions; we are dealing with risk-free worlds, real worlds, real market assumptions, marking to market, and so on and so forth. Concepts like “stress testing” are gaining momentum and potential.  In our daily work we have to face concepts like implied volatilities, volatility surfaces, “short-selling” (it took me a while to get this right, honestly) and a few even more complicated terms that I do not even want to mention here. The list of these complicated terms is endless and growing fast.

In any case, people have a duty to use these concepts in a consistent and ethical way.  Sticking to basic and rather simplistic approaches to problem solving is not wise in the fast world we are living in.

We have a duty to teach the new generations how to synthesize skills and knowledge and to judge impartially and ethically. I personally believe that the future belongs to the people who have the courage to ask the right questions and the patience to apply the fundamentals … it is the duty of each one of us to find out what that really means.

Perhaps we could elaborate more on that, but due to lack of time I cannot. Hopefully this won’t be the case when I have to deliver a super important risk project in the future. What am I? Well, something in between the dummy and the funky monkey (hopefully closer to the latter, or better, the former?)…

The Cheeky, the Funky and the Dummy Monkey…

September 21, 2009

Guest Post from Stelios Ioannides (risk manager)

In the risk management field, various players are involved; quite a few are more sophisticated, others are more intelligent, and some others are better informed than the rest. It is a fact that each of these players (as happens in a variety of other disciplines too) has a different vision and understanding of what is meant by “risk management”. Most importantly, a few of them might be passionate about pure modeling and quantitative work, and at the other extreme, a few others might really hate their risk related work, viewing it as a very, very boring task. They still continue to do it, though, out of necessity or due to lack of alternative options. As you can understand, the way these different people apply the concepts of risk management is quite different.

In this short opinion piece, I will try to present the current “crisis” situation, trying to understand how we ended up like this; I will focus mainly on three kinds of professionals, or alternatively on three “monkey” beings, that are directly or indirectly related to this interesting and fast paced arena.

Using Sophisticated Risk Measures…

There has been a debate around the usage and applicability of metrics like Value at Risk (VaR) and similar risk sensitive measures. Very important people support these measures, but on the other side there is a group of equally intelligent and prestigious practitioners and academics who basically scrap such “dumb” initiatives. Who is right? And who is wrong?

I think that metrics like VaR are quite useful as long as their user knows the fundamentals, the assumptions, and what is essentially happening behind the scenes. If you blindly trust such measures without asking the right questions or without challenging your assumptions, I think you run a high risk for various reasons. Let’s see how our professionals (all “male” monkeys, for simplicity) behave in this complex and chaotic world.

The Dummy Monkey…

This kind of professional never or rarely asks questions. He blindly trusts the risk software that he is using in order to perform his job. Work can be hectic, as he might need to elaborate and complete loads of calculations on a daily basis. He is neither interested in nor cares about risk management concepts or best practices. The important thing for him is to prepare the report with the numbers or the information being asked for, and that’s it.  The consequences of his work are unknown to him. He is not necessarily aware of the decisions that will be taken (decisions that will be based on the work that HE eventually has produced). In the majority of cases he is not aware of how the models were built or who was involved in the development of the models or software.  In that respect, he cannot improve or correct things. He is just capable of typing various inputs into the right boxes, hopefully, and deriving some automated reports that in most cases mean nothing to him.  So who can build the models then? What is really going on here?!

The Funky Monkey…

The intellectual curiosity of this kind of professional urges him to study and work hard.  This monkey is very clever and gifted. He works restlessly and builds fantastic models. The thing is, these models might be wrong, and very possibly they do not reflect reality. But who cares?  That is fine, though. As long as these “reliable enough” (who gives the approval, who validates?) financial models can be used easily by the dummy (user) monkeys, everything is fine. Who cares about reflecting reality and getting the fundamentals 100% right? The thing is to have a more or less acceptable and “accurate” tool (or better, framework) that behaves as he (the model creator) wishes. But what happens when these “funky” beings are wrong? Because they can get it perfectly wrong (they work hard, long hours, and alone), who guarantees that somewhere or somehow a mistake was not made (everybody gets confused every now and then, right?)? Do we have the right, objective and independent control measures in place? What happens if not?

Funky monkeys get hired by the Cheeky Monkeys; they get paid good money…

The Cheeky Monkey…

This “being” is the risk management professional at the very high level. Quite powerful and important, he dedicates loads of time to executing risk management decisions. He is not much interested in how answers are derived, who derived them, or even who designed the application, model or framework. As long as a clear and straightforward audit trail (well, not necessarily) accompanies such results, then it is perfectly fine. The only thing that matters is that such risk measures are being used as indicators and reference for his performance bonus.

More stuff on this worth-examining profile? Well, I cannot say much, as I have not reached that level…

(To be Continued)

Multi dimensional Risk Management

August 28, 2009

Many ERM programs are one dimensional. They look at VaR, or they look at Economic Capital. The multi-dimensional risk manager considers volatility, ruin, and everything in between. They consider not only types of risk that are readily quantifiable, but also those that may be extremely difficult to measure. The following is a partial listing of the risks that a multi-dimensional risk manager might examine:
o Type A Risk – Short-term volatility of cash flows in one year
o Type B Risk – Short-term tail risk of cash flows in one year
o Type C Risk – Uncertainty risk (also known as parameter risk)
o Type D Risk – Inexperience risk relative to full multiple market cycles
o Type E Risk – Correlation to a top 10
o Type F Risk – Market value volatility in one year
o Type G Risk – Execution risk regarding difficulty of controlling operational losses
o Type H Risk – Long-term volatility of cash flows over five or more years
o Type J Risk – Long-term tail risk of cash flows over 5 five years or more
o Type K Risk – Pricing risk (cycle risk)
o Type L Risk – Market liquidity risk
o Type M Risk – Instability risk regarding the degree that the risk parameters are stable

Many of these types of risk can be measured using a comprehensive risk model, but several are not necessarily directly measurable. The multi-dimensional risk manager realizes that you can get hurt by a risk even if you cannot measure it.

VaR is not a Bad Risk Measure

August 24, 2009

VaR has taken a lot of heat in the current financial crisis. Some go so far as to blame the financial crisis on VaR.

But VaR is a good risk measure. The problem is with the word RISK. You see, VaR has a precise definition; RISK does not. There is no way that you could possibly measure an ill-defined idea like RISK with a precise measure.

VaR is a good measure of one aspect of RISK. It measures volatility of value under the assumption that the future will be like the recent past. If everyone understands that this is what VaR does, then there is no problem.

Unfortunately, some people thought that VaR measured RISK, period. What I mean is that they were led to believe that VaR was the same as RISK. In that context, VaR (and any other single metric) is a failure. VaR is not the same as RISK.

That is because RISK has many aspects. Here is one partial list of the aspects of risk:

Type A Risk – Short Term Volatility of cash flows in 1 year
Type B Risk – Short Term Tail Risk of cash flows in 1 year
Type C Risk – Uncertainty Risk (also known as parameter risk)
Type D Risk – Inexperience Risk relative to full multiple market cycles
Type E Risk – Correlation to a top 10
Type F Risk – Market value volatility in 1 year
Type G Risk – Execution Risk regarding difficulty of controlling operational losses
Type H Risk – Long Term Volatility of cash flows over 5 or more years
Type J Risk – Long Term Tail Risk of cash flows over 5 years or more
Type K Risk – Pricing Risk (cycle risk)
Type L Risk – Market Liquidity Risk
Type M Risk – Instability Risk regarding the degree that the risk parameters are stable
(excerpted from Risk & Light)

VaR measures Type F risk only.

