Archive for the ‘risk assessment’ category

Can’t skip measuring Risk and still call it ERM

January 15, 2014

Many insurers are pushing ahead with ERM at the urging of new executives, boards, rating agencies and regulators.  But few of the firms that resisted ERM for many years have any history of measuring most of their risks.

But ERM is not one of those liberal arts like the study of English Literature.  In Eng Lit, you may set up literature classification schemes, read materials, organize discussion groups and write papers.  ERM can have those elements, but the heart of ERM is Risk Measurement: comparing risk measures to expectations and to prior period measures.  If a company does not have Risk Measurement, then it does not have ERM.

That is the tough side of this discussion.  The other side is that there are many ways to measure risks, and most companies can implement several of them for each risk without the need for massive projects.

Here are a few of those measures, listed in order of increasing sophistication (a small code sketch of measures 4 and 5 follows the list):

1. Risk Guesses (AKA Qualitative Risk Assessment)
– Guesses, feelings
– Behavioral Economics Biases
2. Key Risk Indicators (KRI)
– Risk is likely to be similar to …
3. Standard Factors
– AM Best, S&P, RBC
4. Historical Analysis
– Worst loss in past 10 years as a percentage of base (premiums, assets)
5. Stress Tests
– Potential loss from historical or hypothetical scenario
6. Risk Models
– If the future is like the past …
– Or if the future is different from the past in this way …
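
To make the less familiar items concrete, here is a minimal sketch of measures 4 and 5 in Python.  All of the figures (the loss history, the scenario shocks, the premium base) are invented for illustration; a real implementation would pull these from company data.

```python
# Hypothetical sketch of measures 4 and 5 above; all figures are invented.

premiums = 100.0  # base for expressing losses

# 4. Historical analysis: worst loss in the past 10 years as a percentage of base
losses = [3.2, 7.5, 2.1, 12.8, 4.4, 6.0, 9.3, 1.7, 5.5, 8.1]
worst_pct = max(losses) / premiums
print(f"Worst loss in past 10 years: {worst_pct:.1%} of premium")

# 5. Stress test: potential loss from a hypothetical scenario, here a 30%
# equity drop combined with a 2-point credit spread widening
equity_holding = 40.0
bond_holding = 60.0
spread_duration = 5.0
stress_loss = equity_holding * 0.30 + bond_holding * 0.02 * spread_duration
print(f"Scenario loss: {stress_loss:.1f} ({stress_loss / premiums:.1%} of premium)")
```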

More discussion of Risk Measurement on WillisWire:

Part 2 of a 14 part series

And on RISKVIEWS:

Risk Assessment - 55 other posts relating to risk measurement and risk assessment.

Provisioning – Packing for your trip into the future

April 26, 2013

There are two levels of provisioning for an insurer: reserves and risk capital.  The two are intimately related.  In fact, in some cases, insurers will spend more time and care in determining the correct number for the sum of the two, called Total Asset Requirement (TAR) by some, than in determining either piece separately.

Insurers need a realistic picture of future obligations long before the future is completely clear.  This is a key part of the feedback mechanism.  The results of the first year of business are the most important indication of business success for non-life insurance.  That view of results depends largely upon the integrity of the reserve value.  This feedback information affects performance evaluation, pricing for the next year, risk analysis, capital adequacy analysis and capital allocation.

The other part of provisioning is risk capital.  Insurers also need to hold capital for less likely swings in potential losses.  This risk capital is the buffer that provides for the payment of policyholder claims in a very high proportion of imagined circumstances.  The insurance marketplace, the rating agencies and insurance regulatory bodies all insist that the insurer holds a high buffer for this purpose.

In addition, many valuable insights into the insurance business can be gained from careful analysis of the data that is input to the provisioning process for both levels of provisioning.

However, reserves are most often set to be consistent with considerations (premiums).  Swings of adequate and inadequate pricing are tightly linked to swings in reserves.  When reserves are set optimistically, capital levels may reflect the same bias.  This means that inadequate prices can ripple through to cause deferred recognition of actual claims costs as well as under provisioning at both levels.  This is more evidence that the consideration (price) is key to risk management.

There is often pressure for small and smooth changes to reserves and risk capital, but information flows and analysis produce jumps in insight, both about expectations for emerging losses and about methodologies for estimating reserves and capital.  The business pressures may threaten to overwhelm the best analysis efforts here.  The analytical team that prepares the reserves and capital estimates needs to be aware of, and be prepared for, this eventuality.  One good way to prepare is to make sure that management and the board are fully aware of the weaknesses of the modeling approach and so are more prepared for the inevitable model corrections.

Insurers need to have a validation process to make sure that the sum of reserves and capital is an amount that provides the degree of security that is sought.  Modelers must allow for variations in risk environment as well as the impact of risk profile, financial security and risk management systems of the insurer in considering the risk capital amount.  Changes in any of those elements may cause abrupt shifts in the amount of capital needed.

The Total Asset Requirement should be determined without regard to where the reserves have been set so that risk capital level does not double up on redundancy or implicitly affirm inadequacy of reserves.
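
As a rough illustration of that point, here is a minimal sketch in Python.  The loss distribution, the 99.5% security standard and the reserve basis are all assumptions chosen for the example, not a prescribed calibration.

```python
import numpy as np

# Hypothetical sketch: the loss distribution, the 99.5% security standard and
# the best-estimate reserve basis are illustrative assumptions.
rng = np.random.default_rng(0)
ultimate_losses = rng.lognormal(mean=4.0, sigma=0.5, size=100_000)

tar = np.percentile(ultimate_losses, 99.5)  # set from the security standard alone
reserves = np.mean(ultimate_losses)         # best-estimate provision
risk_capital = tar - reserves               # capital is whatever remains of TAR

print(f"TAR {tar:.1f} = reserves {reserves:.1f} + risk capital {risk_capital:.1f}")

# If reserves are set optimistically (say 10% light), capital computed as
# TAR minus reserves makes up the shortfall rather than doubling up on
# redundancy or implicitly affirming the inadequacy.
optimistic_reserves = 0.9 * reserves
print(f"Risk capital with light reserves: {tar - optimistic_reserves:.1f}")
```

Because capital is computed as the difference between an independently determined TAR and the booked reserves, an optimistic reserve does not reduce the total asset requirement; it only shifts more of the requirement into capital.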

The capital determined through the Provisioning process will usually be the key element of the Risk Portfolio process.  That means that accuracy in the subtotals within the models is just as important as the overall total.  The common practice of tolerating offsetting inadequacies in the models may totally distort company strategic decision making.

This is one of the seven ERM Principles for Insurers.

Controlling with a Cycle

April 3, 2013

(Image: Helsinki city bikes)

No, not that kind of cycle… This kind:

(Image: Risk Control Cycle)

This is a Risk Control Cycle.  It includes Thinking/Observing steps and Action Steps.  The only reason a sane organization would spend the time on the Assessing, Planning and Monitoring steps is so that they could be more effective with the Risk Taking, Mitigating and Responding steps.

A process capable of limiting losses can be referred to as a complete risk control process, which would usually include the following:

  • Identification of risks—with a process that seeks to find all risks inherent in an insurance product, investment instrument, or other situation, rather than simply automatically targeting “the usual suspects.”
  • Assess Risks – This is both the beginning and the end of the cycle.  As the end, this step is looking back and determining whether your judgment about the risk and your ability to select and manage risks is as good as you thought that it would be.  As the beginning, you look forward to form a new opinion about the prospects for risk and rewards for the next year.  For newly identified risks/opportunities this is the due diligence phase.
  • Plan Risk Taking and Risk Management – Based upon the risk assessment, management will make plans for how much of each risk the organization will accept and then how much of that risk will be transferred, offset and retained.  These plans will also include the determination of limits.
  • Take Risks – Organizations will often have two teams of individuals involved in risk taking.  One set will identify potential opportunities based upon broad guidelines that are either carried over from a prior year or modified by the accepted risk plan (Sales).  The other set will do a more detailed review of the acceptability of the risk and often the appropriate price for accepting the risk (Underwriting).
  • Measuring and monitoring of risk—with metrics that are adapted to the complexity and the characteristics of the risk as well as Regular Reporting of Positions versus Limits/Checkpoints— where the timing needed to be effective depends on the volatility of the risk and the rate at which the insurer changes their risk positions. Insurers may report at a granular level that supports all specific decision making and actions on a regular schedule.
  • Regular risk assessment and dissemination of risk positions and loss experience—with a standard set of risk and loss metrics and distribution of risk position reports, with clear attention from persons with significant standing and authority in the organization.
  • Risk limits and standards—directly linked to objectives. Terminology varies widely, but many insurers have both hard “Limits” that they seek to never exceed and softer “Checkpoints” that are sometimes exceeded. Limits will often be extended to individuals within the organization, with escalating authority for individuals higher in the organizational hierarchy. (A positions-versus-limits sketch follows this list.)
  • Response – Enforcement of limits and policing of checkpoints—with documented consequences for limit breaches and standard resolution processes for exceeding checkpoints. The response toolkit includes:
    – Risk avoidance processes for risks where the insurer has zero tolerance. These processes will ensure that constant management attention is not needed to assure compliance, although occasional assessment of compliance is often practiced.
    – Loss control processes to reduce the avoidable excess frequency and severity of claims and to assure that when losses occur, the extent of the losses is contained to the extent possible.
    – Risk transfer processes, which are used when an insurer takes more risk than it wishes to retain and where there is a third party who can take the risk at a price that is sensible after accounting for any counterparty risk that is created by the risk transfer process.
    – Risk offset processes, which are used when insurer risks can be offset by taking additional risks that are found to have opposite characteristics. These processes usually entail the potential for basis risk, because the offset is not exact at any time or because the degree of offset varies as time passes and conditions change; this is overcome in whole or in part by frequent adjustment to the offsetting positions.
    – Risk diversification, which can be used when risks can be pooled with other risks with relatively low correlation.
    – Risk costing / pricing, which involves maintaining the capability to develop appropriate views of the cost of holding a risk in terms of expected losses and provision for risk. This view will influence the risks that an insurer will take and the provisioning for losses from risks that the insurer has taken (reserves). This applies to all risks but especially to insurance risk management.
    – Coordination of insurance profit/loss analysis with pricing, loss control (claims), underwriting (risk selection), risk costing, and reserving, so that all parties within the insurer are aware of the relationship between emerging experience of the risks that the insurer has chosen to retain and the expectations that the insurer held when it chose to write and retain the risks.
  • Assess Risks – and the cycle starts again.
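
A minimal sketch of the “positions versus Limits/Checkpoints” reporting described in the cycle above, in Python.  The risk categories, positions, checkpoint and limit values, and the escalation wording are all invented for illustration.

```python
# Hypothetical positions-versus-limits report; names and numbers are invented.

positions = {"equity": 118.0, "credit": 73.0, "cat": 45.0}
checkpoints = {"equity": 110.0, "credit": 80.0, "cat": 40.0}  # soft, sometimes exceeded
limits = {"equity": 125.0, "credit": 95.0, "cat": 50.0}       # hard, never to be exceeded

for risk, position in positions.items():
    if position > limits[risk]:
        status = "LIMIT BREACH - documented consequences, escalate"
    elif position > checkpoints[risk]:
        status = "checkpoint exceeded - standard resolution process"
    else:
        status = "ok"
    print(f"{risk:8} position {position:6.1f} vs checkpoint {checkpoints[risk]:6.1f}"
          f" / limit {limits[risk]:6.1f}: {status}")
```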

This is one of the seven ERM Principles for Insurers.

Risk Evaluation by Actuaries

October 22, 2012

The US Actuarial Standards Board has promulgated a new Actuarial Standard of Practice, No. 46, Risk Evaluation in Enterprise Risk Management.

ASB Adopts New ASOP No. 46

At its September meeting, the ASB adopted ASOP No. 46, Risk Evaluation in Enterprise Risk Management. The ASOP provides guidance to actuaries when performing professional services with respect to risk evaluation systems used for the purposes of enterprise risk management, including designing, developing, implementing, using, maintaining, and reviewing those systems. An ASOP providing guidance for activities related to risk treatment is being addressed in a proposed ASOP titled, Risk Treatment in Enterprise Risk Management, which will be released in late 2012. The topics of these two standards were chosen because they cover the most common actuarial services performed within risk management systems of organizations. ASOP No. 46 will be effective May 1, 2013 and can be viewed under the tab, “Current Actuarial Standards of Practice.”

 

Must have more than one View of Risk

May 14, 2012

Riskviews finds the headline Value-at-Risk model masked JP Morgan $2 bln loss to be totally appalling. JP Morgan is of course famous for having been one of the first large banks to use VaR for daily risk assessment.

During the late 1980’s, JP Morgan developed a firm-wide VaR system. This modeled several hundred risk factors. A covariance matrix was updated quarterly from historical data. Each day, trading units would report by e-mail their positions’ deltas with respect to each of the risk factors. These were aggregated to express the combined portfolio’s value as a linear polynomial of the risk factors. From this, the standard deviation of portfolio value was calculated. Various VaR metrics were employed. One of these was one-day 95% USD VaR, which was calculated using an assumption that the portfolio’s value was normally distributed.

With this VaR measure, JP Morgan replaced a cumbersome system of notional market risk limits with a simple system of VaR limits. Starting in 1990, VaR numbers were combined with P&L’s in a report for each day’s 4:15 PM Treasury meeting in New York. Those reports, with comments from the Treasury group, were forwarded to Chairman Weatherstone.

from History of Value-at-Risk: 1922-1998 by Glyn Holton
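
A minimal sketch of the delta-normal calculation Holton describes, in Python.  The three risk factors, the deltas and the covariance matrix are invented for illustration; the real system used several hundred factors and a quarterly-updated covariance matrix.

```python
import numpy as np

# Hypothetical delta-normal VaR: three factors stand in for the several
# hundred in the real system; deltas and covariances are invented.
deltas = np.array([2.0e6, -1.5e6, 0.8e6])   # $ change per unit move in each factor
cov = np.array([[1.0e-4, 2.0e-5, 1.0e-5],    # covariance of daily factor moves
                [2.0e-5, 4.0e-4, 5.0e-5],
                [1.0e-5, 5.0e-5, 2.5e-4]])

portfolio_sd = np.sqrt(deltas @ cov @ deltas)  # linear aggregation of deltas
z_95 = 1.645                                   # one-sided 95% normal quantile
var_95 = z_95 * portfolio_sd                   # one-day 95% VaR, normality assumed
print(f"One-day 95% VaR: ${var_95:,.0f}")
```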

JP Morgan went on to spin off a group, RiskMetrics, which sold the capability to do VaR calculations to all comers.

Riskviews had always assumed that JP Morgan had felt safe selling the VaR technology because they had moved on to something better.

But the story given about the $2 billion loss suggests that they were flubbing the measurement of their exposure because of a new risk measurement system.

Riskviews would suggest two ideas to JP Morgan:

  1. A firm that makes its money taking risks should never rely upon a single measure of risk.  See Risk and Light and the CARE Report for further information.
  2. The folks responsible for risk evaluation need to apply some serious standards to their work.  Take a look at the actuarial profession’s first attempt at standards for professionals performing risk evaluation in ERM programs.  This proposed standard suggests many things, but the most important idea is that a professional who is evaluating risk should look at three things: the risk taking capacity of the firm, the risk environment and the risk management program of the firm.

These are fundamental principles of risk management.  Not the only ones, but principles that speak to the problem that JP Morgan claims to have.

Risk Evaluation Standard

April 4, 2012

The US Actuarial Standards Board (ASB) recently approved proposed ASOP Risk Evaluation in Enterprise Risk Management as an exposure draft.

In March 2011, discussion drafts on the topics of risk evaluation and risk treatment were issued. The ERM Task Force of the ASB reviewed the comments received and based on those comments, began work on the development of exposure drafts in those areas.

The proposed ASOP on risk evaluation provides guidance to actuaries when performing professional services with respect to risk evaluation systems, including designing, implementing, using, and reviewing those systems. The comment deadline for the exposure draft is June 30, 2012. An exposure draft on risk treatment is scheduled to be reviewed in summer 2012.

ASB Approves ERM Risk Evaluation Exposure Draft: Risk Evaluation in Enterprise Risk Management.  Comment deadline: June 30, 2012

Three Ideas of Risk Management

March 12, 2012

In the book Streetlights and Shadows, Gary Klein describes three sorts of risk management.

  • Prioritize and Reduce – the system used by safety and (insurance) risk managers.  In this view of risk management, there is a five-step process:
    1. Identify risks
    2. Assess and prioritize risks
    3. Develop plans to mitigate the highest priority risks
    4. Implement plans
    5. Track effectiveness of mitigations and adapt plans as necessary
  • Calculate and Decide – the system used by investors (and insurers): develop multi-scenario probability trees of potential outcomes and select the options with the best risk reward relationship.
  • Anticipate and Adapt – the system preferred by CEOs.  For each potential course of action, the worst case scenario is assessed.  If the worst case is within acceptable limits, then the action will be considered for its benefits.  If the worst case is outside of acceptable limits, then consideration is given to managing the risk to reduce or eliminate the adverse outcomes.  If those outcomes cannot be brought within acceptable limits, then the option is rejected.  (This decision rule is sketched in code after the list.)
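
Here is a minimal sketch of that third decision rule in Python.  The options, profits, worst case figures and the tolerance are all invented; "mitigated_worst_case" stands in for whatever management action (reinsurance, hedging) reduces the adverse outcome.

```python
# Hypothetical sketch of the Anticipate and Adapt rule; all figures invented.
# "mitigated_worst_case" stands in for management action such as reinsurance.

worst_case_tolerance = 50.0

options = [
    {"name": "grow casualty book", "planned_profit": 12.0, "worst_case": 35.0},
    {"name": "new cat program",    "planned_profit": 20.0, "worst_case": 90.0,
     "mitigated_worst_case": 45.0},
    {"name": "equity overweight",  "planned_profit": 15.0, "worst_case": 120.0},
]

for opt in options:
    worst = opt["worst_case"]
    if worst > worst_case_tolerance:
        # look for management actions that reduce or eliminate the adverse outcome
        worst = opt.get("mitigated_worst_case", worst)
    if worst <= worst_case_tolerance:
        print(f"{opt['name']}: consider on its merits "
              f"(planned profit {opt['planned_profit']}, worst case {worst})")
    else:
        print(f"{opt['name']}: rejected (worst case {worst} exceeds tolerance)")
```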

Most ERM systems are set up to support the first two ideas of risk management.

But if it is true that most CEO’s favor the Anticipate and Adapt approach, a total mismatch between what the CEO is thinking and what the ERM system is doing emerges.

It would not be difficult to develop an ERM system that matches with the Anticipate and Adapt approach, but most risk managers are not even thinking of that possibility.

Under that system of risk management, the task would be to look at a pair of values for every major activity: the planned profit and the worst case loss.  During the planning stage, the Risk Manager would be tasked to find ways to reliably reduce the worst case losses of potential plans.  Once plans are chosen, the Risk Manager would be responsible for making sure that losses from the planned actions do not exceed the worst case estimates.

Thinking of risk management in this manner allows us to understand that the worst possible outcome for a risk manager would not be a loss from one of the planned activities of the firm; it would be a loss that is significantly in excess of the maximum loss that was contemplated at the time of the plan.  The excessive loss would be a signal that the Risk area is not a reliable provider of risk information for planning, decision making or execution of plans, or all three.

This is an interesting line of reasoning and may be a better explanation for the way that risk managers are treated within organizations and especially why risk managers are sometimes fired after losses.  They may be losing their jobs, not because there is a loss, but because they were unable to warn management of the potential size of the loss.  It could well be that management would have made different plans if they had known in advance the potential magnitude of losses from one of their choices.

Or at least, that is the story that they believe about themselves after the excessive loss.

This suggests that risk managers need to be particularly careful with risk evaluations.  Klein also mentions that executives are usually not particularly impressed with evaluations of frequency.  They most often want to focus on severity.

So whatever is believed about frequency, the risk manager needs to be careful with the assessment of worst case losses.

Actuarial Risk Management Volunteer Opportunity

August 11, 2011

Actuarial Review of Enterprise Risk Management Practices –

A Working Group formed by The Enterprise and Financial Risks Committee of the IAA has started working on a white paper to be titled: “Actuarial Review of Enterprise Risk Management Practices”.  We are seeking volunteers to assist with writing, editing and research.

This project would set out a systematic process for actuaries to use when evaluating risk management practices.  Actuaries in Australia are now called upon to certify risk management practices of insurers, and the initial reaction of some actuaries was that they were somewhat unprepared to do that.  This project would produce a document that could be used by actuaries and could be the basis for actuaries to propose to take on a similar role in other parts of the world.  Recent events have shown that otherwise comparable businesses can differ greatly in the effectiveness of their risk management practices. Many of these differences appear to be qualitative in character and centered on management processes. Actuaries can take a role to offer opinion on process quality and on possible avenues for improvement. More specifically, recent events seem likely to increase emphasis on what the supervisory community calls Pillar 2 of prudential supervision – the review of risk and solvency governance. In Solvency II in Europe, a hot topic is the envisaged requirement for an ‘Own Risk and Solvency Assessment’ by firms and many are keen to see actuaries have a significant role in advising on this. The International Association of Insurance Supervisors has taken up the ORSA requirement as an Insurance Core Principle and encourages all regulators to adopt it as part of their regulatory structure.  It seems an opportune time to pool knowledge.

The plan is to write the paper over the next six months and to spend another six months on comment & exposure prior to finalization.  If we get enough volunteers, the workload for each will be small.  This project is being performed on a wiki, which allows many people to contribute from all over the world.  Each volunteer can make as large or as small a contribution as their experience and energy allows.  People with low experience but high energy are welcome, as well as people with high experience.

A similar working group recently completed a white paper titled the CARE report (http://www.actuaries.org/CTTEES_FINRISKS/Documents/CARE_EN.pdf), so you can see what the product of this sort of effort looks like.

Further information is available from Mei Dong, or David Ingram

==============================================================

David Ingram, CERA, FRM, PRM
+1 212 915 8039
(daveingram@optonline.net )

FROM 2009

ERM BOOKS – Ongoing Project – Volunteers still needed

A small amount of development work has been done to create the framework for a global resource for ERM Readings and References.

http://ermbooks.wordpress.com

Volunteers are needed to help to make this into a real resource.  Over 200 books, articles and papers have been identified as possible resources (http://ermbooks.wordpress.com/lists-of-books/).

Posts to this website give a one paragraph summary of a resource and identify it within several classification categories.  15 examples of posts with descriptions and categorizations can be found on the site.

Volunteers are needed to (a) identify additional resources and (b) write one paragraph descriptions and identify classifications.

If possible, we are hoping that this site will ultimately contain information on the reading materials for all of the global CERA educational programs, so help from students and/or people who are developing CERA reading lists is solicited.

Participants will be given author access to the ermbooks site.  Registration with WordPress at www.wordpress.com is needed prior to getting that access.
Please contact Dave Ingram if you are interested in helping with this project.


You Must Abandon All Presumptions

August 5, 2011

If you really want to have Enterprise Risk Management, then you must at all times abandon all presumptions.  You must make sure that all of the things needed to successfully manage risks are being done, and done now, not just sometime in the distant past.

A pilot of an aircraft will spend over an hour checking things directly and reviewing other people’s checks.  The pilot will review:

  • the route of flight
  • weather at the origin, destination, and en route
  • the mechanical status of the airplane
  • mechanical issues that may have been improperly logged.
  • the items that may have been fixed just prior to the flight, to make certain that those systems work
  • the flight computer
  • the outside of the airplane for obvious defects that may have been overlooked
  • the paperwork
  • the fuel load
  • the takeoff and landing weights to make sure that they are within limits for the flight

Most of us do not do anything like this when we get into our cars to drive.  Is this overkill?  You decide.

When you are expecting to fly somewhere and there is a last minute delay because of something that seems like it should have already been taken care of, that is likely because the pilot found something that someone might normally PRESUME was OK but was not.

Personally, as someone who takes lots and lots of flights, RISKVIEWS thinks that this is a good process.  One that RISKVIEWS would recommend to be used by risk managers.

THE NO PRESUMPTION APPROACH TO RISK MANAGEMENT

Here are the things that the Pilot of the ERM program needs to check before taking off on each flight.

1.  Risks need to be diversified.  There is no risk management if a firm is just taking one big bet.  (A concentration check is sketched in code after item 6.)

2.  Firm needs to be sure of the quality of the risks that they take.  This implies that multiple ways of evaluating risks are needed to maintain quality, or to be aware of changes in quality.  There is no single source of information about quality that is adequate.

3.  A control cycle is needed regarding the amount of risk taken.  This implies measurements, appetites, limits, treatment actions, reporting and feedback.

4.  The pricing of the risks needs to be adequate, at least for traded risks, if you are in the risk business as insurers are.  For risks that are not traded, the benefit of the risk needs to exceed the cost in terms of potential losses.

5.  The firm needs to manage its portfolio of risks so that it can take advantage of the opportunities that are often associated with its risks.  This involves risk reward management.

6.  The firm needs to provision for its retained risks appropriately, in terms of set asides (reserves) for expected losses and capital for excess losses.
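
For item 1, one simple check is a concentration measure such as a Herfindahl index over the firm's risk exposures.  The exposure shares and the 0.4 flag threshold below are illustrative choices, not standards.

```python
# Hypothetical concentration check for item 1; shares and threshold invented.

def herfindahl(exposures):
    """Sum of squared shares: 1/n for equal weights, 1.0 for one big bet."""
    total = sum(exposures.values())
    return sum((x / total) ** 2 for x in exposures.values())

diversified = {"equity": 25, "credit": 25, "insurance": 30, "rates": 20}
one_big_bet = {"equity": 90, "credit": 5, "insurance": 3, "rates": 2}

for name, book in [("diversified", diversified), ("one big bet", one_big_bet)]:
    h = herfindahl(book)
    flag = "ok" if h < 0.4 else "CONCENTRATION - one big bet, no risk management"
    print(f"{name}: HHI = {h:.2f} -> {flag}")
```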

A firm ultimately needs all six of these things.  Things like a CRO, or risk committees or board involvement are not on this list because those are ways to get these six things.

The Risk Manager needs to take a NO PRESUMPTIONS approach to checking these things.  Many of the problems of the financial crisis can be traced back to presumptions that one or more of these six things were true without any attempt to verify.

Cascading Failures

July 27, 2011

Most of the risks that concern us exist in systems. In massively complex systems.

However, our approach to risk assessment is often to isolate certain risk/loss events and treat them in isolation, at the margin.  That works fine when the events are actually marginal to the system, but it may put us in a worse situation if the event triggers a cascading failure.

Within a system, cycles are found.  Cycles that can ebb and flow over a long time.  Cycles that are self-dampening, and cycles that are self-reinforcing.

The classic epidemiological disease model is an example of a self-dampening system.  The dampening is caused by the fact that disease spread is self-limiting.  Any person will have only so many contacts with other people that might be sufficient to spread a disease were that person infected.  In most disease situations, the spread of the disease starts to wane when enough people have already been exposed to the disease and developed immunity, so that a significant fraction of the contacts that a newly infected and contagious person might have are already immune.  This produces the “S” curve of a disease. See  The Dynamics of SARS: Plotting the Risk of Epidemic Disasters.
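
A minimal SIR-style sketch of that self-dampening dynamic in Python.  The contact and recovery rates are invented; the point is only that new infections slow as the susceptible pool shrinks, tracing the “S” curve.

```python
# Minimal SIR-style sketch; contact and recovery rates are invented.
s, i, r = 0.999, 0.001, 0.0      # susceptible, infected, recovered fractions
beta, gamma, dt = 0.4, 0.1, 1.0  # daily contact rate, recovery rate, time step

for day in range(0, 120, 20):
    for _ in range(20):
        new_infections = beta * s * i * dt  # slows as s shrinks: self-dampening
        recoveries = gamma * i * dt
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
    print(f"day {day + 20:3}: cumulative infected {i + r:.1%}")  # traces the S curve
```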

The behavior of financial markets in a large loss situation is a self-reinforcing cycle.  Losses cause institutional investors to lose capital and, because of their gearing, the loss of capital triggers the need to reduce exposures, which means selling into a falling market, resulting in more losses.  Often the only cure is to close the market and hope that some exogenous event changes something.
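
And a minimal sketch of the self-reinforcing version, assuming a leverage cap and a simple linear price impact from forced sales; all parameters are invented for illustration.

```python
# Hypothetical fire-sale spiral: a 20x-geared investor marks a loss, sells to
# restore its leverage cap, and the selling marks down what remains.
assets, equity = 1000.0, 50.0
max_leverage = 20.0
impact = 0.0001            # fractional price drop per unit of forced sales

loss = 0.02 * assets       # initial exogenous 2% loss
assets -= loss
equity -= loss

for rnd in range(1, 8):
    excess = assets - max_leverage * max(equity, 0.0)
    if excess <= 0:
        break
    sales = excess                       # sell just enough to restore the cap
    assets -= sales
    markdown = impact * sales * assets   # selling pushes prices down further...
    assets -= markdown
    equity -= markdown                   # ...and every markdown comes out of equity
    print(f"round {rnd}: sold {sales:.0f}, markdown {markdown:.1f}, equity {equity:.2f}")
```

Even with this modest price impact, the spiral grinds the starting equity of 50 down to under 1 in a few rounds of forced selling.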

These cascading situations are why the “tail” events are so terribly large compared to the ordinary events.

Each system has its own tipping point.  Your risk appetite should reflect how much you know about the tipping point of the system that each of your risks exists in.

And if you do not know the tipping point…

Echo Chamber Risk Models

June 12, 2011

The dilemma is a classic – in order for a risk model to be credible, it must be an Echo Chamber – it must reflect the starting prejudices of management. But to be useful – and worth the time and effort of building it – it must provide some insights that management did not have before building the model.

The first thing that may be needed is to realize that the big risk model cannot be the only tool for risk management.  The Big Risk Model, also known as the Economic Capital Model, is NOT the Swiss Army Knife of risk management.  This Echo Chamber issue is only one reason why.

It is actually a good thing that the risk model reflects the beliefs of management and therefore gets credibility.  The model can then perform the one function that it is actually suited for.  That is to facilitate the process of looking at all of the risks of the firm on the same basis and to provide information about how those risks add up to make up the total risk of the firm.

That is very, very valuable to a risk management program that strives to be Enterprise-wide in scope.  The various risks of the firm can then be compared one to another.  The aggregation of risk can be explored.

All based on the views of management about the underlying characteristics of the risks. That functionality allows a quantum leap in the ability to understand and consistently manage the risks of the firm.

Before creating this capability, each of a firm’s risks was managed totally separately.  Some risks were highly restricted and others were allowed to grow in a mostly uncontrolled fashion.  With a credible risk model, management needs to face the inconsistencies embedded in the historical risk management of the firm.

Some firms look into this mirror and see their problems and immediately make plans to rationalize their risk profile.  Others lash out at the model in a shoot the messenger fashion.  A few will claim that they are running an ERM program, but the new information about risk will result in absolutely no change in risk profile.

It is difficult to imagine that a firm that had no clear idea of aggregate risk and the relative size of the components thereof would find absolutely nothing that needs adjustment.  Often the cause is a lack of political will within the firm to act upon the new risk knowledge.

For example, when major insurers started to create the economic capital models in the early part of this century, many found that their equity risk exposure was very large compared to their other risks and to their business strategy of being an insurer rather than an equity mutual fund.  Some firms used this new information to help guide a divestiture of equity risk.  Others delayed and delayed even while saying that they had too much equity risk.  Those firms were politically unable to use the new risk information to reduce the equity position of the group.  More than one major business segment had heavy equity positions and they could not choose which to tell to reduce.  They also rejected the idea of reducing exposure through hedging, perhaps because there was a belief at the top of the firm that the extra return of equities was worth the extra risk.

This situation is not at all unique to equity risk.   Other firms had the same experience with Catastrophe risks, interest rate risks and Casualty risk concentrations.

A risk model that was not an Echo Chamber model would not be any use at all in the situations above.  The differences between management beliefs and the model assumptions of a non Echo Chamber model would result in it being left out of the discussion entirely.

Other methods, such as stress tests can be used to bring in alternate views of the risks.

So an Echo Chamber is useful, but only if you are willing to listen to what you are saying.

Getting Independence Right

May 11, 2011

Independence of the risk function is very important.  But often, the wrong part of the risk function is made independent.

It is the RISK MEASUREMENT AND REPORTING part of the risk function that needs to be independent.  If this part of the risk function is not independent of the risk takers, then you have the Nick Leeson risk – the risk that once you start to lose money you will delay reporting the bad news to give yourself a little more time to earn back the losses – or the Jérôme Kerviel risk – that you will simply understate the risk of what you are doing to enhance return on risk calculations and avoid pesky risk limits.

When Risk Reporting is independent, then the risk reports are much less likely to be fudged in the favor of the risk takers.  They are much more likely to simply and factually report the risk positions.  Then the risk management system either reacts to the risk information or not, but at least it has the correct information to make the decision on whether to act or not.

Many discussions of risk management suggest that there needs to be independence between the risk taking and the entire risk management function.  This is a model for risk disaster, but a model that is very common in banking.  Under this type of independence there will be a steady war, a war that the risk management folks are likely to lose.  The risk takers are in charge of making money and the independent risk management folks are in charge of preventing that.  The risk takers, since they bring in the bacon, will always be much more popular with management than the risk managers, who add to costs and detract from revenue.

Instead, the actual risk management needs to be totally integrated within the risk taking function.  This will be resisted by any risk takers who have had a free ride to date.  But that way, the risk takers can decide what would be the least destructive way to stay within their risk limits.  In a system of independent risk management, the risk managers are responsible for monitoring limit breaches and taking actions to unwind over-limit situations.  In many cases, there are quite heated arguments around those unwinding transactions.

Under the reporting-only independence model, the risk taking area would have responsibility for taking the actions needed to stay within limits and for resolving breaches of limits.  (Most often those breaches are not due to deliberate violations of limits, but to market movements that cause breaches to grow out of previously hedged positions.)

Ultimately, it would be preferable if the risk taking area would totally own their limits and the process to stay within those limits.

However, if the risk measurement and reporting is independent, then the limit breaches are reported, and the decision about what to do about any risk taking area that is not owning its limits is a top management decision, rather than a risk manager decision that sometimes gets countermanded by top management.

What’s Next?

March 25, 2011

Turbulent Times are Next.

At BusinessInsider.com, a feature from Guillermo Felices tells of 8 shocks that are about to slam the global economy.

#1 Higher Food Prices in Emerging Markets

#2 Higher Interest Rates and Tighter Money in Emerging Markets

#3 Political Crises in the Middle East

#4 Surging Oil Prices

#5 An Increase in Interest Rates in Developed Markets

#6 The End of QE2

#7 Fiscal Cuts and Sovereign Debt Crises

#8 The Japanese Disaster

How should ideas like these impact ERM systems?  Is it at all reasonable to say that they should not? Definitely not.

These potential shocks illustrate the need for the ERM system to be reflexive.  The system needs to react to changes in the risk environment.  That would mean that it needs to reflect differences in the risk environment in three possible ways:

  1. In the calibration of the risk model.  Model assumptions can be adjusted to reflect the potential near term impact of the shocks.  Some of the shocks are certain and could be thought to impact on expected economic activity (Japanese disaster) but have a range of possible consequences (changing volatility).  Other shocks, which are much less certain (end of QE2 – because there could still be a QE3) may be difficult to work into model assumptions.
  2. With Stress and Scenario Tests – each of these shocks, as well as combinations of the shocks, could be stress or scenario tests.  Riskviews suggests that developing a handful of fully developed scenarios with 3 or more of these shocks in each would be the most useful (a sketch follows the list).
  3. In the choices of Risk Appetite.  The information and stress/scenario tests should lead to a serious reexamination of risk appetite.  There are several reasonable reactions: to simply reduce risk appetite in total, to selectively reduce risk appetite, to increase efforts to diversify risks, or to plan to aggressively take on more risk as some risks are found to have much higher reward.
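
A minimal sketch of point 2, bundling three or more of the eight shocks into named scenarios.  The impact estimates are invented, and the simple addition ignores the interactions between shocks that a fully developed scenario narrative would capture.

```python
# Hypothetical scenario bundles built from the eight shocks; impacts (as % of
# earnings) are invented, and simple addition ignores interactions between
# shocks that a fully developed scenario would capture.

shock_impacts = {
    "EM food prices": -2.0, "EM tightening": -3.0, "Mideast crisis": -1.5,
    "oil surge": -4.0, "DM rate rises": -3.5, "end of QE2": -2.5,
    "fiscal cuts": -2.0, "Japan disaster": -1.0,
}

scenarios = {
    "stagflation": ["oil surge", "EM food prices", "DM rate rises"],
    "policy shock": ["end of QE2", "DM rate rises", "fiscal cuts"],
    "geopolitical": ["Mideast crisis", "oil surge", "Japan disaster", "EM tightening"],
}

for name, shocks in scenarios.items():
    total = sum(shock_impacts[s] for s in shocks)
    print(f"{name}: {total:+.1f}% of earnings from {len(shocks)} combined shocks")
```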

The last strategy mentioned above (aggressively take on more risk) might not be thought of by most to be a risk management strategy.  But think of it this way, the strategy could be stated as an increase in the minimum target reward for risk.  Since things are expected to be riskier, the firm decides that it must get paid more for risk taking, staying away from lower paid risks.  This actually makes quite a bit MORE sense than taking the same risks, expecting the same reward for risks and just taking less risk, which might be the most common strategy selected.

The final consideration is compensation.  How should the firm be paying people for their performance in a riskier environment?  How should the increase in market risk premium be treated?

See Risk adjusted performance measures for starters.

More discussion on a future post.

Where to Draw the Line

March 22, 2011

“The unprecedented scale of the earthquake and tsunami that struck Japan, frankly speaking, were among many things that happened that had not been anticipated under our disaster management contingency plans.”  Japanese Chief Cabinet Secretary Yukio Edano.

In the past 30 days, there have been 10 earthquakes of magnitude 6 or higher.  In the past 100 years, there have been over 80 earthquakes of magnitude 8.0 or greater.  The Japanese are reputed to be the most prepared for earthquakes.  And also to experience the most earthquakes of any settled region on the globe.  By some counts, Japan experiences 10% of all earthquakes that are on land and 20% of all severe earthquakes.

But where should they, or anyone making risk management decisions, draw the line in preparation?

In other words, what amount of safety are you willing to pay for in advance, and for what magnitude of loss event are you willing to live with the consequences?

That amount is your risk tolerance.  You will do what you need to do to manage the risk – but only up to a certain point.

That is because too much security is too expensive, too disruptive.

You are willing to tolerate the larger loss events because you believe them to be sufficiently rare.

In New Zealand, that cost/risk trade off thinking allowed a standard for retrofitting of existing structures to be set at 1/3 of the standard for new buildings.  But they also allowed a 20 year transition.  That is not as much of an issue now; many of the older buildings, at least in Christchurch, are gone.

But experience changes our view of frequency.  We actually change the loss distribution curve in our minds that is used for decision making.

Risk managers need to be aware of these shifts.  We need to recognize them.  We want to say that these shifts represent shifts in risk appetite.  But we need to also realize that they represent changes in risk perception.  When our models do not move as risk perception moves, the models lose fundamental credibility.

In addition, when modelers do things like what some of the cat modeling firms are doing right now, that is, moving the model frequency when people’s risk perceptions are not moving at all, they also lose credibility.

So perhaps you want scientists and mathematicians creating the basic models, but someone who is familiar with the psychology of risk needs to learn an effective way to integrate those changes in risk perceptions (or lack thereof) with changes in models (or lack thereof).

The idea of moving risk appetite and tolerance up and down as management gets more or less comfortable with the model estimations of risk might work.  But you are still then left with the issue of model credibility.

What is really needed is a way to combine the science/math with the psychology.

Market consistent models come the closest to accomplishing that.  The pure math/science folks see the herding aspect of market psychology as a miscalibration of the model.  But they are just misunderstanding what is being done.  What is needed is an ability to create adjustments to risk calculations that are applied to non-traded risks that allow for the combination of science & math analysis of the risk with the emotional component.

Then the models will accurately reflect how and where management wants to draw the line.

Liquidity Risk Management for a Bank

February 9, 2011

A framework for estimating liquidity risk capital for a bank

From Jawwad Farid

Capital estimation for Liquidity Risk Management is a difficult exercise. It comes up as part of the internal liquidity risk management process as well as the internal capital adequacy assessment process (ICAAP). This post and the liquidity risk management series that can be found at the Learning Corporate Finance blog suggests a framework for ongoing discussion based on the work done by our team with a number of regional banking customers.

By definition, banks take a small return on assets (1% – 1.5%) and use leverage and turnover to scale it to a 15% – 18% return on equity. When market conditions change and a bank becomes the subject of a name crisis and a subsequent liquidity run, the same process becomes the basis for a death chant for the bank.  We try to de-lever the bank by selling assets and paying down liabilities, and the process quickly turns into a fire sale driven by the speed at which word gets out about the crisis.

Figure 1 Increasing Cash Reserves

Reducing leverage by distressed asset sales to generate cash is one of the primary defense mechanisms used by the operating teams responsible for shoring up cash reserves. Unfortunately, every slice of value lost to the distressed sale process is a slice out of the equity pool or capital base of the bank. An alternate mechanism that can protect capital is using the interbank Repurchase (Repo) contract to use liquid or acceptable assets as collateral, but that too is dependent on the availability of un-encumbered liquid securities on the balance sheet as well as the availability of counterparty limits. Both can quickly disappear in times of crisis. The last and final option is the central bank discount window, the use of which may provide temporary relief but serves as a double-edged sword by further feeding the name and reputational crisis.  While a literature review on the topic also suggests cash conservation approaches through a re-alignment of businesses and a restructuring of resources, these last two solutions assume that the bank in question would actually survive the crisis to see the end of the re-alignment and re-structuring exercise.
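
The leverage arithmetic behind both paragraphs above can be made explicit with a small sketch; the ROA, ROE target, haircut and sale size are illustrative.

```python
# Hypothetical leverage arithmetic; ROA, ROE target, haircut and sale size
# are illustrative.
roa = 0.012                   # 1.2% return on assets
roe_target = 0.16             # 16% return on equity
leverage = roe_target / roa   # assets needed per unit of equity
print(f"Implied leverage: {leverage:.1f}x")

assets = 1000.0
equity = assets / leverage    # about 75
haircut = 0.05                # value lost to the distressed sale process
sold = 300.0                  # assets sold to raise cash in the run
equity_hit = sold * haircut
print(f"Selling {sold:.0f} at a {haircut:.0%} haircut burns "
      f"{equity_hit / equity:.0%} of the equity base")
```

The same gearing that turns a 1.2% asset return into a 16% equity return turns a 5% haircut on a modest asset sale into a 20% hit to equity.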

Liquidity Reserves: Real or a Mirage

A questionable assumption that often comes up when we review Liquidity Contingency Plans is the availability or usage of Statutory Liquidity and Cash Reserves held for our account with the Central Bank.  You can only touch those assets when your franchise and license are gone and the bank has been shut down. This means that if you want to survive the crisis with your banking license intact, there is a very good chance that the 6% core liquidity you had factored into your liquidation analysis would NOT be available to you as a going concern in times of a crisis. That liquidity layer has been reserved by the central bank as the last defense for depositor protection, and no central bank is likely to permit abuse of that layer.

Figure 2 Liquidity Risk and Liquidity Run Crisis

As the Bear Stearns case study below illustrates, the typical liquidity crisis begins with a negative event that can take many shapes and forms. The resulting coverage and publicity lead to pressure on not just the share price but also on the asset portfolio carried on the bank’s balance sheet, as market players take defensive cover by selling their own inventory or make aggressive bets by short selling the securities in question. Somewhere in this entire process rating agencies finally wake up and downgrade the issuer across the board, leading to a reduction or cancellation of counterparty lines.  Even when lines are not cancelled, given the write down in value witnessed in the market, calls for margin and collateral start coming in and further feed liquidity pressures.

What triggers a Name Crisis that leads to the vicious cycle that can destroy the inherent value in a 90 year old franchise in less than 3 months?  Typically a name crisis is triggered by a change in market conditions that impacts a fundamental business driver for the bank. The change in market conditions triggers either a large operational loss or a series of operational losses, at times related to a correction in asset prices, at others resulting in a permanent reduction in margins and spreads.  Depending on when this is declared and becomes public knowledge, what the bank does to restore confidence drives what happens next. One approach used by management teams is to defer the news as much as possible by creative accounting or accounting hand waving, which simply changes the nature of the crisis from an asset price or margin related crisis to a much more serious regulatory or accounting scandal with similar end results.

Figure 3 What triggers a name crisis?

The problem, however, is that market players have a very well established defensive response to a name crisis after decades of bank failures. This implies that once you hit a crisis, the speed with which you generate cash, lock in a deal with a buyer and get rid of questionable assets determines how much value you will lose to the market driven liquidation process. The only failsafe here is the ability of the local regulator and lender of last resort to keep the lifeline of counterparty and interbank credit lines open.  As was observed at the peak of the crisis in North America, the UK and a number of Middle Eastern markets, this ability to keep markets open determines how low prices will go, the magnitude of the fire sale and the number of banks that actually go under.

Figure 4 Market response to a Name Crisis and the Liquidity Run cycle.

The above context provides a clear roadmap for building a framework for liquidity risk management. The ending position or the end game is a liquidity driven asset sale. A successful framework would simply jump the gun and get to the asset sale before the market does. The only reason why you would not jump the gun is if you have cash, a secured contractually bound commitment for cash, a white knight or any other acceptable buyer for your franchise and an agreement on the sale price and shareholders’ approval for that sale in place.  If you are missing any of the above, your only defense is to get to the asset sale before the market does.

The problem with the above assertion is the responsiveness of the Board of Directors and the senior executive team to the seriousness of the name crisis. The most common response by both is a combination of the following:

a)     The crisis is temporary and will pass. If there is a need we will sell later.

b)    We can’t accept these fire sale prices.

c)     There must be another option. Please investigate and report back.

This happens especially when the liquidity policy process was run as a compliance checklist and did not run its full course at the board and executive management level.  If a full blown liquidity simulation had been run for the board and the senior management team, and if they had seen for themselves the consequences of speed as well as delay, such reactions would not happen. The board and the senior team must understand that illiquid assets are the equivalent of high explosives and delay in asset sale is analogous to a short fuse. When you combine the two with a name crisis, you will blow up the bank irrespective of its history or the power of its franchise. When the likes of Bear, Lehman, Merrill, AIG and Morgan failed, your bank and your board are not going to see through the crisis to a different and more pleasant fate.


Economic Capital Review by S&P

February 7, 2011

Standard & Poor’s started including an evaluation of insurers’ enterprise risk management in its ratings in late 2005. For companies that fared well under the stick of ERM evaluation, there was the carrot of potentially lower capital requirements.  On 24 January, S&P published the basis for an economic capital review and adjustment process and announced that the process was being implemented immediately.

The ERM review is still the key. Insurers must already have a score from their ERM review of “strong” or “excellent” before they are eligible for any consideration of their capital model. That strong or excellent score implies that those firms have already passed S&P’s version of the Solvency II internal model use test — which S&P calls strategic risk management (SRM). Those firms with strong or excellent ERM ratings will all have their economic capital models reviewed.

The new name for this process is the level III ERM review. The level I review is the original ERM process that was initiated in 2005. The level II process, started in 2006, is a more detailed review that S&P applies to firms with high levels of risk and/or complexity. That level II review included a more detailed look at the risk control processes of the firms.

The new level III ERM review looks at five aspects of the economic capital model: methodology, data quality, assumptions and parameters, process/execution and testing/validation.

Read More at InsuranceERM.com

Sins of Risk Measurement

February 5, 2011
Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line: separating what is highly unlikely from what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain as the size of the potential loss numbers increases.  In other words, the larger the potential loss, the less certain you can be about how likely it might be.  Downplaying uncertainty is usually a sin of omission.  It is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements is implicitly treated as if they were all calculated totally consistently.  However, in fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability, and using tools that have totally different degrees of certitude built into them.  In the end, this will encourage decisions to take on whatever risks we underestimate the most through this process.

3.  Validate to Confirmation.  When we validate risk models, it is common to stop the validation process when we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted, and if validation fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased toward finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective Parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen because the experience was good or bad, to deliberately influence the outcome.  (This sin is sketched in code after item 7.)

5.  Hiding behind Math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what that entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying Wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that their risk calculations are also associated with potential gains will be thought a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.
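
Sin 4 is easy to demonstrate.  In this sketch the "benign" and "adverse" return histories are simulated, so the two VaR answers differ only because of the chosen calibration window; the distribution parameters are invented.

```python
import numpy as np

# Both return histories are simulated, so the two answers differ only
# because of the chosen calibration window; parameters are invented.
rng = np.random.default_rng(1)
benign = rng.normal(0.0005, 0.008, 500)    # calm daily returns
adverse = rng.normal(-0.001, 0.025, 500)   # crisis daily returns

for label, window in [("benign window", benign), ("adverse window", adverse)]:
    var_99 = -np.percentile(window, 1)     # one-day 99% VaR from that window
    print(f"99% VaR calibrated on {label}: {var_99:.1%}")
```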

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

Global Convergence of ERM Requirements in the Insurance Industry

January 27, 2011

Role of Own Risk and Solvency Assessment in Enterprise Risk Management

Insurance companies tend to look backwards to see if there was enough capital for the risks that were present then. It is important for insurance companies to be forward looking and assess whether enough capital is in place to take care of risks in the future. Though it is mandatory for insurance firms to comply with solvency standards set by regulatory authorities, what is even more important is the need for top management to be responsible for certifying solvency. Performing an Own Risk and Solvency Assessment (ORSA) is the key for the insurance industry.

  • Global Convergence of ERM Regulatory requirements with NAIC adoption of ORSA regulations
  • Importance of evaluating Enterprise Risk Management for ORSA
  • When to do an ORSA and what goes in an ORSA report?
  • Basic and Advanced ERM Practices
  • ORSA Plan for Insurers
  • Role of Technology in Risk Management

Join this MetricStream webinar

Date: Wednesday February 16, 2011
Time: 10 am EST | 4 pm CET | 3pm GMT
Duration: 1 hour

ERM Fundamentals

January 21, 2011

You have to start somewhere.

My suggestion is that rather than starting with someone else’s idea of ERM, you start with what YOUR COMPANY is already doing.

In that spirit, I offer up these eight Fundamental ERM Practices (a self assessment sketch in code follows the list).  So to follow my suggestion, you would start in each of these eight areas with a self assessment.  Identify what you already have in these eight areas.  THEN start to think about what to build.  If there are gaping holes, plan to fill those in with new practices.  If there are areas where your company already has a rich vein of existing practice, build gently on that foundation.  Much better to use ERM to enhance existing good practice than to tear down existing systems that are already working.  Making significant improvements to existing good practices should be one of your lowest priorities.

  1. Risk Identification: Systematic identification of principal risks – Identify and classify risks to which the firm is exposed and understand the important characteristics of the key risks

  2. Risk Language: Explicit firm-wide words for risk – A risk definition that can be applied to all exposures, that helps to clarify the range of size of potential loss that is of concern to management and that identifies the likelihood range of potential losses that is of concern. Common definitions of the usual terms used to describe risk management roles and activities.

  3. Risk Measurement: What gets measured gets managed – Includes: Gathering data, risk models, multiple views of risk and standards for data and models.

  4. Policies and Standards: Clear and comprehensive documentation – Clear documentation of the firm’s policies and standards regarding how the firm will take risks and how and when the firm will look to offset, transfer or retain risks. Definitions of risk-taking authorities; definitions of risks to be always avoided; underlying approach to risk management; measurement of risk; validation of risk models; approach to best practice standards.

  5. Risk Organization: Roles & responsibilities – Coordination of ERM through: High-level risk committees; risk owners; Chief Risk Officer; corporate risk department; business unit management; business unit staff; internal audit. Assignment of responsibility, authority and expectations.

  6. Risk Limits and Controlling: Set, track, enforce – Comprehensively clarifying expectations and limits regarding authority, concentration, size, quality; a distribution of risk targets and limits, as well as plans for resolution of limit breaches and consequences of those breaches.

  7. Risk Management Culture: ERM & the staff – ERM can be much more effective if there is risk awareness throughout the firm. This is accomplished via a multi-stage training program, targeting universal understanding of how the firm is addressing risk management best practices.

  8. Risk Learning: Commitment to constant improvement – A learning and improvement environment that encourages staff to make improvements to company practices based on unfavorable and favorable experiences with risk management and losses, both within the firm and from outside the firm.
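
As a concrete illustration of how the first two practices fit together, here is a minimal sketch of a risk register entry.  All field names and figures are hypothetical, not drawn from any standard:

    from dataclasses import dataclass

    # Toy risk register entry combining Risk Identification (1) and Risk
    # Language (2).  Field names and figures are made up for illustration.
    @dataclass
    class RiskEntry:
        name: str
        risk_class: str        # e.g. "insurance", "market", "credit", "operational"
        owner: str             # ties into Risk Organization (5): every risk has an owner
        potential_loss: float  # size of loss that concerns management, in $m
        likelihood: str        # firm-wide likelihood language, e.g. "1-in-10 years"

    register = [
        RiskEntry("pandemic mortality", "insurance", "Chief Actuary", 250.0, "1-in-200 years"),
        RiskEntry("credit spread widening", "market", "Chief Investment Officer", 90.0, "1-in-10 years"),
    ]

    for risk in register:
        print(f"{risk.name:25s} {risk.risk_class:10s} owner: {risk.owner}")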

Risk Environment

January 10, 2011

It seems that there are three approaches to how to look at the riskiness of the future when assessing risk of a specific exposure:

  1. Look at the “long term” frequency and severity and look at risk based upon assuming that the near term future is a “typically” risky period.
  2. Look at the market’s current idea of near term future riskiness.  This is evident in terms of items such as implied volatility.
  3. Focus on “Expert Opinion” of the risk environment.

There are proponents of each approach.  That is because there are strengths and weaknesses for each approach.

Long Term Approach

The long term view of the risk environment will help to make sure that the company takes into account “all” of the risk that could be inherent in their risk positions.  The negative of this approach is that most of the time it will not represent the risk environment that will be faced in the immediate future.

Market View

The market view of risk definitely gives an immediate view of the risk environment.  Proponents think it is the only valid way to get such a view.  However, the market implied view of risk may be a little too short term for some purposes.  And when trying to look at the longer term risk environment through market implied factors, very large inaccuracies may creep in.  That is because factors other than the view of risk are embedded in market implied measures; they are not so large for very short time periods, but they grow to predominate over longer time periods.
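
For readers who have not worked with market implied factors, here is a minimal sketch of how one such view is extracted in practice: backing the implied volatility out of a quoted option price by numerically inverting the Black-Scholes formula.  The quote and terms below are hypothetical.

    from math import exp, log, sqrt

    from scipy.optimize import brentq
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call option."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

    def implied_vol(price, S, K, T, r):
        """Find the volatility at which the model reproduces the market quote."""
        return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)

    # hypothetical one-year at-the-money call on a 100 stock, quoted at 10.45
    print(implied_vol(10.45, S=100.0, K=100.0, T=1.0, r=0.05))  # roughly 0.20

The number that comes out is the market’s consensus view of the riskiness of that underlying over the option’s term – which is also why the view is inherently short term.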

Expert Opinion

Expert opinion can also reflect the current risk environment and is potentially adaptable to the desired time frame.  However, the main complaint about Expert Opinion is that there is no specific way to know whether an expert opinion of the risk environment is equivalent between one point in time and another.  One explanation for that uncertainty can be found in the changing risk attitudes that are described by the Theory of Plural Rationalities. Experts may have methods to overcome the changing waves of risk attitudes that they are personally exposed to, but it is hard to believe that they can escape that basic human cycle entirely.

Risk environment is important in setting risk strategies and adjusting risk tolerances and appetites.

Using the Long Term approach at all times and effectively ignoring the different risk environments is going to be as effective as crossing a street using long term averages for the amount of traffic.

ERM an Economic Sustainability Proposition

January 6, 2011

Global ERM Webinars – January 12 – 14 (CPD credits)

We are pleased to announce the fourth global webinars on risk management. The programs are a mix of backward and forward looking subjects as our actuarial colleagues across the globe seek to develop the science and understanding of the factors that are likely to influence our business and professional environment in the future. The programs in each of the three regions are a mix of technical and qualitative presentations dealing with subjects as diverse as regulatory reform, strategic and operational risks, on one hand, and the modeling of tail risks and implied volatility surfaces, on the other. For the first time, and in keeping with our desire to ensure a global exchange of information, each of the regional programs will have presentations from speakers from the other two regions on subjects that have particular relevance to their markets.

Asia Pacific Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-14-ap/agenda.aspx

Europe/Africa Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-14/agenda.aspx

Americas Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-12/agenda.aspx

Registration
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-12/registration.aspx

Intrinsic Risk

November 26, 2010

If you were told that someone had flipped a coin 60 times and had found that heads were the results 2/3 of the time, you might have several reactions.

  • You might doubt whether the coin was a real coin or whether it was altered.
  • You might suspect that the person who got that result was doing something other than a fair flip.
  • You might doubt whether they are able to count or whether they actually counted.
  • You might doubt whether they are telling the truth.
  • You might start to calculate the likelihood of that result with a fair coin.

Once you take that last step, you find that the story is highly unlikely, but definitely not impossible.  In fact, my computer tells me that if I lined up 225 people and had them all flip a coin 60 times, there is a fifty-fifty chance  that at least one person will get that many heads.
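
That likelihood is easy to check with a short calculation.  Here is a minimal sketch; note that the answer changes noticeably depending on whether you count exactly 40 heads or 40 or more, and the roughly fifty-fifty figure corresponds to the “exactly 40” reading:

    from scipy.stats import binom

    n_flips, n_people = 60, 225

    p_exact = binom.pmf(40, n_flips, 0.5)  # P(exactly 40 heads in 60 fair flips)
    p_tail = binom.sf(39, n_flips, 0.5)    # P(40 or more heads)

    for label, p in [("exactly 40", p_exact), ("40 or more", p_tail)]:
        p_any = 1 - (1 - p) ** n_people    # at least one of the 225 flippers
        print(f"{label}: one person {p:.3%}, at least one of {n_people}: {p_any:.0%}")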

So how should you evaluate the risk of getting 40 heads out of 60 flips?  Should you do calculations based upon the expected likelihood of heads based upon an examination of the coin?  You look at it and see that there are two sides and a thin edge.  You assess whether it seems to be uniformly balanced.  Then you conclude that you are fairly certain of the inherent risk of the coin flipping process.

Your other choice is to base your assessment of the risk on the observed outcomes of coin flips.  This will mean that the central limit theorem should help us to eventually get the right number.  But if your first observation is the person described above, then it will take quite a few additional observations before you find out what the Central Limit Theorem has to tell you.

The point being that a purely observation based approach will not always give you the best answer.   Good to make sure that you understand something about the intrinsic risk.

If you are still not convinced of this, ask the turkey.  Taleb uses that turkey story to explain a Black Swan.  But if you think about it, many Black Swans are nothing more than ignorance of intrinsic risk.

Measuring Risks

November 25, 2010

What gets measured gets managed.

Measuring risks is the second of the eight ERM Fundamental Practices.

There are many, many ways to measure risks.  For the most part, they give information about different aspects of the risks.  Some basic types of measures include (a numerical sketch of the last four follows the list):

  • Transaction flows, measuring counts or dollar flows of risk transactions
  • Mean expected loss
  • Standard deviation of loss
  • Loss at a particular confidence level (also known as VaR or Value at Risk)
  • Average loss beyond a particular confidence level (also known as TVaR, Tail Value at Risk, or Expected Shortfall)
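
To make the last four concrete, here is a minimal sketch that computes each of them from a set of simulated annual losses.  The loss distribution and its parameters are hypothetical:

    import numpy as np

    rng = np.random.default_rng(7)
    losses = rng.lognormal(mean=2.0, sigma=0.8, size=100_000)  # simulated annual losses, $m

    alpha = 0.99
    mean_loss = losses.mean()            # mean expected loss
    sd_loss = losses.std()               # standard deviation of loss
    var = np.quantile(losses, alpha)     # VaR: the 99th percentile loss
    tvar = losses[losses > var].mean()   # TVaR: average loss beyond the VaR

    print(f"mean {mean_loss:.1f}  sd {sd_loss:.1f}  VaR(99%) {var:.1f}  TVaR(99%) {tvar:.1f}")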

So you need to think about what you want from a risk measure.  Here are some criteria of a GOOD RISK MEASURE:

1. Timely – if you do not get the information about risk in time, the information is potentially entertaining or educational, but not useful.

2. Accurately distinguishes broad degrees of riskiness within the broad risk class – allowing you to discern whether one choice is riskier than another.

3. Not too expensive or time intensive to produce – the information needs to be more valuable than the cost of the process that produces it, either in dollars or opportunity cost based on the time it uses up.

4. Understood by all who must use it – some will spend lots of time making sure that they have a risk measure that is the theoretical BEST.  But the improvements from intellectual purity may come with a large trade-off in the ability of a broad audience to understand.  And if people in power do not understand something, there are few who will really rely on it when their careers are at stake in an extreme risk situation.

5. Actionable – the risk measure must be able to point to a possible action.  Otherwise, it just presents management with a difficult and unpleasant puzzle that needs to be solved.  And risk is often not a presenting problem, but a possible problem, so it is always easier to defer actions that are unclearly indicated.

If you can set up your risk measurement systems so that you can satisfy all five of those criteria, then you can feel pretty good. Your risk management system is well served.

But some do not stop there.  They look for EXCELLENT RISK MEASURES.  Those are measures that in addition to satisfying the five criteria above:

6. Can help to identify changes to risk quality – this is the Achilles heel of the risk measurement process: the gradual deterioration in the riskiness of the individual risks.  Without the ability to detect that deterioration, it is possible for a tightly managed risk portfolio to fail unexpectedly because the measures gradually drifted away from the actual underlying riskiness of the portfolio of risks.

7. Provides information that is consistent across different Broad Classes of Risk – this would allow a firm to actually do Risk Steering.  And to more quantitatively assess their Diversification.  So this quality is needed to support two of the four key ERM Strategies and is also needed to apply an enterprise view to Risk Trading and Loss Controlling.

8. For the most sensitive risks, will pinpoint variations in risk levels – this is the characteristic that brings a risk measure to the ultimate level of actionability.  This is the information that risk managers who are seeking to drive their risk measurement down to the transaction level should be seeking to achieve.  However, it is very important to know the actual accuracy of the risk measure in these situations.  When the standard error is much larger than the differences in risk between similar transactions, then process has gone ahead of substance.

Turkey Risk

November 25, 2010

On Thanksgiving Day here in the US, let us recall Nassim Taleb’s story about the turkey.  For 364 days the turkey saw no risk whatsoever, just good eats.  Then one day, the turkey became dinner.

For some risks, studying the historical record and making a model from experience just will not give useful results.

And, remembering the experience of the turkey, a purely historical basis for parameterizing risk models could get you cooked.

Happy Thanksgiving.

Simplicity Fails

September 16, 2010

Usually the maxim KISS (Keep it Simple Stupid) is the way to go.

But in Risk Management, just the opposite is true. If you keep it simple, you will end up being eaten alive.

That is because risk is constantly changing. At any time, your competitors will try to change the game, taking the better risks for themselves and, if you keep it simple and stand still, leaving you with the worst risks.

If you keep it simple and focused and identify the ONE MOST IMPORTANT RISK METRIC and focus all of your risk management systems around controlling risk as defined by that one metric, you will eventually end up accumulating more and more of some risk that fails to register under that metric.  See Risk and Light.

The solution is not to get better at being Simple, but to get good at managing complexity.

That means looking at risk through many lenses, and then focusing on the most important aspects of risk for each situation.  That may mean that you will need different risk measures for different risks.  That is actually the opposite of the thrust of the ERM movement towards the homogenization of risk measurement.  There are clear benefits to having one common measure of risk that can be applied across all risks, but some folks went too far and abandoned their risk specific metrics in the process.

And there needs to be a process of regularly going back to what you had decided were the most important risk measures and making sure that there has not been some sort of regime change that means you should be adding some new risk measure.

So, you should try Simple at your own risk.

It’s simple.  Just pick the important strand.

On The Top of My List

August 28, 2010

I finished a two hour presentation on how to get started with ERM and was asked what were my top 3 things to keep in mind and top 3 things to avoid.

Here’s what I wish I had said:

Top three things to keep in mind when starting an ERM Program:

  1. ERM must have a clear objective to be successful.  That objective should reflect both the view of management and the board about the amount of risk in the current environment as well as the direction that the company is headed in terms of the future target of risk as compared to capacity.  And finally, the objective for ERM must be compatible with the other objectives of the firm.  It must be reasonably possible to accomplish both the ERM objective and the growth and profit objectives of the firm at the same time.
  2. ERM must have someone who is committed to accomplishing the objective of ERM for the firm.  That person also must have the authority within the firm to resolve most conflicts between the ERM development process and the other objectives of the firm. And they must have access to the CEO to be able to resolve any conflicts that they do not have the authority to resolve personally.
  3. Exactly what you do first is much less important than the fact that you start doing something to develop an ERM program.   Doing something that involves actually managing risk and reporting the activity is a better choice than a long term developmental project.  It is not optimal for the firm to commit to ERM, to identify resources for that process and then to have those people and ERM disappear from  sight for a year or more to develop the ERM system.  Much better to start giving everyone in management of the firm some ideas of what ERM looks and feels like.  Recognize that one product that you are building is confidence in ERM.

Things to Avoid:

  1. Valuing ERM retrospectively, taking into account only experienced gains and losses.  (see ERM Value)  A good ERM program changes the likelihood of losses, but in any short period of time actual losses are a matter of chance.  On the other hand, if your ERM program works to a limit for losses from an individual transaction, then it IS a failure if the firm has losses above that amount for individual transactions.
  2. Starting out on ERM development with the idea that ERM is only correct if it validates existing company decisions.  New risk evaluation systems will almost always find one or more major decisions that expose the company to too much risk in some way. At least they will if the evaluation system is Comprehensive.
  3. Letting ERM routines substitute for critical judgment.  Some of the economic carnage of the Global Financial Crisis was perpetrated by firms whose actions were supported by risk management systems that told them that everything was OK.  Risk managers need to be humble.

But in fact, I did get some of these out. So next time, I will be prepared.

Risk Adjusted Performance Measures

June 20, 2010

By Jean-Pierre Berliet

Design weaknesses are an important source of resistance to ERM implementation. Some are subtle and thus often remain unrecognized. Seasoned business executives recognize readily, however, that decision signals from ERM can be misleading in particular situations in which these design weaknesses can have a significant impact. This generates much organizational heat and can create a highly dysfunctional decision environment.

Discussions with senior executives have suggested that decision signals from ERM would be more credible and that ERM would be a more effective management process if ERM frameworks were shown to produce credible and useful risk adjusted performance measures.

Risk adjusted performance measures (RAPM) such as RAROC (Risk Adjusted Return On Capital), first developed in banking institutions, or Risk Adjusted Economic Value Added (RAEVA) have been heralded as significant breakthroughs in performance measurement for insurance companies. They were seen as offering a way for risk bearing enterprises to relate financial performance to capital consumption in relation to risks assumed and thus to value creation.
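
The basic arithmetic of RAROC is simple; the contentious part is the attribution of income and capital that feeds it.  Here is a minimal sketch of one common formulation, with all figures hypothetical:

    def raroc(revenues, costs, expected_losses, economic_capital, capital_yield=0.03):
        """One common formulation: risk adjusted return over attributed capital."""
        risk_adjusted_return = (revenues - costs - expected_losses
                                + capital_yield * economic_capital)
        return risk_adjusted_return / economic_capital

    # hypothetical business unit, all figures in $m
    print(f"RAROC = {raroc(120.0, 70.0, 25.0, 200.0):.1%}")  # 15.5%

Everything interesting – and everything disputed – sits inside how the revenues, expected losses and economic capital are attributed to the business unit.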

Many insurance companies have attempted to establish RAROC/RAEVA performance measurement frameworks to assess their economic performance and develop value enhancing business and risk management strategies. A number of leading companies, mostly in Europe where regulators are demanding it, have continued to invest in refining and using these frameworks. Even those that have persevered, however, understand that framework weaknesses create management challenges that cannot be ignored.

Experienced executives recognize that the attribution of capital to business units or lines provides a necessary foundation for aligning the perspectives of policyholders and shareholders.

Many company executives recognize, however, that i) risk adjusted performance measures can be highly sensitive to methodologies that determine the attribution of income and capital and ii) earnings reported for a period do not adequately represent changes in the value of insurance businesses. As a result, these senior executives believe that decision signals provided by risk adjusted performance measures need to be evaluated with great caution, lest they might mislead. Except for Return on Embedded Value measures that are comparatively more challenging to develop and validate than RAROC/RAEVA measures, risk adjusted performance measures are not typically capable of relating financial performance to return on value considerations that are of critical importance to shareholders.

To provide information that is credible and useful to management and shareholders, insurance companies need to establish risk adjusted performance measures based on:

  • A (paid-up or economic) capital attribution method, with explicit allowance for deviations in special situations, that is approved by Directors
  • Period income measures aligned with pricing and expense decisions, with explicit separation of in-force/run-off, renewals, and new business
  • Supplemental statements relating period or projected economic performance/changes in value to the value of the underlying business
  • Reconciliation of risk adjusted performance metrics to reported financial results under accounting principles used in their jurisdictions (GAAP, IFRS, etc.)
  • Establishment and maintenance of appropriate controls, formally certified by management, reviewed and approved by the Audit Committee of the Board of Directors.

In many instances, limitations and weaknesses in performance measures create serious differences of view between a company’s central ERM staff and business executives.


Risk Velocity

June 17, 2010

By Chris Mandel

Understand the probability of loss, adjusted for the severity of its impact, and you have a sure-fire method for measuring risk.

Sounds familiar and seems on point; but is it? This actuarial construct is useful and adds to our understanding of many types of risk. But if we had these estimates down pat, then how do we explain the financial crisis and its devastating results? The consequences of this failure have been overwhelming.

Enter “risk velocity,” or how quickly risks create loss events. Another way to think about the concept is in terms of “time to impact,” a military phrase, a perspective that implies proactively assessing when the objective will be reached. While relatively new in the risk expert forums I read, I would suggest this is a valuable concept to understand and even more so to apply.

It is well and good to know how likely it is that a risk will manifest into a loss. Better yet to understand what the loss will be if it manifests. But perhaps the best way to generate a more comprehensive assessment of risk is to estimate how much time there may be to prepare a response or make some other risk treatment decision about an exposure. This allows you to prioritize the more rapidly developing exposures for action. Dynamic action is at the heart of robust risk management.
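
As a toy illustration of how velocity changes prioritization, consider ranking exposures not just by expected loss but by expected loss per unit of time available to react. All names and figures below are hypothetical:

    # (exposure, annual likelihood, severity in $m, months until impact)
    exposures = [
        ("counterparty default", 0.05, 80.0, 3.0),
        ("regulatory change", 0.30, 20.0, 18.0),
        ("data center outage", 0.10, 40.0, 0.5),
    ]

    # exposures with high expected loss AND little time to react rise to the top
    ranked = sorted(exposures, key=lambda e: (e[1] * e[2]) / e[3], reverse=True)

    for name, p, severity, months in ranked:
        print(f"{name:22s} expected loss {p * severity:5.1f}  months to impact {months:4.1f}")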

After all, expending all of your limited resources on identification and assessment really doesn’t buy you much but awareness. In fact, awareness, from a legal perspective, creates another element of risk, one that can be quite costly if reasonable action is not taken in a timely manner. Not every exposure will result in this incremental risk, but a surprising number do.

Right now, there is a substantial number of actors in the financial services sector who wish they had understood risk velocity and taken some form of prudent action that could perhaps have altered the course of loss events as they came home to roost; if only.

More at Risk and Insurance

Winners and Losers

June 14, 2010

Sometimes quants who get involved with building new economic capital models have the opinion that their work will reveal the truth about the risks of the group and that the best approach is to just let the truth be told and let the chips fall where they may.

Then they are completely surprised that their project has enemies within management.  And that those enemies are actively at work undermining the credibility of the model.  Eventually, the modelers are faced with a choice of adjusting the model assumptions to suit those enemies or having the entire project discarded because it has failed to get the confidence of management.

But that situation is actually totally predictable.

That is because it is almost a sure thing that the first comprehensive and consistent look at the group’s risks will reveal winners and losers.  And if this really is a new way of approaching things, one or more of the losers will come as a complete surprise to many.

The easiest path for the managers of the new loser business is to undermine the model.  And it is completely natural to find that they will usually be completely skeptical of this new model that makes their business look bad.  It is quite likely that they do not think that their business takes too much risk or has too little profits in comparison to their risk.

In the most primitive form, I saw this first in the late 1970s, when the life insurer where I worked shifted from a risk approach that allocated all capital in proportion to reserves to one that recognized insurance risk and investment risk as two separate factors.  The term insurance products were suddenly found to be drastically underpriced.  Of course, the product manager for that product was an instant enemy of the new approach and was able to find many reasons why capital shouldn’t be allocated to insurance risk.
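
A toy contrast of the two allocation bases shows why the term product manager objected.  The figures and risk factors below are hypothetical; the point is only that moving from a reserves-proportional basis to separate risk factors can radically reshuffle which products look well priced:

    import math

    # (reserves, insurance risk capital, investment risk capital) per line, in $m
    lines = {
        "term life": (100.0, 30.0, 5.0),
        "whole life": (900.0, 10.0, 45.0),
        "annuity": (1000.0, 2.0, 50.0),
    }

    total_capital = 150.0
    total_reserves = sum(v[0] for v in lines.values())

    for name, (reserves, ins_risk, inv_risk) in lines.items():
        old = total_capital * reserves / total_reserves  # in proportion to reserves
        new = math.hypot(ins_risk, inv_risk)             # two separate risk factors, combined
        print(f"{name:10s} reserve basis {old:6.1f}   risk basis {new:6.1f}")

Under these made-up numbers, the term product’s required capital jumps roughly fourfold while the other lines drift down – exactly the kind of reshuffle that creates enemies of a new model.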

The same sorts of issues had been experienced by firms when they first adopted nat cat models and shifted from a volatility risk focus to a ruin risk focus.

What needs to be done to defuse these sorts of issues is to take steps to separate the message from the messenger.  There are 2 main ways to accomplish this:

  1. The message about the new level of risks needs to be delivered long before the model is completed.  This cannot wait until the model is available and the exact values are completely known.  Management should be exposed to broad approximations of the findings of the model at the earliest possible date.  And the rationale for the levels of the risk needs to be revealed and discussed and agreed long before the model is completed.
  2. Once the broad levels of the risk  are accepted and the problem areas are known, a realistic period of time should be identified for resolving these newly identified problems.   And appropriate resources allocated to developing the solution.  Too often the reaction is to keep doing business and avoid attempting a solution.

That way, the model can take its rightful place as a bringer of light to the risk situation, rather than the enemy of one or more businesses.

Common Terms for Severity

June 1, 2010

In the US, firms are required to disclose their risks.  This has led to an exercise that is particularly useless.  Firms obviously spend very little time on what they publish under this part of their financial statement.  Most firms seem to be using boilerplate language and a list of risks that is as long as possible.  It is clearly a totally compliance based CYA activity.  The only way that a firm could “lose” under this system is if they fail to disclose something that later creates a major loss.  So best to mention everything under the sun.  But when you look across a sector at these lists, you find a startling degree to which the risks actually differ.  That is because there is absolutely no standard that is applied to tell what is a risk and, if something is a risk, how significant it is.  The idea of risk severity is totally missing.

What would help would be a common set of terms for the Severity of losses from risks.  Here is a suggested scale for discussing loss severity for an individual firm (a toy classification sketch follows the list):

  1. A Loss that is a threat to earnings.  This level of risk could result in a loss that would seriously impair or eliminate earnings.
  2. A Loss that could result in a significant reduction to capital.  This level of risk would result in a loss that would eliminate earnings and in addition eat into capital, reducing it by 10% to 20%.
  3. A Loss that could result in severe reduction of business activity.  For insurers, this would be called “Going into Run-off”.  It means that the firm is not insolvent, but it is unable to continue doing new business.  This state often lasts for several years as the old liabilities of the insurer are slowly paid off as they become due.  Usually the firm in this state has some capital, but not enough to make any customers comfortable trusting them with future risks.
  4. A Loss that would result in the insolvency of the firm.
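
Here is a toy sketch of how the firm-level scale might be applied mechanically.  The thresholds are illustrative assumptions, not part of the suggested scale:

    def severity_class(loss, earnings, capital, runoff_fraction=0.5):
        """Toy classifier for the firm-level severity scale (Class 1 to 4)."""
        if loss <= earnings:
            return 1                       # Class 1: threat to earnings only
        excess = loss - earnings           # the portion that eats into capital
        if excess > capital:
            return 4                       # Class 4: insolvency
        if excess > runoff_fraction * capital:
            return 3                       # Class 3: severe reduction of activity
        return 2                           # Class 2: significant reduction to capital

    # e.g. a 150 loss for a firm with 100 of earnings and 400 of capital -> Class 2
    print(severity_class(150.0, 100.0, 400.0))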

Then in addition, for an entire sector or sub sector of firms: 

  1. Losses that significantly reduce earnings of the sector.  A few firms might have capital reductions.
  2. Losses that significantly impair capital for the sector.  A few firms might be run out of business from these losses.
  3. Losses that could cause a significant number of firms in the sector to be run out of business.  The remainder of the sector still has capacity to pick up the business of the firms that go into run-off.  A few firms might be insolvent. 
  4. Losses that are large enough that the sector no longer has the capacity to do the business that it had been doing.  There is a forced reduction in activity in the sector until capacity can be replaced, either internally or from outside funds.  A large number of firms are either insolvent or will need to go into run-off. 

These can be referred to as Class 1, Class 2, Class 3, Class 4 risks for a firm or for a sector.  

Class 3 and Class 4 Sector risks are Systemic Risks.  

Care should be taken to make sure that everyone understands that risk drivers such as equity markets, or CDS can possibly produce Class 1, Class 2, Class 3 or Class 4 losses for a firm or for a sector in a severe enough scenario.  There is no such thing as classifying a risk as always falling into one Class.  However, it is possible that at a point in time, a risk may be small enough that it cannot produce a loss that is more than a Class 1 event.  

For example, at a point in time (perhaps 2001), US sub prime mortgages were not a large enough class to rise above a Class 1 loss for any firms except those whose sole business was in that area.  By 2007, Sub Prime mortgage exposure was large enough that Class 4 losses were created for the banking sector.  

Looking at Sub Prime mortgage exposure in 2006, a bank should have been able to determine that sub primes could create a Class 1, Class 2, Class 3 or even Class 4 loss in the future.  The banks could have determined the situations that would have led to losses in each Class for their firm and determined the likelihood of each situation, as well as the degree of preparation needed for the situation.  This activity would have shown the startling growth of the sub prime mortgage exposure from a Class 1 to a Class 2 through Class 3 to Class 4 in a very short time period.  

Similarly, the prudential regulators could theoretically have done the same activity at the sector level.  Only in theory, because the banking regulators do not at this time collect the information needed to do such an exercise.  There is a proposal that is part of the financial regulation legislation to collect such information.  See CE_NIF.

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report has been posted to the IAA website this week.

It raises a point that must be fairly obvious to everyone that you just cannot manage risks without looking at them from multiple angles.

Or at least it should now be obvious. Here are 8 different angles on risk that are discussed in the report and my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   -  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there were a significant number of sub prime mortgages where the gross mortgage payments were higher than the gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  -  Some firms did all of their analysis on an economic basis and kept saying that they were fine as their reported financials showed them dying.  They should have known in advance of the risk of accounting that was different from their analysis.
  3. REGULATORY MEASURE OF RISK  -  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right”, you need to know how important others, including your regulator, will be seeing things.  Better to have a discussion with the regulator long before a problem arises.  You are just not as credible, in the middle of what seems to the regulator to be a crisis, saying that the regulatory view is off target.
  4. SHORT TERM VS. LONG TERM RISKS  -  While it is really nice that everyone has agreed to focus in on a one year view of risks, for situations that may well extend beyond one year, it can be vitally important to know how the risk might impact the firm over a multi year period.
  5. KNOWN RISK AND EMERGING RISKS  -  the fact that your risk model did not include anything for volcano risk, is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  -  Again, while an agreement on a 1 in 200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  -  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  -  This is another way of saying to focus on the economic vs the accounting.

Read the report to get the more measured and complete view prepared by the 15 actuaries from US, UK, Australia and China who participated in the working group to prepare the report.

Comprehensive Actuarial Risk Evaluation

Assumptions Embedded in Risk Analysis

April 28, 2010

The picture below from Doug VanDemeter’s blog gives an interesting take on the embedded assumptions in various approaches to risk analysis and risk treatment.

But what I take from this is a realization that many firms have activity in one or two or three of those boxes, but the only box that does not assume away a major part of reality is generally empty.

In reality, most financial firms do experience market, credit and liability risks all at the same time and most firms do expect to be continuing to receive future cashflows both from past activities and from future activities.

But most firms have chosen to measure and manage their risk by assuming that one or two or even three of those things are not a concern.  By selectively putting on blinders to major aspects of their risks – first blinding their right eye, then their left, then by not looking up and finally not looking down.

Some of these processes were designed that way in earlier times when computational power would not have allowed anything more.  For many firms their affairs are so very complicated and their future is so uncertain that it is simply impractical to incorporate everything into one all encompassing risk assessment and treatment framework.

At least that is the story that folks are most likely to use.

But the fact that their activity is too complicated for them to model does not seem to send them any flashing red signal that it is possible that they really do not understand their risk.

So look at Doug’s picture and see which are the embedded assumptions in each calculation – the ones I am thinking of are the labels on the OTHER rows and columns.

For Credit VaR – the embedded assumption is that there is no market risk and that there are no new assets or liabilities (the business is in sell-off mode).

For Interest risk VaR – the embedded assumption is that there is no credit risk and that there are no new assets or liabilities (the business is in sell-off mode).

For ALM – the embedded assumption is that there is no credit risk and that the business is in run-off mode.

Those are the real embedded assumptions.  We should own up to them.

Dangerous Words

April 27, 2010

One of the causes of the Financial Crisis that is sometimes cited is an inappropriate reliance on complex financial models.  In our defense, risk managers have often said that users did not take the time to understand the models that they relied upon.

And I have said that in some sense, blaming the bad decisions on the models is like a driver who gets lost blaming it on the car.

But we risk managers and risk modelers do need to be careful with the words that we use.  Some of the most common risk management terminology is guilty of being totally misleading to someone who has no risk management training – who simply relies upon their understanding of English.

One of the fundamental steps of risk management is to MEASURE RISK.

I would suggest that this very common term is potentially misleading and risk managers should consider using it less.

In common usage, you could say that you measure a distance between two points or measure the weight of an object.  Measurement usually refers to something completely objective.

However, when we “measure” risk, it is not at all objective.  That is because Risk is actually about the future.  We cannot measure the future.  Or any specific aspect of the future.

While I can measure my height and weight today, I cannot now measure what it will be tomorrow.  I can predict what it might be tomorrow.  I might be pretty certain of a fairly tight range of values, but that does not make my prediction into a measurement.

So by the very words we use to describe what we are doing, we seek to instill a degree of certainty and reliability that is impossible and unwarranted.  We do that perhaps as mathematicians who are used to starting a problem by defining terms.  So we start our work by defining our calculation as a “measurement” of risk.

However, non-mathematicians are not so used to defining A = B at the start of the day and then remembering thereafter that whenever they hear someone refer to A, that they really mean B.

We also may have defined our work as “measuring risk” to instill in it enough confidence from the users that they would actually pay attention to the work and rely upon it.  In which case we are not quite as innocent as we might claim on the over reliance front.

It might be difficult to retreat now, however.  Try telling management that you do not now, nor have you ever, measured risk.  And see what happens to your budget.

Volcano Risk 2

April 20, 2010

Top 10 European Volcanos in terms of people nearby and potential losses from an eruption:

Volcano – Country – Affected population – Value of residences at risk

  1. Vesuvius – Italy – 1,651,950 – $66.1bn
  2. Campi Flegrei – Italy – 144,144 – $7.8bn
  3. La Soufrière – Guadeloupe, France – 94,037 – $3.8bn
  4. Etna – Italy – 70,819 – $2.8bn
  5. Agua de Pau – Azores, Portugal – 34,307 – $1.4bn
  6. Soufrière Saint Vincent – Saint Vincent, Caribbean – 24,493 – $1bn
  7. Furnas – Azores, Portugal – 19,862 – $0.8bn
  8. Sete Cidades – Azores, Portugal – 17,889 – $0.7bn
  9. Hekla – Iceland – 10,024 – $0.4bn
  10. Mt Pelée – Martinique, France – 10,002 – $0.4bn

http://www.strategicrisk.co.uk/story.asp?source=srbreaknewsRel&storycode=384008

LIVE from the ERM Symposium

April 17, 2010

(Well not quite LIVE, but almost)

The ERM Symposium is now 8 years old.  Here are some ideas from the 2010 ERM Symposium…

  • Survivor Bias creates support for bad risk models.  If a model underestimates risk there are two possible outcomes – good and bad.  If bad, then you fix the model or stop doing the activity.  If the outcome is good, then you do more and more of the activity until the result is bad.  This suggests that model validation is much more important than just a simple-minded tick-the-box exercise.  It is a life and death matter.
  • BIG is BAD!  Well maybe.  Big means large political power.  Big will mean that the political power will fight for parochial interests of the Big entity over the interests of the entire firm or system.  Safer to not have your firm dominated by a single business, distributor, product, region.  Safer to not have your financial system dominated by a handful of banks.
  • The world is not linear.  You cannot project the macro effects directly from the micro effects.
  • Due Diligence for mergers is often left until the very last minute and given an extremely tight time frame.  That will not change, so more due diligence needs to be a part of the target pre-selection process.
  • For merger of mature businesses, cultural fit is most important.
  • For newer businesses, retention of key employees is key
  • Modelitis = running the model until you get the desired answer
  • Most people when asked about future emerging risks, respond with the most recent problem – prior knowledge blindness
  • Regulators are sitting and waiting for a housing market recovery to resolve problems that are hidden by accounting in hundreds of banks.
  • Why do we think that any bank will do a good job of creating a living will?  What is their motivation?
  • We will always have some regulatory arbitrage.
  • Left to their own devices, banks have proven that they do not have a survival instinct.  (I have to admit that I have never, ever believed for a minute that any bank CEO has ever thought for even one second about the idea that their bank might be bailed out by the government.  They simply do not believe that they will fail.)
  • Economics has been dominated by a religious belief in the mantra “markets good – government bad”
  • Non-financial businesses are opposed to putting OTC derivatives on exchanges because exchanges will only accept cash collateral.  If they are hedging physical asset prices, why shouldn’t those same physical assets be good collateral?  Or are they really arguing to be allowed to do speculative trading without posting collateral? Probably more of the latter.
  • It was said that systemic problems come from risk concentrations.  Not always.  They can come from losses and lack of proper disclosure.  When folks see some losses and do not know who is hiding more losses, they stop doing business with everyone.  No one does enough disclosure, and that confirms the suspicion that everyone is impaired.
  • Systemic risk management plans need to recognize that this is like forest fires.  If they prevent the small fires, then the fires that eventually do happen will be much larger and more dangerous.  And someday, there will be another fire.
  • Sometimes a small change in the input to a complex system will unpredictably result in a large change in the output.  The financial markets are complex systems.  The idea that the market participants will ever correctly anticipate such discontinuities is complete nonsense.  So markets will always be efficient, except when they are drastically wrong.
  • Conflicting interests for risk managers who also wear other hats is a major issue for risk management in smaller companies.
  • People with bad risk models will drive people with good risk models out of the market.
  • Inelastic supply and inelastic demand for oil is the reason why prices are so volatile.
  • It was easy to sell the idea of starting an ERM system in 2008 & 2009.  But will firms who need that much evidence of the need for risk management forget why they approved it when things get better?
  • If risk function is constantly finding large unmanaged risks, then something is seriously wrong with the firm.
  • You do not want to ever have to say that you were aware of a risk that later became a large loss but never told the board about it.  Whether or not you have a risk management program.

Take CARE in evaluating your Risks

February 12, 2010

Risk management is sometimes summarized as a short set of simply stated steps:

  1. Identify Risks
  2. Evaluate Risks
  3. Treat Risks

There are much more complicated expositions of risk management.  For example, the AS/NZ Risk Management Standard makes 8 steps out of that. 

But I would contend that those three steps are the really key steps. 

The middle step “Evaluate Risks” sounds easy.  However, there can be many pitfalls.  A new report [CARE] from a working party of the Enterprise and Financial Risks Committee of the International Actuarial Association gives an extensive discussion of the conceptual pitfalls that might arise from an overly narrow approach to Risk Evaluation.

The heart of that report is a discussion of eight either/or choices that are often made in evaluating risks:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE 
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS         
  3. REGULATORY MEASURE OF RISK    
  4. SHORT TERM VS. LONG TERM RISKS          
  5. KNOWN RISK AND EMERGING RISKS        
  6. EARNINGS VOLATILITY VS. RUIN    
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO       
  8. CASH VS. ACCRUAL 

The main point of the report is that for a comprehensive evaluation of risk, these are not choices.  Both paths must be explored.

Why the valuation of RMBS holdings needed changing

January 18, 2010

Post from Michael A Cohen, Principal – Cohen Strategic Consulting

Last November’s decision by the National Association of Insurance Commissioners (NAIC) to appoint PIMCO Advisory to assess the holdings of non-agency residential mortgage-backed securities (RMBS) signaled a marked change in attitude towards the major ratings agencies. This move by the NAIC — the regulatory body for the insurance industry in the US, comprising the insurance commissioners of the 50 states — was aimed at determining the appropriate amount of risk-adjusted capital to be held by US insurers (more than 1,600 companies in both the life and property/casualty segments) for RMBS on their balance sheets.

Why did the NAIC act?

A number of problems had arisen from the way RMBS held by insurers had historically been rated by some rating agencies which are “nationally recognized statistical rating organizations” (NRSROs), though it is important to note that not all rating agencies which are NRSROs had engaged in this particular rating activity.

RMBS had been assigned (much) higher ratings than they seem to have deserved at the time, albeit with the benefit of hindsight. The higher ratings also led to lower capital charges for entities holding these securitizations (insurers, in this example) in determining the risk-adjusted capital they needed to hold for regulatory standards.

Consequently, these insurance organizations were ultimately viewed to be undercapitalized for their collective investment risks. The higher ratings also meant lower yields (and higher prices) on the securitizations, so the purchasers were ultimately getting much lower risk-adjusted returns than had been envisaged (and in many cases losses) for their purchases.

The analysis that was performed by the NRSROs has been strenuously called into question by many industry observers during the financial crisis of the past two years, for two primary reasons:

  • The level of analytical due diligence was weak and the default statistics used to evaluate these securities did not reflect the actual level of stress in the marketplace; as a consequence, ratings were issued at higher levels than the underlying analytics supported, in part to placate the purchasers of the ratings, and a number of industry insiders observed that this was being done.
  • Once the RMBS marketplace came under extreme stress, the rating agencies subsequently determined that the risk charges for these securities would increase several fold, materially increasing the amount of risk-adjusted capital needed to be held by insurers with RMBS, and ultimately jeopardizing the companies’ financial strength ratings themselves.

Flaws in rating RMBS

Rating agencies have historically been paid for their rating services by those entities to which they assign ratings (that reflect claims paying, debt paying, principal paying, etc. abilities). Industry observers have long viewed this relationship as a potential conflict of interest, but, because insurers and buyers had not been materially harmed by this process until recently, the industry practice of rating agencies assigning ratings to companies who were paying them for the service was not strenuously challenged.

Further, since the rating agencies can increase their profit margins by increasing their overall rating fees while maintaining their expenses in the course of performing rating analysis, it follows that there is an incentive to increase the volume of ratings issued by the staff, which implies less time being spent on a particular analysis. Again, until recently, the rated entities and the purchasers of rated securities and insurance policies did not feel sufficiently harmed to challenge the process.


Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2300 hits, more than twice the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management, at least they were not intending to talk about risk management.

The list of quotes has recently hit its 100th posting (with something more than 100 quotes, since a number of the posts have multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.  Douglas Adams
  2. “when the map and the territory don’t agree, always believe the territory” Gause and Weinberg – describing Swedish Army Training
  3. When you find yourself in a hole, stop digging.-Will Rogers
  4. “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair” Douglas Adams
  5. “A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all.”– George F. Kennan, (1954)
  6. “THERE ARE IDIOTS. Look around.” Larry Summers
  7. the only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat  Donald Van Deventer
  8. Philip K. Dick “Reality is that which, when you stop believing in it, doesn’t go away.”
  9. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.  Albert Einstein
  10. “Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand.”  Sherlock Holmes (A. Conan Doyle)
  11. The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable. Warren Buffett
  12. “A good rule of thumb is to assume that “everything matters.” Richard Thaler
  13. “The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them.”  Avinash Persaud
  14. There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.
    W Shakespeare Hamlet, scene v
  15. When Models turn on, Brains turn off  Til Schuermann

You might have other favorites.  Please let us know about them.

New Decade Resolutions

January 1, 2010

Here are New Decade Resolutions to adopt for firms that are looking to be prepared for another decade:

  1. Attention to risk management by top management and the board.  The past decade has been just one continuous lesson that losses can happen from any direction. This is about the survival of the firm.  Survival must not be delegated to a middle manager.  It must be a key concern for the CEO and board.
  2. Action oriented approach to risk.  Risk reports are made to point out where and what actions are needed.  Management expects to and does act upon the information from the risk reports.
  3. Learning from own losses and from the losses of others.  After a loss, the firm should learn not just what went wrong that resulted in the loss, but how they can learn from their experience to improve their responses to future situations both similar and dissimilar.  Two different areas of a firm shouldn’t have to separately experience a problem to learn the same lesson. Competitor losses should present the exact same opportunity to improve rather than a feeling of smug superiority.
  4. Forward-looking risk assessment. Painstaking calibration of risk models to past experience is only valuable for firms that own time machines.  Risk assessment needs to be calibrated to the future.
  5. Skeptical of common knowledge. The future will NOT be a repeat of the past.  Any risk assessment that is properly calibrated to the future is only one of many possible results.  Look back on the past decade’s experience and remember how many times risk models needed to be recalibrated.  That recalibration experience should form the basis for healthy skepticism of any and all future risk assessments.

  6. Drivers of risks will be highlighted and monitored.  Key risk indicators are not just an idea for Operational risks that are difficult to measure directly.  Key risk indicators should be identified and monitored for all important risks.  Key risk indicators need to include leading and lagging indicators, as well as indicators from information that is internal to the firm as well as external.
  7. Adaptable. Neither risk measurement nor risk management should be designed after the famously fixed Ligne Maginot that spectacularly failed the French in 1940.  The ability needs to be developed and maintained to change the focus of risk assessment and to change risk treatment methods on short notice without major cost or disruption.
  8. Scope will be clear for risk management.  I have personally favored a split between risk of failure of the firm strategy and risk of losses within the firm strategy, with only the latter within the scope of risk management.  That means that anything that is potentially loss making, except failure of sales, would be in the scope of risk management.
  9. Focus on the largest exposures.  All of the details of execution of risk treatment will come to naught if the firm is too concentrated in any risk that starts making losses at a rate higher than expected.  That means that the largest exposures need to be examined and re-examined with a “no complacency” attitude.  There should never be a large exposure that is too safe to need attention.  Big transactions will also get the same kind of focus on risk.

Risk Management in 2009 – Reflections

December 26, 2009

Perhaps we will look back at 2009 and recall that it was the turning point year for Risk Management.  The year that boards and management and regulators all at once embraced ERM and really took it to heart.  The year that many, many firms appointed their first ever Chief Risk Officer.  The year when they finally committed the resources to build the risk capital model of the entire firm.

On the other hand, it might be recalled as the false spring of ERM before its eventual relegation to the scrapyard of those incessant series of new business management fads like Management by Objective, Managerial Grid, TQM, Process Re-engineering and Six Sigma.

The Financial Crisis was in part due to risk management.  Put a helmet on a kid on a bicycle and they go faster down that hill.  And if the kid really doesn’t believe in helmets and they fail to buckle the chin strap and the helmet blows off in the wind, so much the better.  The wind in the hair feels exhilarating.

The true test of whether top management is ready to actually DO risk management is whether they expect to have to change some of their decisions based upon what their risk assessment process tells them.

The dashboard metaphor is really a good way of thinking about risk management.  A reasonable person driving a car will look at their dashboard periodically to check on their speed and on the amount of gas that they have in the car.  That information will occasionally cause them to do something different than what they might have otherwise done.

Regulatory concentration on Risk Management is, on the whole, likely to be bad for firms.  While most banks were doing enough risk management to satisfy regulators, that risk management was not relevant to stopping or even slowing down the financial crisis.

Firms will tend to load up on risks that are not featured by their risk assessment system.  A regulatory driven risk management system tends to be fixed, while a real risk management system needs to be nimble.

Compliance based risk management makes as much sense for firms as driving at the speed limit regardless of the weather, the road conditions or the condition of the car’s brakes and steering.

Many have urged that risk management is as much about opportunities as it is about losses.  However, that is then usually followed by focusing on the opportunities and downplaying the importance of loss controlling.

Preventing a dollar of loss is just as valuable to the firm as adding a dollar of revenue.  A risk management loss controlling system provides management with a methodology to make that loss prevention a reliable and repeatable event.  Excess revenue has much more value if it is reliable and repeatable.  Loss control that is reliable and repeatable can have the same value.

Getting the price right for risks is key.  I like to think of the right price as having three components.  Expected losses.  Risk Margin.  Margin for expenses and profits.  The first thing that you have to decide about participating in a market for a particular type of risk is whether the market is sane.  That means that the market is realistically including some positive margin for expenses and profits above a realistic value for the expected losses and risk margin.

Most aspects of the home real estate and mortgage markets were not sane in 2006 and 2007.  Various insurance markets go through periods of low sanity as well.

Risk management needs to be sure to have the tools to identify the insane markets and the access to tell the story to the real decision makers.

Finally, individual risks or trades need to be assessed and priced properly.  That means that the insurance premium needs to provide a positive margin for expenses and profits above the realistic provision for expected losses and a reasonable margin for risk.
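
A minimal sketch of that three-part test in Python; every number and function name below is an invented illustration, not market data:

```python
def required_premium(expected_losses, risk_margin, expense_profit_margin):
    """The right price: expected losses + risk margin + margin for expenses and profits."""
    return expected_losses + risk_margin + expense_profit_margin

def market_is_sane(market_price, expected_losses, risk_margin):
    """A sane market prices risk with a positive margin above
    realistic expected losses plus a realistic risk margin."""
    return market_price > expected_losses + risk_margin

print(required_premium(70.0, 10.0, 15.0))  # 95.0: the full required price
print(market_is_sane(88.0, 70.0, 10.0))    # True, but the margin is thin
print(market_is_sane(76.0, 70.0, 10.0))    # False: walk away from this market
```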

There were two big hits to insurers in 2009.  One was the continuing problems at AIG from its financial products unit.  The main lesson from their troubles ought to be TANSTAAFL.  There ain’t no such thing as a free lunch.  Selling far out of the money puts and recording the entire premium as a profit is a business model that will ALWAYS end up in disaster.

The other hit was to the variable annuity writers.  In their case, they were guilty of only pretending to do risk management.  Their risk limits were strange historical artifacts that had very little to do with the actual risk exposures of the firm.  The typical risk limits for a VA writer allowed very little retained equity risk if the potential loss was due to an embedded guarantee, but set no limit whatsoever on equity risk that resulted in drops in basic M&E revenue.  A typical VA hedging program was like a homeowner who insured every item of his possessions from fire risk, but who failed to insure the house!

So insurers should end the year of 2009 thinking about whether they have either of those two problems lurking somewhere in their book of business.

Are there any “far out of the money” risks where no one is appropriately aware of the large loss potential?

Are there parts of the business where risk limits are based on tradition rather than on risk?

Have a Happy New Year!

Reflexivity of Risk

November 19, 2009

George Soros says that financial markets are reflexive.  He means that the participants in the system influence the system.  Market prices reflect not just fundamentals, but investors’ expectations.

The same thing is true of risk systems.  This can be illustrated by a point that is frequently made by John Adams.  Seat belts are widely thought to be good safety devices.  However, Adams points out that aggregate statistics of traffic fatalities do not indicate any improvement whatsoever in safety.  He suggests that because of the real added safety from the seat belts, people drive more recklessly, counteracting the added safety with added risky behavior.

That was one of the problems for firms that adopted, and were very strong believers in, their sophisticated ERM systems.  Some of those firms used their ERM systems to enable them to take more and more risk.  In effect, they were using the ERM system to tell them where the edge of the cliff was, and they then proceeded to drive along the extreme edge at a very fast speed.

What they did not realize was that the cliff was undercut in some places – it was not such a steady place to put all of your weight.

Stated more directly, the risk system caused a feeling of safety that encouraged more risk taking.

What was lost was the understanding of uncertainty.  Those firms were perfectly safe from risks that had happened before and perhaps from risks that were anticipated by the markets.  The highly sophisticated systems were pretty accurate at measuring those risks.  However, they were totally unprepared for the risks that were new.  Mark Twain once said that history does not repeat itself, but it rhymes.  Risk is the same only worse.

Non-Linearities and Capacity

November 18, 2009

I bought my current house 11 years ago.  The area where it is located was then in the middle of a long drought.  There was never any rain during the summer.  Spring rains were slight and winter snow in the mountains that fed the local rivers was well below normal for a number of years in a row.  The newspapers started to print stories about the levels of the reservoirs – showing that the water was slightly lower at the end of each succeeding summer.  One year they even outlawed watering the lawns and everyone’s grass turned brown.

Then, for no reason that was ever explained, the drought ended.  Rainy days in the spring became common and one week it rained for six days straight.

Every system has a capacity.  When the capacity of a system is exceeded, there will be a breakdown of the system of some type.  The breakdown will be a non-linearity of performance of the system.

For example, the ground around my house has a capacity for absorbing and running off water.  When it rained for six days straight, that capacity was exceeded and some of the water showed up in my basement.  The first time that happened, I was shocked and surprised.  I had lived in the house for 5 years and there had never been a hint of water in the basement.  I cleaned up the effects of the water and promptly forgot about it.  I put it down to a 1 in 100 year rainstorm.  In other parts of town, streets had been flooded.  It really was an unusual situation.
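
The capacity idea fits in a couple of lines of code; a minimal sketch with hypothetical numbers:

```python
def overflow(inflow, capacity):
    """What the system cannot handle; zero until capacity is exceeded."""
    return max(0.0, inflow - capacity)

capacity = 4.0  # inches of rain the ground can absorb or carry away (assumed)
for rain in (2.0, 4.0, 6.0, 9.0):
    print(f"rain {rain:>3} -> in the basement {overflow(rain, capacity):>3}")
# 0, 0, 2, 5: nothing, nothing, then a sharply non-linear response
```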

Then it happened again the very next spring, this time after just 3 days of very, very heavy rain.  The flooding in the local area was extreme.  People were driven from their homes and they turned the high school gymnasium into a shelter for a week or two.

It appeared that we all had to recalibrate our models of rainfall possibilities.  We had to realize that the system we had for dealing with rainfall was being exceeded regularly and that these wetter springs were going to continue to exceed the system.  During the years of drought, we had built more and more in low lying areas and, in ways that we might not have understood at the time, we altered the overall capacity of the system by paving over ground that would have absorbed the water.

For me, I added a drainage system to my basement.  The following spring, I went into my basement during the heaviest rains and listened to the pump taking the water away.

I had increased the capacity of that system.  Hopefully the capacity is now higher than the amount of rain that we will experience in the next 20 years while I live here.

Financial firms have capacities.  Management generally tries to make sure that the capacity of the firm to absorb losses is not exceeded by losses during their tenure.  But just like I underestimated the amount of rain that might fall in my home town, it seems to be common that managers underestimate the severity of the losses that they might experience.

Writers of liability insurance in the US underestimated the degree to which the courts would assign blame for use of a substance that was thought to be largely benign at one time that turned out to be highly dangerous.

In other cases, though, it was the system capacity that was misunderstood.  Investors misestimated the capacity of internet firms to productively absorb new cash from investors.  Just a few years earlier, the capacity of Asian economies to absorb investors’ cash was overestimated as well.

Understanding the capacity of large sectors or entire financial systems to absorb additional money and put it to work productively is particularly difficult.  There are no rules of thumb to tell what the capacity of a system is in the first place.  Then to make it even more difficult, the addition of cash to a system changes the capacity.

Think of it this way, there is a neighborhood in a city where there are very few stores.  Given the income and spending of the people living there, an urban planner estimates that there is capacity for 20 stores in that area.  So with encouragement of the city government and private investors, a 20 store shopping center is built in an underused property in that neighborhood.  What happens next is that those 20 stores employ 150 people and for most of those people, the new job is a substantial increase in income.  In addition, everyone in the neighborhood is saving money by not having to travel to do all of their shopping.  Some just save money and all save time.  A few use that extra time to work longer hours, increasing their income.  A new survey by the urban planner a year after the stores open shows that the capacity for stores in the neighborhood is now 22.  However, entrepreneurs see the success of the 20 stores and they convert other properties into 10 more stores.  The capacity temporarily grows to 25, but eventually, half of the now 30 stores in the neighborhood go out of business.
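
A rough simulation of the feedback loop in that story, under deliberately crude assumptions (the income effect per store is invented, as is everything else):

```python
base_capacity = 20    # the planner's original estimate of store capacity
income_effect = 0.10  # assumed extra capacity created per operating store

stores = 30  # the 20 planned stores plus 10 me-too entrants
for year in range(1, 6):
    capacity = base_capacity + income_effect * stores  # more stores, more local income
    stores = min(stores, round(capacity))              # stores beyond capacity fail
    print(f"year {year}: capacity {capacity:.1f}, surviving stores {stores}")
# capacity creeps up to ~22, but most of the 10 extra stores still fail
```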

This sort of simple micro economic story is told every year in university classes.

It clearly applies to macroeconomics as well – to large systems as well as small.  Another word for these situations where system capacity is exceeded is systemic risk.  The term is misleading.  Systemic risk is not a particular type of risk, like market or credit risk.  Systemic risk is the risk that the system will become overloaded and start to behave in a severely non-linear manner.  One severe non-linear behavior is shutting down.  That is what interbank lending did in 2008.

In 2008, many knew that the capacity of the banking system had been exceeded.  They knew that because they knew that their own bank’s capacity had been exceeded.  And they knew that the other banks had been involved in the same sort of business as them.  There is a name for the risks that hit everyone who is in a market – systematic risks.  Systemic risks are usually systematic risks that grow so large that they exceed the capacity of the system.  The third broad category, specific risk, is not an issue, unless a firm with a large amount of specific risk that exceeds its capacity is “too big to fail”.  Then suddenly specific risk can become systemic risk.

So everyone just watched when the sub-prime systematic risk became a systemic risk to the banking sector.  And watched as the specific risk of AIG led to the largest single-firm bailout in history.

Many have proposed the establishment of a systemic risk regulator.  That person would be in charge of identifying growing systematic risks that could become large enough to become systemic problems.  Then they would be responsible for taking or urging actions intended to defuse the systematic risk before it becomes a systemic risk.

A good risk manager has a systemic risk job as well.  The good risk manager needs to pay attention to the exact same things – to watch out for systematic risks that are growing to a level that might overwhelm the capacity of the system.  The risk manager’s responsibility is then to urge their firm to withdraw from holding any of the systematic risk.  Stories tell us that happened at JP Morgan and at Goldman.  Other stories tell us that it didn’t happen at Bear or Lehman.

So the moral of this is that you need to watch not just your own capacity but everyone else’s capacity as well if you do not want stories told about you.

The Future of Risk Management – Conference at NYU November 2009

November 14, 2009

Some good and not so good parts to this conference.  Hosted by the Courant Institute of Mathematical Sciences, it was surprisingly non-quant.  In fact several of the speakers, obviously with no idea of what the other speakers were doing, said that they were going to give some relief from the quant stuff.

Sad to say, the only suggestion that anyone had to do anything “different” was to do more stress testing.  Not exactly, or even slightly, a new idea.  So if this is the future of risk management, no one should expect any significant future contributions from the field.

There was much good discussion, but almost all of it was about the past of risk management, primarily the very recent past.

Here are some comments from the presenters:

  • Banks need regulators to require Stress tests so that they will be taken seriously.
  • Most banks did stress tests that were far from extreme risk scenarios, extreme risk scenarios would not have been given any credibility by bank management.
  • VAR calculations for illiquid securities are meaningless
  • Very large positions can be illiquid because of their size, even though the underlying security is traded in a liquid market.
  • Counterparty risk should be stress tested
  • Securities that are too illiquid to be exchange traded should have higher capital charges
  • Internal risk disclosure by traders should be a key to bonus treatment.  Losses that were disclosed and that are within tolerances should be treated one way and losses from risks that were not disclosed and/or that fall outside of tolerances should be treated much more harshly for bonus calculation purposes.
  • Banks did not accurately respond to the Spring 2009 stress tests
  • Banks did not accurately self assess their own risk management practices for the SSG report.  Usually gave themselves full credit for things that they had just started or were doing in a formalistic, non-committed manner.
  • Most banks are unable or unwilling to state a risk appetite and ADHERE to it.
  • Not all risks taken are disclosed to boards.
  • For the most part, losses of banks were < Economic Capital
  • Banks made no plans for what they would do to recapitalize after a large loss.  Assumed that fresh capital would be readily available, if they thought of it at all.  Did not consider that in an extreme situation that results in losses of a magnitude similar to Economic Capital, capital might not be available at all.
  • Prior to Basel reliance on VAR for capital requirements, banks had a multitude of methods and often used more than one to assess risks.  With the advent of Basel specifications of methodology, most banks stopped doing anything other than the required calculation.
  • Stress tests were usually at 1 or at most 2 standard deviation scenarios.
  • Risk appetites need to be adjusted as markets change and need to reflect the input of various stakeholders.
  • Risk management is seen as not needed in good times and gets some of the first budget cuts in tough times.
  • After doing Stress tests need to establish a matrix of actions that are things that will be DONE if this stress happens, things to sell, changes in capital, changes in business activities, etc.
  • Market consists of three types of risk takers: Innovators, Me Too Followers and Risk Avoiders.  Innovators find good businesses through real trial and error and make good gains from new businesses; Me Too Followers follow innovators, getting less of the gains because of slower, gradual adoption of innovations; and Risk Avoiders are usually into these businesses too late.  All experience losses eventually.  Innovators’ losses are a small fraction of gains, Me Too losses are a sizable fraction and Risk Avoiders often lose money.  Innovators have all left the banks.  Banks are just the Me Too and Avoiders.
  • T-Shirt – In my models, the markets work
  • Most of the reform suggestions will have the effect of eliminating alternatives, concentrating risk and risk oversight.  Would be much safer to diversify and allow multiple options.  Two exchanges are better than one, getting rid of all the largest banks will lead to lack of diversity of size.
  • Problem with compensation is that (a) pays for trades that have not closed as if they had closed and (b) pay for luck without adjustment for possibility of failure (risk).
  • Counter-cyclical capital rules will mean that banks will have much more capital going into the next crisis, so will be able to afford to lose much more.  Why is that good?
  • Systemic risk is when market reaches equilibrium at below full production capacity.  (Isn’t that a Depression – Funny how the words change)
  • Need to pay attention to who has cash when the crisis happens.  They are the potential white knights.
  • Correlations are caused by cross holdings of market participants – the Hunts held cattle and silver in the 1980s, causing correlations in those otherwise unrelated markets.  Such correlations are totally unpredictable in advance.
  • National Institute of Finance proposal for a new body to capture and analyze ALL financial market data to identify interconnectedness and future systemic risks.
  • If there is better information about systemic risk, then firms will manage their own systemic risk (Wanna Bet?)
  • Proposal to tax firms based on their contribution to gross systemic risk.
  • Stress testing should focus on changes to correlations
  • Treatment of the GSE preferred stock holders was the actual start of the panic.  Lehman a week later was actually the second shoe to drop.
  • Banks need to include variability of Vol in their VAR models.  Models that allowed Vol to vary were faster to pick up on problems of the financial markets.  (So the stampede starts a few weeks earlier.)  See the sketch after this list.
  • Models turn on, Brains turn off.
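
On that last point about variable Vol, here is a minimal sketch assuming a simple exponentially weighted (RiskMetrics-style) variance update; the return series, position size and decay factor are all hypothetical:

```python
import math

def ewma_var(returns, position, lam=0.94, z=2.326):
    """One-day 99% VAR with exponentially weighted volatility.
    lam is the decay factor; z is roughly the 99% normal quantile."""
    var_t = returns[0] ** 2                     # seed the variance estimate
    for r in returns[1:]:
        var_t = lam * var_t + (1 - lam) * r * r
    return z * math.sqrt(var_t) * position

calm  = [0.002, -0.001, 0.001, -0.002, 0.001]
jumpy = calm + [-0.03, 0.025, -0.04]            # vol spikes in the last three days
print(f"calm:  {ewma_var(calm, 1_000_000):,.0f}")   # a few thousand dollars
print(f"jumpy: {ewma_var(jumpy, 1_000_000):,.0f}")  # an order of magnitude larger
```

A fixed-window VAR would barely move until the spike days dominated the window; the weighted version reacts within days, which is exactly why the stampede starts a few weeks earlier.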

Turn VAR Inside Out – To Get S

November 13, 2009

S

Survival.  That is what you really want to know.  When the Board meeting ends, the last thing that they should hear is management assuring them that the company will still be in business when the next meeting is due to be held.

S

But it really is not in terms of bankruptcy, or even regulatory take-over.  If your firm is in the assurance business, then the company does not necessarily need to go that far.  There is usually a point, one that might be pretty remote from bankruptcy, where the firm loses the confidence of the market and is no longer able to do business.  And good managers know exactly where that point lies.

S

So S is the likelihood of avoiding that point of no return.  It is a percentage.  Some might cry that no one will understand a percentage.  That they need dollars to understand.  But VAR includes a percentage as well.  Just because no one says the percentage, that does not mean it is not there.  It actually means that no one is even bothering to try to help people understand what VAR is.  The VAR number is really one part of a three-part sentence:

The 99% VAR over one year is $67.8 M.  By itself, VAR does not tell you whether the firm is in trouble.  If the VAR doubles from one period to the next, is the firm in trouble?  That cannot be determined without further information.

S

Survival is the probability that, given the real risks of the firm and the real capital of the firm, the firm will NOT sustain a loss large enough to put an end to its business model.  If your S is 80%, then there is about a 50% chance that your firm will not survive three years!  But if your S is 95%, then there is a 50-50 chance that your firm will last at least 13 years.  This arithmetic is why a firm that makes long term promises, like an insurer, needs to have a very high S.  An S of 95% does not really seem high enough.
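
The arithmetic is easy to check, assuming the one-year survival probability applies independently each year, so that surviving n years has probability S**n:

```python
import math

def years_to_coin_flip(s):
    """Years until cumulative survival S**n first drops to 50%."""
    return math.log(0.5) / math.log(s)

print(f"{0.80 ** 3:.2f}")                  # ~0.51: at S = 80%, three years is a coin flip
print(f"{0.95 ** 13:.2f}")                 # ~0.51: at S = 95%, thirteen years is a coin flip
print(f"{years_to_coin_flip(0.985):.1f}")  # ~45.9 years at S = 98.5%
```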

S

Survival is something that can be calculated with the existing VAR model.  Instead of focusing on an arbitrary probability, the calculation instead focuses on the loss that management feels is enough to put them out of business.  S can be recalculated after a proposed share buyback or payment of dividends.  S responds to management actions and assists management decisions.
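
A minimal sketch of that calculation, assuming you already have simulated one-year losses out of the VAR/capital model; the normal sample below is only a stand-in for real model output, and the point of no return is an assumed figure:

```python
import random

random.seed(1)
# stand-in for the model's simulated one-year losses, in $M
simulated_losses = [random.gauss(0.0, 100.0) for _ in range(100_000)]

point_of_no_return = 230.0  # assumed $M loss at which the market loses confidence

# S is simply the fraction of scenarios that stay below that loss
S = sum(loss < point_of_no_return for loss in simulated_losses) / len(simulated_losses)
print(f"S = {S:.1%}")  # ~98.9% under these assumptions
```

Recalculating S after a proposed buyback is just re-running this with the capital, and hence the point of no return, reduced.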

If your board asks how much risk you are taking, try telling them the firm has a 98.5% Survival probability.  That might actually make more sense to them than saying that the firm might lose as much as $523 M at a 99% confidence interval over one year.

So turn your VAR inside out – to get S 

VAR is not a Good Risk Measure

November 6, 2009

Value at Risk (VAR) has taken a lot of heat lately and deservedly so.

VAR, as banks are required to calculate it, relies solely on recent past data for calibration.  The use of “recent” data means that following any period of low losses, the VAR measure will show low risk.  That is just not the case.  It fails to recognize the longer term volatility that might exist.  In other words if there are problems that have a periodicity longer than the usual one year time frame of VAR, then VAR will ignore them most of the time and over emphasize them some of the time. Like the stopped clock that is right twice a day, except that VAR might never be right.

Risk models can be calibrated to history, long term or short term; to future expectations, long term or short term; or to assumptions consistent with market prices, either spot or over some period of time.  Of those six choices, VAR is calibrated from one of the less useful possible choices.

What VAR does is to answer the question of what would the 1/100 loss have been had I held the current risk positions over the past year.  The advantage of the definition chosen is that you can be sure of consistency.  However, that is only a consistently useful result if you always believe that the world will remain exactly as risky as it was in the past year.
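
A minimal sketch of exactly that definition; the daily return history below is randomly generated, standing in for the past year of market moves:

```python
import random

random.seed(7)
daily_returns = [random.gauss(0.0, 0.01) for _ in range(250)]  # stand-in for last year's moves

position = 10_000_000.0  # today's holdings, in dollars
# P&L today's book would have shown on each historical day, worst last
losses = sorted(-r * position for r in daily_returns)

var_99 = losses[int(0.99 * len(losses))]  # roughly the 1-in-100 day
print(f"99% one-day VAR: ${var_99:,.0f}")
```

Nothing in the calculation looks forward; change the input window and the "risk" changes with it.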

If you believe in equilibrium, then of course the next year will be very similar to the last.  So risk management is a very small task relating to keeping things in line with past variability.  However, the world and the markets do not evidence a fundamental belief in equilibrium.  Some years are riskier than others.

So VAR is not a Good Risk Measure if it is taken alone.  If used with other more forward looking risk measures, it can be a part of a suite of information that can lead to good risk management.

On the other hand, if you divorce the idea of VAR from the actual implementation of VAR in the banks, then you can conclude that VAR is not a Bad Risk Measure.

Myths of Market Consistent Valuation

October 31, 2009

    Guest Post from Elliot Varnell

    Myth 1: An arbitrage free model will by itself give a market consistent valuation.

    An arbitrage-free model which is calibrated to deep and liquid market data will give a market consistent valuation. An arbitrage-free model which ignores available deep and liquid market data does not give a market consistent valuation. Having said this, there is not a tight definition of what constitutes deep and liquid market data, therefore there is no tight definition of what constitutes market consistent valuation. For example a very relevant question is whether calibrating to historic volatility can be considered market consistent if there is a marginally liquid market in options. CEIOPs CP39 published in July 2009 appears to leave open the question of which volatility could be used, while CP41 requires that a market is deep and liquid, transparent, and that these properties are permanent.

    Myth 2: A model calibrated to deep and liquid market data will give a Market Consistent Valuation.

    A model calibrated to deep and liquid market data will only give a market consistent valuation if the model is also arbitrage free. If a model ignores arbitrage-free dynamics then it could still be calibrated to replicate certain prices; however, this would not be a sensible framework for marking to model the prices of other assets and liabilities, as is required for the valuation of many participating life insurance contracts. Having said this, the implementations of some theoretically arbitrage-free models are not always fully arbitrage free themselves, due to issues such as discretisation, although they can be designed so that any arbitrage is not noticeable within the level of materiality of the overall technical provision calculation.

    Myth 3: Market Consistent Valuation gives the right answer.

    Market consistent valuation does not give the right answer, per se, but an answer conditional on the model and the calibration parameters. The valuation is only as good as these underlying assumptions. One thing we can be sure of is that the model will be wrong in some way. This is why understanding and documenting the weakness of an ESG model and its calibration is as important as the actual model design and calibration itself.

    Myth 4: Market Consistent Valuation gives the amount that a 3rd party will pay for the business.

    Market Consistent Valuation (as calculated using an ESG) gives a value based on pricing at the margin. As with many financial economic models the model is designed to provide a price based on a small scale transaction, ignoring trading costs and market illiquidity. The assumption is made that the marginal price of the liability can be applied to the entire balance sheet. Separate economic models are typically required to account for micro-market features; for example the illiquidity of markets or the trading and frictional costs inherent from following an (internal) dynamic hedge strategy. Micro-market features can be most significant in the most extreme market conditions; for example a 1-in-200 stress event.

    Even allowing for the micro-market features, a transaction price will account (most likely in a much less quantitative manner than using an ESG) for the hard-to-value assets (e.g. franchise value) or hard-to-value liabilities (e.g. contingent liabilities).

    Myth 5: Market Consistent Valuation is no more accurate than Discounted Cash Flow techniques using long term subjective rates of return.

    The previous myths could have suggested that market consistent valuation is in some way devalued or not useful. This is certainly the viewpoint of some actuaries, especially in the light of the recent financial crisis. However it could be argued that market consistent valuation, if done properly, gives a more economically meaningful value than traditional DCF techniques and provides better disclosure than traditional DCF. It does this by breaking down the problem into clear assumptions about what economic theory is being applied and what assumptions are being made. By breaking down the models and assumptions, weaknesses are more readily identified and economic theory can be applied.


Toward a New Theory of the Cost of Equity Capital

October 18, 2009

From David Merkel, Aleph Blog

I have never liked using MPT [Modern Portfolio Theory] for calculating the cost of equity capital for two reasons:

  • Beta is not a stable parameter; also, it does not measure risk well.
  • Company-specific risk is significant, and varies a great deal.  The effects on a company with a large amount of debt financing are significant.

What did they do in the old days?  They added a few percent on to where the company’s long debt traded, less for financially stable companies, more for those that took significant risks.  If less scientific, it was probably more accurate than MPT.  Science is often ill-applied to what may be an art.  Neoclassical economics is a beautiful shining edifice of mathematical complexity and practical uselessness.

I’ve also never been a fan of the Modigliani-Miller irrelevance theorems.  They are true in fair weather, but not in foul weather.  The costs of getting into financial stress are high, much more so when a firm is teetering on the edge of insolvency.  The cost of financing assets goes up dramatically when a company needs financing in bad times.

But the fair weather use of the M-M theorems is still useful, in my opinion.  The cost of the combination of debt, equity and other instruments used to finance depends on the assets involved, and not the composition of the financing.  If one finances with equity only, the equityholders will demand less of a return, because the stock is less risky.  If there is a significant, but not prohibitively large, slug of debt, the equity will be more risky, and will sell at a higher prospective return, or a lower P/E or P/Free Cash Flow.
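
A minimal sketch of that fair-weather logic (Modigliani-Miller Proposition II), with hypothetical rates; the point is that the blended cost of financing stays pinned to the riskiness of the assets however the financing is split:

```python
def levered_equity_return(r_assets, r_debt, debt, equity):
    """M-M Proposition II: r_e = r_a + (D/E) * (r_a - r_d)."""
    return r_assets + (debt / equity) * (r_assets - r_debt)

r_a, r_d = 0.09, 0.05  # hypothetical return on assets and cost of debt
for debt, equity in [(0, 100), (50, 50), (80, 20)]:
    r_e = levered_equity_return(r_a, r_d, debt, equity)
    blended = (debt * r_d + equity * r_e) / (debt + equity)
    print(f"D/E {debt}/{equity}: equity return {r_e:.1%}, blended cost {blended:.1%}")
# equity return climbs from 9% to 25% as leverage rises; blended cost stays 9%
```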

Securitization is another example of this.  I will use a securitization of commercial mortgages [CMBS], to serve as my example here.  There are often tranches rated AAA, AA+, AA, AA-, A+, A, A-, BBB+, BBB, BBB-, and junk-rated tranches, before ending with the residual tranche, which has the equity interest.

That is what the equity interest is – the party that gets the leftovers after all of the more senior capital interests get paid.  In many securitizations, that equity tranche is small, because the underlying assets are high quality.  The smaller the equity tranche, the greater percentage reward for success, and the greater possibility of a total wipeout if things go wrong.  That is the same calculus that lies behind highly levered corporations, and private equity.
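
A minimal sketch of that calculus, with invented tranche sizes and yields; a thin first-loss tranche means outsized rewards in clean years and violent losses otherwise:

```python
def equity_tranche_return(pool, equity_pct, collateral_loss_pct, excess_yield):
    """Return on the first-loss (equity) tranche of a securitization."""
    equity = pool * equity_pct
    first_losses = min(pool * collateral_loss_pct, equity)  # equity absorbs losses first
    leftovers = pool * excess_yield  # residual income after senior tranches are paid
    return (leftovers - first_losses) / equity

pool = 100_000_000
for loss in (0.00, 0.01, 0.03):
    r = equity_tranche_return(pool, 0.03, loss, 0.01)
    print(f"collateral losses {loss:.0%} -> equity tranche return {r:+.0%}")
# +33% with no losses, 0% at 1% losses, -67% at 3%: small tranche, huge swings
```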

All of this follows the contingent claims model that Merton posited regarding how debt should be priced, since the equityholders have the put option of giving the debtholders the firm if things go bad, but the equityholders have all of the upside if things go well.
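
A minimal sketch of that contingent-claims view, pricing equity as a call option on the firm's assets struck at the face value of debt (standard Black-Scholes call formula; all inputs hypothetical):

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def merton_equity(V, D, r, sigma, T):
    """Equity = call on firm asset value V, struck at debt face value D."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return V * N(d1) - D * math.exp(-r * T) * N(d2)

V, D, r, sigma, T = 120.0, 100.0, 0.03, 0.25, 1.0
equity = merton_equity(V, D, r, sigma, T)
debt = V - equity  # debtholders get whatever the equity call leaves behind
print(f"equity {equity:.2f}, risky debt {debt:.2f}, riskless debt {D * math.exp(-r * T):.2f}")
# the gap between risky and riskless debt value is the price of the default put
```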

So, using the M-M model, Merton’s model, and securitization, which are really all the same model, I can potentially develop estimates for where equities and debts should trade.  But for average investors, what does that mean?  How does that instruct us in how to value stock and bonds of the same company against each other?

There is a hierarchy of yields across the instruments that finance a corporation.  The driving rule should be that riskier instruments deserve higher yields.  Senior bonds trade with low yields, junior bonds at higher yields, and preferred stock at higher yields yet.  As for common stocks, they should trade at an earnings or FCF yield greater than that of the highest after-tax yield on debts and other instruments.

Thus, an application of contingent claims theory to the firm, much as Merton did it, should serve as a replacement for MPT in order to estimate the cost of capital for a firm, and for the equity itself.  Now, there are quantitative debt raters like Egan-Jones and the quantitative side of Moody’s (the part that bought KMV).  If they are not doing this already, this is another use for the model: to be able to consult with corporations over the cost of capital for a firm, and for the equity itself.  This can replace the use of beta in calculations of the cost of equity, and lead to a more sane measure of the weighted average cost of capital.

Values could then be used by private equity for a more accurate measurement of the cost of capital, and estimates of where a portfolio company could do an IPO.  The answer varies with the assets financed, and the degree of leverage already employed.  Beyond that, CFOs could use the data to see whether Wall Street was giving them fair financing options, and take advantage of financing when it is favorable.

A Model Defense

October 11, 2009

From Chris Mandel

Risk modeling is the key to successful risk management. Not quite. Not even close. It has been headline news for most of this year, though. “Risk models got it wrong,” the headlines said. Billions were written off or lost. Chief Risk Officers were fired and some scapegoated.

By quick, yet unfounded extension, enterprise risk management has failed, just like so many pundits predicted.

Wrong again. Like every other component of risk frameworks, no one part is key to the whole. In fact, each part is important to a successful approach to understanding and successfully managing risk.

But what about risk modeling? No one model can be expected to give just the “right” answer.

More often, multiple and varied data points are more useful to an effective analysis. Think traditional actuarial work in casualty insurance. Most good actuaries use multiple methods or models to get to their range of predicted values.

So it is in modeling all kinds of risks. More than one model gets you to a better answer most of the time. But a better analysis doesn’t stop there. Supplemental approaches can often add a lot. One such approach is the use of expert opinion.

Continued in Risk & Insurance

Law of Risk & Light

October 7, 2009

Risk management is all about making conscious decisions about risk taking.  Fully recognizing the potential losses that could result from a risky undertaking.  But in many camps, risk management is being simplified and simplified to a point where it may well mislead CEOs and Boards about the potential effectiveness of an oversimplified risk management regime.

These simplified risk management regimes are often in violation of the Law.  The Law of Risk and Light…

Risks in the light shrink, Risks in the dark grow
Return for Risks in the light shrinks faster than risk
Return for Risks in the dark does not grow as fast as risk

What this means is that risks that are visible to the market (in the light) will be managed by the market. The degree of uncertainty around the risk will shrink. With decreased uncertainty, the risk premium will shrink. With broad comfort, demand will rise; with increased demand, risk premium will shrink further.

Risks in the dark are risks that are not visible or known to the market. If the market charges little or nothing for a risk, then those who are aware of the risk will bring more and more of that risk to market. And if the market continues to be unaware of a risk, then more and more extreme versions of the risk will be brought to market, the risk will grow.
As the risks grow and grow, that growth might be noticed faintly by the market as a shadow of a risk. Some market participants are canny enough to know that if someone really wants to do a transaction, then a higher price for that transaction is probably in order, even if they do not fully understand the underlying reasons.

This law is as fundamental to risk management as supply and demand is fundamental to microeconomics.  Any risk management actions that are taken or planned without recognition of the risks that may initially be in the dark could end up being as flawed as management without consideration of risk.

Risk & Light was the winner of the Practical Paper Award at the 2009 Enterprise Risk Management Symposium

Unrisk – Part 3

October 6, 2009

From Jawwad Farid

Transition Matrix

Here is another way of looking at it. It is called a transition matrix. All it does is track how something rated/scored in a given class moves across classes over time.

[Image t1: a sample transition matrix]
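
A minimal sketch of such a matrix with made-up probabilities; rows are this year's class, columns next year's, and default is absorbing:

```python
import numpy as np

classes = ["A", "B", "C", "Default"]
T = np.array([
    [0.90, 0.07, 0.02, 0.01],  # an A credit mostly stays A
    [0.05, 0.85, 0.07, 0.03],
    [0.01, 0.09, 0.75, 0.15],
    [0.00, 0.00, 0.00, 1.00],  # default is absorbing
])

portfolio = np.array([0.50, 0.35, 0.15, 0.00])  # share of exposure in each class today
one_year = portfolio @ T
five_years = portfolio @ np.linalg.matrix_power(T, 5)
print(dict(zip(classes, one_year.round(3))))    # expected mix one year out
print(dict(zip(classes, five_years.round(3))))  # drift toward default over five years
```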

How do you link to profitability?

[Image t2: expected earnings calculation]

This is how profitability is calculated generally. Take the amount you have lent, multiply it by your expected adjusted return and voila, you have expected earnings. But that is not the true picture.

[Image t3: cost of funds and provisions]

What you are missing is the impact of two more elements.  Your cost of funds (the money you have lent is actually not yours; you have borrowed it at a cost and that cost needs to be repaid) and your best and worst case provisions.  So true profitability would look something like this.

[Image t4: true profitability]

That is a pretty picture if I ever saw one.  Especially when you compare the swing from the original projected number.  Back to the question clients ask.  Where do projected provisions come from?  From transition matrices.  And where do transition matrices come from?  From applying your understanding of your distribution to your portfolio.
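
A minimal sketch of that whole chain with invented numbers: read the default probability off the transition matrix, turn it into a provision, and subtract it together with the cost of funds:

```python
amount_lent = 1_000_000
gross_yield = 0.12         # expected adjusted return on the lending book
cost_of_funds = 0.07       # the money lent was itself borrowed at a cost
loss_given_default = 0.60  # assumes recovery of 40 cents on the dollar

# default probabilities read off the transition matrix (best and worst case)
pd_best, pd_worst = 0.02, 0.06

naive_earnings = amount_lent * gross_yield  # the "pretty picture" before adjustments
for label, pd in (("best case", pd_best), ("worst case", pd_worst)):
    provision = amount_lent * pd * loss_given_default
    true_profit = naive_earnings - amount_lent * cost_of_funds - provision
    print(f"{label}: provision {provision:,.0f}, true profit {true_profit:,.0f}")
# naive earnings of 120,000 swing down to 38,000 or even 14,000
```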

Remember these are not my ideas. They are hardly even original. The Goldman trader who first asked me about moment generating functions wanted to understand how well I understood the distributions that were going to rule my life on Fleet Street.

Full credit for posing the distribution problem goes to our friend NNT (Nassim Nicholas Taleb), who first posed this as getting comfortable with the generating function problem. He wrote all of three books on the subject and then some. Rumor has it that he also made an obscene amount of money in the process (not with book writing, but with understanding the distribution). All he suggested was that before you took a punt, you try to understand how much trouble you could possibly land in, based on how what you are punting on is likely to behave in the future. Don’t just look at the past and the present; look at the range: likely, unlikely, expected, unexpected.

UNRISK Part 1

UNRISK Part 2

