Archive for the ‘risk assessment’ category

Determining Risk Capital

February 5, 2022

Knowing the amount of surplus an insurer needs to support risk is fundamental to enterprise risk management (ERM) and to the own risk and solvency assessment (ORSA).

With the increasing focus on ERM, regulators, rating agencies, and insurance and reinsurance executives are more focused on risk capital modeling than ever before.

Risk – and the economic capital associated with it – cannot actually be measured as you can measure your height. Risk is about the future.

To measure risk, you must measure it against an idea of the future. A risk model is the most common tool for comparing one idea of the future against others.

Types of Risk Models

There are many ways to create a model of risk to provide quantitative metrics and derive a figure for the economic capital requirement.

Each approach has inherent strengths and weaknesses; the trade-offs are between factors such as implementation cost, complexity, run time, ability to represent reality, and ease of explaining the findings. Different types of models suit different purposes.

Each of the approaches described below can be used for purposes such as determining economic capital need, capital allocation, and making decisions about risk mitigation strategies.

Some methods may fit a particular situation, company, or philosophy of risk better than others.

Factor-Based Models

Here the concept is to define a relatively small number of risk categories; for each category, we require an exposure metric and a measure of riskiness.

The overall risk can then be calculated by multiplying “exposure × riskiness” for each category, and adding up the category scores.
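As a rough sketch, the arithmetic reduces to a weighted sum. The categories, exposure amounts, and factors below are invented for illustration – they are not taken from any actual regulatory formula (and note that real formulas such as the NAIC RBC also apply a covariance adjustment rather than a straight sum):

```python
# Illustrative factor-based capital calculation; exposure metrics and
# risk factors are made up for this sketch, not actual regulatory values.
exposures = {"premium": 500.0, "reserves": 800.0, "bonds": 1200.0, "equities": 300.0}
factors   = {"premium": 0.15,  "reserves": 0.20,  "bonds": 0.02,  "equities": 0.30}

capital = sum(exposures[k] * factors[k] for k in exposures)
print(f"Indicated risk capital: {capital:.0f}")  # 75 + 160 + 24 + 90 = 349
```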

Because factor-based models are transparent and straightforward to apply, they are commonly used by regulators and rating agencies.

The NAIC Risk-Based Capital and the Solvency II Standard Formula are calculated in this way, as are A.M. Best’s BCAR score and S&P’s Insurance Capital Model.

Stress Test Models

Stress tests can provide valuable information about how a company might hold up under adversity. As a stand-alone measure or as an adjunct to factor-based methods, stress tests can provide concrete indications that reflect company-specific features without the need for complex modeling. A robust stress testing regime might reflect, for example:

Worst company results experienced in last 20 years
Worst results observed across peer group in last 20 years
Worst results across peer group in last 50 years (or, 20% worse than stage 2)
Magnitude of stress-to-failure

Stress test models focus on the severity of possible adverse scenarios. While the framework used to create the stress scenario may allow rough estimates of likelihood, this is not the primary goal.
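A minimal sketch of how the stress ladder above might be computed (the result series are hypothetical, stated as annual results in percent of surplus, with losses negative):

```python
# Hypothetical annual results in % of surplus (losses negative).
own_20y  = [4, -2, 7, -11, 3, 5, -6, 9, 1, -3, 8, 2, -14, 6, 0, 5, -4, 7, 3, -8]
peer_20y = own_20y + [-19, 2, -9, 11]   # own results pooled with peer observations

stage1 = min(own_20y)                   # worst own result in 20 years
stage2 = min(peer_20y)                  # worst peer-group result in 20 years
stage3 = 1.2 * stage2                   # proxy: 20% worse than stage 2
print(stage1, stage2, stage3)           # -14 -19 -22.8
```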

High-Level Stochastic Models

Stochastic models enable us to analyze both the severity and likelihood of possible future scenarios. Such models need not be excessively complex. Indeed, a high-level model can provide useful guidance.

Categories of risk used in a high-level stochastic model might reflect the main categories from a factor-based model already in use; for example, the model might reflect risk sources such as underwriting risk, reserve risk, asset risk, and credit risk.

A stochastic model requires a probability distribution for each of these risk sources. This might be constructed in a somewhat ad-hoc way by building on the results of a stress test model, or it might be developed using more complex actuarial analysis.

Ideally, the stochastic model should also reflect any interdependencies among the various sources of risk. Timing of cash flows and present value calculations may also be included.
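The sketch below shows what such a high-level model might look like. Everything in it is an illustrative assumption: the four marginal distributions, their parameters, and the correlation matrix, with a Gaussian copula standing in for whatever dependency structure the company actually adopts.

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(42)
n = 100_000

# Assumed correlations among underwriting, reserve, asset and credit risk.
corr = np.array([[1.0, 0.5, 0.1, 0.2],
                 [0.5, 1.0, 0.1, 0.2],
                 [0.1, 0.1, 1.0, 0.3],
                 [0.2, 0.2, 0.3, 1.0]])
z = rng.multivariate_normal(np.zeros(4), corr, size=n)
u = norm.cdf(z)  # Gaussian copula: correlated uniforms

# Illustrative marginal loss distributions for each risk source.
underwriting = lognorm.ppf(u[:, 0], s=0.5, scale=100)
reserve      = lognorm.ppf(u[:, 1], s=0.4, scale=150)
asset        = norm.ppf(u[:, 2], loc=0, scale=60)
credit       = lognorm.ppf(u[:, 3], s=0.8, scale=20)

total = underwriting + reserve + asset + credit
capital = np.percentile(total, 99.5) - total.mean()  # 1-in-200 VaR over the mean
print(f"Indicated economic capital: {capital:,.0f}")
```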

Detailed Stochastic Models

Some companies prefer to construct a more detailed stochastic model. The level of detail may vary; in order to keep the model practical and facilitate quality control, it may be best to avoid making the model excessively complicated, but rather develop only the level of granularity required to answer key business questions.

Such a model may, for example, sub-divide underwriting risk into several lines of business and/or profit centers, and associate to each of these units a probability distribution for both the frequency and the severity of claims. Naturally, including more granular sources of risk makes the question of interdependency more complicated.
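A minimal sketch of the frequency/severity simulation for one such unit (Poisson frequency and lognormal severity are common choices; the parameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def annual_losses(freq_mean=12.0, sev_mu=9.0, sev_sigma=1.2, n_sims=50_000):
    """Simulate total annual loss: Poisson claim counts, lognormal severities."""
    counts = rng.poisson(freq_mean, n_sims)
    return np.array([rng.lognormal(sev_mu, sev_sigma, c).sum() for c in counts])

line_a = annual_losses()
print(f"mean {line_a.mean():,.0f}, 99.5th percentile "
      f"{np.percentile(line_a, 99.5):,.0f}")
```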

Multi-Year Strategic Models with Active Management

In the real world, business decisions are rarely made in a single-year context. It is possible to create models that simulate multiple, detailed risk distributions over a multi-year time frame.

And it is also possible to build in “management logic,” so that the model responds to evolving circumstances in a way that approximates what management might actually do.

For example, if a company sustained a major catastrophic loss, in the ensuing year management might buy more reinsurance to maintain an adequate A.M. Best rating, rebalance the investment mix, and reassess growth strategy.

Simulation models can approximate this type of decision making, though of course the complexity of the model increases rapidly.
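One way to express such “management logic” is as a simple rule evaluated at each simulated year-end. The rule below is purely illustrative – the threshold, the step size, and the cap are assumptions, not anyone’s actual strategy:

```python
def next_year_cession(current_cession: float, cat_loss: float,
                      surplus: float) -> float:
    """After a major catastrophe year, cede more to reinsurers (illustrative)."""
    if cat_loss > 0.15 * surplus:                 # "major" cat year (assumed)
        return min(current_cession + 0.10, 0.60)  # buy more cover, capped
    return current_cession

print(next_year_cession(0.25, cat_loss=180.0, surplus=1_000.0))  # -> 0.35
```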

Key Questions and Decisions

Once a type of risk model has been chosen, there are many different ways to use this model to quantify risk capital. To decide how best to proceed, insurer management should consider questions such as:

  • What are the issues to be aware of when creating or refining our model?
  • What software offers the most appropriate platform?
  • What data will we need to collect?
  • What design choices must we make, and which selections are most appropriate for us?
  • How best can we aggregate risk from different sources and deal with interdependency?
  • There are so many risk metrics that can be used to determine risk capital – Value at Risk, Tail Value at Risk, Probability of Ruin, etc. – what are their implications, and how can we choose among them?
  • How should this coordinate with catastrophe modeling?
  • Will our model actually help us to answer the questions most important to our firm?
  • What are best practices for validating our model?
  • How should we allocate risk capital to business units, lines of business, and/or insurance policies?
  • How should we think about the results produced by our model in the context of rating agency capital benchmarks?
  • Introducing a risk capital model may create management issues – how can we anticipate and deal with these?

In answering these questions, it is important to consider the intended applications. Will the model be used to establish or refine risk appetite and risk tolerance?

Will modeled results drive reinsurance decisions, or affect choices about growth and merger opportunities? Does the company intend to use risk capital for performance management, or ratemaking?

Will the model be used to complete the NAIC ORSA, or inform rating agency capital adequacy discussions?

The intended applications, along with the strengths and weaknesses of the various modeling approaches and range of risk metrics, should guide decisions throughout the economic capital model design process.


Risk Measurement & Reporting

October 18, 2021

Peter Drucker is reported to have once said “what gets measured, gets managed.”  That truism of modern management applies to risk as well as it does to other more commonly measured things like sales, profits and expenses.

Regulators take a similar view; what gets measured should get managed.  ORSA frameworks aim to support prospective solvency by giving management a clear view of their ongoing corporate risk positions.

This in turn should reduce the likelihood of large unanticipated losses if timely action can be taken when a risk limit is breached.

From a regulatory perspective, each identified risk should have at least one measurable metric that is reported upwards, ultimately to the board.

The Need to Measure Up

Many risk management programs build up extensive risk registers but are stymied by this obvious next step – that of measuring the risks that have been identified.

Almost every CEO can cite the company’s latest figures for sales, expenses and profits, but very few know what the company’s risk position might be.

Risks are somewhat more difficult to measure than profits due to the degree to which they depend upon opinions.

Insurance company profits are already seen as opaque by many non-industry observers because profits depend on more than just sales and expenses: profits depend upon claims estimates, which are based on current (and often incomplete) information about those transactions.

Risk, on the other hand, is all about things that might happen in the future: specifically, bad things that might happen in the future.

A risk measure reflects an opinion about the size of the exposure to future losses.  All risk measures are opinions; there are no facts about the future.  At least not yet.

Rationalizing Risk

There are, however, several ways that risk can be measured to facilitate management in the classical sense that Drucker was thinking of.

That classic idea is the management control cycle, where management sets a plan and then monitors emerging experience in comparison to that plan.

To achieve this objective, risk measures need to be consistent from period to period.  They need to increase when the volume of activity increases, but they also need to reflect changes in the riskiness of activities as time passes and as the portfolio of the risk taker changes.

Good risk measures provide a projected outcome; but in some cases, such calculations are not available and risk indicators must be used instead.

Risk indicators measure something that is closely related to the risk and so can be expected to vary similarly to an actual risk measure, if one were available.

For insurers, current state-of-the-art risk measures are based upon computer models of the risk taking activities.

With these models, risk managers can determine a broad range of possible outcomes for a risk taking activity and then define the risk measure as some subset of those outcomes.

Value at Risk

The most common such measure is called value at risk (VaR).  If the risk model is run with a random element, usually called a Monte Carlo or stochastic model, the 99% VaR is the 99th percentile result – the level that only 1% of outcomes exceed: roughly the 2nd worst result in a run of 100 outcomes, or the 10th worst out of 1000.

Conditional Tail Expectation

This value might represent the insurer’s risk capital target.  A similar risk measure is the conditional tail expectation (CTE), which is also called the tail value at risk (TVaR).

The 99% CTE is the average of all the values that are worse than the 99% VaR. You can think of these two values in this manner: if a company holds capital at the 99% VaR level, then the 99% CTE minus the 99% VaR is the average amount of loss to policyholders should the company become insolvent.
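A minimal sketch of both measures computed from simulation output (the lognormal loss model and the percentile convention are assumptions for illustration; losses are positive values):

```python
import numpy as np

def var(losses, level=0.99):
    return np.percentile(losses, 100 * level)

def cte(losses, level=0.99):
    v = var(losses, level)
    return losses[losses >= v].mean()  # average of outcomes beyond the VaR

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)  # hypothetical loss model
print(f"99% VaR: {var(losses):,.1f}   99% CTE: {cte(losses):,.1f}")
```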

Rating agencies, and increasingly regulators, require companies to provide results of risk measures from stochastic models of natural catastrophes.

Stochastic models are also used to estimate other risk exposures, including underwriting risk from other lines of insurance coverage and investment risk.

In addition to stochastic models, insurers also model possible losses under single well-defined adverse scenarios. The results are often called stress tests.

Regulators are also increasingly calling for stress tests to provide risk measures that they feel are more easily understood and compared among companies.

Key Risk Indicators

Most other risks, especially strategic and operational risks, are monitored by key risk indicators (KRIs). For these risks, good measures are not available and so we must rely on indicators.

For example, an economic downturn could pose risk to an insurer’s growth strategy.  While it may be difficult to measure the likelihood of a downturn or the extent to which it would impair growth, the insurer can use economic forecasts as risk indicators.

Of course, simply measuring risk is insufficient.  The results of the measurement must be communicated to people who can and will use the risk information to appropriately steer the future activity of the company.

Risk Dashboard

Simple charts of numbers are sufficient in some cases, but the state-of-the-art approach to presenting risk measurement information is the risk dashboard.

With a risk dashboard, several important charts and graphs are presented on a single page, like the dashboard of a car or airplane, so that the user can see important information and trends at a glance.

The risk dashboard is often accompanied by the charts of numbers, either on later pages of a hard copy or on a click-through basis for on-screen risk dashboards.

Dashboard Example

[Figure: sample risk dashboard]

How to Show the Benefits of Risk Management

January 2, 2015

From Harry Hall at www.pmsouth.com

Sometimes we struggle to illustrate the value of risk management. We sense we are doing the right things. How can we show the benefits?

Some products such as weight loss programs are promoted by showing a “before picture” and an “after picture.” We are sold by the extraordinary improvements.

The “before picture” and “after picture” are also a powerful way to make known the value of risk management.

We have risks in which no strategies or actions have been executed. In other words, we have a “before picture” of the risks. When we execute appropriate response strategies such as mitigating a threat, the risk exposure is reduced. Now we have the “after picture.”

Let’s look at one way to create pictures of our risk exposure for projects, programs, portfolios, and enterprises.

Say Cheese

The first step to turning risk assessments into pictures is to assign risk levels.

Assume that a Project Manager is using a qualitative rating scale of 1 to 10, 10 being the highest, to rate Probability and Impact. The Risk Score is calculated by multiplying Probability x Impact. Here is an example of a risk table with a level of risk and the corresponding risk score range.

Level of Risk    Risk Score
Very Low         < 20
Low              21 – 39
Medium           40 – 59
High             60 – 79
Very High        > 80

Figure 1: Qualitative Risk Table
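A minimal sketch of the scoring in Figure 1 (the handling of scores of exactly 20 or 80, which the table leaves unassigned, is an assumption):

```python
def risk_level(probability: int, impact: int) -> str:
    """Map 1-10 probability and impact ratings to a risk level per Figure 1."""
    score = probability * impact
    if score <= 20:
        return "Very Low"
    if score <= 39:
        return "Low"
    if score <= 59:
        return "Medium"
    if score <= 79:
        return "High"
    return "Very High"

print(risk_level(7, 9))  # 63 -> "High"
```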

Looking Good

Imagine a Project Manager facilitates the initial risk identification and assessment. The initial assessment results in fifteen Urgent Risks – eight “High” risks and seven “Very High” risks.

Figure 2: Number of Risks before Execution of Risk Response Strategies

We decide to act on the Urgent Risks alone and leave the remaining risks in our Watch List. The team develops risk response strategies for the Urgent Risks such as ways to avoid and mitigate threats.

Figure 3: Number of Risks after Execution of Risk Response Strategies

After the project team executes the strategies, the team reassesses the risks. We see a drop in the number of Urgent Risks (lighter bars). The team has reduced the risk exposure and improved the potential for success.

How to Illustrate Programs, Portfolios, or Enterprises

Now, imagine a Program Manager managing four projects in a program. We can roll up the risks of the four projects into a single view. Figure 4 below illustrates the comparison of the number of risks before and after the execution of the risk strategies.

Figure 4: Number of Program risks before and after the execution of risk response strategies

Of course, we can also illustrate risks in a like manner at a portfolio level or an enterprise level (i.e., Enterprise Risk Management).

Tip of the Day

When you ask team members to rate risks, it is important to specify whether they are assessing the “before picture” (i.e., inherent risks), the “after picture” (i.e., residual risks), or both.  Inherent risks are risks to the project in the absence of any strategies/actions that might alter the risk.  Residual risks are risks remaining after strategies/actions have been taken.

Question: What types of charts or graphics do you use to illustrate the value of risk management?

Top 10 RISKVIEWS Posts of 2014 – ORSA Heavily Featured

December 29, 2014

RISKVIEWS believes that this may be the best top 10 list of posts in the history of this blog.  Thanks to our readers whose clicks resulted in their selection.

  • Instructions for a 17 Step ORSA Process – Own Risk and Solvency Assessment is here for Canadian insurers, coming in 2015 for US and required in Europe for 2016. At least 10 other countries have also adopted ORSA and are moving towards full implementation. This post leads you to 17 other posts that give a detailed view of the various parts to a full ORSA process and report.
  • Full Limits Stress Test – Where Solvency and ERM Meet – This post suggests a link between your ERM program and your stress tests for ORSA that is highly logical, but not generally practiced.
  • What kind of Stress Test? – Risk managers need to do a better job communicating what they are doing. Much communication about risk models and stress tests is fairly mechanical and technical. This post suggests some plain English terminology to describe the stress tests to non-technical audiences such as boards and top management.
  • How to Build and Use a Risk Register – A first RISKVIEWS post from a new regular contributor, Harry Hall. Watch for more posts along these lines from Harry in the coming months. And catch Harry on his blog, http://www.pmsouth.com
  • ORSA ==> AC – ST > RCS – You will notice a recurring theme in 2014 – ORSA. That topic has taken up much of RISKVIEWS time in 2014 and will likely take up even more in 2015 and after as more and more companies undertake their first ORSA process and report. This post is a simple explanation of the question that ORSA is trying to answer that RISKVIEWS has used when explaining ORSA to a board of directors.
  • The History of Risk Management – Someone asked RISKVIEWS to do a speech on the history of ERM. This post and the associated new permanent page are the notes from writing that speech. Much more here than could fit into a 15 minute talk.
  • Hierarchy Principle of Risk Management – There are thousands of risks faced by an insurer that do not belong in their ERM program. That is because of the Hierarchy Principle. Many insurers who have followed someone’s urging that ALL risks need to be included in ERM belatedly find out that no one in top management wants to hear from them or to let them talk to the board. A good dose of the Hierarchy Principle will fix that, though it will take time. Bad first impressions are difficult to fix.
  • Risk Culture, Neoclassical Economics, and Enterprise Risk Management – A discussion of the different beliefs about how business and risk work. A difference between the beliefs that are taught in MBA and Finance programs and the beliefs about risk that underpin ERM makes it difficult to justify spending time and money on risk management.
  • What CEO’s Think about Risk – A discussion of three different aspects of decision-making as practiced by top management of companies, and how the decision-making processes that are taught to quants can make quants less effective when trying to explain their work and conclusions.
  • Decision Making Under Deep Uncertainty – Explores the concepts of Deep Uncertainty and Wicked Problems. Of interest if you have any risks that you find yourself unable to clearly understand or if you have any problems where all of the apparent solutions are strongly opposed by one group of stakeholders or another.

Economic Capital for Banking Industry

December 22, 2014

Everything you ever wanted to know but were afraid to ask.

For the last seventeen years I have hated conversations with board members around economic capital. It is perfectly acceptable to discuss Market risk, Credit risk or interest rate mismatch in isolation, but the minute you start talking about the Enterprise, you enter a minefield.

The biggest hole in that ground is produced by correlations. The smartest board members know exactly which buttons to press to shoot your model down. They don’t do it out of malice but they won’t buy anything they can’t accept, reproduce or believe.

Attempt to explain Copulas or the stability of historical correlations in the future and your board presentation will head south. Don’t take my word for it. Try it next time.  It is not a reflection on the board; it is a simple manifestation of the disconnect that exists today between the real world of Enterprise risk and applied statistical modeling. And when it comes to banking regulation and economic capital for the banking industry, the disconnect is only growing larger.

Frustrated with our ineptitude and the state of modeling in this space, three years ago we started working on an alternate model for economic capital.  The key trigger was the shift to shortfall and probability-of-ruin models in bank regulation, as well as Taleb’s assertions about how risk results should be presented to ensure informed decision making.   While the proposed model was a simple extension of the same principles on which value at risk is based, we felt that some of our tweaks and hacks delivered on our end objective – meaningful, credible conversations with the board around economic capital estimates.

Enterprise models for estimating economic capital simply extend the regulatory value at risk (VaR) model. The theory focuses on anchoring expectations.  If institutional risk expectations max out at 97.5%, then 99.9% can represent unexpected risk. The appealing part of this approach is that the anchors can shift as more points become visible in the underlying risk distribution. In the simplest and crudest of forms, here is what economic capital models suggest:

“While regulatory capital models compensate for expected risk, economic capital should account for unexpected risk. The difference between the two estimates is the amount you need to put aside as economic capital.”
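A minimal sketch of that suggestion, with the anchoring thresholds from above and a made-up quarterly loss series standing in for the reconciled accounting data:

```python
import numpy as np

rng = np.random.default_rng(7)
quarterly_losses = rng.lognormal(mean=2.0, sigma=0.8, size=52)  # ~13 years

expected   = np.percentile(quarterly_losses, 97.5)  # regulatory anchor
unexpected = np.percentile(quarterly_losses, 99.9)  # economic capital anchor
print(f"Economic capital add-on: {unexpected - expected:.1f}")
```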

The plus point with this approach is that it ensures that economic capital requirements will always exceed regulatory capital requirements. It removes the possibility of arbitrage that occurs when this condition doesn’t hold. The downside is the estimation of dependence between business lines.  The variations that we proposed short-circuited the correlation debate. We also recommended using accounting data – data that the board had already reconciled and signed off on.


Without further ado, here is the series that presents our alternate model for estimating economic capital for the banking industry. Discuss, dissect, modify, suggest. We would love to hear your feedback.

Economic Capital – An alternate Model

Can we use the accounting data series and skip copulas and correlation modeling for business lines altogether? Take a look to find the answer.


Economic Capital Case Study – setting the context

We use publicly available data from Goldman Sachs, JP Morgan Chase, Citibank, Wells Fargo & Barclays Bank from the years 2002 to 2014 to calculate the economic capital buffers in place at these five banks. Three different approaches are used: two centered on capital adequacy, one using the regulatory Tier 1 leverage ratio.


Economic Capital Models – The appeal of using accounting data

Why does accounting data work? What is the business case for using accounting data for economic capital estimation? How does the modeling work?


Calculating Economic Capital – Using worst case losses

Our first model uses worst-case losses. If you are comfortable with value at risk terminology, this is the historical simulation approach to economic capital estimation.  We label it model one.


Calculating Economic Capital – Using volatility

Welcome to the variance-covariance model for economic capital estimation. The results will surprise you.  Presenting model two.


Calculating Economic Capital – Using Leverage ratio

We figured it was time that we moved from capital adequacy to leverage ratios.  Introducing model three.


Too Much Risk

August 18, 2014

Risk Management is all about avoiding taking Too Much Risk.

And when it really comes down to it, there are only a few ways to get into the situation of taking too much risk.

  1. Misunderstanding the risk involved in the choices made and to be made by the organization
  2. Misunderstanding the risk appetite of the organization
  3. Misunderstanding the risk taking capacity of the organization
  4. Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity

So Risk Management needs to concentrate on preventing these four situations.  Here are some thoughts regarding how Risk Management can provide that.

1. Misunderstanding the risk involved in the choices made and to be made by an organization

This is the most common driver of Too Much Risk.  There are two major forms of misunderstanding:  misunderstanding the riskiness of individual choices and misunderstanding the way that risk from each choice aggregates.  Both of these drivers were strongly in evidence in the run-up to the financial crisis.  The risk of each individual mortgage-backed security was not seriously investigated by most participants in the market.  And the aggregation of the risk from the mortgages was underestimated as well.  In both cases, there was some rationalization for the misunderstanding.  The misunderstanding was apparent to most only in hindsight.  And that is most common for misunderstood risks.  Those who are later found to have made the wrong decisions about risk were most often acting on their beliefs about the risks at the time.  This problem is particularly common for firms with no history of consistently and rigorously measuring risks.  Those firms usually have very experienced managers who have been selecting their risks for a long time, and who may work from rules of thumb.  Those firms suffer this problem most when new risks are encountered, when the environment changes, making their experience less valid, and when there is turnover of their experienced managers.  Firms that use a consistent and rigorous risk measurement process also suffer from model-induced risk blindness.  The best approach is to combine analysis with experienced judgment.

2.  Misunderstanding the risk appetite of the organization

This is common for organizations where the risk appetite has never been spelled out.  All firms have risk appetites; it is just that in many, many cases, no one knows what they are in advance of a significant loss event.  So misunderstanding the unstated risk appetite is fairly common.  But actually, the most common problem with unstated risk appetites is underutilization of risk capacity.  Because the risk appetite is unknown, some ambitious managers will push to take as much risk as possible, but the majority will be overcautious and take less risk to make sure that things are “safe”.

3.  Misunderstanding the risk taking capacity of the organization

This misunderstanding affects both companies who state their risk appetites and companies who do not.  For those who do state their risk appetite, this problem comes about when the company assumes that it has contingent capital available but does not fully understand the contingencies.  The most important contingency is the usual one regarding money – no one wants to give money to someone who really, really needs it.  The preference is to give money to someone who has lots of money and is sure to repay.  For those who do not state a risk appetite, each person who has authority to take on risks makes their own estimate of the risk appetite based upon their own estimate of the risk taking capacity.  It is likely that some will view the capacity as huge, especially in comparison to their decision.  So most often the problem is not misunderstanding the total risk taking capacity, but instead misjudging the available risk capacity.

4.  Deliberately ignoring the risk, the risk appetite and/or the risk taking capacity of the organization

A well-established risk management system will have solved the above problems.  However, that does not mean that the firm’s problems are over.  In most companies, there are rewards for success in terms of current compensation and promotions.  But it is usually difficult to distinguish luck from talent and good execution in a business about risk taking.  So there is a great temptation for managers to deliberately ignore the risk evaluation, the risk appetite and the risk taking capacity of the firm.  If the excess risk that they then take produces excess losses, then the firm may take a large loss.  But if the excess risk taking does not result in an excess loss, then there may be outsized gains reported and the manager may be seen as a highly successful person who saw an opportunity that others did not.  This dynamic will create a constant friction between the Risk staff and those business managers who have found the opportunity that they believe will propel their career forward.

So get to work, risk managers.

Make sure that your organization

  1. Understands the risks
  2. Articulates and understands the risk appetite
  3. Understands the aggregate and remaining risk capacity at all times
  4. Keeps careful track of risks and risk taking to be sure to stop any managers who might want to ignore the risk, the risk appetite and the risk taking capacity

Insurers need to adapt COSO/ISO Risk Management to achieve ERM

July 29, 2014

Both the COSO and ISO risk management frameworks describe many excellent practices.  However, in practice, insurers need to make two major changes from the typical COSO/ISO risk management process to achieve real ERM.

  1. RISK MEASUREMENT – Both COSO and ISO emphasize what RISKVIEWS calls the Risk Impressions approach to risk measurement.  That means asking people what their impression is of the frequency and severity of each risk.  Sometimes they get real fancy and also ask for an impression of Risk Velocity.  RISKVIEWS sees two problems with this for insurers.  First, impressions of risk are notoriously inaccurate.  People are just not very good at making subjective judgments about risk.  Second, the frequency/severity pair idea does not actually represent reality.  The idea properly applies to very specific incidents, not to risks, which are broad classes of incidents.  Each possible incident that makes up the class that we call a risk has a different frequency/severity pair.  There is no single pair that represents the class.  Insurers’ risks differ in one major way from the risks of non-financial firms: insurers almost always buy and sell the risks that make up 80% or more of their risk profile.  That means that to make those transactions they should be making an estimate of the expected value of ALL of those frequency and severity pairs.  No insurance company that expects to survive for more than a year would consider setting its prices based upon something as lacking in reality testing as a single frequency and severity pair.  So an insurer should apply the same discipline to measuring its risks as it does to setting its prices.  After all, risk is the business that it is in.
  2. HIERARCHICAL RISK FOCUS – Neither COSO nor ISO demands that the risk manager run to the board or senior management and proudly expect them to sit still while the risk manager expounds upon the 200 risks in the risk register.  But a depressingly large number of COSO/ISO shops do exactly that.  Then they wonder why they never get a second chance in front of top management and the board.  However, neither COSO nor ISO provides strong enough guidance regarding the Hierarchical principle that is one of the key ideas of real ERM.  COSO and ISO both start with a bottom-up process for identifying risks.  That means that many people at various levels in the company get to make input into the risk identification process.  This is the fundamental way that COSO/ISO risk management ends up with risk registers of 200 risks.  COSO and ISO do not, however, offer much if any guidance regarding how to make that into something that can be used by top management and the board.  In RISKVIEWS’ experience, the 200-item list needs to be sorted into no more than 25 broad categories.  Then those categories need to be considered the Risks of the firm and the list of 200 items considered the Riskettes.  Top management should have a say in the development of that list.  The names of the 25 Risks should be their choice.  The 25 Risks then need to be divided into three groups.  The top 5 to 7 Risks are the first rank risks that are the focus of discussions with the Board.  Those should be the Risks that are most likely to cause a financial or other major disruption to the firm.  Besides focusing on those first rank risks, the board should make sure that management is attending to all of the 25 Risks.  The remaining 18 to 20 Risks can then be divided into two ranks.  Top management should then focus on the first and second rank risks.  And they should make sure that the risk owners are attending to the third rank risks.  Top management, usually through a risk committee, needs to regularly look at these risk assignments and promote and demote risks as the company’s exposure and the risk environment change.  Now, if you are a risk manager who has recently spent a year or more constructing the list of the 200 Riskettes, you are doubtless wondering what use would be made of all that hard work.  Under the Hierarchical principle of ERM, the process described above is repeated down the org chart.  The risk committee will appoint a risk owner for each of the 25 Risks, and that risk owner will work with their list of Riskettes.  If their Riskette list is longer than 10, they might want to create a priority structure, ranking the risks as is done for the board and top management.  But if the initial risk register was done properly, then the Riskettes will be separate because there is something about them that requires something different in their monitoring or their risk treatment.  So the risk register and Riskettes will be a valuable and actionable way to organize their responsibilities as risk owner, even if they are never again shown to top management and the board.

These two ideas do not contradict the main thrust of COSO and ISO but they do represent a major adjustment in approach for insurance company risk managers who have been going to COSO or ISO for guidance.  It would be best if those risk managers knew in advance about these two differences from the COSO/ISO approach that is applied in non-financial firms.

Chicken Little or Coefficient of Riskiness (COR)

July 21, 2014

Running around waving your arms and screaming “the Sky is Falling” is one way to communicate risk positions.  But as the story goes, it is not a particularly effective approach.  The classic story lays the blame on the lack of perspective on the part of Chicken Little.  But the way that the story is told suggests that in general people have almost zero tolerance for information about risk – they only want to hear from Chicken Little about certainties.

But insurers live in the world of risk.  Each insurer has their own complex stew of risks.  Their riskiness is a matter of extreme concern.  Many insurers use complex models to assess their riskiness.  But in some cases, there is a war for the hearts and minds of the decision makers in the insurer.  It is a war between the traditional qualitative gut view of riskiness and the new quantitative view of riskiness.  One tactic in that war used by the qualitative camp is to paint the quantitative camp as Chicken Little.

In a recent post, RISKVIEWS told of a scale, a Coefficient of Riskiness.  The idea of the COR is to provide a simple basis for taking the argument about riskiness from the name-calling stage to an actual discussion about riskiness.

For each risk, we usually have some observations.  And from those observations, we can form the two basic statistical facts, the observed average and observed volatility (known as standard deviation to the quants).  But in the past 15 years, the discussion about risk has shifted away from the observable aspects of risk to an estimate of the amount of capital needed for each risk.

Now, if each risk held by an insurer could be subdivided into a large number of small risks that are similar in riskiness (including the size of potential loss), and where the reasons for the losses for each individual risk were statistically separate (independent), then the maximum likely loss to be expected (the 99.9th percentile) would be something like the average loss plus three times the volatility.  It does not matter what number is the average or what number is the standard deviation.

RISKVIEWS has suggested that this multiple of 3 would represent a standard amount of riskiness and become the index value for the Coefficient of Riskiness.

This could also be a starting point in looking at the amount of capital needed for any risks.  Three times the observed volatility plus the observed average loss.  (For the quants, this assumes that losses are positive values and gains negative.  If you want losses to be negative values, then take the observed average loss and subtract three times the volatility).

So in the debate about risk capital, that value is the starting point, the minimum to be expected.  So if a risk is viewed as made up of substantially similar but totally separate smaller risks (homogeneous and independent), then we start with a maximum likely loss of average plus three times volatility.  Many insurers choose (or have it chosen for them) to hold capital for a loss at the 1 in 200 level.  That means holding capital for 83% of this Maximum Likely Loss.  This is the Viable capital level.  Some insurers who wish to be at the Robust level of capital will hold capital roughly 10% higher than the Maximum Likely Loss.  Insurers targeting the Secure capital level will hold capital at approximately 100% of the Maximum Likely Loss level.
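In symbols (a sketch of the arithmetic above, with observed average loss μ and observed volatility σ; the 83% ratio follows from normal-distribution quantiles when μ is small relative to σ):

    MLL    = μ + 3σ       (Maximum Likely Loss, roughly the 99.9th percentile)
    Viable ≈ 83% of MLL   (1-in-200 level: z(0.995)/z(0.999) ≈ 2.576/3.09 ≈ 0.83)
    Secure ≈ 100% of MLL
    Robust ≈ 110% of MLL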

But that is not the end of the discussion of capital.  Many of the portfolios of risks held by an insurer are not so well behaved.  Those portfolios are not similar and separate.  They are dissimilar in the likelihood of loss for individual exposures, and dissimilar in the possible amount of loss.  One way of looking at those dissimilarities is that the variability of rate and of size results in a larger number of pooled risks acting statistically more like a smaller number of similar risks.

So if we can imagine that evaluation of riskiness can be transformed into a problem of translating a block of somewhat dissimilar, somewhat interdependent risks into a pool of similar, independent risks, this riskiness question comes clearly into focus.  Now we can use a binomial distribution to look at riskiness.  The plot below takes up one such analysis for a risk with an average incidence of 1 in 1000.  You see that for up to 1000 of these risks, the COR is 5 or higher.  The COR gets up to 6 for a pool of only 100 risks.  It gets close to 9 for a pool of only 50 risks.

 

[Figure: COR vs. pool size for a risk with average incidence of 1 in 1000]

 

There is a different story for a risk with average incidence of 1 in 100.  COR is less than 6 for a pool as small as 25 exposures and the COR gets down to as low as 3.5.

[Figure: COR vs. pool size for a risk with average incidence of 1 in 100]
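A minimal sketch reconstructing the calculation behind the two figures above (the percentile convention is an assumption, so the values may differ slightly from the plotted ones):

```python
from scipy.stats import binom

def cor(n: int, p: float, q: float = 0.999) -> float:
    """COR = (99.9th percentile - mean) / standard deviation for a pool of
    n independent, identical risks with per-risk loss probability p."""
    mean = n * p
    sd = (n * p * (1 - p)) ** 0.5
    return (binom.ppf(q, n, p) - mean) / sd

for p in (1 / 1000, 1 / 100):
    print(p, [round(cor(n, p), 1) for n in (25, 50, 100, 1000, 10_000)])
# Small pools of rare risks show COR well above 3; large pools drift
# back toward the normal-curve value of about 3.
```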

In producing these graphs, RISKVIEWS notices that COR is largely a function of the number of expected claims.  So the following graph shows COR plotted against the number of expected claims, for low expected claim counts.  (High expected claim counts produce a COR that is very close to 3, so they are not very interesting.)

[Figure: COR vs. number of expected claims]

You see that the COR stays below 4.5 for expected claims of 1 or greater.  And there does seem to be a gently sloping trend connecting the number of expected claims and the COR.

So for risks where losses are expected every year, the maximum COR seems to be under 4.5.  When we look at risks where losses are expected less frequently, the COR can get much higher.  Values of COR above 5 start showing up when the expected number of losses per year is around 0.2, and for expected frequencies below 0.1 the COR is higher still.

[Figure: COR for risks with infrequent expected losses]

What sorts of things fit with this frequency?  Major hurricanes in a particular zone, earthquakes, major credit losses all have expected frequencies of one every several years.

So what has this told us?  It has told us that fat tails can come from the small portfolio effect.  For a large portfolio of similar and separate risks, the tails are highly likely to be normal with a COR of 3.  For risks with a small number of exposures, the COR, and therefore the tail, might get as much as 50% fatter with a COR of up to 4.5. And the COR goes up as the number of expected losses goes down.

Risks with expected losses less frequent than one per year can have much fatter tails still, up to three times as fat as normal.

So when faced with those infrequent risks, the Chicken Little approach is perhaps a reasonable approximation of the riskiness, if not a good indicator of the likelihood of an actual impending loss.

 

Quantitative vs. Qualitative Risk Assessment

July 14, 2014

There are two ways to assess risk.  Quantitative and Qualitative.  But when those two words are used in the NAIC ORSA Guidance Manual, their meaning is a little tricky.

In general, one might think that a quantitative assessment uses numbers and a qualitative assessment does not.  The difference is as simple as that.  The result of a quantitative assessment would be a number such as $53 million.  The result of a qualitative assessment would be words, such as “very risky” or “moderately risky”.

But that straightforward approach to the meaning of those words does not really fit with how they are used by the NAIC.  The ORSA Guidance Manual suggests that an insurer needs to include those qualitative risk assessments in its determination of capital adequacy.  Well, that just will not work if you have four risks that total $400 million and three others that are two “very riskys” and one “not so risky”.  How much capital is enough for two “very riskys”?  Perhaps you need a qualitative amount of surplus to provide for that – something like “a good amount”.

RISKVIEWS believes that when the NAIC says “Quantitative” and “Qualitative” it means to describe two approaches to developing a quantity.  For ease, we will call these two approaches Q1 and Q2.

The Q1 approach is a data- and analysis-driven approach to developing the quantity of loss that the company’s capital standard provides for.  It is interesting to RISKVIEWS that very few participants in or observers of this risk quantification regularly recognize that this process has a major step that is much less quantitative and scientific than the others.

The Q1 approach starts and ends with numbers and has mathematical steps in between.  But the most significant step in the process is largely judgmental.  So at its heart, the “quantitative” approach is “qualitative”.  That step is the choice of mathematical model that is used to extrapolate and interpolate between actual data points.  In some cases, there are enough data points that the choice of model can be based upon somewhat less subjective fit criteria.  But in other cases, that level of data is reached by shortening the time step for observations and THEN making heroic (and totally subjective) assumptions about the relationship between successive time periods.

These subjective decisions are all made to enable the modelers to make a connection between the middle of the distribution, where there usually is enough data to reliably model outcomes, and the tail, particularly the adverse tail of the distribution, where the risk calculations actually take place and where there is rarely if ever any data.

In broad terms, there are only a few possibilities for this subjective decision…

  • Benign – Adverse outcomes are about as likely as average outcomes and are only moderately more severe.
  • Moderate – Outcomes similar to the average are much more likely than outcomes significantly different from average.  Outcomes significantly higher than average are possible, but extremely adverse outcomes are highly unlikely.
  • Highly risky – Small and moderately adverse outcomes are highly likely while extremely adverse outcomes are possible, but fairly unlikely.

The first category of assumption, Benign, is appropriate for large aggregations of small loss events where contagion is impossible.  Phenomena that fall into this category are usually not the concern of risk analysis.  These phenomena are never subject to any contagion.

The second category, Moderate, is appropriate for moderate-sized aggregations of large loss events.  Within this class, there are two possibilities: low or no contagion, and moderate to high contagion.  The math is much simpler if no contagion is assumed.

But unfortunately, for risks that include any significant amount of human choice, contagion has been observed.  And this contagion has been variable and unpredictable.  Even more unfortunately, the contagion has a major impact on risks at both ends of the spectrum.  When past history suggests a favorable trend, human contagion has a strong tendency to overplay that trend.  This process is called “bubbles”.  When past history suggests an unfavorable trend, human contagion also overplays the trend and markets for risks crash.

The modelers who wanted to use the zero contagion models call this “Fat Tails”.  It is seen as an unusual model only because it was so common to use the zero contagion model with the simpler math.

RISKVIEWS suggests that when communicating that the approach to modeling is to use the Moderate model, the degree of contagion assumed should be specified; an assumption of zero contagion should be accompanied by a disclaimer that past experience has proven this assumption to be highly inaccurate when applied to situations that include humans, and that it therefore seriously understates potential risk.

The Highly Risky models are appropriate for risks where large losses are possible but highly infrequent.  This applies to insurance losses due to major earthquakes, for example.  And with a little reflection, you will notice that this is nothing more than a Benign risk with occasional high contagion.  The complex models that are used to forecast the distribution of potential losses for these risks, the natural catastrophe models, go through one step to predict possible extreme events and a second step to calculate an event-specific degree of contagion for an insurer’s specific set of coverages.

So it just happens that in a Moderate model, the 1-in-1000-year loss is about 3 standard deviations worse than the mean.  So if we express the 1-in-1000-year loss as a multiple of standard deviations, we can easily create a simple scale for the riskiness of a model:

[Figure: riskiness scale – 1-in-1000-year loss as a multiple of standard deviation]

So in the end the choice is to insert an opinion about the steepness of the ramp between the mean and an extreme loss, in terms of multiples of the standard deviation, where the standard deviation is a measure of the average spread of the observed data.  This is a discussion that, on these terms, can include all of top management, and the conclusions can be reviewed and approved by the board with the use of this simple scale.  There will need to be an educational step, which can be largely in terms of placing existing models on the scale.  People are quite used to working with the Richter Scale for earthquakes.  This is nothing more than a similar scale for risks.  But in addition to being descriptive and understandable, once agreed, it can be directly tied to models, so that the models are REALLY working from broadly agreed-upon assumptions.

*                  *                *               *             *                *

So now we turn to the “Qualitative” determination of the risk value.  Looking at the above discussion, RISKVIEWS would suggest that we are generally talking about situations where for some reason we do not think that we know enough to actually know the standard deviation.  Perhaps this is a phenomenon that has never happened, so that the past standard deviation is zero.  So we cannot use the multiple-of-standard-deviation method discussed above.  Or to put it another way, we can use the above method, but we have to use judgment to estimate the standard deviation.

*                  *                *               *             *                *

So in the end, with a Q1 “quantitative” approach, we have a historical standard deviation and we use judgment to decide how risky things are in the extreme compared to that value.  In the Q2 “qualitative” approach, we do not have a reliable historical standard deviation and we need to use judgment to decide how risky things are in the extreme.

Not as much difference as one might have guessed!

Just Stop IT! Right Now. And Don’t Do IT again.

June 16, 2014

IT is a medieval, or possibly pre-medieval, practice for evaluating risks.  That is the assignment of a single Frequency and Severity pair to each risk and calling that a risk evaluation.

In the mid-1700s, Daniel Bernoulli wrote:

EVER SINCE mathematicians first began to study the measurement of risk there has been general agreement on the following proposition: Expected values are computed by multiplying each possible gain by the number of ways in which it can occur, and then dividing the sum of these products by the total number of possible cases where, in this theory, the consideration of cases which are all of the same probability is insisted upon. If this rule be accepted, what remains to be done within the framework of this theory amounts to the enumeration of all alternatives, their breakdown into equi-probable cases and, finally, their insertion into corresponding classifications.

Many modern writers attribute this process to Bernoulli, but this is the very first sentence of his “Exposition of a New Theory for Measuring Risk,” published in 1738.  He suggests that the idea was so common in his time that he does not cite an original author.  His work is not to prove that this basic idea is correct, but to propose a new methodology for implementing it.

It is hard to say how the single pair idea (i.e. that a risk can be represented by a single frequency/severity pair of values) has crept into basic modern risk assessment practice, but it has. And it is firmly established.  But in 1738, Bernoulli knew that each risk has many possible gain amounts.  NOT A SINGLE PAIR.

But let me ask you this…

How did you pick the particular pair of values that you use to characterize any of your risks?

You see, as far as RISKVIEWS can tell, Bernoulli was correct – each and every risk has an infinite number of such pairs that are valid.  So how did you pick the one that you use?

Take for example the risk of a fire.  There are an infinite number of possible fires that could happen.  Some are more likely and some less likely.  Some would do lots of damage, some only a little.  The likelihood of a fire is not actually always related to the damage.  Some highly unlikely fires might be very small and low damage.  Hopefully, you do not have the situation of a likely, high-damage fire.  But all by itself, you could make up a frequency/severity heat map for any single risk, with many points on the chart.

[Figure: frequency/severity points for a single risk]

So RISKVIEWS asks again, how do you pick which point from that chart to be the one single point for your main risk report and heat map?

And those heat maps that you are so fond of…

Do you realize that the points on the heat map are not rationally comparable?  That is because there is no single criterion that most risk managers use to pick the pairs.  To compare values, the pairs need to have been selected by applying the same exact criteria.  But usually the actual criteria for choosing the pairs are not clearly articulated.

So here you stand: you have a risk register that is populated with these bogus statistics.  What can you do to move towards a more rational view of your risks?

You can start to reveal to people that you are aware that your risks are NOT fully measured by that single statistic.  Try revealing some additional statistics about each risk on your risk register:

  • The Likelihood of zero (or an inconsequential low amount) loss from each risk in any one year
  • The Likelihood of a loss of 1% of earnings or more
  • The expected loss at a 1% likelihood (or 1 in 100 year expected loss)

Try plotting those values and show how the risks on your risk register compare.  Create a heat map that plots likelihood of zero loss against expected loss at a 1% likelihood.

Those values are then comparable.
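A minimal sketch of those three statistics computed from a simulated loss distribution (the frequency/severity model and the earnings figure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
earnings = 100.0  # hypothetical annual earnings

# Hypothetical risk: a loss occurs in ~20% of years; severity is lognormal.
losses = rng.binomial(1, 0.2, 100_000) * rng.lognormal(1.0, 1.5, 100_000)

p_no_loss   = np.mean(losses == 0)                # likelihood of zero loss
p_material  = np.mean(losses >= 0.01 * earnings)  # loss of 1%+ of earnings
loss_1in100 = np.percentile(losses, 99)           # 1-in-100-year expected loss
print(p_no_loss, p_material, loss_1in100)
```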

So stop IT.  Stop misinforming everyone about your risks.  Stop using frequency severity pairs to represent your risks.

 

Can’t skip measuring Risk and still call it ERM

January 15, 2014

Many insurers are pushing ahead with ERM at the urging of new executives, boards, rating agencies and regulators.  Few of those firms who have resisted ERM for many years have a history of measuring most of their risks.

But ERM is not one of those liberal arts like the study of English Literature.  In Eng Lit, you may set up literature classification schemes, read materials, organize discussion groups and write papers.  ERM can have those elements, but the heart of ERM is Risk Measurement: comparing those risk measures to expectations and to prior period measures.  If a company does not have Risk Measurement, then it does not have ERM.

That is the tough side of this discussion, the other side is that there are many ways to measure risks and most companies can implement several of them for each risk without the need for massive projects.

Here are a few of those measures, listed in order of increasing sophistication:

1. Risk Guesses (AKA Qualitative Risk Assessment)
– Guesses, feelings
– Behavioral Economics Biases
2. Key Risk Indicators (KRI)
– Risk is likely to be similar to …
3. Standard Factors
– AM Best,  S&P, RBC
4. Historical Analysis
– Worst Loss in past 10 years as pct of base (premiums, assets).
5. Stress Tests
– Potential loss from historical or hypothetical scenario
6. Risk Models
– If the future is like the past …
– Or if the future is different from the past in this way …

More discussion of Risk Measurement on WillisWire:

     Part 2 of a 14 part series
And on RISKVIEWS:
Risk Assessment –  55 other posts relating to risk measurement and risk assessment.

Provisioning – Packing for your trip into the future

April 26, 2013

There are two levels of provisioning for an insurer: reserves and risk capital.  The two are intimately related.  In fact, in some cases, insurers will spend more time and care in determining the correct number for the sum of the two, called Total Asset Requirement (TAR) by some, than for either piece separately.

Insurers need a realistic picture of future obligations long before the future is completely clear. This is a key part of the feedback mechanism.  The results of the first year of business are the most important indication of business success for non-life insurance.  That view of results depends largely upon the integrity of the reserve value.  This feedback information affects performance evaluation, pricing for the next year, risk analysis, capital adequacy analysis and capital allocation.

The other part of provisioning is risk capital.  Insurers also need to hold capital for less likely swings in potential losses.  This risk capital is the buffer that provides for the payment of policyholder claims in a very high proportion of imagined circumstances.  The insurance marketplace, the rating agencies and insurance regulatory bodies all insist that the insurer holds a high buffer for this purpose.

In addition, many valuable insights into the insurance business can be gained from careful analysis of the data that is input to the provisioning process for both levels of provisioning.

However, reserves are most often set to be consistent with considerations.  Swings of adequate and inadequate pricing are tightly linked to swings in reserves.  When reserves are optimistically set, capital levels may reflect the same bias.  This means that inadequate prices can ripple through to cause deferred recognition of actual claims costs as well as under-provisioning at both levels.  This is more evidence that consideration is key to risk management.

There is often pressure for small and smooth changes to reserves and risk capital, but information flows and analysis produce jumps in insights, both as to expectations for emerging losses and in terms of methodologies for estimating reserves and capital.  The business pressures may threaten to overwhelm the best analysis efforts here.  The analytical team that prepares the reserves and capital estimates needs to be aware of and be prepared for this eventuality.  One good way to prepare is to make sure that management and the board are fully aware of the weaknesses of the modeling approach and so are more prepared for the inevitable model corrections.

Insurers need a validation process to make sure that the sum of reserves and capital is an amount that provides the degree of security that is sought.  Modelers must allow for variations in the risk environment, as well as the impact of the insurer's risk profile, financial security and risk management systems, when considering the risk capital amount.  Changes in any of those elements may cause abrupt shifts in the amount of capital needed.

The Total Asset Requirement should be determined without regard to where the reserves have been set, so that the risk capital level does not double up on reserve redundancy or implicitly affirm reserve inadequacy.
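
As a minimal sketch of that idea (the obligation distribution, the 99.5% security standard and the booked reserve figure are all assumptions for illustration, not a prescribed method):

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical simulation of the present value of all future claim payments, $M
    total_obligations = rng.lognormal(mean=4.0, sigma=0.3, size=100_000)

    # TAR is set from the obligation distribution alone, at an assumed 99.5% standard
    tar = np.percentile(total_obligations, 99.5)

    reserves = 60.0                # booked reserve, $M (hypothetical)
    risk_capital = tar - reserves  # capital is the residual, so an optimistic reserve
                                   # automatically shows up as a larger capital need
    print(f"TAR: {tar:.1f}  Reserves: {reserves:.1f}  Risk capital: {risk_capital:.1f}")

Setting the reserve lower in this sketch leaves TAR unchanged and simply shifts the difference into required capital, which is the point of determining TAR first.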

The capital determined through the Provisioning process will usually be the key element of the Risk Portfolio process.  That means that accuracy in the subtotals within the models is just as important as the overall total.  The common practice of tolerating offsetting inadequacies in the models may totally distort company strategic decision making.

This is one of the seven ERM Principles for Insurers.

Controlling with a Cycle

April 3, 2013

[Image: Helsinki city bikes]

No, not that kind of cycle… This kind:

This is a Risk Control Cycle.  It includes Thinking/Observing steps and Action Steps.  The only reason a sane organization would spend the time on the Assessing, Planning and Monitoring steps is so that it can be more effective with the Risk Taking, Mitigating and Responding steps.

A process capable of limiting losses can be referred to as a complete risk control process, which would usually include the following:

  • Identification of risks—with a process that seeks to find all risks inherent in an insurance product, investment instrument, or other situation, rather than simply automatically targeting “the usual suspects.”
  • Assess Risks – This is both the beginning and the end of the cycle.  As the end, this step looks back and determines whether your judgment about the risk and your ability to select and manage risks were as good as you thought they would be.  As the beginning, you look forward to form a new opinion about the prospects for risk and rewards for the next year.  For newly identified risks/opportunities this is the due diligence phase.
  • Plan Risk Taking and Risk Management – Based upon the risk assessment, management will make plans for how much of each risk the organization will accept, and then how much of that risk will be transferred, offset and retained.  These plans will also include the determination of limits.
  • Take Risks – Organizations will often have two teams of individuals involved in risk taking.  One set will identify potential opportunities based upon broad guidelines that are either carried over from a prior year or modified by the accepted risk plan (Sales).  The other set will do a more detailed review of the acceptability of the risk, and often of the appropriate price for accepting the risk (Underwriting).
  • Measuring and monitoring of risk—with metrics that are adapted to the complexity and the characteristics of the risk, as well as regular reporting of positions versus limits/checkpoints (see the sketch after this list)—where the timing needed to be effective depends on the volatility of the risk and the rate at which the insurer changes its risk positions.  Insurers may report at a granular level that supports all specific decision making and actions on a regular schedule.
  • Regular risk assessment and dissemination of risk positions and loss experience—with a standard set of risk and loss metrics and distribution of risk position reports, with clear attention from persons with significant standing and authority in the organization.
  • Risk limits and standards—directly linked to objectives. Terminology varies widely, but many insurers have both hard “Limits” that they seek to never exceed and softer “Checkpoints” that are sometimes exceeded. Limits will often be extended to individuals within the organization with escalating authority for individuals higher in the organizational hierarchy.
  • Response – Enforcement of limits and policing of checkpoints—with documented consequences for limit breaches and standard resolution processes for exceeding checkpoints.  The response toolkit usually includes:
    – Risk avoidance processes for risks where the insurer has zero tolerance.  These processes will ensure that constant management attention is not needed to assure compliance, although occasional assessment of compliance is often practiced.
    – Loss control processes to reduce the avoidable excess frequency and severity of claims, and to assure that when losses occur, the extent of the losses is contained to the extent possible.
    – Risk transfer processes, which are used when an insurer takes more risk than it wishes to retain and where there is a third party who can take the risk at a price that is sensible after accounting for any counterparty risk that is created by the risk transfer process.
    – Risk offset processes, which are used when insurer risks can be offset by taking additional risks that are found to have opposite characteristics.  These processes usually entail the potential for basis risk, because the offset is not exact at any time or because the degree of offset varies as time passes and conditions change; this is overcome in whole or in part by frequent adjustment of the offsetting positions.
    – Risk diversification, which can be used when risks can be pooled with other risks with relatively low correlation.
    – Risk costing / pricing, which involves maintaining the capability to develop appropriate views of the cost of holding a risk in terms of expected losses and provision for risk.  This view will influence the risks that an insurer will take and the provisioning for losses from risks that the insurer has taken (reserves).  This applies to all risks but especially to insurance risk management.
    – Coordination of insurance profit/loss analysis with pricing, loss control (claims), underwriting (risk selection), risk costing, and reserving, so that all parties within the insurer are aware of the relationship between the emerging experience of the risks that the insurer has chosen to retain and the expectations that the insurer held when it chose to write and retain the risks.
  • Assess Risks – and the cycle starts again.
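
Here is a minimal sketch of the positions-versus-limits/checkpoints reporting step referenced above; the risk names, positions, checkpoints and limits are all hypothetical:

    # Positions versus limits/checkpoints with simple escalation rules
    positions = {"equity": 82.0, "credit": 45.0, "cat": 30.0}    # current risk, $M
    checkpoints = {"equity": 75.0, "credit": 50.0, "cat": 28.0}  # soft thresholds
    limits = {"equity": 100.0, "credit": 60.0, "cat": 35.0}      # hard limits

    for risk, position in positions.items():
        if position > limits[risk]:
            print(f"{risk}: LIMIT BREACH ({position} > {limits[risk]}) - escalate and resolve")
        elif position > checkpoints[risk]:
            print(f"{risk}: checkpoint exceeded ({position} > {checkpoints[risk]}) - review")
        else:
            print(f"{risk}: within bounds")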

This is one of the seven ERM Principles for Insurers.

Risk Evaluation by Actuaries

October 22, 2012

The US Actuarial Standards Board has promulgated a new Actuarial Standard of Practice, No. 46, Risk Evaluation in Enterprise Risk Management.

ASB Adopts New ASOP No. 46

At its September meeting, the ASB adopted ASOP No. 46, Risk Evaluation in Enterprise Risk Management. The ASOP provides guidance to actuaries when performing professional services with respect to risk evaluation systems used for the purposes of enterprise risk management, including designing, developing, implementing, using, maintaining, and reviewing those systems. An ASOP providing guidance for activities related to risk treatment is being addressed in a proposed ASOP titled Risk Treatment in Enterprise Risk Management, which will be released in late 2012. The topics of these two standards were chosen because they cover the most common actuarial services performed within risk management systems of organizations. ASOP No. 46 will be effective May 1, 2013, and can be viewed under the tab “Current Actuarial Standards of Practice.”

Must have more than one View of Risk

May 14, 2012

Riskviews finds the headline “Value-at-Risk model masked JP Morgan $2 bln loss” to be totally appalling.  JP Morgan is of course famous for having been one of the first large banks to use VaR for daily risk assessment.

During the late 1980’s, JP Morgan developed a firm-wide VaR system. This modeled several hundred risk factors. A covariance matrix was updated quarterly from historical data. Each day, trading units would report by e-mail their positions’ deltas with respect to each of the risk factors. These were aggregated to express the combined portfolio’s value as a linear polynomial of the risk factors. From this, the standard deviation of portfolio value was calculated. Various VaR metrics were employed. One of these was one-day 95% USD VaR, which was calculated using an assumption that the portfolio’s value was normally distributed.
With this VaR measure, JP Morgan replaced a cumbersome system of notional market risk limits with a simple system of VaR limits. Starting in 1990, VaR numbers were combined with P&L’s in a report for each day’s 4:15 PM Treasury meeting in New York. Those reports, with comments from the Treasury group, were forwarded to Chairman Weatherstone.

from History of Value-at-Risk: 1922-1998 by Glyn Holton
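
The delta-normal calculation described in the quote is straightforward to sketch.  This toy version uses three made-up risk factors instead of several hundred; the deltas and the covariance matrix are assumptions for illustration:

    import numpy as np

    deltas = np.array([1.2e6, -0.4e6, 2.0e6])  # position deltas w.r.t. each factor

    # One-day covariance matrix of risk factor changes (hypothetical)
    cov = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
                    [2.0e-5, 4.0e-4, 5.0e-5],
                    [1.0e-5, 5.0e-5, 2.5e-4]])

    # Portfolio value change is treated as linear in the factors, so its standard
    # deviation is the square root of a quadratic form
    sigma = np.sqrt(deltas @ cov @ deltas)

    # One-day 95% VaR under the normal distribution assumption in the quote
    var_95 = 1.645 * sigma
    print(f"One-day 95% VaR: ${var_95:,.0f}")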

JP Morgan went on to spin off a group, RiskMetrics, which sold the capability to do VaR calculations to all comers.

Riskviews had always assumed that JP Morgan had felt safe selling the VaR technology because they had moved on to something better.

But the story given about the $2 billion loss suggests that they were flubbing the measurement of their exposure because of a new risk measurement system.

Riskviews would suggest two ideas to JP Morgan:

  1. A firm that makes its money taking risks should never rely upon a single measure of risk.  See Risk and Light and the CARE Report for further information.
  2. The folks responsible for risk evaluation need to apply some serious standards to their work.  Take a look at the actuarial profession’s first attempt at standards for professionals performing risk evaluation in ERM programs.  This proposed standard suggests many things, but the most important idea is that a professional who is evaluating risk should look at three things: the risk taking capacity of the firm, the risk environment, and the risk management program of the firm.

These are fundamental principles of risk management.  Not the only ones, but principles that speak to the problem that JP Morgan claims to have.

Risk Evaluation Standard

April 4, 2012

The US Actuarial Standards Board (ASB) recently approved proposed ASOP Risk Evaluation in Enterprise Risk Management as an exposure draft.

In March 2011, discussion drafts on the topics of risk evaluation and risk treatment were issued. The ERM Task Force of the ASB reviewed the comments received and, based on those comments, began work on the development of exposure drafts in those areas.

The proposed ASOP on risk evaluation provides guidance to actuaries when performing professional services with respect to risk evaluation systems, including designing, implementing, using, and reviewing those systems. The comment deadline for the exposure draft is June 30, 2012. An exposure draft on risk treatment is scheduled to be reviewed in summer 2012.

ASB Approves ERM Risk Evaluation Exposure Draft: Risk Evaluation in Enterprise Risk Management.  Comment deadline: June 30, 2012

Three Ideas of Risk Management

March 12, 2012

In the book Streetlights and Shadows, Gary Klein describes three sorts of risk management.

  • Prioritize and Reduce – the system used by safety and (insurance) risk managers.  In this view of risk management, there is a five step process:
    1. Identify Risks
    2. Assess and Prioritize Risks
    3. Develop plans to mitigate the highest priority risks
    4. Implement plans
    5. Track effectiveness of mitigations and adapt plans as necessary
  • Calculate and Decide – the system used by investors (and insurers) to develop multi-scenario probability trees of potential outcomes and to select the options with the best risk reward relationship.
  • Anticipate and Adapt – the system preferred by CEOs.  For potential courses of action, the worst case scenario will be assessed.  If the worst case is within acceptable limits, then the action will be considered for its benefits.  If the worst case is outside of acceptable limits, then consideration is given to management actions to reduce or eliminate the adverse outcomes.  If those outcomes cannot be brought within acceptable limits, then the option is rejected.

Most ERM systems are set up to support the first two ideas of Risk Management.

But if it is true that most CEOs favor the Anticipate and Adapt approach, then a total mismatch emerges between what the CEO is thinking and what the ERM system is doing.

It would not be difficult to develop an ERM system that matches with the Anticipate and Adapt approach, but most risk managers are not even thinking of that possibility.

Under that system of risk management, the task would be to look at a pair of values for every major activity: the planned profit and the worst case loss.  During the planning stage, the Risk Manager would be tasked with finding ways to reliably reduce the worst case losses of potential plans.  Once plans are chosen, the Risk Manager would be responsible for making sure that losses from the planned actions do not exceed the contemplated worst case.
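
A minimal sketch of that review, with hypothetical activities and a made-up tolerance:

    tolerance = 40.0  # largest acceptable worst case loss, $M

    plans = [  # (activity, planned profit $M, worst case loss $M)
        ("new product line", 12.0, 35.0),
        ("equity portfolio shift", 8.0, 55.0),
        ("geographic expansion", 6.0, 20.0),
    ]

    for activity, profit, worst_case in plans:
        if worst_case <= tolerance:
            print(f"{activity}: worst case acceptable, weigh the {profit} planned profit")
        else:
            print(f"{activity}: worst case {worst_case} exceeds tolerance - mitigate or reject")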

Thinking of risk management in this manner allows us to understand that the worst possible outcome for a risk manager would not be a loss from one of the planned activities of the firm; it would be a loss that is significantly in excess of the maximum loss that was contemplated at the time of the plan.  The excessive loss would be a signal that the Risk area is not a reliable provider of risk information for planning, decision making or execution of plans, or all three.

This is an interesting line of reasoning and may be a better explanation for the way that risk managers are treated within organizations, and especially why risk managers are sometimes fired after losses.  They may be losing their jobs not because there is a loss, but because they were unable to warn management of the potential size of the loss.  It could well be that management would have made different plans had they known in advance the potential magnitude of losses from one of their choices.

Or at least, that is the story that they believe about themselves after the excessive loss.

This suggests that risk managers need to be particularly careful with risk evaluations.  Klein also mentions that executives are usually not particularly impressed with evaluations of frequency.  They most often want to focus on severity.

So whatever is believed about frequency, the risk manager needs to be careful with the assessment of worst case losses.

Actuarial Risk Management Volunteer Opportunity

August 11, 2011

Actuarial Review of Enterprise Risk Management Practices –

A Working Group formed by The Enterprise and Financial Risks Committee of the IAA has started working on a white paper to be titled: “Actuarial Review of Enterprise Risk Management Practices”.  We are seeking volunteers to assist with writing, editing and research.

This project would set out a systematic process for actuaries to use when evaluating risk management practices.  Actuaries in Australia are now called upon to certify the risk management practices of insurers, and the initial reaction of some actuaries was that they were somewhat unprepared to do that.  This project would produce a document that could be used by actuaries and could be the basis for actuaries proposing to take on a similar role in other parts of the world.  Recent events have shown that otherwise comparable businesses can differ greatly in the effectiveness of their risk management practices. Many of these differences appear to be qualitative in character and centered on management processes. Actuaries can take a role in offering opinions on process quality and on possible avenues for improvement. More specifically, recent events seem likely to increase emphasis on what the supervisory community calls Pillar 2 of prudential supervision – the review of risk and solvency governance. In Solvency II in Europe, a hot topic is the envisaged requirement for an ‘Own Risk and Solvency Assessment’ by firms, and many are keen to see actuaries have a significant role in advising on this. The International Association of Insurance Supervisors has taken up the ORSA requirement as an Insurance Core Principle and encourages all regulators to adopt it as part of their regulatory structure.  It seems an opportune time to pool knowledge.

The plan is to write the paper over the next six months and to spend another six months on comment & exposure prior to finalization.  If we get enough volunteers, the workload for each will be small.  This project is being performed on a wiki, which allows many people to contribute from all over the world.  Each volunteer can make as large or as small a contribution as their experience and energy allow.  People with low experience but high energy are welcome, as are people with high experience.

A similar working group recently completed a white paper titled the CARE report.  http://www.actuaries.org/CTTEES_FINRISKS/Documents/CARE_EN.pdf  You can see what the product of this sort of effort looks like.

Further information is available from Mei Dong or David Ingram.

==============================================================

David Ingram, CERA, FRM, PRM
+1 212 915 8039
(daveingram@optonline.net )

FROM 2009

ERM BOOKS – Ongoing Project – Volunteers still needed

A small amount of development work has been done to create the framework for a global resource for ERM Readings and References.

http://ermbooks.wordpress.com

Volunteers are needed to help to make this into a real resource.  Over 200 books, articles and papers have been identified as possible resources ( http://ermbooks.wordpress.com/lists-of-books/ )
Posts to this website give a one paragraph summary of a resource and identify it within several classification categories.  15 examples of posts with descriptions and categorizations can be found on the site.
Volunteers are needed to (a) identify additional resources and (b) write 1 paragraph descriptions and identify classifications.
If possible, we are hoping that this site will ultimately contain information on the reading materials for all of the global CERA educational programs.  So help from students and/or people who are developing CERA reading lists is solicited.
Participants will be given author access to the ermbooks site.  Registration with WordPress at www.wordpress.com is needed prior to getting that access.
Please contact Dave Ingram if you are interested in helping with this project.


You Must Abandon All Presumptions

August 5, 2011

If you really want to have Enterprise Risk Management, then you must at all times abandon all presumptions. You must make sure that all of the things needed to successfully manage risks are being done, and done now, not just sometime in the distant past.

A pilot of an aircraft will spend over an hour checking things directly and reviewing other people’s checks.  The pilot will review:

  • the route of flight
  • weather at the origin, destination, and en route
  • the mechanical status of the airplane
  • mechanical issues that may have been improperly logged
  • the items that may have been fixed just prior to the flight to make certain that system works
  • the flight computer
  • the outside of the airplane for obvious defects that may have been overlooked
  • the paperwork
  • the fuel load
  • the takeoff and landing weights to make sure that they are within limits for the flight

Most of us do not do anything like this when we get into our cars to drive.  Is this overkill?  You decide.

When you are expecting to fly somewhere and there is a last minute delay because of something that seems like it should really have been taken care of already, that is likely because the pilot found something that someone might normally PRESUME was OK but was not.

Personally, as someone who takes lots and lots of flights, RISKVIEWS thinks that this is a good process.  One that RISKVIEWS would recommend to be used by risk managers.

THE NO PRESUMPTION APPROACH TO RISK MANAGEMENT

Here are the things that the Pilot of the ERM program needs to check before taking off on each flight.

1.  Risks need to be diversified.  There is no risk management if a firm is just taking one big bet.

2.  A firm needs to be sure of the quality of the risks that it takes.  This implies that multiple ways of evaluating risks are needed to maintain quality, or to be aware of changes in quality.  There is no single source of information about quality that is adequate.

3.  A control cycle is needed regarding the amount of risk taken.  This implies measurements, appetites, limits, treatment actions, reporting and feedback.

4.  The pricing of the risks needs to be adequate – at least if you are in the risk business, as insurers are, for risks that are traded.  For risks that are not traded, the benefit of the risk needs to exceed the cost in terms of potential losses.

5.  The firm needs to manage its portfolio of risks so that it can take advantage of the opportunities that are often associated with its risks.  This involves risk reward management.

6.   The firm needs to provision for its retained risks appropriately, in terms of set asides (reserves) for expected losses and capital for excess losses.

A firm ultimately needs all six of these things.  Things like a CRO, or risk committees or board involvement are not on this list because those are ways to get these six things.

The Risk Manager needs to take a NO PRESUMPTIONS approach to checking these things.  Many of the problems of the financial crisis can be traced back to presumptions that one or more of these six things were true without any attempt to verify.

Cascading Failures

July 27, 2011

Most of the risks that concern us exist in systems. In massively complex systems.

However, our approach to risk assessment is often to isolate certain risk/loss events and treat them totally marginally.  That works fine when the events are actually marginal to the system, but it may put us in a worse situation if the event triggers a cascading failure.

Within a system, cycles are found.  Cycles that can ebb and flow over a long time.  Cycles that are self-damping, and cycles that are self-reinforcing.

The classic epidemiological disease model is an example of a self-damping system.  The damping is caused by the fact that disease spread is self-limiting.  Any person will have only so many contacts with other people that might be sufficient to spread a disease were they infected.  In most disease situations, the spread of the disease starts to wane when enough people have already been exposed to the disease and developed immunity, so that a significant fraction of the contacts that a newly infected and contagious person might have are already immune.  This produces the “S” curve of a disease. See  The Dynamics of SARS: Plotting the Risk of Epidemic Disasters.
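
That self-damping dynamic is easy to see in a toy version of the model.  This discrete SIR sketch uses assumed parameters, not a calibration to any actual disease:

    # Discrete SIR model: the spread damps itself as susceptibles are used up
    s, i, r = 0.999, 0.001, 0.0  # susceptible, infected, recovered fractions
    beta, gamma = 0.5, 0.2       # assumed daily transmission and recovery rates

    for day in range(120):
        new_infections = beta * s * i  # shrinks as s falls: the self-damping term
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries

    print(f"Fraction ever infected after 120 days: {1 - s:.1%}")  # top of the S curve

Tracking 1 - s day by day traces out the "S" curve: slow start, rapid middle, and a plateau once enough contacts are already immune.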

The behavior of financial markets in a large loss situation is a self-reinforcing cycle.  Losses cause institutional investors to lose capital, and because of their gearing, the loss of capital triggers the need to reduce exposures, which means selling into a falling market, resulting in more losses.  Often the only cure is to close the market and hope that some exogenous event changes something.

These cascading situations are why the “tail” events are so terribly large compared to the ordinary events.

Each system has its own tipping point.  Your risk appetite should reflect how much you know about the tipping point of the system that each of your risks exists in.

And if you do not know the tipping point…

Echo Chamber Risk Models

June 12, 2011

The dilemma is a classic – in order for a risk model to be credible, it must be an Echo Chamber – it must reflect the starting prejudices of management. But to be useful – and worth the time and effort of building it – it must provide some insights that management did not have before building the model.

The first thing that may be needed is to realize that the big risk model cannot be the only tool for risk management.  The Big Risk Model, also known as the Economic Capital Model, is NOT the Swiss Army Knife of risk management.  This Echo Chamber issue is only one reason why.

It is actually a good thing that the risk model reflects the beliefs of management and therefore gets credibility.  The model can then perform the one function that it is actually suited for: facilitating the process of looking at all of the risks of the firm on the same basis, and providing information about how those risks add up to make up the total risk of the firm.

That is very, very valuable to a risk management program that strives to be Enterprise-wide in scope.  The various risks of the firm can then be compared one to another.  The aggregation of risk can be explored.

All based on the views of management about the underlying characteristics of the risks. That functionality allows a quantum leap in the ability to understand and consistently manage the risks of the firm.

Before creating this capability, the various risks of a firm were managed totally separately.  Some risks were highly restricted and others were allowed to grow in a mostly uncontrolled fashion.  With a credible risk model, management needs to face the inconsistencies embedded in the historical risk management of the firm.

Some firms look into this mirror, see their problems, and immediately make plans to rationalize their risk profile.  Others lash out at the model in shoot-the-messenger fashion.  A few will claim that they are running an ERM program, but the new information about risk will result in absolutely no change in risk profile.

It is difficult to imagine that a firm that had no clear idea of its aggregate risk, or of the relative size of the components thereof, would find absolutely nothing that needs adjustment.  Often the explanation is a lack of political will within the firm to act upon the new risk knowledge.

For example, when major insurers started to create economic capital models in the early part of this century, many found that their equity risk exposure was very large compared to their other risks and to their business strategy of being an insurer rather than an equity mutual fund.  Some firms used this new information to help guide a divestiture of equity risk.  Others delayed and delayed, even while saying that they had too much equity risk.  Those firms were politically unable to use the new risk information to reduce the equity position of the group.  More than one major business segment had heavy equity positions, and the group could not choose which to tell to reduce.  They also rejected the idea of reducing exposure through hedging, perhaps because there was a belief at the top of the firm that the extra return of equities was worth the extra risk.

This situation is not at all unique to equity risk.   Other firms had the same experience with Catastrophe risks, interest rate risks and Casualty risk concentrations.

A risk model that was not an Echo Chamber model would not be any use at all in the situations above.  The differences between management beliefs and the model assumptions of a non Echo Chamber model would result in it being left out of the discussion entirely.

Other methods, such as stress tests can be used to bring in alternate views of the risks.

So an Echo Chamber is useful, but only if you are willing to listen to what you are saying.

Getting Independence Right

May 11, 2011

Independence of the risk function is very important.  But often, the wrong part of the risk function is made independent.

It is the RISK MEASUREMENT AND REPORTING part of the risk function that needs to be independent.  If this part of the risk function is not independent of the risk takers, then you have the Nick Leeson risk – the risk that once you start to lose money, you will delay reporting the bad news to give yourself a little more time to earn back the losses – or the Jérôme Kerviel risk – that you will simply understate the risk of what you are doing to allow yourself to enhance return on risk calculations and avoid pesky risk limits.

When Risk Reporting is independent, then the risk reports are much less likely to be fudged in the favor of the risk takers.  They are much more likely to simply and factually report the risk positions.  Then the risk management system either reacts to the risk information or not, but at least it has the correct information to make the decision on whether to act or not.

Many discussions of risk management suggest that there needs to be independence between the risk taking function and the entire risk management function.  This is a model for risk disaster, but a model that is very common in banking.  Under this type of independence there will be a steady war – a war that the risk management folks are likely to lose.  The risk takers are in charge of making money and the independent risk management folks are in charge of preventing that.  The risk takers, since they bring in the bacon, will always be much more popular with management than the risk managers, who add to costs and detract from revenue.

Instead, the actual risk management needs to be totally integrated within the risk taking function.  This will be resisted by any risk takers who have had a free ride to date.  With integration, the risk takers can decide what would be the least destructive way to stay within their risk limits.  In a system of independent risk management, by contrast, the risk managers are responsible for monitoring limit breaches and taking actions to unwind over-limit situations.  In many cases, there are quite heated arguments around those unwinding transactions.

Under the reporting-only independence model, the risk taking area would have responsibility for taking the actions needed to stay within limits and for resolving breaches of limits.  (Most often those breaches are not due to deliberate violations of limits, but to market movements that cause breaches to grow out of previously hedged positions.)

Ultimately, it would be preferable if the risk taking area would totally own their limits and the process to stay within those limits.

However, if the risk measurement and reporting is independent, then the limit breaches are reported, and the decision about what to do with any risk taking area that is not owning its limits is a top management decision, rather than a risk manager decision that sometimes gets countermanded by top management.

What’s Next?

March 25, 2011

Turbulent Times are Next.

At BusinessInsider.com, a feature from Guillermo Felices tells of 8 shocks that are about to slam the global economy.

#1 Higher Food Prices in Emerging Markets

#2 Higher Interest Rates and Tighter Money in Emerging Markets

#3 Political Crises in the Middle East

#4 Surging Oil Prices

#5 An Increase in Interest Rates in Developed Markets

#6 The End of QE2

#7 Fiscal Cuts and Sovereign Debt Crises

#8 The Japanese Disaster

How should ideas like these impact ERM systems?  Is it at all reasonable to say that they should not? Definitely not.

These potential shocks illustrate the need for the ERM system to be reflexive.  The system needs to react to changes in the risk environment.  That means that it needs to reflect differences in the risk environment in three possible ways:

  1. In the calibration of the risk model.  Model assumptions can be adjusted to reflect the potential near term impact of the shocks.  Some of the shocks are certain and could be thought to impact expected economic activity (the Japanese disaster), but have a range of possible consequences (changing volatility).  Other shocks, which are much less certain (the end of QE2 – because there could still be a QE3), may be difficult to work into model assumptions.
  2. With Stress and Scenario Tests – each of these shocks, as well as combinations of the shocks, could be stress or scenario tests.  Riskviews suggests that developing a handful of fully developed scenarios, with 3 or more of these shocks in each, would be the most useful.
  3. In the choices of Risk Appetite.  The information and stress/scenario tests should lead to a serious reexamination of risk appetite.  There are several reasonable reactions: to simply reduce risk appetite in total, to selectively reduce risk appetite, to increase efforts to diversify risks, or to plan to aggressively take on more risk as some risks are found to have much higher reward.

The last strategy mentioned above (aggressively take on more risk) might not be thought of by most as a risk management strategy.  But think of it this way: the strategy could be stated as an increase in the minimum target reward for risk.  Since things are expected to be riskier, the firm decides that it must get paid more for risk taking, staying away from lower paid risks.  This actually makes quite a bit MORE sense than taking the same risks, expecting the same reward for risks and just taking less risk, which might be the most common strategy selected.

The final consideration is compensation.  How should the firm be paying people for their performance in a riskier environment?  How should the increase in market risk premium be treated?

See Risk adjusted performance measures for starters.

More discussion on a future post.

Where to Draw the Line

March 22, 2011

“The unprecedented scale of the earthquake and tsunami that struck Japan, frankly speaking, were among many things that happened that had not been anticipated under our disaster management contingency plans.”  Japanese Chief Cabinet Secretary Yukio Edano.

In the past 30 days, there have been 10 earthquakes of magnitude 6 or higher.  In the past 100 years, there have been over 80 earthquakes magnitude 8.0 or greater.  The Japanese are reputed to be the most prepared for earthquakes.  And also to experience the most earthquakes of any settled region on the globe.  By some counts, Japan experiences 10% of all earthquakes that are on land and 20% of all severe earthquakes.

But where should they, or anyone making risk management decisions, draw the line in preparation?

In other words, what amount of safety are you willing to pay for in advance, and for what magnitude of loss event are you willing to say that you will have to live with the consequences?

That amount is your risk tolerance.  You will do what you need to do to manage the risk – but only up to a certain point.

That is because too much security is too expensive, too disruptive.

You are willing to tolerate the larger loss events because you believe them to be sufficiently rare.

In New Zealand, that cost/risk trade-off thinking allowed them to set a standard for retrofitting existing structures at 1/3 of the standard for new buildings.  But they also allowed a 20 year transition.  That is not as much of an issue now; many of the older buildings, at least in Christchurch, are gone.

But experience changes our view of frequency.  We actually change the loss distribution curve in our minds that is used for decision making.

Risk managers need to be aware of these shifts.  We need to recognize them.  We want to say that these shifts represent shifts in risk appetite.  But we need to also realize that they represent changes in risk perception.  When our models do not move as risk perception moves, the models lose fundamental credibility.

In addition, when modelers do things like what some of the cat modeling firms are doing right now – moving the model frequency when people’s risk perceptions are not moving at all – they also lose credibility.

So perhaps you want scientists and mathematicians creating the basic models, but someone who is familiar with the psychology of risk needs to find an effective way to integrate changes in risk perceptions (or the lack thereof) with changes in models (or the lack thereof).

The idea of moving risk appetite and tolerance up and down as management gets more or less comfortable with the model estimations of risk might work.  But you are still then left with the issue of model credibility.

What is really needed is a way to combine the science/math with the psychology.

Market consistent models come the closest to accomplishing that.  The pure math/science folks see the herding aspect of market psychology as a miscalibration of the model.  But they are just misunderstanding what is being done.  What is needed is the ability to create adjustments to risk calculations, applied to non-traded risks, that allow the science & math analysis of the risk to be combined with the emotional component.

Then the models will accurately reflect how and where management wants to draw the line.

Liquidity Risk Management for a Bank

February 9, 2011

A framework for estimating liquidity risk capital for a bank

From Jawwad Farid

Capital estimation for Liquidity Risk Management is a difficult exercise. It comes up as part of the internal liquidity risk management process as well as the internal capital adequacy assessment process (ICAAP). This post and the liquidity risk management series that can be found at the Learning Corporate Finance blog suggests a framework for ongoing discussion based on the work done by our team with a number of regional banking customers.

By definition, banks take a small return on assets (1% – 1.5%) and use leverage and turnover to scale it to a 15% – 18% return on equity. When market conditions change and a bank becomes the subject of a name crisis and a subsequent liquidity run, the same process becomes the basis for a death chant for the bank.  We try to de-lever the bank by selling assets and paying down liabilities, and the process quickly turns into a fire sale driven by the speed at which word gets out about the crisis.

Figure 1 Increasing Cash Reserves

Reducing leverage by distressed asset sales to generate cash is one of the primary defense mechanisms used by the operating teams responsible for shoring up cash reserves. Unfortunately, every slice of value lost to the distressed sale process is a slice out of the equity pool or capital base of the bank. An alternate mechanism that can protect capital is using the interbank repurchase (repo) contract to post liquid or acceptable assets as collateral, but that too is dependent on the availability of unencumbered liquid securities on the balance sheet as well as the availability of counterparty limits. Both can quickly disappear in times of crisis. The last and final option is the central bank discount window, the use of which may provide temporary relief but serves as a double-edged sword by further feeding the name and reputational crisis.  While a literature review on the topic also suggests cash conservation approaches through a re-alignment of businesses and a restructuring of resources, these last two solutions assume that the bank in question would actually survive the crisis to see the end of the re-alignment and re-structuring exercise.
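
The leverage arithmetic above, and the reason fire sales cut so deeply into capital, can be sketched with made-up figures:

    assets = 100.0  # $B
    equity = 8.0    # $B, so leverage is 12.5x
    roa = 0.012     # 1.2% return on assets

    roe = roa * assets / equity
    print(f"Leverage: {assets / equity:.1f}x -> ROE: {roe:.1%}")  # small ROA scaled to ~15%

    # Fire sale: selling 20% of assets at a 10% haircut to raise cash
    sold, haircut = 20.0, 0.10
    loss = sold * haircut
    print(f"Fire sale loss: {loss:.1f}, equity falls to {equity - loss:.1f} "
          f"({loss / equity:.0%} of the capital base gone)")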

Liquidity Reserves: Real or a Mirage

A questionable assumption that often comes up when we review Liquidity Contingency Plans is the availability or usage of Statutory Liquidity and Cash Reserves held for the bank’s account with the Central Bank.  You can only touch those assets when your franchise and license are gone and the bank has been shut down. This means that if you want to survive the crisis with your banking license intact, there is a very good chance that the 6% core liquidity you had factored into your liquidation analysis would NOT be available to you as a going concern in times of crisis. That liquidity layer has been reserved by the central bank as the last defense for depositor protection, and no central bank is likely to permit abuse of that layer.

Figure 2 Liquidity Risk and Liquidity Run Crisis

As the Bear Stearns case study below illustrates, the typical liquidity crisis begins with a negative event that can take many shapes and forms. The resulting coverage and publicity lead to pressure on not just the share price but also on the asset portfolio carried on the bank’s balance sheet, as market players take defensive cover by selling their own inventory or make aggressive bets by short selling the securities in question. Somewhere in this entire process, rating agencies finally wake up and downgrade the issuer across the board, leading to a reduction or cancellation of counterparty lines.  Even when lines are not cancelled, given the write-down in value witnessed in the market, calls for margin and collateral start coming in and further feed liquidity pressures.

What triggers a name crisis that leads to the vicious cycle that can destroy the inherent value in a 90 year old franchise in less than 3 months?  Typically a name crisis is triggered by a change in market conditions that impacts a fundamental business driver for the bank. The change in market conditions triggers either a large operational loss or a series of operational losses, at times related to a correction in asset prices, at others resulting in a permanent reduction in margins and spreads.  When this is declared and becomes public knowledge, what the bank does to restore confidence drives what happens next. One approach used by management teams is to defer the news as much as possible by creative accounting or accounting hand waving, which simply changes the nature of the crisis from an asset price or margin related crisis to a much more serious regulatory or accounting scandal with similar end results.

Figure 3 What triggers a name crisis?

The problem, however, is that market players have a very well established defensive response to a name crisis after decades of bank failures. This implies that once you hit a crisis, the speed with which you generate cash, lock in a deal with a buyer and get rid of questionable assets determines how much value you will lose to the market driven liquidation process. The only failsafe here is the ability of the local regulator and lender of last resort to keep the lifeline of counterparty and interbank credit lines open.  As was observed at the peak of the crisis in North America, the UK and a number of Middle Eastern markets, this ability to keep markets open determines how low prices will go, the magnitude of the fire sale and the number of banks that actually go under.

Figure 4 Market response to a Name Crisis and the Liquidity Run cycle.

The above context provides a clear roadmap for building a framework for liquidity risk management. The ending position or the end game is a liquidity driven asset sale. A successful framework would simply jump the gun and get to the asset sale before the market does. The only reason why you would not jump the gun is if you have cash, a secured contractually bound commitment for cash, a white knight or any other acceptable buyer for your franchise and an agreement on the sale price and shareholders’ approval for that sale in place.  If you are missing any of the above, your only defense is to get to the asset sale before the market does.

The problem with the above assertion is the responsiveness of the Board of Directors and the senior executive team to the seriousness of the name crisis. The most common response by both is a combination of the following:

a)     The crisis is temporary and will pass. If there is a need we will sell later.

b)    We can’t accept these fire sale prices.

c)     There must be another option. Please investigate and report back.

This happens especially when the liquidity policy process was run as a compliance checklist and did not run its full course at the board and executive management level.  If a full blown liquidity simulation had been run for the board and the senior management team, and if they had seen for themselves the consequences of speed as well as of delay, such reactions would not happen. The board and the senior team must understand that illiquid assets are the equivalent of high explosives, and delay in asset sale is analogous to a short fuse. When you combine the two with a name crisis, you will blow up the bank irrespective of its history or the power of its franchise. When the likes of Bear, Lehman, Merrill, AIG and Morgan failed, your bank and your board are not going to see through the crisis to a different and more pleasant fate.


Economic Capital Review by S&P

February 7, 2011

Standard & Poor’s started including an evaluation of insurers’ enterprise risk management in its ratings in late 2005. For companies that fared well under the stick of ERM evaluation, there was the carrot of potentially lower capital requirements.  On 24 January, S&P published the basis for an economic capital review and adjustment process and announced that the process was being implemented immediately.

The ERM review is still the key. Insurers must already have a score from their ERM review of “strong” or “excellent” before they are eligible for any consideration of their capital model. That strong or excellent score implies that those firms have already passed S&P’s version of the Solvency II internal model use test — which S&P calls strategic risk management (SRM). Those firms with strong and excellent ERM ratings will all have their economic capital models reviewed.

The new name for this process is the level III ERM review. The level I review is the original ERM process that was initiated in 2005. The level II process, started in 2006, is a more detailed review that S&P applies to firms with high levels of risk and/or complexity. That level II review included a more detailed look at the risk control processes of the firms.

The new level III ERM review looks at five aspects of the economic capital model: methodology, data quality, assumptions and parameters, process/execution and testing/validation.

Read More at InsuranceERM.com

Sins of Risk Measurement

February 5, 2011
Read The Seven Deadly Sins of Measurement by Jim Campy

Measuring risk means walking a thin line: separating what is highly unlikely from what is totally impossible.  Financial institutions need to be prepared for the highly unlikely but must avoid getting sucked into wasting time worrying about the totally impossible.

Here are some sins that are sometimes committed by risk measurers:

1.  Downplaying uncertainty.  Risk measurement will always become more and more uncertain with increasing size of the potential loss numbers.  In other words, the larger the potential loss, the less certain you can be about how likely it might be.  Downplaying uncertainty is usually a sin of omission: it is just not mentioned.  Risk managers are lured into this sin by the simple fact that the less they mention uncertainty, the more credibility their work will be given.

2.  Comparing incomparables.  In many risk measurement efforts, values are developed for a wide variety of risks and then aggregated.  Eventually, they are disaggregated and compared.  Each of the risk measurements is implicitly treated as if they were all calculated totally consistently.  In fact, we are usually adding together measurements that were done with totally different amounts of historical data, for markets that have totally different degrees of stability, and using tools that have totally different degrees of certitude built into them.  In the end, this will encourage decisions to take on whatever risks we underestimate the most through this process.

3.  Validating to confirmation.  When we validate risk models, it is common to stop the validation process when we have evidence that our initial calculation is correct.  What that sometimes means is that one validation is attempted, and if validation fails, the process is revised and tried again.  This is repeated until the tester is either exhausted or gets positive results.  We are biased toward finding that our risk measurements are correct and are willing to settle for validations that confirm our bias.

4.  Selective parameterization.  There are no general rules for parameterization.  Generally, someone must choose what set of data is used to develop the risk model parameters.  In most cases, this choice determines the answers of the risk measurement.  If data from a benign period is used, then the measures of risk will be low.  If data from an adverse period is used, then risk measures will be high.  Selective parameterization means that the period is chosen because the experience was good or bad, in order to deliberately influence the outcome.  (A small sketch of this effect follows the list.)

5.  Hiding behind math.  Measuring risk can only mean measuring a future unknown contingency.  No amount of fancy math can change that fact.  But many who are involved in risk measurement will avoid ever using plain language to talk about what they are doing, preferring to hide in a thicket of mathematical jargon.  Real understanding of what one is doing with a risk measurement process includes the ability to say what it entails to someone without an advanced quant degree.

6.  Ignoring consequences.  There is a stream of thinking that science can be disassociated from its consequences.  Whether or not that is true, risk measurement cannot.  The person doing the risk measurement must be aware of the consequences of their findings and anticipate what might happen if management truly believes the measurements and acts upon them.

7.  Crying wolf.  Risk measurement requires a concentration on the negative side of potential outcomes.  Many in risk management keep trying to tie the idea of “risk” to both upsides and downsides.  They have it partly right.  Risk is a word that means what it means, and the common meaning associates risk with downside potential.  However, the risk manager who does not keep in mind that the risk calculations are also associated with potential gains will be thought a total Cassandra and will lose all attention.  This is one of the reasons why scenario and stress tests are difficult to use.  One set of people will prepare the downside story and another set the upside story.  Decisions become a tug of war between opposing points of view, when in fact both points of view are correct.
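
Here is a minimal sketch of sin 4 from the list above: the same risk measure calibrated on a benign window versus an adverse window.  The returns are simulated, so the volatilities and the normal-VaR formula are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    benign = rng.normal(0.0, 0.008, 500)   # calm period: 0.8% daily volatility
    adverse = rng.normal(0.0, 0.025, 500)  # stressed period: 2.5% daily volatility

    for name, window in (("benign", benign), ("adverse", adverse)):
        var_99 = 2.326 * window.std()      # one-day 99% VaR, normal assumption
        print(f"{name} window: 99% VaR = {var_99:.2%}")
    # The choice of calibration window largely determines the answer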

There are doubtless many more possible sins.  Feel free to add your favorites in the comments.

But one final thought.  Calling it MEASUREMENT might be the greatest sin.

Global Convergence of ERM Requirements in the Insurance Industry

January 27, 2011

Role of Own Risk and Solvency Assessment in Enterprise Risk Management

Insurance companies tend to look backwards to see if there was enough capital for the risks that were present then. It is important for insurance companies to be forward looking and to assess whether enough capital is in place to take care of risks in the future. Though it is mandatory for insurance firms to comply with solvency standards set by regulatory authorities, what is even more important is the need for top management to take responsibility for certifying solvency. Performing an Own Risk and Solvency Assessment (ORSA) is the key for the insurance industry.

  • Global Convergence of ERM Regulatory requirements with NAIC adoption of ORSA regulations
  • Importance of evaluating Enterprise Risk Management for ORSA
  • When to do an ORSA and what goes in an ORSA report?
  • Basic and Advanced ERM Practices
  • ORSA Plan for Insurers
  • Role of Technology in Risk Management

Join this MetricStream webinar

Date: Wednesday February 16, 2011
Time: 10 am EST | 4 pm CET | 3pm GMT
Duration: 1 hour

ERM Fundamentals

January 21, 2011

You have to start somewhere.

My suggestion is that rather than starting with someone else’s idea of ERM, you start with what YOUR COMPANY is already doing.

In that spirit, I offer up these eight Fundamental ERM Practices.  To follow my suggestion, you would start in each of these eight areas with a self assessment (one simple way to record such an assessment is sketched after the list).  Identify what you already have in these eight areas.  THEN start to think about what to build.  If there are gaping holes, plan to fill those in with new practices.  If there are areas where your company already has a rich vein of existing practice, build gently on that foundation.  It is much better to use ERM to enhance existing good practice than to tear down existing systems that are already working.  Making significant improvements to existing good practices should be one of your lowest priorities.

  1. Risk Identification: Systematic identification of principal risks – Identify and classify risks to which the firm is exposed and understand the important characteristics of the key risks

  2. Risk Language: Explicit firm-wide words for risk – A risk definition that can be applied to all exposures, that helps to clarify the range of size of potential loss that is of concern to management and that identifies the likelihood range of potential losses that is of concern. Common definitions of the usual terms used to describe risk management roles and activities.

  3. Risk Measurement: What gets measured gets managed – Includes: Gathering data, risk models, multiple views of risk and standards for data and models.

  4. Policies and Standards: Clear and comprehensive documentation – Clearly documented the firm’s policies and standards regarding how the firm will take risks and how and when the firm will look to offset, transfer or retain risks. Definitions of risk-taking authorities; definitions of risks to be always avoided; underlying approach to risk management; measurement of risk; validation of risk models; approach to best practice standards.

  5. Risk Organization: Roles & responsibilities – Coordination of ERM through: High-level risk committees; risk owners; Chief Risk Officer; corporate risk department; business unit management; business unit staff; internal audit. Assignment of responsibility, authority and expectations.

  6. Risk Limits and Controlling: Set, track, enforce – Comprehensively clarifying expectations and limits regarding authority, concentration, size, quality; a distribution of risk targets and limits, as well as plans for resolution of limit breaches and consequences of those breaches.

  7. Risk Management Culture: ERM & the staff – ERM can be much more effective if there is risk awareness throughout the firm. This is accomplished via a multi-stage training program, targeting universal understanding of how the firm is addressing risk management best practices.

  8. Risk Learning: Commitment to constant improvement – A learning and improvement environment that encourages staff to make improvements to company practices based on unfavorable and favorable experiences with risk management and losses, both within the firm and from outside the firm.
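
Here is a minimal sketch of one way to record that self assessment; the scores below are invented for illustration:

    # 0 = gaping hole, 1 = partial practice, 2 = rich existing practice
    assessment = {
        "Risk Identification": 2,
        "Risk Language": 0,
        "Risk Measurement": 1,
        "Policies and Standards": 1,
        "Risk Organization": 2,
        "Risk Limits and Controlling": 0,
        "Risk Management Culture": 1,
        "Risk Learning": 0,
    }

    actions = {0: "gaping hole - plan new practices",
               1: "partial practice - build gently on the foundation",
               2: "rich existing practice - lowest priority for change"}

    for area, score in assessment.items():
        print(f"{area}: {actions[score]}")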

Risk Environment

January 10, 2011

It seems that there are three approaches to looking at the riskiness of the future when assessing the risk of a specific exposure:

  1. Look at the “long term” frequency and severity, and assess risk based upon the assumption that the near term future is a “typically” risky period.
  2. Look at the market’s current idea of near term future riskiness.  This is evident in terms of items such as implied volatility.
  3. Focus on “Expert Opinion” of the risk environment.

There are proponents of each approach.  That is because there are strengths and weaknesses for each approach.

Long Term Approach

The long term view of the risk environment will help to make sure that the company takes into account “all” of the risk that could be inherent in its risk positions.  The negative of this approach is that, most of the time, it will not represent the risk environment that will be faced in the immediate future.

Market View

The market view of risk definitely gives an immediate view of the risk environment.  Proponents think it is the only valid approach to getting such a view.  However, the market implied risk view may be a little too short term for some purposes.  And when trying to look at the longer term risk environment through market implied factors, very large inaccuracies may creep into the view.  That is because factors other than the view of risk are part of the market implied factors; these are not so large for very short term periods, but they grow to predominate over longer time periods.

Expert Opinion

Expert opinion can also reflect the current risk environment and is potentially adaptable to the desired time frame.  However, the main complaint with expert opinion is that there is no specific way to know whether an expert opinion of the risk environment is equivalent between one point in time and another.  One explanation for that uncertainty can be found in the changing risk attitudes that are described by the Theory of Plural Rationalities. Experts may have methods to overcome the changing waves of risk attitudes that they are personally exposed to, but it is hard to believe that they can escape that basic human cycle entirely.

Risk environment is important in setting risk strategies and adjusting risk tolerances and appetites.

Using the Long Term approach at all times, effectively ignoring the different risk environments, is going to be about as effective as crossing a street using long term averages for the amount of traffic.

ERM an Economic Sustainability Proposition

January 6, 2011

Global ERM Webinars – January 12 – 14 (CPD credits)

We are pleased to announce the fourth global webinars on risk management. The programs are a mix of backward and forward looking subjects as our actuarial colleagues across the globe seek to develop the science and understanding of the factors that are likely to influence our business and professional environment in the future. The programs in each of the three regions are a mix of technical and qualitative dissertations dealing with subjects as diverse as regulatory reform, strategic and operational risks, on one hand, and the modeling of tail risks and implied volatility surfaces, on the other. For the first time, and in keeping with our desire to ensure a global exchange of information, each of the regional programs will have presentations from speakers from the other two regions on subjects that have particular relevance to their markets.

Asia Pacific Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-14-ap/agenda.aspx

Europe/Africa Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-14/agenda.aspx

Americas Program
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-12/agenda.aspx

Registration
http://www.soa.org/professional-development/event-calendar/event-detail/erm-economic/2011-01-12/registration.aspx

Intrinsic Risk

November 26, 2010

If you were told that someone had flipped a coin 60 times and that heads came up 2/3 of the time, you might have several reactions.

  • You might doubt whether the coin was a real coin or whether it was altered.
  • You might suspect that the person who got that result was doing something other than a fair flip.
  • You might doubt whether they are able to count or whether they actually counted.
  • You might doubt whether they are telling the truth.
  • You might start to calculate the likelihood of that result with a fair coin.

Once you take that last step, you find that the story is highly unlikely, but definitely not impossible.  In fact, my computer tells me that if I lined up 225 people and had them all flip a coin 60 times, there is a fifty-fifty chance that at least one person will get that many heads.
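That arithmetic is easy to check.  Here is a minimal sketch in Python; note that the fifty-fifty figure for 225 people corresponds to reading “that many heads” as strictly more than 40 of 60, while reading it as 40 or more gives a noticeably higher group probability:

```python
from math import comb

def tail_prob(n: int, k: int) -> float:
    """P(at least k heads in n fair coin flips)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p_40 = tail_prob(60, 40)   # at least two-thirds heads
p_41 = tail_prob(60, 41)   # strictly more than two-thirds heads

for p in (p_40, p_41):
    # Chance that at least one of 225 independent flippers lands in the tail
    group = 1 - (1 - p) ** 225
    print(f"one flipper: {p:.4f}   at least one of 225: {group:.2f}")
```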

So how should you evaluate the risk of getting 40 heads out of 60 flips?  Should you do calculations based upon the expected likelihood of heads derived from an examination of the coin?  You look at it and see that there are two sides and a thin edge.  You assess whether it seems to be uniformly balanced.  Then you conclude that you are fairly certain of the inherent risk of the coin flipping process.

Your other choice to assess the risk is to base your evaluation on the observed outcomes of coin flips.  This will mean that the Central Limit Theorem should help you to eventually get the right number.  But if your first observation is the person described above, then it will take quite a few additional observations before you find out what the Central Limit Theorem has to tell you.

The point is that a purely observation based approach will not always give you the best answer.  It is good to make sure that you understand something about the intrinsic risk.

If you are still not convinced of this, ask the turkey.  Taleb uses that turkey story to explain a Black Swan.  But if you think about it, many Black Swans are nothing more than ignorance of intrinsic risk.

Measuring Risks

November 25, 2010

What gets measured gets managed.

Measuring risks is the second of the eight ERM Fundamental Practices.

There are many, many ways to measure risks.  For the most part, they give information about different aspects of the risks.  Some basic types of measures include:

  • Transaction flows, measuring counts or dollar flows of risk transactions
  • Mean expected loss
  • Standard deviation of loss
  • Expected loss at a particular confidence level (also known as VaR or Value at Risk)
  • Average expected loss beyond a certain confidence level (also known as TVaR, Expected Shortfall, and other names); both of these tail measures are illustrated in the sketch below
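As an illustration of the last two measures, here is a minimal sketch that estimates VaR and TVaR from a simulated loss distribution.  The lognormal parameters and the confidence level are arbitrary assumptions chosen only to make the example run:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical loss distribution -- the lognormal parameters are arbitrary.
losses = rng.lognormal(mean=10.0, sigma=1.5, size=100_000)

def var(losses: np.ndarray, level: float = 0.99) -> float:
    """Value at Risk: the loss at the given confidence level."""
    return float(np.quantile(losses, level))

def tvar(losses: np.ndarray, level: float = 0.99) -> float:
    """Tail Value at Risk (Expected Shortfall): average loss beyond VaR."""
    return float(losses[losses >= var(losses, level)].mean())

print(f"mean loss: {losses.mean():>12,.0f}")
print(f"99% VaR:   {var(losses):>12,.0f}")
print(f"99% TVaR:  {tvar(losses):>12,.0f}")
```

Note how TVaR always sits above VaR at the same confidence level, since it averages the losses that exceed the VaR threshold.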

So you need to think about what you want from a risk measure.  Here are some criteria of a GOOD RISK MEASURE:

1. Timely – if you do not get the information about risk in time, the information is potentially entertaining or educational, but not useful.

2. Accurately distinguishes broad degrees of riskiness within the broad risk class – allowing you to discern whether one choice is riskier than another.

3. Not too expensive or time intensive to produce – the information needs to be more valuable than the cost of the process that produces it, either in dollars or opportunity cost based on the time it uses up.

4. Understood by all who must use it – some will spend lots of time making sure that they have a risk measure that is the theoretical BEST.  But the improvements from intellectual purity may come with a large trade-off in the ability of a broad audience to understand.  And if people in power do not understand something, there are few who will really rely on it when their careers are at stake in an extreme risk situation.

5. Actionable – the risk measure must be able to point to a possible action.  Otherwise, it just presents management with a difficult and unpleasant puzzle that needs to be solved.  And risk is often not a presenting problem but a possible problem, so it is always easier to defer actions that are unclearly indicated.

If you can set up your risk measurement systems so that you can satisfy all five of those criteria, then you can feel pretty good. Your risk management system is well served.

But some do not stop there.  They look for EXCELLENT RISK MEASURES.  Those are measures that in addition to satisfying the five criteria above:

6. Can help to identify changes to risk quality – this is the Achilles heel of the risk measurement process: the gradual deterioration of the riskiness of the individual risks.  Without this ability, it is possible for a tightly managed risk portfolio to fail unexpectedly because the measures gradually drifted away from the actual underlying riskiness of the portfolio of risks.

7. Provides information that is consistent across different Broad Classes of Risk – this would allow a firm to actually do Risk Steering, and to more quantitatively assess their Diversification.  So this quality is needed to support two of the four key ERM Strategies and is also needed to apply an enterprise view to Risk Trading and Loss Controlling.

8. For the most sensitive risks, will pinpoint variations in risk levels – this is the characteristic that brings a risk measure to the ultimate level of actionability.  This is the information that risk managers who are seeking to drive their risk measurement down to the transaction level should be seeking to achieve.  However, it is very important to know the actual accuracy of the risk measure in these situations.  When the standard error is much larger than the differences in risk between similar transactions, then process has gone ahead of substance.

Turkey Risk

November 25, 2010

On Thanksgiving Day here in the US, let us recall Nassim Taleb’s story about the turkey.  For 364 days the turkey saw no risk whatsoever, just good eats.  Then one day, the turkey became dinner.

For some risks, studying the historical record and making a model from experience just will not give useful results.

And, remembering the experience of the turkey, a purely historical basis for parameterizing risk models could get you cooked.

Happy Thanksgiving.

Simplicity Fails

September 16, 2010

Usually the maxim KISS (Keep it Simple Stupid) is the way to go.

But in Risk Management, just the opposite is true. If you keep it simple, you will end up being eaten alive.

That is because risk is constantly changing. At any time, your competitors will try to change the game, taking the better risks for themselves and, if you keep it simple and stand still, leaving you with the worst risks.

If you keep it simple and focused and identify the ONE MOST IMPORTANT RISK METRIC and focus all of your risk management systems around controlling risk as defined by that one metric, you will eventually end up accumulating more and more of some risk that fails to register under that metric.  See Risk and Light.

The solution is not to get better at being Simple, but to get good at managing complexity.

That means looking at risk through many lenses, and then focusing on the most important aspects of risk for each situation.  That may mean that you will need to have different risk measures for different risks, which is actually the opposite of the thrust of the ERM movement towards the homogenization of risk measurement.  There are clearly benefits to having one common measure of risk that can be applied across all risks, but some folks went too far and abandoned their risk specific metrics in the process.

And there needs to be a process of regularly going back to what you had decided were the most important risk measures and making sure that there has not been some sort of regime change that means you should be adding some new risk measure.

So, you should try Simple at your own risk.

It’s simple.  Just pick the important strand.

On The Top of My List

August 28, 2010

I finished a two hour presentation on how to get started with ERM and was asked what were my top 3 things to keep in mind and top 3 things to avoid.

Here’s what I wish I had said:

Top three things to keep in mind when starting an ERM Program:

  1. ERM must have a clear objective to be successful.  That objective should reflect both the view of management and the board about the amount of risk in the current environment as well as the direction that the company is headed in terms of the future target of risk as compared to capacity.  And finally, the objective for ERM must be compatible with the other objectives of the firm.  It must be reasonably possible to accomplish both the ERM objective and the growth and profit objectives of the firm at the same time.
  2. ERM must have someone who is committed to accomplishing the objective of ERM for the firm.  That person also must have the authority within the firm to resolve most conflicts between the ERM development process and the other objectives of the firm. And they must have access to the CEO to be able to resolve any conflicts that they do not have the authority to resolve personally.
  3. Exactly what you do first is much less important than the fact that you start doing something to develop an ERM program.   Doing something that involves actually managing risk and reporting the activity is a better choice than a long term developmental project.  It is not optimal for the firm to commit to ERM, to identify resources for that process and then to have those people and ERM disappear from sight for a year or more to develop the ERM system.  Much better to start giving everyone in management of the firm some ideas of what ERM looks and feels like.  Recognize that one product that you are building is confidence in ERM.

Things to Avoid:

  1. Valuing ERM retrospectively, taking into account only experienced gains and losses.  (see ERM Value)  A good ERM program changes the likelihood of losses, but in any short period of time actual losses are a matter of chance.  On the other hand, if your ERM program works to a limit for losses from an individual transaction, then it IS a failure if the firm has losses above that amount for individual transactions.
  2. Starting out on ERM development with the idea that ERM is only correct if it validates existing company decisions.  New risk evaluation systems will almost always find one or more major decisions that expose the company to too much risk in some way. At least they will if the evaluation system is Comprehensive.
  3. Letting ERM routines substitute for critical judgment.  Some of the economic carnage of the Global Financial Crisis was perpetrated by firms whose actions were supported by risk management systems that told them that everything was ok.  Risk managers need to be humble.

But in fact, I did get some of these out. So next time, I will be prepared.

Risk Adjusted Performance Measures

June 20, 2010

By Jean-Pierre Berliet

Design weaknesses are an important source of resistance to ERM implementation. Some are subtle and thus often remain unrecognized. Seasoned business executives recognize readily, however, that decision signals from ERM can be misleading in particular situations in which these design weaknesses can have a significant impact. This generates much organizational heat and can create a highly dysfunctional decision environment.

Discussions with senior executives have suggested that decision signals from ERM would be more credible and that ERM would be a more effective management process if ERM frameworks were shown to produce credible and useful risk adjusted performance measures.

Risk adjusted performance measures (RAPM) such as RAROC (Risk Adjusted Return On Capital), first developed in banking institutions, or Risk Adjusted Economic Value Added (RAEVA) have been heralded as significant breakthroughs in performance measurement for insurance companies. They were seen as offering a way for risk bearing enterprises to relate financial performance to capital consumption in relation to risks assumed and thus to value creation.

Many insurance companies have attempted to establish RAROC/RAEVA performance measurement frameworks to assess their economic performance and develop value enhancing business and risk management strategies. A number of leading companies, mostly in Europe where regulators are demanding it, have continued to invest in refining and using these frameworks. Even those that have persevered, however, understand that framework weaknesses create management challenges that cannot be ignored.

Experienced executives recognize that the attribution of capital to business units or lines provides a necessary foundation for aligning the perspectives of policyholders and shareholders.

Many company executives recognize, however, that i) risk adjusted performance measures can be highly sensitive to methodologies that determine the attribution of income and capital and ii) earnings reported for a period do not adequately represent changes in the value of insurance businesses. As a result, these senior executives believe that decision signals provided by risk adjusted performance measures need to be evaluated with great caution, lest they might mislead. Except for Return on Embedded Value measures that are comparatively more challenging to develop and validate than RAROC/RAEVA measures, risk adjusted performance measures are not typically capable of relating financial performance to return on value considerations that are of critical importance to shareholders.

To provide information that is credible and useful to management and shareholders, insurance companies need to establish risk adjusted performance measures based on:

  • A (paid up or economic) capital attribution method, with explicit allowance for deviations in special situations, that is approved by Directors
  • Period income measures aligned with pricing and expense decisions, with explicit separation of in-force/run-off, renewals, and new business
  • Supplemental statements relating period or projected economic performance and changes in value to the value of the underlying business.
  • Reconciliation of risk adjusted performance metrics to reported financial results under accounting principles used in their jurisdictions (GAAP, IFRS, etc.)
  • Establishment and maintenance of appropriate controls, formally certified by management, reviewed and approved by the Audit Committee of the Board of Directors.

In many instances, limitations and weaknesses in performance measures create serious differences of view between a company’s central ERM staff and business executives.


Risk Velocity

June 17, 2010

By Chris Mandel

Understand the probability of loss, adjusted for the severity of its impact, and you have a sure-fire method for measuring risk.

Sounds familiar and seems on point; but is it? This actuarial construct is useful and adds to our understanding of many types of risk. But if we had these estimates down pat, then how do we explain the financial crisis and its devastating results? The consequences of this failure have been overwhelming.

Enter “risk velocity,” or how quickly risks create loss events. Another way to think about the concept is in terms of “time to impact,” a military phrase, a perspective that implies proactively assessing when the objective will be achieved. While relatively new in the risk expert forums I read, I would suggest this is a valuable concept to understand and even more so to apply.

It is well and good to know how likely it is that a risk will manifest into a loss. Better yet to understand what the loss will be if it manifests. But perhaps the best way to generate a more comprehensive assessment of risk is to estimate how much time there may be to prepare a response or make some other risk treatment decision about an exposure. This allows you to prioritize more rapidly developing exposures for action. Dynamic action is at the heart of robust risk management.
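One hedged way to picture this: score each exposure on likelihood, severity, and time to impact, and let the short-fuse exposures rise to the top of the list.  A minimal sketch follows; the scoring formula and all of the numbers are illustrative assumptions, not anything from the article:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    probability: float       # chance of manifesting as a loss, 0 to 1
    severity: float          # estimated loss if it manifests, $m
    months_to_impact: float  # estimated lead time before the loss lands

    def priority(self) -> float:
        # Expected loss per month of lead time: an illustrative way to
        # let fast-moving exposures outrank slow-moving ones.
        return self.probability * self.severity / self.months_to_impact

exposures = [
    Exposure("counterparty default", 0.10, 500, 1),
    Exposure("regulatory change",    0.50, 200, 18),
    Exposure("nat cat season",       0.20, 800, 6),
]

for e in sorted(exposures, key=lambda e: e.priority(), reverse=True):
    print(f"{e.name:22s} priority {e.priority():6.1f}")
```

Under this toy scoring, a modest-probability exposure with one month of lead time outranks a larger but slower-moving one, which is the point of the velocity idea.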

After all, expending all of your limited resources on identification and assessment really doesn’t buy you much but awareness. In fact, awareness, from a legal perspective, creates another element of risk, one that can be quite costly if reasonable action is not taken in a timely manner. Not every exposure will result in this incremental risk, but a surprising number do.

Right now, there’s a substantial number of actors in the financial services sector who wish they’d understood risk velocity and taken some form of prudent action that could have perhaps altered the course of loss events as they came home to roost; if only.

More at Risk and Insurance

Winners and Losers

June 14, 2010

Sometimes quants who get involved with building new economic capital models have the opinion that their work will reveal the truth about the risks of the group and that the best approach is to just let the truth be told and let the chips fall where they may.

Then they are completely surprised that their project has enemies within management.  And that those enemies are actively at work undermining the credibility of the model.  Eventually, the modelers are faced with a choice of adjusting the model assumptions to suit those enemies or having the entire project discarded because it has failed to get the confidence of management.

But that situation is actually totally predictable.

That is because it is almost a sure thing that the first comprehensive and consistent look at the group’s risks will reveal winners and losers.  And if this really is a new way of approaching things, one or more of the losers will come as a complete surprise to many.

The easiest path for the managers of the newly identified loser business is to undermine the model.  And it is completely natural to find that they will usually be skeptical of this new model that makes their business look bad.  It is quite likely that they do not think that their business takes too much risk or has too little profit in comparison to their risk.

In the most primitive form, I first saw this in the late 1970s, when the life insurer where I worked shifted from a risk approach that allocated all capital in proportion to reserves to one that recognized insurance risk and investment risk as two separate factors.  The term insurance products were suddenly found to be drastically underpriced.  Of course, the product manager of that product was an instant enemy of the new approach and was able to find many reasons why capital shouldn’t be allocated to insurance risk.

The same sorts of issues had been experienced by firms when they first adopted nat cat models and shifted from a volatility risk focus to a ruin risk focus.

To defuse these sorts of issues, steps must be taken to separate the message from the messenger.  There are two main ways to accomplish this:

  1. The message about the new level of risks needs to be delivered long before the model is completed.  This cannot wait until the model is available and the exact values are completely known.  Management should be exposed to broad approximations of the findings of the model at the earliest possible date.  And the rationale for the levels of the risk needs to be revealed and discussed and agreed long before the model is completed.
  2. Once the broad levels of the risk are accepted and the problem areas are known, a realistic period of time should be identified for resolving these newly identified problems, and appropriate resources allocated to developing the solution.  Too often the reaction is to keep doing business as usual and avoid attempting a solution.

That way, the model can take its rightful place as a bringer of light to the risk situation, rather than the enemy of one or more businesses.

Common Terms for Severity

June 1, 2010

In the US, firms are required to disclose their risks.  This has led to an exercise that is particularly useless.  Firms obviously spend very little time on what they publish under this part of their financial statement.  Most firms seem to be using boilerplate language and a list of risks that is as long as possible.  It is clearly a totally compliance based CYA activity.  The only way that a firm could “lose” under this system is if they fail to disclose something that later creates a major loss.  So it is best to mention everything under the sun.  But when you look across a sector at these lists, you find a startling degree of difference in the risks disclosed.  That is because there is absolutely no standard that is applied to tell what is a risk and, if something is a risk, how significant it is.  The idea of risk severity is totally missing.

[Image: Bread Box]

What would help would be a common set of terms for Severity of losses from risks.  Here is a suggestion of a scale for discussing loss severity for an individual firm: 

  1. A Loss that is a threat to earnings.  This level of risk could result in a loss that would seriously impair or eliminate earnings.
  2. A Loss that could result in a significant reduction to capital.  This level of risk would result in a loss that would eliminate earnings and, in addition, eat into capital, reducing it by 10% to 20%.
  3. A Loss that could result in severe reduction of business activity.  For insurers, this would be called “Going into Run-off”.  It means that the firm is not insolvent, but it is unable to continue doing new business.  This state often lasts for several years as old liabilities of the insurer are slowly paid off as they become due.  Usually the firm in this state has some capital, but not enough to make any customers comfortable trusting them for future risks.
  4. A Loss that would result in the insolvency of the firm.

Then in addition, for an entire sector or sub sector of firms: 

  1. Losses that significantly reduce earnings of the sector.  A few firms might have capital reductions.
  2. Losses that significantly impair capital for the sector.  A few firms might be run out of business from these losses.
  3. Losses that could cause a significant number of firms in the sector to be run out of business.  The remainder of the sector still has capacity to pick up the business of the firms that go into run-off.  A few firms might be insolvent. 
  4. Losses that are large enough that the sector no longer has the capacity to do the business that it had been doing.  There is a forced reduction in activity in the sector until capacity can be replaced, either internally or from outside funds.  A large number of firms are either insolvent or will need to go into run-off. 

These can be referred to as Class 1, Class 2, Class 3, Class 4 risks for a firm or for a sector.  
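To make the firm-level scale concrete, here is a minimal sketch of a classifier for a single loss scenario.  The 10% and 100%-of-capital boundaries follow the descriptions above; the 20% line separating Class 2 from Class 3 follows the stated 10% to 20% band, but treating everything between 20% and 100% as Class 3 is an assumption made for this sketch:

```python
def firm_loss_class(loss: float, earnings: float, capital: float) -> int:
    """Map a single loss scenario to the firm-level severity classes above."""
    excess = loss - earnings          # portion of the loss that eats into capital
    if excess >= capital:
        return 4                      # insolvency
    if excess >= 0.2 * capital:
        return 3                      # severe reduction of business activity (assumed band)
    if excess >= 0.1 * capital:
        return 2                      # significant reduction to capital (10%-20%)
    return 1                          # threat to earnings

# Example: a $900m loss for a firm with $100m of earnings and $1bn of capital
print(firm_loss_class(900, 100, 1000))   # -> 3 under these assumed thresholds
```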

Class 3 and Class 4 Sector risks are Systemic Risks.  

Care should be taken to make sure that everyone understands that risk drivers such as equity markets or CDS can possibly produce Class 1, Class 2, Class 3 or Class 4 losses for a firm or for a sector in a severe enough scenario.  There is no such thing as classifying a risk as always falling into one Class.  However, it is possible that at a point in time, a risk may be small enough that it cannot produce a loss that is more than a Class 1 event.

For example, at a point in time (perhaps 2001), US sub prime mortgages were not a large enough class to rise above a Class 1 loss for any firms except those whose sole business was in that area.  By 2007, Sub Prime mortgage exposure was large enough that Class 4 losses were created for the banking sector.  

Looking at Sub Prime mortgage exposure in 2006, a bank should have been able to determine that sub primes could create a Class 1, Class 2, Class 3 or even Class 4 loss in the future.  The banks could have determined the situations that would have led to losses in each Class for their firm and determined the likelihood of each situation, as well as the degree of preparation needed for the situation.  This activity would have shown the startling growth of the sub prime mortgage exposure from a Class 1 to a Class 2 through Class 3 to Class 4 in a very short time period.  

Similarly, the prudential regulators could theoretically have done the same activity at the sector level.  Only in theory, because the banking regulators do not at this time collect the information needed to do such an exercise.  There is a proposal that is part of the financial regulation legislation to collect such information.  See CE_NIF.

Comprehensive Actuarial Risk Evaluation

May 11, 2010

The new CARE report has been posted to the IAA website this week.

It raises a point that must be fairly obvious to everyone: you just cannot manage risks without looking at them from multiple angles.

Or at least it should now be obvious. Here are 8 different angles on risk that are discussed in the report and my quick take on each:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE   –  Well, maybe the market has it wrong.  Do your own homework in addition to looking at what the market thinks.  If the folks buying exposure to US mortgages had done fundamental evaluation, they might have noticed that there were a significant number of sub prime mortgages where the gross mortgage payments were higher than the gross income of the mortgagee.
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS  –  Some firms did all of their analysis on an economic basis and kept saying that they were fine as their reported financials showed them dying.  They should have known in advance about the risk of accounting results that differed from their analysis.
  3. REGULATORY MEASURE OF RISK  –  vs. any of the above.  The same logic applies as with the accounting.  Even if you have done your analysis “right”, you need to know how important others, including your regulator, will be seeing things.  Better to have a discussion with the regulator long before a problem arises.  You are just not as credible saying that the regulatory view is off target in the middle of what seems to the regulator to be a crisis.
  4. SHORT TERM VS. LONG TERM RISKS  –  While it is really nice that everyone has agreed to focus on a one year view of risks, for situations that may well extend beyond one year it can be vitally important to know how the risk might impact the firm over a multi year period.
  5. KNOWN RISK AND EMERGING RISKS  –  The fact that your risk model did not include anything for volcano risk is no help when the volcano messes up your business plans.
  6. EARNINGS VOLATILITY VS. RUIN  –  Again, while an agreement on a 1 in 200 loss focus is convenient, it does not in any way exempt an organization from risks that could have a major impact at some other return period.
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO  –  Remember, diversification does not reduce absolute risk.
  8. CASH VS. ACCRUAL  –  This is another way of saying to focus on the economic vs the accounting.

Read the report to get the more measured and complete view prepared by the 15 actuaries from the US, UK, Australia and China who participated in the working group.

Comprehensive Actuarial Risk Evaluation

Assumptions Embedded in Risk Analysis

April 28, 2010

The picture below from Doug VanDemeter’s blog gives an interesting take on the embedded assumptions in various approaches to risk analysis and risk treatment.

But what I take from this is a realization that many firms have activity in one or two or three of those boxes, but the only box that does not assume away a major part of reality is generally empty.

In reality, most financial firms do experience market, credit and liability risks all at the same time and most firms do expect to be continuing to receive future cashflows both from past activities and from future activities.

But most firms have chosen to measure and manage their risk by assuming that one or two or even three of those things are not a concern, selectively putting on blinders to major aspects of their risks – first blinding their right eye, then their left, then not looking up and finally not looking down.

Some of these processes were designed that way in earlier times when computational power would not have allowed anything more.  For many firms their affairs are so very complicated and their future is so uncertain that it is simply impractical to incorporate everything into one all encompassing risk assessment and treatment framework.

At least that is the story that folks are most likely to use.

But the fact that their activity is too complicated for them to model does not seem to send them any flashing red signal that it is possible that they really do not understand their risk.

So look at Doug’s picture and see which are the embedded assumptions in each calculation – the ones I am thinking of are the labels on the OTHER rows and columns.

For Credit VaR – the embedded assumption is that there is no market risk and that there are no new assets or liabilities (the business is in sell-off mode).

For Interest rate VaR – the embedded assumption is that there is no credit risk and no new assets or liabilities (the business is in sell-off mode).

For ALM – the embedded assumption is that there is no credit risk and the business is in run-off mode.

Those are the real embedded assumptions.  We should own up to them.

Dangerous Words

April 27, 2010

One of the causes of the Financial Crisis that is sometimes cited is an inappropriate reliance on complex financial models.  In our defense, risk managers have often said that users did not take the time to understand the models that they relied upon.

And I have said that in some sense, blaming the bad decisions on the models is like a driver who gets lost blaming it on the car.

But we risk managers and risk modelers do need to be careful with the words that we use.  Some of the most common risk management terminology is guilty of being totally misleading to someone who has no risk management training – who simply relies upon their understanding of English.

One of the fundamental steps of risk management is to MEASURE RISK.

I would suggest that this very common term is potentially misleading and risk managers should consider using it less.

In common usage, you could say that you measure a distance between two points or measure the weight of an object.  Measurement usually refers to something completely objective.

However, when we “measure” risk, it is not at all objective.  That is because Risk is actually about the future.  We cannot measure the future.  Or any specific aspect of the future.

While I can measure my height and weight today, I cannot now measure what it will be tomorrow.  I can predict what it might be tomorrow.  I might be pretty certain of a fairly tight range of values, but that does not make my prediction into a measurement.

So by the very words we use to describe what we are doing, we sought to instill a degree of certainty and reliability that is impossible and unwarranted.  We did that perhaps as mathematicians who are used to starting a problem by defining terms.  So we start our work by defining our calculation as a “measurement” of risk.

However, non-mathematicians are not so used to defining A = B at the start of the day and then remembering thereafter that whenever they hear someone refer to A, that they really mean B.

We also may have defined our work as “measuring risk” to instill in it enough confidence from the users that they would actually pay attention to the work and rely upon it.  In which case we are not quite as innocent as we might claim on the over reliance front.

It might be difficult now to retreat, however.  Try telling management that you do not now, nor have you ever, measured risk.  And see what happens to your budget.

Volcano Risk 2

April 20, 2010

Top 10 European Volcanos in terms of people nearby and potential losses from an eruption:

Volcano (Country) – Affected population – Value of residences at risk

  1. Vesuvius (Italy) – 1,651,950 – $66.1bn
  2. Campi Flegrei (Italy) – 144,144 – $7.8bn
  3. La Soufrière Guadeloupe (Guadeloupe, France) – 94,037 – $3.8bn
  4. Etna (Italy) – 70,819 – $2.8bn
  5. Agua de Pau (Azores, Portugal) – 34,307 – $1.4bn
  6. Soufrière Saint Vincent (Saint Vincent, Caribbean) – 24,493 – $1bn
  7. Furnas (Azores, Portugal) – 19,862 – $0.8bn
  8. Sete Cidades (Azores, Portugal) – 17,889 – $0.7bn
  9. Hekla (Iceland) – 10,024 – $0.4bn
  10. Mt Pelée (Martinique, France) – 10,002 – $0.4bn

http://www.strategicrisk.co.uk/story.asp?source=srbreaknewsRel&storycode=384008

LIVE from the ERM Symposium

April 17, 2010

(Well not quite LIVE, but almost)

The ERM Symposium is now 8 years old.  Here are some ideas from the 2010 ERM Symposium…

  • Survivor Bias creates support for bad risk models.  If a model underestimates risk there are two possible outcomes – good and bad.  If bad, then you fix the model or stop doing the activity.  If the outcome is good, then you do more and more of the activity until the result is bad.  This suggests that model validation is much more important than just a simple minded tick-the-box exercise.  It is a life and death matter.
  • BIG is BAD!  Well maybe.  Big means large political power.  Big will mean that the political power will fight for parochial interests of the Big entity over the interests of the entire firm or system.  Safer to not have your firm dominated by a single business, distributor, product, region.  Safer to not have your financial system dominated by a handful of banks.
  • The world is not linear.  You cannot project the macro effects directly from the micro effects.
  • Due Diligence for mergers is often left until the very last minute and given an extremely tight time frame.  That will not change, so more due diligence needs to be a part of the target pre-selection process.
  • For merger of mature businesses, cultural fit is most important.
  • For newer businesses, retention of key employees is key
  • Modelitis = running the model until you get the desired answer
  • Most people, when asked about future emerging risks, respond with the most recent problem – prior knowledge blindness
  • Regulators are sitting and waiting for a housing market recovery to resolve problems that are hidden by accounting in hundreds of banks.
  • Why do we think that any bank will do a good job of creating a living will?  What is their motivation?
  • We will always have some regulatory arbitrage.
  • Left to their own devices, banks have proven that they do not have a survival instinct.  (I have to admit that I have never, ever believed for a minute that any bank CEO has ever thought for even one second about the idea that their bank might be bailed out by the government.  They simply do not believe that they will fail. )
  • Economics has been dominated by a religious belief in the mantra “markets good – government bad”
  • Non-financial businesses are opposed to putting OTC derivatives on exchanges because exchanges will only accept cash collateral.  If they are hedging physical asset prices, why shouldn’t those same physical assets be good collateral?  Or are they really arguing to be allowed to do speculative trading without posting collateral? Probably more of the latter.
  • It was said that systemic problems come from risk concentrations.  Not always.  They can come from losses and lack of proper disclosure.  When folks see some losses and do not know who is hiding more losses, they stop doing business with everyone.  None do enough disclosure and that confirms the suspicion that everyone is impaired.
  • Systemic risk management plans need to recognize that this is like forest fires.  If they prevent the small fires then the fires that eventually do happen will be much larger and more dangerous.  And someday, there will be another fire.
  • Sometimes a small change in the input to a complex system will unpredictably result in a large change in the output.  The financial markets are complex systems.  The idea that the market participants will ever correctly anticipate such discontinuities is complete nonsense.  So markets will always be efficient, except when they are drastically wrong.
  • Conflicting interests for risk managers who also wear other hats is a major issue for risk management in smaller companies.
  • People with bad risk models will drive people with good risk models out of the market.
  • Inelastic supply and inelastic demand for oil is the reason why prices are so volatile.
  • It was easy to sell the idea of starting an ERM system in 2008 & 2009.  But will firms who need that much evidence of the need for risk management forget why they approved it when things get better?
  • If risk function is constantly finding large unmanaged risks, then something is seriously wrong with the firm.
  • You do not want to ever have to say that you were aware of a risk that later became a large loss but never told the board about it.  Whether or not you have a risk management program.

Take CARE in evaluating your Risks

February 12, 2010

Risk management is sometimes summarized as a short set of simply stated steps:

  1. Identify Risks
  2. Evaluate Risks
  3. Treat Risks

There are much more complicated expositions of risk management.  For example, the AS/NZ Risk Management Standard makes 8 steps out of that. 

But I would contend that those three steps are the really key steps. 

The middle step “Evaluate Risks” sounds easy.  However, there can be many pitfalls.  A new report [CARE] from a working party of the Enterprise and Financial Risks Committee of the International Actuarial Association gives an extensive discussion of the conceptual pitfalls that might arise from an overly narrow approach to Risk Evaluation.

The heart of that report is a discussion of eight different either/or choices that are often made in evaluating risks:

  1. MARKET CONSISTENT VALUE VS. FUNDAMENTAL VALUE 
  2. ACCOUNTING BASIS VS. ECONOMIC BASIS         
  3. REGULATORY MEASURE OF RISK    
  4. SHORT TERM VS. LONG TERM RISKS          
  5. KNOWN RISK AND EMERGING RISKS        
  6. EARNINGS VOLATILITY VS. RUIN    
  7. VIEWED STAND-ALONE VS. FULL RISK PORTFOLIO       
  8. CASH VS. ACCRUAL 

The main point of the report is that for a comprehensive evaluation of risk, these are not choices.  Both paths must be explored.

Why the valuation of RMBS holdings needed changing

January 18, 2010

Post from Michael A Cohen, Principal – Cohen Strategic Consulting

Last November’s decision by the National Association of Insurance Commissioners (NAIC) to appoint PIMCO Advisory to assess the holdings of non-agency residential mortgage-backed securities (RMBS) signaled a marked change in attitude towards the major ratings agencies. This move by the NAIC – the regulatory body for the insurance industry in the US, comprising the insurance commissioners of the 50 states – was aimed at determining the appropriate amount of risk-adjusted capital to be held by US insurers (more than 1,600 companies in both the life and property/casualty segments) for RMBS on their balance sheets.

Why did the NAIC act?

A number of problems had arisen from the way RMBS held by insurers had historically been rated by some rating agencies which are “nationally recognized statistical rating organizations” (NRSROs), though it is important to note that not all rating agencies which are NRSROs had engaged in this particular rating activity.

RMBS had been assigned (much) higher ratings than they seem to have deserved at the time, albeit with the benefit of hindsight. The higher ratings also led to lower capital charges for entities holding these securitizations (insurers, in this example) in determining the risk-adjusted capital they needed to hold for regulatory standards.

Consequently, these insurance organizations were ultimately viewed as undercapitalized for their collective investment risks. The higher ratings also led to higher prices (lower yields) for the securitizations, which meant that the purchasers were ultimately getting much lower risk-adjusted returns than had been envisaged (and in many cases losses) for their purchases.

The analysis that was performed by the NRSROs has been strenuously called into question by many industry observers during the financial crisis of the past two years, for two primary reasons:

  • The level of analytical due diligence was weak, and the default statistics used to evaluate these securities did not reflect the actual level of stress in the marketplace; as a consequence, ratings were issued at higher levels than the underlying analytics warranted, in part to placate the purchasers of the ratings, and a number of industry insiders observed that this was done.
  • Once the RMBS marketplace came under extreme stress, the rating agencies subsequently determined that the risk charges for these securities would increase several fold, materially increasing the amount of risk-adjusted capital needed to be held by insurers with RMBS, and ultimately jeopardizing the companies’ financial strength ratings themselves.

Flaws in rating RMBS

Rating agencies have historically been paid for their rating services by those entities to which they assign ratings (that reflect claims paying, debt paying, principal paying, etc. abilities). Industry observers have long viewed this relationship as a potential conflict of interest, but, because insurers and buyers had not been materially harmed by this process until recently, the industry practice of rating agencies assigning ratings to companies who were paying them for the service was not strenuously challenged.

Further, since the rating agencies can increase their profit margins by increasing their overall rating fees while maintaining their expenses in the course of performing rating analysis, it follows that there is an incentive to increase the volume of ratings issued by the staff, which implies less time being spent on a particular analysis. Again, until recently, the rated entities and the purchasers of rated securities and insurance policies did not feel sufficiently harmed to challenge the process.


Best Risk Management Quotes

January 12, 2010

The Risk Management Quotes page of Riskviews has consistently been the most popular part of the site.  Since its inception, the page has received almost 2300 hits, more than twice the next most popular part of the site.

The quotes are sometimes actually about risk management, but more often they are statements or questions that risk managers should keep in mind.

They have been gathered from a wide range of sources, and most of the authors of the quotes were not talking about risk management; at least they were not intending to talk about risk management.

The list of quotes has recently hit its 100th posting (with something more than 100 quotes, since a number of the posts have multiple quotes).  So on that auspicious occasion, here are my favorites:

  1. Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.  Douglas Adams
  2. “when the map and the territory don’t agree, always believe the territory” Gause and Weinberg – describing Swedish Army Training
  3. When you find yourself in a hole, stop digging.-Will Rogers
  4. “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair” Douglas Adams
  5. “A foreign policy aimed at the achievement of total security is the one thing I can think of that is entirely capable of bringing this country to a point where it will have no security at all.”– George F. Kennan, (1954)
  6. “THERE ARE IDIOTS. Look around.” Larry Summers
  7. the only virtue of being an aging risk manager is that you have a large collection of your own mistakes that you know not to repeat  Donald Van Deventer
  8. Philip K. Dick “Reality is that which, when you stop believing in it, doesn’t go away.”
  9. Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.  Albert Einstein
  10. “Perhaps when a man has special knowledge and special powers like my own, it rather encourages him to seek a complex explanation when a simpler one is at hand.”  Sherlock Holmes (A. Conan Doyle)
  11. The fact that people are full of greed, fear, or folly is predictable. The sequence is not predictable. Warren Buffett
  12. “A good rule of thumb is to assume that “everything matters.” Richard Thaler
  13. “The technical explanation is that the market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them.”  Avinash Persaud
  14. There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.
    W Shakespeare Hamlet, scene v
  15. When Models turn on, Brains turn off  Til Schuermann

You might have other favorites.  Please let us know about them.

New Decade Resolutions

January 1, 2010

Here are New Decade Resolutions for firms who are looking to be prepared for another decade:

  1. Attention to risk management by top management and the board.  The past decade has been just one continuous lesson that losses can happen from any direction. This is about the survival of the firm.  Survival must not be delegated to a middle manager.  It must be a key concern for the CEO and board.
  2. Action oriented approach to risk.  Risk reports are made to point out where and what actions are needed.  Management expects to and does act upon the information from the risk reports.
  3. Learning from own losses and from the losses of others.  After a loss, the firm should learn not just what went wrong that resulted in the loss, but how they can learn from their experience to improve their responses to future situations both similar and dissimilar.  Two different areas of a firm shouldn’t have to separately experience a problem to learn the same lesson. Competitor losses should present the exact same opportunity to improve rather than a feeling of smug superiority.
  4. Forward-looking risk assessment. Painstaking calibration of risk models to past experience is only valuable for firms that own time machines.  Risk assessment needs to be calibrated to the future.
  5. Skeptical of common knowledge. The future will NOT be a repeat of the past.  Any risk assessment that is properly calibrated to the future is only one of many possible results.  Look back on the past decade’s experience and remember how many times risk models needed to be recalibrated.  That recalibration experience should form the basis for healthy skepticism of any and all future risk assessments.

  6. Drivers of risks will be highlighted and monitored.  Key risk indicators are not just an idea for Operational risks that are difficult to measure directly.  Key risk indicators should be identified and monitored for all important risks.  Key risk indicators need to include leading and lagging indicators, based on information both internal and external to the firm.
  7. Adaptable. Neither risk measurement nor risk management should be designed after the famously fixed Ligne Maginot that spectacularly failed the French in 1940.  The ability needs to be developed and maintained to change the focus of risk assessment and to change risk treatment methods on short notice without major cost or disruption.
  8. Scope will be clear for risk management.  I have personally favored a split between risk of failure of the firm strategy and risk of losses within the firm strategy, with only the latter within the scope of risk management.  That means that anything that is potentially loss making except failure of sales would be in the scope of risk management.
  9. Focus on the largest exposures.  All of the details of execution of risk treatment will come to naught if the firm is too concentrated in any risk that starts making losses at a rate higher than expected.  That means that the largest exposures need to be examined and re-examined with a “no complacency” attitude.  There should never be a large exposure that is too safe to need attention.  Big transactions will also get the same kind of focus on risk.

Risk Management in 2009 – Reflections

December 26, 2009

Perhaps we will look back at 2009 and recall that it was the turning point year for Risk Management.  The year that boards and management and regulators all at once embraced ERM and really took it to heart.  The year that many, many firms appointed their first ever Chief Risk Officer.  The year when they finally committed the resources to build the risk capital model of the entire firm.

On the other hand, it might be recalled as the false spring of ERM before its eventual relegation to the scrapyard of that incessant series of new business management fads like Management by Objectives, the Managerial Grid, TQM, Process Re-engineering and Six Sigma.

The Financial Crisis was in part due to risk management.  Put a helmet on a kid on a bicycle and they go faster down that hill.  And if the kid really doesn’t believe in helmets and they fail to buckle the chin strap and the helmet blows off in the wind, so much the better.  The wind in the hair feels exhilarating.

The true test of whether the top management is ready to actually DO risk management is whether they are expecting to have to change some of their decisions based upon what their risk assessment process tells them.

The dashboard metaphor is really a good way of thinking about risk management.  A reasonable person driving a car will look at their dashboard periodically to check on their speed and on the amount of gas that they have in the car.  That information will occasionally cause them to do something different than what they might have otherwise done.

Regulatory concentration on Risk Management is, on the whole, likely to be bad for firms.  While most banks were doing enough risk management to satisfy regulators, that risk management was not relevant to stopping or even slowing down the financial crisis.

Firms will tend to load up on risks that are not featured by their risk assessment system.  A regulatory driven risk management system tends to be fixed, while a real risk management system needs to be nimble.

Compliance based risk management makes as much sense for firms as driving at the speed limit regardless of the weather, road conditions or the condition of the car’s brakes and steering.

Many have urged that risk management is as much about opportunities as it is about losses.  However, that is then usually followed by focusing on the opportunities and downplaying the importance of loss controlling.

Preventing a dollar of loss is just as valuable to the firm as adding a dollar of revenue.  A risk management loss controlling system provides management with a methodology to make that loss prevention a reliable and repeatable event.  Excess revenue has much more value if it is reliable and repeatable.  Loss control that is reliable and repeatable can have the same value.

Getting the price right for risks is key.  I like to think of the right price as having three components: expected losses, a risk margin, and a margin for expenses and profits.  The first thing that you have to decide about participating in a market for a particular type of risk is whether the market is sane.  That means that the market is realistically including some positive margin for expenses and profits above a realistic value for the expected losses and risk margin.
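A minimal sketch of that sanity check, with made-up numbers (none of these figures come from the post):

```python
def sane_price(expected_losses: float, risk_margin: float,
               expense_profit_margin: float) -> float:
    """The 'right price': expected losses + risk margin + expense/profit margin."""
    return expected_losses + risk_margin + expense_profit_margin

# A market is sane if prevailing prices sit above a realistic value of
# expected losses plus a risk margin -- all numbers here are illustrative.
market_price = 105.0
floor = sane_price(expected_losses=90.0, risk_margin=8.0, expense_profit_margin=0.0)
print("sane market" if market_price > floor else "market is not sane")
```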

Most aspects of the home real estate and mortgage markets were not sane in 2006 and 2007.  Various insurance markets go through periods of low sanity as well.

Risk management needs to be sure to have the tools to identify the insane markets and the access to tell the story to the real decision makers.

Finally, individual risks or trades need to be assessed and priced properly.  That means that the insurance premium needs to provide a positive margin for expenses and profits above the realistic provision for expected losses and a reasonable margin for risk.

There were two big hits to insurers in 2009.  One was the continuing problems at AIG from its financial products unit.  The main lesson from their troubles ought to be TANSTAAFL.  There ain’t no such thing as a free lunch.  Selling far out of the money puts and recording the entire premium as a profit is a business model that will ALWAYS end up in disaster.

The other hit was to the variable annuity writers.  In their case, they were guilty of only pretending to do risk management.  Their risk limits were strange historical artifacts that had very little to do with the actual risk exposures of the firm.  The typical risk limits for a VA writer were very low risk retained from equities if the potential loss was due to an embedded guarantee and no limit whatsoever for equity risk that resulted in drops in basic M&E revenue.  A typical VA hedging program was like a homeowner who insured every item of his possessions from fire risk, but who failed to insure the house!

So insurers should end the year of 2009 thinking about whether they have either of those two problems lurking somewhere in their book of business.

Are there any “far out of the money” risks where no one is appropriately aware of the large loss potential?

Are there parts of the business where risk limits are based on tradition rather than on risk?

Have a Happy New Year!

