Posts Tagged ‘Economic Capital’

Economic Capital for Banking Industry

December 22, 2014

Everything you ever wanted to know but were afraid to ask.

For the last seventeen years I have hated conversations with board members around economic capital. It is perfectly acceptable to discuss Market risk, Credit risk or interest rate mismatches in isolation, but the minute you start talking about the Enterprise, you enter a minefield.

The biggest hole in that ground is produced by correlations. The smartest board members know exactly which buttons to press to shoot your model down. They don’t do it out of malice but they won’t buy anything they can’t accept, reproduce or believe.

Attempt to explain copulas, or the stability of historical correlations into the future, and your board presentation will head south. Don’t take my word for it. Try it next time. It is not a reflection on the board; it is a simple manifestation of the disconnect that exists today between the real world of Enterprise risk and applied statistical modeling. And when it comes to banking regulation and economic capital for the banking industry, the disconnect is only growing larger.

Frustrated with the state of modeling in this space, three years ago we started working on an alternate model for economic capital. The key trigger was the shift to shortfall and probability-of-ruin models in bank regulation, as well as Taleb’s assertions about how risk results should be presented to ensure informed decision making. While the proposed model was a simple extension of the same principles on which value at risk is based, we felt that some of our tweaks and hacks delivered on our end objective – meaningful, credible conversations with the board around economic capital estimates.

Enterprise models for estimating economic capital simply extend the regulatory value at risk (VaR) model. The theory focuses on anchoring expectations. If institutional risk expectations max out at 97.5%, then 99.9% can represent unexpected risk. The appealing part of this logic is that the anchors can shift as more points become visible in the underlying risk distribution. In its simplest and crudest form, here is what economic capital models suggest:

“While regulatory capital models compensate for expected risk, economic capital should account for unexpected risk. The difference between the two estimates is the amount you need to set aside as economic capital.”
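As a rough illustration of that statement, here is a minimal sketch that reads economic capital as the gap between two historical-simulation VaR anchors. The loss series, the 97.5%/99.9% anchors and the function name are assumptions for illustration only, not the calibration used in the series below.

```python
import numpy as np

def hs_var(losses, confidence):
    """Historical-simulation VaR: the loss percentile at the given confidence level."""
    return np.percentile(losses, confidence * 100)

# Hypothetical annual loss observations (positive numbers = losses)
losses = np.array([12.0, 5.5, 30.2, 8.1, 22.7, 3.4, 55.0, 9.9, 18.3, 41.6])

expected_anchor   = hs_var(losses, 0.975)   # "expected" risk anchor (regulatory style)
unexpected_anchor = hs_var(losses, 0.999)   # "unexpected" risk anchor

economic_capital = unexpected_anchor - expected_anchor
print(f"97.5% VaR = {expected_anchor:.1f}, 99.9% VaR = {unexpected_anchor:.1f}, "
      f"economic capital buffer = {economic_capital:.1f}")
```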

The plus point of this approach is that it ensures that economic capital requirements will always exceed regulatory capital requirements. It removes the possibility of arbitrage that occurs when this condition doesn’t hold. The downside is the estimation of dependence between business lines. The variations that we proposed short-circuited the correlation debate. They also recommended using accounting data, data that the board had already reconciled and signed off on.


Without further ado, here is the series that presents our alternate model for estimating economic capital for the banking industry. Discuss, dissect, modify, suggest. We would love to hear your feedback.

Economic Capital – An Alternate Model

Can we use the accounting data series and skip copulas and correlation modeling for business lines altogether? Take a look to find the answer.


Economic Capital Case Study – setting the context

We use publicly available data from Goldman Sachs, JP Morgan Chase, Citibank, Wells Fargo and Barclays Bank for the years 2002 to 2014 to calculate the economic capital buffers in place at these five banks. Three different approaches are used: two centered on capital adequacy, one using the regulatory Tier 1 leverage ratio.


Economic Capital Models – The appeal of using accounting data

Why does accounting data work? What is the business case for using accounting data for economic capital estimation? How does the modeling work?


Calculating Economic Capital – Using worst case losses

Our first model uses worst-case loss. If you are comfortable with value at risk terminology, this is the historical simulation approach to economic capital estimation. We label it model one.
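For a feel of what a worst-case-loss calculation on an accounting series looks like, here is a minimal sketch. The annual change series, the asset base and the decision to hold capital against the single worst observed decline are illustrative assumptions, not the exact mechanics of model one.

```python
import numpy as np

# Hypothetical annual accounting series: change in shareholders' equity as a share of assets
annual_changes = np.array([0.021, 0.015, -0.034, 0.008, 0.027, -0.012, 0.019,
                           -0.051, 0.004, 0.022, 0.016, -0.009, 0.011])

worst_case_loss = -annual_changes.min()     # largest observed decline in the window
total_assets = 850_000                      # hypothetical balance sheet size

# Hold capital against the worst decline seen over the historical window
economic_capital = worst_case_loss * total_assets
print(f"Worst observed decline: {worst_case_loss:.1%} of assets "
      f"-> capital buffer {economic_capital:,.0f}")
```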


Calculating Economic Capital – Using volatility

Welcome to the variance-covariance model for economic capital estimation. The results will surprise you. Presenting model two.
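In the same spirit, a minimal variance-covariance style sketch on that kind of accounting series might look as follows. The 99.9% multiplier, the data and the choice to net out the mean are illustrative assumptions rather than the calibration used in model two.

```python
import numpy as np
from scipy.stats import norm

# Same hypothetical accounting series as above
annual_changes = np.array([0.021, 0.015, -0.034, 0.008, 0.027, -0.012, 0.019,
                           -0.051, 0.004, 0.022, 0.016, -0.009, 0.011])

mu = annual_changes.mean()
sigma = annual_changes.std(ddof=1)
z = norm.ppf(0.999)                  # one-tailed normal multiplier at 99.9%, ~3.09

total_assets = 850_000               # hypothetical balance sheet size

# Volatility-based buffer; whether to net out the mean is a modeling choice
economic_capital = max(z * sigma - mu, 0) * total_assets
print(f"Variance-covariance buffer at 99.9%: {economic_capital:,.0f}")
```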


Calculating Economic Capital – Using Leverage ratio

We figured it was time that we moved from capital adequacy to leverage ratios.  Introducing model three.


Setting your Borel Point

July 28, 2014

What is a Borel Risk Point you ask?  Emile Borel once said

“Events with a sufficiently small probability never occur”.

Your Borel Risk Point (BRP) is your definition of “sufficiently small probability” that causes you to ignore unlikely risks.

Chances are, your BRP is set at much too high a level of likelihood. You see, when Borel said that, he was thinking of a 1 in 1 million type of likelihood. Human nature, with survival instincts that help us get through each day, would have us ignoring things that are not likely to happen this week.

Even insurance professionals will often want to ignore risks that are as common as 1 in 100 year events, treating them as if they will never happen.

And in general, the markets allow us to get away with that.  If a serious adverse event happens, the unprepared generally are excused if it is something as unlikely as a 1 in 100 event.

That works until another factor comes into play.  That other factor is the number of potential 1 in 100 events that we are exposed to.  Because if you are exposed to fifty 1 in 100 events, you are still pretty unlikely to see any particular event, but very likely to see some such event.
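The arithmetic behind that last sentence, assuming for illustration that the fifty exposures are independent:

```python
p_single = 1 / 100      # chance of any one of the 1-in-100 events occurring this year
n_events = 50           # number of independent 1-in-100 exposures

p_any = 1 - (1 - p_single) ** n_events
print(f"Chance of seeing at least one such event this year: {p_any:.1%}")  # about 39.5%
```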

Governor Andrew Cuomo of New York State reportedly told President Obama,

New York “has a 100-year flood every two years now.”

Solvency II has Europeans all focused on the 1 in 200 year loss. RISKVIEWS would suggest that is still too high a likelihood for a good Borel Risk Point for insurers. RISKVIEWS would argue that insurers need a more remote BRP because of the business that they are in. For example, life insurers’ primary product (which is life insurance, at least in some parts of the world) pays for individual risks (unexpected deaths) that occur at an average rate of less than 1 in 1000. How does an insurance company look its customers in the eye and say that they need to buy protection against a 1 in 1000 event from a company that only has a BRP of 1 in 200?

So RISKVIEWS suggests that insurers set a BRP somewhere just above 1 in 1000. That might sound aggressive, but it is pretty close to the Secure Risk Capital standard. With a Risk Capital Standard of 1 in 1000, you can also use the COR instead of a model to calculate the capital you need.

One in Two Hundred

December 20, 2011

The odds of Earth being hit by the asteroid Apophis in 2039 were determined to be 1 in 200.

later corrected to be 1 in 48,000

The odds a person is 80 years old are 1 in 250.4 (US, 5/2009).

If 200 insurance companies are meeting Solvency II capital requirements, should we expect that one of them will fail each year?

Do we really have any idea of the answer to that question?

Or can we admit that calculating a 1/200 capital requirement is not really the same as knowing how much capital it takes to prevent failures at that rate?
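Taking the 1-in-200 number at face value, and assuming for illustration that the 200 companies fail independently, the arithmetic behind the question looks like this:

```python
n_companies = 200
p_fail = 1 / 200        # the Solvency II calibration, read literally as an annual failure rate

expected_failures = n_companies * p_fail              # one failure per year on average
p_at_least_one = 1 - (1 - p_fail) ** n_companies      # about 63%

print(f"Expected failures per year: {expected_failures:.1f}")
print(f"Chance of at least one failure in a year: {p_at_least_one:.1%}")
```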

Calculating a 1/200 capital requirement is about creating capital requirements that are related to the level of risk of the insurer.  Calculating a 1/200 capital requirement is about trying to make the relationship of the capital level to the risk level consistent for different insurers with all different types of risk.  Calculating a 1/200 capital requirement is about having a regulatory requirement that is reasonably close to the actual level of capital held by insurers presently.

It actually cannot be about knowing the actual likelihood of very large losses.  Because it is unlikely that we will ever actually know with any degree of certainty what the actual size of the 1/200 losses might be.

We agree on methods for extrapolating losses from observed frequency levels.  So perhaps, we might know what a 1/20 loss might be and we use “scientific” methods to extrapolate to a 1/200 value.  These scientific assumptions are about the relationship between the 1/20 loss that we might know with some confidence and the 1/200 loss.  Instead of just making an assumption about the relationship between the 1/20 and the 1/200 loss, we make an intermediate assumption and let that assumption drive the ultimate answer.  That intermediate assumption is usually an assumption of the statistical relationship between frequency and severity.  By making that complicated assumption and letting it drive the ultimate values, we are able to obscure our lack of real knowledge about the likelihood of extreme values.  By making complicated assumptions about something that we do not know, we make sure that we can keep the discussion out of the hands of folks who might not fully understand the mathematics.

For the simplest such assumption, i.e. that of a Gaussian or Normal Distribution, the relationships are something like this:

  • For a risk with a coefficient of variation of 100% (i.e. the mean = the standard deviation), the 1/200 loss is approximately 250% of the 1/20 loss
  • For a risk with a coefficient of variation of 150% (i.e. the mean = 2/3 of the standard deviation), the 1/200 loss is approximately 200% of the 1/20 loss
  • For a risk with a coefficient of variation of 200% (i.e. the mean = 1/2 of the standard deviation), the 1/200 loss is approximately 180% of the 1/20 loss
  • For a risk with a coefficient of variation of 70%, the 1/200 loss is 530% of the 1/20 loss
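Those ratios can be reproduced if the loss at each frequency is read as the shortfall of a normally distributed result below zero, roughly z·σ − μ at each percentile. That reading is my assumption, not something stated in the post, but a quick check against the figures above looks like this:

```python
from scipy.stats import norm

z20, z200 = norm.ppf(0.95), norm.ppf(0.995)    # 1-in-20 and 1-in-200 percentile points

for cv in (1.00, 1.50, 2.00, 0.70):            # coefficient of variation = sigma / mean
    sigma = 1.0
    mu = sigma / cv
    ratio = (z200 * sigma - mu) / (z20 * sigma - mu)   # 1/200 shortfall vs 1/20 shortfall
    print(f"CV {cv:.0%}: 1/200 loss is about {ratio:.0%} of the 1/20 loss")
```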

The graph above shows the standard deviation/mean ratio, looking backward at S&P 500 annual returns for each of the previous 21 twenty-year periods.  So based upon that data, we see that the 1/200 loss might be somewhere between 180% and 530% of the worst result in the 20-year period.

And in this case, we base this upon the assumption that the returns are normally distributed. We simply varied the parameters as we made observations.

What this suggests is that the distribution is not at all stable based upon 20 observations.  So using this approach to extrapolating losses at more remote frequency looks like it will have some severe issues with parameter risk.

You can look at every single sub model and find that there is huge parameter risk.

So the conclusion should be that the 1/200 standard is a convention, rather than a claim that such a calculation might be reliable.

How Much Strategic ERM is Enough?

November 13, 2011

Strategic Risk Management is the name given by S&P to the enterprise-level activities that seek to improve risk-adjusted returns through a strategic capital allocation process.  It is also considered by S&P to be the “Use Test” for economic capital models.  Strategic Risk Management includes

  • Capital Budgeting and Allocation
  • Strategic Trade-offs among insurance coverages AND investments
    • based on long term view of risk adjusted return
    • Recognizing significance of investment risk to total risk profile
    • Recognizing ceded reinsurance credit risk
  • Selecting which risks to write and which to retain over the long term
  • Strategic Asset Allocation
  • Risk Reward Optimization
Meanwhile Solvency II had created standards for its internal model Use Test.
The foundation principle of the Solvency II Use Test states that
internal model use should be sufficiently material to result in pressure to increase the quality of the model 
This strongly self-referential idea is then supported by 9 Basic Principles.
Principle 1. Senior management and the administrative, management or supervisory body, shall be able to demonstrate understanding of the internal model
Principle 2. The internal model shall fit the business model
Principle 3. The internal model shall be used to support and verify decision-making in the undertaking
Principle 4. The internal model shall cover sufficient risks to make it useful for risk management and decision-making
Principle 5. Undertakings shall design the internal model in such a way that it facilitates analysis of business decisions.
Principle 6. The internal model shall be widely integrated with the risk-management system
Principle 7. The internal model shall be used to improve the undertaking’s risk-management system.
Principle 8. The integration into the risk-management system shall be on a consistent basis for all uses
Principle 9. The Solvency Capital Requirement shall be calculated at least annually from a full run of the internal model
From these two descriptions of a Use Test, one should be forgiven for picturing a group of priests in long robes prowling the halls of an insurer in procession, carrying a two-foot-thick book of the internal model.  Their primary religious duty is to make sure that no one at the insurer ever has an independent thought without first thinking about the great internal model.  Every year, the internal model book is reprinted and the priests restart their procession.
But take heart.  A quick look at the website of the European CRO Forum reveals that those firms do not revere their internal models quite so highly.  
The above chart suggests that in most groups the internal model is just one consideration for making most strategic decisions of the insurer.  
The excerpt below from Tokio Marine Holdings puts these things into perspective.

The Group carries out a relative evaluation of each business and prioritizes the allocation of management resources (business portfolio management).  It is achieved by using risk/return indicators for each business and applying a scoring indicator covering market growth potential and profitability, competitive advantages and expected effects of strategies.  Allocated management resources include funds, human resources and risk capital. By allocating these resources to business units and new businesses with even higher profitability and growth potential, we aim to improve the profitability and growth potential of our business portfolio.

You see from that statement that risk/return is not the only input, nor is capital the only factor under consideration.  Describing Strategic Risk Management as its own separate management process is incorrect.
Strategic Risk Management is one set of inputs and outputs to/from the Strategic Decision Making Process.  
And if read carefully, that will satisfy both the S&P and the Solvency II Use Tests.

How many significant digits on your car’s speedometer?

September 29, 2011

Mine only shows the numbers every 20 and has markers for gradations of 5. So the people who make cars think that it is sufficient accuracy for driving a car that the driver knows the speed of the car to within 5.
And for the sorts of things that one usually needs to do while driving, that seems fine to me. I do not recall ever even wondering what my speed is to the nearest .0001.

That is because I never need to make any decisions that require the more precise value.
What about your economic capital model? Do you make decisions that require an answer to the nearest million? Or nearest thousand, or nearest 1?  How much time and effort goes into getting the accuracy that you do not use?

What causes the answer to vary from one time you run your model to another?  Riskviews tries to think of the drivers of changes as volume variances and rate variances.

The volume variances are the changes you experience because the volume of risk changes.  You wrote more or less business.  Your asset base grew or shrunk.

Rate variances are the changes that you experience because the amount of risk per unit of activity has changed.  Riskviews likes to call this the QUALITY of the risk.  For many firms, one of the primary objectives of the risk management system is to control the QUANTITY of risk.

QUANTITY of risk = QUALITY of risk times VOLUME of risk.
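A back-of-the-envelope decomposition of a change in the QUANTITY of risk into those two variances might look like the sketch below; the figures and the choice to price the volume change at the old quality are illustrative assumptions.

```python
# Period 0 and period 1: volume of activity and risk per unit of activity (quality)
volume_0, quality_0 = 1_000, 0.040
volume_1, quality_1 = 1_150, 0.047

quantity_0 = volume_0 * quality_0        # QUANTITY = QUALITY x VOLUME
quantity_1 = volume_1 * quality_1

# One common convention: price the volume change at the old quality,
# and the quality (rate) change at the new volume
volume_variance = (volume_1 - volume_0) * quality_0
rate_variance = (quality_1 - quality_0) * volume_1

print(f"Total change in quantity of risk: {quantity_1 - quantity_0:.2f}")
print(f"  volume variance: {volume_variance:.2f}")
print(f"  rate variance:   {rate_variance:.2f}")
```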

Some of those firms seek to control quantity of risk solely by managing VOLUME.  They only look at QUALITY of risk after the fact.  Some firms only look at QUALITY of risk when they do their economic capital calculation.  They try to manage QUALITY of risk from the modeling group.  That approach to managing QUALITY of risk is doomed to failure.

That is because QUALITY of risk is a micro phenomenon and needs to be managed operationally at the stage of risk acceptance.  Trying to manage it as a macro phenomenon results in the development of a process to counter the risks taken at the risk acceptance area with a macro risk-offsetting activity.  This adds a layer of unnecessary cost and also adds a considerable amount of operational risk.

Some firms have processes for managing both QUANTITY and QUALITY of risk at the micro level – at the risk acceptance stage.  The firm might have tight QUALITY criteria for risk acceptance; or, if the firm has a broad range of acceptable risk QUALITY, it might have QUANTITY of risk criteria that are articulated as the accumulation of quantity and quality.  (In fact, if they do their homework, the firms with the broad QUALITY acceptance will find that some ranges of QUALITY are much preferable to others and that they can improve their return for risk taking by narrowing their QUALITY acceptance criteria.)

Once the firm has undertaken one or the other of these methods for controlling quality, then the need for detailed and complex modeling of their risks decreases drastically.  They have controlled their accumulation of risks and they already know what their risk is before they do their model.

Volume Variances and Rate Variances

June 8, 2011

There is only one reason why you might think that you really need to frequently use a complex stochastic model to measure your risks.  That would be because you do not know how risky your activities might be at any point in time.

Some risks are the type where you might not know what you got when you wrote the risk.  This happens at underwriting.

Some risks are the type that do not stay the same over time.  This could be reserve risk on long-tailed coverages or any risk on any naked position that is extended over time.

Others require constant painstaking adjustment to hedging or other offsets.  Hedged positions or ALM systems fall into this category.

These are all rate variances.  The rate of risk per unit of activity is uncontrolled.

Volume variances are usually easy to see.  They are evidenced by different volumes of activities.  You might easily see that you have more insurance risk because you wrote more insurance coverages.

But uncontrolled rate variances seem to be a particularly scary situation.

It seems that the entire purpose of risk management is to reduce the degree to which there might be uncontrolled rate variances.

So the need for a complex model seems to be proof that the risk management is inadequate.

A good underwriting system should make it so that you do know the risk you are writing – whether it is higher or lower than expected.

For the risks that might change over time, if you have no plans other than to stay long, then you are using the model to tell you how much to change your plans because of a decision to write, and then not further manage, long-tailed risks.  The existence of a model does not make that practice actual risk management.  It seems like the tail wagging the dog.  Much better to develop management options for those long-tailed risks.  Has anyone done any risk/reward analysis on the decision to keep the long-tailed exposure, looking at the opportunity risk that you will at some point in the future need to do less profitable business because of this strategy?

For the risks that are managed via hedging and/or ALM, what is needed is a good system for making sure that the retained risk never ever exceeds the risk tolerance – making sure that there never is a rate variance.

The complex risk model does not seem to be a need for firms unless they suspect that they have these serious flaws in their risk management program or thinking AND they believe that they are able to control their model risk better than their actual risk.

The entire concept seems suspect.

Riskviews would suggest that if you think that your firm has uncontrolled rate variances, then you should not sleep until you get them under control.

Then you will not need a complex model.

Economic Capital Review by S&P

February 7, 2011

Standard & Poor’s started including an evaluation of insurers’ enterprise risk management in its ratings in late 2005. For companies that fared well under the stick of ERM evaluation, there was the carrot of potentially lower capital requirements.  On 24 January, S&P published the basis for an economic capital review and adjustment process and announced that the process was being implemented immediately.

The ERM review is still the key. Insurers must already have a score from their ERM review of “strong” or “excellent” before they are eligible for any consideration of their capital model. That strong or excellent score implies that those firms have already passed S&P’s version of the Solvency II internal model use test – which S&P calls strategic risk management (SRM). Those firms with a strong or excellent ERM rating will all have their economic capital models reviewed.

The new name for this process is the level III ERM review. The level I review is the original ERM process that was initiated in 2005. The level II process, started in 2006, is a more detailed review that S&P applies to firms with high levels of risk and/or complexity. That level II review included a more detailed look at the risk control processes of the firms.

The new level III ERM review looks at five aspects of the economic capital model: methodology, data quality, assumptions and parameters, process/execution and testing/validation.


