Posted tagged ‘Economic Capital’

One in Two Hundred

December 20, 2011

The odds of Earth being hit by the asteroid Apophis in 2039 were determined to be 1 in 200 (later corrected to 1 in 48,000).

The odds a person is 80 years old are 1 in 250.4 (US, 5/2009).

If 200 insurance companies are meeting Solvency II capital requirements, should we expect that one of them will fail each year?

Do we really have any idea of the answer to that question?

Or can we admit that calculating a 1/200 capital requirement is not really the same as knowing how much capital it takes to prevent failures at that rate?

Calculating a 1/200 capital requirement is about creating capital requirements that are related to the level of risk of the insurer.  Calculating a 1/200 capital requirement is about trying to make the relationship of the capital level to the risk level consistent for different insurers with all different types of risk.  Calculating a 1/200 capital requirement is about having a regulatory requirement that is reasonably close to the actual level of capital held by insurers presently.

It actually cannot be about knowing the actual likelihood of very large losses.  Because it is unlikely that we will ever actually know with any degree of certainty what the actual size of the 1/200 losses might be.

We agree on methods for extrapolating losses from observed frequency levels.  So perhaps we might know what a 1/20 loss might be, and we use “scientific” methods to extrapolate to a 1/200 value.  These scientific assumptions are about the relationship between the 1/20 loss, which we might know with some confidence, and the 1/200 loss.  Instead of just making a direct assumption about the relationship between the 1/20 and the 1/200 loss, we make an intermediate assumption and let that assumption drive the ultimate answer.  That intermediate assumption is usually an assumption about the statistical relationship between frequency and severity.  By making that complicated assumption and letting it drive the ultimate values, we are able to obscure our lack of real knowledge about the likelihood of extreme values.  By making complicated assumptions about something that we do not know, we make sure that we can keep the discussion out of the hands of folks who might not fully understand the mathematics.

For the simplest such assumption, i.e. that of a Gaussian or Normal Distribution, the relationships are something like this:

  • For a risk with a coefficient of variation of 70%, the 1/200 loss is approximately 530% of the 1/20 loss
  • For a risk with a coefficient of variation of 100% (i.e. the mean = the standard deviation), the 1/200 loss is approximately 250% of the 1/20 loss
  • For a risk with a coefficient of variation of 150% (i.e. the mean = 2/3 the standard deviation), the 1/200 loss is approximately 200% of the 1/20 loss
  • For a risk with a coefficient of variation of 200% (i.e. the mean = 1/2 the standard deviation), the 1/200 loss is approximately 180% of the 1/20 loss
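As a sketch of where those figures come from, the normal quantiles can be computed directly.  Here the “loss” is taken as the adverse deviation net of the expected result, which is what makes the ratios above work out; the assumption is illustrative, not any particular insurer's calibration:

```python
from statistics import NormalDist

Z_20 = NormalDist().inv_cdf(0.95)    # ~1.645, the 1-in-20 quantile
Z_200 = NormalDist().inv_cdf(0.995)  # ~2.576, the 1-in-200 quantile

def loss_ratio(cv):
    """Ratio of the 1/200 loss to the 1/20 loss for a normal risk.

    The loss is measured as the adverse deviation beyond the expected
    result: loss = z * sigma - mean, with cv = sigma / mean.
    """
    return (Z_200 * cv - 1) / (Z_20 * cv - 1)

for cv in (0.7, 1.0, 1.5, 2.0):
    print(f"CV {cv:.0%}: 1/200 loss is {loss_ratio(cv):.0%} of the 1/20 loss")
```

Running this reproduces the approximate figures in the list: roughly 530%, 245%, 195% and 180%.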

The graph above shows the ratio of standard deviation to mean of S&P 500 annual returns, looking backwards over each of the previous 21 twenty-year periods.  So based upon that data, we see that the 1/200 loss might be somewhere between 180% and 530% of the worst result in the 20 year period.

And in this case, we base this upon the assumption that the returns are normally distributed. We simply varied the parameters as we made observations.

What this suggests is that the distribution is not at all stable based upon 20 observations.  So using this approach to extrapolating losses at more remote frequency looks like it will have some severe issues with parameter risk.
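That instability can be seen in a small simulation (the return parameters here are hypothetical, chosen only for illustration): draw many independent 20-observation samples from a known normal distribution, mimicking 20-year windows, and watch how widely the estimated coefficient of variation swings around its true value.

```python
import random
from statistics import mean, stdev

random.seed(0)

# True underlying process: normal annual returns, mean 8%, sd 16%,
# so the true coefficient of variation is 2.0.
TRUE_MEAN, TRUE_SD = 0.08, 0.16

cvs = []
for _ in range(1000):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(20)]
    m = mean(sample)
    if m > 0:  # CV is only meaningful when the estimated mean is positive
        cvs.append(stdev(sample) / m)

print(f"true CV: {TRUE_SD / TRUE_MEAN:.1f}")
print(f"estimated CV from 20 observations spans "
      f"{min(cvs):.1f} to {max(cvs):.1f}")
```

With only 20 observations per window, the estimated CV scatters widely around its true value of 2.0, which is exactly the parameter risk described above.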

You can look at every single sub model and find that there is huge parameter risk.

So the conclusion should be that the 1/200 standard is a convention, rather than a claim that such a calculation might be reliable.

How Much Strategic ERM is Enough?

November 13, 2011

Strategic Risk Management is the name given by S&P to the enterprise level activities that seek to improve risk adjusted returns by a strategic capital allocation process.  It is also considered by S&P to be the “Use Test” for economic capital models.  Strategic Risk Management includes

  • Capital Budgeting and Allocation
  • Strategic Trade-offs among insurance coverages AND investments
    • based on long term view of risk adjusted return
    • Recognizing significance of investment risk to total risk profile
    • Recognizing ceded reinsurance credit risk
  • Selecting which risks to write and which to retain over the long term
  • Strategic Asset Allocation
  • Risk Reward Optimization
Meanwhile Solvency II had created standards for its internal model Use Test.
The foundation principle of the Solvency II Use Test states that
internal model use should be sufficiently material to result in pressure to increase the quality of the model 
This strongly self-referential idea is then supported by 9 Basic Principles.
Principle 1. Senior management and the administrative, management or supervisory body, shall be able to demonstrate understanding of the internal model
Principle 2. The internal model shall fit the business model
Principle 3. The internal model shall be used to support and verify decision-making in the undertaking
Principle 4. The internal model shall cover sufficient risks to make it useful for risk management and decision-making
Principle 5. Undertakings shall design the internal model in such a way that it facilitates analysis of business decisions.
Principle 6. The internal model shall be widely integrated with the risk-management system
Principle 7. The internal model shall be used to improve the undertaking’s risk-management system.
Principle 8. The integration into the risk-management system shall be on a consistent basis for all uses
Principle 9. The Solvency Capital Requirement shall be calculated at least annually from a full run of the internal model
 
From these two descriptions of a Use Test, one should be forgiven for picturing a group of priests in long robes prowling the halls of an insurer in procession, carrying a two-foot-thick book of the internal model.  Their primary religious duty is to make sure that no one at the insurer ever has an independent thought without first thinking about the great internal model.  Every year, the internal model book is reprinted and the priests restart their procession.  
 
But take heart.  A quick look at the website of the European CRO Forum reveals that those firms do not revere their internal models quite so highly.  
 
The above chart suggests that in most groups the internal model is just one consideration for making most strategic decisions of the insurer.  
 
The excerpt below from the Tokio Marine Holdings puts these things into perspective.  

The Group carries out a relative evaluation of each business and prioritizes the allocation of management resources (business portfolio management).  It is achieved by using risk/return indicators for each business and applying a scoring indicator covering market growth potential and profitability, competitive advantages and expected effects of strategies.  Allocated management resources include funds, human resources and risk capital. By allocating these resources to business units and new businesses with even higher profitability and growth potential, we aim to improve the profitability and growth potential of our business portfolio.

You see from that statement that risk return is not the only input nor is capital the only factor under consideration.  Describing Strategic Risk Management as its own separate management process is incorrect.  
 
Strategic Risk Management is one set of inputs and outputs to/from the Strategic Decision Making Process.  
 
And if read carefully, that will satisfy both S&P as well as Solvency II Use Tests.
 

How many significant digits on your car’s speedometer?

September 29, 2011

Mine only shows the numbers every 20 and has markers for gradations of 5. So the people who make cars think that knowing the speed of the car to within 5 is sufficient accuracy for driving.
And for the sorts of things that one usually needs to do while driving, that seems fine to me. I do not recall ever even wondering what my speed is to the nearest .0001.


That is because I never need to make any decisions that require the more precise value.
What about your economic capital model? Do you make decisions that require an answer to the nearest million? Or nearest thousand, or nearest 1?  How much time and effort goes into getting the accuracy that you do not use?

What causes the answer to vary from one time you run your model to another?  Riskviews tries to think of the drivers of changes as volume variances and rate variances.

The volume variances are the changes you experience because the volume of risk changes.  You wrote more or less business.  Your asset base grew or shrunk.

Rate variances are the changes that you experience because the amount of risk per unit of activity has changed.  Riskviews likes to call this the QUALITY of the risk.  For many firms, one of the primary objectives of the risk management system is to control the QUANTITY of risk.

QUANTITY of risk = QUALITY of risk times VOLUME of risk.
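One conventional way to carve the change in QUANTITY of risk into those two variances can be sketched as below.  (How the cross-term is split between the two variances is a bookkeeping choice; this sketch charges it to the rate variance, and the numbers are purely illustrative.)

```python
def risk_variances(vol0, qual0, vol1, qual1):
    """Split the change in total risk (quantity) into a volume
    variance and a rate (quality) variance.

    quantity = quality * volume, so the change decomposes as:
      volume variance = (vol1 - vol0) * qual0  (more/less business at the old risk rate)
      rate variance   = (qual1 - qual0) * vol1 (changed risk per unit, at the new volume)
    """
    volume_var = (vol1 - vol0) * qual0
    rate_var = (qual1 - qual0) * vol1
    # The two variances reconcile exactly to the total change in quantity.
    assert abs((volume_var + rate_var) - (qual1 * vol1 - qual0 * vol0)) < 1e-9
    return volume_var, rate_var

# Volume grows from 100 to 120 while risk per unit rises from 0.30 to 0.35.
vv, rv = risk_variances(100, 0.30, 120, 0.35)
print(f"volume variance: {vv:.1f}, rate variance: {rv:.1f}")
```

In this example the growth in volume and the deterioration in quality happen to contribute equally to the increase in total risk.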

Some of those firms seek to control quantity of risk solely by managing VOLUME.  They only look at QUALITY of risk after the fact.  Some firms only look at QUALITY of risk when they do their economic capital calculation.  They try to manage QUALITY of risk from the modeling group.  That approach to managing QUALITY of risk is doomed to failure.

That is because QUALITY of risk is a micro phenomena and needs to be managed operationally at the stage of risk acceptance.  Trying to manage it as a macro phenomena results in the development of a process to counter the risks taken at the risk acceptance area with a macro risk offsetting activity.  This adds a layer of unnecessary cost and also adds a considerable amount of operational risk.

Some firms have processes for managing both QUANTITY and QUALITY of risk at the micro level.  At the risk acceptance stage.  The firm might have tight QUALITY criteria for risk acceptance or if the firm has a broad range of acceptable risk QUALITY it might have QUANTITY of risk criteria that have been articulated as the accumulation of quantity and quality.  (In fact, if they do their homework, the firms with the broad QUALITY acceptance will find that some ranges of QUALITY are much preferable to others and they can improve their return for risk taking by narrowing their QUALITY acceptance criteria.)

Once the firm has undertaken one or the other of these methods for controlling quality, then the need for detailed and complex modeling of their risks decreases drastically.  They have controlled their accumulation of risks and they already know what their risk is before they do their model.

Volume Variances and Rate Variances

June 8, 2011

There is only one reason why you might think that you really need to frequently use a complex stochastic model to measure your risks.  That would be because you do not know how risky your activities might be at any point in time.

Some risks are the type where you might not know what you got when you wrote the risk.  This happens at underwriting.

Some risks are the type that do not stay the same over time.  This could be reserve risk on long tailed coverages or any risk on any naked position that is extended over time.

Others require constant painstaking adjustment to hedging or other offsets.  Hedged positions or ALM systems fall into this category.

These are all rate variances.  The rate of risk per unit of activity is uncontrolled.

Volume variances are usually easy to see.  They are evidenced by different volumes of activities.  You might easily see that you have more insurance risk because you wrote more insurance coverages.

But uncontrolled rate variances seem to be a particularly scary situation.

It seems that the entire purpose of risk management is to reduce the degree to which there might be uncontrolled rate variances.

So the need for a complex model seems to be proof that the risk management is inadequate.

A good underwriting system should make it so that you do know the risk you are writing – whether it is higher or lower than expected.

For the risks that might change over time, if you have no plans other than to stay long, then you are using the model to tell you how much to change your plans because of a decision to write, and then not further manage, long tailed risks.  The existence of a model does not make that practice actual risk management.  It seems like the tail wagging the dog.  Much better to develop management options for those long tailed risks.  Has anyone done any risk reward analysis on the decision to keep the long tailed exposure, looking at the opportunity risk that you will sometime in the future need to do less profitable business because of this strategy?

For the risks that are managed via hedging and/or ALM, what is needed is a good system for making sure that the retained risk never exceeds the risk tolerance, so that there never is a rate variance.

The complex risk model does not seem to be a need for firms unless they suspect that they have these serious flaws in their risk management program or thinking AND they believe that they are able to control their model risk better than their actual risk.

The entire concept seems suspect.

Riskviews would suggest that if you think that your firm has uncontrolled rate variances, then you should not sleep until you get them under control.

Then you will not need a complex model.

Economic Capital Review by S&P

February 7, 2011

Standard & Poor’s started including an evaluation of insurers’ enterprise risk management in its ratings in late 2005. For companies that fared well under the stick of ERM evaluation, there was the carrot of potentially lower capital requirements.  On 24 January, S&P published the basis for an economic capital review and adjustment process and announced that the process was being implemented immediately.

The ERM review is still the key. Insurers must already have a score from their ERM review of “strong” or “excellent” before they are eligible for any consideration of their capital model. That strong or excellent score implies that those firms have already passed S&P’s version of the Solvency II internal model use test — which S&P calls strategic risk management (SRM). Those firms with the strong and excellent ERM rating will all have their economic capital models reviewed.

The new name for this process is the level III ERM review. The level I review is the original ERM process that was initiated in 2005. The level II process, started in 2006, is a more detailed review that S&P applies to firms with high levels of risk and/or complexity. That level II review included a more detailed look at the risk control processes of the firms.

The new level III ERM review looks at five aspects of the economic capital model: methodology, data quality, assumptions and parameters, process/execution and testing/validation.

Read More at InsuranceERM.com

Holding Sufficient Capital

May 23, 2010

From Jean-Pierre Berliet

The companies that withstood the crisis and are now poised for continuing success have been disciplined about holding sufficient capital. However, the issue of how much capital an insurance company should hold beyond requirements set by regulators or rating agencies is contentious.

Many insurance executives hold the view that a company with a reputation for using capital productively on behalf of shareholders would be able to raise additional capital rapidly and efficiently, as needed to execute its business strategy. According to this view, a company would be able to hold just as much “solvency” capital as it needs to protect itself over a one year horizon from risks associated with the run off of in-force policies plus one year of new business. In this framework, the capital need is calculated to enable a company to pay off all its liabilities, at a specified confidence level, at the end of the one year period of stress, under the assumption that assets and liabilities are sold into the market at then prevailing “good prices”. If more capital were needed than is held, the company would raise it in the capital market.
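Under that one-year “solvency” view, the capital need is essentially a quantile of the one-year result.  A minimal sketch, assuming the one-year change in own funds is normally distributed (a strong simplification, as the rest of the post argues, and with purely illustrative numbers):

```python
from statistics import NormalDist

def solvency_capital(expected_result, result_sd, confidence=0.995):
    """One-year 'solvency' capital: the buffer needed so that, at the
    given confidence level, the company can still pay off its
    liabilities at the end of the one-year stress period.

    Assumes the one-year result (change in own funds) is normal.
    """
    worst_result = expected_result + result_sd * NormalDist().inv_cdf(1 - confidence)
    return max(0.0, -worst_result)  # no capital needed if even the worst case is a gain

# Expected profit of 50 with a standard deviation of 200 on the result.
print(f"capital need: {solvency_capital(50, 200):.0f}")
```

At the Solvency II 99.5% confidence level, this company would need roughly 465 of capital on top of its expected profit to absorb the 1-in-200 result.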

Executives with a “going concern” perspective do not agree. They observe first that solvency capital requirements increase with the length of the planning horizon. Then, they correctly point out that, during a crisis, prices at which assets and liabilities can be sold will not be “good times” prices upon which the “solvency” approach is predicated. Asset prices are likely to be lower, perhaps substantially, while liability prices will be higher. As a result, they believe that the “solvency” approach, such as the Solvency II framework adopted by European regulators, understates both the need for and the cost of capital. In addition, these executives remember that, during crises, capital can become too onerous or unavailable in the capital market. They conclude that, under a going concern assumption, a company should hold more capital, as an insurance policy against many risks to its survival that are ignored under a solvency framework.

The recent meltdown of debt markets made it impossible for many banks and insurance companies to shore up their capital positions. It prompted federal authorities to rescue AIG, Fannie Mae and Freddie Mac. The “going concern” view appears to have been vindicated.

Directors and CEOs have a fiduciary obligation to ensure that their companies hold an amount of capital that is appropriate in relation to risks assumed and to their business plan. Determining just how much capital to hold, however, is fraught with difficulties because changes in capital held have complex impacts about which reasonable people can disagree. For example, increasing capital reduces solvency concerns and the strength of a company’s ratings while also reducing financial leverage and the rate of return on capital that is being earned; and conversely.

Since Directors and CEOs have an obligation to act prudently, they need to review the process and analyses used to make capital strategy decisions, including:

  • Economic capital projections, in relation to risks assumed under a going concern assumption, with consideration of strategic risks and potential systemic shocks, to ensure company survival through a collapse of financial markets during which capital cannot be raised or becomes exceedingly onerous
  • Management of relationships with leading investors and financial analysts
  • Development of reinsurance capacity, as a source of “off balance sheet” capital
  • Management of relationships with leading rating agencies and regulators
  • Development of “contingent” capital capacity.

The integration of risk, capital and business strategy is very important to success. Directors and CEOs cannot let actuaries and finance professionals dictate how this is to happen, because they and the risk models they use have been shown to have important blind spots. In their deliberations, Directors and CEOs need to remember that models cannot reflect credibly the impact of strategic risks. Models are bound to “miss the point” because they cannot reflect surprises that occur outside the boundaries of the closed business systems to which they apply.

©Jean-Pierre Berliet   Berliet Associates, LLC (203) 972-0256  jpberliet@att.net

The Use Test – A Simple Suggestion

February 23, 2010

Many are concerned about what the “Use Test” will be. Will it be a pop quiz or will companies be allowed to study?

Well, I have a suggestion for a simple and, I believe, fairly foolproof test. That would be for top management (not risk management or modeling staff) to be able to hold a conversation about their risk profile each year.

Now, the first time that they can demonstrate this would not be the “Use Test”. It would be the second or third time that would constitute the test.

The conversation would be simple. It would involve explaining the risk profile of the firm – why the insurer is taking each of the major risks, what do they expect to get out of that risk exposure and how are they making sure that the potential losses that they experience are not worse than represented by their risk model. This discussion should include recognition of gross risk before offsets as well as net retained risk.

After the first time, the discussion would include an explanation of the reasons for the changes in the risk profile – did the profile change because the world shifted or did it change due to a deliberate decision on the part of management to take more or less or to retain more or less of a risk.

Finally a third part of the discussion would be to identify the experience of the past year in terms of its likelihood as predicted by the model and the degree to which that experience caused the firm to recalibrate its view of each risk.

To pass the test, management would merely need to have a complete story that is largely consistent from year to year.

Those who fail the test would be making large changes to their model calibration and their story from year to year – stretching to make it look like the model information was a part of management decisions.

Some firms that might have passed before the crisis, but that should have failed, were firms that in successive years told the same story of good intentions with no action taken to reduce outsized risks.

For firms who are really using their models, there will be no preparation time needed for this test. Their story for this test will be the story of their firm’s financial management.

Ideally, I would suggest that the test be held publicly at an investor call.

Economic Risk Capital

December 1, 2009

Guest Post from Chitro Majumdar

Economic capital models can be complex, embodying many component parts, and it may not be immediately obvious that a complex model works satisfactorily. Moreover, a model may embody assumptions about relationships between variables, or about their behaviour, that may not hold in all circumstances (e.g. under periods of stress). We have developed an algorithm for Dynamic Financial Analysis (DFA) that enables the creation of a comprehensive framework to manage an enterprise’s economic risk capital. DFA is used in the capital budgeting decision process of a company launching a new venture, to predict the impact of the strategic decision on the balance sheet over the planning horizon. DFA provides a strategy for Enterprise Risk Management that helps avoid undesirable outcomes, which could be disastrous.

“The Quants know better than anyone how their models can fail. The surest way to replicate this adversity is to trust the models blindly while taking large-scale advantage of situations where they seem to provide ERM strategies that would yield results too superior to be true”

Dynamic Financial Analysis (DFA) is the most advanced modelling process in today’s property and casualty industry, allowing us to develop financial forecasts that integrate the variability and interrelationships of critical factors affecting our results. Through DFA modelling, the company’s relevant random variables are categorized by risk, and solvency testing is carried out in which the financial position of the company is evaluated from the perspective of the customers. The central idea is to quantify in probabilistic terms whether the company will be able to meet its commitments in the future.  DFA is used in the capital budgeting decision process of a company launching a new venture, predicting the impact of the strategic decision on the balance sheet over a horizon of a few years.

 

The validation of economic capital models is at a very preliminary stage. There exists a wide range of validation techniques, each of which provides corroboration for (or against) only some of the desirable properties of a model. Moreover, validation techniques are powerful in some areas, such as risk sensitivity, but not in others, such as overall absolute accuracy or accuracy in the tail of the loss distribution. It is advisable that validation processes are designed alongside the development of the models, rather than chronologically following the model building process. Certain industry validation practices are weak, with improvements needed in benchmarking, industry-wide exercises, back-testing, profit and loss analysis, stress testing, and other advanced simulation methods. For validation we adhere to the method described below.

 

Calculation of risk measures

In their internal use of risk measures, banks need to determine an appropriate confidence level for their economic capital models. It generally does not coincide with the 99.9% confidence level used for credit and operational risk under Pillar 1 of Basel II, or with the 99% confidence level for general and specific market risk. Frequently, the link between a bank’s target rating and the choice of confidence level is interpreted as the amount of economic capital necessary to prevent the bank from eroding its capital buffer at a given confidence level. According to this view, which can be interpreted as a going concern view, capital planning is seen as a dynamic exercise rather than a static one, in which banks want to hold a capital buffer “on top” of their regulatory capital, and it is the probability of eroding that buffer (rather than all available capital) that is linked to the target rating. This reflects the expectation (by analysts, rating agencies and the market) that the bank operates with capital that exceeds the regulatory minimum requirement.

Apart from considerations about the link to a target rating, the choice of a confidence level might differ based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of creditors, rating agencies and regulators, in that they are used to determine the amount of capital required to minimise bankruptcy risk. On the other hand, banks use lower confidence levels for management purposes, in order to allocate capital to business lines and/or individual exposures and to identify those exposures that are critical for profit objectives in a normal business environment.

Another interesting aspect of the internal use of different risk measures is that the choice of risk measure and confidence level heavily influences relative capital allocations to individual exposures or portfolios. In short, the farther out in the tail of a loss distribution, the more relative capital gets allocated to concentrated exposures. As such, the choice of the risk measure as well as the confidence level can have a strategic impact, since some portfolios might look relatively better or worse under risk-adjusted performance measures than they would based on an alternative risk measure.
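That tail effect is easy to see with two illustrative stand-alone exposures, one thin-tailed (normal) and one heavy-tailed (lognormal); the distributions and parameters here are assumptions chosen purely for illustration:

```python
import math
from statistics import NormalDist

def var_normal(conf, mu=0.0, sigma=1.0):
    """Value-at-risk quantile of a normal (thin-tailed) loss."""
    return mu + sigma * NormalDist().inv_cdf(conf)

def var_lognormal(conf, mu=0.0, sigma=1.0):
    """Value-at-risk quantile of a lognormal (heavy-tailed) loss."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(conf))

for conf in (0.99, 0.999):
    thin = var_normal(conf)      # well-diversified exposure
    fat = var_lognormal(conf)    # concentrated, heavy-tailed exposure
    share = fat / (thin + fat)   # heavy tail's share of stand-alone capital
    print(f"at {conf:.1%} confidence, the heavy-tailed exposure "
          f"takes {share:.0%} of the capital")
```

Moving from 99% to 99.9% confidence, the heavy-tailed exposure’s share of stand-alone capital rises from roughly 81% to about 88%, illustrating how the choice of confidence level shifts relative allocations toward concentrated exposures.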

 

Chitro Majumdar CSO – R-square RiskLab

 

 

More details: http://www.riskreturncorp.com

