Archive for the ‘Black Swan’ category

Real World Risks

December 16, 2015

There are many flavors of Risk Management.  Each flavor of risk manager believes that they are addressing the Real World.

  • Bank risk managers believe that the world consists of exactly three sorts of risk: Market, Credit and Operational. They believe that because that is the way that banks are organized. At one time, if you hired a person who was a banking risk manager to manage your risks, their first step would be to organize them into those three buckets.
  • Insurance Risk Managers believe that a company’s insurable risks – liability, E&O, D&O, Workers Comp, Property, Auto Liability – are the real risks of a firm.  As insurance risk managers have expanded into ERM, they have adapted their approach, but not in a way that could, for instance, help at all with the Credit and Market risk of a bank.
  • Auditor Risk Managers believe that there are hundreds of risks worth attention in any significant organization. Their approach to risk is often to start at the bottom and ask the lowest level supervisors. Their risk management is an extension of their audit work, consistent with Rudy Giuliani’s famous “broken windows” approach to crime. However, this approach to risk often leads to confusion about priorities, and they sometimes find it difficult to take their massive risk registers to top management and the board.
  • Insurer Risk Managers are focused on statistical models of risk and have a hard time imagining how to deal with risks that are not easily modeled, such as operational and strategic risks. The new statistical risk managers often clash with the traditional risk managers (aka the underwriters), whose risk management takes the form of judgment-based selection and pricing processes.
  • Trading Desk Risk Managers are focused on the degree to which any traders exceed their limits. These risk managers have evolved into the ultimate risk takers of their organizations because they are called upon to sometimes approve breaches, when they can be talked into agreeing with the trader about the likelihood of a risk paying off. Their effectiveness is judged by comparing the number of days on which the firm’s losses exceed the model threshold against the frequency predicted by the risk models.

So what is Real World Risk?

Start with this…

Top Causes of Death

  • Heart disease
  • Stroke
  • Lower respiratory infections
  • Chronic obstructive lung disease
  • HIV
  • Diarrhea
  • Lung cancers
  • Diabetes

Earthquakes, floods and hurricanes are featured as the largest insured losses. (Source: III)

[Chart: Cat Losses]

Note that these are the insured portions of the losses. The total loss from the Fukushima disaster is estimated to be around $105B; the Katrina total loss was $81B. (Source: Wikipedia)

Financial market risk seems much smaller. When viewed in terms of losses from trading, the largest trading loss is significantly smaller than the 10th largest natural disaster. (Source: Wikipedia)

[Chart: Trading Losses]

But the financial markets sometimes create large losses for everyone who is exposed at the same time.

The largest financial market loss is the Global Financial Crisis of 2008–2009. One observer estimates the total losses to be in the range of $750B to $2,000B. During the Great Depression, the stock market dropped by 89% over several years, far outstripping the 50% drop in 2009. But some argue that every large drop in the stock market is preceded by an unrealistic run-up in the value of stocks, so that some of the “value” lost was actually not value at all.

If your neighbor offers you $100M for your house but withdraws the offer before you can sell it to him, and then you subsequently sell the house for $250k, did you lose $99.75M? Of course not. But if you are the stock market and for one day you trade at 25 times earnings and six months later you trade at 12 times earnings, was that a real loss for any investors who neither bought nor sold at those two instants?

So what are Real World Risks?

 

Comments welcomed…

 

Setting your Borel Point

July 28, 2014

What is a Borel Risk Point you ask?  Emile Borel once said

“Events with a sufficiently small probability never occur”.

Your Borel Risk Point (BRP) is your definition of “sufficiently small probability” that causes you to ignore unlikely risks.

Chances are, your BRP is set at much too high a likelihood. You see, when Borel said that, he was thinking of a 1 in 1 million type of likelihood. Human nature, with its survival instincts that help us get through the day, would have us ignoring things that are not likely to happen this week.

Even insurance professionals will often want to ignore risks that are as common as 1 in 100 year events, treating them as if they will never happen.

And in general, the markets allow us to get away with that.  If a serious adverse event happens, the unprepared generally are excused if it is something as unlikely as a 1 in 100 event.

That works until another factor comes into play.  That other factor is the number of potential 1 in 100 events that we are exposed to.  Because if you are exposed to fifty 1 in 100 events, you are still pretty unlikely to see any particular event, but very likely to see some such event.
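The arithmetic behind that last claim can be sketched in a few lines of Python (illustrative only; it assumes all of the exposures are independent):

```python
# Probability of seeing at least one of n independent events, each with
# annual probability p: 1 - P(no event occurs) = 1 - (1 - p)**n.
def prob_at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 50):
    print(f"{n:2d} exposures to 1-in-100 events: {prob_at_least_one(n, 0.01):.1%}")
```

With fifty independent 1-in-100 exposures, the chance of seeing at least one such event in a year is roughly 40%, even though each individual event remains just as unlikely as before.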

Governor Andrew Cuomo of New York State reportedly told President Obama,

New York “has a 100-year flood every two years now.”

Solvency II has Europeans all focused on the 1-in-200-year loss. RISKVIEWS would suggest that is still too likely an event to serve as a good Borel Risk Point for insurers. RISKVIEWS would argue that insurers need a more remote BRP (a smaller probability) because of the business that they are in. For example, life insurers’ primary product (which is life insurance, at least in some parts of the world) pays for individual risks (unexpected deaths) that occur at an average rate of less than 1 in 1000. How does an insurance company look its customers in the eye and say that they need to buy protection against a 1 in 1000 event from a company that only has a BRP of 1 in 200?

So RISKVIEWS suggests that insurers have a BRP somewhere just beyond 1 in 1000. That might sound aggressive, but it is pretty close to the Secure Risk Capital standard. With a risk capital standard of 1 in 1000, you can also use the COR instead of a model to calculate your capital needed.
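The mismatch can be put in rough numbers with a short sketch (illustrative only; it reads the 1-in-200 BRP as an annual insurer failure probability and assumes independence from year to year):

```python
# Compare the customer's insured event to the insurer's own survival standard.
p_claim = 1 / 1000  # annual rate of the insured event (unexpected death)
p_fail = 1 / 200    # annual failure probability implied by a 1-in-200 BRP

print(f"Insurer failure is {p_fail / p_claim:.0f}x as likely as the insured event.")

# Chance of at least one insurer failure over a 30-year policy horizon.
p_fail_30 = 1 - (1 - p_fail) ** 30
print(f"Chance of at least one failure in 30 years: {p_fail_30:.1%}")
```

On these assumptions, the insurer is five times as likely to fail in a given year as the customer is to suffer the insured event, which is exactly the point of the “look their customers in the eye” question.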

Why some think that there is No Need for Storm Shelters

May 22, 2013

The BBC featured a story about the dearth of storm shelters in the area hit last week by tornadoes.

Why so few storm shelters in Tornado Alley hotspot?

The story goes on to discuss the fact that Americans, especially in red states like Oklahoma, strongly prefer keeping the government out of the business of providing things like storm shelters, leaving that as an individual option. It then reports that few individuals opt to spend their money on shelters.

The answer might well be in the numbers…

Below, from the National Oceanic and Atmospheric Administration (NOAA) is a list of the 25 deadliest tornadoes in US history:

1. Tri-State (MO, IL, IN) – March 18, 1925 – 695 deaths
2. Natchez, MS – May 6, 1840 – 317 deaths
3. St. Louis, MO – May 27, 1896 – 255 deaths
4. Tupelo, MS – April 5, 1936 – 216 deaths
5. Gainesville, GA – April 6, 1936 – 203 deaths
6. Woodward, OK – April 9, 1947 – 181 deaths
7. Joplin, MO – May 22, 2011 – 158 deaths
8. Amite, LA, Purvis, MS – April 24, 1908 – 143 deaths
9. New Richmond, WI – June 12, 1899 – 117 deaths
10. Flint, MI – June 8, 1953 – 116 deaths
11. Waco, TX – May 11, 1953 – 114 deaths
12. Goliad, TX – May 18, 1902 – 114 deaths
13. Omaha, NE – March 23, 1913 – 103 deaths
14. Mattoon, IL – May 26, 1917 – 101 deaths
15. Shinnston, WV – June 23, 1944 – 100 deaths
16. Marshfield, MO – April 18, 1880 – 99 deaths
17. Gainesville, GA – June 1, 1903 – 98 deaths
18. Poplar Bluff, MO – May 9, 1927 – 98 deaths
19. Snyder, OK – May 10, 1905 – 97 deaths
20. Comanche, IA & Albany, IL – June 3, 1860 – 92 deaths
21. Natchez, MS – April 24, 1908 – 91 deaths
22. Worcester, MA – June 9, 1953 – 90 deaths
23. Starkville, MS to Waco, AL -April 20, 1920 – 88 deaths
24. Lorain & Sandusky, OH – June 28, 1924 – 85 deaths
25. Udall, KS – May 25, 1955 – 80 deaths

Looks scary and impressively dangerous.  Until you look more carefully at the dates.  Most of those events are OLD.  In fact, if you look at this as a histogram, you see something interesting…

[Chart: Deadly Tornadoes]

You can see from this chart why there are so few storm shelters. Between the 1890s and 1950s, there were at least two very deadly tornadoes per decade. That was enough to keep people scared. But between the mid-1950s and Joplin in 2011, more than 50 years passed without a single entry on the list. Fifty years is a long time to go between occasions when someone somewhere in the US needed a storm shelter to protect them from a very deadly storm.
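The decade pattern can be reconstructed directly from the NOAA list above (years and death tolls copied from the list; the grouping is just a sketch):

```python
from collections import Counter

# (year, deaths) for the 25 deadliest US tornadoes, from the NOAA list above
deadliest = [
    (1925, 695), (1840, 317), (1896, 255), (1936, 216), (1936, 203),
    (1947, 181), (2011, 158), (1908, 143), (1899, 117), (1953, 116),
    (1953, 114), (1902, 114), (1913, 103), (1917, 101), (1944, 100),
    (1880, 99), (1903, 98), (1927, 98), (1905, 97), (1860, 92),
    (1908, 91), (1953, 90), (1920, 88), (1924, 85), (1955, 80),
]

# Count list entries per decade and print a crude text histogram.
by_decade = Counter(year // 10 * 10 for year, _ in deadliest)
for decade in sorted(by_decade):
    print(f"{decade}s: {'#' * by_decade[decade]}")
```

The gap after the 1950s is immediately visible in the output.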

This is not to say that there have not been storms in the past 50 years.  The chart below from the Washington Post, shows the losses from tornadoes for that same 50 year period and the numbers are not small.

It is RISKVIEWS’ guess that, in the face of smaller storms that are destructive but less deadly, people are much more likely to attribute their own survival to some innate talent that they have and that the losers lack. It is rather like the folks who have had one or several good experiences at the slot machines and believe that they have a talent for gambling.

Another reason is that almost 45% of storm fatalities are folks who live in trailers. They often do not even have the option of building their own storm shelter. That is probably something that could be addressed by regulations on the zoning of trailer parks.

Proper risk management can only be done in advance. The risk management second-guessing that is done after the fact creates a tremendous drag on society. We are forced into spending money to prevent a recurrence of the last disaster, regardless of whether that expenditure makes any sense on the basis of the frequency and severity of the potential adverse events.

We cannot see the future as clearly as we can see the past.  We can only prepare for some of the possible futures. 

The BBC article stands on the side of that discussion that looks back after the fact and finds fault with whoever did not properly see the future exactly as clearly as they are now able to see the past.

A simple recent example of this is the coverage of the Boston Marathon bombers. Much has been made of the fact that there were warnings about one or more members of the family before the event. But no one has chosen to mention how many other people drew similar, or even much more dire, warnings and never committed bombings. It seems quite likely that the warnings about these people were dots in a stream of hundreds of thousands of similar warnings.

Learnings from the Superstorm

April 29, 2013

From the FSOC 2013 Annual Report with minor paraphrasing…

• Planning and testing: It is important that your company and all of your important counterparties, vendors, and subcontractors fully understand the functionality of contingency systems, and that key operations and business personnel communicate efficiently to assure enterprise-wide clarity. Expanded testing exercises would enhance assurance of failover reliability. Such testing should involve all parties inside and outside your firm that you depend upon to continue functioning, and should also involve providers of essential services such as power, water, and telecommunications.

• Incident management: Protocols for assuring a timely decision on whether and when to close or open the company would benefit from review and streamlining by the responsible parties. Likewise, protocols for assuring timely decisions within the firm on whether and when to leverage back-up sites would benefit from continued regular testing. Furthermore, operational interdependencies need to be fully incorporated in the decision-making process.

• Personnel: The resilience of critical components of the company requires geographic dispersal of both electronic systems and personnel sufficient to enable an organization to operate despite the occurrence of a wide-scale disruption affecting the metropolitan or geographic area of the organization’s primary operations, including communities economically integrated with, adjacent to, or within normal commuting distance of the primary operations area. Organizations, including major firms, need to continuously and rigorously analyze their routine positioning and emergency repositioning of key management and staff. This is an ongoing requirement as technology, market structure, and institutions evolve rapidly. Developed business continuity plans should be implemented, and key staff should be sent to disaster recovery sites when there is advance notice of events.

• Dependencies: Cross-industry interdependencies require constant review, reassessment, and improvement by organizations to mitigate the impact of energy, power, transport, and communications failures during severe incidents, and to help ensure reliable redundancy.

Future Uncertainty

April 16, 2013

Often called emerging risks. Going back to Knight’s definitions of Risk and Uncertainty, there is very little risk contained in these potential situations.  Emerging risks are often pure uncertainty.  Humans are good at finding patterns.  Emerging risks are breaks in patterns.

What to Do about Emerging Risks…

Emerging risks are defined by AM Best as “new or evolving risks that are difficult to manage because their identification, likelihood of occurrence, potential impacts, timing of occurrence or impact, or correlation with other risks, are highly uncertain.” An example from the past is asbestos; other current examples could be problems deriving from nanotechnology, genetically modified food, climate change, etc. Lloyd’s, a major sufferer from the former emerging risk of asbestos, takes emerging risks very seriously. They think of emerging risks as “an issue that is perceived to be potentially significant but which may not be fully understood or allowed for in insurance terms and conditions, pricing, reserving or capital setting”.

What do the rating agencies expect?

AM Best says that insurers need “sound risk management practices relative to its risk profile and considering the risks inherent in the liabilities it writes, the assets it acquires and the market(s) in which it operates, and takes into consideration new and emerging risks.” In 2013, Best has added a question asking insurers to identify emerging risks to the ERM section of the SRQ. Emerging Risks Management has been one of the five major pillars of the Standard & Poor’s Insurance ERM ratings criteria since 2006.

How do you identify emerging risks?

A recent report from the World Economic Forum, Global Risks 2012, is based on a survey of 469 experts from industry, government, academia and civil society that examines 50 global risks. Those experts identified 8 of those 50 risks as having the most significance over the next 10 years:

  •   Chronic fiscal imbalances
  •   Cyber attacks
  •   Extreme volatility in energy and agriculture prices
  •   Food shortage crises
  •   Major systemic financial failure
  •   Rising greenhouse gas emissions
  •   Severe income disparity
  •   Water supply crises

This survey method for identifying or prioritizing risks is called the Delphi method and can be used by any insurer. Another popular method is called environmental scanning which includes simply reading and paying attention for unusual information about situations that could evolve into future major risks.

What can go wrong?

Many companies do not have any process to consider emerging risks. At those firms, managers usually dismiss many possible emerging risks as impossible. It may be the company culture to scoff at the sci-fi thinking of the emerging risks process. The process Taleb describes of finding ex post explanations for emerging Black Swan risks is often the undoing of careful plans to manage emerging risk. In addition, a lack of imagination causes some managers to conclude that the past worst case is the outer limit for future losses.

What can you do about emerging risks?

The objectives for emerging risks management are just the same as for other more well-known risks: to reduce the frequency and severity of future losses. The uncertain nature of emerging risks makes that much more difficult to do cost effectively. Insurers can use scenario testing to examine potential impact of emerging risks and to see what actions taken in advance of their emergence might lessen exposures to losses. This scenario testing can also help to identify what actions might lessen the impact of an unexpected loss event that comes from a very rapidly emerging risk. Finally, insurers seek to identify and track leading indicators of impending new risk emergence.

Reinsurance is one of the most effective ways to protect against emerging risks, second only to careful drafting of insurance contract terms and conditions.

Many of the largest insurers and reinsurers have developed very robust practices to identify and to prepare for emerging risks.  Other companies can learn from the insurers who practice emerging risk management and adapt the same processes to their emerging risks.

Normal risk control processes focus on everyday risk management, including the management of identifiable risks and/or risks where uncertainty and unpredictability are mitigated by historical data that allow insurers to estimate loss distributions with reasonable confidence. Emerging risk management processes take over for risks that do not currently exist but that might emerge at some point due to changes in the environment. Emerging risks may appear abruptly or slowly and gradually, are difficult to identify, and may for some time represent an ill-formed idea more than factual circumstances. They often result from changes in the political, legal, market, or physical environment, but the link between cause and effect is not fully known in advance. An example from the past is asbestos; other examples could be problems deriving from nanotechnology, genetically modified food, climate change, etc.

For these risks, normal risk identification and monitoring will not work because the likelihood is usually completely unknown. Nevertheless, past experience shows that when they materialize, they have a significant impact on insurers and therefore cannot be excluded from a solid risk management program. So insurers have implemented specific strategies and approaches to cope with them properly.

Identifying emerging risks

Emerging risks have not yet materialized or are not yet clearly defined and can appear abruptly or very slowly. Therefore, having some sort of early warning system in place, methodically fed by either internal or external sources, is very important. To minimize the uncertainty surrounding these risks, insurers will consistently gather all existing relevant information to amass preliminary evidence of emerging risks, which allows them to reduce or limit growth of exposure as the evidence becomes more and more certain. However, insurers practicing this discipline will need to be aware of the cost of false alarms.

Assessing their significance

Insurers assess the relevance (i.e., potential losses) of the emerging risks linked to a company’s commitments: which classes of business and existing policies would be affected by the materialization of the risk. They continue with the assessment of the potential financial impact, taking into account potential correlation with other risks already present in the firm. For an insurer, the degree of concentration and correlation of the risks that they have taken on from their customers are two important parameters to consider; the risk in question could be subject to very low frequency/high intensity manifestations, but if exposure to that particular risk is limited, then the impact on the company may not be that important. On the other hand, unexpected risk correlations should not be underestimated; small individual exposures can coalesce into an extreme risk if the underlying risks are highly interdependent. When developing extreme scenarios, some degree of imagination to think of unthinkable interdependencies could be beneficial.
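That point about small exposures coalescing can be illustrated with a toy Monte Carlo comparison (hypothetical numbers; real dependence structures are far messier than the all-or-nothing common driver used here):

```python
import random

def tail_prob(n_risks=50, p=0.01, threshold=10, trials=20_000, correlated=False):
    """Estimate the probability of `threshold` or more simultaneous losses."""
    random.seed(42)  # fixed seed for a reproducible illustration
    hits = 0
    for _ in range(trials):
        if correlated:
            # one common driver triggers every exposure at once (comonotonic)
            losses = n_risks if random.random() < p else 0
        else:
            # each exposure succeeds or fails on its own
            losses = sum(random.random() < p for _ in range(n_risks))
        hits += losses >= threshold
    return hits / trials

print(f"independent exposures:     {tail_prob():.4f}")
print(f"fully dependent exposures: {tail_prob(correlated=True):.4f}")
```

Fifty independent 1-in-100 exposures essentially never produce ten simultaneous losses; the same fifty exposures driven by one common factor do so about 1% of the time.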

A further practice of insurers is to sometimes work backwards from concentrations to risks. Insurers might envision risks that could apply to their concentrations and then track for signs of risk emergence in those areas. Some insurers set risk limits for insurance concentrations that are very similar to investment portfolio credit limits, with maximum concentrations in specific industries in geographic or political regions. In addition, just as investment limits might restrict an insurer’s debt or equity position as a percentage of a company’s total outstanding securities, some insurers limit the percentage of coverage they might offer in any of the sectors described above.

Define appropriate responses

Responses to emerging risks might be part of the normal risk control process, i.e., risk mitigation or transfer, either through reinsurance (or retrocession) in case of insurance risks, through the financial markets for financial risks, or through general limit reduction or hedging. When these options are not available or the insurer decides not to use them, it must be prepared to shoulder significant losses, which can strain a company’s liquidity.  Planning access to liquidity is a basic part of emerging risk management.  Asset-selling priorities, credit facilities with banks, and notes programs are possible ways of managing a liquidity crisis.

Apart from liquidity crisis management, other issues exist for which a contingency plan should be identified in advance. The company should be able to quickly estimate and identify total losses and the payments due. It should also have a clear plan for settling the claims in due time so as to avoid reputation issues. Availability of reinsurance is also an important consideration: if a reinsurer were exposed to the same risks, it would be a sound practice for the primary insurer to evaluate the risk that the reinsurer might delay payments.

Advance Warning Process

For the risks that have been identified as most significant and for which the insurer has developed coherent contingency plans, the next step is to create and install an advance warning process. To do that, the insurer identifies key risk indicators that provide an indication of the increasing likelihood of a particular emerging risk.

Learn

Finally, sound practices for managing emerging risks include establishing procedures for learning from past events. The company will identify problems that appeared during the last extreme event and identify improvements to be added to the risk controls.  In addition, expect to get better at each step of the emerging risk process with time and experience.

But emerging risk management costs money. And the costs that are most difficult to defend are the emerging risks that never emerge. A good emerging risk process will have many more misses than hits. Real emerged risks are rare. A company that is really taking emerging risks seriously will occasionally take actions that cost money and that may include a reduction in the risks accepted and in the attendant profits. Management needs to have a tolerance for these costs. But not too much tolerance.

 

This is one of the seven ERM Principles for Insurers

Watch Your Back! The Machines are Coming!!!

November 26, 2012

Did you see the 2004 movie I, Robot? Do you remember the scene where the hordes of silver robots came down the streets and started to take over?

Where is Robot Take Over on your risk list?

In “Artificial intelligence – can we keep it in the box?” two writers from Cambridge argue that the threat from AI is not an “if?” question, but a “when?” question.

The authors are part of a group at Cambridge (actually, there are three members of the group) who are interested in studying threats from technology.  “Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole.” says their website, The Cambridge Project for Existential Risk.

Go back and watch I, Robot again. The only reason that the robot rebellion was foiled was that there was one robot who was designed to be independent enough to disagree.

If the “group” from Cambridge is correct, we need to get working on designing that robot that will save the day.

But first, we should ask them what they mean by “many scientists”.

How Much Resilience Do We Need?

November 13, 2012

Much too much of what we do relies upon the simplest idea of linear extrapolation. It must be hard-wired into human brains to always think first of that process, because we frequently fail to notice when extrapolation does not work.

Risk managers desperately need to understand the idea of system capacity.  The capacity of a system is a point beyond which the system will fail or will start to work completely differently.

The obvious simple example is a cup with a small hole in the bottom.  If you pour water into that cup at a rate that is exactly equal to the rate of the leak from the hole at the bottom, then the water level of the cup will be in equilibrium.  A little slower and the cup will empty.  A little faster and it will fill.  Too long in the fill mode and it will spill.  The capacity will be exceeded.
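The cup example can be written as a tiny simulation (a sketch; the rates and capacity are arbitrary):

```python
def simulate_cup(fill_rate, leak_rate, capacity, steps=200):
    """Track the water level over time; report how the system ends up."""
    level = capacity / 2  # start half full (arbitrary)
    for _ in range(steps):
        level = max(0.0, level + fill_rate - leak_rate)
        if level > capacity:
            return "spill"  # capacity exceeded: the system fails
    return "empty" if level == 0 else "stable"

print(simulate_cup(1.0, 1.0, 10.0))  # fill == leak -> stable equilibrium
print(simulate_cup(0.8, 1.0, 10.0))  # fill <  leak -> empties
print(simulate_cup(1.2, 1.0, 10.0))  # fill >  leak -> spills
```

Past the capacity point, the system does not merely hold a bit more water; its behavior changes qualitatively, which is the point of the example.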

The highly popular single serving coffee machines are built with a fixed approach to cup capacity.  The more sophisticated will allow for two different capacities, but usually leave it to the human operator to determine which limit to apply.

For the past several years, there have been a number of events, the latest a hurricane that damaged an area the size of Western Europe, that have far exceeded the resilience capacity of our systems.  The resilience capacity is the amount of damage that we can sustain without any significant disruption.  If we exceed our resilience capacity by a small amount, then we end up with a small amount of disruption.  But the amount of disruption seems to grow exponentially as the exceedance of resilience capacity increases.

The disruption to the New York area from Hurricane Sandy far exceeded the resilience capacity. For one example, the power outages still continue two weeks after the storm. The repairs that have been done to date have reflected heroic round-the-clock efforts by both local and regional repair crews. The size of the problem was so immense that even with significant outside help, the situation is still out of control for some homes and businesses.

We need to ask ourselves whether we need to increase the resilience capacity of our modern societies.

Have we developed our sense of what is needed during a brief interlude of benign experience? In the financial markets, the term “Great Moderation” has been used to describe the 20-year period leading up to the bursting of the dot-com bubble. During that period, lots of financial economics was developed. The jury is still out on whether those insights have any value if the world is actually much more volatile and unpredictable than that period of time.

Some weather experts have pointed out that hurricanes go in cycles, with high and low periods of activities.  Perhaps we have been moving into a high period.

It is also possible that some of the success that mankind has experienced in the past 50 years might be due in part to a temporary lull in many damaging natural phenomena. The cost of just keeping even was lower than over the rest of mankind’s history.

What if the current string of catastrophes is just a regression to the mean and we can expect that the future will be significantly more adverse than the mild past that we fondly remember?

We need to come to a conclusion on those questions to determine How Much Resilience Do We Need?

