Archive for the ‘Random’ category

Ignoring a Risk

October 31, 2013

Ignoring is perhaps the most common approach to large but infrequent risks.

Most people think of a 1 in 100 year event as something so rare that it will never happen.

But just take a second and look at the mortality risk of a life insurer.  Each insured has, on average, around a 1-2 in 1000 likelihood of death in any one year.  However, life insurers do not plan for zero claims.  They plan for 1-2 in 1000 of their policies to have a death claim in any one year.  No one thinks it odd that something with a 1-2 in 1000 likelihood happens hundreds of times in a year.  No one goes around scoffing at the validity of the model or the likelihood estimate because such a rare event has happened.

But somehow, that seemingly simple-minded logic escapes most people when dealing with other risks.  They scoff at how silly it is that so many 1 in 100 events happen in a year.  Of course, they say, such estimates of likelihood MUST be wrong.

So they go forth ignoring the risk and ignoring the attempts at estimating the expected frequency of loss.  The cost of ignoring a low frequency risk is zero in most years.

And of course, any options for transferring such a risk will have both an expected frequency and an uncertainty charge built in, which makes those options look much too expensive.

The big difference is that a large life insurer takes on hundreds of thousands, and in the largest cases millions, of exposures to these 1-2 in 1000 risks. Of course, the law of large numbers turns these individual ultra-low-frequency risks into a predictable claims pattern, in many cases one with a fairly tight distribution of possible claims.
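A back-of-the-envelope check of just how tight that distribution gets. The book size and mortality rate below are illustrative assumptions, not any particular insurer's numbers:

```python
import math

# Hypothetical book of business: 500,000 insured lives, each with an
# (assumed) independent 1.5-in-1000 chance of a death claim in a year.
n, p = 500_000, 0.0015

mean = n * p                      # expected number of claims: 750
sd = math.sqrt(n * p * (1 - p))   # binomial standard deviation

print(f"expected claims:      {mean:.0f}")
print(f"standard deviation:   {sd:.1f}")
print(f"coeff. of variation:  {sd / mean:.1%}")
```

With half a million exposures, the coefficient of variation comes out under 4%, which is the "fairly tight distribution" the law of large numbers delivers.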

But because they are ignored, no one tries to count how many of those 1 in 100 risks we are exposed to.  Yet the statistics of 20 or 50 or 100 totally unrelated 1 in 100 risks are exactly the same as the life insurance math.

With 100 totally unrelated independent 1 in 100 risks, the chance of one or more turning into a loss in any one year is 63%!
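That 63% follows from simple independence arithmetic, and the same formula shows how quickly the number climbs with the count of risks:

```python
# Probability that at least one of n independent 1-in-100 risks
# produces a loss in a given year: 1 - (1 - 1/100)**n
for n in (20, 50, 100):
    p_at_least_one = 1 - (1 - 0.01) ** n
    print(f"{n:>3} risks: {p_at_least_one:.0%}")
# For n = 100 this gives about 63%.
```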

And the most common reaction to the experience of a 1 in 100 event happening is to decide that the statistics are all wrong!

After Superstorm Sandy, NY Governor Cuomo told President Obama that NY “has a 100-year flood every two years now.”  Cuomo had been governor for less than two full years at that point.

The point is that organizations must go against the natural human impulse to decide, separately, to ignore each of their “rare” risks, and instead realize that experiencing one of these rare events is not so rare.  What is uncertain is which one it will be.

Do we always underprice tail risk?

April 23, 2011

What in the world might underpricing mean when referring to a true tail risk? Adequacy of pricing assumes that someone actually can know the correct price.

But imagine something that has a true likelihood of 5% in any one period.  Now imagine 100 periods of randomly generated results.

Then, for each of three 100-period trials, look at 20-year windows. The tables below show the frequency of observed 20-year event rates across the 80 observation windows of each trial.

20-Year Observed Frequency, Out of 80 (Trial 1)
 0%   45
 5%   24
10%   12
15%    0
20%    0

20-Year Observed Frequency, Out of 80 (Trial 2)
 0%    9
 5%   28
10%   24
15%    8
20%    7

20-Year Observed Frequency, Out of 80 (Trial 3)
 0%   50
 5%   11
10%   20
15%    0
20%    0

If the “tail risks” are 1-in-20 events and you do not have any information other than observations of experience, then this is the sort of result you will get. The observed frequency will jump around.
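This sort of experiment is easy to reproduce. A sketch, with the caveat that the original trials' random draws are unknown and the 80 overlapping 20-period windows here are one reading of "80 observation periods":

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility; the original trials are unknown

TRUE_P = 0.05  # true per-period likelihood of the tail event

def trial_frequency_table(n_periods=100, window=20, n_windows=80):
    """Simulate n_periods of hit/miss results and tabulate the observed
    event frequency in each of 80 overlapping 20-period windows."""
    hits = [random.random() < TRUE_P for _ in range(n_periods)]
    counts = Counter()
    for start in range(n_windows):
        observed = sum(hits[start:start + window]) / window
        counts[f"{observed:.0%}"] += 1
    return counts

for trial in range(1, 4):
    print(f"Trial {trial}: {dict(trial_frequency_table())}")
```

Each run scatters the observed frequencies differently, which is exactly the point: the observed frequency jumps around even though the true likelihood never moves.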

If that is the situation, how would anyone get the price “correct”?

But suppose that you then set a price for this tail risk. Let’s just say you pick 15%, because that is what your competitor is doing.

And you have a very patient set of investors. They will judge you by 5-year results. So then we plot the 5-year results.

And you see that your profits are quite a wild ride.
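A sketch of that wild ride, under the assumptions of a unit loss amount, the 15% premium copied from the competitor, and the 5% true frequency from above:

```python
import random

random.seed(1)  # illustrative draws; the original plot's data is unknown

PREMIUM = 0.15   # price charged per period (copied from the competitor)
TRUE_P = 0.05    # true per-period loss likelihood
LOSS = 1.0       # assumed unit loss when the event hits

# one period's profit: premium collected, minus the loss if it occurs
profits = [PREMIUM - (LOSS if random.random() < TRUE_P else 0.0)
           for _ in range(100)]

# 5-year results, as the patient investors would see them
five_year = [sum(profits[i:i + 5]) for i in range(0, 100, 5)]
for i, p in enumerate(five_year, 1):
    print(f"years {5 * i - 4:>3}-{5 * i:>3}: {p:+.2f}")
```

A loss-free 5-year stretch earns +0.75; every hit knocks a full unit off that, so the investors see swings between comfortable profits and multiples of the annual premium lost.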

Now in the insurance sector, what seems to happen is that when there are runs of good results people tend to cut rates to pick up market share. And when the profits run to losses, people tend to raise rates to make up for losses.

So again we are stymied from knowing the correct rate, since the market rate goes up and down with a lag to experience.

Is the result a tendency to underprice?  You be the judge.

Why?

November 29, 2010

My favorite book of the Bible is Job.  It could have been called the book of WHY.  Everyone throughout the book assumes that there must be an answer; they try out answers, but none seems to fit.

Finally, they get an answer, but it is not the sort of answer that they were looking for.  The answer that they get is something like “you would never understand”.

But the risk manager is always being asked WHY, asked to explain the unexplainable.

David Hackett Fischer advised historians to avoid WHY and to stick with who, what, when, where and how:

A why question tends to become a metaphysical question. It is also an imprecise question, for the adverb ‘why’ is slippery and difficult to define. Sometimes it seeks a cause, sometimes a motive, sometimes a reason, sometimes a description, sometimes a process, sometimes a purpose, sometimes a justification.

This list of definitions for the word “why” is useful to the risk manager, however, because often there is no “why” under some definitions, but the other definitions can help provide a path to an answer that is probably less than satisfactory but better than nothing.

Nothing being the same as the answer “it is still a 1 in 100 event, we were just unlucky”.

If you want the company’s executives to really embrace ERM, then the risk manager needs to have all of these definitions and as many of the answers as humanly possible on hand.  The executives will need the risk manager to provide the words that they can use and feel comfortable lording over their peers who do not have such a smart risk management department.  They need the words that answer WHY.

The famous quote about risk…

“We were seeing things that were 25-standard deviation moves, several days in a row.”

David Viniar, CFO, Goldman Sachs
August 2007

Viniar obviously had someone who did not have the above list of definitions of WHY on hand; he got the S— Happens answer from a math geek.

Executives need to be brought into the Bayesian recalibration process.  Each year, the experience of the year needs to be placed on the scale from the model (as Viniar did above) and the scale then either accepted or rejected.  (Which step Viniar obviously took after making that statement.)

That exercise ought to be a part of every year-end wrap-up from the risk department: their recounting of the who, what, when, where, how and WHY of the events of the year.

A Posteriori Creation

September 29, 2010

The hunters had come back to the village empty handed after a particularly difficult day. They talked through the evening around the fire about what had happened. They needed to make sense out of their experience, so that they could go back out tomorrow and feel that they knew how the world worked well enough to risk another hunt. This day, they were able to convince themselves that what had happened was similar to another day many years ago and that it was an unusually bad day, but driven by natural forces that they could expect and plan for in the future.

Other days, they could not reconcile an unusually bad day and they attributed their experience to the wrath of one or another of their gods.

Risk managers still do the same thing.  They have given this process a fancy name: Bayesian inference.  The very bad days we now call Black Swans instead of acts of the gods.

Where we have truly advanced is in our ability to claim that we can reverse this process.  We claim that we can create the stories in advance of the experience and thereby provide better protection.

But we fail to realize that underneath, we are still those hunters.  We tell the stories to make ourselves feel better, to feel safe enough to go back out the next day.  Once we have gone through the process of a posteriori creation of the framework, the past events fit neatly into a framework that did not really exist when those same events were in the future.

If you do not believe that, think about how many risk models have had to be significantly recalibrated in the last 10 years.

To correct for this, we need to go against 10,000 or more years of human experience.  The correction can be summed up with the line from the movie The Fly,

Be afraid.  Be very afraid.

There is another answer.  That answer is

Be smart.  Be very smart.

That is because it is not always the best or even a very good strategy to be very afraid.  Only sometimes.  So you need to become smart enough to:

  1. Know when it is really important to mistrust the models and to be very afraid.
  2. Have built up the credibility and trust so that you are not ignored.

While you are doing that, be careful with the a posteriori creations.  The better people get at explaining away the bad days, the harder it will be for you to convince them that a really bad day is at hand.

You may have missed these . . .

November 22, 2009

Riskviews was dormant from April to July 2009 and restarted as a forum for discussions of risk and risk management.  You may have missed some of these posts from shortly after the restart…

Crafting Risk Policy and Processes

From Jawwad Farid

Describes different styles of Risk Policy statements and warns against creating unnecessary bottlenecks with overly restrictive policies.

A Model Defense

From Chris Mandel

Suggests that risk models are just a tool of risk managers and therefore cannot be blamed.

No Thanks, I have enough “New”

Urges thinking of a risk limit for “new” risks.

The Days After – NEVER AGAIN

Tells how firms who have survived a near death experience approach their risk management.

Whose Loss is it?

Asks about who gets what shares of losses from bad loans and suggests that shares have drifted over time and should be reconsidered.

How about a Risk Diet?

Discusses how an aggregate risk limit is better than silo risk limits.

ERM: Law of Unintended Consequences

From Neil Bodoff

Suggests that accounting changes will have unintended consequences.

Lessons from a Bull Market that Never Happened

Translates lessons learned from the 10 year bull market that was predicted 10 years ago from investors to risk managers.

Choosing the Wrong Part of the Office

From Neil Bodoff

Suggests that by seeking to be risk managers, actuaries are choosing the wrong part of the office.

Random Numbers

Some comments on how random number generators might be adapted to better reflect the variability of reality.

Non-Linearities and Capacity

November 18, 2009

I bought my current house 11 years ago.  The area where it is located was then in the middle of a long drought.  There was never any rain during the summer.  Spring rains were slight and winter snow in the mountains that fed the local rivers was well below normal for a number of years in a row.  The newspapers started to print stories about the levels of the reservoirs – showing that the water was slightly lower at the end of each succeeding summer.  One year they even outlawed watering the lawns and everyone’s grass turned brown.

Then, for no reason that was ever explained, the drought ended.  Rainy days in the spring became common and one week it rained for six days straight.

Every system has a capacity.  When the capacity of a system is exceeded, there will be a breakdown of the system of some type.  The breakdown will be a non-linearity of performance of the system.

For example, the ground around my house has a capacity for absorbing and running off water.  When it rained for six days straight, that capacity was exceeded and some of the water showed up in my basement.   The first time that happened, I was shocked and surprised.  I had lived in the house for 5 years and there had never been a hint of water in the basement. I cleaned up the effects of the water and promptly forgot about it. I put it down to a 1 in 100 year rainstorm.  In other parts of town, streets had been flooded.  It really was an unusual situation.

Then it happened again the very next spring, this time after just three days of very, very heavy rain.  The flooding in the local area was extreme.  People were driven from their homes, and the high school gymnasium was turned into a shelter for a week or two.

It appeared that we all had to recalibrate our models of rainfall possibilities.  We had to realize that the system we had for dealing with rainfall was being exceeded regularly and that these wetter springs were going to continue to exceed the system.  During the years of drought, we had built more and more in low-lying areas, and in ways that we might not have understood at the time, we had altered the overall capacity of the system by paving over ground that would have absorbed the water.

For me, I added a drainage system to my basement.  The following spring, I went into my basement during the heaviest rains and listened to the pump taking the water away.

I had increased the capacity of that system.  Hopefully the capacity is now higher than the amount of rain that we will experience in the next 20 years while I live here.

Financial firms have capacities.  Management generally tries to make sure that the capacity of the firm to absorb losses is not exceeded by losses during their tenure.  But just like I underestimated the amount of rain that might fall in my home town, it seems to be common that managers underestimate the severity of the losses that they might experience.

Writers of liability insurance in the US underestimated the degree to which the courts would assign blame for use of a substance that was once thought to be largely benign but turned out to be highly dangerous.

In other cases, though, it was the system capacity that was misunderstood.  Investors misestimated the capacity of internet firms to productively absorb new cash.  Just a few years earlier, the capacity of Asian economies to absorb investors’ cash had been over-estimated as well.

Understanding the capacity of large sectors or entire financial systems to absorb additional money and put it to work productively is particularly difficult.  There are no rules of thumb to tell what the capacity of a system is in the first place.  Then to make it even more difficult, the addition of cash to a system changes the capacity.

Think of it this way, there is a neighborhood in a city where there are very few stores.  Given the income and spending of the people living there, an urban planner estimates that there is capacity for 20 stores in that area.  So with encouragement of the city government and private investors, a 20 store shopping center is built in an underused property in that neighborhood.  What happens next is that those 20 stores employ 150 people and for most of those people, the new job is a substantial increase in income.  In addition, everyone in the neighborhood is saving money by not having to travel to do all of their shopping.  Some just save money and all save time.  A few use that extra time to work longer hours, increasing their income.  A new survey by the urban planner a year after the stores open shows that the capacity for stores in the neighborhood is now 22.  However, entrepreneurs see the success of the 20 stores and they convert other properties into 10 more stores.  The capacity temporarily grows to 25, but eventually, half of the now 30 stores in the neighborhood go out of business.

This sort of simple micro economic story is told every year in university classes.


It clearly applies to macroeconomics as well – to large systems as well as small.  Another word for these situations where system capacity is exceeded is systemic risk.  The term is misleading.  Systemic risk is not a particular type of risk, like market or credit risk.  Systemic risk is the risk that the system will become overloaded and start to behave in a severely non-linear manner.  One severe non-linear behavior is shutting down.  That is what interbank lending did in 2008.

In 2008, many knew that the capacity of the banking system had been exceeded.  They knew that because they knew that their own bank’s capacity had been exceeded, and they knew that the other banks had been involved in the same sort of business as they had.  There is a name for the risks that hit everyone who is in a market: systematic risks.  Systemic risks are usually systematic risks that grow so large that they exceed the capacity of the system.  The third broad category of risk, specific risks, are not an issue unless a firm whose large specific risk exceeds its capacity is “too big to fail”.  Then, suddenly, specific risk can become systemic risk.

So everyone just watched when the sub-prime systematic risk became a systemic risk to the banking sector.  And watched the specific risk of AIG lead to the largest single-firm bailout in history.

Many have proposed the establishment of a systemic risk regulator.  What that person would be in charge of doing is identifying growing systematic risks that could become large enough to become systemic problems.  Then they would be responsible for taking, or urging, actions intended to defuse the systematic risk before it becomes a systemic risk.

A good risk manager has a systemic risk job as well.  The good risk manager needs to pay attention to exactly the same things: to watch out for systematic risks that are growing to a level that might overwhelm the capacity of the system.  The risk manager’s responsibility is then to urge their firm to withdraw from holding any of that systematic risk.   Stories tell us that happened at JP Morgan and at Goldman.  Other stories tell us it didn’t happen at Bear or Lehman.

So the moral of this is that you need to watch not just your own capacity but everyone else’s capacity as well if you do not want stories told about you.

Models & Manifesto

September 1, 2009

Have you ever heard anyone say that their car got lost? Or that they got into a massive pile-up because it was a 1-in-200-year event that someone drove on the wrong side of a highway? Probably not.

But statements similar to these have been made many times since mid-2007 by CEOs and risk managers whose firms have lost great sums of money in the financial crisis. And instead of blaming their cars, they blame their risk models. In the 8 February 2009 Financial Times, Goldman Sachs’ CEO Lloyd Blankfein said “many risk models incorrectly assumed that positions could be fully hedged . . . risk models failed to capture the risk inherent in off-balance sheet activities,” clearly placing the blame on the models.

But in reality, it was, for the most part, the modellers, not the models, that failed. A car goes where the driver steers it, and a model evaluates the risks it is designed to evaluate, using the data the model operator feeds into it. In fact, isn’t it the leadership of these enterprises that is really responsible for not clearly assessing the limitations of these models prior to mass usage for billion-dollar decisions?

But humans, who to varying degrees all have a limit to their capacity to juggle multiple inter-connected streams of information, need models to assist with decision-making at all but the smallest and least complex firms.

These points are all captured in the Financial Modeler’s Manifesto from Paul Wilmott and Emanuel Derman.

But before you use any model you did not build yourself, I suggest that you ask the model builder if they have read the manifesto.

If you do build models, I suggest that you read it before and after each model building project.

Random Numbers

August 30, 2009

Just a quick thought on random numbers.

Perhaps the regular statistical probability distributions are the wrong model for a random number generator.
I am wondering if it wouldn’t be better to think of random numbers as coming from a toss of several dice, where the dice are drawn from a barrel in which some number of the dice are non-standard.  We do not know how many dots are on those dice, or how many of the non-standard dice are in the barrel.

We might go for dozens of tosses without hitting one of the non-standard dice, and then one day we get two of them.

Somehow we need to figure out how to play the game well when we get the regular dice but be ready when the non-standard dice are drawn without warning.
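A minimal sketch of that barrel. The non-standard faces and the proportion of odd dice below are hypothetical assumptions; the whole point of the model is that in reality we would know neither:

```python
import random

random.seed(42)

# Most draws come from a standard die, but an assumed 5% of the barrel
# holds non-standard dice whose faces we only discover when we roll them.
STANDARD_DIE = [1, 2, 3, 4, 5, 6]
ODD_DIE = [1, 1, 2, 3, 20, 50]   # hypothetical non-standard faces
P_ODD = 0.05                      # unknown in reality; assumed here

def barrel_roll():
    """Draw a die from the barrel, then roll it."""
    die = ODD_DIE if random.random() < P_ODD else STANDARD_DIE
    return random.choice(die)

rolls = [barrel_roll() for _ in range(1000)]
print("max roll: ", max(rolls))           # occasionally far above 6
print("mean roll:", sum(rolls) / 1000)    # dragged up by the fat right tail
```

Long stretches of these rolls look exactly like a fair die, until a non-standard die comes up, which is the "ready when the non-standard dice are drawn without warning" problem in miniature.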