Archive for July 2012

Rounding Up to Reduce Drift into Failure and Maintain Risk Karma

July 31, 2012

So what to do about Drift into Failure?

Think of DIF in simple math terms.  At every turn in the calculation, you are rounding down or truncating the values that you calculate.  With that process, your result will always be low.  Not always noticeably low, but with a bias to be below the value you would have gotten by carrying the full-precision value forward at every step.
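
The rounding-down bias can be sketched in a few lines of code.  This is a hypothetical illustration, not anything from the original post: grow a value at a fixed rate, and in one run truncate the decimals at every step.  Each truncation looks harmless; the accumulated shortfall is not.

```python
import math

def compound(value, rate, years, truncate=False):
    """Grow `value` at `rate` per year, optionally truncating each step."""
    for _ in range(years):
        value = value * (1 + rate)
        if truncate:
            value = math.floor(value)  # round down at every turn
    return value

exact = compound(1000.0, 0.07, 30)
drifted = compound(1000.0, 0.07, 30, truncate=True)
print(exact, drifted, exact - drifted)  # drifted is always below exact
```

The truncated run always lands below the full-precision run, and the gap grows with every step, which is exactly the drift pattern described above.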

With a Risk Management or Safety system, it is the same thing.  If checking ten times will give a .9999 guarantee of safety, then nine times should be good enough.  If lubricating weekly produces no failures, how about lubricating every nine days?  And so on.  If a hedge that is 98% effective works out fine most days, how about a hedge that is 96% effective?  A $5 million retention works, so why not move it to $5.5 million?
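
The "ten checks versus nine" arithmetic can be made concrete.  This is a hedged sketch under an assumption not in the original post: that each check independently catches a problem with probability p, so n checks give assurance 1 - (1 - p)**n.  The per-check rate below is chosen so that ten checks give roughly the .9999 figure; dropping one check always lowers the assurance, usually by an amount too small to notice.

```python
def assurance(p_catch, n_checks):
    """Probability that at least one of n independent checks catches a problem."""
    return 1 - (1 - p_catch) ** n_checks

p = 0.602  # hypothetical per-check catch rate: ten checks give ~0.9999
for n in (10, 9, 8, 7):
    print(n, round(assurance(p, n), 6))
```

Each reduction looks negligible on its own, which is why each one gets approved; the compounding of the reductions is the drift.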

In every case, the company rounds down.

So the practice that is needed to reduce DIF is to occasionally round up.  One year, try rounding up on half the risk systems.  Make the standards just a tiny bit tighter a few times.  Balance things that way.  Think of your firm as accumulating bad karma by allowing the shortcuts, the rounding down on the risk management and safety systems.  Protect the karma, by going the other way in the same sort of imperceptible small steps that are the evidence of the DIF.

Stop Drifting.   Join the Fight Against Bad Risk Karma Today.


The Risk of Paying too much Attention to your Experience

July 30, 2012

The Drift into Failure idea from the Safety Engineers is quite valuable.

One way that DIF occurs is when an organization listens too well to the feedback that they get from their safety system.

That is right, too much attention.  In the case of a remote risk, the feedback that you will get most days, most weeks, most months is NOTHING HAPPENS.

That is the feedback you are likely to get if you have a good loss prevention system or if you have none.

This ties to the DIF idea because organizations are always under pressure to do more with less.  To streamline and reduce costs.

So what happens?  In Safety and Risk Management, someone studies the risks of a situation and designs a risk mitigation system that reduces the frequency or severity of problem situations to an acceptable level.

Then, at some future time, the company management looks to reduce costs and/or staff.  This particular risk mitigation system looks like a prime candidate.  The company is spending time and money and there has never been a problem.  Doubtless, the same “nothing” could be achieved with less.  So the budget is cut, a position is eliminated, and they get by with less mitigation.

Then time passes and they collect the feedback, the experience with the reduced risk mitigation process.  And the experience tells them that they still have no problems.  The budget cutters are vindicated.  Things seem to be just fine with a less costly program.

If the risk here is highly remote, then this process might happen several times.

Which may eventually result in a very bad situation if the remote adverse event finally happens.  The company will be inadequately prepared.  And no one made a clear decision to dilute the defense to an ineffective level.  They just kept making small decisions and eventually they drifted into failure.

And each step was validated by their experience.

Risk Language Needs to be Learned – By the Risk Officer

July 22, 2012

Language is not imposed.  Language develops from usage.  The first step in developing a common risk language WITHIN a firm is to understand the risk language that already exists.  The goal is to figure out what concepts are spoken of in different words in different parts of the firm.  And which risk management concepts are not already in use.
If you listen instead of talking, you will usually learn that almost all risk management concepts are already in use somewhere in the firm and that there is already language for them.  When there are multiple terms in different areas, the solution is usually to teach both areas the terms that the other area uses.  Soon, the organization accepts the terms as synonyms.  (Languages have synonyms, you know.)
Good luck to the risk officer who brings in a language and tells everyone which words to say.  The risk officer needs to learn the risk language that is already in use and concentrate on elevating the significance of the practices that the existing language describes.

There are experts who say that it is important for everyone to speak the same language about risk and risk management.  The private benefits are negligible.  The collective benefits are slim.  Absolutely everything else gets along just fine with each firm having their own private language.

A Learning Break

July 16, 2012

Riskviews has been taking a learning break.

Sometimes we are refreshed and invigorated by getting away from anything relating to our primary occupation.

But other times the most refreshing thing that you can do is to learn about how people faced with seemingly different, but fundamentally similar problems approach their work.

Riskviews has been learning small bits about Resilience.  That topic is usually associated with physical systems failures.  We are fooled into thinking that physical systems failures are all about engineering questions about the failures of metals or breakdown of lubricants.

But just as most failures in financial firms are directly related to human systems issues, so are most physical systems failures.  Studies about resilience are mostly studies of the human systems that are tightly linked to the physical systems that fail.

Here is a definition of resilience:

Resilience is the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions.

Already, Riskviews is learning something.  In much risk management literature, it is assumed that the system is determined via rules and that there is not necessarily ANY adjusting happening.  But from experience, we know that in almost all cases, systems will adjust to most significant changes and certainly will adjust to “disturbances”.

At the highest level, banks found out that a capital regime under which they held capital for a 1 in x event worked for absorbing the large loss, but it did not work for providing needed capital after the large loss.  They had a plan that worked up until the day after the event.

What both banks and insurers also found in the crisis was that their systems did adjust as things got insanely adverse.  In some cases, those adjustments reduced the impact of the crisis; in other cases, they made things worse.

One of the concepts that Resilience Engineers have developed is what they call “Drift into Failure.”  What they mean by that is that in many cases, complex systems fail not because of some single part’s or person’s failure, but because of a series of small problems that in the end cause an avalanche-type failure.

Here are four ideas that were discussed at a Resilience Engineering conference in 2004 from the notes of C Nemeth:

- Get smarter at reporting the next [adverse] event, helping organizations to better manage the processes by which they decide to control risk.
- Detect drift into failure before breakdown occurs.  Large system accidents have revealed that what is considered to be normal is highly negotiable.  There is no operational model of drift.
- Chart the momentary distance between operations as they are, versus as they are imagined, to lessen the gap between operations and management that leads to brittleness.
- Constantly test whether ideas about risk still match reality.  Keeping discussions of risk alive even when everything looks safe can serve as a broader indicator of resilience than tracking the number of accidents that have occurred.

Resilience is a big topic and Riskviews will continue to share further learnings.

When You Find Yourself in a Hole, Stop Digging

July 2, 2012

Attributed to Will Rogers

Who knew that Will Rogers was a closet Risk Manager?   He must have been, because that is great risk management advice.

If you have too much of something – the first thing that you should do is to STOP ADDING to your position.

We do not yet have the full story, but it is pretty safe to guess that neither MF Global nor JP Morgan followed that idea.  It seems fairly obvious that at some point in time, they each had smaller positions that were already too big and then they ADDED to their positions.

The bank/hedge fund trading mentality suggests that the traders who really have cojones will be able to keep raising the size of their position until the market breaks.

Insurance companies harbor the same mentality, except that they are never on the big-win side of the bet.  Insurers win small on any one bet.  They win if there is no claim.  But even that lopsided situation does not stop insurers from loading up on bets where they already have too much.

So the answer is to invite Will Rogers into your Limit protocol.  When you are setting or reviewing your limits for the next period, set a new WILL ROGERS LIMIT.  The new WILL ROGERS LIMIT (WRL) is the point where you automatically stop adding to your position unless there has been a discussion and an exception to the WRL.
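
The WRL rule above can be sketched as a simple pre-trade check.  Everything here is a hypothetical illustration: the function name, thresholds, and numbers are invented for the example, and a real limit system would of course sit inside the firm's order workflow.  The point is the shape of the rule: below the hard limit sits a softer WRL at which further additions are blocked automatically unless an explicit exception has been granted.

```python
def check_order(position, order, wrl, hard_limit, exception_granted=False):
    """Return True if an order that ADDS to a position may proceed."""
    new_position = position + order
    if new_position > hard_limit:
        return False                     # never breach the hard limit
    if position >= wrl and order > 0 and not exception_granted:
        return False                     # in the hole already: stop digging
    return True

# Usage: position already past the WRL, trader tries to add.
print(check_order(position=95, order=3, wrl=90, hard_limit=100))   # → False
print(check_order(position=95, order=3, wrl=90, hard_limit=100,
                  exception_granted=True))                         # → True
```

The design choice is that the WRL does not forbid anything; it forces the discussion that the "keep raising the position" mentality otherwise skips.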

And that is what risk management is all about.  Just thinking ahead.  It is not magic.  Just listening to the great risk managers of the past.

