Riskviews has been taking a learning break.
Sometimes we are refreshed and invigorated by getting away from anything relating to our primary occupation.
But other times the most refreshing thing we can do is learn how people facing seemingly different, but fundamentally similar, problems approach their work.
Riskviews has been learning small bits about Resilience. That topic is usually associated with physical systems failures. We are fooled into thinking that physical systems failures are all about engineering questions: the failure of metals or the breakdown of lubricants.

But just as most failures in financial firms are directly related to human systems issues, so are most physical systems failures. Studies about resilience are mostly studies of the human systems that are tightly linked to the physical systems that fail.
Here is a definition of resilience:
"Resilience is the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions."
Already, Riskviews is learning something. Much risk management literature assumes that the system is fully determined by rules and that there is not necessarily ANY adjusting happening. But from experience, we know that in almost all cases, systems will adjust to most significant changes, and they will certainly adjust to "disturbances".
At the highest level, banks found out that a capital regime under which they held capital for a 1-in-X event worked for absorbing the large loss, but it did not work for providing needed capital after the large loss. They had a plan that worked up until the day after the event.
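To see why in miniature, here is a minimal sketch, assuming a hypothetical lognormal loss distribution and a 99.5% (1-in-200) capital standard; the distribution and all figures are illustrative assumptions, not anyone's actual capital model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual loss distribution (lognormal chosen purely
# for illustration, not as any bank's actual model).
losses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

# Capital held to absorb a "1 in 200" (99.5th percentile) loss.
capital = np.quantile(losses, 0.995)

# Suppose the 1-in-200 event actually occurs: by construction the
# loss is exactly what the capital was sized to absorb.
event_loss = capital
remaining = capital - event_loss

print(f"Capital held:      {capital:,.2f}")
print(f"Loss absorbed:     {event_loss:,.2f}")
print(f"Capital remaining: {remaining:,.2f}")  # ~0: nothing left to run on
```

The arithmetic is trivial on purpose: when capital is sized to the loss itself, the plan works on the day of the event and leaves nothing for the day after.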
What both banks and insurers also found in the crisis was that their systems did adjust as conditions became insanely adverse. In some cases those adjustments reduced the impact of the crisis; in other cases, they made things worse.
One of the concepts that Resilience Engineers have developed is what they call "Drift into Failure." What they mean by that is that in many cases, complex systems fail not because of the failure of some single part or person, but because of a series of small problems that in the end cause an avalanche-type failure.
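The conference notes quoted below caution that there is no operational model of drift, so the following is only a toy sketch of the accumulation idea, with all thresholds and rates invented for illustration: each small erosion of a safety margin passes its local tolerance check, yet the compounding drift eventually exhausts the margin.

```python
import random

random.seed(7)

SAFETY_MARGIN = 100.0    # hypothetical starting margin
LOCAL_TOLERANCE = 2.0    # a change this small looks acceptable in isolation

margin = SAFETY_MARGIN
for step in range(1, 1001):
    # Every step erodes the margin a little; no single erosion
    # ever exceeds the local tolerance, so each passes review.
    margin -= random.uniform(0.0, LOCAL_TOLERANCE)

    # Periodic maintenance restores some margin, but not enough.
    if step % 50 == 0:
        margin += random.uniform(0.0, 10.0)

    if margin <= 0:
        print(f"Failure at step {step}, though no single change "
              f"exceeded the tolerance of {LOCAL_TOLERANCE}")
        break
else:
    print("No failure within the horizon")
```

No one step is the cause; the failure belongs to the whole series.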
Here are four ideas that were discussed at a Resilience Engineering conference in 2004, from the notes of C. Nemeth:
- Get smarter at reporting the next [adverse] event, helping organizations to better manage the processes by which they decide to control risk.
- Detect drift into failure before breakdown occurs. Large system accidents have revealed that what is considered to be normal is highly negotiable. There is no operational model of drift.
- Chart the momentary distance between operations as they are, versus as they are imagined, to lessen the gap between operations and management that leads to brittleness.
- Constantly test whether ideas about risk still match reality. Keeping discussions of risk alive even when everything looks safe can serve as a broader indicator of resilience than tracking the number of accidents that have occurred.
Resilience is a big topic and Riskviews will continue to share further learnings.