A race between a motorcycle and a wheelbarrow

Posted May 2, 2018 by riskviews
Categories: Risk


Behavioral Finance / Behavioral Economics (BF for short) says that in general folks do a poor job of decision-making related to risk and finance.  There is quite a lot of analysis of the systematic errors that experimental subjects have been found to make.

In general, people are found to make IRRATIONAL choices.  RATIONAL choices are defined to be the choices that economists have found to be the best.  (The best in the world specified by the economists – not necessarily in the world that people actually live in.  But that is the subject for a different and long essay.)

This work is highly regarded and widely studied and quoted.  Kahneman and Vernon Smith shared a Nobel Prize in 2002 for the original development of BF, and Thaler received a Nobel Prize in 2017 for his advancements in the field.

But does it actually make sense?  As they pose the issue, it seems to.  But take a step back.  They are comparing economic decisions made by an economist to decisions made by folks with no training in economics.  If they followed the general protocols of psychology, they would have looked for subjects with the least knowledge of finance and risk.

So should it be a surprise that the studied population did not do well in their study?  That they made systematic errors?

Imagine if you had a group of adults who had never been exposed to multiplication.  And you gave them a simple multiplication test.  Their answers would be compared to those of a group of math PhDs.  So for the most part, they would have been guessing at the answers to the questions.  If asked, they might well have felt good about their answers to some or all of the questions.  But it is highly likely that they would be wrong.

From this experiment, it would be concluded that people cannot answer multiplication problems.  The study might progress further and start to look at word problems, including word problems that represent everyday situations where multiplication is vital to getting by.  Oh no, people are found to be poor at this as well.

But the solution is not some grand theory about how people are flawed regarding multiplication.  The solution is math education!!!

On risk and finance, our society takes the position that in general we will not instruct people.  That the best way to learn risk is via experience.  And the best way to learn about finance is from a payday lender or a credit card past due debt collector.


Economists generally have PhDs.  And their course of study includes both risk and finance.  One topic, for example, is the math of finance.  Taught within that topic are many of the financial decisions that BF has found people make IRRATIONALLY.  Another course that is generally required of economics PhDs is statistics.  One of the ideas usually covered in statistics is risk.  Even an introductory statistics course provides much more knowledge of risk than is needed to answer the BF questions.  So economists have had systematic instruction that allows them to give the RATIONAL answers to the BF questions.

A side note – the idea of RATIONAL used in BF is consistent with Utility Maximization – an economics theory that was first fully developed in 1947.  So even some economists might have failed the BF questions prior to that.

So instead of the conclusions reached by BF, RISKVIEWS would suggest a very simple alternative:

Teach people about Risk and Finance!


Did the Three Pigs have different Risk Tolerances?

Posted March 21, 2018 by riskviews
Categories: Enterprise Risk Management, Risk Appetite


Or did they just have a different view of the degree of risk in their environment?

Image: Three Little Pigs, by Alex Proimos from Sydney, Australia

Think about it.  Is there any evidence that the first pig, whose house was made of straw, was fine with the idea of losing his house?  Not really.  More likely, he thought that the world was totally benign.  He thought that there was no way that his straw house wouldn’t be there tomorrow and the next day.  He was not tolerant of the risk of losing his house.  He just didn’t think it would happen.  But he was wrong.  It could and did happen.

The second pig used sticks instead of straw.  Did that mean that the second pig had less tolerance for risk than the first pig?  Probably not.  The second pig probably thought that a house of sticks was sturdy enough to withstand whatever the world would send against it.  This pig thought that the world was more dangerous than the first pig did.  He needed sticks, rather than straw, to make the house sturdy enough to last.  He also was wrong.  Sticks were not enough either.

The third pig had a house of bricks.  That probably cost much more than sticks or straw and took longer to build as well.  The third pig thought that the world was pretty dangerous for houses.  And he was right.  Bricks were sturdy enough to survive.  At least on the day that the wolf came by.

The problem here was not risk tolerance, but inappropriate parameters for the risk models of the first two pigs.  When they parameterized their models, the first pig probably put down zero for the number of wolves in the area.  After all, the first pig had never ever seen a wolf.  The second pig may have put down 1 wolf, but when he went to enter the parameter for how hard the wolf could blow, he put down “not very hard”.  He had not seen a wolf either.  But he had heard of wolves.  He didn’t know about the wind speed of a full-on wolf huff and puff.  His model told him that sticks could withstand whatever a wolf could do to his house.  When the third pig built his risk model, he answered that there were “many” wolves around.  And when he filled in the parameter for how hard the wolf could blow, he put “very”.  When he was a wee tiny pig, he had seen a wolf blow down a house built of sticks that had a straw roof.  He was afraid of wolves for a reason.

Too Much Logic

Posted March 13, 2018 by riskviews
Categories: Change Risk, Enterprise Risk Management, Risk Appetite


Someone recently told RISKVIEWS that before a company could start a project to revitalize their risk governance structures they MUST update their Risk Appetite and Tolerance.  Because everything in an ERM program flows from Risk Appetite and Tolerance.  That suggestion is likely to be too much logic to succeed.

What many organizations have found is that if they are not ready to update their Risk Appetite and Tolerance, there are two likely outcomes of an update project:

  1. The update project will never be completed.
  2. The update project will be completed but the organization will ignore the updated Risk Appetite and Tolerance.

An organization will make a change when the pain of continuing on the existing course exceeds the pain of change.  (paraphrased from Edgar Schein)

So if an organization is not yet thoroughly dissatisfied with their current Risk Appetite and Tolerance, then they are not likely to change.

So you can think of the ERM program as the combination of several subsystems:

  • Governance – the people who have ERM responsibilities and their organizational positions – all the way up to the board.
  • Measurement – the models and other methods used to measure risk
  • Selection, Mitigation and Control – the processes that make up the everyday activities of ERM
  • Capital Management – the processes that control aggregate risk including the ORSA.
  • Risk Reward Management – the processes that relate risk to prices and profits

When management of an organization is dissatisfied enough with any one of these subsystems, then they should undertake to revise/replace/improve those subsystems.

These subsystems are highly interconnected, so an improvement to one subsystem is likely to increase dissatisfaction with another subsystem.

For example, suppose the Governance subsystem is not working: people are not fulfilling their ERM-related responsibilities, which they may not really understand.  When this subsystem is set right, people are aware of their ERM responsibilities, and then they find out that some of the other subsystems do not provide sufficient support for them.  They get dissatisfied and urge an upgrade to another subsystem.  And so on.

This might well result in a very different order for updating an ERM program than the logical order.

However, if the update follows the wave of dissatisfaction, the changes are much more likely to be fully adopted into ongoing company practice and to be effective.

Image: Wave, by Malene Thyssen – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=651071

There is insufficient evidence to support a determination of past actual frequency of remote events!

Posted November 28, 2017 by riskviews
Categories: Enterprise Risk Management

Go figure.  The Institute and Faculty of Actuaries seems to have just discovered that humans are involved in risk modeling.  Upon noticing that, they immediately issued the following warning:

RISK ALERT
MODEL MANIPULATION

KEY MESSAGE
There are a number of risks associated with the use of models:
members must exercise care when using models to ensure that the rationale for selection of a particular model is sound and, in applying that model, it is not inappropriately used solely to provide evidence to support predetermined or preferred outcomes.

They warn particularly about the deliberate manipulation of models to get the desired answer.

There are two broad reasons why a human might select a model.  In both cases, they select the model to get the answer that they want.

  1. The human might have an opinion about the correct outcome from the model.  An outcome that does not concur with their opinion is considered to be WRONG and must be corrected.  See RISKVIEWS discussion of Plural Rationality for the range of different opinions that are likely.  Humans actually do believe quite a wide range of different things.  And if we restrict the management of insurance organizations to people with a narrow range of beliefs, that will have similar results to restricting planting to a single strain of wheat.  Cheap bread most years and none in some!
  2. The human doesn’t care what the right answer might be.  They want a particular range of results to support other business objectives.  Usually these folks believe that the concern of the model – a very remote loss – is not important to the management of the business.  Note that most people work in the insurance business for 45 years or less.  So the idea that they should be concerned with a 1 in 200 year loss seems absurd to many.  If they apply a little statistics knowledge, they might say that there is an 80% chance that there will not be a 1 in 200 year loss during their career.  Their Borel point is probably closer to a 1 in 20 level, where there is a 90% chance that such a loss will happen at least once in their career.  (The sketch after this list checks that arithmetic.)
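A minimal sketch of that career-odds arithmetic in Python, assuming independent years and a 45-year career as in the text:

    # Chance of (not) seeing a remote loss over a 45-year career,
    # assuming each year is independent.
    career = 45
    p_no_200 = (1 - 1/200) ** career              # no 1-in-200 loss at all
    p_some_20 = 1 - (1 - 1/20) ** career          # at least one 1-in-20 loss
    print(f"P(no 1-in-200 year loss in a career)  = {p_no_200:.0%}")   # ~80%
    print(f"P(some 1-in-20 year loss in a career) = {p_some_20:.0%}")  # ~90%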

They also suggest that there needs to be “evidence to support outcomes”.  RISKVIEWS has always wondered what evidence might support prediction of remote outcomes in the future.  For the most part, there is insufficient evidence to support a determination of past actual frequency of the same sort of remote events.  And over time things change, so past frequency isn’t always indicative of future likelihood, even if the past frequency were known.

One insurer, where management was skeptical of the whole idea of “principles based” assessment of remote losses, decided to use a two-pronged approach.  For their risk management, they focused on 95th percentile, 1 in 20 year losses.  There was some hope that they could validate these values through observed data.  For their capital management, they used the rating agency standard for their desired rating level.

Banks, with their VaR approach, have gone to an extreme in this regard.  Their loss horizon is in days and their calibration period is less than 2 years.  Validation is easy.  But this misses the possibility of extremes.  Banks only managed risks that had recently happened and ignored the possibility that things could get much worse, even though most risks that they were measuring went through multi-year cycles of boom and bust.
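A minimal sketch of that calibration problem, with entirely made-up return numbers: twenty years of daily returns that are calm except for two crisis years.  A 99% VaR fitted to the most recent two calm years sees none of the history’s stress:

    import numpy as np

    rng = np.random.default_rng(1)
    days = 250 * 20                          # 20 years of trading days
    vol = np.full(days, 0.01)                # calm regime: 1% daily volatility
    for crisis_year in (8, 15):              # two crisis years at 5% volatility
        vol[250 * (crisis_year - 1):250 * crisis_year] = 0.05
    returns = rng.normal(0.0, vol)

    var_recent = -np.quantile(returns[-500:], 0.01)  # 99% VaR, last ~2 years
    var_full = -np.quantile(returns, 0.01)           # 99% VaR, full history
    print(f"99% VaR from last 2 years: {var_recent:.3f}")  # roughly 0.023
    print(f"99% VaR from full history: {var_full:.3f}")    # about 2-3x larger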

At one time, banks usually used the normal distribution to extrapolate potential extreme losses.  The problem is Fat Tails.  Many, possibly all, real world risks have remote losses that are larger than what the normal distribution predicts.  Perhaps we should generalize and say that the normal distribution might be ok for predicting things that happen with high frequency and that are near the mean in value, but some degree of Fat Tails must be recognized to come closer to the potential for extreme losses.
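A small illustration of how much bigger fat-tailed quantiles are, using a Student-t distribution with 3 degrees of freedom as a stand-in for “a fat-tailed risk” (the distribution and its degrees of freedom are assumptions for illustration, not a calibration to any real exposure):

    # Remote quantiles of a normal vs. a fat-tailed (Student-t) distribution.
    from scipy.stats import norm, t

    for p in (0.01, 0.005, 0.001):           # 1-in-100, 1-in-200, 1-in-1000
        thin = norm.ppf(p)                   # normal tail quantile
        fat = t.ppf(p, df=3)                 # fat-tailed quantile
        print(f"1-in-{int(1/p):>4}: normal {thin:6.2f}  vs  t(3) {fat:6.2f}")

The gap widens as the events get more remote: at the 1-in-1000 level the t(3) loss is more than three times the normal prediction.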

For a discussion of Fat Tails and a metric for assessing them (Coefficient of Risk), try this:  Fatness of Tails in Risk Models.

What is needed to make risk measurement effective is standards for results, not moralizing about process.  The standards for results need to be stated in terms of some Tail Fatness metric such as Coefficient of Risk.  Then modelers can be challenged to either follow the standards or justify their deviations.  Can they come up with a reasonable argument for why their company’s risk has thinner tails than the standard?  (A hypothetical sketch of such a check follows.)
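As a purely hypothetical illustration of how a results standard might be checked – the ratio used here (the 1-in-1000 loss divided by the 1-in-200 loss) and the threshold of 1.5 are invented for this sketch and are not necessarily the Coefficient of Risk defined in the linked post:

    import numpy as np

    def tail_fatness_ratio(losses, p_far=0.001, p_near=0.005):
        # Hypothetical metric: how much worse the 1-in-1000 loss is
        # than the 1-in-200 loss.  Illustrative only.
        far = np.quantile(losses, 1 - p_far)
        near = np.quantile(losses, 1 - p_near)
        return far / near

    rng = np.random.default_rng(0)
    modeled = rng.standard_t(df=3, size=1_000_000)  # a modeler's simulated losses
    STANDARD = 1.5                                  # assumed minimum tail fatness
    ratio = tail_fatness_ratio(modeled)
    verdict = "meets" if ratio >= STANDARD else "must justify deviation from"
    print(f"tail fatness ratio = {ratio:.2f}; {verdict} the standard of {STANDARD}")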

Don’t Ignore Ashby’s Law

Posted August 16, 2017 by riskviews
Categories: Enterprise Risk Management

Many observers will claim that complex systems are inherently fragile.  Some argue for simplifying things instead.  But one of the main reasons why many man-made complex systems are fragile is that we often ignore Ashby’s Law.

Ashby’s Law is also known as the Law of Requisite Variety.  It is so powerful that it is sometimes called the first law of cybernetics.

Basically, Ashby’s Law states that to be fully effective, a control system must have as much variety as the system being controlled.  The control system must be as complex as the system being controlled.
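A toy demonstration of the law, with made-up numbers: an environment that can produce six distinct disturbances and a regulator with only three distinct responses.  However cleverly the regulator maps disturbances to responses, the variety of outcomes cannot be reduced below the ratio of the two varieties:

    from itertools import product

    D, R = range(6), range(3)        # 6 disturbances, only 3 responses

    def outcome(d, r):
        # Toy dynamics: the state the system ends up in when the
        # regulator answers disturbance d with response r.
        return (d + r) % 6

    # Try every possible regulator policy (one response per disturbance)
    # and find the one that leaves the fewest distinct outcomes.
    best = min(len({outcome(d, r) for d, r in zip(D, policy)})
               for policy in product(R, repeat=len(D)))
    print(best)  # 2 -- outcome variety cannot fall below 6/3, per the law

With six responses the regulator could drive every disturbance to a single outcome; with only three, the best it can do is confine the system to two.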

Man-made complex systems often evolve as people decide to add more and more functionality – more variety – to existing systems.  Sometimes this includes linking up multiple complex systems.

But humans are really clever and they tend to save time and money by not bothering to figure out what additional controls are needed to make a newly enhanced system secure.  There is often no appreciation of how much more control is needed when two complex systems are combined.

But look at the literature regarding company mergers and acquisitions.  The literature keeps saying that the majority of this activity destroys value.  Sometimes that is because the two organizations have incompatible cultures.  Executives are becoming aware of that, and activities to create a single new culture are sometimes included in post-merger activity lists.

But there is an aversion to recognizing that there needs to be much more spending on control systems.  Most often in a merger, there is a reduction in the number of people assigned to internal controls, either directly or within a line function.  This is usually expected to be one of the synergies or redundancies that can be eliminated to justify the purchase price.

But in reality, if the new merged entity is more complex than the two original firms, the need for control, as expressed under Ashby’s Law, is greater than the sum of the needs of the two original entities.

Merging without recognizing this means that there is an out-of-the-money put embedded in the merged entity.  The merged entity has lower control expenses than it should for a time.  And maybe, just maybe, it will experience major problems because of the inadequate controls.


Risk and Reward are not relatives

Posted July 1, 2017 by riskviews
Categories: Enterprise Risk Management

A recent report on risk management mentions near the top that risk and reward have a fundamental relationship.  But experience tells us that this is just not true in most situations.

The first person (that RISKVIEWS can find) to comment on that relationship was the great economist Alfred Marshall:

“in all undertakings in which there are risks of great losses, there must also be hopes of great gains.”
1890 Principles of Economics

That seems to be a very realistic characterization of the relationship – one of hope.  But his statement has been heavily distorted through the years.  Many have come to believe that if you increase risk then you also, automatically, increase reward.  Or that if you want increased reward that you must increase risk.

Perhaps the relationship between risk and reward is a simple arithmetic statement, made by those who believe that all economic actors are rational.  And by rational, they mean that they make choices to maximize expected value.

So if all of the choices that you actively consider have a positive expected value, then those with higher risk will have to have higher rewards to keep the sum positive.  (Alternately, risks would have much lower likelihood than gains – but this hardly seems to fit in with the concept of higher risks.)
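A minimal sketch of that arithmetic, with invented payoffs: two opportunities with the same positive expected value, where the riskier one must promise a much larger gain to keep the sum positive:

    # Two opportunities with equal expected value: the riskier one
    # (40% chance of losing 50, vs. 10%) must offer a gain of 40, not 10.
    def expected_value(p_gain, gain, loss):
        return p_gain * gain - (1 - p_gain) * loss

    safe = expected_value(p_gain=0.90, gain=10, loss=50)   # 9 - 5   = +4
    risky = expected_value(p_gain=0.60, gain=40, loss=50)  # 24 - 20 = +4
    print(safe, risky)  # both +4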

So perhaps the “relationship” between risk and reward is this:

For opportunities where the risk and reward can be reliably determined in both amount and likelihood, then among those opportunities with a positive expected value, those with higher risk will have higher reward.

But isn’t that the rub?  Can we reliably determine risk, reward and their likelihood for most opportunities?

But then there is another issue.  For a single opportunity, the outcome will either be a loss or a gain.  If there is higher risk, the likelihood or amount of loss is higher.  So if there is higher risk, there is a higher chance of a loss or a higher chance of a larger loss.

So by definition, an opportunity with higher risk may just produce a loss. And either the likelihood or amount of that loss will, by definition, be higher.  No reward – LOSS.

Now, you can reduce the likelihood of that loss by creating a diversified portfolio of such opportunities.  And by diversified, read unrelated.

So the rule above needs to be amended…

For opportunities where the risk and reward can be reliably determined in both amount and likelihood, then among those opportunities with a positive expected value, those with higher risk will have higher reward.  To reliably achieve a higher reward, rather than more losses, it is necessary to choose a number of these opportunities that are unrelated.  
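A minimal simulation of the amended rule, reusing the invented payoffs from the sketch above (gain 40 with probability 60%, lose 50 otherwise, expected value +4): held alone, the opportunity produces a loss 40% of the time; spread across many unrelated opportunities, the chance of a net loss shrinks toward zero:

    import numpy as np

    rng = np.random.default_rng(42)
    for n in (1, 10, 100, 1000):
        # n unrelated (independent) copies of the same opportunity
        wins = rng.binomial(n, 0.60, size=100_000)
        total = 40.0 * wins - 50.0 * (n - wins)       # net portfolio payoff
        print(f"{n:5d} unrelated opportunities: P(net loss) = "
              f"{np.mean(total < 0):.1%}")
    # roughly 40%, 37%, 18%, 0.2% -- the diversification does the work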

Realize that we are talking about Knightian risk here.  Risk where the likelihood is knowable.  For Knightian Uncertainty – where the likelihood is not knowable – this is much more difficult to achieve.  Investors and business people who realize that they are faced with Uncertainty will usually Hope for even greater gains.  They require higher potential returns.  And/or set higher prices.

The issue is that in many cases, humans will make mistakes when assessing likelihood of uncertainty, risk and reward (see Restaurant failure rate).  There are quite a number of reasons for that.  One of my favorites is survivor bias in our data of comparables (They just don’t make them like they used to).  We also overestimate our chances of success because we overrate our own capabilities (see Lake Wobegon, above average children).  And to achieve that portfolio diversification effect, we need to also be able to reliably assess interdependence (see mortgage interdependence, 2008).

The real world problem is that aside from lottery tickets, there are very few opportunities where the likelihood of losses is actually knowable.  So risk and reward are not necessarily related.  Except perhaps in the way that all humans are related . . . through Adam (or Lucy if you prefer).

How to manage Risk in Uncertain Times

Posted June 8, 2017 by riskviews
Categories: Enterprise Risk Management

The biologist Holling saw that natural systems went through phases.  One view of those four phases is:

  1. Rapid Growth
  2. Controlled Growth
  3. Collapse
  4. Reorganization

The phase will usually coincide with an environment that encourages that sort of activity.  The fourth phase, Reorganization, coincides with an Uncertain environment.

Since the financial crisis of 2008, many aspects of our economies and our societies have drifted in and out of the Uncertain environment.  We have been living in an historical inflection point.  The post-WWII world, both politically and economically, may be coming to an end.  But no new regime has emerged to take its place.  Difficult times for making long term plans and long term commitments.

And that describes the best approach to risk management in Uncertain times.  Avoid long term and large commitments.  Keep short term, stay diversified.  Returns will not be great that way, but losses will be small and the chance of a devastating loss smaller.

Sooner or later things will clarify and we will move out of uncertainty.  But one of the things that keeps us in an uncertain stage is the way that people act as if somehow, they have a right to something more certain.  Most often they are hoping for a return to a controlled growth phase.  When the careful are rewarded modestly.  Some long for the return to the boom phase when a few are rewarded greatly.

But right now, it makes the most sense not to count on that and to accept that we will face uncertainty for some time to come.

For more on Uncertainty see these posts

