Archive for February 2013

US Government Debt – Non-Debate of Know Nothings

February 27, 2013

One of the most frustrating things about the US government, at least to anyone who actually tries to pay attention, is that there is never any real debate; the statements made by the opposing parties all seem to be made without any attempt to understand the issues at hand.

If you have an interest in being informed about the US Government debt situation and the alternate courses of action for resolving it, there is now an alternative to listening to the non-debate among know-nothing politicians.

Wharton is making available a set of 15 essays by scholars in law and economics covering US debt history, the applicable laws, and the potential consequences of alternative strategies for resolution.

This 250-page book is available free here.

These essays come from a conference hosted by Wharton in 2012.  Here is the introduction to the book describing the conference and the resulting essays.

The opening panel explored the functions of U.S. Treasury instruments and the Treasury market in the United States and beyond.  U.S. Treasuries play a unique role in the national and global economy. Richard Sylla put their current role in historical perspective,  observing that U.S. government debt obligations from their birth in  the revolutionary days have been much more than another means to finance the government: they cemented the political union, served  as a currency, backed the banking system, and helped attract foreign  capital.  William Bratton,  Richard Herring, and  Zoltan Pozsar then discussed the Treasuries’ role in the modern financial system,  including corporate finance, banking and shadow banking in the  United States and around the globe. While other reserve currencies and assets may eventually displace the U.S. dollar and the U.S. Treasuries, none are readily available at this time, and some that have  served as substitutes in the past (notably agency securities) ultimately rely on the credit of the United States.

The second panel considered constitutional, statutory, and contractual dimensions of U.S. government debt.  Michael McConnell opened with an examination of the U.S. Constitution as a fiscal  framework based on legislative control of taxing, spending, and borrowing. Howell Jackson then returned to the statutory debt ceiling  controversy, lifting the curtain on a plausible sequence of events had  the President and the Congress failed to compromise as they did at  the eleventh hour in the summer of 2011. In addition to Jackson’s  essay, this volume contains a policy brief by Jeremy Kreisberg and  Kelley O’Mara detailing the Executive’s options for honoring U.S.  government payment obligations with the debt ceiling unchanged. Richard Squire  concluded with thoughts on the market in credit  default swaps on U.S. government debt.

Peter Fisher gave the luncheon keynote, where he brought his perspective as former U.S. government debt manager, central bank official, and market participant to bear on the themes of the conference. Echoing the first panel, his remarks urged closer attention to the  sources of demand for U.S. Treasuries both at home and abroad. He  surveyed the experience of Britain in the 19th century and Japan in  the late 20th to identify some of the demand factors that help account for the ability of countries with very high debt burdens to avoid default.  The focus on demand in the U.S. banking, shadow banking,  and global financial systems suggests cautious optimism about the Treasuries’ prospects going forward.

The first afternoon panel revisited the questions of U.S. ability and willingness to pay, which has been debated heavily in policy and academic circles. A sovereign’s ability to pay is a function of its ability to generate revenues, which depends, among other things, on  the economy’s capacity to grow and on the government’s political  capacity to collect taxes. The line between ability and willingness to pay can be notoriously fuzzy. Deborah Lucas examined the structural sources and magnitudes of U.S. fiscal imbalances and the policy  changes needed to avoid them. While conceivable, default remains unlikely; however, risks from rising healthcare costs, slow productivity growth, a spike in interest rates, and contingent liabilities can tip  the outcome.  James Hines observed that while the United States imposes a smaller tax burden than other large wealthy economies,  its greatest unused tax capacity is in expenditure taxation that would  alter the current distributional bargain.  James Kwak put the U.S. fiscal challenge in historical and political perspectives, analyzing the  structural and policy steps needed to address the debt problem, and  the political capacity of the U.S. government to take these steps.

James Millstein suggested that asset sales—such as sales of mineral  rights—merit serious consideration as part of a package of debt reduction measures. His contribution drew on the history of sovereign asset sales, adapting it to the current needs of the United States.

The conference culminated in a panel discussion of a “thought experiment” laid out in Charles Mooney’s contribution: what if the  United States decided that it was in its interest to restructure U.S. Treasury debt? How might it go about it? What legal and policy options would the U.S. government have, what are the pros, cons, and  likely consequences of taking any of these steps?  His paper considers constitutional, statutory, market and transactional challenges to default and restructuring, and presents three options for a hypothetical  operation. At the conference, he laid out the strategy for across-the-board and selective exchanges of outstanding U.S. Treasuries for new  obligations, including the possible issuance of “Prosperity Shares,”  non-debt securities giving creditors a stake in future growth.  Donald Bernstein and Steven Schwarcz offered comments on the paper.  Bernstein was skeptical of recourse to the bankruptcy powers, and  pointed to the many hard policy challenges, including loss distribution and policy reform, that would remain unsolved even with  recourse to bankruptcy. Schwarcz noted further possibilities for restructuring, and obstacles to selective default. In addition, his contribution explored the problem of government financing through special purpose entities, and urged oversight to improve accountability.

Throughout the day, conference participants from different academic  disciplines and backgrounds engaged in lively discussion. We did not  strive for a policy consensus, nor did we achieve one. Our purpose in the volume, as it was in the conference, is to start a conversation  long overdue. We hope it will continue. If the conference convinced us of one thing, it is that the stakes in the future of U.S. government debt are too high to confine serious analysis and informed debate to legislative back-rooms and disciplinary silos.

Two Fundamental Flaws of Solvency II

February 25, 2013

Many people in Europe have worked very hard for many years, attempting to perfect solvency oversight for insurers. The concepts underlying Solvency II are the best thinking about risk regulation that the world has ever seen.

However, there are two fundamental flaws driving the problems that Solvency II is having in reaching actual implementation.

The first flaw is the targeted level of required capital.  When Solvency II was first imagined, banks seemed to be well run and well regulated, and under that system banks were reporting returns in the high 20s.  Insurer returns rarely hit the perennial 15% target.  Banks tended to operate right at their level of regulatory required capital.  Insurers looked at that and suggested that the capital requirement for Solvency II should be set at a level at which the largest insurers would be comfortable operating.  There was also a big push for a single set of books.  So with a solvency requirement at the level where a rational insurer would want to operate, there would be not only one set of books but also only one important capital target.  (For discussion of the flaw in the idea of “one number” management, see Risk and Light.)

But the problem with setting the required capital at that high a level is that it leaves no room for error or for disagreement.  (Disagreement is absolutely inevitable.  See Plural Rationalities.)  The capital calculation needs to be just right.  A capital requirement set at, say, 2/3 of the level a prudent company would want to operate at would leave room for errors and disagreements.  If for some risks the requirements were even 50% higher than what some would feel is the correct number, companies could in fact live with that.  It would become known in the marketplace that companies that write that risk are likely to have tighter solvency margins, and everyone would be able to go about their business.

But with a target set so very high, if the requirement for some risk is too high, there will be firms forced to hold more capital than makes sense, in their minds, for their risks.  That completely destroys the idea of management relying upon a model that is calibrated to what they believe is the wrong result.  It also encourages firms to find ways around the rules so that they hold only what they believe is the right level of capital.  What we are seeing now is the inevitable difference of opinion about the riskiness of some activities.  Those differences of opinion mean the difference between being in business and not for companies concentrated in those activities, or between staying in those businesses and not for more diversified groups.  If the Solvency II target were set at, for instance, a 1-in-100 loss level, there might be room for compromise that would allow those activities to continue for firms willing to run a little tight on solvency margin.
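As a purely illustrative sketch of the room a lower target could create, here is a comparison of required capital at the actual Solvency II calibration (a 1-in-200, or 99.5%, one-year loss) against a 1-in-100 (99%) target, assuming a hypothetical lognormal loss distribution whose parameters are invented, not calibrated to any real book of business:

    import math
    from statistics import NormalDist

    # Hypothetical annual loss distribution: lognormal with invented parameters.
    mu, sigma = 0.0, 1.0
    z = NormalDist()  # standard normal, used for the lognormal quantile

    def capital_at(p: float) -> float:
        """Loss quantile (VaR) at confidence level p for the assumed distribution."""
        return math.exp(mu + sigma * z.inv_cdf(p))

    scr_995 = capital_at(0.995)  # 1-in-200: the actual Solvency II calibration
    scr_990 = capital_at(0.990)  # 1-in-100: the lower target suggested above

    print(f"1-in-200 capital: {scr_995:.2f}")
    print(f"1-in-100 capital: {scr_990:.2f} ({scr_990 / scr_995:.0%} of the 1-in-200 level)")

With this (arbitrary) fat-tailed assumption, the 1-in-100 requirement comes out at roughly three quarters of the 1-in-200 requirement.  The exact gap depends entirely on the tail of the distribution assumed, but that gap is precisely the room for error and disagreement.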

==========================================

The second flaw, which surprisingly has only been raised very recently, is the total lack of any cost-benefit criteria for the process.  If further refinement of Solvency II could prevent one insolvency over a 10-year period, yet would cost other insurers $100 million in expenses and $1 billion in additional capital, is that a good trade-off?  This is exactly the sort of thinking that Solvency II REQUIRES of insurers.  EIOPA ought to have a complex model of the insurance industry in Europe so that it can show the risk-reward relationship of all of its rules.  What?  You say that would be terribly difficult and complicated and would not provide reliable guidance?  EIOPA should live in the same world that it requires insurers to live in.

Without even a simple-minded cost-benefit requirement, anything can make it into Solvency II.  The exposure process allows questions to be raised about cost/benefit, but in many cases that has not happened.  Besides, with no stated cost-benefit criteria, the question is ultimately settled by judgment.  So now we have insurers saying that they will withdraw from parts of the Solvency II process because those parts are too expensive.  Those insurers have not put forward any objective criteria under which they reached that conclusion either.
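A back-of-envelope version of that trade-off, using the expense and capital figures above plus assumed values for the cost of capital and for the policyholder loss avoided (both invented for illustration), might look like this:

    # Back-of-envelope cost-benefit test for a hypothetical Solvency II refinement.
    # The expense and capital figures come from the text above; the cost of
    # capital and the loss avoided are assumptions made only for illustration.
    expenses = 100e6        # one-time industry compliance expense
    extra_capital = 1e9     # additional capital held across the industry
    cost_of_capital = 0.06  # assumed annual cost of holding that capital
    years = 10              # horizon over which one insolvency is prevented
    loss_avoided = 500e6    # assumed policyholder loss in the one insolvency

    total_cost = expenses + cost_of_capital * extra_capital * years

    print(f"Cost over {years} years: {total_cost / 1e6:,.0f}M")
    print(f"Benefit (loss avoided): {loss_avoided / 1e6:,.0f}M")
    print("Refinement passes" if loss_avoided > total_cost else "Refinement fails")

Under these made-up numbers the refinement costs $700 million to avoid a $500 million loss, and fails; with different assumptions it could pass.  The point is not the answer, but that no such test, however crude, is anywhere in the process.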

It seems unlikely at this point that either of these flaws of Solvency II will be fixed.  A lower standard would seem to too many to be a retreat, a dilution of the power of Solvency II.  Imposing a risk-reward or cost-benefit rule would result in crazy inconsistencies between decisions made after the rule and those made before it, or else in a very long wait while all of the parts of Solvency II are examined under such a rule.

So it remains to be seen whether those faults will in the end be fatal.  Solvency II could be tied up in arguments until it is abandoned; it could limp into practice with very mixed support and then be pulled after a few years of unanticipated implementation issues; or it could soar through a long run of effective prudential oversight, as its designers originally hoped.

I am sure that someone in London can quote you odds.

R E A C T

February 21, 2013

In 1986, two Canadian professors of management, MacCrimmon and Wehrung,  published a book titled Taking Risks. That book details the results of a survey that they did with over 600 business managers about their approach to risk.  Included in the book is their view of risk and risk management.  The risk management process is described with the REACT model:

  1. Recognize Risks
  2. Evaluate Risks
  3. Adjust Risks
  4. Choose Actions
  5. Track Outcomes

Their survey found that managers felt that they should be risk takers.  So all of their answers were probably shaded by an effort to fulfill that expectation.  They also found that over 90% of managers were not satisfied to simply accept risks in the gambling model that game theory was based upon.  Almost all managers sought to adjust the risks that they might be exposed to.

Risk is seen by the authors to have three primary characteristics:

A.  Lack of Control

B.  Lack of Information

C.  Lack of Time

The adjustments to risk, step 3 above, were defined as efforts to increase control, increase information, and/or increase time.

It is dangerous to ignore the idea of conscious and systematic risk management.  It is almost as dangerous to become complacent about your risk management because you have developed a state of the art systematic risk management system.

Riskviews finds that ERM systems are usually like a deck of cards.  The different ERM systems all use essentially the same deck, but they shuffle the cards into different piles and construct new names for the piles.  In the end, there is nothing new or even different, just a rearrangement.

The REACT model is just a reshuffling of the same elements.  However, this was published in 1986, so they were not copying off the same deck that ERM consultants have been using for the past 15 years.  What this shows is that the ERM deck of practices is older than ERM.

And the suggestion that risk comes from Lack of Control, Information, and/or Time is something to think about.  What their study goes on to show is that, for the most part, when managers are faced with a problem situation, they usually seek to increase their information and their control, and to gain more time.

What about you?  Do you seek time,  information, and control?  Of course you do.


Marking Risks to Market

February 19, 2013

If financial statements are supposed to be marked to market, why aren't uninsured risks marked to market?

Under all accounting systems, a business that buys no fire insurance will show a better result than a similar company that is buying insurance, except in the year when it has a claim.  The market price for its risk is an insurance premium.  But for some reason, risk has never been treated in this way.

If risk were marked to market, then a firm that buys no insurance, or does not hedge a risk, would not simply report a gain; it would need to set aside an amount at least equal to the insurance premium.  That amount could be put into a fund and released when the firm has an event that would have generated an insurance claim.

Of course, to be mathematically correct, they would need to make two adjustments to the insurance premium: one to remove the profit margin/risk charge in the premium, and another to reflect the fact that they are in effect creating an insurance pool with one participant, which appropriately replaces that risk charge.

An insurance pool with one participant?  That doesn't make any sense.  But that is what a business that is not buying insurance is doing.  What then would be the correct premium, not loaded for profits, for an insurance pool of one?  The pool would have to bear the cost of holding capital (or a contingent capital facility) for the entire maximum claim amount, to the extent that amount exceeds the reserves (or the amount in the pool).

So if the cost of capital is 3%, and the claims rate is 1%, then the mark-to-market cost would be about 400% of expected claims at first, declining as the fund builds up.

Pretty expensive.  But that would make the financial statement make sense on a mark-to-market basis for risk.
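A minimal sketch of that arithmetic, assuming (as above) a 3% cost of capital, a 1% annual claims rate, a maximum claim equal to the full insured value, and no claims in the years shown:

    # Illustrative "pool of one" mark-to-market cost of self-insuring.
    # Figures: 3% cost of capital, 1% expected annual claims rate, and a
    # maximum possible claim equal to the full insured value (all assumed).
    insured_value = 1_000_000.0
    cost_of_capital = 0.03
    claims_rate = 0.01

    expected_claims = claims_rate * insured_value  # 10,000 per year
    fund = 0.0

    for year in range(1, 6):
        # Capital must cover whatever part of the maximum claim the fund does not.
        capital_needed = max(insured_value - fund, 0.0)
        mtm_cost = expected_claims + cost_of_capital * capital_needed
        print(f"Year {year}: cost = {mtm_cost:,.0f} "
              f"({mtm_cost / expected_claims:.0%} of expected claims)")
        fund += expected_claims  # claim-free year: the accrual builds the fund

The 400% figure falls out directly: expected claims of 1% plus a 3% capital charge on the full exposure is 4% of insured value, four times the 1% expected claims, and each claim-free year the fund grows and the capital charge shrinks a little.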
This approach could be applied to unhedged risks as well.  Mark-to-market accounting is actually much too lenient on hedgeable risks that are left unhedged.  MTM accounting in effect allows those companies to reflect just the cost of hedging even when they are not hedging.  In fact, when they do not hedge, they are self-insuring and need to reflect a much higher cost, as described above.

Not managing risk is expensive, particularly for investors.  Investors deserve appropriate information on risk.  The longstanding accounting paradigm that ignores risk gives investors exactly the wrong information and needs to be corrected immediately.

One of the main reasons that risk management is not already completely embedded in all firms is that they can get away with this scam on their investors, supported by their accounting statements.

Risk needs to be accounted for properly, especially when it is not managed.

Spreadsheets are not the problem

February 18, 2013

The media have latched on to a story.

Microsoft’s Excel Might Be The Most Dangerous Software On The Planet

The culprit in the 2012 JP Morgan trading loss has been exposed.  Spreadsheets are to blame!

The only problem with this answer is that it is simply incorrect.  It blames the bad result on the last step in the process, like announcers for a football game who blame the last play for the outcome.  It really wasn't missing that one last-ditch scoring effort that made the difference.  It was how the two teams played the whole game.

And for situations like the JP Morgan trading loss, the spreadsheet was one of the last steps in the process.

But the fundamental problem was that they were allowing someone in the bank to take very large risks that no one could understand directly.  Risks for which no one had a rule of thumb to tell them that they were nearing a situation where, on any bad day, they could lose billions.

That is pretty fundamental to a risk-taking business: to understand your risks.  And if you have no idea whatsoever how much risk you are taking without running the position through a model, then you are in big trouble.

That does not mean that models shouldn't be used to evaluate risk.  The problem is the need to use a model in the heat of battle, when there is no time to check for the kinds of mistakes that tripped up JP Morgan.  The models should be used in advance of going to market, and rules of thumb (or heuristics, for those who like academic labels) need to be developed.
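As a hypothetical sketch of what that could look like in practice, an overnight run of the full model might be distilled into a simple notional-based limit that can be checked in seconds, with no model in the loop (every number and name here is invented for illustration):

    # Hypothetical rule of thumb distilled from offline model runs.
    # Suppose the full model, run overnight, showed that a 10bp adverse move
    # costs this book roughly 0.8% of net notional (an invented calibration).
    LOSS_PER_10BP = 0.008       # fraction of net notional lost per 10bp move
    MAX_TOLERABLE_LOSS = 500e6  # the most the desk is permitted to lose

    def notional_limit() -> float:
        """Largest net notional the rule of thumb allows."""
        return MAX_TOLERABLE_LOSS / LOSS_PER_10BP

    def within_limit(net_notional: float) -> bool:
        """Fast check usable in the heat of battle, without a model run."""
        return abs(net_notional) <= notional_limit()

    print(f"Rule-of-thumb limit: {notional_limit() / 1e9:.1f}bn net notional")
    print(within_limit(40e9))  # 62.5bn limit, so a 40bn position passes

The particular numbers do not matter; what matters is that the model runs before trading, and the heuristic it produces can be applied, and sanity-checked, without touching the model or its spreadsheets again.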

The model should be a tool for building understanding of the business, not as a substitute for understanding the business.

Humans have developed very powerful skills to work with heuristics over tens of thousands of years.  Models should feed into that capability, not be used to totally override it.

Chances are that the traders at JP Morgan did have heuristics for the risk and knew that they were arbitraging their own risk management process.  They may not have known why their gut told them that there was more risk than the model showed, but they are likely to have known that there was more risk there.

The risk managers are the ones who most need to have those heuristics.  And management needs to set down clear rules, for the situations where the risk models are later found to be in error, that protect the bank rather than the traders' bonuses.

No, spreadsheets are not the problem.

The problem is the idea that you can be in a business that neither top management nor risk management has any “feel” for.