One of the more contentious debates in finance over the last couple of decades has centered on the value of what are known as Value at Risk, or "VaR," models. Indeed, a VaR model is one of the protagonists in the 2011 film "Margin Call" about the collapse of the housing market. VaR models, originally developed at JPMorgan in the early 1990s, attempt to calculate the probability of a worst-case scenario for a certain distribution of risk over a given period of time. In other words, VaR is an attempt to answer a question such as, "What is the worst thing that can happen to me 95% of the time during the next 24 hours? Or 30 days? Or 99 days?" Notice that the question is not what is the worst thing that can happen AT ALL, but what is the worst thing that can happen 95% (or often 99%) of the time, which is a wholly different matter.
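To make the definition above concrete, here is a minimal sketch of the simplest VaR flavor, historical simulation, in Python. The portfolio value and return series are entirely hypothetical (simulated, not drawn from the article); the point is only to show what "the worst loss 95% of the time" means mechanically:

```python
# Illustrative sketch only: historical-simulation VaR on simulated data.
# All figures (portfolio value, return parameters) are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for 1,000 days of observed daily portfolio returns.
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

portfolio_value = 10_000_000  # hypothetical $10M portfolio

# 95% one-day VaR: the loss exceeded on only the worst 5% of days.
var_95 = -np.percentile(daily_returns, 5) * portfolio_value
print(f"95% one-day VaR: ${var_95:,.0f}")
```

The fragility Taleb attacks lives in the first step: the historical window (or the distribution fitted to it) is assumed to describe the future, which is exactly what fails in a tail event.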

Few other subjects in risk finance cause such heated arguments as VaR, and perhaps its most famous critic is the professor and author Nassim Nicholas Taleb. As Joe Nocera wrote in the NY Times back in 2009*:

Taleb says that Wall Street risk models, no matter how mathematically sophisticated, are bogus; indeed, he is the leader of the camp that believes that risk models have done far more harm than good. And the essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure. The ones that seem so far outside the boundary of normal probability that you can’t imagine they could happen in your lifetime — even though, of course, they do happen, more often than you care to realize. Devastating hurricanes happen. Earthquakes happen. And once in a great while, huge financial catastrophes happen. Catastrophes that risk models somehow always manage to miss.

Yet Nocera goes on to note that as much as Taleb dismisses VaR models as useless at best and catastrophic at worst, many others defend their use:

And yet, instead of dismissing VaR as worthless, most of the experts I talked to defended it. The issue, it seemed to me, was less what VaR did and did not do, but how you thought about it. Taleb says that because VaR didn’t measure the 1 percent, it was worse than useless — it was downright harmful. But most of the risk experts said there was a great deal to be said for being able to manage risk 99 percent of the time, however imperfectly, even though it meant you couldn’t account for the last 1 percent.

“If you say that all risk is unknowable,” Gregg Berman [of the VaR consultancy RiskMetrics] said, “you don’t have the basis of any sort of a bet or a trade. You cannot buy and sell anything unless you have some idea of the expectation of how it will move.” In other words, if you spend all your time thinking about black swans, you’ll be so risk averse you’ll never do a trade. Brown put it this way: “NT” — that is how he refers to Nassim Nicholas Taleb — “says that 1 percent will dominate your outcomes. I think the other 99 percent does matter. There are things you can do to control your risk. To not use VaR is to say that I won’t care about the 99 percent, in which case you won’t have a business. That is true even though you know the fate of the firm is going to be determined by some huge event. When you think about disasters, all you can rely on is the disasters of the past. And yet you know that it will be different in the future. How do you plan for that?”

One risk-model critic, Richard Bookstaber, a hedge-fund risk manager and author of “A Demon of Our Own Design,” ranted about VaR for a half-hour over dinner one night. Then he finally said, “If you put a gun to my head and asked me what my firm’s risk was, I would use VaR.” VaR may have been a flawed number, but it was the best number anyone had come up with.

Why is this relevant to SCM? Because the term "SC VaR" has begun to creep into the jargon of SC consultants in various presentations I have come across in the last few months. This appropriation makes sense at one level: VaR sounds like a good concept to someone who is not completely familiar with the many serious critiques that Taleb and others have made against its use. Moreover, the lack of any other way of measuring the impact of a catastrophic failure of a product launch combined with a devastating, months-long disruption in production (and their subsequent impact on share price/shareholder value) makes it tempting to simply recycle the term in the Operations context. However, this is not just lazy but dangerous.

I have been working with the Finance department at the Robert H. Smith School of Business (University of Maryland) this fall on a project to adapt finance metrics such as VaR to supply chain and operations, and this is no easy task. It is quite an effort just to examine the various VaR model types for applicability; to then create versions that would be useful to CFOs, CPOs, and Chief SC Officers is a daunting technical task.

As that effort progresses I will share some of the (non-proprietary) aspects of this work publicly. In the meantime, I would suggest to those who would simply lift VaR from the pages of finance magazines and claim that, in its current form, the concept should be adopted by CFOs and Operations executives, that they re-read Nocera's article.

As he notes at the end:

“When I teach it,” Christopher Donohue, the managing director of the research group at the Global Association of Risk Professionals, said, “I immediately go into the shortcomings. You can’t calculate a VaR number and think you know everything you need. On a day-to-day basis I don’t care so much that the VaR is 42. I care about where it was yesterday and where it is going tomorrow. What direction is the risk going?” Then he added, “That is probably another danger: because we put a dollar number to it, they attach a meaning to it.”

By “they,” Donohue meant everyone who wasn’t a risk manager or a risk expert. There were the investors who saw the VaR numbers in the annual reports but didn’t pay them the least bit of attention. There were the regulators who slept soundly in the knowledge that, thanks to VaR, they had the whole risk thing under control. There were the boards who heard a VaR number once or twice a year and thought it sounded good…There was everyone, really, who, over time, forgot that the VaR number was only meant to describe what happened 99 percent of the time. That $50 million wasn’t just the most you could lose 99 percent of the time. It was the least you could lose 1 percent of the time. In the bubble, with easy profits being made and risk having been transformed into mathematical conceit, the real meaning of risk had been forgotten. Instead of scrutinizing VaR for signs of impending trouble, they took comfort in a number and doubled down, putting more money at risk in the expectation of bigger gains. “It has to do with the human condition,” said one former risk manager. “People like to have one number they can believe in.”
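Nocera's closing point, that a 99% VaR of $50 million is the *least* you lose on the worst 1% of days, not the most you can lose, can be illustrated with a small numerical sketch. The figures below are hypothetical and use fat-tailed (Student's t) simulated returns, since real price data are not normally distributed:

```python
# Hypothetical illustration: VaR is a floor on tail losses, not a ceiling.
import numpy as np

rng = np.random.default_rng(7)
# Fat-tailed daily returns (Student's t, 3 degrees of freedom), roughly
# scaled to ~1% daily volatility. All numbers are made up for illustration.
returns = rng.standard_t(df=3, size=100_000) * 0.006
portfolio_value = 10_000_000

losses = -returns * portfolio_value
var_99 = np.percentile(losses, 99)      # loss exceeded on 1% of days
tail_losses = losses[losses > var_99]   # the days VaR says nothing about
expected_shortfall = tail_losses.mean() # average loss on those days

print(f"99% VaR:            ${var_99:,.0f}")
print(f"Avg loss beyond it: ${expected_shortfall:,.0f}")
```

By construction the average loss in the worst 1% of days exceeds the VaR number itself, which is why measures like expected shortfall exist, and why treating the VaR figure as a worst case is exactly the mistake Nocera describes.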

*http://www.nytimes.com/2009/01/04/magazine/04risk-t.html?pagewanted=all&_r=1


Posted by Carlos Alvarenga

Carlos Alvarenga is the Executive Director of World 50 ThinkLabs and an Adjunct Professor at the University of Maryland's Smith School of Business.

3 Comments

  1. All of this assumes a degree of randomness, or a lack of change, in the factors that cause catastrophes. Neither assumption is valid. Most catastrophic events aren’t random at all. Busts follow booms that result from credit expansion. Honestly, the Fed turns on the gas in a smoker’s house and everyone wonders why the model didn’t predict the explosion.


  2. Lokesh Chowdary January 16, 2012 at 09:25

    The main issue is that even if one finds a way to capture that 1% as well, the final figure would be so big that it carries no meaning. Capturing that 1% far out in the tail would undermine the basic effort to determine something close to accurate.
    And maybe we should tell all those who say VaR is useless: “VaR is useless for machines, not for someone involved in the business, who will always have the responsibility and acumen to make decisions in the case of major aberrations.”


  3. The problem with VaR, as Nassim correctly points out, is essentially kurtosis.

    It is really a philosophical question: how much data is needed to calculate a “meaningful” mean and shape a distribution curve to assess probabilities? 5 years? 50? 500? And this is because price data is shaped by human behaviour, not some mechanistic process.
    As a result, the core problem with all distribution-based measures is the assumption that the variance is stationary.

    In the long run, Nassim’s strategy of betting against those who bet against the risk of the outlier is superior, especially for those placing those bets for a duration greater than 5 minutes.

