The dark side (on extremes and what nobody knows)

Written by Ruolin Wang, PhD Candidate at QUT

Two years ago, the world witnessed the disappearance of flight MH370. The aircraft has never been found, and it is almost certainly gone, together with the 239 people on board. Every hour, thousands of airplanes fly toward their destinations, yet very few of them meet the same fate as MH370. The prevailing but still unconfirmed explanation for the disappearance is that the pilot, Zaharie, hijacked the aircraft. Is that a risk of air travel? It is the control tower’s task to assess foreseeable risks and direct aircraft around bad weather and heavy traffic to ensure a safe flight. Aircraft hijacking, however, is something I am inclined to describe with the word uncertainty. Why? Put simply, risk is different from uncertainty.

Back in the 1920s, the great economist Frank Knight described risk as randomness with known probabilities, while uncertainty is when you do not even know the possibilities. This distinction applies to financial risk management as well. For a long time, traders and risk officers firmly believed that risk factors can be identified and that financial risk is calculable. As a result, they became obsessed with building sophisticated risk models that rely on backward-looking time-series data, as if they had complete knowledge of the risks they face. But if that were the case, why could they not prevent the collapse of Barings Bank, the outbreak of the 2008 financial crisis, or the “London Whale” trading loss? Evidently, all those numbers, tests and frameworks produced by risk managers represented only the tip of the iceberg, and we barely know anything about what lies beneath.

Value-at-Risk (VaR), the most widely adopted risk management model in finance, was introduced by JP Morgan shortly after the 1987 stock market crash. The model rests on three main parameters: a confidence level, a time horizon and a measure of volatility (the standard deviation of returns). Take the regulatory VaR required by Basel III: the model uses a 99% confidence level over a 10-day horizon for the trading book. Then here comes the problem: what about the other 1%? Is it simply ignored? The so-called experts may tell you this: events outside the 99% level occur so rarely that they may never happen, therefore the model is adequate 99% of the time. Well, that reminds me of an ancient Chinese proverb: “a single spark can start a prairie fire”. Events that occur too rarely to attract attention are exactly the ones we should be best prepared for. I would rather not mention 9/11, but it is surely a vivid reminder that the events hardest to foresee are the ones that actually threaten our survival.
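To make those three parameters concrete, here is a minimal sketch of how a parametric (variance-covariance) VaR figure is produced. The portfolio value and volatility are made-up numbers for illustration, and the calculation assumes normally distributed returns and the square-root-of-time scaling rule.

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(portfolio_value, daily_sigma, confidence=0.99, horizon_days=10):
    """Parametric (variance-covariance) VaR under a normality assumption.

    Scales daily volatility to the horizon with the square-root-of-time
    rule and multiplies by the one-sided z-score for the confidence level.
    """
    z = NormalDist().inv_cdf(confidence)  # roughly 2.33 at 99%
    return portfolio_value * daily_sigma * sqrt(horizon_days) * z

# Hypothetical trading book: $100m with 1% daily volatility,
# measured at Basel-style 99% confidence over a 10-day horizon.
var_10d = parametric_var(100_000_000, 0.01)
print(f"10-day 99% VaR: ${var_10d:,.0f}")  # about $7.36m
```

In other words, the bank is claiming it should not lose more than this amount over ten days, 99% of the time. The remaining 1% is precisely the blind spot.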

Apart from that, VaR as a standard risk measurement tool typically rests on the assumption that current changes in market values are consistent with historical changes. In reality, this is rarely the case. The famous value investor Seth Klarman once expressed a similar opinion: “The reality is that past securities price volatility do not reliably predict future investment performance and therefore it is a poor measure of risk”. Indeed, the distribution of past occurrences may be irrelevant or even misleading if the future is statistically different from the past; in other words, the fundamental assumption is too academic to hold in the real world. And the limitations of VaR go far beyond this. David Einhorn, the president of Greenlight Capital, once compared VaR to “an airbag that works all the time, except when you have a car accident”. Biting, isn’t it? Clearly, a whole range of unforeseeable factors lies beyond the detection zone of the VaR framework, and what remains in the shadows is exactly what matters for future improvements in risk management. So, do you still believe the illusion created by risk managers who are overly confident in such an imperfect model?
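The point can be illustrated with a toy simulation in which every number is invented: generate returns that are calm on most days but occasionally far more volatile, fit a normal 99% one-day VaR to the whole history, and then count how often losses actually breach it.

```python
import random
from statistics import NormalDist, stdev

random.seed(0)  # reproducible illustration

# Fabricated daily returns: calm 1%-volatility days, with an 8% chance
# of a crisis day at 10% volatility -- a crude stand-in for fat tails.
returns = [random.gauss(0, 0.10) if random.random() < 0.08 else random.gauss(0, 0.01)
           for _ in range(10_000)]

# A normal-distribution 99% one-day VaR fitted to the full history...
var_99 = NormalDist().inv_cdf(0.99) * stdev(returns)

# ...and the number of days on which losses actually exceeded it.
breaches = sum(r < -var_99 for r in returns)
print(f"Breaches: {breaches} vs {0.01 * len(returns):.0f} expected under normality")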

So far, it is worth emphasizing that none of this is meant to deny the role VaR has played over the past 20 years. Nevertheless, to ensure a better degree of protection, the existing risk assessment framework should be extended in the light of uncertainty. Events with extremely low probability may not be eye-catching, but they are never a sideshow. Although academic researchers have noted that risk can be accounted for more comprehensively than uncertainty, risk assessment still needs to be supplemented by analyses that characterize the political, social, physical, cultural and economic harms to which individuals, firms and societies are susceptible.

Now, let’s step back a little. Models, processes and trades are merely the symptoms and results of human decisions. Rather than concentrating entirely on scientific models, academic theories and empirical studies, we can no longer ignore human factors when managing financial risk. Human factors include an individual’s judgement, insight, previous experience and so on. The uncertainty lies in the fact that these cannot be extrapolated straightforwardly from input data using well-defined rules to generate unambiguous outputs. The key problem is that we do not know beforehand how others are going to react, even when they are standing right in front of us. Given all this, measuring and managing human behaviour is a huge challenge, but one worth the effort.

Abundant research has established that risk managers tend to be excessively optimistic, or rather overconfident, about the prospects of the companies they follow. It is evident that they are much more likely to recommend a purchase than a sale. One explanation for this phenomenon is that risk managers are willing to assist their corporate finance arms by providing optimistic forecasts, thereby contributing to the firm’s revenues. As such, risk managers are more inclined to underestimate the level of risk and eventually go down the path of blindness. In 2012, General Motors (GM) suffered losses of about $300 million, a failure that materialized from a seemingly low-probability event. For a while, social media criticized GM’s risk management for underestimating risk. In response, GM’s Chief Economist, Mustafa Mohatarem, replied: “There is a tendency to underestimate the risk…it is relatively easy to say, ‘well, it is a low probability risk, let’s go on.’” The lessons from GM’s case demonstrate once again that the attitudes and perceptions of risk managers can distort judgement. Representativeness often leads to a tendency to ignore abnormal market conditions and to put too much faith in established risk measurement tools. Hence, it is time for individual biases to appear on the risk manager’s radar screen.

In closing, I will share an important piece of advice from Warren Buffett, the world’s most successful investor: “Risk comes from not knowing what you are doing”. False perceptions and ignorance of the unknown may ultimately lead to true darkness. So, keep an eye out for the dark side.
