Measuring market risk using historical data: Bogus or not?

by Mei Lin Ker

Risk management has become one of the hottest topics in today's turbulent economic climate. We are still living in the shadow cast by the 2008 U.S. subprime crisis, which shook the U.S. economy and sent powerful aftershocks reverberating through the rest of the world. Even now, the U.S. economy is taking only baby steps in recovering from the devastation of the Global Financial Crisis (GFC). As big firms collapsed and mega bailouts ensued, people asked the fundamental question: why did this happen? Shouldn't companies using highly sophisticated risk management tools have been able to foresee the impending downfall, and shouldn't their risk managers, armed with that information, have been able to take preemptive measures? Is something wrong with these tools, or with the risk managers?

Currently, the most widely used method of measuring and reporting market risk is Value-at-Risk (VaR). For an investment portfolio, VaR(99%, 1-day) is the minimum (not the maximum, as often stated!) dollar loss expected to be suffered on 1 out of every 100 days. VaR has gained widespread adoption for its simplicity and the intuitive support it offers to management decisions, and it has been embraced as the standard market risk calculation for setting minimum regulatory capital under Basel II. However, risk managers should be wary of the shortcomings that can mar its effectiveness. Historical data is commonly used as the main input for risk calculations, including VaR. Yet the GFC showed that none of the companies using this method were able to forestall the economic catastrophe. Does using historical data to measure market risk prove effective, then? I think it is a bogus effort. Perhaps it is high time to reexamine the current market risk measurement techniques and ponder further.
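
To make the definition concrete, here is a minimal sketch of historical-simulation VaR, the method discussed in this article: take past daily returns, convert them to dollar losses, and read off the empirical 99% loss quantile. The portfolio value and simulated return series are illustrative assumptions, not any firm's actual figures.

```python
import numpy as np

def historical_var(returns, confidence=0.99, portfolio_value=1_000_000):
    """Historical-simulation VaR: the dollar loss threshold expected to be
    exceeded on (1 - confidence) of the observed days."""
    # Convert daily returns to dollar losses, then take the empirical
    # quantile of the loss distribution.
    losses = -np.asarray(returns, dtype=float) * portfolio_value
    return float(np.quantile(losses, confidence))

# Hypothetical: 500 days of daily returns drawn from a calm normal model.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=500)
var_99 = historical_var(returns)
print(f"VaR(99%, 1-day): ${var_99:,.0f}")
```

Note that the number this produces is only as informative as the window of history fed into it, which is precisely the weakness the rest of this article examines.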

Does relying on the past help in predicting the future? This is highly questionable. Even with humans, the fact that an athlete wins a gold medal today does not mean she will continue to do so a few years down the road; there are simply too many extraneous factors at play for us to infer that. Similarly, historical data may not be representative of the future and so cannot be deemed a good predictor of what is to come. Anyone who employs risk models that draw on historical data is effectively assuming that the future will follow the past. If the historical data comes from a relatively calm period, it has no way of predicting adverse occurrences in the future, simply because the data contains no adverse occurrences to begin with.

Nassim Taleb's famous exhortation about Black Swans makes a fine point here. Black Swan events are rare, but their catastrophic impact on corporate survival when they do occur warrants particular attention. This is what happened in the 1998 bailout of Long-Term Capital Management (LTCM). The hedge fund was highly successful, enjoying a string of exceptional returns with two Nobel laureates at its helm. Its fortunes reversed unexpectedly when it faced sudden illiquidity brought about by the twin impact of the financial crises in Asia and Russia. It failed to foresee these Black Swan events, and yes, it used VaR. Financial return distributions are not Gaussian, as is often assumed, but fat-tailed, as Taleb has argued, and this may well render VaR figures useless. Clearly, VaR does not answer the most important questions (what is the worst-case scenario, and when will it occur?) that companies need answered to take precautions against catastrophic events and ensure continued financial stability.
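
The fat-tail point can be shown numerically. The sketch below (all parameters are illustrative assumptions) simulates returns from a fat-tailed Student-t distribution scaled to the same volatility as a Gaussian model, then compares the true 99% loss quantile with the VaR a Gaussian model would report.

```python
import numpy as np
from statistics import NormalDist

# Illustrative assumptions: daily volatility of 1%, and "true" returns
# following a Student-t with 3 degrees of freedom (very fat tails).
rng = np.random.default_rng(7)
sigma = 0.01
df = 3
t_draws = rng.standard_t(df, size=100_000)
# Rescale the t draws so their standard deviation matches sigma.
t_returns = t_draws * sigma / np.sqrt(df / (df - 2))

# Parametric VaR assuming a Gaussian with the same volatility...
gaussian_var = -NormalDist(0, sigma).inv_cdf(0.01)
# ...versus the empirical 99% loss quantile of the fat-tailed returns.
empirical_var = float(np.quantile(-t_returns, 0.99))

print(f"Gaussian VaR(99%):   {gaussian_var:.4f}")
print(f"Fat-tailed VaR(99%): {empirical_var:.4f}")
```

Even with identical volatility, the fat-tailed loss quantile comes out noticeably larger than the Gaussian estimate, which is exactly how a Gaussian VaR model understates tail risk.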

Even if the historical data does include adverse occurrences, it is still too simplistic to assume it can predict the future with certainty. Economist Jon Danielsson puts forth the concept of endogenous risk: risk from shocks generated and amplified within the system itself. It arises when individuals react to their environment and, in doing so, their actions change that environment. In financial markets, endogenous risk appears when participants react to an adverse price movement by selling their stocks; the selling frenzy in turn drives prices even lower. As VaR is the common standard for market risk, almost all companies use it. If the VaR statistics indicate a sell, all companies are predisposed to make the same decision, creating endogenous risk that pulls the market down even further. Historical data alone cannot predict the occurrence and extent of the endogenous risks that will be inherent in future markets.
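
The feedback loop can be caricatured in a few lines. This is a deliberately crude toy model, with every parameter (price impact, VaR limit, volatility dynamics) invented for illustration: traders who share a VaR limit all sell when a shock pushes estimated VaR over that limit, their selling moves the price down and raises volatility, which raises VaR again.

```python
def simulate_feedback(initial_shock, price_impact=0.5, limit=0.025,
                      sell_fraction=0.1, steps=20):
    """Toy endogenous-risk loop: shared VaR limits turn one shock into
    a self-reinforcing sell-off. All parameters are illustrative."""
    price = 100.0
    vol = 0.01 + abs(initial_shock)    # a shock raises estimated volatility
    path = [price]
    for _ in range(steps):
        var = 2.33 * vol               # simple parametric VaR proxy
        if var > limit:                # limit breached: everyone sells
            price *= 1 - price_impact * sell_fraction  # selling moves price
            vol += 0.2 * price_impact * sell_fraction  # realised vol rises
        else:
            vol *= 0.95                # calm market: volatility decays
        path.append(price)
    return path

calm = simulate_feedback(initial_shock=0.0)       # price never moves
stressed = simulate_feedback(initial_shock=0.02)  # sell-off feeds on itself
```

In the calm run the limit is never breached and nothing happens; in the stressed run a single shock triggers round after round of selling, even though nothing outside the system changed. That amplification is what a purely historical, exogenous view of risk misses.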

The destructive 2008 GFC is an example of endogenous risk in action. The collapse of several established banking institutions and the mega bailouts of big firms made banks reluctant to lend. Illiquid companies that needed capital injections to survive were forced to liquidate. This hit individual investors, who urgently needed to dispose of rapidly depreciating shares. Amid the selling hysteria, investors and companies alike were sucked into a downward spiraling economy. The endogeneity of the risk posed by frozen credit markets had a devastating impact. Evidently, the ability of past data to anticipate such an interplay of internal market forces during this turbulent time was next to zero.

Not all people are born equal; the same goes for companies and countries. Emerging economies have little or no historical data to speak of for use in calculating risk. What, then, should they use? Moreover, just because the rest of the world uses historical data to determine VaR does not mean it will work best, or even be suitable, for every company. At the 2007 Dubrovnik Economic Conference, organized annually by the Croatian National Bank, Saša Žiković presented his research on market risk in the eight states that joined the EU in 2004. He reported that the stock indexes of all eight states displayed significant fat tails and asymmetry, so the VaR models commonly employed in developed financial markets are not suited to measuring market risk in these new EU member states. Clearly, using past data to derive VaR figures here will lead to erroneous risk analyses.
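
The fat tails and asymmetry Žiković reports are straightforward to check in any return series before a Gaussian-based VaR model is trusted. Here is a small sketch of two standard diagnostics, sample skewness and excess kurtosis, run on simulated data (the data is invented purely to show the contrast):

```python
import numpy as np

def skewness(x):
    # Nonzero sample skewness suggests asymmetric returns.
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

def excess_kurtosis(x):
    # Positive excess kurtosis suggests fatter tails than a Gaussian.
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

rng = np.random.default_rng(0)
normal_returns = rng.normal(scale=0.01, size=100_000)      # Gaussian benchmark
fat_returns = rng.standard_t(5, size=100_000) * 0.01       # fat-tailed series

normal_k = excess_kurtosis(normal_returns)   # near zero
fat_k = excess_kurtosis(fat_returns)         # clearly positive
```

A market whose index shows large positive excess kurtosis, as the eight new member states did, is telling you up front that a Gaussian VaR calibrated on its history will understate the tails.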

The world is changing at an unprecedented pace, with rapid advances in technology and communication. We could not have imagined today's advancements by simply extrapolating from the past. Likewise, relying solely on historical data to predict the future will always be a futile strategy. The future will always be unknown and will continue to baffle us when we least expect it. For now, we have to acknowledge the limitations of the various risk calculation methodologies and apply more common sense in interpreting risk figures alongside the prevailing environment. VaR's widespread use should not, in itself, lend it credibility. The adage "do not put all your eggs in one basket" also applies to risk measurement methodologies: a variety of alternative techniques, such as scenario analysis, stress testing and Monte Carlo simulation, should be used together to allow comparisons and a more comprehensive outlook for better decision making.
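
As one example of such a complementary technique, here is a minimal Monte Carlo VaR sketch. Unlike historical simulation, it lets the analyst choose the return model, including a fat-tailed one; the Student-t model and every parameter below are illustrative assumptions, not a recommendation.

```python
import numpy as np

def monte_carlo_var(mu, sigma, df, confidence=0.99,
                    n_sims=100_000, portfolio_value=1_000_000, seed=0):
    """Monte Carlo VaR under an assumed Student-t return model.
    The model and parameters are the analyst's choice, which is both
    this method's flexibility and its own source of model risk."""
    rng = np.random.default_rng(seed)
    # Simulate fat-tailed daily returns and take the dollar loss quantile.
    draws = rng.standard_t(df, size=n_sims) * sigma + mu
    losses = -draws * portfolio_value
    return float(np.quantile(losses, confidence))

# Hypothetical parameters: 0.05% daily drift, 1% scale, fat tails (df=4).
mc_var = monte_carlo_var(mu=0.0005, sigma=0.01, df=4)
print(f"Monte Carlo VaR(99%, 1-day): ${mc_var:,.0f}")
```

Comparing a figure like this against historical simulation and stress scenarios is exactly the diversified, multi-method approach argued for above: no single number is trusted on its own.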

Mei Lin Ker is a Master of Applied Finance student at Queensland University of Technology.


One Response to “Measuring market risk using historical data: Bogus or not?”

  1. felix Says:

    Surely, if we can predict crashes with any reasonable degree of certainty shouldn’t we also be able to avert these? Surely, there is some fallacy in Taleb’s argument.
