When all you have is a hammer: Why financial managers are so bad at measuring risk

By Robin (Mac) Stark

When a carpenter sets to work, tools in hand, he is prepared. He is all too familiar with the house he works on, and, thanks to physics and engineering, he can replicate, time after time, the same solid and reliable structure. Alas – this is not so for the contemporary financial manager. Yet there is a belief that a risk analyst can go to her trade, tools in hand, with the same accuracy and reliability as the carpenter. This is her first mistake: a hammer is a reliable tool because of the predictability of physics. The carpenter can make use of a simple, reliable hammer because, with certainty, the next strike of the hammer will behave just as the last. For the risk manager, such an assumption can prove costly, because the game is not the same – finance is not physics, and a hammer just won't do.

The measurement of financial risk, the first and often primary task of the risk officer, is a tainted craft. The task itself is false, as it implies that risk is accurately measurable, when risk is nothing more than perception. The result is that risk officers are on a constant journey to better measure risk, leading to over-engineered risk modelling and, ultimately, over-reliance on these tools for their management.

The problem with risk management, in my opinion, stems from the belief that risk can be measured. This is not to say that there is no place for predictive modelling, but assuming that risk is accurately measurable as a probability is a step down the path of ignorance. Far too much focus is placed on measuring risk to a level of confidence, rather than on understanding the risk, and the impact of that which we cannot measure.

Value-at-Risk (VaR), the standard risk measurement tool for banks, provides an estimate of the maximum loss expected at a certain confidence level, typically 95%. This quantitative measure is typically based on the assumption that returns are log-normally distributed. Significantly, when paired with a simple GARCH methodology, it treats risk as conditional and extrapolates it from past events. The resulting estimate is highly pro-cyclical: if there has been an upwards trend in the not-too-distant past, that trend is expected to continue – clearly a very strong assumption. In essence, risk managers are equipping themselves with a hammer – a very useful tool in a very predictable environment. However, few would agree that financial markets are predictable, least of all financial market crashes. In fact, at a 95% confidence level, VaR provides no understanding of the impact of major events outside the 95% threshold. Surely the events that cause the most damage are the ones a risk manager most wants to be prepared for? It is here, I believe, that the greatest improvement in risk management must be made.
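To make those assumptions concrete, here is a minimal sketch of a parametric VaR calculation in Python. It is not any bank's production model: the normal-returns assumption, the rolling-window volatility estimate and the simulated data are all illustrative stand-ins, but they show how heavily the estimate leans on recent history.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, confidence=0.95, window=250):
    """One-day parametric VaR from the most recent returns.

    Assumes returns are normally distributed and that the recent
    window's volatility predicts tomorrow's - the backward-looking,
    pro-cyclical assumption discussed above.
    """
    recent = returns[-window:]
    mu = recent.mean()            # recent average return
    sigma = recent.std(ddof=1)    # recent volatility
    z = norm.ppf(1 - confidence)  # left-tail z-score, e.g. -1.645 at 95%
    return -(mu + z * sigma)      # loss as a positive fraction of value

# Illustrative only: simulated daily returns, not market data
rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=1000)
print(f"95% one-day VaR: {parametric_var(returns):.2%} of portfolio value")
```

Calibrated on a calm, trending year, the figure comes out comfortingly small – and nothing in the calculation says anything at all about what happens on the other 5% of days.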

The Global Financial Crisis is a recent and painful reminder of the damage that a lack of focus on the other 5% of events can cause. For me, the clearest example of the misunderstanding and under-appreciation of risk is the Credit Default Swap ('CDS') market in the lead-up to the near collapse of American International Group ('AIG') in 2008. The insurance-style derivative, bought and sold by thousands of the world's financial institutions, exposes the buyer of protection to two elements of risk: the default risk of the credit instrument the contract protects, and the counterparty default risk of the seller.

On face value, as a buyer of insurance, the risk seems fairly easy to measure. If I have received a rating of the credit risk of the underlying security, I know (or believe I know) the probability of a default event. I would then attempt to estimate the strength of the correlation between a default event on the security and a default event of the counterparty, since counterparty risk only becomes relevant at the moment of the first default. It is easy to see, then, how the 'A' ratings of Lehman Brothers-originated paper and of AIG made the risk seem negligible, and why the world's banks would quite happily engage in CDS to reduce their risk-weighted assets. Had more attention been paid to possible 'events', however unpredictable they might be, and to truly understanding the risk they were engaged in, one of the largest bailouts in history could have been avoided. Ultimately, overly aggressive, leveraged positions were taken on the basis of a perception of little to no risk – a position that Nassim Taleb, the author of 'The Black Swan', would call 'fragile'. While a probabilistic view of these securities would have concluded that the risk of substantial loss was insignificant, the reality was that the CDS insurance was fragile to an unpredictable and unknown 'black swan' occurrence – highly correlated, low-probability events striking together – which resulted in extremely damaging losses.
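The arithmetic of that fragility is easy to demonstrate. Below is a rough Python sketch of the protection buyer's worst case – the reference credit and the protection seller defaulting together – under a simple Gaussian-copula assumption. The 1% marginal default probabilities and the correlation values are hypothetical, chosen only to show how the joint risk swells with correlation.

```python
import numpy as np
from scipy.stats import norm

def joint_default_prob(p_ref, p_cpty, rho, n=1_000_000, seed=0):
    """Monte Carlo estimate of P(reference defaults AND seller defaults).

    Defaults are driven by correlated standard-normal latent factors
    (a Gaussian copula) - a simplification, not AIG's actual exposure.
    """
    rng = np.random.default_rng(seed)
    z_ref = rng.standard_normal(n)
    z_cpty = rho * z_ref + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    ref_default = z_ref < norm.ppf(p_ref)    # default if factor below threshold
    cpty_default = z_cpty < norm.ppf(p_cpty)
    return np.mean(ref_default & cpty_default)

# 'A-rated' 1% marginals look negligible when treated as independent...
for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}: joint default prob = "
          f"{joint_default_prob(0.01, 0.01, rho):.4%}")
```

At zero correlation the joint event is a one-in-ten-thousand affair; push the correlation towards one – as a systemic crisis does – and the 'insured' position starts to fail precisely when it is needed most.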

Andrew Haldane, the Executive Director of Financial Stability at the Bank of England, offered in August 2012 an argument that strongly influences my view of risk management. The speech, dubbed 'The Dog and the Frisbee', illustrates how over-engineered risk modelling not only fails to deliver reliable outcomes in crisis events, but does so worse than simpler methods. He argues that simpler is better, and I believe this is likely to be true. By building complex methods focused on past events, risk managers gain confidence because they trust the outcomes – outcomes which have repeatedly proven to be false predictions.

But perhaps risk managers are not trying to accurately measure risk at all, but rather to control the perception of risk. After all, by 'tweaking' an over-engineered model, JP Morgan was able to reduce the risk-weighted assets on its book without any change whatsoever to the underlying securities. If risk is truly measurable, then how can it be so easily manipulated?

Now, this is not to say that there is no place for tools such as VaR, or for other risk models, in the office of risk management. The Markowitz framework delivers strong theoretical foundations for portfolio management that cannot be ignored, but to date there has been an over-reliance on methods that assume finance is a science. While CAPM and the Black-Scholes model rest on similar 'scientific' assumptions, shown by empirical results to be false, their application is not nearly as damaging at a systemic level as the misrepresentations of VaR.

But where to from here? No doubt the debate will continue to rage. Bankers will defend their models fiercely as years of their craft are challenged, while those in the camp of Nassim Taleb and Andrew Haldane pursue simpler, more uniform tools. My opinion is that too much faith is put in 'the hammer'. The risk manager should seek to understand, holistically, the risk the business is taking – including what may lie beyond the '95%' – rather than solely quantifying risk and treating the number as gospel. What I do know for certain is this: pity the risk officer, for unlike the carpenter, who can rely on his hammer to deliver day after day, the tools of the contemporary risk manager will surely fail him.

Robin (Mac) Stark is a current Master of Business (Applied Finance) student at QUT.
