The merits and perils of risk modelling

Introduction – OnRisk Dialogues: Challenges of modelling risk

Models. In the 1990s we fell in love with them. Then, after they failed us so spectacularly during the financial crisis of 2007-8, we hated them. Now we are learning to live with them.

The 9th Colloquium of the Scottish Financial Risk Academy (SFRA), Conveying Information about Risk: Use, Misuse and Abuse of Models, set out to explore the place of risk modelling in financial services. OnRisk Dialogues is a new collaboration between the SFRA and Solvency II Wire that brings together a collection of articles from some of the speakers at the colloquium.

“All models are wrong, but some models are useful”

The scene for the discussion was set by Professor Andrew Cairns of Heriot-Watt University with a quote from the famous British statistician George Box: “All models are wrong, but some models are useful”.

Professor Cairns explained the allure of models and the inherent paradox they contain: a well-crafted approximation of reality can be helpful in informing decision-making, while the very nature of the proxy exposes it to potential misuse and over-reliance that could have disastrous consequences. This fundamental tension dominated the debate and is reflected in the contributions to the first issue of OnRisk Dialogues: ‘Challenges of risk modelling’.

An overarching theme emerging from the discussion is the view that the key to making good use of models as effective means of conveying information about risk is understanding their inherent vulnerabilities. At the core of these vulnerabilities is the human interaction with the data, not only in interpreting it but also in the assumptions and choices made by the model builders. Put another way: good models are only good when used wisely.

What is a model?

To help understand the value and limits of models, consider this model of a model. A model can be broadly defined as a system or process used to assist in calculation or prediction. It consists of three components: 1. input (collecting data and setting assumptions for the model); 2. processing (transforming the data and assumptions into values); and 3. output (translating the assumptions and values into useful business information).
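The three components can be sketched in a few lines of code. Everything below – the function names, the claims data and the inflation assumption – is purely illustrative and not drawn from any real model:

```python
# A minimal sketch of the three-component view of a model.
# All names and numbers here are made up for illustration.

def collect_inputs():
    """Input: gather data and set assumptions."""
    return {"claims": [120.0, 95.0, 140.0], "inflation": 0.02}

def process(inputs):
    """Processing: transform the data and assumptions into a value,
    here the average claim projected one year forward."""
    projected = [c * (1 + inputs["inflation"]) for c in inputs["claims"]]
    return sum(projected) / len(projected)

def report(value):
    """Output: translate the value into useful business information."""
    return f"Expected claim next year: {value:.2f}"

print(report(process(collect_inputs())))
```

Even in this toy version, each stage is a place where things can go wrong: bad data in, a mistaken transformation, or a misread output.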

The appeal of using models is immediately apparent. There is, of course, much more to each of the three components, but even this basic grasp of what modelling entails helps in understanding the challenges of using models appropriately.

As a number of speakers highlighted during the colloquium, model misuse can arise at any of these stages. In his article, Models and their limits, Professor Cairns explains the concept of “model risk”: risks arising either from incorrect parameters used in the model or from misuse of the model output by end users.

Professor Cairns makes a distinction between two types of model risk: risks that arise from an unquestioning belief that the model is correct and subsequent failure to engage with alternatives (statistical risk), and those that arise from errors during the design and use of the model (operational risks).

The colloquium examined the causes of model vulnerabilities beyond the models themselves, to the initial choices made in selecting the data. The journalist Michael Blastland, whose work explores perception and understanding of risk in everyday life, demonstrated how the very assumptions and purpose of the model could affect its outcome.

He cautioned against latching onto what he calls a “significant data point” that could in itself influence the output of the model, and gave a simple but pertinent example of how such bias can creep into the data.

Picture the scene. A family sits down at the dinner table and suddenly an elderly relative coughs. A mundane occurrence you are hardly likely to notice. Now picture the same scene knowing you are watching Casualty, a popular hospital-based melodrama in the UK. Suddenly, the cough takes on a new meaning. It does not bode well for granddad. You can almost hear the ambulance sirens off camera…

What this example shows is that the circumstances, in this case a melodrama revolving around a hospital emergency room, change our perception of which data is significant.

His article, Presenting risk: distorting perceptions, highlights the dangers of shifting assumptions into the data, as well as demonstrating how our perceptions of risks can change based on how the data is presented. The article concludes by asking whether more human softness lurks in our risk models than we would like to think.

The outer limits of risk models

The appropriateness of models as a tool for conveying risk was subject to further scrutiny when the discussion turned to explore the outer limits of model use. It was argued by some that in cases where there is either insufficient data or where patterns in the data are difficult to detect, models should be used with extreme caution or potentially not used at all.

Modelling life expectancy (longevity risk), for example, can be difficult given that changes in life expectancy occur over long periods of time and can often only be detected after significant shifts have taken place. The gross underestimation of the change in life expectancy over the past two decades is one of the reasons we face a pension deficit chasm today. From a risk-modelling perspective, it was noted that these factors expose longevity modelling to an especially nasty form of model manipulation or “back-solving” – making the model fit the required output.

Correlations and interactions between variables are also hard to model. One example cited during the discussion was that of deriving ratings for securitisation products – where assets or loans are pooled together and then repackaged to produce new securities.

The rating for each newly created security, or tranche, is influenced by the rating of the individual components of the pool and by the position of the tranche in the payment pecking order: the cash flows from the underlying assets or loans are redistributed to the tranches, so the higher a tranche's priority, the better its rating, because it is less likely to default than the tranches paid after it.

The difficulty in setting the rating for the tranches is caused by the fact that the relationship between the credit quality of a tranche and its default risk is strongly affected by credit correlation. As the recent financial crisis demonstrated, in extreme shocks all assets tend to move together, cancelling out any diversification benefits.
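The effect of credit correlation on tranche risk can be illustrated with a small Monte Carlo sketch. The one-factor Gaussian set-up below, and all of its parameter values, are illustrative assumptions for this article, not the method used by any rating agency:

```python
import random
from statistics import NormalDist

def senior_tranche_hit_prob(rho, pd=0.05, attach=0.15,
                            n_assets=100, n_sims=5000, seed=42):
    """Estimate the probability that pool losses exceed a tranche's
    attachment point, under an illustrative one-factor Gaussian model.
    rho is the asset correlation; pd the individual default probability."""
    rng = random.Random(seed)
    nd = NormalDist()
    threshold = nd.inv_cdf(pd)   # asset defaults if its latent value < threshold
    hits = 0
    for _ in range(n_sims):
        m = rng.gauss(0, 1)      # common (systematic) factor shared by the pool
        defaults = 0
        for _ in range(n_assets):
            # each asset mixes the common factor with idiosyncratic noise
            z = (rho ** 0.5) * m + ((1 - rho) ** 0.5) * rng.gauss(0, 1)
            if z < threshold:
                defaults += 1
        if defaults / n_assets > attach:
            hits += 1
    return hits / n_sims

low = senior_tranche_hit_prob(rho=0.05)
high = senior_tranche_hit_prob(rho=0.60)
print(f"P(pool losses breach attachment) at rho=0.05: {low:.3f}")
print(f"P(pool losses breach attachment) at rho=0.60: {high:.3f}")
```

In this sketch, raising the correlation leaves each asset's individual default probability unchanged at 5%, yet extreme pool losses, the ones that reach the tranche's attachment point, become far more likely: the diversification benefit evaporates precisely when assets move together.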

Securitisations are sensitive to the systematic risk of the pool and this can fluctuate greatly, leading some to argue that any form of securitisation is ultimately a bet on the overall state of the economy.

The tranches cannot be rated without a model, but the phenomenon being modelled is complex and the model user is exposed to considerable model risk.

Regulation

The recognition of the importance of models in financial services, together with a growing understanding of their limits, so vividly illustrated by the financial crisis, has led to an inevitable and necessary increase in regulatory scrutiny of models and their use.

While raising standards and improving practices is to be welcomed, the added attention has its limitations. Some of the new rules for model validation – the process of testing the data, assumptions and outputs of a model – can have unintended consequences.

During the discussion it was noted that, while in theory validation is sequential, conducted after the model is built, in practice it is an iterative process that takes place throughout, between the model builders and those charged with validating and approving the model. Recently introduced rules place strong constraints on this process and, according to some, act as a business requirement that limits the scope and flexibility of model validation. Model building and validation must also be seen in the context of an environment of increasing scepticism towards models, in which, in some cases, an argument can be made that models should not be used at all.

The regulatory environment and the use of modelling in banking are explained and explored by Dherminder Kainth, Head of Model Risk at the Royal Bank of Scotland, in the final article in the series.

His article, Managing risk models, also examines model risk management and argues that model risk should be treated like any other risk assessed by the bank, including through the use of a model risk appetite framework.

Living with models

Models are an essential part of modern financial risk management. In the wake of the financial crisis there is a growing recognition that model users, and senior management, must take responsibility for using model outputs; they can no longer blindly accept the output of a model without at least understanding the underlying assumptions used in its creation.

But in spite of the progress made in understanding the limitations of modelling there are aspects that are not yet fully understood. Models remain exposed to human vulnerability and misuse. As the contributions to this inaugural issue of OnRisk Dialogues show, when it comes to models, we have much to learn and much to be cautious about.

Gideon Benari is the editor of Solvency II Wire