Quantification of model risk

Risk management and treasury specialists use diverse models on a daily basis to manage various risks. It is easy to forget about the risk inherent in using the models themselves. What we refer to as ‘model risk’ can arise from errors in the model itself, incorrect modeling choices or inappropriate use of the model.

In the US, the Supervisory Guidance on Model Risk Management (SR 11-7) was published in a joint effort by the Federal Reserve System and the Office of the Comptroller of the Currency (OCC) in 2011. Since then, attention to proper model risk management has been steadily increasing.

Following SR 11-7, US banks are obliged to maintain a model inventory, to create risk governance policies and to conduct periodic model reviews. Although not subject to this regulation, insurers and EU banks are also increasing their focus on this area. But there is one question many are struggling with: how to quantify model risk? The answer is simple: it is impossible to do so with 100 per cent certainty. Nevertheless, there are several techniques that allow us to quantify model risk and determine whether a model is suitable for its current use.

“A lot of the work around model risk is recognizing your models are never perfect” – head of model risk validation, HSBC (source: Risk.net article)

In this article, we will focus on three main techniques that can help us to quantify model risk:

  • sensitivity analysis – observing the outcome related to changes in model parameters and assumptions;
  • backtesting – testing model accuracy by using historical data as the input and comparing model output to observed past results; and
  • challenger model – comparing the results of the original model against outcomes from a separately developed model.

To understand model risk better, we split it into three core areas:

  • data
    • data choice for model calibration
    • poor data quality (errors in data or missing data)
  • methodology – model choices
    • assumptions made by the modeler
    • parameter selection/calibration
  • implementation – the programming code into which the model is translated

Awareness of the model risk type is crucial for quantification, as not all three techniques are suitable for verifying correctness in every model risk area (figure 1):

Figure 1. Suitability of different quantification methods in verifying three core model risk areas.
* Application to different model risk areas depending on availability of the observed outcomes.

Sensitivity analysis

One of the most popular ways of quantifying model risk is performing sensitivity analysis, also known as “what if” analysis. The idea behind it is quite simple: change the model and observe how its outcomes behave. The modifications applied during a sensitivity analysis can relate to model parameters, functional forms, model assumptions (such as the choice of distribution) or the quality of the data used in the model.

A common use of sensitivity analysis is to vary the model input parameters. The general steps of a parameter sensitivity analysis are:

(a) identifying the model’s independent and dependent variables;
(b) assigning a probability density function to some or all parameters;
(c) simulating input values from the chosen probability density functions and calculating the corresponding outputs; and
(d) assessing the influence of parameters and their relative importance.

This exercise can help identify the parameter with the most significant impact on the model output and also put a number on the expected risk associated with it (by comparing initial and modified outcomes). Conceptually, the easiest approach is to manipulate one parameter at a time (OAT – “one-at-a-time”) or a few parameters simultaneously, as illustrated in the sketch below. If one decides to focus on several parameters, the correlations between them (based on historical data or expert judgement) need to be considered in the simulations.
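To make steps (a) to (d) concrete, here is a minimal sketch of an OAT parameter sensitivity analysis in Python; the pricing model, the base-case values and the sampling distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical model: present value of a single future cash flow.
def model(rate, notional, horizon):
    return notional * np.exp(-rate * horizon)

# Base-case parameter values (illustrative assumptions).
base = {"rate": 0.02, "notional": 1_000_000, "horizon": 5.0}
base_output = model(**base)

# Steps (b)/(c): assign a distribution to selected parameters and simulate,
# shocking one parameter at a time (OAT) while the others stay at base case.
distributions = {
    "rate": lambda n: rng.normal(0.02, 0.005, n),   # assumed rate uncertainty
    "horizon": lambda n: rng.uniform(4.5, 5.5, n),  # assumed timing uncertainty
}

n_sims = 10_000
# Step (d): assess each parameter's influence on the output.
for name, sampler in distributions.items():
    outputs = np.array([model(**{**base, name: v}) for v in sampler(n_sims)])
    spread = outputs.std()
    print(f"{name:8s} output std: {spread:12.2f} ({spread / base_output:.2%} of base)")
```

Ranking the parameters by the resulting output spread immediately shows where the model is most vulnerable to mis-calibration.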

Backtesting

One of the most basic approaches to model verification is backtesting. This is the process of running the model on historical data and analyzing the model results or, in the case of predictive models, comparing them to observed outcomes. The idea is relatively simple and can be applied to any model for which sufficient historical data is available.

Nevertheless, we need to remember that by performing a backtesting exercise and confirming model validity on that basis, we are assuming that the market’s future behavior will be in line with its past behavior. As an example of a misleading backtesting result, consider the events of June 2014, when Europe faced negative interest rates for the first time. This situation had not previously been considered, and several errors and breaks occurred in models that had worked perfectly in the past and during backtesting exercises – but only under the assumption of positive rates.
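As a simple illustration, the sketch below backtests a hypothetical one-day 99% value-at-risk (VaR) model by counting exceptions, i.e. days on which the realized loss exceeds the forecast; the return history and the VaR rule are simulated assumptions, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Placeholder history of daily portfolio returns (in practice: observed data).
returns = rng.standard_t(df=4, size=1000) * 0.01

# Hypothetical champion model: VaR from a rolling normal approximation.
window, confidence = 250, 0.99
z = 2.33  # approximate 99% normal quantile

exceptions, tests = 0, 0
for t in range(window, len(returns)):
    sigma = returns[t - window:t].std()
    var_forecast = z * sigma          # one-day 99% VaR (loss threshold)
    if -returns[t] > var_forecast:    # realized loss exceeded the forecast
        exceptions += 1
    tests += 1

expected = (1 - confidence) * tests
print(f"exceptions: {exceptions} observed vs {expected:.1f} expected over {tests} days")
# A materially higher exception count suggests the model understates risk --
# here, for example, the normal assumption ignores the fat tails in the data.
```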

Challenger models

The third approach in model risk quantification is the use of challenger models. There are several possible applications of this approach in practice:

Challenging the implementation

One way to use challenger models in model risk quantification is to challenge the implementation by rebuilding the model (or parts of it) from scratch, aiming to replicate it exactly while relying only on its documentation. In this case we expect to match the results of the champion model (the one we are validating) very accurately; if the match is imperfect, the deviations can point to errors in the implementation or the documentation of the model.
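A minimal sketch of such a reconciliation, assuming both implementations expose a pricing function over a shared set of test cases (all function names and values are hypothetical):

```python
import numpy as np

# Champion: the production implementation under validation (hypothetical).
def champion_price(rate, horizon):
    return np.exp(-rate * horizon)

# Challenger: independently rebuilt from the documentation alone.
def challenger_price(rate, horizon):
    # The documentation specifies continuous compounding; differences beyond
    # numerical noise would indicate an implementation or documentation error.
    return 1.0 / np.exp(rate * horizon)

test_cases = [(r, h) for r in (0.0, 0.01, 0.05) for h in (1.0, 5.0, 10.0)]
tolerance = 1e-10

for rate, horizon in test_cases:
    a, b = champion_price(rate, horizon), challenger_price(rate, horizon)
    status = "OK" if abs(a - b) <= tolerance else "MISMATCH"
    print(f"rate={rate:.2f} horizon={horizon:4.1f} diff={a - b:+.2e} {status}")
```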

Challenging the assumptions

Another approach is to challenge the model’s assumptions by building relatively basic and simple challenger models that differ from the champion model in one or more specific assumptions and then comparing the results (based on the same data set). In this way we can determine how individual modeling choices affect the model’s output.
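For instance, the sketch below varies a single distributional assumption, estimating a 99% loss quantile under a normal versus a Student-t fit on the same data set; the data is simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Shared data set (placeholder for the champion model's calibration data).
returns = rng.standard_t(df=4, size=2000) * 0.01

# Champion assumption: returns are normally distributed.
mu, sigma = returns.mean(), returns.std()
q_normal = -stats.norm.ppf(0.01, loc=mu, scale=sigma)

# Challenger assumption: Student-t tails, everything else unchanged.
df, loc, scale = stats.t.fit(returns)
q_t = -stats.t.ppf(0.01, df, loc=loc, scale=scale)

print(f"99% loss quantile, normal assumption:    {q_normal:.4f}")
print(f"99% loss quantile, Student-t assumption: {q_t:.4f}")
print(f"impact of the distributional choice:     {q_t / q_normal - 1:+.1%}")
```

The gap between the two quantiles puts a number on the model risk attributable to that one assumption.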

Full-blown challenge

The most extreme application of the challenger/champion method is to rebuild the model without knowledge of the champion model. This means we can make different modeling choices, use different statistical techniques or even choose a different data set as input to the model.

In case of disparities in model outcomes, we need to trace where the differences come from. If the challenger model performs better, the factors driving the outperformance should be investigated.
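A sketch of such a head-to-head comparison on held-out data, using out-of-sample error as the yardstick; the two models and the data set are placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Placeholder data: a noisy nonlinear relationship.
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 500)
x_train, y_train, x_test, y_test = x[:400], y[:400], x[400:], y[400:]

# Champion: a linear fit. Challenger: an independently chosen cubic fit.
champion = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
challenger = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

def rmse(model):
    return np.sqrt(np.mean((model(x_test) - y_test) ** 2))

print(f"champion   out-of-sample RMSE: {rmse(champion):.4f}")
print(f"challenger out-of-sample RMSE: {rmse(challenger):.4f}")
# If the challenger wins consistently, investigate which modeling
# choice (here: the functional form) drives the outperformance.
```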

Selecting the right challenger model

Choosing the right challenger model is essential for meaningful results. Figure 2 depicts the desired characteristics of a challenger model.

Figure 2. Characteristics of good challenger models.

Due to their predictive power, machine-learning models make promising candidates for challenger models. They often outperform their standard-model equivalents, deliver fast predictions and scale well.

They also fulfill a good part of the other requirements: machine-learning models are independent of the champion model, are based on sound techniques and, for certain algorithms, are straightforward to apply.

“The black box models are a problem. We need to be able to explain those models and to step through why the conclusion of models is justified, rather than saying ‘the computer is saying that the answer is x’” – head of treasury risk and compliance, Clydesdale Bank (source: Risk.net article)

One of the most prominent criticisms of machine-learning algorithms is that they are considered ‘black boxes’. While in some cases this is true, we need to distinguish between different classes of algorithms. Plain decision-tree-based algorithms, for example, are a series of if-else statements and thus very transparent for moderately sized trees, while deep neural networks can be very hard to interpret.
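To illustrate the transparency of small trees, the sketch below fits a shallow decision tree with scikit-learn and prints it as plain if-else rules; the data and feature names are simulated assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(seed=5)

# Simulated data: the target depends on a rate level and a maturity.
X = rng.uniform(0, 1, size=(500, 2))
y = np.where(X[:, 0] > 0.5, 1.0, 0.2) * X[:, 1]

# A shallow tree stays human-readable when exported as rules.
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["rate", "maturity"]))
```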

Summary

Quantification of model risk is a broad topic that still leaves us with many questions to answer. All three methods presented in this article (sensitivity analysis, backtesting and challenger models) are useful for measuring model risk. Nevertheless, each approach has its pros and cons, and the decision of which technique to use needs to be based on the scope and purpose of the analysis, as well as the nature of the model. Once model deviations are quantified, they can serve as the basis for a model risk capital reserve.

To make things more challenging, but also more interesting, one of the next hurdles will be to look at model risk from a firm-wide perspective instead of assessing models separately, which introduces dependency structures between models. Overall, the increasing number and complexity of models, as well as new regulatory developments, mean that the quantification of model risk will remain an ongoing challenge.
