“Validation is much more than a mathematical exercise”
An interview with Martijn Habing (ABN AMRO)
The risk models that banks use are validated by model risk managers. It is their role to determine whether all risks facing the bank have been properly identified. Martijn Habing, head of Model Risk Management (MoRM) at ABN AMRO bank, spoke earlier this year at the Zanders Risk Management Seminar about the extent to which a model can predict the impact of an event. After the seminar, we wanted to hear more about MoRM at ABN AMRO.
The MoRM division of ABN AMRO comprises around 45 people. What are the crucial conditions to run the department efficiently?
Habing: “Since the beginning of 2019, we have been divided into teams with clear responsibilities, enabling us to work more efficiently as a model risk management function. Previously, all questions from the ECB or other regulators were handled by the credit risk experts, but now we have a separate team ready to focus on all non-quantitative matters. This reduces the workload on the experts who really need to deal with the mathematical models. The second thing we have done is make a stronger distinction between the existing models and the new projects that we need to run. Major projects include the Definition of Default and the introduction of IFRS 9. In the past, these kinds of projects were carried out by people who actually had to work on the credit models. By having separate teams for this, we can scale more easily to the new projects – that works well.”
What exactly is the definition of a model within your department? Are they only risk models, or are hedge accounting or pricing models in scope too?
“We aim to identify the widest possible range of models, both in size and type. From an administrative point of view, we can easily track 600 to 700 models. But with such a number, we can’t validate them all to the same depth. We therefore try to get everything in the picture, but what we look at varies per model.”
To what extent does the business determine whether a model is presented for validation?
“We want to have all models in view. Then the question is: how do you get a complete overview? How do you know what models there are if you don’t see them all? We try to set this up in two ways. On the one hand, we do this by connecting to the change risk assessment process. We have an operational risk department that looks at the entire bank in cycles of approximately three years. We work with operational risk and explain to them what they need to look out for, what ‘a model’ is according to us and what risks it can contain. On the other hand, we take a top-down approach, setting the model owner at the highest possible level. For example, the director of mortgages must confirm for all processes in his business that the models have been well developed, and the documentation is in order and validated. So, we’re trying to get a view on that from the top of the organization. We do have the vast majority of all models in the picture.”
Does this ever lead to discussion?
“Yes, that definitely happens. In the bank’s policy, we’ve explained that we make the final judgment on whether something is a model. If we believe that a risk is being taken with a model, we indicate that something needs to be changed.”
Some of the models will likely be implemented through vendor systems. How do you deal with that in terms of validation?
“The regulations are clear about this: as a bank, you need to fully understand all your models. We have developed the vast majority of our models internally. In addition, we have market systems for which large platforms have been created by external parties. So, we are certainly also looking at these vendor systems, but they require a different approach. With these models you look at how you parametrize them – which tests exactly should be run? The control capabilities of these systems are very different. We’re therefore looking at them, but they have other points of interest. For example, we perform shadow calculations to validate the results.”
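A shadow calculation of the kind mentioned can be illustrated with a minimal sketch: independently recompute a figure produced by a vendor system and flag material differences. The present-value example, function names, and tolerance below are illustrative assumptions, not ABN AMRO's actual setup.

```python
# Illustrative shadow calculation: recompute a vendor system's output
# with our own independent logic and compare within a tolerance.
# All names and figures here are hypothetical examples.

def present_value(cashflows, rate):
    """Discount a list of (year, amount) cashflows at a flat annual rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

def shadow_check(vendor_value, cashflows, rate, tolerance=1e-4):
    """Compare the vendor system's figure with our own calculation.

    Returns (our_value, relative_difference, within_tolerance).
    """
    our_value = present_value(cashflows, rate)
    rel_diff = abs(our_value - vendor_value) / abs(vendor_value)
    return our_value, rel_diff, rel_diff <= tolerance

# Example: a three-year bond with a 5% annual coupon, discounted at 3%.
cashflows = [(1, 5.0), (2, 5.0), (3, 105.0)]
vendor_value = 105.657  # hypothetical figure reported by the vendor system
ours, diff, ok = shadow_check(vendor_value, cashflows, 0.03)
```

If the relative difference exceeds the tolerance, the discrepancy becomes a validation finding to investigate – the point is not the pricing formula itself but having an independent second calculation of the same quantity.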
How do you include the more qualitative elements in the validation of a risk model?
“There are models that include a large component from an expert who, based on his expertise, makes a certain assessment resting on one or more assumptions. That input comes from the business itself; we don’t have it in the models and we can’t control it mathematically. At MoRM, we try to capture which assumptions have been made by which experts. Since there is more risk in this, we make more demands on the process by which the assumptions are made. In addition, the model outcome is generally input for the bank’s decisions. So, when the model concludes something, the risk associated with the assumptions will always be considered and assessed in a meeting to decide what we actually do as a bank. But there is still a risk in that.”
How do you ensure that the output from models is applied correctly?
“We try to overcome this by making it obligatory to describe the use of the model in the documentation. For example, we have a model for IFRS 9 where we have to indicate that we also use it for stress testing. We know the internal route of the model in the bank’s decision-making. And that’s a dynamic process; there are models that are developed and then used for other purposes three years later. Validation is therefore much more than a mathematical exercise to see how the numbers turn out.”
Typically, the approach is to develop first, then validate. Not every model will get a ‘validation stamp’. This can mean that a model is rejected after a large amount of work has been done. How can you prevent this?
“That is indeed a concrete problem. There are cases where a lot of work was put into the development of a new model that was then rejected at the last minute. That’s a shame for the company. On the one hand, as a validation department, you have to remain independent. On the other hand, you have to be able to work efficiently in a chain. These points can be contradictory, so we try to live up to both by looking at the modeling assumptions at an early stage. In our Model Life Cycle we have described that, when developing models, the modeler or owner has to report to the committee that determines whether something can or can’t go ahead. They study both the technical and the business side. Validation can therefore play a purer role in determining whether or not something is technically sound.”
To be able to better determine the impact of risks, models are becoming increasingly complex. Machine learning seems to be a way to manage this – to what extent can it?
“As human beings, we can’t judge datasets beyond a certain size – you then need statistical models and summaries. We talk a lot about machine learning and its regulatory requirements, particularly with our operational risk department. We then also look at situations in which the algorithm decides. The requirements are clearly formulated, but implementation is more difficult – after all, a decision must always be explainable. So, in the end it is people who make the decisions and therefore control the buttons.”
To what extent does the use of machine learning models lead to validation issues?
“Seventy to eighty percent of what we model and validate within the bank is bound by regulation – you can’t apply machine learning to that. The kind of machine learning that is emerging now is much more on the business side – how do you find better customers, how do you get cross-selling? You need a framework for that: if you have a new machine learning model, what risks do you see in it and what can you do about them? How do you make sure your model follows the rules? For example, there is a rule that you can’t refuse mortgages based on someone’s zip code, and in the traditional models that’s easy to verify. However, with machine learning, you don’t really see what’s going on ‘under the hood’. That’s a new risk type that we need to include in our frameworks. Another application is that we use our own machine learning models as challenger models for the ones delivered by the modeling teams. This way we can see whether they arrive at the same or different drivers, or whether we can extract more information from the data than the modelers can.”
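The challenger-model idea – checking whether an independent analysis surfaces the same drivers as the delivered model – can be sketched minimally. The toy portfolio, feature names, and the simple correlation-based ranking below are illustrative assumptions; a real challenger would be a full machine learning model rather than a correlation screen.

```python
# Illustrative challenger comparison: rank candidate drivers by a simple
# association measure with the default flag, then flag top drivers that
# the delivered model does not use. All data and names are hypothetical.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_drivers(features, target):
    """Order features by absolute correlation with the default flag."""
    return sorted(features, key=lambda f: -abs(correlation(features[f], target)))

# Toy portfolio: a default flag plus three candidate drivers.
target = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "loan_to_value":   [0.5, 0.6, 0.9, 0.95, 0.55, 0.85, 0.4, 0.9],
    "income":          [80, 75, 30, 25, 90, 35, 85, 28],
    "account_age_yrs": [5, 7, 6, 4, 8, 5, 6, 7],
}
challenger_ranking = rank_drivers(features, target)

# Hypothetical drivers used by the delivered model.
delivered_drivers = {"loan_to_value", "income"}
missing = [f for f in challenger_ranking[:2] if f not in delivered_drivers]
```

If `missing` is non-empty, the challenger has found a driver the delivered model ignores – a discussion point for validation, not automatically a model error.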
How important is documentation in this?
“Very important. From a validation point of view, it’s always action point number one for all models. It’s part of the checklist, even before a model can be validated by us at all. We have to check on it and be strict about it. But particularly with the bigger models and in lending, the usefulness and necessity of documentation has really sunk in.”
Finally, what makes it so much fun to work in the field of model risk management?
“The role of data and models in the financial industry is increasing. It’s not always rewarding; we need to point out where things go wrong – in that sense we are the dentist of the company. There is a risk that we’re driven too much by statistics and data. That’s why we challenge our people to talk to the business and to think strategically. At the same time, many risks are still managed insufficiently – it requires more structure than we have now. For model risk management, I have a clear idea of what we need to do to make it stronger in the future. And that’s a great challenge.”