The balance between trust and control

Back in 2010, the American Economic Review published the article 'Growth in a Time of Debt', penned by Carmen Reinhart and Kenneth Rogoff (hereafter RR).

It concluded that if a country's national debt exceeds 90 per cent of its gross domestic product, its economy will contract by 0.1 per cent a year. Several politicians, including US Congressman Paul Ryan and European Commissioner Olli Rehn, cited this conclusion when arguing for further cutbacks. Moreover, the debate over how to resolve Greece's national debt and enormous budget deficit lent Reinhart and Rogoff's conclusion significant extra relevance.

Thomas Herndon, Michael Ash and Robert Pollin of the University of Massachusetts Amherst (UMass Amherst) tried to reproduce RR's results, but without success. After they sought the authors' advice, it transpired that they had been unable to reproduce the results because of a modeling error in the Excel file used by Reinhart and Rogoff. In the meantime, however, several people in influential positions had been citing Reinhart and Rogoff's conclusions as if they were 'the Holy Grail'.

I think it’s fair to say that we should be grateful that the UMass Amherst academics tried to reproduce these results and, in doing so, stopped us from blindly following a course that was partly founded on erroneous findings. But for me their work also raised another question: how can we be certain that similar modeling errors have not been made in other scientific studies? Might we similarly have followed a course elsewhere that is partly based on flawed research results?

As a risk manager, it’s my job to model risks in a variety of financial institutions. In addition to being reviewed by senior advisors, my work is also checked independently, in the form of an audit and/or model validation.

Furthermore, all models must be supported by model documentation describing the methodology and configuration, as well as the test activities that were carried out. Anyone armed with the right knowledge, and with this model documentation at their disposal, should be able to recreate the model. In my humble opinion, this is crucial, given the far-reaching consequences that erroneous modeling can have for a financial institution and, by extension, for society as a whole.

One of the objectives of modeling risks in financial institutions is to convince supervisory bodies and other stakeholders that these institutions are not taking on risk irresponsibly.

If there is one thing we have learned from the 2008 credit crisis, it’s that irresponsible risk-taking by the financial sector can have dire consequences for the rest of society. But this doesn’t only apply to the financial sector. We now live in an increasingly specialized world, one in which we communicate with each other ever faster and better. That makes it all the more important that every group of specialists takes responsibility for what it does. Given the potential repercussions for the general public, it is unfortunately not enough to trust that every such group will duly assume its responsibility; the necessary controls must also be put in place.

This, in turn, raises another important question.

Bearing in mind how serious the consequences of a scientific modeling error could be for society, are the controls designed to prevent such errors really effective enough? Shouldn’t we take a long, hard look at these controls before we lose our faith in science on discovering that the conclusions of a published article were based on flawed modeling?