Stay focused on objectives

Technological developments are changing the world around us at an ever-increasing pace. This speed of change has become the new reality – and it won’t be slowing down any time soon. What opportunities does this present to insurers and banks?

Accelerating technological developments are making it proportionately easier to personalize digital services, paving the way for new companies to enter the financial markets and add value. Another way of looking at this is to imagine that the intermediary model, which has been the go-to model in the insurance industry for decades, has been given a new and much bigger envelope in which to operate.

Specialization

An indirect consequence of these fast-moving technological developments is that customers' overall expectations of service quality keep rising. In this world of free communication, it only takes a few to lead; the rest have virtually no choice but to follow. It would seem, therefore, that banks and insurance companies will eventually, inevitably, have to outsource most of their non-core activities to specialist parties.[1] This is already happening, on a large scale.

The better news for insurers and banks is that the demand for risk transformation is not about to drop any time soon. On the contrary, as long as prosperity in the Third World continues to rise, it is more likely to increase. What might change, however, is that the balance of power could shift from the risk bearer to that bigger intermediary-model envelope. As better digital personalization usually provides a better customer experience, this shift is most likely in segments that intrinsically generate a lot of relatively homogeneous information, such as the retail market. But this too can hardly be considered a new development.

For core activities there will be no choice: being the best in class will be the only option. The logical next step is for everyone to fully commit to the “new technology”.

Financial risk management

In our domain – financial risk and balance-sheet management – the use of these "new" (and complex) algorithms is still limited.[3] This is only logical: "our models" are typically used for highly regulated activities, often directly or indirectly resulting in hedging decisions. It is unwise to apply algorithms if we do not fully understand how changes in source data will translate into a different prediction. Fortunately, supervisory boards agree.
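To make that concern concrete, the sketch below is a purely hypothetical Python example (using scikit-learn and synthetic data, not a prescribed method) of the kind of basic transparency check meant here: bump one input in the source data and verify that the resulting change in the prediction is explainable.

```python
# Hypothetical sketch of a basic transparency check (illustrative only):
# bump one input in the source data and verify that the resulting change in
# the prediction is explainable. Data, model and feature meanings are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                     # e.g. rates, spreads, volumes
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = LinearRegression().fit(X, y)

def sensitivity(model, X, feature_idx, bump=0.01):
    """Average change in the prediction when one input is bumped by `bump`."""
    X_bumped = X.copy()
    X_bumped[:, feature_idx] += bump
    return float(np.mean(model.predict(X_bumped) - model.predict(X)))

for i in range(X.shape[1]):
    print(f"feature {i}: prediction shift per +0.01 bump = {sensitivity(model, X, i):+.5f}")
# For this linear model the shift is simply coefficient * bump, so it is fully
# explainable; for a complex algorithm it may not be, which is exactly the concern.
```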

But even our domain offers enough wriggle room to experiment with algorithms. Our clients do it all the time, and so do we. Yet you hear and read so little about it that you could conclude most of these experiments simply fail. It is crucial to obtain a thorough knowledge of the advantages and disadvantages of "the means", but it is equally important not to lose sight of "the end": to work more efficiently, serve a wider audience, and remain profitable in the process.

Old-fashioned statistics

Expectations of "intelligent" or even "self-learning" algorithms are often high. However, even the most old-fashioned among us will concede that a lot of work goes into constructing a good intelligent algorithm. In essence, the modeling process is no different from the statistics we have used for decades. We are still not at the stage where we can "quickly" put a useful machine-learning algorithm in place.[2]

This is unlikely to change any time soon. An algorithm that can be used generically, in other words one without significant adaptations, will never function optimally for long compared with the alternatives, precisely because it is generic. It will only work for as long as nothing else is available, after which it will be superseded by specialist applications, which is how markets tend to develop anyway.

Additionally, new algorithms like these are often applied to good old-fashioned "positional data". These so-called "bounded" datasets are typically gathered from many source systems, so there is little mutual consistency to speak of. Without either a huge amount of data or really high-quality data, this type of new algorithm will not, by definition, be able to find the more complex relationships for which it adds value. What this means is that old-fashioned statistics, such as generalized linear models (GLMs) or logistic regression, are likely to do the job just as well as the cool new algorithm your colleagues have worked on for months.
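As a purely illustrative sketch of that point (Python with scikit-learn, and a synthetic dataset standing in for noisy positional data), the comparison below shows how such a benchmark typically plays out: with limited, noisy signal, plain logistic regression and a more complex learner end up close together.

```python
# Illustrative sketch only: benchmark "old-fashioned" logistic regression
# against a more complex learner on the same noisy, synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=10, n_informative=4,
                           flip_y=0.1, random_state=0)   # limited, noisy signal

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "gradient boosting":   GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")   # the gap is typically marginal
```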

Knowledge and experience

Even if all the data signals turn green, as it were, success in our domain is not guaranteed. The legions of data scientists who are currently taking over the world are perfectly capable of visualizing how and when a customer books a flight on their smartphone (they do it themselves all the time), but they have a much more limited understanding of balance-sheet management and financial markets. They are therefore unable to identify inconsistencies in source data or in predictions. Some visionaries insist that the endless flow of available data will render domain expertise redundant in the long term. Maybe they are right. But until then, such expertise remains absolutely crucial in all these undertakings.

Simply having an expert available is not enough either. Because "generic" data scientists cannot do the job alone, experts must be able to judge what an algorithm and its surrounding process can and cannot do, where, and why, in order to guide them effectively. They can make all the difference between doing the right things and doing things right. In other words: in addition to domain knowledge, experts must also bring practical experience with modern algorithms and the associated data and infrastructure to the party. McKinsey heralded this need a few years ago by highlighting the respective roles of the data strategist and the analytics translator. If these roles are not adequately fulfilled, the chance of success plummets.

Of course, as every self-respecting financial service provider will tell you, knowledge and experience like this can be insourced, and they have a point. However, our experience has taught us that it can also be developed internally. It takes more than sending a few designated employees on a course for a week, though. Just as you (hopefully) rely on the relevant science in your own field, you would be well advised to listen to the didactics here. If you allow a few capable and intrinsically motivated employees to spend two or three days a week, for about a year, on important, relevant but mainly non-urgent issues and innovation, you will create the creative space and development needed to move your organization forward in the long term.

Conclusion

Technological developments offer a multitude of great new insights and possibilities. While this brings substantial change to financial markets, it must be stressed that the usefulness of new technology is nuanced. Technology, in whatever form it takes, will always be a means to an end. If you cannot determine beforehand which problem a “new” (as opposed to an “old”) technology will solve, it usually means that it will not solve a problem at all. An expert’s time can only be spent once, so make sure you use the right expert in the right way and you will have an above-average chance of staying afloat.

 

[1] Fortunately, the first sweeping steps are already being taken. Most of our clients clearly understand that specialist IT service providers can often provide better IT infrastructure services than their own IT department. But this does not necessarily render their own IT services superfluous: they remain indispensable for management and employee contact.
[2] Nonetheless, with a limited amount of effort, self-learning algorithms can be useful for quickly setting up challenger models. These are not designed to get the most out of every bit of data, but to support the conclusion that the main model (which we do understand) is sound (see the sketch below).
[3] For the sake of completeness: methodologically these algorithms are not new. What is new about them is that we now have the computational capacity and data to calculate estimators.
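For footnote [2], a minimal sketch of such a challenger check (again Python with scikit-learn, on synthetic data purely for illustration): fit a quickly trained self-learning challenger next to the main model that we do understand and see whether it finds structure the main model misses.

```python
# Minimal sketch of a challenger check, on synthetic data for illustration only:
# if a quickly fitted self-learning challenger cannot clearly beat the main
# (understood) model out of sample, that supports the main model's soundness.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + rng.normal(scale=0.5, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

main = LinearRegression().fit(X_train, y_train)                # the model we understand
challenger = RandomForestRegressor(random_state=0).fit(X_train, y_train)

gap = (mean_absolute_error(y_test, main.predict(X_test))
       - mean_absolute_error(y_test, challenger.predict(X_test)))
print(f"MAE gap (main minus challenger): {gap:+.4f}")
# A clearly positive gap would suggest the main model misses structure that the
# challenger picks up; here the data is linear by construction, so it should not.
```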