
Powering FRTB: coping with new computing needs

Posted: 14 April 2017

Come 31 December 2019, FRTB will bring an increased level of complexity to the way that both standardised and internal models work. Jay Hibbin, Regional Sales Director – Financial Services EMEA at CenturyLink, explores the need for more computing power…


To borrow a phrase, the Fundamental Review of the Trading Book (FRTB) to some degree does what it says on the tin. The reforms will certainly be fundamental in nature, with the changing regulations set to transform the way market risk is managed within banking.

At a process level, FRTB will bring about a significant shift in how banks calculate the level of risk associated with individual financial instruments held on their trading books. The regulation will place particular emphasis on any institution wishing to mitigate its exposure by using its own models to calculate capital charges.

With the CET1 ratio as the core metric of a bank’s strength, and at a time when banks are so capital-constrained, financial institutions are even more incentivised to use their own internal models to calculate risk. Unlike the standardised approach set out by the regulator, which could require a hefty increase in the amount of capital held against the risk, internal models allow banks to fine-tune their risk profiles. This essentially means that financial organisations using internal models can hold a level of capital more appropriate to the risk that they carry.

Muddying the waters

Come 31 December 2019, FRTB will bring an increased level of complexity to the way that both standardised and internal models work.

The current Value-at-Risk (VaR) method used to determine how much capital an organisation needs to hold against risk will be replaced with Expected Shortfall (ES), which averages the losses in the tail beyond a confidence threshold rather than reading off a single quantile. This change of approach is also accompanied by a change in the liquidity horizon used to establish risk: the fixed 10 days becomes variable, from 10 to 120 days depending on the instrument. This is fine in theory and should produce better results, but completely overhauling the existing models is a significant task for banks.
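The shift from a quantile measure to a tail average can be made concrete. The following is a minimal sketch in a historical-simulation style; the P&L figures are randomly generated and the square-root-of-time horizon scaling is shown purely for illustration, not as the regulatory aggregation method:

```python
import numpy as np

# Illustrative 10-day P&L vector (a real historical simulation
# would use actual portfolio revaluations)
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1e6, 10_000)

# Current approach: 99% VaR -- the loss exceeded in only 1% of scenarios
var_99 = -np.percentile(pnl, 1)

# ES approach: average loss beyond the 97.5% quantile, so the worst
# tail scenarios now feed directly into the capital number
tail_cut = np.percentile(pnl, 2.5)
es_975 = -pnl[pnl <= tail_cut].mean()

# Variable liquidity horizons: scale the base 10-day measure out to 120 days
# (illustrative square-root-of-time rule)
es_by_horizon = {h: es_975 * np.sqrt(h / 10) for h in (10, 20, 40, 60, 120)}

print(f"99% VaR:  {var_99:,.0f}")
print(f"97.5% ES: {es_975:,.0f}")
for h, es in es_by_horizon.items():
    print(f"ES at {h:>3}-day horizon: {es:,.0f}")
```

Because ES averages every scenario past the cut-off, and because each instrument class now carries its own liquidity horizon, the number of revaluation runs a bank must perform grows substantially.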

In addition to this, under FRTB, internal models will need to be approved on a desk-by-desk basis, where previously permission to use an internal model was given across the bank. This makes the approval process for models significantly more demanding. With a substantial amount of additional capital potentially required if their internal models do not comply with the new regulation, banks need to look at how best to build these new models efficiently in order to gain approval.

More complexity, more computing power

As the complexity of models increases and new variable time horizons are introduced, the computing power required to run the calculations will inevitably rise. However, returning to the fact that banks are so capital-constrained, this is a large IT investment that banks will not be keen to make.

Some banks will be looking at moving these models and data to the public cloud. However, the potential intensity of the workload could pose a problem: not all cloud environments are set up to cope with such a demanding workload, and where they are, the economics of cloud can make the total cost of ownership too high. In addition, the age-old issue of security could also be a roadblock, despite the fact that not all grid workloads involve sensitive or client data.

The grid-as-a-service model

Facing a difficult decision, financial institutions should instead look to utilise the grids they are already using to power their existing models. With a strategic partner to guide them through this process, banks can move to a grid-as-a-service model in order to cope with the increased workloads. This approach allows banks to expand their known and loved grids at a manageable cost per month. By using networks of dense computing managed by a third party in its data centre, and contracted out as a service on a monthly basis, the capital cost of dealing with these changes will be significantly reduced.

It is also possible to combine cloud technology and mature grids in one effective hybrid IT model. This approach gives banks access to the right computing resources in a timely manner, so that they can meet regulatory requirements without taking on extensive extra cost. The cloud element provides the scalability to absorb bursts of activity while banks are working hard initially to define these models, and the grid element provides the power to maintain them on an ongoing basis.

Opportunity for the future

Following this approach means that compliance with FRTB can be managed effectively, and it should also make banks think about their future strategies in this area. When forced to consider changes at a grid level, banks should take the opportunity to consider their grid strategies at a macro level – enabling the bank’s IT team to make a call on whether grid services can still be run on-premises with intermittent capital investment, or whether they should be outsourced for best performance and cost. Considering a grid-as-a-service model also enables banks to review the overall efficiency of their entire grid networks, a process which may well be long overdue.
