Summary of the FRTB Regulations

If you would like a quick primer on the FRTB regulations that have been published, please see this post for a very concise summary.

The Dominance of the Historical Simulation Model under FRTB

Before we come to the technology implications of FRTB, one consequence of the FRTB regulations in terms of VaR methodology is going to be a shift towards Historical Simulation as the de facto methodology of choice. There are a few key reasons why this shift will happen, but it all comes down to the fact that the regulatory authorities are pushing for a full revaluation methodology for banks that want to use an internal model approach (IMA). There will also be an impact on the standardised approach (SA) methodology, as the sensitivities will inevitably be calculated from the same full revaluation infrastructure that the bank implements. This also works well with the stated desire to bring the IMA and the SA closer together for the sake of consistency.

The number of runs needed is as follows:

  • In addition to the all-in run, the FRTB regulations demand that each Risk Class be calculated separately (while holding the others constant) – this adds another 5 runs: FX, Commodity, IR, Credit and Equity, for a total of 6 runs.
  • Each of these runs must then be repeated across the calibration windows – Reduced Stress, Reduced Current and Full Current – so each of the 6 runs is multiplied by 3.
  • This gives us a total of 18 runs!
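The run matrix described above can be sketched in a few lines. This is a minimal illustration of the arithmetic only – the run and window names follow the list above:

```python
# FRTB IMA run matrix: the all-in run plus one run per risk class,
# repeated across the three calibration windows.
RISK_RUNS = ["All-in", "FX", "Commodity", "IR", "Credit", "Equity"]
CALIBRATION_WINDOWS = ["Reduced Stress", "Reduced Current", "Full Current"]

# Cross product: every risk run under every calibration window.
runs = [(window, risk) for window in CALIBRATION_WINDOWS for risk in RISK_RUNS]

print(len(runs))  # 18
```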

Technology Implications of FRTB

This clearly shows why, under a Monte Carlo methodology, this will become even more difficult to implement (if not impossible to actually complete the calculations needed within a reasonable SLA timeframe), as you would have to revalue each trade around 10,000 times as opposed to roughly 520 times for most Historical Simulation implementations. This has some very big technology implications for FRTB. Very simply, a Monte Carlo methodology will take on the order of 20 times longer to calculate, as there are about 20 times more calculations. Furthermore, the amount of storage space required for the Monte Carlo results will also be much greater than that required for the Historical Simulation methodology. This of course has cost implications, and financial institutions will have to decide what is realistic in terms of investment in the Risk Architecture.
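The back-of-the-envelope comparison above can be made concrete. This is a rough sketch assuming ~10,000 Monte Carlo paths versus ~520 historical scenarios (roughly two years of daily observations), multiplied across the 18 FRTB runs:

```python
# Per-trade revaluation counts under the two methodologies.
MC_SCENARIOS = 10_000   # typical Monte Carlo path count (assumption)
HS_SCENARIOS = 520      # ~2 years of daily historical scenarios
FRTB_RUNS = 18          # 6 risk runs x 3 calibration windows

mc_revals_per_trade = MC_SCENARIOS * FRTB_RUNS  # 180,000
hs_revals_per_trade = HS_SCENARIOS * FRTB_RUNS  # 9,360

print(mc_revals_per_trade / hs_revals_per_trade)  # ~19.2x more work
```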

From a technology point of view there are several implications:

    • Grid-distributed calculation infrastructure that can be orchestrated efficiently and effectively – giving users full control to configure and kick off risk runs in a flexible way.
    • Separation of the calculation and the aggregation infrastructure – this is fundamental, as there should be a "calculate once, run reports across many different dimensions" philosophy.
    • Use of Hadoop and big data analytics for the transformation and enrichment of data – to take advantage of map-reduce and other big data tools that allow for the efficient enrichment and transformation of data.
    • A heavy tilt towards the paradigm of bringing the analytics to the data, rather than having huge volumes of data moving around different systems and message queues – so avoid I/O as much as possible.
    • A move towards in-memory analytics, especially for the aggregation and analysis of the data. The technology should give the ability to analyse the data across many dimensions and get quicker, more efficient insights into the risk profile of the bank at any given point.
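The "calculate once, run reports across many dimensions" point can be illustrated with a toy example. The trade identifiers, desk names and the four-scenario vectors below are purely illustrative – the point is that the stored per-trade P&L vectors are aggregated along different dimensions without any revaluation:

```python
from collections import defaultdict

# Per-trade P&L vectors, computed once by the revaluation grid.
# Key: (trade_id, desk, risk_class); value: P&L per historical scenario.
pnl_vectors = {
    ("T1", "Rates",  "IR"): [-5.0,  2.0, -1.0,  4.0],
    ("T2", "Rates",  "IR"): [ 1.0, -3.0,  2.0, -2.0],
    ("T3", "FXDesk", "FX"): [ 0.5, -1.5,  3.0, -4.0],
}

def aggregate(by_index):
    """Sum the stored P&L vectors grouped by one key dimension
    (1 = desk, 2 = risk class) -- no revaluation needed."""
    groups = defaultdict(lambda: [0.0] * 4)
    for key, vector in pnl_vectors.items():
        bucket = groups[key[by_index]]
        for i, pnl in enumerate(vector):
            bucket[i] += pnl
    return dict(groups)

# The same stored vectors answer both a desk view and a risk-class view.
print(aggregate(by_index=1))  # aggregated by desk
print(aggregate(by_index=2))  # aggregated by risk class
```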


Big Data for FRTB

Leverage the technology you build for FRTB

This point cannot be emphasised enough: banks are investing tens of millions into their risk infrastructure to cope with new regulations, and FRTB in particular. This should be used as an opportunity to streamline legacy systems and pull different departments across banks closer together. A crucial area will be ensuring that the P&L / Finance functions are more closely aligned with the risk function, both operationally and technologically.

As more and more data is collected, and Hadoop technologies make it possible to store and analyse that data for trends, banks should be able to use it to gain valuable insights. VaR data broken down by the client dimension and then plotted across time would give valuable insight into seasonal risk-on / risk-off trends across different clients. This could then be used to generate more business or to mitigate risks with certain clients during certain periods. Any bank that manages to crack this will definitely gain a competitive advantage. And best of all, very little additional investment is needed – this can be done under the work that most banks will have to do for FRTB anyway.
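The client-dimension analysis described above might look something like this as a first cut. The client names, months and VaR figures are entirely hypothetical – the sketch just shows averaging VaR by (client, month) to surface seasonal risk-on / risk-off patterns:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical VaR observations: (client, month, VaR in $m).
var_observations = [
    ("ClientA", "Jan", 1.2), ("ClientA", "Jan", 1.4),
    ("ClientA", "Jul", 0.3),
    ("ClientB", "Jan", 0.5), ("ClientB", "Jul", 2.1),
]

# Group observations by (client, month) and average them.
by_client_month = defaultdict(list)
for client, month, var in var_observations:
    by_client_month[(client, month)].append(var)

seasonal_profile = {k: mean(v) for k, v in by_client_month.items()}

# ClientA looks risk-on in January and risk-off in July.
print(seasonal_profile[("ClientA", "Jan")])  # ~1.3
print(seasonal_profile[("ClientA", "Jul")])  # 0.3
```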
