One of the most interesting and challenging aspects of the FRTB regulations is the sheer volume of PnL vectors for FRTB that will be produced as a result, and the host of computational and technology challenges that come with it. Let's look at this in some more detail; the first step is to understand why so much data needs to be produced. If you want to read a summary / primer before you look at this article, check out this link for an FRTB summary.

One of the cornerstones of the new FRTB regulations is that liquidity horizons need to be taken into account, and at an asset-class-specific level. The risk factor classes are Equity, Interest Rates, FX, Commodity and Credit Spread, with a Total run across all of them. There are five liquidity horizons to take account of: 10, 20, 40, 60 and 120 days. This gives us quite a considerable number of runs and, as a result, a considerable number of PnL vectors for FRTB that will be produced.

| Horizon (days) | Equity | Interest Rates | Credit | FX | Commodity | Total |
|---|---|---|---|---|---|---|
| 10 | Yes | Yes | Yes | Yes | Yes | Yes |
| 20 | Yes | Yes | Yes | Yes | Yes | Yes |
| 40 | Yes | Yes | Yes | Yes | Yes | Yes |
| 60 | Yes | Yes | Yes | No | Yes | Yes |
| 120 | No | No | Yes | No | Yes | Yes |

That gives us 26 runs just for the full-set, current-period run.
Add to that the fact that we also have to calculate a set of results for a reduced risk factor set (covered elsewhere) and for a stressed period run, which can also use the reduced set. That makes three runs altogether, so in total we could have a series of 78 PnL vector runs to generate! In practice there may be some optimisation which reduces this slightly, but that is the worst-case scenario.
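As a rough illustration (a sketch only, with the applicability flags transcribed straight from the table above), the run count can be reproduced in a few lines of Python:

```python
# Applicability of each liquidity horizon (days) per asset-class bucket,
# transcribed from the table above (True = an ES run is required).
applicability = {
    10:  {"Equity": True,  "Rates": True,  "Credit": True, "FX": True,  "Commodity": True, "Total": True},
    20:  {"Equity": True,  "Rates": True,  "Credit": True, "FX": True,  "Commodity": True, "Total": True},
    40:  {"Equity": True,  "Rates": True,  "Credit": True, "FX": True,  "Commodity": True, "Total": True},
    60:  {"Equity": True,  "Rates": True,  "Credit": True, "FX": False, "Commodity": True, "Total": True},
    120: {"Equity": False, "Rates": False, "Credit": True, "FX": False, "Commodity": True, "Total": True},
}

full_set_runs = sum(flag for row in applicability.values() for flag in row.values())
print(full_set_runs)      # 26 runs for the full set, current period
print(full_set_runs * 3)  # 78 runs across full/current, reduced/current and reduced/stressed
```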

Let's look at the formula prescribed for the calculation of Expected Shortfall:

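The prescribed liquidity-adjusted Expected Shortfall from the Basel text takes the form:

$$
ES = \sqrt{\left(ES_T(P)\right)^2 + \sum_{j \geq 2}\left(ES_T(P, j)\,\sqrt{\frac{LH_j - LH_{j-1}}{T}}\right)^2}
$$

where $T$ is the base horizon of 10 days, $ES_T(P)$ is the Expected Shortfall at horizon $T$ with shocks applied to all risk factors of the portfolio $P$, $ES_T(P, j)$ is the Expected Shortfall at horizon $T$ with shocks applied only to those risk factors whose liquidity horizon is at least $LH_j$ (all others held constant), and $LH_1, \dots, LH_5$ are the horizons 10, 20, 40, 60 and 120 days.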

Next, let's look at an example to really understand how the data requirements explode and what this means from a practical implementation point of view. We will take the case of a simple equity example: a small-cap underlying equity in the US. The risk factors in this example, with their liquidity horizons mapped to them, are as follows.

| Risk Factor | Liquidity Horizon (days) |
|---|---|
| Equity – Small Cap | 20 |
| Equity – Small Cap Volatility | 60 |
| Interest Rates | 10 |

Let's assume that the volatility data we have for this underlier is not good enough for the full ten-year period, so we remove it from our reduced risk factor set. We end up with the following PnL vectors that need to be generated for the full set:

| Horizon (days) | Equity | Interest Rates | Total |
|---|---|---|---|
| 10 | Equity/Volatility | Curve | Equity/Volatility/Curve |
| 20 | Equity/Volatility | | Equity/Volatility |
| 40 | Volatility | | Volatility |
| 60 | Volatility | | Volatility |

From the above we would get 9 vectors. From the reduced set for the current period, which does not contain the volatility risk factor, we would have only 5, and another 5 for the reduced-set stressed period. That gives us a total of 19 PnL vectors for FRTB for this one equity option trade. Arguably we could make some calculation and storage optimisations by not computing the 40 and 60 horizon vectors twice (they shock the same risk factors), but calculating them once and saving the result with both tags.
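To make the counting concrete, here is a minimal Python sketch (the risk factor names, buckets and horizons are the ones assumed in the example above) that enumerates the vectors for the full set and the two reduced-set runs:

```python
# Liquidity horizons (days) and asset-class buckets for the example trade's risk factors.
RISK_FACTORS = {
    # name:                 (asset-class bucket, liquidity horizon)
    "Equity spot":          ("Equity",           20),
    "Equity volatility":    ("Equity",           60),
    "Interest rate curve":  ("Interest Rates",   10),
}
HORIZONS = [10, 20, 40, 60, 120]

def vectors_for(risk_factors):
    """One PnL vector per asset-class bucket that still has a risk factor with a
    liquidity horizon >= the given horizon, plus one 'Total' vector per horizon."""
    vectors = []
    for h in HORIZONS:
        buckets = {bucket for bucket, lh in risk_factors.values() if lh >= h}
        if buckets:
            vectors += [(h, b) for b in buckets] + [(h, "Total")]
    return vectors

full_set    = RISK_FACTORS
reduced_set = {k: v for k, v in RISK_FACTORS.items() if k != "Equity volatility"}

n_full    = len(vectors_for(full_set))     # 9 vectors
n_reduced = len(vectors_for(reduced_set))  # 5 vectors for each reduced-set run
print(n_full + 2 * n_reduced)              # 19 vectors for this single trade
```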

The next challenge is the aggregation, which we will tackle in another article. The main point to note here is that we now have significant challenges around the calculation and storage of these PnL vectors for FRTB. If we assume a historical simulation methodology, each of these vectors requires 260 individual pricing calls. So in the case of this option, put very simply, we have to repeat the pricing call with different parameters 260 * 19 = 4,940 times. Once we have the data we then have to store it somewhere. If we work on the assumption that each PnL vector is roughly 10 KB, then for the 19 we will need roughly 190 KB of space. Again, this is the very simple case of one trade, and this structure has only three risk factors; others will have fewer or more, which changes the calculation requirements. Let us also assume that on average we need 12 vectors per trade (a very big assumption).
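A quick back-of-the-envelope sketch, using the figures assumed above (260 historical scenarios, 19 vectors and roughly 10 KB per vector), makes the per-trade cost explicit:

```python
# Back-of-the-envelope sizing for the single equity option above.
SCENARIOS_PER_VECTOR = 260         # one year of daily historical scenarios (assumed)
VECTORS_PER_TRADE    = 19          # 9 full set + 5 reduced/current + 5 reduced/stressed
BYTES_PER_VECTOR     = 10 * 1024   # ~10 KB per stored vector (assumed)

pricing_calls = SCENARIOS_PER_VECTOR * VECTORS_PER_TRADE
storage_kb    = VECTORS_PER_TRADE * BYTES_PER_VECTOR / 1024

print(pricing_calls)  # 4940 pricing calls for this one trade
print(storage_kb)     # ~190 KB of vector storage for this one trade
```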

Based on 12 vectors per trade, consider an institution with 1 million trades. That gives a requirement for storing around 12 million PnL vectors for FRTB per day out of the calculation engine, prior to aggregation and calculation of the metrics. At the estimated 10 KB per vector, that comes to roughly 120 gigabytes per day of PnL vectors for FRTB. That is a lot of storage, especially when you start thinking about how long the data needs to be retained. The only realistic solution is something like Hadoop, as most other solutions will not scale, and MapReduce / Spark / Scala is really the key to solving many of the data processing and enrichment challenges that are likely to be encountered. Add the fact that you are moving 120 GB of data around every day (probably a lot more for bigger institutions) and you really need a solution that can scale.
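As a sketch of what this might look like on a Spark / Hadoop stack (the schema, paths and partitioning scheme here are illustrative assumptions, not a reference design), the day's vectors can be landed once on HDFS and then rolled up scenario by scenario at whatever level is needed, without re-pricing anything:

```python
# Minimal PySpark sketch. Assumed schema per row: trade_id, desk, cob_date,
# run_type, horizon, asset_class, pnl_vector (array of 260 scenario PnLs).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("frtb-pnl-vectors").getOrCreate()

raw = spark.read.parquet("hdfs:///frtb/pnl_vectors/raw/")  # hypothetical landing area

# Persist once, partitioned so downstream aggregation jobs only read what they need.
raw.write.mode("overwrite") \
   .partitionBy("cob_date", "run_type", "asset_class") \
   .parquet("hdfs:///frtb/pnl_vectors/curated/")

# Explode each vector into scenario rows and sum by scenario position:
# the same stored data can be rolled up at desk level, entity level, etc.
scenarios = raw.select(
    "desk", "run_type", "horizon", "asset_class",
    F.posexplode("pnl_vector").alias("scenario", "pnl"),
)
desk_level = scenarios.groupBy("desk", "run_type", "horizon", "asset_class", "scenario") \
                      .agg(F.sum("pnl").alias("pnl"))
bank_level = scenarios.groupBy("run_type", "horizon", "asset_class", "scenario") \
                      .agg(F.sum("pnl").alias("pnl"))
```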

In truth the quantitative challenges are fairly trivial; this is a problem of getting the enterprise risk architecture right. Think about the sheer number of calculations and it is a pretty monumental task. You really need a grid-based calculation architecture that can parallelise and run the calculations as efficiently as possible. A point to note, which we will talk about in future articles, is that the calculation and the aggregation need to be completely separate. The key here is to calculate once and aggregate multiple times.
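As a toy illustration of the parallelisation point (a process pool stands in for a real compute grid or cluster scheduler here, and price_vector is a hypothetical placeholder for the actual pricer), the natural unit of work is a (trade, run type, horizon, bucket) tuple that can be scheduled independently:

```python
# Toy sketch: each PnL vector is an independent unit of work, so the millions of
# vectors per day can be farmed out across a grid. A process pool stands in for
# the grid scheduler.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def price_vector(task):
    """Hypothetical stand-in: reprice one trade under 260 historical scenarios for
    one (run type, horizon, asset-class bucket) combination and return the PnL vector."""
    trade_id, run_type, horizon, bucket = task
    return task, [0.0] * 260  # placeholder result; a real pricer call goes here

tasks = list(product(
    ["trade_001", "trade_002"],             # trade ids
    ["full", "reduced", "reduced_stress"],  # run types
    [10, 20, 40, 60, 120],                  # liquidity horizons
    ["Equity", "Interest Rates", "Total"],  # asset-class buckets (toy enumeration)
))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(price_vector, tasks))
    # 'results' is the calculate-once store; aggregation happens downstream.
```

Because each task carries everything it needs, the same decomposition works whether the scheduler is a process pool, an in-house compute grid or Spark itself.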

The costs of implementation will not be trivial, and no doubt many of the smaller institutions will elect to stay with the standardised approach. A good strategy, however, will use the spend in clever ways to future-proof the institution. Essentially this will lead many financial institutions to move towards the world of Big Data and away from the RDBMS for parts of their risk architecture. The key is to leverage the huge amount of data being stored and to try to gain insights along the lines of what companies like Amazon and Google do. Imagine a bank that is able to see what sort of trades its clients are drawn towards on a cyclical basis. That is valuable insight that can help banks promote their businesses, particularly in a brave new world where complex structured products take a back seat and most banks concentrate on simple flow business. The keys to success here will be customer service, great technology and a sales force that can add value. This is the opportunity that can be bolted onto a regulatory requirement that has to be fulfilled at vast expense.
