Risk Technology – Demystified
Originally published by the Center for Financial Professionals at https://www.cefpro.com/risk-technology-demystified/. Srikant Ganesan, Head of the Risk Solutions Practice at Riskfocus, talks about technology and risk management.
I graduated with a Master’s in Electrical and Computer Engineering, specializing in VLSI fabrication. After defending my thesis, I decided to take a break from chip design to experience the variety of building software applications. That is when Wall Street beckoned, and for the last 20 years I’ve worked in the Capital Markets industry. Most of my financial industry career has centered on developing and managing enterprise risk platforms for front- and back-office use cases. The evolution of technology in the past few years, and its application to resolving risk computation and visualization challenges, motivates me to seek innovation in this space.
Since the subprime crisis, regulators have made significant changes to how risk is calculated and reported across financial services organizations. These changes have increased the amount of data required for calculations and the sheer volume of results produced by risk platforms across multiple scenario configurations and time horizons. In addition, banks must store historical risk calculations for future inquiries from regulators, who demand quick access to historical reports and the ability to drill through to trade- and position-level information. Prior to the crisis, risk was primarily computed at the end of the trading day; since then, regulators have been demanding that financial organizations be able to view and report on their risk in near time or on demand. These regulatory demands, together with the frequency of regulatory risk reporting and the large data volumes, are often too taxing for existing risk technology infrastructures, preventing institutions from satisfying the regulators. Most existing platforms have to undergo significant refactoring, or a complete rewrite, to meet current demands and allow for future growth.
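To make the storage requirement concrete, here is a minimal sketch, in Python, of a result store partitioned by business date so that a historical report and its trade-level drill-through can be served from the same partition. The class name and record fields are invented for illustration:

```python
from collections import defaultdict

# Illustrative only: trade-level risk results partitioned by business date,
# so a historical regulator inquiry does not require rescanning everything.
class RiskResultStore:
    def __init__(self):
        self._by_date = defaultdict(list)  # business_date -> trade-level records

    def store(self, business_date, trade_id, desk, measure, value):
        self._by_date[business_date].append(
            {"trade_id": trade_id, "desk": desk, "measure": measure, "value": value}
        )

    def report(self, business_date, measure):
        # Aggregated historical view for a given date and risk measure.
        rows = self._by_date[business_date]
        return sum(r["value"] for r in rows if r["measure"] == measure)

    def drill_through(self, business_date, measure):
        # Trade-level detail behind the aggregate, from the same partition.
        return [r for r in self._by_date[business_date] if r["measure"] == measure]
```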
From our perspective, demystifying risk technology is a way of separating the plumbing of a risk platform from the value-add components, which are required to meet a business user’s requirements. By keeping the plumbing simple and static and focusing on the control points for the business user, IT teams can significantly reduce the time it takes to develop and go live with risk systems, while allowing the users to self-service their business needs.
During the subprime crisis, risk officers had to scramble to get exposures on specific issuers and counterparties. In most banks, such information is distributed across multiple risk calculation and aggregation platforms. To get all the required exposures from these systems into a central place, risk officers had to scrape information out of each system and aggregate it in Excel spreadsheets to produce reports. If the CRO or a regulator asked to see the report along more than one dimension, e.g. counterparty, issuer, desk and maturity, the entire process had to be repeated for each dimension. This proved highly inefficient at a time of crisis. Since then, regulators have demanded much faster reporting, requiring greater efficiency from banks’ risk technology infrastructures.
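To illustrate the alternative, the hypothetical snippet below consolidates exposures into one structured dataset, after which each additional reporting dimension is just another grouping rather than another scrape-and-rebuild exercise. The column names and figures are invented:

```python
import pandas as pd

# Hypothetical exposure extract consolidated from several risk systems.
exposures = pd.DataFrame({
    "counterparty": ["CP1", "CP1", "CP2", "CP2"],
    "issuer":       ["IssA", "IssB", "IssA", "IssB"],
    "desk":         ["Rates", "Credit", "Rates", "Credit"],
    "maturity":     ["2Y", "5Y", "2Y", "10Y"],
    "exposure":     [10.0, 25.0, 7.5, 12.0],  # USD millions
})

# Each new view is a regrouping of the same dataset, not a fresh scrape.
for dims in (["counterparty"], ["issuer"], ["desk", "maturity"]):
    print(exposures.groupby(dims)["exposure"].sum(), "\n")
```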
Some of the complexities of building efficient risk platforms are centered on the following business and regulatory requirements:
Near Time Risk: Shifting focus from a T+1 risk view to a near time or on-demand risk view. Risk managers and traders are demanding an up-to-the-minute view of their risk exposures. The complexities for an IT team revolve around refactoring the T+1 system to work for near time use cases. Depending on the legacy architecture, this can be a herculean effort involving a complete rewrite of the existing platform.
Risk Aggregation: Capital Markets organizations have multiple risk systems across their various trading desks. One of the main issues during the subprime crisis was the inability of CROs and risk managers to obtain a single view of risk across these silos. Introducing a cross-desk risk aggregation layer provides multi-dimensional analysis capabilities and the ability to load and aggregate risk results in real time.
Risk Visualization: With the advent of big data, low-cost memory, and scalable compute farms, organizations are producing petabytes of scenario analysis data daily. It is now possible to aggregate enormous amounts of data and display it in intuitive dashboards, which provides a much more meaningful view of an institution’s positions than thousands of rows in Excel. The challenge is to provide drill-through capability from an aggregated view by balancing in-memory and hard disk storage for timely access to the data; the sketch following this list illustrates the pattern.
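The following sketch, with hypothetical names and record shapes, ties these three requirements together: results arrive incrementally rather than in a T+1 batch, cross-desk aggregates are maintained in memory, and trade-level detail is retained for drill-through:

```python
from collections import defaultdict

# Illustrative cross-desk aggregation layer (all names are hypothetical).
class CrossDeskAggregator:
    def __init__(self):
        self.totals = defaultdict(float)  # (desk, counterparty) -> exposure
        self.detail = defaultdict(list)   # (desk, counterparty) -> trade records

    def on_result(self, desk, counterparty, trade_id, exposure):
        # Incremental, near time update -- no end-of-day batch required.
        key = (desk, counterparty)
        self.totals[key] += exposure
        self.detail[key].append((trade_id, exposure))

    def firm_wide(self, counterparty):
        # Single view of risk across desk silos.
        return sum(v for (_, c), v in self.totals.items() if c == counterparty)

    def drill_through(self, desk, counterparty):
        # From the aggregate down to the contributing trades.
        return self.detail[(desk, counterparty)]
```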
Liquidity Risk has shot into prominence since the subprime crisis because of the issues banks had with determining their ability to obtain funding and the tradability of their assets in a turbulent market. As part of Basel III, the Basel Committee on Banking Supervision (BCBS) introduced the Liquidity Coverage Ratio (LCR) and Net Stable Funding Ratio (NSFR) measures for banks to incorporate into their liquidity requirements. The EBA published European versions of the LCR and NSFR that banks have to compute in addition to the BCBS versions. Such constant changes to calculation methodologies and reporting templates require banks’ liquidity risk technology infrastructures to be nimble, efficient and scalable to enable quick turnaround times. Regulators are also cutting liquidity risk reporting cycles from monthly to daily, e.g. banks must report their intraday liquidity usage and the availability of liquidity at the start of each business day. A bank without a flexible risk technology platform will be unable to respond to regulators in a timely fashion. In the long run, a well-architected design helps minimize the total cost of ownership of such platforms.
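To make the two measures concrete: the LCR compares a bank’s stock of high-quality liquid assets against its total net cash outflows over the next 30 calendar days, while the NSFR compares available stable funding against required stable funding; both must reach at least 100% under the fully phased-in rules. A worked sketch with invented balance-sheet figures:

```python
# Basel III liquidity ratios, with hypothetical balance-sheet figures.
# LCR  = high-quality liquid assets / total net cash outflows over 30 days
# NSFR = available stable funding / required stable funding

hqla = 120.0                      # stock of high-quality liquid assets (USD bn)
net_outflows_30d = 100.0          # net cash outflows over next 30 calendar days
available_stable_funding = 450.0
required_stable_funding = 400.0

lcr = hqla / net_outflows_30d
nsfr = available_stable_funding / required_stable_funding

print(f"LCR:  {lcr:.0%}  (minimum 100%)")   # 120%
print(f"NSFR: {nsfr:.0%}  (minimum 100%)")  # 112%
```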
The right technical architecture facilitates a quick response to regulatory demands in the following ways:
- Scaling across large data volumes and a variety of scenarios
- Separating the sourcing and computing of large amounts of cash flow data from best-of-breed visualization tools and techniques
This type of “componentized” enterprise architecture provides the ability to produce the tools that risk managers need to analyze risk measures across scenarios, time bucket and cash ladder dimensions.
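As a hypothetical illustration of the time bucket and cash ladder dimensions, the sketch below buckets projected cash flows and cumulates them to expose funding gaps; the bucket boundaries, dates and amounts are invented:

```python
import bisect
from collections import defaultdict
from datetime import date

# Illustrative cash ladder: bucket projected cash flows into standard
# time buckets, then cumulate to reveal funding gaps over the horizon.
BUCKETS = [1, 7, 30, 90, 365]            # bucket upper bounds in days
LABELS = ["O/N", "1W", "1M", "3M", "1Y"]

def bucket_label(as_of, flow_date):
    days = (flow_date - as_of).days
    i = bisect.bisect_left(BUCKETS, days)
    return LABELS[min(i, len(LABELS) - 1)]

as_of = date(2024, 1, 2)
flows = [(date(2024, 1, 3), 50.0),       # (value date, net cash flow)
         (date(2024, 1, 20), -80.0),
         (date(2024, 3, 1), 30.0)]

ladder = defaultdict(float)
for flow_date, amount in flows:
    ladder[bucket_label(as_of, flow_date)] += amount

running = 0.0
for label in LABELS:
    running += ladder[label]
    print(f"{label:>4}: net {ladder[label]:8.1f}  cumulative {running:8.1f}")
```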
Our vision of an enterprise risk architecture involves the following layers:
- Infrastructure Layer – This layer integrates Trades, Positions, Market Data and Reference Data from golden sources into the risk platform.
- Compute Layer – This layer consists of one or more risk engines within a compute farm responsible for the calculation of risk measurements such as PVs, sensitivities and PnL vectors.
- Aggregation Layer – This layer aggregates the results computed in the compute layer to provide multi-dimensional analysis capabilities for business users.
- Visualization Layer – This layer allows for building custom visualization tools with drill-through capabilities.
The above components can be integrated to provide front-to-back risk solutions for Market, Credit and Liquidity Risk teams. In most cases, one or more layers can be integrated into existing risk architectures to improve performance and efficiency.
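As a rough sketch, not a prescribed API, the four layers might be expressed as interfaces so that any one of them can be swapped into an existing architecture independently of the others. All names and signatures below are illustrative:

```python
from typing import Iterable, Protocol

class InfrastructureLayer(Protocol):
    def load_inputs(self, business_date: str) -> Iterable[dict]:
        """Trades, positions, market and reference data from golden sources."""
        ...

class ComputeLayer(Protocol):
    def compute(self, inputs: Iterable[dict]) -> Iterable[dict]:
        """Risk measures (PVs, sensitivities, PnL vectors) from a compute farm."""
        ...

class AggregationLayer(Protocol):
    def aggregate(self, results: Iterable[dict], dims: list) -> dict:
        """Multi-dimensional rollups for business users."""
        ...

class VisualizationLayer(Protocol):
    def render(self, aggregates: dict) -> None:
        """Dashboards with drill-through to the underlying detail."""
        ...

def run_pipeline(infra: InfrastructureLayer, compute: ComputeLayer,
                 agg: AggregationLayer, viz: VisualizationLayer,
                 business_date: str) -> None:
    # Front-to-back flow; each layer can be replaced without
    # disturbing the others.
    results = compute.compute(infra.load_inputs(business_date))
    viz.render(agg.aggregate(list(results), dims=["desk", "counterparty"]))
```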
Srikant Ganesan, Head of Risk Solutions Practice, Riskfocus
From designing microwave transistors for Air Force landing systems to using innovative technologies in Structured Credit business solutions, Srikant brings 18 years of experience in Hardware and Software in Telecoms, Networking and Capital Markets to the leading solutions we offer our clients.