Value at Risk (VaR)

The Value at Risk (VaR) is a risk measure that estimates the maximum loss that can be expected, with a given confidence level, over a certain horizon (K trading days). If p denotes the tail probability, we are (1 - p)*100% confident that losses will not exceed the VaR over the K-day horizon. In statistical terms, that is: Probability (Loss > VaR) = p. For example, a one-day 95% VaR of $1 million means there is a 5% chance of losing more than $1 million tomorrow.

In order to compute this probability, we need to understand how financial returns behave, or in other words what their distribution looks like. It is therefore important to mention a few tendencies (or stylised facts) of daily financial returns:
It is very difficult to predict returns from their own past
Daily returns have a high probability of large losses (and gains), because of the presence of crises and large fluctuations
Presence of volatility clustering, which makes volatility correlated with its own past levels: high volatility in the returns tends to be followed by high volatility and, vice versa, low volatility tends to be followed by low volatility
Presence of the leverage effect, i.e. a negative correlation between volatility and returns, which implies that an increase in volatility is normally associated with a drop in price
In general, there are three ways to simulate the distribution of tomorrow's financial returns; specifically:
Historical or Weighted Historical Simulation
Parametric (or Analytical) Simulation
Monte Carlo Simulation
Historical or Weighted Historical Simulation
The historical simulation approach is based on the assumption that financial returns will be distributed as they have been in the past. In practice one should: gather the past m daily returns; order the returns from the largest loss to the largest gain; the historical VaR is then simply the percentile that corresponds to the tail probability p (if the confidence level is 95%, we take the 5th percentile). The weighted historical simulation approach assigns a decreasing probability weight to each of the m past observations, in an attempt to give more importance to the recent past. After the assignment of the weights, the procedure is identical to the historical simulation; a sketch of both variants follows.
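As a minimal sketch of the procedure just described, assuming NumPy is available (the function names historical_var and exponential_weights, and the decay factor eta = 0.99, are illustrative choices, not prescribed by the method):

```python
import numpy as np

def historical_var(returns, p=0.05, weights=None):
    """Historical (or weighted historical) VaR at tail probability p.

    returns : 1-D array of past daily returns, most recent last.
    weights : optional probability weights of the same length;
              if None, all m observations are weighted equally.
    """
    returns = np.asarray(returns, dtype=float)
    if weights is None:
        # Plain historical simulation: the VaR is the p-th empirical
        # quantile of past returns, reported as a positive loss.
        return -np.quantile(returns, p)
    # Weighted historical simulation: sort returns from the largest
    # loss upward and accumulate weights until probability p is reached.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    order = np.argsort(returns)          # smallest return = largest loss first
    cum = np.cumsum(weights[order])
    idx = np.searchsorted(cum, p)
    return -returns[order][idx]

def exponential_weights(m, eta=0.99):
    """Weights eta**age that decay with the age of each observation."""
    ages = np.arange(m - 1, -1, -1)      # most recent observation has age 0
    w = eta ** ages
    return w / w.sum()

# Example: 95% one-day VaR from two years of simulated returns.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=500)
print(historical_var(r, p=0.05))
print(historical_var(r, p=0.05, weights=exponential_weights(len(r))))
```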
This technique is widely adopted in the financial markets, especially for its simple implementation and its model-free nature. In fact, as a report of McKinsey & Company ("McKinsey Working Papers on Risk, Number 32, Managing market risk: Today and tomorrow") shows: "Seventy-two percent of banks use these techniques. Of these, 85 percent apply equal weighting to the chosen time span (of which about half use a one-year time span and half use a two- to five-year time span). Only 15 percent apply some sort of weighting." However, this method has serious downsides. In particular, the choice of the number of observations m is delicate: if m is too small, the sample will not include relevant data, but if it is too large, the most relevant (recent) observations will carry little weight. This drawback is amplified when we compute the VaR for horizons longer than one day ahead, because of the large number of observations needed. Another very important problem with historical simulation is that it reacts too slowly to changes in the markets and, in particular, it ignores the volatility clustering effect discussed above.
Parametric or Analytical Simulation
The parametric simulation approach to VaR is based on the assumption that returns follow a specific parametric distribution, such as the Normal distribution or the Student’s t distribution.
We assume that daily financial returns are described as: Rt = σt zt, where zt follows a standard Normal distribution with mean 0 and variance 1.
Hence the parametric Value at Risk is defined as: VaRt+1 = -σt+1 F-1(p), where F-1(p), the inverse of the cumulative distribution function F, returns the number below which a probability mass of p lies; in this particular case, F is the standard Normal distribution. As you may have noticed, we need the volatility forecast (σt+1) for this particular type of simulation; without going into too much detail, we would like to mention that in the industry NGARCH models are used to model the volatility because they allow for all the stylised facts mentioned above. The RiskMetrics model σ2t+1 = 0.94 σ2t + 0.06 R2t, developed by J.P. Morgan, constitutes a specific type of GARCH.
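A minimal sketch of the parametric VaR under the RiskMetrics recursion above, assuming NumPy and SciPy are available (the function names, and initialising the recursion at the sample variance, are illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def riskmetrics_variance(returns, sigma2_0=None, lam=0.94):
    """One-step-ahead variance forecast from the RiskMetrics recursion
    sigma2_{t+1} = lam * sigma2_t + (1 - lam) * R_t**2."""
    returns = np.asarray(returns, dtype=float)
    # Illustrative choice: start the recursion at the sample variance.
    sigma2 = returns.var() if sigma2_0 is None else sigma2_0
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2

def parametric_var(returns, p=0.05):
    """Parametric (Normal) VaR: VaR_{t+1} = -sigma_{t+1} * F^{-1}(p)."""
    sigma = np.sqrt(riskmetrics_variance(returns))
    return -sigma * norm.ppf(p)   # norm.ppf is F^{-1}; norm.ppf(0.05) ~ -1.645

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=500)
print(parametric_var(r, p=0.05))
```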
The choice of the distribution and of the model used to forecast the volatility are key for this type of simulation. We want to highlight that the Normal distribution is not adequate for modelling daily financial returns: it does not account for the presence of crises and large fluctuations, because it places too little mass in the tails. The so-called heavy-tailed distributions (like the Student’s t) are a better fit for this purpose; however, for sufficiently heavy-tailed distributions another problem arises: the VaR becomes a non-coherent risk measure (it can fail to be sub-additive) and therefore becomes less adequate for estimating the potential losses on a portfolio.
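To see the effect of the tails concretely, the snippet below compares the 1% quantile multipliers of the standard Normal and of a Student's t with 4 degrees of freedom (an illustrative choice), standardised to unit variance so the two are comparable:

```python
from scipy.stats import norm, t

p = 0.01
nu = 4                                  # degrees of freedom (illustrative)
z_normal = -norm.ppf(p)                 # ~2.33
# Standardise the t quantile to unit variance: a Student's t with
# nu degrees of freedom has variance nu / (nu - 2).
z_t = -t.ppf(p, df=nu) * ((nu - 2) / nu) ** 0.5
print(z_normal, z_t)                    # the t multiplier is larger (~2.65)
```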
Monte Carlo Simulation
The Monte Carlo simulation relies on the generation of artificial returns drawn from a certain distribution, such as the Normal. If financial returns are described as Rt = σt zt, then zt is generated a large number of times, for example 10,000. In this way we obtain 10,000 draws of zt, each of which generates a return Rt. To forecast the volatility we still use an NGARCH or some other model; for horizons beyond one day, the volatility is updated along each simulated path, so each draw carries its own volatility forecast. To compute the VaR with this methodology we therefore take the 10,000 simulated returns and, as in the historical simulation, sort them from the smallest to the largest; the VaR is the return corresponding to the p% level.
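A minimal one-day sketch of this procedure, assuming the volatility forecast for tomorrow (here called sigma_next) is already available, e.g. from an NGARCH model or the RiskMetrics recursion above:

```python
import numpy as np

def monte_carlo_var(sigma_next, p=0.05, n_draws=10_000, seed=0):
    """Monte Carlo VaR under R_{t+1} = sigma_{t+1} * z, z ~ N(0, 1).

    sigma_next : one-day-ahead volatility forecast (taken as given).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_draws)    # 10,000 shocks by default
    simulated = sigma_next * z          # simulated returns for t+1
    # As in historical simulation: sort the draws and take the
    # p-th quantile, reported as a positive loss.
    return -np.quantile(simulated, p)

print(monte_carlo_var(sigma_next=0.012, p=0.05))
```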
Again, in this case we need to be careful about which distribution we decide to draw our returns from.