Calibrating financial models is an essential part of every valuation and is also the first step of an XVA calculation. Various calibration techniques are used in practice, many of which involve numerically minimizing an error function to find the model parameters that best fit a given set of market instruments. These optimizations typically require the gradient or the Hessian matrix of the error function, i.e. its first- and second-order derivatives. This information is needed hundreds or thousands of times at different points during a calibration, so a fast sensitivity calculation is important. Moreover, the accuracy of the sensitivities determines the quality of the result: more accurate gradients reduce the number of iterations the optimizer needs to converge. Algorithmic Differentiation (AD) makes it possible to compute these sensitivities quickly and accurately, offering a significant improvement over the traditional bump-and-revalue approach (finite differences).
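As a minimal sketch of the contrast drawn above, the snippet below differentiates a toy calibration error function both with forward-mode AD (implemented via a small dual-number class) and with central finite differences. The model, quotes, and function names are illustrative assumptions, not taken from the paper; production code would use a full AD tool, typically in adjoint (reverse) mode.

```python
import math

# Minimal forward-mode AD via dual numbers: carry (value, derivative)
# pairs through the computation. Illustrative sketch only.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model_price(sigma, T):
    # hypothetical one-parameter pricing function
    return sigma * math.sqrt(T) * 0.4

def error(sigma, quotes):
    # sum of squared differences between model prices and market quotes
    e = Dual(0.0) if isinstance(sigma, Dual) else 0.0
    for T, q in quotes:
        d = model_price(sigma, T) - q
        e = e + d * d
    return e

quotes = [(1.0, 0.08), (2.0, 0.11), (4.0, 0.16)]
sigma0 = 0.2

# AD gradient: seed the derivative of sigma with 1.0 and read it off the result
grad_ad = error(Dual(sigma0, 1.0), quotes).der

# bump-and-revalue gradient: two extra full revaluations per parameter
h = 1e-6
grad_fd = (error(sigma0 + h, quotes) - error(sigma0 - h, quotes)) / (2 * h)
```

The AD gradient is exact to machine precision and needs no bump-size tuning, whereas the finite-difference result trades truncation error against round-off error through the choice of `h`, and its cost grows with the number of parameters.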

This paper discusses the benefits of using AD for financial model calibration, shows practical examples, and highlights the cost-benefit trade-off of AD versus bump-and-revalue. It further details how to apply the implicit function theorem to calculate sensitivities to market parameters in an XVA context, given the sensitivities to the model parameters.

  • Sensitivity calculation in model calibration
  • Developing user-friendly AD code
  • Coping with the memory footprint
  • Second order derivatives
  • Performance considerations
  • Cost-benefit trade-off: AAD vs. bumping
  • Implicit function theorem
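The implicit-function-theorem step mentioned above can be sketched as follows; the symbols $\theta$, $m$, and $F$ are my notation, not necessarily the paper's. Suppose calibration determines the model parameters $\theta^*(m)$ from market quotes $m$ through a calibration condition $F(\theta, m) = 0$ (for a least-squares calibration, $F$ is the gradient of the error function in $\theta$). Then, provided $\partial F / \partial \theta$ is invertible,

$$
\frac{d\theta^*}{dm} = -\left(\frac{\partial F}{\partial \theta}\right)^{-1} \frac{\partial F}{\partial m},
$$

so the sensitivity of an XVA value $V(\theta^*(m))$ to the market quotes follows by the chain rule,

$$
\frac{dV}{dm} = \frac{\partial V}{\partial \theta} \frac{d\theta^*}{dm},
$$

without re-running the calibration for each bumped market input.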