# A New Resolution for the Market’s Classic Smile Problem

The smile calibration problem is a mathematical conundrum in finance that has challenged quantitative analysts for decades. Through my research, I have discovered a novel resolution to this classic problem.

Those outside the arena of quantitative analysis might ask: why is the smile calibration problem so important? The answer is short, sweet, and exciting: by calibrating the smile, we move closer to accurate pricing of available market options.

Bruno Dupire offered the first solution in 1994 using the concept of local volatility, and both the model and the problem are now fairly well understood. The market smile has been addressed several times since from the more classical mathematical angle, using partial differential equations (PDEs). In 2011, Julien Guyon and Pierre Henry-Labordère offered another approach: the particle method. This latter work has become the benchmark, particularly for high-dimensional problems.

But the smile calibration problem continues to confound. We can tackle the maths numerically. More challenging are the theoretical questions, such as: when is this problem well-posed? And what do we do when the problem has neither a numerical nor a theoretical solution?

I have followed the lead of Guyon and Henry-Labordère, building a new approach that is more robust and has no hyperparameters. It provides closed-form formulas that make the algorithm more stable in production. What’s more, my method is not much more complicated than the particle method.

Let’s start with a Black–Scholes-type stochastic volatility model with a deterministic interest rate. Here, we have the leverage function, which I consider the secret ingredient: calibrating this function is what makes the model reproduce the smile we observe in the market.

The interest rate and the volatility only need to be adapted to the d-dimensional Brownian motion. To envision this, think of a two-factor Bergomi model with Vasicek interest rates, or a two-factor Vasicek interest rate model with as many factors as you want in the volatility.

So, one starts with a Black–Scholes-type model, in which the underlying (S) is driven by a standard Brownian motion (W):
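In symbols, the dynamics take the standard stochastic local volatility form (the symbol σ for the leverage function is our notation):

$$
\frac{dS_t}{S_t} = r_t\,dt + \sigma(t, S_t)\,\sqrt{V_t}\,dW_t
$$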

Here, V and r represent the stochastic volatility and the interest rate, respectively. The task at hand is to find the leverage function. The probabilistic representation of the solution is well known and is given by the following, with ZCB being the zero-coupon bond and C the market call option price:
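In the usual notation, that representation reads (a sketch of the standard result; σ_Dup denotes the Dupire local volatility extracted from the ZCB and C):

$$
\sigma(t, K)^2 = \frac{\sigma_{\mathrm{Dup}}(t, K)^2}{\mathbb{E}\left[\,V_t \mid S_t = K\,\right]}
$$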

where
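one common form of the Dupire local variance, assuming a deterministic rate r and no dividends, is:

$$
\sigma_{\mathrm{Dup}}(T, K)^2 = \frac{\partial_T C(T, K) + r_T\,K\,\partial_K C(T, K)}{\tfrac{1}{2}\,K^2\,\partial_K^2 C(T, K)}
$$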

The PDE, based on the forward Kolmogorov equation, allows the problem to be solved numerically. However, the problem often runs into the so-called “curse of dimensionality,” which makes it computationally expensive to solve in a reasonable amount of time. To overcome this, Guyon and Henry-Labordère proposed a Monte Carlo simulation-based algorithm that beats the curse. Their main contribution was to estimate the conditional expectations above using a Nadaraya–Watson kernel regression, which is the particle method:
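The estimator in question takes the familiar Nadaraya–Watson form (with x the evaluation point, K the kernel, h the bandwidth, and i indexing the N simulated particles):

$$
\widehat{\mathbb{E}}\left[\,V_t \mid S_t = x\,\right]
= \frac{\sum_{i=1}^{N} V_t^{i}\, K_h\!\left(S_t^{i} - x\right)}{\sum_{i=1}^{N} K_h\!\left(S_t^{i} - x\right)},
\qquad K_h(u) = \frac{1}{h}\,K\!\left(\frac{u}{h}\right)
$$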

The success of this approach lies in its universality and simplicity. In other words, the model just needs to be simulated. There has, however, been quite a bit of criticism concerning the sensitivity of this approach to the kernel function K, and the bandwidth parameter h.
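To make the sensitivity to K and h concrete, here is a minimal Python sketch of the Nadaraya–Watson estimator (the Gaussian kernel, the function name, and the toy particle cloud are illustrative assumptions, not part of the original method):

```python
import numpy as np

def nadaraya_watson(S, V, x, h):
    """Kernel estimate of E[V | S = x] from a particle cloud (S_i, V_i).

    The choice of kernel (here Gaussian) and of the bandwidth h are exactly
    the hyperparameters the particle method has been criticised for.
    """
    w = np.exp(-0.5 * ((S - x) / h) ** 2)  # Gaussian kernel weights
    return float(np.sum(w * V) / np.sum(w))

# Toy particle cloud in which the variance depends on the spot:
# E[V | S = x] = 0.04 + 0.1 * (x - 1)^2, plus small noise.
rng = np.random.default_rng(0)
S = rng.lognormal(mean=0.0, sigma=0.2, size=50_000)
V = 0.04 + 0.1 * (S - 1.0) ** 2 + 0.005 * rng.standard_normal(S.size)

est = nadaraya_watson(S, V, x=1.0, h=0.05)  # close to the true value 0.04
```

Re-running the last line with a larger bandwidth (say h = 0.2) visibly shifts the estimate: the smoothing bias is what makes the tuning of h delicate in practice.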

The contribution we make to this problem is to keep the essence of the particle method — universality and simplicity — whilst removing the sources of criticism: kernel and bandwidth.

Next, we will give a simple, visual explanation of how the algorithm works:

The blue densities in the picture above represent our model, and the green represent what we observe in the market. Simply put, our job is to transform the blue density into the green. To do that, we move forward in time. At each transformation, we ask, “how much do I need to move my blue density to match the green market density?”
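As a rough sketch of one such transformation step, here is the classic particle-method update in Python (the kernel estimator below is the standard stand-in; the closed-form replacement for it is the subject of the paper, and all names here are illustrative):

```python
import numpy as np

def calibrate_leverage_step(S, V, strikes, sigma_dup, h=0.05):
    """One calibration step of a particle-style loop (illustrative sketch).

    For each strike K, estimate E[V_t | S_t = K] from the particle cloud
    (here with a Gaussian kernel, as in the classic particle method) and
    choose the leverage so that leverage(K)^2 * E[V_t | S_t = K] matches
    the market's Dupire variance sigma_dup(K)^2.
    """
    leverage = np.empty_like(sigma_dup)
    for j, K in enumerate(strikes):
        w = np.exp(-0.5 * ((S - K) / h) ** 2)  # kernel weights
        cond_var = np.sum(w * V) / np.sum(w)   # E[V_t | S_t = K]
        leverage[j] = sigma_dup[j] / np.sqrt(cond_var)
    return leverage

# Sanity check: with V identically 1, the model is pure Black-Scholes,
# so the calibrated leverage must coincide with the Dupire volatility.
rng = np.random.default_rng(1)
S = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)
V = np.ones_like(S)
strikes = np.array([0.9, 1.0, 1.1])
sigma_dup = np.array([0.25, 0.20, 0.22])
leverage = calibrate_leverage_step(S, V, strikes, sigma_dup)  # == sigma_dup
```

In a full loop, the particles would then be advanced one time step under the newly calibrated leverage before repeating the update at the next maturity.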

That question is answered by the formula we created.

To finish up, we will show a few examples of the algorithm benchmarked against previous methods, as well as some variations that we present in more detail in the paper (SSRN: 3461545):

The above example shows how our estimator solves the smile problem where the Guyon–Henry-Labordère method fails, most likely due to a misspecified kernel function or bandwidth parameter. Our approach may seem bold, but that’s by design. The beauty of our approach is that it allows us to safely implement the routine in a production environment without any ambiguity or hyperparameter risk. Below, we show a few more examples of our method’s outstanding performance.

SX5E 2019/03/19

SPX 2011/09/06

**Bibliography**

Dupire, B. “Pricing with a smile.” *Risk Magazine*, 1994.

Guyon, J., and P. Henry-Labordère. “Being particular about calibration.” *Risk Magazine*, 2012.

Lipton, A. “The vol smile problem.” *Risk Magazine*, 2002.


*Aitor Muguruza is the Head of Quantitative Modelling and Data Analytics at Kaiju Capital Management and a visiting lecturer at Imperial College London. He holds a PhD in Mathematics from Imperial College. Aitor was the recipient of Risk Magazine’s Rising Star Award in Quantitative Finance in 2020 for the seminal paper “Deep Learning Volatility”.*