
Classical Optimizers


What is an optimizer?

Victoria Lipinska tells us about classical optimizers, and how they function as part of VQE.

You will hear about a few example optimizers and how they perform in the presence and absence of noise.


Coding a classical optimizer

In the previous lessons, you learned to make a Hamiltonian suitable for use on a quantum computer and how to make a variational circuit. You also learned that the variational circuit (or ansatz) contains parameters to be varied, and the optimal choice of parameters is whatever yields the lowest possible cost function or energy. Thus, our problem is reduced to searching the parameter space for the optimal set. Most of the work in classical optimizers has been done for us, as excellent optimizers are available from several sources.

In this lesson you will learn:

  • How classical optimizers fit into a VQE calculation
  • What classical optimizers are available from SciPy
  • What optimizers are not yet available through SciPy, and how to supplement them in the meantime using qiskit.algorithms
  • What options are available for these optimizers, and their significance for quantum computing

SciPy is a free, open-source Python library with packages relevant to many areas of scientific computing, including optimization. In particular, SciPy has an optimization package that includes minimize:

from scipy.optimize import minimize
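
To see how minimize is called in isolation, here is a minimal sketch that finds the minimum of a simple quadratic function. The function and starting point are illustrative only and stand in for the quantum chemistry cost function discussed below.

    from scipy.optimize import minimize

    # Illustrative only: a simple quadratic with its minimum at x = [1, 2]
    def toy_cost(x):
        return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

    result = minimize(toy_cost, x0=[0.0, 0.0], method="cobyla")
    print(result.x)  # approximately [1, 2]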

The minimize function has several arguments; the most relevant for quantum chemistry are:

  • The cost function (cost_func). This is related to the Hamiltonian, but also includes some complexities, such as computing the expectation value by using Estimator, and, in the case of excited-state calculations, possibly orthogonality conditions. A sketch of such a function appears after this list.
  • An initial state (x0) for the system, often the Hartree-Fock state
  • Other arguments, including arguments of the cost function itself
  • The method, set to the classical optimizer you select
  • Options for the classical optimizer (not to be confused with the Session options discussed in the next section)
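
Before looking at the full call, here is a sketch of what cost_func might look like for a ground-state calculation. Treat it as an illustration rather than the lesson's exact implementation: it assumes the V1 Estimator primitive interface, estimator.run(circuits, observables, parameter_values), and that ansatz (a parameterized circuit) and hamiltonian (for example, a SparsePauliOp) are already defined.

    def cost_func(params, ansatz, hamiltonian, estimator):
        """Estimate the expectation value of the Hamiltonian for the given ansatz parameters."""
        # Illustration only: assumes the V1 Estimator primitive interface.
        job = estimator.run(ansatz, hamiltonian, parameter_values=params)
        return job.result().values[0]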

Some example code is shown below. We restrict our discussion here to the last two arguments.

res = minimize(
    cost_func,
    x0,
    args=(ansatz, hamiltonian, estimator),
    method="cobyla",
    options={"maxiter": 200},
)

SciPy has documentation on all the available minimize methods. Here are a few noteworthy examples, all of which are methods for minimizing a scalar function of one or more variables:

  • cobyla: Constrained Optimization BY Linear Approximation (COBYLA) algorithm.
  • slsqp: Sequential Least Squares Programming (SLSQP) algorithm.
  • nelder-mead: Nelder-Mead algorithm.

These, and most of the other available classical optimization algorithms, are local minimizers: they search for a minimum in the neighborhood of the starting point rather than guaranteeing a global one. These algorithms share several options, but with subtle differences. For example, all of them let you specify a maximum number of iterations using the 'maxiter': 200 notation from above. All of them also offer a stopping criterion based on function or parameter values, though the exact criteria differ from algorithm to algorithm. COBYLA, for example, lets you specify a tolerance (for example, 'tol': 0.0001) that serves as a lower bound on the size of its trust region. In comparison, SLSQP lets you specify a precision goal for the value of the cost function used in its stopping criterion ('ftol'). Nelder-Mead lets you specify a tolerance in the difference between successive parameter guesses x ('xatol'), a tolerance in the difference between successive values of the cost function f(x) ('fatol'), or both. For a complete list of available algorithms and options, visit SciPy's minimize documentation.
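
To make these differences concrete, the sketch below shows how the stopping-criterion options described above might be passed for each method; the numerical values are illustrative only, and cost_func, x0, ansatz, hamiltonian, and estimator are assumed from the earlier snippets.

    # Illustrative tolerances only; each method recognizes its own option names.
    res_cobyla = minimize(cost_func, x0, args=(ansatz, hamiltonian, estimator),
                          method="cobyla", options={"maxiter": 200, "tol": 0.0001})

    res_slsqp = minimize(cost_func, x0, args=(ansatz, hamiltonian, estimator),
                         method="slsqp", options={"maxiter": 200, "ftol": 1e-6})

    res_nm = minimize(cost_func, x0, args=(ansatz, hamiltonian, estimator),
                      method="nelder-mead",
                      options={"maxiter": 200, "xatol": 0.0001, "fatol": 0.0001})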
