Examples and applications
During this lesson, we'll explore some variational algorithm examples and how to apply them:
- How to write a custom variational algorithm
- How to apply a variational algorithm to find minimum eigenvalues
- How to utilize variational algorithms to solve application use cases
Note that the Qiskit patterns framework can be applied to all the problems we introduce here. However, to avoid repetition, we will only explicitly call out the framework steps in one example case, run on real hardware.
Problem definitions
Imagine that we want to use a variational algorithm to find the eigenvalues of the following observable:

$$\hat{O}_1 = 2\,II - 2\,XX + 3\,YY - 3\,ZZ$$

This observable has the following eigenvalues:

$$\lambda_0 = -6, \quad \lambda_1 = \lambda_2 = 4, \quad \lambda_3 = 6$$

And eigenstates:

$$|\psi_0\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}, \quad |\psi_1\rangle = \frac{|00\rangle - |11\rangle}{\sqrt{2}}, \quad |\psi_2\rangle = \frac{|01\rangle - |10\rangle}{\sqrt{2}}, \quad |\psi_3\rangle = \frac{|01\rangle + |10\rangle}{\sqrt{2}}$$
from qiskit.quantum_info import SparsePauliOp
observable_1 = SparsePauliOp.from_list([("II", 2), ("XX", -2), ("YY", 3), ("ZZ", -3)])
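Since this observable acts on only two qubits, we can quickly confirm the eigenvalues above with NumPy (a sanity check, not part of the variational workflow itself):

import numpy as np

# Exact spectrum of the 4x4 matrix representation (ascending order)
print(np.linalg.eigvalsh(observable_1.to_matrix()))  # [-6., 4., 4., 6.]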
Custom VQE
We'll first explore how to construct a VQE instance manually to find the lowest eigenvalue of $\hat{O}_1$. This will incorporate a variety of techniques that we have covered throughout this course.
def cost_func_vqe(params, ansatz, hamiltonian, estimator):
"""Return estimate of energy from estimator
Parameters:
params (ndarray): Array of ansatz parameters
ansatz (QuantumCircuit): Parameterized ansatz circuit
hamiltonian (SparsePauliOp): Operator representation of Hamiltonian
estimator (Estimator): Estimator primitive instance
Returns:
float: Energy estimate
"""
pub = (ansatz, hamiltonian, params)
cost = estimator.run([pub]).result()[0].data.evs
return cost
from qiskit.circuit.library import TwoLocal
from qiskit import QuantumCircuit
import numpy as np
reference_circuit = QuantumCircuit(2)
reference_circuit.x(0)
variational_form = TwoLocal(
2,
rotation_blocks=["rz", "ry"],
entanglement_blocks="cx",
entanglement="linear",
reps=1,
)
raw_ansatz = reference_circuit.compose(variational_form)
raw_ansatz.decompose().draw("mpl")
Output:

We will start debugging on local simulators.
from qiskit.primitives import StatevectorEstimator as Estimator
from qiskit.primitives import StatevectorSampler as Sampler
estimator = Estimator()
sampler = Sampler()
We now set an initial set of parameters:
x0 = np.ones(raw_ansatz.num_parameters)
print(x0)
Output:
[1. 1. 1. 1. 1. 1. 1. 1.]
We can minimize this cost function to calculate optimal parameters:
# SciPy minimizer routine
from scipy.optimize import minimize
import time
start_time = time.time()
result = minimize(
cost_func_vqe,
x0,
args=(raw_ansatz, observable_1, estimator),
method="COBYLA",
options={"maxiter": 1000, "disp": True},
)
end_time = time.time()
execution_time = end_time - start_time
Output:
Normal return from subroutine COBYLA
NFVALS = 136 F =-6.000000E+00 MAXCV = 0.000000E+00
X = 1.741143E+00 9.605909E-01 1.570819E+00 2.115261E-05 1.898532E+00
1.243135E+00 6.062982E-01 6.062986E-01
result
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -5.999999982445723
x: [ 1.741e+00 9.606e-01 1.571e+00 2.115e-05 1.899e+00
1.243e+00 6.063e-01 6.063e-01]
nfev: 136
maxcv: 0.0
Because this toy problem uses only two qubits, we can check this by using NumPy's linear algebra eigensolver.
from numpy.linalg import eigvalsh
solution_eigenvalue = min(eigvalsh(observable_1.to_matrix()))
print(f"""Number of iterations: {result.nfev}""")
print(f"""Time (s): {execution_time}""")
print(
f"Percent error: {abs((result.fun - solution_eigenvalue)/solution_eigenvalue):.2e}"
)
Output:
Number of iterations: 136
Time (s): 0.26955747604370117
Percent error: 2.93e-09
As you can see, the result is extremely close to the ideal.
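We can also reconstruct the state the optimizer found and confirm it yields the same expectation value (a quick check using the exact statevector, not part of the algorithm):

from qiskit.quantum_info import Statevector

# Bind the optimal parameters and evaluate the energy exactly
optimized_state = Statevector(raw_ansatz.assign_parameters(result.x))
print(optimized_state.expectation_value(observable_1))  # should be close to -6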
Experimenting to improve speed and accuracy
Add reference state
In the previous example we did not use any reference operator $U_R$. Now let us think about how the ideal eigenstate $\frac{|00\rangle + |11\rangle}{\sqrt{2}}$ can be obtained. Consider the following circuit.
from qiskit import QuantumCircuit
ideal_qc = QuantumCircuit(2)
ideal_qc.h(0)
ideal_qc.cx(0, 1)
ideal_qc.draw("mpl")
Output:

We can quickly check that this circuit gives us the desired state.
from qiskit.quantum_info import Statevector
Statevector(ideal_qc)
Output:
Statevector([0.70710678+0.j, 0. +0.j, 0. +0.j,
0.70710678+0.j],
dims=(2, 2))
Now that we have seen what a circuit preparing the solution state looks like, it seems reasonable to use a Hadamard gate as a reference circuit, so that the full ansatz becomes:
reference = QuantumCircuit(2)
reference.h(0)
# Include barrier to separate reference from variational form
reference.barrier()
ref_ansatz = raw_ansatz.decompose().compose(reference, front=True)
ref_ansatz.draw("mpl")
Output:

For this new circuit, the ideal solution can be reached with all the parameters set to $0$, so this confirms that the choice of reference circuit is reasonable.
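A quick sanity check confirms this: with every parameter set to $0$ the rotation gates reduce to the identity, so only the Hadamard, the X gate (which leaves $|+\rangle$ unchanged), and the CX entangler act.

from qiskit.quantum_info import Statevector

zero_params = np.zeros(ref_ansatz.num_parameters)
# Should give the Bell state (|00> + |11>)/sqrt(2)
print(Statevector(ref_ansatz.assign_parameters(zero_params)))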
Now let us compare the number of cost function evaluations, optimizer iterations and time taken with those of the previous attempt.
import time
start_time = time.time()
ref_result = minimize(
cost_func_vqe, x0, args=(ref_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
Using our optimal parameters to calculate the minimum eigenvalue:
experimental_min_eigenvalue_ref = cost_func_vqe(
ref_result.x, ref_ansatz, observable_1, estimator
)
print(experimental_min_eigenvalue_ref)
Output:
-5.999999982445723
print("ADDED REFERENCE STATE:")
print(f"""Number of iterations: {result.nfev}""")
print(f"""Time (s): {execution_time}""")
print(
f"Percent error: {abs((experimental_min_eigenvalue_ref - solution_eigenvalue)/solution_eigenvalue):.2e}"
)
Output:
ADDED REFERENCE STATE:
Number of iterations: 136
Time (s): 0.3681156635284424
Percent error: 2.93e-09
Change the initial point
Now that we have seen the effect of adding the reference state, we will look into what happens when we choose different initial points $x_0$. In particular, we will use $x_0 = (0, 0, 0, 0, 6, 0, 0, 0)$ and $x_0 = (6, 6, 6, 6, 6, 6, 6, 6)$.
Remember that, as discussed when the reference state was introduced, the ideal solution would be found when all the parameters are $0$, so the first initial point should give fewer evaluations.
import time
start_time = time.time()
x0 = [0, 0, 0, 0, 6, 0, 0, 0]
x0_1_result = minimize(
cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
print("INITIAL POINT 1:")
print(f"""Number of iterations: {x0_1_result.nfev}""")
print(f"""Time (s): {execution_time}""")
Output:
INITIAL POINT 1:
Number of iterations: 129
Time (s): 0.27564215660095215
Adjusting the initial point to $x_0 = (6, 6, 6, 6, 6, 6, 6, 6)$:
import time
start_time = time.time()
x0 = 6 * np.ones(raw_ansatz.num_parameters)
x0_2_result = minimize(
cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
print("INITIAL POINT 2:")
print(f"""Number of iterations: {x0_2_result.nfev}""")
print(f"""Time (s): {execution_time}""")
Output:
INITIAL POINT 2:
Number of iterations: 114
Time (s): 0.2536280155181885
By experimenting with different initial points, you might be able to achieve convergence faster and with fewer function evaluations.
Experimenting with different optimizers
We can adjust the optimizer using SciPy minimize's method argument, with more options described in the SciPy documentation. We originally used a constrained minimizer (COBYLA). In this example, we'll explore using an unconstrained minimizer (BFGS) instead:
import time
start_time = time.time()
result = minimize(
cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="BFGS"
)
end_time = time.time()
execution_time = end_time - start_time
print("CHANGED TO BFGS OPTIMIZER:")
print(f"""Number of iterations: {result.nfev}""")
print(f"""Time (s): {execution_time}""")
Output:
CHANGED TO BFGS OPTIMIZER:
Number of iterations: 117
Time (s): 0.25261688232421875
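If you want to survey several optimizers systematically, a loop like the following works; the list of methods here is just an illustrative choice, not a recommendation:

from scipy.optimize import minimize
import time

for method in ["COBYLA", "BFGS", "Powell", "SLSQP"]:
    start = time.time()
    res = minimize(
        cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method=method
    )
    print(f"{method}: fun={float(res.fun):.6f}, nfev={res.nfev}, time={time.time() - start:.2f} s")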
VQD example
Here we implement the Qiskit patterns framework explicitly.
Step 1: Map classical inputs to a quantum problem
Now instead of looking for only the lowest eigenvalue of our observables, we will look for the $k$ lowest eigenvalues $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{k-1}$ (here, $k = 4$).
Remember that the cost functions of VQD are:

$$C_k(\vec{\theta}) = \langle \psi(\vec{\theta}) | \hat{O} | \psi(\vec{\theta}) \rangle + \sum_{j=0}^{k-1} \beta_j \left| \langle \psi(\vec{\theta}) | \psi_j \rangle \right|^2$$

This is particularly important because a vector of penalties (in this case $\vec{\beta}$) must be passed as an argument when we define the VQD object.
Also, in Qiskit's implementation of VQD, instead of considering the effective observables described in the previous notebook, the fidelities are calculated directly via the ComputeUncompute algorithm, which leverages a Sampler primitive to sample the probability of obtaining the all-zero bitstring from the circuit $U^{\dagger}(\vec{\theta}_j)\, U(\vec{\theta})\, |0\rangle$. This works precisely because this probability is

$$\left| \langle 0 |\, U^{\dagger}(\vec{\theta}_j)\, U(\vec{\theta})\, | 0 \rangle \right|^2 = \left| \langle \psi_j | \psi(\vec{\theta}) \rangle \right|^2$$
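To make this concrete, here is a minimal, self-contained sketch (with two arbitrary single-qubit states chosen purely for illustration) showing that the all-zeros probability of the compute-uncompute circuit matches the exact fidelity:

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
from qiskit.primitives import StatevectorSampler

# Prepare two arbitrary states |psi> and |phi>
psi = QuantumCircuit(1)
psi.ry(0.8, 0)
phi = QuantumCircuit(1)
phi.ry(1.9, 0)

# Compute-uncompute: prepare |psi>, then apply the inverse of |phi>'s preparation
fidelity_circuit = psi.compose(phi.inverse())
fidelity_circuit.measure_all()

shots = 10_000
counts = StatevectorSampler().run([fidelity_circuit], shots=shots).result()[0].data.meas.get_counts()
print(counts.get("0", 0) / shots)                          # sampled |<phi|psi>|^2
print(abs(Statevector(psi).inner(Statevector(phi))) ** 2)  # exact |<phi|psi>|^2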
from qiskit.circuit.library import TwoLocal
ansatz = TwoLocal(2, rotation_blocks=["ry", "rz"], entanglement_blocks="cz", reps=1)
ansatz.decompose().draw("mpl")
Output:

Let's start by examining the following observable:

$$\hat{O}_2 = 2\,II - 3\,XX + 2\,YY - 4\,ZZ$$

This observable has the following eigenvalues:

$$\lambda_0 = -7, \quad \lambda_1 = 3, \quad \lambda_2 = 5, \quad \lambda_3 = 7$$

And eigenstates:

$$|\psi_0\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}, \quad |\psi_1\rangle = \frac{|00\rangle - |11\rangle}{\sqrt{2}}, \quad |\psi_2\rangle = \frac{|01\rangle + |10\rangle}{\sqrt{2}}, \quad |\psi_3\rangle = \frac{|01\rangle - |10\rangle}{\sqrt{2}}$$
from qiskit.quantum_info import SparsePauliOp
observable_2 = SparsePauliOp.from_list([("II", 2), ("XX", -3), ("YY", 2), ("ZZ", -4)])
We'll be using the following function to calculate the overlap penalty. Note that this is still part of mapping the problem to quantum circuits. As discussed in the previous lesson, this function calculates the overlap between the current variational circuit and the optimized circuits from the previously obtained, lower-energy states. The fidelity circuit being generated also has to be transpiled to run on real hardware. We have seen this function before, used on a simulator; here, we must already account for transpilation and the related optimization for when we use a real backend, hence the lines guarded by if realbackend == 1. This mixes in a bit of step 2, but we will call out step 2 explicitly later.
import numpy as np
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
def calculate_overlaps(
ansatz, prev_circuits, parameters, sampler, realbackend, backend
):
def create_fidelity_circuit(circuit_1, circuit_2):
if len(circuit_1.clbits) > 0:
circuit_1.remove_final_measurements()
if len(circuit_2.clbits) > 0:
circuit_2.remove_final_measurements()
circuit = circuit_1.compose(circuit_2.inverse())
circuit.measure_all()
return circuit
overlaps = []
for prev_circuit in prev_circuits:
fidelity_circuit = create_fidelity_circuit(ansatz, prev_circuit)
if realbackend == 1:
pm = generate_preset_pass_manager(backend=backend, optimization_level=3)
fidelity_circuit = pm.run(fidelity_circuit)
sampler_job = sampler.run([(fidelity_circuit, parameters)])
meas_data = sampler_job.result()[0].data.meas
counts_0 = meas_data.get_int_counts().get(0, 0)
shots = meas_data.num_shots
overlap = counts_0 / shots
overlaps.append(overlap)
return np.array(overlaps)
Now we add VQD's cost function. Note that, compared to the previous lesson, we now have two additional arguments (realbackend and backend) to help us with transpilation when we use real backends.
def cost_func_vqd(
parameters,
ansatz,
prev_states,
step,
betas,
estimator,
sampler,
hamiltonian,
realbackend,
backend,
):
estimator_job = estimator.run([(ansatz, hamiltonian, [parameters])])
total_cost = 0
if step > 1:
overlaps = calculate_overlaps(
ansatz, prev_states, parameters, sampler, realbackend, backend
)
total_cost = np.sum(
[np.real(betas[state] * overlap) for state, overlap in enumerate(overlaps)]
)
estimator_result = estimator_job.result()[0]
value = estimator_result.data.evs[0] + total_cost
return value
Once again, we will use simulators for debugging, and then move on to real hardware.
from qiskit.primitives import StatevectorSampler
from qiskit.primitives import StatevectorEstimator
sampler = StatevectorSampler(default_shots=4092)
estimator = StatevectorEstimator()
Here we introduce the number of states we wish to calculate, the penalties, and a set of initial parameters, x0.
from qiskit.quantum_info import SparsePauliOp
import numpy as np
k = 4
betas = [50, 60, 40]
x0 = np.ones(8)
We will now test the algorithm using simulators:
from scipy.optimize import minimize
prev_states = []
prev_opt_parameters = []
eigenvalues = []
realbackend = 0
for step in range(1, k + 1):
if step > 1:
prev_states.append(ansatz.assign_parameters(prev_opt_parameters))
result = minimize(
cost_func_vqd,
x0,
args=(
ansatz,
prev_states,
step,
betas,
estimator,
sampler,
observable_2,
realbackend,
None,
),
method="COBYLA",
options={"maxiter": 200, "tol": 0.000001},
)
print(result)
prev_opt_parameters = result.x
eigenvalues.append(result.fun)
Output:
message: Maximum number of function evaluations has been exceeded.
success: False
status: 2
fun: -6.999999999996282
x: [ 1.571e+00 1.571e+00 1.986e+00 2.171e+00 1.011e+00
1.222e+00 1.709e+00 2.774e+00]
nfev: 200
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 3.6824557527813138
x: [ 9.053e-01 1.015e+00 1.199e+00 1.102e+00 1.317e+00
1.243e+00 8.335e-01 1.077e+00]
nfev: 151
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 6.92006300160242
x: [ 2.060e+00 1.998e+00 2.582e+00 1.934e-01 2.199e+00
3.707e+00 5.337e-01 1.248e-01]
nfev: 110
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 5.262289233706376
x: [ 2.114e+00 4.107e+00 1.905e+00 3.867e-01 1.947e+00
2.207e+00 1.530e+00 1.292e+00]
nfev: 137
maxcv: 0.0
eigenvalues
Output:
[-6.999999999996282, 3.6824557527813138, 6.92006300160242, 5.262289233706376]
These results are fairly close to the expected ones, although the optimizer found the third and fourth eigenvalues in swapped order (6.92 is an estimate of 7, and 5.26 of 5), which inflates the naive percent errors computed below. We could adjust the tolerance on the classical optimizer and/or the penalties for statevector overlap to obtain more precise values.
solution_eigenvalues = [-7, 3, 5, 7]
for index, experimental_eigenvalue in enumerate(eigenvalues):
solution_eigenvalue = solution_eigenvalues[index]
print(
f"Percent error: {abs((experimental_eigenvalue - solution_eigenvalue)/solution_eigenvalue):.2e}"
)
Output:
Percent error: 5.31e-13
Percent error: 2.27e-01
Percent error: 3.84e-01
Percent error: 2.48e-01
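Since the optimizer found the third and fourth eigenvalues in swapped order, comparing the sorted lists gives a fairer picture of the approximation error:

for experimental, exact in zip(sorted(eigenvalues), sorted(solution_eigenvalues)):
    print(f"Percent error: {abs((experimental - exact) / exact):.2e}")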
Change betas
As mentioned in the previous lesson, the values of $\beta_j$ should be bigger than the differences between eigenvalues. Let us see what happens when they do not satisfy that condition, using $\vec{\beta} = (1, 1, 1)$ with the same observable $\hat{O}_2$, with eigenvalues $\lambda = -7, 3, 5, 7$.
from qiskit.quantum_info import SparsePauliOp
k = 4
betas = np.ones(3)
x0 = np.zeros(8)
from scipy.optimize import minimize
prev_states = []
prev_opt_parameters = []
eigenvalues = []
realbackend = 0
for step in range(1, k + 1):
if step > 1:
prev_states.append(ansatz.assign_parameters(prev_opt_parameters))
result = minimize(
cost_func_vqd,
x0,
args=(
ansatz,
prev_states,
step,
betas,
estimator,
sampler,
observable_2,
realbackend,
None,
),
method="COBYLA",
options={"tol": 0.01, "maxiter": 200},
)
print(result)
prev_opt_parameters = result.x
eigenvalues.append(result.fun)
Output:
message: Maximum number of function evaluations has been exceeded.
success: False
status: 2
fun: -2.8660482689389806
x: [ 6.741e-01 -5.950e-01 1.608e-02 2.130e-02 -3.382e-01
3.065e-01 -5.579e-01 -2.512e-01]
nfev: 200
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: -1.0642642132957927
x: [ 1.552e-03 6.382e-04 1.006e+00 -7.815e-03 -6.482e-04
-1.341e-03 9.819e-04 8.095e-04]
nfev: 39
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: -0.05596088598225535
x: [-3.509e-04 -2.847e-03 9.906e-01 -6.027e-04 1.023e-04
1.607e-03 1.000e+00 -7.751e-04]
nfev: 40
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 0.9428191822532732
x: [-1.086e-04 -2.580e-04 1.260e+00 9.998e-01 -2.437e-04
-1.264e-03 1.000e+00 4.158e-04]
nfev: 40
maxcv: 0.0
solution_eigenvalues = [-7, 3, 5, 7]
for index, experimental_eigenvalue in enumerate(eigenvalues):
solution_eigenvalue = solution_eigenvalues[index]
print(
f"Percent error: {abs((experimental_eigenvalue - solution_eigenvalue)/solution_eigenvalue):.2e}"
)
Output:
Percent error: 5.91e-01
Percent error: 1.35e+00
Percent error: 1.01e+00
Percent error: 8.65e-01
This time, the optimizer returns nearly the same state as a proposed solution for every eigenstate, which is clearly wrong. This happens because the betas were too small to penalize the minimum eigenstate in the successive cost functions. Therefore, it was not excluded from the effective search space in later iterations of the algorithm, and was always chosen as the best possible solution.
We recommend experimenting with the values of $\beta_j$, ensuring they are bigger than the differences between eigenvalues.
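If you do not know the spectrum in advance, here is one simple, conservative heuristic of our own (an assumption, not part of the lesson's method): every eigenvalue of a Pauli-sum observable lies within plus or minus the sum of the absolute values of its coefficients, so twice that sum is guaranteed to exceed any gap between eigenvalues.

import numpy as np

# All eigenvalues lie in [-bound, +bound], so no gap can exceed 2 * bound
bound = float(np.sum(np.abs(observable_2.coeffs)))
safe_beta = 2 * bound
print(safe_beta)  # e.g., betas = [safe_beta] * (k - 1) satisfies the condition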
Step 2: Optimize problem for quantum execution
To run this on real hardware, we must optimize the quantum circuits for our quantum computer of choice. For our purposes here, we will simply use the least busy backend.
from qiskit_ibm_runtime import SamplerV2 as Sampler
from qiskit_ibm_runtime import EstimatorV2 as Estimator
from qiskit_ibm_runtime import Session, EstimatorOptions
from qiskit_ibm_runtime import QiskitRuntimeService
service = QiskitRuntimeService(channel="ibm_quantum")
backend = service.least_busy(operational=True, simulator=False)
# Or use a specific backend
# backend = service.backend("ibm_cusco")
print(backend)
Output:
<IBMBackend('ibm_cusco')>
We will transpile our circuit using a preset pass manager and optimization level 3.
pm = generate_preset_pass_manager(backend=backend, optimization_level=3)
isa_ansatz = pm.run(ansatz)
isa_observable = observable_2.apply_layout(layout=isa_ansatz.layout)
Step 3: Execute using Qiskit primitives
Taking care to reset our betas to sufficiently high values, we can now run our calculation on real quantum hardware.
# Estimated compute resource usage: 25 minutes. Benchmarked at 24 min, 30 s on ibm_nazca on 5-30-24
k = 2
betas = [30, 50, 80]
x0 = np.zeros(8)
real_prev_states = []
real_prev_opt_parameters = []
real_eigenvalues = []
realbackend = 1
estimator_options = EstimatorOptions(resilience_level=1, default_shots=10_000)
with Session(backend=backend) as session:
estimator = Estimator(mode=session, options=estimator_options)
sampler = Sampler(mode=session)
for step in range(1, k + 1):
if step > 1:
real_prev_states.append(isa_ansatz.assign_parameters(real_prev_opt_parameters))
result = minimize(
cost_func_vqd,
x0,
args=(
isa_ansatz,
real_prev_states,
step,
betas,
estimator,
sampler,
isa_observable,
realbackend,
backend,
),
method="COBYLA",
options={"maxiter": 200},
)
print(result)
real_prev_opt_parameters = result.x
real_eigenvalues.append(result.fun)
session.close()
print(real_eigenvalues)
Step 4: Post-process, return result in classical format
Our output is structurally similar to what has been discussed in previous lessons and examples. But there is something problematic in the results above, from which we can derive a cautionary message for the context of excited states. To limit the computing time used on this learning example, we set a maximum number of iterations for the classical optimizer that was potentially too low: 200 iterations. A previous calculation above, on a simulator, failed to converge in 200 iterations. Here, ours did converge... but to what tolerance? We did not specify a tolerance for COBYLA to consider itself "converged". A glance at the function value and a comparison with previous runs tells us that COBYLA was not close to converging to the precision we require.
There is another issue: the energy of the first excited state appears to be lower than the energy of the ground state! See if you can explain how this could happen. Hint: it is related to the convergence point we just addressed. This behavior is explained in detail below after VQD is applied to the H2 molecule.
Quantum chemistry: ground state and excited energy solver
Our objective is to minimize the expectation value of the observable representing energy (the Hamiltonian $\hat{H}$):
from qiskit.quantum_info import SparsePauliOp
from qiskit.circuit.library import EfficientSU2
H2_op = SparsePauliOp.from_list(
[
("II", -1.052373245772859),
("IZ", 0.39793742484318045),
("ZI", -0.39793742484318045),
("ZZ", -0.01128010425623538),
("XX", 0.18093119978423156),
]
)
chem_ansatz = EfficientSU2(H2_op.num_qubits)
chem_ansatz.decompose().draw("mpl")
Output:

from qiskit.circuit.library import TwoLocal
from qiskit import QuantumCircuit
def cost_func_vqe(params, ansatz, hamiltonian, estimator):
"""Return estimate of energy from estimator
Parameters:
params (ndarray): Array of ansatz parameters
ansatz (QuantumCircuit): Parameterized ansatz circuit
hamiltonian (SparsePauliOp): Operator representation of Hamiltonian
estimator (Estimator): Estimator primitive instance
Returns:
float: Energy estimate
"""
pub = (ansatz, hamiltonian, params)
cost = estimator.run([pub]).result()[0].data.evs
return cost
We now set an initial set of parameters:
import numpy as np
x0 = np.ones(chem_ansatz.num_parameters)
We can minimize this cost function to calculate optimal parameters, and we can check our code first by using a local simulator.
from qiskit.primitives import StatevectorEstimator as Estimator
from qiskit.primitives import StatevectorSampler as Sampler
estimator = Estimator()
sampler = Sampler()
# SciPy minimizer routine
from scipy.optimize import minimize
import time
start_time = time.time()
result = minimize(
cost_func_vqe, x0, args=(chem_ansatz, H2_op, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
result
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -1.857275029048451
x: [ 7.326e-01 1.354e+00 ... 1.040e+00 1.508e+00]
nfev: 242
maxcv: 0.0
The minimum value of the cost function (-1.857...) is the ground state energy of the H2 molecule, in units of hartrees.
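As with the two-qubit toy observables earlier in this lesson, we can check this against exact diagonalization:

from numpy.linalg import eigvalsh

exact_ground_energy = min(eigvalsh(H2_op.to_matrix()))
print(exact_ground_energy)  # should closely match result.fun above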
Excited States
We can also leverage VQD to solve for $k = 2$ total states (the ground state and the first excited state).
from qiskit.quantum_info import SparsePauliOp
import numpy as np
k = 2
betas = [33, 33]
# x0 = np.zeros(ansatz.num_parameters)
x0 = [
1.164e00,
-2.438e-01,
9.358e-04,
6.745e-02,
1.990e00,
9.810e-02,
6.154e-01,
5.454e-01,
]
We'll now run the VQD loop, which uses our overlap calculation:
from scipy.optimize import minimize
prev_states = []
prev_opt_parameters = []
eigenvalues = []
realbackend = 0
for step in range(1, k + 1):
if step > 1:
prev_states.append(ansatz.assign_parameters(prev_opt_parameters))
result = minimize(
cost_func_vqd,
x0,
args=(
ansatz,
prev_states,
step,
betas,
estimator,
sampler,
H2_op,
realbackend,
None,
),
method="COBYLA",
options={"tol": 0.001, "maxiter": 2000},
)
print(result)
prev_opt_parameters = result.x
eigenvalues.append(result.fun)
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -1.8572671093941977
x: [ 1.164e+00 -2.437e-01 2.118e-03 6.448e-02 1.990e+00
9.870e-02 6.167e-01 5.476e-01]
nfev: 58
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: -1.0322873777662176
x: [ 3.205e+00 1.502e+00 1.699e+00 -1.107e-02 3.086e+00
1.530e+00 4.445e-02 7.013e-02]
nfev: 99
maxcv: 0.0
eigenvalues
Output:
[-1.8572671093941977, -1.0322873777662176]
Real hardware and a final cautionary message
To run this on real hardware, we must optimize the quantum circuits for our quantum computer of choice. For our purposes here, we will simply use the least busy backend.
from qiskit_ibm_runtime import SamplerV2 as Sampler
from qiskit_ibm_runtime import EstimatorV2 as Estimator
from qiskit_ibm_runtime import Session, EstimatorOptions
from qiskit_ibm_runtime import QiskitRuntimeService
service = QiskitRuntimeService(channel="ibm_quantum")
backend = service.least_busy(operational=True, simulator=False)
We will use a preset pass manager for transpilation, and we will maximally optimize our circuit using optimization level 3.
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
pm = generate_preset_pass_manager(backend=backend, optimization_level=3)
isa_ansatz = pm.run(ansatz)
isa_observable = H2_op.apply_layout(layout=isa_ansatz.layout)
Because VQD is highly iterative, we will carry out all steps inside a Runtime session, such that our jobs will only be queued at the beginning, and not between every parameter update. Nothing else changes about the syntax for the cost function or estimator.
x0 = [
1.306e00,
-2.284e-01,
6.913e-02,
-2.530e-02,
1.849e00,
7.433e-02,
6.366e-01,
5.600e-01,
]
# Estimated hardware usage: 20 min benchmarked on ibm_nazca on 5-30-24
real_prev_states = []
real_prev_opt_parameters = []
real_eigenvalues = []
realbackend = 1
estimator_options = EstimatorOptions(resilience_level=1, default_shots=4096)
with Session(backend=backend) as session:
estimator = Estimator(mode=session, options=estimator_options)
sampler = Sampler(mode=session)
for step in range(1, k + 1):
if step > 1:
real_prev_states.append(
isa_ansatz.assign_parameters(real_prev_opt_parameters)
)
result = minimize(
cost_func_vqd,
x0,
args=(
isa_ansatz,
real_prev_states,
step,
betas,
estimator,
sampler,
isa_observable,
realbackend,
backend,
),
method="COBYLA",
options={"tol": 0.001, "maxiter": 300},
)
print(result)
real_prev_opt_parameters = result.x
real_eigenvalues.append(result.fun)
session.close()
print(real_eigenvalues)
The ground state energy obtained (-1.83 hartrees) is not too far from the correct value (-1.85 hartrees). However, the excited state energy is quite a bit off. This is similar to the erroneous behavior we saw earlier in this lesson. The energy reported for the excited state is nearly the same as that for the ground state. In the previous case, we even saw an excited state energy that was lower than the reported ground state energy.
It is not possible for a variational calculation to yield an energy that is lower than the true ground state energy. In the earlier instance, the ground state energy we obtained was not very close to the true ground state. Since we did not obtain the true ground state energy in that case, there is no contradiction. In the present case, the ground state energy was fairly close to the correct value, and yet the excited state energy seems strangely close to that same value.
To understand better how this happened, recall that the way we find an excited state is by requiring that the variational state be orthogonal to the ground state (using the overlap circuits and penalty terms). If we fail to obtain an accurate ground state energy (even if only off by a few percent), then we also fail to obtain an accurate ground state vector! So when we required that the excited state be orthogonal to the first state we found, we were not imposing orthogonality with the true ground state, but rather with some approximation of it (sometimes a poor approximation). Thus, the excited state was not forced to be orthogonal to the true ground state, and our energy estimates for the excited states ended up quite close to the ground state energy.
This will always be a concern in VQD. But in principle, it can be corrected by increasing the maximum number of iterations for the classical optimizer, imposing a tighter tolerance on the classical optimizer, and possibly also trying a different ansatz if we are habitually missing the true ground state. As we have seen, one may also need to modify the overlap penalties (betas). But that is really a separate issue: no overlap penalty can enforce orthogonality to the true ground state if the state used in the overlap circuit is not a very good estimate of that ground state.
Optimization: Max-Cut
The maximum cut (Max-Cut) problem is a combinatorial optimization problem that involves dividing the vertices of a graph into two disjoint sets such that the number of edges between the two sets is maximized. More formally, given an undirected graph $G = (V, E)$, where $V$ is the set of vertices and $E$ is the set of edges, the Max-Cut problem asks us to partition the vertices into two disjoint subsets, $A$ and $B$, such that the number of edges with one endpoint in $A$ and the other in $B$ is maximized.
We can apply Max-Cut to solve a variety of problems, including clustering, network design, and phase transitions. We'll start by creating a problem graph:
import rustworkx as rx
from rustworkx.visualization import mpl_draw
n = 4
G = rx.PyGraph()
G.add_nodes_from(range(n))
# The edge syntax is (start, end, weight)
edges = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
G.add_edges_from(edges)
mpl_draw(
G, pos=rx.shell_layout(G), with_labels=True, edge_labels=str, node_color="#1192E8"
)
Output:

This problem can be expressed as a binary optimization problem. For each node $i$, where $0 \le i < n$ and $n$ is the number of nodes of the graph (in this case $n = 4$), we will consider the binary variable $x_i$. This variable will have the value $1$ if node $i$ is in one of the groups, which we'll label $A$, and $0$ if it's in the other group, which we'll label $B$. We will also denote by $w_{ij}$ (an element of the adjacency matrix $W$) the weight of the edge that goes from node $i$ to node $j$. Because the graph is undirected, $w_{ij} = w_{ji}$. Then we can formulate our problem as maximizing the following cost function:

$$C(\vec{x}) = \sum_{i,j} w_{ij}\, x_i (1 - x_j)$$

To solve this problem with a quantum computer, we are going to express the cost function as the expected value of an observable. However, the observables that Qiskit admits natively consist of Pauli operators, which have eigenvalues $1$ and $-1$ instead of $0$ and $1$. That's why we are going to make the following change of variables:

$$x_i = \frac{1 - z_i}{2}$$

where $z_i \in \{1, -1\}$. We can use the adjacency matrix $W$ to comfortably access the weights of all the edges. This change of variables implies that:

$$x_i (1 - x_j) = \frac{1 - z_i}{2} \cdot \frac{1 + z_j}{2} = \frac{1 - z_i z_j + z_j - z_i}{4}$$

So, using $w_{ij} = w_{ji}$ (which makes the linear terms cancel when summing over all pairs), the new cost function we want to maximize is:

$$C(\vec{z}) = \frac{1}{4} \sum_{i,j} w_{ij} (1 - z_i z_j)$$

Moreover, the natural tendency of a quantum computer is to find minima (usually the lowest energy) instead of maxima, so instead of maximizing $C(\vec{z})$ we are going to minimize:

$$-C(\vec{z}) = \frac{1}{4} \sum_{i,j} w_{ij}\, z_i z_j - \frac{1}{4} \sum_{i,j} w_{ij}$$
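Before mapping this to operators, we can verify numerically that both formulations count the same cut on our example graph (the partition below is an arbitrary choice, used only for illustration):

import numpy as np

# Adjacency matrix of the example graph (all weights 1)
w = np.zeros((n, n))
for i, j, weight in edges:
    w[i, j] = w[j, i] = weight

x_bits = np.array([1, 0, 1, 0])  # nodes 0 and 2 in group A, nodes 1 and 3 in group B
z = 1 - 2 * x_bits               # change of variables: z_i in {1, -1}

cut_x = sum(w[i, j] * x_bits[i] * (1 - x_bits[j]) for i in range(n) for j in range(n))
cut_z = 0.25 * sum(w[i, j] * (1 - z[i] * z[j]) for i in range(n) for j in range(n))
print(cut_x, cut_z)  # both count the 4 edges crossing this partition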
Now that we have a cost function to minimize whose variables can have the values $1$ and $-1$, we can make the following analogy with the Pauli $Z$ operator, whose eigenvalues are also $1$ and $-1$:

$$z_i \longrightarrow Z_i$$

In other words, the variable $z_i$ will be equivalent to a $Z$ operator acting on qubit $i$. Moreover:

$$z_i z_j \longrightarrow Z_i Z_j$$

Then the observable we are going to consider is:

$$\hat{H} = \sum_{(i,j) \in E} w_{ij}\, Z_i Z_j$$

to which we will have to add the independent term afterwards, as an offset:

$$\text{offset} = -\frac{1}{4} \sum_{i,j} w_{ij} = -\frac{1}{2} \sum_{(i,j) \in E} w_{ij}$$
The operator $\hat{H}$ is a linear combination of $ZZ$ terms acting on nodes connected by an edge (recall that the 0th qubit is farthest right in a Pauli string): $\hat{H} = Z_0 Z_1 + Z_0 Z_2 + Z_1 Z_2 + Z_0 Z_3 + Z_2 Z_3$, i.e., the Pauli strings IIZZ, IZIZ, IZZI, ZIIZ, and ZZII. Once the operator is constructed, the ansatz for the QAOA algorithm can easily be built by using the QAOAAnsatz circuit from the Qiskit circuit library.
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp
max_hamiltonian = SparsePauliOp.from_list(
[("IIZZ", 1), ("IZIZ", 1), ("IZZI", 1), ("ZIIZ", 1), ("ZZII", 1)]
)
max_ansatz = QAOAAnsatz(max_hamiltonian, reps=2)
# Draw
max_ansatz.decompose(reps=3).draw("mpl")
Output:

# Sum the weights, and divide by 2
offset = -sum(edge[2] for edge in edges) / 2
print(f"""Offset: {offset}""")
Output:
Offset: -2.5
def cost_func(params, ansatz, hamiltonian, estimator):
"""Return estimate of energy from estimator
Parameters:
params (ndarray): Array of ansatz parameters
ansatz (QuantumCircuit): Parameterized ansatz circuit
hamiltonian (SparsePauliOp): Operator representation of Hamiltonian
estimator (Estimator): Estimator primitive instance
Returns:
float: Energy estimate
"""
pub = (ansatz, hamiltonian, params)
cost = estimator.run([pub]).result()[0].data.evs
# cost = estimator.run(ansatz, hamiltonian, parameter_values=params).result().values[0]
return cost
from qiskit.primitives import StatevectorEstimator as Estimator
from qiskit.primitives import StatevectorSampler as Sampler
estimator = Estimator()
sampler = Sampler()
We now set an initial set of random parameters:
import numpy as np
x0 = 2 * np.pi * np.random.rand(max_ansatz.num_parameters)
print(x0)
Output:
[6.0252949 0.58448176 2.15785731 1.13646074]
Any classical optimizer can be used to minimize the cost function. On a real quantum system, an optimizer designed for non-smooth cost function landscapes usually does better. Here we use the COBYLA routine from SciPy via the minimize function.
When running on real hardware, because we would be iteratively executing many calls to Qiskit Runtime, we would wrap all of those calls in a single session, as in the VQD examples above; with the local statevector primitives used here, no session is needed. Moreover, for QAOA, the solution is encoded in the output distribution of the ansatz circuit bound with the optimal parameters from the minimization, so we will also need a Sampler primitive.
And now we run our minimization routine:
result = minimize(
cost_func, x0, args=(max_ansatz, max_hamiltonian, estimator), method="COBYLA"
)
print(result)
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -2.585287311689236
x: [ 7.332e+00 3.904e-01 2.045e+00 1.028e+00]
nfev: 80
maxcv: 0.0
The solution vector of parameter angles (x), when plugged into the ansatz circuit, yields the graph partitioning that we were looking for.
eigenvalue = cost_func(result.x, max_ansatz, max_hamiltonian, estimator)
print(f"""Eigenvalue: {eigenvalue}""")
print(f"""Max-Cut Objective: {eigenvalue + offset}""")
Output:
Eigenvalue: -2.585287311689236
Max-Cut Objective: -5.085287311689235
from qiskit.primitives import StatevectorSampler
sampler = StatevectorSampler()
# Assign solution parameters to ansatz
qc = max_ansatz.assign_parameters(result.x)
# Add measurements to our circuit
qc.measure_all()
# Sample ansatz at optimal parameters
shots = 1024
job = sampler.run([qc], shots=shots)
qc.decompose().draw("mpl")
Output:

data_pub = job.result()[0].data
bitstrings = data_pub.meas.get_bitstrings()
counts = data_pub.meas.get_counts()
from qiskit.visualization import plot_distribution
plot_distribution(counts)
Output:

binary_string = max(counts.items(), key=lambda kv: kv[1])[0]
x = np.asarray([int(y) for y in reversed(list(binary_string))])
colors = ["r" if x[i] == 0 else "c" for i in range(n)]
mpl_draw(
G, pos=rx.shell_layout(G), with_labels=True, edge_labels=str, node_color=colors
)
Output:

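As a final check, we can add up the weights of the edges crossing the partition we found; for this graph the maximum possible cut is 4, since at most two of the three edges of the triangle 0-1-2 can be cut:

cut_value = sum(weight for i, j, weight in edges if x[i] != x[j])
print(f"Cut value: {cut_value}")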
Summary
With this lesson, you have learned:
- How to write a custom variational algorithm
- How to apply a variational algorithm to find minimum eigenvalues
- How to utilize variational algorithms to solve application use cases
Proceed to the final lesson to take your assessment and earn your badge!