
Data encoding


Introduction and notation

To use a quantum algorithm, classical data must somehow be brought into a quantum circuit. This is usually referred to as data encoding, but is also called data loading. Recall from previous lessons the notion of a feature mapping, a mapping of data features from one space to another. Just transferring classical data to a quantum computer is a sort of mapping, and could be called a feature mapping. In practice, the built-in feature mappings in Qiskit (like ZFeatureMap and ZZFeatureMap) will typically include rotation layers and entangling layers that extend the state to many dimensions in the Hilbert space. This encoding process is a critical part of quantum machine learning algorithms and directly affects their computational capabilities.

Some of the encoding techniques below can be efficiently simulated classically; this is particularly easy to see for encoding methods that yield product states (that is, which do not entangle qubits). Remember that quantum utility is most likely to be found where the quantum-like complexity of the dataset is well matched by the encoding method, so it is very likely that you will end up writing your own encoding circuits. Here, we show a wide variety of possible encoding strategies simply so that you can compare and contrast them, and see what is possible. There are some very general statements that can be made about the usefulness of encoding techniques. For example, EfficientSU2 (see below) with a full entangling scheme is much more likely to capture quantum features of data than methods that yield product states (like ZFeatureMap). But this does not mean EfficientSU2 is sufficient, or sufficiently well matched to your dataset, to yield a quantum speed-up; that requires careful consideration of the structure of the data being modeled or classified. There is also a balancing act with circuit depth, since many feature maps that fully entangle the qubits in a circuit yield very deep circuits, too deep to obtain usable results on today's quantum computers.

Notation

A dataset is a set of $M$ data vectors: $\text{X} = \{\vec{x}^{(j)}\,|\,j\in [M]\}$, where each vector is $N$-dimensional, i.e. $\vec{x}^{(j)}=(\vec{x}^{(j)}_1,\ldots,\vec{x}^{(j)}_N)\in\mathbb{R}^N$. This could be extended to complex data features. In this lesson, we may occasionally use these notations for the full set $\text{X}$ and its specific elements like $\vec{x}^{(j)}$. But we will mostly refer to loading a single vector from our dataset at a time, and will often simply refer to a single vector of $N$ features as $\vec{x}$.

Additionally, it is common to use the symbol $\Phi(\vec{x})$ to refer to the feature mapping $\Phi$ of data vector $\vec{x}$. In quantum computing specifically, it is common to denote such mappings by $U(\vec{x})$, a notation that reinforces the unitary nature of these operations. One could correctly use the same symbol for both; both are feature mappings. Throughout this course, we tend to use:

  • $\Phi(\vec{x})$ when discussing feature mappings in machine learning generally, and
  • $U(\vec{x})$ when discussing circuit implementations of feature mappings.

Normalization and information loss

In classical machine learning, training data features are often "normalized" or rescaled, which frequently improves model performance. Two common ways of doing this are min-max normalization and standardization. In min-max normalization, each feature column of the data matrix $\text{X}$ (say, feature $k$) is normalized:

$$x^{\prime(i)}_k = \frac{x^{(i)}_k - \min\{x^{(j)}_k\,|\,\vec{x}^{(j)}\in \text{X}\}}{\max\{x^{(j)}_k\,|\,\vec{x}^{(j)}\in \text{X}\}-\min\{x^{(j)}_k\,|\,\vec{x}^{(j)}\in \text{X}\}}$$

where min and max refer to the minimum and maximum of feature $k$ over the $M$ data vectors in the dataset $\text{X}$. All the feature values then fall in the unit interval: $x^{\prime(i)}_k \in [0,1]$ for all $i\in [M]$, $k\in[N]$.
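
As a concrete sketch of this formula (using NumPy; the matrix values here are those of the small example dataset introduced later in this lesson), min-max normalization of each feature column can be written as:

import numpy as np
 
# Rows are data vectors, columns are features
X = np.array([[4, 8, 5], [9, 8, 6], [2, 9, 2], [5, 7, 0], [3, 7, 5]], dtype=float)
 
# Min-max normalization of each feature column to the unit interval [0, 1]
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_minmax)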

Normalization is also a fundamental concept in quantum mechanics and quantum computing, but it is slightly different from min-max normalization. Normalization in quantum mechanics requires that the length (in the context of quantum computing, the 2-norm) of a state vector $|\psi\rangle$ is equal to unity: $\|\psi\|=\sqrt{\langle\psi|\psi\rangle} = 1$, ensuring that measurement probabilities sum to 1. The state is normalized by dividing by the 2-norm; that is, by rescaling

$$|\psi\rangle\rightarrow\|\psi\|^{-1}|\psi\rangle$$

In quantum computing and quantum mechanics, this is not a normalization imposed by people on the data, but a fundamental property of quantum states. Depending on your encoding scheme, this constraint may affect how your data are rescaled. For example, in amplitude encoding (see below), the data vector is normalized, $\vert\vec{x}^{(j)}\vert = 1$, as required by quantum mechanics, and this affects the scaling of the data being encoded. In phase encoding, it is recommended that feature values be rescaled so that $\vec{x}^{(j)}_i \in (0,2\pi]$, so that there is no information loss due to the modulo-$2\pi$ effect of encoding to a qubit phase angle[1,2].
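
In code, these two kinds of rescaling look quite different. A minimal sketch (the example vector is arbitrary):

import numpy as np
 
x_raw = np.array([4.0, 8.0, 5.0])  # one data vector
 
# Quantum-mechanical normalization: divide by the 2-norm so that the
# squared amplitudes sum to 1
x_amplitude = x_raw / np.linalg.norm(x_raw)
 
# Rescaling for phase or angle encoding: map feature values into (0, 2*pi]
x_phase = x_raw * 2 * np.pi / x_raw.max()
 
print(x_amplitude)
print(x_phase)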


Methods of encoding

In the next few sections, we will refer to a small example classical dataset $\text{X}_\text{ex}$ consisting of $M=5$ data vectors, each with $N=3$ features:

$$\text{X}_{\text{ex}}=\{(4,8,5),(9,8,6),(2,9,2),(5,7,0),(3,7,5)\}$$

In the notation introduced above, we might say the $1^\text{st}$ feature of the $4^\text{th}$ data vector in our set $\text{X}_{\text{ex}}$ is $\vec{x}^{(4)}_1 = 5$, for example.

Basis encoding

Basis encoding encodes a classical $P$-bit string into a computational basis state of a $P$-qubit system. Take for example $\vec{x}^{(1)}_3 = 5 = 0(2^3)+1(2^2)+0(2^1)+1(2^0)$. This can be represented as a $4$-bit string, $(0101)$, and by a $4$-qubit system as the quantum state $|0101\rangle$. More generally, for a $P$-bit string $\vec{x}^{(j)}_k = (b_1, b_2, \ldots, b_P)$, the corresponding $P$-qubit state is $|x^{(j)}_k\rangle = | b_1, b_2, \ldots, b_P \rangle$ with $b_n \in \{0,1\}$ for $n = 1, \dots, P$. Note that this is just for a single feature.

If each feature of this data vector is mapped to a quantum state $|x^{(j)}_k\rangle$, then we can describe a data vector from our set as a superposition of all the computational basis states describing the features of that vector:

$$|x^{(j)} \rangle = \frac{1}{\sqrt{N}}\sum_{k=1}^{N}|x^{(j)}_k \rangle$$

In Qiskit, once we calculate what state will encode our data point, we can use the initialize function to prepare it. Consider the 4th data vector in our dataset, $\vec{x}^{(4)} = (5,7,0)$. In binary, $x^{(4)}_1=101$, $x^{(4)}_2=111$, and $x^{(4)}_3 = 000$. This is encoded as the state $|x^{(4)}\rangle= \frac{1}{\sqrt{3}}(|101\rangle+|111\rangle+|000\rangle)$.

We can generate a circuit that will prepare this state using initialize. For this specific case, we will use three qubits. The space of all $2^3$ measurable states of these three qubits is spanned by

$$\vert 000\rangle, \vert 001\rangle, \vert 010\rangle, \vert 011\rangle, \vert 100\rangle, \vert 101\rangle, \vert 110\rangle, \vert 111\rangle$$

When specifying the desired state of our 3-qubit system, we specify the amplitude of each of these $2^3$ basis states, in this order. Thus, our desired state will have $1/\sqrt{3}$ in the $1^\text{st}$, $6^\text{th}$, and $8^\text{th}$ entries, and zeros everywhere else.

import math
from qiskit import QuantumCircuit
 
desired_state = [1 / math.sqrt(3), 0, 0, 0, 0, 1 / math.sqrt(3), 0, 1 / math.sqrt(3)]
 
qc = QuantumCircuit(3)
qc.initialize(desired_state, [0, 1, 2])
qc.decompose(reps=8).draw(output="mpl")

Output:

Output of the previous code cell

This example illustrates a couple of disadvantages of basis encoding. While it is simple to understand, the state vectors can become quite sparse, and schemes to implement it are usually not efficient.
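
Rather than writing the amplitude vector out by hand, the construction can be automated. Here is one possible sketch (assuming, as in the example above, that the feature values are distinct and fit in the chosen number of bits), which places an equal amplitude on the basis state labeled by each feature value:

import math
import numpy as np
from qiskit import QuantumCircuit
 
data_vector = [5, 7, 0]  # the 4th data vector from the example dataset
n_bits = 3  # bits (qubits) used to represent each feature value
 
# Equal amplitude on the basis state whose index is each feature value
# (assumes the feature values are distinct)
amplitudes = np.zeros(2**n_bits)
for value in data_vector:
    amplitudes[value] = 1 / math.sqrt(len(data_vector))
 
qc_basis = QuantumCircuit(n_bits)
qc_basis.initialize(amplitudes, list(range(n_bits)))
qc_basis.draw(output="mpl")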

Example

Write code to encode the first vector in our example data set $\text{X}_{\text{ex}}$:

$$\vec{x}^{(1)}=(4,8,5)$$

using basis encoding.

Solution:

import math
from qiskit import QuantumCircuit
 
desired_state = [
    0,
    0,
    0,
    0,
    1 / math.sqrt(3),
    1 / math.sqrt(3),
    0,
    0,
    1 / math.sqrt(3),
    0,
    0,
    0,
    0,
    0,
    0,
    0,
]
 
print(desired_state)
 
qc = QuantumCircuit(4)
qc.initialize(desired_state, [0, 1, 2, 3])
qc.decompose(reps=7).draw(output="mpl")

Output:

[0, 0, 0, 0, 0.5773502691896258, 0.5773502691896258, 0, 0, 0.5773502691896258, 0, 0, 0, 0, 0, 0, 0]
Output of the previous code cell

Amplitude encoding

Amplitude encoding encodes data into the amplitudes of a quantum state. It represents a normalized classical $N$-dimensional data vector, $\vec{x}^{(j)}$, as the amplitudes of an $n$-qubit quantum state, $|\psi_x\rangle$:

$$|\psi^{(j)}_x\rangle = \frac{1}{\alpha}\sum_{i=1}^N x^{(j)}_i |i\rangle$$

where $N$ is the dimension of the data vectors, as before, $\vec{x}^{(j)}_i$ is the $i^{th}$ element of $\vec{x}^{(j)}$, and $|i\rangle$ is the $i^{th}$ computational basis state. Here, $\alpha$ is a normalization constant to be determined from the data being encoded. The normalization condition imposed by quantum mechanics is:

$$\sum_{i=1}^N \left|x^{(j)}_i\right|^2 = \left|\alpha\right|^2.$$

In general, this is a different condition than the min/max normalization used for each feature across all data vectors. Precisely how this is navigated will depend on your problem. But there is no way around the quantum mechanical normalization condition above.

In amplitude encoding, each feature in a data vector is stored as the amplitude of a different basis state. As a system of $n$ qubits provides $2^n$ amplitudes, amplitude encoding of $N$ features requires $n \ge \log_2(N)$ qubits.

As an example, let's encode the first vector in our example dataset $\text{X}_\text{ex}$, $\vec{x}^{(1)} = (4,8,5)$, using amplitude encoding. Computing the normalization constant, we get:

$$\sum_{i=1}^N \left|x^{(1)}_i\right|^2 = 4^2+8^2+5^2 = 105 = \left|\alpha\right|^2 \rightarrow \alpha = \sqrt{105}$$

and the resulting 2-qubit quantum state would be:

$$|\psi(\vec{x}^{(1)})\rangle = \frac{1}{\sqrt{105}}(4|00\rangle+8|01\rangle+5|10\rangle+0|11\rangle)$$

In the example above, the number of features in the vector, $N=3$, is not a power of 2. When $N$ is not a power of 2, we simply choose a number of qubits $n$ such that $2^n\geq N$ and pad the amplitude vector with uninformative constants (here, a zero).

Like in basis encoding, once we calculate what state will encode our dataset, in Qiskit we can use the initialize function to prepare it:

desired_state = [
    1 / math.sqrt(105) * 4,
    1 / math.sqrt(105) * 8,
    1 / math.sqrt(105) * 5,
    1 / math.sqrt(105) * 0,
]
 
qc = QuantumCircuit(2)
qc.initialize(desired_state, [0, 1])
 
qc.decompose(reps=5).draw(output="mpl")

Output:

Output of the previous code cell

An advantage of amplitude encoding is the aforementioned requirement of only $\log_2(N)$ qubits to encode. However, subsequent algorithms must operate on the amplitudes of a quantum state, and methods to prepare and measure the quantum states tend not to be efficient.

Example

Write down the normalized state for encoding the following vector (made of two vectors from our example dataset): $\vec{x}=(9,8,6,2,9,2)$ using amplitude encoding.

Solution:

To encode 6 numbers, we will need at least 6 available basis states on whose amplitudes we can encode. This requires 3 qubits. Using an unknown normalization factor $\alpha$, we can write this as:

$$|\psi\rangle = \alpha(9|000\rangle+8|001\rangle+6|010\rangle+2|011\rangle+9|100\rangle+2|101\rangle+0|110\rangle+0|111\rangle)$$

Note that

$$\langle \psi|\psi\rangle = |\alpha|^2\times(9^2+8^2+6^2+2^2+9^2+2^2+0^2+0^2) = |\alpha|^2\times 270 = 1 \rightarrow \alpha = \frac{1}{\sqrt{270}}$$

So finally,

$$|\psi\rangle = \frac{1}{\sqrt{270}}(9|000\rangle+8|001\rangle+6|010\rangle+2|011\rangle+9|100\rangle+2|101\rangle+0|110\rangle+0|111\rangle)$$

Example

For the same data vector $\vec{x}=(9,8,6,2,9,2)$, write code to create a circuit that loads these data features using amplitude encoding.

Solution:

desired_state = [
    9 / math.sqrt(270),
    8 / math.sqrt(270),
    6 / math.sqrt(270),
    2 / math.sqrt(270),
    9 / math.sqrt(270),
    2 / math.sqrt(270),
    0,
    0,
]
 
print(desired_state)
 
qc = QuantumCircuit(3)
qc.initialize(desired_state, [0, 1, 2])
qc.decompose(reps=8).draw(output="mpl")

Output:

[0.5477225575051662, 0.48686449556014766, 0.36514837167011077, 0.12171612389003691, 0.5477225575051662, 0.12171612389003691, 0, 0]
Output of the previous code cell

Example

You may need to deal with very large data vectors. Consider the vector

$$\vec{x}=(4,8,5,9,8,6,2,9,2,5,7,0,3,7,5).$$

Write code to automate the normalization, and generate a quantum circuit for amplitude encoding.

Solution:

There are many possible answers. Here is code that prints a few steps along the way:

import numpy as np
from math import sqrt
 
init_list = [4, 8, 5, 9, 8, 6, 2, 9, 2, 5, 7, 0, 3, 7, 5]
qubits = int(np.ceil(np.log2(len(init_list))))  # smallest n with 2**n >= N
need_length = 2**qubits
pad = need_length - len(init_list)
init_list.extend([0] * pad)  # pad with zeros up to a power of 2
 
init_array = np.array(init_list)  # Unnormalized data vector
length = sqrt(sum(x**2 for x in init_array))  # Vector 2-norm
norm_array = init_array / length  # Normalized array
print("Normalized array:")
print(norm_array)
print()
 
qubit_numbers = list(range(qubits))
print(qubit_numbers)
 
qc = QuantumCircuit(qubits)
qc.initialize(norm_array, qubit_numbers)
qc.decompose(reps=7).draw(output="mpl")

Output:

Normalized array:
[0.17342199 0.34684399 0.21677749 0.39019949 0.34684399 0.26013299
 0.086711   0.39019949 0.086711   0.21677749 0.30348849 0.
 0.1300665  0.30348849 0.21677749 0.        ]

[0, 1, 2, 3]
Output of the previous code cell

Check-in question

Do you see advantages to amplitude encoding over basis encoding? If so, explain.

Answer:

There may be several answers. One answer is that, given the fixed ordering of the basis states, amplitude encoding preserves the order of the numbers encoded. It also typically encodes the data more densely, requiring fewer qubits for the same number of features.

A benefit of amplitude encoding is that only $\log_2(N)$ qubits are required for an $N$-dimensional ($N$-feature) data vector, $\vec{x}\rightarrow|\vec{x}\rangle$. However, amplitude encoding is generally an inefficient procedure: it relies on arbitrary state preparation, which requires a number of CNOT gates that grows exponentially with the number of qubits. Stated differently, the state preparation has a runtime complexity of $\mathcal{O}(N)$ in the number of dimensions, where $N = 2^n$ and $n$ is the number of qubits. Amplitude encoding "provides an exponential saving in space at the cost of an exponential increase in time"[3]; however, improvements to $\mathcal{O}(\log N)$ runtime are achievable in certain cases[4]. For an end-to-end quantum speedup, the data loading runtime complexity needs to be considered.
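
A quick way to see this logarithmic scaling in practice is to compute the qubit count for a few vector sizes (a minimal sketch using the rule above):

import math
 
# Qubits needed to amplitude-encode an N-feature vector: the smallest n with 2**n >= N
for N in [3, 6, 15, 100, 1000]:
    print(f"{N:4d} features -> {math.ceil(math.log2(N)):2d} qubits")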

Angle encoding

Angle encoding is of interest in many QML models using Pauli feature maps, such as quantum support vector machines (QSVMs) and variational quantum circuits (VQCs), among others. Angle encoding is closely related to phase encoding and dense angle encoding, which are presented below. Here we will use "angle encoding" to refer to a rotation in $\theta$, that is, a rotation away from the $z$-axis accomplished, for example, by an $R_X$ gate or an $R_Y$ gate[1,3]. Really, one can encode data in any rotation or combination of rotations, but $R_Y$ is common in the literature, so we emphasize it here.

When applied to a single qubit, angle encoding imparts a Y-axis rotation proportional to the data value. Consider the encoding of a single feature (the $k^\text{th}$) from the $j^\text{th}$ data vector in a dataset, $\vec{x}^{(j)}_k$:

$$|\vec{x}^{(j)}_k\rangle = R_Y(\theta=\vec{x}^{(j)}_k)|0\rangle = \cos\left(\frac{\vec{x}^{(j)}_k}{2}\right)|0\rangle + \sin\left(\frac{\vec{x}^{(j)}_k}{2}\right)|1\rangle.$$

Alternatively, angle encoding can be performed using $R_X(\theta)$ gates, although the encoded state would then carry a complex relative phase compared to $R_Y(\theta)$.

Angle encoding is different from the previous two methods discussed in several ways. In angle encoding:

  • Each feature value is mapped to a corresponding qubit, $\vec{x}^{(j)}_k \rightarrow Q_k$, leaving the qubits in a product state.
  • One numerical value is encoded at a time, rather than a whole set of features from a data point.
  • $n$ qubits are required for $N$ data features, where $n\leq N$. Equality often holds here; we'll see how $n<N$ is possible in the next few sections.
  • The resulting circuit has constant depth (typically depth 1 prior to transpilation).

The constant-depth circuit makes angle encoding particularly amenable to current quantum hardware. One additional feature of encoding our data using $\theta$ (and specifically, our choice of Y-axis angle encoding) is that it creates real-valued quantum states, which can be useful for certain applications. For Y-axis rotation, data is mapped with a Y-axis rotation gate $R_Y(\theta)$ by a real-valued angle $\theta \in (0, 2\pi]$ (Qiskit RYGate). As with phase encoding (see below), we recommend rescaling data so that $\vec{x}^{(j)}_k \in (0,2\pi]$, preventing information loss and other unwanted effects.
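
As a small sketch of that recommendation (the min-max-style rescaling below is one reasonable choice, not the only one), we can rescale a raw feature vector into (0, 2π] and apply one RY rotation per qubit:

import math
from qiskit import QuantumCircuit
 
raw_features = [4, 8, 5]  # the first vector in our example dataset
 
# Rescale so that the largest raw value maps to 2*pi
angles = [v * 2 * math.pi / max(raw_features) for v in raw_features]
 
qc_angle = QuantumCircuit(len(angles))
for qubit, angle in enumerate(angles):
    qc_angle.ry(angle, qubit)  # one Y-rotation per feature
qc_angle.draw(output="mpl")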

The following Qiskit code rotates a single qubit from an initial state $|0\rangle$ to encode a data value $\vec{x}^{(j)}_k=\frac{1}{2}\pi$.

from qiskit.quantum_info import Statevector
from math import pi
 
qc = QuantumCircuit(1)
state1 = Statevector.from_instruction(qc)
qc.ry(pi / 2, 0)  # RY gate rotates by an angle pi/2 about the Y-axis
state2 = Statevector.from_instruction(qc)
states = state1, state2

We will define a function to visualize the action on the state vector. The details of the function definition are not important, but the ability to visualize the state vectors and their changes is important.

from qiskit.visualization.bloch import Bloch
from qiskit.visualization.state_visualization import _bloch_multivector_data
 
 
def plot_Nstates(states, axis, plot_trace_points=True):
    """This function plots N states to 1 Bloch sphere"""
    bloch_vecs = [_bloch_multivector_data(s)[0] for s in states]
 
    if axis is None:
        bloch_plot = Bloch()
    else:
        bloch_plot = Bloch(axes=axis)
 
    bloch_plot.add_vectors(bloch_vecs)
 
    if len(states) > 1:
 
        def rgba_map(x, num):
            g = (0.95 - 0.05) / (num - 1)
            i = 0.95 - g * num
            y = g * x + i
            return (0.0, y, 0.0, 0.7)
 
        num = len(states)
        bloch_plot.vector_color = [rgba_map(x, num) for x in range(1, num + 1)]
 
    bloch_plot.vector_width = 3
    bloch_plot.vector_style = "simple"
 
    if plot_trace_points:
 
        def trace_points(bloch_vec1, bloch_vec2):
            # bloch_vec = (x,y,z)
            n_points = 15
            thetas = np.arccos([bloch_vec1[2], bloch_vec2[2]])
            phis = np.arctan2(
                [bloch_vec1[1], bloch_vec2[1]], [bloch_vec1[0], bloch_vec2[0]]
            )
            if phis[1] < 0:
                phis[1] = phis[1] + 2 * pi
            angles0 = np.linspace(phis[0], phis[1], n_points)
            angles1 = np.linspace(thetas[0], thetas[1], n_points)
 
            xp = np.cos(angles0) * np.sin(angles1)
            yp = np.sin(angles0) * np.sin(angles1)
            zp = np.cos(angles1)
            pnts = [xp, yp, zp]
            bloch_plot.add_points(pnts)
            bloch_plot.point_color = "k"
            bloch_plot.point_size = [4] * len(bloch_plot.points)
            bloch_plot.point_marker = ["o"]
 
        for i in range(len(bloch_vecs) - 1):
            trace_points(bloch_vecs[i], bloch_vecs[i + 1])
 
    bloch_plot.sphere_alpha = 0.05
    bloch_plot.frame_alpha = 0.15
    bloch_plot.figsize = [4, 4]
 
    bloch_plot.render()
 
 
plot_Nstates(states, axis=None, plot_trace_points=True)

Output:

Output of the previous code cell

That was just a single feature of a single data vector. When encoding $N$ features into the rotation angles of $n$ qubits, say for the $j^\text{th}$ data vector $\vec{x}^{(j)} = (x_1,\ldots,x_N)$, the encoded product state will look like this:

$$|\vec{x}^{(j)}\rangle = \bigotimes^N_{k=1} \left[\cos(\vec{x}^{(j)}_k)|0\rangle + \sin(\vec{x}^{(j)}_k)|1\rangle\right]$$

We note that this is equivalent to

$$|\vec{x}^{(j)}\rangle = \bigotimes^N_{k=1} R_Y(2\vec{x}^{(j)}_k)|0\rangle.$$

Example

Encode the data vector $\vec{x}^{(j)} = (0, \pi/4, \pi/2)$ using angle encoding, as described above.

Solution:

qc = QuantumCircuit(3)
qc.ry(0, 0)
qc.ry(2 * math.pi / 4, 1)
qc.ry(2 * math.pi / 2, 2)
qc.draw(output="mpl")

Output:

Output of the previous code cell

Check-in questions

Using angle encoding as described above, how many qubits are required to encode 5 features?

Answer: 5

Phase encoding

Phase encoding is very similar to the angle encoding described above. The phase angle of a qubit is a real-valued angle $\phi$ about the $z$-axis, measured from the $+x$-axis. Data are mapped with a phase rotation, $P(\phi) = e^{i\phi/2}R_Z(\phi)$, where $\phi \in (0,2\pi]$ (see Qiskit PhaseGate for more information). It is recommended to rescale data so that $\vec{x}^{(j)}_k \in (0,2\pi]$. This prevents information loss and other potentially unwanted effects[1,2].

A qubit is often initialized in the state $|0\rangle$, which is an eigenstate of the phase rotation operator, meaning that the qubit state first needs to be rotated for phase encoding to be implemented. It therefore makes sense to initialize the state with a Hadamard gate: $H|0\rangle = |+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Phase encoding on a single qubit means imparting a relative phase proportional to the data value:

$$|\vec{x}^{(j)}_k\rangle = P(\phi=\vec{x}^{(j)}_k)|+\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + e^{i\vec{x}^{(j)}_k}|1\rangle\big).$$

The phase encoding procedure maps each feature value to the phase of a corresponding qubit, $\vec{x}^{(j)}_k \rightarrow Q_k$. In total, phase encoding has a circuit depth of 2, including the Hadamard layer, which makes it an efficient encoding scheme. The phase-encoded multi-qubit state ($n$ qubits for $N=n$ features) is a product state:

$$|\vec{x}^{(j)}\rangle = \bigotimes_{k=1}^{N} P_k(\phi = \vec{x}^{(j)}_k)|+\rangle^{\otimes N} = \frac{1}{\sqrt{2^N}} \bigotimes_{k=1}^{N}\big(|0\rangle + e^{i\vec{x}^{(j)}_k}|1\rangle\big).$$

The following Qiskit code first prepares the initial state of a single qubit by rotating it with a Hadamard gate, then rotates it again using a phase gate to encode a data feature $\vec{x}^{(j)}_k=\frac{1}{2}\pi$.

qc = QuantumCircuit(1)
qc.h(0)  # Hadamard gate rotates state down to Bloch equator
state1 = Statevector.from_instruction(qc)
 
qc.p(pi / 2, 0)  # Phase gate rotates by an angle pi/2
state2 = Statevector.from_instruction(qc)
 
states = state1, state2
 
qc.draw("mpl", scale=1)

Output:

Output of the previous code cell

We can visualize the rotation in $\phi$ using the plot_Nstates function we defined.

plot_Nstates(states, axis=None, plot_trace_points=True)

Output:

Output of the previous code cell

The Bloch sphere plot shows the Z-axis rotation $|+\rangle \rightarrow P(\frac{1}{2}\pi)|+\rangle$ where $\vec{x}^{(j)}_k=\frac{1}{2}\pi$. The light green arrow shows the final state.

Phase encoding is used in many quantum feature maps, particularly the Z and ZZ feature maps, and general Pauli feature maps, among others.

Check-in questions

How many qubits are required in order to use phase encoding as described above to store 8 features?

Answer: 8

Example

Write code to encode the vector (4,8,5,9,8,6,2,9,2,5,7,0) using phase encoding.

Solution:

There may be many answers. Here is one example:

phase_data = [4, 8, 5, 9, 8, 6, 2, 9, 2, 5, 7, 0]
qc = QuantumCircuit(len(phase_data))
for i in range(0, len(phase_data)):
    qc.h(i)
    qc.rz(phase_data[i] * 2 * math.pi / float(max(phase_data)), i)
qc.draw(output="mpl")

Output:

Output of the previous code cell

Dense angle encoding

Dense angle encoding (DAE) is a combination of angle encoding and phase encoding. DAE allows two feature values to be encoded in a single qubit: one as a Y-axis rotation angle and the other as a Z-axis rotation angle, $\vec{x}^{(j)}_k, \vec{x}^{(j)}_\ell \rightarrow \theta, \phi$. It encodes two features as follows:

$$|\vec{x}^{(j)}_k,\vec{x}^{(j)}_\ell\rangle = R_Z(\phi=\vec{x}^{(j)}_\ell) R_Y(\theta=\vec{x}^{(j)}_k)|0\rangle = \cos\left(\frac{\vec{x}^{(j)}_k}{2}\right)|0\rangle + e^{i\vec{x}^{(j)}_\ell} \sin\left(\frac{\vec{x}^{(j)}_k}{2}\right)|1\rangle.$$

Encoding two data features into one qubit results in a $2\times$ reduction in the number of qubits required for the encoding. Extending this to more features, the data vector $\vec{x} = (x_1,\ldots,x_N)$ can be encoded as:

$$|\vec{x}\rangle = \bigotimes_{k=1}^{N/2} \left[\cos(x_{2k-1})|0\rangle + e^{i x_{2k}}\sin(x_{2k-1})|1\rangle\right]$$

DAE can be generalized to arbitrary functions of the two features instead of the sinusoidal functions used here. This is called general qubit encoding[3].

As an example of DAE, the code below encodes and visualizes the encoding of the features $x_1=\theta = 3\pi/8$ and $x_2=\phi = 7\pi/4$.

qc = QuantumCircuit(1)
state1 = Statevector.from_instruction(qc)
qc.ry(3 * pi / 8, 0)
state2 = Statevector.from_instruction(qc)
qc.rz(7 * pi / 4, 0)
state3 = Statevector.from_instruction(qc)
states = state1, state2, state3
 
plot_Nstates(states, axis=None, plot_trace_points=True)

Output:

Output of the previous code cell

Check-in questions

Given the treatment above, how many qubits are needed to encode 6 features using dense encoding?

Answer: 3

Example

Write code to load the vector (4,8,5,9,8,6,2,9,2,5,7,0,3,7,5) using dense angle encoding.

Solution:

Note that we have padded the list with a 0 so that the number of values is even and every qubit receives both a Y-rotation angle and a Z-rotation angle.

dense_data = [4, 8, 5, 9, 8, 6, 2, 9, 2, 5, 7, 0, 3, 7, 5, 0]
qc = QuantumCircuit(int(len(dense_data) / 2))
entry = 0
for i in range(0, int(len(dense_data) / 2)):
    qc.ry(dense_data[entry] * 2 * math.pi / float(max(dense_data)), i)
    entry = entry + 1
    qc.rz(dense_data[entry] * 2 * math.pi / float(max(dense_data)), i)
    entry = entry + 1
qc.draw(output="mpl")

Output:

Output of the previous code cell

Encoding with built-in feature maps

Encoding at arbitrary points

Angle encoding, phase encoding, and dense encoding prepare product states, with a feature encoded on each qubit (or two features per qubit). This is different from basis encoding and amplitude encoding, in that those methods make use of entangled states and there is not a 1:1 correspondence between data feature and qubit. In amplitude encoding, for example, you might have one feature as the amplitude of the state $|01\rangle$ and another feature as the amplitude of $|10\rangle$. Generally, methods that encode in product states yield shallower circuits and can store 1 or 2 features on each qubit. Methods that use entanglement and associate a feature with a state rather than a qubit result in deeper circuits, and can store more features per qubit on average.

But encoding need not be entirely in product states or entirely in entangled states as in amplitude encoding. Indeed, many encoding schemes built into Qiskit allow encoding both before and after an entanglement layer, as opposed to just at the beginning. This is known as "data reuploading". For related work, see references [5] and [6].

In this section, we will use and visualize a few of the built-in encoding schemes. All the methods in this section encode $N$ features as rotations on $N$ parameterized gates on $n$ qubits, where $n \leq N$. Note that maximizing data loading for a given number of qubits is not the only consideration. In many cases, circuit depth may be an even more important consideration than qubit count.

EfficientSU2

A common and useful example of encoding with entanglement is Qiskit's EfficientSU2 circuit. Impressively, this circuit can, for example, encode 8 features on only 2 qubits. Let's see this, and then try to understand how it is possible.

from qiskit.circuit.library import EfficientSU2
 
circuit = EfficientSU2(num_qubits=2, reps=1, insert_barriers=True)
circuit.decompose().draw(output="mpl")

Output:

Output of the previous code cell

Up to the first barrier (a point we label $b_1$), our state is:

$$|\psi\rangle_{b_1} = \left(\cos(\theta_0)|0\rangle+\sin(\theta_0)e^{i\theta_2}|1\rangle\right)\otimes\left(\cos(\theta_1)|0\rangle+\sin(\theta_1)e^{i\theta_3}|1\rangle\right)$$

That's just dense encoding, which we've seen before. Now, after the CNOT gate, at the second barrier ($b_2$), our state is

$$|\psi\rangle_{b_2} = \left(\cos(\theta_0)|0\rangle+\sin(\theta_0)e^{i\theta_2}|1\rangle\right)\otimes\cos(\theta_1)|0\rangle+ \left(\sin(\theta_0)e^{i\theta_2}|0\rangle+\cos(\theta_0)|1\rangle\right)\otimes\sin(\theta_1)e^{i\theta_3}|1\rangle$$

We now apply the last set of rotations to obtain:

$$\begin{aligned} |\psi\rangle_{\text{final}} &= \left(\cos(\theta_0)|0\rangle+\sin(\theta_0)e^{i\theta_2}|1\rangle\right)\otimes\cos(\theta_1)\left(\cos(\theta_4)|0\rangle+\sin(\theta_4)e^{i\theta_6}|1\rangle\right)\\ &+\left(\sin(\theta_0)e^{i\theta_2}|0\rangle+\cos(\theta_0)|1\rangle\right)\otimes\sin(\theta_1)e^{i\theta_3}\left(\cos(\theta_5)|1\rangle+\sin(\theta_5)e^{i\theta_7}|0\rangle\right) \end{aligned}$$

At first glance, it may appear that we have loaded more parameters onto this state than it can hold, since the final state can be written as $|\psi\rangle_\text{final} = c_0|00\rangle+c_1|01\rangle+c_2|10\rangle+c_3|11\rangle$. But note that each prefactor is complex! Written like this:

$$|\psi\rangle_\text{final} = (a_0+ib_0)|00\rangle+(a_1+ib_1)|01\rangle+(a_2+ib_2)|10\rangle+(a_3+ib_3)|11\rangle$$

One can see that we do, indeed, have 8 parameters on the state on which to encode our 8 features.

By increasing the number of qubits and increasing the number of repetitions of entangling and rotation layers, one can encode much more data. Writing out the wave functions quickly becomes intractable. But we can still see the encoding in action.

Here we encode the data vector $\vec{x} = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2]$ with 12 features, on a 3-qubit EfficientSU2 circuit, using each of the parameterized gates to encode a different feature.

x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
circuit = EfficientSU2(num_qubits=3, reps=1, insert_barriers=True)
encode = circuit.assign_parameters(x)
encode.decompose().draw(output="mpl")

Output:

Output of the previous code cell

Instead of increasing the number of qubits, you might choose to increase the number of repetitions of entangling and rotation layers. But there are limits to how many repetitions are useful.
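
To get a feel for this balance, the following sketch counts the encodable parameters and the CX gates as the number of repetitions grows (exact counts may vary with the Qiskit version and the chosen entanglement scheme):

from qiskit.circuit.library import EfficientSU2
 
for reps in range(1, 5):
    circ = EfficientSU2(num_qubits=3, reps=reps)
    cx_count = circ.decompose().count_ops().get("cx", 0)
    print(f"reps={reps}: {circ.num_parameters} encodable parameters, {cx_count} CX gates")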

As previously stated, there is a tradeoff: circuits with more qubits or more repetitions of entangling and rotation layers may store more parameters, but do so with greater circuit depth. We will return to the depths of some built-in feature maps, below.

The next few encoding methods that are built into Qiskit have "feature map" as part of their names. Let us reiterate that encoding data into a quantum circuit is a feature mapping, in the sense that it takes data into a new space: the Hilbert space of the qubits involved. The relationship between the dimensionality of the original feature space and that of the Hilbert space will depend on the circuit you use for encoding.

Z feature map

The Z feature map (ZFM) can be interpreted as a natural extension of phase encoding. The ZFM consists of alternating layers of single-qubit gates: Hadamard gate layers and phase gate layers. Let the data vector $\vec{x}$ have $N$ features. The quantum circuit that performs the feature mapping is represented as a unitary operator that acts on the initial state:

$$\mathscr{U}_{\text{ZFM}}(\vec{x})|0\rangle^{\otimes N}=|\phi(\vec{x})\rangle$$

where $|0\rangle^{\otimes N}$ is the $N$-qubit ground state. This notation is used for consistency with reference [2], Havlicek et al. The data features $x_i$ are mapped one-to-one to corresponding qubits. For example, if you have 8 features in a data vector, then you would use 8 qubits. The ZFM circuit is composed of $r$ repetitions of a subcircuit consisting of Hadamard gate layers and phase gate layers. A Hadamard layer is made up of a Hadamard gate acting on every qubit in an $n$-qubit register, $H \otimes H \otimes \dots \otimes H = H^{\otimes n}$, within the same stage of the algorithm. This description also applies to a phase gate layer, in which the $i^\text{th}$ qubit is acted on by $P(\vec{x}_i)$. Each $P$ gate has one feature as an argument, but the phase gate layer $\big(P(\vec{x}_1)\otimes\ldots\otimes P(\vec{x}_k)\otimes\ldots\otimes P(\vec{x}_N)\big)$ as a whole is a function of the data vector. The full ZFM circuit unitary with a single repetition is:

$$\mathscr{U}_{\text{ZFM}}=\big(P(\vec{x}_1)\otimes\ldots\otimes P(\vec{x}_k)\otimes\ldots\otimes P(\vec{x}_N)\big)H^{\otimes N}=\left(\bigotimes_{k = 1}^N P(\vec{x}_k)\right)H^{\otimes N}$$

Then $r$ repetitions of this unitary would be

$$\mathscr{U}^{(r)}_{\text{ZFM}}\left(\vec{x}\right)=\prod_{s=1}^{r}\left[\left(\bigotimes_{k = 1}^N P(\vec{x}_k)\right)H^{\otimes N}\right]$$

The data features $x_k$ are mapped to the phase gates in the same way in all $r$ repetitions. The ZFM state is a product state and is efficient to simulate classically[2].

To start with a small example, a 2-qubit ZFM circuit is coded using Qiskit and drawn to display the simple circuit structure. In the example, a single repetition, $r=1$, is implemented with the data vector $\vec{x} = \left(\frac{1}{2}\pi, \frac{1}{3}\pi\right)$. The ZFM circuit unitary operator acts on the initial state in the following way:

$$\mathscr{U}_{\text{ZFM}}(\vec{x})|00\rangle = \big(P(\vec{x}_1)\otimes P(\vec{x}_2)\big) H^{\otimes 2}|00\rangle = \left( P\left(\textstyle\frac{1}{2}\pi\right)H|0\rangle \right) \otimes \left(P\left(\textstyle\frac{1}{3}\pi\right)H|0\rangle\right).$$

The formula has been rearranged around the tensor product to emphasize the operations on each qubit. The following Qiskit code uses Hadamard and phase gates explicitly to show the structure of the ZFM:

qc0 = QuantumCircuit(1)
qc1 = QuantumCircuit(1)
 
qc0.h(0)
qc0.p(pi / 2, 0)
 
qc1.h(0)
qc1.p(pi / 3, 0)
 
# Combine circuits qc0 and qc1 into 1 circuit
qc = QuantumCircuit(2)
qc.compose(qc0, [0], inplace=True)
qc.compose(qc1, [1], inplace=True)
 
qc.draw("mpl", scale=1)

Output:

Output of the previous code cell

We now encode the same data vector $\vec{x} = \left(\frac{1}{2}\pi, \frac{1}{3}\pi\right)$ to a ZFM circuit with three repetitions, $r=3$, using the Qiskit ZFeatureMap class, which altogether gives us the quantum feature map $\mathscr{U}_{\text{ZFM}}(\vec{x})$. By default in the ZFeatureMap class, parameters $\beta$ are multiplied by 2 before mapping to the phase gate, $\beta \rightarrow P(\theta = 2\beta)$. To reproduce the same encodings as above, we divide by 2.

from qiskit.circuit.library import ZFeatureMap
 
zfeature_map = ZFeatureMap(feature_dimension=2, reps=3)
zfeature_map = zfeature_map.assign_parameters([(1 / 2) * pi / 2, (1 / 2) * pi / 3])
zfeature_map.decompose().draw("mpl")

Output:

Output of the previous code cell

You may use the ZFM via Qiskit's ZFeatureMap class; you can also use this structure as inspiration to construct your own feature mapping.

ZZ feature map

The ZZ feature map (ZZFM) extends the ZFM with the inclusion of two-qubit entangling gates, specifically the $ZZ$-rotation gate $R_{ZZ}(\theta)$. The ZZFM is conjectured to be generally expensive to compute on a classical computer, unlike the ZFM.

$R_{ZZ}(\theta)$ implements a $ZZ$-interaction and is maximally entangling for $\theta = \frac{1}{2}\pi$. $R_{ZZ}(\theta)$ can be decomposed into a series of gates on two qubits, as shown in the following Qiskit code using the RZZ gate and the QuantumCircuit class method decompose. We encode a single feature of the data vector $\vec{x}$: $\vec{x}_k=\pi$.

qc = QuantumCircuit(2)
qc.rzz(pi, 0, 1)
qc.draw("mpl", scale=1)

Output:

Output of the previous code cell

As is often the case, we see this represented as a single gate-like unit, until we use .decompose() to see all constituent gates.

qc.decompose().draw("mpl", scale=1)

Output:

Output of the previous code cell

Data is mapped with a phase rotation $P(\theta) = e^{i\theta/2}R_Z(\theta)$ on the second qubit. The $R_{ZZ}(\theta)$ gate entangles the two qubits on which it operates, with a degree of entanglement determined by the encoded feature value.

The full ZZFM circuit consists of a Hadamard gate and phase gate, as in the ZFM, followed by the entanglement described above. A single repetition of the ZZFM circuit is:

$$\mathscr{U}_{\text{ZZFM}}(\vec{x}) = U_{ZZ}(\vec{x})\big(P(\vec{x}_1)\otimes\ldots\otimes P(\vec{x}_k)\otimes\ldots\otimes P(\vec{x}_N)\big)H^{\otimes N}=U_{ZZ}(\vec{x})\left(\bigotimes_{k = 1}^N P(\vec{x}_k)\right)H^{\otimes N},$$

where $U_{ZZ}(\vec{x})$ contains a $ZZ$-gate layer structured by an entanglement scheme. Several entanglement schemes are shown in code blocks below. The structure of $U_{ZZ}(\vec{x})$ also includes a function that combines the data features from the qubits being entangled in the following way. Let us say that the $R_{ZZ}$ gate is to be applied to qubits $p$ and $q$. In the phase layer, these qubits have phase gates that encode $\vec{x}_p$ and $\vec{x}_q$ on them, respectively. The argument $\theta_{q,p}$ of $R_{ZZ,q,p}(\theta_{q,p})$ will not simply be one of these features or the other, but a function often denoted by $\phi$ (not to be confused with the azimuthal angle):

$$\theta_{q,p} \rightarrow \phi(\vec{x}_q, \vec{x}_p) = 2(\pi-\vec{x}_q)(\pi-\vec{x}_p).$$

We will see this in several examples below. The extension to multiple repetitions is the same as in the ZFeatureMap case:

$$\mathscr{U}^{(r)}_{\text{ZZFM}}\left(\vec{x}\right)=\prod_{s=1}^{r}\left[U_{ZZ}(\vec{x})\left(\bigotimes_{k = 1}^N P(\vec{x}_k)\right)H^{\otimes N}\right].$$

As the operators have increased in complexity, let us first encode a data vector $\vec{x} = (x_0, x_1)$ with a two-qubit ZZFM and one repetition using the following code:

from qiskit.circuit.library import ZZFeatureMap
 
feature_dim = 2
zzfeature_map = ZZFeatureMap(
    feature_dimension=feature_dim, entanglement="linear", reps=1
)
zzfeature_map.decompose(reps=1).draw("mpl", scale=1)

Output:

Output of the previous code cell

By default in Qiskit, the features $(\vec{x}_1, \vec{x}_2)$ are mapped together to $R_{ZZ}(\theta)$ by the mapping function $\theta_{1,2} = \phi(\vec{x}_1, \vec{x}_2) = 2(\pi-\vec{x}_1)(\pi-\vec{x}_2)$. Qiskit allows the user to customize the function $\phi$ (or $\phi_S$, where $S$ is the set of qubit pairs coupled through $R_{ZZ}$ gates) as a preprocessing step.
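
For instance, a custom mapping can be supplied through the data_map_func argument of ZZFeatureMap. The sketch below is illustrative only: it swaps the default pair function for a plain product of the two features. Note that the function receives an array of symbolic circuit parameters, so it must be written in terms of them:

import numpy as np
from qiskit.circuit.library import ZZFeatureMap
 
 
def custom_phi(x):
    # x is a NumPy array of (symbolic) parameters for one connectivity set:
    # a single feature for the phase gates, a pair for the RZZ couplings
    if len(x) == 1:
        return x[0]
    return x[0] * x[1]
 
 
zzfm_custom = ZZFeatureMap(feature_dimension=2, reps=1, data_map_func=custom_phi)
zzfm_custom.assign_parameters([np.pi / 2, np.pi / 3]).decompose().draw("mpl", scale=1)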

Moving to a four-dimensional data vector $\vec{x} = (\vec{x}_1, \vec{x}_2, \vec{x}_3, \vec{x}_4)$ and mapping to a four-qubit ZZFM with one repetition, we can start to see the mapping $\phi$ for various qubit pairs. We can also see the meaning of "linear" entanglement:

feature_dim = 4
zzfeature_map = ZZFeatureMap(
    feature_dimension=feature_dim, entanglement="linear", reps=1
)
zzfeature_map.decompose().draw("mpl", scale=1)

Output:

Output of the previous code cell

In the linear entanglement scheme, nearest-neighbor (numbered) pairs of qubits in this circuit are entangled. There are other built-in entanglement schemes in Qiskit, including circular and full.

Pauli feature map

The Pauli feature map (PFM) is the generalization of the ZFM and ZZFM to arbitrary Pauli gates. The Pauli feature map takes a form very similar to the previous two feature maps. For $r$ repetitions of the encoding of the $N$ features of vector $\vec{x}$,

$$\mathscr{U}_{\text{PFM}}(\vec{x}) = \prod_{s=1}^{r} U(\vec{x}) H^{\otimes n}.$$

For the PFM, $U(\vec{x})$ is generalized to a Pauli expansion unitary operator. Here we present a more general form of the feature maps considered so far:

$$U(\vec{x}) = \exp\left(i \sum_{S \in\mathcal{I}} \phi_S(\vec{x}) \prod_{i \in S} \sigma_i \right),$$

where $\sigma_i$ is a Pauli operator, $\sigma_i \in \{I,X,Y,Z\}$. Here $\mathcal{I}$ is the set of all qubit connectivities determined by the feature map, including the sets of qubits acted on by single-qubit gates. That is, for a feature map in which qubit 0 is acted upon by a phase gate, and qubits 2 and 3 are acted upon by an $R_{ZZ}$ gate, the set $\mathcal{I}$ would include $\{\{0\},\{2,3\}\}$. $S$ runs through all elements of that set. In previous feature maps, the function $\phi_S(\vec{x})$ was involved either exclusively with single-qubit gates or exclusively with two-qubit gates. Here, we define it in general:

$$\phi_S(\vec{x})= \begin{cases} x_i & \text{if } S= \{i\} \text{ (single-qubit)}\\ \prod_{j\in S}(\pi-x_j) & \text{if } |S|\ge 2 \text{ (multi-qubit)} \end{cases}$$

For more detail, see the Qiskit PauliFeatureMap class documentation. In the ZZFM, the operator $\sigma_i$ is restricted to $Z_i$.

One way to understand the above unitary is through analogy with the propagator of a physical system. The unitary above is a unitary evolution operator, $\exp(it\mathcal{H})$, for a Hamiltonian $\mathcal{H}$ similar to the Ising model, where the time parameter $t$ is replaced with data values to drive the evolution. The expansion of this unitary operator gives the PFM circuit. The entangling connectivities in $S$ can be interpreted as Ising couplings in a spin lattice.

Let us consider an example of Pauli $Y$ and $XX$ operators representing those Ising-type interactions. Qiskit provides a PauliFeatureMap class for instantiating a PFM with a choice of single- and $n$-qubit gates, which in this example will be passed as the Pauli strings 'Y' and 'XX'. Typically, $n$ is 1 or 2 for single- and two-qubit interactions, respectively. The entanglement scheme is "linear," meaning that only nearest-neighbor qubits in the quantum circuit are coupled. Note that this does not correspond to nearest-neighbor qubits on the quantum computer itself, as this quantum circuit is an abstraction layer.

from qiskit.circuit.library import PauliFeatureMap
 
feature_dim = 3
pauli_feature_map = PauliFeatureMap(
    feature_dimension=feature_dim, entanglement="linear", reps=1, paulis=["Y", "XX"]
)
 
pauli_feature_map.decompose().draw("mpl", scale=1.5)

Output:

Output of the previous code cell

Qiskit provides a parameter, $\alpha$, in Pauli feature maps to control the scaling of the Pauli rotations.

$$U(\vec{x}) = \exp\left(i \alpha \sum_{S\subseteq[n]} \phi_S(\vec{x}) \prod_{i \in S} \sigma_i \right)$$

The default value of $\alpha$ is 2. By optimizing its value over an interval, for example $[0,4]$, one can better align a quantum kernel to the data.
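
As a brief sketch of this option (the value alpha=1.0 and the Pauli strings below are arbitrary choices for illustration), the scaling parameter can be passed directly to the PauliFeatureMap constructor:

from qiskit.circuit.library import PauliFeatureMap
 
# Same structure as a ZZ-type map, but with a non-default rotation scaling
pfm_scaled = PauliFeatureMap(feature_dimension=2, reps=1, paulis=["Z", "ZZ"], alpha=1.0)
pfm_scaled.decompose().draw("mpl", scale=1)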

Here we visualize various Pauli feature maps for two-qubit circuits to get a better picture of the range of possibilities.

from qiskit.visualization import circuit_drawer
import matplotlib.pyplot as plt
 
feature_dim = 2
fig, axs = plt.subplots(9, 2)
i_plot = 0
for paulis in [
    ["I"],
    ["X"],
    ["Y"],
    ["Z"],
    ["XX"],
    ["XY"],
    ["XZ"],
    ["YY"],
    ["YZ"],
    ["ZZ"],
    ["X", "ZZ"],
    ["Y", "ZZ"],
    ["Z", "ZZ"],
    ["X", "YZ"],
    ["Y", "YZ"],
    ["Z", "YZ"],
    ["YY", "ZZ"],
    ["XY", "ZZ"],
]:
    pfm = PauliFeatureMap(feature_dimension=feature_dim, paulis=paulis, reps=1)
    circuit_drawer(
        pfm.decompose(),
        output="mpl",
        style={"backgroundcolor": "#EEEEEE"},
        ax=axs[int((i_plot - i_plot % 2) / 2), i_plot % 2],
    )
    axs[int((i_plot - i_plot % 2) / 2), i_plot % 2].title.set_text(paulis)
    i_plot += 1
 
fig.set_figheight(16)
fig.set_figwidth(16)

Output:

Output of the previous code cell

The above can, of course, be extended to include other permutations and repetitions of Pauli matrices. Learners are encouraged to experiment with those options.


Review of built-in feature maps

You have seen several schemes for encoding data into a quantum circuit:

  • Basis encoding
  • Amplitude encoding
  • Angle encoding
  • Phase encoding
  • Dense encoding

You have seen how to construct your own feature maps using these encoding schemes, and you have seen four built-in feature maps which take advantage of angle and phase encoding:

  • EfficientSU2
  • ZFeatureMap
  • ZZFeatureMap
  • PauliFeatureMap

These built-in feature maps differed from each other in several ways:

  • The depth for a given number of encoded features
  • The number of qubits required for a given number of features
  • The degree of entanglement (obviously related to the other differences)

The code below applies these four built-in feature maps to the encoding of a feature set and plots the number of two-qubit gates in the resulting circuit. Since two-qubit error rates are much higher than single-qubit gate error rates, one might reasonably be most interested in the two-qubit gate count. In the code below, we obtain counts of all gates in a circuit by first decomposing the circuit and then using count_ops(), as shown below. Here the two-qubit gates we are interested in are 'cx' gates:

# Initializing empty lists for the CX gate counts
n_data = []
zz2gates = []
su22gates = []
z2gates = []
p2gates = []
 
# Generating feature maps and counting their CX gates
for n in range(3, 10):
    zzcircuit = ZZFeatureMap(n, reps=1, insert_barriers=True)
    zcircuit = ZFeatureMap(n, reps=1, insert_barriers=True)
    su2circuit = EfficientSU2(n, reps=1, insert_barriers=True)
    pcircuit = PauliFeatureMap(n, reps=1, paulis=["XX"], insert_barriers=True)
 
    # Getting the cx counts (0 if the decomposed circuit contains no cx gates)
    zzcx = zzcircuit.decompose().count_ops().get("cx", 0)
    zcx = zcircuit.decompose().count_ops().get("cx", 0)
    su2cx = su2circuit.decompose().count_ops().get("cx", 0)
    pcx = pcircuit.decompose().count_ops().get("cx", 0)
 
    # Appending the cx gate counts to the lists. We shift the ZZ and Pauli points slightly, because they overlap.
    n_data.append(n)
    zz2gates.append(zzcx - 0.5)
    z2gates.append(zcx)
    su22gates.append(su2cx)
    p2gates.append(pcx + 0.5)
 
# Plot the output
plt.plot(n_data, p2gates, "bo")
plt.plot(n_data, zz2gates, "ro")
plt.plot(n_data, su22gates, "yo")
plt.plot(n_data, z2gates, "go")
plt.ylabel("CX Gates")
plt.xlabel("Data elements")
plt.legend(["Pauli", "ZZ", "SU2", "Z"])
plt.show()

Output:

Output of the previous code cell

Generally Pauli and ZZ feature maps will result in greater circuit depth and higher numbers of 2-qubit gates than EfficientSU2 and Z feature maps.

Because the feature maps built into Qiskit are widely applicable, we will often not need to design our own, especially in the learning phase. However, experts in quantum machine learning will likely return to the subject of designing their own feature mapping, as they tackle two complicated challenges:

  1. Modern hardware: the presence of noise and the large overhead of error-correcting code mean that present-day applications will need to consider things like hardware efficiency and minimizing two-qubit gate depth.

  2. Mappings that fit the problem at hand: It is one thing to say that the ZZFeatureMap, for example, is difficult to simulate classically, and therefore interesting. It is quite another thing for the ZZFeatureMap to be ideally suited to your machine learning task or data set. The performance of different parameterized quantum circuits on different types of data is an active area of investigation.

We close with a note on hardware efficiency.


Hardware-efficient feature mapping

A hardware-efficient feature mapping is one that takes into account constraints of real quantum computers, in the interest of reducing noise and errors in the computation. When running quantum circuits on near-term quantum computers, there are many strategies to mitigate noise inherent to the hardware. One main strategy for hardware efficiency is the minimization of the depth of the quantum circuit so that noise and decoherence have less time to corrupt the computation. The depth of a quantum circuit is the number of time-aligned gate steps required to complete the entire computation (after circuit optimization)[5]. Recall that the depth of the abstract, logical circuit may be much lower than the depth once the circuit is transpiled for a real quantum computer.

Transpilation is the process of converting the quantum circuit from a high-level abstraction to one that is ready to run on a real quantum computer, taking into account constraints of the hardware. A quantum computer has a native set of single- and two-qubit gates. This means all gates in Qiskit code have to be transpiled into the set of native hardware gates. For example, in ibm_torino, a system sporting a Heron r1 processor and completed in 2023, the native or basis gates are {CZ, ID, RZ, SX, X}. These are the two-qubit controlled-Z gate, and single-qubit gates called identity, Z-rotation, square root of NOT, and NOT, respectively, providing a universal set. When implementing multi-qubit gates as an equivalent subcircuit, physical two-qubit CZ gates are required, along with other single-qubit gates available in hardware. In addition, to perform a two-qubit gate on a pair of qubits that are not physically coupled, SWAP gates are added to move qubit states between qubits to enable coupling, which leads to an unavoidable extension of the circuit. The degree of optimization performed during transpilation is controlled by the optimization_level argument, which can be set from 0 up to a highest level of 3. For greater control and customizability, the transpiler pipeline can be managed with the Qiskit Pass Manager. Refer to the Qiskit Transpiler documentation for more information on transpilation.
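
As an illustration, here is a short sketch (not a full hardware workflow) that transpiles a small bound feature map to the ibm_torino native gate set quoted above; transpiling against an actual backend object would additionally account for qubit connectivity and calibration data:

from math import pi
from qiskit import transpile
from qiskit.circuit.library import ZZFeatureMap
 
# Bind example feature values to a small ZZ feature map
fm = ZZFeatureMap(feature_dimension=3, reps=1).assign_parameters([pi / 2, pi / 3, pi / 4])
 
# Transpile to the native gate set listed above for ibm_torino
native = transpile(fm, basis_gates=["cz", "id", "rz", "sx", "x"], optimization_level=3)
 
print("Two-qubit gate count:", native.count_ops().get("cz", 0))
print("Transpiled depth:    ", native.depth())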

In Havlicek et al. 2019 [2], one way the authors achieve hardware efficiency is by using the ZZ feature map because it is a second-order expansion (see the "ZZ feature map" section above). An $N$-order expansion has $N$-qubit gates. IBM Quantum systems do not have native $N$-qubit gates for $N>2$, so implementing them would require decomposition into the two-qubit entangling gates available in hardware. A second way the authors minimize depth is by choosing a ZZ coupling topology that maps directly to the architecture couplings. A further optimization they undertake is targeting a higher-performing, suitably connected hardware subcircuit. Additional things to consider are minimizing the number of feature map repetitions and choosing a customized low-depth or "linear" entangling scheme instead of the "full" scheme that entangles all qubits.

Data encoding image

The above graphic shows a network of nodes and edges that represent physical qubits and hardware couplings, respectively. The coupling map and performance of ibm_torino are shown with all possible two-qubit CZ coupling gates. Qubits are color-coded on a scale based on the T1 relaxation time in microseconds (μs), where longer T1 times are better and shown in a lighter shade. The coupling edges are color-coded by CZ error, where darker shades are better. Information on the hardware specification can be accessed through the hardware backend configuration schema, IBMQBackend.configuration().


References

  1. Maria Schuld and Francesco Petruccione, Supervised Learning with Quantum Computers, Springer, 2018, doi:10.1007/978-3-319-96424-9.
  2. Vojtech Havlicek et al., "Supervised Learning with Quantum Enhanced Feature Spaces," Nature 567 (2019): 209–212, arXiv:1804.11326.
  3. Ryan LaRose and Brian Coyle, "Robust data encodings for quantum classifiers," Physical Review A 102, 032420 (2020), doi:10.1103/PhysRevA.102.032420, arXiv:2003.01695.
  4. Lov Grover and Terry Rudolph, "Creating Superpositions That Correspond to Efficiently Integrable Probability Distributions," arXiv:quant-ph/0208112 (2002).
  5. Adrián Pérez-Salinas, Alba Cervera-Lierta, Elies Gil-Fuster, and José I. Latorre, "Data re-uploading for a universal quantum classifier," Quantum 4, 226 (2020), arXiv:1907.02085.
  6. Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer, "The effect of data encoding on the expressive power of variational quantum machine learning models," Physical Review A 103, 032430 (2021), arXiv:2008.08605.
import qiskit
 
qiskit.version.get_version_info()

Output:

'2.0.2'