Discretization of errors
So far we've considered X errors and Z errors in the context of the 9-qubit Shor code, and in this section we'll consider arbitrary errors. What we'll find is that, to handle such errors, we don't need to do anything different from what we've already discussed; the ability to correct X errors, Z errors, or both, implies the ability to correct arbitrary errors. This phenomenon is sometimes called the discretization of errors.
Unitary qubit errors
Let's begin with single-qubit unitary errors. For example, such an error could correspond to a very small rotation of the Bloch sphere, perhaps representing noise introduced by a gate that isn't perfect. Or it could be any other unitary operation on a qubit, not necessarily one that's close to the identity.
It might seem like correcting for such errors is difficult. After all, there are infinitely many possible errors like this, and it's inconceivable that we could somehow identify each error exactly and then undo it. However, as long as we can correct for a bit-flip, a phase-flip, or both, then we will succeed in correcting an arbitrary single-qubit unitary error using the procedures described earlier in the lesson.
To see why this is the case, let us recognize first that we can express an arbitrary unitary matrix $U$ representing an error on a single qubit as a linear combination of the four Pauli matrices (including the identity matrix):

$$
U = \alpha \mathbb{1} + \beta \sigma_x + \gamma \sigma_y + \delta \sigma_z.
$$
As we will see, when the error-detection circuits are run, the measurements that give us the syndrome bits effectively collapse the state of the encoding probabilistically to one where an error (or lack of an error) represented by one of the four Pauli matrices has taken place. (It follows from the fact that $U$ is unitary that the numbers $\alpha,$ $\beta,$ $\gamma,$ and $\delta$ must satisfy $\vert\alpha\vert^2 + \vert\beta\vert^2 + \vert\gamma\vert^2 + \vert\delta\vert^2 = 1,$ and indeed, the values $\vert\alpha\vert^2,$ $\vert\beta\vert^2,$ $\vert\gamma\vert^2,$ and $\vert\delta\vert^2$ are the probabilities with which the encoded state collapses to one for which the corresponding Pauli error has occurred.)
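As a quick numerical check (our own illustration, not part of the lesson's argument), the following NumPy sketch expands a small rotation about the x-axis of the Bloch sphere in the Pauli basis and confirms that the squared magnitudes of the coefficients sum to 1. The helper name pauli_coefficients is ours.

```python
import numpy as np

# The four Pauli matrices (including the identity).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coefficients(U):
    """Return (alpha, beta, gamma, delta) with U = alpha*I + beta*X + gamma*Y + delta*Z.
    The Paulis are orthogonal in the Hilbert-Schmidt inner product, so each
    coefficient is Tr(P @ U) / 2."""
    return tuple(np.trace(P @ U) / 2 for P in (I2, X, Y, Z))

# A very small rotation about the x-axis of the Bloch sphere: a "tiny" unitary error.
theta = 0.01
U = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
              [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

alpha, beta, gamma, delta = pauli_coefficients(U)
probs = [abs(c) ** 2 for c in (alpha, beta, gamma, delta)]
print(probs)       # collapse probabilities for the identity, X, Y, and Z branches
print(sum(probs))  # 1.0, because U is unitary
```

For a tiny rotation like this one, nearly all of the weight is on the identity, so most of the time the syndrome measurement reports no error at all.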
To explain how this works in greater detail, it will be convenient to use subscripts to indicate which qubit a given single-qubit unitary operation acts upon. For example, using Qiskit's qubit numbering convention to number the 9 qubits used for the Shor code, we have these expressions for various unitary operations on single qubits, where in each case we tensor the unitary matrix with the identity matrix on every other qubit:

$$
U_0 = \mathbb{1}^{\otimes 8} \otimes U, \qquad
U_4 = \mathbb{1}^{\otimes 4} \otimes U \otimes \mathbb{1}^{\otimes 4}, \qquad
U_8 = U \otimes \mathbb{1}^{\otimes 8}.
$$
So, in particular, for a given single-qubit unitary operation $U = \alpha \mathbb{1} + \beta \sigma_x + \gamma \sigma_y + \delta \sigma_z,$ we can specify the action of $U$ applied to qubit $k$ by the following formula, which is similar to the one before except that each matrix represents an operation applied to qubit $k$:

$$
U_k = \alpha \mathbb{1} + \beta X_k + \gamma Y_k + \delta Z_k.
$$

(Here $X_k,$ $Y_k,$ and $Z_k$ denote the Pauli matrices $\sigma_x,$ $\sigma_y,$ and $\sigma_z$ applied to qubit $k,$ and $\mathbb{1}$ denotes the identity operation on all 9 qubits.)
Now suppose that $\vert\psi\rangle$ is the 9-qubit encoding of a qubit state. If the error $U$ takes place on qubit $k,$ we obtain the state $U_k \vert\psi\rangle,$ which can be expressed as a linear combination of Pauli operations acting on $\vert\psi\rangle$ as follows.

$$
U_k \vert\psi\rangle = \alpha \vert\psi\rangle + \beta X_k \vert\psi\rangle + \gamma Y_k \vert\psi\rangle + \delta Z_k \vert\psi\rangle
$$
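To make the subscript notation concrete, here is a short NumPy sketch (again our own illustration; the helper name on_qubit is ours) that embeds a random single-qubit unitary on qubit $k$ of a 9-qubit register, using Qiskit's convention that qubit 0 is the rightmost tensor factor, and verifies that $U_k$ equals the corresponding linear combination of Pauli operations on qubit $k.$

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_qubit(M, k, n=9):
    """The 2x2 matrix M applied to qubit k of an n-qubit register (identity elsewhere),
    with Qiskit's convention that qubit 0 is the rightmost tensor factor."""
    factors = [I2] * n
    factors[n - 1 - k] = M
    return reduce(np.kron, factors)

# A random single-qubit unitary U and its Pauli coefficients.
rng = np.random.default_rng(7)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)
alpha, beta, gamma, delta = (np.trace(P @ U) / 2 for P in (I2, X, Y, Z))

# U applied to qubit k equals the same linear combination of Paulis applied to qubit k.
k, n = 4, 9
U_k = on_qubit(U, k, n)
combination = (alpha * np.eye(2 ** n) + beta * on_qubit(X, k, n)
               + gamma * on_qubit(Y, k, n) + delta * on_qubit(Z, k, n))
print(np.allclose(U_k, combination))  # True
```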
At this point let's make the substitution $Y_k = i X_k Z_k,$ so that the $\sigma_y$ component of the error is expressed as an $X_k$ error and a $Z_k$ error occurring together:

$$
U_k \vert\psi\rangle = \alpha \vert\psi\rangle + \beta X_k \vert\psi\rangle + i \gamma X_k Z_k \vert\psi\rangle + \delta Z_k \vert\psi\rangle.
$$
Now consider the error-detection and correction steps described previously. We can think about the measurement outcomes for the three inner-code parity checks, along with those for the outer code, collectively as a single syndrome consisting of 8 bits. Just prior to the actual standard basis measurements that produce these syndrome bits, the state has the following form.

$$
\alpha \, \vert \text{no error} \rangle \vert\psi\rangle
+ \beta \, \vert X_k \text{ error} \rangle X_k \vert\psi\rangle
+ i \gamma \, \vert X_k \text{ and } Z_k \text{ errors} \rangle X_k Z_k \vert\psi\rangle
+ \delta \, \vert Z_k \text{ error} \rangle Z_k \vert\psi\rangle
$$
To be clear, we have two systems at this point. The system on the left is the 8 qubits we'll measure to get the syndrome, where $\vert \text{no error} \rangle,$ $\vert X_k \text{ error} \rangle,$ and so on, refer to whatever 8-qubit standard basis state is consistent with the corresponding error (or non-error). The system on the right is the 9 qubits we're using for the encoding.
Notice that these two systems are now correlated (in general), and this is the key to why this works. By measuring the syndrome, the state of the 9 qubits on the right effectively collapses to one in which a Pauli error consistent with the measured syndrome has been applied to one of the qubits. Moreover, the syndrome itself provides enough information so that we can undo the error and recover the original encoding $\vert\psi\rangle.$
In particular, if the syndrome qubits are measured and the appropriate corrections are made, we obtain a state that can be expressed as a density matrix,

$$
\sigma \otimes \vert\psi\rangle\langle\psi\vert,
$$

where

$$
\sigma =
\vert\alpha\vert^2 \, \vert \text{no error} \rangle\langle \text{no error} \vert
+ \vert\beta\vert^2 \, \vert X_k \text{ error} \rangle\langle X_k \text{ error} \vert
+ \vert\gamma\vert^2 \, \vert X_k \text{ and } Z_k \text{ errors} \rangle\langle X_k \text{ and } Z_k \text{ errors} \vert
+ \vert\delta\vert^2 \, \vert Z_k \text{ error} \rangle\langle Z_k \text{ error} \vert.
$$
Critically, this is a product state: we have our original, uncorrupted encoding as the right-hand tensor factor, and on the left we have a density matrix that describes a random error syndrome. There is no longer any correlation with the system on the right, which is the one we care about, because the errors have been corrected. At this point we can throw the syndrome qubits away or reset them so we can use them again. This is how the randomness — or entropy — created by errors is removed from the system.
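The following sketch simulates this collapse end to end. To keep it small, it uses the 3-qubit bit-flip repetition code rather than the 9-qubit Shor code, and the unitary error is a rotation built only from $\mathbb{1}$ and $\sigma_x,$ which is the kind of error that smaller code can handle; the structure of the argument is the same. All helper names are ours.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def on_qubit(M, k, n=3):
    """M applied to qubit k of n qubits (Qiskit ordering: qubit 0 is the rightmost factor)."""
    factors = [np.eye(2)] * n
    factors[n - 1 - k] = M
    return reduce(np.kron, factors)

# A logical state a|000> + b|111> of the 3-qubit repetition (bit-flip) code.
a, b = 0.6, 0.8
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = a, b

# A unitary error on qubit 1: a small rotation built from the identity and a bit-flip.
theta = 0.3
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
corrupted = on_qubit(U, 1) @ psi

# Projectors for each syndrome: the code space itself, and its image under X on each qubit.
e = np.eye(8)
code_proj = np.outer(e[0b000], e[0b000]) + np.outer(e[0b111], e[0b111])
outcomes = {"no error": (code_proj, np.eye(8))}
for k in range(3):
    Xk = on_qubit(X, k)
    outcomes[f"X error on qubit {k}"] = (Xk @ code_proj @ Xk, Xk)

# Measuring the syndrome collapses the corrupted state onto one of these subspaces;
# applying the indicated correction then recovers the original encoding exactly.
for label, (projector, correction) in outcomes.items():
    branch = projector @ corrupted
    prob = np.linalg.norm(branch) ** 2
    if prob < 1e-12:
        continue
    collapsed = branch / np.linalg.norm(branch)
    recovered = correction @ collapsed
    overlap = abs(np.vdot(psi, recovered)) ** 2   # 1.0 up to a global phase
    print(f"{label}: probability {prob:.4f}, overlap with original {overlap:.4f}")
```

Every syndrome outcome that occurs with nonzero probability leads back to the original encoded state once the indicated correction is applied, exactly as described above.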
This is the discretization of errors for the special case of unitary errors. In essence, by measuring the syndrome, we effectively project the error onto an error that's described by a Pauli matrix.
At first glance it may seem too good to be true that we can correct for arbitrary unitary errors like this, even errors that are tiny and hardly noticeable on their own. But what's important to realize here is that this is a unitary error on a single qubit, and by the design of the code, a single-qubit operation can't change the state of the logical qubit that's been encoded. All it can possibly do is move the state out of the subspace of valid encodings, but then the error-detection measurements collapse the state and the corrections bring it back to where it started.
Arbitrary qubit errors
Finally, let's consider arbitrary errors that are not necessarily unitary. To be precise, we'll consider an error described by an arbitrary qubit channel $\Phi.$ For example, this could be a dephasing or depolarizing channel, a reset channel, or a strange channel that we've never thought about before.
The first step is to consider any Kraus representation of $\Phi$:

$$
\Phi(\rho) = \sum_{j} A_j \rho A_j^\dagger.
$$
This is a qubit channel, so each $A_j$ is a $2\times 2$ matrix, which we can express as a linear combination of Pauli matrices:

$$
A_j = \alpha_j \mathbb{1} + \beta_j \sigma_x + \gamma_j \sigma_y + \delta_j \sigma_z.
$$
This allows us to express the action of the error $\Phi$ on a chosen qubit $k$ of the encoding $\vert\psi\rangle$ in terms of Pauli matrices as follows.

$$
\Phi_k\bigl(\vert\psi\rangle\langle\psi\vert\bigr) = \sum_{j} (A_j)_k \, \vert\psi\rangle\langle\psi\vert \, (A_j)_k^\dagger,
\qquad
(A_j)_k = \alpha_j \mathbb{1} + \beta_j X_k + \gamma_j Y_k + \delta_j Z_k
$$
In short, we've simply expanded out all of our Kraus matrices as linear combinations of Pauli matrices.
If we now compute and measure the error syndrome, and correct for any errors that are revealed, we'll obtain a similar sort of state to what we had in the case of a unitary error:

$$
\sigma \otimes \vert\psi\rangle\langle\psi\vert,
$$

where this time we have

$$
\sigma =
\Bigl(\sum_{j} \vert\alpha_j\vert^2\Bigr) \vert \text{no error} \rangle\langle \text{no error} \vert
+ \Bigl(\sum_{j} \vert\beta_j\vert^2\Bigr) \vert X_k \text{ error} \rangle\langle X_k \text{ error} \vert
+ \Bigl(\sum_{j} \vert\gamma_j\vert^2\Bigr) \vert X_k \text{ and } Z_k \text{ errors} \rangle\langle X_k \text{ and } Z_k \text{ errors} \vert
+ \Bigl(\sum_{j} \vert\delta_j\vert^2\Bigr) \vert Z_k \text{ error} \rangle\langle Z_k \text{ error} \vert.
$$
The details are a bit messier and are not shown here. Conceptually speaking, the idea is identical to the unitary case.
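As a concrete (and entirely optional) illustration of these probabilities, the following NumPy sketch takes the amplitude damping channel as an example of $\Phi,$ expands its Kraus matrices in the Pauli basis, and computes $\sum_j \vert\alpha_j\vert^2,$ $\sum_j \vert\beta_j\vert^2,$ $\sum_j \vert\gamma_j\vert^2,$ and $\sum_j \vert\delta_j\vert^2,$ which sum to 1 by the completeness relation for Kraus matrices. The choice of channel and the helper name pauli_coefficients are ours.

```python
import numpy as np

# Pauli matrices.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coefficients(M):
    """Coefficients (alpha, beta, gamma, delta) with M = alpha*I + beta*X + gamma*Y + delta*Z."""
    return np.array([np.trace(P @ M) / 2 for P in (I2, X, Y, Z)])

# Kraus matrices of the amplitude damping channel with strength p (a non-unitary error).
p = 0.2
kraus = [np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex),
         np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)]

# Completeness relation for Kraus matrices: sum_j A_j^dagger A_j = identity.
assert np.allclose(sum(A.conj().T @ A for A in kraus), I2)

# Expand each Kraus matrix in the Pauli basis and sum |coefficient|^2 over the Kraus index j.
coefficients = np.array([pauli_coefficients(A) for A in kraus])   # shape (number of Kraus, 4)
probs = (abs(coefficients) ** 2).sum(axis=0)

labels = ["no error", "X error", "X and Z errors (Y)", "Z error"]
for label, q in zip(labels, probs):
    print(f"{label}: {q:.4f}")
print("total:", probs.sum())  # 1.0, by the completeness relation
```

After the substitution $Y_k = i X_k Z_k,$ the weight on $\sigma_y$ is the probability of the syndrome reporting both an $X_k$ error and a $Z_k$ error.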
Generalization
The discretization of errors generalizes to other quantum error-correcting codes, including ones that can detect and correct errors on multiple qubits. In such cases, errors on multiple qubits can be expressed as linear combinations of tensor products of Pauli matrices, and, correspondingly, different syndromes specify Pauli corrections that may need to be performed on multiple qubits rather than just one.
Again, by measuring the syndrome, errors are effectively projected or collapsed onto a discrete set of possibilities represented by tensor products of Pauli matrices, and by correcting for those Pauli errors, we can recover the original encoded state. Meanwhile, whatever randomness is generated in the process is moved into the syndrome qubits, which are discarded or reset, thereby removing it from the system that stores the encoding.