Threshold theorem
The final topic of discussion for the lesson is a very important theorem known as the threshold theorem. Here is a somewhat informal statement of this theorem.
Theorem (The threshold theorem). A quantum circuit having size $N$ can be implemented with high accuracy by a noisy quantum circuit, provided that the probability of error at each location in the noisy circuit is below a fixed, nonzero threshold value $p_{\mathrm{th}}$. The size of the noisy circuit scales as $O(N \log^c N)$ for a positive constant $c$.
In simple terms, it says that if we have any quantum circuit having $N$ gates, where $N$ can be as large as we like, then it's possible to implement that circuit with high accuracy using a noisy quantum circuit, provided that the level of noise is below a certain threshold value that is independent of $N$. Moreover, it isn't too expensive to do this, in the sense that the size of the noisy circuit required is on the order of $N$ times some constant power of the logarithm of $N$.
To state the theorem more formally requires being specific about the noise model, which will not be done in this lesson. It can, for instance, be proved for the independent stochastic noise model that was mentioned earlier, where errors occur independently at each possible location in the circuit with some probability strictly smaller than the threshold value, but it can also be proved for more general noise models where there can be correlations among errors.
This is a theoretical result, and the most typical way it is proved doesn't necessarily translate to a practical approach, but it does nevertheless have great practical importance. In particular, it establishes that there is no fundamental barrier to performing quantum computations using noisy components; as long as the error rate for these components is below the threshold value, they can be used to build reliable quantum circuits of arbitrary size. An alternative way to state its importance is to observe that, if the theorem wasn't true, it would be hard to imagine large-scale quantum computing ever becoming a reality.
There are many technical details involved in formal proofs of (formal statements of) this theorem, and those details will not be communicated here — but the essential ideas can nevertheless be explained at an intuitive level. To make this explanation as simple as possible, let's imagine that we use the 7-qubit Steane code for error correction. This would be an impractical choice for an actual physical implementation — as would be reflected by a minuscule threshold value — but it works well to convey the main ideas. This explanation will also be rather cavalier about the noise model, with the assumption being that an error strikes each location in a fault-tolerant implementation independently with probability $p$.
Now, if the probability $p$ is larger than the reciprocal of the size of the circuit we aim to implement, chances are very good that an error will strike somewhere. So, we can attempt to run a fault-tolerant implementation of this circuit, following the prescription outlined in the lesson. We may then ask ourselves the question suggested earlier: Is this making things better or worse?
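To see concretely why an unprotected large circuit is doomed, we can compute the probability that no error strikes anywhere, under the simplified independent-error model assumed here. The numbers below are purely illustrative.

```python
# Probability that a circuit with n_locations runs with no error at all,
# assuming (as in the text's simplified model) that each location fails
# independently with probability p. The specific values are hypothetical.
def success_probability(p, n_locations):
    """Probability that none of the n_locations suffers an error."""
    return (1 - p) ** n_locations

# Once p exceeds the reciprocal of the circuit size, failure dominates:
print(success_probability(0.001, 1_000))      # roughly e^(-1), about 0.368
print(success_probability(0.001, 1_000_000))  # effectively zero
```

This is just the statement that once $p$ exceeds the reciprocal of the circuit size, the expected number of errors exceeds one, so an error almost certainly strikes somewhere.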
If the probability of an error at each location is too large, then our efforts will not help and may even make things worse, just like the 9-qubit Shor code doesn't help if the error probability is above 3.23% or so. In particular, the fault-tolerant implementation is considerably larger than our original circuit, so there are a lot more locations where errors could strike.
However, if $p$ is small enough, then we will succeed in reducing the error probability for the logical computation we're performing. (In a formal proof, we would need to be very careful at this point: errors in the logical computation will not necessarily be accurately described by the original noise model. This, in fact, motivates less forgiving noise models where errors might not be independent — but we will sweep this detail under the rug for the sake of this explanation.)
In greater detail, in order for a logical error to occur in the original circuit, at least two errors must fall into the same code block in the fault-tolerant implementation, given that the Steane code can correct any single error in a code block. Keeping in mind that there are many different ways to have two or more errors in the same code block, it is possible to argue that the probability of a logical error at each location in the original circuit is at most $Cp^2$, for some fixed positive real number $C$ that depends on the code and the gadgets we use, but critically not on the size of the original circuit. If $p$ is smaller than $1/C$, which is the number we can take as our threshold value, this translates to a reduction in error.
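The break-even condition $Cp^2 < p$, equivalent to $p < 1/C$, can be checked numerically. The value of $C$ below is hypothetical; in reality it depends on the code and gadgets.

```python
# Sketch of the one-level bound from the text: fault tolerance maps a
# physical error rate p to a logical error rate of at most C * p**2.
# C = 100 is an assumed, illustrative constant; the threshold is then 1/C = 1%.
C = 100.0

def logical_error_rate(p, C):
    """Upper bound on the per-location logical error rate after one level."""
    return C * p * p

print(logical_error_rate(0.002, C))  # 0.0004 < 0.002: below threshold, encoding helps
print(logical_error_rate(0.02, C))   # 0.04 > 0.02: above threshold, encoding hurts
```

The quadratic form of the bound is the key: below the threshold, squaring $p$ wins against the constant factor $C$, and the protected circuit is strictly more reliable than the bare one.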
However, this new error rate might still be too high to allow the entire circuit to work correctly. A natural thing to do at this point is to choose a better code and better gadgets to drive the error rate down to a point where the implementation is likely to work. Theoretically speaking, a simple way to argue that this is possible is to concatenate. That is to say, we can think of the fault-tolerant implementation of the original circuit as if it were any other quantum circuit, and then implement this new circuit fault-tolerantly, using the same scheme. We can then do this again and again, as many times as we need to reduce the error rate to a level that allows the original computation to work.
To get a rough idea for how the error rate decreases through this method, let's consider how it works for a few iterations. Note that a rigorous analysis would need to account for various technical details we're omitting here.
We start with the error probability $p$ for locations in the original circuit. Presuming that $p < 1/C$, the logical error rate can be bounded by $Cp^2$ after the first iteration. By treating the fault-tolerant implementation as any other circuit, and implementing it fault-tolerantly, we obtain a bound on the logical error rate of $C(Cp^2)^2 = C^3 p^4$.
Another iteration reduces the error bound further, to $C(C^3 p^4)^2 = C^7 p^8$.
Continuing in this manner for a total of $k$ iterations leads to a logical error rate (for the original circuit) bounded by
$$
C^{2^k - 1} p^{2^k} = \frac{(Cp)^{2^k}}{C},
$$
which is doubly exponential in $k$.
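The closed form for the iterated bound can be verified numerically by repeatedly applying the map $p \mapsto Cp^2$. The values of $C$ and $p$ below are hypothetical, chosen only so that $Cp < 1$.

```python
import math

# Check that iterating the logical-error bound p -> C*p**2 a total of k
# times gives (C*p)**(2**k) / C. C and p are illustrative values.
C, p = 100.0, 0.002   # C*p = 0.2 < 1, so each level suppresses errors

def iterate(p, C, k):
    """Apply the one-level bound p -> C*p**2 through k levels of concatenation."""
    for _ in range(k):
        p = C * p * p
    return p

for k in range(5):
    closed_form = (C * p) ** (2 ** k) / C
    assert math.isclose(iterate(p, C, k), closed_form)
    print(k, iterate(p, C, k))
```

Because the exponent on $Cp$ doubles with each level, the error bound collapses doubly exponentially once $Cp < 1$.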
This means that we don't need too many iterations to make the error rate extremely small. Meanwhile, the circuits are growing in size with each level of concatenation, but the size only increases singly exponentially in the number of levels $k$. This is because, with each level of fault tolerance, the size grows by at most a factor determined by the maximum size of the gadgets being used. When it is all put together, and an appropriate choice for the number of levels of concatenation is made, we obtain the threshold theorem.
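The trade-off can be sketched numerically: pick the smallest number of levels $k$ whose error bound meets a target, then account for the singly exponential growth in circuit size. All constants below ($C$, $p$, the gadget blow-up factor $g$, the circuit size $N$) are hypothetical.

```python
# Sketch of the size/error trade-off behind the theorem, with assumed
# constants: C and p as before, g the maximum gadget blow-up per level.
C, p, g = 100.0, 0.002, 50   # assumed values; C*p = 0.2 < 1
N = 10**9                    # size of the ideal circuit (hypothetical)
target = 0.01 / N            # per-location budget for ~1% total failure

# Smallest k with (C*p)**(2**k)/C below the per-location target:
k = 0
while (C * p) ** (2 ** k) / C > target:
    k += 1
print(k)          # only a handful of levels is needed
print(N * g**k)   # noisy-circuit size: N times the overhead factor g**k
```

Since the error bound falls doubly exponentially in $k$, the required $k$ grows only like $\log\log$ of the circuit size, so the overhead factor $g^k$ is polylogarithmic in $N$ — which is the $O(N \log^c N)$ scaling in the theorem.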
So, what is this threshold value in reality? The answer depends on the code and the gadgets used. For the Steane code together with magic state distillation, it is minuscule and probably unlikely to be achievable in practice. But, using surface codes and state-of-the-art gadgets, the threshold has been estimated to be on the order of 0.1% to 1%.
As new codes and methods are discovered, it is reasonable to expect the threshold value to increase, while simultaneously the level of noise in actual physical components will decrease. Reaching the point at which large-scale quantum computations can be implemented fault-tolerantly will not be easy, and will not happen overnight. But this theorem, together with advances in quantum codes and quantum hardware, provides us with optimism as we continue to push forward to reach the ultimate goal of building a large-scale, fault-tolerant quantum computer.