Shor's algorithm
Now we'll turn our attention to the integer factorization problem, and see how it can be solved efficiently on a quantum computer using phase estimation. The algorithm we'll obtain is Shor's algorithm for integer factorization. Shor didn't describe his algorithm specifically in terms of phase estimation, but it is a natural and intuitive way to explain how it works.
We'll begin by discussing an intermediate problem known as the order-finding problem and see how phase estimation provides a solution to this problem. We'll then see how an efficient solution to the order-finding problem gives us an efficient solution to the integer factorization problem. (When a solution to one problem provides a solution to another problem like this, we say that the second problem reduces to the first — so in this case we're reducing integer factorization to order finding.) This second part of Shor's algorithm doesn't make use of quantum computing at all; it's completely classical. Quantum computing is only needed to solve order finding.
The order-finding problem
Some basic number theory
To explain the order-finding problem and how it can be solved using phase estimation, it will be helpful to begin with a couple of basic number theory concepts, and to introduce some handy notation along the way.
To begin, for any given positive integer $N$, define the set $\mathbb{Z}_N$ like this:

$$\mathbb{Z}_N = \{0, 1, \ldots, N-1\}.$$
For instance, $\mathbb{Z}_1 = \{0\}$, $\mathbb{Z}_2 = \{0, 1\}$, $\mathbb{Z}_3 = \{0, 1, 2\}$, and so on.
These are sets of numbers, but we can think of them as more than sets. In particular, we can think about arithmetic operations on $\mathbb{Z}_N$, such as addition and multiplication — and if we agree to always take our answers modulo $N$ (i.e., divide by $N$ and take the remainder as the result), we'll always stay within this set when we perform these operations. The two specific operations of addition and multiplication, both taken modulo $N$, turn $\mathbb{Z}_N$ into a ring, which is a fundamentally important type of object in algebra.
For example, $3$ and $5$ are elements of $\mathbb{Z}_7$, and if we multiply them together we get $3 \cdot 5 = 15$, which leaves a remainder of $1$ when divided by $7$. Sometimes we express this as follows.

$$3 \cdot 5 \equiv 1 \;(\text{mod } 7)$$
But we can also simply write $3 \cdot 5 = 1$, provided that it's been made clear that we're working in $\mathbb{Z}_7$, just to keep our notation as simple as possible.
As an example, here are the addition and multiplication tables for $\mathbb{Z}_6$.

    +  | 0 1 2 3 4 5
    ---+------------
    0  | 0 1 2 3 4 5
    1  | 1 2 3 4 5 0
    2  | 2 3 4 5 0 1
    3  | 3 4 5 0 1 2
    4  | 4 5 0 1 2 3
    5  | 5 0 1 2 3 4

    ·  | 0 1 2 3 4 5
    ---+------------
    0  | 0 0 0 0 0 0
    1  | 0 1 2 3 4 5
    2  | 0 2 4 0 2 4
    3  | 0 3 0 3 0 3
    4  | 0 4 2 0 4 2
    5  | 0 5 4 3 2 1
Among the elements of $\mathbb{Z}_N$, the elements $a \in \mathbb{Z}_N$ that satisfy $\gcd(a, N) = 1$ are special. Frequently the set containing these elements is denoted with a star like so.

$$\mathbb{Z}_N^* = \{ a \in \mathbb{Z}_N : \gcd(a, N) = 1 \}$$
If we focus our attention on the operation of multiplication, the set $\mathbb{Z}_N^*$ forms a group — specifically an abelian group — which is another important type of object in algebra. It's a basic fact about these sets (and finite groups in general) that if we pick any element $a \in \mathbb{Z}_N^*$ and repeatedly multiply $a$ to itself, we'll always eventually get the number $1$.
For a first example, let's take $N = 6$ and $a = 5$. We have that $5 \in \mathbb{Z}_6^*$ because $\gcd(5, 6) = 1$, and if we multiply $5$ to itself we get $5 \cdot 5 = 1$, as the table above confirms.
As a second example, let's take $N = 21$. If we go through the numbers from $1$ to $20$, the ones having GCD equal to $1$ with $21$ are as follows.

$$\mathbb{Z}_{21}^* = \{1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20\}$$
For each of these elements, it is possible to raise that number to a positive integer power to get $1$. Here are the smallest powers for which this works:

$$1^1 = 1 \quad 2^6 = 1 \quad 4^3 = 1 \quad 5^6 = 1 \quad 8^2 = 1 \quad 10^6 = 1$$
$$11^6 = 1 \quad 13^2 = 1 \quad 16^3 = 1 \quad 17^6 = 1 \quad 19^6 = 1 \quad 20^2 = 1$$
Naturally we're working within $\mathbb{Z}_{21}$ for all of these equations, which we haven't bothered to write — we take it to be implicit to avoid cluttering things up. We'll continue to do that throughout the rest of the lesson.
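The orders listed above are easy to verify by brute force. Here's a short Python sketch (the function name `order` is our own choice for this illustration):

```python
from math import gcd

def order(a, N):
    """Return the smallest positive integer r with a^r = 1 in Z_N.
    Requires gcd(a, N) == 1, since otherwise no such r exists."""
    assert gcd(a, N) == 1
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

# The elements of Z*_21, and the smallest power taking each one to 1:
elements = [a for a in range(1, 21) if gcd(a, 21) == 1]
orders = {a: order(a, 21) for a in elements}
```

Running this reproduces the list above, e.g. the order of $2$ modulo $21$ is $6$ and the order of $8$ is $2$.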
Problem statement and connection to phase estimation
Now we can state the order-finding problem.
Order finding
Input: positive integers $N$ and $a$ satisfying $\gcd(N, a) = 1$
Output: the smallest positive integer $r$ such that $a^r \equiv 1 \;(\text{mod } N)$
Alternatively, in terms of the notation we just introduced above, we're given $a \in \mathbb{Z}_N^*$, and we're looking for the smallest positive integer $r$ such that $a^r = 1$. This number $r$ is called the order of $a$ modulo $N$.
To connect the order-finding problem to phase estimation, let's think about the operation $M_a$, defined on a system whose classical states correspond to $\mathbb{Z}_N$, where we multiply by a fixed element $a \in \mathbb{Z}_N^*$:

$$M_a |x\rangle = |ax\rangle \quad (\text{for each } x \in \mathbb{Z}_N).$$
To be clear, we're doing the multiplication in $\mathbb{Z}_N$, so it's implicit that we're taking the product $ax$ modulo $N$ inside of the ket on the right-hand side of the equation.
For example, if we take $N = 15$ and $a = 2$, then the action of $M_2$ on the standard basis is as follows.

$$M_2|0\rangle = |0\rangle \quad M_2|1\rangle = |2\rangle \quad M_2|2\rangle = |4\rangle \quad M_2|3\rangle = |6\rangle \quad M_2|4\rangle = |8\rangle$$
$$M_2|5\rangle = |10\rangle \quad M_2|6\rangle = |12\rangle \quad M_2|7\rangle = |14\rangle \quad M_2|8\rangle = |1\rangle \quad M_2|9\rangle = |3\rangle$$
$$M_2|10\rangle = |5\rangle \quad M_2|11\rangle = |7\rangle \quad M_2|12\rangle = |9\rangle \quad M_2|13\rangle = |11\rangle \quad M_2|14\rangle = |13\rangle$$
This is a unitary operation provided that $\gcd(a, N) = 1$: it shuffles the elements of the standard basis, so as a matrix it's a permutation matrix. It's evident from its definition that this operation is deterministic, and a simple way to see that it's invertible is to think about the order $r$ of $a$ modulo $N$ and to recognize that the inverse of $M_a$ is $M_{a^{r-1}}$, because $a^{r-1} \cdot a = a^r = 1$.
There's another way to think about the inverse that doesn't require any knowledge of $r$ (which, after all, is what we're trying to compute). For every element $a \in \mathbb{Z}_N^*$ there's always a unique element $b \in \mathbb{Z}_N^*$ that satisfies $ab = 1$. We denote this element $b$ by $a^{-1}$, and it can be computed efficiently; an extension of Euclid's GCD algorithm does it with cost quadratic in $\lg(N)$. And thus $M_a^{-1} = M_{a^{-1}}$.
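For concreteness, here's a sketch of how the extended Euclidean algorithm computes $a^{-1}$ in $\mathbb{Z}_N^*$; Python's built-in `pow(a, -1, N)` computes the same thing.

```python
def mod_inverse(a, N):
    """Compute a^{-1} in Z*_N using the extended Euclidean algorithm."""
    r0, r1 = N, a % N
    s0, s1 = 0, 1  # invariant: r0 = s0 * a and r1 = s1 * a (mod N)
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if r0 != 1:
        raise ValueError("a is not invertible modulo N")
    return s0 % N
```

For example, `mod_inverse(5, 21)` returns `17`, since $5 \cdot 17 = 85 = 1$ in $\mathbb{Z}_{21}$.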
So, the operation $M_a$ is both deterministic and invertible. That implies that it's described by a permutation matrix, and is therefore unitary.
Now let's think about the eigenvectors and eigenvalues of the operation $M_a$, assuming that $\gcd(a, N) = 1$. As was just argued, this assumption tells us that $M_a$ is unitary.
There are $N$ eigenvalues of $M_a$, possibly including the same eigenvalue repeated multiple times, and in general there's some freedom in selecting corresponding eigenvectors — but we won't need to worry about all of the possibilities. Let's start simply and identify just one eigenvector of $M_a$.

$$|\psi_0\rangle = \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} |a^j\rangle$$
The number $r$ is the order of $a$ modulo $N$, here and throughout the remainder of the lesson. The eigenvalue associated with this eigenvector is $1$, because it isn't changed when we multiply by $a$:

$$M_a |\psi_0\rangle = |\psi_0\rangle.$$
This happens because $a^r = 1$, so each standard basis state $|a^j\rangle$ gets shifted to $|a^{j+1}\rangle$ for $j = 0, \ldots, r-2$, and $|a^{r-1}\rangle$ gets shifted back to $|a^0\rangle = |1\rangle$. Informally speaking, it's like we're slowly stirring $|\psi_0\rangle$, but it's already completely stirred so nothing changes.
Here's another example of an eigenvector of $M_a$. This one happens to be more interesting in the context of order finding and phase estimation.

$$|\psi_1\rangle = \frac{1}{\sqrt{r}} \left( |a^0\rangle + \omega_r^{-1} |a^1\rangle + \cdots + \omega_r^{-(r-1)} |a^{r-1}\rangle \right)$$

Alternatively, we can write this vector using a summation as follows.

$$|\psi_1\rangle = \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-j} |a^j\rangle$$
Here we're seeing the complex number $\omega_r = e^{2\pi i/r}$ showing up naturally, due to the way that multiplication by $a$ works modulo $N$. This time the corresponding eigenvalue is $\omega_r$. To see this, we can first compute $M_a |\psi_1\rangle$ as follows.

$$M_a |\psi_1\rangle = \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-j} |a^{j+1}\rangle$$
Then, because $\omega_r^{-r} = 1$ and $a^r = 1$, we see that

$$\frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-j} |a^{j+1}\rangle
= \omega_r \cdot \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-(j+1)} |a^{j+1}\rangle
= \omega_r \cdot \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-j} |a^{j}\rangle,$$

so

$$M_a |\psi_1\rangle = \omega_r |\psi_1\rangle.$$
Using the same reasoning, we can identify additional eigenvector/eigenvalue pairs for $M_a$. For any choice of $k \in \{0, \ldots, r-1\}$, we have that

$$|\psi_k\rangle = \frac{1}{\sqrt{r}} \sum_{j=0}^{r-1} \omega_r^{-jk} |a^j\rangle$$

is an eigenvector of $M_a$ whose corresponding eigenvalue is $\omega_r^k = e^{2\pi i k/r}$.
There are other eigenvectors of $M_a$, but we don't need to concern ourselves with them — we'll focus solely on the eigenvectors $|\psi_0\rangle, \ldots, |\psi_{r-1}\rangle$ that we've just identified.
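These eigenvector/eigenvalue pairs are easy to verify numerically. The following NumPy sketch builds the permutation matrix for $M_2$ acting on $\mathbb{Z}_{15}$ (where the order of $2$ is $r = 4$, since $2^4 = 16 = 1$) and checks that each $|\psi_k\rangle$ is an eigenvector with eigenvalue $\omega_r^k$:

```python
import numpy as np

N, a, r = 15, 2, 4  # the order of 2 modulo 15 is 4

# Permutation matrix for M_a |x> = |ax mod N>, acting on C^N
M = np.zeros((N, N))
for x in range(N):
    M[(a * x) % N, x] = 1

omega = np.exp(2j * np.pi / r)
checks = []
for k in range(r):
    # |psi_k> = (1/sqrt(r)) sum_j omega^(-jk) |a^j mod N>
    psi = np.zeros(N, dtype=complex)
    for j in range(r):
        psi[pow(a, j, N)] += omega ** (-j * k)
    psi /= np.sqrt(r)
    checks.append(np.allclose(M @ psi, omega ** k * psi))
```

Each entry of `checks` comes out `True`, confirming $M_a |\psi_k\rangle = \omega_r^k |\psi_k\rangle$ for this example.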
Order finding through phase estimation
To solve the order-finding problem for a given choice of $a \in \mathbb{Z}_N^*$, we can apply the phase-estimation procedure to the operation $M_a$.
To do this, we need to implement not only $M_a$ efficiently with a quantum circuit, but also $M_a^2$, $M_a^4$, $M_a^8$, and so on, going as far as needed to obtain a precise enough estimate from the phase estimation procedure. Here we'll explain how this can be done, and we'll figure out exactly how much precision is needed later.
Let's start with the operation $M_a$ by itself. Naturally, because we're working with the quantum circuit model, we'll use binary notation to encode the numbers between $0$ and $N-1$. The largest number we need to encode is $N-1$, so the number of bits we need is $n = \lfloor \lg(N-1) \rfloor + 1$.
For example, if $N = 15$ we have $n = 4$. Here's what the encoding of elements of $\mathbb{Z}_{15}$ as binary strings of length $4$ looks like.

$$0 \mapsto 0000 \quad 1 \mapsto 0001 \quad 2 \mapsto 0010 \quad \cdots \quad 13 \mapsto 1101 \quad 14 \mapsto 1110$$
And now, here's a precise definition of how $M_a$ is defined as an $n$-qubit operation.

$$M_a |x\rangle = \begin{cases} |ax \;(\text{mod } N)\rangle & 0 \le x < N \\ |x\rangle & N \le x < 2^n \end{cases}$$
The point is that although we only care about how $M_a$ works for $x \in \mathbb{Z}_N$, we do have to specify how it works for the remaining standard basis states — and we need to do this in a way that still gives us a unitary operation. Defining $M_a$ so that it does nothing to the remaining standard basis states accomplishes this.
Using the algorithms for integer multiplication and division discussed in the previous lesson, together with the methodology for reversible, garbage-free implementations of them, we can build a quantum circuit that performs $M_a$, for any choice of $a \in \mathbb{Z}_N^*$, at cost $O(n^2)$. Here is one way this can be done.
- Build a circuit for performing the operation

  $$|x\rangle |y\rangle \mapsto |x\rangle |y \oplus f_a(x)\rangle,$$

  where

  $$f_a(x) = ax \;(\text{mod } N),$$

  using the method described in the previous lesson. This gives us a circuit of size $O(n^2)$.
- Swap the two $n$-qubit systems using $n$ swap gates to swap the qubits individually.
- Along similar lines to the first step, build a circuit for the operation

  $$|x\rangle |y\rangle \mapsto |x\rangle |y \oplus f_{a^{-1}}(x)\rangle,$$

  where $a^{-1}$ is the inverse of $a$ in $\mathbb{Z}_N^*$.
By initializing the bottom $n$ qubits to the state $|0^n\rangle$ and composing the three steps, we obtain this transformation for every $x \in \mathbb{Z}_N$:

$$|x\rangle |0^n\rangle \mapsto |x\rangle |ax\rangle \mapsto |ax\rangle |x\rangle \mapsto |ax\rangle |0^n\rangle.$$
The method requires $n$ workspace qubits, but they're returned to their initialized state $|0^n\rangle$ at the end, which allows us to use these circuits for phase estimation. The total cost of the circuit we obtain is $O(n^2)$.
To perform $M_a^2$, $M_a^4$, and so on, we can use exactly the same method, except that we replace $a$ with $a^2$, $a^4$, and so on, as elements of $\mathbb{Z}_N$. That is, for any power $k$ we choose, we can create a circuit for $M_a^k = M_{a^k}$ not by iterating $k$ times the circuit for $M_a$, but instead by computing $a^k \in \mathbb{Z}_N$ and then using the circuit for $M_{a^k}$.
The computation of the powers $a^k \in \mathbb{Z}_N$ is the modular exponentiation problem mentioned in the previous lesson. This computation can be done classically, using the algorithm for modular exponentiation mentioned in the previous lesson (often called the power algorithm in computational number theory). In fact, we only require power-of-2 powers of $a$, in particular $a, a^2, a^4, \ldots, a^{2^{m-1}}$ for $m$ control qubits, and we can obtain these powers by iteratively squaring $m - 1$ times. Each squaring can be performed by a Boolean circuit of size $O(n^2)$.
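Iterated squaring is simple to express in code. This sketch (the function name is ours) produces exactly the powers $a, a^2, a^4, \ldots, a^{2^{m-1}}$ needed for the $m$ controlled operations:

```python
def power_of_two_powers(a, N, m):
    """Return [a, a^2, a^4, ..., a^(2^(m-1))], all modulo N,
    obtained by squaring m - 1 times."""
    powers = [a % N]
    for _ in range(m - 1):
        powers.append(powers[-1] ** 2 % N)
    return powers
```

For example, `power_of_two_powers(2, 21, 5)` returns `[2, 4, 16, 4, 16]`, matching `pow(2, 2**j, 21)` for $j = 0, \ldots, 4$.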
In essence, what we're effectively doing here is offloading the problem of iterating $M_a$ as many as $2^{m-1}$ times to an efficient classical computation. And it's good fortune that this is possible! For an arbitrary choice of a quantum circuit in the phase estimation problem, this is not likely to be possible — and in that case the resulting cost for phase estimation grows exponentially in the number of control qubits $m$.
Solution given a convenient eigenvector
To understand how we can solve the order-finding problem using phase estimation, let's start by supposing that we run the phase estimation procedure on the operation $M_a$ using the eigenvector $|\psi_1\rangle$. Getting our hands on this eigenvector isn't easy, as it turns out, so this won't be the end of the story — but it's helpful to start here.
The eigenvalue of $M_a$ corresponding to the eigenvector $|\psi_1\rangle$ is

$$\omega_r = e^{2\pi i \frac{1}{r}}.$$

That is, the eigenvalue is $e^{2\pi i \theta}$ for $\theta = 1/r$. So, if we run the phase estimation procedure on $M_a$ using the eigenvector $|\psi_1\rangle$, we'll get an approximation to $1/r$. By computing the reciprocal we'll be able to learn $r$ — provided that our approximation is good enough.
In more detail, when we run the phase-estimation procedure using $m$ control qubits, what we obtain is a number $y \in \{0, \ldots, 2^m - 1\}$. We then take $y/2^m$ as a guess for $\theta$, which is $1/r$ in the case at hand. To figure out what $r$ is from this approximation, the natural thing to do is to compute the reciprocal $2^m/y$ of our approximation and round to the nearest integer.
For example, let's suppose $r = 6$ and we perform phase estimation on $M_a$ with the eigenvector $|\psi_1\rangle$ using $m = 5$ control bits. The best $5$-bit approximation to $1/6$ is $5/32$, and we have a pretty good chance (about $68\%$ in this case) to obtain the outcome $y = 5$ from phase estimation. We have

$$\frac{32}{5} = 6.4,$$

and rounding to the nearest integer gives $6$, which is the correct answer.
On the other hand, if we don't use enough precision, we might not get the right answer. For instance, if we take $m = 4$ control qubits in phase estimation, we might obtain the best $4$-bit approximation to $1/6$, which is $3/16$. Taking the reciprocal yields

$$\frac{16}{3} = 5.33\ldots,$$

and rounding to the nearest integer gives an incorrect answer of $5$.
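Both of these examples can be reproduced in a couple of lines of Python, using an idealized outcome $y$ equal to the best $m$-bit approximation of $\theta = 1/r$ (the helper name is ours):

```python
def estimate_order(theta, m):
    """Round theta to the nearest y/2^m (the idealized phase-estimation
    outcome), then take the reciprocal and round to the nearest integer."""
    y = round(theta * 2**m)
    return round(2**m / y)  # assumes y != 0

# With r = 6: five control qubits succeed, four do not.
good = estimate_order(1 / 6, 5)  # y = 5, 32/5 = 6.4     -> 6 (correct)
bad = estimate_order(1 / 6, 4)   # y = 3, 16/3 = 5.33... -> 5 (incorrect)
```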
So how much precision do we need to get the right answer? We know that the order $r$ is an integer, and intuitively speaking what we need is enough precision to distinguish $1/r$ from nearby possibilities, including $1/(r+1)$ and $1/(r-1)$. The closest number to $1/r$ that we need to be concerned with is $1/(r+1)$, and the distance between these two numbers is

$$\frac{1}{r} - \frac{1}{r+1} = \frac{1}{r(r+1)}.$$
So, if we want to make sure that we don't mistake $1/r$ for $1/(r+1)$, it suffices to use enough precision to guarantee that a best approximation $y/2^m$ to $1/r$ is closer to $1/r$ than it is to $1/(r+1)$. If we use enough precision so that

$$\left| \frac{y}{2^m} - \frac{1}{r} \right| < \frac{1}{2r(r+1)},$$

so that the error is less than half of the distance between $1/r$ and $1/(r+1)$, then $y/2^m$ will be closer to $1/r$ than to any other possibility, including $1/(r+1)$ and $1/(r-1)$.
We can double-check this as follows. Suppose that

$$\frac{y}{2^m} = \frac{1}{r} + \varepsilon$$

for $\varepsilon$ satisfying

$$|\varepsilon| < \frac{1}{2r(r+1)}.$$

When we take the reciprocal we obtain

$$\frac{2^m}{y} = \frac{1}{\frac{1}{r} + \varepsilon} = \frac{r}{1 + \varepsilon r}.$$

By maximizing in the numerator and minimizing in the denominator, we can bound how far away we are from $r$ as follows.

$$\left| \frac{r}{1 + \varepsilon r} - r \right| = \frac{|\varepsilon| r^2}{|1 + \varepsilon r|} \le \frac{\frac{r^2}{2r(r+1)}}{1 - \frac{r}{2r(r+1)}} = \frac{r}{2r+1} < \frac{1}{2}$$

We're less than $1/2$ away from $r$, so as expected we'll get $r$ when we round.
Unfortunately, because we don't yet know what $r$ is, we can't use it to tell us how much accuracy we need. What we can do instead is to use the fact that $r$ must be smaller than $N$ to ensure that we use enough precision. In particular, if we use enough accuracy to guarantee that the best approximation $y/2^m$ to $1/r$ satisfies

$$\left| \frac{y}{2^m} - \frac{1}{r} \right| < \frac{1}{2N^2},$$

then we'll have enough precision to correctly determine $r$ when we take the reciprocal. Taking $m$ slightly larger than $2\lceil \lg N \rceil + 1$, in accordance with the analysis of phase estimation from the previous lesson, ensures that we have a high chance to obtain an estimation with this precision using the method described previously. (Taking $m = 2\lceil \lg N \rceil + 1$ is good enough if we're comfortable with a lower bound of 40% on the probability of success.)
General solution
As we just saw, if we have the eigenvector $|\psi_1\rangle$ of $M_a$, we can learn $r$ through phase estimation, so long as we use enough control qubits to do this with sufficient precision. Unfortunately, it's not easy to get our hands on the eigenvector $|\psi_1\rangle$, so we need to figure out how to proceed.
Let's suppose momentarily that we proceed just like above, except with the eigenvector $|\psi_k\rangle$ in place of $|\psi_1\rangle$, for any choice of $k \in \{0, \ldots, r-1\}$ that we choose to think about. The result we get from the phase estimation procedure will be an approximation

$$\frac{y}{2^m} \approx \frac{k}{r}.$$

Working under the assumption that we don't know either $k$ or $r$, this might or might not allow us to identify $r$. For example, if $k = 0$, we'll get an approximation to $0$, which unfortunately tells us nothing. This, however, is an unusual case; for other values of $k$, we'll at least be able to learn something about $r$.
We can use an algorithm known as the continued fraction algorithm to turn our approximation $y/2^m$ into nearby fractions — including $k/r$, if the approximation is good enough. We won't explain the continued fraction algorithm here. Instead, here's a statement of a known fact about this algorithm.
Fact. Given an integer $N \ge 2$ and a real number $\alpha \in [0, 1)$, there is at most one choice of integers $u, v \in \{0, \ldots, N-1\}$ with $v \ne 0$ and $\gcd(u, v) = 1$ satisfying $|\alpha - u/v| < \frac{1}{2N^2}$. Given $N$ and $\alpha$, the continued fraction algorithm finds $u$ and $v$, or reports that they don't exist. This algorithm can be implemented as a Boolean circuit having size $O(n^3)$, for $n = \lfloor \lg(N-1) \rfloor + 1$.
If we have a very close approximation $y/2^m$ to $k/r$, and we run the continued fraction algorithm for $N$ and $\alpha = y/2^m$, we'll get $u$ and $v$, as they're described in the fact. An analysis based on the fact allows us to conclude that

$$\frac{u}{v} = \frac{k}{r}.$$

Notice in particular that we don't necessarily learn $k$ and $r$; we only learn $k/r$ in lowest terms.
For example, if $k = 0$ then $k/r$ in lowest terms is $0/1$, and as we've already noticed, we're not going to learn anything from it. But that's the only value of $k$ where that happens. When $k$ is nonzero, it might have common factors with $r$, but the denominator $v$ we obtain from the continued fraction algorithm must at least divide $r$.
It's far from obvious, but it is true that if we have the ability to learn $u$ and $v$ for $u/v = k/r$, for $k \in \{0, \ldots, r-1\}$ chosen uniformly at random, then we're very likely to be able to recover $r$ after just a few samples. In particular, if our guess for $r$ is the least common multiple of all the values for the denominator $v$ that we observe, we'll be right with high probability. Intuitively speaking, some values of $k$ aren't good because they share common factors with $r$, and those common factors are hidden from us when we learn $u$ and $v$. But random choices of $k$ aren't likely to hide factors of $r$ for long, and the probability that we don't guess $r$ correctly by taking the least common multiple of the denominators we observe drops exponentially in the number of samples.
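Python's `fractions` module can stand in for the continued fraction algorithm here, since `Fraction.limit_denominator` finds the closest fraction with a bounded denominator. The sketch below simulates idealized phase-estimation outcomes for chosen values of $k$ and guesses $r$ as the least common multiple of the recovered denominators; all names are ours:

```python
from fractions import Fraction
from math import lcm

def denominator_from_estimate(y, m, N):
    """Continued-fraction step: the fraction u/v with v < N closest to y/2^m."""
    return Fraction(y, 2**m).limit_denominator(N - 1).denominator

def order_from_samples(ks, r, N, m):
    """Simulate one idealized phase-estimation run per k in ks, and guess r
    as the lcm of the recovered denominators."""
    guess = 1
    for k in ks:
        y = round(k / r * 2**m)  # best m-bit approximation to k/r
        guess = lcm(guess, denominator_from_estimate(y, m, N))
    return guess
```

For $N = 21$ and $a = 2$ (so $r = 6$), with $m = 2\lceil \lg 21 \rceil + 1 = 11$, the samples $k = 1, 4$ already recover $6$, while $k = 2$ alone only reveals the divisor $3$ of $r$.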
It remains to address the issue of how we get our hands on an eigenvector $|\psi_k\rangle$ of $M_a$ on which to run the phase estimation procedure. As it turns out, we don't actually need to create them!
What we will do instead is to run the phase estimation procedure on the state $|1\rangle$, by which we mean the $n$-bit binary encoding of the number $1$, in place of an eigenvector of $M_a$. So far, we've only talked about running the phase estimation procedure on a particular eigenvector, but nothing prevents us from running the procedure on an input state that isn't an eigenvector of $M_a$, and that's what we're doing here with the state $|1\rangle$. (This isn't an eigenvector of $M_a$ unless $a = 1$, which isn't a choice we'll be interested in.)
The rationale for choosing the state $|1\rangle$ in place of an eigenvector of $M_a$ is that the following equation is true.

$$|1\rangle = \frac{1}{\sqrt{r}} \sum_{k=0}^{r-1} |\psi_k\rangle$$
One way to verify this equation is to compare the inner products of the two sides with each standard basis state, using formulas mentioned previously in the lesson to help to evaluate the results for the right-hand side. As a consequence, we will obtain precisely the same measurement results as if we had chosen $k \in \{0, \ldots, r-1\}$ uniformly at random and used $|\psi_k\rangle$ as an eigenvector.
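The equation can also be checked numerically, using $N = 15$, $a = 2$ (order $r = 4$) as a concrete example of our own choosing:

```python
import numpy as np

N, a, r = 15, 2, 4  # the order of 2 modulo 15 is 4
omega = np.exp(2j * np.pi / r)

# Build the eigenvectors |psi_k> as vectors in C^N
psis = []
for k in range(r):
    psi = np.zeros(N, dtype=complex)
    for j in range(r):
        psi[pow(a, j, N)] += omega ** (-j * k)
    psis.append(psi / np.sqrt(r))

one = np.zeros(N, dtype=complex)
one[1] = 1  # the standard basis state |1>

# |1> equals the uniform superposition (1/sqrt(r)) * sum_k |psi_k>
check = np.allclose(sum(psis) / np.sqrt(r), one)
```

The check succeeds because the phases $\omega_r^{-jk}$ cancel for every $j \ne 0$ when summed over $k$, leaving only the $|a^0\rangle = |1\rangle$ term.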
In greater detail, let's imagine that we run the phase estimation procedure with the state $|1\rangle$ in place of one of the eigenvectors $|\psi_k\rangle$. After the inverse quantum Fourier transform is performed, this leaves us with the state

$$\frac{1}{\sqrt{r}} \sum_{k=0}^{r-1} |\gamma_k\rangle |\psi_k\rangle,$$

where

$$|\gamma_k\rangle = \mathrm{QFT}_{2^m}^{\dagger} \, \frac{1}{\sqrt{2^m}} \sum_{x=0}^{2^m - 1} e^{2\pi i x k/r} |x\rangle.$$

The vector $|\gamma_k\rangle$ represents the state of the top $m$ qubits after the inverse of the quantum Fourier transform has been performed on them.
So, by virtue of the fact that $\{|\psi_0\rangle, \ldots, |\psi_{r-1}\rangle\}$ is an orthonormal set, we find that a measurement of the top $m$ qubits yields an approximation $y/2^m$ to the value $k/r$, where $k \in \{0, \ldots, r-1\}$ is chosen uniformly at random. As we've already discussed, this allows us to learn $r$ with a high degree of confidence after several independent runs, which was our goal.
Total cost
The cost to implement each controlled-unitary operation $M_{a^{2^j}}$ is $O(n^2)$. There are $m$ controlled-unitary operations, and we have $m = O(n)$, so the total cost for the controlled-unitary operations is $O(n^3)$. In addition, we have $m$ Hadamard gates (which contribute $O(n)$ to the cost), and the inverse quantum Fourier transform contributes $O(n^2)$ to the cost. Thus, the cost of the controlled-unitary operations dominates the cost of the entire procedure — which is therefore $O(n^3)$.
In addition to the quantum circuit itself, there are a few classical computations that need to be performed along the way. This includes computing the powers $a, a^2, a^4, \ldots, a^{2^{m-1}}$ in $\mathbb{Z}_N$, which are needed to create the controlled-unitary gates, as well as the continued fraction algorithm that converts approximations of $k/r$ into fractions. These computations can be performed by Boolean circuits with a total cost of $O(n^3)$.
As is typical, all of these bounds can be improved using asymptotically fast algorithms; these bounds assume we're using standard algorithms for basic arithmetic operations.
Factoring by order finding
The very last thing we need to discuss is how solving the order-finding problem helps us to factor. This part is completely classical — it has nothing specifically to do with quantum computing.
Here's the basic idea. We want to factorize the number $N$, and we can do this recursively. Specifically, we can focus on the task of splitting $N$, which means finding any two integers $b, c \ge 2$ for which $N = bc$. This isn't possible if $N$ is a prime number, but we can efficiently test to see if $N$ is prime using a primality testing algorithm first, and if $N$ isn't prime we'll try to split it. Once we split $N$, we can simply recurse on $b$ and $c$ until all of our factors are prime and we obtain the prime factorization of $N$.
Splitting even integers is easy: we just output $2$ and $N/2$.
It's also easy to split perfect powers, meaning numbers of the form $N = s^j$ for integers $s, j \ge 2$, just by approximating the roots $N^{1/2}, N^{1/3}, N^{1/4}, \ldots$ and checking nearby integers as suspects for $s$. We don't need to go further than $\lg(N)$ steps into this sequence, because at that point the root drops below $2$ and won't reveal additional candidates.
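Here's a sketch of this perfect-power test, using floating-point roots plus a check of nearby integers (fine for moderate $N$; very large $N$ would call for an integer root-finding routine instead):

```python
def split_perfect_power(N):
    """Return (s, j) with s**j == N and s, j >= 2 if N is a perfect power,
    and None otherwise. Only exponents up to lg(N) need to be checked."""
    for j in range(2, N.bit_length() + 1):
        s = round(N ** (1 / j))  # approximate the j-th root of N
        for candidate in (s - 1, s, s + 1):  # check nearby integers
            if candidate >= 2 and candidate ** j == N:
                return candidate, j
    return None
```

For example, `split_perfect_power(343)` returns `(7, 3)`, while `split_perfect_power(21)` returns `None`.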
It's good that we can do both of these things, because order finding won't help us to factor even numbers or prime powers $N = p^j$, where the number $p$ happens to be prime. If $N$ is odd and not a prime power, however, order finding allows us to split $N$.
Probabilistic algorithm to split an odd, composite integer N that is not a prime power
1. Randomly choose $a \in \{2, \ldots, N-1\}$.
2. Compute $d = \gcd(a, N)$.
3. If $d > 1$ then output $d$ and $N/d$ and stop. Otherwise continue to the next step knowing that $a \in \mathbb{Z}_N^*$.
4. Let $r$ be the order of $a$ modulo $N$. (Here's where we need order finding.)
5. If $r$ is even:
   5.1 Compute $x = a^{r/2} - 1$ modulo $N$.
   5.2 Compute $d = \gcd(x, N)$.
   5.3 If $d > 1$ then output $d$ and $N/d$ and stop.
6. If this point is reached, the algorithm has failed to find a factor of $N$.
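The steps above can be written out as classical Python, with brute-force order finding standing in for the quantum step (so this is only feasible for small $N$); the function names are ours:

```python
from math import gcd
from random import randrange

def order(a, N):
    """Brute-force stand-in for quantum order finding."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def try_split(N, a):
    """One run of the splitting procedure for a given a.
    Returns a factor pair (d, N // d) or None on failure."""
    d = gcd(a, N)
    if d > 1:
        return d, N // d  # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 0:
        d = gcd(pow(a, r // 2, N) - 1, N)
        if d > 1:
            return d, N // d
    return None  # this run failed

def split(N):
    """Repeatedly try random choices of a until N splits.
    Assumes N is odd, composite, and not a prime power."""
    while True:
        result = try_split(N, randrange(2, N))
        if result is not None:
            return result
```

For instance, `try_split(21, 2)` finds $r = 6$ and outputs the factor pair `(7, 3)`.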
A run of this algorithm may fail to find a factor of $N$. Specifically, this happens in two situations:
- The order $r$ of $a$ modulo $N$ is odd.
- The order $r$ of $a$ modulo $N$ is even and $\gcd(a^{r/2} - 1, N) = 1$.
Using basic number theory it can be proved that, for a random choice of $a$, with probability at least $1/2$ neither of these events happens. In fact, the probability that either event happens is at most $2^{1-\ell}$, for $\ell$ being the number of distinct prime factors of $N$, which is why the assumption that $N$ is not a prime power is needed: it implies $\ell \ge 2$, so $2^{1-\ell} \le 1/2$. (The assumption that $N$ is odd is also required for this fact to be true.)
This means that each run has at least a 50% chance to split $N$. Therefore, if we run the algorithm $t$ times, randomly choosing $a$ each time, we'll succeed in splitting $N$ with probability at least $1 - 2^{-t}$.
The basic idea behind the algorithm is as follows. If we have a choice of $a$ for which the order $r$ of $a$ modulo $N$ is even, then $r/2$ is an integer and we can consider the numbers

$$a^{r/2} - 1 \quad \text{and} \quad a^{r/2} + 1.$$
Using the formula $x^2 - 1 = (x + 1)(x - 1)$, we conclude that

$$a^r - 1 = \left( a^{r/2} + 1 \right) \left( a^{r/2} - 1 \right).$$
Now, we know that $a^r \equiv 1 \;(\text{mod } N)$ by the definition of the order — which is another way of saying that $N$ evenly divides $a^r - 1$. That means that $N$ evenly divides the product

$$\left( a^{r/2} + 1 \right) \left( a^{r/2} - 1 \right).$$
For this to be true, all of the prime factors of $N$ must also be prime factors of $a^{r/2} - 1$ or $a^{r/2} + 1$ (or both) — and for a random selection of $a$, it turns out to be unlikely that all of the prime factors of $N$ will divide one of the terms and none will divide the other. So long as some of the prime factors of $N$ divide the first term and some divide the second term, we'll be able to find a non-trivial factor of $N$ by computing $\gcd(a^{r/2} - 1, N)$.
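A small numerical illustration of this reasoning, with $N = 21$ and $a = 2$ (so $r = 6$), chosen by us for concreteness:

```python
from math import gcd

N, a, r = 21, 2, 6  # the order of 2 modulo 21 is 6
half = pow(a, r // 2, N)  # a^(r/2) = 8

# N divides (a^(r/2) - 1)(a^(r/2) + 1) = 7 * 9 = 63, but divides neither
# term alone, so each gcd reveals a non-trivial factor of N.
factors = (gcd(half - 1, N), gcd(half + 1, N))
```

Here the prime factor $7$ of $21$ divides the first term and the prime factor $3$ divides the second, so the two GCDs come out to $7$ and $3$.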