Choosing the number of iterations
We have established that the state vector of the register in Grover's algorithm remains in the two-dimensional subspace spanned by $\vert A_0\rangle$ and $\vert A_1\rangle$ once the initialization step has been performed.
The goal is to find an element $x \in A_1$, and this goal will be accomplished if we can obtain the state $\vert A_1\rangle$ — for if we measure this state, we're guaranteed to get a measurement outcome $x \in A_1$. Given that the state of the register after $t$ iterations in step 2 is
$$
\cos\bigl((2t+1)\theta\bigr)\,\vert A_0\rangle + \sin\bigl((2t+1)\theta\bigr)\,\vert A_1\rangle,
$$
we should choose $t$ so that
$$
\sin\bigl((2t+1)\theta\bigr)
$$
is as close to $1$ as possible in absolute value, to maximize the probability to obtain an element of $A_1$ from the measurement. For any angle $\theta$, the value $\sin((2t+1)\theta)$ oscillates as $t$ increases, though it is not necessarily periodic — there's no guarantee that we'll ever get the same value twice.
Naturally, in addition to making the probability of obtaining an element of $A_1$ from the measurement large, we would also like to choose $t$ to be as small as possible, because $t$ applications of the operation $G$ require $t$ queries to the function $f$. Because we're aiming to make $\sin((2t+1)\theta)$ close to $1$ in absolute value, a natural way to do this is to choose $t$ so that
$$
(2t+1)\theta \approx \frac{\pi}{2}.
$$
Solving for $t$ yields
$$
t \approx \frac{\pi}{4\theta} - \frac{1}{2}.
$$
Of course, $t$ must be an integer, so we won't necessarily be able to hit this value exactly — but what we can do is to take the closest integer to this value, which is
$$
t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor.
$$
This is the recommended number of iterations for Grover's algorithm. As we proceed with the analysis, we'll see that the closeness of this integer to the target value naturally affects the performance of the algorithm.
(As an aside, if the target value $\pi/(4\theta) - 1/2$ happens to be exactly half-way between two integers, this expression of $t$ is what we get by rounding up. We could alternatively round down, which makes sense to do because it means one fewer query — but this is secondary and unimportant for the sake of the lesson.)
Recalling that the value of the angle $\theta$ is given by the formula
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{\vert A_1\vert}{N}}\right),
$$
we see that the recommended number of iterations depends on the number of strings in $A_1$. This presents a challenge if we don't know how many solutions we have, as we'll discuss later.
Unique search
First, let's focus on the situation in which there's a single string $x$ such that $f(x) = 1$. Another way to say this is that we're considering an instance of the Unique search problem. In this case we have
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{1}{N}}\right),
$$
which can be conveniently approximated as
$$
\theta \approx \frac{1}{\sqrt{N}}
$$
when $N$ gets large. If we substitute $\theta = 1/\sqrt{N}$ into the expression
$$
t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor,
$$
we obtain
$$
t = \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor.
$$
Recalling that $t$ is not only the number of times the operation $G$ is performed, but also the number of queries to the function $f$ required by the algorithm, we see that we're on track to obtaining an algorithm that requires $O(\sqrt{N})$ queries.
Now we'll investigate how well the recommended choice of $t$ works. The probability that the final measurement results in the unique solution can be expressed explicitly as
$$
p(N,1) = \sin^2\!\Bigl(\bigl(2\lfloor \pi/(4\theta)\rfloor + 1\bigr)\theta\Bigr)
\quad\text{for}\quad
\theta = \sin^{-1}\!\left(\sqrt{\frac{1}{N}}\right).
$$
The first argument, $N$, refers to the number of items we're searching over, and the second argument, which is $1$ in this case, refers to the number of solutions. A bit later we'll use the same notation more generally, where there are multiple solutions.
Here's a table of the probabilities of success for increasing values of $N = 2^n$:

| $N$ | $p(N,1)$ |
|-----|----------|
| $2$ | $0.5000$ |
| $4$ | $1.0000$ |
| $8$ | $0.9453$ |
| $16$ | $0.9613$ |
| $32$ | $0.9992$ |
| $64$ | $0.9966$ |
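A table like this one can be generated directly from the expression for $p(N,1)$. The sketch below (function name ours) prints the success probability for increasing powers of two:

```python
import math

def success_probability(N, s=1):
    """p(N, s): success probability of Grover's algorithm run with the
    recommended number of iterations t = floor(pi / (4*theta))."""
    theta = math.asin(math.sqrt(s / N))
    t = math.floor(math.pi / (4 * theta))
    return math.sin((2 * t + 1) * theta) ** 2

for n in range(1, 11):
    N = 2 ** n
    print(f"{N:5d}  {success_probability(N):.4f}")
```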
Notice that these probabilities are not strictly increasing. In particular, we have an interesting anomaly when $N = 4$, where we get a solution with certainty. It can, however, be proved in general that
$$
p(N,1) \geq 1 - \frac{1}{N}
$$
for all $N$, so the probability of success goes to $1$ in the limit as $N$ becomes large, as the values above seem to suggest. This is good!
Notice, however, that even a weak bound such as $p(N,1) \geq 1/2$ establishes the utility of Grover's algorithm. For whatever measurement outcome $x$ we obtain from running the procedure, we can always check to see if $f(x) = 1$ using a single query to $f$. And if we fail to obtain the unique string $x$ for which $f(x) = 1$ with probability at most $1/2$ by running the procedure once, then after $m$ independent runs of the procedure we will have failed to obtain this unique string with probability at most $2^{-m}$. That is, using $O(m\sqrt{N})$ queries to $f$, we'll obtain the unique solution with probability at least $1 - 2^{-m}$. Using the better bound $p(N,1) \geq 1 - 1/N$ reveals that the probability to find $x$ using this method is actually at least $1 - N^{-m}$.
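The boost from repetition is just the arithmetic of independent trials; a tiny sketch (helper name ours):

```python
def boosted_success(p, m):
    """Probability of seeing the solution at least once over m
    independent runs, each run succeeding with probability p."""
    return 1 - (1 - p) ** m

# Even the weak bound p >= 1/2 gives failure probability at most 2^-m:
print(boosted_success(0.5, 10))
```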
Multiple solutions
As the number of elements in $A_1$ varies, so too does the angle $\theta$, which can have a significant effect on the algorithm's probability of success. For the sake of brevity, let's write $s = \vert A_1 \vert$ to denote the number of solutions, and as before we'll assume that $s \geq 1$.
As a motivating example, let's imagine that we have $s = 4$ solutions rather than a single solution, as we considered above. This means that
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{4}{N}}\right),
$$
which is approximately double the angle we had in the case of a single solution when $N$ is large. Suppose that we didn't know any better, and selected the same value of $t$ as in the unique solution setting:
$$
t = \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor.
$$
The effect will be catastrophic, as the following table of probabilities reveals.

| $N$ | probability of success |
|-----|------------------------|
| $8$ | $0.5000$ |
| $16$ | $0.2500$ |
| $32$ | $0.0122$ |
| $64$ | $0.0204$ |
| $128$ | $0.0145$ |
| $256$ | $0.0001$ |
This time the probability of success goes to $0$ as $N$ goes to infinity. This happens because we're effectively rotating twice as fast as we did when there was a unique solution, so we end up zooming past the target $\vert A_1\rangle$ and landing near $-\vert A_0\rangle$.
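This failure mode is easy to reproduce numerically. The sketch below (names ours) applies the unique-solution iteration count even though there are $s = 4$ solutions:

```python
import math

def mismatched_probability(N, s=4):
    """Success probability when t is chosen as if for a unique solution,
    t = floor(pi * sqrt(N) / 4), but there are actually s solutions."""
    theta = math.asin(math.sqrt(s / N))
    t = math.floor(math.pi * math.sqrt(N) / 4)
    return math.sin((2 * t + 1) * theta) ** 2

for n in range(3, 11):
    N = 2 ** n
    print(f"{N:5d}  {mismatched_probability(N):.4f}")
```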
However, if instead we use the recommended choice of $t$, which is
$$
t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor
$$
for
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{s}{N}}\right),
$$
then the performance will be better. To be more precise, using this choice of $t$ leads to success with high probability.
Generalizing what was claimed earlier, it can be proved that
$$
p(N,s) \geq 1 - \frac{s}{N},
$$
where we're using the notation suggested earlier: $p(N,s)$ denotes the probability that Grover's algorithm run for $t = \lfloor \pi/(4\theta) \rfloor$ iterations reveals a solution when there are $s$ solutions in total out of $N$ possibilities.
This lower bound of $1 - s/N$ on the probability of success is slightly peculiar, in that more solutions implies a worse lower bound — but under the assumption that $s$ is significantly smaller than $N$, we nevertheless conclude that the probability of success is reasonably high. As before, the mere fact that $p(N,s)$ is reasonably large implies the algorithm's usefulness.
It also happens to be the case that
$$
p(N,s) \geq \frac{s}{N}.
$$
This lower bound describes the probability that a string selected uniformly at random is a solution — so Grover's algorithm always does at least as well as random guessing. (In fact, when $t = 0$, Grover's algorithm is random guessing.)
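Both lower bounds can be checked numerically over a grid of instances, assuming the same expression for $p(N,s)$ used above (function name ours):

```python
import math

def success_probability(N, s):
    """p(N, s) with the recommended t = floor(pi / (4*theta))."""
    theta = math.asin(math.sqrt(s / N))
    t = math.floor(math.pi / (4 * theta))
    return math.sin((2 * t + 1) * theta) ** 2

# Verify p(N, s) >= 1 - s/N and p(N, s) >= s/N on many instances.
for n in range(2, 11):
    N = 2 ** n
    for s in range(1, N + 1):
        p = success_probability(N, s)
        assert p >= 1 - s / N - 1e-12 and p >= s / N - 1e-12
print("both bounds hold on all tested instances")
```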
Now let's take a look at the number of iterations (and hence the number of queries)
$$
t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor,
$$
for
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{s}{N}}\right).
$$
For every $\alpha \in [0,1]$, it is the case that $\sin^{-1}(\alpha) \geq \alpha$, and so
$$
\theta = \sin^{-1}\!\left(\sqrt{\frac{s}{N}}\right) \geq \sqrt{\frac{s}{N}}.
$$
This implies that
$$
t \leq \frac{\pi}{4\theta} \leq \frac{\pi}{4}\sqrt{\frac{N}{s}}.
$$
This translates to a savings in the number of queries as $s$ grows. In particular, the number of queries required is $O(\sqrt{N/s})$.
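The savings are visible immediately if we compute the recommended iteration count for growing $s$ (sketch; function name ours):

```python
import math

def grover_iterations(N, s):
    theta = math.asin(math.sqrt(s / N))
    return math.floor(math.pi / (4 * theta))

# Quadrupling s roughly halves the iteration count, scaling like sqrt(N/s).
N = 2 ** 20
for s in [1, 4, 16, 64]:
    print(f"s = {s:3d}: t = {grover_iterations(N, s)}")
```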
Unknown number of solutions
If the number of solutions is unknown, then a different approach is required, for in this situation we have no knowledge of $\theta$ to inform our choice of $t$. There are, in fact, multiple approaches.
One simple approach is to choose
$$
t \in \left\{ 1, \ldots, \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor \right\}
$$
uniformly at random. Selecting $t$ in this way finds a solution (assuming one exists) with probability greater than 40%, though this is not obvious and requires an analysis that will not be included here. It does make sense, however, particularly when we think about the geometric picture: rotating the state of the register a random number of times like this is not unlike choosing a random unit vector in the space spanned by $\vert A_0\rangle$ and $\vert A_1\rangle$, for which it is likely that the coefficient of $\vert A_1\rangle$ is reasonably large. By repeating this procedure and checking the outcome in the same way as described before, the probability to find a solution can be made very close to $1$.
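Because the success probability for a fixed $t$ is $\sin^2((2t+1)\theta)$, the average over a random choice of $t$ can be computed exactly. The sketch below assumes the range $\{1, \ldots, \lfloor \pi\sqrt{N}/4 \rfloor\}$ (an illustrative choice; function name ours):

```python
import math

def random_t_success(N, s):
    """Average success probability when t is drawn uniformly from
    {1, ..., floor(pi * sqrt(N) / 4)} (range assumed for illustration)."""
    T = math.floor(math.pi * math.sqrt(N) / 4)
    theta = math.asin(math.sqrt(s / N))
    probs = [math.sin((2 * t + 1) * theta) ** 2 for t in range(1, T + 1)]
    return sum(probs) / T

# The average stays above 40% whether there is one solution or many:
for s in [1, 4, 16]:
    print(f"s = {s:2d}: {random_t_success(1024, s):.4f}")
```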
There is a refined method that finds a solution when one exists using $O(\sqrt{N/s})$ queries, even when the number of solutions $s$ is not known, and requires $O(\sqrt{N})$ queries to determine that there are no solutions when $s = 0$.
The basic idea is to choose $t$ uniformly at random from the set $\{1,\ldots,T\}$ iteratively, for increasing values of $T$. In particular, we can start with $T = 1$ and increase it exponentially, always terminating the process as soon as a solution is found and capping $T$ so as not to waste queries when there isn't a solution. The process takes advantage of the fact that fewer queries are required when more solutions exist. Some care is required, however, to balance the rate of growth of $T$ with the probability of success for each iteration. (Taking $T \leftarrow \lceil \tfrac{5}{4} T \rceil$ works, for instance, as an analysis reveals. Doubling $T$, however, does not — this turns out to be too fast of an increase.)
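Here is a sketch of that iterative process. The oracle interface, function names, and the exact cap are illustrative assumptions, not part of the lesson's formal statement:

```python
import math
import random

def search_unknown_s(run_grover, is_solution, N):
    """Exponential-schedule search when the number of solutions is unknown.
    `run_grover(t)` runs Grover's algorithm for t iterations and returns a
    measured string; `is_solution(x)` checks it with one query to f."""
    T = 1
    cap = math.ceil(math.pi * math.sqrt(N) / 4)  # don't overshoot O(sqrt(N))
    while True:
        t = random.randint(1, T)  # t chosen uniformly from {1, ..., T}
        x = run_grover(t)
        if is_solution(x):
            return x
        if T >= cap:
            return None  # no solution found; likely none exists
        T = min(math.ceil(5 * T / 4), cap)  # grow slowly, by a factor of 5/4
```

In a complete implementation the stopping rule would be tuned so that the failure probability when a solution exists is small; this sketch simply gives up once the cap is reached.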
The trivial cases
Throughout the analysis we've just gone through, we've assumed that the number of solutions is non-zero. Indeed, by referring to the vectors
$$
\vert A_0\rangle = \frac{1}{\sqrt{\vert A_0\vert}} \sum_{x \in A_0} \vert x\rangle
\quad\text{and}\quad
\vert A_1\rangle = \frac{1}{\sqrt{\vert A_1\vert}} \sum_{x \in A_1} \vert x\rangle,
$$
we have implicitly assumed that $A_0$ and $A_1$ are both nonempty. Here we will briefly consider what happens when one of these sets is empty.
Before we bother with an analysis, let's observe the obvious: if every string is a solution, then we'll see a solution when we measure; and when there aren't any solutions, we won't see one. In some sense there's no need to go deeper than this.
We can, however, quickly verify the mathematics for these trivial cases. The situation where one of $A_0$ and $A_1$ is empty happens when $f$ is constant: $A_1$ is empty when $f(x) = 0$ for every $x \in \Sigma^n$, and $A_0$ is empty when $f(x) = 1$ for every $x \in \Sigma^n$. This means that
$$
Z_f = \pm\mathbb{1},
$$
and therefore
$$
G \vert u \rangle = H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} Z_f \vert u \rangle = \pm \vert u \rangle.
$$
So, irrespective of the number of iterations $t$ we perform in these cases, the measurements will always reveal a uniform random string $x \in \Sigma^n$.
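We can confirm this with a direct simulation. Using the identity $H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = 2\vert u\rangle\langle u\vert - \mathbb{1}$, one Grover iteration on a constant function can be applied to a vector of amplitudes in plain Python (sketch; names ours):

```python
import math

n = 3
N = 2 ** n  # number of basis states

def grover_step(state, f_value):
    """One iteration G = (2|u><u| - I) Z_f applied to a list of N
    amplitudes, for a constant function f(x) = f_value."""
    sign = -1 if f_value == 1 else 1   # Z_f is +I or -I when f is constant
    state = [sign * a for a in state]
    m = sum(state) / N                 # mean amplitude; 2|u><u| maps a_x -> 2*mean
    return [2 * m - a for a in state]  # reflection about |u>

for f_value in (0, 1):
    state = [1 / math.sqrt(N)] * N     # uniform superposition |u>
    for _ in range(5):                 # any number of iterations
        state = grover_step(state, f_value)
    probs = [a * a for a in state]
    assert all(abs(p - 1 / N) < 1e-12 for p in probs)  # uniform outcome
```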