
Grover Search

With Deutsch-Jozsa, we saw our first fundamentally quantum algorithm that showed how quantum computers could do something better than classical computers. But it doesn't seem like a particularly useful algorithm. Is there a more useful algorithm that we might be able to run on a quantum computer?

We'll answer this question by looking at Grover Search. This algorithm shows how we can use a quantum computer to search through an unstructured list of items quadratically faster than a classical computer would be able to.

Let's start by formally defining the problem.

Suppose we have a list of items we would like to search. Perhaps it's a list of users in our system, and we would like to find the user with a particular email address:

| User ID | Email | Name | Phone Number |
| --- | --- | --- | --- |
| 34587 | [email protected] | Joe Blogs | 1234556789 |
| 98263 | [email protected] | Adam Smith | 9373526118 |
| 76291 | [email protected] | Lara Croft | 6735264123 |

In a classical system, we might solve this lookup problem using indexes. But indexes can create overheads in other parts of the system, so it is not uncommon to fall back on a linear search.

In a completely unordered list like this, we know that the best we can do is simply scan over each item until we find our match. This is a linear approach, since the time it takes to scan through the list is proportional to the number of items in the list.
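To make that concrete, here is the linear scan as a minimal Python sketch; the `users` list simply mirrors the rows of the table above (emails omitted), and `find_index` is a name chosen just for this illustration:

```python
def find_index(records, matches):
    """Linear search: check each record in turn until one satisfies the condition."""
    for i, record in enumerate(records):
        if matches(record):
            return i
    return None  # scanned the whole list without finding a match

# Rows from the table above, with the email field omitted.
users = [
    {"user_id": 34587, "name": "Joe Blogs", "phone": "1234556789"},
    {"user_id": 98263, "name": "Adam Smith", "phone": "9373526118"},
    {"user_id": 76291, "name": "Lara Croft", "phone": "6735264123"},
]

print(find_index(users, lambda u: u["phone"] == "6735264123"))  # 2
```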

This is quite often referred to as the "phone book search" problem. A phone book is an ancient pre-historic device that was used to look up a phone number for a particular person. It was sorted by name so that you could find the person you were looking for quickly. But if you wanted to do the opposite, find the name associated with a particular phone number, the best you could do was simply scan through every entry until you found it.

I hope you're already getting excited thinking about how a quantum computer could solve this quicker. You should be!

Before we step back into the quantum world, let's translate our problem into some more formal mathematical language.

Let's say our database contains N objects. We will define a function f that takes an index from 0 to N-1 and outputs 1 if the database item at that index satisfies our condition and 0 otherwise.

For example, if we're looking for the user with email "[email protected]" in the table above, we have f(x) = 1 if the object at index x has that email address. If we treat the "User ID" as the index, then in this case x = 76291.

Again, to re-iterate in the mathematical language: the best a classical computer can do to find x is to check all values from 0 to N-1 until it finds one that satisfies f(x) = 1.
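In this language, the classical brute force is just a loop over indexes. Here is a minimal Python sketch, where the particular `f` is a toy stand-in for whatever condition check we actually care about:

```python
def classical_search(N, f):
    """Check f(x) for every index x in 0..N-1; in the worst case this takes N evaluations."""
    for x in range(N):
        if f(x) == 1:
            return x
    return None  # no index satisfies the condition

# Toy example: the condition is satisfied only at index 5.
f = lambda x: 1 if x == 5 else 0
print(classical_search(8, f))  # 5
```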

Now let's go back to the quantum world.

To start constructing our quantum algorithm, we're going to introduce something called a phase oracle. It will be defined as follows:

O_x\ket{y} = \begin{cases} -\ket{y} & \text{if}\ y = x \\ \ket{y} & \text{if}\ y \neq x \end{cases}

So it essentially sticks a negative sign in front of one particular state and leaves all others alone. We call this a phase oracle since adding a negative sign is just multiplying by a phase e^{i\pi} = -1.

We can represent this oracle in outer-product notation:

O_x = I - 2\ket{x}\bra{x}

We can check this works in both cases: first when the input is the marked state \ket{x}, and then for any other basis state \ket{y} with y \neq x (using \braket{x|x} = 1 and \braket{x|y} = 0):

O_x\ket{x} = I\ket{x} - 2\ket{x}\bra{x}\ket{x} = \ket{x} - 2\ket{x} = -\ket{x} \\[3ex] O_x\ket{y} = I\ket{y} - 2\ket{x}\bra{x}\ket{y} = \ket{y}
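If you'd like to check the outer-product form numerically, here is a short NumPy sketch; the size N = 4 and marked index x = 2 are arbitrary choices for illustration:

```python
import numpy as np

N, x = 4, 2                                # toy example: 4 basis states, marked index x = 2
ket_x = np.zeros((N, 1)); ket_x[x] = 1     # |x> as a column vector

O_x = np.eye(N) - 2 * ket_x @ ket_x.T      # O_x = I - 2|x><x|

ket_y = np.zeros((N, 1)); ket_y[0] = 1     # some other basis state |y>, with y != x
print((O_x @ ket_x).ravel())               # -|x>: the marked state picks up a minus sign
print((O_x @ ket_y).ravel())               # |y> is left untouched
```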

We will apply one additional constraint before we construct the algorithm: the database will only have a single entry satisfying the condition, i.e. there will only exist a single value of x such that f(x) = 1. It's possible to generalise to multiple solutions, but we'll keep it simple for now.

Let's define the solution we're looking for as \omega, i.e. f(\omega) = 1. Grover's search will then run as follows:

  1. Prepare n = \lceil \log_2{N} \rceil qubits in the starting state \ket{0}. This ensures we have enough qubits to represent all the possible indexes in our database.
  2. Apply the Hadamard gate to all qubits: H^{\otimes n}, to end up in the even superposition: \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1} \ket{x}
  3. Apply our quantum phase oracle for the solution state: O_{\omega}
  4. Hadamard gate every qubit: H^{\otimes n}
  5. Apply an oracle that adds a negative sign to every state apart from the all-zero state: -O_0
  6. Hadamard gate every qubit: H^{\otimes n}
  7. Go back to step 3 and repeat a specified number of times
  8. Measure all the qubits; with high probability the result will be the solution \omega
  9. If the solution is incorrect, run again
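
Before digging into why this works, here is a minimal NumPy sketch that simulates these steps with explicit matrices; the qubit count n = 3 and marked index \omega = 5 are arbitrary illustrative choices, and the number of repetitions uses the count we derive at the end of the article:

```python
import numpy as np

n = 3                                      # number of qubits, so N = 2**n = 8 indexes
N = 2 ** n
omega = 5                                  # the single marked index (arbitrary choice)

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                     # build H^{\otimes n}

ket_zero = np.zeros((N, 1)); ket_zero[0] = 1
ket_omega = np.zeros((N, 1)); ket_omega[omega] = 1

O_omega = np.eye(N) - 2 * ket_omega @ ket_omega.T   # step 3: phase oracle for omega
O_zero = np.eye(N) - 2 * ket_zero @ ket_zero.T      # used in step 5 as -O_0
G = H @ (-O_zero) @ H @ O_omega                     # steps 3-6: one Grover iteration

state = H @ ket_zero                       # steps 1-2: uniform superposition over all indexes
repetitions = int(round(np.pi / 4 * np.sqrt(N) - 0.5))   # count derived at the end of the article
for _ in range(repetitions):               # step 7
    state = G @ state

print(np.abs(state.ravel()) ** 2)          # step 8: probabilities, sharply peaked at index 5
```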

Ok, that's quite a lot to take in. Before we start looking into how this works, let's look at the circuit representation:

The circuit representation of the Grover circuit

The numbers at the bottom represent the corresponding steps in the algorithm listed above. We've also added a bracket over steps 3-6 labelled G. We will refer to this part as the Grover iterator. This will be the part that gets repeated several times.

Analysing this algorithm revolves around finding a nice mathematical representation of this operator. Formally, we can see it's defined as:

G = H^{\otimes n}(-O_0)H^{\otimes n}O_{\omega}

In fact, we'll split it up more by defining the Grover diffusion operator D:

D = H^{\otimes n}(-O_0)H^{\otimes n}

So G = DO_{\omega}

We can simplify D:

D = H^{\otimes n}(-O_0)H^{\otimes n} \\[3ex] = H^{\otimes n}(2\ket{0}\bra{0} - I)H^{\otimes n} \\[3ex] = 2H^{\otimes n}\ket{0}\bra{0}H^{\otimes n} - H^{\otimes n}IH^{\otimes n} \\[3ex] = 2(H^{\otimes n}\ket{0})(\bra{0}H^{\otimes n}) - H^{\otimes n}H^{\otimes n}

Since the Hadamard gate is its own inverse, the second term here reduces to the identity I.

The first bracket in the first term we know is just the even superposition of all possible qubit states: \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}\ket{x}. We will need this state a lot, so we're going to denote it as \ket{s}. The second bracket is just the corresponding bra of this ket: \bra{s}. This comes from some maths we haven't discussed yet, but just know that since all the coefficients involved are real (they have no imaginary parts), the bra can be written this way simply by flipping the ket.

Putting this together we now have:

D = 2\ket{s}\bra{s} - I

And thus:

G = (2\ket{s}\bra{s} - I)(I - 2\ket{\omega}\bra{\omega})
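If you want a numerical sanity check of this simplification, a short NumPy sketch (for an arbitrary n = 3) confirms that the circuit form of D and the formula 2\ket{s}\bra{s} - I agree:

```python
import numpy as np

n = 3
N = 2 ** n
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                               # H^{\otimes n}

ket_zero = np.zeros((N, 1)); ket_zero[0] = 1
ket_s = np.full((N, 1), 1 / np.sqrt(N))              # |s>: the uniform superposition

O_zero = np.eye(N) - 2 * ket_zero @ ket_zero.T       # O_0 = I - 2|0><0|
D_from_circuit = H @ (-O_zero) @ H                   # H^{\otimes n}(-O_0)H^{\otimes n}
D_from_formula = 2 * ket_s @ ket_s.T - np.eye(N)     # 2|s><s| - I

print(np.allclose(D_from_circuit, D_from_formula))   # True
```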

The next step of our journey will be building some geometric intuition for what these operators are really doing.

What we will show now is that an operator of the form 2\ket{x}\bra{x} - I actually represents a geometric reflection.

To see this, imagine we have 2 orthogonal vectors \ket{x} and \ket{x^{\perp}} where, since they are orthogonal, we have \braket{x|x^{\perp}} = 0. We define a vector in terms of these two: \ket{\psi} = a\ket{x} + b\ket{x^{\perp}} and then see how our operator behaves on it:

(2\ket{x}\bra{x} - I)\ket{\psi} = (2\ket{x}\bra{x} - I)(a\ket{x} + b\ket{x^{\perp}}) \\[3ex] = 2a\ket{x}\bra{x}\ket{x} + 2b\ket{x}\bra{x}\ket{x^{\perp}} - aI\ket{x} - bI\ket{x^{\perp}} \\[3ex] = 2a\ket{x} - a\ket{x} - b\ket{x^{\perp}} \\[3ex] = a\ket{x} - b\ket{x^{\perp}}

As you can see, the operator flipped the component of \ket{x^{\perp}} and left the component of \ket{x} untouched. Thus this represents a reflection or mirroring around the vector \ket{x}.
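Here is the same calculation as a tiny NumPy check, taking \ket{x} to be the first standard basis vector of a 2D space and a = 0.6, b = 0.8 as arbitrary example amplitudes:

```python
import numpy as np

ket_x = np.array([[1.0], [0.0]])               # |x>: first basis vector of a 2D space
ket_x_perp = np.array([[0.0], [1.0]])          # |x_perp>: orthogonal to |x>
R = 2 * ket_x @ ket_x.T - np.eye(2)            # the operator 2|x><x| - I

psi = 0.6 * ket_x + 0.8 * ket_x_perp           # a|x> + b|x_perp>
print((R @ psi).ravel())                       # [0.6, -0.8]: only the |x_perp> component is flipped
```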

Now that we know this, a slight re-arranging of our definition of G tells us much more:

G = -(2\ket{s}\bra{s} - I)(2\ket{\omega}\bra{\omega} - I)

We can see that G is in fact a reflection around \ket{\omega}, followed by a reflection around \ket{s}, then negated.

If you remember your rules of geometric transforms well, you will know that 2 reflections always make a rotation. So we can see that our Grover iterator is performing a rotation of our vector every time we apply it.
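You can verify this directly in 2D: reflecting about one axis and then about a second axis at angle \theta to the first composes to a rotation by 2\theta. A small NumPy sketch (with an arbitrarily chosen \theta):

```python
import numpy as np

def reflection_about(v):
    """2|v><v| - I: reflection about the line spanned by the unit vector v."""
    v = v.reshape(2, 1)
    return 2 * v @ v.T - np.eye(2)

theta = 0.3                                         # arbitrary angle between the two mirror axes
v1 = np.array([1.0, 0.0])                           # first mirror axis
v2 = np.array([np.cos(theta), np.sin(theta)])       # second mirror axis, at angle theta to the first

composed = reflection_about(v2) @ reflection_about(v1)    # reflect about v1, then about v2
rotation = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
                     [np.sin(2 * theta),  np.cos(2 * theta)]])
print(np.allclose(composed, rotation))              # True: two reflections make a rotation by 2*theta
```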

We'd like to be able to visualise this rotation; to see where it's rotating from and where it's rotating to. The problem is that we have n = \lceil \log_2{N} \rceil qubits and thus \approx N different basis states. This means if we have 1000 items to search through, we have a 1000-dimensional vector space! That's not very easy to visualise...

Luckily we have a handy tool up our sleeve: projection. Just in the same way your eyes right now are projecting the 3D world into a 2D image on your retina, we can project our problem into a more manageable number of dimensions.

Surprisingly, we can project it down into just 2!

Our first vector will simply be the solution we are looking for: \ket{\omega}.

The second vector we will choose will be the even superposition of all states we are NOT looking for, i.e. an even superposition of all \ket{x} where f(x) = 0. Since we specified there will be only one solution in our case, there are N-1 such values of \ket{x}, so we can write our second vector as:

\ket{s^\prime} = \frac{1}{\sqrt{N-1}}\sum_{x \neq \omega}\ket{x}

The choice to label this vector \ket{s^\prime} will become apparent shortly.

It's fairly clear these two vectors are orthogonal, since they don't share any of the same kets. And thus we can plot them in 2D:

The omega and s prime two-dimensional vector space

Ok, so we've got two basis vectors for our space. Now we need to know where in the space to plot our state vector. Since we've applied a Hadamard gate to every qubit of the all-\ket{0} state, we know that we start in the state \ket{s}.

We also know all our state vectors should be of unit length (that's what the normalisation coefficients are for), so all that's left is to determine whether \ket{s} lies closer to \ket{\omega} or to \ket{s^\prime}. We can do this with the inner product, since two orthogonal unit vectors have an inner product of 0 and two parallel unit vectors have an inner product of 1:

\braket{\omega|s} = \bra{\omega}\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\ket{x} \\[3ex] = \frac{1}{\sqrt{N}}\braket{\omega|\omega} = \frac{1}{\sqrt{N}}
\braket{s^\prime |s} = \Bigl(\frac{1}{\sqrt{N-1}}\sum_{x\neq \omega}\bra{x}\Bigr)\Bigl(\frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\ket{y}\Bigr) \\[3ex] = \frac{1}{\sqrt{N-1}\sqrt{N}}\sum_{x\neq \omega}\braket{x|x} = \frac{1}{\sqrt{N-1}\sqrt{N}}(N-1) \\[3ex] = \sqrt{\frac{N-1}{N}}
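As a quick numeric sanity check of these two overlaps (with an arbitrary N = 8 and marked index \omega = 5):

```python
import numpy as np

N, omega = 8, 5                                            # arbitrary small example
ket_omega = np.zeros(N); ket_omega[omega] = 1              # |omega>
ket_s = np.full(N, 1 / np.sqrt(N))                         # |s>: uniform superposition
ket_s_prime = (np.ones(N) - ket_omega) / np.sqrt(N - 1)    # |s'>: uniform over the non-solutions

print(ket_omega @ ket_s, 1 / np.sqrt(N))                   # both equal 1/sqrt(N)
print(ket_s_prime @ ket_s, np.sqrt((N - 1) / N))           # both equal sqrt((N-1)/N)
```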

From these results, it is fairly clear that our initial state vector lies closer to \ket{s^\prime} than to \ket{\omega}, so let's plot our starting point. We'll denote our starting state as \ket{\phi_0} as we have done before:

The phi zero state plotted on the two dimensional vector space, closer to s prime than to omega

Now to find \ket{\phi_1} we just apply our Grover iterator: \ket{\phi_1} = G\ket{\phi_0}. We know, from earlier, that this operation is a reflection around \ket{\omega} followed by a reflection around \ket{s} then negated.

A nice result from 2D geometry is that a negated reflection around a vector is equal to a reflection around a perpendicular vector.

So we could pull the negative sign of the Grover iterator inside to the reflection around \ket{\omega} and make it a reflection around \ket{s^\prime}.

So our Grover iterator is now a reflection around \ket{s^\prime} followed by a reflection around \ket{s}.

So this looks like:

A representation of the flips a state goes through under the Grover iterator

Here, the bottom dotted line represents the first reflection around \ket{s^\prime}.

So we can see that the Grover iterator is rotating our state away from the vector of non-solutions, \ket{s^\prime}, and towards the solution vector \ket{\omega}!

Once we have got near to the solution we can simply measure the state and we will have a high probability of finding the correct solution. The only question is how many times do we have to rotate? i.e. how many times do we have to apply the Grover iterator?

To answer this, we'll have to look at the angles involved in the rotations. We can see this more easily on the following diagram:

The flips of the state under the Grover iterator with angles labelled

We can see some clear doubling relations between the angles here. The green angle, the result after the initial reflection around \ket{s^\prime}, is double the red angle; the blue angle, the result after the reflection around \ket{s}, is double the green angle. And finally, the yellow angle is clearly half the blue angle.

Putting this all together, we see that the angle between \ket{\phi_0} and \ket{\phi_1} is simply double the red angle, the angle between \ket{s} and \ket{s^\prime}.

So what is the red angle?

We can use the following geometric result on the inner product: \braket{x|y} = \sqrt{\braket{x|x}}\sqrt{\braket{y|y}}\cos{\theta}. In our case \ket{s} and \ket{s^\prime} are unit vectors, so the length factors can be ignored: \braket{s|s^\prime} = \cos{\theta}.

But we already calculated this inner product earlier; it's:

\braket{s|s^\prime} = \sqrt{\frac{N-1}{N}}

So we have:

\cos{\theta} = \sqrt{\frac{N-1}{N}}

Using the identity \sin^2{\theta} + \cos^2{\theta} = 1:

\sin{\theta} = \sqrt{1 - \cos^2{\theta}} = \sqrt{1 - \frac{N-1}{N}} = \sqrt{\frac{1}{N}}

Taking the inverse sine, and using \sin^{-1}{x} \approx x for small x (which 1/\sqrt{N} is, for large N), we get:

\theta \approx \sqrt{\frac{1}{N}}

Now remember, every application of the Grover iterator rotates our state by 2\theta, and we started at an angle of \theta from \ket{s^\prime}, so after n iterations we are at an angle of (2n+1)\theta.

We reach the solution when this angle equals \pi/2 which, rearranging, happens when n = \frac{\pi}{4\theta} - \frac{1}{2}.

Now subbing in our approximation of \theta:

n \approx \frac{\pi}{4}\sqrt{N} - \frac{1}{2}
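In practice we round this to the nearest whole number of iterations. A short Python sketch (using the exact \theta = \sin^{-1}(1/\sqrt{N}) rather than the approximation) shows how few iterations are needed, and how likely the measurement is to return \omega; from the angle picture above, that probability is \sin^2((2n+1)\theta):

```python
import numpy as np

def optimal_iterations(N):
    """Round pi/(4*theta) - 1/2 to the nearest whole number of Grover iterations."""
    theta = np.arcsin(1 / np.sqrt(N))
    return int(round(np.pi / (4 * theta) - 0.5))

for N in (8, 1024, 1_000_000):
    k = optimal_iterations(N)
    theta = np.arcsin(1 / np.sqrt(N))
    p_success = np.sin((2 * k + 1) * theta) ** 2    # probability the measurement returns omega
    print(f"N = {N}: {k} iterations, success probability ~ {p_success:.4f}")
```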

Wow, that was a lot of work! But we're finally done!

We've shown that we require only \sim \sqrt{N} steps to reach the solution.

Remember, a classical algorithm would take \sim N steps, so we've found a significant improvement: a quadratic speedup!
