Concrete Examples of Quantum Gates

So far in this series we’ve seen a lot of motivation and defined basic ideas of what a quantum circuit is. But on rereading my posts, I think we would all benefit from some concreteness.

“Local” operations

So by now we’ve understood that quantum circuits consist of a sequence of gates $ A_1, \dots, A_k$, where each $ A_i$ is an 8-by-8 matrix that operates “locally” on some choice of three (or fewer) qubits. And in your head you imagine starting with some state vector $ v$ and applying each $ A_i$ locally to its three qubits until the end when you measure the state and get some classical output.

But the point I want to make is that $ A_i$ actually changes the whole state vector $ v$, because the three qubits it acts “locally” on are part of the entire basis. Here’s an example. Suppose we have three qubits and they’re in the state

$ \displaystyle v = \frac{1}{\sqrt{14}} (e_{001} + 2e_{011} - 3e_{101})$

Recall we abbreviate basis states by subscripting them by binary strings, so $ e_{011} = e_0 \otimes e_1 \otimes e_1$, and a valid state is any unit vector over the $ 2^3 = 8$ possible basis elements. As a vector, this state is $ \frac{1}{\sqrt{14}} (0,1,0,2,0,-3,0,0)$

Say we apply the gate $ A$ that swaps the first and third qubits. “Locally” this gate has the following matrix:

$ \displaystyle \begin{pmatrix} 1&0&0&0 \\ 0&0&1&0 \\ 0&1&0&0 \\ 0&0&0&1 \end{pmatrix}$

where we index the rows and columns by the relevant strings in lexicographic order: 00, 01, 10, 11. So this operation leaves $ e_{00}$ and $ e_{11}$ the same while swapping the other two. However, as an operation on three qubits the operation looks quite different. And it’s sort of hard to describe a general way to write it down as a matrix because of the choice of indices. There are three different perspectives.

Perspective 1: if the qubits being operated on are sequential (like, the third, fourth, and fifth qubits), then we can write the matrix as $ I_{2^{a}} \otimes A \otimes I_{2^{b}}$ where a tensor product of matrices is the Kronecker product and $ a + b + \log \textup{dim}(A) = n$ (the number of qubits adds up). Then the final operation looks like a “tiled product” of identity matrices by $ A$, but it’s a pain to write out. Let me hurt myself for your sake, dear reader.

$ \displaystyle I_{2^a} \otimes A \otimes I_{2^b} = \begin{pmatrix} A \otimes I_{2^b} & & \\ & \ddots & \\ & & A \otimes I_{2^b} \end{pmatrix}$

which is a block-diagonal matrix with $ 2^a$ copies of $ A \otimes I_{2^b}$ on the diagonal and zeroes elsewhere.

And each copy of $ A \otimes I_{2^b}$ looks like

$ \displaystyle A \otimes I_{2^b} = \begin{pmatrix} a_{1,1} I_{2^b} & a_{1,2} I_{2^b} & \dots & a_{1,m} I_{2^b} \\ a_{2,1} I_{2^b} & a_{2,2} I_{2^b} & \dots & a_{2,m} I_{2^b} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} I_{2^b} & a_{m,2} I_{2^b} & \dots & a_{m,m} I_{2^b} \end{pmatrix}$

where $ m = \textup{dim}(A)$ and the $ a_{i,j}$ are the entries of $ A$.

That’s a mess, but if you write it out for our example of swapping the first and third qubits of a three-qubit register you get the following:

$ \displaystyle \begin{pmatrix} 1&0&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0 \\ 0&0&1&0&0&0&0&0 \\ 0&0&0&0&0&0&1&0 \\ 0&1&0&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&1&0&0&0&0 \\ 0&0&0&0&0&0&0&1 \end{pmatrix}$

where the rows and columns are indexed by the strings 000, 001, …, 111 in lexicographic order.

And this makes sense: the gate swaps the pairs of entries of the state vector whose first and third bits differ, and leaves the rest alone. This is what happens to our state:

$ \displaystyle v = \frac{1}{\sqrt{14}} (0,1,0,2,0,-3,0,0) \mapsto \frac{1}{\sqrt{14}} (0,0,0,0,1,-3,2,0)$
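
If you want to sanity-check this with a computer, here is one way to build that 8 x 8 matrix in numpy directly from the bit-swapping rule (a quick sketch in the same spirit as the code later in this post, not anything canonical):

import numpy

n = 3
swap_13 = numpy.zeros((2**n, 2**n))

# column x gets a 1 in row sigma(x), where sigma swaps the first and third bits
for x in range(2**n):
   bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]   # bits[0] is the first qubit
   bits[0], bits[2] = bits[2], bits[0]
   sigma_x = sum(b << (n - 1 - i) for (i, b) in enumerate(bits))
   swap_13[sigma_x, x] = 1

v = (1 / 14**0.5) * numpy.array([0, 1, 0, 2, 0, -3, 0, 0])
print(swap_13.dot(v) * 14**0.5)   # [ 0.  0.  0.  0.  1. -3.  2.  0.]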

Perspective 2: just assume every operation works on the first three qubits, and wrap each operation $ A$ in between an operation that swaps the first three qubits with the desired three. So like $ BAB$ for $ B$ a swap operation. Then the matrix form looks a bit simpler: conjugating by $ B$ permutes the rows and columns of the matrix form we gave above so that it has the form $ A \otimes I_{2^{n-3}}$. This allows one to retain a shred of sanity when trying to envision the matrix for an operation that acts on three qubits that are not sequential. The downside is that to actually use this perspective in an analysis you have to carry around the extra baggage of these permutation matrices. So one might use this as a simplifying assumption (a “without loss of generality” statement).

Perspective 3: ignore matrices and write things down in a summation form. So if $ \sigma$ is the permutation that swaps 1 and 3 and leaves the other indices unchanged, we can write the general operation on a state $ v = \sum_{x \in \{ 0,1 \}^n } a_x e_{x}$ as $ Av = \sum_{x \in \{ 0,1 \}^n} a_x e_{\sigma(x)}$.
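
In code, the summation form is pleasant because you never have to materialize the big matrix. Here is a sketch (my own illustration, not canonical code) that applies any permutation of the qubit positions to a state vector:

import numpy

def apply_bit_permutation(state, sigma):
   # Perspective 3 as code: map e_x to e_{sigma(x)} without building the matrix.
   # sigma[i] is the new position of bit i; bit 0 is the leftmost (first) qubit.
   n = len(sigma)
   output = numpy.zeros_like(state)

   for x in range(2**n):
      bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
      permuted = [0] * n
      for (i, b) in enumerate(bits):
         permuted[sigma[i]] = b
      y = sum(b << (n - 1 - i) for (i, b) in enumerate(permuted))
      output[y] = state[x]

   return output

Calling apply_bit_permutation(v, [2, 1, 0]) on the state above reproduces the same output vector as the matrix did.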

The third option is probably the nicest way to do things, but it’s important to keep the matrix view in mind for many reasons. Just one quick reason: “errors” in quantum gates (that are meant to approximately compute something) compound linearly in the number of gates because the operations are linear. This is a key reason that allows one to design quantum analogues of error correcting codes.

So we’ve established that the basic (atomic) quantum gates are “local” in the sense that they operate on a fixed number of qubits, but they are not local in the sense that they can screw up the entire state vector.

A side note on the meaning of “local”

When I was chugging through learning this stuff (and I still have far to go), I wanted to come up with an alternate characterization of the word “local” so that I would feel better about using the word “local.” Mathematicians are as passionate about word choice as programmers are about text editors. In particular, for a long time I was ignorantly convinced that quantum gates that act on a small number of qubits don’t affect the marginal distribution of measurement outcomes for other qubits. That is, I thought that if $ A$ acts on qubits 1,2,3, then $ Av$ and $ v$ have the same probability of a measurement producing a 1 in index 4, 5, etc, conditioned on fixing a measurement outcome for qubits 1,2,3. In notation: if $ x$ is a random variable whose values are binary strings and $ v$ is a state vector, write $ x \sim v$ for the random process of measuring the state vector $ v$ and getting a string $ x$. My claim was that the following holds for every $ b_1, b_2, b_3 \in \{0,1\}$ and every $ 1 \leq i \leq n$:

$ \displaystyle \begin{aligned}\Pr_{x \sim v}&[x_i = 1 \mid x_1 = b_1, x_2 = b_2, x_3 = b_3] = \\ \Pr_{x \sim Av}&[x_i = 1 \mid x_1 = b_1, x_2 = b_2, x_3 = b_3] \end{aligned}$

You could try to prove this, and you would fail because it’s false. In fact, it’s even false if $ A$ acts on only a single qubit! Because it’s so tedious to write out all of the notation, I decided to write a program to illustrate the counterexample. (The most brazenly dedicated readers will try to prove this false fact and identify where the proof fails.)

import itertools
import math
import numpy

H = (1/(2**0.5)) * numpy.array([[1,1], [1,-1]])
I = numpy.identity(4)
A = numpy.kron(H,I)

Here $ H$ is the 2 by 2 Hadamard matrix, which operates on a single qubit and maps $ e_0 \mapsto \frac{e_0 + e_1}{\sqrt{2}}$, and $ e_1 \mapsto \frac{e_0 - e_1}{\sqrt{2}}$. This matrix is famous for many reasons, but one simple use as a quantum gate is to generate uniform random coin flips. In particular, measuring $ He_0$ outputs 1 and 0 with equal probability.

So in the code sample above, $ A$ is the mapping which applies the Hadamard operation to the first qubit and leaves the other qubits alone.

Then we compute some arbitrary input state vector $ w$

def normalize(z):
   return (1.0 / (sum(abs(z)**2) ** 0.5)) * z

v = numpy.arange(1,9)
w = normalize(v)

And now we write a function to compute the probability of some query conditioned on some fixed bits. We simply sum up the square norms of all of the relevant indices in the state vector.

def condProb(state, query={}, fixed={}):
   # Compute Pr[measured bits match `query` | measured bits match `fixed`]
   # when measuring `state` in the computational basis. Both arguments map
   # a (0-based) qubit index to the desired bit value.
   num = 0
   denom = 0
   dim = int(math.log2(len(state)))

   for x in itertools.product([0,1], repeat=dim):
      if any(x[index] != b for (index,b) in fixed.items()):
         continue

      # convert the bit string x (first qubit = leftmost bit) to its index
      i = sum(d << i for (i,d) in enumerate(reversed(x)))
      denom += abs(state[i])**2
      if all(x[index] == b for (index, b) in query.items()):
         num += abs(state[i]) ** 2

   if num == 0:
      return 0

   return num / denom

So if the query is query = {1:0} and the fixed thing is fixed = {0:0}, then this will compute the probability that the measurement results in the second qubit being zero conditioned on the first qubit also being zero.

And the result:

Aw = A.dot(w)
query = {1:0}
fixed = {0:0}
print((condProb(w, query, fixed), condProb(Aw, query, fixed)))
# (0.16666666666666666, 0.29069767441860467)

So they are not equal in general.

Also, in general we won’t work explicitly with full quantum gate matrices, since for $ n$ qubits they have size $ 2^n \times 2^n$ (that is, $ 2^{2n}$ entries), which is big. But for finding counterexamples to guesses and false intuition, it’s a great tool.

Some important gates on 1-3 qubits

Let’s close this post with concrete examples of quantum gates. Based on the above discussion, we can write out the small (2 x 2 up to 8 x 8) matrix form of the operation and understand that it can apply to any choice of qubits in the state of a quantum program. Gates are most interesting when they’re operating on entangled qubits, and that will come out when we visit our first quantum algorithm next time, but for now we will just discuss at a naive level how they operate on the basis vectors.

Hadamard gate:

We introduced the Hadamard gate already, but I’ll reiterate it here.

Let $ H$ be the following 2 by 2 matrix, which operates on a single qubit and maps $ e_0 \mapsto \frac{e_0 + e_1}{\sqrt{2}}$, and $ e_1 \mapsto \frac{e_0 - e_1}{\sqrt{2}}$.

$ \displaystyle H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$

One can use $ H$ to generate uniform random coin flips. In particular, measuring $ He_0$ outputs 1 and 0 with equal probability.

Quantum NOT gate:

Let $ X$ be the 2 x 2 matrix formed by swapping the columns of the identity matrix.

$ \displaystyle X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

This gate is often called the “Pauli-X” gate by physicists. This matrix is far too simple to be named after a person, and I can only imagine it is still named after a person for the layer of obfuscation that so often makes people feel smarter (same goes for the Pauli-Y and Pauli-Z gates, but we’ll get to those when we need them).

If we’re thinking of $ e_0$ as the boolean value “false” and $ e_1$ as the boolean value “true”, then the quantum NOT gate simply swaps those two states. In particular, note that composing a Hadamard and a quantum NOT gate can have interesting effects: $ XH(e_0) = H(e_0)$, but $ XH(e_1) \neq H(e_1)$. In the second case, the minus sign is the culprit. Which brings us to…
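
Here is a short numpy check of that claim (a quick sketch; $ H$ and $ X$ are the matrices written out above):

import numpy

H = (1/(2**0.5)) * numpy.array([[1, 1], [1, -1]])
X = numpy.array([[0, 1], [1, 0]])
e0, e1 = numpy.array([1, 0]), numpy.array([0, 1])

print(X.dot(H.dot(e0)), H.dot(e0))   # the same vector twice
print(X.dot(H.dot(e1)), H.dot(e1))   # negatives of each other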

Phase shift gate:

Given an angle $ \theta$, we can “shift the phase” of one qubit by an angle of $ \theta$ using the 2 x 2 matrix $ R_{\theta}$.

$ \displaystyle R_{\theta} = \begin{pmatrix} 1 & 0 \\ 0 & e^{i \theta} \end{pmatrix}$

“Phase” is a term physicists like to use for angles. Since the coefficients of a quantum state vector are complex numbers, and since complex numbers can be thought of geometrically as vectors with direction and magnitude, it makes sense to “rotate” the coefficient of a single qubit. So $ R_{\theta}$ does nothing to $ e_0$ and it rotates the coefficient of $ e_1$ by an angle of $ \theta$.

Continuing in our theme of concreteness, if I have the state vector $ v = \frac{1}{\sqrt{204}} (1,2,3,4,5,6,7,8)$ and I apply a rotation of $ \pi$ to the second qubit, then my operation is the matrix $ I_2 \otimes R_{\pi} \otimes I_2$ which maps $ e_{i0k} \mapsto e_{i0k}$ and $ e_{i1k} \mapsto -e_{i1k}$. That would map the state $ v$ to $ \frac{1}{\sqrt{204}}(1,2,-3,-4,5,6,-7,-8)$.

If we instead used the rotation by $ \pi/2$ we would get the output state $ \frac{1}{\sqrt{204}}(1,2,3i, 4i, 5, 6, 7i, 8i)$.
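
As a numpy sanity check (a sketch, with a small helper R that I’m introducing just for this example):

import numpy

def R(theta):
   return numpy.array([[1, 0], [0, numpy.exp(1j * theta)]])

I2 = numpy.identity(2)
v = numpy.arange(1, 9) / (204 ** 0.5)

gate = numpy.kron(numpy.kron(I2, R(numpy.pi)), I2)   # I_2 tensor R_pi tensor I_2
print(gate.dot(v) * (204 ** 0.5))   # approximately [1, 2, -3, -4, 5, 6, -7, -8]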

Quantum AND/OR gate:

In the last post in this series we gave the quantum AND gate and left the quantum OR gate as an exercise. Rather than write out the matrix again, let me remind you of this gate using a description of the effect on the basis $ e_{ijk}$ where $ i,j,k \in \{ 0,1 \}$. Recall that we need three qubits in order to make the operation reversible (which is a consequence of all quantum gates being unitary, and hence invertible, matrices). Some notation: $ \oplus$ is the XOR of two bits, and $ \wedge$ is AND, $ \vee$ is OR. The quantum AND gate maps

$ \displaystyle e_{ijk} \mapsto e_{ij(k \oplus (i \wedge j))}$

In words, the third coordinate is XORed with the AND of the first two coordinates. We think of the third coordinate as a “scratchwork” qubit which is maybe prepared ahead of time to be in state zero.

Similarly, the quantum OR gate maps $ e_{ijk} \mapsto e_{ij(k \oplus (i \vee j))}$. As we saw last time, these combined with the quantum NOT gate (and some modest number of scratchwork qubits) allow quantum circuits to simulate any classical circuit.
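
For concreteness, here is a sketch of how one might build these 8 x 8 gates in numpy from the basis-state descriptions above (my own illustration; the last post wrote the AND matrix out explicitly):

import numpy

def reversible_gate(f):
   # Build the 8x8 matrix for e_{ijk} -> e_{ij(k XOR f(i,j))}.
   matrix = numpy.zeros((8, 8))
   for i in [0, 1]:
      for j in [0, 1]:
         for k in [0, 1]:
            column = 4*i + 2*j + k
            row = 4*i + 2*j + (k ^ f(i, j))
            matrix[row, column] = 1
   return matrix

quantum_and = reversible_gate(lambda i, j: i & j)
quantum_or = reversible_gate(lambda i, j: i | j)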

Controlled-* gate:

The last example in this post is a meta-gate that represents a conditional branching. If we’re given a gate $ A$ acting on $ k$ qubits, then we define the controlled-A to be an operation which acts on $ k+1$ qubits. Let’s call the added qubit “qubit zero.” Then controlled-A does nothing if qubit zero is in state 0, and applies $ A$ if qubit zero is in state 1. Qubit zero is generally called the “control qubit.”

The matrix representing this operation decomposes into blocks if the control qubit is actually the first qubit (or you rearrange).

$ \displaystyle \begin{pmatrix} I_{2^k} & 0 \\ 0 & A \end{pmatrix}$

A common example of this is the controlled-NOT gate, often abbreviated CNOT, and it has the matrix

$ \displaystyle \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
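
In numpy, the block form translates directly (a sketch, assuming the control qubit is the first one):

import numpy

def controlled(A):
   # Block-diagonal: identity when the control qubit is 0, A when it is 1.
   k = A.shape[0]
   gate = numpy.zeros((2 * k, 2 * k), dtype=complex)
   gate[:k, :k] = numpy.identity(k)
   gate[k:, k:] = A
   return gate

X = numpy.array([[0, 1], [1, 0]])
print(controlled(X).real)   # the CNOT matrix above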

Looking forward

Okay let’s take a step back and evaluate our life choices. So far we’ve spent a few hours of our time motivating quantum computing, explaining the details of qubits and quantum circuits, and seeing examples of concrete quantum gates and studying measurement. I’ve hopefully hammered into your head the notion that quantum states which aren’t pure tensors (i.e. entangled) are where the “weirdness” of quantum computing comes from. But we haven’t seen any examples of quantum algorithms yet!

Next time we’ll see our first example of an algorithm that is genuinely quantum. We won’t tackle factoring yet, but we will see quantum “weirdness” in action.

Until then!

Multiple Qubits and the Quantum Circuit

Last time we left off with the tantalizing question: how do you do a quantum “AND” operation on two qubits? In this post we’ll see why the tensor product is the natural mathematical way to represent the joint state of multiple qubits. Then we’ll define some basic quantum gates, and present the definition of a quantum circuit.

Working with Multiple Qubits

In a classical system, if you have two bits with values $ b_1, b_2$, then the “joint state” of the two bits is given by the concatenated string $ b_1b_2$. But if we have two qubits $ v, w$, which are vectors in $ \mathbb{C}^2$, how do we represent their joint state?

There are seemingly infinitely many things we could try, but let’s entertain the simplest idea for the sake of exercising our linear algebra intuition. The simplest idea is to just “concatenate” the vectors as one does in linear algebra: represent the joint system as $ (v, w) \in \mathbb{C}^2 \oplus \mathbb{C}^2$. Recall that the direct sum of two vector spaces is just what you’d want out of “concatenation” of vectors. It treats the two components as completely independent of each other, and there’s an easy way to take any vector in the sum and decompose it into two vectors in the pieces.

Why does this fail to meet our requirements of qubits? Here’s one reason: $ (v, w)$ is not a unit vector when $ v$ and $ w$ are separately unit vectors. Indeed, $ \left \| (v,w) \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2 = 2$. We could normalize everything, and that would work for a while, but we would still run into problems. A better reason is that direct sums screw up measurement. In particular, if you have two qubits (and they’re independent, in a sense we’ll make clear later), you should be able to measure one without affecting the other. But if we use the direct sum method for combining qubits, then measuring one qubit would collapse the other! There are times when we want this to happen, but we don’t always want it to happen. Alas, there should be better reasons out there (besides, “physics says so”) but I haven’t come across them yet.

So the nice mathematical alternative is to make the joint state of two qubits $ v,w$ the tensor product $ v \otimes w$. For a review of the basic properties of tensors and multilinear maps, see our post on the subject. Suffice it for now to remind the reader that the basis of the tensor space $ U \otimes V$ consists of all the tensors of the basis elements of the pieces $ U$ and $ V$: $ u_i \otimes v_j$. As such, the dimension of $ U \otimes V$ is the product of the dimensions $ \text{dim}(U) \text{dim}(V)$.

As a consequence of this and the fact that all $ \mathbb{C}$-vector spaces of the same dimension are the same (isomorphic), the state space of a set of $ n$ qubits can be identified with $ \mathbb{C}^{2^n}$. This is one way to see why quantum computing has the potential to be strictly more powerful than classical computing: $ n$ qubits provide a state space with $ 2^n$ coefficients, each of which is a complex number. With classical probabilistic computing we only get $ n$ “coefficients.” This isn’t a proof that quantum computing is more powerful, but a wink and a nudge that it could be.

While most of the time we’ll just write our states in terms of tensors (using the $ \otimes$ symbol), we could write out the vector representation of $ v \otimes w$ in terms of the vectors $ v = (v_1, v_2), w=(w_1, w_2)$. It’s just $ (v_1w_1, v_1w_2, v_2w_1, v_2w_2)$, with the obvious generalization to vectors of any dimension. This already fixes our earlier problem with norms: the norm of a tensor of two vectors is the product of the two norms. So tensors of unit vectors are unit vectors. Moreover, if you measure the first qubit, that just sets the $ v_1, v_2$ above to zero or one, leaving a joint state that is still a valid unit vector.

Likewise, given two linear maps $ A, B$, we can describe the map $ A \otimes B$ on the tensor space both in terms of pure tensors ($ (A \otimes B)(v \otimes w) = Av \otimes Bw$) and in terms of a matrix. In the same vein as the representation for vectors, the matrix corresponding to $ A \otimes B$ is

$ \displaystyle \begin{pmatrix}
a_{1,1}B & a_{1,2}B & \dots & a_{1,n}B \\
a_{2,1}B & a_{2,2}B & \dots & a_{2,n}B \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1}B & a_{n,2}B & \dots & a_{n,n}B
\end{pmatrix}$

This is called the Kronecker product.
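
A quick numpy illustration of both facts (the coordinates of a tensor of two vectors, and that the Kronecker product really does implement $ A \otimes B$) — just a sketch:

import numpy

v = numpy.array([1, 2])
w = numpy.array([3, 4])
print(numpy.kron(v, w))   # [3 4 6 8] = (v1*w1, v1*w2, v2*w1, v2*w2)

A = numpy.array([[0, 1], [1, 0]])
B = numpy.identity(2)
lhs = numpy.kron(A, B).dot(numpy.kron(v, w))
rhs = numpy.kron(A.dot(v), B.dot(w))
print(numpy.allclose(lhs, rhs))   # True: (A tensor B)(v tensor w) = Av tensor Bw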

One of the strange things about tensor products, which very visibly manifests itself in “strange quantum behavior,” is that not every vector in a tensor space can be represented as a single tensor product of some vectors. Let’s work with an example: $ \mathbb{C}^2 \otimes \mathbb{C}^2$, and denote by $ e_0, e_1$ the computational basis vectors (the same letters are used for each copy of $ \mathbb{C}^2$). Sometimes you’ll get a vector like

$ \displaystyle v = \frac{1}{\sqrt{2}} e_0 \otimes e_0 + \frac{1}{\sqrt{2}} e_1 \otimes e_0$

And if you’re lucky you’ll notice that this can be factored and written as $ \frac{1}{\sqrt{2}}(e_0 + e_1) \otimes e_0$. Other times, though, you’ll get a vector like

$ \displaystyle \frac{1}{\sqrt{2}}(e_0 \otimes e_0 + e_1 \otimes e_1)$

And it’s a deep fact that this cannot be factored into a tensor product of two vectors (prove it as an exercise). If a vector $ v$ in a tensor space can be written as a single tensor product of vectors, we call $ v$ a pure tensor. Otherwise, using some physics lingo, we call the state represented by $ v$ entangled. So if you did the exercise you proved that not all tensors are pure tensors, or equivalently that there exist entangled quantum states. The latter sounds so much more impressive. We’ll see in a future post why these entangled states are so important in quantum computing.

Now we need to explain how to extend gates and qubit measurements to state spaces with multiple qubits. The first is easy: just as we often restrict our classical gates to a few bits (like the AND of two bits), we restrict multi-qubit quantum gates to operate on at most three qubits.

Definition: A quantum gate $ G$ is a unitary map $ \mathbb{C}^{2^n} \to \mathbb{C}^{2^n}$ where $ n$ is at most 3 (recall, $ (\mathbb{C}^2)^{\otimes 3} = \mathbb{C}^{2^3}$ is the state space for 3 qubits).

Now let’s see how to implement AND and OR for two qubits. You might be wondering why we need three qubits in the definition above, and, perhaps surprisingly, we’ll see that AND and OR require us to work with three qubits.

Why? Well, how would one compute an AND of two qubits? Taking a naive approach from how we did the quantum NOT, we would label $ e_0$ as “false” and $ e_1$ as “true,” and we’d want to map $ e_1 \otimes e_1 \mapsto e_1$ and all other possibilities to $ e_0$. The main problem is that this is not an invertible function! Remember, all quantum operations are unitary matrices and all unitary matrices have inverses, so we have to model AND and OR as an invertible operation. We also have a “type error,” since the output is not even in the same vector space as the input, but any way to fix that would still run into the invertibility problem.

The way to deal with this is to add an extra “scratch work” qubit that is used for nothing else except to make the operation invertible. So now say we have three qubits $ a, b, c$, and we want to compute $ a$ AND $ b$ in the sensible way described above. What we do is map

$ \displaystyle a \otimes b \otimes c \mapsto a \otimes b \otimes (c \oplus (a \wedge b))$

Here $ a \wedge b$ is the usual AND (where we interpret, e.g., $ e_1 \wedge e_0 = e_0$), and $ \oplus$ is the exclusive or operation on bits. It’s clear that this mapping makes sense for “bits” (the true/false interpretation of basis vectors) and so we can extend it to a linear map by writing down the matrix.

$ \displaystyle \begin{pmatrix} 1&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0 \\ 0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&1&0 \end{pmatrix}$

where the rows and columns are labeled by the strings $ ijk$ in the order 000, 001, 010, 011, 100, 101, 110, 111.

This gate is often called the Toffoli gate by physicists, but we’ll just call it the (quantum) AND gate. Note that the column $ ijk$ represents the input $ e_i \otimes e_j \otimes e_k$, and the 1 in that column denotes the row whose label is the output. In particular, if we want to do an AND then we’ll ensure the “scratch work” qubit is $ e_0$, so we can ignore half the columns above where the third qubit is 1. The reader should write down the analogous construction for a quantum OR.

From now on, when we’re describing a basis state like $ e_1 \otimes e_0 \otimes e_1$, we’ll denote it as $ e_{101}$, and more generally when $ i$ is a nonnegative integer or a binary string we’ll denote the basis state as $ e_i$. We’re taking advantage of the correspondence between the $ 2^n$ binary strings and the $ 2^n$ basis states, and it compactifies notation.

Once we define a quantum circuit, it will be easy to show that using quantum AND’s, OR’s and NOT’s, we can achieve any computation that a classical circuit can.

We have one more issue we’d like to bring up before we define quantum circuits. We’re being a bit too slick when we say we’re working with “at most three qubits.” If we have ten qubits, potentially all entangled up in a weird way, how can we apply a mapping to only some of those qubits? Indeed, we only defined AND for $ \mathbb{C}^8$, so how can we extend that to an AND of three qubits sitting inside any $ \mathbb{C}^{2^n}$ we please? The answer is to apply the Kronecker product with the identity matrix appropriately. Let’s do a simple example of this to make everything stick.

Say I want to apply the quantum NOT gate to a qubit $ v$, and I have four other qubits $ w_1, w_2, w_3, w_4$ so that they’re all in the joint state $ x = v \otimes w_1 \otimes w_2 \otimes w_3 \otimes w_4$. I form the NOT gate, which I’ll call $ A$, and then I apply the gate $ A \otimes I_{2^4}$ to $ x$ (since there are 4 of the $ w_i$). This will compute the tensor $ Av \otimes I_2 w_1 \otimes I_2 w_2 \otimes I_2 w_3 \otimes I_2 w_4$, as desired.
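
In numpy this is a one-liner with numpy.kron (a sketch; $ X$ is the quantum NOT matrix):

import numpy

X = numpy.array([[0, 1], [1, 0]])
gate = numpy.kron(X, numpy.identity(2 ** 4))   # NOT on the first of five qubits
print(gate.shape)   # (32, 32)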

In particular, you can represent a gate that depends on only 3 qubits by writing down its 8×8 matrix and the three indices it operates on. Note that this requires only $ 64 + 3 = 67$ (possibly complex) numbers to write down, and so it takes “constant space” to represent a single gate.

Quantum Circuits

Here we are at the definition of a quantum circuit.

Definition: A quantum circuit on $ n$ qubits is a list $ G_1, \dots, G_T$ of $ 2^n \times 2^n$ unitary matrices, such that each $ G_i$ depends on at most 3 qubits.

We’ll write down what it means to “compute” something with a quantum circuit, but for now we can imagine drawing it like a usual circuit. We write the input state as some unit vector $ x \in \mathbb{C}^{2^n}$ (which may or may not be a pure tensor), each qubit making up the vector is associated to a “wire,” and at each step we pick three of the wires, send them to the next quantum gate $ G_i$, and use the three output wires for further computations. The final output is the matrix product applied to the input $ G_T \cdots G_1 x$. We imagine that each gate takes only one step to compute (recall, in our first post one “step” was a photon flying through a special material, so it’s not like we have to multiply these matrices by hand).

So now we have to say how a quantum circuit could solve a problem. At all levels of mathematical maturity we should have some idea how a regular circuit solves a problem: there is some distinguished output wire or set of wires containing the answer. For a quantum circuit it’s basically the same, except that at the end of the circuit we get a single quantum state (a tensor in this big vector space), and we just measure that state. Like the case of a single qubit, if the vector has coordinates $ x = (x_1, \dots, x_{2^n})$, they must satisfy $ \sum_i |x_i|^2 = 1$, and the probability of the measurement producing index $ j$ is $ |x_j|^2$. The result of that measurement is an integer (some classical bits) that represent our answer. As a side effect, the vector $ x$ is mutated into the basis state $ e_j$. As we’ve said we may need to repeat a quantum computation over and over to get a good answer with high probability, so we can imagine that a quantum circuit is used as some subroutine in a larger (otherwise classical) algorithm that allows for pre- and post-processing on the quantum part.
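
To make the “compute, then measure” loop concrete, here is a minimal simulator sketch in numpy (my own toy code, with each gate already expanded to a full $ 2^n \times 2^n$ matrix):

import numpy

def run_circuit(gates, x):
   # Apply G_1, ..., G_T in order, then measure in the computational basis.
   state = x
   for G in gates:
      state = G.dot(state)

   probabilities = numpy.abs(state) ** 2
   j = numpy.random.choice(len(state), p=probabilities)
   return j   # the index of the observed basis state e_j

The output integer, written in binary, is the string of measured bits; any pre- and post-processing happens outside this function.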

The final caveat is that we allow one to include as many scratchwork qubits as one needs in their circuit. This makes it possible already to simulate any classical circuit using a quantum circuit. Let’s prove it as a theorem.

Theorem: Given a classical circuit $ C$ with a single output bit, there is a quantum circuit $ D$ that computes the same function.

Proof. Let $ x$ be a binary string input to $ C$, and suppose that $ C$ has $ s$ gates $ g_1, \dots, g_s$, each being either AND, OR, or NOT, and with $ g_s$ being the output gate. To construct $ D$, we can replace every $ g_i$ with their quantum counterparts $ G_i$. Recall that this takes $ e_{b_1b_20} \mapsto e_{b_1b_2(g_i(b_1, b_2))}$. And so we need to add a single scratchwork qubit for each one (really we only need it for the ANDs and ORs, but who cares). This means that our start state is $ e_{x} \otimes e_{0^s} = e_{x0^s}$. Really, we need one of these gates $ G_i$ for each wire going out of the classical gate $ g_i$, but with some extra tricks one can do it with a single quantum gate that uses multiple scratchwork qubits. The crucial thing to note is that the state vector is always a basis vector!

If we call $ z$ the contents of all the scratchwork after the quantum circuit described above runs and $ z_0$ the initial state of the scratchwork, then what we did was extend the function $ x \mapsto C(x)$ to a function $ e_{xz_0} \mapsto e_{xz}$. In particular, one of the bits in the $ z$ part is the output of the last gate of $ C$, and everything is 0-1 valued. So we can measure the state vector, get the string $ xz$ and inspect the bit of $ z$ which corresponds to the output wire of the final gate of the original circuit $ C$. This is your answer.

$ \square$

It should be clear that the single output bit extends to the general case easily. We can split a circuit with lots of output bits into a bunch of circuits with single output bits in the obvious way and combine the quantum versions together.

Next time we’ll finally look at our first quantum algorithms. And along the way we’ll see some more significant quantum operations that make use of the properties that make the quantum world interesting. Until then!

The Quantum Bit

The best place to start our journey through quantum computing is to recall how classical computing works and try to extend it. Since our final quantum computing model will be a circuit model, we should informally discuss circuits first.

A circuit has three parts: the “inputs,” which are bits (either zero or one); the “gates,” which represent the lowest-level computations we perform on bits; and the “wires,” which connect the outputs of gates to the inputs of other gates. Typically the gates have one or two input bits and one output bit, and they correspond to some logical operation like AND, NOT, or XOR.

A simple example of a circuit. The V’s are “OR” and the Λ’s are “AND.” Image source: Ryan O’Donnell

If we want to come up with a different model of computing, we could start with regular circuits and generalize some or all of these pieces. Indeed, in our motivational post we saw a glimpse of a probabilistic model of computation, where instead of the inputs being bits they were probabilities in a probability distribution, and instead of the gates being simple boolean functions they were linear maps that preserved probability distributions (we called such a matrix “stochastic”).

Rather than go through that whole train of thought again let’s just jump into the definitions for the quantum setting. In case you missed last time, our goal is to avoid as much physics as possible and frame everything purely in terms of linear algebra.

Qubits are Unit Vectors

The generalization of a bit is simple: it’s a unit vector in $ \mathbb{C}^2$. That is, our most atomic unit of data is a vector $ (a,b)$ with the constraints that $ a,b$ are complex numbers and $ |a|^2 + |b|^2 = 1$. We call such a vector a qubit.

A qubit can assume “binary” values much like a regular bit, because you could pick two distinguished unit vectors, like $ (1,0)$ and $ (0,1)$, and call one “zero” and the other “one.” Obviously there are many more possible unit vectors, such as $ \frac{1}{\sqrt{2}}(1, 1)$ and $ (-i,0)$. But before we go romping about with what qubits can do, we need to understand how we can extract information from a qubit. The definitions we make here will motivate a lot of the rest of what we do, and is in my opinion one of the major hurdles to becoming comfortable with quantum computing.

A bittersweet fact of life is that bits are comforting. They can be zero or one, you can create them and change them and read them whenever you want without an existential crisis. The same is not true of qubits. This is a large part of what makes quantum computing so weird: you can’t just read the information in a qubit! Before we say why, notice that the coefficients in a qubit are complex numbers, so being able to read them exactly would potentially encode an infinite amount of information (in the infinite binary expansion)! Not only would this be an undesirably powerful property of a circuit, but physicists’ experiments tell us it’s not possible either.

So as we’ll see when we get to some algorithms, the main difficulty in getting useful quantum algorithms is not necessarily figuring out how to compute what you want to compute, it’s figuring out how to tease useful information out of the qubits that otherwise directly contain what you want. And the reason it’s so hard is that when you read a qubit, most of the information in the qubit is destroyed. And what you get to see is only a small piece of the information available. Here is the simplest example of that phenomenon, which is called the measurement in the computational basis.

Definition: Let $ v = (a,b) \in \mathbb{C}^2$ be a qubit. Call the standard basis vectors $ e_0 = (1,0), e_1 = (0,1)$ the computational basis of $ \mathbb{C}^2$. The process of measuring $ v$ in the computational basis consists of two parts.

  1. You observe (get as output) a random choice of $ e_0$ or $ e_1$. The probability of getting $ e_0$ is $ |a|^2$, and the probability of getting $ e_1$ is $ |b|^2$.
  2. As a side effect, the qubit $ v$ instantaneously becomes whatever state was observed in 1. This is often called a collapse of the waveform by physicists.

There are more sophisticated ways to measure, and more sophisticated ways to express the process of measurement, but we’ll cover those when we need them. For now this is it.
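
In code, measuring a single qubit in the computational basis might look like the following sketch (a toy simulator; real hardware never lets you peek at $ a$ and $ b$ directly):

import numpy

def measure(qubit):
   # qubit is a length-2 complex vector (a, b) with |a|^2 + |b|^2 = 1
   probability_of_e0 = abs(qubit[0]) ** 2
   if numpy.random.random() < probability_of_e0:
      return numpy.array([1, 0])   # observed e_0; the qubit collapses to e_0
   else:
      return numpy.array([0, 1])   # observed e_1; the qubit collapses to e_1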

Why is this so painful? Because if you wanted to try to estimate the probabilities $ |a|^2$ or $ |b|^2$, not only would you get an estimate at best, but you’d have to repeat whatever computation prepared $ v$ for measurement over and over again until you get an estimate you’re satisfied with. In fact, we’ll see situations like this, where we actually have a perfect representation of the data we need to solve our problem, but we just can’t get at it because the measurement process destroys it once we measure.

Before we can talk about those algorithms we need to see how we’re allowed to manipulate qubits. As we said before, we use unitary matrices to preserve unit vectors, so let’s recall those and make everything more precise.

Qubit Mappings are Unitary Matrices

Suppose $ v = (a,b) \in \mathbb{C}^2$ is a qubit. If we are to have any mapping between vector spaces, it had better be a linear map, and the linear maps that send unit vectors to unit vectors are called unitary matrices. An equivalent definition that seems a bit stronger is:

Definition: A linear map $ \mathbb{C}^2 \to \mathbb{C}^2$ is called unitary if it preserves the inner product on $ \mathbb{C}^2$.

Let’s remember the inner product on $ \mathbb{C}^n$ is defined by $ \left \langle v,w \right \rangle = \sum_{i=1}^n v_i \overline{w_i}$ and has some useful properties.

  • The square norm of a vector is $ \left \| v \right \|^2 = \left \langle v,v \right \rangle$.
  • Swapping the coordinates of the complex inner product conjugates the result: $ \left \langle v,w \right \rangle = \overline{\left \langle w,v \right \rangle}$
  • The complex inner product is a linear map if you fix the second coordinate, and a conjugate-linear map if you fix the first. That is, $ \left \langle au+v, w \right \rangle = a \left \langle u, w \right \rangle + \left \langle v, w \right \rangle$ and $ \left \langle u, aw + v \right \rangle = \overline{a} \left \langle u, w \right \rangle + \left \langle u,v \right \rangle$

By the first bullet, it makes sense to require unitary matrices to preserve the inner product instead of just the norm, though the two are equivalent (see the derivation on page 2 of these notes). We can obviously generalize unitary matrices to any complex vector space, and unitary matrices have some nice properties. In particular, if $ U$ is a unitary matrix then the important property is that the columns (and rows) of $ U$ form an orthonormal basis. As an immediate result, if we take the product $ \overline{U}^\text{T}U$, which is just the matrix of all possible inner products of columns of $ U$, we get the identity matrix. This means that unitary matrices are invertible and their inverse is $ \overline{U}^\text{T}$.

Already we have one interesting philosophical tidbit. Any unitary transformation of a qubit is reversible because all unitary matrices are invertible. Apparently the only non-reversible thing we’ve seen so far is measurement.

Recall that $ \overline{U}^\text{T}$ is the conjugate transpose of the matrix, which I’ll often write as $ U^*$. Note that there is a way to define $ U^*$ without appealing to matrices: it is a notion called the adjoint, which is that linear map $ U^*$ such that $ \left \langle Uv, w \right \rangle = \left \langle v, U^*w \right \rangle$ for all $ v,w$. Also recall that “unitary matrix” for complex vector spaces means precisely the same thing as “orthogonal matrix” does for real numbers. The only difference is the inner product being used (indeed, if the complex matrix happens to have real entries, then orthogonal matrix and unitary matrix mean the same thing).

Definition: A single qubit gate is a unitary matrix $ \mathbb{C}^2 \to \mathbb{C}^2$.

So enough with the properties and definitions, let’s see some examples. For all of these examples we’ll fix the basis to the computational basis $ e_0, e_1$. One very important, but still very simple example of a single qubit gate is the Hadamard gate. This is the unitary map given by the matrix

$ \displaystyle \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}$

It’s so important because if you apply it to a basis vector, say, $ e_0 = (1,0)$, you get a uniform linear combination $ \frac{1}{\sqrt{2}}(e_0 + e_1)$. One simple use of this is to allow for unbiased coin flips, and as readers of this blog know unbiased coins can efficiently simulate biased coins. But it has many other uses we’ll touch on as they come.
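
A short numpy sanity check of both claims, that $ H$ is unitary and that it sends $ e_0$ to the uniform combination (just a sketch):

import numpy

H = (1 / 2 ** 0.5) * numpy.array([[1, 1], [1, -1]])
print(numpy.allclose(H.conj().T.dot(H), numpy.identity(2)))   # True: H is unitary
print(H.dot(numpy.array([1, 0])))   # [0.707... 0.707...]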

Just to give another example, the quantum NOT gate, often called a Pauli X gate, is the following matrix

$ \displaystyle \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}$

It’s called this because, if we consider $ e_0$ to be the “zero” bit and $ e_1$ to be “one,” then this mapping swaps the two. In general, it takes $ (a,b)$ to $ (b,a)$.

As the reader can probably imagine by the suggestive comparison with classical operations, quantum circuits can do everything that classical circuits can do. We’ll save the proof for a future post, but if we want to do some kind of “quantum AND” operation, we get an obvious question. How do you perform an operation that involves multiple qubits? The short answer is: you represent a collection of bits by their tensor product, and apply a unitary matrix to that tensor.

We’ll go into more detail on this next time, and in the mean time we suggest checking out this blog’s primer on the tensor product. Until then!

A Motivation for Quantum Computing

Quantum mechanics is one of the leading scientific theories describing the rules that govern the universe. Its discovery and formulation was one of the most important revolutions in the history of mankind, contributing in no small part to the invention of the transistor and the laser.

Here at Math ∩ Programming we don’t put too much emphasis on physics or engineering, so it might seem curious to study quantum physics. But as the reader is likely aware, quantum mechanics forms the basis of one of the most interesting models of computing since the Turing machine: the quantum circuit. My goal with this series is to elucidate the algorithmic insights in quantum algorithms, and explain the mathematical formalisms while minimizing the amount of “interpreting” and “debating” and “experimenting” that dominates so much of the discourse by physicists.

Indeed, the more I learn about quantum computing the more it’s become clear that the shroud of mystery surrounding quantum topics has a lot to do with their presentation. The people teaching quantum (writing the textbooks, giving the lectures, writing the Wikipedia pages) are almost all purely physicists, and they almost unanimously follow the same path of teaching it.

Scott Aaronson (one of the few people who explains quantum in a way I understand) describes the situation superbly.

There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis that these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.

The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.

Indeed, the sequence of experiments and debate has historical value. But the mathematics needed to have a basic understanding of quantum mechanics is quite simple, and it is often blurred by physicists in favor of discussing interpretations. To start thinking about quantum mechanics you only need a healthy dose of linear algebra, and most of it we’ve covered in the three linear algebra primers on this blog. More importantly for computing-minded folks, one only needs a basic understanding of quantum mechanics to understand quantum computing.

The position I want to assume on this blog is that we don’t care about whether quantum mechanics is an accurate description of the real world. The real world gave an invaluable inspiration, but at the end of the day the mathematics stands on its own merits. The really interesting question to me is how the quantum computing model compares to classical computing. Most people believe it is strictly stronger in terms of efficiency. And so the murky depths of the quantum swamp must be hiding some fascinating algorithmic ideas. I want to understand those ideas, and explain them up to my own standards of mathematical rigor and lucidity.

So let’s begin this process with a discussion of an experiment that motivates most of the ideas we’ll need for quantum computing. Hopefully this will be the last experiment we discuss.

Shooting Photons and The Question of Randomness

Does the world around us have inherent randomness in it? This is a deep question open to a lot of philosophical debate, but what evidence do we have that there is randomness?

Here’s the experiment. You set up a contraption that shoots photons in a straight line, aimed at what’s called a “beam splitter.” A beam splitter seems to have the property that when photons are shot at it, they will either be reflected at a 90 degree angle or stay in a straight line, each with probability 1/2. Indeed, if you put little photon receptors at the end of each possible route (straight or up, as below) to measure the number of photons that end at each receptor, you’ll find that on average half of the photons went up and half went straight.

[Image: the beam splitter experiment]

The triangle is the photon shooter, and the camera-looking things are receptors.

If you accept that the photon shooter is sufficiently good and the beam splitter is not tricking us somehow, then this is evidence that the universe has some inherent randomness in it! Moreover, the probability that a photon goes up or straight seems to be independent of what other photons do, so this is evidence that whatever randomness we’re seeing follows the classical laws of probability. Now let’s augment the experiment as follows. First, put two beam splitters on the corners of a square, and mirrors at the other two corners, as below.

The thicker black lines are mirrors which always reflect the photons.

This is where things get really weird. If you assume that the beam splitter splits photons randomly (as in, according to an independent coin flip), then after the first beam splitter half go up and half go straight, and the same thing would happen after the second beam splitter. So the two receptors should measure half the total number of photons on average.

But that’s not what happens. Rather, all the photons go to the top receptor! Somehow the “probability” that the photon goes straight or up at the first beam splitter is connected to the probability that it goes straight or up at the second. This seems to be a counterexample to the claim that the universe behaves on the principles of independent probability. Obviously there is some deeper mystery at work.


Complex Probabilities

One interesting explanation is that the beam splitter modifies something intrinsic to the photon, something that carries with it until the next beam splitter. You can imagine the photon is carrying information as it shambles along, but regardless of the interpretation it can’t follow the laws of classical probability.

The simplest classical probability explanation would go something like this:

There are two states, RIGHT and UP, and we model the state of a photon by a probability distribution $ (p, q)$ such that the photon has a probability $ p$ of being in state RIGHT and a probability $ q$ of being in state UP, and like any probability distribution $ p + q = 1$. A photon hence starts in state $ (1,0)$, and the process of traveling through the beam splitter is the random choice to switch states. This is modeled by multiplication by a particular so-called stochastic matrix (which just means the rows sum to 1)

$ \displaystyle A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$

Of course, we chose this matrix because when we apply it to $ (1,0)$ and $ (0,1)$ we get $ (1/2, 1/2)$ for both outcomes. By doing the algebra, applying it twice to $ (1,0)$ will give the state $ (1/2, 1/2)$, and so the chance of ending up in the top receptor is the same as for the right receptor.

But as we already know this isn’t what happens in real life, so something is amiss. Here is an alternative explanation that gives a nice preview of quantum mechanics.

The idea is that, rather than have the state of the traveling photon be a probability distribution over RIGHT and UP, we have it be a unit vector in a vector space (over $ \mathbb{C}$). That is, now RIGHT and UP are the (basis) unit vectors $ e_1 = (1,0), e_2 = (0,1)$, respectively, and a state $ x$ is a linear combination $ c_1 e_1 + c_2 e_2$, where we require $ \left \| x \right \|^2 = |c_1|^2 + |c_2|^2 = 1$. And now the “probability” that the photon is in the RIGHT state is the square of the coefficient for that basis vector $ p_{\text{right}} = |c_1|^2$. Likewise, the probability of being in the UP state is $ p_{\text{up}} = |c_2|^2$.

This might seem like an innocuous modification — even a pointless one! — but changing the sum (or 1-norm) to the Euclidean sum-of-squares (or the 2-norm) is at the heart of why quantum mechanics is so different. Now rather than have stochastic matrices for state transitions, which are defined the way they are because they preserve probability distributions, we use unitary matrices, which are those complex-valued matrices that preserve the 2-norm. In both cases, we want “valid states” to be transformed into “valid states,” but we just change precisely what we mean by a state, and pick the transformations that preserve that.

In fact, as we’ll see later in this series using complex numbers is totally unnecessary. Everything that can be done with complex numbers can be done without them (up to a good enough approximation for computing), but using complex numbers just happens to make things more elegant mathematically. It’s the kind of situation where there are more and better theorems in linear algebra about complex-valued matrices than real valued matrices.

But back to our experiment. Now we can hypothesize that the beam splitter corresponds to the following transformation of states:

$ \displaystyle A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$

We’ll talk a lot more about unitary matrices later, so for now the reader can rest assured that this is one. And then how does it transform the initial state $ x =(1,0)$?

$ \displaystyle y = Ax = \frac{1}{\sqrt{2}}(1, i)$

So at this stage the probability of being in the RIGHT state is $ 1/2 = (1/\sqrt{2})^2$ and the probability of being in state UP is also $ 1/2 = |i/\sqrt{2}|^2$. So far it matches the first experiment. Applying $ A$ again,

$ \displaystyle Ay = A^2x = \frac{1}{2}(0, 2i) = (0, i)$

And the photon is in state UP with probability 1. Stunning. This time Science is impressed by mathematics.
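
If you want to verify the arithmetic yourself, here is a quick numpy sketch comparing the stochastic model to the unitary one:

import numpy

stochastic = numpy.array([[0.5, 0.5], [0.5, 0.5]])
unitary = (1 / 2 ** 0.5) * numpy.array([[1, 1j], [1j, 1]])
x = numpy.array([1, 0])

print(stochastic.dot(stochastic).dot(x))             # [0.5 0.5]: half in each receptor
print(numpy.abs(unitary.dot(unitary).dot(x)) ** 2)   # [0. 1.]: every photon ends up UP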

Next time we’ll continue this train of thought by generalizing the situation to the appropriate mathematical setting. Then we’ll dive into the quantum circuit model, and start churning out some algorithms.

Until then!

[Edit: Actually, if you make the model complicated enough, then you can achieve the result using classical probability. The experiment I described above, while it does give evidence that something more complicated is going on, does not fully rule out classical probability. Mathematically, you can lay out the axioms of quantum mechanics (as we will from the perspective of computing), and this forces non-classical probability. But to the best of my knowledge there is no experiment or set of experiments that gives decisive proof that all of the axioms are necessary. In my search for such an experiment I asked this question on stackexchange and didn’t understand any of the answers well enough to paraphrase them here. Moreover, if you leave out the axiom that quantum circuit operations are reversible, you can do everything with classical probability. I read this somewhere but now I can’t find the source 🙁

One consequence is that I am more firmly entrenched in my view that I only care about quantum mechanics in how it produced quantum computing as a new paradigm in computer science. This paradigm doesn’t need physics at all, and apparently the motivations for the models are still unclear, so we just won’t discuss them any more. Sorry, physics lovers.]