# Multiple Qubits and the Quantum Circuit

Last time we left off with the tantalizing question: how do you do a quantum “AND” operation on two qubits? In this post we’ll see why the tensor product is the natural mathematical way to represent the joint state of multiple qubits. Then we’ll define some basic quantum gates, and present the definition of a quantum circuit.

## Working with Multiple Qubits

In a classical system, if you have two bits with values $b_1, b_2$, then the “joint state” of the two bits is given by the concatenated string $b_1b_2$. But if we have two qubits $v, w$, which are vectors in $\mathbb{C}^2$, how do we represent their joint state?

There are seemingly infinitely many things we could try, but let’s entertain the simplest idea for the sake of exercising our linear algebra intuition. The simplest idea is to just “concatenate” the vectors as one does in linear algebra: represent the joint system as $(v, w) \in \mathbb{C}^2 \oplus \mathbb{C}^2$. Recall that the direct sum of two vector spaces is just what you’d want out of “concatenation” of vectors. It treats the two components as completely independent of each other, and there’s an easy way to take any vector in the sum and decompose it into two vectors in the pieces.

Why does this fail to meet our requirements of qubits? Here’s one reason: $(v, w)$ is not a unit vector when $v$ and $w$ are separately unit vectors. Indeed, $\left \| (v,w) \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2 = 2$. We could normalize everything, and that would work for a while, but we would still run into problems. A better reason is that direct sums screw up measurement. In particular, if you have two qubits (and they’re independent, in a sense we’ll make clear later), you should be able to measure one without affecting the other. But if we use the direct sum method for combining qubits, then measuring one qubit would collapse the other! There are times when we want this to happen, but we don’t always want it to happen. Alas, there should be better reasons out there (besides, “physics says so”) but I haven’t come across them yet.

So the nice mathematical alternative is to make the joint state of two qubits $v,w$ the tensor product $v \otimes w$. For a review of the basic properties of tensors and multilinear maps, see our post on the subject. Suffice it for now to remind the reader that the basis of the tensor space $U \otimes V$ consists of all the tensors of the basis elements of the pieces $U$ and $V$: $u_i \otimes v_j$. As such, the dimension of $U \otimes V$ is the product of the dimensions $\text{dim}(U) \text{dim}(V)$.

As a consequence of this and the fact that all $\mathbb{C}$-vector spaces of the same dimension are the same (isomorphic), the state space of a set of $n$ qubits can be identified with $\mathbb{C}^{2^n}$. This is one way to see why quantum computing has the potential to be strictly more powerful than classical computing: $n$ qubits provide a state space with $2^n$ coefficients, each of which is a complex number. With classical probabilistic computing we only get $n$ “coefficients.” This isn’t a proof that quantum computing is more powerful, but a wink and a nudge that it could be.

While most of the time we’ll just write our states in terms of tensors (using the $\otimes$ symbol), we could write out the vector representation of $v \otimes w$ in terms of the vectors $v = (v_1, v_2), w=(w_1, w_2)$. It’s just $(v_1w_1, v_1w_2, v_2w_1, v_2w_2)$, with the obvious generalization to vectors of any dimension. This already fixes our earlier problem with norms: the norm of a tensor of two vectors is the product of the two norms. So tensors of unit vectors are unit vectors. Moreover, if you measure the first qubit, that just collapses the first factor to $e_0$ or $e_1$ (setting one of $v_1, v_2$ to zero and renormalizing the other), leaving a joint state that is still a valid unit vector.
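
If you’d like to see this concretely, here’s a quick sketch in Python (just for illustration) of the coordinate formula and the norm property:

```python
import math

def kron_vec(v, w):
    """Tensor (Kronecker) product of two vectors as a flat list:
    (v1w1, v1w2, ..., v2w1, v2w2, ...)."""
    return [vi * wj for vi in v for wj in w]

def norm(v):
    return math.sqrt(sum(abs(x) ** 2 for x in v))

v = [3 / 5, 4 / 5]                         # a unit vector in C^2
w = [1 / math.sqrt(2), 1j / math.sqrt(2)]  # another unit vector

vw = kron_vec(v, w)  # the coordinates (v1w1, v1w2, v2w1, v2w2)
# The norm of a tensor is the product of the norms, so vw is
# (up to floating point error) again a unit vector.
print(norm(vw))
```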

Likewise, given two linear maps $A, B$, we can describe the map $A \otimes B$ on the tensor space both in terms of pure tensors ($(A \otimes B)(v \otimes w) = Av \otimes Bw$) and in terms of a matrix. In the same vein as the representation for vectors, the matrix corresponding to $A \otimes B$ is

$\displaystyle \begin{pmatrix} a_{1,1}B & a_{1,2}B & \dots & a_{1,n}B \\ a_{2,1}B & a_{2,2}B & \dots & a_{2,n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1}B & a_{n,2}B & \dots & a_{n,n}B \end{pmatrix}$

This is called the Kronecker product.
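
Here’s a sketch of the Kronecker product in code (Python, with matrices as plain lists of rows), along with a spot check of the defining property $(A \otimes B)(v \otimes w) = Av \otimes Bw$ on one example:

```python
import math

def kron(A, B):
    """Kronecker product: block (i,j) of the result is A[i][j] * B."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def kron_vec(v, w):
    return [x * y for x in v for y in w]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

s = 1 / math.sqrt(2)
X = [[0, 1], [1, 0]]    # the quantum NOT gate
H = [[s, s], [s, -s]]   # the Hadamard gate

v, w = [1, 0], [0, 1]
lhs = matvec(kron(X, H), kron_vec(v, w))   # (X ⊗ H)(v ⊗ w)
rhs = kron_vec(matvec(X, v), matvec(H, w)) # Xv ⊗ Hw
```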

One of the strange things about tensor products, which very visibly manifests itself in “strange quantum behavior,” is that not every vector in a tensor space can be represented as a single tensor product of some vectors. Let’s work with an example: $\mathbb{C}^2 \otimes \mathbb{C}^2$, and denote by $e_0, e_1$ the computational basis vectors (the same letters are used for each copy of $\mathbb{C}^2$). Sometimes you’ll get a vector like

$\displaystyle v = \frac{1}{\sqrt{2}} e_0 \otimes e_0 + \frac{1}{\sqrt{2}} e_1 \otimes e_0$

And if you’re lucky you’ll notice that this can be factored and written as $\frac{1}{\sqrt{2}}(e_0 + e_1) \otimes e_0$. Other times, though, you’ll get a vector like

$\displaystyle \frac{1}{\sqrt{2}}(e_0 \otimes e_0 + e_1 \otimes e_1)$

And it’s a deep fact that this cannot be factored into a tensor product of two vectors (prove it as an exercise). If a vector $v$ in a tensor space can be written as a single tensor product of vectors, we call $v$ a pure tensor. Otherwise, using some physics lingo, we call the state represented by $v$ entangled. So if you did the exercise you proved that not all tensors are pure tensors, or equivalently that there exist entangled quantum states. The latter sounds so much more impressive. We’ll see in a future post why these entangled states are so important in quantum computing.
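
For two qubits there’s a handy computational test for purity that you might discover while doing the exercise: arrange the four coefficients into a 2×2 matrix, and the state is a pure tensor exactly when that matrix has rank 1, i.e. zero determinant. A sketch (under the convention that the coefficients are listed in the order $e_{00}, e_{01}, e_{10}, e_{11}$):

```python
import math

def is_pure_tensor(c, tol=1e-12):
    """A 2-qubit state (c00, c01, c10, c11) factors as v ⊗ w exactly
    when the matrix [[c00, c01], [c10, c11]] has rank 1, i.e. its
    determinant vanishes."""
    c00, c01, c10, c11 = c
    return abs(c00 * c11 - c01 * c10) < tol

s = 1 / math.sqrt(2)
factorable = [s, 0, s, 0]  # (1/sqrt2)(e0 + e1) ⊗ e0
entangled  = [s, 0, 0, s]  # (1/sqrt2)(e00 + e11), not factorable

print(is_pure_tensor(factorable))  # True
print(is_pure_tensor(entangled))   # False
```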

Now we need to explain how to extend gates and qubit measurements to state spaces with multiple qubits. The first is easy: just as we often restrict our classical gates to a few bits (like the AND of two bits), we restrict multi-qubit quantum gates to operate on at most three qubits.

Definition: A quantum gate $G$ is a unitary map $\mathbb{C}^{2^n} \to \mathbb{C}^{2^n}$, where $n$ is at most 3 (recall, $(\mathbb{C}^2)^{\otimes 3} = \mathbb{C}^{2^3}$ is the state space for 3 qubits).

Now let’s see how to implement AND and OR for two qubits. You might be wondering why we need three qubits in the definition above, and, perhaps surprisingly, we’ll see that AND and OR require us to work with three qubits.

So how would one compute an AND of two qubits? Taking a naive approach, as we did with the quantum NOT, we would label $e_0$ as “false” and $e_1$ as “true,” and we’d want to map $e_1 \otimes e_1 \mapsto e_1$ and all other possibilities to $e_0$. The main problem is that this is not an invertible function! Remember, all quantum operations are unitary matrices and all unitary matrices have inverses, so we have to model AND and OR as invertible operations. We also have a “type error,” since the output is not even in the same vector space as the input, but any way to fix that would still run into the invertibility problem.

The way to deal with this is to add an extra “scratch work” qubit that is used for nothing else except to make the operation invertible. So now say we have three qubits $a, b, c$, and we want to compute $a$ AND $b$ in the sensible way described above. What we do is map

$\displaystyle a \otimes b \otimes c \mapsto a \otimes b \otimes (c \oplus (a \wedge b))$

Here $a \wedge b$ is the usual AND (where we interpret, e.g., $e_1 \wedge e_0 = e_0$), and $\oplus$ is the exclusive or operation on bits. It’s clear that this mapping makes sense for “bits” (the true/false interpretation of basis vectors) and so we can extend it to a linear map by writing down the matrix.

Written as a matrix with the basis states ordered $e_{000}, e_{001}, \dots, e_{111}$, the map is

$\displaystyle \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}$

This gate is often called the Toffoli gate by physicists, but we’ll just call it the (quantum) AND gate. Note that the column $ijk$ represents the input $e_i \otimes e_j \otimes e_k$, and the 1 in that column denotes the row whose label is the output. In particular, if we want to do an AND then we’ll ensure the “scratch work” qubit is $e_0$, so we can ignore half the columns above where the third qubit is 1. The reader should write down the analogous construction for a quantum OR.
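
To make the construction concrete, here’s a sketch in Python that builds the Toffoli matrix directly from the rule $a \otimes b \otimes c \mapsto a \otimes b \otimes (c \oplus (a \wedge b))$ and checks that it’s a self-inverse permutation matrix (and hence unitary):

```python
def toffoli():
    """The 8x8 Toffoli matrix: column abc (read as a binary number)
    has its single 1 in the row labeled a, b, c XOR (a AND b)."""
    T = [[0] * 8 for _ in range(8)]
    for col in range(8):
        a, b, c = (col >> 2) & 1, (col >> 1) & 1, col & 1
        row = (a << 2) | (b << 1) | (c ^ (a & b))
        T[row][col] = 1
    return T

T = toffoli()
# With the scratch qubit zeroed, the third output bit is a AND b:
# the input e_110 (column 6) maps to e_111 (row 7).
assert T[7][6] == 1
# The gate swaps e_110 and e_111 and fixes everything else, so it is
# its own inverse; in particular T^2 is the identity.
T2 = [[sum(T[i][k] * T[k][j] for k in range(8)) for j in range(8)]
      for i in range(8)]
assert T2 == [[int(i == j) for j in range(8)] for i in range(8)]
```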

From now on, when we’re describing a basis state like $e_1 \otimes e_0 \otimes e_1$, we’ll denote it as $e_{101}$. We’re taking advantage of the correspondence between the $2^n$ binary strings and the $2^n$ basis states, and it compactifies notation.

Once we define a quantum circuit, it will be easy to show that using quantum AND’s, OR’s and NOT’s, we can achieve any computation that a classical circuit can.

We have one more issue we’d like to bring up before we define quantum circuits. We’re being a bit too slick when we say we’re working with “at most three qubits.” If we have ten qubits, potentially all entangled up in a weird way, how can we apply a mapping to only some of those qubits? Indeed, we only defined AND for $\mathbb{C}^8$, so how can we extend that to an AND of three qubits sitting inside any $\mathbb{C}^{2^n}$ we please? The answer is to apply the Kronecker product with the identity matrix appropriately. Let’s do a simple example of this to make everything stick.

Say I want to apply the quantum NOT gate to a qubit $v$, and I have four other qubits $w_1, w_2, w_3, w_4$ so that they’re all in the joint state $x = v \otimes w_1 \otimes w_2 \otimes w_3 \otimes w_4$. I form the NOT gate, which I’ll call $A$, and then I apply the gate $A \otimes I_{2^4}$ to $x$ (since there are 4 of the $w_i$). This will compute the tensor $Av \otimes I_2 w_1 \otimes I_2 w_2 \otimes I_2 w_3 \otimes I_2 w_4$, as desired.
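
The same computation in code (a sketch, with just one extra qubit so the vectors stay small):

```python
import math

def kron(A, B):
    """Kronecker product of matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

X = [[0, 1], [1, 0]]  # the quantum NOT gate

# To apply NOT to the first of two qubits, use X ⊗ I_2.
gate = kron(X, identity(2))
state = [1, 0, 0, 0]        # e_0 ⊗ e_0, i.e. e_00
print(matvec(gate, state))  # e_1 ⊗ e_0, i.e. [0, 0, 1, 0]
```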

In particular, you can represent a gate that depends on only 3 qubits by writing down its $2^3 \times 2^3$ matrix and the three indices it operates on. Note that this requires only $64$ (possibly complex) numbers plus the three indices to write down, and so it takes “constant space” to represent a single gate.

## Quantum Circuits

Here we are at the definition of a quantum circuit.

Definition: A quantum circuit is a list $G_1, \dots, G_T$ of $2^n \times 2^n$ unitary matrices (where $n$ is the number of qubits), such that each $G_i$ depends on at most 3 qubits.

We’ll write down what it means to “compute” something with a quantum circuit, but for now we can imagine drawing it like a usual circuit. We write the input state as some unit vector $x \in \mathbb{C}^{2^n}$ (which may or may not be a pure tensor), each qubit making up the vector is associated to a “wire,” and at each step we pick at most three of the wires, send them to the next quantum gate $G_i$, and use the output wires for further computations. The final output is the matrix product applied to the input, $G_T \dots G_1x$. We imagine that each gate takes only one step to compute (recall, in our first post one “step” was a photon flying through a special material, so it’s not like we have to multiply these matrices by hand).
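
In code, “computing” with a circuit is just iterated matrix-vector multiplication. A minimal sketch:

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def run_circuit(gates, x):
    """Apply the gates in order; the output is G_T ... G_1 x."""
    for G in gates:
        x = matvec(G, x)
    return x

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # the Hadamard gate

# H is its own inverse, so the two-gate circuit [H, H] returns
# the input (up to floating point error).
out = run_circuit([H, H], [1, 0])
print(out)  # ≈ [1, 0]
```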

So now we have to say how a quantum circuit could solve a problem. At all levels of mathematical maturity we should have some idea how a regular circuit solves a problem: there is some distinguished output wire or set of wires containing the answer. For a quantum circuit it’s basically the same, except that at the end of the circuit we get a quantum state, and we pick some distinguished wires (qubits) to measure. The result of that measurement is some classical bits containing our answer. As we’ve said we may need to repeat a quantum computation over and over to get a good answer with high probability, so we can imagine that a quantum circuit is used as some subroutine in a larger (otherwise classical) algorithm that allows for pre- and post-processing on the quantum part.

The final caveat is that we allow one to include as many scratchwork qubits as one needs in their circuit. This makes it possible already to simulate any classical circuit using a quantum circuit. Let’s prove it as a theorem.

Theorem: Given a classical circuit $C$ with a single output bit, there is a quantum circuit $D$ that computes the same function.

Proof. Let $x$ be a binary string input to $C$, and suppose that $C$ has $s$ gates $g_1, \dots, g_s$, each being either AND, OR, or NOT, and with $g_s$ being the output gate. To construct $D$, we can replace every $g_i$ with their quantum counterparts $G_i$. Recall that this takes $e_{b_1b_20} \mapsto e_{b_1b_2(g_i(b_1, b_2))}$. And so we need to add a single scratchwork qubit for each one (really we only need it for the ANDs and ORs, but who cares). This means that our start state is $e_{x} \otimes e_{0^s} = e_{x0^s}$. Really, we need one of these gates $G_i$ for each wire going out of the classical gate $g_i$, but with some extra tricks one can do it with a single quantum gate that uses multiple scratchwork qubits.

If we call $z$ the contents of all the scratchwork after the quantum circuit described above runs and $z_0$ the initial state of the scratchwork, then what we did was extend the function $x \mapsto C(x)$ to a function $e_{x,z_0} \mapsto e_{x, z}$. In particular, one of the bits in the $z$ part is the output of the last gate of $C$, and everything is 0-1 valued. So we can just measure the output qubit and win.

$\square$

It should be clear that the single output bit extends to the general case easily. We can split a circuit with lots of output bits into a bunch of circuits with single output bits in the obvious way and combine the quantum versions together.

Next time we’ll finally look at our first quantum algorithms. And along the way we’ll see some more significant quantum operations that make use of the properties that make the quantum world interesting. Until then!

# The Quantum Bit

The best place to start our journey through quantum computing is to recall how classical computing works and try to extend it. Since our final quantum computing model will be a circuit model, we should informally discuss circuits first.

A circuit has three parts: the “inputs,” which are bits (either zero or one); the “gates,” which represent the lowest-level computations we perform on bits; and the “wires,” which connect the outputs of gates to the inputs of other gates. Typically the gates have one or two input bits and one output bit, and they correspond to some logical operation like AND, NOT, or XOR.

A simple example of a circuit. The V’s are “OR” and the Λ’s are “AND.” Image source: Ryan O’Donnell

If we want to come up with a different model of computing, we could start with regular circuits and generalize some or all of these pieces. Indeed, in our motivational post we saw a glimpse of a probabilistic model of computation, where instead of the inputs being bits they were probabilities in a probability distribution, and instead of the gates being simple boolean functions they were linear maps that preserved probability distributions (we called such a matrix “stochastic”).

Rather than go through that whole train of thought again let’s just jump into the definitions for the quantum setting. In case you missed last time, our goal is to avoid as much physics as possible and frame everything purely in terms of linear algebra.

## Qubits are Unit Vectors

The generalization of a bit is simple: it’s a unit vector in $\mathbb{C}^2$. That is, our most atomic unit of data is a vector $(a,b)$ with the constraints that $a,b$ are complex numbers and $|a|^2 + |b|^2 = 1$. We call such a vector a qubit.

A qubit can assume “binary” values much like a regular bit, because you could pick two distinguished unit vectors, like $(1,0)$ and $(0,1)$, and call one “zero” and the other “one.” Obviously there are many more possible unit vectors, such as $\frac{1}{\sqrt{2}}(1, 1)$ and $(-i,0)$. But before we go romping about with what qubits can do, we need to understand how we can extract information from a qubit. The definitions we make here will motivate a lot of the rest of what we do, and in my opinion this is one of the major hurdles to becoming comfortable with quantum computing.

A bittersweet fact of life is that bits are comforting. They can be zero or one, you can create them and change them and read them whenever you want without an existential crisis. The same is not true of qubits. This is a large part of what makes quantum computing so weird: you can’t just read the information in a qubit! Before we say why, notice that the coefficients in a qubit are complex numbers, so being able to read them exactly would potentially encode an infinite amount of information (in the infinite binary expansion)! Not only would this be an undesirably powerful property of a circuit, but physicists’ experiments tell us it’s not possible either.

So as we’ll see when we get to some algorithms, the main difficulty in getting useful quantum algorithms is not necessarily figuring out how to compute what you want to compute, it’s figuring out how to tease useful information out of the qubits that otherwise directly contain what you want. And the reason it’s so hard is that when you read a qubit, most of the information in the qubit is destroyed. And what you get to see is only a small piece of the information available. Here is the simplest example of that phenomenon, which is called the measurement in the computational basis.

Definition: Let $v = (a,b) \in \mathbb{C}^2$ be a qubit. Call the standard basis vectors $e_0 = (1,0), e_1 = (0,1)$ the computational basis of $\mathbb{C}^2$. The process of measuring $v$ in the computational basis consists of two parts.

1. You observe (get as output) a random choice of $e_0$ or $e_1$. The probability of getting $e_0$ is $|a|^2$, and the probability of getting $e_1$ is $|b|^2$.
2. As a side effect, the qubit $v$ instantaneously becomes whatever state was observed in 1. This is often called the collapse of the wavefunction by physicists.
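
A quick simulation sketch of this rule, returning both the observed bit and the collapsed state:

```python
import math
import random

def measure(qubit):
    """Measure (a, b) in the computational basis: observe e_0 with
    probability |a|^2, else e_1; the qubit collapses to the outcome."""
    a, b = qubit
    if random.random() < abs(a) ** 2:
        return 0, (1, 0)  # observed e_0; post-measurement state is e_0
    return 1, (0, 1)      # observed e_1; post-measurement state is e_1

s = 1 / math.sqrt(2)
outcomes = [measure((s, s))[0] for _ in range(10000)]
# All we can ever recover is an estimate of |b|^2, here about 0.5.
print(sum(outcomes) / len(outcomes))
```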

There are more sophisticated ways to measure, and more sophisticated ways to express the process of measurement, but we’ll cover those when we need them. For now this is it.

Why is this so painful? Because if you wanted to try to estimate the probabilities $|a|^2$ or $|b|^2$, not only would you get an estimate at best, but you’d have to repeat whatever computation prepared $v$ for measurement over and over again until you get an estimate you’re satisfied with. In fact, we’ll see situations like this, where we actually have a perfect representation of the data we need to solve our problem, but we just can’t get at it because the measurement process destroys it once we measure.

Before we can talk about those algorithms we need to see how we’re allowed to manipulate qubits. As we said before, we use unitary matrices to preserve unit vectors, so let’s recall those and make everything more precise.

## Qubit Mappings are Unitary Matrices

Suppose $v = (a,b) \in \mathbb{C}^2$ is a qubit. If we are to have any mapping between vector spaces, it had better be a linear map, and the linear maps that send unit vectors to unit vectors are called unitary matrices. An equivalent definition that seems a bit stronger is:

Definition: A linear map $\mathbb{C}^2 \to \mathbb{C}^2$ is called unitary if it preserves the inner product on $\mathbb{C}^2$.

Let’s remember the inner product on $\mathbb{C}^n$ is defined by $\left \langle v,w \right \rangle = \sum_{i=1}^n v_i \overline{w_i}$ and has some useful properties.

• The square norm of a vector is $\left \| v \right \|^2 = \left \langle v,v \right \rangle$.
• Swapping the coordinates of the complex inner product conjugates the result: $\left \langle v,w \right \rangle = \overline{\left \langle w,v \right \rangle}$
• The complex inner product is a linear map if you fix the second coordinate, and a conjugate-linear map if you fix the first. That is, $\left \langle au+v, w \right \rangle = a \left \langle u, w \right \rangle + \left \langle v, w \right \rangle$ and $\left \langle u, aw + v \right \rangle = \overline{a} \left \langle u, w \right \rangle + \left \langle u,v \right \rangle$

By the first bullet, it makes sense to require unitary matrices to preserve the inner product instead of just the norm, though the two are equivalent (see the derivation on page 2 of these notes). We can obviously generalize unitary matrices to any complex vector space, and unitary matrices have some nice properties. Chief among them: if $U$ is a unitary matrix, then the columns (and rows) of $U$ form an orthonormal basis. As an immediate result, if we take the product $U\overline{U}^\text{T}$, whose entries are just all the possible inner products of the rows of $U$, we get the identity matrix. This means that unitary matrices are invertible and their inverse is $\overline{U}^\text{T}$.
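
As a sanity check, here’s a sketch verifying $U\overline{U}^\text{T} = I$ numerically for one particular unitary matrix (the example matrix is an assumption, chosen just for illustration):

```python
import math

def conj_transpose(U):
    return [[U[j][i].conjugate() for j in range(len(U))]
            for i in range(len(U[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1 / math.sqrt(2)
U = [[s, s * 1j], [s * 1j, s]]  # a unitary matrix with complex entries

P = matmul(U, conj_transpose(U))
print(P)  # ≈ the identity matrix: U's inverse is its conjugate transpose
```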

Already we have one interesting philosophical tidbit. Any unitary transformation of a qubit is reversible because all unitary matrices are invertible. Apparently the only non-reversible thing we’ve seen so far is measurement.

Recall that $\overline{U}^\text{T}$ is the conjugate transpose of the matrix, which I’ll often write as $U^*$. Note that there is a way to define $U^*$ without appealing to matrices: it is a notion called the adjoint, which is that linear map $U^*$ such that $\left \langle Uv, w \right \rangle = \left \langle v, U^*w \right \rangle$ for all $v,w$. Also recall that “unitary matrix” for complex vector spaces means precisely the same thing as “orthogonal matrix” does for real numbers. The only difference is the inner product being used (indeed, if the complex matrix happens to have real entries, then orthogonal matrix and unitary matrix mean the same thing).

Definition: A single qubit gate is a unitary matrix $\mathbb{C}^2 \to \mathbb{C}^2$.

So enough with the properties and definitions, let’s see some examples. For all of these examples we’ll fix the basis to the computational basis $e_0, e_1$. One very important, but still very simple example of a single qubit gate is the Hadamard gate. This is the unitary map given by the matrix

$\displaystyle \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$

It’s so important because if you apply it to a basis vector, say, $e_0 = (1,0)$, you get a uniform linear combination $\frac{1}{\sqrt{2}}(e_0 + e_1)$. One simple use of this is to allow for unbiased coin flips, and as readers of this blog know unbiased coins can efficiently simulate biased coins. But it has many other uses we’ll touch on as they come.

Just to give another example, the quantum NOT gate, often called a Pauli X gate, is the following matrix

$\displaystyle \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

It’s called this because, if we consider $e_0$ to be the “zero” bit and $e_1$ to be “one,” then this mapping swaps the two. In general, it takes $(a,b)$ to $(b,a)$.
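
Both gates are small enough to play with directly. A sketch:

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # the Hadamard gate
X = [[0, 1], [1, 0]]   # the quantum NOT (Pauli X) gate

e0 = [1, 0]
print(matvec(H, e0))          # (1/sqrt2)(e_0 + e_1), a fair coin flip
print(matvec(X, [0.6, 0.8]))  # swaps the coordinates: [0.8, 0.6]
```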

As the reader can probably imagine by the suggestive comparison with classical operations, quantum circuits can do everything that classical circuits can do. We’ll save the proof for a future post, but if we want to do some kind of “quantum AND” operation, we get an obvious question. How do you perform an operation that involves multiple qubits? The short answer is: you represent a collection of bits by their tensor product, and apply a unitary matrix to that tensor.

We’ll go into more detail on this next time, and in the meantime we suggest checking out this blog’s primer on the tensor product. Until then!

# A Motivation for Quantum Computing

Quantum mechanics is one of the leading scientific theories describing the rules that govern the universe. Its discovery and formulation was one of the most important revolutions in the history of mankind, contributing in no small part to the invention of the transistor and the laser.

Here at Math ∩ Programming we don’t put too much emphasis on physics or engineering, so it might seem curious to study quantum physics. But as the reader is likely aware, quantum mechanics forms the basis of one of the most interesting models of computing since the Turing machine: the quantum circuit. My goal with this series is to elucidate the algorithmic insights in quantum algorithms, and explain the mathematical formalisms while minimizing the amount of “interpreting” and “debating” and “experimenting” that dominates so much of the discourse by physicists.

Indeed, the more I learn about quantum computing the more it’s become clear that the shroud of mystery surrounding quantum topics has a lot to do with their presentation. The people teaching quantum (writing the textbooks, giving the lectures, writing the Wikipedia pages) are almost all purely physicists, and they almost unanimously follow the same path of teaching it.

Scott Aaronson (one of the few people who explains quantum in a way I understand) describes the situation superbly.

> There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis that these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
>
> The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.

Indeed, the sequence of experiments and debate has historical value. But the mathematics needed to have a basic understanding of quantum mechanics is quite simple, and it is often blurred by physicists in favor of discussing interpretations. To start thinking about quantum mechanics you only need a healthy dose of linear algebra, and most of it we’ve covered in the three linear algebra primers on this blog. More importantly for computing-minded folks, one only needs a basic understanding of quantum mechanics to understand quantum computing.

The position I want to assume on this blog is that we don’t care about whether quantum mechanics is an accurate description of the real world. The real world gave an invaluable inspiration, but at the end of the day the mathematics stands on its own merits. The really interesting question to me is how the quantum computing model compares to classical computing. Most people believe it is strictly stronger in terms of efficiency. And so the murky depths of the quantum swamp must be hiding some fascinating algorithmic ideas. I want to understand those ideas, and explain them up to my own standards of mathematical rigor and lucidity.

So let’s begin this process with a discussion of an experiment that motivates most of the ideas we’ll need for quantum computing. Hopefully this will be the last experiment we discuss.

## Shooting Photons and The Question of Randomness

Does the world around us have inherent randomness in it? This is a deep question open to a lot of philosophical debate, but what evidence do we have that there is randomness?

Here’s the experiment. You set up a contraption that shoots photons in a straight line, aimed at what’s called a “beam splitter.” A beam splitter seems to have the property that when photons are shot at it, they will either be reflected at a 90 degree angle or stay in a straight line, each with probability 1/2. Indeed, if you put little photon receptors at the end of each possible route (straight or up, as below) to measure the number of photons that end at each receptor, you’ll find that on average half of the photons went up and half went straight.

The triangle is the photon shooter, and the camera-looking things are receptors.

If you accept that the photon shooter is sufficiently good and the beam splitter is not tricking us somehow, then this is evidence that the universe has some inherent randomness in it! Moreover, the probability that a photon goes up or straight seems to be independent of what other photons do, so this is evidence that whatever randomness we’re seeing follows the classical laws of probability. Now let’s augment the experiment as follows. First, put two beam splitters on the corners of a square, and mirrors at the other two corners, as below.

The thicker black lines are mirrors which always reflect the photons.

This is where things get really weird. If you assume that the beam splitter splits photons randomly (as in, according to an independent coin flip), then after the first beam splitter half go up and half go straight, and the same thing would happen after the second beam splitter. So the two receptors should measure half the total number of photons on average.

But that’s not what happens. Rather, all the photons go to the top receptor! Somehow the “probability” that the photon goes straight or up at the first beam splitter is connected to the probability that it goes straight or up at the second. This seems to be a counterexample to the claim that the universe behaves on the principles of independent probability. Obviously there is some deeper mystery at work.

## Complex Probabilities

One interesting explanation is that the beam splitter modifies something intrinsic to the photon, something that carries with it until the next beam splitter. You can imagine the photon is carrying information as it shambles along, but regardless of the interpretation it can’t follow the laws of classical probability. The classical probability explanation would go something like this:

There are two states, RIGHT and UP, and we model the state of a photon by a probability distribution $(p, q)$ such that the photon has a probability $p$ of being in state RIGHT and a probability $q$ of being in state UP, and like any probability distribution $p + q = 1$. A photon hence starts in state $(1,0)$, and the process of traveling through the beam splitter is the random choice to switch states. This is modeled by multiplication by a particular so-called stochastic matrix (one whose columns sum to 1, so that probability distributions map to probability distributions)

$\displaystyle A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$

Of course, we chose this matrix because when we apply it to $(1,0)$ and $(0,1)$ we get $(1/2, 1/2)$ for both outcomes. By doing the algebra, applying it twice to $(1,0)$ will give the state $(1/2, 1/2)$, and so the chance of ending up in the top receptor is the same as for the right receptor.
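
You can check the classical prediction in two lines of arithmetic. A sketch:

```python
# The classical (stochastic) model of the beam splitter.
A = [[0.5, 0.5], [0.5, 0.5]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

once = matvec(A, [1, 0])   # after one splitter
twice = matvec(A, once)    # after two splitters
print(once, twice)         # [0.5, 0.5] both times: no way to get (0, 1)
```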

But as we already know this isn’t what happens in real life, so something is amiss. Here is an alternative explanation that gives a nice preview of quantum mechanics.

The idea is that, rather than have the state of the traveling photon be a probability distribution over RIGHT and UP, we have it be a unit vector in a vector space (over $\mathbb{C}$). That is, now RIGHT and UP are the (basis) unit vectors $e_1 = (1,0), e_2 = (0,1)$, respectively, and a state $x$ is a linear combination $c_1 e_1 + c_2 e_2$, where we require $\left \| x \right \|^2 = |c_1|^2 + |c_2|^2 = 1$. And now the “probability” that the photon is in the RIGHT state is the square of the coefficient for that basis vector $p_{\text{right}} = |c_1|^2$. Likewise, the probability of being in the UP state is $p_{\text{up}} = |c_2|^2$.

This might seem like an innocuous modification — even a pointless one! — but changing the sum (or 1-norm) to the Euclidean sum-of-squares (or the 2-norm) is at the heart of why quantum mechanics is so different. Now rather than have stochastic matrices for state transitions, which are defined the way they are because they preserve probability distributions, we use unitary matrices, which are those complex-valued matrices that preserve the 2-norm. In both cases, we want “valid states” to be transformed into “valid states,” but we just change precisely what we mean by a state, and pick the transformations that preserve that.

In fact, as we’ll see later in this series, using complex numbers is totally unnecessary. Everything that can be done with complex numbers can be done without them (up to a good enough approximation for computing), but using complex numbers just happens to make things more elegant mathematically. It’s the kind of situation where there are more and better theorems in linear algebra about complex-valued matrices than real-valued matrices.

But back to our experiment. Now we can hypothesize that the beam splitter corresponds to the following transformation of states:

$\displaystyle A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$

We’ll talk a lot more about unitary matrices later, so for now the reader can rest assured that this is one. And then how does it transform the initial state $x =(1,0)$?

$\displaystyle y = Ax = \frac{1}{\sqrt{2}}(1, i)$

So at this stage the probability of being in the RIGHT state is $1/2 = (1/\sqrt{2})^2$ and the probability of being in state UP is also $1/2 = |i/\sqrt{2}|^2$. So far it matches the first experiment. Applying $A$ again,

$\displaystyle Ay = A^2x = \frac{1}{2}(0, 2i) = (0, i)$

And the photon is in state UP with probability 1. Stunning. This time Science is impressed by mathematics.
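As a sanity check, here is a small Python sketch (my own illustration, not part of the original experiment) that verifies the matrix above is unitary and replays both models of the experiment:

```python
import math

def matvec(M, v):
    # multiply a 2x2 matrix by a length-2 vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Classical model: a stochastic matrix acting on a probability distribution.
S = [[0.5, 0.5], [0.5, 0.5]]
p = matvec(S, matvec(S, [1.0, 0.0]))
print(p)  # [0.5, 0.5]: predicts a 50/50 split, contradicting the experiment

# Quantum model: a unitary matrix acting on a vector of amplitudes.
s = 1 / math.sqrt(2)
A = [[s, s * 1j], [s * 1j, s]]

# check unitarity: A times its conjugate transpose is the identity
Adag = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]
prod = [[sum(A[i][k] * Adag[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert abs(prod[0][0] - 1) < 1e-12 and abs(prod[0][1]) < 1e-12

y = matvec(A, matvec(A, [1 + 0j, 0j]))
print([abs(c) ** 2 for c in y])  # probabilities ~[0.0, 1.0]: the photon always ends UP
```

The amplitudes interfere destructively in the RIGHT component and constructively in the UP component, which is exactly the cancellation the probability model cannot express.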

Next time we’ll continue this train of thought by generalizing the situation to the appropriate mathematical setting. Then we’ll dive into the quantum circuit model, and start churning out some algorithms.

Until then!

# Linear Programming and the Simplex Algorithm

In the last post in this series we saw some simple examples of linear programs, derived the concept of a dual linear program, and saw the duality theorem and the complementary slackness conditions, which give a rough sketch of the stopping criterion for an algorithm. This time we’ll go ahead and write this algorithm for solving linear programs, and next time we’ll apply the algorithm to an industry-strength version of the nutrition problem we saw last time. The algorithm we’ll implement is called the simplex algorithm. It was the first algorithm for solving linear programs, invented in the 1940s by George Dantzig, and it’s still a leading practical algorithm; the theory of linear programming was a key part of a Nobel Prize. It’s one of the most important algorithms ever devised.

As usual, we’ll post all of the code written in the making of this post on this blog’s Github page.

## Slack variables and equality constraints

The simplex algorithm can solve any kind of linear program, but it only accepts a special form of the program as input. So first we have to do some manipulations. Recall that the primal form of a linear program was the following minimization problem.

$\min \left \langle c, x \right \rangle \\ \textup{s.t. } Ax \geq b, x \geq 0$

where the brackets mean “dot product.” And its dual is

$\max \left \langle y, b \right \rangle \\ \textup{s.t. } A^Ty \leq c, y \geq 0$

The linear program can actually have more complicated constraints than just the ones above. In general, one might want to have “greater than” and “less than” constraints in the same problem. It turns out that this isn’t any harder. Moreover, the simplex algorithm only uses equality constraints, and with some finicky algebra we can turn any mix of inequality and equality constraints into a set of equality constraints.

We’ll call our goal the “standard form,” which is as follows:

$\max \left \langle c, x \right \rangle \\ \textup{s.t. } Ax = b, x \geq 0$

It seems impossible to get the usual minimization/maximization problem into standard form until you realize there’s nothing stopping you from adding more variables to the problem. That is, say we’re given a constraint like:

$\displaystyle x_7 + x_3 \leq 10,$

we can add a new variable $\xi$, called a slack variable, so that we get an equality:

$\displaystyle x_7 + x_3 + \xi = 10$

And now we can just impose that $\xi \geq 0$. The idea is that $\xi$ represents how much “slack” there is in the inequality, and you can always choose it to make the condition an equality. So if the equality holds and the variables are nonnegative, then the $x_i$ will still satisfy their original inequality. For “greater than” constraints, we can do the same thing but subtract a nonnegative variable. Finally, if we have a minimization problem “$\min z$” we can convert it to $\max -z$.

So, to combine all of this together, if we have the following linear program with each kind of constraint,

We can add new variables $\xi_1, \xi_2$, and write it as

By defining the vector variable $x = (x_1, x_2, x_3, \xi_1, \xi_2)$ and $c = (-1,-1,-1,0,0)$ and $A$ to have entries $-1, 0, 1$ as appropriate for the new variables, we see that the system is written in standard form.

This is the kind of tedious transformation we can automate with a program. Assuming there are $n$ variables, the input consists of the vector $c$ of length $n$, and three matrix-vector pairs $(A, b)$ representing the three kinds of constraints. It’s a bit annoying to describe, but the essential idea is that we compute a rectangular “identity” matrix whose diagonal entries are $\pm 1$, and then join this with the original constraint matrix row-wise. The reader can see the full implementation in the Github repository for this post, though we won’t use this particular functionality in the algorithm that follows.
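To give the flavor, here is a hypothetical sketch of such a conversion (the name `standardForm` and its signature are my own inventions, and this version handles only the two inequality types; the repository’s version covers more cases):

```python
def standardForm(cost, lessThans=None, lessThanBs=None,
                 greaterThans=None, greaterThanBs=None, maximization=True):
    # Hypothetical sketch: convert "less than" and "greater than" constraints
    # to the standard form: max <c, x> subject to Ax = b, x >= 0.
    lessThans = lessThans or []
    greaterThans = greaterThans or []
    numSlack = len(lessThans) + len(greaterThans)

    A, b = [], []
    slackIndex = 0
    # <= rows get a +1 slack entry, >= rows get a -1 (a "surplus" variable)
    for rows, rhs, sign in [(lessThans, lessThanBs or [], 1),
                            (greaterThans, greaterThanBs or [], -1)]:
        for row, bi in zip(rows, rhs):
            slack = [0] * numSlack
            slack[slackIndex] = sign
            slackIndex += 1
            A.append(list(row) + slack)
            b.append(bi)

    # a minimization of z is a maximization of -z
    sign = 1 if maximization else -1
    c = [sign * ci for ci in cost] + [0] * numSlack
    return c, A, b

# max 3x1 + 2x2 subject to x1 + 2x2 <= 4 and x1 - x2 <= 1
c, A, b = standardForm([3, 2], lessThans=[[1, 2], [1, -1]], lessThanBs=[4, 1])
print(c)  # [3, 2, 0, 0]
print(A)  # [[1, 2, 1, 0], [1, -1, 0, 1]]
```

The joined matrix is exactly the original constraints with the $\pm 1$ “identity” columns appended, as described above.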

There are some other additional things we could do: for example there might be some variables that are completely unrestricted. What you do in this case is take an unrestricted variable $z$ and replace it by the difference of two nonnegative variables, $z = z' - z''$ with $z', z'' \geq 0$. For simplicity we’ll ignore this, but it would be a fruitful exercise for the reader to augment the function to account for these.

## What happened to the slackness conditions?

The “standard form” of our linear program raises an obvious question: how can the complementary slackness conditions make sense if everything is an equality? It turns out that one can redo all the work one did for linear programs of the form we gave last time (minimize w.r.t. greater-than constraints) for programs in the new “standard form” above. We even get the same complementary slackness conditions! If you want to, you can do this entire routine quite a bit faster if you invoke the power of Lagrangians. We won’t do that here, but the tool shows up as a way to work with primal-dual conversions in many other parts of mathematics, so it’s a good buzzword to keep in mind.

In our case, the only difference with the complementary slackness conditions is that one of the two is trivial: $\left \langle y^*, Ax^* - b \right \rangle = 0$. This is because if our candidate solution $x^*$ is feasible, then it will have to satisfy $Ax^* = b$ already. The other one, that $\left \langle x^*, A^Ty^* - c \right \rangle = 0$, is the only one we need to worry about.

Again, the complementary slackness conditions give us inspiration here. Recall that, informally, they say that when a variable is used at all, it is used as much as it can be to fulfill its constraint (the corresponding dual constraint is tight). So a solution will correspond to a choice of some variables which are either used or not, and a choice of nonzero variables will correspond to a solution. We even saw this happen in the last post when we observed that broccoli trumps oranges. If we can get a good handle on how to navigate the set of these solutions, then we’ll have a nifty algorithm.

Let’s make this official and lay out our assumptions.

## Extreme points and basic solutions

Remember that the graphical way to solve a linear program is to look at the line (or hyperplane) given by $\langle c, x \rangle = q$ and keep increasing $q$ (or decreasing it, if you are minimizing) until the very last moment when this line touches the region of feasible solutions. Also recall that the “feasible region” is just the set of all solutions to $Ax = b$, that is the solutions that satisfy the constraints. We imagined this picture:

The constraints define a convex area of “feasible solutions.” Image source: Wikipedia.

With this geometric intuition it’s clear that there will always be an optimal solution on a vertex of the feasible region. These points are called extreme points of the feasible region. But because we will almost never work in the plane again (even introducing slack variables makes us relatively high dimensional!) we want an algebraic characterization of these extreme points.

If you have a little bit of practice with convex sets the correct definition is very natural. Recall that a set $X$ is convex if for any two points $x, y \in X$ every point on the line segment between $x$ and $y$ is also in $X$. An algebraic way to say this (thinking of these points now as vectors) is that every point $\delta x + (1-\delta) y \in X$ when $0 \leq \delta \leq 1$. Now an extreme point is just a point that isn’t on the inside of any such line, i.e. can’t be written this way for $0 < \delta < 1$. For example,

A convex set with extremal points in red. Image credit Wikipedia.

Another way to say this is that if $z$ is an extreme point then whenever $z$ can be written as $\delta x + (1-\delta) y$ for some $0 < \delta < 1$, then actually $x=y=z$. Now since our constraints are all linear (and there are a finite number of them) they won’t define a convex set with weird curves like the one above. This means that there are a finite number of extreme points that just correspond to the intersections of some of the constraints. So there are at most $2^n$ possibilities.

Indeed we want a characterization of extreme points that’s specific to linear programs in standard form, “$\max \langle c, x \rangle \textup{ s.t. } Ax=b, x \geq 0$.” And here is one.

Definition: Let $A$ be an $m \times n$ matrix with $n \geq m$. A solution $x$ to $Ax=b$ is called basic if at most $m$ of its entries are nonzero.

The reason we call it “basic” is because, under some mild assumptions we describe below, a basic solution corresponds to a vector space basis of $\mathbb{R}^m$. Which basis? The one given by the $m$ columns of $A$ used in the basic solution. We don’t need to talk about bases like this, though, so in the event of a headache just think of the basis as a set $B \subset \{ 1, 2, \dots, n \}$ of size $m$ corresponding to the nonzero entries of the basic solution.

Indeed, what we’re doing here is looking at the matrix $A_B$ formed by taking the columns of $A$ whose indices are in $B$, and the vector $x_B$ in the same way, and looking at the equation $A_Bx_B = b$. If all the parts of $x$ that we removed were zero then this will hold if and only if $Ax=b$. One might worry that $A_B$ is not invertible, so we’ll go ahead and assume it is. In fact, we’ll assume that every set of $m$ columns of $A$ forms a basis and that the rows of $A$ are also linearly independent. The latter is essentially without loss of generality, because if some rows are linearly dependent, we can remove the offending constraints without changing the set of solutions (this is why it’s so nice to work with the standard form).

Moreover, we’ll assume that every basic solution has exactly $m$ nonzero variables. A basic solution which doesn’t satisfy this assumption is called degenerate, and they’ll essentially be special corner cases in the simplex algorithm. Finally, we call a basic solution feasible if (in addition to satisfying $Ax=b$) it satisfies $x \geq 0$. Now that we’ve made all these assumptions it’s easy to see that choosing $m$ nonzero variables uniquely determines a basic feasible solution. Again calling the sub-matrix $A_B$ for a basis $B$, it’s just $x_B = A_B^{-1}b$. Now to finish our characterization, we just have to show that under the same assumptions basic feasible solutions are exactly the extremal points of the feasible region.

Proposition: A vector $x$ is a basic feasible solution if and only if it’s an extreme point of the set $\{ x : Ax = b, x \geq 0 \}$.

Proof. For one direction, suppose you have a basic feasible solution $x$, and say we write it as $x = \delta y + (1-\delta) z$ for some $0 < \delta < 1$. We want to show that this implies $y = z$. Since all of these points are in the feasible region, all of their coordinates are nonnegative. So whenever a coordinate $x_i = 0$ it must be that both $y_i = z_i = 0$. Since $x$ has exactly $n-m$ zero entries, it must be that $y, z$ both have at least $n-m$ zero entries, and hence $y,z$ are both basic. By our non-degeneracy assumption they both then have exactly $m$ nonzero entries. Let $B$ be the set of the nonzero indices of $x$. Because $Ay = Az = b$, we have $A(y-z) = 0$. Now $y-z$ has all of its nonzero entries in $B$, and because the columns of $A_B$ are linearly independent, the fact that $A_B(y-z) = 0$ implies $y-z = 0$.

In the other direction, suppose that you have some extreme point $x$ which is feasible but not basic. In other words, there are more than $m$ nonzero entries of $x$, and we’ll call the indices $J = \{ j_1, \dots, j_t \}$ where $t > m$. The columns of $A_J$ are linearly dependent (since they’re $t$ vectors in $\mathbb{R}^m$), and so let $\sum_{i=1}^t z_{j_i} A_{j_i} = 0$ be a nontrivial linear combination of these columns summing to zero. Add zeros to make the $z_{j_i}$ into a length $n$ vector $z$, so that $Az = 0$. Now

$A(x + \varepsilon z) = A(x - \varepsilon z) = Ax = b$

And if we pick $\varepsilon$ sufficiently small $x \pm \varepsilon z$ will still be nonnegative, because the only entries we’re changing of $x$ are the strictly positive ones. Then $x = \delta (x + \varepsilon z) + (1 - \delta)(x - \varepsilon z)$ for $\delta = 1/2$, but this is very embarrassing for $x$ who was supposed to be an extreme point. $\square$

Now that we know extreme points are the same as basic feasible solutions, we need to show that any linear program that has some solution has a basic feasible solution. This is clear geometrically: any time you have an optimum it has to either lie on a line or at a vertex, and if it lies on a line then you can slide it to a vertex without changing its value. Nevertheless, it is a useful exercise to go through the algebra.

Theorem. Whenever a linear program is feasible and bounded, it has an optimal basic feasible solution.

Proof. Let $x$ be an optimal solution to the LP. If $x$ has at most $m$ nonzero entries then it’s a basic solution and by the non-degeneracy assumption it must have exactly $m$ nonzero entries. In this case there’s nothing to do, so suppose that $x$ has $r > m$ nonzero entries. It can’t be a basic feasible solution, and hence is not an extreme point of the set of feasible solutions (by the proposition above). So write it as $x = \delta y + (1-\delta) z$ for some feasible $y \neq z$ and $0 < \delta < 1$.

The only thing we know about $x$ is it’s optimal. Let $c$ be the cost vector, and the optimality says that $\langle c,x \rangle \geq \langle c,y \rangle$, and $\langle c,x \rangle \geq \langle c,z \rangle$. We claim that in fact these are equal, that $y, z$ are both optimal as well. Indeed, say $y$ were not optimal, then

$\displaystyle \langle c, y \rangle < \langle c,x \rangle = \delta \langle c,y \rangle + (1-\delta) \langle c,z \rangle$

Which can be rearranged to show that $\langle c,y \rangle < \langle c, z \rangle$. Unfortunately for $x$, this implies that it was not optimal all along:

$\displaystyle \langle c,x \rangle < \delta \langle c, z \rangle + (1-\delta) \langle c,z \rangle = \langle c,z \rangle$

An identical argument works to show $z$ is optimal, too. Now we claim we can use $y,z$ to get a new solution that has fewer than $r$ nonzero entries. Once we show this we’re done: inductively repeat the argument with the smaller solution until we get down to exactly $m$ nonzero variables. As before we know that $y,z$ must have at least as many zeros as $x$. If they have more zeros we’re done. And if they have exactly as many zeros we can do the following trick. Write $w = \gamma y + (1- \gamma)z$ for a $\gamma \in \mathbb{R}$ we’ll choose later. Note that no matter the $\gamma$, $w$ is optimal. Rewriting $w = z + \gamma (y-z)$, we just have to pick a $\gamma$ that ensures one of the nonzero entries of $z$ is zeroed out while maintaining nonnegativity. Indeed, we can just look at the index $i$ which minimizes $z_i / (y-z)_i$ and use $\gamma = - z_i / (y-z)_i$. $\square$.

So we have an immediate (and inefficient) combinatorial algorithm: enumerate all subsets of size $m$, compute the corresponding basic feasible solution $x_B = A_B^{-1}b$, and see which gives the biggest objective value. The problem is that, even if we knew the value of $m$, this would take time $n^m$, and it’s not uncommon for $m$ to be in the tens or hundreds (and if we don’t know $m$ the trivial search is exponential).
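Just to make the inefficient algorithm concrete, here is a sketch of it in Python (the names are mine, and `solve` is a bare-bones Gaussian elimination standing in for a real linear solver):

```python
from itertools import combinations

def solve(M, rhs):
    # Gaussian elimination with partial pivoting for an m x m system
    n = len(M)
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        pivotRow = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivotRow][col]) < 1e-12:
            return None  # singular: these columns don't form a basis
        aug[col], aug[pivotRow] = aug[pivotRow], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

def bruteForceLP(c, A, b):
    # enumerate all bases B, compute x_B = A_B^{-1} b, keep the best feasible one
    m, n = len(A), len(A[0])
    best, bestX = None, None
    for B in combinations(range(n), m):
        xB = solve([[row[j] for j in B] for row in A], b)
        if xB is None or any(xi < -1e-9 for xi in xB):
            continue  # not a basis, or not feasible
        x = [0.0] * n
        for j, xj in zip(B, xB):
            x[j] = xj
        value = sum(cj * xj for cj, xj in zip(c, x))
        if best is None or value > best:
            best, bestX = value, x
    return bestX, best

# a small standard-form example: max 3x1 + 2x2 with two slack variables added
x, value = bruteForceLP([3, 2, 0, 0], [[1, 2, 1, 0], [1, -1, 0, 1]], [4, 1])
print(x, value)  # [2.0, 1.0, 0.0, 0.0] 8.0
```

Running it on a program with four variables and two constraints checks all $\binom{4}{2} = 6$ bases, but the count explodes combinatorially as $n$ and $m$ grow, which is exactly the problem the simplex algorithm addresses.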

So we have to be smarter, and this is where the simplex tableau comes in.

## The simplex tableau

Now say you have any basis $B$ and any feasible solution $x$. For now $x$ might not be a basic solution, and even if it is, its basis of nonzero entries might not be the same as $B$. We can decompose the equation $Ax = b$ into the basis part and the non basis part:

$A_Bx_B + A_{B'} x_{B'} = b$

and solving the equation for $x_B$ gives

$x_B = A^{-1}_B(b - A_{B'} x_{B'})$

It may look like we’re making a wicked abuse of notation here, but both $A_Bx_B$ and $A_{B'}x_{B'}$ are vectors of length $m$ so the dimensions actually do work out. Now our feasible solution $x$ has to satisfy $Ax = b$, and the entries of $x$ are all nonnegative, so it must be that $x_B \geq 0$ and $x_{B'} \geq 0$, and by the equality above $A^{-1}_B (b - A_{B'}x_{B'}) \geq 0$ as well. Now let’s write the maximization objective $\langle c, x \rangle$ by expanding it first in terms of the $x_B, x_{B'}$, and then expanding $x_B$.

$\displaystyle \begin{aligned} \langle c, x \rangle & = \langle c_B, x_B \rangle + \langle c_{B'}, x_{B'} \rangle \\ & = \langle c_B, A^{-1}_B(b - A_{B'}x_{B'}) \rangle + \langle c_{B'}, x_{B'} \rangle \\ & = \langle c_B, A^{-1}_Bb \rangle + \langle c_{B'} - (A^{-1}_B A_{B'})^T c_B, x_{B'} \rangle \end{aligned}$

If we want to maximize the objective, we can just maximize this last line. There are two cases. In the first, the vector $c_{B'} - (A^{-1}_B A_{B'})^T c_B \leq 0$ and $A_B^{-1}b \geq 0$. In the above equation, this tells us that making any component of $x_{B'}$ bigger cannot increase the overall objective. In other words, $\langle c, x \rangle \leq \langle c_B, A_B^{-1}b \rangle$. Picking $x = A_B^{-1}b$ (with zeros in the non basis part) meets this bound and hence must be optimal. In other words, no matter what basis $B$ we’ve chosen (i.e., no matter the candidate basic feasible solution), if the two conditions hold then we’re done.

Now the crux of the algorithm is the second case: if the conditions aren’t met, we can pick a positive index of $c_{B'} - (A_B^{-1}A_{B'})^Tc_B$ and increase the corresponding value of $x_{B'}$ to increase the objective value. As we do this, other variables in the solution will change as well (by decreasing), and we have to stop when one of them hits zero. In doing so, this changes the basis by removing one index and adding another. In reality, we’ll figure out how much to increase ahead of time, and the change will correspond to a single elementary row-operation in a matrix.

Indeed, the matrix we’ll use to represent all of this data is called a tableau in the literature. The columns of the tableau will correspond to variables, and the rows to constraints. The last row of the tableau will maintain a candidate solution $y$ to the dual problem. Here’s a rough picture to keep the different parts clear while we go through the details.

But to make it work we do a slick trick, which is to “left-multiply everything” by $A_B^{-1}$. In particular, if we have an LP given by $c, A, b$, then for any basis it’s equivalent to the LP given by $c, A_B^{-1}A, A_{B}^{-1} b$ (multiplying the constraints by the invertible matrix $A_B^{-1}$ doesn’t change the set of solutions). And so the actual tableau will be of this form.

When we say it’s in this form, it’s really only true up to rearranging columns. This is because the chosen basis will always be represented by an identity matrix (as it is to start with), so to find the basis you can find the embedded identity sub-matrix. In fact, the beginning of the simplex algorithm will have the initial basis sitting in the last few columns of the tableau.

Let’s look a little bit closer at the last row. The first portion is zero because $A_B^{-1}A_B$ is the identity. But furthermore, with this $A_B^{-1}$ trick the dual LP involves $A_B^{-1}$ everywhere there’s a variable. In particular, joining all but the last column of the last row of the tableau, we have the vector $c - A^T(A_B^{-1})^T c_B$, and setting $y = (A_B^{-1})^T c_B$ we get a candidate solution for the dual. What makes the trick even slicker is that $A_B^{-1}b$ is already the candidate solution $x_B$, since $(A_B^{-1}A)_B$ is the identity. So we’re implicitly keeping track of two solutions here, one for the primal LP, given by the last column of the tableau, and one for the dual, contained in the last row of the tableau.

I told you the last row was the dual solution, so why all the other crap there? This is the final slick in the trick: the last row further encodes the complementary slackness conditions. Now that we recognize the dual candidate sitting there, the complementary slackness conditions simply ask for the last row to be non-positive (this is just another way of saying what we said at the beginning of this section!). You should check this, but it gives us a stopping criterion: if the last row is non-positive then stop and output the last column.

## The simplex algorithm

Now (finally!) we can describe and implement the simplex algorithm in its full glory. Recall that our informal setup has been:

1. Find an initial basic feasible solution, and set up the corresponding tableau.
2. Find a positive index of the last row, and increase the corresponding variable (adding it to the basis) just enough to make another variable from the basis zero (removing it from the basis).
3. Repeat step 2 until the last row is nonpositive.
4. Output the last column.

This is almost correct, except for some details about how increasing the corresponding variables works. What we’ll really do is represent the basis variables as pivots (ones in the tableau), and the column of the first 1 in each row tells us which variable’s value is given by the entry in the last column of that row. So, for example, the last entry in the first row may be the optimal value for $x_5$, if the fifth column contains the first 1 in row 1.

As we describe the algorithm, we’ll illustrate it running on a simple example. In doing this we’ll see what all the different parts of the tableau correspond to from the previous section in each step of the algorithm.

Our example LP asks to maximize $3x_1 + 2x_2$ subject to $x_1 + 2x_2 \leq 4$, $x_1 - x_2 \leq 1$, and $x_1, x_2 \geq 0$. Spoiler alert: the optimum is $x_1 = 2, x_2 = 1$ and the value of the max is 8.

So let’s be more programmatically formal about this. The main routine is essentially pseudocode, and the difficulty is in implementing the helper functions.

def simplex(c, A, b):
    tableau = initialTableau(c, A, b)

    while canImprove(tableau):
        pivot = findPivotIndex(tableau)
        pivotAbout(tableau, pivot)

    return primalSolution(tableau), objectiveValue(tableau)


Let’s start with the initial tableau. We’ll assume the user’s inputs already include the slack variables. In particular, our example data before adding slack is

c = [3, 2]
A = [[1, 2], [1, -1]]
b = [4, 1]


And the same data after adding the slack variables is

c = [3, 2, 0, 0]
A = [[1,  2,  1,  0],
[1, -1,  0,  1]]
b = [4, 1]


Now to set up the initial tableau we need an initial feasible solution in mind. The reader is recommended to work this part out with a pencil, since it’s much easier to write down than it is to explain. Since we introduced slack variables, our initial feasible solution (basis) $B$ can just be $(0,0,1,1)$. And so $x_B$ is just the slack variables, $c_B$ is the zero vector, and $A_B$ is the 2×2 identity matrix. Now $A_B^{-1}A_{B'} = A_{B'}$, which is just the original two columns of $A$ we started with, and $A_B^{-1}b = b$. For the last row, $c_B$ is zero so the part under $A_B^{-1}A_B$ is the zero vector. The part under $A_B^{-1}A_{B'}$ is just $c_{B'} = (3,2)$.

Rather than move columns around every time the basis $B$ changes, we’ll keep the tableau columns in order of $(x_1, \dots, x_n, \xi_1, \dots, \xi_m)$. In other words, for our example the initial tableau should look like this.

[[ 1,  2,  1,  0,  4],
 [ 1, -1,  0,  1,  1],
 [ 3,  2,  0,  0,  0]]


So implementing initialTableau is just a matter of putting the data in the right place.

def initialTableau(c, A, b):
    tableau = [row[:] + [x] for row, x in zip(A, b)]
    tableau.append(c[:] + [0])
    return tableau


As an aside: in the event that we don’t start with the trivial basic feasible solution of “trivially use the slack variables,” we’d have to do a lot more work in this function. Next, the primalSolution() and objectiveValue() functions are simple, because they just extract the encoded information out from the tableau (some helper functions are omitted for brevity).

def primalSolution(tableau):
    # the pivot columns denote which variables are used
    columns = transpose(tableau)
    indices = [j for j, col in enumerate(columns[:-1]) if isPivotCol(col)]
    # a pivot column's value sits in the last column of the row containing its 1
    return [(j, columns[-1][columns[j].index(1)]) for j in indices]

def objectiveValue(tableau):
    return -(tableau[-1][-1])


Similarly, the canImprove() function just checks if there’s a positive entry in the last row.

def canImprove(tableau):
    lastRow = tableau[-1]
    return any(x > 0 for x in lastRow[:-1])


Let’s run the first loop of our simplex algorithm. The first step is checking to see if anything can be improved (in our example it can). Then we have to find a pivot entry in the tableau. This part includes some edge-case checking, but if the edge cases aren’t a problem then the strategy is simple: find a positive entry corresponding to some entry $j$ of $B'$, and then pick an appropriate entry in that column to use as the pivot. Pivoting increases the value of $x_j$ (from zero) to whatever is the largest we can make it without making some other variables become negative. As we’ve said before, we’ll stop increasing $x_j$ when some other variable hits zero, and we can compute which will be the first to do so by looking at the current values of $x_B = A_B^{-1}b$ (in the last column of the tableau), and seeing how pivoting will affect them. If you stare at it for long enough, it becomes clear that the first variable to hit zero will be the entry $x_i$ of the basis for which $x_i / A_{i,j}$ is minimal (and $A_{i,j}$ has to be positive). This is because, in order to maintain the linear equalities, the entries of $x_B$ decrease as $x_j$ increases, and we can’t let any of the variables become negative.

All of this results in the following function, where we have left out the degeneracy/unboundedness checks.

def findPivotIndex(tableau):
    # pick the first positive entry of the last row
    column = [i for i, x in enumerate(tableau[-1][:-1]) if x > 0][0]
    quotients = [(i, r[-1] / r[column]) for i, r in enumerate(tableau[:-1]) if r[column] > 0]

    # pick the row index minimizing the quotient
    row = min(quotients, key=lambda x: x[1])[0]
    return row, column


For our example, the minimizer is the $(1,0)$ entry (second row, first column). Pivoting is just doing the usual elementary row operations (we covered this in a primer a while back on row-reduction). The pivot function we use here is no different, and in particular mutates the list in place.

def pivotAbout(tableau, pivot):
    i, j = pivot

    pivotDenom = tableau[i][j]
    tableau[i] = [x / pivotDenom for x in tableau[i]]

    for k, row in enumerate(tableau):
        if k != i:
            pivotRowMultiple = [y * tableau[k][j] for y in tableau[i]]
            tableau[k] = [x - y for x, y in zip(tableau[k], pivotRowMultiple)]


And in our example pivoting around the chosen entry gives the new tableau.

[[ 0.,  3.,  1., -1.,  3.],
 [ 1., -1.,  0.,  1.,  1.],
 [ 0.,  5.,  0., -3., -3.]]


In particular, $B$ is now $(1,0,1,0)$, since our pivot removed the second slack variable $\xi_2$ from the basis. Currently our solution has $x_1 = 1, \xi_1 = 3$. Notice how the identity submatrix is still sitting in there, the columns are just swapped around.

There’s still a positive entry in the bottom row, so let’s continue. The next pivot is (0,1), and pivoting around that entry gives the following tableau:

[[ 0.        ,  1.        ,  0.33333333, -0.33333333,  1.        ],
 [ 1.        ,  0.        ,  0.33333333,  0.66666667,  2.        ],
 [ 0.        ,  0.        , -1.66666667, -1.33333333, -8.        ]]


And because all of the entries in the bottom row are negative, we’re done. We read off the solution as we described, so that the first variable is 2 and the second is 1, and the objective value is the opposite of the bottom right entry, 8.
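Putting it all together, the whole run can be checked end to end. Here `transpose` and `isPivotCol` are my own minimal stand-ins for the helpers omitted above, and small tolerances guard the floating-point comparisons:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def isPivotCol(col):
    # a pivot column has a single nonzero entry, and that entry is a 1
    return sum(abs(x) > 1e-9 for x in col) == 1 and any(abs(x - 1) < 1e-9 for x in col)

def initialTableau(c, A, b):
    tableau = [row[:] + [x] for row, x in zip(A, b)]
    tableau.append(c[:] + [0])
    return tableau

def canImprove(tableau):
    return any(x > 1e-9 for x in tableau[-1][:-1])

def findPivotIndex(tableau):
    # first positive entry of the last row, then the min-ratio row
    column = [i for i, x in enumerate(tableau[-1][:-1]) if x > 1e-9][0]
    quotients = [(i, r[-1] / r[column])
                 for i, r in enumerate(tableau[:-1]) if r[column] > 1e-9]
    return min(quotients, key=lambda q: q[1])[0], column

def pivotAbout(tableau, pivot):
    i, j = pivot
    tableau[i] = [x / tableau[i][j] for x in tableau[i]]
    for k in range(len(tableau)):
        if k != i:
            factor = tableau[k][j]
            tableau[k] = [x - factor * y for x, y in zip(tableau[k], tableau[i])]

def primalSolution(tableau):
    columns = transpose(tableau)
    indices = [j for j, col in enumerate(columns[:-1]) if isPivotCol(col)]
    # each pivot column's value sits in the row containing its 1
    return [(j, columns[-1][next(r for r, x in enumerate(columns[j])
                                 if abs(x - 1) < 1e-9)]) for j in indices]

def simplex(c, A, b):
    tableau = initialTableau(c, A, b)
    while canImprove(tableau):
        pivotAbout(tableau, findPivotIndex(tableau))
    return primalSolution(tableau), -tableau[-1][-1]

solution, value = simplex([3, 2, 0, 0], [[1, 2, 1, 0], [1, -1, 0, 1]], [4, 1])
print(solution, value)  # [(0, 2.0), (1, 1.0)] 8.0
```

The two pivots it performs are exactly the $(1,0)$ and $(0,1)$ pivots traced above, and the final tableau matches the one we read the answer from.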

To see all of the source code, including the edge-case-checking we left out of this post, see the Github repository for this post.