Methods of Proof — Diagonalization

A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four”: direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever-growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and there are many more.

So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.

Diagonalization

Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.

The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping n to 2n. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.

Theorem: There is no bijection from the natural numbers \mathbb{N} to the real numbers \mathbb{R}.

Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection f: \mathbb{N} \to \mathbb{R}. That is, you give me a positive integer k and I will spit out f(k), with the property that different k give different f(k), and every real number is hit by some natural number k (this is just what it means to be a one-to-one mapping).

First let me just do some setup. I claim that all we need to do is show that there is no bijection between \mathbb{N} and the real numbers between 0 and 1. In particular, I claim there is a bijection from (0,1) to all real numbers, so if there were a bijection from \mathbb{N} \to (0,1) then we could combine the two bijections. To show there is a bijection from (0,1) \to \mathbb{R}, I can first make a bijection from the open interval (0,1) to the set (-\infty, 0) \cup (1, \infty): the map x \mapsto 1/x sends (0,1) to (1, \infty) and the map x \mapsto 1 - 1/x sends (0,1) to (-\infty, 0), so splitting (0,1) into two halves and rescaling each half gives such a bijection (modulo a boundary point, one of the messy details below). With a little bit of extra work (read, messy details) you can extend this to all real numbers. Here’s a sketch: make a bijection from (0,1) to (0,2) by doubling; then make a bijection from (0,2) to all real numbers by using the (0,1) part to get (-\infty, 0) \cup (1, \infty) as above, and using the [1,2) part to get [0,1) by subtracting 1 (almost! To be super rigorous you also have to argue that the missing number 1 doesn’t change the cardinality, or else write down a more complicated bijection; still, the idea should be clear).

Okay, setup is done. We just have to show there is no bijection between (0,1) and the natural numbers.

The reason I did all that setup is so that I can use the fact that every real number in (0,1) has an infinite binary expansion whose only nonzero digits are after the point. And so I’ll write down the expansion of f(1) as a row in a table (an infinite row), and below it I’ll write down the expansion of f(2), below that f(3), and so on, and the points will line up. The table looks like this.

f(1) = 0 . d  d  d  d \dots
f(2) = 0 . d  d  d  d \dots
f(3) = 0 . d  d  d  d \dots
\vdots

The d‘s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of f(1) by b_{1,1}, b_{1,2}, b_{1,3}, \dots, the digits of f(2) by b_{2,1}, b_{2,2}, b_{2,3}, \dots, and so on. This makes the table look like this

f(1) = 0 . b_{1,1}  b_{1,2}  b_{1,3}  b_{1,4} \dots
f(2) = 0 . b_{2,1}  b_{2,2}  b_{2,3}  b_{2,4} \dots
f(3) = 0 . b_{3,1}  b_{3,2}  b_{3,3}  b_{3,4} \dots
\vdots

It’s a bit harder to read, but trust me the notation is helpful.

Now by the assumption that f is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.

Here’s how I’ll come up with such a number N (this is the diagonalization part). It starts with 0., and its first digit after the decimal point is 1-b_{1,1}. That is, we flip the bit b_{1,1} to get the first digit of N. The second digit is 1-b_{2,2}, the third is 1-b_{3,3}, and so on. In general, digit i is 1-b_{i,i}.
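To make the construction concrete, here is a tiny Python sketch of the diagonal flip applied to a finite chunk of such a table (the proof, of course, works with the whole infinite table; the sample digits below are made up).

# rows[i][j] holds the digit b_{i+1, j+1}; flip the diagonal to build N.
def diagonal_flip(rows):
    return [1 - rows[i][i] for i in range(len(rows))]

table = [
    [0, 1, 0, 1],   # first four digits of f(1)
    [1, 1, 1, 1],   # first four digits of f(2)
    [0, 0, 0, 0],   # first four digits of f(3)
    [1, 0, 1, 0],   # first four digits of f(4)
]

print(diagonal_flip(table))   # [1, 0, 1, 1], which differs from row i in digit i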

Now we show that N isn’t in the table. If it were, then it would have to be N = f(m) for some m, i.e. be the m-th row in the table. Moreover, by the way we built the table, the m-th digit of N would be b_{m,m}. But we defined N so that its m-th digit is actually 1-b_{m,m}. This is very embarrassing for N (it’s a contradiction!). So N isn’t in the table.

\square

It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?

The Halting Problem

The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.

The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program P and an input x to that program, will P ever stop running when given x as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!

So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.

Theorem: The halting problem cannot be solved by Turing machines.

Proof. Suppose to the contrary that T is a program that solves the halting problem. We’ll use T as a black box to come up with a new program I’ll call meta-T, defined in pseudo-python as follows.

def metaT(P):
    # T is the assumed halting-problem solver, used as a black box:
    # T(P, x) answers whether program P halts on input x.
    if T(P, P):            # does P halt when given its own source code?
        while True:        # then loop forever
            pass
    else:
        return "success!"  # otherwise, halt

In words, meta-T accepts as input the source code of a program P, and then uses T to tell if P halts (when given its own source code as input). Based on the result, it does the opposite of what P does on that input: if P halts then meta-T loops infinitely, and if P loops then meta-T halts. It’s a little meta, right?

Now let’s do something crazy: let’s run meta-T on itself! That is, run

metaT(metaT)

So meta. The question is what is the output of this call? The meta-T program uses T to determine whether meta-T halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-T, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting T‘s answer! Likewise, if T says that metaT(metaT) should loop infinitely, that will cause meta-T to halt, a contradiction. So T cannot be correct, and the halting problem can’t be solved.

\square

This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.

But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from \mathbb{N} to the set of all programs). This shouldn’t be so hard to see: you can list all programs in order of length, lexicographically within each length, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
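If you want to see the enumeration spelled out, here is a small Python sketch that lists every binary string in that order (length first, then lexicographically), which is exactly the kind of listing a bijection with \mathbb{N} requires.

from itertools import count, islice, product

def all_binary_strings():
    # Every binary string appears exactly once, at some finite position.
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

print(list(islice(all_binary_strings(), 8)))   # ['', '0', '1', '00', '01', '10', '11', '000']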

The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.

For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:

010101010101010101...

Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry (x,P) a 1 if P(x) halts and a 0 otherwise.


         P_1      P_2      P_3      \dots
x_1      b_{1,1}  b_{1,2}  b_{1,3}  \dots
x_2      b_{2,1}  b_{2,2}  b_{2,3}  \dots
x_3      b_{3,1}  b_{3,2}  b_{3,3}  \dots
\vdots

Here b_{i,j} is 1 if P_j(x_i) halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.

Now we assume for contradiction’s sake that some program solves the halting problem, i.e. that every entry of the table is computable. We’ll construct the answers output by meta-T by flipping each bit of the diagonal of the table. The point is that meta-T corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-T. Then we argue that the entry of the table for (\textup{meta-}T, \textup{meta-}T) contradicts its definition, and we’re done!

So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire.

Until next time!

Hamming’s Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

  1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
  2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary \{ 0,1 \} and still grok the important stuff.


Richard Hamming, inventor of Hamming codes. [image source]

What is a code?

The formal definition of a code is simple: a code C is just a subset of \{ 0,1 \}^n for some n. Elements of C are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length k, so that our messages are in \{ 0,1 \}^k. Then we’re really viewing a code C as the image of some encoding function \textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n. We can define C by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that \textup{Enc} is an injective function, so that no two messages get sent to the same codeword. Then |C| = 2^k, and we can call k = \log |C| the message length of C even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with \{ 0,1 \}, the alphabet of your encoded messages could be an arbitrary set \Sigma. So then a code C would be a subset of tuples in \Sigma^n, and we write q = |\Sigma| for the size of the alphabet.

So we have these parameters n, k, q, and we need one more. This is the minimum distance of a code, which we’ll denote by d. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message y (possibly with noisy bits) we look for the (unencoded) message x whose encoding \textup{Enc}(x) is as close to y as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.
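Since we’ll lean on it constantly, here is a quick Python sketch of the Hamming distance and of the minimum distance of a small, made-up code, computed by brute force.

from itertools import combinations

def hamming_distance(x, y):
    # Number of coordinates in which the two tuples differ.
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    # Minimum over all distinct pairs of codewords; fine for toy examples,
    # but in general this is an expensive computation.
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

C = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(minimum_distance(C))   # 3 for this example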

So coding theorists turn this mess of parameters into notation.

Definition: A code C is called an (n, k, d)_q-code if

  • C \subset \Sigma^n for some alphabet \Sigma,
  • k = \log |C|,
  • C has minimum distance d, and
  • the alphabet \Sigma has size q.

The basic goals of coding theory are:

  1. For which values of these four parameters do codes exist?
  2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing k for d=3, and we’ll state a characterization theorem for optimizing k for a general d. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be \{ 0,1 \}^n, that is tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field \mathbb{F}_q for a prime power q, i.e. have your vector space be \mathbb{F}_q^n. We’ll go back and forth between describing a binary code (q=2) over \{ 0,1 \} and a code in \mathbb{F}_q^n. So to say a code is linear means:

  • The zero vector is a codeword.
  • The sum of any two codewords is a codeword.
  • Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an (n,k,d)_q-code C is a k \times n matrix G whose rows form a basis for C.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages x \in \{0,1 \}^k by left multiplication xG. Intuitively, we can think of the bits of x as describing the coefficients of the chosen linear combination of the rows of G, which uniquely describes an element of the subspace. Note that because a k-dimensional subspace of \{ 0,1 \}^n has 2^k elements, we’re not abusing notation by calling k = \log |C| both the message length and the dimension.
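Here is a minimal numpy sketch of encoding by left multiplication, using a small made-up generator matrix (everything is mod 2).

import numpy as np

# A made-up generator matrix for a k = 2, n = 5 binary linear code;
# each row is a basis vector of the code.
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])

def encode(x, G):
    # Encode a length-k message x as the codeword x G mod 2.
    return np.asarray(x) @ G % 2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", encode(x, G))   # the four codewords of this code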

For the second description of C, we’ll remind the reader that every linear subspace C has a unique orthogonal complement C^\perp, which is the subspace of vectors that are orthogonal to vectors in C.

Definition: Let H^T be a generator matrix for C^\perp. Then H is called a parity check matrix.

Note H has the basis for C^\perp as columns. This means it has dimensions n \times (n-k). Moreover, it has the property that x \in C if and only if the left multiplication xH = 0. Having zero dot product with all columns of H characterizes membership in C.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute yH on your received message y, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that y coincided with a different codeword than the one that was sent? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector x, denoted wt(x), is the number of nonzero entries in x.

Proposition: The minimum distance of a linear code C is the minimum Hamming weight over all nonzero vectors x \in C.

Proof. Consider a nonzero x \in C. On one hand, the zero vector is a codeword and wt(x) is by definition the Hamming distance between x and zero, so the minimum weight is an upper bound on the minimum distance. In fact, it’s also a lower bound: if x,y are two distinct codewords, then x-y is a nonzero codeword and wt(x-y) is the Hamming distance between x and y.

\square

So now we can define our first code, the Hamming code. It will be an (n, k, 3)_2-code. The construction is quite simple. We have fixed d=3, q=2, and we will also fix l = n-k. One can think of this as fixing n and maximizing k, but it will only work for n of a special form.

We’ll construct the Hamming code by describing a parity-check matrix H. In fact, we’re going to see what conditions the minimum distance d=3 imposes on H, and find out those conditions are actually sufficient to get d=3. We’ll start with d \geq 2. If we want to ensure d \geq 2, then we need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if e_i is a vector with all zeros except a one in position i, then e_i H = h_i is the i-th row of H. We need e_i H \neq 0, so this imposes the condition that no row of H can be zero. It’s easy to see that this is sufficient for d \geq 2.

Likewise for d \geq 3, given a vector y = e_i + e_j for some positions i \neq j, the product yH = h_i + h_j must not be zero. But because our sums are mod 2, saying that h_i + h_j \neq 0 is the same as saying h_i \neq h_j. Again it’s an if and only if. So we have the two conditions.

  • No row of H may be zero.
  • All rows of H must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large can n  be if the vectors have length n-k = l? That’s just the number of distinct nonzero binary strings of length l, which is 2^l - 1. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every l > 0, there is a (2^l - 1, 2^l - l - 1, 3)_2-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If x \in C is sent and e is an error vector with wt(e) = 1, say with its single 1 in position i, then the incoming message would be y = x + e. Now compute yH = xH + eH = 0 + eH = h_i and flip bit i of y. That is, whichever row of H you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then h_i = i as a binary number. Very slick.
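Here is a short numpy sketch of that decoding procedure for l = 3, i.e. the (7, 4, 3)_2 Hamming code; the brute-force search for a codeword is only for illustration.

import numpy as np
from itertools import product

l = 3
n = 2**l - 1   # 7

# Parity check matrix: row i (1-indexed) is the l-bit binary expansion of i,
# so the rows are exactly the nonzero binary strings of length l, in order.
H = np.array([[(i >> (l - 1 - j)) & 1 for j in range(l)]
              for i in range(1, n + 1)])

def is_codeword(x):
    return not np.any(x @ H % 2)

# List all 16 codewords by brute force (fine for n = 7) and pick one.
codewords = [np.array(c) for c in product([0, 1], repeat=n)
             if is_codeword(np.array(c))]
x = codewords[5]

y = x.copy()
y[4] ^= 1                      # corrupt position 5 (1-indexed)
syndrome = y @ H % 2           # equals h_5, the binary expansion of 5
position = int("".join(map(str, syndrome)), 2)
y[position - 1] ^= 1           # flip the bad bit back
assert np.array_equal(x, y)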

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number n-k of “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix G of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on G and get a new generator matrix that looks like [I \mid A] where I is the identity matrix of the appropriate size and A is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by A. It’s a different encoding function on \{ 0,1\}^k, but it has the same image in \{ 0,1 \}^n, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
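Here is a rough numpy sketch of that change of basis: row reduction mod 2 bringing a generator matrix into the form [I \mid A]. It assumes a pivot can always be found among the first k columns; in general you may also have to permute coordinates, which doesn’t change the code’s parameters.

import numpy as np

def systematic_form(G):
    # Row-reduce a binary generator matrix to [I | A] (mod 2), assuming
    # pivots exist in the first k columns.
    G = G % 2
    k, _ = G.shape
    for i in range(k):
        pivot = next(r for r in range(i, k) if G[r, i] == 1)
        G[[i, pivot]] = G[[pivot, i]]        # move the pivot row into place
        for r in range(k):
            if r != i and G[r, i] == 1:
                G[r] = (G[r] + G[i]) % 2     # clear column i in the other rows
    return G

# A scrambled generator matrix and its systematic form (same row space).
G = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 1, 1],
              [1, 1, 1, 0, 0]])
print(systematic_form(G))
# [[1 0 0 1 1]
#  [0 1 0 0 1]
#  [0 0 1 1 0]]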

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds \Theta(\log n) check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of n requires \log n bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing d given a fixed choice of n, k, and q. To see this, let V_n(r) denote the volume of a ball of radius r in the space \mathbb{F}_2^n. I.e., if you fix any string (doesn’t matter which) x, then V_n(r) is the size of the set \{ y : d(x,y) \leq r \}, where d(x,y) is the Hamming distance.
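Concretely, a string within distance r of x is determined by choosing which of at most r coordinates to flip, so

\displaystyle V_n(r) = \sum_{i=0}^{r} \binom{n}{i}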

There is a theorem called the Hamming bound, which limits how many disjoint balls of radius r you can pack inside \mathbb{F}_2^n.

Theorem: If an (n,k,d)_2-code exists, then

\displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n

Proof. The proof is quite simple. To say a code C has distance d means that for every codeword x \in C there is no other codeword y within Hamming distance d-1 of x. In other words, the balls of radius r = \lfloor (d-1)/2 \rfloor centered around any two distinct codewords x,y are disjoint. The extra minus one is there so that, for example, when d=3 the balls of radius 1 are guaranteed not to overlap. Now |C| = 2^k, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most 2^n strings in \mathbb{F}_2^n, establishing the desired inequality.

\square

Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, some of which are nonlinear. But some of them are “trivial.” For example, when d=1 you can just use the identity encoding to get the code C = \mathbb{F}_2^n. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called “repetition codes,” and all three of these examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietavainen ‘73]: Let C be a nontrivial perfect (n,k,d)_q code. Then the parameters must either be those of a Hamming code, or one of the following two:

  • A (23, 12, 7)_2-code
  • A (11, 6, 5)_3-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a nontrivial perfect code is realized by one of these linear codes: a Hamming code or one of the two Golay codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We’ll define the Reed-Solomon codes and show they optimize this bound as well. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!

A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel over some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and “Hi” is much more likely than “asphyxiation.” The problems are:

  1. Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the noiseless coding problem.
  2. Say communication is not expensive, but error prone. In particular, each bit i of your message is erroneously flipped with some known probability p, and all the errors are independent. Then the question is, how can one encode their messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the noisy coding problem.

There are actually many models of “communication with noise” that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string “00000000” (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the “eight zeros” string as a 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average 7 \cdot 1/2 + (-1) \cdot 1/2 = 3 bits in every message.

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of entropy, which in this context measures the “information content” of a message, or the expected “compressibility” of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we’re not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

Claude Shannon. Image credit: Wikipedia

Entropy and noiseless encoding

Before we can state Shannon’s theorems we have to define entropy.

Definition: Suppose D is a distribution on a finite set X, and I’ll use D(x) to denote the probability of drawing x from D. The entropy of D, denoted H(D), is defined as

H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}

It is strange to think about this sum in the abstract, so let’s suppose D is a biased coin flip with bias 0 \leq p \leq 1 of landing heads. Then we can plot the entropy as follows.

[Plot of the entropy H(D) of a biased coin as a function of the bias p. Image source: Wikipedia]

The horizontal axis is the bias p, and the vertical axis is the value of H(D), which with some algebra is - p \log p - (1-p) \log (1-p). From the graph above we can see that the entropy is maximized when p=1/2 and minimized at p=0, 1. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.
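If you want to play with the numbers, here is a tiny Python sketch of the entropy function, applied to a couple of coins and to the “eight zeros” distribution from the earlier example.

from math import log2

def entropy(dist):
    # Entropy (in bits) of a distribution given as a list of probabilities.
    return sum(p * log2(1 / p) for p in dist if p > 0)

print(entropy([0.5, 0.5]))   # 1.0: a fair coin is incompressible
print(entropy([0.9, 0.1]))   # ~0.469: a heavily biased coin carries less information

# One message with probability 1/2, and 255 others sharing the rest equally.
print(entropy([0.5] + [0.5 / 255] * 255))   # ~4.997, close to the 5-bit expected length of the encoding described earlier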

So what is this saying? A high entropy measures how incompressible something is, and low entropy gives us lots of compressibility. Indeed, if our message consisted of the results of 10 such coin flips, and p was close to 1, we would be able to compress a lot by encoding strings with lots of 1’s using few bits. On the other hand, if p=1/2 we couldn’t get any compression at all. All strings would be equally likely.

Shannon’s famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: \{ 0,1 \}^* is the set of all binary strings.

Theorem (Noiseless Coding Theorem) [Shannon 1948]: For every finite set X and distribution D over X, there are encoding and decoding functions \textup{Enc}: X \to \{0,1 \}^*, \textup{Dec}: \{ 0,1 \}^* \to X such that

  1. The encoding/decoding actually works, i.e. \textup{Dec}(\textup{Enc}(x)) = x for all x.
  2. The expected length of an encoded message is between H(D) and H(D) + 1.

Moreover, no encoding scheme can do better.

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you precisely know how long to expect your messages to be. And you know that you can’t hope to do any better!

As the title of this post says, we aren’t going to give a proof here. Wikipedia has a proof if you’re really interested in the details.

Noisy Coding

The noisy coding problem is more interesting because in a certain sense (that was not solved by Shannon) it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called error correction. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can’t do so without your encoding “blowing up” the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you since many valid messages would differ by only a single bit. So the question is does such an encoding exist, and if so how much do we need to blow up the message length? Shannon’s second theorem answers both questions.

Theorem (Noisy Coding Theorem) [Shannon 1948]: For any constant noise rate p < 1/2, there is an encoding scheme \textup{Enc} : \{ 0,1 \}^k \to \{0,1\}^{ck}, \textup{Dec} : \{ 0,1 \}^{ck} \to \{ 0,1\}^k with the following property. If x is the message sent by Alice, and y is the message received by Bob (i.e. \textup{Enc}(x) with random noise), then \Pr[\textup{Dec}(y) = x] \to 1 as a function of n=ck. In addition, if we denote by H(p) the entropy of the distribution of an error on a single bit, then choosing any c > \frac{1}{1-H(p)} guarantees the existence of such an encoding scheme, and no scheme exists for any smaller c.

This theorem formalizes a “yes” answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that it only depends on the noise rate.
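To get a feel for the numbers, here is a small Python sketch that computes the minimal blowup factor 1/(1 - H(p)) for a few noise rates (the function names are just my own labels).

from math import log2

def binary_entropy(p):
    # Entropy of a single bit that is flipped with probability p.
    if p in (0, 1):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def min_blowup(p):
    # The smallest blowup factor c that the noisy coding theorem allows.
    return 1 / (1 - binary_entropy(p))

for p in [0.01, 0.05, 0.1, 0.25]:
    print(p, round(min_blowup(p), 2))
# e.g. p = 0.1 needs c > 1.88 or so, and the blowup explodes as p approaches 1/2.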

A word about the proof: it’s probabilistic. That is, Shannon proved such an encoding scheme exists by picking \textup{Enc} to be a random function (!). Then \textup{Dec}(y) finds (nonconstructively) the string x such that the number of bits different between \textup{Enc}(x) and y is minimized. This “number of bits that differ” measure is called the Hamming distance. Then he showed using relatively standard probability tools that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for c takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan’s MIT class.

The non-algorithmic nature of his solution is what opened the door to more research. The question has moved past “Are there any encodings that work?” to the more interesting “What is the algorithmic cost of constructing such an encoding?” It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst case guarantees. In other words, if I can guarantee that at most 12 errors will occur, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you’d definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of error correcting codes and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing.

We’ll cover Hamming’s basic codes next time, and then move on to Reed-Solomon codes and others. Until then!

The Quantum Bit

The best place to start our journey through quantum computing is to recall how classical computing works and try to extend it. Since our final quantum computing model will be a circuit model, we should informally discuss circuits first.

A circuit has three parts: the “inputs,” which are bits (either zero or one); the “gates,” which represent the lowest-level computations we perform on bits; and the “wires,” which connect the outputs of gates to the inputs of other gates. Typically the gates have one or two input bits and one output bit, and they correspond to some logical operation like AND, NOT, or XOR.


A simple example of a circuit. The V’s are “OR” and the Λ’s are “AND.” Image source: Ryan O’Donnell

If we want to come up with a different model of computing, we could start with regular circuits and generalize some or all of these pieces. Indeed, in our motivational post we saw a glimpse of a probabilistic model of computation, where instead of the inputs being bits they were probabilities in a probability distribution, and instead of the gates being simple boolean functions they were linear maps that preserved probability distributions (we called such a matrix “stochastic”).

Rather than go through that whole train of thought again let’s just jump into the definitions for the quantum setting. In case you missed last time, our goal is to avoid as much physics as possible and frame everything purely in terms of linear algebra.

Qubits are Unit Vectors

The generalization of a bit is simple: it’s a unit vector in \mathbb{C}^2. That is, our most atomic unit of data is a vector (a,b) with the constraints that a,b are complex numbers and |a|^2 + |b|^2 = 1. We call such a vector a qubit.

A qubit can assume “binary” values much like a regular bit, because you could pick two distinguished unit vectors, like (1,0) and (0,1), and call one “zero” and the other “one.” Obviously there are many more possible unit vectors, such as \frac{1}{\sqrt{2}}(1, 1) and (-i,0). But before we go romping about with what qubits can do, we need to understand how we can extract information from a qubit. The definitions we make here will motivate a lot of the rest of what we do, and this is in my opinion one of the major hurdles to becoming comfortable with quantum computing.

A bittersweet fact of life is that bits are comforting. They can be zero or one, you can create them and change them and read them whenever you want without an existential crisis. The same is not true of qubits. This is a large part of what makes quantum computing so weird: you can’t just read the information in a qubit! Before we say why, notice that the coefficients in a qubit are complex numbers, so being able to read them exactly would potentially encode an infinite amount of information (in the infinite binary expansion)! Not only would this be an undesirably powerful property of a circuit, but physicists’ experiments tell us it’s not possible either.

So as we’ll see when we get to some algorithms, the main difficulty in getting useful quantum algorithms is not necessarily figuring out how to compute what you want to compute, it’s figuring out how to tease useful information out of the qubits that otherwise directly contain what you want. And the reason it’s so hard is that when you read a qubit, most of the information in the qubit is destroyed. And what you get to see is only a small piece of the information available. Here is the simplest example of that phenomenon, which is called the measurement in the computational basis.

Definition: Let v = (a,b) \in \mathbb{C}^2 be a qubit. Call the standard basis vectors e_0 = (1,0), e_1 = (0,1) the computational basis of \mathbb{C}^2. The process of measuring v in the computational basis consists of two parts.

  1. You observe (get as output) a random choice of e_0 or e_1. The probability of getting e_0 is |a|^2, and the probability of getting e_1 is |b|^2.
  2. As a side effect, the qubit v instantaneously becomes whatever state was observed in 1. This is often called a collapse of the wavefunction by physicists.

There are more sophisticated ways to measure, and more sophisticated ways to express the process of measurement, but we’ll cover those when we need them. For now this is it.
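Here is a minimal numpy sketch of this measurement rule, as a classical simulation of course; a real qubit would never let you peek at a and b directly.

import numpy as np

def measure(qubit):
    # Measure (a, b) in the computational basis: output 0 or 1 with
    # probabilities |a|^2 and |b|^2, and collapse the state accordingly.
    a, b = qubit
    outcome = 0 if np.random.random() < abs(a) ** 2 else 1
    collapsed = np.array([1, 0]) if outcome == 0 else np.array([0, 1])
    return outcome, collapsed

v = np.array([1, 1j]) / np.sqrt(2)            # a unit vector, so a valid qubit
samples = [measure(v)[0] for _ in range(10000)]
print(sum(samples) / len(samples))            # roughly 0.5, since |a|^2 = |b|^2 = 1/2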

Why is this so painful? Because if you wanted to try to estimate the probabilities |a|^2 or |b|^2, not only would you get an estimate at best, but you’d have to repeat whatever computation prepared v for measurement over and over again until you get an estimate you’re satisfied with. In fact, we’ll see situations like this, where we actually have a perfect representation of the data we need to solve our problem, but we just can’t get at it because the measurement process destroys it once we measure.

Before we can talk about those algorithms we need to see how we’re allowed to manipulate qubits. As we said before, we use unitary matrices to preserve unit vectors, so let’s recall those and make everything more precise.

Qubit Mappings are Unitary Matrices

Suppose v = (a,b) \in \mathbb{C}^2 is a qubit. If we are to have any mapping between vector spaces, it had better be a linear map, and the linear maps that send unit vectors to unit vectors are called unitary matrices. An equivalent definition that seems a bit stronger is:

Definition: A linear map \mathbb{C}^2 \to \mathbb{C}^2 is called unitary if it preserves the inner product on \mathbb{C}^2.

Let’s remember the inner product on \mathbb{C}^n is defined by \left \langle v,w \right \rangle = \sum_{i=1}^n v_i \overline{w_i} and has some useful properties.

  • The square norm of a vector is \left \| v \right \|^2 = \left \langle v,v \right \rangle.
  • Swapping the coordinates of the complex inner product conjugates the result: \left \langle v,w \right \rangle = \overline{\left \langle w,v \right \rangle}
  • The complex inner product is a linear map if you fix the second coordinate, and a conjugate-linear map if you fix the first. That is, \left \langle au+v, w \right \rangle = a \left \langle u, w \right \rangle + \left \langle v, w \right \rangle and \left \langle u, aw + v \right \rangle = \overline{a} \left \langle u, w \right \rangle + \left \langle u,v \right \rangle

By the first bullet, it makes sense to require unitary matrices to preserve the inner product instead of just the norm, though the two are equivalent (see the derivation on page 2 of these notes). We can obviously generalize unitary matrices to any complex vector space, and unitary matrices have some nice properties. In particular, if U is a unitary matrix then the important property is that the columns (and rows) of U form an orthonormal basis. As an immediate result, if we take the product U\overline{U}^\text{T}, which is just the matrix of all possible inner products of rows of U, we get the identity matrix. This means that unitary matrices are invertible and their inverse is \overline{U}^\text{T}.

Already we have one interesting philosophical tidbit. Any unitary transformation of a qubit is reversible because all unitary matrices are invertible. Apparently the only non-reversible thing we’ve seen so far is measurement.

Recall that \overline{U}^\text{T} is the conjugate transpose of the matrix, which I’ll often write as U^*. Note that there is a way to define U^* without appealing to matrices: it is a notion called the adjoint, which is that linear map U^* such that \left \langle Uv, w \right \rangle = \left \langle v, U^*w \right \rangle for all v,w. Also recall that “unitary matrix” for complex vector spaces means precisely the same thing as “orthogonal matrix” does for real vector spaces. The only difference is the inner product being used (indeed, if the complex matrix happens to have real entries, then orthogonal matrix and unitary matrix mean the same thing).

Definition: A single qubit gate is a unitary matrix \mathbb{C}^2 \to \mathbb{C}^2.

So enough with the properties and definitions, let’s see some examples. For all of these examples we’ll fix the basis to the computational basis e_0, e_1. One very important, but still very simple example of a single qubit gate is the Hadamard gate. This is the unitary map given by the matrix

\displaystyle \frac{1}{\sqrt{2}}\begin{pmatrix}  1 & 1 \\  1 & -1  \end{pmatrix}

It’s so important because if you apply it to a basis vector, say, e_0 = (1,0), you get a uniform linear combination \frac{1}{\sqrt{2}}(e_0 + e_1). One simple use of this is to allow for unbiased coin flips, and as readers of this blog know unbiased coins can efficiently simulate biased coins. But it has many other uses we’ll touch on as they come.

Just to give another example, the quantum NOT gate, often called a Pauli X gate, is the following matrix

\displaystyle \begin{pmatrix}  0 & 1 \\  1 & 0  \end{pmatrix}

It’s called this because, if we consider e_0 to be the “zero” bit and e_1 to be “one,” then this mapping swaps the two. In general, it takes (a,b) to (b,a).
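Here is a quick numpy sketch of these two gates, checking that each is unitary and looking at the states they produce from e_0 (the variable names are just my own labels).

import numpy as np

hadamard = np.array([[1, 1],
                     [1, -1]]) / np.sqrt(2)
pauli_x = np.array([[0, 1],
                    [1, 0]])

e0 = np.array([1, 0])

# Unitarity: the product with the conjugate transpose is the identity.
for U in (hadamard, pauli_x):
    assert np.allclose(U @ U.conj().T, np.eye(2))

print(hadamard @ e0)   # [0.707..., 0.707...]: equal weight on e_0 and e_1, a fair coin
print(pauli_x @ e0)    # [0, 1]: the quantum NOT gate swaps e_0 and e_1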

As the reader can probably imagine by the suggestive comparison with classical operations, quantum circuits can do everything that classical circuits can do. We’ll save the proof for a future post, but if we want to do some kind of “quantum AND” operation, we get an obvious question. How do you perform an operation that involves multiple qubits? The short answer is: you represent a collection of qubits by their tensor product, and apply a unitary matrix to that tensor.

We’ll go into more detail on this next time, and in the meantime we suggest checking out this blog’s primer on the tensor product. Until then!