The Codes of Solomon, Reed, and Muller

Last time we defined the Hamming code. We also saw that it meets the Hamming bound, which is a measure of how densely a code can be packed inside an ambient space and still maintain a given distance. This time we’ll define the Reed-Solomon code, which optimizes a different bound called the Singleton bound, and then generalize it to a larger class of codes called Reed-Muller codes. In future posts we’ll consider the algorithmic issues behind decoding these codes; for now we just care about their existence and optimality properties.

The Singleton bound

Recall that a code $ C$ is a set of strings called codewords, and that the parameters of a code $ C$ are written $ (n,k,d)_q$. Remember $ n$ is the length of a codeword, $ k = \log_q |C|$ is the message length, $ d$ is the minimum distance between any two codewords, and $ q$ is the size of the alphabet used for the codewords. Finally, remember that for linear codes our alphabets were either just $ \{ 0,1 \}$ where $ q=2$, or more generally a finite field $ \mathbb{F}_q$ for $ q$ a prime power.

One way to motivate the Singleton bound goes like this. We can easily come up with codes for the following parameters. For $ (n,n,1)_2$ the identity function works. And to get an $ (n,n-1,2)_2$-code we can encode a binary string $ x$ by appending the parity bit $ \sum_i x_i \mod 2$ to the end (as an easy exercise, verify this has distance 2). An obvious question is: can we generalize this to an $ (n, n-d+1, d)_2$-code for any $ d$? Perhaps a more obvious question is: why can’t we hope for better, say a larger $ d$ for the same $ k$, or $ k > n-d+1$? Because the Singleton bound says so.
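To see the parity-bit construction in action, here is a minimal Python sketch (mine, not from the post), with a brute-force check that the minimum distance really is 2 for a small message length.

```python
from itertools import product

def encode_parity(message):
    """Append the parity bit sum(x_i) mod 2, giving an (n, n-1, 2)_2 code."""
    return message + [sum(message) % 2]

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

# Brute-force check of the minimum distance for messages of length 4 (so n = 5).
codewords = [encode_parity(list(m)) for m in product([0, 1], repeat=4)]
distances = [hamming_distance(x, y)
             for i, x in enumerate(codewords) for y in codewords[i + 1:]]
assert min(distances) == 2
```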

Theorem [Singleton 64]: If $ C$ is an $ (n,k,d)_q$-code, then $ k \leq n-d+1$.

Proof. The proof is pleasantly simple. Let $ \Sigma$ be your alphabet and look at the projection map $ \pi : \Sigma^n \to \Sigma^{k-1}$ which projects $ x = (x_1, \dots, x_n) \mapsto (x_1, \dots, x_{k-1})$. Remember that the size of the code is $ |C| = q^k$, and because the codomain of $ \pi$, i.e. $ \Sigma^{k-1}$, has size $ q^{k-1} < q^k$, it follows that $ \pi$ is not an injective map. In particular, there are two codewords $ x,y$ whose first $ k-1$ coordinates are equal. Even if all of their remaining coordinates differ, this implies that $ d(x,y) \leq n - (k-1) = n-k+1$, so $ d \leq n-k+1$, which rearranges to $ k \leq n-d+1$.

$ \square$

It’s embarrassing that such a simple argument can prove that one can do no better. There are codes that meet this bound and they are called maximum distance separable (MDS) codes. One might wonder how MDS codes relate to perfect codes, but they are incomparable; there are perfect codes that are not MDS codes, and conversely MDS codes need not be perfect. The Reed-Solomon code is an example of the latter.

The Reed-Solomon Code

Irving Reed (left) and Gustave Solomon (right).

The Reed-Solomon code has a very simple definition, especially for those of you who have read about secret sharing.

Given a prime power $ q$ and integers $ k \leq n \leq q$, the Reed-Solomon code with these parameters is defined by its encoding function $ E: \mathbb{F}_q^k \to \mathbb{F}_q^n$ as follows.

  1. Generate $ \mathbb{F}_q$ explicitly.
  2. Pick $ n$ distinct elements $ \alpha_i \in \mathbb{F}_q$.
  3. A message is a list of elements $ c_0, \dots, c_{k-1} \in \mathbb{F}_q$ (i.e. a vector in $ \mathbb{F}_q^k$). Represent the message as the polynomial $ m(x) = \sum_j c_jx^j$.
  4. The encoding of a message is the tuple $ E(m) = (m(\alpha_1), \dots, m(\alpha_n))$. That is, we just evaluate $ m(x)$ at our chosen locations in $ \alpha_i$.

Here’s an example when $ q=5, n=3, k=3$. We’ll pick the points $ 1,3,4 \in \mathbb{F}_5$, and let our message be $ (4,1,2)$, which is encoded as the polynomial $ m(x) = 4 + x + 2x^2$. Then the encoding of the message is

$ \displaystyle E(m) = (m(1), m(3), m(4)) = (2, 0, 0)$
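Here’s a minimal Python sketch of the encoder (mine, not from the post), assuming $ q$ is prime so that plain modular arithmetic stands in for a general finite field implementation; the function names are just for illustration. It reproduces the example above.

```python
def rs_encode(message, points, q):
    """Reed-Solomon encoding over the prime field F_q: interpret the message
    (c_0, ..., c_{k-1}) as the polynomial m(x) = sum_j c_j x^j and evaluate
    it at each of the n chosen points."""
    def evaluate(coeffs, a):
        # Horner's rule, reducing mod q at each step.
        result = 0
        for c in reversed(coeffs):
            result = (result * a + c) % q
        return result
    return [evaluate(message, a) for a in points]

# The example from the post: q=5, points 1, 3, 4, message (4, 1, 2).
print(rs_encode([4, 1, 2], [1, 3, 4], 5))  # [2, 0, 0]
```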

Decoding the message is a bit more difficult (more on that next time), but for now let’s prove the basic facts about this code.

Fact: The Reed-Solomon code is linear. This is just because polynomials of a limited degree form a vector space. Adding polynomials is adding their coefficients, and scaling them is scaling their coefficients. Moreover, the evaluation of a polynomial at a point is a linear map, i.e. it’s always true that $ m_1(\alpha) + m_2(\alpha) = (m_1 + m_2)(\alpha)$, and scaling the coefficients is no different. So the codewords also form a vector space.

Fact: $ d = n - k + 1$, or equivalently the Reed-Solomon code meets the Singleton bound. This follows from a simple fact: any two distinct single-variable polynomials of degree at most $ k-1$ agree on at most $ k-1$ points. Indeed, otherwise two such polynomials $ f,g$ would give a nonzero polynomial $ f-g$ of degree at most $ k-1$ with more than $ k-1$ roots, but over a field a nonzero polynomial cannot have more roots than its degree. So any two distinct codewords agree on at most $ k-1$ of the $ n$ evaluation points, meaning they differ in at least $ n-k+1$ of them; combined with the Singleton bound this gives $ d = n-k+1$ exactly.

So the Reed-Solomon code is maximum distance separable. Neat!

One might wonder why one would want good codes with large alphabets. One reason is that with a large alphabet we can interpret a byte as an element of $ \mathbb{F}_{256}$ to get error correction on bytes. So if you want to encode some really large stream of bytes (like a DVD) using such a scheme and you get bursts of contiguous errors in small regions (like a scratch), then you can do pretty powerful error correction. In fact, this is more or less the idea behind error correction for DVDs. So I hear. You can read more about the famous applications at Wikipedia.

The Reed-Muller code

The Reed-Muller code is a neat generalization of the Reed-Solomon code to multivariable polynomials. The reason they’re so useful is not necessarily because they optimize some bound (if they do, I haven’t heard of it), but because they specialize to all sorts of useful codes with useful properties. One of these properties is called local decodability, which has big applications in theoretical computer science.

Anyway, before I state the definition let me remind the reader about compact notation for multivariable polynomials. I can represent the variables $ x_1, \dots, x_n$ used in the polynomial as a vector $ \mathbf{x}$ and likewise a monomial $ x_1^{\alpha_1} x_2^{\alpha_2} \dots x_n^{\alpha_n}$ by a “vector power” $ \mathbf{x}^\alpha$, where $ \sum_i \alpha_i$ is the degree of that monomial, and you’d write an entire polynomial as $ \sum_\alpha c_\alpha \mathbf{x}^{\alpha}$ where $ \alpha$ ranges over all exponents you want.

Definition: Let $ m, l$ be positive integers and $ q > l$ be a prime power. The Reed-Muller code with parameters $ m,l,q$ is defined as follows:

  1. The message is the list of coefficients of a polynomial of total degree at most $ l$ in $ m$ variables, $ f(\mathbf{x}) = \sum_{\alpha} c_\alpha \mathbf{x}^\alpha$.
  2. You encode a message $ f(\mathbf{x})$ as the tuple of all polynomial evaluations $ (f(\mathbf{x}))_{\mathbf{x} \in \mathbb{F}_q^m}$.
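Here is a rough Python sketch of this encoder (my own, not from the post), again assuming $ q$ is prime so that modular arithmetic suffices; the helper names are made up for illustration.

```python
from itertools import product

def monomial_exponents(m, l):
    """All exponent vectors alpha with alpha_1 + ... + alpha_m <= l,
    i.e. the C(m+l, m) monomials of total degree at most l."""
    return [alpha for alpha in product(range(l + 1), repeat=m) if sum(alpha) <= l]

def rm_encode(coeffs, m, l, q):
    """Reed-Muller encoding over the prime field F_q: the message is one
    coefficient per monomial of degree <= l, and the codeword is the list of
    evaluations of the resulting polynomial at every point of F_q^m."""
    exponents = monomial_exponents(m, l)
    assert len(coeffs) == len(exponents)

    def evaluate(point):
        total = 0
        for c, alpha in zip(coeffs, exponents):
            term = c
            for x_i, a_i in zip(point, alpha):
                term = (term * pow(x_i, a_i, q)) % q
            total = (total + term) % q
        return total

    return [evaluate(p) for p in product(range(q), repeat=m)]

# A tiny example: m=2 variables, degree l=1, q=3, and a message with
# C(3, 2) = 3 coefficients (one per monomial of degree at most 1).
codeword = rm_encode([1, 0, 2], m=2, l=1, q=3)
print(len(codeword))  # n = q^m = 9
```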

Here the actual parameters of the code are $ n=q^m$, and $ k = \binom{m+l}{m}$, the number of monomials of degree at most $ l$ (equivalently, the number of possible coefficients). Finally $ d = (1 - l/q)n$, and we can prove this in the same way as we did for the Reed-Solomon code, using a beefed up fact about the number of roots of a multivariate polynomial:

Fact: Two distinct multivariate polynomials of degree at most $ l$ over a finite field $ \mathbb{F}_q$ agree on at most an $ l/q$ fraction of $ \mathbb{F}_q^m$.

For messages of desired length $ k$, a clever choice of parameters gives a good code. Let $ m = \log k / \log \log k$, $ q = \log^2 k$, and pick $ l$ such that $ \binom{m+l}{m} = k$. Then the Reed-Muller code has polynomial length $ n = k^2$, and because $ l = o(q)$ we get that the distance of the code is asymptotically $ d = (1-o(1))n$, i.e. the relative distance $ d/n$ tends to 1.

A fun fact about Reed-Muller codes: a first-order Reed-Muller code was apparently used on the Mariner 9 mission to relay images of Mars back to Earth.

The Way Forward

So we defined Reed-Solomon and Reed-Muller codes, but we didn’t really do any programming yet. The reason is because the encoding algorithms are very straightforward. If you’ve been following this blog you’ll know we have already written code to explicitly represent polynomials over finite fields, and extending that code to multivariable polynomials, at least for the sake of encoding the Reed-Muller code, is straightforward.

The real interesting algorithms come when you’re trying to decode. For example, in the Reed-Solomon code we’d take as input a bunch of points in a plane (over a finite field), only some of which are consistent with the underlying polynomial that generated them, and we have to reconstruct the unknown polynomial exactly. Even worse, for Reed-Muller we have to do it with many variables!

We’ll see exactly how to do that and produce working code next time.

Until then!

Hamming’s Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

  1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
  2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary $ \{ 0,1 \}$ and still grok the important stuff.

Richard Hamming, inventor of Hamming codes. [image source]

What is a code?

The formal definition of a code is simple: a code $ C$ is just a subset of $ \{ 0,1 \}^n$ for some $ n$. Elements of $ C$ are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length $ k$, so that our messages are in $ \{ 0,1 \}^k$. Then we’re really viewing a code $ C$ as the image of some encoding function $ \textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n$. We can define $ C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $ \textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $ |C| = 2^k$, and we can call $ k = \log |C|$ the message length of $ C$ even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with $ \{ 0,1 \}$, the alphabet of your encoded messages could be an arbitrary set $ \Sigma$. So then a code $ C$ would be a subset of tuples in $ \Sigma^n$, and we would call $ q = |\Sigma|$.

So we have these parameters $ n, k, q$, and we need one more. This is the minimum distance of a code, which we’ll denote by $ d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message $ y$ (possibly with noisy bits) we look for the (unencoded) message $ x$ whose encoding $ \textup{Enc}(x)$ is as close to $ y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.

So coding theorists turn this mess of parameters into notation.

Definition: A code $ C$ is called an $ (n, k, d)_q$-code if

  • $ C \subset \Sigma^n$ for some alphabet $ \Sigma$,
  • $ k = \log_q |C|$,
  • $ C$ has minimum distance $ d$, and
  • the alphabet $ \Sigma$ has size $ q$.

The basic goals of coding theory are:

  1. For which values of these four parameters do codes exist?
  2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $ k$ for $ d=3$, and we’ll state a characterization theorem for optimizing $ k$ for a general $ d$. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $ \{ 0,1 \}^n$, that is tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $ \mathbb{F}_q$ for a prime power $ q$, i.e. have your vector space be $ \mathbb{F}_q^n$. We’ll go back and forth between describing a binary code $ q=2$ over $ \{ 0,1 \}$ and a code in $\mathbb{F}_q^n$. So to say a code is linear means:

  • The zero vector is a codeword.
  • The sum of any two codewords is a codeword.
  • Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an $ (n,k,d)_q$-code $ C$ is a $ k \times n$ matrix $ G$ whose rows form a basis for $ C$.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages $ x \in \{0,1 \}^k$ by left multiplication $ xG$. Intuitively, we can think of the bits of $ x$ as describing the coefficients of the chosen linear combination of the rows of $ G$, which uniquely describes an element of the subspace. Note that because a $ k$-dimensional subspace of $ \{ 0,1 \}^n$ has $ 2^k$ elements, we’re not abusing notation by calling $ k = \log |C|$ both the message length and the dimension.
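As an illustration, here’s a small Python sketch (mine) of encoding with a generator matrix over $ \{ 0,1 \}$; the particular matrix $ G$ is made up for the example.

```python
import numpy as np

# A hypothetical generator matrix for a small binary code: k=2 rows, n=5 columns.
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])

def encode(x, G):
    """Encode a length-k message x as the codeword xG, with arithmetic mod 2."""
    return np.mod(x @ G, 2)

# The four codewords of this code are the four linear combinations of G's rows.
for x in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(x, "->", encode(np.array(x), G))
```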

For the second description of $ C$, we’ll remind the reader that every linear subspace $ C$ has a unique orthogonal complement $ C^\perp$, which is the subspace of vectors that are orthogonal to vectors in $ C$.

Definition: Let $ H^T$ be a generator matrix for $ C^\perp$. Then $ H$ is called a parity check matrix.

Note $ H$ has the basis for $ C^\perp$ as columns. This means it has dimensions $ n \times (n-k)$. Moreover, it has the property that $ x \in C$ if and only if the left multiplication $ xH = 0$. Having zero dot product with all columns of $ H$ characterizes membership in $ C$.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute $ yH$ on your received message $ y$, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that $ y$ coincided with a different codeword than the one that was sent? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector $ x$, denoted $ wt(x)$, is the number of nonzero entries in $ x$.

Proposition: The minimum distance of a linear code $ C$ is the minimum Hamming weight over all nonzero vectors $ x \in C$.

Proof. Consider a nonzero $ x \in C$. On one hand, the zero vector is a codeword and $ wt(x)$ is by definition the Hamming distance between $ x$ and zero, so the minimum weight is an upper bound on the minimum distance. It’s also a lower bound: if $ x,y$ are two distinct codewords, then $ x-y$ is a nonzero codeword and $ wt(x-y)$ is the Hamming distance between $ x$ and $ y$.

$ \square$

So now we can define our first code, the Hamming code. It will be an $ (n, k, 3)_2$-code. The construction is quite simple. We have fixed $ d=3, q=2$, and we will also fix $ l = n-k$. One can think of this as fixing $ n$ and maximizing $ k$, but it will only work for $ n$ of a special form.

We’ll construct the Hamming code by describing a parity-check matrix $ H$. In fact, we’re going to see what conditions the minimum distance $ d=3$ imposes on $ H$, and find out those conditions are actually sufficient to get $ d=3$. We’ll start with $ d \geq 2$. If we want to ensure $ d \geq 2$, then we need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if $ e_i$ is a vector with all zeros except a one in position $ i$, then $ e_i H = h_i$ is the $ i$-th row of $ H$. We need $ e_i H \neq 0$, so this imposes the condition that no row of $ H$ can be zero. It’s easy to see that this is sufficient for $ d \geq 2$.

Likewise for $ d \geq 3$, given a vector $ y = e_i + e_j$ for some positions $ i \neq j$, the product $ yH = h_i + h_j$ must not be zero. But because our sums are mod 2, saying that $ h_i + h_j \neq 0$ is the same as saying $ h_i \neq h_j$. Again it’s an if and only if. So we have the two conditions.

  • No row of $ H$ may be zero.
  • All rows of $ H$ must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large $ n$ can be if the rows of $ H$ have length $ n-k = l$. That’s just the number of distinct nonzero binary strings of length $ l$, which is $ 2^l - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every $ l > 0$, there is a $ (2^l - 1, 2^l - l - 1, 3)_2$-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If $ x \in C$ and $ wt(e) = 1$ is an error bit in position $ i$, then the incoming message would be $ y = x + e$. Now compute $ yH = xH + eH = 0 + eH = h_i$ and flip bit $ i$ of $ y$. That is, whichever row of $ H$ you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then $ h_i = i$ as a binary number. Very slick.
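Putting the construction and the decoding trick together, here’s a rough Python sketch (my own, for illustration and not efficiency) for $ l = 3$, i.e. the $ (7, 4, 3)_2$ Hamming code. It builds $ H$ with rows in lexicographic order, finds the code by brute force, and corrects a single flipped bit via the syndrome.

```python
import numpy as np
from itertools import product

l = 3
n = 2**l - 1  # 7

# Parity check matrix: row i (1-indexed) is the binary representation of i.
H = np.array([[(i >> (l - 1 - j)) & 1 for j in range(l)] for i in range(1, n + 1)])

# Brute force the code: all x with xH = 0 (mod 2). There are 2^(n-l) = 16 of them.
code = [np.array(x) for x in product([0, 1], repeat=n)
        if not np.mod(np.array(x) @ H, 2).any()]
print(len(code))  # 16 codewords, i.e. k = 4

# Corrupt a codeword in one position and correct it using the syndrome yH.
x = code[5]
y = x.copy()
y[2] ^= 1                      # flip the bit at 0-indexed position 2
syndrome = np.mod(y @ H, 2)    # equals the row of H at the error position
error_position = int("".join(map(str, syndrome)), 2) - 1  # rows are 1-indexed
y[error_position] ^= 1
assert (y == x).all()
```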

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number $ n-k$ “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix $ G$ of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $ G$ and get a new generator matrix that looks like $ [I \mid A]$ where $ I$ is the identity matrix of the appropriate size and $ A$ is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $ A$. It’s a different encoding function on $ \{ 0,1\}^k$, but it has the same image in $ \{ 0,1 \}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds $ \Theta(\log n)$ check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of $ n$ requires $ \log n$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing $ d$ given a fixed choice of $ n$, $ k$, and $ q$. To get this, let $ V_n(r)$ denote the volume of a ball of radius $ r$ in the space $ \mathbb{F}_2^n$. I.e., if you fix any string $ x$ (it doesn’t matter which), $ V_n(r)$ is the size of the set $ \{ y : d(x,y) \leq r \}$, where $ d(x,y)$ is the Hamming distance.

There is a theorem called the Hamming bound, which describes a limit to how much you can pack disjoint balls of radius $ r$ inside $ \mathbb{F}_2^n$.

Theorem: If an $ (n,k,d)_2$-code exists, then

$ \displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n$

Proof. The proof is quite simple. To say a code $ C$ has distance $ d$ means that for every codeword $ x \in C$ there is no other codeword $ y$ with $ d(x,y) < d$. In other words, the balls of radius $ r = \lfloor (d-1)/2 \rfloor$ centered around any two codewords $ x,y$ are disjoint. The floor of $ (d-1)/2$ handles odd $ d$, e.g. when $ d=3$ you need balls of radius 1 to guarantee no overlap. Now $ |C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most $ 2^n$ strings in $ \mathbb{F}_2^n$, establishing the desired inequality.

$ \square$

Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
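The proof is an exercise, but here is a quick numerical sanity check (not a proof, and my own sketch) that the Hamming parameters meet the bound with equality.

```python
from math import comb

def ball_volume(n, r):
    """V_n(r): the number of binary strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

# For the Hamming code with parameters (2^l - 1, 2^l - l - 1, 3), check that
# 2^k * V_n(1) = 2^n, i.e. the Hamming bound is met with equality.
for l in range(2, 8):
    n, k = 2**l - 1, 2**l - l - 1
    assert 2**k * ball_volume(n, 1) == 2**n
print("Hamming codes are perfect for l = 2..7")
```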

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, some of which are nonlinear. But some of them are “trivial.” For example, when $ d=1$ you can just use the identity encoding to get the code $ C = \mathbb{F}_2^n$. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called “repetition codes,” and all three of these examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietäväinen ’73]: Let $ C$ be a nontrivial perfect $ (n,k,d)_q$-code. Then the parameters must either be those of a Hamming code, or one of the following two:

  • A $ (23, 12, 7)_2$-code
  • A $ (11, 6, 5)_3$-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a nontrivial perfect code can be realized by a Hamming code or one of the two Golay codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes a different quantity, and whose optimal codes (the MDS codes) are incomparable with perfect codes. We’ll define the Reed-Solomon code and show it optimizes that bound. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!

A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel over some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and “Hi” is much more likely than “asphyxiation.” The problems are:

  1. Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the noiseless coding problem.
  2. Say communication is not expensive, but error prone. In particular, each bit $ i$ of your message is erroneously flipped with some known probability $ p$, and all the errors are independent. Then the question is, how can one encode their messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the noisy coding problem.

There are actually many models of “communication with noise” that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string “00000000” (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the “eight zeros” string as a 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average $ 7 \cdot 1/2 + (-1) \cdot 1/2 = 3$ bits in every message.

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of entropy, which in this context measures the “information content” of a message, or the expected “compressibility” of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we’re not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

Claude Shannon. Image credit: Wikipedia

Entropy and noiseless encoding

Before we can state Shannon’s theorems we have to define entropy.

Definition: Suppose $ D$ is a distribution on a finite set $ X$, and I’ll use $ D(x)$ to denote the probability of drawing $ x$ from $ D$. The entropy of $ D$, denoted $ H(D)$ is defined as

$ H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}$

It is strange to think about this sum in abstract, so let’s suppose $ D$ is a biased coin flip with bias $ 0 \leq p \leq 1$ of landing heads. Then we can plot the entropy as follows

Image source: Wikipedia

The horizontal axis is the bias $ p$, and the vertical axis is the value of $ H(D)$, which with some algebra is $ -p \log p - (1-p) \log (1-p)$. From the graph above we can see that the entropy is maximized when $ p=1/2$ and minimized at $ p=0, 1$. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.
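If you want to play with the formula, here’s a tiny Python sketch (mine) of the binary entropy function.

```python
import math

def binary_entropy(p):
    """H(p) = -p log2(p) - (1-p) log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0, the maximum
print(binary_entropy(0.9))   # about 0.47
print(binary_entropy(0.99))  # about 0.08, i.e. highly compressible
```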

So what is this saying? High entropy means something is hard to compress, and low entropy means it is very compressible. Indeed, if our message consisted of the results of 10 such coin flips, and $ p$ was close to 1, we would be able to compress a lot by encoding strings with lots of 1’s using few bits. On the other hand, if $ p=1/2$ we couldn’t get any compression at all; all strings would be equally likely.

Shannon’s famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: $ \{ 0,1 \}^*$ is the set of all binary strings.

Theorem (Noiseless Coding Theorem) [Shannon 1948]: For every finite set $ X$ and distribution $ D$ over $ X$, there are encoding and decoding functions $ \textup{Enc}: X \to \{0,1 \}^*, \textup{Dec}: \{ 0,1 \}^* \to X$ such that

  1. The encoding/decoding actually works, i.e. $ \textup{Dec}(\textup{Enc}(x)) = x$ for all $ x$.
  2. The expected length of an encoded message is between $ H(D)$ and $ H(D) + 1$.

Moreover, no encoding scheme can do better.

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you precisely know how long to expect your messages to be. And you know that you can’t hope to do any better!

As the title of this post says, we aren’t going to give a proof here. Wikipedia has a proof if you’re really interested in the details.

Noisy Coding

The noisy coding problem is more interesting because in a certain sense (that was not solved by Shannon) it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called error correction. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can’t do so without your encoding “blowing up” the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you since many valid messages would differ by only a single bit. So the question is does such an encoding exist, and if so how much do we need to blow up the message length? Shannon’s second theorem answers both questions.

Theorem (Noisy Coding Theorem) [Shannon 1948]: For any constant noise rate $ p < 1/2$, there is an encoding scheme $ \textup{Enc} : \{ 0,1 \}^k \to \{0,1\}^{ck}, \textup{Dec} : \{ 0,1 \}^{ck} \to \{ 0,1\}^k$ with the following property. If $ x$ is the message sent by Alice, and $ y$ is the message received by Bob (i.e. $ \textup{Enc}(x)$ with random noise), then $ \Pr[\textup{Dec}(y) = x] \to 1$ as $ n = ck \to \infty$. In addition, if we denote by $ H(p)$ the entropy of the distribution of an error on a single bit, then choosing any $ c > \frac{1}{1-H(p)}$ guarantees the existence of such an encoding scheme, and no scheme exists for any smaller $ c$.

This theorem formalizes a “yes” answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that it only depends on the noise rate.
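To get a feel for the numbers, here is a small sketch (my own, not from the post) computing the minimum blowup factor $ 1/(1 - H(p))$ for a few noise rates.

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Minimum blowup factor c = 1 / (1 - H(p)); e.g. about 1.09 for p = 0.01
# and about 1.88 for p = 0.1.
for p in [0.01, 0.05, 0.1, 0.2]:
    print(p, 1 / (1 - binary_entropy(p)))
```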

A word about the proof: it’s probabilistic. That is, Shannon proved such an encoding scheme exists by picking $ \textup{Enc}$ to be a random function (!). Then $ \textup{Dec}(y)$ finds (nonconstructively) the string $ x$ such that the number of bits different between $ \textup{Enc}(x)$ and $ y$ is minimized. This “number of bits that differ” measure is called the Hamming distance. Then he showed using relatively standard probability tools that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for $ c$ takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan’s MIT class.

The non-algorithmic nature of his solution is what opened the door to more research. The question moved past “Are there any encodings that work?” to the more interesting “What is the algorithmic cost of constructing such an encoding?” It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst case guarantees. In other words, if I can guarantee that at most 12 errors occur, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you’d definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of error correcting codes and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing.

We’ll cover Hamming’s basic codes next time, and then move on to Reed-Solomon codes and others. Until then!

The Complexity of Communication

One of the most interesting questions posed in the last thirty years of computer science is to ask how much “information” must be communicated between two parties in order for them to jointly compute something. One can imagine these two parties living on distant planets, so that the cost of communicating any amount of information is very expensive, but each person has an integral component of the answer that the other does not.

Since this question was originally posed by Andrew Yao in 1979, it has led to a flurry of applications in many areas of mathematics and computer science. In particular it has become a standard tool for proving lower bounds in many settings such as circuit design and streaming algorithms. And if there’s anything theory folks love more than a problem that can be solved by an efficient algorithm, it’s a proof that a problem cannot be solved by any efficient algorithm (that’s what I mean by “lower bound”).

Despite its huge applicability, the basic results in this area are elementary. In this post we’ll cover those basics, but once you get past these basic ideas and their natural extensions you quickly approach the state of the art and open research problems. Attempts to tackle these problems in recent years have used sophisticated techniques in Fourier analysis, Ramsey theory, and geometry. This makes it a very fun and exciting field.

As a quick side note before we start, the question we’re asking is different from the one of determining the information content of a specific message. That is the domain of information theory, which was posed (and answered) decades earlier. Here we’re trying to determine the complexity of a problem, where more complex messages require more information about their inputs.

The Basic Two-Player Model

The most basic protocol is simple enough to describe over a dinner table. Alice and Bob each have one piece of information $ x,y$, respectively, say they each have a number. And together they want to compute some operation that depends on both their inputs, for example whether $ x > y$. But in the beginning Alice has access only to her number $ x$, and knows nothing about $ y$. So Alice sends Bob a few bits. Depending on the message Bob computes something and replies, and this repeats until they have computed an answer. The question is: what is the minimum number of bits they need to exchange in order for both of them to be able to compute the right answer?

There are a few things to clarify here: we’re assuming that Alice and Bob have agreed on a protocol for sending information before they ever saw their individual numbers. So imagine ten years earlier Alice and Bob were on the same planet, and they agreed on the rules they’d follow for sending/replying information once they got their numbers. In other words, we’re making a worst-case assumption on Alice and Bob’s inputs, and as usual it will be measured as a function of $ n$, the lengths of their inputs. Then we take a minimum (asymptotically) over all possible protocols they could follow, and this value is the “communication complexity” of the problem. Computing the exact communication complexity of a given problem is no simple task, since there’s always the nagging question of whether there’s some cleverer protocol than the one you came up with. So most of the results are bounds on the communication complexity of a problem.

Indeed, we can give our first simple bound for the “$ x$ greater than $ y$” problem we posed above. Say the strings $ x,y$ both have $ n$ bits. What Alice does is send her entire string $ x$ to Bob, and Bob then computes the answer and sends the answer bit back to Alice. This requires $ n + 1$ bits of communication. This proves that the communication complexity of “$ x > y$” is bounded from above by $ n+1$. A much harder question is, can we do any better?
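Here’s a toy Python sketch (mine, purely illustrative) of that trivial protocol, counting the bits exchanged.

```python
def trivial_protocol(x, y):
    """The trivial protocol for 'x > y' on n-bit inputs: Alice sends all n of
    her bits, one per round, then Bob (who now knows both inputs) replies with
    the single answer bit. Returns (answer, bits exchanged)."""
    bits_sent_by_alice = list(x)   # n rounds of one bit each
    answer = int(x > y)            # for equal-length bit strings this matches numeric comparison
    bits_exchanged = len(bits_sent_by_alice) + 1
    return answer, bits_exchanged

# Two 8-bit inputs; the protocol uses n + 1 = 9 bits of communication.
print(trivial_protocol("10010110", "10001111"))  # (1, 9)
```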

To make any progress on upper or lower bounds we need to be a bit more formal about the communication model. Basically, the useful analysis happens when the players alternate sending single bits, and this is only off by small constant factors from a more general model. We’ll also use asymptotic analysis, only distinguishing between things like linear complexity $ O(n)$ versus sublinear options like $ \log(n)$ or $ \sqrt{n}$ or even constant complexity $ O(1)$. Indeed, the protocol we described for $ x > y$ is the stupidest possible protocol for the problem, and it’s actually valid for any problem. For this problem it happens to be optimal, but we’re just trying to emphasize that nontrivial bounds are all sub-linear in the size of the inputs.

On to the formal model.

Definition: A player is a computationally unbounded Turing machine.

And we really mean unbounded. Our players have no time or space constraints, and if they want they can solve undecidable problems like the halting problem or computing Kolmogorov complexity. This is to emphasize that the critical resource is the amount of communication between players. Moreover, it gives us a hint that lower bounds in this model won’t come from computational intractability, but instead will be “information-theoretic.”

Definition: Let $ \Sigma = \{ 0,1 \}$ and let $ \Sigma^*$ be the set of all binary strings. A communication protocol is a pair of functions $ A,B: \Sigma^* \times \Sigma^* \to \{ 0,1 \}$.

The input to these functions $ A(x, h)$ should be thought of as follows: $ x$ is the player’s secret input and $ h$ is the communication history so far. The output is the single bit that they will send in that round (which can be determined by the length of $ h$ since only one bit is sent in each round). The protocol then runs by having Alice send $ b_1 = A(x, \{ \})$ to Bob, then Bob replies with $ b_2 = B(y, b_1)$, Alice continues with $ b_3 = A(x, b_1b_2)$, and so on. We implicitly understand that the content of a communication protocol includes a termination condition, but we’ll omit this from the notation. We call the length of the protocol the number of rounds.

Definition: A communication protocol $ A,B$ is said to be valid for a boolean function $ f(x,y)$ if for all strings $ x, y$, the protocol for $ A, B$ terminates on some round $ t$ with $ b_t = 1$ if and only if $ f(x,y) = 1$.

So to define the communication complexity, we let the function $ L_{A,B}(n)$ be the maximum length of the protocol $ A, B$ when run on strings of length $ n$ (the worst-case for a given input size). Then the communication complexity of a function $ f$ is the minimum of $ L_{A,B}$ over all valid protocols $ A, B$. In symbols,

$ \displaystyle CC_f(n) = \min_{A,B \textup{ is valid for } f} L_{A,B}(n)$

We will often abuse the notation by writing the communication complexity of a function as $ CC(f)$, understanding that it’s measured asymptotically as a function of $ n$.

Matrices and Lower Bounds

Let’s prove a lower bound, that to compute the equality function you need to send a linear number of bits in the worst case. In doing this we’ll develop a general algebraic tool.

So let’s write out the function $ f$ as a binary matrix $ M(f)$ in the following way. Write all $ 2^n$ inputs of length $ n$ in some fixed order along the rows and columns of the matrix, and let entry $ i,j$ be the value of $ f(i,j)$. For example, the 6-bit function $ f$ which computes whether the majority of the two players’ bits are ones looks like this:

[Image: the matrix of the 6-bit majority function]

The key insight to remember is that if the matrix of a function has a nice structure, then one needs very little communication to compute it. Let’s see why.

Say in the first round the row player sends a bit $ b$. This splits the matrix into two submatrices $ A_0, A_1$ by picking the rows of $ A_0$ to be those inputs for which the row player sends a $ b=0$, and likewise for $ A_1$ with $ b=1$. If you’re willing to rearrange the rows of the matrix so that $ A_0$ and $ A_1$ stack on top of each other, then this splits the matrix into two rectangles. Now we can switch to the column player and see which bit he sends in reply to each of the possible choices for $ b$ (say he sends back $ b’$). This separately splits each of $ A_0, A_1$ into two subrectangles corresponding to which inputs for the column player make him send the specific value of $ b’$. Continuing in this fashion we recurse until we find a submatrix consisting entirely of ones or entirely of zeros, and then we can say with certainty what the value of the function $ f$ is.

It’s difficult to visualize because every time we subdivide we move around the rows and columns within the submatrix corresponding to the inputs for each player. So the following would be a possible subdivision of an 8×8 matrix (with the values in the rectangles denoting which communicated bits got you there), but it’s sort of a strange one because we didn’t move the inputs around at all. It’s just a visual aid.

[Image: one possible subdivision of an 8×8 matrix, with rectangles labeled by the communicated bits]

If we do this for $ t$ steps we get $ 2^t$ subrectangles. A crucial fact is that any valid communication protocol for a function has to give a subdivision of the matrix where all the rectangles are constant, or else there would be two pairs of inputs $ (x,y), (x’, y’)$ which are labeled identically by the communication protocol, but which have different values under $ f$.

So naturally one expects the communication complexity of $ f$ would require as many steps as there are steps in the best decomposition, that is, the decomposition with the fewest levels of subdivision. Indeed, we’ll prove this and introduce some notation to make the discourse less clumsy.

Definition: For an $ m \times n$ matrix $ M$, a rectangle is a submatrix $ A \times B$ where $ A \subset \{ 1, \dots, m \}, B \subset \{ 1, \dots, n \}$. A rectangle is called monochromatic if all entries in the corresponding submatrix $ \left.M\right|_{A \times B}$ are the same. A monochromatic tiling of $ M$ is a partition of $ M$ into disjoint monochromatic rectangles. Define $ \chi(f)$ to be the minimum number of rectangles in any monochromatic tiling of $ M(f)$.

As we said, if there are $ t$ steps in a valid communication protocol for $ f$, then there are $ 2^t$ rectangles in the corresponding monochromatic tiling of $ M(f)$. Here is an easy consequence of this.

Proposition: If $ f$ has communication complexity $ CC(f)$, then there is a monochromatic tiling of $ M(f)$ with at most $ 2^{CC(f)}$ rectangles. In particular, $ \log(\chi(f)) \leq CC(f)$.

Proof. Pick any protocol that achieves the communication complexity of $ f$, and apply the process we described above to subdivide $ M(f)$. This will take exactly $ CC(f)$ steps, and produce no more than $ 2^{CC(f)}$ rectangles.

$ \square$

This already gives us a bunch of theorems. Take the EQ function, for example. Its matrix is the identity matrix, and it’s not hard to see that every monochromatic tiling requires $ 2^n$ rectangles, one for each entry of the diagonal. I.e., $ CC(EQ) \geq n$. But we already know that one player can just send all his bits, so actually $ CC(EQ) = \Theta(n)$. Now it’s not always so easy to compute $ \chi(f)$. The impressive thing to do is to use efficiently computable information about $ M(f)$ to give bounds on $ \chi(f)$ and hence on $ CC(f)$. So can we come up with a better lower bound that depends on something we can compute? The answer is yes.

Theorem: For every function $ f$, $ \chi(f) \geq \textup{rank }M(f)$.

Proof. This just takes some basic linear algebra. One way to think of the rank of a matrix $ A$ is as the smallest way to write $ A$ as a linear combination of rank 1 matrices (smallest as in, the smallest number of terms needed to do this). The theorem is true no matter which field you use to compute the rank, although in this proof and in the rest of this post we’ll use the real numbers.

If you give me a monochromatic tiling by rectangles, I can view each rectangle as a matrix whose rank is at most one. If the entries are all zeros then the rank is zero, and if the entries are all ones then (using zero elsewhere) this is by itself a rank 1 matrix. Writing $ M(f)$ as the sum of these rectangle matrices shows that the rank of $ M(f)$ is at most the number of rectangles in the tiling. In particular, the minimum number of rectangles $ \chi(f)$ is an upper bound on the rank.

$ \square$

Now lower bounding something like $ CC(EQ)$ is even easier, because the rank of $ M(EQ) = I_{2^n}$ is just $ 2^n$.
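If you want to experiment, here is a small sketch (mine, not from the post) using numpy to build $ M(f)$ for equality and for a 3-bit-per-player majority function, and to compute their ranks; the helper functions are made up for illustration.

```python
import numpy as np
from itertools import product

def function_matrix(f, n):
    """M(f): rows and columns are indexed by the 2^n possible player inputs."""
    inputs = list(product([0, 1], repeat=n))
    return np.array([[f(x, y) for y in inputs] for x in inputs])

def equality(x, y):
    return int(x == y)

def majority(x, y):
    # 1 if strictly more than half of the combined bits are ones.
    return int(sum(x) + sum(y) > (len(x) + len(y)) // 2)

n = 3
M_eq = function_matrix(equality, n)
M_maj = function_matrix(majority, n)

print(np.linalg.matrix_rank(M_eq))   # 8 = 2^n, since M(EQ) is the identity
print(np.linalg.matrix_rank(M_maj))  # 3, much smaller than 2^n
```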

Upper Bounds

There are other techniques to show lower bounds that are stronger than the rank and tiling method (because they imply the rank and tiling method). See this survey for a ton of details. But I want to discuss upper bounds a bit, because the central open conjecture in communication complexity is an upper bound.

The Log-Rank Conjecture: There is a universal constant $ c$, such that for all $ f$, the communication complexity $ CC(f) = O((\log \textup{rank }M(f))^c)$.

All known examples satisfy the conjecture, but unfortunately the farthest progress toward the conjecture is still exponentially worse than the conjecture’s statement. In 1997 the record was due to Andrei Kotlov who proved that $ CC(f) \leq \log(4/3) \textup{rank }M(f)$. It was not until 2013 that any (unconditional) improvements were made to this, when Shachar Lovett proved that $ CC(f) = O(\sqrt{\textup{rank }M(f)} \cdot \log \textup{rank }M(f))$.

The interested reader can check out this survey of Shachar Lovett from earlier this year (2014) for detailed proofs of these theorems and a discussion of the methods. I will just discuss one idea from this area that ties in nicely with our discussion: finding an efficient communication protocol for a low-rank function $ f$ reduces to finding a large monochromatic rectangle in $ M(f)$.

Theorem [Nisan-Wigderson 94]: Let $ c(r)$ be a function. Suppose that for any function $ f: X \times Y \to \{ 0,1 \}$, we can find a monochromatic rectangle of size $ R \geq 2^{-c(r)} \cdot | X \times Y |$ where $ r = \textup{rank }M(f)$. Then any such $ f$ is computable by a deterministic protocol with communication complexity

$ \displaystyle O \left ( \log^2(r) + \sum_{i=0}^{\log r} c(r/2^i) \right )$

Just to be concrete, this says that if $ c(r)$ is polylogarithmic, then finding these big rectangles implies a protocol also with polylogarithmic complexity. Since the complexity of the protocol is a function of $ r$ alone, the log-rank conjecture follows as a consequence. The best known results use the theorem for larger $ c(r) = r^b$ for some $ b < 1$, which gives communication complexity also $ O(r^b)$.

The proof of the theorem is detailed, but mostly what you’d expect. You take your function, split it up into the big monochromatic rectangle and the other three parts. Then you argue that when you recurse to one of the other three parts, either the rank is cut in half, or the size of the matrix is much smaller. In either case, you can apply the theorem once again. Then you bound the number of leaves in the resulting protocol tree by looking at each level $ i$ where the rank has dropped to $ r/2^i$. For the full details, see page 4 of the Shachar survey.

Multiple Players and More

In the future we’ll cover some applications of communication complexity, many of which are related to computing in restricted models such as parallel computation and streaming computation. For example, in parallel computing you often have processors which get arbitrary chunks of data as input and need to jointly compute something. Lower bounds on the communication complexity can help you prove they require a certain amount of communication in order to do that.

But in these models there are many players. And the type of communication matters: it can be point-to-point or broadcast, or something more exotic like MapReduce. So before we can get to these applications we need to define and study the appropriate generalizations of communication complexity to multiple interacting parties.

Until then!