The Codes of Solomon, Reed, and Muller

Last time we defined the Hamming code. We also saw that it meets the Hamming bound, which is a measure of how densely a code can be packed inside an ambient space and still maintain a given distance. This time we’ll define the Reed-Solomon code, which optimizes a different bound called the Singleton bound, and then generalize it to a larger class of codes called Reed-Muller codes. In future posts we’ll consider the algorithmic issues behind decoding these codes; for now we just care about their existence and optimality properties.

The Singleton bound

Recall that a code C is a set of strings called codewords, and that the parameters of a code C are written (n,k,d)_q. Remember n is the length of a codeword, k = \log_q |C| is the message length, d is the minimum distance between any two codewords, and q is the size of the alphabet used for the codewords. Finally, remember that for linear codes our alphabets were either just \{ 0,1 \} where q=2, or more generally a finite field \mathbb{F}_q for q a prime power.

One way to motivate the Singleton bound goes like this. We can easily come up with codes for the following parameters. For (n,n,1)_2 the identity function works. And to get an (n,n-1,2)_2-code we can encode a binary string x by appending the parity bit \sum_i x_i \mod 2 to the end (as an easy exercise, verify this has distance 2). An obvious question is: can we generalize this to an (n, n-d+1, d)_2-code for any d? Perhaps a more pressing question is: why can’t we hope for better? A larger d for the same k, or k > n-d+1? Because the Singleton bound says so.

Theorem [Singleton 64]: If C is an (n,k,d)_q-code, then k \leq n-d+1.

Proof. The proof is pleasantly simple. Let \Sigma be your alphabet and look at the projection map \pi : \Sigma^n \to \Sigma^{k-1} which projects x = (x_1, \dots, x_n) \mapsto (x_1, \dots, x_{k-1}). Remember that the size of the code is |C| = q^k, and because the codomain of \pi, i.e. \Sigma^{k-1}, has size q^{k-1} < q^k, it follows that \pi cannot be injective when restricted to C. In particular, there are two distinct codewords x,y whose first k-1 coordinates are equal. Even if all of their remaining coordinates differ, this implies that d(x,y) \leq n - (k-1) = n-k+1, so d \leq n-k+1 and hence k \leq n-d+1.


It’s embarrassing that such a simple argument can prove that one can do no better. There are codes that meet this bound and they are called maximum distance separable (MDS) codes. One might wonder how MDS codes relate to perfect codes, but they are incomparable; there are perfect codes that are not MDS codes, and conversely MDS codes need not be perfect. The Reed-Solomon code is an example of the latter.

The Reed-Solomon Code

Irving Reed (left) and Gustave Solomon (right).

The Reed-Solomon code has a very simple definition, especially for those of you who have read about secret sharing.

Given a prime power q and integers k \leq n \leq q, the Reed-Solomon code with these parameters is defined by its encoding function E: \mathbb{F}_q^k \to \mathbb{F}_q^n as follows.

  1. Generate \mathbb{F}_q explicitly.
  2. Pick n distinct elements \alpha_i \in \mathbb{F}_q.
  3. A message is a list of k elements c_0, \dots, c_{k-1} \in \mathbb{F}_q. Represent the message as the polynomial m(x) = \sum_j c_jx^j.
  4. The encoding of a message is the tuple E(m) = (m(\alpha_1), \dots, m(\alpha_n)). That is, we just evaluate m(x) at our chosen locations \alpha_1, \dots, \alpha_n.

Here’s an example when q=5, n=3, k=3. We’ll pick the points 1,3,4 \in \mathbb{F}_5, and let our message be (4,1,2), which is represented by the polynomial m(x) = 4 + x + 2x^2. Then the encoding of the message is

\displaystyle E(m) = (m(1), m(3), m(4)) = (2, 0, 0)
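To make this concrete, here is a minimal Python sketch of the encoder. It assumes q is prime, so that arithmetic mod q gives the field \mathbb{F}_q; a real implementation would use a proper finite field type to handle prime powers. The function name is just for the sketch.

def rs_encode(message, points, q):
    # Evaluate the message polynomial m(x) = sum_j c_j x^j at each chosen point, mod a prime q.
    def evaluate(coeffs, a):
        return sum(c * pow(a, j, q) for j, c in enumerate(coeffs)) % q
    return tuple(evaluate(message, a) for a in points)

# The worked example above: q = 5, points 1, 3, 4, and message (4, 1, 2), i.e. m(x) = 4 + x + 2x^2.
print(rs_encode((4, 1, 2), (1, 3, 4), 5))   # (2, 0, 0)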

Decoding the message is a bit more difficult (more on that next time), but for now let’s prove the basic facts about this code.

Fact: The Reed-Solomon code is linear. This is just because polynomials of a limited degree form a vector space. Adding polynomials is adding their coefficients, and scaling them is scaling their coefficients. Moreover, the evaluation of a polynomial at a point is a linear map, i.e. it’s always true that m_1(\alpha) + m_2(\alpha) = (m_1 + m_2)(\alpha), and scaling the coefficients is no different. So the codewords also form a vector space.
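To see the linearity very concretely: the encoding map is left multiplication by the k \times n matrix whose (j, i) entry is \alpha_i^j (a transposed Vandermonde matrix), so the code is the row space of that matrix. Here is a quick sketch checking this against the evaluation encoding for the running example, again assuming prime q.

q, points, msg = 5, (1, 3, 4), (4, 1, 2)

# k x n generator matrix: row j, column i holds alpha_i^j.
G = [[pow(a, j, q) for a in points] for j in range(len(msg))]

# Encoding is the linear map msg -> msg * G over F_5.
codeword = tuple(sum(c * G[j][i] for j, c in enumerate(msg)) % q
                 for i in range(len(points)))
print(codeword)   # (2, 0, 0), matching the direct polynomial evaluation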

Fact: d = n - k + 1, or equivalently the Reed-Solomon code meets the Singleton bound. This follows from a simple fact: any two distinct single-variable polynomials of degree at most k-1 agree on at most k-1 points. Indeed, otherwise two such polynomials f,g would give a new polynomial f-g of degree at most k-1 with more than k-1 roots, and over any field the only such polynomial is the zero polynomial. So any two distinct codewords differ in at least n - (k-1) = n-k+1 of the evaluation points, giving d \geq n-k+1, and the Singleton bound supplies the matching upper bound.

So the Reed-Solomon code is maximum distance separable. Neat!
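If you want to see the distance fact with your own eyes, here is a small brute-force check: it enumerates every codeword of a Reed-Solomon code with q = 5, n = 5, k = 3 (using all of \mathbb{F}_5 as evaluation points) and confirms that the minimum distance is n - k + 1 = 3. This is purely illustrative and obviously doesn’t scale.

from itertools import product

q, k = 5, 3
points = tuple(range(q))    # use all of F_5, so n = 5
n = len(points)

def encode(msg):
    return tuple(sum(c * pow(a, j, q) for j, c in enumerate(msg)) % q for a in points)

codewords = [encode(msg) for msg in product(range(q), repeat=k)]
min_dist = min(sum(a != b for a, b in zip(x, y))
               for x in codewords for y in codewords if x != y)
print(min_dist, n - k + 1)    # 3 3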

One might wonder why one would want good codes with large alphabets. One reason is that with a large alphabet we can interpret a byte as an element of \mathbb{F}_{256} to get error correction on bytes. So if you want to encode some really large stream of bytes (like a DVD) using such a scheme and you get bursts of contiguous errors in small regions (like a scratch), then you can do pretty powerful error correction. In fact, this is more or less the idea behind error correction for DVDs. So I hear. You can read more about the famous applications at Wikipedia.

The Reed-Muller code

The Reed-Muller codes are a neat generalization of the Reed-Solomon code to multivariable polynomials. The reason they’re so useful is not necessarily because they optimize some bound (if they do, I haven’t heard of it), but because they specialize to all sorts of useful codes with useful properties. One of these properties is called local decodability, which has big applications in theoretical computer science.

Anyway, before I state the definition let me remind the reader about compact notation for multivariable polynomials. I can represent the variables x_1, \dots, x_m used in the polynomial as a vector \mathbf{x}, and likewise a monomial x_1^{\alpha_1} x_2^{\alpha_2} \dots x_m^{\alpha_m} by a “vector power” \mathbf{x}^\alpha, where \sum_i \alpha_i is the degree of that monomial. You’d write an entire polynomial as \sum_\alpha c_\alpha \mathbf{x}^{\alpha}, where \alpha ranges over all the exponent vectors you want to allow.

Definition: Let m, l be positive integers and q > l be a prime power. The Reed-Muller code with parameters m,l,q is defined as follows:

  1. The message is the list of coefficients of a polynomial of total degree at most l in m variables, f(\mathbf{x}) = \sum_{\alpha} c_\alpha \mathbf{x}^\alpha.
  2. You encode a message f(\mathbf{x}) as the tuple of all polynomial evaluations (f(\mathbf{x}))_{\mathbf{x} \in \mathbb{F}_q^m}; a small encoder sketch follows this definition.
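Here is a minimal sketch of that encoder, again assuming q is prime and representing a message as a list of coefficients indexed by the exponent vectors of total degree at most l. The monomial ordering and the function names are arbitrary choices made for the sketch.

from itertools import product
from math import comb

def monomials(m, l):
    # Exponent vectors of total degree at most l in m variables.
    return [e for e in product(range(l + 1), repeat=m) if sum(e) <= l]

def evaluate_monomial(point, exponents, q):
    out = 1
    for x, a in zip(point, exponents):
        out = (out * pow(x, a, q)) % q
    return out

def rm_encode(coeffs, m, l, q):
    # Evaluate f(x) = sum_alpha c_alpha x^alpha at every point of F_q^m (q prime).
    exps = monomials(m, l)
    assert len(coeffs) == len(exps) == comb(m + l, m)   # k coefficients
    def f(point):
        return sum(c * evaluate_monomial(point, e, q) for c, e in zip(coeffs, exps)) % q
    return tuple(f(p) for p in product(range(q), repeat=m))   # n = q^m evaluations

# Tiny example: m = 2 variables, degree l = 1, q = 3, so k = comb(3, 2) = 3 and n = 9.
print(rm_encode((1, 2, 0), m=2, l=1, q=3))   # evaluations of 1 + 2*x2 in this monomial ordering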

Here the actual parameters of the code are n = q^m and k = \binom{m+l}{m}, the number of possible coefficients. Finally d = (1 - l/q)n, and we can prove this in the same way as we did for the Reed-Solomon code, using a beefed up fact about the number of roots of a multivariate polynomial:

Fact: Two distinct multivariate polynomials of degree at most l over a finite field \mathbb{F}_q agree on at most an l/q fraction of the points of \mathbb{F}_q^m.

For messages of desired length k, a clever choice of parameters gives a good code. Let m = \log k / \log \log k, q = \log^2 k, and pick l such that \binom{m+l}{m} = k. Then the Reed-Muller code has block length n = k^2, which is polynomial in k, and because l = o(q) the distance is d = (1-o(1))n, i.e. the relative distance d/n tends to 1.
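For a sanity check on this choice of parameters, here is a throwaway snippet (purely illustrative; it glosses over rounding q to an actual prime power) that picks m and q as above, finds the smallest l with \binom{m+l}{m} \geq k, and prints the resulting block length and relative distance.

from math import comb, log

def rm_parameters(k):
    m = round(log(k, 2) / log(log(k, 2), 2))              # m ~ log k / log log k
    q = round(log(k, 2)) ** 2                              # q ~ log^2 k (round to a prime power in practice)
    l = next(l for l in range(q) if comb(m + l, m) >= k)   # smallest degree with at least k coefficients
    return m, q, l, q ** m, 1 - l / q                      # block length n = q^m, relative distance 1 - l/q

for k in (2 ** 10, 2 ** 16, 2 ** 20):
    m, q, l, n, rel_dist = rm_parameters(k)
    print(k, m, q, l, n, round(rel_dist, 3))   # n stays polynomial in k and 1 - l/q climbs toward 1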

A fun fact about Reed-Muller codes: a first-order Reed-Muller code was apparently used on the Mariner 9 mission to relay image data back to Earth.

The Way Forward

So we defined Reed-Solomon and Reed-Muller codes, but we haven’t really done any programming yet. The reason is that the encoding algorithms are very straightforward. If you’ve been following this blog you’ll know we have already written code to explicitly represent polynomials over finite fields, and extending that code to multivariable polynomials, at least for the sake of encoding the Reed-Muller code, is straightforward.

The real interesting algorithms come when you’re trying to decode. For example, in the Reed-Solomon code we’d take as input a bunch of points in a plane (over a finite field), only some of which are consistent with the underlying polynomial that generated them, and we have to reconstruct the unknown polynomial exactly. Even worse, for Reed-Muller we have to do it with many variables!

We’ll see exactly how to do that and produce working code next time.

Until then!

Finding the majority element of a stream

Problem: Given a massive data stream of n values in \{ 1, 2, \dots, m \} and the guarantee that one value occurs more than n/2 times in the stream, determine exactly which value does so.

Solution: (in Python)

def majority(stream):
   # Hold a candidate value and a count of how many unpaired copies of it we've seen.
   held = next(stream)
   counter = 1

   for item in stream:
      if item == held:
         counter += 1
      elif counter == 0:
         # The old candidate has been fully paired off; adopt the new item as the candidate.
         held = item
         counter = 1
      else:
         # Pair this item against one unpaired copy of the held candidate.
         counter -= 1

   return held

Discussion: Let’s prove correctness. Say that s is the unknown value that occurs more than n/2 times. The idea of the algorithm is that if you could pair up elements of your stream so that distinct values are paired up, and then you “kill” these pairs, then s will always survive. The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count of how many unpaired copies of that value you’ve seen). Then when you come across a different element, you decrement the counter and implicitly account for one new pair; and when the counter is already zero, you adopt the new element as the held value.

Let’s analyze the complexity of the algorithm. Clearly the algorithm only uses a single pass through the data. Next, if the stream has size n, then this algorithm uses O(\log(n) + \log(m)) space. Indeed, if the stream entirely consists of a single value (say, a stream of all 1’s) then the counter will be n at the end, which takes \log(n) bits to store. On the other hand, if there are m possible values then storing the largest requires \log(m) bits.

Finally, the guarantee that one value occurs more than n/2 times is necessary. If that’s not the case, the algorithm could output anything (including the most infrequent element!). Moreover, without this guarantee every algorithm that solves the problem must use at least \Omega(n) space in the worst case. In particular, say that m=n, the first n/2 items are all distinct, and the last n/2 items are all equal to the majority value s. If you do not know s in advance, then you must remember essentially which symbols occurred in the first half of the stream (at least one bit for each), because any of them could turn out to be s. So the guarantee allows us to bypass that barrier.

This algorithm can be generalized to detect k items with frequency above some threshold n/(k+1) using space O(k \log n). The idea is to keep k counters instead of one, adding new elements when any counter is zero. When you see an element not being tracked by your k counters (which are all positive), you decrement all the counters by 1. This is like a k-to-one matching rather than a pairing.
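Here is a sketch of that generalization, usually called the Misra–Gries summary. Note that it only guarantees that every value with frequency above n/(k+1) survives among the tracked candidates; a second pass over the stream is needed to verify the true counts. The function name is just for the sketch.

def frequent_candidates(stream, k):
   # Track at most k candidate values; every value occurring more than n/(k+1) times survives.
   counters = {}   # value -> count of unpaired copies, at most k entries
   for item in stream:
      if item in counters:
         counters[item] += 1
      elif len(counters) < k:
         counters[item] = 1
      else:
         # Untracked item and all k counters positive: decrement everything,
         # implicitly "killing" k+1 distinct values at once.
         for key in list(counters):
            counters[key] -= 1
            if counters[key] == 0:
               del counters[key]
   return set(counters)

print(frequent_candidates([1, 2, 1, 3, 1, 4, 1, 1], k=2))   # contains 1, which appears 5 of 8 times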

Hamming’s Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

  1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
  2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered a simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary \{ 0,1 \} and still grok the important stuff.


Richard Hamming, inventor of Hamming codes. [image source]

What is a code?

The formal definition of a code is simple: a code C is just a subset of \{ 0,1 \}^n for some n. Elements of C are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length k, so that our messages are in \{ 0,1 \}^k. Then we’re really viewing a code C as the image of some encoding function \textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n. We can define C by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that \textup{Enc} is an injective function, so that no two messages get sent to the same codeword. Then |C| = 2^k, and we can call k = \log |C| the message length of C even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with \{ 0,1 \}, the alphabet of your encoded messages could be an arbitrary set \Sigma. So then a code C would be a subset of tuples in \Sigma^n, and we would call q = |\Sigma|.

So we have these parameters n, k, q, and we need one more. This is the minimum distance of a code, which we’ll denote by d. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message y (possibly with noisy bits) we look for the (unencoded) message x whose encoding \textup{Enc}(x) is as close to y as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.

So coding theorists turn this mess of parameters into notation.

Definition: A code C is called an (n, k, d)_q-code if

  • C \subset \Sigma^n for some alphabet \Sigma,
  • k = \log |C|,
  • C has minimum distance d, and
  • the alphabet \Sigma has size q.

The basic goals of coding theory are:

  1. For which values of these four parameters do codes exist?
  2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing k for d=3, and we’ll state a characterization theorem for optimizing k for a general d. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be \{ 0,1 \}^n, that is tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field \mathbb{F}_q for a prime power q, i.e. have your vector space be \mathbb{F}_q^n. We’ll go back and forth between describing a binary code (q=2) over \{ 0,1 \} and a code in \mathbb{F}_q^n. So to say a code is linear means:

  • The zero vector is a codeword.
  • The sum of any two codewords is a codeword.
  • Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an (n,k,d)_q-code C is a k \times n matrix G whose rows form a basis for C.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages x \in \{0,1 \}^k by left multiplication xG. Intuitively, we can think of the bits of x as describing the coefficients of the chosen linear combination of the rows of G, which uniquely describes an element of the subspace. Note that because a k-dimensional subspace of \{ 0,1 \}^n has 2^k elements, we’re not abusing notation by calling k = \log |C| both the message length and the dimension.

For the second description of C, we’ll remind the reader that every linear subspace C has a unique orthogonal complement C^\perp, which is the subspace of all vectors orthogonal to every vector in C.

Definition: Let H^T be a generator matrix for C^\perp. Then H is called a parity check matrix.

Note that H has a basis for C^\perp as its columns. This means it has dimensions n \times (n-k). Moreover, it has the property that x \in C if and only if the left multiplication xH = 0. Having zero dot product with all columns of H characterizes membership in C.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute yH on your received message y, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that y coincides with a different codeword than the one that was sent? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors when the number of errors is smaller than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector x, denoted wt(x), is the number of nonzero entries in x.

Proposition: The minimum distance of a linear code C is the minimum Hamming weight over all nonzero vectors x \in C.

Proof. Consider a nonzero x \in C. On one hand, the zero vector is a codeword and wt(x) is by definition the Hamming distance between x and zero, so the minimum weight is an upper bound on the minimum distance. It’s also a lower bound: if x,y are two distinct codewords, then x-y is a nonzero codeword and wt(x-y) is the Hamming distance between x and y, so every pairwise distance is the weight of some nonzero codeword.


So now we can define our first code, the Hamming code. It will be a (n, k, 3)_2-code. The construction is quite simple. We have fixed d=3, q=2, and we will also fix l = n-k. One can think of this as fixing n and maximizing k, but it will only work for n of a special form.

We’ll construct the Hamming code by describing a parity-check matrix H. In fact, we’re going to see what conditions the minimum distance d=3 imposes on H, and find out those conditions are actually sufficient to get d=3. We’ll start with d \geq 2. If we want to ensure d \geq 2, we need that no vector of Hamming weight 1 is a codeword. Indeed, if e_i is the vector with all zeros except a one in position i, then e_i H = h_i is the i-th row of H. We need e_i H \neq 0, so this imposes the condition that no row of H can be zero. It’s easy to see that this is sufficient for d \geq 2.

Likewise for d \geq 3: given a vector y = e_i + e_j for some positions i \neq j, we need yH = h_i + h_j to be nonzero. But because our sums are mod 2, saying that h_i + h_j \neq 0 is the same as saying h_i \neq h_j. Again it’s an if and only if. So we have the two conditions.

  • No row of H may be zero.
  • All rows of H must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large n can be if the rows of H have length n-k = l. That’s just the number of distinct nonzero binary strings of length l, which is 2^l - 1. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every l \geq 2, there is a (2^l - 1, 2^l - l - 1, 3)_2-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If x \in C and wt(e) = 1 is an error bit in position i, then the incoming message would be y = x + e. Now compute yH = xH + eH = 0 + eH = h_i and flip bit i of y. That is, whichever row of H you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then h_i = i as a binary number. Very slick.
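Here is a minimal sketch of this construction and the decoding trick for l = 3, i.e. the (7, 4, 3)_2 Hamming code. For brevity it brute-forces the code as the set of vectors with zero syndrome rather than building a generator matrix.

from itertools import product

l = 3
n = 2 ** l - 1    # 7

# Parity check matrix H: row i-1 is the binary representation of i (most significant bit first).
H = [[(i >> (l - 1 - b)) & 1 for b in range(l)] for i in range(1, n + 1)]

def syndrome(y):
    # Compute yH over F_2: one dot product per column of H.
    return [sum(y[i] * H[i][b] for i in range(n)) % 2 for b in range(l)]

# Brute-force the code as everything with zero syndrome (fine for n = 7).
code = [list(v) for v in product([0, 1], repeat=n) if not any(syndrome(list(v)))]
assert len(code) == 2 ** (n - l)    # 2^k codewords with k = n - l = 4

def correct_single_error(y):
    # If exactly one bit was flipped, the syndrome read as a binary number is its 1-based position.
    i = int("".join(map(str, syndrome(y))), 2)
    if i != 0:
        y = y.copy()
        y[i - 1] ^= 1
    return y

x = code[5]                  # any codeword
y = x.copy(); y[2] ^= 1      # corrupt position 3
assert correct_single_error(y) == x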

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number n-k “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix G of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on G and get a new generator matrix that looks like [I \mid A], where I is the identity matrix of the appropriate size and A is some junk (if the leading columns happen to be linearly dependent, you also permute coordinates, which yields an equivalent code with the same parameters). The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by A. It’s a different encoding function on \{ 0,1\}^k, but it has the same image in \{ 0,1 \}^n, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
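As a sketch of that reduction, here is mod-2 Gaussian elimination putting a binary generator matrix into the form [I \mid A]; it assumes the first k columns are linearly independent (otherwise you’d permute columns first, as noted above). The example matrix is one choice of generator matrix for the (7,4,3)_2 Hamming code, with its rows being codewords for the lexicographically ordered H.

def systematic_form(G):
    # Row reduce a binary generator matrix over F_2 into the form [I | A].
    # Assumes the leading k columns are linearly independent.
    G = [row[:] for row in G]
    k = len(G)
    for col in range(k):
        # Find a row with a 1 in this column and swap it into place.
        pivot = next(r for r in range(col, k) if G[r][col] == 1)
        G[col], G[pivot] = G[pivot], G[col]
        # Eliminate the column from every other row; addition mod 2 is XOR.
        for r in range(k):
            if r != col and G[r][col] == 1:
                G[r] = [a ^ b for a, b in zip(G[r], G[col])]
    return G

G = [[1, 1, 1, 0, 0, 0, 0],    # each row is a codeword of the (7,4,3)_2 Hamming code
     [1, 0, 0, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
for row in systematic_form(G):
    print(row)                 # the left 4x4 block is the identity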

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds \Theta(\log n) check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of n requires \log n bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing d given a fixed choice of n, k, and q. To get this, let V_n(r) denote the volume of a ball of radius r in the space \mathbb{F}_2^n. I.e., if you fix any string x (it doesn’t matter which), V_n(r) is the size of the set \{ y : d(x,y) \leq r \}, where d(x,y) is the Hamming distance.

There is a theorem called the Hamming bound, which describes a limit to how much you can pack disjoint balls of radius r inside \mathbb{F}_2^n.

Theorem: If an (n,k,d)_2-code exists, then

\displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n

Proof. The proof is quite simple. To say a code C has distance d means that for every codeword x \in C there is no other codeword y with d(x,y) < d. In other words, for any two distinct codewords x,y, the balls centered at x and y of radius r = \lfloor (d-1)/2 \rfloor are disjoint. The d-1 and the floor are there so that, e.g., when d=3 balls of radius 1 are guaranteed not to overlap. Now |C| = 2^k, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are only 2^n strings in \mathbb{F}_2^n, establishing the desired inequality.


Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
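If you want a quick numeric nudge toward that exercise, here is a check of the counting identity behind it: with n = 2^l - 1 and k = n - l, we get 2^k V_n(1) = 2^n on the nose.

from math import comb

def ball_volume(n, r):
    # V_n(r): number of strings within Hamming distance r of a fixed string in F_2^n.
    return sum(comb(n, i) for i in range(r + 1))

for l in range(2, 10):
    n, k = 2 ** l - 1, 2 ** l - l - 1
    print(l, 2 ** k * ball_volume(n, 1) == 2 ** n)   # True every time: the Hamming bound is met exactly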

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, and some of them are nonlinear. But some of them are “trivial.” For example, when d=1 you can just use the identity encoding to get the code C = \mathbb{F}_2^n. You can also have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times; these are called “repetition codes.” All three of these examples are deemed trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietäväinen ‘73]: Let C be a nontrivial perfect (n,k,d)_q-code. Then the parameters must either be those of a Hamming code, or one of the following two:

  • A (23, 12, 7)_2-code
  • A (11, 6, 5)_3-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a nontrivial perfect code is realized by a linear code: a Hamming code or one of the two Golay codes.

So this theorem was a big deal in coding theory. The Golay and Hamming codes were discovered within a year of each other, in 1949 and 1950, but the classification of the possible parameters of perfect codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes for a different quantity, and the codes that meet it are incomparable with perfect codes. We’ll define the Reed-Solomon codes and show that they meet the Singleton bound. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!

Zero-One Laws for Random Graphs

Last time we saw a number of properties of graphs, such as connectivity, where the probability that an Erdős–Rényi random graph G(n,p) satisfies the property is asymptotically either zero or one. And this zero or one depends on whether the parameter p is above or below a universal threshold (that depends only on n and the property in question).

To remind the reader, the Erdős–Rényi random “graph” G(n,p) is a distribution over graphs that you draw from by including each edge independently with probability p. Last time we saw that the existence of an isolated vertex has a sharp threshold at (\log n) / n, meaning if p is asymptotically smaller than the threshold there will certainly be isolated vertices, and if p is larger there will certainly be no isolated vertices. We also gave a laundry list of other properties with such thresholds.

One might want to study this phenomenon in general. Even if we might not be able to find all the thresholds we want for a given property, can we classify which properties have thresholds and which do not?

The answer turns out to be mostly yes! For large classes of properties, there are proofs that say things like, “either this property holds with probability tending to one, or it holds with probability tending to zero.” These are called “zero-one laws,” and they’re sort of meta theorems. We’ll see one such theorem in this post relating to constant edge-probabilities in random graphs, and we’ll remark on another at the end.

Sentences about graphs in first order logic

A zero-one law generally works by defining a class of properties, and then applying a generic first/second moment-type argument to every property in the class.

So first we define what kinds of properties we’ll discuss. We’ll pick a large class: anything that can be expressed in first-order logic in the language of graphs. That is, any finite logical statement that uses existential and universal quantifiers over variables, and whose only relation (test) is whether an edge exists between two vertices. We’ll call this test e(x,y). So you write some sentence P in this language, you take a graph G, and you can ask whether P(G) = 1, i.e. whether the graph satisfies the sentence.

This seems like a really large class of properties, and it is, but let’s think carefully about what kinds of properties can be expressed this way. Clearly the existence of a triangle can be written this way; it’s just the sentence

\exists x,y,z : e(x,y) \wedge e(y,z) \wedge e(x,z)

I’m using \wedge for AND, and \vee for OR, and \neg for NOT. Similarly, one can express the existence of a clique of size k, or the existence of an independent set of size k, or a path of a fixed length, or whether there is a vertex of maximal degree n-1.

Here’s a question: can we write a formula which will be true for a graph if and only if it’s connected? Well such a formula seems like it would have to know about how many vertices there are in the graph, so it could say something like “for all x,y there is a path from x to y.” It seems like you’d need a family of such formulas that grows with n to make anything work. But this isn’t a proof; the question remains whether there is some other tricky way to encode connectivity.

But as it turns out, connectivity is not expressible by any first-order sentence. We won’t prove it here, but we will note at the end of the article that connectivity is in a different class of properties that you can prove has a similar zero-one law.

The zero-one law for first order logic

So the theorem about first-order expressible sentences is as follows.

Theorem: Let P be a property of graphs that can be expressed in the first order language of graphs (with the e(x,y) relation). Then for any constant p, the probability that P holds in G(n,p) has a limit of zero or one as n \to \infty.

Proof. We’ll prove the simpler case of p=1/2, but the general case is analogous. Given a graph G drawn from G(n,1/2), what we’ll do is define a countably infinite family of first-order sentences \varphi_{k,l}, and argue that they form a sort of “basis” for all first-order sentences about graphs.

First let’s describe the \varphi_{k,l}. For any k,l \in \mathbb{N}, the sentence will assert that for every set of k vertices and every disjoint set of l vertices, there is some other vertex connected to all of the first k but to none of the last l.

\displaystyle \varphi_{k,l} : \forall x_1, \dots, x_k, y_1, \dots, y_l \exists z : \\ e(z,x_1) \wedge \dots \wedge e(z,x_k) \wedge \neg e(z,y_1) \wedge \dots \wedge \neg e(z,y_l).

In other words, these formulas encapsulate every possible incidence pattern for a single vertex. It is a strange set of formulas, but they have a very nice property we’re about to get to. So for a fixed \varphi_{k,l}, what is the probability that it’s false on n vertices? We want to give an upper bound and hence show that the formula is true with probability approaching 1. That is, we want to show that all the \varphi_{k,l} are true with probability tending to 1.

Computing the probability: there are \binom{n}{k} \binom{n-k}{l} ways to choose the two sets. Fix such a choice and fix another vertex z outside both sets; the probability that z has the good connections is 2^{-(k+l)}, so the probability that z is not good is 1 - 2^{-(k+l)}. The edges from different candidates z are independent, so the probability that every one of the n - (k+l) candidate vertices fails is (1 - 2^{-(k+l)})^{n-k-l}. Taking a union bound over all choices of the two sets gives an upper bound on the probability of \varphi_{k,l} being false:

\displaystyle \binom{n}{k}\binom{n-k}{l} (1-2^{-(k+l)})^{n-k-l}

And k, l are constants, so the two binomial coefficients are polynomials in n while the rightmost term decays exponentially in n, so the whole expression tends to zero, as desired.

Break from proof.
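As an empirical aside (not part of the proof), here is a small Monte Carlo sketch that samples G(n, 1/2) and checks \varphi_{1,1} directly; the observed fraction of satisfying samples should climb toward 1 as n grows, as the bound predicts. The adjacency-matrix representation and the choice k = l = 1 are just to keep the brute-force check cheap.

import random
from itertools import combinations

def satisfies_phi(adj, k, l):
    # Check phi_{k,l}: for every disjoint pair of vertex sets of sizes k and l,
    # some other vertex z is adjacent to all of the first set and none of the second.
    vertices = set(range(len(adj)))
    for xs in combinations(vertices, k):
        for ys in combinations(vertices - set(xs), l):
            others = vertices - set(xs) - set(ys)
            if not any(all(adj[z][x] for x in xs) and not any(adj[z][y] for y in ys)
                       for z in others):
                return False
    return True

def sample_gnp(n, p=0.5):
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

random.seed(0)
for n in (10, 20, 40, 50):
    trials = 50
    hits = sum(satisfies_phi(sample_gnp(n), k=1, l=1) for _ in range(trials))
    print(n, hits / trials)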

A bit of model theory

So what we’ve proved so far is that the probability of every formula of the form \varphi_{k,l} being satisfied in G(n,1/2) tends to 1.

Now look at the set of all such formulas

\displaystyle \Phi = \{ \varphi_{k,l} : k,l \in \mathbb{N} \}

We ask: is there any graph which satisfies all of these formulas? Certainly it cannot be finite, because a finite graph on n vertices would not be able to satisfy formulas with sufficiently large values of k, l > n. But indeed, there is a countably infinite graph that works: the Rado graph.


The Rado graph has some really interesting properties, such as that it contains every finite and countably infinite graph as induced subgraphs. Basically this means, as far as countably infinite graphs go, it’s the big momma of all graphs. It’s the graph in a very concrete sense of the word. It satisfies all of the formulas in \Phi, and in fact it’s uniquely determined by this, meaning that if any other countably infinite graph satisfies all the formulas in \Phi, then that graph is isomorphic to the Rado graph.

But for our purposes (proving a zero-one law), there’s a better perspective than graph theory on this object. In the logic perspective, the set \Phi is called a theory, meaning a set of statements that you consider “axioms” in some logical system. And we’re asking whether there is any model realizing the theory. That is, is there some mathematical object (a structure built from numbers, or sets, or whatever) that interprets the language and satisfies all the axioms?

A good analogy comes from the rational numbers, because they satisfy a similar property among all ordered sets. In fact, the rational numbers are the unique countable, ordered set with the property that it has no biggest/smallest element and is dense. That is, in the ordering there is always another element between any two elements you want. So the theorem says if you have two countable sets with these properties, then they are actually isomorphic as ordered sets, and they are isomorphic to the rational numbers.

So, while we won’t prove that the Rado graph is a model for our theory \Phi, we will use that fact to great benefit. One consequence of having a theory with a model is that the theory is consistent, meaning it can’t imply any contradictions. Another fact is that this theory \Phi is complete. Completeness means that any formula or its negation is logically implied by the theory. Note these are syntactic implications (using the standard deduction rules of first-order logic), and have nothing to do with the model interpreting the theory.

The proof that \Phi is complete actually follows from the uniqueness of the Rado graph as the only countable model of \Phi. Suppose to the contrary that \Phi is not complete: then there has to be some formula \psi such that neither \psi nor its negation is provable from \Phi. Now extend \Phi in two ways: by adding \psi and by adding \neg \psi. Both of the new theories are consistent (since neither \psi nor \neg \psi was provable) and still countable, and by a theorem from logic this means they both have countable models. But both of these new models are also countable models of \Phi, so they have to both be the Rado graph. But this is very embarrassing for them, because we assumed they disagree on the truth of \psi.

So now we can go ahead and prove the zero-one law theorem.

Return to proof.

Given an arbitrary first-order sentence \varphi, completeness says that either \varphi or its negation can be derived from \Phi. Without loss of generality suppose it’s \varphi. Take the formulas from the theory you need to derive \varphi, and note that since a proof is a finite object, it uses only finitely many of the \varphi_{k,l}. Now look at the probabilities of those \varphi_{k,l}: each is true with probability tending to 1, so with probability tending to 1 they all hold simultaneously, and whenever they do, \varphi holds as well. So \varphi holds with probability tending to 1. And we’re done!


If you don’t like model theory, there is another “purely combinatorial” proof of the zero-one law using something called Ehrenfeucht–Fraïssé games. It is a bit longer, though.

Other zero-one laws

One might naturally ask two questions: what if your probability is not constant, and what other kinds of properties have zero-one laws? Both great questions.

For the first, there are some extra theorems. I’ll just describe one that has always seemed very strange to me. If your probability is of the form p = n^{-\alpha} but \alpha is irrational, then the zero-one law still holds! This is a theorem of Baldwin-Shelah-Spencer, and it really makes you wonder why irrational numbers would be so well behaved while rational numbers are not :)

For the second question, there is another theorem about monotone properties of graphs. Monotone properties come in two flavors, so called “increasing” and “decreasing.” I’ll describe increasing monotone properties and the decreasing counterpart should be obvious. A property is called monotone increasing if adding edges can never destroy the property. That is, with an empty graph you don’t have the property (or maybe you do), and as you start adding edges eventually you suddenly get the property, but then adding more edges can’t cause you to lose the property again. Good examples of this include connectivity, or the existence of a triangle.

So the theorem is that every monotone property has a threshold: if p is asymptotically below it the property holds with probability tending to zero, and if p is asymptotically above it the probability tends to one. Great!

It’s not so often that you get to see these neat applications of logic and model theory to graph theory and (by extension) computer science. But when you do get to apply them they seem very powerful and mysterious. I think it’s a good thing.

Until next time!