# Hamming’s Code

## Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

- What is the best encoding for information when you are guaranteed that your communication channel is error free?
- Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary and still grok the important stuff.

## What is a code?

The formal definition of a code is simple: a *code* $C$ is just a subset of $\{0,1\}^n$ for some $n$. Elements of $C$ are called *codewords.*

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length $k$, so that our messages are in $\{0,1\}^k$. Then we’re really viewing a code $C$ as the image of some encoding function $\textup{Enc}: \{0,1\}^k \to \{0,1\}^n$. We can define $C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $\textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $|C| = 2^k$, and we can call $k = \log |C|$ the *message length* of $C$ even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with $\{0,1\}$, the alphabet of your encoded messages could be an arbitrary set $\Sigma$. So then a code $C$ would be a subset of tuples in $\Sigma^n$, and we would call $q = |\Sigma|$.

So we have these parameters $n, k, q$, and we need one more. This is the *minimum distance* of a code, which we’ll denote by $d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by *Hamming distance* I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message $y$ (possibly with noisy bits) we look for the (unencoded) message $x$ whose encoding $\textup{Enc}(x)$ is as close to $y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.

So coding theorists turn this mess of parameters into notation.

**Definition: **A code $C$ is called an $(n, k, d)_q$-code if

- $C \subset \Sigma^n$ for some alphabet $\Sigma$,
- $k = \log |C|$,
- $C$ has minimum distance $d$, and
- the alphabet $\Sigma$ has size $q$.

The basic goals of coding theory are:

- For which values of these four parameters do codes exist?
- Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $k$ for $d = 3$, and we’ll state a characterization theorem for optimizing $k$ for a general $d$. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

## Linear codes and the Hamming code

A code is called *linear* if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $\{0,1\}^n$, that is, tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $\mathbb{F}_q$ for a prime power $q$, i.e. have your vector space be $\mathbb{F}_q^n$. We’ll go back and forth between describing a binary code over $\{0,1\}$ and a code in $\mathbb{F}_q^n$. So to say a code is linear means:

- The zero vector is a codeword.
- The sum of any two codewords is a codeword.
- Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

**Definition: **A *generator matrix* of an $(n, k, d)_q$-code $C$ is a $k \times n$ matrix $G$ whose rows form a basis for $C$.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode a message $x \in \{0,1\}^k$ by the left multiplication $xG$. Intuitively, we can think of the bits of $x$ as describing the coefficients of the chosen linear combination of the rows of $G$, which uniquely describes an element of the subspace. Note that because a $k$-dimensional subspace of $\{0,1\}^n$ has $2^k$ elements, we’re not abusing notation by calling $k$ both the message length and the dimension.
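To make encoding by left multiplication concrete, here is a minimal sketch in Python (assuming numpy is available). The particular $4 \times 7$ matrix below is just an assumed example with linearly independent rows; any such matrix works the same way.

```python
import numpy as np

# An example generator matrix G for a 4-dimensional linear code inside {0,1}^7.
# Its rows are linearly independent, so they span a subspace with 2^4 codewords.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def encode(x, G):
    """Encode a length-k message x as the codeword xG, with arithmetic mod 2."""
    return (np.array(x) @ G) % 2

# The 2^4 = 16 codewords are exactly the images of the 16 possible messages.
for m in range(16):
    x = [(m >> i) & 1 for i in range(4)]
    print(x, "->", encode(x, G))
```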

For the second description of $C$, we’ll remind the reader that every linear subspace $C$ has a unique *orthogonal complement* $C^{\perp}$, which is the subspace of vectors that are orthogonal to all vectors in $C$.

**Definition: **Let $H^T$ be a generator matrix for $C^{\perp}$. Then $H$ is called a *parity check* matrix.

Note $H$ has the basis for $C^{\perp}$ as *columns*. This means it has dimensions $n \times (n-k)$. Moreover, it has the property that $x \in C$ *if and only if* the left multiplication $xH = 0$. Having zero dot product with all columns of $H$ characterizes membership in $C$.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute $yH$ on your received message $y$, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that $y$ coincided with a different codeword than the one that was sent? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.
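As a hedged sketch of that error detection, here is a parity check matrix paired by hand (an assumed pairing, not a standard library object) with the example generator matrix from the previous snippet, again using numpy. A received word is a codeword exactly when $yH = 0$.

```python
import numpy as np

# A 7x3 parity check matrix H chosen so that G @ H = 0 mod 2 for the example
# generator above; a word y lies in the code exactly when the "syndrome" yH is zero.
H = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

def syndrome(y, H):
    """Compute yH mod 2; a nonzero result means an error was detected."""
    return (np.array(y) @ H) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # the encoding of the message 1011
print(syndrome(codeword, H))                 # [0 0 0]: no error detected
corrupted = codeword.copy()
corrupted[2] ^= 1                            # flip one bit in transit
print(syndrome(corrupted, H))                # nonzero: error detected
```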

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly *dependent* set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The *Hamming weight* of a vector $x$, denoted $\textup{wt}(x)$, is the number of nonzero entries in $x$.

**Proposition: **The minimum distance of a linear code $C$ is the minimum Hamming weight over all nonzero vectors $x \in C$.

*Proof. *Consider a nonzero $x \in C$. On one hand, the zero vector is a codeword and $\textup{wt}(x)$ is by definition the Hamming distance between $x$ and zero, so it is an upper bound on the minimum distance. In fact, it’s also a lower bound: if $x \neq y$ are two codewords, then $x - y$ is a nonzero codeword and $\textup{wt}(x - y)$ is the Hamming distance between $x$ and $y$. $\square$

So now we can define our first code, the Hamming code. It will be a $(2^m - 1, 2^m - m - 1, 3)_2$-code. The construction is quite simple. We have fixed $d = 3$, and we will also fix $n = 2^m - 1$. One can think of this as fixing $n$ and maximizing $k$, but it will only work for $n$ of a special form.

We’ll construct the Hamming code by describing a parity-check matrix $H$. In fact, we’re going to see what conditions the minimum distance imposes on $H$, and find out those conditions are actually sufficient to get $d = 3$. We’ll start with 2. If we want to ensure $d \geq 2$, then you need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if $e_i$ is a vector with all zeros except a one in position $i$, then $e_i H$ is the $i$-th row of $H$. We need $e_i H \neq 0$, so this imposes the condition that no row of $H$ can be zero. It’s easy to see that this is sufficient for $d \geq 2$.

Likewise for $d \geq 3$, given a vector $e_i + e_j$ for some positions $i \neq j$, then $(e_i + e_j)H$ may not be zero. But because our sums are mod 2, saying that $(e_i + e_j)H \neq 0$ is the same as saying $e_i H \neq e_j H$. Again it’s an if and only if. So we have the two conditions.

- No row of may be zero.
- All rows of must be distinct.

That is, *any* parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large can $n$ be if the rows of $H$ have length $m = n - k$? That’s just the number of distinct nonzero binary strings of length $m$, which is $2^m - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

**Theorem: **For every $m > 0$, there is a $(2^m - 1, 2^m - m - 1, 3)_2$-code called the *Hamming code.*

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can *correct* a single error using the Hamming code. If $x$ is a codeword and $e_j$ is an error bit in position $j$, then the incoming message would be $y = x + e_j$. Now compute $yH = xH + e_jH = 0 + e_jH = e_jH$ and flip bit $j$ of $y$. That is, whichever row of $H$ you get as $yH$ tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then $yH = j$ as a binary number. Very slick.
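Here is a small sketch of that construction and decoding rule in Python (with numpy), where the rows of $H$ are the nonzero binary strings in numerical order so the syndrome literally spells out the error position. The helper names are mine, not standard library functions.

```python
import numpy as np

def hamming_parity_check(m):
    """The (2^m - 1) x m parity check matrix whose i-th row (1-indexed) is the
    binary representation of i, exactly as in the construction above."""
    n = 2**m - 1
    return np.array([[(i >> b) & 1 for b in range(m)] for i in range(1, n + 1)])

def correct_single_error(y, H):
    """If y is a codeword with at most one flipped bit, return the corrected word."""
    s = (y @ H) % 2
    j = sum(bit << b for b, bit in enumerate(s))   # read the syndrome as a binary number
    if j != 0:
        y = y.copy()
        y[j - 1] ^= 1                              # rows were 1-indexed, arrays are not
    return y

m = 3
H = hamming_parity_check(m)                        # the (7, 4, 3) Hamming code
codeword = np.zeros(2**m - 1, dtype=int)           # the zero vector is always a codeword
received = codeword.copy()
received[4] ^= 1                                   # corrupt position 4 (0-indexed)
print(correct_single_error(received, H))           # recovers the all-zeros codeword
```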

Before we move on, we should note one interesting feature of linear codes.

**Definition:** A code $C$ is called *systematic* if it can be realized by an encoding function that appends some number $n - k$ of “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $G$ and get a new generator matrix that looks like $(I \mid A)$, where $I$ is the identity matrix of the appropriate size and $A$ is some junk. The point is that encoding using *this *generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $A$. It’s a different encoding function on $\{0,1\}^k$, but it has the same image in $\{0,1\}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
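As a minimal sketch of that change of basis (assuming numpy, and assuming the leading $k$ columns of $G$ are linearly independent; in general one may also have to permute columns, which permutes codeword coordinates), row reduction mod 2 looks like this:

```python
import numpy as np

def systematic_form(G):
    """Row-reduce a generator matrix over F_2 into the shape [I | A]."""
    G = G.copy() % 2
    k, _ = G.shape
    for col in range(k):
        # find a pivot row with a 1 in this column and swap it into place
        pivot = next(r for r in range(col, k) if G[r, col] == 1)
        G[[col, pivot]] = G[[pivot, col]]
        # clear the 1s in this column from every other row (addition mod 2)
        for r in range(k):
            if r != col and G[r, col] == 1:
                G[r] = (G[r] + G[col]) % 2
    return G

G = np.array([[1, 1, 0, 1, 0, 0, 1],
              [0, 1, 1, 0, 1, 0, 1],
              [1, 0, 1, 0, 0, 1, 1],
              [1, 1, 1, 1, 1, 1, 1]])   # some basis for a 4-dimensional code
print(systematic_form(G))               # same row space, now shaped [I | A]
```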

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds $\Theta(\log n)$ check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of $n$ requires $\log n$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

## The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing $d$ given a fixed choice of $n$, $k$, and $q$. To get this let’s define $V_n(r)$ to denote the volume of a ball of radius $r$ in the space $\{0,1\}^n$. I.e., if you fix any string $x$ (doesn’t matter which), $V_n(r)$ is the size of the set $\{y : d(x,y) \leq r\}$, where $d(x,y)$ is the Hamming distance.

There is a theorem called the *Hamming bound, *which describes a limit to how much you can pack disjoint balls of radius $\lfloor (d-1)/2 \rfloor$ inside $\{0,1\}^n$.

**Theorem: **If an $(n, k, d)_2$-code exists, then

$$2^k V_n\left(\left\lfloor \frac{d-1}{2} \right\rfloor\right) \leq 2^n$$

*Proof.* The proof is quite simple. To say a code $C$ has distance $d$ means that for every codeword $x \in C$ there is no other codeword $y \in C$ within Hamming distance $d - 1$ of $x$. In other words, the balls centered around both $x, y$ of radius $\lfloor (d-1)/2 \rfloor$ are disjoint. The extra difference of one is for odd $d$, e.g. when $d = 3$ you need balls of radius 1 to guarantee no overlap. Now $|C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most $2^n$ strings in $\{0,1\}^n$, establishing the desired inequality. $\square$

Now a code is called *perfect *if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
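For readers who would rather check the exercise numerically than prove it, here is a quick sketch (plain Python) verifying that the Hamming code’s parameters meet the Hamming bound with equality for several values of $m$.

```python
from math import comb

def ball_volume(n, r):
    """V_n(r): the number of binary strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

# For a (2^m - 1, 2^m - m - 1, 3)_2 Hamming code, the radius is floor((d-1)/2) = 1.
for m in range(2, 8):
    n = 2**m - 1
    k = n - m
    print(m, 2**k * ball_volume(n, 1) == 2**n)   # prints True for every m
```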

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, and some of them are nonlinear. But some of them are “trivial.” For example, when $d = 1$ you can just use the identity encoding to get the code $C = \{0,1\}^n$. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called “repetition codes,” and all three of these examples are called *trivial *(as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

**Theorem [van Lint ’71, Tietäväinen ’73]: **Let $C$ be a nontrivial perfect code. Then the parameters of $C$ must either be that of a Hamming code, or one of the two:

- A $(23, 12, 7)_2$-code
- A $(11, 6, 5)_3$-code

The last two examples are known as the *binary* and* ternary Golay codes*, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three linear codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We’ll define the Reed-Solomon codes and show they optimize this bound as well. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!

# A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel over some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and “Hi” is much more likely than “asphyxiation.” The problems are:

- Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the *noiseless coding problem.*
- Say communication is not expensive, but error prone. In particular, each bit of your message is erroneously flipped with some known probability $p$, and all the errors are independent. Then the question is, how can one encode their messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the *noisy coding problem.*

There are actually many models of “communication with noise” that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string “00000000” (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the “eight zeros” string as a single 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average $\frac{1}{2} \cdot 7 - \frac{1}{2} \cdot 1 = 3$ bits in every message.
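A two-line sanity check of that arithmetic (plain Python), under the stated distribution:

```python
# The all-zeros string (probability 1/2) shrinks from 8 bits to 1;
# every other string (total probability 1/2) grows from 8 bits to 9.
expected_new_length = 0.5 * 1 + 0.5 * 9
print(8 - expected_new_length)   # 3.0 bits saved on average per message
```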

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of *entropy, *which in this context measures the “information content” of a message, or the expected “compressibility” of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we’re not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

## Entropy and noiseless encoding

Before we can state Shannon’s theorems we have to define entropy.

**Definition: **Suppose $D$ is a distribution on a finite set $X$, and I’ll use $D(x)$ to denote the probability of drawing $x$ from $D$. The *entropy *of $D$, denoted $H(D)$, is defined as

$$H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}$$

It is strange to think about this sum in abstract, so let’s suppose $D$ is a biased coin flip with bias $0 \leq p \leq 1$ of landing heads. Then we can plot the entropy as follows

The horizontal axis is the bias $p$, and the vertical axis is the value of $H(D)$, which with some algebra is $-p \log p - (1-p)\log(1-p)$. From the graph above we can see that the entropy is maximized when $p = 1/2$ and minimized at $p = 0, 1$. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.
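If you’d rather poke at the function than do the calculus, here is a small sketch (plain Python, base-2 logarithms so entropy is measured in bits):

```python
import math

def binary_entropy(p):
    """H(D) for a coin with bias p: -p log p - (1-p) log(1-p), with H = 0 at p = 0, 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]:
    print(p, round(binary_entropy(p), 4))   # maximized (value 1 bit) at p = 0.5
```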

So what is this saying? A high entropy measures how *incompressible *something is, and low entropy gives us lots of compressibility. Indeed, if our message consisted of the results of 10 such coin flips, and $p$ was close to 1, we would be able to compress a lot by encoding strings with lots of 1’s using few bits. On the other hand, if $p = 1/2$ we couldn’t get any compression at all. All strings would be equally likely.

Shannon’s famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: $\{0,1\}^*$ is the set of all binary strings.

**Theorem (Noiseless Coding Theorem) [Shannon 1948]: **For every finite set $X$ and distribution $D$ over $X$, there are encoding and decoding functions $\textup{Enc}: X \to \{0,1\}^*$, $\textup{Dec}: \{0,1\}^* \to X$ such that

- The encoding/decoding actually works, i.e. $\textup{Dec}(\textup{Enc}(x)) = x$ for all $x$.
- The expected length of an encoded message is between $H(D)$ and $H(D) + 1$.

Moreover, *no encoding scheme can do better.*

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you *precisely* know how long to expect your messages to be. And you know that you can’t hope to do any better!

As the title of this post says, we aren’t going to give a proof here. Wikipedia has a proof if you’re really interested in the details.
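The theorem as stated doesn’t tell you how to build the encoder, but classical constructions do achieve it. As a hedged illustration (not the scheme from Shannon’s proof), Huffman coding produces a prefix-free code whose expected length lands between $H(D)$ and $H(D) + 1$; here is a short sketch in Python with a made-up distribution.

```python
import heapq
import math

def huffman_lengths(dist):
    """Return {symbol: codeword length} for an optimal prefix-free code."""
    heap = [(p, i, {x: 0}) for i, (x, p) in enumerate(dist.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)
        p2, _, t2 = heapq.heappop(heap)
        merged = {x: depth + 1 for x, depth in {**t1, **t2}.items()}  # merging adds 1 to every depth
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

D = {"zeros": 0.5, "a": 0.2, "b": 0.15, "c": 0.1, "d": 0.05}   # a made-up distribution
lengths = huffman_lengths(D)
entropy = sum(p * math.log2(1 / p) for p in D.values())
expected_length = sum(D[x] * lengths[x] for x in D)
print(entropy, expected_length)   # the expected length falls in [H(D), H(D) + 1)
```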

## Noisy Coding

The noisy coding problem is more interesting because in a certain sense (that was not solved by Shannon) it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called *error correction*. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can’t do so without your encoding “blowing up” the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you since many valid messages would differ by only a single bit. So the question is does such an encoding exist, and if so how much do we need to blow up the message length? Shannon’s second theorem answers both questions.

**Theorem (Noisy Coding Theorem) [Shannon 1948]: **For any constant noise rate $p < 1/2$, there is an encoding scheme $\textup{Enc}: \{0,1\}^k \to \{0,1\}^{ck}$, $\textup{Dec}: \{0,1\}^{ck} \to \{0,1\}^k$ with the following property. If $x$ is the message sent by Alice, and $y$ is the message received by Bob (i.e. $\textup{Enc}(x)$ with random noise), then $\Pr[\textup{Dec}(y) = x] \to 1$ as a function of $n = ck$. In addition, if we denote by $H(p)$ the entropy of the distribution of an error on a single bit, then choosing any $c > \frac{1}{1 - H(p)}$ guarantees the existence of such an encoding scheme, and no such scheme exists for any smaller $c$.

This theorem formalizes a “yes” answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that *it only depends on the noise rate*.

A word about the proof: it’s probabilistic. That is, Shannon proved such an encoding scheme exists by picking $\textup{Enc}$ to be a random function (!). Then $\textup{Dec}(y)$ finds (nonconstructively) the string $x$ such that the number of bits differing between $\textup{Enc}(x)$ and $y$ is minimized. This “number of bits that differ” measure is called the *Hamming distance. *Then he showed using relatively standard probability tools that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for $c$ takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan’s MIT class.
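A toy version of that idea (a sketch for intuition, not Shannon’s proof or a practical code): pick a random encoding table, push a codeword through a binary symmetric channel, and decode to the nearest codeword in Hamming distance. All parameters below are arbitrary choices for illustration.

```python
import random

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

k, c, p = 4, 5, 0.05                 # message bits, blowup factor, bit-flip probability
n = c * k
random.seed(0)

# A completely random encoding table from 4-bit messages to 20-bit codewords.
enc = {m: tuple(random.randint(0, 1) for _ in range(n)) for m in range(2**k)}

def channel(word):
    """Flip each bit independently with probability p."""
    return tuple(bit ^ (random.random() < p) for bit in word)

def decode(received):
    """Nearest-codeword decoding: return the message whose codeword is closest."""
    return min(enc, key=lambda m: hamming_distance(enc[m], received))

message = 11
print(decode(channel(enc[message])) == message)   # True with high probability
```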

The non-algorithmic nature of his solution is what opened the door to more research. The question moved past “Are there any encodings that work?” to the more interesting “What is the algorithmic cost of constructing such an encoding?” It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst case guarantees. In other words, if I can guarantee *at most 12 errors*, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you’d definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of *error correcting codes *and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing.

We’ll cover Hamming’s basic codes next time, and then move on to Reed-Solomon codes and others. Until then!

# Zero-One Laws for Random Graphs

Last time we saw a number of properties of graphs, such as connectivity, where the probability that an Erdős–Rényi random graph satisfies the property is asymptotically either zero or one. And this zero or one depends on whether the parameter $p$ is above or below a universal threshold (that depends only on $n$ and the property in question).

To remind the reader, the Erdős–Rényi random “graph” $G(n,p)$ is a distribution over graphs on $n$ vertices that you draw from by including each edge independently with probability $p$. Last time we saw that the existence of an isolated vertex has a sharp threshold at $p = \frac{\log n}{n}$, meaning if $p$ is asymptotically smaller than the threshold there will certainly be isolated vertices, and if $p$ is larger there will certainly be *no* isolated vertices. We also gave a laundry list of other properties with such thresholds.

One might want to study this phenomenon in general. Even if we might not be able to find all the thresholds we want for a given property, can we classify which properties have thresholds and which do not?

The answer turns out to be mostly yes! For large classes of properties, there are proofs that say things like, “either this property holds with probability tending to one, or it holds with probability tending to zero.” These are called “zero-one laws,” and they’re sort of meta theorems. We’ll see one such theorem in this post relating to constant edge-probabilities in random graphs, and we’ll remark on another at the end.

## Sentences about graphs in first order logic

A zero-one law generally works by defining a class of properties, and then applying a generic first/second moment-type argument to every property in the class.

So first we define what kinds of properties we’ll discuss. We’ll pick a large class: **anything that can be expressed in first-order logic** in the language of graphs. That is, any finite logical statement that uses existential and universal quantifiers over variables, and whose only relation (test) is whether an edge exists between two vertices. We’ll call this test $e(x,y)$. So you write some sentence $\varphi$ in this language, and you take a graph $G$, and you can ask $G \models \varphi$, whether the graph satisfies the sentence.

This seems like a *really* large class of properties, and it is, but let’s think carefully about what kinds of properties can be expressed this way. Clearly the existence of a triangle can be written this way, it’s just the sentence

$$\exists x,y,z : e(x,y) \wedge e(y,z) \wedge e(x,z)$$

I’m using $\wedge$ for AND, $\vee$ for OR, and $\neg$ for NOT. Similarly, one can express the existence of a clique of size $k$, or the existence of an independent set of size $k$, or a path of a fixed length, or whether there is a vertex of maximal degree $k$.

Here’s a question: can we write a formula which will be true for a graph if and only if it’s connected? Well such a formula seems like it would have to know about how many vertices there are in the graph, so it could say something like “for all $x, y$ there is a path from $x$ to $y$.” It seems like you’d need a family of such formulas that grows with $n$ to make anything work. But this isn’t a proof; the question remains whether there is some other tricky way to encode connectivity.

But as it turns out, connectivity is *not* a property you can express with a first-order formula. We won’t prove it here, but we will note at the end of the article that connectivity is in a different class of properties that you can prove has a similar zero-one law.

## The zero-one law for first order logic

So the theorem about first-order expressible sentences is as follows.

**Theorem:** Let $\varphi$ be a property of graphs that can be expressed in the first order language of graphs (with the $e(x,y)$ relation). Then for any constant $p$, the probability that $\varphi$ holds in $G(n,p)$ has a limit of zero or one as $n \to \infty$.

*Proof. *We’ll prove the simpler case of $p = 1/2$, but the general case is analogous. Given such a graph $G$ drawn from $G(n, 1/2)$, what we’ll do is define a countably infinite family of first-order formulas $\varphi_{k,l}$, and argue that they form a sort of “basis” for all first-order sentences about graphs.

First let’s describe the $\varphi_{k,l}$. For any $k, l$, the sentence $\varphi_{k,l}$ will assert that for every set of $k$ vertices and every set of $l$ vertices, there is some other vertex connected to the first $k$ but not the last $l$.

$$\varphi_{k,l} : \forall x_1, \dots, x_k, y_1, \dots, y_l \;\; \exists z : \bigwedge_{i=1}^k e(z, x_i) \wedge \bigwedge_{j=1}^l \neg e(z, y_j)$$

In other words, these formulas encapsulate every possible incidence pattern for a single vertex. It is a strange set of formulas, but they have a very nice property we’re about to get to. So for a fixed $\varphi_{k,l}$, what is the probability that it’s false on $n$ vertices? We want to give an upper bound and hence show that the formula is true with probability approaching 1. That is, we want to show that *all* the $\varphi_{k,l}$ are true with probability tending to 1.

Computing the probability: we have $\binom{n}{k} \binom{n}{l}$ possibilities to choose these sets, and the probability that some other fixed vertex $z$ has the good connections is $2^{-(k+l)}$, so the probability $z$ is not good is $1 - 2^{-(k+l)}$, and taking a product over all choices of $z$ gives the probability that *no vertex is good for a fixed pair of sets*, with an exponent of $n - (k+l)$. Combining all this together gives an upper bound on the probability of $\varphi_{k,l}$ being false of:

$$\binom{n}{k} \binom{n}{l} \left(1 - 2^{-(k+l)}\right)^{n - (k+l)}$$

And $k, l$ are constant, so the left two terms are polynomials in $n$ while the rightmost term is an exponentially small function, and this implies that the whole expression tends to zero, as desired.
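For the skeptical reader, here is a Monte Carlo sketch (plain Python, brute force and slow, not part of the proof) that estimates how often $G(n, 1/2)$ satisfies the extension property $\varphi_{1,1}$; the fraction should creep toward 1 as $n$ grows.

```python
import itertools
import random

def random_graph(n, p=0.5):
    """Sample G(n, p) as a set of (frozenset) edges."""
    return {frozenset(e) for e in itertools.combinations(range(n), 2) if random.random() < p}

def satisfies_phi(n, edges, k, l):
    """Check phi_{k,l}: every disjoint X (size k), Y (size l) has a z adjacent to all of X, none of Y."""
    vertices = set(range(n))
    for X in itertools.combinations(vertices, k):
        for Y in itertools.combinations(vertices - set(X), l):
            candidates = vertices - set(X) - set(Y)
            if not any(all(frozenset((z, x)) in edges for x in X) and
                       all(frozenset((z, y)) not in edges for y in Y)
                       for z in candidates):
                return False
    return True

for n in [10, 20, 40]:
    trials = 20
    hits = sum(satisfies_phi(n, random_graph(n), k=1, l=1) for _ in range(trials))
    print(n, hits / trials)   # empirical probability of phi_{1,1}, climbing toward 1
```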

*Break from proof.*

## A bit of model theory

So what we’ve proved so far is that the probability of every formula of the form $\varphi_{k,l}$ being satisfied in $G(n, 1/2)$ tends to 1.

Now look at the set of all such formulas

$$T = \{ \varphi_{k,l} : k, l \in \mathbb{N} \}$$

We ask: is there any graph which satisfies all of these formulas? Certainly it cannot be finite, because a finite graph would not be able to satisfy formulas with sufficiently large values of $k$ and $l$. But indeed, there is a *countably infinite* graph that works. It’s called the *Rado graph*.

The Rado graph has some really interesting properties, such as that it contains *every finite and countably infinite graph* as an induced subgraph. Basically this means, as far as countably infinite graphs go, it’s the big momma of all graphs. It’s *the* graph in a very concrete sense of the word. It satisfies all of the formulas in $T$, and in fact it’s uniquely determined by this, meaning that if any other countably infinite graph satisfies all the formulas in $T$, then that graph is isomorphic to the Rado graph.

But for our purposes (proving a zero-one law), there’s a better perspective than graph theory on this object. In the logic perspective, the set $T$ is called a* theory*, meaning a set of statements that you consider “axioms” in some logical system. And we’re asking whether there is any model realizing the theory. That is, is there some logical system with a semantic interpretation (some mathematical object based on numbers, or sets, or whatever) that satisfies all the axioms?

A good analogy comes from the rational numbers, because they satisfy a similar property among all ordered sets. In fact, the rational numbers are the unique countable, ordered set with the property that it has no biggest/smallest element and is dense. That is, in the ordering there is always another element between any two elements you want. So the theorem says if you have two countable sets with these properties, then they are actually isomorphic as ordered sets, and they are isomorphic to the rational numbers.

So, while we won’t prove that the Rado graph is a model for our theory $T$, we will use that fact to great benefit. One consequence of having a theory with a model is that the theory is *consistent, *meaning it can’t imply any contradictions. Another fact is that this theory $T$ is *complete.* Completeness means that any formula or its negation is logically implied by the theory. Note these are syntactic implications (using standard rules of first-order logic), and have nothing to do with the model interpreting the theory.

The proof that $T$ is complete actually follows from the uniqueness of the Rado graph as the only countable model of $T$. Suppose the contrary, that $T$ is not complete; then there has to be some formula $\varphi$ that is not provable from $T$, and whose negation is also not provable from $T$. Now extend $T$ in two ways: by adding $\varphi$ and by adding $\neg \varphi$. Both of the new theories are still consistent and countable, and by a theorem from logic this means they both still have countable models. But both of these new models are also countable models of $T$, so they have to both be the Rado graph. But this is very embarrassing for them, because we assumed they disagree on the truth of $\varphi$.

So now we can go ahead and prove the zero-one law theorem.

*Return to proof.*

Given an arbitrary property $\varphi$, either $\varphi$ or its negation can be derived from $T$. Without loss of generality suppose it’s $\varphi$. Take all the formulas $\varphi_{k,l}$ from the theory you need to derive $\varphi$, and note that since proofs are finite you will only need finitely many such $\varphi_{k,l}$. Now look at the probabilities of the $\varphi_{k,l}$: they are *all true *with probability tending to 1, so the statement they imply (i.e., $\varphi$ itself) must also hold with probability tending to 1. And we’re done!

If you don’t like model theory, there is another “purely combinatorial” proof of the zero-one law using something called Ehrenfeucht–Fraïssé games. It is a bit longer, though.

## Other zero-one laws

One might naturally ask two questions: what if your probability is not constant, and what other kinds of properties have zero-one laws? Both great questions.

For the first, there are some extra theorems. I’ll just describe one that has always seemed very strange to me. If your probability is of the form $p(n) = n^{-\alpha}$ but $\alpha$ is *irrational*, then the zero-one law still holds! This is a theorem of Baldwin-Shelah-Spencer, and it really makes you wonder why irrational numbers would be so well behaved while rational numbers are not :)

For the second question, there is another theorem about *monotone *properties of graphs. Monotone properties come in two flavors, so called “increasing” and “decreasing.” I’ll describe increasing monotone properties and the decreasing counterpart should be obvious. A property is called *monotone increasing* if adding edges can never destroy the property. That is, with an empty graph you don’t have the property (or maybe you do), and as you start adding edges eventually you suddenly get the property, but then adding *more* edges can’t cause you to lose the property again. Good examples of this include connectivity, or the existence of a triangle.

So the theorem is that there is an identical zero-one law for monotone properties. Great!

It’s not so often that you get to see these neat applications of logic and model theory to graph theory and (by extension) computer science. But when you do get to apply them they seem very powerful and mysterious. I think it’s a good thing.

Until next time!

# The Giant Component and Explosive Percolation

Last time we left off with a tantalizing conjecture: a random graph with edge probability $p = 5/n$ is almost surely a connected graph. We arrived at that conjecture from some ad-hoc data analysis, so let’s go back and treat it with some more rigorous mathematical techniques. As we do, we’ll discover some very interesting “threshold theorems” that essentially say a random graph will either certainly have a property, or it will certainly not have it.

## Big components

Recalling the basic definition: an Erdős-Rényi (ER) random graph with $n$ vertices and edge probability $p$ is a probability distribution over all graphs on $n$ vertices. Generatively, you draw from an ER distribution by flipping a $p$-biased coin for each pair of vertices, and adding the edge if you flip heads. We call the random event of drawing a graph from this distribution a “random graph” even though it’s not a graph, and we denote an ER random graph by $G(n,p)$. When $p = 1/2$, the distribution $G(n, 1/2)$ is the uniform distribution over all graphs on $n$ vertices.

Now let’s get to some theorems. The main tools we’ll use are called the *first and second moment method*. Let’s illustrate them by example.

### The first moment method

Say we want to know what values of $p$ are likely to produce graphs with isolated vertices (vertices with no neighbors), and which are not. Of course, the value of $p$ will depend on $n$ in general, but we can already see by example that if $p = 1/2$ then the probability of a fixed vertex being isolated is $2^{-(n-1)}$. We can use the union bound (sum this value over all vertices) to show that the probability of *any* vertex being isolated is at most $n 2^{-(n-1)}$, which also tends to zero very quickly. This is not the first moment method, I’m just making the point that all of our results will be interpreted asymptotically as $n \to \infty$.

So now we can ask: what is the *expected number *of isolated vertices? If I call $X$ the random variable that counts the number of isolated* *vertices, then I’m asking about $\mathbb{E}[X]$. Really what I’m doing is interpreting $X$ as a random variable depending on $n$ and $p(n)$, and asking about the evolution of $\mathbb{E}[X]$ as $n \to \infty$.

Now the *first moment method *states, somewhat obviously, that if the expectation tends to zero then the value of $X$ itself also tends to zero. Indeed, this follows from Markov’s inequality, which states that the probability that $X \geq a$ is bounded by $\mathbb{E}[X]/a$. In symbols,

$$\Pr[X \geq a] \leq \frac{\mathbb{E}[X]}{a}.$$

In our case $X$ is counting something (it’s integer valued), so asking whether $X > 0$ is equivalent to asking whether $X \geq 1$. The upper bound on the probability of $X$ being strictly positive is then just $\mathbb{E}[X]$.

So let’s find out when the expected number of isolated vertices goes to zero. We’ll use the wondrous linearity of expectation to split $X$ into a sum of counts for each vertex. That is, if $X_i$ is 1 when vertex $i$ is isolated and 0 otherwise (this is called an *indicator variable*), then $X = \sum_{i=1}^n X_i$ and linearity of expectation gives

$$\mathbb{E}[X] = \mathbb{E}\left[\sum_{i=1}^n X_i\right] = \sum_{i=1}^n \mathbb{E}[X_i]$$

Now the expectation of an indicator random variable is just the probability that the event occurs (it’s trivial to check). It’s easy to compute the probability that a vertex is isolated: it’s $(1-p)^n$. So the sum above works out to be $n(1-p)^n$. It should really be $n(1-p)^{n-1}$, but the extra factor of $(1-p)$ doesn’t change anything. The question is what’s the “smallest” way to set $p$ as a function of $n$ in order to make the above thing go to zero? Using the fact that $1 - x \leq e^{-x}$ for all $x > 0$, we get

$$n(1-p)^n \leq n e^{-pn}$$

And setting $p = \frac{\log n}{n}$ simplifies the right hand side to $n e^{-\log n} = 1$. This is almost what we want, so let’s set $p$ to be *anything that grows asymptotically faster than *$\frac{\log n}{n}$. The notation for this is $p(n) = \omega\left(\frac{\log n}{n}\right)$. Then using some slick asymptotic notation we can prove that the RHS of the inequality above goes to zero, and so the LHS must as well. Back to the big picture: we just showed that the expectation of $X$ (the expected number of isolated vertices) goes to zero, and so by the first moment method the value of $X$ (the *actual* number of isolated vertices) has to go to zero with probability tending to 1.
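As an empirical sketch of this calculation (plain Python, slow brute-force sampling; the parameters are arbitrary), we can compare the observed number of isolated vertices in $G(n,p)$ with the formula $n(1-p)^n$ for $p$ near $\frac{\log n}{n}$:

```python
import math
import random

def isolated_count(n, p):
    """Sample G(n, p) and count the vertices with no neighbors."""
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                degree[i] += 1
                degree[j] += 1
    return sum(d == 0 for d in degree)

n = 1000
for c in [0.5, 1.0, 2.0]:
    p = c * math.log(n) / n
    samples = [isolated_count(n, p) for _ in range(5)]
    print(c, samples, "expected ~", round(n * (1 - p)**n, 2))
```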

Some quick interpretations: when $p = \frac{\log n}{n}$, each vertex has $\log n$ neighbors in expectation. Moreover, having no isolated vertices is just a little bit short of the entire graph being connected (our ultimate goal is to figure out exactly when this happens). But already we can see that our conjecture from the beginning is probably false: we aren’t able to use this same method to show that $p = c/n$ for some constant $c$ rules out isolated vertices as $n \to \infty$. We just got lucky in our data analysis that 5 is about the natural log of 100 (which is 4.6).

### The second moment method

Now what about the other side of the coin? If $p$ is asymptotically *less* than $\frac{\log n}{n}$, do we necessarily get isolated vertices? That would really put our conjecture to rest. In this case the answer is yes, but it might not be in general. Let’s discuss.

We said that in general if $\mathbb{E}[X] \to 0$ then the value of $X$ has to go to zero too (that’s the first moment method). The flip side of this is: if $\mathbb{E}[X] \to \infty$ does the value of $X$ necessarily also tend to infinity? The answer is not always yes. Here is a gruesome example I originally heard from a book: say $X$ is the number of people that will die in the next decade due to an asteroid hitting the earth. The probability that the event happens is quite small, but if it does happen then the number of people that will die is quite large. It is perfectly reasonable for this to drag up the expectation (as the world population grows every decade), but at least we hope a growing population doesn’t by itself increase the *value* of $X$.

Mathematics is on our side here. We’re asking under what conditions on $X$ the following implication holds: $\mathbb{E}[X] \to \infty$ implies $\Pr[X > 0] \to 1$.

With the first moment method we used Markov’s inequality (a statement about expectation, also called the first moment). With the second moment method we’ll use a statement about the second moment (variances), and the most common is Chebyshev’s inequality. Chebyshev’s inequality states that the probability that $X$ deviates from its expectation by more than $c$ is bounded by $\frac{\textup{Var}(X)}{c^2}$. In symbols, for all $c > 0$ we have

$$\Pr[|X - \mathbb{E}[X]| \geq c] \leq \frac{\textup{Var}(X)}{c^2}$$

Now the opposite of $X > 0$, written in terms of deviation from expectation, is $|X - \mathbb{E}[X]| \geq \mathbb{E}[X]$. In words, in order for any number $a$ to be zero, it has to have a distance of at least $b$ from any other number $b$. It’s such a stupidly simple statement it’s almost confusing. So then we’re saying that

$$\Pr[X = 0] \leq \Pr[|X - \mathbb{E}[X]| \geq \mathbb{E}[X]] \leq \frac{\textup{Var}(X)}{\mathbb{E}[X]^2}.$$

In order to make this probability go to zero, it’s enough to have $\textup{Var}(X) = o(\mathbb{E}[X]^2)$. Again, the little-o means “grows asymptotically slower than.” So the numerator of the fraction on the RHS will grow asymptotically slower than the denominator, meaning the whole fraction tends to zero. This condition and its implication are together called the “second moment method.”

Great! So we just need to compute $\textup{Var}(X)$ and check what conditions on $p$ make it fit the theorem. Recall that $\textup{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2$, and we want to upper bound this in terms of $\mathbb{E}[X]^2$. Let’s compute $\mathbb{E}[X]^2$ first.

$$\mathbb{E}[X]^2 = n^2(1-p)^{2n}$$

Now the variance.

$$\textup{Var}(X) = \mathbb{E}[X^2] - n^2(1-p)^{2n}$$

Expanding $X$ as a sum of indicator variables $X_i$ for each vertex, we can split the square into a sum over pairs. Note that $X_i^2 = X_i$ since they are 0-1 valued indicator variables, and $X_iX_j$ is the indicator variable for *both *events happening simultaneously.

$$\mathbb{E}[X^2] = \mathbb{E}\left[\sum_{i,j} X_iX_j\right] = \sum_i \mathbb{E}[X_i^2] + \sum_{i \neq j} \mathbb{E}[X_iX_j]$$

By what we said about indicators, the last line is just

$$\sum_i \Pr[X_i = 1] + \sum_{i \neq j} \Pr[X_i = 1 \wedge X_j = 1]$$

And we can compute each of these pieces quite easily. They are (asymptotically ignoring some constants):

$$n(1-p)^n + n^2(1-p)^{2n-1}$$

Now combining the two terms together (subtracting off the square of the expectation),

$$\textup{Var}(X) \leq n(1-p)^n + n^2(1-p)^{2n-1} - n^2(1-p)^{2n} = n(1-p)^n + pn^2(1-p)^{2n-1}$$

Now we divide by $\mathbb{E}[X]^2 = n^2(1-p)^{2n}$ to get

$$\frac{\textup{Var}(X)}{\mathbb{E}[X]^2} \leq \frac{1}{n(1-p)^n} + \frac{p}{1-p}$$

Since we’re trying to see if $p = \frac{\log n}{n}$ is a sharp threshold, the natural choice is to let $p = o\left(\frac{\log n}{n}\right)$. Indeed, using the estimate $(1-p)^n = e^{-pn(1+o(1))}$ and plugging in the little-o bounds the whole quantity by

$$\frac{1}{n^{1 - o(1)}} + o(1)$$

i.e., the whole thing tends to zero, as desired.

## Other thresholds

So we just showed that the property of having no isolated vertices in a random graph has a *sharp* threshold at $p = \frac{\log n}{n}$. Meaning at any larger probability the graph is almost surely devoid of isolated vertices, and at any lower probability the graph almost surely has some isolated vertices.

This might seem like a miracle theorem, but there turns out to be similar theorems for *lots* of properties. Most of them you can also prove using basically the same method we’ve been using here. I’ll list some below. Also note they are all sharp, two-sided thresholds in the same way that the isolated vertex boundary is.

- The existence of a component of size $\omega(\log n)$ has a sharp threshold of $\frac{1}{n}$.
- $p = \frac{c}{n}$ for any constant $c > 1$ is a threshold for the existence of a *giant component* of linear size $\Theta(n)$. Moreover, above this threshold no other components will have size $\omega(\log n)$.
- In addition to $\frac{\log n}{n}$ being a threshold for having no isolated vertices, it is also a threshold for connectivity.
- $p = \frac{\log n + \log \log n}{n}$ is a sharp threshold for the existence of Hamiltonian cycles in the following sense: if $p(n) = \frac{\log n + \log \log n + c(n)}{n}$ and $c(n) \to \infty$ then there will be a Hamiltonian cycle almost surely, if $c(n) \to -\infty$ there will be no Hamiltonian cycle almost surely, and if $c(n) \to c$ the probability of a Hamiltonian cycle is $e^{-e^{-c}}$. This was proved by Komlós and Szemerédi in 1983. Moreover, there is an efficient algorithm to find Hamiltonian cycles in these random graphs when they exist with high probability.

## Explosive Percolation

So now we know that as the probability of an edge increases, at some point the graph will spontaneously become connected, and at some point before that the so-called “giant component” will emerge and quickly engulf the entire graph.

Here’s a different perspective on this situation originally set forth by Achlioptas, D’Souza, and Spencer in 2009. It has since become called an “Achlioptas process.”

The idea is that you are watching a random graph grow. Rather than think about random graphs as having a probability above or below some threshold, you can think of it as the number of edges growing (so the thresholds will all be multiplied by $\binom{n}{2}$). Then you can imagine that you start with an empty graph, and at every time step someone is adding a new random edge to your graph. Fine, eventually you’ll get so many edges that a giant component emerges and you can measure when that happens.

But now imagine that instead of being given a *single *random new edge, you are given a choice. Say God presents you with two random edges, and you must pick which to add to your graph. Obviously you will eventually still get a giant component, but the question is how long can you prevent it from occurring? That is, how far back can we push the threshold for connectedness by cleverly selecting the new edge?
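Here is a hedged sketch of such a process in Python, using union-find to track components and the so-called “product rule” (one common Achlioptas rule, chosen here purely for illustration, not necessarily the rule from the paper): of the two candidate edges, add the one joining the smaller pair of components. Comparing it to plain one-edge-at-a-time growth shows how the giant component gets delayed.

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

def largest_component_growth(n, steps, choose_of_two):
    """Grow a graph one edge per step; record the largest component size after each step."""
    uf = UnionFind(n)
    largest, history = 1, []
    for _ in range(steps):
        edge = random.sample(range(n), 2)
        if choose_of_two:
            other = random.sample(range(n), 2)
            # product rule: keep the edge joining the smaller pair of components
            def score(e):
                return uf.size[uf.find(e[0])] * uf.size[uf.find(e[1])]
            edge = min([edge, other], key=score)
        uf.union(*edge)
        largest = max(largest, uf.size[uf.find(edge[0])])
        history.append(largest)
    return history

n, steps = 2000, 2000
plain = largest_component_growth(n, steps, choose_of_two=False)
delayed = largest_component_growth(n, steps, choose_of_two=True)
# Largest component after 3n/4 edges: much smaller when we get to choose between two edges.
print(plain[3 * n // 4], delayed[3 * n // 4])
```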

What Achlioptas and company conjectured was that you can push it back (some), but that when you push it back as far as it can go, the threshold becomes discontinuous. That is, they believed there was a constant $\delta$ such that the size of the largest component jumps from $o(n)$ to $\delta n$ in $o(n)$ steps.

This turned out to be false, and Riordan and Warnke proved it. Nevertheless, the idea has been interpreted in an interesting light. People have claimed it is a useful model of disaster in the following sense. If you imagine that an edge between two vertices is a “crisis” relating two entities, then in every step God presents you with two crises and you only have the resources to fix one. The idea is that when the entire graph is connected, you have this one big disaster where all the problems are interacting with each other. The percolation process describes how long you can “survive” while avoiding the big disaster.

There are critiques of this interpretation, though, mainly about how simplistic it is. In particular, an Achlioptas process models a crisis as an exogenous force when in reality problems are usually endogenous. You don’t expect a meteor to hit the Earth, but you do expect humans to have an impact on the environment. Also, not everybody in the network is trying to avoid errors. Some companies thrive in economic downturns by managing your toxic assets, for example. So one could reasonably argue that Achlioptas processes aren’t complex enough to model the realistic types of disasters we face.

Either way, I find it fantastic that something like a random graph (which for decades was securely in pure combinatorics away from applications) is spurring such discussion.

Next time, we’ll take one more dive into the theory of Erdős-Rényi random graphs to prove a very “meta” theorem about sharp thresholds. Then we’ll turn our attention to other models of random graphs, hopefully more realistic ones :)

Until then!