The Čech Complex and the Vietoris-Rips Complex

It’s about time we got back to computational topology. Previously in this series we endured a lightning tour of the fundamental group and homology, then we saw how to compute the homology of a simplicial complex using linear algebra.

What we really want to do is talk about the inherent shape of data. Homology allows us to compute some qualitative features of a given shape, i.e., find and count the number of connected components of a given shape, or the number of “2-dimensional holes” it has. This is great, but data doesn’t come in a form suitable for computing homology. Though they may have originated from some underlying process that follows nice rules, data points are just floating around in space with no obvious connection between them.

Here is a cool example of Thom Yorke, the lead singer of the band Radiohead, whose face was scanned with a laser scanner for their music video “House of Cards.”


Radiohead’s Thom Yorke in the music video for House of Cards (click the image to watch the video).

Given a point cloud such as the one above, our long term goal (we’re just getting started in this post) is to algorithmically discover what the characteristic topological features are in the data. Since homology is pretty coarse, we might detect the fact that the point cloud above looks like a hollow sphere with some holes in it corresponding to nostrils, ears, and the like. The hope is that if the data set isn’t too corrupted by noise, then it’s a good approximation to the underlying space it is sampled from. By computing the topological features of a point cloud we can understand the process that generated it, and Science can proceed.

But it’s not always as simple as Thom Yorke’s face. It turns out the producers of the music video had to actually degrade the data to get what you see above, because their lasers were too precise and didn’t look artistic enough! But you can imagine that if your laser is mounted on a car on a bumpy road, or tracking some object in the sky, or your data comes from acoustic waves traveling through earth, you’re bound to get noise. Or more realistically, if your data comes from thousands of stock market prices then the process generating the data is super mysterious. It changes over time, it may not follow any discernible pattern (though speculators may hope it does), and you can’t hope to visualize the entire dataset in any useful way.

But with persistent homology, so the claim goes, you’d get a good qualitative understanding of the dataset. Your results would be resistant to noise inherent in the data. It also wouldn’t be sensitive to the details of your data cleaning process. And with a dash of ingenuity, you can come up with a reasonable mathematical model of the underlying generative process. You could use that model to design algorithms, make big bucks, discover new drugs, recognize pictures of cats, or whatever tickles your fancy.

But our first problem is to resolve the input data type error. We want to use homology to describe data, but our data is a point cloud and homology operates on simplicial complexes. In this post we’ll see two ways one can do this, and see how they’re related.

The Čech complex

Let’s start with the Čech complex. Given a point set X in some metric space and a number \varepsilon > 0, the Čech complex C_\varepsilon is the simplicial complex whose simplices are formed as follows. For each subset S \subset X of points, form an (\varepsilon/2)-ball around each point in S, and include S as a simplex (of dimension |S| - 1) if there is a common point contained in all of the balls in S. This structure obviously satisfies the definition of a simplicial complex: any sub-subset S' \subset S of a simplex S will also be a simplex. Here is an example of the epsilon balls.


An example of a point cloud (left) and a corresponding choice of (epsilon/2)-balls. To get the Čech complex, we add a k-simplex any time we see a subset of k+1 points with common intersection.  [Image credit: Robert Ghrist]

Let me superscript the Čech complex to illustrate the pieces. Specifically, we’ll let C_\varepsilon^{j} denote all the simplices of dimension up to j. In particular, C_\varepsilon^1 is a graph where an edge is placed between x,y if d(x,y) < \varepsilon, and C_{\varepsilon}^2 places triangles (2-simplices) on triples of points whose balls have a three-way intersection.

A topologist will have a minor protest here: the simplicial complex is supposed to resemble the structure inherent in the underlying points, but how do we know that this abstract simplicial complex (which is really hard to visualize!) resembles the topological space we used to make it? That is, X was sitting in some metric space, and the union of these (\varepsilon/2)-balls forms some topological space X(\varepsilon) that is close in structure to X. But is the Čech complex C_\varepsilon close to X(\varepsilon)? Do they have the same topological structure? It’s not a trivial theorem to prove, but it turns out to be true.

The Nerve Theorem: The homotopy types of X(\varepsilon) and C_\varepsilon are the same.

We won’t remind the readers about homotopy theory, but suffice it to say that when two topological spaces have the same homotopy type, then homology can’t distinguish them. In other words, if homotopy type is too coarse a discriminator for our dataset, then persistent homology will fail us for sure.

So this theorem is a good sanity check. If we want to learn about our point cloud, we can pick an \varepsilon and study the topology of the corresponding Čech complex C_\varepsilon. The reason this is called the “Nerve Theorem” is because one can generalize it to an arbitrary family of convex sets. Given some family F of convex sets, the nerve is the complex obtained by adding simplices for mutually overlapping subfamilies in the same way; the Čech complex is exactly the nerve of the family of (\varepsilon/2)-balls. The nerve theorem in its general form says that with sufficient conditions on the family F being “nice,” the nerve has the same homotopy type as the union of the sets in F.

The problem is that Čech complexes are tough to compute. To tell whether there are any 10-simplices (without additional knowledge) you have to inspect all subsets of 11 points. In general computing the entire complex requires time exponential in the size of X, which is extremely inefficient. So we need a different kind of complex, or at least a different representation to compensate.

The Vietoris-Rips complex

The Vietoris-Rips complex is essentially the same as the Čech complex, except instead of adding a d-simplex when there is a common point of intersection of all the (\varepsilon/2)-balls, we just do so when all the balls have pairwise intersections. We’ll denote the Vietoris-Rips complex with parameter \varepsilon as VR_{\varepsilon}.

Here is an example to illustrate: if you give me three points that are the vertices of an equilateral triangle of side length 1, and I draw (1/2)-balls around each point, then they will have all three pairwise intersections but no common point of intersection.


Three balls which intersect pairwise, but have no point of triple intersection. With appropriate parameters, the Čech and Vietoris-Rips complexes are different.

So in this example the Vietoris-Rips complex contains a 2-simplex (the triangle is filled in), while the Čech complex is just a graph (the hollow triangle).
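
If you want to double-check this example numerically, here is a quick sketch in the same style as the code later in this post. The circumcenter is the best possible candidate for a common point, since it minimizes the maximum distance to the three vertices:

import numpy
from numpy.linalg import norm
from itertools import combinations

# vertices of an equilateral triangle with side length 1
points = [numpy.array([0, 0]),
          numpy.array([1, 0]),
          numpy.array([0.5, numpy.sqrt(3) / 2])]

# all pairwise distances are 1, so the (1/2)-balls intersect pairwise
print([norm(x - y) for (x, y) in combinations(points, 2)])   # [1.0, 1.0, 1.0]

# but the circumcenter is at distance 1/sqrt(3) > 1/2 from every vertex,
# so the three balls have no common point
center = sum(points) / 3
print([norm(x - center) for x in points])   # about 0.577 each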

One obvious question is: do we still get the benefits of the nerve theorem with Vietoris-Rips complexes? The answer is no, obviously, because the Vietoris-Rips complex and Čech complex in this triangle example have totally different topology! But all is not lost. What we can do instead is compare Vietoris-Rips and Čech complexes with related parameters.

Theorem: For all \varepsilon > 0, the following inclusions hold

\displaystyle C_{\varepsilon} \subset VR_{\varepsilon} \subset C_{2\varepsilon}

The first inclusion holds because a common point of all the balls certainly gives pairwise intersections, and the second holds because if all pairwise distances in a simplex are less than \varepsilon, then any single one of its points lies in the \varepsilon-ball around every other. So if the Čech complexes for both \varepsilon and 2\varepsilon are good approximations of the underlying data, then so is the Vietoris-Rips complex. In fact, you can make this chain of inclusions slightly tighter, and if you’re interested you can see Theorem 2.5 in this recent paper of de Silva and Ghrist.

Now your first objection should be that computing a Vietoris-Rips complex still requires exponential time, because you have to scan all subsets for the possibility that they form a simplex. It’s true, but one nice thing about the Vietoris-Rips complex is that it can be represented implicitly as a graph. You just include an edge between two points if their corresponding balls overlap. Once we want to compute the actual simplices in the complex we have to scan for cliques in the graph (the Vietoris-Rips complex is exactly the clique complex of this graph), so that sucks. But it turns out that computing the graph is the first step in other more efficient methods for computing (or approximating) the VR complex.

Let’s go ahead and write a (trivial) program that computes the graph representation of the Vietoris-Rips complex of a given data set.

import numpy
from numpy.linalg import norm
from itertools import combinations

def naiveVR(points, epsilon):
    points = [numpy.array(x) for x in points]
    # two (epsilon/2)-balls intersect exactly when their centers are within epsilon
    return [(x, y) for (x, y) in combinations(points, 2) if norm(x - y) < epsilon]

Let’s try running it on a modestly large example: the first frame of the Radiohead music video. It’s got about 12,000 points in \mathbb{R}^4 (x,y,z,intensity), and sadly it takes about twenty minutes. There are a couple of ways to make it more efficient. One is to use specially-crafted data structures for computing threshold queries (i.e., find all points within \varepsilon of this point). But those are only useful for small thresholds, and we’re interested in sweeping over a range of thresholds. Another is to invoke approximations of the data structure which give rise to “approximate” Vietoris-Rips complexes.
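
For a taste of the data-structure approach, here is a minimal sketch using scipy’s kd-tree (assuming scipy is available). Its query_pairs method answers exactly one threshold query: all pairs of points within a fixed distance of each other.

from scipy.spatial import cKDTree

def kdTreeVR(points, epsilon):
    # build the tree once, then ask for all pairs of centers within epsilon;
    # this returns index pairs rather than the points themselves
    tree = cKDTree(points)
    return tree.query_pairs(epsilon)

As the caveat above says, this speeds up a single threshold query, but it doesn’t by itself help when sweeping \varepsilon over a whole range of values.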

Other stuff

In a future post we’ll implement a method for speeding up the computation of the Vietoris-Rips complex, since this is the primary bottleneck for topological data analysis. But for now we have the conceptual idea of how Čech complexes and Vietoris-Rips complexes can be used to turn point clouds into simplicial complexes in reasonable ways.

Before we close we should mention that there are other ways to do this. I’ve chosen the algebraic flavor of topological data analysis due to my familiarity with algebra and the work based on this approach. The other approaches have a more geometric flavor, and are based on the Delaunay triangulation, a hallmark of computational geometry algorithms. The two approaches I’ve heard of are called the alpha complex and the flow complex. The downside of these approaches is that, because they are based on the Delaunay triangulation, they scale poorly with the dimension of the data. Because high-dimensional data is so common, many researchers have been spending their time figuring out how to speed up approximations of the VR complex. See these slides of Afra Zomorodian for an example.

Until next time!

A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel for some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and “Hi” is much more likely than “asphyxiation.” The problems are:

  1. Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the noiseless coding problem.
  2. Say communication is not expensive, but error prone. In particular, each bit i of your message is erroneously flipped with some known probability p, and all the errors are independent. Then the question is, how can one encode their messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the noisy coding problem.

There are actually many models of “communication with noise” that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string “00000000” (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the “eight zeros” string as a 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average 7 \cdot 1/2 + (-1) \cdot 1/2 = 3 bits in every message.

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of entropy, which in this context measures the “information content” of a message, or the expected “compressibility” of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we’re not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

Claude Shannon. Image credit: Wikipedia

Entropy and noiseless encoding

Before we can state Shannon’s theorems we have to define entropy.

Definition: Suppose D is a distribution on a finite set X, and I’ll use D(x) to denote the probability of drawing x from D. The entropy of D, denoted H(D) is defined as

H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}

It is strange to think about this sum in the abstract, so let’s suppose D is a biased coin flip with bias 0 \leq p \leq 1 of landing heads. Taking logarithms base 2 (so entropy is measured in bits), we can plot the entropy as follows

Image source: Wikipedia

The horizontal axis is the bias p, and the vertical axis is the value of H(D), which with some algebra is - p \log p - (1-p) \log (1-p). From the graph above we can see that the entropy is maximized when p=1/2 and minimized at p=0, 1. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.

So what is this saying? High entropy means a message is mostly incompressible, and low entropy means it is highly compressible. Indeed, if our message consisted of the results of 10 such coin flips, and p was close to 1, we would be able to compress a lot by encoding strings with lots of 1’s using few bits. On the other hand, if p=1/2 we couldn’t get any compression at all: all strings would be equally likely.
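
To make the graph concrete, here is a tiny sketch computing the entropy of a biased coin at a few biases, with logarithms taken base 2 so that entropy is measured in bits:

import math

def binaryEntropy(p):
    # H(p) = -p log p - (1-p) log(1-p), with the convention 0 log 0 = 0
    if p == 0 or p == 1:
        return 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print([round(binaryEntropy(p), 3) for p in [0, 0.1, 0.25, 0.5, 0.9]])
# [0, 0.469, 0.811, 1.0, 0.469]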

Shannon’s famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: \{ 0,1 \}^* is the set of all binary strings.

Theorem (Noiseless Coding Theorem) [Shannon 1948]: For every finite set X and distribution D over X, there are encoding and decoding functions \textup{Enc}: X \to \{0,1 \}^*, \textup{Dec}: \{ 0,1 \}^* \to X such that

  1. The encoding/decoding actually works, i.e. \textup{Dec}(\textup{Enc}(x)) = x for all x.
  2. The expected length of an encoded message is between H(D) and H(D) + 1.

Moreover, no encoding scheme can do better.

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you precisely know how long to expect your messages to be. And you know that you can’t hope to do any better!
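
As a sanity check, we can compute the entropy of the eight-zeros distribution from the beginning of the post and compare it to the expected length of the encoding we cooked up there (one bit for the zero string, nine bits for everything else). A quick sketch of the arithmetic:

import math

# "00000000" has probability 1/2; the other 255 strings share the rest equally
probs = [1/2] + [1/510] * 255
entropy = sum(p * math.log2(1/p) for p in probs)

# our encoding uses 1 bit for the zero string and 9 bits for the rest
expectedLength = (1/2) * 1 + (1/2) * 9

print(entropy, expectedLength)   # about 4.997 and 5.0

The expected length lands inside the interval [H(D), H(D) + 1] promised by the theorem, so our ad hoc encoding was essentially optimal.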

As the title of this post says, we aren’t going to give a proof here. Wikipedia has a proof if you’re really interested in the details.

Noisy Coding

The noisy coding problem is more interesting because, in a sense that Shannon did not resolve, it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called error correction. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can’t do so without your encoding “blowing up” the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you, since many valid messages would differ by only a single bit. So the questions are: does such an encoding exist, and if so, how much do we need to blow up the message length? Shannon’s second theorem answers both.

Theorem (Noisy Coding Theorem) [Shannon 1948]: For any constant noise rate p < 1/2, there is an encoding scheme \textup{Enc} : \{ 0,1 \}^k \to \{0,1\}^{ck}, \textup{Dec} : \{ 0,1 \}^{ck} \to \{ 0,1\}^k with the following property. If x is the message sent by Alice, and y is the message received by Bob (i.e. \textup{Enc}(x) with random noise), then \Pr[\textup{Dec}(y) = x] \to 1 as a function of n=ck. In addition, if we denote by H(p) the entropy of the distribution of an error on a single bit, then choosing any c > \frac{1}{1-H(p)} guarantees the existence of such an encoding scheme, and no scheme exists for any smaller c.

This theorem formalizes a “yes” answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that it only depends on the noise rate.
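
To get a feel for the blowup, here is a quick sketch computing the critical factor 1/(1 - H(p)) for a few noise rates, where H is the binary entropy function from the previous section:

import math

def binaryEntropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in [0.01, 0.05, 0.1, 0.25]:
    # any c above this value admits a good encoding; no smaller c does
    print(p, 1 / (1 - binaryEntropy(p)))

For example, at a 10% noise rate the critical factor is about 1.88, so messages less than double the original length already suffice.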

A word about the proof: it’s probabilistic. That is, Shannon proved such an encoding scheme exists by picking \textup{Enc} to be a random function (!). Then \textup{Dec}(y) finds (nonconstructively) the string x such that the number of bits different between \textup{Enc}(x) and y is minimized. This “number of bits that differ” measure is called the Hamming distance. Then he showed using relatively standard probability tools that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for c takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan’s MIT class.

The non-algorithmic nature of his solution is what opened the door to more research. The question has moved past “Are there any encodings that work?” to the more interesting “What is the algorithmic cost of constructing such an encoding?” It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst case guarantees. In other words, if I can guarantee that at most 12 errors will occur, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you’d definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of error correcting codes and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing.

We’ll cover Hamming’s basic codes next time, and then move on to Reed-Solomon codes and others. Until then!

The Giant Component and Explosive Percolation

Last time we left off with a tantalizing conjecture: a random graph with edge probability p = 5/n is almost surely a connected graph. We arrived at that conjecture from some ad-hoc data analysis, so let’s go back and treat it with some more rigorous mathematical techniques. As we do, we’ll discover some very interesting “threshold theorems” that essentially say a random graph will either certainly have a property, or it will certainly not have it.


The phase transition we empirically observed from last time.

Big components

Recalling the basic definition: an Erdős-Rényi (ER) random graph with n vertices and edge probability p is a probability distribution over all graphs on n vertices. Generatively, you draw from an ER distribution by flipping a p-biased coin for each pair of vertices, and adding the edge if you flip heads. We call the random event of drawing a graph from this distribution a “random graph” even though it’s not a graph, and we denote an ER random graph by G(n,p). When p = 1/2, the distribution G(n,1/2) is the uniform distribution over all graphs on n vertices.
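
The generative description translates directly into code. Here is a minimal sketch of a sampler, representing a graph on vertices 0, \dots, n-1 by its list of edges:

import random
from itertools import combinations

def sampleER(n, p):
    # flip a p-biased coin for each pair of vertices
    return [(u, v) for (u, v) in combinations(range(n), 2) if random.random() < p]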

Now let’s get to some theorems. The main tools we’ll use are called the first and second moment method. Let’s illustrate them by example.

The first moment method

Say we want to know what values of p are likely to produce graphs with isolated vertices (vertices with no neighbors), and which are not. Of course, the value of p will depend on n \to \infty in general, but we can already see by example that if p = 1/2 then the probability of a fixed vertex being isolated is 2^{-(n-1)} \to 0 (it must miss all n-1 other vertices). We can use the union bound (sum this value over all vertices) to show that the probability of any vertex being isolated is at most n2^{-(n-1)}, which also tends to zero very quickly. This is not the first moment method, I’m just making the point that all of our results will be interpreted asymptotically as n \to \infty.

So now we can ask: what is the expected number of isolated vertices? If I call X the random variable that counts the number of isolated vertices, then I’m asking about \mathbb{E}[X]. Really what I’m doing is interpreting X as a random variable depending on n, p(n), and asking about the evolution of \mathbb{E}[X] as n \to \infty.

Now the first moment method states, somewhat obviously, that if the expectation tends to zero then the probability that X is positive tends to zero as well. Indeed, this follows from Markov’s inequality, which states that the probability that X \geq a is bounded by \mathbb{E}[X]/a. In symbols,

\displaystyle \Pr[X \geq a] \leq \frac{\mathbb{E}[X]}{a}.

In our case X is counting something (it’s integer valued), so asking whether X > 0 is equivalent to asking whether X \geq 1. The upper bound on the probability of X being strictly positive is then just \mathbb{E}[X].

So let’s find out when the expected number of isolated vertices goes to zero. We’ll use the wondrous linearity of expectation to split X into a sum of counts for each vertex. That is, if X_i is 1 when vertex i is isolated and 0 otherwise (this is called an indicator variable), then X = \sum_{i=1}^n X_i and linearity of expectation gives

\displaystyle \mathbb{E}[X] = \mathbb{E}[\sum_{i=1}^n X_i] = \sum_{i=1}^n \mathbb{E}[X_i]

Now the expectation of an indicator random variable is just the probability that the event occurs (it’s trivial to check). It’s easy to compute the probability that a vertex is isolated: it’s (1-p)^n. So the sum above works out to be n(1-p)^n. It should really be n(1-p)^{n-1} but the extra factor of (1-p) doesn’t change anything. The question is what’s the “smallest” way to set p as a function of n in order to make the above thing go to zero? Using the fact that (1-x) < e^{-x} for all x > 0, we get

n(1-p)^n < ne^{-pn}

And setting p = (\log n) / n simplifies the right hand side to ne^{- \log n} = n / n = 1. This is almost what we want, so let’s set p to be anything that grows asymptotically faster than (\log n) / n. The notation for this is \omega((\log n) / n). Then using some slick asymptotic notation we can prove that the RHS of the inequality above goes to zero, and so the LHS must as well. Back to the big picture: we just showed that the expectation of X (the expected number of isolated vertices) goes to zero, and so by the first moment method there are no isolated vertices with probability tending to 1.

Some quick interpretations: when p = (\log n) / n each vertex has \log n neighbors in expectation. Moreover, having no isolated vertices is just a little bit short of the entire graph being connected (our ultimate goal is to figure out exactly when this happens). But already we can see that our conjecture from the beginning is probably false: this method can’t rule out isolated vertices when p = c/n for some constant c as n \to \infty. We just got lucky in our data analysis that 5 is about the natural log of 100 (which is 4.6).
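
We can watch this threshold empirically. Here is a quick experiment, reusing the sampleER sketch from above, that counts isolated vertices at p = c (\log n) / n for a few constants c; by the computation above we expect roughly n^{1-c} of them.

import math

def countIsolated(n, edges):
    # a vertex is isolated exactly when it touches no sampled edge
    touched = set()
    for (u, v) in edges:
        touched.add(u)
        touched.add(v)
    return n - len(touched)

n = 1000
for c in [0.5, 1.0, 2.0]:
    p = c * math.log(n) / n
    trials = [countIsolated(n, sampleER(n, p)) for _ in range(5)]
    print(c, sum(trials) / 5)   # roughly n^(1-c): about 31, 1, and 0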

The second moment method

Now what about the other side of the coin? If p is asymptotically less than (\log n) / n do we necessarily get isolated vertices? That would really put our conjecture to rest. In this case the answer is yes, but it might not be in general. Let’s discuss.

We said that in general if \mathbb{E}[X] \to 0 then the value of X has to go to zero too (that’s the first moment method). The flip side of this is: if \mathbb{E}[X] \to \infty does the value of X necessarily tend to infinity? The answer is not always yes. Here is a gruesome example I originally heard from a book: say X is the number of people that will die in the next decade due to an asteroid hitting the earth. The probability that the event happens is quite small, but if it does happen then the number of people that will die is quite large. So the expectation can be dragged up to a large number even though, with overwhelming probability, X = 0.

Mathematics is on our side here. We’re asking under what conditions on \mathbb{E}[X] does the following implication hold: \mathbb{E}[X] \to \infty implies \Pr[X > 0] \to 1.

With the first moment method we used Markov’s inequality (a statement about expectation, also called the first moment). With the second moment method we’ll use a statement about the second moment (variances), and the most common is Chebyshev’s inequality. Chebyshev’s inequality states that the probability X deviates from its expectation by more than c is bounded by \textup{Var}[X] / c^2. In symbols, for all c > 0 we have

\displaystyle \Pr[|X - \mathbb{E}[X]| \geq c] \leq \frac{\textup{Var}[X]}{c^2}

Now the opposite of X > 0, written in terms of deviation from expectation, is |X - \mathbb{E}[X]| \geq \mathbb{E}[X]. In words, in order for any number a to be zero, it has to have a distance of at least b from any number b. It’s such a stupidly simple statement it’s almost confusing. So then we’re saying that

\displaystyle \Pr[X = 0] \leq \frac{\textup{Var}[X]}{\mathbb{E}[X]^2}.

In order to make this probability go to zero, it’s enough to have \textup{Var}[X] = o(\mathbb{E}[X]^2). Again, the little-o means “grows asymptotically slower than.” So the numerator of the fraction on the RHS will grow asymptotically slower than the denominator, meaning the whole fraction tends to zero. This condition and its implication are together called the “second moment method.”

Great! So we just need to compute \textup{Var}[X] and check what conditions on p make it fit the theorem. Recall that \textup{Var}[X] = \mathbb{E}[X^2] - \mathbb{E}[X]^2, and we want to upper bound this in terms of \mathbb{E}[X]^2. Let’s compute \mathbb{E}[X]^2 first.

\displaystyle \mathbb{E}[X]^2 = n^2(1-p)^{2n}

Now the variance.

\displaystyle \textup{Var}[X] = \mathbb{E}[X^2] - n^2(1-p)^{2n}

Expanding X as a sum of indicator variables X_i for each vertex, we can split the square into a sum over pairs. Note that X_i^2 = X_i since they are 0-1 valued indicator variables, and X_iX_j is the indicator variable for both events happening simultaneously.

\displaystyle \begin{aligned} \mathbb{E}[X^2] &= \mathbb{E} \left [ \sum_{i,j} X_iX_j \right ] \\ &=\mathbb{E} \left [ \sum_i X_i^2 + \sum_{i \neq j} X_iX_j \right ] \\ &= \sum_i \mathbb{E}[X_i^2] + \sum_{i \neq j} \mathbb{E}[X_iX_j] \end{aligned}

By what we said about indicators, the last line is just

\displaystyle \sum_i \Pr[i \textup{ is isolated}] + \sum_{i \neq j} \Pr[i,j \textup{ are both isolated}]

And we can compute each of these pieces quite easily. They are (asymptotically ignoring some constants):

\displaystyle n(1-p)^n + n^2(1-p)(1-p)^{2n-4}

Now combining the two terms together (subtracting off the square of the expectation),

\displaystyle \begin{aligned} \textup{Var}[X] &\leq n(1-p)^n + n^2(1-p)^{-3}(1-p)^{2n} - n^2(1-p)^{2n} \\ &= n(1-p)^n + n^2(1-p)^{2n} \left ( (1-p)^{-3} - 1 \right ) \end{aligned}

Now we divide by \mathbb{E}[X]^2 to get n^{-1}(1-p)^{-n} + (1-p)^{-3} - 1. Since we’re trying to see if p = (\log n) / n is a sharp threshold, the natural choice is to let p = o((\log n) / n). Then (1-p)^{-n} = e^{(1+o(1))pn} = e^{o(\log n)} and (1-p)^{-3} = 1 + o(1), so the whole quantity is bounded by

\displaystyle \frac{1}{n}e^{o(\log n)} + (1 + o(1)) - 1 = o(1)

i.e., the whole thing tends to zero, as desired.

Other thresholds

So we just showed that the property of having no isolated vertices in a random graph has a sharp threshold at p = (\log n) / n. Meaning that at any probability growing asymptotically faster than this the graph is almost surely devoid of isolated vertices, and at any probability growing asymptotically slower the graph almost surely has some isolated vertices.

This might seem like a miracle theorem, but there turns out to be similar theorems for lots of properties. Most of them you can also prove using basically the same method we’ve been using here. I’ll list some below. Also note they are all sharp, two-sided thresholds in the same way that the isolated vertex boundary is.

  • The existence of a component of size \omega(\log (n)) has a threshold of 1/n.
  • p = c/n for any c > 0 is a threshold for the existence of a giant component of linear size \Theta(n) (see the code sketch after this list). Moreover, above this threshold no other components will have size \omega(\log n).
  • In addition to (\log n) / n being a threshold for having no isolated vertices, it is also a threshold for connectivity.
  • p = (\log n + \log \log n + c(n)) / n is a sharp threshold for the existence of Hamiltonian cycles in the following sense: if c(n) = \omega(1) then there will be a Hamiltonian cycle almost surely, if c(n) \to -\infty there will be no Hamiltonian cycle almost surely, and if c(n) \to c the probability of a Hamiltonian cycle is e^{-e^{-c}}. This was proved by Komlós and Szemerédi in 1983. Moreover, there is an efficient algorithm to find Hamiltonian cycles in these random graphs when they exist with high probability.
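
Here is the promised sketch for the giant component threshold: sample G(n, c/n) for a few values of c and report the size of the largest component, using a union-find structure to track components as edges arrive.

import random
from itertools import combinations

def largestComponent(n, p):
    parent = list(range(n))

    def find(x):
        # walk to the root, halving paths as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (u, v) in combinations(range(n), 2):
        if random.random() < p:
            parent[find(u)] = find(v)   # merge the two components

    sizes = {}
    for v in range(n):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

n = 1000
for c in [0.5, 1.0, 1.5, 3.0]:
    print(c, largestComponent(n, c / n))   # expect a jump to linear size past c = 1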

Explosive Percolation

So now we know that as the probability of an edge increases, at some point the graph will spontaneously become connected, and somewhat earlier (at p around 1/n, a factor of \log n below the connectivity threshold) the so-called “giant component” will emerge and quickly engulf the entire graph.

Here’s a different perspective on this situation originally set forth by Achlioptas, D’Souza, and Spencer in 2009. It has since become called an “Achlioptas process.”

The idea is that you are watching a random graph grow. Rather than think about random graphs as having a probability above or below some threshold, you can think of it as the number of edges growing (so the thresholds will all be multiplied by n). Then you can imagine that you start with an empty graph, and at every time step someone is adding a new random edge to your graph. Fine, eventually you’ll get so many edges that a giant component emerges and you can measure when that happens.

But now imagine that instead of being given a single random new edge, you are given a choice. Say God presents you with two random edges, and you must pick which to add to your graph. Obviously you will eventually still get a giant component, but the question is how long can you prevent it from occurring? That is, how far back can we push the threshold for connectedness by cleverly selecting the new edge?
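
Here is a minimal sketch of one concrete choice rule, the “product rule” studied by Achlioptas, D’Souza, and Spencer: of the two offered edges, keep the one minimizing the product of the sizes of the components containing its endpoints. The parameters at the bottom are only for illustration.

import random

def productRuleProcess(n, steps):
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            size[rv] += size[ru]

    def cost(e):
        # product of the component sizes at the two endpoints
        return size[find(e[0])] * size[find(e[1])]

    for _ in range(steps):
        e1 = (random.randrange(n), random.randrange(n))
        e2 = (random.randrange(n), random.randrange(n))
        union(*min([e1, e2], key=cost))

    return max(size[find(v)] for v in range(n))

print(productRuleProcess(10000, 8000))   # still far from engulfing the graph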

What Achlioptas and company conjectured was that you can push it back (some), but that when you push it back as far as it can go, the threshold becomes discontinuous. That is, they believed there was a constant \delta \geq 1/2 such that the size of the largest component jumps from o(n) to \delta n in o(n) steps.

This turned out to be false, and Riordan and Warnke proved it. Nevertheless, the idea has been interpreted in an interesting light. People have claimed it is a useful model of disaster in the following sense: imagine that an edge between two vertices is a “crisis” relating two entities. Then in every step God presents you with two crises and you only have the resources to fix one. The idea is that when the entire graph is connected, you have one big disaster where all the problems are interacting with each other. The percolation process describes how long you can “survive” while avoiding the big disaster.

There are critiques of this interpretation, though, mainly about how simplistic it is. In particular, an Achlioptas process models a crisis as an exogenous force when in reality problems are usually endogenous. You don’t expect a meteor to hit the Earth, but you do expect humans to have an impact on the environment. Also, not everybody in the network is trying to avoid errors. Some companies thrive in economic downturns by managing toxic assets, for example. So one could reasonably argue that Achlioptas processes aren’t complex enough to model the realistic types of disasters we face.

Either way, I find it fantastic that something like a random graph (which for decades was securely in pure combinatorics away from applications) is spurring such discussion.

Next time, we’ll take one more dive into the theory of Erdős-Rényi random graphs to prove a very “meta” theorem about sharp thresholds. Then we’ll turn our attention to other models of random graphs, hopefully more realistic ones :)

Until then!

When Greedy Algorithms are Perfect: the Matroid

Greedy algorithms are by far one of the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.”

Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem?

The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history.

History

Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms.

Minimum Spanning Trees

It is often natural in an undirected graph G = (V,E) to find a connected subset of edges that touch every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet), where it costs a lot of money to lay down cable, it’s in the interest of internet service providers to design analogous backbones into their infrastructure.

A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph then you can remove any edge on that cycle without breaking connectivity (and trees are the maximal subgraphs without cycles). As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex, and it suggests the connection to linear algebra that we’ll describe later. It’s also a simple property of a tree that there is a unique path between any two vertices in the tree.

An example of a spanning tree

When your edges e \in E have nonnegative weights w_e \in \mathbb{R}^{\geq 0}, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree T is just the sum of the weights of its edges, and it’s an important enough definition to set off.

Definition: A minimum spanning tree T of a weighted graph G (with weights w_e \geq 0 for e \in E) is a spanning tree which minimizes the quantity

w(T) = \sum_{e \in T} w_e

There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest F in G, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning F is just all the vertices with no edges. And then at each step we add to F the edge e whose weight is smallest and also does not introduce any cycles into F. If the input graph G is connected then this will always produce a minimal spanning tree.
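
Here is a minimal sketch of Kruskal’s algorithm in code, using a union-find structure as the standard way to test whether a candidate edge would introduce a cycle:

def kruskal(n, edges):
    # edges is a list of triples (weight, u, v) on vertices 0, ..., n-1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for (w, u, v) in sorted(edges):
        rootU, rootV = find(u), find(v)
        if rootU != rootV:   # adding (u,v) does not create a cycle
            parent[rootU] = rootV
            tree.append((w, u, v))
    return tree

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
# [(1, 0, 1), (2, 1, 2), (4, 2, 3)]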

Theorem: Kruskal’s algorithm produces a minimal spanning tree of a connected graph.

Proof. Call F_t the forest produced at step t of the algorithm. Then F_0 is the set of all vertices of G and F_{n-1} is the final forest output by Kruskal’s (as a quick exercise, prove all spanning trees on n vertices have n-1 edges, so we will stop after n-1 rounds). It’s clear that F_{n-1} is a tree because the algorithm guarantees no F_i will have a cycle. And any tree with n-1 edges is necessarily a spanning tree, because if some vertex were left out then there would be n-1 edges on a subgraph of n-1 vertices, necessarily causing a cycle somewhere in that subgraph.

Now we’ll prove that F_{n-1} has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree T whose cost is strictly less than that of F_{n-1} (we can also suppose that T is minimal, but this is not necessary). Pick the minimal weight edge e \in T that is not in F_{n-1}. Adding e to F_{n-1} introduces a unique cycle C in F_{n-1}. This cycle has some strange properties. First, e has the highest cost of any edge on C. For otherwise, Kruskal’s algorithm would have chosen it before the heavier weight edges. Second, there is another edge in C that’s not in T (because T was a tree it can’t have the entire cycle). Call such an edge e'. Now we can remove e' from F_{n-1} and add e. This cannot decrease the total cost of F_{n-1}, but this transformation produces a tree with one more edge in common with T than before. This contradicts that T had strictly lower weight than F_{n-1}, because repeating the process we described would eventually transform F_{n-1} into T exactly, while never decreasing the total cost.

\square

Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost.

Columns of Matrices

Now let’s consider a different kind of problem. Say I give you a matrix like this one:

\displaystyle A = \begin{pmatrix} 2 & 0 & 1 & -1 & 0 \\ 0 & -4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 7 \end{pmatrix}

In the standard interpretation of linear algebra, this matrix represents a linear function f from one vector space V to another W, with the basis (v_1, \dots, v_5) of V being represented by columns and the basis (w_1, w_2, w_3) of W being represented by the rows. Column j tells you how to write f(v_j) as a linear combination of the w_i, and in so doing uniquely defines f.

Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of V under f? By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of A can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined.

If we were to give the columns of A costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of A, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set.

So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph G = (V,E), there is a natural linear map from the free vector space on E to the free vector space on V, so that a linear dependence among the columns (edges) corresponds to a cycle in G. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists.

The map is easy to construct: for each edge e = (v_i,v_j) you add a column with a 1 in the j-th row and a -1 in the i-th row. Then a suitably signed sum of edge columns gives you zero if and only if the edges form a cycle. So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over \mathbb{R}, but you can make it exactly the same kind of independence if you change your field from real numbers to \mathbb{Z}/2\mathbb{Z}. We won’t do this because it will detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective.
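
We can check this correspondence with a computation. The sketch below builds the incidence matrix of a small graph with numpy and uses the rank of a set of columns to test independence; a set of edge columns has full rank exactly when the edges contain no cycle.

import numpy

def incidenceMatrix(n, edges):
    # the column for edge (i, j) has a -1 in row i and a 1 in row j
    A = numpy.zeros((n, len(edges)))
    for col, (i, j) in enumerate(edges):
        A[i, col] = -1
        A[j, col] = 1
    return A

# a triangle on vertices 0,1,2 plus a pendant edge to vertex 3
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
A = incidenceMatrix(4, edges)

print(numpy.linalg.matrix_rank(A[:, [0, 1, 3]]))   # 3: a path, no cycle
print(numpy.linalg.matrix_rank(A[:, [0, 1, 2]]))   # 2: the triangle is a cycle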

So with the recognition of how similar these notions of independence are, we are ready to define matroids.

The Matroid

So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: there are always nonempty independent sets, and every subset of an independent set is independent. We will call any family of subsets with this property an independence system.

Definition: Let X be a finite set. An independence system over X is a family \mathscr{I} of subsets of X with the following two properties.

  1. \mathscr{I} is nonempty.
  2. If I \in \mathscr{I}, then so is every subset of I.

This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few equivalent ways to do this, but here’s one nice property that turns out to be enough.

Definition: A matroid M = (X, \mathscr{I}) is a set X and an independence system \mathscr{I} over X with the following property:

If A, B are in \mathscr{I} with |A| = |B| + 1, then there is an element a \in A \setminus B such that B \cup \{ a \} \in \mathscr{I}.

In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid):

For every subset Y \subset X, all maximal independent sets contained in Y have equal size.

Proof. For one direction, if you have two maximal sets A, B \subset Y \subset X that are not the same size (say A is bigger), then you can take any subset of A whose size is exactly |B| + 1, and use the extension property to make B larger, a contradiction. For the other direction, say that I know all maximal independent sets of any Y \subset X have the same size, and you give me independent sets A, B with |A| = |B| + 1. I need to find an a \in A \setminus B that I can add to B and keep it independent. What I do is take the subset Y = A \cup B. Now the sizes of A, B don’t change, but B can’t be maximal inside Y, because A is an independent subset of Y that is larger (A might not be maximal either, but it’s still independent). And the only way to extend B inside Y is by adding something from A, as desired.

\square

So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense.

But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid M = (X, \mathscr{I}) has a nonnegative real number w(x) associated with each x \in X. And suppose we had a black-box function to determine if a given set S \subset X is independent. Then the greedy algorithm maintains a set B, and at every step adds a minimum weight element that maintains the independence of B. If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid.
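
In code, the greedy algorithm is as short as its description. Here is a minimal sketch, where isIndependent stands in for the hypothetical black-box oracle:

def matroidGreedy(X, weight, isIndependent):
    B = set()
    # consider the elements from cheapest to most expensive
    for x in sorted(X, key=weight):
        if isIndependent(B | {x}):
            B.add(x)
    return B

For the spanning tree matroid the oracle is a cycle check (which is what the union-find structure did in Kruskal’s algorithm above), and for the column matroid it is a rank computation.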

The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously,

Theorem: Suppose that M = (X, \mathscr{I}) is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm to iteratively add the cheapest element of X that maintains independence. Then the greedy algorithm produces a maximally independent set S of minimal cost for every nonnegative cost function on X, if and only if M is a matroid.

It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”:

Part 1: If M is a matroid, then greedy works perfectly no matter the cost function.
Part 2: If greedy works perfectly for every cost function, then M is a matroid.

Proof of Part 1.

Call the cost function w : X \to \mathbb{R}^{\geq 0}, and suppose that the greedy algorithm picks elements B = \{ x_1, x_2, \dots, x_r \} (in that order). It’s easy to see that w(x_1) \leq w(x_2) \leq \dots \leq w(x_r). Now if you give me any list of r independent elements y_1, y_2, \dots, y_r \in X that has w(y_1) \leq \dots \leq w(y_r), I claim that w(x_i) \leq w(y_i) for all i. This proves what we want, because if there were a basis of size r with smaller weight, sorting its elements by weight would give a list contradicting this claim.

To prove the claim, suppose to the contrary that it were false, and for some k we have w(x_k) > w(y_k). Moreover, pick the smallest k for which this is true. Note k > 1, and so we can look at the special sets S = \{ x_1, \dots, x_{k-1} \} and T = \{ y_1, \dots, y_k \}. Now |T| = |S|+1, so by the matroid property there is some j between 1 and k so that S \cup \{ y_j \} is an independent set (and y_j is not in S). But then w(y_j) \leq w(y_k) < w(x_k), and so the greedy algorithm would have picked y_j before it picks x_k (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim.

Proof of Part 2.

We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let A,B be independent sets with |A| = |B| + 1, but for every a \in A \setminus B adding a to B never gives you an independent set.

Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of A \cap B a tiny weight we’ll call w_1. For elements of B - A we’ll use w_2, and for A - B we’ll use w_3, with w_4 for everything else. In a more compact notation:

\displaystyle w(x) = \begin{cases} w_1 & \textup{if } x \in A \cap B \\ w_2 & \textup{if } x \in B - A \\ w_3 & \textup{if } x \in A - B \\ w_4 & \textup{otherwise} \end{cases}

We need two things for this weight function to screw up the greedy algorithm. The first is that w_1 < w_2 < w_3 < w_4, so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of A \cap B, and then all of B - A, and by assumption it won’t be able to pick anything from A - B, but since B is assumed to be non-maximal, we have to pick at least one element from X - (A \cup B) and pay w_4 for it.

So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains A (and we know that there has to be some maximal independent set containing A). In other words, if we call m the size of a maximally independent set, we want

\displaystyle |A \cap B| w_1 + |B-A|w_2 + (m - |B|)w_4 > |A \cap B|w_1 + |A-B|w_3 + (m-|A|)w_4

This can be rearranged (using the fact that |A| = |B|+1) to

\displaystyle w_4 > |A-B|w_3 - |B-A|w_2

The point here is that the greedy picks too many elements of weight w_4, since if we were to start by taking all of A (instead of all of B), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof.

So we just need to make w_4 large enough to make this inequality hold, while still maintaining w_2 < w_3. There are probably many ways to do this, and here’s one. Pick some 0 < \varepsilon < 1, and set

\displaystyle w_1 = 0, \quad w_2 = \frac{\varepsilon}{|B-A|}, \quad w_3 = \frac{1 + \varepsilon}{|A-B|}, \quad w_4 = 2

It’s trivial that w_1 < w_2 and w_3 < w_4. For the rest we need some observations. First, the fact that |A-B| = |B-A| + 1 implies that w_2 < w_3. Second, both A-B and B-A are nonempty, since otherwise the second property of independence systems would contradict our assumption that augmenting B with elements of A breaks independence. Using this, we can divide by these cardinalities to get

\displaystyle w_4 = 2 > 1 = \frac{|A-B|(1 + \varepsilon)}{|A-B|} - \frac{|B-A|\varepsilon}{|B-A|}

This proves the claim and finishes the proof.

\square

As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number N that is larger than any of the original weights. So these two problems really are the same thing.

This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for.

But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round.

Another, perhaps subtler, issue is that the size of the ground set X might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with X implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element.

There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of X to find the minimum-cost guy to add. What if you just have to have faster runtime than O(n^2)? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time.

Until then!