Zero-One Laws for Random Graphs

Last time we saw a number of properties of graphs, such as connectivity, where the probability that an Erdős–Rényi random graph G(n,p) satisfies the property is asymptotically either zero or one. And this zero or one depends on whether the parameter p is above or below a universal threshold (that depends only on n and the property in question).

To remind the reader, the Erdős–Rényi random “graph” G(n,p) is a distribution over graphs that you draw from by including each edge independently with probability p. Last time we saw that the existence of an isolated vertex has a sharp threshold at (\log n) / n, meaning if p is asymptotically smaller than the threshold then with probability tending to 1 there will be isolated vertices, and if p is asymptotically larger then with probability tending to 1 there will be none. We also gave a laundry list of other properties with such thresholds.

One might want to study this phenomenon in general. Even if we might not be able to find all the thresholds we want for a given property, can we classify which properties have thresholds and which do not?

The answer turns out to be mostly yes! For large classes of properties, there are proofs that say things like, “either this property holds with probability tending to one, or it holds with probability tending to zero.” These are called “zero-one laws,” and they’re sort of meta theorems. We’ll see one such theorem in this post relating to constant edge-probabilities in random graphs, and we’ll remark on another at the end.

Sentences about graphs in first order logic

A zero-one law generally works by defining a class of properties, and then applying a generic first/second moment-type argument to every property in the class.

So first we define what kinds of properties we’ll discuss. We’ll pick a large class: anything that can be expressed in first-order logic in the language of graphs. That is, any finite logical statement that uses existential and universal quantifiers over variables, and whose only relation (test) is whether an edge exists between two vertices. We’ll call this test e(x,y). So you write some sentence P in this language, and you take a graph G, and you can ask P(G) = 1, whether the graph satisfies the sentence.

This seems like a really large class of properties, and it is, but let’s think carefully about what kinds of properties can be expressed this way. Clearly the existence of a triangle can be written this way, it’s just the sentence

\exists x,y,z : e(x,y) \wedge e(y,z) \wedge e(x,z)

I’m using \wedge for AND, and \vee for OR, and \neg for NOT. Similarly, one can express the existence of a clique of size k, or the existence of an independent set of size k, or a path of a fixed length, or whether there is a vertex of maximal degree n-1.
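To make the quantifier semantics concrete, here’s a minimal sketch in Python (the encoding of graphs as edge sets and the function names are my own) that evaluates the triangle sentence on a finite graph by brute force, one loop per quantifier:

from itertools import permutations

def e(G, x, y):
    # The one relation in our language: is there an edge between x and y?
    return (x, y) in G or (y, x) in G

def has_triangle(G, vertices):
    # \exists x, y, z : e(x,y) \wedge e(y,z) \wedge e(x,z),
    # checked by brute force over all triples of distinct vertices.
    return any(e(G, x, y) and e(G, y, z) and e(G, x, z)
               for x, y, z in permutations(vertices, 3))

G = {(0, 1), (1, 2), (0, 2), (2, 3)}
print(has_triangle(G, range(4)))  # True: vertices 0, 1, 2 form a triangle

Evaluating any fixed first-order sentence works the same way, which is why checking such a property on a given graph takes time polynomial in the number of vertices (with the exponent depending on the number of quantifiers).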

Here’s a question: can we write a formula which will be true for a graph if and only if it’s connected? Well such a formula seems like it would have to know about how many vertices there are in the graph, so it could say something like “for all x,y there is a path from x to y.” It seems like you’d need a family of such formulas that grows with n to make anything work. But this isn’t a proof; the question remains whether there is some other tricky way to encode connectivity.

But as it turns out, connectivity cannot be expressed by any sentence of first-order logic in this language. We won’t prove it here, but we will note at the end of the article that connectivity is in a different class of properties for which one can prove a similar zero-one law.

The zero-one law for first order logic

So the theorem about first-order expressible sentences is as follows.

Theorem: Let P be a property of graphs that can be expressed in the first order language of graphs (with the e(x,y) relation). Then for any constant p, the probability that P holds in G(n,p) has a limit of zero or one as n \to \infty.

Proof. We’ll prove the simpler case of p=1/2, but the general case is analogous. Given a graph G drawn from G(n,1/2), what we’ll do is define a countably infinite family of first-order sentences \varphi_{k,l}, and argue that they form a sort of “basis” for all first-order sentences about graphs.

First let’s describe the \varphi_{k,l}. For any k,l \in \mathbb{N}, the sentence will assert that for every set of k vertices and every disjoint set of l other vertices, there is some additional vertex connected to all of the first k but to none of the last l. (Distinctness matters here: if some x_i equaled some y_j, no z could be both adjacent and non-adjacent to it. One can enforce distinctness in the formula below using the equality relation; we suppress it to keep the formula readable.)

\displaystyle \varphi_{k,l} : \forall x_1, \dots, x_k, y_1, \dots, y_l \exists z : \\ e(z,x_1) \wedge \dots \wedge e(z,x_k) \wedge \neg e(z,y_1) \wedge \dots \wedge \neg e(z,y_l).

In other words, these formulas encapsulate every possible incidence pattern for a single vertex relative to finite sets of other vertices. It is a strange family of formulas, but it has a very nice property we’re about to get to. So for a fixed \varphi_{k,l}, what is the probability that it’s false on n vertices? We want to upper-bound this probability, and hence show that each \varphi_{k,l} is true with probability tending to 1.

Computing the probability: there are \binom{n}{k} \binom{n-k}{l} ways to choose the two sets. For a fixed choice, the probability that a given candidate vertex z has the required connections is 2^{-(k+l)}, so the probability that z fails is 1 - 2^{-(k+l)}. The n - (k+l) remaining vertices fail independently, so the probability that every candidate z fails is (1 - 2^{-(k+l)})^{n-k-l}. Taking a union bound over all choices of the two sets gives an upper bound on the probability that \varphi_{k,l} is false of:

\displaystyle \binom{n}{k}\binom{n-k}{l} \left(1-2^{-(k+l)}\right)^{n-k-l}

And k, l are constants, so the left two terms are polynomials in n while the rightmost term is exponentially small in n, and this implies that the whole expression tends to zero, as desired.
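As a sanity check, here’s a quick Monte Carlo sketch (my own throwaway code, using \varphi_{1,1} and small n so the check runs fast) that estimates how often \varphi_{k,l} holds in G(n, 1/2):

import random
from itertools import combinations

def gnp(n, p=0.5):
    # Draw a graph from G(n, p) as a set of edges (i, j) with i < j.
    return {(i, j) for i, j in combinations(range(n), 2) if random.random() < p}

def phi(G, n, k, l):
    # Check \varphi_{k,l}: for every disjoint choice of k vertices X and
    # l vertices Y, some vertex z outside both is adjacent to everything
    # in X and to nothing in Y.
    def adj(x, y):
        return (min(x, y), max(x, y)) in G
    for X in combinations(range(n), k):
        for Y in combinations([v for v in range(n) if v not in X], l):
            if not any(all(adj(z, x) for x in X) and not any(adj(z, y) for y in Y)
                       for z in range(n) if z not in X and z not in Y):
                return False
    return True

for n in (16, 32, 64):
    trials = 50
    hits = sum(phi(gnp(n), n, 1, 1) for _ in range(trials))
    print(n, hits / trials)  # the fraction should climb toward 1

Even for k = l = 1 you can see the trend: at small n the union bound above is vacuous and the property sometimes fails, while by n = 64 it essentially always holds.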

Break from proof.

A bit of model theory

So what we’ve proved so far is that the probability of every formula of the form \varphi_{k,l} being satisfied in G(n,1/2) tends to 1.

Now look at the set of all such formulas

\displaystyle \Phi = \{ \varphi_{k,l} : k,l \in \mathbb{N} \}

We ask: is there any graph which satisfies all of these formulas? Certainly it cannot be finite, because a finite graph runs out of candidate witnesses z once k + l is at least the number of vertices. But indeed, there is a countably infinite graph that works. It’s called the Rado graph.


The Rado graph has some really interesting properties, such as the fact that it contains every finite and countably infinite graph as an induced subgraph. Basically this means, as far as countably infinite graphs go, it’s the big momma of all graphs. It’s the graph in a very concrete sense of the word. It satisfies all of the formulas in \Phi, and in fact it’s uniquely determined by this, meaning that if any other countably infinite graph satisfies all the formulas in \Phi, then that graph is isomorphic to the Rado graph.

But for our purposes (proving a zero-one law), there’s a better perspective than graph theory on this object. In the logic perspective, the set \Phi is called a theory, meaning a set of statements that you consider “axioms” in some logical system. And we’re asking whether there is any model realizing the theory. That is, is there some logical system with a semantic interpretation (some mathematical object based on numbers, or sets, or whatever) that satisfies all the axioms?

A good analogy comes from the rational numbers, because they satisfy a similar property among all ordered sets. In fact, the rational numbers are the unique countable, ordered set with the property that it has no biggest/smallest element and is dense. That is, in the ordering there is always another element between any two elements you want. So the theorem says if you have two countable sets with these properties, then they are actually isomorphic as ordered sets, and they are isomorphic to the rational numbers.

So, while we won’t prove that the Rado graph is a model for our theory \Phi, we will use that fact to great benefit. One consequence of having a theory with a model is that the theory is consistent, meaning it can’t imply any contradictions. Another fact is that this theory \Phi is complete. Completeness means that any formula or its negation is logically implied by the theory. Note these are syntactic implications (using the standard deduction rules of first-order logic), and have nothing to do with the model interpreting the theory.

The proof that \Phi is complete actually follows from the uniqueness of the Rado graph as the only countable model of \Phi. Suppose to the contrary that \Phi is not complete: then there is some formula \psi such that neither \psi nor its negation is provable from \Phi. Now extend \Phi in two ways: by adding \psi and by adding \neg \psi. Since neither formula was refutable, both of the new theories are consistent, and by a theorem from logic (the Löwenheim–Skolem theorem) a consistent countable theory has a countable model. But both of these new models are also countable models of \Phi, so they have to both be the Rado graph. But this is very embarrassing for them, because we assumed they disagree on the truth of \psi.

So now we can go ahead and prove the zero-one law theorem.

Return to proof.

Given an arbitrary first-order property \varphi \not \in \Phi, either \varphi or its negation can be derived from \Phi. Without loss of generality suppose it’s \varphi. A proof is a finite object, so the derivation uses only finitely many of the \varphi_{k,l}. Each of these holds in G(n,1/2) with probability tending to 1, so by a union bound they all hold simultaneously with probability tending to 1. And whenever they all hold, so does \varphi, since it is logically implied by them. So \varphi holds with probability tending to 1. And we’re done!

\square

If you don’t like model theory, there is another “purely combinatorial” proof of the zero-one law using something called Ehrenfeucht–Fraïssé games. It is a bit longer, though.

Other zero-one laws

One might naturally ask two questions: what if your probability is not constant, and what other kinds of properties have zero-one laws? Both great questions.

For the first, there are some extra theorems. I’ll just describe one that has always seemed very strange to me. If your probability is of the form p = n^{-\alpha} where \alpha is irrational, then the zero-one law still holds! This is a theorem of Shelah and Spencer, and it really makes you wonder why irrational numbers would be so well behaved while rational numbers are not :)

For the second question, there is another theorem about monotone properties of graphs. Monotone properties come in two flavors, so called “increasing” and “decreasing.” I’ll describe increasing monotone properties and the decreasing counterpart should be obvious. A property is called monotone increasing if adding edges can never destroy the property. That is, with an empty graph you don’t have the property (or maybe you do), and as you start adding edges eventually you suddenly get the property, but then adding more edges can’t cause you to lose the property again. Good examples of this include connectivity, or the existence of a triangle.

So the theorem is that every monotone property of graphs has a threshold, and hence an analogous zero-one law on either side of it. Great!

It’s not so often that you get to see these neat applications of logic and model theory to graph theory and (by extension) computer science. But when you do get to apply them they seem very powerful and mysterious. I think it’s a good thing.

Until next time!

Occam’s Razor and PAC-learning

So far our discussion of learning theory has been seeing the definition of PAC-learning, tinkering with it, and seeing simple examples of learnable concept classes. We’ve said that our real interest is in proving big theorems about what big classes of problems can and can’t be learned. One major tool for doing this with PAC is the concept of VC-dimension, but to set the stage we’re going to prove a simpler theorem that gives a nice picture of PAC-learning when your hypothesis class is small. In short, the theorem we’ll prove says that if you have a finite set of hypotheses to work with, and you can always find a hypothesis that’s consistent with the data you’ve seen, then you can learn efficiently. It’s obvious, but we want to quantify exactly how much data you need to ensure low error. This will also give us some concrete mathematical justification for philosophical claims about simplicity, and the theorems won’t change much when we generalize to VC-dimension in a future post.

The Chernoff bound

One tool we will need in this post, which shows up all across learning theory, is the Chernoff-Hoeffding bound. We covered this famous inequality in detail previously on this blog, but the part of that post we need is the following theorem that says, informally, that if you average a bunch of bounded random variables, then the probability this average random variable deviates from its expectation is exponentially small in the amount of deviation. Here’s the slightly simplified version we’ll use:

Theorem: Let X_1, \dots, X_m be independent random variables whose values are in the range [0,1]. Call \mu_i = \mathbf{E}[X_i], X = \sum_i X_i, and \mu = \mathbf{E}[X] = \sum_i \mu_i. Then for all t > 0,

\displaystyle \Pr(|X-\mu| > t) \leq 2e^{-2t^2 / m}

One nice thing about the Chernoff bound is that it doesn’t matter how the variables are distributed. This is important because in PAC we need guarantees that hold for any distribution generating data. Indeed, in our case the random variables above will be individual examples drawn from the distribution generating the data. We’ll be estimating the probability that our hypothesis has error deviating more than \varepsilon, and we’ll want to bound this by \delta, as in the definition of PAC-learning. Since the amount of deviation (error) and the number of samples (m) both occur in the exponent, the trick is in balancing the two values to get what we want.

Realizability and finite hypothesis classes

Let’s recall the PAC model once more. We have a distribution D generating labeled examples (x, c(x)), where c is an unknown function coming from some concept class C. Our algorithm can draw a polynomial number of these examples, and it must produce a hypothesis h from some hypothesis class H (which may or may not contain c). The guarantee we need is that, for any \delta, \varepsilon > 0, the algorithm produces a hypothesis whose error on D is at most \varepsilon, and this event happens with probability at least 1-\delta. All of these probabilities are taken over the randomness in the algorithm’s choices and the distribution D, and it has to work no matter what the distribution D is.

Let’s introduce some simplifications. First, we’ll assume that the hypothesis and concept classes H and C are finite. Second, we’ll assume that C \subset H, so that you can actually hope to find a hypothesis of zero error. This is called realizability. Later we’ll relax these first two assumptions, but they make the analysis a bit cleaner. Finally, we’ll assume that we have an algorithm which, when given labeled examples, can find in polynomial time a hypothesis h \in H that is consistent with every example.

These assumptions give a trivial learning algorithm: draw a bunch of examples and output any consistent hypothesis. The question is, how many examples do we need to guarantee that the hypothesis we find has the prescribed generalization error? It will certainly grow with 1 / \varepsilon, but we need to ensure it will only grow polynomially fast in this parameter. Indeed, realizability is such a strong assumption that we can prove a polynomial bound using even more basic probability theory than the Chernoff bound.

Theorem: An algorithm that efficiently finds a consistent hypothesis will PAC-learn any finite concept class provided it has at least m samples, where

\displaystyle m \geq \frac{1}{\varepsilon} \left ( \log |H| + \log \left ( \frac{1}{\delta} \right ) \right )

Proof. All we need to do is bound the probability that a bad hypothesis (one with error more than \varepsilon) is consistent with the given data. Now fix D, c, \delta, \varepsilon, and draw m examples and let h be any hypothesis that is consistent with the drawn examples. Suppose that the bad thing happens, that \Pr_D(h(x) \neq c(x)) > \varepsilon.

Because the examples are all drawn independently from D, the chance that all m examples are consistent with h is

\displaystyle (1 - \Pr_{x \sim D}(h(x) \neq c(x)))^m < (1 - \varepsilon)^m

What we’re saying here is that the probability a specific bad hypothesis happens to be consistent with your drawn examples is exponentially small in the number of samples. So if we apply the union bound, the probability that some hypothesis you could produce is bad is at most (1 - \varepsilon)^m S, where S is the number of hypotheses the algorithm might produce.

A crude upper bound on the number of hypotheses you could produce is just the total number of hypotheses, |H|. Even cruder, let’s use the inequality (1 - x) < e^{-x} to give the bound

\displaystyle (1 - \varepsilon)^m |H| < e^{-\varepsilon m} |H|

Now we want to make sure that this probability, the probability of choosing a high-error (yet consistent) hypothesis, is at most \delta. So we can set the above quantity less than \delta and solve for m:

\displaystyle e^{-\varepsilon m} |H| \leq \delta

Taking logs and solving for m gives the desired bound.

\square
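To get a feel for the numbers, here’s a quick sketch of the bound in Python (the function name is mine):

from math import ceil, log

def pac_sample_bound(H_size, eps, delta):
    # Realizable case: m >= (1/eps) * (log|H| + log(1/delta)).
    return ceil((log(H_size) + log(1 / delta)) / eps)

# A million hypotheses, 1% error, 99% confidence:
print(pac_sample_bound(10**6, 0.01, 0.01))  # 1843

Note the logarithmic dependence on |H|: squaring the number of hypotheses only doubles its contribution to the sample complexity.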

An obvious objection is: what if you aren’t working with a hypothesis class where you can guarantee that you’ll find a consistent hypothesis? Well, in that case we’ll need to inspect the definition of PAC again and reevaluate our measures of error. It turns out we’ll get a similar theorem as above, but with the stipulation that we’re only achieving error within epsilon of the error of the best available hypothesis.

But before we go on, this theorem has some deep philosophical interpretations. In particular, suppose that, before drawing your data, you could choose to work with one of two finite hypothesis classes H_1, H_2, with |H_1| > |H_2|. If you can find a consistent hypothesis no matter which hypothesis class you use, then this theorem says that your generalization guarantees are much stronger if you start with the smaller hypothesis class.

In other words, all else being equal, the smaller set of hypotheses is better. For this reason, the theorem is sometimes called the “Occam’s Razor” theorem. We’ll see a generalization of this theorem in the next section.

Unrealizability and an extra epsilon

Now suppose that H doesn’t contain any hypotheses with error less than \varepsilon. What can we hope to do in this case? One thing is that we can hope to find a hypothesis whose error is within \varepsilon of the minimal error of any hypothesis in H. Moreover, we might not have any consistent hypotheses for some data samples! So rather than require an algorithm to produce an h \in H that is perfectly consistent with the data, we just need it to produce a hypothesis that has minimal empirical error, in the sense that it is as close to consistent as the best hypothesis in H on the data you happened to draw. It seems like such a strategy would find you a hypothesis that’s close to the best one in H, but we need to prove it and determine how many samples we need to draw to succeed.

So let’s make some definitions to codify this. For a given hypothesis, call \textup{err}(h) the true error of h on the distribution D. Our assumption is that there may be no hypotheses in H with \textup{err}(h) = 0. Next we’ll call the empirical error \hat{\textup{err}}(h), the fraction of the m drawn examples on which h disagrees with the given labels.

Definition: We say a concept class C is agnostically learnable using the hypothesis class H if for all c \in C and all distributions D (and all \varepsilon, \delta > 0), there is a learning algorithm A which produces a hypothesis h that with probability at least 1 - \delta satisfies

\displaystyle \text{err}(h) \leq \min_{h' \in H} \text{err}(h') + \varepsilon

and everything runs in the same sort of polynomial time as for vanilla PAC-learning. This is called the agnostic setting or the unrealizable setting, in the sense that we may not be able to find a hypothesis with zero empirical error.

We seek to prove that all concept classes are agnostically learnable with a finite hypothesis class, provided you have an algorithm that can minimize empirical error. But actually we’ll prove something stronger.

Theorem: Let H be a finite hypothesis class and m the number of samples drawn. Then for any \delta > 0, with probability 1-\delta the following holds:

\displaystyle \forall h \in H, \;\; |\hat{\text{err}}(h) - \text{err}(h)| \leq \sqrt{\frac{\log |H| + \log(2 / \delta)}{2m}}

In other words, we can precisely quantify how the empirical error converges to the true error as the number of samples grows. But this holds for all hypotheses in H, so this provides a uniform bound of the difference between true and empirical error for the entire hypothesis class.

Proving this requires the Chernoff bound. Fix a single hypothesis h \in H. If you draw an example x, call Z the random variable which is 1 when h(x) \neq c(x), and 0 otherwise. So if you draw m samples and call the i-th variable Z_i, the empirical error of the hypothesis is \hat{\text{err}}(h) = \frac{1}{m}\sum_i Z_i. Moreover, the true error is the expectation of this quantity, since \mathbf{E}[\frac{1}{m} \sum_i Z_i] = \frac{1}{m} \sum_i \mathbf{E}[Z_i] = \Pr_{x \sim D}(h(x) \neq c(x)) = \text{err}(h).

So what we’re asking is the probability that the empirical error deviates from the true error by a lot. Let’s call “a lot” some parameter \varepsilon/2 > 0 (the reason for dividing by two will become clear in the corollary to the theorem). Then plugging things into the Chernoff-Hoeffding bound gives a bound on the probability of the “bad event,” that the empirical error deviates too much.

\displaystyle \Pr[|\hat{\text{err}}(h) - \text{err}(h)| > \varepsilon / 2] < 2e^{-\frac{\varepsilon^2m}{2}}

Now to get a bound on the probability that some hypothesis is bad, we apply the union bound and use the fact that |H| is finite to get

\displaystyle \Pr[\exists h \in H : |\hat{\text{err}}(h) - \text{err}(h)| > \varepsilon / 2] < 2|H|e^{-\frac{\varepsilon^2m}{2}}

Now say we want to bound this probability by \delta. We set 2|H|e^{-\varepsilon^2m/2} \leq \delta, solve for m, and get

\displaystyle m \geq \frac{2}{\varepsilon^2}\left ( \log |H| + \log \frac{2}{\delta} \right )

This gives us a concrete quantification of the tradeoff between m, \varepsilon, \delta, and |H|. Indeed, if we pick m to be this large, then solving for \varepsilon / 2 gives the exact inequality from the theorem.

\square

Now we know that if we pick enough samples (polynomially many in all the parameters), and our algorithm can find a hypothesis h of minimal empirical error, then we get the following corollary:

Corollary: For any \varepsilon, \delta > 0, the algorithm that draws m \geq \frac{2}{\varepsilon^2}(\log |H| + \log(2/ \delta)) examples and finds any hypothesis of minimal empirical error will, with probability at least 1-\delta, produce a hypothesis that is within \varepsilon of the best hypothesis in H.

Proof. By the previous theorem, with the desired probability, for all h \in H we have |\hat{\text{err}}(h) - \text{err}(h)| < \varepsilon/2. Let g = \arg \min_{h' \in H} \text{err}(h') be a hypothesis of minimal true error, and let h be the algorithm’s output, a hypothesis of minimal empirical error. Then \hat{\text{err}}(h) \leq \hat{\text{err}}(g), and chaining the inequalities gives \text{err}(h) \leq \hat{\text{err}}(h) + \varepsilon/2 \leq \hat{\text{err}}(g) + \varepsilon/2 \leq \text{err}(g) + \varepsilon. In words, the true error of the algorithm’s hypothesis is within \varepsilon of the error of the best hypothesis, as desired.

\square
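Here’s a small simulation of the uniform convergence theorem, using a finite class of threshold hypotheses on [0,1] and noisy labels; the setup and all names are my own, just for illustration:

import random
from math import log, sqrt

def true_err(theta, noise=0.2):
    # h_theta(x) = 1 iff x >= theta; the target is h_{0.5} with each label
    # flipped with probability `noise`. Under a uniform x, h_theta disagrees
    # with the clean target on an interval of length |theta - 0.5|.
    d = abs(theta - 0.5)
    return d * (1 - noise) + (1 - d) * noise

def sample(m, noise=0.2):
    data = []
    for _ in range(m):
        x = random.random()
        y = int(x >= 0.5)
        if random.random() < noise:
            y = 1 - y
        data.append((x, y))
    return data

H = [i / 100 for i in range(101)]  # 101 threshold hypotheses
m, delta = 2000, 0.05
data = sample(m)
emp = {t: sum(int(x >= t) != y for x, y in data) / m for t in H}
gap = max(abs(emp[t] - true_err(t)) for t in H)
bound = sqrt((log(len(H)) + log(2 / delta)) / (2 * m))
print(gap, bound)  # the observed gap should fall below the bound

Running this a few times, the largest empirical-vs-true gap over all 101 hypotheses stays comfortably below the theorem’s bound (as it should, at least 95% of the time).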

Next time

Both of these theorems tell us something about the generalization guarantees for learning with hypothesis classes of a certain size. But this isn’t exactly the most reasonable measure of the “complexity” of a family of hypotheses. For example, one could have a hypothesis class with a billion intervals on \mathbb{R} (say you’re trying to learn intervals, or thresholds, or something easy), and the guarantees we proved in this post are nowhere near optimal.

So the question is: say you have a potentially infinite class of hypotheses, but the hypotheses are all “simple” in some way. First, what is the right notion of simplicity? And second, how can you get guarantees based on that analogous to these? We’ll discuss this next time when we define the VC-dimension.

Until then!

Martingales and the Optional Stopping Theorem

This is a guest post by my colleague Adam Lelkes.

The goal of this primer is to introduce an important and beautiful tool from probability theory, a model of fair betting games called martingales. In this post I will assume that the reader is familiar with the basics of probability theory. For those that need to refresh their knowledge, Jeremy’s excellent primers (1, 2) are a good place to start.

The Geometric Distribution and the ABRACADABRA Problem

Before we start playing with martingales, let’s start with an easy exercise. Consider the following experiment: we throw an ordinary die repeatedly until the first time a six appears. How many throws will this take in expectation? The reader might recognize immediately that this exercise can be easily solved using the basic properties of the geometric distribution, which models this experiment exactly. We have independent trials, every trial succeeding with some fixed probability p. If X denotes the number of trials needed to get the first success, then clearly \Pr(X = k) = (1-p)^{k-1} p (since first we need k-1 failures which occur independently with probability 1-p, then we need one success which happens with probability p). Thus the expected value of X is

\displaystyle E(X) = \sum_{k=1}^\infty k P(X = k) = \sum_{k=1}^\infty k (1-p)^{k-1} p = \frac1p

by basic calculus. In particular, if success is defined as getting a six, then p=1/6 thus the expected time is 1/p=6.
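If you’d like to check this empirically, here’s a tiny simulation (my own sketch):

import random

def throws_until_six():
    # Roll a fair die until the first six; return the number of rolls.
    count = 1
    while random.randint(1, 6) != 6:
        count += 1
    return count

trials = 100_000
print(sum(throws_until_six() for _ in range(trials)) / trials)  # about 6.0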

Now let us move on to a somewhat similar, but more interesting and difficult problem, the ABRACADABRA problem. Here we need two things for our experiment, a monkey and a typewriter. The monkey is asked to start bashing random keys on a typewriter. For simplicity’s sake, we assume that the typewriter has exactly 26 keys corresponding to the 26 letters of the English alphabet and the monkey hits each key with equal probability. There is a famous theorem in probability, the infinite monkey theorem, that states that given infinite time, our monkey will almost surely type the complete works of William Shakespeare. Unfortunately, according to astronomers the sun will begin to die in a few billion years, and the expected time we need to wait until a monkey types the complete works of William Shakespeare is orders of magnitude longer, so it is not feasible to use monkeys to produce works of literature.

So let’s scale down our goals, and let’s just wait until our monkey types the word ABRACADABRA. What is the expected time we need to wait until this happens? The reader’s first idea might be to use the geometric distribution again. ABRACADABRA is eleven letters long, the probability of getting one letter right is \frac{1}{26}, thus the probability of a random eleven-letter word being ABRACADABRA is exactly \left(\frac{1}{26}\right)^{11}. So if typing 11 letters is one trial, the expected number of trials is

\displaystyle \frac1{\left(\frac{1}{26}\right)^{11}}=26^{11}

which means 11\cdot 26^{11} keystrokes, right?

Well, not exactly. The problem is that we broke up our random string into eleven-letter blocks and waited until one block was ABRACADABRA. However, this word can start in the middle of a block. In other words, we considered a string a success only if the starting position of the word ABRACADABRA was divisible by 11. For example, FRZUNWRQXKLABRACADABRA would be recognized as success by this model but the same would not be true for AABRACADABRA. However, it is at least clear from this observation that 11\cdot 26^{11} is a strict upper bound for the expected waiting time. To find the exact solution, we need one very clever idea, which is the following:

Let’s Open a Casino!

Do I mean that abandoning our monkey and typewriter and investing our time and money in a casino is a better idea, at least in financial terms? This might indeed be the case, but here we will use a casino to determine the expected wait time for the ABRACADABRA problem. Unfortunately we won’t make any money along the way (in expectation) since our casino will be a fair one.

Let’s do the following thought experiment: let’s open a casino next to our typewriter. Before each keystroke, a new gambler comes to our casino and bets $1 that the next letter will be A. If he loses, he goes home disappointed. If he wins, he bets all the money he won on the event that the next letter will be B. Again, if he loses, he goes home disappointed. (This won’t wreak havoc on his financial situation, though, as he only loses $1 of his own money.) If he wins again, he bets all the money on the event that the next letter will be R, and so on.

If a gambler wins, how much does he win? We said that the casino would be fair, i.e. the expected outcome should be zero. That means that if the gambler bets $1, he should receive $26 if he wins, since the probability of getting the next letter right is exactly \frac{1}{26} (thus the expected value of the change in the gambler’s fortune is \frac{25}{26}\cdot (-1) + \frac{1}{26}\cdot (+25) = 0).

Let’s keep playing this game until the word ABRACADABRA first appears and let’s denote the number of keystrokes up to this time as T. As soon as we see this word, we close our casino. How much was the revenue of our casino then? Remember that before each keystroke, a new gambler comes in and bets $1, and if he wins, he will only bet the money he has received so far, so our revenue will be exactly T dollars.

How much will we have to pay for the winners? Note that the only winners in the last round are the players who bet on A. How many of them are there? There is one that just came in before the last keystroke and this was his first bet. He wins $26. There was one who came three keystrokes earlier and he made four successful bets (ABRA). He wins \$26^4. Finally there is the luckiest gambler who went through the whole ABRACADABRA sequence, his prize will be \$26^{11}. Thus our casino will have to give out 26^{11}+26^4+26 dollars in total, which is just under the price of 200,000 WhatsApp acquisitions.

Now we will make one crucial observation: even at the time when we close the casino, the casino is fair! Thus in expectation our expenses will be equal to our income. Our income is T dollars, the expected value of our expenses is 26^{11}+26^4+26 dollars, thus E(T)=26^{11}+26^4+26. A beautiful solution, isn’t it? So if our monkey types at 150 characters per minute on average, we will have to wait around 47 million years until we see ABRACADABRA. Oh well.
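The same casino argument works for any word: every length j for which the word’s length-j prefix equals its length-j suffix contributes s^j to the expected waiting time, where s is the alphabet size (for ABRACADABRA those lengths are 11, 4, and 1). ABRACADABRA itself is hopeless to simulate, but here’s a sketch checking the formula on a two-letter alphabet (all names are mine):

import random

def expected_wait(word, alphabet):
    # Sum s^j over every j such that the length-j prefix of `word` equals
    # its length-j suffix; for "ABRACADABRA" over 26 letters this gives
    # 26**11 + 26**4 + 26.
    s = len(alphabet)
    return sum(s ** j for j in range(1, len(word) + 1) if word[:j] == word[-j:])

def simulate(word, alphabet):
    # Type random letters until `word` appears; count the keystrokes.
    tail, count = "", 0
    while tail != word:
        tail = (tail + random.choice(alphabet))[-len(word):]
        count += 1
    return count

word, alphabet = "ABAB", "AB"
trials = 20_000
avg = sum(simulate(word, alphabet) for _ in range(trials)) / trials
print(avg, expected_wait(word, alphabet))  # both should be about 20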

Time to be More Formal

After giving an intuitive outline of the solution, it is time to formalize the concepts that we used, to translate our fairy tales into mathematics. The mathematical model of the fair casino is called a martingale, named after a class of betting strategies that enjoyed popularity in 18th century France. The gambler’s fortune (or the casino’s, depending on our viewpoint) can be modeled with a sequence of random variables. X_0 will denote the gambler’s fortune before the game starts, X_1 the fortune after one round and so on. Such a sequence of random variables is called a stochastic process. We will require the expected value of the gambler’s fortune to be always finite.

How can we formalize the fairness of the game? Fairness means that the gambler’s fortune does not change in expectation, i.e. the expected value of X_n, given X_1, X_2, \ldots, X_{n-1} is the same as X_{n-1}. This can be written as E(X_n | X_1, X_2, \ldots, X_{n-1}) = X_{n-1} or, equivalently, E(X_n - X_{n-1} | X_1, X_2, \ldots, X_{n-1}) = 0.

The reader might be less comfortable with the first formulation. What does it mean, after all, that the conditional expected value of a random variable is another random variable? Shouldn’t the expected value be a number? The answer is that in order to have solid theoretical foundations for the definition of a martingale, we need a more sophisticated notion of conditional expectations. Such sophistication involves measure theory, which is outside the scope of this post. We will instead naively accept the definition above, and the reader can look up all the formal details in any serious probability text (such as [1]).

Clearly the fair casino we constructed for the ABRACADABRA exercise is an example of a martingale. Another example is the simple symmetric random walk on the number line: we start at 0, toss a coin in each step, and move one step in the positive or negative direction based on the outcome of our coin toss.

The Optional Stopping Theorem

Remember that we closed our casino as soon as the word ABRACADABRA appeared and we claimed that our casino was also fair at that time. In mathematical language, the closed casino is called a stopped martingale. The stopped martingale is constructed as follows: we wait until our martingale X exhibits a certain behaviour (e.g. the word ABRACADABRA is typed by the monkey), and we define a new martingale X' as follows: let X'_n = X_n if n < T and X'_n = X_T if n \ge T, where T denotes the stopping time, i.e. the time at which the desired event occurs. Notice that T itself is a random variable.

We require our stopping time T to depend only on the past, i.e. that at any time we should be able to decide whether the event that we are waiting for has already happened or not (without looking into the future). This is a very reasonable requirement. If we could look into the future, we could obviously cheat by closing our casino just before some gambler would win a huge prize.

We said that the expected wealth of the casino at the stopping time is the same as the initial wealth. This is guaranteed by Doob’s optional stopping theorem, which states that under certain conditions, the expected value of a martingale at the stopping time is equal to its expected initial value.

Theorem: (Doob’s optional stopping theorem) Let X_n be a martingale stopped at step T, and suppose one of the following three conditions hold:

  1. The stopping time T is almost surely bounded by some constant;
  2. The stopping time T is almost surely finite and every step of the stopped martingale X_n is almost surely bounded by some constant; or
  3. The expected stopping time E(T) is finite and the absolute value of the martingale increments |X_n-X_{n-1}| are almost surely bounded by a constant.

Then E(X_T) = E(X_0).

We omit the proof because it requires measure theory, but the interested reader can see it in these notes.

For applications, (1) and (2) are the trivial cases. In the ABRACADABRA problem, the third condition holds: the expected stopping time is finite (in fact, we showed using the geometric distribution that it is less than 26^{12}) and the absolute value of a martingale increment is either 1 or a net payoff which is bounded by 26^{11}+26^4+26. This shows that our solution is indeed correct.

Gambler’s Ruin

Another famous application of martingales is the gambler’s ruin problem. This problem models the following game: there are two players, the first player has a dollars, the second player has b dollars. In each round they toss a coin and the loser gives one dollar to the winner. The game ends when one of the players runs out of money. There are two obvious questions: (1) what is the probability that the first player wins and (2) how long will the game take in expectation?

Let X_n denote the change in the second player’s fortune after n rounds, and set X_0 = 0. Let T_k denote the first time s when X_s = k. Then our first question can be formalized as trying to determine \Pr(T_{-b} < T_a). Let t = \min \{ T_{-b}, T_a\}. Clearly t is a stopping time. By the optional stopping theorem we have that

\displaystyle 0=E(X_0)=E(X_t)=-b\Pr(T_{-b} < T_a)+a(1-\Pr(T_{-b} < T_a))

thus \Pr(T_{-b} < T_a)=\frac{a}{a+b}.
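Here’s a quick simulation of the answer to the first question (my own sketch, which deliberately leaves the second question alone):

import random

def second_player_ruined(a, b):
    # Track the change in the second player's fortune; return True if it
    # hits -b (ruin) before reaching +a.
    x = 0
    while -b < x < a:
        x += random.choice((-1, 1))
    return x == -b

a, b = 3, 7
trials = 50_000
freq = sum(second_player_ruined(a, b) for _ in range(trials)) / trials
print(freq, a / (a + b))  # both should be about 0.3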

I would like to ask the reader to try to answer the second question. It is a little bit trickier than the first one, though, so here is a hint: X_n^2-n is also a martingale (prove it), and applying the optional stopping theorem to it leads to the answer.

A Randomized Algorithm for 2-SAT

The reader is probably familiar with 3-SAT, one of the first problems shown to be NP-complete. Recall that 3-SAT is the following problem: given a boolean formula in conjunctive normal form with at most three literals in each clause, decide whether there is a satisfying truth assignment. It is natural to ask if or why 3 is special, i.e. why don’t we work with k-SAT for some k \ne 3 instead? Clearly the hardness of the problem is monotone increasing in k since k-SAT is a special case of (k+1)-SAT. On the other hand, SAT (without any bound on the number of literals per clause) is clearly in NP, thus 3-SAT is just as hard as k-SAT for any k>3. So the only question is: what can we say about 2-SAT?

It turns out that 2-SAT is easier than satisfiability in general: 2-SAT is in P. There are many algorithms for solving 2-SAT. Here is one deterministic algorithm: associate a graph to the 2-SAT instance such that there is one vertex for each variable and each negated variable, and there is a directed edge x \to y whenever there is a clause (\bar x \lor y). Recall that \bar x \lor y is equivalent to x \implies y, so the edges record the implications between the literals. Clearly the 2-SAT instance is not satisfiable if there is a variable x such that there are directed paths x \to \bar x and \bar x \to x (since x \Leftrightarrow \bar x is always false). It can be shown that this is not only a sufficient but also a necessary condition for unsatisfiability, hence the 2-SAT instance is satisfiable if and only if there are no such paths. If there are directed paths from one vertex of a graph to another and vice versa then they are said to belong to the same strongly connected component. There are several graph algorithms for finding strongly connected components of directed graphs; the most well-known ones are based on depth-first search.

Now we give a very simple randomized algorithm for 2-SAT (due to Christos Papadimitriou in a ’91 paper): start with an arbitrary truth assignment and while there are unsatisfied clauses, pick one and flip the truth value of a random literal in it. Stop after O(n^2) rounds where n denotes the number of variables. Clearly if the formula is not satisfiable then nothing can go wrong, we will never find a satisfying truth assignment. If the formula is satisfiable, we want to argue that with high probability we will find a satisfying truth assignment in O(n^2) steps.
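Here’s a compact sketch of this algorithm in Python (the clause encoding and the constant in front of n^2 are my own choices):

import random

def random_walk_2sat(clauses, n, rounds=None):
    # Papadimitriou's random walk for 2-SAT. A clause is a pair of literals;
    # literal +i means variable i, -i means its negation (variables 1..n).
    # Returns a satisfying assignment (a dict) or None if the rounds run out.
    if rounds is None:
        rounds = 100 * n * n  # any large multiple of n^2 will do
    assign = {i: random.random() < 0.5 for i in range(1, n + 1)}

    def satisfied(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(rounds):
        unsatisfied = [c for c in clauses if not any(satisfied(l) for l in c)]
        if not unsatisfied:
            return assign
        # Pick an unsatisfied clause and flip a uniformly random literal in it.
        lit = random.choice(random.choice(unsatisfied))
        assign[abs(lit)] = not assign[abs(lit)]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(random_walk_2sat([(1, 2), (-1, 3), (-2, -3)], 3))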

The idea of the proof is the following: fix an arbitrary satisfying truth assignment and consider the Hamming distance of our current assignment from it. The Hamming distance of two truth assignments (or in general, of two binary vectors) is the number of coordinates in which they differ. Since we flip one bit in every step, this Hamming distance changes by \pm 1 in every round. It is also easy to see that in every step the distance is at least as likely to be decreased as to be increased (since we pick an unsatisfied clause, which means at least one of the two literals in the clause differs in value from the satisfying assignment).

Thus this is an unfair “gambler’s ruin” problem where the gambler’s fortune is the Hamming distance from the solution, and it decreases with probability at least \frac{1}{2}. Such a stochastic process is called a supermartingale — and this is arguably a better model for real-life casinos. (If we flip the inequality, the stochastic process we get is called a submartingale.) Also, in this case the gambler’s fortune (the Hamming distance) cannot increase beyond n. We can also think of this process as a random walk on the set of integers: we start at some number and in each round we make one step to the left or to the right with some probability. If we use random walk terminology, 0 is called an absorbing barrier since we stop the process when we reach 0. The number n, on the other hand, is called a reflecting barrier: we cannot reach n+1, and whenever we get close we always bounce back.

There is an analogous version of the optional stopping theorem for supermartingales and submartingales, where the conditions are the same but the conclusion holds with an inequality instead of equality. It follows from the optional stopping theorem that the gambler will be ruined (i.e. a satisfying truth assignment will be found) in O(n^2) steps with high probability.

[1] For a reference on stochastic processes and martingales, see the text of Durrett.

Simulating a Biased Coin with a Fair Coin

This is a guest post by my friend and colleague Adam Lelkes. Adam’s interests are in algebra and theoretical computer science. This gem came up because Adam gave a talk on probabilistic computation in which he discussed this technique.

Problem: simulate a biased coin using a fair coin.

Solution: (in Python)

def biasedCoin(binaryDigitStream, fairCoin):
    # Flip fair coins until one disagrees with the corresponding binary
    # digit of p, then return that digit of p.
    for d in binaryDigitStream:
        if fairCoin() != d:
            return d

Discussion: This function takes two arguments, an iterator representing the binary expansion of the intended probability of getting 1 (let us denote it as p) and another function that returns 1 or 0 with equal probability. At first glance this might seem like an overcomplicated way of solving this problem: why can’t the probability be a floating point number?

The point is that p can have infinite precision! Assuming that fairCoin() gives us a perfectly random stream of 1’s and 0’s (independently and with probability 1/2) and we can read each bit of the binary expansion of p, this function returns 1 with probability exactly p even if p is irrational or a fraction with infinite decimal expansion. If we used floating point arithmetic there would be a small chance we get unlucky and exhaust the precision available. We would only get an approximation of the true bias at best.

Now let us explain why this algorithm works. We keep tossing our fair coin to get a sequence of random bits, until one of our random bits is different from the corresponding bit in the binary expansion of p. If we stop after i steps, that means the first i-1 bits in the two sequences were the same and the ith bits differed, which happens with probability \frac{1}{2^{i-1}} \cdot \frac{1}{2} = \frac{1}{2^i}. In that case we return the ith bit of p; let us denote this bit by p_i. Then the probability of returning 1 is \sum_{i=1}^\infty \frac{p_i}{2^i}, which is exactly the binary expansion of p.

This algorithm is also efficient. By efficient here we mean that the expected running time is constant. Of course, to show this we need to make some assumption about the computational complexity of calculating the bits of p. If we assume that the bits of p are efficiently computable, in the sense that the time required to compute p_i is bounded by a polynomial i^n for some fixed power n, then this algorithm runs in constant expected time.

Indeed, the expected running time is on the order of \sum_{i=1}^\infty \frac{i^n}{2^i}. Showing that this sum is a constant is an easy calculus exercise: using the ratio test we get that

\displaystyle \limsup_{i \to \infty} \left | \frac{\frac{(i+1)^n}{2^{i+1}}}{\frac{i^n}{2^i}} \right | = \limsup_{i\to\infty} \frac{\left(\frac{i+1}{i}\right)^n}{2} = \frac{1}{2} < 1,

thus the series is convergent.

Now that we proved that our algorithm works, it’s time to try it! Let’s say that we want to simulate a coin which gives “heads” with probability 1/3.
We need to construct our binary digit stream. Since 1/3 is 0.010101… in binary, we could use the following simple generator:

def oneThird():
    # 1/3 = 0.010101... in binary
    while True:
        yield 0
        yield 1

However, we might want to have a more general generator that gives us the binary representation of any number. The following function, which takes a number between 0 and 1 as its argument, does the job:

def binaryDigits(fraction):
    # Yield the binary expansion of a number in [0, 1). Pass a
    # fractions.Fraction to get exact digits forever; a float runs out
    # of precision after at most a few dozen doublings.
    while True:
        fraction *= 2
        yield int(fraction)
        fraction = fraction % 1

We also need a fair coin simulator. For this simulation, let’s just use Python’s built-in pseudo-random number generator:

import random

def fairCoin():
    return random.choice([0, 1])

Let us toss our biased coin 10000 times and take the sum. We expect the sum to be around 3333. Indeed, when I tried

>>> sum(biasedCoin(oneThird(), fairCoin) for i in range(10000))
3330

It might be worth noting that oneThird() is approximately ten times faster than binaryDigits(fractions.Fraction(1,3)), so when a large number of biased coins is needed, you can hardwire the binary representation of p into the program.