The Boosting Margin, or Why Boosting Doesn’t Overfit

There’s a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph:


Let me explain. The vertical axis represents the error of a hypothesis. The horizontal axis represents the complexity of the hypothesis. The blue curve represents the error of a machine learning algorithm’s output on its training data, and the red curve represents the generalization error of that hypothesis on the real world. The overfitting phenomenon is marked in the middle of the graph: before that point the training error and generalization error both go down, but after it the training error continues to fall while the generalization error rises.

The explanation is a sort of numerical version of Occam’s Razor that says more complex hypotheses can model a fixed data set better and better, but at some point a simpler hypothesis better models the underlying phenomenon that generates the data. To optimize a particular learning algorithm, one wants to set the parameters of the model to hit the minimum of the red curve.

This is where things get juicy. Boosting, which we covered in gruesome detail previously, has a natural measure of complexity: the number of rounds you run the algorithm for. Each round adds one additional “weak learner” to the weighted vote, so running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn’t overfit on many datasets. In fact, and this is a shocking fact, researchers observed that boosting would hit zero training error, yet as they kept running it for more rounds, the generalization error kept going down! It seemed like the complexity could grow arbitrarily without penalty.

Schapire, Freund, Bartlett, and Lee proposed a theoretical explanation for this based on the notion of a margin, and the goal of this post is to go through the details of their theorem and proof. Remember that the standard AdaBoost algorithm produces a set of weak hypotheses h_i(x) and a corresponding weight \alpha_i \in [-1,1] for each round i=1, \dots, T. The classifier at the end is a weighted majority vote of all the weak learners (roughly: weak learners with high error on “hard” data points get less weight).

Definition: The signed confidence of a labeled example (x,y) is the weighted sum:

\displaystyle \textup{conf}(x) = \sum_{i=1}^T \alpha_i h_i(x)

The margin of (x,y) is the quantity \textup{margin}(x,y) = y \textup{conf}(x). The notation implicitly depends on the outputs of the AdaBoost algorithm via “conf.”
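
To make the definitions concrete, here’s a toy computation of the confidence and margin in Python; the weak learners and weights are invented for illustration, not produced by an actual AdaBoost run:

# A toy illustration of conf(x) and margin(x, y). The weak learners and
# weights below are made up; a real AdaBoost run would produce its own.
weak_learners = [
    lambda x: 1 if x > 0 else -1,
    lambda x: 1 if x > 5 else -1,
    lambda x: -1 if x > 2 else 1,
]
alphas = [0.6, 0.3, 0.1]  # one weight per round

def conf(x):
    return sum(a * h(x) for a, h in zip(alphas, weak_learners))

def margin(x, y):
    return y * conf(x)

print(margin(3, 1))   # positive: the vote classifies (3, 1) correctly
print(margin(3, -1))  # negative: the vote classifies (3, -1) incorrectly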

We use the product of the label and the confidence because y \cdot \textup{conf}(x) \leq 0 if and only if the classifier is incorrect. The theorem we’ll prove in this post is

Theorem: With high probability over a random choice of training data, for any 0 < \theta < 1 the generalization error of boosting is bounded from above by

\displaystyle \Pr_{\textup{train}}[\textup{margin}(x,y) \leq \theta] + O \left ( \frac{1}{\theta} (\textup{typical error terms}) \right )

In words, the generalization error of the boosting hypothesis is bounded by the distribution of margins observed on the training data. To state and prove the theorem more generally we have to return to the details of PAC-learning. Here and in the rest of this post, \Pr_D denotes \Pr_{x \sim D}, the probability over a random example drawn from the distribution D, and \Pr_S denotes the probability over a random (training) set of examples drawn from D.

Theorem: Let S be a set of m random examples chosen from the distribution D generating the data. Assume the weak learner corresponds to a finite hypothesis space H of size |H|, and let \delta > 0. Then with probability at least 1 - \delta (over the choice of S), every weighted-majority vote function f satisfies the following generalization bound for every \theta > 0.

\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right )

In other words, this phenomenon is a fact about voting schemes, not boosting in particular. From now on, a “majority vote” function f(x) will mean to take the sign of a sum of the form \sum_{i=1}^N a_i h_i(x), where a_i \geq 0 and \sum_i a_i = 1. This is the “convex hull” of the set of weak learners H. If H is infinite (in our proof it will be finite, but we’ll state a generalization afterward), then only finitely many of the a_i in the sum may be nonzero.

To prove the theorem, we’ll start by defining a class of functions corresponding to “unweighted majority votes with duplicates:”

Definition: Let C_N be the set of functions f(x) of the form \frac{1}{N} \sum_{i=1}^N h_i(x) where h_i \in H and the h_i may contain duplicates (some of the h_i may be equal to some other of the h_j).

Now every majority vote function f can be written as a weighted sum of h_i with weights a_i (I’m using a instead of \alpha to distinguish arbitrary weights from those weights arising from Boosting). So any such f(x) defines a natural distribution over H where you draw function h_i with probability a_i. I’ll call this distribution A_f. If we draw from this distribution N times and take an unweighted average, we’ll get a function g(x) \in C_N. Call the random process (distribution) generating functions in this way Q_f. In diagram form, the logic goes

f \to weights a_i \to distribution A_f over H \to function in C_N by drawing N times according to A_f.

The main fact about the relationship between f and Q_f is that each is completely determined by the other. Obviously Q_f is determined by f because we defined it that way, but f is also completely determined by Q_f as follows:

\displaystyle f(x) = \mathbb{E}_{g \sim Q_f}[g(x)]

Proving the equality is an exercise for the reader.
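
If you’d like to see the equality numerically before proving it, here’s a quick Monte Carlo sketch with made-up weak learners and weights (not the output of any real boosting run):

import random

# A Monte Carlo check of f(x) = E_{g ~ Q_f}[g(x)] on toy weak learners with
# made-up normalized weights a_i.
H = [lambda x: 1 if x > 0 else -1,
     lambda x: 1 if x > 5 else -1,
     lambda x: -1 if x > 2 else 1]
a = [0.6, 0.3, 0.1]

def f(x):
    return sum(ai * h(x) for ai, h in zip(a, H))

def sample_g(N):
    # Draw N weak learners according to the distribution A_f, then average.
    chosen = random.choices(H, weights=a, k=N)
    return lambda x: sum(h(x) for h in chosen) / N

x = 3
estimate = sum(sample_g(5)(x) for _ in range(20000)) / 20000
print(f(x), estimate)  # the two numbers should be close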

Proof of Theorem. First we’ll split the probability \Pr_D[y f(x) \leq 0] into two pieces, and then bound each piece.

First a probability reminder. If we have two events A and B (in what’s below, these will be yg(x) \leq \theta/2 and yf(x) \leq 0), we can split up \Pr[A] into \Pr[A \textup{ and } B] + \Pr[A \textup{ and } \overline{B}] (where \overline{B} is the opposite of B). This is called the law of total probability. Moreover, because \Pr[A \textup{ and } B] = \Pr[A \mid B] \Pr[B] and because these quantities are all at most 1, it’s true that \Pr[A \textup{ and } B] \leq \Pr[A \mid B] (the conditional probability) and that \Pr[A \textup{ and } B] \leq \Pr[B].

Back to the proof. Notice that for any g(x) \in C_N and any \theta > 0, we can write \Pr_D[y f(x) \leq 0] as a sum:

\displaystyle \Pr_D[y f(x) \leq 0] =\\ \Pr_D[yg(x) \leq \theta/2 \textup{ and } y f(x) \leq 0] + \Pr_D[yg(x) > \theta/2 \textup{ and } y f(x) \leq 0]

Now I’ll loosen the first term by removing the second event (that only makes the whole probability bigger) and loosen the second term by relaxing it to a conditional:

\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_D[y g(x) \leq \theta / 2] + \Pr_D[yg(x) > \theta/2 \mid yf(x) \leq 0]

Now because the inequality is true for every g(x) \in C_N, it’s also true if we take an expectation of the RHS over any distribution we choose. We’ll choose the distribution Q_f to get

\displaystyle \Pr_D[yf(x) \leq 0] \leq T_1 + T_2

And T_1 (term 1) is

\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]

And T_2 is

\displaystyle \Pr_{x \sim D, g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0] = \mathbb{E}_D[\Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]]

We can rewrite the probabilities using expectations because (1) the variables being drawn in the distributions are independent, and (2) the probability of an event is the expectation of the indicator function of the event.

Now we’ll bound the terms T_1, T_2 separately. We’ll start with T_2.

Fix (x,y) and look at the quantity inside the expectation of T_2.

\displaystyle \Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]

This should intuitively be very small for the following reason. We’re sampling g according to a distribution whose expectation is f, and we know that yf(x) \leq 0. Of course yg(x) is unlikely to be large.

Mathematically we can prove this by transforming the thing inside the probability to a form suitable for the Chernoff bound. Since \mathbb{E}[yg(x)] = yf(x) \leq 0, saying yg(x) > \theta / 2 implies that |yg(x) - \mathbb{E}[yg(x)]| > \theta /2, i.e. that some random variable which is a sum of independent random variables (the h_i) deviates from its expectation by at least \theta/2. Since the y's are all \pm 1 and constant inside the expectation, they can be removed from the absolute value to get

\displaystyle \leq \Pr_{g \sim Q_f}[g(x) - \mathbb{E}[g(x)] > \theta/2]

The Chernoff bound allows us to bound this by an exponential in the number of random variables in the sum, i.e. N. It turns out the bound is e^{-N \theta^2 / 8}.
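
As an aside, you can watch this bound in action with a small simulation, where independent \pm 1 draws stand in for the h_i(x) values and the parameters are arbitrary:

import random, math

# Compare the empirical chance that an average of N independent +/-1 draws
# exceeds its mean by more than theta/2 against the bound exp(-N * theta^2 / 8).
N, theta, trials = 200, 0.5, 100000
p = 0.3  # chance a single draw is +1 (arbitrary)
mean = 2 * p - 1

count = 0
for _ in range(trials):
    g = sum(1 if random.random() < p else -1 for _ in range(N)) / N
    if g - mean > theta / 2:
        count += 1

print("empirical:", count / trials)
print("bound:    ", math.exp(-N * theta**2 / 8))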

Now recall T_1

\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]

For T_1, we don’t want to bound it absolutely like we did for T_2, because there is nothing stopping the classifier f from being a bad classifier and having lots of error. Rather, we want to bound it in terms of the probability that yf(x) \leq \theta. We’ll do this in two steps. In step 1, we’ll go from \Pr_D of the g's to \Pr_S of the g's.

Step 1: For any fixed g, \theta, if we take a sample S of size m, then consider the event in which the sample probability deviates from the true distribution by some value \varepsilon_N, i.e. the event

\displaystyle \Pr_D[yg(x) \leq \theta /2] > \Pr_{S, x \sim S}[yg(x) \leq \theta/2] + \varepsilon_N

The claim is this happens with probability at most e^{-2m\varepsilon_N^2}. This is again the Chernoff bound in disguise, because the expected value of \Pr_S is \Pr_D, and the probability over S is an average of random variables (it’s a slightly different form of the Chernoff bound; see this post for more). From now on we’ll drop the x \sim S when writing \Pr_S.

The bound above holds true for any fixed g,\theta, but we want a bound over all g and \theta. To do that we use the union bound. Note that there are only (N+1) possible choices for a nonnegative \theta because g(x) is an average of N values, each of which is \pm 1. And there are only |C_N| \leq |H|^N possibilities for g(x). So the union bound says the above event will occur with probability at most (N+1)|H|^N e^{-2m\varepsilon_N^2}.

If we want the event to occur with probability at most \delta_N, we can judiciously pick

\displaystyle \varepsilon_N = \sqrt{(1/2m) \log ((N+1)|H|^N / \delta_N)}

And since the bound holds in general, we can take expectation with respect to Q_f and nothing changes. This means that for any \delta_N, our chosen \varepsilon_N ensures that the following is true with probability at least 1-\delta_N:

\displaystyle \Pr_{D, g \sim Q_f}[yg(x) \leq \theta/2] \leq \Pr_{S, g \sim Q_f}[yg(x) \leq \theta/2] + \varepsilon_N

Now for step 2, we bound the probability that yg(x) \leq \theta/2 on a sample by the probability that yf(x) \leq \theta on a sample.

Step 2: The first claim is that

\displaystyle \Pr_{S, g \sim Q_f}[yg(x) \leq \theta / 2] \leq \Pr_{S} [yf(x) \leq \theta] + \mathbb{E}_{S}[\Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) \geq \theta]]

What we did was break up the LHS into two “and”s, when yf(x) > \theta and yf(x) \leq \theta (this was still an equality). Then we loosened the first term to \Pr_{S}[yf(x) \leq \theta] since that is only more likely than both yg(x) \leq \theta/2 and yf(x) \leq \theta. Then we loosened the second term again using the fact that a probability of an “and” is bounded by the conditional probability.

Now we have the probability of yg(x) \leq \theta / 2 bounded by the probability that yf(x) \leq \theta plus some stuff. We just need to bound the “plus some stuff” absolutely and then we’ll be done. The argument is the same as our previous use of the Chernoff bound: we assume yf(x) \geq \theta, and yet yg(x) \leq \theta / 2. So the deviation of yg(x) from its expectation is large, and the probability that happens is exponentially small in the amount of deviation. The bound you get is

\displaystyle \Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) \geq \theta] \leq e^{-N\theta^2 / 8}.

And again we use the union bound to ensure the failure of this bound for any N will be very small. Specifically, if we want the total failure probability to be at most \delta, then we need to pick some \delta_j's so that \delta = \sum_{j=1}^{\infty} \delta_j. Choosing \delta_N = \frac{\delta}{N(N+1)} works.

Putting everything together, we get that with probability at least 1-\delta, for every \theta and every N, the following bound on the failure probability of f(x) holds:

\displaystyle \Pr_{x \sim D}[yf(x) \leq 0] \leq \Pr_{S, x \sim S}[yf(x) \leq \theta] + 2e^{-N \theta^2 / 8} + \sqrt{\frac{1}{2m} \log \left ( \frac{N(N+1)^2 |H|^N}{\delta} \right )}.

This claim is true for every N, so we can pick N that minimizes it. Doing a little bit of behind-the-scenes calculus that is left as an exercise to the reader, a tight choice of N is (4/ \theta)^2 \log(m/ \log |H|). And this gives the statement of the theorem.


We proved this for finite hypothesis classes, and if you know what VC-dimension is, you’ll know that it’s a central tool for reasoning about the complexity of infinite hypothesis classes. An analogous theorem can be proved in terms of the VC dimension. In that case, calling d the VC-dimension of the weak learner’s output hypothesis class, the bound is

\displaystyle \Pr_D[yf(x) \leq 0] \leq \Pr_S[yf(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log(1/\delta)} \right )

How can we interpret these bounds with so many parameters floating around? That’s where asymptotic notation comes in handy. If we fix \theta \leq 1/2 and \delta = 0.01, then the big-O part of the theorem simplifies to \sqrt{(\log |H| \cdot \log m) / m}, which is easier to think about since (\log m)/m goes to zero very fast.
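
For a feel of how quickly that error term shrinks, here’s a quick computation; the choice |H| = 1000 is arbitrary:

import math

# How the simplified error term sqrt((log|H| * log m) / m) shrinks as the
# sample size m grows, for an arbitrary hypothesis class size |H| = 1000.
H_size = 1000
for m in [10**3, 10**4, 10**5, 10**6]:
    print(m, math.sqrt(math.log(H_size) * math.log(m) / m))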

Now the theorem we just proved was about any weighted majority function. The question still remains: why is AdaBoost good? That follows from another theorem, which we’ll state and leave as an exercise (it essentially follows by unwrapping the definition of the AdaBoost algorithm from last time).

Theorem: Suppose that during AdaBoost the weak learners produce hypotheses with training errors \varepsilon_1, \dots , \varepsilon_T. Then for any \theta,

\displaystyle \Pr_{(x,y) \sim S} [yf(x) \leq \theta] \leq 2^T \prod_{t=1}^T \sqrt{\varepsilon_t^{(1-\theta)} (1-\varepsilon_t)^{(1+\theta)}}

Let’s interpret this for some concrete numbers. Say that \theta = 0 and \varepsilon_t is any fixed value less than 1/2. In this case the term inside the product becomes \sqrt{\varepsilon (1-\varepsilon)} < 1/2 and the whole bound tends exponentially quickly to zero in the number of rounds T. On the other hand, if we raise \theta to about 1/3, then in order to maintain the LHS tending to zero we would need \varepsilon < \frac{1}{4} ( 3 - \sqrt{5} ), which is about 20% error.
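
If you want to check those numbers yourself, the arithmetic is quick (a small sketch evaluating the per-round factor of the bound):

import math

# The per-round factor 2 * sqrt(eps^(1-theta) * (1-eps)^(1+theta)) from the
# bound above; the bound tends to zero with T exactly when this factor is < 1.
def factor(eps, theta):
    return 2 * math.sqrt(eps**(1 - theta) * (1 - eps)**(1 + theta))

print(factor(0.4, 0))              # < 1: any error rate below 1/2 works at theta = 0
print(factor(0.25, 1/3))           # > 1: 25% error is not enough at theta = 1/3
eps_star = (3 - math.sqrt(5)) / 4  # about 0.19, the threshold from the text
print(factor(eps_star, 1/3))       # equals 1 (up to rounding), right at the threshold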

If you’re interested in learning more about Boosting, there is an excellent book by Freund and Schapire (the inventors of boosting) called Boosting: Foundations and Algorithms. There they include a tighter analysis based on the idea of Rademacher complexity. The bound I presented in this post is nice because the proof doesn’t require any machinery past basic probability, but if you want to reach the cutting edge of knowledge about boosting you need to invest in the technical stuff.

Until next time!

What does it mean for an algorithm to be fair?

In 2014 the White House commissioned a 90-day study that culminated in a report (pdf) on the state of “big data” and related technologies. The authors give many recommendations, including this central warning.

Warning: algorithms can facilitate illegal discrimination!

Here’s a not-so-imaginary example of the problem. A bank wants people to take loans with high interest rates, and it also serves ads for these loans. A modern idea is to use an algorithm to decide, based on the sliver of known information about a user visiting a website, which advertisement to present that gives the largest chance of the user clicking on it. There’s one problem: these algorithms are trained on historical data, and poor uneducated people (often racial minorities) have a historical trend of being more likely to succumb to predatory loan advertisements than the general population. So an algorithm that is “just” trying to maximize clickthrough may also be targeting black people, de facto denying them opportunities for fair loans. Such behavior is illegal.


On the other hand, even if algorithms are not making illegal decisions, by training algorithms on data produced by humans, we naturally reinforce prejudices of the majority. This can have negative effects, like Google’s autocomplete finishing “Are transgenders” with “going to hell?” Even if this is the most common question being asked on Google, and even if the majority think it’s morally acceptable to display this to users, this shows that algorithms do in fact encode our prejudices. People are slowly coming to realize this, to the point where it was recently covered in the New York Times.

There are many facets to the algorithmic fairness problem, one that has not even been widely acknowledged as a problem, despite the Times article. The message has been echoed by machine learning researchers but mostly ignored by practitioners. In particular, “experts” continually make ignorant claims such as, “equations can’t be racist,” and the following quote from the above linked article about how the Chicago Police Department has been using algorithms to do predictive policing.

Wernick denies that [the predictive policing] algorithm uses “any racial, neighborhood, or other such information” to assist in compiling the heat list [of potential repeat offenders].

Why is this ignorant? Because of the well-known fact that removing explicit racial features from data does not eliminate an algorithm’s ability to learn race. If racial features disproportionately correlate with crime (as they do in the US), then an algorithm which learns race is actually doing exactly what it is designed to do! One needs to be very thorough to say that an algorithm does not “use race” in its computations. Algorithms are not designed in a vacuum, but rather in conjunction with the designer’s analysis of their data. There are two points of failure here: the designer can unwittingly encode biases into the algorithm based on a biased exploration of the data, and the data itself can encode biases due to human decisions made to create it. Because of this, the burden of proof is (or should be!) on the practitioner to guarantee they are not violating discrimination law. Wernick should instead prove mathematically that the policing algorithm does not discriminate.

While that viewpoint is idealistic, it’s a bit naive because there is no accepted definition of what it means for an algorithm to be fair. In fact, from a precise mathematical standpoint, there isn’t even a precise legal definition of what it means for any practice to be fair. In the US the existing legal theory is called disparate impact, which states that a practice can be considered illegal discrimination if it has a “disproportionately adverse” effect on members of a protected group. Here “disproportionate” is precisely defined by the 80% rule, but this is somehow not enforced as stated. As with many legal issues, laws are broad assertions that are challenged on a case-by-case basis. In the case of fairness, the legal decision usually hinges on whether an individual was treated unfairly, because the individual is the one who files the lawsuit. Our understanding of the law is cobbled together, essentially through anecdotes slanted by political agendas. A mathematician can’t make progress with that. We want the mathematical essence of fairness, not something that can be interpreted depending on the court majority.

The problem is exacerbated for data mining because the practitioners often demonstrate a poor understanding of statistics, the management doesn’t understand algorithms, and almost everyone is lulled into a false sense of security via abstraction (remember, “equations can’t be racist”). Experts in discrimination law aren’t trained to audit algorithms, and engineers aren’t trained in social science or law. The speed with which research becomes practice far outpaces the speed at which anyone can keep up. This is especially true at places like Google and Facebook, where teams of in-house mathematicians and algorithm designers bypass the delay between academia and industry.

And perhaps the worst part is that even the world’s best mathematicians and computer scientists don’t know how to interpret the output of many popular learning algorithms. This isn’t just a problem that stupid people aren’t listening to smart people, it’s that everyone is “stupid.” A more politically correct way to say it: transparency in machine learning is a wide open problem. Take, for example, deep learning. A far-removed adaptation of neuroscience to data mining, deep learning has become the flagship technique spearheading modern advances in image tagging, speech recognition, and other classification problems.

A typical example of how a deep neural network learns to tag images.

The picture above shows how low level “features” (which essentially boil down to simple numerical combinations of pixel values) are combined in a “neural network” into more complicated image-like structures. The claim that these features represent natural concepts like “cat” and “horse” has fueled public attention on deep learning for years. But looking at the above, is there any reasonable way to say whether these are encoding “discriminatory information”? Not only is this an open question, but we don’t even know what kinds of problems deep learning can solve! How can we understand to what extent neural networks can encode discrimination if we don’t have a deep understanding of why a neural network is good at what it does?

What makes this worse is that there are only about ten people in the world who understand the practical aspects of deep learning well enough to achieve record results. This means they spent a ton of time tinkering with the model to make it domain-specific, and nobody really knows whether the subtle differences between the top models correspond to genuine advances or slight overfitting or luck. Who is to say whether the fiasco with Google tagging images of black people as apes was caused by the data or the deep learning algorithm or by some obscure tweak made by the designer? I doubt even the designer could tell you with any certainty.

Opacity and a lack of interpretability is the rule more than the exception in machine learning. Celebrated techniques like Support Vector Machines, Boosting, and recent popular “tensor methods” are all highly opaque. This means that even if we knew what fairness meant, it is still a challenge (though one we’d be suited for) to modify existing algorithms to become fair. But with recent success stories in theoretical computer science connecting security, trust, and privacy, computer scientists have started to take up the call of nailing down what fairness means, and how to measure and enforce fairness in algorithms. There is now a yearly workshop called Fairness, Accountability, and Transparency in Machine Learning (FAT-ML, an awesome acronym), and some famous theory researchers are starting to get involved, as are social scientists and legal experts. Full disclosure, two days ago I gave a talk as part of this workshop on modifications to AdaBoost that seem to make it more fair. More on that in a future post.

From our perspective, we the computer scientists and mathematicians, the central obstacle is still that we don’t have a good definition of fairness.

In the next post I want to get a bit more technical. I’ll describe the parts of the fairness literature I like (which will be biased), I’ll hypothesize about the tension between statistical fairness and individual fairness, and I’ll entertain ideas on how someone designing a controversial algorithm (such as a predictive policing algorithm) could maintain transparency and accountability over its discriminatory impact. In subsequent posts I want to explain in more detail why it seems so difficult to come up with a useful definition of fairness, and to describe some of the ideas I and my coauthors have worked on.

Until then!

Methods of Proof — Diagonalization

A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four,” direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and many more.

So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.


Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.

The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping n to 2n. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.

Theorem: There is no bijection from the natural numbers \mathbb{N} to the real numbers \mathbb{R}.

Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection f: \mathbb{N} \to \mathbb{R}. That is, you give me a positive integer k and I will spit out f(k), with the property that different k give different f(k), and every real number is hit by some natural number k (this is just what it means to be a one-to-one mapping).

First let me just do some setup. I claim that all we need to do is show that there is no bijection between \mathbb{N} and the real numbers between 0 and 1. In particular, I claim there is a bijection from (0,1) to all real numbers, so if there is a bijection from \mathbb{N} \to (0,1) then we could combine the two bijections. To show there is a bijection from (0,1) \to \mathbb{R}, one clean choice is the map x \mapsto 1/x - 1/(1-x): it is continuous and strictly decreasing on (0,1), it tends to +\infty as x approaches 0 and to -\infty as x approaches 1, so it hits every real number exactly once.

Okay, setup is done. We just have to show there is no bijection between (0,1) and the natural numbers.

The reason I did all that setup is so that I can use the fact that every real number in (0,1) has an infinite binary expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of f(1) as a row in a table (an infinite row), and below it I’ll write down the expansion of f(2), below that f(3), and so on, and the decimal points will line up. The table looks like this.

The d's above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of f(1) by b_{1,1}, b_{1,2}, b_{1,3}, \dots, the digits of f(2) by b_{2,1}, b_{2,2}, b_{2,3}, \dots, and so on. This makes the table look like this


It’s a bit harder to read, but trust me the notation is helpful.

Now by the assumption that f is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.

Here’s how I’ll come up with such a number N (this is the diagonalization part). It starts with 0., and its first digit after the decimal is 1-b_{1,1}. That is, we flip the bit b_{1,1} to get the first digit of N. The second digit is 1-b_{2,2}, the third is 1-b_{3,3}, and so on. In general, digit i is 1-b_{i,i}.

Now we show that N isn’t in the table. If it were, then it would have to be N = f(m) for some m, i.e. be the m-th row in the table. Moreover, by the way we built the table, the m-th digit of N would be b_{m,m}. But we defined N so that its m-th digit was actually 1-b_{m,m}. This is very embarrassing for N (it’s a contradiction!). So N isn’t in the table.
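
If you like seeing the mechanics in code, here’s the diagonal trick played out on a small, made-up truncation of such a table (of course, a finite example proves nothing; it just shows the construction):

# The diagonal trick on a finite chunk of such a table: row i holds the first
# few binary digits of f(i), and the constructed number differs from row i in
# digit i, so it can't equal any row. (The table entries are made up.)
table = [
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
]

flipped_diagonal = [1 - table[i][i] for i in range(len(table))]
print(flipped_diagonal)

for i, row in enumerate(table):
    assert flipped_diagonal[i] != row[i]  # differs from row i in position i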


It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?

The Halting Problem

The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.

The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program P and an input x to that program, will P ever stop running when given x as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!

So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.

Theorem: The halting problem cannot be solved by Turing machines.

Proof. Suppose to the contrary that T is a program that solves the halting problem. We’ll use T as a black box to come up with a new program I’ll call meta-T, defined in pseudo-python as follows.

def metaT(P):
    # Run the assumed halting-problem solver T on the pair (P, P).
    if T(P, P):          # T says that P halts when given its own source as input
        while True:      # ...so do the opposite and loop forever
            pass
    else:
        return "success!"    # ...otherwise halt right away

In words, meta-T accepts as input the source code of a program P, and then uses T to tell if P halts (when given its own source code as input). Based on the result, it behaves the opposite of P; if P halts then meta-T loops infinitely and vice versa. It’s a little meta, right?

Now let’s do something crazy: let’s run meta-T on itself! That is, run metaT(metaT).


So meta. The question is what is the output of this call? The meta-T program uses T to determine whether meta-T halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-T, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting T‘s answer! Likewise, if T says that metaT(metaT) should loop infinitely, that will cause meta-T to halt, a contradiction. So T cannot be correct, and the halting problem can’t be solved.


This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.

But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from \mathbb{N} to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.

The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.

For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:
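
For instance, here’s a small sketch that generates the first few bits of that representation (the choice to order inputs by length and then by value is just a convention for illustration):

from itertools import product

# A sketch of the "is this binary string an even number?" problem as a bit
# string: list inputs in increasing lexicographic order (by length, then by
# value) and record the correct yes/no answer for each one.
def inputs(max_len):
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def is_even(s):
    return s.endswith("0")  # a binary number is even iff its last bit is 0

print([1 if is_even(s) else 0 for s in inputs(3)])
# [1, 0, 1, 0, 1, 0, ...] -- the first few bits of the infinite representation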


Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry (x,P) a 1 if P(x) halts and a 0 otherwise.


Here b_{i,j} is 1 if P_j(x_i) halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.

Now assume for the sake of contradiction that some program solves the halting problem, i.e. that every entry of the table is computable. Then we can construct meta-T just as before, and its halting behavior is the flipped diagonal of the table: meta-T halts on input x_i exactly when P_i does not halt on x_i. But meta-T is itself a program, so its behavior is some column of the table, and its source code is some input string, so it also indexes a row. The entry of the table for (\textup{meta-}T, \textup{meta-}T), where that row and column meet, contradicts the definition of meta-T, and we’re done!

So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire.

Until next time!

Hamming’s Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

  1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
  2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary \{ 0,1 \} and still grok the important stuff.


Richard Hamming, inventor of Hamming codes. [image source]

What is a code?

The formal definition of a code is simple: a code C is just a subset of \{ 0,1 \}^n for some n. Elements of C are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length k, so that our messages are in \{ 0,1 \}^k. Then we’re really viewing a code C as the image of some encoding function \textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n. We can define C by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that \textup{Enc} is an injective function, so that no two messages get sent to the same codeword. Then |C| = 2^k, and we can call k = \log |C| the message length of C even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with \{ 0,1 \}, the alphabet of your encoded messages could be an arbitrary set \Sigma. So then a code C would be a subset of tuples in \Sigma^n, and we would call q = |\Sigma|.

So we have these parameters n, k, q, and we need one more. This is the minimum distance of a code, which we’ll denote by d. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message y (possibly with noisy bits) we look for the (unencoded) message x whose encoding \textup{Enc}(x) is as close to y as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.
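
In code, the Hamming distance and the minimum distance of a (small) code are one-liners. The toy code below is an arbitrary example, not anything special:

from itertools import combinations

def hamming_distance(x, y):
    # Number of coordinates in which two tuples differ.
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(C):
    # Minimum Hamming distance over all distinct pairs of codewords.
    return min(hamming_distance(x, y) for x, y in combinations(C, 2))

# An arbitrary toy code with four codewords of length 5.
C = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(minimum_distance(C))  # 3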

So coding theorists turn this mess of parameters into notation.

Definition: A code C is called an (n, k, d)_q-code if

  • C \subset \Sigma^n for some alphabet \Sigma,
  • k = \log |C|,
  • C has minimum distance d, and
  • the alphabet \Sigma has size q.

The basic goals of coding theory are:

  1. For which values of these four parameters do codes exist?
  2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing k for d=3, and we’ll state a characterization theorem for optimizing k for a general d. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be \{ 0,1 \}^n, that is tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field \mathbb{F}_q for a prime power q, i.e. have your vector space be \mathbb{F}_q^n. We’ll go back and forth between describing a binary code (q=2) over \{ 0,1 \} and a code in \mathbb{F}_q^n. So to say a code is linear means:

  • The zero vector is a codeword.
  • The sum of any two codewords is a codeword.
  • Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an (n,k,d)_q-code C is a k \times n matrix G whose rows form a basis for C.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages x \in \{0,1 \}^k by left multiplication xG. Intuitively, we can think of the bits of x as describing the coefficients of the chosen linear combination of the rows of G, which uniquely describes an element of the subspace. Note that because a k-dimensional subspace of \{ 0,1 \}^n has 2^k elements, we’re not abusing notation by calling k = \log |C| both the message length and the dimension.
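
Here’s what encoding by a generator matrix looks like in code, a small sketch over bits mod 2 with a made-up 2 \times 4 generator matrix:

# Encoding a message x by left multiplication with a generator matrix G, mod 2.
# The 2 x 4 matrix G below is made up for illustration.
G = [
    [1, 0, 1, 1],
    [0, 1, 0, 1],
]

def encode(x, G):
    n = len(G[0])
    # The codeword xG: the mod-2 combination of the rows of G selected by x.
    return [sum(x[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

print(encode([1, 1], G))  # [1, 1, 1, 0]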

For the second description of C, we’ll remind the reader that every linear subspace C has a unique orthogonal complement C^\perp, which is the subspace of vectors that are orthogonal to vectors in C.

Definition: Let H^T be a generator matrix for C^\perp. Then H is called a parity check matrix.

Note H has the basis for C^\perp as columns. This means it has dimensions n \times (n-k). Moreover, it has the property that x \in C if and only if the left multiplication xH = 0. Having zero dot product with all columns of H characterizes membership in C.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute yH on your received message y, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that y coincided with a different codeword than the one it started as? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector x, denoted wt(x), is the number of nonzero entries in x.

Proposition: The minimum distance of a linear code C is the minimum Hamming weight over all nonzero vectors x \in C.

Proof. Consider a nonzero x \in C. On one hand, the zero vector is a codeword and wt(x) is by definition the Hamming distance between x and zero, so the minimum weight is an upper bound on the minimum distance. It’s also a lower bound: if x,y are two distinct codewords, then x-y is a nonzero codeword and wt(x-y) is the Hamming distance between x and y.


So now we can define our first code, the Hamming code. It will be an (n, k, 3)_2-code. The construction is quite simple. We have fixed d=3, q=2, and we will also fix l = n-k. One can think of this as fixing n and maximizing k, but it will only work for n of a special form.

We’ll construct the Hamming code by describing a parity-check matrix H. In fact, we’re going to see what conditions the minimum distance d=3 imposes on H, and find out those conditions are actually sufficient to get d=3. We’ll start with d \geq 2. If we want to ensure d \geq 2, then we need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if e_i is a vector with all zeros except a one in position i, then e_i H = h_i is the i-th row of H. We need e_i H \neq 0, so this imposes the condition that no row of H can be zero. It’s easy to see that this is sufficient for d \geq 2.

Likewise for d \geq 3, given a vector y = e_i + e_j for some positions i \neq j, then yH = h_i + h_j must not be zero. But because our sums are mod 2, saying that h_i + h_j \neq 0 is the same as saying h_i \neq h_j. Again it’s an if and only if. So we have the two conditions.

  • No row of H may be zero.
  • All rows of H must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large can n be if the vectors have length n-k = l? That’s just the number of distinct nonzero binary strings of length l, which is 2^l - 1. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every l > 0, there is a (2^l - 1, 2^l - l - 1, 3)_2-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If x \in C and e is an error vector with wt(e) = 1, say with its single error bit in position i, then the incoming message would be y = x + e. Now compute yH = xH + eH = 0 + eH = h_i and flip bit i of y. That is, whichever row of H you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then h_i = i as a binary number. Very slick.
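
To see the whole pipeline concretely, here is a sketch for l = 3 (the (7,4) Hamming code); it builds H with lexicographically ordered rows, brute-forces the codewords, and corrects a single flipped bit. It’s purely illustrative, not how you’d implement this at scale:

from itertools import product

l = 3
# Parity check matrix H: the rows are the nonzero binary strings of length l
# in lexicographic order, so row i (1-indexed) is just i written in binary.
H = [list(map(int, format(i, "0{}b".format(l)))) for i in range(1, 2**l)]
n = len(H)  # 7

def syndrome(y):
    # y * H mod 2; the zero vector exactly when y is a codeword.
    return [sum(y[i] * H[i][j] for i in range(n)) % 2 for j in range(l)]

# Brute-force the code: all vectors with zero syndrome. There are 2^(n-l) = 16.
code = [list(y) for y in product([0, 1], repeat=n) if not any(syndrome(y))]
print(len(code))  # 16

# Flip one bit of a codeword and correct it using the syndrome.
x = code[5]
y = x[:]
y[4] ^= 1                                     # corrupt position 5 (1-indexed)
pos = int("".join(map(str, syndrome(y))), 2)  # the syndrome, read as a number
y[pos - 1] ^= 1                               # flip that position back
print(pos, y == x)                            # 5 True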

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number n-k “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix G of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on G and get a new generator matrix that looks like [I \mid A] where I is the identity matrix of the appropriate size and A is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by A. It’s a different encoding function on \{ 0,1\}^k, but it has the same image in \{ 0,1 \}^n, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds \Theta(\log n) check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of n requires \log n bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing d given a fixed choice of n, k, and q. To get this let V_n(r) denote the volume of a ball of radius r in the space \mathbb{F}_2^n. I.e., if you fix any string (doesn’t matter which) x, V_n(r) is the size of the set \{ y : d(x,y) \leq r \}, where d(x,y) is the Hamming distance.

There is a theorem called the Hamming bound, which describes a limit to how much you can pack disjoint balls of radius r inside \mathbb{F}_2^n.

Theorem: If an (n,k,d)_2-code exists, then

\displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n

Proof. The proof is quite simple. To say a code C has distance d means that for every codeword x \in C there is no other codeword y within Hamming distance d-1 of x. In other words, the balls centered around both x,y of radius r = \lfloor (d-1)/2 \rfloor are disjoint. The subtraction of one before halving handles odd d, e.g. when d=3 you need balls of radius 1 to guarantee no overlap. Now |C| = 2^k, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most 2^n strings in \mathbb{F}_2^n, establishing the desired inequality.


Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
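
If you want to check that the Hamming codes meet the bound without doing the algebra, the arithmetic is quick (a small sketch):

from math import comb

def ball_volume(n, r):
    # Number of strings within Hamming distance r of a fixed string in {0,1}^n.
    return sum(comb(n, i) for i in range(r + 1))

# Hamming codes have n = 2^l - 1 and k = n - l with d = 3, so radius-1 balls.
for l in range(2, 8):
    n = 2**l - 1
    print(n, 2**(n - l) * ball_volume(n, 1) == 2**n)  # True every time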

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, some of which are nonlinear. But some of them are “trivial.” For example, when d=1 you can just use the identity encoding to get the code C = \mathbb{F}_2^n. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called “repetition codes,” and all three of these examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietavainen ‘73]: Let C be a nontrivial perfect (n,k,d)_q code. Then the parameters must either be that of a Hamming code, or one of the two:

  • A (23, 12, 7)_2-code
  • A (11, 6, 5)_3-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three linear codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We’ll define the Reed-Solomon codes and show they optimize this bound as well. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!