Weak Learning, Boosting, and the AdaBoost algorithm

When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are “the same” and which are “different,” along with a precise description of how we’re comparing models. We’ve seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:

An algorithm can “solve” a classification task using labeled examples drawn from some distribution if it can achieve accuracy that is arbitrarily close to perfect on the distribution, and it can meet this goal with arbitrarily high probability, where its runtime and the number of examples needed scale efficiently with all the parameters (accuracy, confidence, size of an example). Moreover, the algorithm needs to succeed no matter what distribution generates the examples.

You can think of this as a game between the algorithm designer and an adversary. First, the learning problem is fixed and everyone involved knows what the task is. Then the algorithm designer has to pick an algorithm. Then the adversary, knowing the chosen algorithm, chooses a nasty distribution D over examples that are fed to the learning algorithm. The algorithm designer “wins” if the algorithm produces a hypothesis with low error on D when given samples from D. And our goal is to prove that the algorithm designer can pick a single algorithm that is extremely likely to win no matter what D the adversary picks.

We’ll momentarily restate this with a more precise definition, because in this post we will compare it to a slightly different model, which is called the weak PAC-learning model. It’s essentially the same as PAC, except it only requires the algorithm to have accuracy that is slightly better than random guessing. That is, the algorithm will output a classification function which will correctly label a randomly drawn example with probability at least \frac{1}{2} + \eta for some small, but fixed, \eta > 0. The quantity \eta (the Greek “eta”) is called the edge, as in “the edge over random guessing.” We call an algorithm that produces such a hypothesis a weak learner, and in contrast we’ll call a successful algorithm in the usual PAC model a strong learner.

The amazing fact is that strong learning and weak learning are equivalent! Of course a weak learner is not the same thing as a strong learner. What we mean by “equivalent” is that:

A problem can be weak-learned if and only if it can be strong-learned.

So they are computationally the same. One direction of this equivalence is trivial: if you have a strong learner for a classification task then it’s automatically a weak learner for the same task. The reverse is much harder, and this is the crux: there is an algorithm for transforming a weak learner into a strong learner! Informally, we “boost” the weak learning algorithm by feeding it examples from carefully constructed distributions, and then take a majority vote. This “reduction” from strong to weak learning is where all the magic happens.

In this post we’ll get into the depths of this boosting technique. We’ll review the model of PAC-learning, define what it means to be a weak learner, “organically” come up with the AdaBoost algorithm from some intuitive principles, prove that AdaBoost reduces error on the training data, and then run it on data. It turns out that despite the origin of boosting being a purely theoretical question, boosting algorithms have had a wide impact on practical machine learning as well.

As usual, all of the code and data used in this post is available on this blog’s Github page.

History and multiplicative weights

Before we get into the details, here’s a bit of history and context. PAC learning was introduced by Leslie Valiant in 1984, laying the foundation for a flurry of innovation. In 1988 Michael Kearns posed the question of whether one can “boost” a weak learner to a strong learner. Two years later Rob Schapire published his landmark paper “The Strength of Weak Learnability” closing the theoretical question by providing the first “boosting” algorithm. Schapire and Yoav Freund worked together for the next few years to produce a simpler and more versatile algorithm called AdaBoost, and for this they won the Gödel Prize, one of the highest honors in theoretical computer science. AdaBoost is also the standard boosting algorithm used in practice, though there are enough variants to warrant a book on the subject.

I’m going to define and prove that AdaBoost works in this post, and implement it and test it on some data. But first I want to give some high level discussion of the technique, and afterward the goal is to make that wispy intuition rigorous.

The central technique of AdaBoost has been discovered and rediscovered in computer science, and recently it was recognized abstractly in its own right. It is called the Multiplicative Weights Update Algorithm (MWUA), and it has applications in everything from learning theory to combinatorial optimization and game theory. The idea is to

  1. Maintain a nonnegative weight for the elements of some set,
  2. Draw a random element proportionally to the weights,
  3. Do something with the chosen element, and based on the outcome of the “something…”
  4. Update the weights and repeat.

The “something” is usually a black box algorithm like “solve this simple optimization problem.” The output of the “something” is interpreted as a reward or penalty, and the weights are updated according to the severity of the penalty (the details of how this is done differ depending on the goal). In this light one can interpret MWUA as minimizing regret with respect to the best alternative element one could have chosen in hindsight. In fact, this was precisely the technique we used to attack the adversarial bandit learning problem (the Exp3 algorithm is a multiplicative weight scheme). See this lengthy technical survey of Arora and Kale for a research-level discussion of the algorithm and its applications.
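To make that skeleton concrete, here is a minimal sketch of the generic loop in Python. Everything in it (the function names, the payoff interface, the update rate) is a stand-in of my own; the details differ by application, and in the full-information setting one would update every weight each round rather than only the chosen one.

import random

def multiplicativeWeights(objects, payoff, learningRate=0.5, rounds=100):
   # maintain one nonnegative weight per object
   weights = {x: 1.0 for x in objects}

   for _ in range(rounds):
      # draw an object with probability proportional to its weight
      total = sum(weights.values())
      chosen = random.choices(objects, weights=[weights[x] / total for x in objects])[0]

      # "do something" with the chosen object; payoff returns a number in [-1, 1]
      reward = payoff(chosen)

      # update the chosen object's weight multiplicatively based on the outcome
      weights[chosen] *= (1 + learningRate * reward)

   return weights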

Now let’s remind ourselves of the formal definition of PAC. If you’ve read the previous post on the PAC model, this next section will be redundant.

Distributions, hypotheses, and targets

In PAC-learning you are trying to give labels to data from some set X. There is a distribution D producing data from X, and it’s used for everything: to provide data the algorithm uses to learn, to measure your accuracy, and every other time you might get samples from X. You as the algorithm designer don’t know what D is, and a successful learning algorithm has to work no matter what D is. There’s some unknown function c called the target concept, which assigns a \pm 1 label to each data point in X. The target is the function we’re trying to “learn.” When the algorithm draws an example from D, it’s allowed to query the label c(x) and use all of the labels it’s seen to come up with some hypothesis h that is used for new examples that the algorithm may not have seen before. The problem is “solved” if h has low error on all of D.

To give a concrete example let’s do spam emails. Say that X is the set of all emails, and D is the distribution over emails that get sent to my personal inbox. A PAC-learning algorithm would take all my emails, along with my classification of which are spam and which are not spam (plus and minus 1). The algorithm would produce a hypothesis h that can be used to label new emails, and if the algorithm is truly a PAC-learner, then our guarantee is that with high probability (over the randomness in which emails I receive) the algorithm will produce an h that has low error on the entire distribution of emails that get sent to me (relative to my personal spam labeling function).

Of course there are practical issues with this model. I don’t have a consistent function for calling things spam, the distribution of emails I get and my labeling function can change over time, and emails don’t come according to a distribution with independent random draws. But that’s the theoretical model, and we can hope that algorithms we devise for this model happen to work well in practice.

Here’s the formal definition of the error of a hypothesis h(x) produced by the learning algorithm:

\textup{err}_{c,D}(h) = P_{x \sim D}(h(x) \neq c(x))

It’s read “The error of h with respect to the concept c we’re trying to learn and the distribution D is the probability over x drawn from D that the hypothesis produces the wrong label.” We can now define PAC-learning formally, introducing the parameters \delta for “probably” and \varepsilon for “approximately.” Let me say it informally first:

An algorithm PAC-learns if, for any \varepsilon, \delta > 0 and any distribution D, with probability at least 1-\delta the hypothesis h produced by the algorithm has error at most \varepsilon.

To flesh out the other things hiding, here’s the full definition.

Definition (PAC): An algorithm A(\varepsilon, \delta) is said to PAC-learn the concept class H over the set X if, for any distribution D over X and for any 0 < \varepsilon, \delta < 1/2 and for any target concept c \in H, the probability that A produces a hypothesis h of error at most \varepsilon is at least 1-\delta. In symbols, \Pr_D(\textup{err}_{c,D}(h) \leq \varepsilon) > 1 - \delta. Moreover, A must run in time polynomial in 1/\varepsilon, 1/\delta and n, where n is the size of an element x \in X.

The reason we need a class of concepts (instead of just one target concept) is that otherwise we could just have a constant algorithm that outputs the correct labeling function. Indeed, when we get a problem we ask whether there exists an algorithm that can solve it. I.e., a problem is “PAC-learnable” if there is some algorithm that learns it as described above. With just one target concept there can exist an algorithm to solve the problem by hard-coding a description of the concept in the source code. So we need to have some “class of possible answers” that the algorithm is searching through so that the algorithm actually has a job to do.

We call an algorithm that gets this guarantee a strong learner. A weak learner has the same definition, except that we replace \textup{err}_{c,D}(h) \leq \varepsilon by the weak error bound: for some fixed 0 < \eta < 1/2, the error satisfies \textup{err}_{c,D}(h) \leq 1/2 - \eta. So we don’t require the algorithm to achieve any desired accuracy; it just has to get some accuracy slightly better than random guessing, and we don’t get to choose how much better. As we will see, the value of \eta influences the convergence of the boosting algorithm. One important thing to note is that \eta is a constant independent of n, the size of an example, and m, the number of examples. In particular, we need to rule out the “degenerate” possibility that \eta(n) = 2^{-n}, where the quality of the weak learner degrades toward 1/2 as the learning problem scales. We want the error to stay bounded away from 1/2 (equivalently, \eta bounded away from zero).

So just to clarify all the parameters floating around, \delta will always be the “probably” part of PAC, \varepsilon is the error bound (the “approximately” part) for strong learners, and \eta is the error bound for weak learners.

What could a weak learner be?

Now before we prove that you can “boost” a weak learner to a strong learner, we should have some idea of what a weak learner is. Informally, it’s just a ‘rule of thumb’ that you can somehow guarantee does a little bit better than random guessing.

In practice, however, people sort of just make things up and they work. It’s kind of funny, but until recently nobody had really studied what makes a “good weak learner.” They just use an example like the one we’re about to show, and as long as they get a good error rate they don’t care if it has any mathematical guarantees. Likewise, they don’t expect the final “boosted” algorithm to do arbitrarily well, they just want low error rates.

The weak learner we’ll use in this post produces “decision stumps.” If you know what a decision tree is, then a decision stump is trivial: it’s a decision tree where the whole tree is just one node. If you don’t know what a decision tree is, a decision stump is a classification rule of the form:

Pick some feature i and some value of that feature v, and output label +1 if the input example has value v for feature i, and output label -1 otherwise.

Concretely, a decision stump might mark an email spam if it contains the word “viagra.” Or it might deny a loan applicant a loan if their credit score is less than some number.

Our weak learner produces a decision stump by simply looking through all the features and all the values of the features until it finds a decision stump that has the best error rate. It’s brute force, baby! Actually we’ll do something a little bit different. We’ll make our data numeric and look for a threshold of the feature value to split positive labels from negative labels. Here’s the Python code we’ll use in this post for boosting. This code was part of a collaboration with my two colleagues Adam Lelkes and Ben Fish. As usual, all of the code used in this post is available on Github.

First we make a class for a decision stump. The attributes represent a feature, a threshold value for that feature, and a choice of labels for the two cases. The classify function shows how simple the hypothesis is.

class Stump:
   def __init__(self):
      self.gtLabel = None
      self.ltLabel = None
      self.splitThreshold = None
      self.splitFeature = None

   def classify(self, point):
      if point[self.splitFeature] >= self.splitThreshold:
         return self.gtLabel
      else:
         return self.ltLabel

   def __call__(self, point):
      return self.classify(point)

Then for a fixed feature index we’ll define a function that computes the best threshold value for that index.

def minLabelErrorOfHypothesisAndNegation(data, h):
   posData, negData = ([(x, y) for (x, y) in data if h(x) == 1],
                       [(x, y) for (x, y) in data if h(x) == -1])

   posError = sum(y == -1 for (x, y) in posData) + sum(y == 1 for (x, y) in negData)
   negError = sum(y == 1 for (x, y) in posData) + sum(y == -1 for (x, y) in negData)
   return min(posError, negError) / len(data)


def bestThreshold(data, index, errorFunction):
   '''Compute best threshold for a given feature. Returns (threshold, error)'''

   thresholds = [point[index] for (point, label) in data]
   def makeThreshold(t):
      return lambda x: 1 if x[index] >= t else -1
   errors = [(threshold, errorFunction(data, makeThreshold(threshold))) for threshold in thresholds]
   return min(errors, key=lambda p: p[1])

Here we allow the user to provide a generic error function that the weak learner tries to minimize, but in our case it will just be minLabelErrorOfHypothesisAndNegation. In words, our threshold function will label an example as +1 if feature i has value at least the threshold and -1 otherwise. But we might want to do the opposite, labeling -1 above the threshold and +1 below. The bestThreshold function doesn’t care, it just wants to know which threshold value is the best. Then we compute what the right hypothesis is in the next function.

# the error function we minimize by default
defaultError = minLabelErrorOfHypothesisAndNegation

def buildDecisionStump(drawExample, errorFunction=defaultError):
   # find the index of the best feature to split on, and the best threshold for
   # that index. A labeled example is a pair (example, label) and drawExample()
   # accepts no arguments and returns a labeled example.

   data = [drawExample() for _ in range(500)]

   bestThresholds = [(i,) + bestThreshold(data, i, errorFunction) for i in range(len(data[0][0]))]
   feature, thresh, _ = min(bestThresholds, key=lambda p: p[2])

   stump = Stump()
   stump.splitFeature = feature
   stump.splitThreshold = thresh
   stump.gtLabel = majorityVote([x for x in data if x[0][feature] >= thresh])
   stump.ltLabel = majorityVote([x for x in data if x[0][feature] < thresh])

   return stump

It’s a little bit inefficient but no matter. To illustrate the PAC framework we emphasize that the weak learner needs nothing except the ability to draw from a distribution. It does so, and then it computes the best threshold and creates a new stump reflecting that. The majorityVote function just picks the most common label of examples in the list. Note that drawing 500 samples is arbitrary, and in general we might increase it to increase the success probability of finding a good hypothesis. In fact, when proving PAC-learning theorems the number of samples drawn often depends on the accuracy and confidence parameters \varepsilon, \delta. We omit them here for simplicity.
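The majorityVote helper isn’t shown above; a minimal version (assuming, as elsewhere, that a labeled example is a pair (point, label)) might look like this.

from collections import Counter

def majorityVote(labeledExamples):
   # return the most common label among the given (example, label) pairs;
   # the default of +1 for an empty list is an arbitrary choice
   counts = Counter(label for (point, label) in labeledExamples)
   return counts.most_common(1)[0][0] if counts else 1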

Strong learners from weak learners

So suppose we have a weak learner A for a concept class H, and for any concept c from H it can produce with probability at least 1 - \delta a hypothesis h with error bound 1/2 - \eta. How can we modify this algorithm to get a strong learner? Here is an idea: we can maintain a large number of separate instances of the weak learner A, run them on our dataset, and then combine their hypotheses with a majority vote. In code this might look like the following python snippet. For now examples are binary vectors and the labels are \pm 1, so the sign of a real number will be its label.

import random

def boost(learner, data, rounds=100):
   m = len(data)
   # train each learner on its own random subset of the data
   learners = [learner(random.sample(data, m // rounds)) for _ in range(rounds)]

   def hypothesis(example):
      return sign(sum(1 / rounds * h(example) for h in learners))

   return hypothesis

This is a bit too simplistic: what if the majority of the weak learners are wrong? In fact, with an overly naive mindset one might imagine a scenario in which the different instances of A have high disagreement, so is the prediction going to depend on which random subset the learner happens to get? We can do better: instead of taking a majority vote we can take a weighted majority vote. That is, give the weak learner a random subset of your data, and then test its hypothesis on the data to get a good estimate of its error. Then you can use this error to say whether the hypothesis is any good, and give good hypotheses high weight and bad hypotheses low weight (proportionally to the error). Then the “boosted” hypothesis would take a weighted majority vote of all your hypotheses on an example. This might look like the following.

# data is a list of (example, label) pairs
def error(hypothesis, data):
   return sum(1 for x,y in data if hypothesis(x) != y) / len(data)


def boost(learner, data, rounds=100):
   m = len(data)
   weights = [0] * rounds
   learners = [None] * rounds

   for t in range(rounds):
      learners[t] = learner(random.sample(data, m // rounds))
      weights[t] = 1 - error(learners[t], data)
   
   def hypothesis(example):
      return sign(sum(weight * h(example) for (h, weight) in zip(learners, weights)))

   return hypothesis

This might be better, but we can do something even cleverer. Rather than use the estimated error just to say something about the hypothesis, we can identify the mislabeled examples in a round and somehow encourage A to do better at classifying those examples in later rounds. This turns out to be the key insight, and it’s why the algorithm is called AdaBoost (Ada stands for “adaptive”). We’re adaptively modifying the distribution over the training data we feed to A based on which data A learns “easily” and which it does not. So as the boosting algorithm runs, the distribution given to A has more and more probability weight on the examples that A misclassified. And, this is the key, A has the guarantee that it will weak learn no matter what the distribution over the data is. Of course, its error is also measured relative to the adaptively chosen distribution, and the crux of the argument will be relating this error to the error on the original distribution we’re trying to strong learn.

To implement this idea in mathematics, we will start with a fixed sample X = \{x_1, \dots, x_m\} drawn from D and assign a weight 0 \leq \mu_i \leq 1 to each x_i. Call c(x) the true label of an example. Initially, set \mu_i to be 1. Since our dataset can have repetitions, normalizing the \mu_i to a probability distribution gives an estimate of D. Now we’ll pick some “update” parameter \zeta > 1 (this is intentionally vague). Then we’ll repeat the following procedure for some number of rounds t = 1, \dots, T.

  1. Renormalize the \mu_i to a probability distribution.
  2. Train the weak learner A, and provide it with a simulated distribution D' that draws examples x_i according to their weights \mu_i. The weak learner outputs a hypothesis h_t.
  3. For every example x_i mislabeled by h_t, update \mu_i by replacing it with \mu_i \zeta.
  4. For every correctly labeled example replace \mu_i with \mu_i / \zeta.

At the end our final hypothesis will be a weighted majority vote of all the h_t, where the weights depend on the amount of error in each round. Note that when the weak learner misclassifies an example we increase the weight of that example, which means we’re increasing the likelihood it will be drawn in future rounds. In particular, in order to maintain good accuracy the weak learner will eventually have to produce a hypothesis that fixes its mistakes in previous rounds. Likewise, when examples are correctly classified, we reduce their weights. So examples that are “easy” to learn are given lower emphasis. And that’s it. That’s the prize-winning idea. It’s elegant, powerful, and easy to understand. The rest is working out the values of all the parameters and proving it does what it’s supposed to.
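As a purely illustrative sketch of steps 3 and 4 (the function name and the fixed value of zeta are mine; AdaBoost will effectively choose the update factor adaptively in each round):

def reweight(weights, examples, hypothesis, zeta=2.0):
   # multiply the weight of each misclassified example by zeta, and divide
   # the weight of each correctly classified example by zeta
   return [w * zeta if hypothesis(x) != label else w / zeta
           for (w, (x, label)) in zip(weights, examples)]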

The details and a proof

Let’s jump straight into a Python program that performs boosting.

First we pick a data representation. Examples are pairs (x,c(x)) whose type is the tuple (object, int). Our labels will be \pm 1 valued. Since our algorithm is entirely black-box, we don’t need to assume anything about how the examples X are represented. Our dataset is just a list of labeled examples, and the weights are floats. So our boosting function prototype looks like this

# boost: [(object, int)], learner, int -> (object -> int)
# boost the given weak learner into a strong learner
def boost(examples, weakLearner, rounds):
   ...

And a weak learner, as we saw for decision stumps, has the following function prototype.

# weakLearner: (() -> (list, label)) -> (list -> label)
# accept as input a function that draws labeled examples from a distribution,
# and output a hypothesis list -> label
def weakLearner(draw):
   ...
   return hypothesis

Assuming we have a weak learner, we can fill in the rest of the boosting algorithm with some mysterious details. First, a helper function to compute the weighted error of a hypothesis on some examples. It also returns the correctness of the hypothesis on each example, which we’ll use later.

# compute the weighted error of a given hypothesis on a distribution
# return all of the hypothesis results and the error
def weightedLabelError(h, examples, weights):
   hypothesisResults = [h(x)*y for (x,y) in examples] # +1 if correct, else -1
   return hypothesisResults, sum(w for (z,w) in zip(hypothesisResults, weights) if z < 0)

Next we have the main boosting algorithm. Here draw is a function that accepts as input a list of floats that sum to 1 and picks an index proportional to the weight of the entry at that index.
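The helpers normalize, draw, and sign aren’t shown in this post; simple stand-ins (my own versions, not necessarily what the repository uses) might look like this.

import random

def normalize(weights):
   # scale a list of nonnegative floats so that they sum to 1
   total = sum(weights)
   return [w / total for w in weights]

def draw(distribution):
   # pick an index i with probability distribution[i]
   r = random.random()
   cumulative = 0.0
   for i, p in enumerate(distribution):
      cumulative += p
      if r <= cumulative:
         return i
   return len(distribution) - 1  # guard against floating point round-off

def sign(x):
   return 1 if x >= 0 else -1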

 
import math

def boost(examples, weakLearner, rounds):
   distr = normalize([1.] * len(examples))
   hypotheses = [None] * rounds
   alpha = [0] * rounds

   for t in range(rounds):
      def drawExample():
         return examples[draw(distr)]

      hypotheses[t] = weakLearner(drawExample)
      hypothesisResults, error = weightedLabelError(hypotheses[t], examples, distr)

      alpha[t] = 0.5 * math.log((1 - error) / (.0001 + error))
      distr = normalize([d * math.exp(-alpha[t] * h)
                         for (d,h) in zip(distr, hypothesisResults)])
      print("Round %d, error %.3f" % (t, error))

   def finalHypothesis(x):
      return sign(sum(a * h(x) for (a, h) in zip(alpha, hypotheses)))

   return finalHypothesis

The code is almost clear. For each round we run the weak learner on our hand-crafted distribution. We compute the error of the resulting hypothesis on that distribution, and then we update the distribution in this mysterious way depending on some alphas and logs and exponentials. In particular, we use the expression c(x) h(x), the product of the true label and predicted label, as computed in weightedLabelError. As the comment says, this will either be +1 or -1 depending on whether the predicted label is correct or incorrect, respectively. The choice of those strange logarithms and exponentials is the result of some optimization: they allow us to minimize training error as quickly as possible (we’ll see this in the proof to follow). The rest of this section will prove that this works when the weak learner lives up to its guarantee. One small caveat: in the proof we will assume the error of the hypothesis is not zero (because a weak learner is not supposed to return a perfect hypothesis!), but in practice we want to avoid dividing by zero so we add the small 0.0001 to avoid that. As a quick self-check: why wouldn’t we just stop in the middle and output that “perfect” hypothesis? (What distribution is it “perfect” over? It might not be the original distribution!)

If we wanted to define the algorithm in pseudocode (which helps for the proof) we would write it this way. Given T rounds, start with D_1 being the uniform distribution over labeled input examples X, where x has label c(x). Say there are m input examples.

  1. For each t=1, \dots T:
    1. Let h_t be the weak learning algorithm run on D_t.
    2. Let \varepsilon_t be the error of h_t on D_t.
    3. Let \alpha_t = \frac{1}{2} \log ((1- \varepsilon_t) / \varepsilon_t).
    4. Update each entry of D_{t+1} by the rule D_{t+1}(x) = \frac{D_t(x)}{Z_t} e^{- h_t(x) c(x) \alpha_t}, where Z_t is chosen to normalize D_{t+1} to a distribution.
  2. Output as the final hypothesis the sign of h(x) = \sum_{t=1}^T \alpha_t h_t(x), i.e. h'(x) = \textup{sign}(h(x)).

Now let’s prove this works. That is, we’ll prove the error on the input dataset (the training set) decreases exponentially quickly in the number of rounds. Then we’ll run it on an example and save generalization error for the next post. Over the years this algorithm has been tweaked to the point that the proof is very straightforward.

Theorem: If AdaBoost is run for T rounds with a weak learner, and in each round t the edge \eta_t over random guessing satisfies \varepsilon_t = 1/2 - \eta_t, then the training error of AdaBoost is at most e^{-2 \sum_{t=1}^T \eta_t^2}.

Proof. Let m be the number of examples given to the boosting algorithm. First, we derive a closed-form expression for D_{t+1} in terms of the normalization constants Z_t. Expanding the recurrence relation gives

\displaystyle D_{t+1}(x) = D_1(x)\frac{e^{-\alpha_1 c(x) h_1(x)}}{Z_1} \dots \frac{e^{- \alpha_t c(x) h_t(x)}}{Z_t}

Because the starting distribution is uniform, and combining the products into a sum of the exponents, this simplifies to

\displaystyle D_{t+1}(x) = \frac{1}{m} \frac{e^{-c(x) \sum_{s=1}^t \alpha_s h_s(x)}}{\prod_{s=1}^t Z_s}

In particular, for t = T the exponent is exactly -c(x) h(x), so

\displaystyle D_{T+1}(x) = \frac{1}{m}\frac{e^{-c(x) h(x)}}{\prod_{t=1}^T Z_t}

Next, we show that the training error is bounded by the product of the normalization terms \prod_{s=1}^t Z_s. This part has always seemed strange to me, that the training error of boosting depends on the factors you need to normalize a distribution. But it’s just a different perspective on the multiplicative weights scheme. If we didn’t explicitly normalize the distribution at each step, we’d get nonnegative weights (which we could convert to a distribution just for the sampling step) and the training error would depend on the product of the weight updates in each step. Anyway let’s prove it.

The training error is defined to be \frac{1}{m} (\textup{\# incorrect predictions by } h). This can be written with an indicator function as follows:

\displaystyle \frac{1}{m} \sum_{x \in X} 1_{c(x) h(x) \leq 0}

Because the sign of h(x) determines its prediction, the product is negative when h is incorrect. Now we can do a strange thing, we’re going to upper bound the indicator function (which is either zero or one) by e^{-c(x)h(x)}. This works because if h predicts correctly then the indicator function is zero while the exponential is greater than zero. On the other hand if h is incorrect the exponential is greater than one because e^z \geq 1 when z \geq 0. So we get

\displaystyle \leq \frac{1}{m} \sum_{x \in X} e^{-c(x)h(x)}

and rearranging the formula for D_{T+1} from the first part gives

\displaystyle \sum_{x \in X} D_{T+1}(x) \prod_{t=1}^T Z_t

Since D_{T+1} is a distribution, it sums to 1 and we can factor the product of the Z_t out. So the training error is bounded by \prod_{t=1}^T Z_t.

The last step is to bound the product of the normalization factors. It’s enough to show that Z_t \leq e^{-2 \eta_t^2}. The normalization constant Z_t is by definition the sum over x of the numerators in the update rule for D_{t+1}, i.e.

\displaystyle Z_t = \sum_{x \in X} D_t(x) e^{-\alpha_t c(x) h_t(x)}

We can split this up into the correctly and incorrectly classified examples (for which c(x) h_t(x) is +1 or -1, respectively) to get

\displaystyle Z_t = e^{-\alpha_t} \sum_{\textup{correct } x} D_t(x) + e^{\alpha_t} \sum_{\textup{incorrect } x} D_t(x)

But by definition the incorrectly classified part of D_t sums to \varepsilon_t and the correctly classified part sums to 1-\varepsilon_t. So we get

\displaystyle e^{-\alpha_t}(1-\varepsilon_t) + e^{\alpha_t} \varepsilon_t

Finally, since this is an upper bound we want to pick \alpha_t so as to minimize this expression. With a little calculus you can see the \alpha_t we chose in the algorithm pseudocode achieves the minimum, and this simplifies to 2 \sqrt{\varepsilon_t (1-\varepsilon_t)}. Plug in \varepsilon_t = 1/2 - \eta_t to get \sqrt{1 - 4 \eta_t^2} and use the calculus fact that 1 - z \leq e^{-z} to get e^{-2\eta_t^2} as desired.
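For completeness, the calculus: setting the derivative of g(\alpha) = e^{-\alpha}(1-\varepsilon_t) + e^{\alpha} \varepsilon_t to zero gives

\displaystyle g'(\alpha) = -e^{-\alpha}(1-\varepsilon_t) + e^{\alpha}\varepsilon_t = 0 \quad \Rightarrow \quad e^{2\alpha} = \frac{1-\varepsilon_t}{\varepsilon_t} \quad \Rightarrow \quad \alpha = \frac{1}{2} \log \frac{1-\varepsilon_t}{\varepsilon_t}

and plugging this \alpha back in gives e^{-\alpha}(1-\varepsilon_t) = e^{\alpha}\varepsilon_t = \sqrt{\varepsilon_t(1-\varepsilon_t)}, whose sum is the claimed 2 \sqrt{\varepsilon_t (1-\varepsilon_t)}.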

\square

This is fine and dandy, it says that if you have a true weak learner then the training error of AdaBoost vanishes exponentially fast in the number of boosting rounds. But what about generalization error? What we really care about is whether the hypothesis produced by boosting has low error on the original distribution D as a whole, not just the training sample we started with.

One might expect that if you run boosting for more and more rounds, then it will eventually overfit the training data and its generalization accuracy will degrade. However, in practice this is not the case! The longer you boost, even if you get down to zero training error, the better generalization tends to be. For a long time this was sort of a mystery, and we’ll resolve the mystery in the sequel to this post. For now, we’ll close by showing a run of AdaBoost on some real world data.

The “adult” census dataset

The “adult” dataset is a standard dataset taken from the 1994 US census. It tracks a number of demographic and employment features (including gender, age, employment sector, etc.) and the goal is to predict whether an individual makes over $50k per year. Here are the first few lines from the training set.

39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K
50, Self-emp-not-inc, 83311, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 13, United-States, <=50K
38, Private, 215646, HS-grad, 9, Divorced, Handlers-cleaners, Not-in-family, White, Male, 0, 0, 40, United-States, <=50K
53, Private, 234721, 11th, 7, Married-civ-spouse, Handlers-cleaners, Husband, Black, Male, 0, 0, 40, United-States, <=50K
28, Private, 338409, Bachelors, 13, Married-civ-spouse, Prof-specialty, Wife, Black, Female, 0, 0, 40, Cuba, <=50K
37, Private, 284582, Masters, 14, Married-civ-spouse, Exec-managerial, Wife, White, Female, 0, 0, 40, United-States, <=50K

We perform some preprocessing of the data, so that the categorical examples turn into binary features. You can see the full details in the github repository for this post; here are the first few post-processed lines (my newlines added).

>>> from data import adult
>>> train, test = adult.load()
>>> train[:3]
[((39, 1, 0, 0, 0, 0, 0, 1, 0, 0, 13, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 2174, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1), 

((50, 1, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1), 

((38, 1, 1, 0, 0, 0, 0, 0, 0, 0, 9, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1)]

Now we can run boosting on the training data, and compute its error on the test data.

>>> from boosting import boost
>>> from data import adult
>>> from decisionstump import buildDecisionStump
>>> train, test = adult.load()
>>> weakLearner = buildDecisionStump
>>> rounds = 20
>>> h = boost(train, weakLearner, rounds)
Round 0, error 0.199
Round 1, error 0.231
Round 2, error 0.308
Round 3, error 0.380
Round 4, error 0.392
Round 5, error 0.451
Round 6, error 0.436
Round 7, error 0.459
Round 8, error 0.452
Round 9, error 0.432
Round 10, error 0.444
Round 11, error 0.447
Round 12, error 0.450
Round 13, error 0.454
Round 14, error 0.505
Round 15, error 0.476
Round 16, error 0.484
Round 17, error 0.500
Round 18, error 0.493
Round 19, error 0.473
>>> error(h, train)
0.153343
>>> error(h, test)
0.151711

This isn’t too shabby. I’ve tried running boosting for more rounds (a hundred) and the error doesn’t seem to improve by much. This suggests that finding the best decision stump is not a true weak learner for this problem (or at least that it fails on the reweighted distributions for this dataset), and indeed we can see that the training errors across rounds roughly tend to 1/2.

Though we have not compared our results above to any baseline, AdaBoost seems to work pretty well. This is kind of a meta point about theoretical computer science research. One spends years trying to devise algorithms that work in theory (and finding conditions under which we can get good algorithms in theory), but when it comes to practice we can’t do anything but hope the algorithms will work well. It’s kind of amazing that something like Boosting works in practice. It’s not clear to me that weak learners should exist at all, even for a given real world problem. But the results speak for themselves.

Next time

Next time we’ll get a bit deeper into the theory of boosting. We’ll derive the notion of a “margin” that quantifies the confidence of boosting in its prediction. Then we’ll describe (and maybe prove) a theorem that says if the “minimum margin” of AdaBoost on the training data is large, then the generalization error of AdaBoost on the entire distribution is small. The notion of a margin is actually quite a deep one, and it shows up in another famous machine learning technique called the Support Vector Machine. In fact, it’s part of some recent research I’ve been working on as well. More on that in the future.

Until next time!

The Many Faces of Set Cover

A while back Peter Norvig posted a wonderful pair of articles about regex golf. The idea behind regex golf is to come up with the shortest possible regular expression that matches one given list of strings, but not the other.

“Regex Golf,” by Randall Munroe.

In the first article, Norvig runs a basic algorithm to recreate and improve the results from the comic, and in the second he beefs it up with some improved search heuristics. My favorite part about this topic is that regex golf can be phrased in terms of a problem called set cover. I noticed this when reading the comic, and was delighted to see Norvig use that as the basis of his algorithm.

The set cover problem shows up in other places, too. If you have a database of items labeled by users, and you want to find the smallest set of labels to display that covers every item in the database, you’re doing set cover. I hear there are applications in biochemistry and biology but haven’t seen them myself.

If you know what a set is (just think of the “set” or “hash set” type from your favorite programming language), then set cover has a simple definition.

Definition (The Set Cover Problem): You are given a finite set U called a “universe” and sets S_1, \dots, S_n each of which is a subset of U. You choose some of the S_i to ensure that every x \in U is in one of your chosen sets, and you want to minimize the number of S_i you picked.

It’s called a “cover” because the sets you pick “cover” every element of U. Let’s do a simple example. Let U = \{ 1,2,3,4,5 \} and

\displaystyle S_1 = \{ 1,3,4 \}, S_2 = \{ 2,3,5 \}, S_3 = \{ 1,4,5 \}, S_4 = \{ 2,4 \}

Then the smallest possible number of sets you can pick is 2, and you can achieve this by picking both S_1, S_2 or both S_2, S_3. The connection to regex golf is that you pick U to be the set of strings you want to match, and you pick a set of regexes that match some of the strings in U but none of the strings you want to avoid matching (I’ll call them V). If w is such a regex, then you can form the set S_w of strings that w matches. Then if you find a small set cover with the strings w_1, \dots, w_t, then you can “or” them together to get a single regex w_1 \mid w_2 \mid \dots \mid w_t that matches all of U but none of V.
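To make the correspondence concrete, here is a small sketch of the translation in Python. The word lists and candidate regexes are made up for illustration, and re.search is used to mean “the regex matches somewhere in the string.”

import re

def matchedSet(pattern, strings):
   # the set S_w of strings that the regex w matches
   return frozenset(s for s in strings if re.search(pattern, s))

winners = {"washington", "adams", "jefferson", "madison", "monroe"}
losers = {"clinton", "bush", "obama"}
candidateRegexes = ["a.a", "j", "n$", "o.r"]

# keep only the regexes that match no losers; their matched sets are the
# sets S_w in a set cover instance whose universe is the winners
sets = {w: matchedSet(w, winners) for w in candidateRegexes
        if not matchedSet(w, losers)}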

Set cover is what’s called NP-hard, and one implication is that we shouldn’t hope to find an efficient algorithm that will always give you the shortest regex for every regex golf problem. But despite this, there are approximation algorithms for set cover. What I mean by this is that there is a regex-golf algorithm A that outputs a subset of the regexes matching all of U, and the number of regexes it outputs is such-and-such close to the minimum possible number. We’ll make “such-and-such” more formal later in the post.

What made me sad was that Norvig didn’t go any deeper than saying, “We can try to approximate set cover, and the greedy algorithm is pretty good.” It’s true, but the ideas are richer than that! Set cover is a simple example to showcase interesting techniques from theoretical computer science. And perhaps ironically, in Norvig’s second post a header promised the article would discuss the theory of set cover, but I didn’t see any of what I think of as theory. Instead he partially analyzes the structure of the regex golf instances he cares about. This is useful, but not really theoretical in any way unless he can say something universal about those instances.

I don’t mean to bash Norvig. His articles were great! And in-depth theory was way beyond scope. So this post is just my opportunity to fill in some theory gaps. We’ll do three things:

  1. Show formally that set cover is NP-hard.
  2. Prove the approximation guarantee of the greedy algorithm.
  3. Show another (very different) approximation algorithm based on linear programming.

Along the way I’ll argue that by knowing (or at least seeing) the details of these proofs, one can get a better sense of what features to look for in the set cover instance you’re trying to solve. We’ll also see how set cover depicts the broader themes of theoretical computer science.

NP-hardness

The first thing we should do is show that set cover is NP-hard. Intuitively what this means is that we can take some hard problem P and encode instances of P inside set cover problems. This idea is called a reduction, because solving problem P will “reduce” to solving set cover, and the method we use to encode instances of P as set cover problems will have a small amount of overhead. This is one way to say that set cover is “at least as hard as” P.

The hard problem we’ll reduce to set cover is called 3-satisfiability (3-SAT). In 3-SAT, the input is a formula whose variables are either true or false, and the formula is expressed as an AND of a bunch of clauses, each of which is an OR of three variables (or their negations). This is called 3-CNF form. A simple example:

\displaystyle (x \vee y \vee \neg z) \wedge (\neg x \vee w \vee y) \wedge (z \vee x \vee \neg w)

The goal of the algorithm is to decide whether there is an assignment to the variables which makes the formula true. 3-SAT is one of the most fundamental problems we believe to be hard and, roughly speaking, by reducing it to set cover we include set cover in a class called NP-complete, and if any one of these problems can be solved efficiently, then they all can (this is the famous P versus NP problem, and an efficient algorithm would imply P equals NP).

So a reduction would consist of the following: you give me a formula \varphi in 3-CNF form, and I have to produce (in a way that depends on \varphi!) a universe U and a choice of subsets S_i \subset U in such a way that

\varphi has a true assignment of variables if and only if the corresponding set cover problem has a cover using k sets.

In other words, I’m going to design a function f from 3-SAT instances to set cover instances, such that x is satisfiable if and only if f(x) has a set cover with k sets.

Why do I say it only for k sets? Well, if you can always answer this question then I claim you can find the minimum size of a set cover by doing a binary search for the smallest value of k. So finding the minimum size of a set cover reduces to the problem of telling whether there’s a set cover of size k.

Now let’s do the reduction from 3-SAT to set cover.

If you give me \varphi = C_1 \wedge C_2 \wedge \dots \wedge C_m where each C_i is a clause and the variables are denoted x_1, \dots, x_n, then I will choose as my universe U to be the set of all the clauses and indices of the variables (these are all just formal symbols). i.e.

\displaystyle U = \{ C_1, C_2, \dots, C_m, 1, 2, \dots, n \}

The first part of U will ensure I make all the clauses true, and the last part will ensure I don’t pick a variable to be both true and false at the same time.

To show how this works I have to pick my subsets. For each variable x_i, I’ll make two sets, one called S_{x_i} and one called S_{\neg x_i}. They will both contain i in addition to the clauses which they make true when the corresponding literal is true (by literal I just mean the variable or its negation). For example, if C_j uses the literal \neg x_7, then S_{\neg x_7} will contain C_j but S_{x_7} will not. Finally, I’ll set k = n, the number of variables.
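Here is a small sketch of this construction in Python, just to pin down the bookkeeping. The representation of a formula (a list of clauses, each clause a list of (variableIndex, isPositive) literals) is my own choice.

def threeSatToSetCover(clauses, numVariables):
   # the universe: one element per clause and one per variable index
   universe = ({("clause", j) for j in range(len(clauses))}
               | {("var", i) for i in range(numVariables)})

   sets = {}
   for i in range(numVariables):
      for isPositive in (True, False):
         # the clauses made true when this literal is true, plus the index i
         satisfiedClauses = {("clause", j) for (j, clause) in enumerate(clauses)
                             if (i, isPositive) in clause}
         sets[(i, isPositive)] = satisfiedClauses | {("var", i)}

   return universe, sets, numVariables  # k = numVariables

# example: (x0 or x1 or not x2) and (not x0 or x1 or x2)
clauses = [[(0, True), (1, True), (2, False)], [(0, False), (1, True), (2, True)]]
U, S, k = threeSatToSetCover(clauses, 3)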

Now to prove this reduction works I have to prove two directions. First: if my starting formula has a satisfying assignment, I have to show the set cover problem has a cover of size k. Indeed, take the sets S_{y} for all literals y that are set to true in a satisfying assignment. There are exactly n true literals (one for each variable, either it or its negation), so there are at most n sets, and these sets cover all of U: every index i is covered by whichever of S_{x_i}, S_{\neg x_i} corresponds to the chosen truth value, and every clause is covered because it must contain some true literal or else the formula wouldn’t be satisfied.

The reverse direction is similar: if I have a set cover of size n, I need to use it to come up with a satisfying truth assignment for the original formula. But indeed, the sets that get chosen can’t include both a S_{x_i} and its negation set S_{\neg x_i}, because there are n of the elements \{1, 2, \dots, n \} \subset U, and each i is only in the two sets S_{x_i}, S_{\neg x_i}. Covering all n indices therefore already takes n distinct sets, one per index, so the cover contains exactly one of S_{x_i}, S_{\neg x_i} for each i. And finally, since the cover covers all the clauses, the literals corresponding to the chosen sets give exactly a satisfying assignment.

Whew! So set cover is NP-hard because I encoded this logic problem 3-SAT within its rules. If we think 3-SAT is hard (and we do) then set cover must also be hard. So if we can’t hope to solve it exactly we should try to approximate the best solution.

The greedy approach

The method that Norvig uses in attacking the meta-regex golf problem is the greedy algorithm. The greedy algorithm is exactly what you’d expect: you maintain a list L of the subsets you’ve picked so far, and at each step you pick the set S_i that maximizes the number of new elements of U that aren’t already covered by the sets in L. In python pseudocode:

def greedySetCover(universe, sets):
   # the sets must be hashable (e.g. frozensets) so they can go in chosenSets
   chosenSets = set()
   leftToCover = universe.copy()
   unchosenSets = list(sets)  # copy so we don't mutate the caller's list

   covered = lambda s: leftToCover & s

   while leftToCover:
      if not unchosenSets:
         raise Exception("No set cover possible")

      # pick the set covering the most not-yet-covered elements
      nextSet = max(unchosenSets, key=lambda s: len(covered(s)))
      unchosenSets.remove(nextSet)
      chosenSets.add(nextSet)
      leftToCover -= nextSet

   return chosenSets
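For instance, running it on the small example from earlier (with frozensets, so the sets can live inside Python sets):

universe = frozenset({1, 2, 3, 4, 5})
sets = [frozenset({1, 3, 4}), frozenset({2, 3, 5}),
        frozenset({1, 4, 5}), frozenset({2, 4})]

cover = greedySetCover(universe, sets)
print(cover)   # a cover of size 2, e.g. {frozenset({1, 3, 4}), frozenset({2, 3, 5})}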

This is what theory has to say about the greedy algorithm:

Theorem: If it is possible to cover U by the sets in F = \{ S_1, \dots, S_n \}, then the greedy algorithm always produces a cover that at worst has size O(\log(n)) \textup{OPT}, where \textup{OPT} is the size of the smallest cover. Moreover, this is asymptotically the best any algorithm can do.

One simple fact we need from calculus is that the following sum is asymptotically the same as \log(n):

\displaystyle H(n) = 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n} = \log(n) + O(1)

Proof. [adapted from Wan] Let’s say the greedy algorithm picks sets T_1, T_2, \dots, T_k in that order. We’ll set up a little value system for the elements of U. Specifically, the value of each T_i is 1, and in step i we evenly distribute this unit value across all newly covered elements of T_i. So for T_1 each covered element gets value 1/|T_1|, and if T_2 covers four new elements, each gets a value of 1/4. One can think of this “value” as a price, or energy, or unit mass, or whatever. It’s just an accounting system (albeit a clever one) we use to make some inequalities clear later.

In general, call the value v_x of an element x \in U the value assigned to x at the step where it’s first covered. In particular, the number of sets chosen by the greedy algorithm, k, is just \sum_{x \in U} v_x. We’re just bunching back together the unit value we distributed for each step of the algorithm.

Now we want to compare the sets chosen by greedy to the optimal choice. Call a smallest set cover C_{\textup{OPT}}. Let’s stare at the following inequality.

\displaystyle \sum_{x \in U} v_x \leq \sum_{S \in C_{\textup{OPT}}} \sum_{x \in S} v_x

It’s true because each x counts its v_x exactly once on the left hand side, while on the right hand side the sets in C_{\textup{OPT}} must hit each x at least once, and may hit some x more than once. Also remember the left hand side is equal to k.

Now we want to show that the inner sum on the right hand side, \sum_{x \in S} v_x, is at most H(|S|). This will in fact prove the entire theorem: because each set S_i has size at most n, the inequality above will turn into

\displaystyle k \leq |C_{\textup{OPT}}| H(|S|) \leq |C_{\textup{OPT}}| H(n)

And so k \leq \textup{OPT} \cdot O(\log(n)), which is the statement of the theorem.

So we want to show that \sum_{x \in S} v_x \leq H(|S|). For each j define \delta_j(S) to be the number of elements in S not covered by T_1 \cup \dots \cup T_j. Notice that \delta_{j-1}(S) - \delta_{j}(S) is the number of elements of S that are covered for the first time in step j. If we call t_S the smallest integer j for which \delta_j(S) = 0, then counting up the differences up to step t_S gives

\sum_{x \in S} v_x = \sum_{i=1}^{t_S} (\delta_{i-1}(S) - \delta_i(S)) \cdot \frac{1}{|T_i \setminus (T_1 \cup \dots \cup T_{i-1})|}

The rightmost term is just the value assigned to the relevant elements at step i. Moreover, because T_i covers at least as many new elements as S does (by definition of the greedy algorithm), the denominator is at least \delta_{i-1}(S), so the fraction is at most 1/\delta_{i-1}(S). The end is near. For brevity I’ll drop the (S) from \delta_j(S).

\displaystyle \begin{aligned} \sum_{x \in S} v_x & \leq \sum_{i=1}^{t_S} (\delta_{i-1} - \delta_i) \frac{1}{\delta_{i-1}} \\ & \leq \sum_{i=1}^{t_S} \left(\frac{1}{1 + \delta_i} + \frac{1}{2+\delta_i} + \dots + \frac{1}{\delta_{i-1}}\right) \\ & = \sum_{i=1}^{t_S} (H(\delta_{i-1}) - H(\delta_i)) \\ &= H(\delta_0) - H(\delta_{t_S}) = H(|S|) \end{aligned}

And that proves the claim.

\square

I have three postscripts to this proof:

  1. This is basically the exact worst-case approximation that the greedy algorithm achieves. In fact, Petr Slavik proved in 1996 that the greedy gives you a set of size exactly (\log n - \log \log n + O(1)) \textup{OPT} in the worst case.
  2. This is also the best approximation that any set cover algorithm can achieve, provided that P is not NP. This result was basically known in 1994, but it wasn’t until 2013 and the use of some very sophisticated tools that the best possible bound was found with the smallest assumptions.
  3. In the proof we used that |S| \leq n to bound things, but if we knew that our sets S_i (i.e. subsets matched by a regex) had sizes bounded by, say, B, the same proof would show that the approximation factor is \log(B) instead of \log n. However, in order for that to be useful you need B to be a constant, or at least to grow more slowly than any polynomial in n, since e.g. \log(n^{0.1}) = 0.1 \log n. In fact, taking a second look at Norvig’s meta regex golf problem, some of his instances had this property! Which means the greedy algorithm gives a much better approximation ratio for certain meta regex golf problems than it does for the worst case general problem. This is one instance where knowing the proof of a theorem helps us understand how to specialize it to our interests.

Norvig’s frequency table for president meta-regex golf. The left side counts the size of each set (defined by a regex)

The linear programming approach

So we just said that you can’t possibly do better than the greedy algorithm for approximating set cover. There must be nothing left to say, job well done, right? Wrong! Our second analysis, based on linear programming, shows that instances with special features can have better approximation results.

In particular, if we’re guaranteed that each element x \in U occurs in at most B of the sets S_i, then the linear programming approach will give a B-approximation, i.e. a cover whose size is at worst larger than OPT by a multiplicative factor of B. In the case that B is constant, we can beat our earlier greedy algorithm.

The technique is now a classic one in optimization, called LP-relaxation (LP stands for linear programming). The idea is simple. Most optimization problems can be written as integer linear programs, that is, you have n variables x_1, \dots, x_n \in \{ 0, 1 \} and you want to maximize (or minimize) a linear function of the x_i subject to some linear constraints. The thing you’re trying to optimize is called the objective. While in general solving integer linear programs is NP-hard, we can relax the “integer” requirement to 0 \leq x_i \leq 1, or something similar. The resulting linear program, called the relaxed program, can be solved efficiently using the simplex algorithm or another more complicated method.

The output of solving the relaxed program is an assignment of real numbers for the x_i that optimizes the objective function. A key fact is that the solution to the relaxed linear program will be at least as good as the solution to the original integer program, because the optimal solution to the integer program is a valid candidate for the optimal solution to the linear program. Then the idea is that if we use some clever scheme to round the x_i to integers, we can measure how much this degrades the objective and prove that it doesn’t degrade too much when compared to the optimum of the relaxed program, which means it doesn’t degrade too much when compared to the optimum of the integer program as well.

If this sounds wishy washy and vague don’t worry, we’re about to make it super concrete for set cover.

We’ll make a binary variable x_i for each set S_i in the input, and x_i = 1 if and only if we include it in our proposed cover. Then the objective function we want to minimize is \sum_{i=1}^n x_i. If we call our elements X = \{ e_1, \dots, e_m \}, then we need to write down a linear constraint that says each element e_j is hit by at least one set in the proposed cover. These constraints have to depend on the sets S_i, but that’s not a problem. One good constraint for element e_j is

\displaystyle \sum_{i : e_j \in S_i} x_i \geq 1

In words, the only way that an e_j will not be covered is if all the sets containing it have their x_i = 0. And we need one of these constraints for each j. Putting it together, the integer linear program is

The integer program for set cover.
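Written out in symbols, the program described above (which the figure displays) is

\displaystyle \min \sum_{i=1}^n x_i \quad \textup{subject to} \quad \sum_{i : e_j \in S_i} x_i \geq 1 \textup{ for each } j, \qquad x_i \in \{ 0, 1 \}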

Once we understand this formulation of set cover, the relaxation is trivial. We just replace the integrality constraint x_i \in \{ 0, 1 \} with the inequalities 0 \leq x_i \leq 1.

The linear programming relaxation for set cover.

For a given candidate assignment x to the x_i, call Z(x) the objective value (in this case \sum_i x_i). Now we can be more concrete about the guarantees of this relaxation method. Let \textup{OPT}_{\textup{IP}} be the optimal value of the integer program and x_{\textup{IP}} a corresponding assignment to x_i achieving the optimum. Likewise let \textup{OPT}_{\textup{LP}}, x_{\textup{LP}} be the optimal things for the linear relaxation. We will prove:

Theorem: There is a deterministic algorithm that rounds x_{\textup{LP}} to integer values x so that the objective value Z(x) \leq B \textup{OPT}_{\textup{IP}}, where B is the maximum number of sets that any element e_j occurs in. So this gives a B-approximation of set cover.

Proof. Let B be as described in the theorem, and call y = x_{\textup{LP}} to make the indexing notation easier. The rounding algorithm is to set x_i = 1 if y_i \geq 1/B and zero otherwise.

To prove the theorem we need to show two things hold about this new candidate solution x:

  1. The choice of all S_i for which x_i = 1 covers every element.
  2. The number of sets chosen (i.e. Z(x)) is at most B times more than \textup{OPT}_{\textup{LP}}.

Since \textup{OPT}_{\textup{LP}} \leq \textup{OPT}_{\textup{IP}}, if we can prove number 2 we get Z(x) \leq B \textup{OPT}_{\textup{LP}} \leq B \textup{OPT}_{\textup{IP}}, which is the theorem.

So let’s prove 1. Fix any j and we’ll show that element e_j is covered by some set in the rounded solution. Call B_j the number of times element e_j occurs in the input sets. By definition B_j \leq B, so 1/B_j \geq 1/B. Recall y was the optimal solution to the relaxed linear program, and so it must be the case that the linear constraint for each e_j is satisfied: \sum_{i : e_j \in S_i} y_i \geq 1. We know that there are B_j terms and they sum to at least 1, so not all terms can be smaller than 1/B_j (otherwise they’d sum to something less than 1). In other words, some variable y_i in the sum is at least 1/B_j \geq 1/B, and so the corresponding x_i is set to 1 in the rounded solution, corresponding to a set S_i that contains e_j. This finishes the proof of 1.

Now let’s prove 2. For each i with x_i = 1, the corresponding variable satisfies y_i \geq 1/B, and in particular 1 \leq B y_i. Now we can simply bound the sum.

\displaystyle \begin{aligned} Z(x) = \sum_i x_i &\leq \sum_i x_i (B y_i) \\ &\leq B \sum_{i} y_i \\ &= B \cdot \textup{OPT}_{\textup{LP}} \end{aligned}

The second inequality is true because some of the x_i are zero, but we can ignore them when we upper bound and just include all the y_i. This proves part 2 and the theorem.

\square
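To make the relaxation concrete, here is a sketch of the solve-then-round procedure using scipy’s linear programming routine. None of this is from the post’s repository: the function name, the use of scipy.optimize.linprog, and the small tolerance in the rounding step are all my own choices.

import numpy as np
from scipy.optimize import linprog

def lpRoundSetCover(universe, sets):
   elements = list(universe)
   n = len(sets)

   # constraint for element e_j: sum over {i : e_j in S_i} of x_i >= 1,
   # written as -A x <= -1 to fit linprog's upper-bound form
   A = np.array([[-1.0 if e in s else 0.0 for s in sets] for e in elements])
   b = -np.ones(len(elements))
   c = np.ones(n)  # objective: minimize the sum of the x_i

   result = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
   y = result.x

   # B = max number of sets any element occurs in; round up anything >= 1/B
   B = max(sum(1 for s in sets if e in s) for e in elements)
   return [s for (s, y_i) in zip(sets, y) if y_i >= 1.0 / B - 1e-9]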

I’ve got some more postscripts to this proof:

  1. The proof works equally well when the sets are weighted, i.e. your cost for picking a set is not 1 for every set but depends on some arbitrarily given constants w_i \geq 0.
  2. We gave a deterministic algorithm rounding y to x, but one can get the same result (with high probability) using a randomized algorithm. The idea is to flip a coin with bias y_i roughly \log(n) times and set x_i = 1 if and only if the coin lands heads at least once. The guarantee is no better than what we proved, but for some other problems randomness can help you get approximations where we don’t know of any deterministic algorithms to get the same guarantees. I can’t think of any off the top of my head, but I’m pretty sure they’re out there.
  3. For step 1 we showed that at least one term in the inequality for e_j would be rounded up to 1, and this guaranteed we covered all the elements. A natural question is: why not also round up at most one term of each of these inequalities? It might be that in the worst case you don’t get a better guarantee, but it would be a quick extra heuristic you could use to post-process a rounded solution.
  4. Solving linear programs is slow. There are faster methods based on so-called “primal-dual” methods that use information about the dual of the linear program to construct a solution to the problem. Goemans and Williamson have a nice self-contained chapter on their website about this with a ton of applications.

Additional Reading

Goemans and Williamson have a large textbook called The Design of Approximation Algorithms. One problem is that this field is like a big heap of unrelated techniques, so it’s not like the book will build up some neat theoretical foundation that works for every problem. Rather, it’s messy and there are lots of details, but there are definitely diamonds in the rough, such as the problem of (and algorithms for) coloring 3-colorable graphs with “approximately 3” colors, and the infamous unique games conjecture.

I wrote a post a while back giving conditions under which the greedy algorithm gives a constant-factor approximation. This is much better than the worst case \log(n)-approximation we saw in this post. Moreover, I also wrote a post about matroids, which give a characterization of the problems where the greedy algorithm is actually optimal.

Set cover is one of the main tools that IBM’s AntiVirus software uses to detect viruses. Similarly to the regex golf problem, they find strings that occur in the code of some viruses but not (usually) in good programs. Then they look for a small set of strings that covers all the viruses, and their virus scan just has to search binaries for those strings. Hopefully the size of your set cover is really small compared to the number of viruses you want to protect against. I can’t find a reference that details this, but that is understandable because it is proprietary software.

Until next time!

Markov Chain Monte Carlo Without all the Bullshit

I have a little secret: I don’t like the terminology, notation, and style of writing in statistics. I find it unnecessarily complicated. This shows up when trying to read about Markov Chain Monte Carlo methods. Take, for example, the abstract to the Markov Chain Monte Carlo article in the Encyclopedia of Biostatistics.

Markov chain Monte Carlo (MCMC) is a technique for estimating by simulation the expectation of a statistic in a complex model. Successive random selections form a Markov chain, the stationary distribution of which is the target distribution. It is particularly useful for the evaluation of posterior distributions in complex Bayesian models. In the Metropolis–Hastings algorithm, items are selected from an arbitrary “proposal” distribution and are retained or not according to an acceptance rule. The Gibbs sampler is a special case in which the proposal distributions are conditional distributions of single components of a vector parameter. Various special cases and applications are considered.

I can only vaguely understand what the author is saying here (and really only because I know ahead of time what MCMC is). There are certainly references to more advanced things than what I’m going to cover in this post. But it seems very difficult to find an explanation of Markov Chain Monte Carlo without any superfluous jargon. The “bullshit” here is the implicit claim of an author that such jargon is needed. Maybe it is to explain advanced applications (like attempts to do “inference in Bayesian networks”), but it is certainly not needed to define or analyze the basic ideas.

So to counter, here’s my own explanation of Markov Chain Monte Carlo, inspired by the treatment of John Hopcroft and Ravi Kannan.

The Problem is Drawing from a Distribution

Markov Chain Monte Carlo is a technique to solve the problem of sampling from a complicated distribution. Let me explain by the following imaginary scenario. Say I have a magic box which can estimate probabilities of baby names very well. I can give it a string like “Malcolm” and it will tell me the exact probability p_{\textup{Malcolm}} that you will choose this name for your next child. So there’s a distribution D over all names, it’s very specific to your preferences, and for the sake of argument say this distribution is fixed and you don’t get to tamper with it.

Now comes the problem: I want to efficiently draw a name from this distribution D. This is the problem that Markov Chain Monte Carlo aims to solve. Why is it a problem? Because I have no idea what process you use to pick a name, so I can’t simulate that process myself. Here’s another method you could try: generate a name x uniformly at random, ask the machine for p_x, and then flip a biased coin with probability p_x and use x if the coin lands heads. The problem with this is that there are exponentially many names! The variable here is the number of bits needed to write down a name n = |x|. So either the probabilities p_x will be exponentially small and I’ll be flipping for a very long time to get a single name, or else there will only be a few names with nonzero probability and it will take me exponentially many draws to find them. Inefficiency is the death of me.
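
To make the inefficiency concrete, here is a hedged sketch of that naive rejection idea in Python. The list of names and the black-box function p are stand-ins for whatever the magic box provides.

import random

def rejection_sample(names, p):
    # Pick a candidate uniformly at random, then accept it with probability p(x).
    # The output is distributed exactly according to p, but if most p-values
    # are exponentially small this loop runs for an exponentially long time.
    while True:
        x = random.choice(names)
        if random.random() < p(x):
            return x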

So this is a serious problem! Let’s restate it formally just to be clear.

Definition (The sampling problem):  Let D be a distribution over a finite set X. You are given black-box access to the probability distribution function p(x) which outputs the probability of drawing x \in X according to D. Design an efficient randomized algorithm A which outputs an element of X so that the probability of outputting x is approximately p(x). More generally, output a sample of elements from X drawn according to p(x).

Assume that A has access to only fair random coins, though this allows one to efficiently simulate flipping a biased coin of any desired probability.
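
As an aside, here is one hedged sketch of that simulation: generate the binary expansion of a uniform random number one fair flip at a time, compare it to the binary expansion of the bias, and stop at the first disagreement (two flips in expectation). The generator p_bits is a made-up interface yielding the bits of the bias.

import random

def biased_coin(p_bits):
    # p_bits yields the binary expansion of the bias p, e.g. p = 5/8 -> 1, 0, 1, 0, 0, ...
    for b in p_bits:
        u = random.getrandbits(1)
        if u != b:
            # at the first differing bit we know whether the uniform draw falls below p
            return u < b
    return False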

Notice that with such an algorithm we’d be able to do things like estimate the expected value of some random variable f : X \to \mathbb{R}. We could take a large sample S \subset X via the solution to the sampling problem, and then compute the average value of f on that sample. This is what a Monte Carlo method does when sampling is easy. In fact, the Markov Chain solution to the sampling problem will allow us to do the sampling and the estimation of \mathbb{E}(f) in one fell swoop if you want.

But the core problem is really a sampling problem, and “Markov Chain Monte Carlo” would be more accurately called the “Markov Chain Sampling Method.” So let’s see why a Markov Chain could possibly help us.

Random Walks, the “Markov Chain” part of MCMC

A Markov chain is essentially a fancy term for a random walk on a graph.

You give me a directed graph G = (V,E), and for each edge e = (u,v) \in E you give me a number p_{u,v} \in [0,1]. In order to make a random walk make sense, the p_{u,v} need to satisfy the following constraint:

For any vertex x \in V, the set all values p_{x,y} on outgoing edges (x,y) must sum to 1, i.e. form a probability distribution.

If this is satisfied then we can take a random walk on G according to the probabilities as follows: start at some vertex x_0. Then pick an outgoing edge at random according to the probabilities on the outgoing edges, and follow it to x_1. Repeat if possible.

I say “if possible” because an arbitrary graph will not necessarily have any outgoing edges from a given vertex. We’ll need to impose some additional conditions on the graph in order to apply random walks to Markov Chain Monte Carlo, but in any case the idea of randomly walking is well-defined, and we call the whole object (V,E, \{ p_e \}_{e \in E}) a Markov chain.
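
Taking such a walk is only a few lines of Python. Here is a minimal sketch, assuming the chain is handed to us as a dictionary mapping each vertex to a list of (neighbor, probability) pairs; the format is made up for illustration.

import random

def random_walk(chain, start, steps):
    # chain[v] is a list of (neighbor, probability) pairs whose probabilities sum to 1
    v = start
    for _ in range(steps):
        neighbors, probs = zip(*chain[v])
        v = random.choices(neighbors, weights=probs)[0]
    return v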

Here is an example where the vertices in the graph correspond to emotional states.

An example Markov chain; image source: http://www.mathcs.emory.edu/~cheung/

In statistics land, they take the “state” interpretation of a random walk very seriously. They call the edge probabilities “state-to-state transitions.”

The main theorem we need to do anything useful with Markov chains is the stationary distribution theorem (sometimes called the “Fundamental Theorem of Markov Chains,” and for good reason). What it says intuitively is that for a very long random walk, the probability that you end at some vertex v is independent of where you started! All of these probabilities taken together is called the stationary distribution of the random walk, and it is uniquely determined by the Markov chain.

However, for the reasons we stated above (“if possible”), the stationary distribution theorem is not true of every Markov chain. The main property we need is that the graph G is strongly connected. Recall that a directed graph is called connected if, when you ignore direction, there is a path from every vertex to every other vertex. It is called strongly connected if you still get paths everywhere when considering direction. If we additionally require the stupid edge-case-catcher that no edge can have zero probability, then strong connectivity (of one component of a graph) is equivalent to the following property:

For every vertex v \in V(G), an infinite random walk started at v will return to v with probability 1.

In fact it will return infinitely often. This property is called the persistence of the state v by statisticians. I dislike this term because it appears to describe a property of a vertex, when to me it describes a property of the connected component containing that vertex. In any case, since in Markov Chain Monte Carlo we’ll be picking the graph to walk on (spoiler!) we will ensure the graph is strongly connected by design.

Finally, in order to describe the stationary distribution in a more familiar manner (using linear algebra), we will write the transition probabilities as a matrix A where entry a_{j,i} = p_{(i,j)} if there is an edge (i,j) \in E and zero otherwise. Here the rows and columns correspond to vertices of G, and each column i forms the probability distribution of going from state i to some other state in one step of the random walk. Note A is the transpose of the weighted adjacency matrix of the directed weighted graph G where the weights are the transition probabilities (the reason I do it this way is because matrix-vector multiplication will have the matrix on the left instead of the right; see below).

This matrix allows me to describe things nicely using the language of linear algebra. In particular if you give me a basis vector e_i interpreted as “the random walk currently at vertex i,” then Ae_i gives a vector whose j-th coordinate is the probability that the random walk would be at vertex j after one more step in the random walk. Likewise, if you give me a probability distribution q over the vertices, then Aq gives a probability vector interpreted as follows:

If a random walk is in state i with probability q_i, then the j-th entry of Aq is the probability that after one more step in the random walk you get to vertex j.

Interpreted this way, the stationary distribution is a probability distribution \pi such that A \pi = \pi, in other words \pi is an eigenvector of A with eigenvalue 1.

A quick side note for avid readers of this blog: this analysis of a random walk is exactly what we did back in the early days of this blog when we studied the PageRank algorithm for ranking webpages. There we called the matrix A “a web matrix,” noted it was column stochastic (as it is here), and appealed to a special case of the Perron-Frobenius theorem to show that there is a unique maximal eigenvalue equal to one (with a dimension one eigenspace) whose eigenvector we used as a sort of “stationary distribution” and the final ranking of web pages. There we described an algorithm to actually find that eigenvector by iterated multiplication by A. The following theorem is essentially a variant of this algorithm but works under weaker conditions; for the web matrix we added additional “fake” edges that give the needed stronger conditions.

Theorem: Let G be a strongly connected graph with associated edge probabilities \{ p_e \}_{e \in E} forming a Markov chain. For a probability vector x_0, define x_{t+1} = Ax_t for all t \geq 0, and let v_t be the long-term average v_t = \frac1t \sum_{s=0}^{t-1} x_s. Then:

  1. There is a unique probability vector \pi with A \pi = \pi.
  2. For all x_0, the limit \lim_{t \to \infty} v_t = \pi.

Proof. Since v_t is a probability vector we just want to show that |Av_t - v_t| \to 0 as t \to \infty. Indeed, we can expand this quantity as

\displaystyle \begin{aligned} Av_t - v_t &=\frac1t (Ax_0 + Ax_1 + \dots + Ax_{t-1}) - \frac1t (x_0 + \dots + x_{t-1}) \\ &= \frac1t (x_t - x_0) \end{aligned}

But x_t, x_0 are probability vectors, so the norm of their difference is at most 2, meaning |Av_t - v_t| \leq \frac2t \to 0. Now it’s clear that this does not depend on x_0. For uniqueness we will cop out and appeal to the Perron-Frobenius theorem, which says any matrix of this form has a unique such (normalized) eigenvector.

\square

One additional remark is that, in addition to computing the stationary distribution by actually computing this average or using an eigensolver, one can analytically solve for it as the inverse of a particular matrix. Define B = A-I_n, where I_n is the n \times n identity matrix. Let C be B with a row of ones appended to the bottom and the topmost row removed. Then one can show (quite opaquely) that the last column of C^{-1} is \pi. We leave this as an exercise to the reader, because I’m pretty sure nobody uses this method in practice.
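
Here is a small numerical sanity check of both methods on a made-up two-state chain, assuming numpy; as above, column i of A is the distribution of the next state when the walk is currently at state i.

import numpy as np

# A column-stochastic transition matrix for a made-up two-state chain.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Method 1: the long-term average of the x_t from the theorem.
x = np.array([1.0, 0.0])
t = 10000
average = np.zeros(2)
for _ in range(t):
    average += x / t
    x = A @ x
print(average)                    # roughly [0.833, 0.167]

# Method 2: the matrix-inverse trick from the remark above.
B = A - np.eye(2)
C = np.vstack([B[1:], np.ones(2)])
print(np.linalg.inv(C)[:, -1])    # [5/6, 1/6] up to floating point error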

One final remark is about why we need to take an average over all our x_t in the theorem above. There is an extra technical condition one can add to strong connectivity, called aperiodicity, which allows one to beef up the theorem so that x_t itself converges to the stationary distribution. Rigorously, aperiodicity is the property that, regardless of where you start your random walk, after some sufficiently large number of steps the random walk has a positive probability of being at every vertex at every subsequent step. As an example of a graph where aperiodicity fails: an undirected cycle on an even number of vertices. In that case there will only be a positive probability of being at certain vertices every other step, and averaging those two long-term sequences gives the actual stationary distribution.


One way to guarantee that your Markov chain is aperiodic is to ensure there is a positive probability of staying at any vertex. I.e., that your graph has a self-loop. This is what we’ll do in the next section.

Constructing a graph to walk on

Recall that the problem we’re trying to solve is to draw from a distribution over a finite set X with probability function p(x). The MCMC method is to construct a Markov chain whose stationary distribution is exactly p, even when you just have black-box access to evaluating p. That is, you (implicitly) pick a graph G and (implicitly) choose transition probabilities for the edges to make the stationary distribution p. Then you take a long enough random walk on G and output the x corresponding to whatever state you land on.

The easy part is coming up with a graph that has the right stationary distribution (in fact, “most” graphs will work). The hard part is to come up with a graph where you can prove that the convergence of a random walk to the stationary distribution is fast in comparison to the size of X. Such a proof is beyond the scope of this post, but the “right” choice of a graph is not hard to understand.

The one we’ll pick for this post is called the Metropolis-Hastings algorithm. The input is your black-box access to p(x), and the output is a set of rules that implicitly define a random walk on a graph whose vertex set is X.

It works as follows: you pick some way to put X on a lattice, so that each state corresponds to some vector in \{ 0,1, \dots, n\}^d. Then you add (two-way directed) edges to all neighboring lattice points. For n=5, d=2 it would look like this:

And for d=3, n \in \{2,3\} it would look like this:

You have to be careful here to ensure the vertices you choose for X are not disconnected, but in many applications X is naturally already a lattice.

Now we have to describe the transition probabilities. Let r be the maximum degree of a vertex in this lattice (r=2d). Suppose we’re at vertex i and we want to know where to go next. We do the following:

  1. Pick neighbor j with probability 1/r (there is some chance to stay at i).
  2. If you picked neighbor j and p(j) \geq p(i) then deterministically go to j.
  3. Otherwise, p(j) < p(i), and you go to j with probability p(j) / p(i).

We can state the probability weight p_{i,j} on edge (i,j) more compactly as

\displaystyle p_{i,j} = \frac1r \min(1, p(j) / p(i)) \\ p_{i,i} = 1 - \sum_{(i,j) \in E(G); j \neq i} p_{i,j}

It is easy to check that this is indeed a probability distribution for each vertex i. So we just have to show that p(x) is the stationary distribution for this random walk.

Here’s a fact to do that: if a probability distribution v with entries v(x) for each x \in X has the property that v(x)p_{x,y} = v(y)p_{y,x} for all x,y \in X, then v is the stationary distribution. To prove it, fix x and take the sum of both sides of that equation over all y. The result is exactly the equation v(x) = \sum_{y} v(y)p_{y,x}, which is the same as v = Av. Since the stationary distribution is the unique vector satisfying this equation, v has to be it.

Doing this with our chosen p(i) is easy, since p(i)p_{i,j} and p(j)p_{j,i} are both equal to \frac1r \min(p(i), p(j)) by applying a tiny bit of algebra to the definition. So we’re done! One can just randomly walk according to these probabilities and get a sample.
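
Here is a minimal sketch of this walk on the lattice \{ 0, 1, \dots, n \}^d, assuming only black-box access to p; the function names are my own and this is not meant to be industry-strength.

import random

def metropolis_hastings_step(i, p, n, d):
    # Propose a uniformly random lattice neighbor of i by changing one coordinate;
    # each of the r = 2d potential neighbors is proposed with probability 1/r.
    coord = random.randrange(d)
    delta = random.choice([-1, 1])
    j = list(i)
    j[coord] += delta
    j = tuple(j)

    # Proposals that fall off the lattice count as "stay at i."
    if not all(0 <= c <= n for c in j):
        return i

    # Accept with probability min(1, p(j)/p(i)), otherwise stay at i.
    if random.random() < min(1.0, p(j) / p(i)):
        return j
    return i

def metropolis_hastings_sample(p, n, d, steps=10000):
    # Walk long enough and the final state is (approximately) a draw from p.
    state = tuple(random.randrange(n + 1) for _ in range(d))
    for _ in range(steps):
        state = metropolis_hastings_step(state, p, n, d)
    return state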

Last words

The last thing I want to say about MCMC is to show that you can estimate the expected value of a function \mathbb{E}(f) simultaneously while random-walking through your Metropolis-Hastings graph (or any graph whose stationary distribution is p(x)). By definition the expected value of f is \sum_x f(x) p(x).

Now what we can do is compute the average value of f(x) just among those states we’ve visited during our random walk. With a little bit of extra work you can show that this quantity will converge to the true expected value of f at about the same time that the random walk converges to the stationary distribution. (Here the “about” means we’re off by a constant factor depending on f). In order to prove this you need some extra tools I’m too lazy to write about in this post, but the point is that it works.
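
As a sketch, reusing the hypothetical metropolis_hastings_step from above, the running average looks like this:

def estimate_expectation(f, p, n, d, steps=100000):
    # Average f over the states visited by the walk; this converges to E(f)
    # at roughly the same rate the walk converges to the stationary distribution.
    state = tuple(random.randrange(n + 1) for _ in range(d))
    total = 0.0
    for _ in range(steps):
        state = metropolis_hastings_step(state, p, n, d)
        total += f(state)
    return total / steps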

The reason I did not start by describing MCMC in terms of estimating the expected value of a function is because the core problem is a sampling problem. Moreover, there are many applications of MCMC that need nothing more than a sample. For example, MCMC can be used to estimate the volume of an arbitrary (maybe high dimensional) convex set. See these lecture notes of Alistair Sinclair for more.

The sketches above are certainly not industry-strength Metropolis-Hastings, but hopefully they’re illuminating.

Until next time!

Finding the majority element of a stream

Problem: Given a massive data stream of n values in \{ 1, 2, \dots, m \} and the guarantee that one value occurs more than n/2 times in the stream, determine exactly which value does so.

Solution: (in Python)

def majority(stream):
   # hold a candidate value along with a count of its currently unpaired copies
   held = next(stream)
   counter = 1

   for item in stream:
      if item == held:
         counter += 1
      elif counter == 0:
         # the old candidate is fully paired off; adopt this item as the new one
         held = item
         counter = 1
      else:
         # pair this item with one unpaired copy of the held value
         counter -= 1

   return held

Discussion: Let’s prove correctness. Say that s is the unknown value that occurs more than n/2 times. The idea of the algorithm is that if you could pair up elements of your stream so that distinct values are paired up, and then you “kill” these pairs, then s will always survive. The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count of how many unpaired copies of that value you’ve seen). Then when you come across a different element, you decrement the counter and implicitly account for one new pair.

Let’s analyze the complexity of the algorithm. Clearly the algorithm only uses a single pass through the data. Next, if the stream has size n, then this algorithm uses O(\log(n) + \log(m)) space. Indeed, if the stream entirely consists of a single value (say, a stream of all 1’s) then the counter will be n at the end, which takes \log(n) bits to store. On the other hand, if there are m possible values then storing the held value requires \log(m) bits.

Finally, the guarantee that one value occurs more than n/2 times is necessary. If it is not the case the algorithm could output anything (including the most infrequent element!). Moreover, without this guarantee every algorithm that solves the problem must use at least \Omega(n) space in the worst case. In particular, say that m=n, the first n/2 items are all distinct, and the last n/2 items are all the same one, the majority value s. If you do not know s in advance, then you must essentially remember which symbols occurred in the first half of the stream, because any of them could be s, and that takes \Omega(n) bits. So the guarantee allows us to bypass that barrier.

This algorithm can be generalized to detect k items with frequency above some threshold n/(k+1) using space O(k \log n). The idea is to keep k counters instead of one, adding new elements when any counter is zero. When you see an element not being tracked by your k counters (which are all positive), you decrement all the counters by 1. This is like a k-to-one matching rather than a pairing.
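
Here is a hedged sketch of that generalization (often called the Misra-Gries algorithm). Any value occurring more than n/(k+1) times is guaranteed to end up among the returned candidates, though the candidates can include false positives, so a second pass is needed if you want exact counts.

def frequent(stream, k):
    counters = {}  # at most k tracked candidates, mapping value -> count

    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # "pair off" this item with one copy of each tracked candidate
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]

    return counters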