Bayesian Ranking for Rated Items

Problem: You have a catalog of items with discrete ratings (thumbs up/thumbs down, or 5-star ratings, etc.), and you want to display them in the “right” order.

Solution: In Python

def score(ratings, rating_prior, rating_utility):
    '''
    score: [int], [int], [float] -> float

    Return the expected value of the rating for an item with known
    ratings specified by `ratings`, prior belief specified by
    `rating_prior`, and a utility function specified by `rating_utility`,
    assuming the ratings are a multinomial distribution and the prior
    belief is a Dirichlet distribution.
    '''
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(r * u for (r, u) in zip(ratings, rating_utility))
    return score / sum(ratings)

Discussion: This deceptively short solution can lead you on a long and winding path into the depths of statistics. I will do my best to give a short, clear version of the story.

As a working example, chosen merely because I recently listened to a related podcast, say you’re selling mass-market romance novels—which, by all accounts, is a predictable genre. You have a list of books, each of which has been rated on a scale of 0-5 stars by some number of users. You want to display the top books first, so that time-constrained readers can experience the most titillating novels first, and newbies to the genre can get the best first-time experience and be incentivized to buy more.

The setup required to arrive at the above code is the following, which I’ll phrase as a story.

Users’ feelings about a book, and subsequent votes, are independent draws from a known distribution (with unknown parameters). I will just call these distributions “discrete” distributions. So given a book and user, there is some unknown list (p_0, p_1, p_2, p_3, p_4, p_5) of probabilities (\sum_i p_i = 1) for each possible rating a user could give for that book.

But how do users get these probabilities? In this story, the probabilities are the output of a randomized procedure that generates distributions. That modeling assumption is called a “Dirichlet prior,” with Dirichlet meaning it generates discrete distributions, and prior meaning it encodes domain-specific information (such as the fraction of 4-star ratings for a typical romance novel).

So the story is you have a book, and that book gets a Dirichlet distribution (unknown to us), and then when a user comes along they sample from the Dirichlet distribution to get a discrete distribution, which they then draw from to choose a rating. We observe the ratings, and we need to find the book’s underlying Dirichlet. We start by assigning it some default Dirichlet (the prior) and update that Dirichlet as we observe new ratings. Some other assumptions:

  1. Books are indistinguishable except in the parameters of their Dirichlet distribution.
  2. The parameters of a book’s Dirichlet distribution don’t change over time, and inherently reflect the book’s value.

So a Dirichlet distribution is a process that produces discrete distributions. For simplicity, in this post we will say a Dirichlet distribution is parameterized by a list of six integers (n_0, \dots, n_5), one for each possible star rating. These values represent our belief in the “typical” distribution of votes for a new book. We’ll discuss more about how to set the values later. Sampling a value (a book’s list of probabilities) from the Dirichlet distribution is not trivial, but we don’t need to do that for this program. Rather, we need to be able to interpret a fixed Dirichlet distribution, and update it given some observed votes.

The interpretation we use for a Dirichlet distribution is its expected value, which, recall, is the parameters of a discrete distribution. In particular if n = \sum_i n_i, then the expected value is a discrete distribution whose probabilities are

\displaystyle \left (  \frac{n_0}{n}, \frac{n_1}{n}, \dots, \frac{n_5}{n} \right )

So you can think of each integer in the specification of a Dirichlet as “ghost ratings,” sometimes called pseudocounts, and we’re saying the probability is proportional to the count.
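
For concreteness, here’s a tiny helper (my own, not part of the solution code) that computes this expected distribution from a list of pseudocounts:

def dirichlet_expected_value(pseudocounts):
    # each pseudocount divided by the total gives the expected probability
    total = sum(pseudocounts)
    return [n_i / total for n_i in pseudocounts]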

This is great, because if we knew the true Dirichlet distribution for a book, we could compute its ranking without a second thought. The ranking would simply be the expected star rating:

def simple_score(distribution):
   return sum(i * p for (i, p) in enumerate(distribution))

Putting books with the highest score on top would maximize the expected happiness of a user visiting the site, provided that happiness matches the user’s voting behavior, since the simple_score is just the expected vote.

Also note that all the rating system needs to make this work is that the rating options are linearly ordered. So a thumbs up/down (heaving bosom/flaccid member?) would work, too. We don’t need to know how happy it makes them to see a 5-star vs 4-star book. However, because (as we’ll see next) we have to approximate the distribution, and hence have uncertainty in the scores of books with only a few ratings, it helps to incorporate numerical utility values (we’ll see this at the end).

Next, to update a given Dirichlet distribution with the results of some observed ratings, we have to dig a bit deeper into Bayes rule and the formulas for sampling from a Dirichlet distribution. Rather than do that, I’ll point you to this nice writeup by Jonathan Huang, where the core of the derivation is in Section 2.3 (page 4), and remark that the rule for updating for a new observation is to just add it to the existing counts.

Theorem: Given a Dirichlet distribution with parameters (n_1, \dots, n_k) and a new observation of outcome i, the updated Dirichlet distribution has parameters (n_1, \dots, n_{i-1}, n_i + 1, n_{i+1}, \dots, n_k). That is, you just update the i-th entry by adding 1 to it.

This particular arithmetic to do the update is a mathematical consequence (derived in the link above) of the philosophical assumption that Bayes rule is how you should model your beliefs about uncertainty, coupled with the assumption that the Dirichlet process is how the users actually arrive at their votes.
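
In code the update is as simple as the theorem says; here’s a one-line sketch (the function name is mine) that folds a batch of observed vote counts into the prior pseudocounts, which is exactly what the first line of the solution at the top does:

def update_dirichlet(pseudocounts, observed_counts):
    # Bayesian update: add each observed vote count to the corresponding pseudocount
    return [n + c for (n, c) in zip(pseudocounts, observed_counts)]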

The initial values (n_0, \dots, n_5) for star ratings should be picked so that they represent the average rating distribution among all prior books, since this is used as the default voting distribution for a new, unknown book. If you have more information about whether a book is likely to be popular, you can use a different prior. For example, if JK Rowling wrote a Harry Potter Romance novel that was part of the canon, you could pretty much guarantee it would be popular, and set n_5 high compared to n_0. Of course, if it were actually popular you could just wait for the good ratings to stream in, so tinkering with these values on a per-book basis might not help much. On the other hand, most books by unknown authors are bad, and n_5 should be close to zero. Selecting a prior dictates how influential ratings of new items are compared to ratings of items with many votes. The more pseudocounts you add to the prior, the less new votes count.

This gets us to the following code for star ratings.

def score(ratings, rating_prior):
    ratings = [r + p for (r, p) in zip(ratings, rating_prior)]
    score = sum(i * r for (i, r) in enumerate(ratings))
    return score / sum(ratings)

The only thing missing from the solution at the beginning is the utilities. The utilities are useful for two reasons. First, because books with few ratings encode a lot of uncertainty, having an idea about how extreme a feeling is implied by a specific rating allows one to give better rankings of new books.

Second, for many services, such as taxi rides on Lyft, the default star rating tends to be a 5-star, and 4-star or lower mean something went wrong. For books, 3-4 stars is a default while 5-star means you were very happy.

The utilities parameter allows you to weight rating outcomes appropriately. So if you are in a Lyft-like scenario, you might specify utilities like [-10, -5, -3, -2, 1] to denote that a 4-star rating has the same negative impact as two 5-star ratings would positively contribute. On the other hand, for books the gap between 4-star and 5-star is much less than the gap between 3-star and 4-star. The utilities simply allow you to calibrate how the votes should be valued in comparison to each other, instead of using their literal star counts.
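
For concreteness, here’s how the solution at the top might be called in the Lyft-like scenario; the prior, utilities, and observed counts below are made-up numbers purely for illustration (indexed from 1 star up to 5 stars):

# hypothetical prior: most rides are rated 5 stars, a few get less
rating_prior = [1, 1, 2, 5, 30]
# hypothetical utilities: low ratings signal something went badly wrong
rating_utility = [-10, -5, -3, -2, 1]
# observed vote counts for one driver, from 1 star up to 5 stars
observed_ratings = [0, 0, 1, 2, 10]

print(score(observed_ratings, rating_prior, rating_utility))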

Load Balancing and the Power of Hashing

Here’s a bit of folklore I often hear (and retell) that’s somewhere between a joke and deep wisdom: if you’re doing a software interview that involves some algorithms problem that seems hard, your best bet is to use hash tables.

More succinctly put: Google loves hash tables.

As someone with a passion for math and theoretical CS, I find it kind of silly and reductionist. But if you actually work with terabytes of data that can’t fit on a single machine, it also makes sense.

But to understand why hash tables are so applicable, you should have at least a fuzzy understanding of the math that goes into it, which is surprisingly unrelated to the actual act of hashing. Instead it’s the guarantees that a “random enough” hash provides that makes it so useful. The basic intuition is that if you have an algorithm that works well assuming the input data is completely random, then you can probably get a good guarantee by preprocessing the input by hashing.

In this post I’ll explain the details, and show the application to an important problem that one often faces in dealing with huge amounts of data: how to allocate resources efficiently (load balancing). As usual, all of the code used in the making of this post is available on Github.

Next week, I’ll follow this post up with another application of hashing to estimating the number of distinct items in a set that’s too large to store in memory.

Families of Hash Functions

To emphasize which specific properties of hash functions are important for a given application, we start by introducing an abstraction: a hash function is just some computable function that accepts strings as input and produces numbers between 1 and n as output. We call the set of allowed inputs U (for “Universe”). A family of hash functions is just a set of possible hash functions to choose from. We’ll use a scripty \mathscr{H} for our family, and so every hash function h in \mathscr{H} is a function h : U \to \{ 1, \dots, n \}.

You can use a single hash function h to maintain an unordered set of objects in a computer. The reason this is a problem that needs solving is because if you were to store items sequentially in a list, and if you want to determine if a specific item is already in the list, you need to potentially check every item in the list (or do something fancier). In any event, without hashing you have to spend some non-negligible amount of time searching. With hashing, you can choose the location of an element x \in U based on the value of its hash h(x). If you pick your hash function well, then you’ll have very few collisions and can deal with them efficiently. The relevant section on Wikipedia has more about the various techniques to deal with collisions in hash tables specifically, but we want to move beyond that in this post.

Here we have a family of random hash functions. So what’s the use of having many hash functions? You can pick a hash randomly from a “good” family of hash functions. While this doesn’t seem so magical, it has the informal property that it makes arbitrary data “random enough,” so that an algorithm which you designed to work with truly random data will also work with the hashes of arbitrary data. Moreover, even if an adversary knows \mathscr{H} and knows that you’re picking a hash function at random, there’s no way for the adversary to manufacture problems by feeding bad data. With overwhelming probability the worst-case scenario will not occur. Our first example of this is in load-balancing.

Load balancing and 2-universality

You can imagine load balancing in two ways, concretely and mathematically. In the concrete version you have a public-facing server that accepts requests from users, and forwards them to a back-end server which processes them and sends a response to the user. When you have a billion users and a million servers, you want to forward the requests in such a way that no server gets too many requests, or else the users will experience delays. Moreover, you’re worried that the League of Tanzanian Hackers is trying to take down your website by sending you requests in a carefully chosen order so as to screw up your load balancing algorithm.

The mathematical version of this problem usually goes with the metaphor of balls and bins. You have some collection of m balls and n bins in which to put the balls, and you want to put the balls into the bins. But there’s a twist: an adversary is throwing balls at you, and you have to put them into the bins before the next ball comes, so you don’t have time to remember (or count) how many balls are in each bin already. You only have time to do a small bit of mental arithmetic, sending ball i to bin f(i) where f is some simple function. Moreover, whatever rule you pick for distributing the balls in the bins, the adversary knows it and will throw balls at you in the worst order possible.

There is one obvious approach: why not just pick a uniformly random bin for each ball? The problem here is that we need the choice to be persistent. That is, if the adversary throws the same ball at us a second time, we need to put it in the same bin as the first time, and it doesn’t count toward the overall load. This is where the ball/bin metaphor breaks down. In the request/server picture, there is data specific to each user stored on the back-end server between requests (a session), and you need to make sure that data is not lost for some reasonable period of time. And if we were to save a uniform random choice after each request, we’d need to store a number for every request, which is too much. In short, we need the mapping to be persistent, but we also want it to be “like random” in effect.

So what do you do? The idea is to take a “good” family of hash functions \mathscr{H}, pick one h \in \mathscr{H} uniformly at random for the whole game, and when you get a request/ball x \in U send it to server/bin h(x). Note that in this case, the adversary knows your universal family \mathscr{H} ahead of time, and it knows your algorithm of committing to some single randomly chosen h \in \mathscr{H}, but the adversary does not know which particular h you chose.

The property of a family of hash functions that makes this strategy work is called 2-universality.

Definition: A family of functions \mathscr{H} from some universe U \to \{ 1, \dots, n \} is called 2-universal if, for every two distinct x, y \in U, the probability over the random choice of a hash function h from \mathscr{H} that h(x) = h(y) is at most 1/n. In notation,

\displaystyle \Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq \frac{1}{n}

I’ll give an example of such a family shortly, but let’s apply this to our load balancing problem. Our load-balancing algorithm would fail if, with even some modest probability, there is some server that receives many more than its fair share (m/n) of the m requests. If \mathscr{H} is 2-universal, then we can compute an upper bound on the expected load of a given server, say server 1. Specifically, pick any element x which hashes to 1 under our randomly chosen h. Then we can compute an upper bound on the expected number of other elements that hash to 1. In this computation we’ll only use the fact that expectation splits over sums, and the definition of 2-universal. Call \mathbf{1}_{h(y) = 1} the random variable which is zero when h(y) \neq 1 and one when h(y) = 1, and call X = \sum_{y} \mathbf{1}_{h(y) = 1}, where y ranges over the m requests received. In words, X simply represents the number of requests that hash to 1. Then

\displaystyle \mathbb{E}[X] = \sum_{y} \mathbb{E}[\mathbf{1}_{h(y) = 1}] = 1 + \sum_{y \neq x} \Pr[h(y) = h(x)] \leq 1 + \frac{m-1}{n}

So in expectation, server 1 gets its fair share of requests. And clearly this doesn’t depend on the output hash being 1; it works for any server. There are two obvious questions.

  1. How do we measure the risk that, despite the expectation we computed above, some server is overloaded?
  2. If it seems like (1) is on track to happen, what can you do?

For 1 we’re asking to compute, for a given deviation t, the probability that X - \mathbb{E}[X] > t. This makes more sense if we jump to multiplicative factors, since it’s usually okay for a server to bear twice or three times its usual load, but not like \sqrt{n} times more than its usual load. (Industry experts, please correct me if I’m wrong! I’m far from an expert on the practical details of load balancing.)

So we want to know what is the probability that X - \mathbb{E}[X] > t \cdot \mathbb{E}[X] for some small number t, and we want this to get small quickly as t grows. This is where the Chebyshev inequality becomes useful. For those who don’t want to click the link, for our situation Chebyshev’s inequality is the statement that, for any random variable X

\displaystyle \Pr[|X - \mathbb{E}[X]| > t\mathbb{E}[X]] \leq \frac{\textup{Var}[X]}{t^2 \mathbb{E}^2[X]}.

So all we need to do is compute the variance of the load of a server. It’s a bit of a hairy calculation to write down, but rest assured it doesn’t use anything fancier than the linearity of expectation and 2-universality. Let’s dive in. We start by writing the definition of variance as an expectation, and then we split X up into its parts, expand the product and group the parts.

\displaystyle \textup{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2

The easy part is (\mathbb{E}[X])^2, it’s just (1 + (m-1)/n)^2, and the hard part is \mathbb{E}[X^2]. So let’s compute that

\displaystyle \mathbb{E}[X^2] = \mathbb{E}\left[ \left( \sum_{y} \mathbf{1}_{h(y) = 1} \right)^2 \right] = \sum_{y} \Pr[h(y) = 1] + 2 \sum_{y < z} \Pr[h(y) = h(z) = 1]

In order to continue (and get a reasonable bound) we need an additional property of our hash family which is not immediately spelled out by 2-universality. Specifically, we need that for every h and i, \Pr_x[h(x) = i] = O(\frac{1}{n}). In other words, each hash function should evenly split the inputs across servers.

The reason this helps is because we can split \Pr[h(x) = h(y) = 1]  into \Pr[h(x) = h(y) \mid h(x) = 1] \cdot \Pr[h(x) = 1]. Using 2-universality to bound the left term, this quantity is at most 1/n^2, and since there are \binom{m}{2} total terms in the double sum above, the whole thing is at most O(m/n + m^2 / n^2) = O(m^2 / n^2). Note that in our big-O analysis we’re assuming m is much bigger than n.

Sweeping some of the details inside the big-O, this means that our variance is O(m^2/n^2), and so our bound on the deviation of X from its expectation by a multiplicative factor of t is at most O(1/t^2).

Now we computed a bound on the probability that a single server is overloaded, but if we want to extend that to the worst-case server, the typical probability technique is to take the union bound over all servers. This means we just add up all the individual bounds and ignore how they relate. So the probability that some server has a load more than a multiplicative factor of t above its expectation is at most O(n/t^2). This is only less than one when t = \Omega(\sqrt{n}), so all we can say with this analysis is that (with some small constant probability) no server will have a load worse than \sqrt{n} times more than the expected load.

So we have this analysis that seems not so good. If we have a million servers then the worst load on one server could potentially be a thousand times higher than the expected load. This doesn’t scale, and the problem could be in any (or all) of three places:

  1. Our analysis is weak, and we should use tighter bounds because the true max load is actually much smaller.
  2. Our hash families don’t have strong enough properties, and we should beef those up to get tighter bounds.
  3. The whole algorithm sucks and needs to be improved.

It turns out all three are true. One heuristic solution is easy and avoids all math. Have some second server (which does not process requests) count hash collisions. When some server exceeds a factor of t more than the expected load, send a message to the load balancer to randomly pick a new hash function from \mathscr{H} and for any requests that don’t have existing sessions (this is included in the request data), use the new hash function. Once the old sessions expire, switch any new incoming requests from those IPs over to the new hash function.

But there are much better solutions out there. Unfortunately their analyses are too long for a blog post (they fill multiple research papers). Fortunately their descriptions and guarantees are easy to describe, and they’re easy to program. The basic idea goes by the name “the power of two choices,” which we explored on this blog in a completely different context of random graphs.

In more detail, the idea is that you start by picking two random hash functions h_1, h_2 \in \mathscr{H}, and when you get a new request, you compute both hashes, inspect the load of the two servers indexed by those hashes, and send the request to the server with the smaller load.

This has the disadvantage of requiring bidirectional talk between the load balancer and the server, rather than obliviously forwarding requests. But the advantage is an exponential decrease in the worst-case maximum load. In particular, the following theorem holds for the case where the hashes are fully random.

Theorem: Suppose one places m balls into n bins in order according to the following procedure: for each ball pick two uniformly random and independent integers 1 \leq i,j \leq n, and place the ball into whichever of bins i, j currently contains fewer balls. If there are ties pick the bin with the smaller index. Then with high probability the largest bin has no more than \Theta(m/n) + O(\log \log (n)) balls.

This theorem appears to have been proved in a few different forms, with the best analysis being by Berenbrink et al. You can improve the constant on the \log \log n by computing more than 2 hashes. How does this relate to a good family of hash functions, which is not quite fully random? Let’s explore the answer by implementing the algorithm in python.

An example of universal hash functions, and the load balancing algorithm

In order to implement the load balancer, we need to have some good hash functions under our belt. We’ll go with the simplest example of a hash function that’s easy to prove nice properties for. Specifically each hash in our family just performs some arithmetic modulo a random prime.

Definition: Pick any prime p > m, and for any 1 \leq a < p and 0 \leq b < p define h_{a,b}(x) = (ax + b \mod p) \mod m. Let \mathscr{H} = \{ h_{a,b} \mid 1 \leq a < p, 0 \leq b < p \}.

This family of hash functions is 2-universal.

Theorem: For every x \neq y \in \{0, \dots, p-1\},

\Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq 1/m

Proof. To say that h(x) = h(y) is to say that ax+b = ay+b + i \cdot m \mod p for some integer i. I.e., the two remainders of ax+b and ay+b are equivalent mod m. The b‘s cancel and we can solve for a

a = im (x-y)^{-1} \mod p

Since a \neq 0, there are p-1 possible choices for a. Moreover, there is no point to pick i bigger than p/m since we’re working modulo p. So there are (p-1)/m possible values for the right hand side of the above equation. So if we chose them uniformly at random, (remember, x-y is fixed ahead of time, so the only choice is a, i), then there is a (p-1)/m out of p-1 chance that the equality holds, which is at most 1/m. (To be exact you should account for taking a floor of (p-1)/m when m does not evenly divide p-1, but it only decreases the overall probability.)

\square

If m and p were equal then this would be even more trivial: it’s just the fact that there is a unique line passing through any two distinct points. While that’s obviously true from standard geometry, it is also true when you work with arithmetic modulo a prime. In fact, it works using arithmetic over any field.

Implementing these hash functions is easier than shooting fish in a barrel.

import random

def draw(p, m):
   a = random.randint(1, p-1)
   b = random.randint(0, p-1)

   return lambda x: ((a*x + b) % p) % m

To encapsulate the process a little bit we implemented a UniversalHashFamily class which computes a random probable prime to use as the modulus and stores m. The interested reader can see the Github repository for more.
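
Here’s a minimal sketch of what such a class might look like; the constructor arguments mirror the usage below, but the primality test and other details are my own guesses, not the repository’s actual code:

import random

def probablyPrime(n, attempts=20):
   # a crude Fermat primality test; good enough for a demo
   if n < 2:
      return False
   if n < 4:
      return True
   for _ in range(attempts):
      a = random.randint(2, n - 2)
      if pow(a, n - 1, n) != 1:
         return False
   return True

class UniversalHashFamily(object):
   def __init__(self, numBins, primeBounds):
      self.numBins = numBins
      # pick a random probable prime in the given range to use as the modulus
      candidates = [x for x in range(primeBounds[0], primeBounds[1]) if probablyPrime(x)]
      self.prime = random.choice(candidates)

   def draw(self):
      a = random.randint(1, self.prime - 1)
      b = random.randint(0, self.prime - 1)
      return lambda x: ((a * x + b) % self.prime) % self.numBins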

If we try to run this and feed in a large range of inputs, we can see how the outputs are distributed. In this example m is a hundred thousand and n is a hundred (it’s not two terabytes, but give me some slack it’s a demo and I’ve only got my desktop!). So the expected bin size for any 2-universal family is just about 1,000.

>>> m = 100000
>>> n = 100
>>> H = UniversalHashFamily(numBins=n, primeBounds=[n, 2*n])
>>> results = []
>>> for simulation in range(100):
...    bins = [0] * n
...    h = H.draw()
...    for i in range(m):
...       bins[h(i)] += 1
...    results.append(max(bins))
...
>>> max(bins) # a single run
1228
>>> min(bins)
613
>>> max(results) # the max bin size over all runs
1228
>>> min(results)
1227

Indeed, the max is very close to the expected value.

But this example is misleading, because the point of this was that some adversary would try to screw us over by picking a worst-case input. If the adversary knew exactly which h was chosen (which it doesn’t) then the worst case input would be the set of all inputs that have the given hash output value. Let’s see it happen live.

>>> h = H.draw()
>>> badInputs = [i for i in range(m) if h(i) == 9]
>>> len(badInputs)
1227
>>> testInputs(n,m,badInputs,hashFunction=h)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1227, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

The expected size of a bin is 12, but as expected this is 100 times worse (linearly worse in n). But if we instead pick a random h after the bad inputs are chosen, the result is much better.

>>> testInputs(n,m,badInputs) # randomly picks a hash
[19, 20, 20, 19, 18, 18, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10]

However, if you re-ran this test many times, you’d eventually get unlucky and draw the hash function for which this actually is the worst input, and get a single huge bin. Other times you can get a bad hash in which two or three bins have all the inputs.

An interesting question is, what is really the worst-case input for this algorithm? I suspect it’s characterized by some choice of hash output values, taking all inputs for the chosen outputs. If this is the case, then there’s a tradeoff between the number of inputs you pick and how egregious the worst bin is. As an exercise to the reader, empirically estimate this tradeoff and find the best worst-case input for the adversary. Also, for your choice of parameters, estimate by simulation the probability that the max bin is three times larger than the expected value.
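
For reference, here’s a guess at a minimal version of the testInputs helper used above, reconstructed from its outputs (the real one is in the Github repository; in particular the m parameter is unused in this sketch):

def testInputs(n, m, inputs, hashFunction=None):
   # hash each input and count how many land in each of the n bins;
   # if no hash function is given, draw a fresh random one from the family H
   h = hashFunction if hashFunction is not None else H.draw()
   bins = [0] * n
   for x in inputs:
      bins[h(x)] += 1
   return bins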

Now that we’ve played around with the basic hashing algorithm and made a family of 2-universal hashes, let’s see the power of two choices. Recall, this algorithm picks two random hash functions and sends an input to the bin with the smallest size. This obviously generalizes to k choices, although the theoretical guarantee only improves by a constant factor, so let’s implement the more generic version.

class ChoiceHashFamily(object):
   def __init__(self, hashFamily, queryBinSize, numChoices=2):
      self.queryBinSize = queryBinSize
      self.hashFamily = hashFamily
      self.numChoices = numChoices

   def draw(self):
      hashes = [self.hashFamily.draw()
                   for _ in range(self.numChoices)]

      def h(x):
         indices = [hashFn(x) for hashFn in hashes]
         counts = [self.queryBinSize(i) for i in indices]
         count, index = min(zip(counts, indices))
         return index

      return h
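
Here’s roughly how you might wire this up for the experiment below; the bins list doubles as the load table that queryBinSize reads from (this harness is my own sketch, not the repository’s test code):

bins = [0] * n
chooser = ChoiceHashFamily(H, queryBinSize=lambda i: bins[i], numChoices=2)
h = chooser.draw()
for x in badInputs:
   bins[h(x)] += 1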

And if we test this with the bad inputs (as used previously, all the inputs that hash to 9), as a typical output we get

>>> bins
[15, 16, 15, 15, 16, 14, 16, 14, 16, 15, 16, 15, 15, 15, 17, 14, 16, 14, 16, 16, 15, 16, 15, 16, 15, 15, 17, 15, 16, 15, 15, 15, 15, 16, 15, 14, 16, 14, 16, 15, 15, 15, 14, 16, 15, 15, 15, 14, 17, 14, 15, 15, 14, 16, 13, 15, 14, 15, 15, 15, 14, 15, 13, 16, 14, 16, 15, 15, 15, 16, 15, 15, 13, 16, 14, 15, 15, 16, 14, 15, 15, 15, 11, 13, 11, 12, 13, 14, 13, 11, 11, 12, 14, 14, 13, 10, 16, 12, 14, 10]

And a typical list of bin maxima is

>>> results
[16, 16, 16, 18, 17, 365, 18, 16, 16, 365, 18, 17, 17, 17, 17, 16, 16, 17, 18, 16, 17, 18, 17, 16, 17, 17, 18, 16, 18, 17, 17, 17, 17, 18, 18, 17, 17, 16, 17, 365, 17, 18, 16, 16, 18, 17, 16, 18, 365, 16, 17, 17, 16, 16, 18, 17, 17, 17, 17, 17, 18, 16, 18, 16, 16, 18, 17, 17, 365, 16, 17, 17, 17, 17, 16, 17, 16, 17, 16, 16, 17, 17, 16, 365, 18, 16, 17, 17, 17, 17, 17, 18, 17, 17, 16, 18, 18, 17, 17, 17]

Those big bumps, which are scarily large, are the times when we picked an unlucky hash function, although this bad event becomes proportionally less likely as you scale up. But in the good case the load is clearly more even than the previous example, and the max load improves further as you pick among a larger set of randomly chosen hashes (though, as noted above, only by a constant factor).

Couple this with the technique of switching hash functions when you start to observe a large deviation, and you have yourself an elegant solution.

In addition to load balancing, hashing has a ton of applications. Remember, the main sign that you may want to use hashing is that you have an algorithm that works well when the input data is random. This comes up in streaming and sublinear algorithms, in data structure design and analysis, and many other places. We’ll be covering those applications in future posts on this blog.

Until then!

The Boosting Margin, or Why Boosting Doesn’t Overfit

There’s a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph:

[Graph: training error and generalization error plotted against hypothesis complexity]

Let me explain. The vertical axis represents the error of a hypothesis. The horizontal axis represents the complexity of the hypothesis. The blue curve represents the error of a machine learning algorithm’s output on its training data, and the red curve represents the generalization of that hypothesis to the real world. The overfitting phenomenon is marked in the middle of the graph, before which the training error and generalization error both go down, but after which the training error continues to fall while the generalization error rises.

The explanation is a sort of numerical version of Occam’s Razor that says more complex hypotheses can model a fixed data set better and better, but at some point a simpler hypothesis better models the underlying phenomenon that generates the data. To optimize a particular learning algorithm, one wants to set parameters of their model to hit the minimum of the red curve.

This is where things get juicy. Boosting, which we covered in gruesome detail previously, has a natural measure of complexity represented by the number of rounds you run the algorithm for. Each round adds one additional “weak learner” weighted vote. So running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn’t overfit on many datasets. In fact, and this is a shocking fact, researchers observed that Boosting would hit zero training error, and as they kept running it for more rounds, the generalization error kept going down! It seemed like the complexity could grow arbitrarily without penalty.

Schapire, Freund, Bartlett, and Lee proposed a theoretical explanation for this based on the notion of a margin, and the goal of this post is to go through the details of their theorem and proof. Remember that the standard AdaBoost algorithm produces a set of weak hypotheses h_i(x) and a corresponding weight \alpha_i \in [-1,1] for each round i=1, \dots, T. The classifier at the end is a weighted majority vote of all the weak learners (roughly: weak learners with high error on “hard” data points get less weight).

Definition: The signed confidence of a labeled example (x,y) is the weighted sum:

\displaystyle \textup{conf}(x) = \sum_{i=1}^T \alpha_i h_i(x)

The margin of (x,y) is the quantity \textup{margin}(x,y) = y \textup{conf}(x). The notation implicitly depends on the outputs of the AdaBoost algorithm via “conf.”

We use the product of the label and the confidence for the observation that y \cdot \textup{conf}(x) \leq 0 if and only if the classifier is incorrect. The theorem we’ll prove in this post is
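
In code the two definitions are one-liners; here’s a small sketch (function names are mine) that computes them from the lists of weak hypotheses and weights that AdaBoost outputs:

def confidence(x, hypotheses, alphas):
   # signed confidence: the weighted sum of the weak hypotheses' +/-1 outputs
   return sum(a * h(x) for (h, a) in zip(hypotheses, alphas))

def margin(x, y, hypotheses, alphas):
   # positive exactly when the weighted majority vote classifies (x, y) correctly
   return y * confidence(x, hypotheses, alphas)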

Theorem: With high probability over a random choice of training data, for any 0 < \theta < 1 the generalization error of boosting is bounded from above by

\displaystyle \Pr_{\textup{train}}[\textup{margin}(x) \leq \theta] + O \left ( \frac{1}{\theta} (\textup{typical error terms}) \right )

In words, the generalization error of the boosting hypothesis is bounded by the distribution of margins observed on the training data. To state and prove the theorem more generally we have to return to the details of PAC-learning. Here and in the rest of this post, \Pr_D denotes \Pr_{x \sim D}, the probability over a random example drawn from the distribution D, and \Pr_S denotes the probability over a random (training) set of examples drawn from D.

Theorem: Let S be a set of m random examples chosen from the distribution D generating the data. Assume the weak learner corresponds to a finite hypothesis space H of size |H|, and let \delta > 0. Then with probability at least 1 - \delta (over the choice of S), every weighted-majority vote function f satisfies the following generalization bound for every \theta > 0.

\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right )

In other words, this phenomenon is a fact about voting schemes, not boosting in particular. From now on, a “majority vote” function f(x) will mean to take the sign of a sum of the form \sum_{i=1}^N a_i h_i(x), where a_i \geq 0 and \sum_i a_i = 1. This is the “convex hull” of the set of weak learners H. If H is infinite (in our proof it will be finite, but we’ll state a generalization afterward), then only finitely many of the a_i in the sum may be nonzero.

To prove the theorem, we’ll start by defining a class of functions corresponding to “unweighted majority votes with duplicates:”

Definition: Let C_N be the set of functions f(x) of the form \frac{1}{N} \sum_{i=1}^N h_i(x) where h_i \in H and the h_i may contain duplicates (some of the h_i may be equal to some other of the h_j).

Now every majority vote function f can be written as a weighted sum of h_i with weights a_i (I’m using a instead of \alpha to distinguish arbitrary weights from those weights arising from Boosting). So any such f(x) defines a natural distribution over H where you draw function h_i with probability a_i. I’ll call this distribution A_f. If we draw from this distribution N times and take an unweighted sum, we’ll get a function g(x) \in C_N. Call the random process (distribution) generating functions in this way Q_f. In diagram form, the logic goes

f \to weights a_i \to distribution A_f over H \to function in C_N by drawing N times according to A_f.

The main fact about the relationship between f and Q_f is that each is completely determined by the other. Obviously Q_f is determined by f because we defined it that way, but f is also completely determined by Q_f as follows:

\displaystyle f(x) = \mathbb{E}_{g \sim Q_f}[g(x)]

Proving the equality is an exercise for the reader.

Proof of Theorem. First we’ll split the probability \Pr_D[y f(x) \leq 0] into two pieces, and then bound each piece.

First a probability reminder. If we have two events A and B (in what’s below, these will be yg(x) \leq \theta/2 and yf(x) \leq 0), we can split up \Pr[A] into \Pr[A \textup{ and } B] + \Pr[A \textup{ and } \overline{B}] (where \overline{B} is the opposite of B). This is called the law of total probability. Moreover, because \Pr[A \textup{ and } B] = \Pr[A | B] \Pr[B] and because these quantities are all at most 1, it’s true that \Pr[A \textup{ and } B] \leq \Pr[A \mid B] (the conditional probability) and that \Pr[A \textup{ and } B] \leq \Pr[B].

Back to the proof. Notice that for any g(x) \in C_N and any \theta > 0, we can write \Pr_D[y f(x) \leq 0] as a sum:

\displaystyle \Pr_D[y f(x) \leq 0] =\\ \Pr_D[yg(x) \leq \theta/2 \textup{ and } y f(x) \leq 0] + \Pr_D[yg(x) > \theta/2 \textup{ and } y f(x) \leq 0]

Now I’ll loosen the first term by removing the second event (that only makes the whole probability bigger) and loosen the second term by relaxing it to a conditional:

\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_D[y g(x) \leq \theta / 2] + \Pr_D[yg(x) > \theta/2 \mid yf(x) \leq 0]

Now because the inequality is true for every g(x) \in C_N, it’s also true if we take an expectation of the RHS over any distribution we choose. We’ll choose the distribution Q_f to get

\displaystyle \Pr_D[yf(x) \leq 0] \leq T_1 + T_2

And T_1 (term 1) is

\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]

And T_2 is

\displaystyle \Pr_{x \sim D, g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0] = \mathbb{E}_D[\Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]]

We can rewrite the probabilities using expectations because (1) the variables being drawn in the distributions are independent, and (2) the probability of an event is the expectation of the indicator function of the event.

Now we’ll bound the terms T_1, T_2 separately. We’ll start with T_2.

Fix (x,y) and look at the quantity inside the expectation of T_2.

\displaystyle \Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]

This should intuitively be very small for the following reason. We’re sampling g according to a distribution whose expectation is f, and we know that yf(x) \leq 0. Of course yg(x) is unlikely to be large.

Mathematically we can prove this by transforming the thing inside the probability to a form suitable for the Chernoff bound. Since \mathbb{E}[yg(x)] = yf(x) \leq 0, saying yg(x) > \theta / 2 implies that |yg(x) - \mathbb{E}[yg(x)]| > \theta /2, i.e. that some random variable which is a sum of independent random variables (the h_i) deviates from its expectation by at least \theta/2. Since the y’s are all \pm 1 and constant inside the expectation, they can be removed from the absolute value to get

\displaystyle \leq \Pr_{g \sim Q_f}[g(x) - \mathbb{E}[g(x)] > \theta/2]

The Chernoff bound allows us to bound this by an exponential in the number of random variables in the sum, i.e. N. It turns out the bound is e^{-N \theta^2 / 8}.

Now recall T_1

\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]

For T_1, we don’t want to bound it absolutely like we did for T_2, because there is nothing stopping the classifier f from being a bad classifier and having lots of error. Rather, we want to bound it in terms of the probability that yf(x) \leq \theta. We’ll do this in two steps. In step 1, we’ll go from \Pr_D of the g‘s to \Pr_S of the g‘s.

Step 1: For any fixed g, \theta, if we take a sample S of size m, then consider the event in which the sample probability deviates from the true distribution by some value \varepsilon_N, i.e. the event

\displaystyle \Pr_D[yg(x) \leq \theta /2] > \Pr_{S, x \sim S}[yg(x) \leq \theta/2] + \varepsilon_N

The claim is this happens with probability at most e^{-2m\varepsilon_N^2}. This is again the Chernoff bound in disguise, because the expected value of \Pr_S is \Pr_D, and the probability over S is an average of random variables (it’s a slightly different form of the Chernoff bound; see this post for more). From now on we’ll drop the x \sim S when writing \Pr_S.

The bound above holds true for any fixed g,\theta, but we want a bound over all g and \theta. To do that we use the union bound. Note that there are only (N+1) possible choices for a nonnegative \theta because g(x) is a sum of N values each of which is either \pm1. And there are only |C_N| \leq |H|^N possibilities for g(x). So the union bound says the above event will occur with probability at most (N+1)|H|^N e^{-2m\varepsilon_N^2}.

If we want the event to occur with probability at most \delta_N, we can judiciously pick

\displaystyle \varepsilon_N = \sqrt{(1/2m) \log ((N+1)|H|^N / \delta_N)}

And since the bound holds in general, we can take expectation with respect to Q_f and nothing changes. This means that for any \delta_N, our chosen \varepsilon_N ensures that the following is true with probability at least 1-\delta_N:

\displaystyle \Pr_{D, g \sim Q_f}[yg(x) \leq \theta/2] \leq \Pr_{S, g \sim Q_f}[yg(x) \leq \theta/2] + \varepsilon_N

Now for step 2, we bound the probability that yg(x) \leq \theta/2 on a sample by the probability that yf(x) \leq \theta on a sample.

Step 2: The first claim is that

\displaystyle \Pr_{S, g \sim Q_f}[yg(x) \leq \theta / 2] \leq \Pr_{S} [yf(x) \leq \theta] + \mathbb{E}_{S}[\Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) \geq \theta]]

What we did was break up the LHS into two “and”s, when yf(x) > \theta and yf(x) \leq \theta (this was still an equality). Then we loosened the first term to \Pr_{S}[yf(x) \leq \theta] since that is only more likely than both yg(x) \leq \theta/2 and yf(x) \leq \theta. Then we loosened the second term again using the fact that a probability of an “and” is bounded by the conditional probability.

Now we have the probability of yg(x) \leq \theta / 2 bounded by the probability that yf(x) \leq \theta plus some stuff. We just need to bound the “plus some stuff” absolutely and then we’ll be done. The argument is the same as our previous use of the Chernoff bound: we assume yf(x) \geq \theta, and yet yg(x) \leq \theta / 2. So the deviation of yg(x) from its expectation is large, and the probability that happens is exponentially small in the amount of deviation. The bound you get is

\displaystyle \Pr_{g \sim Q}[yg(x) \leq \theta/2 \mid yf(x) > \theta] \leq e^{-N\theta^2 / 8}.

And again we use the union bound to ensure the failure of this bound for any N will be very small. Specifically, if we want the total failure probability to be at most \delta, then we need to pick some \delta_j‘s so that \delta = \sum_{j=0}^{\infty} \delta_j. Choosing \delta_N = \frac{\delta}{N(N+1)} works.

Putting everything together, we get that with probability at least 1-\delta, for every \theta and every N, the following bound on the failure probability of f(x) holds:

\displaystyle \Pr_{x \sim D}[yf(x) \leq 0] \leq \Pr_{S, x \sim S}[yf(x) \leq \theta] + 2e^{-N \theta^2 / 8} + \sqrt{\frac{1}{2m} \log \left ( \frac{N(N+1)^2 |H|^N}{\delta} \right )}.

This claim is true for every N, so we can pick N that minimizes it. Doing a little bit of behind-the-scenes calculus that is left as an exercise to the reader, a tight choice of N is (4/ \theta)^2 \log(m/ \log |H|). And this gives the statement of the theorem.

\square

We proved this for finite hypothesis classes, and if you know what VC-dimension is, you’ll know that it’s a central tool for reasoning about the complexity of infinite hypothesis classes. An analogous theorem can be proved in terms of the VC dimension. In that case, calling d the VC-dimension of the weak learner’s output hypothesis class, the bound is

\displaystyle \Pr_D[yf(x) \leq 0] \leq \Pr_S[yf(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log(1/\delta)} \right )

How can we interpret these bounds with so many parameters floating around? That’s where asymptotic notation comes in handy. If we fix \theta \leq 1/2 and \delta = 0.01, then the big-O part of the theorem simplifies to \sqrt{(\log |H| \cdot \log m) / m}, which is easier to think about since (\log m)/m goes to zero very fast.

Now the theorem we just proved was about any weighted majority function. The question still remains: why is AdaBoost good? That follows from another theorem, which we’ll state and leave as an exercise (it essentially follows by unwrapping the definition of the AdaBoost algorithm from last time).

Theorem: Suppose that during AdaBoost the weak learners produce hypotheses with training errors \varepsilon_1, \dots , \varepsilon_T. Then for any \theta,

\displaystyle \Pr_{(x,y) \sim S} [yf(x) \leq \theta] \leq 2^T \prod_{t=1}^T \sqrt{\varepsilon_t^{(1-\theta)} (1-\varepsilon_t)^{(1+\theta)}}

Let’s interpret this for some concrete numbers. Say that \theta = 0 and \varepsilon_t is any fixed value less than 1/2. In this case the term inside the product becomes \sqrt{\varepsilon (1-\varepsilon)} < 1/2 and the whole bound tends exponentially quickly to zero in the number of rounds T. On the other hand, if we raise \theta to about 1/3, then in order to keep the bound tending to zero we would need \varepsilon < \frac{1}{4} ( 3 - \sqrt{5} ) which is about 20% error.
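
If you want to check that arithmetic yourself, a few lines of Python confirm it: at \theta = 1/3 and \varepsilon = \frac{1}{4}(3 - \sqrt{5}) the base of the exponential bound is exactly 1, so any smaller \varepsilon makes the bound decay.

import math

theta = 1 / 3
eps = (3 - math.sqrt(5)) / 4   # about 0.191
# per-round factor from the theorem; the whole bound is (2 * factor)**T
factor = math.sqrt(eps ** (1 - theta) * (1 - eps) ** (1 + theta))
print(eps, 2 * factor)         # prints roughly 0.19098... and 1.0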

If you’re interested in learning more about Boosting, there is an excellent book by Freund and Schapire (the inventors of boosting) called Boosting: Foundations and Algorithms. There they include a tighter analysis based on the idea of Rademacher complexity. The bound I presented in this post is nice because the proof doesn’t require any machinery past basic probability, but if you want to reach the cutting edge of knowledge about boosting you need to invest in the technical stuff.

Until next time!

Weak Learning, Boosting, and the AdaBoost algorithm

When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are “the same” and which are “different,” along with a precise description of how we’re comparing models. We’ve seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:

An algorithm can “solve” a classification task using labeled examples drawn from some distribution if it can achieve accuracy that is arbitrarily close to perfect on the distribution, and it can meet this goal with arbitrarily high probability, where its runtime and the number of examples needed scales efficiently with all the parameters (accuracy, confidence, size of an example). Moreover, the algorithm needs to succeed no matter what distribution generates the examples.

You can think of this as a game between the algorithm designer and an adversary. First, the learning problem is fixed and everyone involved knows what the task is. Then the algorithm designer has to pick an algorithm. Then the adversary, knowing the chosen algorithm, chooses a nasty distribution D over examples that are fed to the learning algorithm. The algorithm designer “wins” if the algorithm produces a hypothesis with low error on D when given samples from D. And our goal is to prove that the algorithm designer can pick a single algorithm that is extremely likely to win no matter what D the adversary picks.

We’ll momentarily restate this with a more precise definition, because in this post we will compare it to a slightly different model, which is called the weak PAC-learning model. It’s essentially the same as PAC, except it only requires the algorithm to have accuracy that is slightly better than random guessing. That is, the algorithm will output a classification function which will correctly classify a random example with probability at least \frac{1}{2} + \eta for some small, but fixed, \eta > 0. The quantity \eta (the Greek “eta”) is called the edge as in “the edge over random guessing.” We call an algorithm that produces such a hypothesis a weak learner, and in contrast we’ll call a successful algorithm in the usual PAC model a strong learner.

The amazing fact is that strong learning and weak learning are equivalent! Of course a weak learner is not the same thing as a strong learner. What we mean by “equivalent” is that:

A problem can be weak-learned if and only if it can be strong-learned.

So they are computationally the same. One direction of this equivalence is trivial: if you have a strong learner for a classification task then it’s automatically a weak learner for the same task. The reverse is much harder, and this is the crux: there is an algorithm for transforming a weak learner into a strong learner! Informally, we “boost” the weak learning algorithm by feeding it examples from carefully constructed distributions, and then take a majority vote. This “reduction” from strong to weak learning is where all the magic happens.

In this post we’ll get into the depths of this boosting technique. We’ll review the model of PAC-learning, define what it means to be a weak learner, “organically” come up with the AdaBoost algorithm from some intuitive principles, prove that AdaBoost reduces error on the training data, and then run it on data. It turns out that despite the origin of boosting being a purely theoretical question, boosting algorithms have had a wide impact on practical machine learning as well.

As usual, all of the code and data used in this post is available on this blog’s Github page.

History and multiplicative weights

Before we get into the details, here’s a bit of history and context. PAC learning was introduced by Leslie Valiant in 1984, laying the foundation for a flurry of innovation. In 1988 Michael Kearns posed the question of whether one can “boost” a weak learner to a strong learner. Two years later Rob Schapire published his landmark paper “The Strength of Weak Learnability” closing the theoretical question by providing the first “boosting” algorithm. Schapire and Yoav Freund worked together for the next few years to produce a simpler and more versatile algorithm called AdaBoost, and for this they won the Gödel Prize, one of the highest honors in theoretical computer science. AdaBoost is also the standard boosting algorithm used in practice, though there are enough variants to warrant a book on the subject.

I’m going to define and prove that AdaBoost works in this post, and implement it and test it on some data. But first I want to give some high level discussion of the technique, and afterward the goal is to make that wispy intuition rigorous.

The central technique of AdaBoost has been discovered and rediscovered in computer science, and recently it was recognized abstractly in its own right. It is called the Multiplicative Weights Update Algorithm (MWUA), and it has applications in everything from learning theory to combinatorial optimization and game theory. The idea is to

  1. Maintain a nonnegative weight for the elements of some set,
  2. Draw a random element proportionally to the weights,
  3. Do something with the chosen element, and based on the outcome of the “something…”
  4. Update the weights and repeat.

The “something” is usually a black box algorithm like “solve this simple optimization problem.” The output of the “something” is interpreted as a reward or penalty, and the weights are updated according to the severity of the penalty (the details of how this is done differ depending on the goal). In this light one can interpret MWUA as minimizing regret with respect to the best alternative element one could have chosen in hindsight. In fact, this was precisely the technique we used to attack the adversarial bandit learning problem (the Exp3 algorithm is a multiplicative weight scheme). See this lengthy technical survey of Arora and Kale for a research-level discussion of the algorithm and its applications.
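
To make the pattern concrete, here’s a bare-bones sketch of the loop; the “something” is a black-box payoff callback, and since the update details vary by application, this version (which updates only the chosen element’s weight) is just one illustrative choice, not the canonical algorithm:

import random

def mwua(elements, payoff, rounds, eta=0.5):
   # maintain a nonnegative weight for each element of the set
   weights = {e: 1.0 for e in elements}
   for _ in range(rounds):
      # draw a random element proportionally to the weights
      chosen = random.choices(elements, weights=[weights[e] for e in elements])[0]
      # do "something" with it; the callback returns a reward/penalty in [-1, 1]
      reward = payoff(chosen)
      # update the chosen element's weight multiplicatively according to the outcome
      weights[chosen] *= (1 + eta * reward)
   return weights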

Now let’s remind ourselves of the formal definition of PAC. If you’ve read the previous post on the PAC model, this next section will be redundant.

Distributions, hypotheses, and targets

In PAC-learning you are trying to give labels to data from some set X. There is a distribution D producing data from X, and it’s used for everything: to provide data the algorithm uses to learn, to measure your accuracy, and every other time you might get samples from X. You as the algorithm designer don’t know what D is, and a successful learning algorithm has to work no matter what D is. There’s some unknown function c called the target concept, which assigns a \pm 1 label to each data point in X. The target is the function we’re trying to “learn.” When the algorithm draws an example from D, it’s allowed to query the label c(x) and use all of the labels it’s seen to come up with some hypothesis h that is used for new examples that the algorithm may not have seen before. The problem is “solved” if h has low error on all of D.

To give a concrete example let’s do spam emails. Say that X is the set of all emails, and D is the distribution over emails that get sent to my personal inbox. A PAC-learning algorithm would take all my emails, along with my classification of which are spam and which are not spam (plus and minus 1). The algorithm would produce a hypothesis h that can be used to label new emails, and if the algorithm is truly a PAC-learner, then our guarantee is that with high probability (over the randomness in which emails I receive) the algorithm will produce an h that has low error on the entire distribution of emails that get sent to me (relative to my personal spam labeling function).

Of course there are practical issues with this model. I don’t have a consistent function for calling things spam, the distribution of emails I get and my labeling function can change over time, and emails don’t come according to a distribution with independent random draws. But that’s the theoretical model, and we can hope that algorithms we devise for this model happen to work well in practice.

Here’s the formal definition of the error of a hypothesis h(x) produced by the learning algorithm:

\textup{err}_{c,D}(h) = P_{x \sim D}(h(x) \neq c(x))

It’s read “The error of h with respect to the concept c we’re trying to learn and the distribution D is the probability over x drawn from D that the hypothesis produces the wrong label.” We can now define PAC-learning formally, introducing the parameters \delta for “probably” and \varepsilon for “approximately.” Let me say it informally first:

An algorithm PAC-learns if, for any \varepsilon, \delta > 0 and any distribution D, with probability at least 1-\delta the hypothesis h produced by the algorithm has error at most \varepsilon.

To flesh out the other hidden details, here’s the full definition.

Definition (PAC): An algorithm A(\varepsilon, \delta) is said to PAC-learn the concept class H over the set X if, for any distribution D over X and for any 0 < \varepsilon, \delta < 1/2 and for any target concept c \in H, the probability that A produces a hypothesis h of error at most \varepsilon is at least 1-\delta. In symbols, \Pr_D(\textup{err}_{c,D}(h) \leq \varepsilon) > 1 - \delta. Moreover, A must run in time polynomial in 1/\varepsilon, 1/\delta and n, where n is the size of an element x \in X.

The reason we need a class of concepts (instead of just one target concept) is that otherwise we could just have a constant algorithm that outputs the correct labeling function. Indeed, when we get a problem we ask whether there exists an algorithm that can solve it. I.e., a problem is “PAC-learnable” if there is some algorithm that learns it as described above. With just one target concept there can exist an algorithm to solve the problem by hard-coding a description of the concept in the source code. So we need to have some “class of possible answers” that the algorithm is searching through so that the algorithm actually has a job to do.

We call an algorithm that gets this guarantee a strong learner. A weak learner has the same definition, except that we replace \textup{err}_{c,D}(h) \leq \varepsilon by the weak error bound: for some fixed 0 < \eta < 1/2, the error \textup{err}_{c,D}(h) \leq 1/2 - \eta. So we don’t require the algorithm to achieve any desired accuracy; it just has to get some accuracy slightly better than random guessing, which we don’t get to choose. As we will see, the value of \eta influences the convergence of the boosting algorithm. One important thing to note is that \eta is a constant independent of n, the size of an example, and m, the number of examples. In particular, we need to avoid the “degenerate” possibility that \eta(n) = 2^{-n} so that as our learning problem scales the quality of the weak learner degrades toward 1/2. We want it to be bounded away from 1/2.

So just to clarify all the parameters floating around, \delta will always be the “probably” part of PAC, \varepsilon is the error bound (the “approximately” part) for strong learners, and \eta is the error bound for weak learners.
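To make the error definition concrete, here is a minimal sketch (not part of the code for this post; the name estimateError and the sample count are made up for illustration) of how you might estimate \textup{err}_{c,D}(h) empirically: draw a pile of fresh labeled examples from D and count how often h disagrees with the true label.

def estimateError(h, drawExample, samples=10000):
   # drawExample() is assumed to return a labeled example (x, c(x)) drawn from D;
   # return the fraction of drawn examples on which h disagrees with c
   labeled = [drawExample() for _ in range(samples)]
   return sum(1 for (x, y) in labeled if h(x) != y) / samples

Of course, this only estimates the error on a sample; the PAC guarantee is a statement about the true error over all of D.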

What could a weak learner be?

Now before we prove that you can “boost” a weak learner to a strong learner, we should have some idea of what a weak learner is. Informally, it’s just a ‘rule of thumb’ that you can somehow guarantee does a little bit better than random guessing.

In practice, however, people sort of just make things up and they work. It's kind of funny, but until recently nobody had really studied what makes a "good weak learner." They just use an example like the one we're about to show, and as long as they get a good error rate they don't care if it has any mathematical guarantees. Likewise, they don't expect the final "boosted" algorithm to do arbitrarily well; they just want low error rates.

The weak learner we’ll use in this post produces “decision stumps.” If you know what a decision tree is, then a decision stump is trivial: it’s a decision tree where the whole tree is just one node. If you don’t know what a decision tree is, a decision stump is a classification rule of the form:

Pick some feature i and some value of that feature v, and output label +1 if the input example has value v for feature i, and output label -1 otherwise.

Concretely, a decision stump might mark an email spam if it contains the word “viagra.” Or it might deny a loan applicant a loan if their credit score is less than some number.

Our weak learner produces a decision stump by simply looking through all the features and all the values of the features until it finds a decision stump that has the best error rate. It’s brute force, baby! Actually we’ll do something a little bit different. We’ll make our data numeric and look for a threshold of the feature value to split positive labels from negative labels. Here’s the Python code we’ll use in this post for boosting. This code was part of a collaboration with my two colleagues Adam Lelkes and Ben Fish. As usual, all of the code used in this post is available on Github.

First we make a class for a decision stump. The attributes represent a feature, a threshold value for that feature, and a choice of labels for the two cases. The classify function shows how simple the hypothesis is.

class Stump:
   def __init__(self):
      self.gtLabel = None
      self.ltLabel = None
      self.splitThreshold = None
      self.splitFeature = None

   def classify(self, point):
      if point[self.splitFeature] >= self.splitThreshold:
         return self.gtLabel
      else:
         return self.ltLabel

   def __call__(self, point):
      return self.classify(point)

Then we'll define an error function for a candidate stump hypothesis, and, for a fixed feature index, a function that computes the best threshold value for that index.

def minLabelErrorOfHypothesisAndNegation(data, h):
   posData, negData = ([(x, y) for (x, y) in data if h(x) == 1],
                       [(x, y) for (x, y) in data if h(x) == -1])

   posError = sum(y == -1 for (x, y) in posData) + sum(y == 1 for (x, y) in negData)
   negError = sum(y == 1 for (x, y) in posData) + sum(y == -1 for (x, y) in negData)
   return min(posError, negError) / len(data)


def bestThreshold(data, index, errorFunction):
   '''Compute best threshold for a given feature. Returns (threshold, error)'''

   thresholds = [point[index] for (point, label) in data]
   def makeThreshold(t):
      return lambda x: 1 if x[index] >= t else -1
   errors = [(threshold, errorFunction(data, makeThreshold(threshold))) for threshold in thresholds]
   return min(errors, key=lambda p: p[1])

Here we allow the user to provide a generic error function that the weak learner tries to minimize, but in our case it will just be minLabelErrorOfHypothesisAndNegation. In words, our threshold function will label an example as +1 if feature i has a value greater than or equal to the threshold and -1 otherwise. But we might want to do the opposite, labeling -1 above the threshold and +1 below. The bestThreshold function doesn't care; it just wants to know which threshold value is the best. Then we compute what the right hypothesis is in the next function.

def buildDecisionStump(drawExample, errorFunction=minLabelErrorOfHypothesisAndNegation):
   # find the index of the best feature to split on, and the best threshold for
   # that index. A labeled example is a pair (example, label) and drawExample() 
   # accepts no arguments and returns a labeled example. 

   data = [drawExample() for _ in range(500)]

   bestThresholds = [(i,) + bestThreshold(data, i, errorFunction) for i in range(len(data[0][0]))]
   feature, thresh, _ = min(bestThresholds, key = lambda p: p[2])

   stump = Stump()
   stump.splitFeature = feature
   stump.splitThreshold = thresh
   stump.gtLabel = majorityVote([x for x in data if x[0][feature] >= thresh])
   stump.ltLabel = majorityVote([x for x in data if x[0][feature] < thresh])

   return stump

It’s a little bit inefficient but no matter. To illustrate the PAC framework we emphasize that the weak learner needs nothing except the ability to draw from a distribution. It does so, and then it computes the best threshold and creates a new stump reflecting that. The majorityVote function just picks the most common label of examples in the list. Note that drawing 500 samples is arbitrary, and in general we might increase it to increase the success probability of finding a good hypothesis. In fact, when proving PAC-learning theorems the number of samples drawn often depends on the accuracy and confidence parameters \varepsilon, \delta. We omit them here for simplicity.
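One more detail: the majorityVote function isn't shown above. A minimal version consistent with how it's used (it receives a list of labeled examples (x, c(x)) and returns the most common label, with an arbitrary default when the list is empty) might look like the following; the version in the Github repository may differ in its details.

from collections import Counter

def majorityVote(labeledExamples):
   # return the most common label among the (example, label) pairs;
   # default to +1 if no examples fall on this side of the threshold
   if not labeledExamples:
      return 1
   labelCounts = Counter(label for (_, label) in labeledExamples)
   return labelCounts.most_common(1)[0][0]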

Strong learners from weak learners

So suppose we have a weak learner A for a concept class H, and for any concept c from H it can produce with probability at least 1 - \delta a hypothesis h with error bound 1/2 - \eta. How can we modify this algorithm to get a strong learner? Here is an idea: we can maintain a large number of separate instances of the weak learner A, run them on our dataset, and then combine their hypotheses with a majority vote. In code this might look like the following python snippet. For now examples are binary vectors and the labels are \pm 1, so the sign of a real number will be its label.

def boost(learner, data, rounds=100):
   m = len(data)
   learners = [learner(random.sample(data, m // rounds)) for _ in range(rounds)]
   
   def hypothesis(example):
      return sign(sum(1/rounds * h(example) for h in learners))

   return hypothesis

This is a bit too simplistic: what if the majority of the weak learners are wrong? One might also worry that the different instances of A disagree wildly, so that the prediction ends up depending on which random subset each learner happened to get. We can do better: instead of taking a majority vote we can take a weighted majority vote. That is, give the weak learner a random subset of your data, and then test its hypothesis on all the data to get a good estimate of its error. Then you can use this error to say whether the hypothesis is any good, giving good hypotheses high weight and bad hypotheses low weight (here, one minus the error). Then the "boosted" hypothesis would take a weighted majority vote of all your hypotheses on an example. This might look like the following.

# data is a list of (example, label) pairs
def error(hypothesis, data):
   return sum(1 for x,y in data if hypothesis(x) != y) / len(data)


def boost(learner, data, rounds=100):
   m = len(data)
   weights = [0] * rounds
   learners = [None] * rounds

   for t in range(rounds):
      learners[t] = learner(random.sample(data, m // rounds))
      weights[t] = 1 - error(learners[t], data)
   
   def hypothesis(example):
      return sign(sum(weight * h(example) for (h, weight) in zip(learners, weights)))

   return hypothesis

This might be better, but we can do something even cleverer. Rather than use the estimated error just to say something about the hypothesis, we can identify the mislabeled examples in a round and somehow encourage A to do better at classifying those examples in later rounds. This turns out to be the key insight, and it's why the algorithm is called AdaBoost (Ada stands for "adaptive"). We're adaptively modifying the distribution over the training data we feed to A based on which data A learns "easily" and which it does not. So as the boosting algorithm runs, the distribution given to A has more and more probability weight on the examples that A misclassified. And, this is the key, A has the guarantee that it will weak learn no matter what the distribution over the data is. Of course, its error is also measured relative to the adaptively chosen distribution, and the crux of the argument will be relating this error to the error on the original distribution we're trying to strong learn.

To implement this idea in mathematics, we will start with a fixed sample X = \{x_1, \dots, x_m\} drawn from D and assign a weight 0 \leq \mu_i \leq 1 to each x_i. Call c(x) the true label of an example. Initially, set \mu_i to be 1. Since our dataset can have repetitions, normalizing the \mu_i to a probability distribution gives an estimate of D. Now we’ll pick some “update” parameter \zeta > 1 (this is intentionally vague). Then we’ll repeat the following procedure for some number of rounds t = 1, \dots, T.

  1. Renormalize the \mu_i to a probability distribution.
  2. Train the weak learner A, and provide it with a simulated distribution D' that draws examples x_i according to their weights \mu_i. The weak learner outputs a hypothesis h_t.
  3. For every example x_i mislabeled by h_t, update \mu_i by replacing it with \mu_i \zeta.
  4. For every correctly labeled example replace \mu_i with \mu_i / \zeta.

At the end our final hypothesis will be a weighted majority vote of all the h_t, where the weights depend on the amount of error in each round. Note that when the weak learner misclassifies an example we increase the weight of that example, which means we’re increasing the likelihood it will be drawn in future rounds. In particular, in order to maintain good accuracy the weak learner will eventually have to produce a hypothesis that fixes its mistakes in previous rounds. Likewise, when examples are correctly classified, we reduce their weights. So examples that are “easy” to learn are given lower emphasis. And that’s it. That’s the prize-winning idea. It’s elegant, powerful, and easy to understand. The rest is working out the values of all the parameters and proving it does what it’s supposed to.
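To make the update concrete, here is a toy illustration of a single round of this reweighting on four examples, with a made-up update parameter \zeta = 2 (the principled choice of the update is derived in the next section).

# one weight per example, initially all 1
zeta = 2.0
weights = [1.0, 1.0, 1.0, 1.0]
mislabeled = [False, True, False, True]   # which examples h_t got wrong

weights = [w * zeta if wrong else w / zeta
           for (w, wrong) in zip(weights, mislabeled)]   # [0.5, 2.0, 0.5, 2.0]

total = sum(weights)
distribution = [w / total for w in weights]   # [0.1, 0.4, 0.1, 0.4]

The two mislabeled examples now carry 80% of the probability mass, so the weak learner is forced to pay attention to them in the next round.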

The details and a proof

Let’s jump straight into a Python program that performs boosting.

First we pick a data representation. Examples are pairs (x,c(x)) whose type is the tuple (object, int). Our labels will be \pm 1 valued. Since our algorithm is entirely black-box, we don’t need to assume anything about how the examples X are represented. Our dataset is just a list of labeled examples, and the weights are floats. So our boosting function prototype looks like this

# boost: [(object, int)], learner, int -> (object -> int)
# boost the given weak learner into a strong learner
def boost(examples, weakLearner, rounds):
   ...

And a weak learner, as we saw for decision stumps, has the following function prototype.

# weakLearner: (() -> (list, label)) -> (list -> label)
# accept as input a function that draws labeled examples from a distribution,
# and output a hypothesis list -> label
def weakLearner(draw):
   ...
   return hypothesis

Assuming we have a weak learner, we can fill in the rest of the boosting algorithm with some mysterious details. First, a helper function to compute the weighted error of a hypothesis on some examples. It also returns the correctness of the hypothesis on each example, which we'll use later.

# compute the weighted error of a given hypothesis on a distribution
# return all of the hypothesis results and the error
def weightedLabelError(h, examples, weights):
   hypothesisResults = [h(x)*y for (x,y) in examples] # +1 if correct, else -1
   return hypothesisResults, sum(w for (z,w) in zip(hypothesisResults, weights) if z < 0)

Next we have the main boosting algorithm. Here draw is a function that accepts as input a list of floats that sum to 1 and picks an index proportional to the weight of the entry at that index.
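The helper functions normalize, draw, and sign aren't shown in this post (they live in the Github repository), but minimal versions consistent with how they're used might look like the following sketch; the boost function below also needs import math for the logarithm.

import random

def normalize(weights):
   # scale a list of nonnegative weights so they sum to 1
   total = sum(weights)
   return [w / total for w in weights]

def draw(distribution):
   # pick an index with probability equal to the weight at that index
   r = random.random()
   cumulative = 0.0
   for i, p in enumerate(distribution):
      cumulative += p
      if r <= cumulative:
         return i
   return len(distribution) - 1   # guard against floating-point rounding

def sign(value):
   return 1 if value >= 0 else -1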

 
def boost(examples, weakLearner, rounds):
   distr = normalize([1.] * len(examples))
   hypotheses = [None] * rounds
   alpha = [0] * rounds

   for t in range(rounds):
      def drawExample():
         return examples[draw(distr)]

      hypotheses[t] = weakLearner(drawExample)
      hypothesisResults, error = weightedLabelError(hypotheses[t], examples, distr)

      alpha[t] = 0.5 * math.log((1 - error) / (.0001 + error))
      distr = normalize([d * math.exp(-alpha[t] * h)
                         for (d,h) in zip(distr, hypothesisResults)])
      print("Round %d, error %.3f" % (t, error))

   def finalHypothesis(x):
      return sign(sum(a * h(x) for (a, h) in zip(alpha, hypotheses)))

   return finalHypothesis

The code is almost clear. For each round we run the weak learner on our hand-crafted distribution. We compute the error of the resulting hypothesis on that distribution, and then we update the distribution in this mysterious way depending on some alphas and logs and exponentials. In particular, we use the expression c(x) h(x), the product of the true label and predicted label, as computed in weightedLabelError. As the comment says, this will either be +1 or -1 depending on whether the predicted label is correct or incorrect, respectively. The choice of those strange logarithms and exponentials is the result of some optimization: they allow us to minimize training error as quickly as possible (we'll see this in the proof to follow). The rest of this section will prove that this works when the weak learner is correct. One small caveat: in the proof we will assume the error of the hypothesis is not zero (because a weak learner is not supposed to return a perfect hypothesis!), but in practice we want to avoid dividing by zero, so we add the small 0.0001. As a quick self-check: why wouldn't we just stop in the middle and output that "perfect" hypothesis? (What distribution is it "perfect" over? It might not be the original distribution!)

If we wanted to define the algorithm in pseudocode (which helps for the proof) we would write it this way. Given T rounds, start with D_1 being the uniform distribution over labeled input examples X, where x has label c(x). Say there are m input examples.

  1. For each t=1, \dots T:
    1. Let h_t be the weak learning algorithm run on D_t.
    2. Let \varepsilon_t be the error of h_t on D_t.
    3. Let \alpha_t = \frac{1}{2} \log ((1- \varepsilon_t) / \varepsilon_t).
    4. Update each entry of D_{t+1} by the rule D_{t+1}(x) = \frac{D_t(x)}{Z_t} e^{- h_t(x) c(x) \alpha_t}, where Z_t is chosen to normalize D_{t+1} to a distribution.
  2. Output as the final hypothesis the sign of h(x) = \sum_{t=1}^T \alpha_t h_t(x), i.e. h'(x) = \textup{sign}(h(x)).

Now let's prove this works. That is, we'll prove the error on the input dataset (the training set) decreases exponentially quickly in the number of rounds. Then we'll run it on an example and save generalization error for the next post. Over the years this algorithm has been tweaked so that the proof is very straightforward.

Theorem: Suppose AdaBoost is run for T rounds with a weak learner, and in round t the weak learner's hypothesis has error \varepsilon_t = 1/2 - \eta_t (i.e., an edge of \eta_t over random guessing). Then the training error of the final AdaBoost hypothesis is at most e^{-2 \sum_t \eta_t^2}.

Proof. Let m be the number of examples given to the boosting algorithm. First, we derive a closed-form expression for the final distribution in terms of the normalization constants Z_t. Expanding the recurrence relation gives

\displaystyle D_{t+1}(x) = D_1(x)\frac{e^{-\alpha_1 c(x) h_1(x)}}{Z_1} \dots \frac{e^{- \alpha_t c(x) h_t(x)}}{Z_t}

Because the starting distribution is uniform, and combining the product into a single sum in the exponent, taking t = T and recalling that h(x) = \sum_{t=1}^T \alpha_t h_t(x), this simplifies to

\displaystyle D_{T+1}(x) = \frac{1}{m} \frac{e^{-c(x) \sum_{t=1}^T \alpha_t h_t(x)}}{\prod_{t=1}^T Z_t} = \frac{1}{m}\frac{e^{-c(x) h(x)}}{\prod_t Z_t}

Next, we show that the training error is bounded by the product of the normalization terms \prod_{t=1}^T Z_t. This part has always seemed strange to me, that the training error of boosting depends on the factors you need to normalize a distribution. But it's just a different perspective on the multiplicative weights scheme. If we didn't explicitly normalize the distribution at each step, we'd get nonnegative weights (which we could convert to a distribution just for the sampling step), and the training error would depend on the product of the weight updates in each step. Anyway, let's prove it.

The training error is defined to be \frac{1}{m} (\textup{\# incorrect predictions by } h). This can be written with an indicator function as follows:

\displaystyle \frac{1}{m} \sum_{x \in X} 1_{c(x) h(x) \leq 0}

Because the sign of h(x) determines its prediction, the product is negative when h is incorrect. Now we can do a strange thing, we’re going to upper bound the indicator function (which is either zero or one) by e^{-c(x)h(x)}. This works because if h predicts correctly then the indicator function is zero while the exponential is greater than zero. On the other hand if h is incorrect the exponential is greater than one because e^z \geq 1 when z \geq 0. So we get

\displaystyle \leq \frac{1}{m} \sum_{x \in X} e^{-c(x)h(x)}

and substituting the closed form for D_{T+1} from the first part, namely \frac{1}{m} e^{-c(x) h(x)} = D_{T+1}(x) \prod_{t=1}^T Z_t, this becomes

\displaystyle \sum_{x \in X} D_{T+1}(x) \prod_{t=1}^T Z_t

Since D_{T+1} is a distribution it sums to 1, so we can factor the \prod_{t=1}^T Z_t out of the sum. The training error is therefore bounded by \prod_{t=1}^T Z_t.

The last step is to bound the product of the normalization factors. It's enough to show that Z_t \leq e^{-2 \eta_t^2}. The normalization constant is just the sum, over all examples, of the numerators in the update rule of step 4, i.e.

\displaystyle Z_t = \sum_{x \in X} D_t(x) e^{-\alpha_t c(x) h_t(x)}

We can split this up into the correct and incorrect terms (that contribute to +1 or -1 in the exponent) to get

\displaystyle Z_t = e^{-\alpha_t} \sum_{\textup{correct } x} D_t(x) + e^{\alpha_t} \sum_{\textup{incorrect } x} D_t(x)

But by definition the sum of the incorrect part of D is \varepsilon_t and 1-\varepsilon_t for the correct part. So we get

\displaystyle e^{-\alpha_t}(1-\varepsilon_t) + e^{\alpha_t} \varepsilon_t

Finally, since this is an upper bound we want to pick \alpha_t so as to minimize this expression. With a little calculus you can see the \alpha_t we chose in the algorithm pseudocode achieves the minimum, and this simplifies to 2 \sqrt{\varepsilon_t (1-\varepsilon_t)}. Plug in \varepsilon_t = 1/2 - \eta_t to get \sqrt{1 - 4 \eta_t^2} and use the calculus fact that 1 - z \leq e^{-z} to get e^{-2\eta_t^2} as desired.
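For completeness, here is the small calculus step. Setting the derivative with respect to \alpha_t to zero,

\displaystyle \frac{d}{d \alpha_t} \left( e^{-\alpha_t}(1-\varepsilon_t) + e^{\alpha_t} \varepsilon_t \right) = -e^{-\alpha_t}(1-\varepsilon_t) + e^{\alpha_t} \varepsilon_t = 0

gives e^{2 \alpha_t} = (1-\varepsilon_t)/\varepsilon_t, i.e. \alpha_t = \frac{1}{2} \log((1-\varepsilon_t)/\varepsilon_t). Substituting this back in, each of the two terms becomes \sqrt{\varepsilon_t(1-\varepsilon_t)}, for a total of 2 \sqrt{\varepsilon_t(1-\varepsilon_t)}.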

\square

This is fine and dandy, it says that if you have a true weak learner then the training error of AdaBoost vanishes exponentially fast in the number of boosting rounds. But what about generalization error? What we really care about is whether the hypothesis produced by boosting has low error on the original distribution D as a whole, not just the training sample we started with.

One might expect that if you run boosting for more and more rounds, then it will eventually overfit the training data and its generalization accuracy will degrade. However, in practice this is not the case! The longer you boost, even if you get down to zero training error, the better generalization tends to be. For a long time this was sort of a mystery, and we’ll resolve the mystery in the sequel to this post. For now, we’ll close by showing a run of AdaBoost on some real world data.

The “adult” census dataset

The “adult” dataset is a standard dataset taken from the 1994 US census. It tracks a number of demographic and employment features (including gender, age, employment sector, etc.) and the goal is to predict whether an individual makes over $50k per year. Here are the first few lines from the training set.

39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K
50, Self-emp-not-inc, 83311, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 13, United-States, <=50K
38, Private, 215646, HS-grad, 9, Divorced, Handlers-cleaners, Not-in-family, White, Male, 0, 0, 40, United-States, <=50K
53, Private, 234721, 11th, 7, Married-civ-spouse, Handlers-cleaners, Husband, Black, Male, 0, 0, 40, United-States, <=50K
28, Private, 338409, Bachelors, 13, Married-civ-spouse, Prof-specialty, Wife, Black, Female, 0, 0, 40, Cuba, <=50K
37, Private, 284582, Masters, 14, Married-civ-spouse, Exec-managerial, Wife, White, Female, 0, 0, 40, United-States, <=50K

We perform some preprocessing of the data, so that the categorical examples turn into binary features. You can see the full details in the github repository for this post; here are the first few post-processed lines (my newlines added).

>>> from data import adult
>>> train, test = adult.load()
>>> train[:3]
[((39, 1, 0, 0, 0, 0, 0, 1, 0, 0, 13, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 2174, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1), 

((50, 1, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1), 

((38, 1, 1, 0, 0, 0, 0, 0, 0, 0, 9, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1)]
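The preprocessing essentially one-hot encodes each categorical column (each possible category value becomes its own 0/1 feature) and keeps the numeric columns as they are. A hypothetical sketch of the idea (the repository's actual preprocessing differs in its details):

def oneHot(value, categories):
   # e.g. oneHot("State-gov", ["Private", "State-gov", "Self-emp-not-inc"])
   # returns [0, 1, 0]
   return [1 if value == category else 0 for category in categories]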

Now we can run boosting on the training data, and compute its error on the test data.

>>> from boosting import boost
>>> from data import adult
>>> from decisionstump import buildDecisionStump
>>> train, test = adult.load()
>>> weakLearner = buildDecisionStump
>>> rounds = 20
>>> h = boost(train, weakLearner, rounds)
Round 0, error 0.199
Round 1, error 0.231
Round 2, error 0.308
Round 3, error 0.380
Round 4, error 0.392
Round 5, error 0.451
Round 6, error 0.436
Round 7, error 0.459
Round 8, error 0.452
Round 9, error 0.432
Round 10, error 0.444
Round 11, error 0.447
Round 12, error 0.450
Round 13, error 0.454
Round 14, error 0.505
Round 15, error 0.476
Round 16, error 0.484
Round 17, error 0.500
Round 18, error 0.493
Round 19, error 0.473
>>> error(h, train)
0.153343
>>> error(h, test)
0.151711

This isn't too shabby. I've tried running boosting for more rounds (a hundred) and the error doesn't seem to improve by much. This suggests that the best decision stump is not actually a weak learner for this problem (or at least that our implementation of it fails to be one on this dataset): indeed, the training errors across rounds tend toward 1/2.

Though we have not compared our results above to any baseline, AdaBoost seems to work pretty well. This is kind of a meta point about theoretical computer science research. One spends years trying to devise algorithms that work in theory (and finding conditions under which we can get good algorithms in theory), but when it comes to practice we can’t do anything but hope the algorithms will work well. It’s kind of amazing that something like Boosting works in practice. It’s not clear to me that weak learners should exist at all, even for a given real world problem. But the results speak for themselves.

Next time

Next time we’ll get a bit deeper into the theory of boosting. We’ll derive the notion of a “margin” that quantifies the confidence of boosting in its prediction. Then we’ll describe (and maybe prove) a theorem that says if the “minimum margin” of AdaBoost on the training data is large, then the generalization error of AdaBoost on the entire distribution is small. The notion of a margin is actually quite a deep one, and it shows up in another famous machine learning technique called the Support Vector Machine. In fact, it’s part of some recent research I’ve been working on as well. More on that in the future.

If you're dying to learn more about boosting, but don't want to wait for me, check out the book Boosting: Foundations and Algorithms, by Schapire and Freund.

Until next time!