In 2014 the White House commissioned a 90-day study that culminated in a report (pdf) on the state of “big data” and related technologies. The authors give many recommendations, including this central warning.
Warning: algorithms can facilitate illegal discrimination!
Here’s a not-so-imaginary example of the problem. A bank wants people to take loans with high interest rates, and it also serves ads for these loans. A modern idea is to use an algorithm to decide, based on the sliver of known information about a user visiting a website, which advertisement to present that gives the largest chance of the user clicking on it. There’s one problem: these algorithms are trained on historical data, and poor uneducated people (often racial minorities) have a historical trend of being more likely to succumb to predatory loan advertisements than the general population. So an algorithm that is “just” trying to maximize clickthrough may also be targeting black people, de facto denying them opportunities for fair loans. Such behavior is illegal.
On the other hand, even if algorithms are not making illegal decisions, by training algorithms on data produced by humans, we naturally reinforce prejudices of the majority. This can have negative effects, like Google’s autocomplete finishing “Are transgenders” with “going to hell?” Even if this is the most common question being asked on Google, and even if the majority think it’s morally acceptable to display this to users, this shows that algorithms do in fact encode our prejudices. People are slowly coming to realize this, to the point where it was recently covered in the New York Times.
There are many facets to the algorithmic fairness problem, and it has not even been widely acknowledged as a problem, despite the Times article. The message has been echoed by machine learning researchers but mostly ignored by practitioners. In particular, “experts” continually make ignorant claims such as “equations can’t be racist,” as in the following quote from the above-linked article about how the Chicago Police Department has been using algorithms to do predictive policing.
Wernick denies that [the predictive policing] algorithm uses “any racial, neighborhood, or other such information” to assist in compiling the heat list [of potential repeat offenders].
Why is this ignorant? Because of the well-known fact that removing explicit racial features from data does not eliminate an algorithm’s ability to learn race. If racial features disproportionately correlate with crime (as they do in the US), then an algorithm which learns race is actually doing exactly what it is designed to do! One needs to be very thorough to say that an algorithm does not “use race” in its computations. Algorithms are not designed in a vacuum, but rather in conjunction with the designer’s analysis of their data. There are two points of failure here: the designer can unwittingly encode biases into the algorithm based on a biased exploration of the data, and the data itself can encode biases due to human decisions made to create it. Because of this, the burden of proof is (or should be!) on the practitioner to guarantee they are not violating discrimination law. Wernick should instead prove mathematically that the policing algorithm does not discriminate.
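To see why deleting the race column isn’t enough, here is a toy sketch (all of the data and the zip codes are fabricated for illustration) in which a correlated proxy feature lets a trivial rule recover the deleted attribute:

```python
# Toy illustration: even after deleting the "race" column from the training
# data, a correlated proxy feature (here a fabricated "zip_code") lets a
# trivial learned rule recover it most of the time.
people = [
    {"zip_code": 60601, "race": "A"},
    {"zip_code": 60601, "race": "A"},
    {"zip_code": 60605, "race": "B"},
    {"zip_code": 60605, "race": "B"},
    {"zip_code": 60601, "race": "A"},
    {"zip_code": 60605, "race": "A"},  # proxies are imperfect, not useless
]

def predict_race_from_zip(zip_code):
    # "Learned" rule: output the majority race of each zip code
    return "A" if zip_code == 60601 else "B"

correct = sum(predict_race_from_zip(p["zip_code"]) == p["race"] for p in people)
accuracy = correct / len(people)  # 5/6 here: far better than chance
```

Any learning algorithm that finds zip code useful for its actual objective has, in effect, partially learned race, whether or not the race column was ever present.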
While that viewpoint is idealistic, it’s a bit naive because there is no accepted definition of what it means for an algorithm to be fair. In fact, from a precise mathematical standpoint, there isn’t even a precise legal definition of what it means for any practice to be fair. In the US the existing legal theory is called disparate impact, which states that a practice can be considered illegal discrimination if it has a “disproportionately adverse” effect on members of a protected group. Here “disproportionate” is precisely defined by the 80% rule, but this is somehow not enforced as stated. As with many legal issues, laws are broad assertions that are challenged on a case-by-case basis. In the case of fairness, the legal decision usually hinges on whether an individual was treated unfairly, because the individual is the one who files the lawsuit. Our understanding of the law is cobbled together, essentially through anecdotes slanted by political agendas. A mathematician can’t make progress with that. We want the mathematical essence of fairness, not something that can be interpreted depending on the court majority.
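For what it’s worth, the 80% rule itself is easy to compute: compare the selection rate of the lower-rate group to that of the higher-rate group. A minimal sketch (the function name and the equal-sized groups in the example are my own):

```python
def passes_four_fifths_rule(selected_a, total_a, selected_b, total_b):
    """Check the 'four-fifths' guideline: the lower selection rate
    must be at least 80% of the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high >= 0.8

# 50% vs 45% selection rates: ratio 0.9, passes the guideline
# 50% vs 30% selection rates: ratio 0.6, fails it
```

Of course, a three-line check like this is exactly the kind of broad assertion that gets litigated case by case; it is not the mathematical essence of fairness.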
The problem is exacerbated for data mining because the practitioners often demonstrate a poor understanding of statistics, the management doesn’t understand algorithms, and almost everyone is lulled into a false sense of security via abstraction (remember, “equations can’t be racist”). Experts in discrimination law aren’t trained to audit algorithms, and engineers aren’t trained in social science or law. The speed with which research becomes practice far outpaces the speed at which anyone can keep up. This is especially true at places like Google and Facebook, where teams of in-house mathematicians and algorithm designers bypass the delay between academia and industry.
And perhaps the worst part is that even the world’s best mathematicians and computer scientists don’t know how to interpret the output of many popular learning algorithms. This isn’t just a problem of stupid people not listening to smart people; it’s that everyone is “stupid.” A more politically correct way to say it: transparency in machine learning is a wide open problem. Take, for example, deep learning. A far-removed adaptation of neuroscience to data mining, deep learning has become the flagship technique spearheading modern advances in image tagging, speech recognition, and other classification problems.
A typical example of how a deep neural network learns to tag images. Image source: http://engineering.flipboard.com/2015/05/scaling-convnets/
The picture above shows how low level “features” (which essentially boil down to simple numerical combinations of pixel values) are combined in a “neural network” to more complicated image-like structures. The claim that these features represent natural concepts like “cat” and “horse” has fueled the public attention on deep learning for years. But looking at the above, is there any reasonable way to say whether these are encoding “discriminatory information”? Not only is this an open question, but we don’t even know what kinds of problems deep learning can solve! How can we understand to what extent neural networks can encode discrimination if we don’t have a deep understanding of why a neural network is good at what it does?
What makes this worse is that there are only about ten people in the world who understand the practical aspects of deep learning well enough to achieve record results. This means they have spent a ton of time tinkering with the model to make it domain-specific, and nobody really knows whether the subtle differences between the top models correspond to genuine advances, slight overfitting, or luck. Who is to say whether the fiasco with Google tagging images of black people as apes was caused by the data, by the deep learning algorithm, or by some obscure tweak made by the designer? I doubt even the designer could tell you with any certainty.
Opacity and a lack of interpretability are the rule more than the exception in machine learning. Celebrated techniques like Support Vector Machines, Boosting, and recent popular “tensor methods” are all highly opaque. This means that even if we knew what fairness meant, it is still a challenge (though one we’d be suited for) to modify existing algorithms to become fair. But with recent success stories in theoretical computer science connecting security, trust, and privacy, computer scientists have started to take up the call of nailing down what fairness means, and how to measure and enforce fairness in algorithms. There is now a yearly workshop called Fairness, Accountability, and Transparency in Machine Learning (FAT-ML, an awesome acronym), and some famous theory researchers are starting to get involved, as are social scientists and legal experts. Full disclosure, two days ago I gave a talk as part of this workshop on modifications to AdaBoost that seem to make it more fair. More on that in a future post.
From our perspective, we the computer scientists and mathematicians, the central obstacle is still that we don’t have a good definition of fairness.
In the next post I want to get a bit more technical. I’ll describe the parts of the fairness literature I like (which will be biased), I’ll hypothesize about the tension between statistical fairness and individual fairness, and I’ll entertain ideas on how someone designing a controversial algorithm (such as a predictive policing algorithm) could maintain transparency and accountability over its discriminatory impact. In subsequent posts I want to explain in more detail why it seems so difficult to come up with a useful definition of fairness, and to describe some of the ideas I and my coauthors have worked on.
When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are “the same” and which are “different,” along with a precise description of how we’re comparing models. We’ve seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:
An algorithm can “solve” a classification task using labeled examples drawn from some distribution if it can achieve accuracy that is arbitrarily close to perfect on the distribution, and it can meet this goal with arbitrarily high probability, where its runtime and the number of examples needed scales efficiently with all the parameters (accuracy, confidence, size of an example). Moreover, the algorithm needs to succeed no matter what distribution generates the examples.
You can think of this as a game between the algorithm designer and an adversary. First, the learning problem is fixed and everyone involved knows what the task is. Then the algorithm designer picks an algorithm. Then the adversary, knowing the chosen algorithm, chooses a nasty distribution $D$ over examples that are fed to the learning algorithm. The algorithm designer “wins” if the algorithm produces a hypothesis with low error on $D$ when given samples from $D$. And our goal is to prove that the algorithm designer can pick a single algorithm that is extremely likely to win no matter what distribution the adversary picks.
We’ll momentarily restate this with a more precise definition, because in this post we will compare it to a slightly different model, which is called the weak PAC-learning model. It’s essentially the same as PAC, except it only requires the algorithm to have accuracy that is slightly better than random guessing. That is, the algorithm will output a classification function which will correctly classify a randomly drawn example with probability at least $1/2 + \eta$ for some small, but fixed, $\eta > 0$. The quantity $\eta$ (the Greek “eta”) is called the edge, as in “the edge over random guessing.” We call an algorithm that produces such a hypothesis a weak learner, and in contrast we’ll call a successful algorithm in the usual PAC model a strong learner.
The amazing fact is that strong learning and weak learning are equivalent! Of course a weak learner is not the same thing as a strong learner. What we mean by “equivalent” is that:
A problem can be weak-learned if and only if it can be strong-learned.
So they are computationally the same. One direction of this equivalence is trivial: if you have a strong learner for a classification task then it’s automatically a weak learner for the same task. The reverse is much harder, and this is the crux: there is an algorithm for transforming a weak learner into a strong learner! Informally, we “boost” the weak learning algorithm by feeding it examples from carefully constructed distributions, and then take a majority vote. This “reduction” from strong to weak learning is where all the magic happens.
In this post we’ll get into the depths of this boosting technique. We’ll review the model of PAC-learning, define what it means to be a weak learner, “organically” come up with the AdaBoost algorithm from some intuitive principles, prove that AdaBoost reduces error on the training data, and then run it on data. It turns out that despite the origin of boosting being a purely theoretical question, boosting algorithms have had a wide impact on practical machine learning as well.
Before we get into the details, here’s a bit of history and context. PAC learning was introduced by Leslie Valiant in 1984, laying the foundation for a flurry of innovation. In 1988 Michael Kearns posed the question of whether one can “boost” a weak learner to a strong learner. Two years later Rob Schapire published his landmark paper “The Strength of Weak Learnability” closing the theoretical question by providing the first “boosting” algorithm. Schapire and Yoav Freund worked together for the next few years to produce a simpler and more versatile algorithm called AdaBoost, and for this they won the Gödel Prize, one of the highest honors in theoretical computer science. AdaBoost is also the standard boosting algorithm used in practice, though there are enough variants to warrant a book on the subject.
I’m going to define and prove that AdaBoost works in this post, and implement it and test it on some data. But first I want to give some high level discussion of the technique, and afterward the goal is to make that wispy intuition rigorous.
The central technique of AdaBoost has been discovered and rediscovered in computer science, and recently it was recognized abstractly in its own right. It is called the Multiplicative Weights Update Algorithm (MWUA), and it has applications in everything from learning theory to combinatorial optimization and game theory. The idea is to
Maintain a nonnegative weight for the elements of some set,
Draw a random element proportionally to the weights,
Do something with the chosen element, and based on the outcome of the “something…”
Update the weights and repeat.
The “something” is usually a black box algorithm like “solve this simple optimization problem.” The output of the “something” is interpreted as a reward or penalty, and the weights are updated according to the severity of the penalty (the details of how this is done differ depending on the goal). In this light one can interpret MWUA as minimizing regret with respect to the best alternative element one could have chosen in hindsight. In fact, this was precisely the technique we used to attack the adversarial bandit learning problem (the Exp3 algorithm is a multiplicative weight scheme). See this lengthy technical survey of Arora and Kale for a research-level discussion of the algorithm and its applications.
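As an illustration of the template above, here is multiplicative weights for tracking the best of $n$ “experts” (the experts setting, the penalty rule, and the choice $\eta = 1/2$ are my own; this sketch skips the random-draw step and only shows the weight maintenance):

```python
def mwua_rounds(expert_predictions, outcomes, eta=0.5):
    """Multiplicative weights over a set of 'experts'. Each round we observe
    every expert's prediction and the true outcome, then multiplicatively
    penalize the experts that were wrong."""
    n = len(expert_predictions[0])
    weights = [1.0] * n
    for preds, outcome in zip(expert_predictions, outcomes):
        for i, p in enumerate(preds):
            if p != outcome:
                weights[i] *= (1 - eta)  # penalty for a wrong prediction
    return weights

# Expert 0 is always right, expert 1 is always wrong, expert 2 is mixed.
preds = [(1, -1, 1), (1, -1, -1), (-1, 1, -1)]
outcomes = [1, 1, -1]
final = mwua_rounds(preds, outcomes)
# expert 0 keeps weight 1.0; expert 1 decays to 0.5**3 = 0.125
```

Normalizing the final weights to a distribution concentrates probability on the experts with the fewest mistakes, which is the regret-minimization story in miniature.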
Now let’s remind ourselves of the formal definition of PAC. If you’ve read the previous post on the PAC model, this next section will be redundant.
Distributions, hypotheses, and targets
In PAC-learning you are trying to give labels to data from some set $X$. There is a distribution $D$ producing data from $X$, and it’s used for everything: to provide the data the algorithm uses to learn, to measure your accuracy, and every other time you might get samples from $X$. You as the algorithm designer don’t know what $D$ is, and a successful learning algorithm has to work no matter what $D$ is. There’s some unknown function $c$ called the target concept, which assigns a $\pm 1$ label to each data point in $X$. The target is the function we’re trying to “learn.” When the algorithm draws an example from $D$, it’s allowed to query the label $c(x)$, and use all of the labels it’s seen to come up with some hypothesis $h$ that is used for new examples that the algorithm may not have seen before. The problem is “solved” if $h$ has low error on all of $D$.
To give a concrete example let’s do spam emails. Say that $X$ is the set of all emails, and $D$ is the distribution over emails that get sent to my personal inbox. A PAC-learning algorithm would take all my emails, along with my classification of which are spam and which are not spam (plus and minus 1). The algorithm would produce a hypothesis $h$ that can be used to label new emails, and if the algorithm is truly a PAC-learner, then our guarantee is that with high probability (over the randomness in which emails I receive) the algorithm will produce an $h$ that has low error on the entire distribution of emails that get sent to me (relative to my personal spam labeling function).
Of course there are practical issues with this model. I don’t have a consistent function for calling things spam, the distribution of emails I get and my labeling function can change over time, and emails don’t come according to a distribution with independent random draws. But that’s the theoretical model, and we can hope that algorithms we devise for this model happen to work well in practice.
Here’s the formal definition of the error of a hypothesis $h$ produced by the learning algorithm:

$\displaystyle \text{err}_{c,D}(h) = \Pr_{x \sim D}[h(x) \neq c(x)]$

It’s read “the error of $h$ with respect to the concept $c$ we’re trying to learn and the distribution $D$ is the probability over $x$ drawn from $D$ that the hypothesis produces the wrong label.” We can now define PAC-learning formally, introducing the parameters $\delta$ for “probably” and $\varepsilon$ for “approximately.” Let me say it informally first:
An algorithm PAC-learns if, for any $\varepsilon, \delta > 0$ and any distribution $D$, with probability at least $1 - \delta$ the hypothesis $h$ produced by the algorithm has error at most $\varepsilon$.
To flesh out the other parameters hiding in that statement, here’s the full definition.
Definition (PAC): An algorithm $A$ is said to PAC-learn the concept class $\mathsf{C}$ over the set $X$ if, for any distribution $D$ over $X$, for any $0 < \varepsilon, \delta < 1/2$, and for any target concept $c \in \mathsf{C}$, the probability that $A$ produces a hypothesis $h$ of error at most $\varepsilon$ is at least $1 - \delta$. In symbols, $\Pr[\text{err}_{c,D}(h) \leq \varepsilon] \geq 1 - \delta$. Moreover, $A$ must run in time polynomial in $1/\varepsilon$, $1/\delta$, and $n$, where $n$ is the size of an element $x \in X$.
The reason we need a class of concepts (instead of just one target concept) is that otherwise we could just have a constant algorithm that outputs the correct labeling function. Indeed, when we get a problem we ask whether there exists an algorithm that can solve it. I.e., a problem is “PAC-learnable” if there is some algorithm that learns it as described above. With just one target concept there can exist an algorithm to solve the problem by hard-coding a description of the concept in the source code. So we need to have some “class of possible answers” that the algorithm is searching through so that the algorithm actually has a job to do.
We call an algorithm that gets this guarantee a strong learner. A weak learner has the same definition, except that we replace $\varepsilon$ by a fixed “weak” error bound: for some fixed $\eta > 0$, the error is at most $1/2 - \eta$. So we don’t require the algorithm to achieve any desired accuracy; it just has to get some accuracy slightly better than random guessing, and we don’t get to choose how much better. As we will see, the value of $\eta$ influences the convergence of the boosting algorithm. One important thing to note is that $\eta$ is a constant independent of $n$, the size of an example, and $m$, the number of examples. In particular, we need to avoid the “degenerate” possibility that $\eta \to 0$ as our learning problem scales, so that the error of the weak learner degrades toward 1/2. We want the error to be bounded away from 1/2.
So just to clarify all the parameters floating around: $\delta$ will always be the “probably” part of PAC, $\varepsilon$ is the error bound (the “approximately” part) for strong learners, and $1/2 - \eta$ is the error bound for weak learners.
What could a weak learner be?
Now before we prove that you can “boost” a weak learner to a strong learner, we should have some idea of what a weak learner is. Informally, it’s just a ‘rule of thumb’ that you can somehow guarantee does a little bit better than random guessing.
In practice, however, people sort of just make things up and they work. It’s kind of funny, but until recently nobody has really studied what makes a “good weak learner.” They just use an example like the one we’re about to show, and as long as they get a good error rate they don’t care if it has any mathematical guarantees. Likewise, they don’t expect the final “boosted” algorithm to do arbitrarily well, they just want low error rates.
The weak learner we’ll use in this post produces “decision stumps.” If you know what a decision tree is, then a decision stump is trivial: it’s a decision tree where the whole tree is just one node. If you don’t know what a decision tree is, a decision stump is a classification rule of the form:
Pick some feature $i$ and some value $b$ of that feature, and output label $+1$ if the input example has value $b$ for feature $i$, and output label $-1$ otherwise.
Concretely, a decision stump might mark an email spam if it contains the word “viagra.” Or it might deny a loan applicant a loan if their credit score is less than some number.
Our weak learner produces a decision stump by simply looking through all the features and all the values of the features until it finds a decision stump that has the best error rate. It’s brute force, baby! Actually we’ll do something a little bit different. We’ll make our data numeric and look for a threshold of the feature value to split positive labels from negative labels. Here’s the Python code we’ll use in this post for boosting. This code was part of a collaboration with my two colleagues Adam Lelkes and Ben Fish. As usual, all of the code used in this post is available on Github.
First we make a class for a decision stump. The attributes represent a feature, a threshold value for that feature, and a choice of labels for the two cases. The classify function shows how simple the hypothesis is.
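The class itself is elided in this excerpt; a minimal version consistent with the attributes and the classify function used below would be:

```python
class Stump:
    """A decision stump: threshold a single feature and output a label."""

    def __init__(self):
        self.gtLabel = None        # label when feature value >= threshold
        self.ltLabel = None        # label when feature value < threshold
        self.splitThreshold = None # the threshold value
        self.splitFeature = None   # index of the feature being thresholded

    def classify(self, point):
        if point[self.splitFeature] >= self.splitThreshold:
            return self.gtLabel
        else:
            return self.ltLabel
```

The hypothesis really is this simple: one feature, one threshold, two labels.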
Then for a fixed feature index we’ll define a function that computes the best threshold value for that index.
def minLabelErrorOfHypothesisAndNegation(data, h):
    posData, negData = ([(x, y) for (x, y) in data if h(x) == 1],
                        [(x, y) for (x, y) in data if h(x) == -1])

    # error of h, and the error of the negation of h, on the labeled data
    posError = sum(y == -1 for (x, y) in posData) + sum(y == 1 for (x, y) in negData)
    negError = sum(y == 1 for (x, y) in posData) + sum(y == -1 for (x, y) in negData)
    return min(posError, negError) / len(data)

def bestThreshold(data, index, errorFunction):
    '''Compute best threshold for a given feature. Returns (threshold, error)'''
    thresholds = [point[index] for (point, label) in data]

    def makeThreshold(t):
        return lambda x: 1 if x[index] >= t else -1

    errors = [(t, errorFunction(data, makeThreshold(t))) for t in thresholds]
    return min(errors, key=lambda p: p[1])
Here we allow the user to provide a generic error function that the weak learner tries to minimize, but in our case it will just be minLabelErrorOfHypothesisAndNegation. In words, our threshold function will label an example as $+1$ if the feature at the given index has value greater than the threshold, and $-1$ otherwise. But we might want to do the opposite, labeling $-1$ above the threshold and $+1$ below. The bestThreshold function doesn’t care; it just wants to know which threshold value is the best. Then we compute what the right hypothesis is in the next function.
def buildDecisionStump(drawExample, errorFunction=defaultError):
    # find the index of the best feature to split on, and the best threshold for
    # that index. A labeled example is a pair (example, label) and drawExample()
    # accepts no arguments and returns a labeled example.
    data = [drawExample() for _ in range(500)]

    bestThresholds = [(i,) + bestThreshold(data, i, errorFunction)
                      for i in range(len(data[0][0]))]
    feature, thresh, _ = min(bestThresholds, key=lambda p: p[2])

    stump = Stump()
    stump.splitFeature = feature
    stump.splitThreshold = thresh
    stump.gtLabel = majorityVote([x for x in data if x[0][feature] >= thresh])
    stump.ltLabel = majorityVote([x for x in data if x[0][feature] < thresh])
    return stump
It’s a little bit inefficient, but no matter. To illustrate the PAC framework we emphasize that the weak learner needs nothing except the ability to draw from a distribution. It does so, and then it computes the best threshold and creates a new stump reflecting that. The majorityVote function just picks the most common label of examples in the list. Note that drawing 500 samples is arbitrary, and in general we might increase it to increase the success probability of finding a good hypothesis. In fact, when proving PAC-learning theorems the number of samples drawn often depends on the accuracy and confidence parameters $\varepsilon, \delta$. We omit them here for simplicity.
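The majorityVote helper is referenced but not shown in this excerpt; a minimal version consistent with its use on lists of (example, label) pairs might be:

```python
def majorityVote(labeledExamples):
    """Return the most common label among (example, label) pairs,
    defaulting to 1 for an empty list or a tie."""
    count = sum(label for (point, label) in labeledExamples)  # labels are +/-1
    return 1 if count >= 0 else -1
```

Since labels are $\pm 1$, summing them and taking the sign is exactly a majority vote.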
Strong learners from weak learners
So suppose we have a weak learner $L$ for a concept class $\mathsf{C}$: for any concept $c$ from $\mathsf{C}$, it can produce with probability at least $1 - \delta$ a hypothesis with error at most $1/2 - \eta$. How can we modify this algorithm to get a strong learner? Here is an idea: we can maintain a large number of separate instances of the weak learner $L$, run them on our dataset, and then combine their hypotheses with a majority vote. In code this might look like the following Python snippet. For now examples are binary vectors and the labels are $\pm 1$, so the sign of a real number will be its label.
def boost(learner, data, rounds=100):
    m = len(data)
    learners = [learner(random.sample(data, m // rounds)) for _ in range(rounds)]

    def hypothesis(example):
        return sign(sum(1 / rounds * h(example) for h in learners))

    return hypothesis
This is a bit too simplistic: what if the majority of the weak learners are wrong? In fact, with an overly naive mindset one might imagine a scenario in which the different instances of $L$ have high disagreement, so that the prediction depends on which random subset each learner happens to get. We can do better: instead of taking a majority vote we can take a weighted majority vote. That is, give the weak learner a random subset of your data, and then test its hypothesis on the data to get a good estimate of its error. Then you can use this error to say whether the hypothesis is any good, and give good hypotheses high weight and bad hypotheses low weight (proportionally to the error). Then the “boosted” hypothesis would take a weighted majority vote of all your hypotheses on an example. This might look like the following.
# data is a list of (example, label) pairs
def error(hypothesis, data):
    return sum(1 for (x, y) in data if hypothesis(x) != y) / len(data)

def boost(learner, data, rounds=100):
    m, learners, weights = len(data), [None] * rounds, [0] * rounds
    for t in range(rounds):
        learners[t] = learner(random.sample(data, m // rounds))
        weights[t] = 1 - error(learners[t], data)
    return lambda example: sign(sum(w * h(example) for (h, w) in zip(learners, weights)))
This might be better, but we can do something even cleverer. Rather than use the estimated error just to say something about the hypothesis, we can identify the mislabeled examples in a round and somehow encourage $L$ to do better at classifying those examples in later rounds. This turns out to be the key insight, and it’s why the algorithm is called AdaBoost (Ada stands for “adaptive”). We’re adaptively modifying the distribution over the training data we feed to $L$ based on which data $L$ learns “easily” and which it does not. So as the boosting algorithm runs, the distribution given to $L$ has more and more probability weight on the examples that $L$ misclassified. And, this is the key, $L$ has the guarantee that it will weak learn no matter what the distribution over the data is. Of course, its error is also measured relative to the adaptively chosen distribution, and the crux of the argument will be relating this error to the error on the original distribution we’re trying to strong learn.
To implement this idea in mathematics, we will start with a fixed sample $X = \{x_1, \dots, x_m\}$ drawn from $D$ and assign a weight $\mu_i$ to each $x_i$. Call $c(x)$ the true label of an example. Initially, set each $\mu_i$ to be 1. Since our dataset can have repetitions, normalizing the $\mu_i$ to a probability distribution gives an estimate of $D$. Now we’ll pick some “update” parameter $\zeta > 1$ (this is intentionally vague). Then we’ll repeat the following procedure for some number of rounds $t = 1, \dots, T$.
Renormalize the $\mu_i$ to a probability distribution.
Train the weak learner $L$, and provide it with a simulated distribution $D'$ that draws examples $x_i$ according to their weights $\mu_i$. The weak learner outputs a hypothesis $h_t$.
For every example $x_i$ mislabeled by $h_t$, update $\mu_i$ by replacing it with $\zeta \mu_i$.
For every correctly labeled example $x_i$, replace $\mu_i$ with $\mu_i / \zeta$.
At the end our final hypothesis will be a weighted majority vote of all the $h_t$, where the weights depend on the amount of error in each round. Note that when the weak learner misclassifies an example we increase the weight of that example, which means we’re increasing the likelihood it will be drawn in future rounds. In particular, in order to maintain good accuracy the weak learner will eventually have to produce a hypothesis that fixes its mistakes in previous rounds. Likewise, when examples are correctly classified, we reduce their weights. So examples that are “easy” to learn are given lower emphasis. And that’s it. That’s the prize-winning idea. It’s elegant, powerful, and easy to understand. The rest is working out the values of all the parameters and proving it does what it’s supposed to.
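To make the reweighting concrete, here is a tiny numeric sketch of one round of the update; the choice $\zeta = 2$ and the pattern of mistakes are made up:

```python
def updateWeights(weights, mistakes, zeta=2.0):
    """One round of the reweighting scheme: multiply the weights of
    misclassified examples by zeta, divide the rest by zeta, then
    renormalize back to a probability distribution."""
    newWeights = [w * zeta if m else w / zeta for (w, m) in zip(weights, mistakes)]
    total = sum(newWeights)
    return [w / total for w in newWeights]

# Four examples with uniform weights; the weak learner misses example 2.
weights = [0.25, 0.25, 0.25, 0.25]
weights = updateWeights(weights, [False, False, True, False])
# example 2 now carries 4x the weight of each other example
```

After one round the misclassified example is four times as likely to be drawn as any correctly classified one, which is exactly the pressure that forces the weak learner to fix its mistakes.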
The details and a proof
Let’s jump straight into a Python program that performs boosting.
First we pick a data representation. Examples are pairs whose type is the tuple (object, int). Our labels will be $\pm 1$ valued. Since our algorithm is entirely black-box, we don’t need to assume anything about how the examples are represented. Our dataset is just a list of labeled examples, and the weights are floats. So our boosting function prototype looks like this:
# boost: [(object, int)], learner, int -> (object -> int)
# boost the given weak learner into a strong learner
def boost(examples, weakLearner, rounds):
And a weak learner, as we saw for decision stumps, has the following function prototype.
# weakLearner: (() -> (list, label)) -> (list -> label)
# accept as input a function that draws labeled examples from a distribution,
# and output a hypothesis list -> label
Assuming we have a weak learner, we can fill in the rest of the boosting algorithm with some mysterious details. First, a helper function to compute the weighted error of a hypothesis on some examples. It also returns the correctness of the hypothesis on each example, which we’ll use later.
# compute the weighted error of a given hypothesis on a distribution
# return all of the hypothesis results and the error
def weightedLabelError(h, examples, weights):
    hypothesisResults = [h(x) * y for (x, y) in examples]  # +1 if correct, else -1
    return hypothesisResults, sum(w for (z, w) in zip(hypothesisResults, weights) if z < 0)
Next we have the main boosting algorithm. Here draw is a function that accepts as input a list of floats that sum to 1 and picks an index proportional to the weight of the entry at that index.
return sign(sum(a * h(x) for (a, h) in zip(alpha, hypotheses)))
The code is almost clear. For each round we run the weak learner on our hand-crafted distribution. We compute the error of the resulting hypothesis on that distribution, and then we update the distribution in this mysterious way depending on some alphas and logs and exponentials. In particular, we use the expression $y h(x)$, the product of the true label and predicted label, as computed in weightedLabelError. As the comment says, this will be either $+1$ or $-1$ depending on whether the predicted label is correct or incorrect, respectively. The choice of those strange logarithms and exponentials is the result of some optimization: they allow us to minimize training error as quickly as possible (we’ll see this in the proof to follow). The rest of this section will prove that this works when the weak learner is correct. One small caveat: in the proof we will assume the error of the hypothesis is not zero (because a weak learner is not supposed to return a perfect hypothesis!), but in practice we want to avoid dividing by zero, so we add the small 0.0001 to the denominator. As a quick self-check: why wouldn’t we just stop in the middle and output that “perfect” hypothesis? (What distribution is it “perfect” over? It might not be the original distribution!)
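Since the body of the boost function is elided in this excerpt, here is a self-contained sketch consistent with the description above; the helper implementations (sign, normalize, draw) are my own assumptions, not necessarily the originals:

```python
import math
import random

def sign(x):
    return 1 if x >= 0 else -1

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def draw(distr):
    # pick an index proportionally to the weight of the entry at that index
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(distr):
        cumulative += p
        if r <= cumulative:
            return i
    return len(distr) - 1

def weightedLabelError(h, examples, weights):
    hypothesisResults = [h(x) * y for (x, y) in examples]  # +1 if correct, else -1
    return hypothesisResults, sum(w for (z, w) in zip(hypothesisResults, weights) if z < 0)

def boost(examples, weakLearner, rounds):
    distr = normalize([1.0] * len(examples))
    hypotheses = [None] * rounds
    alpha = [0.0] * rounds

    for t in range(rounds):
        def drawExample():
            return examples[draw(distr)]

        hypotheses[t] = weakLearner(drawExample)
        results, error = weightedLabelError(hypotheses[t], examples, distr)

        # the "mysterious" choice of alpha; the 0.0001 avoids dividing by zero
        alpha[t] = 0.5 * math.log((1 - error) / (0.0001 + error))

        # multiplicative weight update, then renormalize to a distribution
        distr = normalize([d * math.exp(-alpha[t] * z)
                           for (d, z) in zip(distr, results)])

    return lambda x: sign(sum(a * h(x) for (a, h) in zip(alpha, hypotheses)))
```

Note how the final hypothesis is exactly the weighted majority vote from the last line of the snippet in the text: the sign of the alpha-weighted sum of the round hypotheses.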
If we wanted to define the algorithm in pseudocode (which helps for the proof) we would write it this way. Given $T$ rounds, start with $D_1$ being the uniform distribution over the labeled input examples $(x_1, y_1), \dots, (x_m, y_m)$, where $x_i$ has label $y_i \in \{-1, 1\}$. Say there are $m$ input examples.
For each :
Let be the weak learning algorithm run on .
Let be the error of on .
Update each entry of by the rule , where is chosen to normalize to a distribution.
Output as the final hypothesis the sign of , i.e. .
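To make the update rule concrete, here is one round of the reweighting on four toy examples, where the weak hypothesis errs only on the last one. The numbers are made up for illustration, not taken from the post:

```python
import math

labels = [1, 1, -1, -1]        # true labels y_i
predictions = [1, 1, -1, 1]    # the weak hypothesis errs only on the last example
distr = [0.25, 0.25, 0.25, 0.25]

# weighted error and the alpha from the pseudocode
error = sum(d for (d, y, p) in zip(distr, labels, predictions) if y != p)
alpha = 0.5 * math.log((1 - error) / error)

# unnormalized update D_t(i) * exp(-alpha * y_i * h_t(x_i))
newWeights = [d * math.exp(-alpha * y * p)
              for (d, y, p) in zip(distr, labels, predictions)]
Z = sum(newWeights)
newDistr = [w / Z for w in newWeights]
```

The misclassified example ends up carrying half of the new distribution's weight, so the next weak learner is forced to pay attention to it.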
Now let's prove this works. That is, we'll prove the error on the input dataset (the training set) decreases exponentially quickly in the number of rounds. Then we'll run it on an example and save generalization error for the next post. Over many years this algorithm has been tweaked so that the proof is very straightforward.
Theorem: If AdaBoost is given a weak learner and stopped on round $T$, and the edge $\eta_t$ over random choice satisfies $\varepsilon_t = 1/2 - \eta_t$, then the training error of AdaBoost is at most $e^{-2 \sum_{t=1}^T \eta_t^2}$.
Proof. Let $m$ be the number of examples given to the boosting algorithm. First, we derive a closed-form expression for $D_{T+1}$ in terms of the normalization constants $Z_t$. Expanding the recurrence relation gives

$$D_{T+1}(i) = D_1(i) \prod_{t=1}^T \frac{e^{-\alpha_t y_i h_t(x_i)}}{Z_t}$$

Because the starting distribution is uniform, and combining the products into a sum of the exponents, this simplifies to

$$D_{T+1}(i) = \frac{1}{m} \cdot \frac{e^{-y_i \sum_{t=1}^T \alpha_t h_t(x_i)}}{\prod_{t=1}^T Z_t} = \frac{e^{-y_i f(x_i)}}{m \prod_{t=1}^T Z_t}$$

where $f(x) = \sum_{t=1}^T \alpha_t h_t(x)$.
Next, we show that the training error is bounded by the product of the normalization terms $\prod_{t=1}^T Z_t$. This part has always seemed strange to me, that the training error of boosting depends on the factors you need to normalize a distribution. But it's just a different perspective on the multiplicative weights scheme. If we didn't explicitly normalize the distribution at each step, we'd get nonnegative weights (which we could convert to a distribution just for the sampling step) and the training error would depend on the product of the weight updates in each step. Anyway let's prove it.
The training error is defined to be $\textup{err}(H) = \frac{1}{m} |\{ i : H(x_i) \neq y_i \}|$. This can be written with an indicator function as follows:

$$\textup{err}(H) = \frac{1}{m} \sum_{i=1}^m \mathbf{1}[y_i f(x_i) \leq 0]$$

Because the sign of $f(x_i)$ determines the prediction of $H$, the product $y_i f(x_i)$ is negative when $H$ is incorrect. Now we can do a strange thing: we're going to upper bound the indicator function (which is either zero or one) by $e^{-y_i f(x_i)}$. This works because if $H$ predicts correctly then the indicator function is zero while the exponential is greater than zero. On the other hand, if $H$ is incorrect the exponential is at least one, because $e^{-y_i f(x_i)} \geq e^0 = 1$ when $y_i f(x_i) \leq 0$. So we get

$$\textup{err}(H) \leq \frac{1}{m} \sum_{i=1}^m e^{-y_i f(x_i)}$$

and rearranging the formula for $D_{T+1}$ from the first part gives

$$\textup{err}(H) \leq \sum_{i=1}^m D_{T+1}(i) \prod_{t=1}^T Z_t$$

Since the $D_{T+1}(i)$ form a distribution, they sum to 1 and we can factor the $\prod_{t=1}^T Z_t$ out. So the training error is just bounded by $\prod_{t=1}^T Z_t$.
The last step is to bound the product of the normalization factors. It's enough to show that $Z_t \leq e^{-2 \eta_t^2}$. The normalization constant $Z_t$ is just defined as the sum of the numerators of the terms in the update rule of the pseudocode, i.e.

$$Z_t = \sum_{i=1}^m D_t(i) e^{-\alpha_t y_i h_t(x_i)}$$

We can split this up into the correct and incorrect terms (that contribute $-\alpha_t$ or $+\alpha_t$ in the exponent) to get

$$Z_t = e^{-\alpha_t} \sum_{\text{correct } i} D_t(i) + e^{\alpha_t} \sum_{\text{incorrect } i} D_t(i)$$

But by definition the sum of the incorrect part of $D_t$ is $\varepsilon_t$, and the correct part is $1 - \varepsilon_t$. So we get

$$Z_t = (1 - \varepsilon_t) e^{-\alpha_t} + \varepsilon_t e^{\alpha_t}$$

Finally, since this is an upper bound we want to pick $\alpha_t$ so as to minimize this expression. With a little calculus you can see the $\alpha_t = \frac{1}{2} \log \left( \frac{1 - \varepsilon_t}{\varepsilon_t} \right)$ we chose in the algorithm pseudocode achieves the minimum, and this simplifies to $Z_t = 2 \sqrt{\varepsilon_t (1 - \varepsilon_t)}$. Plug in $\varepsilon_t = 1/2 - \eta_t$ to get $Z_t = \sqrt{1 - 4 \eta_t^2}$, and use the calculus fact that $1 - x \leq e^{-x}$ to get $Z_t \leq e^{-2 \eta_t^2}$, as desired.
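A quick numeric check of this minimization step. The function name `Z` is mine, and the value of the error is arbitrary:

```python
import math

def Z(a, eps):
    # the normalization constant (1 - eps) e^{-a} + eps e^{a} as a function of a
    return (1 - eps) * math.exp(-a) + eps * math.exp(a)

eps = 0.3
alphaStar = 0.5 * math.log((1 - eps) / eps)   # the alpha from the pseudocode

# matches the closed form 2 sqrt(eps (1 - eps)) at the minimizer...
closedForm = 2 * math.sqrt(eps * (1 - eps))

# ...and no nearby choice of alpha does better
nearby = [Z(alphaStar + d, eps) for d in (-0.5, -0.1, 0.1, 0.5)]
```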
This is fine and dandy, it says that if you have a true weak learner then the training error of AdaBoost vanishes exponentially fast in the number of boosting rounds. But what about generalization error? What we really care about is whether the hypothesis produced by boosting has low error on the original distribution as a whole, not just the training sample we started with.
One might expect that if you run boosting for more and more rounds, then it will eventually overfit the training data and its generalization accuracy will degrade. However, in practice this is not the case! The longer you boost, even if you get down to zero training error, the better generalization tends to be. For a long time this was sort of a mystery, and we’ll resolve the mystery in the sequel to this post. For now, we’ll close by showing a run of AdaBoost on some real world data.
The “adult” census dataset
The “adult” dataset is a standard dataset taken from the 1994 US census. It tracks a number of demographic and employment features (including gender, age, employment sector, etc.) and the goal is to predict whether an individual makes over $50k per year. Here are the first few lines from the training set.
We perform some preprocessing of the data, so that the categorical features turn into binary features. You can see the full details in the github repository for this post; here are the first few post-processed lines (my newlines added).
This isn’t too shabby. I’ve tried running boosting for more rounds (a hundred) and the error doesn’t seem to improve by much. This implies that finding the best decision stump is not a weak learner (or at least it fails for this dataset), and we can see that indeed the training errors across rounds roughly tend to 1/2.
Though we have not compared our results above to any baseline, AdaBoost seems to work pretty well. This is kind of a meta point about theoretical computer science research. One spends years trying to devise algorithms that work in theory (and finding conditions under which we can get good algorithms in theory), but when it comes to practice we can’t do anything but hope the algorithms will work well. It’s kind of amazing that something like Boosting works in practice. It’s not clear to me that weak learners should exist at all, even for a given real world problem. But the results speak for themselves.
Next time we’ll get a bit deeper into the theory of boosting. We’ll derive the notion of a “margin” that quantifies the confidence of boosting in its prediction. Then we’ll describe (and maybe prove) a theorem that says if the “minimum margin” of AdaBoost on the training data is large, then the generalization error of AdaBoost on the entire distribution is small. The notion of a margin is actually quite a deep one, and it shows up in another famous machine learning technique called the Support Vector Machine. In fact, it’s part of some recent research I’ve been working on as well. More on that in the future.
Problem: Alice chooses a secret polynomial $p(x)$ with nonnegative integer coefficients. Bob wants to discover this polynomial by querying Alice for the value of $p(x)$ for some integer $x$ of Bob's choice. What is the minimal number of queries Bob needs to determine $p(x)$ exactly?
Solution: Two queries. The first is $p(1)$, and if we call $p(1) = N$, then the second query is $p(N+1)$.
To someone who is familiar with polynomials, this may seem shocking, and I'll explain why it works in a second. After all, it's very easy to prove that if Bob gives Alice all of his queries at the same time (if the queries are not adaptive), then it's impossible to discover what $p(x)$ is using fewer than $\deg(p) + 1$ queries. This is due to a fact called polynomial interpolation, which we've seen on this blog before in the context of secret sharing. Specifically, there is a unique single-variable degree $d$ polynomial passing through $d + 1$ points (with distinct $x$-values). So if you knew the degree of $p$, you could determine it easily. But Bob doesn't know the degree of the polynomial, and there's no way he can figure it out without adaptive queries! Indeed, if Bob tries and gives a set of $k$ queries, Alice could have easily picked a polynomial of degree $k$. So it's literally impossible to solve this problem without adaptive queries.
The lovely fact is that once you allow adaptiveness, the number of queries you need doesn’t even depend on the degree of the secret polynomial!
Okay let's get to the solution. It was crucial that our polynomial had nonnegative integer coefficients, because we're going to do a tiny bit of number theory. Write $p(x) = a_0 + a_1 x + \dots + a_d x^d$. First, note that $p(1)$ is exactly the sum of the coefficients $a_0 + a_1 + \dots + a_d$, and in particular is at least as large as any single coefficient. So call this sum $N$, and query $p(N+1)$. This gives us a number of the form

$$p(N+1) = a_0 + a_1 (N+1) + a_2 (N+1)^2 + \dots + a_d (N+1)^d$$

And because $N+1$ is so big (bigger than every coefficient), we can compute $a_0$ easily by computing $p(N+1) \bmod (N+1)$. Now set $q_1 = (p(N+1) - a_0) / (N+1)$, and this has the form $a_1 + a_2 (N+1) + \dots + a_d (N+1)^{d-1}$. We can compute the modulus again to get $a_1$, and repeat until we have all the coefficients. We'll stop once we get a $q_k$ that is zero.
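The whole scheme fits in a few lines of Python. The function names `makeOracle` and `discoverPolynomial` are mine, standing in for Alice's and Bob's sides of the game, with coefficients stored lowest degree first:

```python
def makeOracle(coefficients):
    # Alice's side: evaluate the secret polynomial at an integer x
    return lambda x: sum(a * x**i for (i, a) in enumerate(coefficients))

def discoverPolynomial(query):
    # Bob's side: two adaptive queries determine every coefficient
    N = query(1)                 # the sum of the (nonnegative) coefficients
    if N == 0:
        return [0]               # the zero polynomial
    value = query(N + 1)
    coefficients = []
    while value > 0:
        # each coefficient is < N + 1, so they are the base-(N+1) digits
        value, digit = divmod(value, N + 1)
        coefficients.append(digit)
    return coefficients
```

The `while` loop is exactly the "mod, subtract, divide, repeat" process described above, reading off the base-$(N+1)$ digits of $p(N+1)$.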
As a small technical note, this is a polynomial-time algorithm in the number of bits needed to write down $p$. So this demonstrates the power of adaptive queries: we go from something which is impossible with any fixed number of non-adaptive queries to something which is efficiently computable with a constant number of adaptive queries.
The obvious follow-up question is: can you come up with an efficient algorithm if we allow the coefficients to be negative integers?
So far our discussion of learning theory has consisted of seeing the definition of PAC-learning, tinkering with it, and working through simple examples of learnable concept classes. We've said that our real interest is in proving big theorems about what big classes of problems can and can't be learned. One major tool for doing this with PAC is the concept of VC-dimension, but to set the stage we're going to prove a simpler theorem that gives a nice picture of PAC-learning when your hypothesis class is small. In short, the theorem we'll prove says that if you have a finite set of hypotheses to work with, and you can always find a hypothesis that's consistent with the data you've seen, then you can learn efficiently. It's obvious, but we want to quantify exactly how much data you need to ensure low error. This will also give us some concrete mathematical justification for philosophical claims about simplicity, and the theorems won't change much when we generalize to VC-dimension in a future post.
The Chernoff bound
One tool we will need in this post, which shows up all across learning theory, is the Chernoff-Hoeffding bound. We covered this famous inequality in detail previously on this blog, but the part of that post we need is the following theorem that says, informally, that if you average a bunch of bounded random variables, then the probability this average random variable deviates from its expectation is exponentially small in the amount of deviation. Here’s the slightly simplified version we’ll use:
Theorem: Let $X_1, \dots, X_m$ be independent random variables whose values are in the range $[0,1]$. Call $\mu_i = \mathbf{E}[X_i]$, $X = \frac{1}{m} \sum_{i=1}^m X_i$, and $\mu = \mathbf{E}[X] = \frac{1}{m} \sum_{i=1}^m \mu_i$. Then for all $t > 0$,

$$\Pr[|X - \mu| > t] \leq 2 e^{-2mt^2}$$
One nice thing about the Chernoff bound is that it doesn't matter how the variables are distributed. This is important because in PAC we need guarantees that hold for any distribution generating data. Indeed, in our case the random variables above will be individual examples drawn from the distribution generating the data. We'll be estimating the probability that our hypothesis has error deviating more than $\varepsilon$, and we'll want to bound this by $\delta$, as in the definition of PAC-learning. Since the amount of deviation (error) and the number of samples ($m$) both occur in the exponent, the trick is in balancing the two values to get what we want.
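As a sanity check on the bound, here's a small simulation with fair coin flips standing in for the bounded random variables. The parameter values are arbitrary choices of mine:

```python
import math
import random

m, t, trials = 500, 0.1, 2000
mu = 0.5                               # each X_i is a fair coin flip in {0, 1}
bound = 2 * math.exp(-2 * m * t**2)    # the Chernoff-Hoeffding bound, about 9e-5

random.seed(1)
deviations = 0
for _ in range(trials):
    empiricalMean = sum(random.random() < mu for _ in range(m)) / m
    if abs(empiricalMean - mu) > t:
        deviations += 1

# with these parameters the bound is tiny, so deviations should be very rare
```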
Realizability and finite hypothesis classes
Let's recall the PAC model once more. We have a distribution $D$ generating labeled examples $(x, c(x))$, where $c$ is an unknown function coming from some concept class $C$. Our algorithm can draw a polynomial number of these examples, and it must produce a hypothesis $h$ from some hypothesis class $H$ (which may or may not contain $c$). The guarantee we need is that, for any $c \in C$ and any $\varepsilon, \delta > 0$, the algorithm produces a hypothesis whose error on $D$ is at most $\varepsilon$, and this event happens with probability at least $1 - \delta$. All of these probabilities are taken over the randomness in the algorithm's choices and the distribution $D$, and it has to work no matter what the distribution $D$ is.
Let's introduce some simplifications. First, we'll assume that the hypothesis and concept classes $H$ and $C$ are finite. Second, we'll assume that $C \subseteq H$, so that you can actually hope to find a hypothesis of zero error. This is called realizability. Later we'll relax these first two assumptions, but they make the analysis a bit cleaner. Finally, we'll assume that we have an algorithm which, when given labeled examples, can find in polynomial time a hypothesis $h \in H$ that is consistent with every example.
These assumptions give a trivial learning algorithm: draw a bunch of examples and output any consistent hypothesis. The question is, how many examples do we need to guarantee that the hypothesis we find has the prescribed generalization error? It will certainly grow with $1/\varepsilon$, but we need to ensure it will only grow polynomially fast in this parameter. Indeed, realizability is such a strong assumption that we can prove a polynomial bound using even more basic probability theory than the Chernoff bound.
Theorem: An algorithm that efficiently finds a consistent hypothesis will PAC-learn any finite concept class $C \subseteq H$, provided it has at least $m$ samples, where

$$m \geq \frac{1}{\varepsilon} \left( \ln |H| + \ln \left( \frac{1}{\delta} \right) \right)$$
Proof. All we need to do is bound the probability that a bad hypothesis (one with error more than $\varepsilon$) is consistent with the given data. Now fix $D, c, \varepsilon, \delta$, draw $m$ examples, and let $h$ be any hypothesis that is consistent with the drawn examples. Suppose that the bad thing happens, that $\Pr_D[h(x) \neq c(x)] > \varepsilon$.

Because the examples are all drawn independently from $D$, the chance that all $m$ examples are consistent with $h$ is at most

$$(1 - \varepsilon)^m$$

What we're saying here is, the probability that a specific bad hypothesis is actually consistent with your drawn examples is exponentially small in the error tolerance. So if we apply the union bound, the probability that some hypothesis you could produce is bad is at most $B (1 - \varepsilon)^m$, where $B$ is the number of hypotheses the algorithm might produce.

A crude upper bound on the number of hypotheses you could produce is just the total number of hypotheses, $|H|$. Even cruder, let's use the inequality $1 - \varepsilon < e^{-\varepsilon}$ to give the bound

$$B (1 - \varepsilon)^m \leq |H| e^{-\varepsilon m}$$

Now we want to make sure that this probability, the probability of choosing a high-error (yet consistent) hypothesis, is at most $\delta$. So we can set the above quantity less than $\delta$ and solve for $m$:

$$|H| e^{-\varepsilon m} \leq \delta$$
Taking logs and solving for $m$ gives the desired bound.
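The bound is easy to tabulate. Here's a tiny helper (the function name `sampleBound` is mine) showing how gently $m$ grows with the size of the hypothesis class:

```python
import math

def sampleBound(hypothesisCount, epsilon, delta):
    # smallest integer m with m >= (1/epsilon)(ln|H| + ln(1/delta))
    return math.ceil((math.log(hypothesisCount) + math.log(1 / delta)) / epsilon)
```

Even a hypothesis class of size $2^{20}$ needs only a few hundred samples at $\varepsilon = 0.05$, $\delta = 0.01$, and squaring the class size merely doubles the $\ln |H|$ term.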
An obvious objection is: what if you aren’t working with a hypothesis class where you can guarantee that you’ll find a consistent hypothesis? Well, in that case we’ll need to inspect the definition of PAC again and reevaluate our measures of error. It turns out we’ll get a similar theorem as above, but with the stipulation that we’re only achieving error within epsilon of the error of the best available hypothesis.
But before we go on, this theorem has some deep philosophical interpretations. In particular, suppose that, before drawing your data, you could choose to work with one of two finite hypothesis classes $H_1, H_2$, with $|H_1| < |H_2|$. If you can find a consistent hypothesis no matter which hypothesis class you use, then this theorem says that your generalization guarantees are much stronger if you start with the smaller hypothesis class.
In other words, all else being equal, the smaller set of hypotheses is better. For this reason, the theorem is sometimes called the “Occam’s Razor” theorem. We’ll see a generalization of this theorem in the next section.
Unrealizability and an extra epsilon
Now suppose that $H$ doesn't contain any hypotheses with error less than $\varepsilon$. What can we hope to do in this case? One thing is that we can hope to find a hypothesis whose error is within $\varepsilon$ of the minimal error of any hypothesis in $H$. Moreover, we might not have any consistent hypotheses for some data samples! So rather than require an algorithm to produce an $h \in H$ that is perfectly consistent with the data, we just need it to produce a hypothesis that has minimal empirical error, in the sense that it is as close to consistent as the best hypothesis in $H$ on the data you happened to draw. It seems like such a strategy would find you a hypothesis that's close to the best one in $H$, but we need to prove it and determine how many samples we need to draw to succeed.
So let's make some definitions to codify this. For a given hypothesis $h$, call $\varepsilon(h) = \Pr_{x \sim D}[h(x) \neq c(x)]$ the true error of $h$ on the distribution $D$. Our assumption is that there may be no hypotheses $h \in H$ with $\varepsilon(h) = 0$. Next we'll call the empirical error

$$\hat{\varepsilon}(h) = \frac{1}{m} \sum_{i=1}^m \mathbf{1}[h(x_i) \neq c(x_i)]$$

the fraction of the $m$ drawn examples that $h$ labels incorrectly.
Definition: We say a concept class $C$ is agnostically learnable using the hypothesis class $H$ if for all $c \in C$ and all distributions $D$ (and all $\varepsilon, \delta > 0$), there is a learning algorithm $A$ which produces a hypothesis $h$ that with probability at least $1 - \delta$ satisfies

$$\varepsilon(h) \leq \min_{h' \in H} \varepsilon(h') + \varepsilon$$

and everything runs in the same sort of polynomial time as for vanilla PAC-learning. This is called the agnostic setting or the unrealizable setting, in the sense that we may not be able to find a hypothesis with perfect empirical error.
We seek to prove that all concept classes are agnostically learnable with a finite hypothesis class, provided you have an algorithm that can minimize empirical error. But actually we’ll prove something stronger.
Theorem: Let $H$ be a finite hypothesis class and $m$ the number of samples drawn. Then for any $\delta > 0$, with probability $1 - \delta$ the following holds:

$$\forall h \in H, \quad |\hat{\varepsilon}(h) - \varepsilon(h)| \leq \sqrt{\frac{1}{2m} \ln \left( \frac{2|H|}{\delta} \right)}$$
In other words, we can precisely quantify how the empirical error converges to the true error as the number of samples grows. But this holds for all hypotheses in $H$ at once, so this provides a uniform bound on the difference between true and empirical error for the entire hypothesis class.
Proving this requires the Chernoff bound. Fix a single hypothesis $h \in H$. If you draw an example $x$, call $Z$ the random variable which is 1 when $h(x) \neq c(x)$, and 0 otherwise. So if you draw $m$ samples and call the $i$-th variable $Z_i$, the empirical error of the hypothesis is $\hat{\varepsilon}(h) = \frac{1}{m} \sum_{i=1}^m Z_i$. Moreover, the actual error is the expectation of this random variable, since $\mathbf{E}[Z] = \Pr[h(x) \neq c(x)] = \varepsilon(h)$.

So what we're asking is the probability that the empirical error deviates from the true error by a lot. Let's call "a lot" some parameter $\varepsilon/2$ (the reason for dividing by two will become clear in the corollary to the theorem). Then plugging things into the Chernoff-Hoeffding bound gives a bound on the probability of the "bad event," that the empirical error deviates too much:

$$\Pr[|\hat{\varepsilon}(h) - \varepsilon(h)| > \varepsilon/2] \leq 2 e^{-\frac{\varepsilon^2 m}{2}}$$

Now to get a bound on the probability that some hypothesis is bad, we apply the union bound and use the fact that $H$ is finite to get

$$\Pr[\exists h \in H, \ |\hat{\varepsilon}(h) - \varepsilon(h)| > \varepsilon/2] \leq 2 |H| e^{-\frac{\varepsilon^2 m}{2}}$$
Now say we want to bound this probability by $\delta$. We set $2 |H| e^{-\varepsilon^2 m / 2} \leq \delta$, solve for $m$, and get

$$m \geq \frac{2}{\varepsilon^2} \ln \left( \frac{2|H|}{\delta} \right)$$

This gives us a concrete quantification of the tradeoff between $m$ and $\varepsilon$. Indeed, if we pick $m$ to be this large, then solving for $\varepsilon/2$ gives the exact inequality from the theorem.
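The agnostic bound is just as easy to compute (the function name `agnosticSampleBound` is mine); note the $1/\varepsilon^2$ dependence, compared with $1/\varepsilon$ in the realizable case:

```python
import math

def agnosticSampleBound(hypothesisCount, epsilon, delta):
    # smallest integer m with m >= (2/epsilon^2) ln(2|H|/delta)
    return math.ceil(2 * math.log(2 * hypothesisCount / delta) / epsilon**2)
```

For example, $|H| = 1000$, $\varepsilon = 0.1$, $\delta = 0.05$ requires 2120 samples here, compared with about a hundred from the realizable bound with the same parameters.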
Now we know that if we pick enough samples (polynomially many in all the parameters), and our algorithm can find a hypothesis of minimal empirical error, then we get the following corollary:
Corollary: For any $\varepsilon, \delta > 0$, the algorithm that draws $m \geq \frac{2}{\varepsilon^2} \ln \left( \frac{2|H|}{\delta} \right)$ examples and finds any hypothesis of minimal empirical error will, with probability at least $1 - \delta$, produce a hypothesis that is within $\varepsilon$ of the best hypothesis in $H$.
Proof. By the previous theorem, with the desired probability, for all $h \in H$ we have $|\hat{\varepsilon}(h) - \varepsilon(h)| < \varepsilon/2$. Call $h^* = \arg \min_{h' \in H} \varepsilon(h')$. Then because the empirical error of the algorithm's output $h$ is minimal, we have $\hat{\varepsilon}(h) \leq \hat{\varepsilon}(h^*)$. And using the previous theorem again and the triangle inequality, we get $\varepsilon(h) \leq \hat{\varepsilon}(h) + \varepsilon/2 \leq \hat{\varepsilon}(h^*) + \varepsilon/2 \leq \varepsilon(h^*) + \varepsilon$. In words, the true error of the algorithm's hypothesis is close to the error of the best hypothesis, as desired.
Both of these theorems tell us something about the generalization guarantees for learning with hypothesis classes of a certain size. But this isn't exactly the most reasonable measure of the "complexity" of a family of hypotheses. For example, one could have a hypothesis class with a billion intervals on $\mathbb{R}$ (say you're trying to learn intervals, or thresholds, or something easy), and the guarantees we proved in this post are nowhere near optimal.
So the question is: say you have a potentially infinite class of hypotheses, but the hypotheses are all “simple” in some way. First, what is the right notion of simplicity? And second, how can you get guarantees based on that analogous to these? We’ll discuss this next time when we define the VC-dimension.