In 2014 the White House commissioned a 90-day study that culminated in a report (pdf) on the state of “big data” and related technologies. The authors give many recommendations, including this central warning.
Warning: algorithms can facilitate illegal discrimination!
Here’s a not-so-imaginary example of the problem. A bank wants people to take loans with high interest rates, and it also serves ads for these loans. A modern idea is to use an algorithm to decide, based on the sliver of known information about a user visiting a website, which advertisement to present that gives the largest chance of the user clicking on it. There’s one problem: these algorithms are trained on historical data, and poor uneducated people (often racial minorities) have a historical trend of being more likely to succumb to predatory loan advertisements than the general population. So an algorithm that is “just” trying to maximize clickthrough may also be targeting black people, de facto denying them opportunities for fair loans. Such behavior is illegal.
On the other hand, even if algorithms are not making illegal decisions, by training algorithms on data produced by humans, we naturally reinforce prejudices of the majority. This can have negative effects, like Google’s autocomplete finishing “Are transgenders” with “going to hell?” Even if this is the most common question being asked on Google, and even if the majority think it’s morally acceptable to display this to users, this shows that algorithms do in fact encode our prejudices. People are slowly coming to realize this, to the point where it was recently covered in the New York Times.
There are many facets to the algorithm fairness problem, one that has not even been widely acknowledged as a problem, despite the Times article. The message has been echoed by machine learning researchers but mostly ignored by practitioners. In particular, “experts” continually make ignorant claims such as “equations can’t be racist.” Consider the following quote from the above-linked article about how the Chicago Police Department has been using algorithms to do predictive policing.
Wernick denies that [the predictive policing] algorithm uses “any racial, neighborhood, or other such information” to assist in compiling the heat list [of potential repeat offenders].
Why is this ignorant? Because of the well-known fact that removing explicit racial features from data does not eliminate an algorithm’s ability to learn race. If racial features disproportionately correlate with crime (as they do in the US), then an algorithm which learns race is actually doing exactly what it is designed to do! One needs to be very thorough to say that an algorithm does not “use race” in its computations. Algorithms are not designed in a vacuum, but rather in conjunction with the designer’s analysis of their data. There are two points of failure here: the designer can unwittingly encode biases into the algorithm based on a biased exploration of the data, and the data itself can encode biases due to human decisions made to create it. Because of this, the burden of proof is (or should be!) on the practitioner to guarantee they are not violating discrimination law. Wernick should instead prove mathematically that the policing algorithm does not discriminate.
While that viewpoint is idealistic, it’s a bit naive because there is no accepted definition of what it means for an algorithm to be fair. In fact, from a precise mathematical standpoint, there isn’t even a precise legal definition of what it means for any practice to be fair. In the US the existing legal theory is called disparate impact, which states that a practice can be considered illegal discrimination if it has a “disproportionately adverse” effect on members of a protected group. Here “disproportionate” is precisely defined by the 80% rule, but this is somehow not enforced as stated. As with many legal issues, laws are broad assertions that are challenged on a case-by-case basis. In the case of fairness, the legal decision usually hinges on whether an individual was treated unfairly, because the individual is the one who files the lawsuit. Our understanding of the law is cobbled together, essentially through anecdotes slanted by political agendas. A mathematician can’t make progress with that. We want the mathematical essence of fairness, not something that can be interpreted depending on the court majority.
The problem is exacerbated for data mining because the practitioners often demonstrate a poor understanding of statistics, the management doesn’t understand algorithms, and almost everyone is lulled into a false sense of security via abstraction (remember, “equations can’t be racist”). Experts in discrimination law aren’t trained to audit algorithms, and engineers aren’t trained in social science or law. The speed with which research becomes practice far outpaces the speed at which anyone can keep up. This is especially true at places like Google and Facebook, where teams of in-house mathematicians and algorithm designers bypass the delay between academia and industry.
And perhaps the worst part is that even the world’s best mathematicians and computer scientists don’t know how to interpret the output of many popular learning algorithms. This isn’t just a problem that stupid people aren’t listening to smart people, it’s that everyone is “stupid.” A more politically correct way to say it: transparency in machine learning is a wide open problem. Take, for example, deep learning. A far-removed adaptation of neuroscience to data mining, deep learning has become the flagship technique spearheading modern advances in image tagging, speech recognition, and other classification problems.
The picture above shows how low-level “features” (which essentially boil down to simple numerical combinations of pixel values) are combined in a “neural network” into more complicated image-like structures. The claim that these features represent natural concepts like “cat” and “horse” has fueled the public attention on deep learning for years. But looking at the above, is there any reasonable way to say whether these are encoding “discriminatory information”? Not only is this an open question, but we don’t even know what kinds of problems deep learning can solve! How can we understand to what extent neural networks can encode discrimination if we don’t have a deep understanding of why a neural network is good at what it does?
What makes this worse is that there are only about ten people in the world who understand the practical aspects of deep learning well enough to achieve record results. This means they spent a ton of time tinkering with the model to make it domain-specific, and nobody really knows whether the subtle differences between the top models correspond to genuine advances, slight overfitting, or luck. Who is to say whether the fiasco with Google tagging images of black people as apes was caused by the data, by the deep learning algorithm, or by some obscure tweak made by the designer? I doubt even the designer could tell you with any certainty.
Opacity and a lack of interpretability are the rule more than the exception in machine learning. Celebrated techniques like Support Vector Machines, Boosting, and recent popular “tensor methods” are all highly opaque. This means that even if we knew what fairness meant, it would still be a challenge (though one we’d be suited for) to modify existing algorithms to become fair. But with recent success stories in theoretical computer science connecting security, trust, and privacy, computer scientists have started to take up the call of nailing down what fairness means, and how to measure and enforce fairness in algorithms. There is now a yearly workshop called Fairness, Accountability, and Transparency in Machine Learning (FAT-ML, an awesome acronym), and some famous theory researchers are starting to get involved, as are social scientists and legal experts. Full disclosure, two days ago I gave a talk as part of this workshop on modifications to AdaBoost that seem to make it more fair. More on that in a future post.
From our perspective, we the computer scientists and mathematicians, the central obstacle is still that we don’t have a good definition of fairness.
In the next post I want to get a bit more technical. I’ll describe the parts of the fairness literature I like (which will be biased), I’ll hypothesize about the tension between statistical fairness and individual fairness, and I’ll entertain ideas on how someone designing a controversial algorithm (such as a predictive policing algorithm) could maintain transparency and accountability over its discriminatory impact. In subsequent posts I want to explain in more detail why it seems so difficult to come up with a useful definition of fairness, and to describe some of the ideas I and my coauthors have worked on.
A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four,” direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and many more.
So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.
Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.
The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $n$ to $2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.
Theorem: There is no bijection from the natural numbers $\mathbb{N}$ to the real numbers $\mathbb{R}$.
Proof. Suppose to the contrary (i.e., we’re about to do a proof by contradiction) that there is a bijection $f : \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $n$ and I will spit out $f(n)$, with the property that different $n$ give different $f(n)$, and every real number is hit by some natural number (this is just what it means to be a one-to-one mapping).
First let me just do some setup. I claim that all we need to do is show that there is no bijection between $\mathbb{N}$ and the real numbers in $(0,1)$. In particular, I claim there is a bijection from $(0,1)$ to all real numbers, so if there were a bijection from $\mathbb{N}$ to $(0,1)$ we could combine the two bijections. To build a bijection from $(0,1)$ to $\mathbb{R}$, I can first make a bijection from the open interval $(0,1)$ to the interval $(0,2)$ by doubling. With a little bit of extra work (read, messy details) you can extend this to all real numbers. Here’s a sketch: send the $(0,1)$ part of $(0,2)$ to the negative reals, say via $x \mapsto 1 - 1/x$, and send the $[1,2)$ part to the nonnegative reals, say via $x \mapsto 1/(2-x) - 1$. Together these give a bijection from $(0,2)$ to $\mathbb{R}$, and composing with the doubling map finishes the job.
Okay, setup is done. We just have to show there is no bijection between $(0,1)$ and the natural numbers.
The reason I did all that setup is so that I can use the fact that every real number in $(0,1)$ has an infinite binary expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of $f(1)$ as a row in a table (an infinite row), and below it I’ll write down the expansion of $f(2)$, below that $f(3)$, and so on, with the decimal points lined up. Writing $b_{i,j}$ for the $j$-th digit of $f(i)$, the table looks like this:

$f(1) = 0.\,b_{1,1}\,b_{1,2}\,b_{1,3}\,b_{1,4}\ldots$
$f(2) = 0.\,b_{2,1}\,b_{2,2}\,b_{2,3}\,b_{2,4}\ldots$
$f(3) = 0.\,b_{3,1}\,b_{3,2}\,b_{3,3}\,b_{3,4}\ldots$
$\qquad\vdots$
It’s a bit harder to read, but trust me the notation is helpful.
Now by the assumption that $f$ is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.
Here’s how I’ll come up with such a number (this is the diagonalization part); call it $x$. It starts with 0., and its first digit after the decimal point is $1 - b_{1,1}$, where $b_{i,j}$ denotes the $j$-th digit of $f(i)$ in the table. That is, we flip the digit $b_{1,1}$ to get the first digit of $x$. The second digit of $x$ is $1 - b_{2,2}$, the third is $1 - b_{3,3}$, and so on. In general, digit $i$ is $1 - b_{i,i}$.
Now we show that our new number $x$ isn’t in the table. If it were, then it would have to be $f(n)$ for some $n$, i.e. be the $n$-th row in the table. Moreover, by the way we built the table, the $n$-th digit of $f(n)$ would be $b_{n,n}$. But we defined $x$ so that its $n$-th digit was actually $1 - b_{n,n}$. This is very embarrassing for $x$ (it’s a contradiction!). So $x$ isn’t in the table.
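The flipping construction is easy to play with on a finite truncation of the table. Here is a small Python sketch (the names are mine, and a 4×4 table stands in for the infinite one):

```python
def diagonal_flip(table):
    """Given a finite square table of binary digit rows, return a digit
    sequence that differs from row i in position i (the diagonal flip)."""
    return [1 - table[i][i] for i in range(len(table))]

# A toy 4x4 table standing in for the infinite table of expansions.
table = [
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
]

flipped = diagonal_flip(table)  # differs from every row in at least one digit
```

By construction `flipped` disagrees with the $i$-th row in the $i$-th position, so it can’t equal any row, which is exactly the shape of the argument above.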
It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?
The Halting Problem
The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.
The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program and an input to that program, will ever stop running when given as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!
So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.
Theorem: The halting problem cannot be solved by Turing machines.
Proof. Suppose to the contrary that $T$ is a program that solves the halting problem. We’ll use $T$ as a black box to come up with a new program I’ll call meta-$T$, defined in pseudo-Python as follows.
```
def metaT(P):
    run T on (P, P)
    if T says that P halts:
        loop infinitely
    else:
        halt and output "success!"
```
In words, meta-$T$ accepts as input the source code of a program $P$, and then uses $T$ to tell if $P$ halts (when given its own source code as input). Based on the result, it behaves the opposite of $P$; if $P$ halts then meta-$T$ loops infinitely and vice versa. It’s a little meta, right?
Now let’s do something crazy: let’s run meta-$T$ on itself! That is, run metaT(metaT). The question is: what is the output of this call? The meta-$T$ program uses $T$ to determine whether meta-$T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $T$’s answer! Likewise, if $T$ says that metaT(metaT) should loop infinitely, that will cause meta-$T$ to halt, a contradiction. So $T$ cannot be correct, and the halting problem can’t be solved.
This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.
But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $\mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
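To make the countability claim concrete, here is a small Python sketch of that enumeration. The tiny alphabet and length cap are my toy choices, and `is_valid_program` uses Python’s own `compile` as a stand-in for “syntactically valid”:

```python
from itertools import product

def all_strings(alphabet, max_len):
    """Yield all nonempty strings over the alphabet, shortest first and then
    lexicographically -- a countable enumeration of strings."""
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def is_valid_program(source):
    """Keep only strings that parse as Python: our stand-in for
    'syntactically valid programs'."""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

# Enumerate strings over a tiny alphabet and throw out the invalid ones.
programs = [s for s in all_strings("x=1;", 3) if is_valid_program(s)]
```

Running the same loop with the full character set and no length cap would (in the limit) list every program exactly once, which is all countability requires.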
The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.
For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation starts 1, 0, 1, 0, … — one answer bit for each input string in lexicographic order.
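One quick way to see this representation is to compute a prefix of it. A hedged sketch (the function names and the length cap are mine):

```python
from itertools import product

def binary_strings(max_len):
    """Nonempty binary strings, shortest first and then lexicographically."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def is_even(s):
    """Answer bit for the 'is this binary number even?' problem: a binary
    number is even exactly when its string ends in 0."""
    return 1 if s.endswith("0") else 0

# The first few bits of the infinite string representing the problem.
representation = [is_even(s) for s in binary_strings(3)]
```

The infinite continuation of `representation` is the binary string that *is* the problem, in the sense described above.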
Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us, we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table has a 1 at entry $(i,j)$ if program $j$ halts on input $i$, and a 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.
Now we assume for contradiction’s sake that some program $T$ solves the halting problem, i.e. that every entry of the table is computable. Now we’ll construct the answers output by meta-$T$ by flipping each bit of the diagonal of the table. The point is that meta-$T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$T$. Then we argue that the diagonal entry of the table for meta-$T$ contradicts its definition, and we’re done!
So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire.
Until next time!
When addressing the question of what it means for an algorithm to learn, one can imagine many different models, and there are quite a few. This invariably raises the question of which models are “the same” and which are “different,” along with a precise description of how we’re comparing models. We’ve seen one learning model so far, called Probably Approximately Correct (PAC), which espouses the following answer to the learning question:
An algorithm can “solve” a classification task using labeled examples drawn from some distribution if it can achieve accuracy that is arbitrarily close to perfect on the distribution, and it can meet this goal with arbitrarily high probability, where its runtime and the number of examples needed scale efficiently with all the parameters (accuracy, confidence, size of an example). Moreover, the algorithm needs to succeed no matter what distribution generates the examples.
You can think of this as a game between the algorithm designer and an adversary. First, the learning problem is fixed and everyone involved knows what the task is. Then the algorithm designer has to pick an algorithm. Then the adversary, knowing the chosen algorithm, chooses a nasty distribution $D$ over examples that are fed to the learning algorithm. The algorithm designer “wins” if the algorithm produces a hypothesis with low error on $D$ when given samples from $D$. And our goal is to prove that the algorithm designer can pick a single algorithm that is extremely likely to win no matter what distribution the adversary picks.
We’ll momentarily restate this with a more precise definition, because in this post we will compare it to a slightly different model, which is called the weak PAC-learning model. It’s essentially the same as PAC, except it only requires the algorithm to have accuracy that is slightly better than random guessing. That is, the algorithm will output a classification function which will correctly classify a randomly drawn example with probability at least $1/2 + \eta$ for some small, but fixed, $\eta > 0$. The quantity $\eta$ (the Greek “eta”) is called the edge, as in “the edge over random guessing.” We call an algorithm that produces such a hypothesis a weak learner, and in contrast we’ll call a successful algorithm in the usual PAC model a strong learner.
The amazing fact is that strong learning and weak learning are equivalent! Of course a weak learner is not the same thing as a strong learner. What we mean by “equivalent” is that:
A problem can be weak-learned if and only if it can be strong-learned.
So they are computationally the same. One direction of this equivalence is trivial: if you have a strong learner for a classification task then it’s automatically a weak learner for the same task. The reverse is much harder, and this is the crux: there is an algorithm for transforming a weak learner into a strong learner! Informally, we “boost” the weak learning algorithm by feeding it examples from carefully constructed distributions, and then take a majority vote. This “reduction” from strong to weak learning is where all the magic happens.
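To preview the shape of that reduction, here is a deliberately simplified boosting skeleton in Python. Everything specific in it is a placeholder of mine: the fixed 2×/0.5× reweighting and the unweighted majority vote are stand-ins for the carefully chosen quantities AdaBoost actually uses.

```python
def boost(examples, weakLearn, rounds):
    """Schematic boosting loop: reweight the training examples each round,
    rerun the weak learner, and combine the hypotheses by majority vote."""
    weights = [1.0 / len(examples)] * len(examples)
    hypotheses = []
    for _ in range(rounds):
        # The weak learner sees the current distribution over examples.
        h = weakLearn(examples, weights)
        hypotheses.append(h)
        # Up-weight examples the new hypothesis got wrong, down-weight the
        # rest. (Real AdaBoost derives these multipliers from h's error.)
        weights = [w * (2.0 if h(x) != y else 0.5)
                   for w, (x, y) in zip(weights, examples)]
        total = sum(weights)
        weights = [w / total for w in weights]

    def finalHypothesis(x):
        # Unweighted majority vote; AdaBoost weights each vote by accuracy.
        vote = sum(h(x) for h in hypotheses)
        return 1 if vote >= 0 else -1

    return finalHypothesis
```

The design point to notice is the feedback loop: each round the distribution shifts toward the examples the committee-so-far finds hard, which is exactly the “carefully constructed distributions” idea.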
In this post we’ll get into the depths of this boosting technique. We’ll review the model of PAC-learning, define what it means to be a weak learner, “organically” come up with the AdaBoost algorithm from some intuitive principles, prove that AdaBoost reduces error on the training data, and then run it on data. It turns out that despite the origin of boosting being a purely theoretical question, boosting algorithms have had a wide impact on practical machine learning as well.
History and multiplicative weights
Before we get into the details, here’s a bit of history and context. PAC learning was introduced by Leslie Valiant in 1984, laying the foundation for a flurry of innovation. In 1988 Michael Kearns posed the question of whether one can “boost” a weak learner to a strong learner. Two years later Rob Schapire published his landmark paper “The Strength of Weak Learnability” closing the theoretical question by providing the first “boosting” algorithm. Schapire and Yoav Freund worked together for the next few years to produce a simpler and more versatile algorithm called AdaBoost, and for this they won the Gödel Prize, one of the highest honors in theoretical computer science. AdaBoost is also the standard boosting algorithm used in practice, though there are enough variants to warrant a book on the subject.
I’m going to define and prove that AdaBoost works in this post, and implement it and test it on some data. But first I want to give some high level discussion of the technique, and afterward the goal is to make that wispy intuition rigorous.
The central technique of AdaBoost has been discovered and rediscovered in computer science, and recently it was recognized abstractly in its own right. It is called the Multiplicative Weights Update Algorithm (MWUA), and it has applications in everything from learning theory to combinatorial optimization and game theory. The idea is to
- Maintain a nonnegative weight for the elements of some set,
- Draw a random element proportionally to the weights,
- Do something with the chosen element, and based on the outcome of the “something…”
- Update the weights and repeat.
The “something” is usually a black box algorithm like “solve this simple optimization problem.” The output of the “something” is interpreted as a reward or penalty, and the weights are updated according to the severity of the penalty (the details of how this is done differ depending on the goal). In this light one can interpret MWUA as minimizing regret with respect to the best alternative element one could have chosen in hindsight. In fact, this was precisely the technique we used to attack the adversarial bandit learning problem (the Exp3 algorithm is a multiplicative weight scheme). See this lengthy technical survey of Arora and Kale for a research-level discussion of the algorithm and its applications.
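Here is what that skeleton looks like in code. This is a generic sketch, with `payoff` standing in for the black-box “something” and an exponential update as one common choice of rule:

```python
import math
import random

def mwua(elements, payoff, rounds, learningRate=0.5):
    """Generic multiplicative weights loop: repeatedly draw an element in
    proportion to its weight, observe a payoff in [-1, 1], and reweight."""
    weights = {e: 1.0 for e in elements}
    for _ in range(rounds):
        total = sum(weights.values())
        # Draw a random element proportionally to the weights.
        r = random.uniform(0, total)
        chosen = elements[-1]  # fallback for floating-point edge cases
        for e in elements:
            r -= weights[e]
            if r <= 0:
                chosen = e
                break
        # "Do something" with the chosen element; interpret the outcome
        # as a reward (positive) or a penalty (negative).
        reward = payoff(chosen)
        # Multiplicative update: good outcomes grow the weight, bad shrink it.
        weights[chosen] *= math.exp(learningRate * reward)
    return weights
```

Run with a payoff that consistently favors one element, the weight of that element quickly dominates, which is the regret-minimization behavior described above.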
Now let’s remind ourselves of the formal definition of PAC. If you’ve read the previous post on the PAC model, this next section will be redundant.
Distributions, hypotheses, and targets
In PAC-learning you are trying to give labels to data from some set $X$. There is a distribution $D$ producing data from $X$, and it’s used for everything: to provide the data the algorithm uses to learn, to measure your accuracy, and every other time you might get samples from $X$. You as the algorithm designer don’t know what $D$ is, and a successful learning algorithm has to work no matter what $D$ is. There’s some unknown function $c$ called the target concept, which assigns a label to each data point in $X$. The target is the function we’re trying to “learn.” When the algorithm draws an example from $D$, it’s allowed to query the label $c(x)$ and use all of the labels it’s seen to come up with some hypothesis $h$ that is used for new examples that the algorithm may not have seen before. The problem is “solved” if $h$ has low error with respect to $D$.
To give a concrete example let’s do spam emails. Say that $X$ is the set of all emails, and $D$ is the distribution over emails that get sent to my personal inbox. A PAC-learning algorithm would take all my emails, along with my classification of which are spam and which are not spam (plus and minus 1). The algorithm would produce a hypothesis $h$ that can be used to label new emails, and if the algorithm is truly a PAC-learner, then our guarantee is that with high probability (over the randomness in which emails I receive) the algorithm will produce an $h$ that has low error on the entire distribution of emails that get sent to me (relative to my personal spam labeling function).
Of course there are practical issues with this model. I don’t have a consistent function for calling things spam, the distribution of emails I get and my labeling function can change over time, and emails don’t come according to a distribution with independent random draws. But that’s the theoretical model, and we can hope that algorithms we devise for this model happen to work well in practice.
Here’s the formal definition of the error of a hypothesis $h$ produced by the learning algorithm, with respect to a target concept $c$ and a distribution $D$:

$$\textup{err}_{c,D}(h) = \Pr_{x \sim D}(h(x) \neq c(x))$$

It’s read “the error of $h$ with respect to the concept $c$ we’re trying to learn and the distribution $D$ is the probability over $x$ drawn from $D$ that the hypothesis produces the wrong label.” We can now define PAC-learning formally, introducing the parameters $\delta$ for “probably” and $\varepsilon$ for “approximately.” Let me say it informally first:
An algorithm PAC-learns if, for any $\varepsilon, \delta > 0$ and any distribution $D$, with probability at least $1 - \delta$ the hypothesis produced by the algorithm has error at most $\varepsilon$.
To flesh out the other things hiding, here’s the full definition.
Definition (PAC): An algorithm $A$ is said to PAC-learn the concept class $\mathsf{C}$ over the set $X$ if, for any distribution $D$ over $X$, for any $0 < \varepsilon, \delta < 1/2$, and for any target concept $c \in \mathsf{C}$, the probability that $A$ produces a hypothesis $h$ of error at most $\varepsilon$ is at least $1 - \delta$. In symbols, $\Pr_D(\textup{err}_{c,D}(h) \leq \varepsilon) \geq 1 - \delta$. Moreover, $A$ must run in time polynomial in $1/\varepsilon$, $1/\delta$, and $n$, where $n$ is the size of an element $x \in X$.
The reason we need a class of concepts (instead of just one target concept) is that otherwise we could just have a constant algorithm that outputs the correct labeling function. Indeed, when we get a problem we ask whether there exists an algorithm that can solve it. I.e., a problem is “PAC-learnable” if there is some algorithm that learns it as described above. With just one target concept there can exist an algorithm to solve the problem by hard-coding a description of the concept in the source code. So we need to have some “class of possible answers” that the algorithm is searching through so that the algorithm actually has a job to do.
We call an algorithm that gets this guarantee a strong learner. A weak learner has the same definition, except that we replace $\varepsilon$ by the weak error bound: for some fixed $\eta > 0$, the error is at most $1/2 - \eta$. So we don’t require the algorithm to achieve any desired accuracy; it just has to get some accuracy slightly better than random guessing, and we don’t get to choose how much better. As we will see, the value of $\eta$ influences the convergence of the boosting algorithm. One important thing to note is that $\eta$ is a constant independent of $n$, the size of an example, and $m$, the number of examples. In particular, we need to avoid the “degenerate” possibility that $\eta \to 0$ as $n$ grows, so that as our learning problem scales the quality of the weak learner degrades toward random guessing. We want the error to be bounded away from 1/2.
So just to clarify all the parameters floating around: $\delta$ will always be the “probably” part of PAC, $\varepsilon$ is the error bound (the “approximately” part) for strong learners, and $1/2 - \eta$ is the error bound for weak learners.
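On a finite sample, the error defined above is just the fraction of disagreements between the hypothesis and the target. A tiny sketch (the names and the toy target are mine, not from the post):

```python
def empiricalError(h, c, samples):
    """Fraction of sample points where hypothesis h disagrees with the
    target concept c: a finite-sample stand-in for err_{c,D}(h)."""
    return sum(1 for x in samples if h(x) != c(x)) / len(samples)

# Toy example: the target labels nonnegative numbers +1, while a sloppy
# hypothesis only labels numbers >= 2 as +1.
target = lambda x: 1 if x >= 0 else -1
hypothesis = lambda x: 1 if x >= 2 else -1
err = empiricalError(hypothesis, target, [-2, -1, 0, 1, 2, 3])  # wrong on 0 and 1
```

In PAC terms, a strong learner must drive this quantity (measured over $D$, not a fixed sample) below any $\varepsilon$ you name; a weak learner only has to keep it below $1/2 - \eta$.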
What could a weak learner be?
Now before we prove that you can “boost” a weak learner to a strong learner, we should have some idea of what a weak learner is. Informally, it’s just a ‘rule of thumb’ that you can somehow guarantee does a little bit better than random guessing.
In practice, however, people sort of just make things up and they work. It’s kind of funny, but until recently nobody has really studied what makes a “good weak learner.” They just use an example like the one we’re about to show, and as long as they get a good error rate they don’t care if it has any mathematical guarantees. Likewise, they don’t expect the final “boosted” algorithm to do arbitrarily well, they just want low error rates.
The weak learner we’ll use in this post produces “decision stumps.” If you know what a decision tree is, then a decision stump is trivial: it’s a decision tree where the whole tree is just one node. If you don’t know what a decision tree is, a decision stump is a classification rule of the form:
Pick some feature $i$ and some value $v$ of that feature, and output one label if the input example has value $v$ for feature $i$, and output the other label otherwise.
Concretely, a decision stump might mark an email spam if it contains the word “viagra.” Or it might deny a loan applicant a loan if their credit score is less than some number.
Our weak learner produces a decision stump by simply looking through all the features and all the values of the features until it finds a decision stump that has the best error rate. It’s brute force, baby! Actually we’ll do something a little bit different. We’ll make our data numeric and look for a threshold of the feature value to split positive labels from negative labels. Here’s the Python code we’ll use in this post for boosting. This code was part of a collaboration with my two colleagues Adam Lelkes and Ben Fish. As usual, all of the code used in this post is available on Github.
First we make a class for a decision stump. The attributes represent a feature, a threshold value for that feature, and a choice of labels for the two cases. The classify function shows how simple the hypothesis is.
```python
class Stump:
    def __init__(self):
        self.gtLabel = None
        self.ltLabel = None
        self.splitThreshold = None
        self.splitFeature = None

    def classify(self, point):
        if point[self.splitFeature] >= self.splitThreshold:
            return self.gtLabel
        else:
            return self.ltLabel

    def __call__(self, point):
        return self.classify(point)
```
Then for a fixed feature index we’ll define a function that computes the best threshold value for that index.
def minLabelErrorOfHypothesisAndNegation(data, h):
    posData, negData = ([(x, y) for (x, y) in data if h(x) == 1],
                        [(x, y) for (x, y) in data if h(x) == -1])
    posError = sum(y == -1 for (x, y) in posData) + sum(y == 1 for (x, y) in negData)
    negError = sum(y == 1 for (x, y) in posData) + sum(y == -1 for (x, y) in negData)
    return min(posError, negError) / len(data)


def bestThreshold(data, index, errorFunction):
    '''Compute the best threshold for a given feature.
       Returns (threshold, error)'''
    thresholds = [point[index] for (point, label) in data]

    def makeThreshold(t):
        return lambda x: 1 if x[index] >= t else -1

    errors = [(threshold, errorFunction(data, makeThreshold(threshold)))
              for threshold in thresholds]
    return min(errors, key=lambda p: p[1])  # minimize by error
Here we allow the user to provide a generic error function that the weak learner tries to minimize, but in our case it will just be minLabelErrorOfHypothesisAndNegation. In words, our threshold function will label an example as $+1$ if its feature value is greater than the threshold, and $-1$ otherwise. But we might want to do the opposite, labeling $-1$ above the threshold and $+1$ below. The bestThreshold function doesn't care; it just wants to know which threshold value is the best. Then we compute the right hypothesis in the next function.
def buildDecisionStump(drawExample, errorFunction=defaultError):
    # find the index of the best feature to split on, and the best threshold for
    # that index. A labeled example is a pair (example, label) and drawExample()
    # accepts no arguments and returns a labeled example.
    data = [drawExample() for _ in range(500)]

    # one candidate (feature, threshold, error) triple per feature
    bestThresholds = [(i,) + bestThreshold(data, i, errorFunction)
                      for i in range(len(data[0][0]))]
    feature, thresh, _ = min(bestThresholds, key=lambda p: p[2])  # min by error

    stump = Stump()
    stump.splitFeature = feature
    stump.splitThreshold = thresh
    stump.gtLabel = majorityVote([x for x in data if x[0][feature] >= thresh])
    stump.ltLabel = majorityVote([x for x in data if x[0][feature] < thresh])
    return stump
It’s a little bit inefficient but no matter. To illustrate the PAC framework we emphasize that the weak learner needs nothing except the ability to draw from a distribution. It does so, and then it computes the best threshold and creates a new stump reflecting that. The
majorityVote function just picks the most common label of the examples in the list. Note that drawing 500 samples is arbitrary, and in general we might increase it to increase the success probability of finding a good hypothesis. In fact, when proving PAC-learning theorems the number of samples drawn often depends on the accuracy and confidence parameters $\varepsilon, \delta$. We omit them here for simplicity.
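To sanity check the brute-force idea, here is a tiny self-contained toy run (reimplementing the threshold search minimally rather than importing the post's code): one numeric feature whose labels split cleanly at a threshold, so the search should find a zero-error stump.

```python
# Toy check of the brute-force threshold search: one feature,
# labels split cleanly at value 5, so some threshold achieves zero error.
def best_threshold(data):
    # data: list of (value, label) with label in {+1, -1}
    best = None
    for t, _ in data:
        h = lambda x: 1 if x >= t else -1
        err = sum(1 for (x, y) in data if h(x) != y) / len(data)
        if best is None or err < best[1]:
            best = (t, err)
    return best

data = [(x, 1 if x >= 5 else -1) for x in range(10)]
threshold, error = best_threshold(data)
print(threshold, error)  # → 5 0.0
```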
Strong learners from weak learners
So suppose we have a weak learner $A$ for a concept class $\mathsf{C}$, and for any concept $c$ from $\mathsf{C}$ it can produce with probability at least $1 - \delta$ a hypothesis with error at most $1/2 - \eta$ for some $\eta > 0$. How can we modify this algorithm to get a strong learner? Here is an idea: we can maintain a large number of separate instances of the weak learner $A$, run them on our dataset, and then combine their hypotheses with a majority vote. In code this might look like the following python snippet. For now examples are binary vectors and the labels are $\pm 1$, so the sign of a real number will be its label.
def boost(learner, data, rounds=100):
    m = len(data)
    learners = [learner(random.sample(data, m // rounds))
                for _ in range(rounds)]

    def hypothesis(example):
        return sign(sum(1 / rounds * h(example) for h in learners))

    return hypothesis
This is a bit too simplistic: what if the majority of the weak learners are wrong? In fact, with an overly naive mindset one might imagine a scenario in which the different instances of $A$ have high disagreement, so that the prediction depends heavily on which random subsets the learners happen to get. We can do better: instead of taking a majority vote we can take a weighted majority vote. That is, give the weak learner a random subset of your data, and then test its hypothesis on all the data to get a good estimate of its error. Then you can use this error to judge whether the hypothesis is any good, giving good hypotheses high weight and bad hypotheses low weight. The "boosted" hypothesis would then take a weighted majority vote of all your hypotheses on an example. This might look like the following.
# data is a list of (example, label) pairs
def error(hypothesis, data):
    return sum(1 for x, y in data if hypothesis(x) != y) / len(data)


def boost(learner, data, rounds=100):
    m = len(data)
    weights = [0] * rounds
    learners = [None] * rounds

    for t in range(rounds):
        learners[t] = learner(random.sample(data, m // rounds))
        weights[t] = 1 - error(learners[t], data)

    def hypothesis(example):
        return sign(sum(weight * h(example)
                        for (h, weight) in zip(learners, weights)))

    return hypothesis
This might be better, but we can do something even cleverer. Rather than using the estimated error just to say something about the hypothesis, we can identify the mislabeled examples in a round and somehow encourage $A$ to do better at classifying those examples in later rounds. This turns out to be the key insight, and it's why the algorithm is called AdaBoost ("Ada" stands for "adaptive"). We're adaptively modifying the distribution over the training data we feed to $A$ based on which data $A$ learns "easily" and which it does not. So as the boosting algorithm runs, the distribution given to $A$ has more and more probability weight on the examples that $A$ misclassified. And, this is the key, $A$ has the guarantee that it will weak learn no matter what the distribution over the data is. Of course, its error is also measured relative to the adaptively chosen distribution, and the crux of the argument will be relating this error to the error on the original distribution we're trying to strong learn.
To implement this idea in mathematics, we will start with a fixed sample $X = \{ x_1, \dots, x_m \}$ drawn from $D$ and assign a weight $\mu_i \geq 0$ to each $x_i$. Call $c(x)$ the true label of an example. Initially, set each $\mu_i$ to be 1. Since our dataset can have repetitions, normalizing the $\mu_i$ to a probability distribution gives an estimate of $D$. Now we'll pick some "update" parameter $\zeta > 1$ (this is intentionally vague). Then we'll repeat the following procedure for some number of rounds $t = 1, \dots, T$.
- Renormalize the $\mu_i$ to a probability distribution.
- Train the weak learner $A$, and provide it with a simulated distribution that draws examples $x_i$ according to their weights $\mu_i$. The weak learner outputs a hypothesis $h_t$.
- For every example $x_i$ mislabeled by $h_t$, update $\mu_i$ by replacing it with $\mu_i \zeta$.
- For every correctly labeled example replace $\mu_i$ with $\mu_i / \zeta$.
At the end our final hypothesis will be a weighted majority vote of all the , where the weights depend on the amount of error in each round. Note that when the weak learner misclassifies an example we increase the weight of that example, which means we’re increasing the likelihood it will be drawn in future rounds. In particular, in order to maintain good accuracy the weak learner will eventually have to produce a hypothesis that fixes its mistakes in previous rounds. Likewise, when examples are correctly classified, we reduce their weights. So examples that are “easy” to learn are given lower emphasis. And that’s it. That’s the prize-winning idea. It’s elegant, powerful, and easy to understand. The rest is working out the values of all the parameters and proving it does what it’s supposed to.
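The update procedure above can be sketched in a few lines (a minimal illustration of the multiplicative weight rule, not the post's final implementation; the hypothesis `h` and update parameter `zeta` here are stand-ins):

```python
# One round of the multiplicative weight update described above.
# zeta > 1 is the update parameter; h is this round's weak hypothesis.
def update_weights(weights, examples, labels, h, zeta):
    new_weights = []
    for mu, x, y in zip(weights, examples, labels):
        if h(x) != y:                      # mislabeled: increase emphasis
            new_weights.append(mu * zeta)
        else:                              # correct: decrease emphasis
            new_weights.append(mu / zeta)
    total = sum(new_weights)               # renormalize to a distribution
    return [w / total for w in new_weights]

# Example: a hypothesis that always predicts +1, so it gets the
# third example (labeled -1) wrong; its weight grows relative to the rest.
weights = [1.0, 1.0, 1.0]
examples, labels = [1, 2, 3], [1, 1, -1]
h = lambda x: 1
new = update_weights(weights, examples, labels, h, zeta=2.0)
```

After one round the mislabeled example carries weight $2/3$ while each correct example carries $1/6$, so the simulated distribution now heavily emphasizes the mistake.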
The details and a proof
Let’s jump straight into a Python program that performs boosting.
First we pick a data representation. Examples are pairs whose type is the tuple (object, int). Our labels will be $\pm 1$-valued. Since our algorithm is entirely black-box, we don't need to assume anything about how the examples are represented. Our dataset is just a list of labeled examples, and the weights are floats. So our boosting function prototype looks like this
# boost: [(object, int)], learner, int -> (object -> int)
# boost the given weak learner into a strong learner
def boost(examples, weakLearner, rounds):
    ...
And a weak learner, as we saw for decision stumps, has the following function prototype.
# weakLearner: (() -> (list, label)) -> (list -> label)
# accept as input a function that draws labeled examples from a distribution,
# and output a hypothesis list -> label
def weakLearner(draw):
    ...
    return hypothesis
Assuming we have a weak learner, we can fill in the rest of the boosting algorithm with some mysterious details. First, a helper function to compute the weighted error of a hypothesis on some examples. It also returns the correctness of the hypothesis on each example, which we'll use later.
# compute the weighted error of a given hypothesis on a distribution
# return all of the hypothesis results and the error
def weightedLabelError(h, examples, weights):
    hypothesisResults = [h(x) * y for (x, y) in examples]  # +1 if correct, else -1
    return (hypothesisResults,
            sum(w for (z, w) in zip(hypothesisResults, weights) if z < 0))
Next we have the main boosting algorithm. Here
draw is a function that accepts as input a list of floats that sum to 1 and picks an index proportional to the weight of the entry at that index.
def boost(examples, weakLearner, rounds):
    distr = normalize([1.0] * len(examples))
    hypotheses = [None] * rounds
    alpha = [0] * rounds

    for t in range(rounds):
        def drawExample():
            return examples[draw(distr)]

        hypotheses[t] = weakLearner(drawExample)
        hypothesisResults, error = weightedLabelError(hypotheses[t], examples, distr)
        alpha[t] = 0.5 * math.log((1 - error) / (.0001 + error))
        distr = normalize([d * math.exp(-alpha[t] * h)
                           for (d, h) in zip(distr, hypothesisResults)])
        print("Round %d, error %.3f" % (t, error))

    def finalHypothesis(x):
        return sign(sum(a * h(x) for (a, h) in zip(alpha, hypotheses)))

    return finalHypothesis
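The snippet relies on a few helpers — `normalize`, `draw`, and `sign` — that live in the repository rather than the post. Here is one plausible minimal version of each (treat these as a sketch; the repository's versions may differ):

```python
import random

def normalize(weights):
    # scale a list of nonnegative floats so they sum to 1
    total = sum(weights)
    return [w / total for w in weights]

def draw(distribution):
    # pick an index i with probability distribution[i]
    choice = random.random()
    cumulative = 0.0
    for i, p in enumerate(distribution):
        cumulative += p
        if choice < cumulative:
            return i
    return len(distribution) - 1  # guard against float rounding

def sign(x):
    return 1 if x >= 0 else -1
```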
The code is almost clear. For each round we run the weak learner on our hand-crafted distribution. We compute the error of the resulting hypothesis on that distribution, and then we update the distribution in this mysterious way depending on some alphas and logs and exponentials. In particular, we use the expression $y \cdot h(x)$, the product of the true label and predicted label, as computed in weightedLabelError. As the comment says, this will either be $+1$ or $-1$ depending on whether the predicted label is correct or incorrect, respectively. The choice of those strange logarithms and exponentials is the result of some optimization: they allow us to minimize training error as quickly as possible (we'll see this in the proof to follow). The rest of this section will prove that this works when the weak learner is correct. One small caveat: in the proof we will assume the error of the hypothesis is not zero (because a weak learner is not supposed to return a perfect hypothesis!), but in practice we want to avoid dividing by zero so we add the small 0.0001 to avoid that. As a quick self-check: why wouldn't we just stop in the middle and output that "perfect" hypothesis? (What distribution is it "perfect" over? It might not be the original distribution!)
If we wanted to define the algorithm in pseudocode (which helps for the proof) we would write it this way. Given $T$ rounds, start with $D_1$ being the uniform distribution over labeled input examples $X = \{ (x_1, y_1), \dots, (x_m, y_m) \}$, where $x_i$ has label $y_i \in \{ -1, 1 \}$. Say there are $m$ input examples.
- For each $t = 1, \dots, T$:
  - Let $h_t$ be the output of the weak learning algorithm $A$ run on the distribution $D_t$.
  - Let $\varepsilon_t$ be the error of $h_t$ on $D_t$.
  - Let $\alpha_t = \frac{1}{2} \ln \left( \frac{1 - \varepsilon_t}{\varepsilon_t} \right)$.
  - Update each entry of $D_t$ by the rule $D_{t+1}(i) = \frac{D_t(i) e^{-\alpha_t y_i h_t(x_i)}}{Z_t}$, where $Z_t$ is chosen to normalize $D_{t+1}$ to a distribution.
- Output as the final hypothesis the sign of $g(x) = \sum_{t=1}^T \alpha_t h_t(x)$, i.e. $h(x) = \textup{sign}(g(x))$.
Now let’s prove this works. That is, we’ll prove the error on the input dataset (the training set) decreases exponentially quickly in the number of rounds. Then we’ll run it on an example and save generalization error for the next post. Over many years this algorithm has been tweaked so that the proof is very straightforward.
Theorem: If AdaBoost is given a weak learner $A$ and stopped on round $T$, and in each round the edge $\gamma_t = 1/2 - \varepsilon_t$ over random choice satisfies $\gamma_t \geq \gamma > 0$, then the training error of AdaBoost is at most $e^{-2 \gamma^2 T}$.
Proof. Let $m$ be the number of examples given to the boosting algorithm. First, we derive a closed-form expression for $D_{T+1}$ in terms of the normalization constants $Z_t$. Expanding the recurrence relation gives

$$D_{T+1}(i) = D_1(i) \cdot \frac{e^{-\alpha_1 y_i h_1(x_i)}}{Z_1} \cdots \frac{e^{-\alpha_T y_i h_T(x_i)}}{Z_T}$$

Because the starting distribution is uniform, and combining the products into a sum in the exponent, this simplifies to

$$D_{T+1}(i) = \frac{1}{m} \cdot \frac{e^{-y_i \sum_{t=1}^T \alpha_t h_t(x_i)}}{\prod_{t=1}^T Z_t} = \frac{1}{m} \cdot \frac{e^{-y_i g(x_i)}}{\prod_{t=1}^T Z_t}$$

where $g(x) = \sum_{t=1}^T \alpha_t h_t(x)$ as above.
Next, we show that the training error is bounded by the product of the normalization terms $\prod_{t=1}^T Z_t$. This part has always seemed strange to me, that the training error of boosting depends on the factors you need to normalize a distribution. But it's just a different perspective on the multiplicative weights scheme. If we didn't explicitly normalize the distribution at each step, we'd get nonnegative weights (which we could convert to a distribution just for the sampling step) and the training error would depend on the product of the weight updates in each step. Anyway, let's prove it.
The training error is defined to be $\frac{1}{m} |\{ i : \textup{sign}(g(x_i)) \neq y_i \}|$. This can be written with an indicator function as follows:

$$\frac{1}{m} \sum_{i=1}^m \mathbf{1}[y_i g(x_i) \leq 0]$$

Because the sign of $g(x_i)$ determines its prediction, the product $y_i g(x_i)$ is negative when the final hypothesis is incorrect on $x_i$. Now we can do a strange thing: we're going to upper bound the indicator function (which is either zero or one) by $e^{-y_i g(x_i)}$. This works because if the prediction is correct then the indicator function is zero while the exponential is greater than zero. On the other hand, if the prediction is incorrect the exponential is at least one, because $e^{-y_i g(x_i)} \geq 1$ exactly when $-y_i g(x_i) \geq 0$. So we get

$$\frac{1}{m} \sum_{i=1}^m \mathbf{1}[y_i g(x_i) \leq 0] \leq \frac{1}{m} \sum_{i=1}^m e^{-y_i g(x_i)}$$
and rearranging the formula for $D_{T+1}(i)$ from the first part gives $e^{-y_i g(x_i)} = m \, D_{T+1}(i) \prod_{t=1}^T Z_t$, so that

$$\frac{1}{m} \sum_{i=1}^m e^{-y_i g(x_i)} = \sum_{i=1}^m D_{T+1}(i) \prod_{t=1}^T Z_t$$

Since the $D_{T+1}(i)$ form a distribution, they sum to 1 and we can factor the $\prod_{t=1}^T Z_t$ out. So the training error is bounded by $\prod_{t=1}^T Z_t$.
The last step is to bound the product of the normalization factors. It's enough to show that $Z_t \leq e^{-2 \gamma_t^2}$. The normalization constant $Z_t$ is just defined as the sum of the numerators of the terms in the update step, i.e.

$$Z_t = \sum_{i=1}^m D_t(i) e^{-\alpha_t y_i h_t(x_i)}$$

We can split this up into the correct and incorrect terms (that contribute $-\alpha_t$ or $+\alpha_t$ in the exponent) to get

$$Z_t = e^{-\alpha_t} \sum_{i : h_t(x_i) = y_i} D_t(i) + e^{\alpha_t} \sum_{i : h_t(x_i) \neq y_i} D_t(i)$$

But by definition the sum of the incorrect part of $D_t$ is $\varepsilon_t$, and the correct part is $1 - \varepsilon_t$. So we get

$$Z_t = (1 - \varepsilon_t) e^{-\alpha_t} + \varepsilon_t e^{\alpha_t}$$
Finally, since this is an upper bound we want to pick $\alpha_t$ so as to minimize this expression. With a little calculus you can see the $\alpha_t$ we chose in the algorithm pseudocode achieves the minimum, and plugging it in simplifies the expression to $Z_t = 2 \sqrt{\varepsilon_t (1 - \varepsilon_t)}$. Plug in $\varepsilon_t = 1/2 - \gamma_t$ to get $Z_t = \sqrt{1 - 4 \gamma_t^2}$ and use the calculus fact that $1 - z \leq e^{-z}$ to get $Z_t \leq e^{-2 \gamma_t^2}$. Multiplying over all $T$ rounds bounds the training error by $\prod_{t=1}^T Z_t \leq e^{-2 \gamma^2 T}$, as desired.
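As a quick numeric sanity check of that last chain of equalities and inequalities (not part of the proof): with the optimal choice of $\alpha_t$, the normalization constant equals $2\sqrt{\varepsilon_t(1 - \varepsilon_t)} = \sqrt{1 - 4\gamma_t^2}$, which is at most $e^{-2\gamma_t^2}$.

```python
import math

def Z(eps):
    # normalization constant with the optimal alpha = 0.5 * ln((1 - eps) / eps)
    alpha = 0.5 * math.log((1 - eps) / eps)
    return (1 - eps) * math.exp(-alpha) + eps * math.exp(alpha)

for gamma in [0.01, 0.1, 0.25, 0.4]:
    eps = 0.5 - gamma
    z = Z(eps)
    # the three closed forms agree, and the exponential bound holds
    assert abs(z - 2 * math.sqrt(eps * (1 - eps))) < 1e-12
    assert abs(z - math.sqrt(1 - 4 * gamma**2)) < 1e-12
    assert z <= math.exp(-2 * gamma**2)
```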
This is fine and dandy, it says that if you have a true weak learner then the training error of AdaBoost vanishes exponentially fast in the number of boosting rounds. But what about generalization error? What we really care about is whether the hypothesis produced by boosting has low error on the original distribution as a whole, not just the training sample we started with.
One might expect that if you run boosting for more and more rounds, then it will eventually overfit the training data and its generalization accuracy will degrade. However, in practice this is not the case! The longer you boost, even if you get down to zero training error, the better generalization tends to be. For a long time this was sort of a mystery, and we’ll resolve the mystery in the sequel to this post. For now, we’ll close by showing a run of AdaBoost on some real world data.
The “adult” census dataset
The “adult” dataset is a standard dataset taken from the 1994 US census. It tracks a number of demographic and employment features (including gender, age, employment sector, etc.) and the goal is to predict whether an individual makes over $50k per year. Here are the first few lines from the training set.
39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K
50, Self-emp-not-inc, 83311, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 13, United-States, <=50K
38, Private, 215646, HS-grad, 9, Divorced, Handlers-cleaners, Not-in-family, White, Male, 0, 0, 40, United-States, <=50K
53, Private, 234721, 11th, 7, Married-civ-spouse, Handlers-cleaners, Husband, Black, Male, 0, 0, 40, United-States, <=50K
28, Private, 338409, Bachelors, 13, Married-civ-spouse, Prof-specialty, Wife, Black, Female, 0, 0, 40, Cuba, <=50K
37, Private, 284582, Masters, 14, Married-civ-spouse, Exec-managerial, Wife, White, Female, 0, 0, 40, United-States, <=50K
We perform some preprocessing of the data, so that the categorical examples turn into binary features. You can see the full details in the github repository for this post; here are the first few post-processed lines (my newlines added).
>>> from data import adult
>>> train, test = adult.load()
>>> train[:3]
[((39, 1, 0, 0, 0, 0, 0, 1, 0, 0, 13, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 2174, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1),
 ((50, 1, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 13, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1),
 ((38, 1, 1, 0, 0, 0, 0, 0, 0, 0, 9, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), -1)]
Now we can run boosting on the training data, and compute its error on the test data.
>>> from boosting import boost
>>> from data import adult
>>> from decisionstump import buildDecisionStump
>>> train, test = adult.load()
>>> weakLearner = buildDecisionStump
>>> rounds = 20
>>> h = boost(train, weakLearner, rounds)
Round 0, error 0.199
Round 1, error 0.231
Round 2, error 0.308
Round 3, error 0.380
Round 4, error 0.392
Round 5, error 0.451
Round 6, error 0.436
Round 7, error 0.459
Round 8, error 0.452
Round 9, error 0.432
Round 10, error 0.444
Round 11, error 0.447
Round 12, error 0.450
Round 13, error 0.454
Round 14, error 0.505
Round 15, error 0.476
Round 16, error 0.484
Round 17, error 0.500
Round 18, error 0.493
Round 19, error 0.473
>>> error(h, train)
0.153343
>>> error(h, test)
0.151711
This isn’t too shabby. I’ve tried running boosting for more rounds (a hundred) and the error doesn’t seem to improve by much. This implies that finding the best decision stump is not a weak learner (or at least it fails for this dataset), and we can see that indeed the training errors across rounds roughly tend to 1/2.
Though we have not compared our results above to any baseline, AdaBoost seems to work pretty well. This is kind of a meta point about theoretical computer science research. One spends years trying to devise algorithms that work in theory (and finding conditions under which we can get good algorithms in theory), but when it comes to practice we can’t do anything but hope the algorithms will work well. It’s kind of amazing that something like Boosting works in practice. It’s not clear to me that weak learners should exist at all, even for a given real world problem. But the results speak for themselves.
Next time we’ll get a bit deeper into the theory of boosting. We’ll derive the notion of a “margin” that quantifies the confidence of boosting in its prediction. Then we’ll describe (and maybe prove) a theorem that says if the “minimum margin” of AdaBoost on the training data is large, then the generalization error of AdaBoost on the entire distribution is small. The notion of a margin is actually quite a deep one, and it shows up in another famous machine learning technique called the Support Vector Machine. In fact, it’s part of some recent research I’ve been working on as well. More on that in the future.
If you’re dying to learn more about Boosting, but don’t want to wait for me, check out the book Boosting: Foundations and Algorithms, by Schapire and Freund.
Until next time!
And the year four birthday post
So Math ∩ Programming was actually born on June 12th 2011, but since my schedule is about to get super busy (more on that below) I’m writing my birthday post a month early.
First things first: I’m conducting a survey of my readers! If you want to help shape the future of Math ∩ Programming, please go take the survey. The results will be private, but I may at some point release some statistics about responses to some of the questions.
So my life is busy this summer. Here’s a few of the things I’ll be doing.
- I’m giving a poster at the Network Science 2015 workshop at Snowbird, Utah.
- I’m attending the huge ACM FCRC 2015 conference extravaganza. There will be 12 conferences in one convention center! I’d have reason to go to three or four of them, but they’re all timewise overlapping so I’ll probably approach it like a buffet and try a little piece of everything.
- I’m going to present a little paper at the ICML Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT ML, the best acronym) in France, and we’re working on beefing it up into a more substantial paper.
- I recently submitted my application for Hungarian citizenship, which I have by birthright, but which even after submission requires a lot of paperwork in a language I don’t speak very well.
- I’m rushing to hit some May and June paper deadlines, since…
- I’m getting married in June! So exciting!
With regards to my blog, I have a big fat list of promises I have not delivered on. Including,
- Finishing the series on linear programming.
- Getting to any interesting quantum algorithms.
- Machine-learny things like support vector machines, boosting, VC-dimension, community detection methods, etc.
- Topology-related stuff and persistent homology.
- Following the big blog map I drew last year.
The reason is that I’ve been spending some time on other blog-related projects I plan to announce in the coming months, as well as spending more and more time on a growing list of research projects, and of course on wedding planning. So sadly this will be a very slow summer for M∩P.
And finally, starting in the Fall I will be on the academic job market. I’ll likely post again when September comes around, but my anticipated graduation date is May 2016. So that’s going to be quite the roller coaster I’m sure.