Occam’s Razor and PAC-learning

So far our discussion of learning theory has consisted of seeing the definition of PAC-learning, tinkering with it, and working through simple examples of learnable concept classes. We’ve said that our real interest is in proving big theorems about what big classes of problems can and can’t be learned. One major tool for doing this with PAC is the concept of VC-dimension, but to set the stage we’re going to prove a simpler theorem that gives a nice picture of PAC-learning when your hypothesis class is small. In short, the theorem we’ll prove says that if you have a finite set of hypotheses to work with, and you can always find a hypothesis that’s consistent with the data you’ve seen, then you can learn efficiently. It’s obvious, but we want to quantify exactly how much data you need to ensure low error. This will also give us some concrete mathematical justification for philosophical claims about simplicity, and the theorems won’t change much when we generalize to VC-dimension in a future post.

The Chernoff bound

One tool we will need in this post, which shows up all across learning theory, is the Chernoff-Hoeffding bound. We covered this famous inequality in detail previously on this blog, but the part of that post we need is the following theorem that says, informally, that if you average a bunch of bounded random variables, then the probability this average random variable deviates from its expectation is exponentially small in the amount of deviation. Here’s the slightly simplified version we’ll use:

Theorem: Let $ X_1, \dots, X_m$ be independent random variables whose values are in the range $ [0,1]$. Call $ \mu_i = \mathbf{E}[X_i]$, $ X = \sum_i X_i$, and $ \mu = \mathbf{E}[X] = \sum_i \mu_i$. Then for all $ t > 0$,

$ \displaystyle \Pr(|X-\mu| > t) \leq 2e^{-2t^2 / m}$

One nice thing about the Chernoff bound is that it doesn’t matter how the variables are distributed. This is important because in PAC we need guarantees that hold for any distribution generating data. Indeed, in our case the random variables above will be individual examples drawn from the distribution generating the data. We’ll be bounding the probability that the empirical error of our hypothesis deviates from its true error by more than $ \varepsilon$, and we’ll want to bound that probability by $ \delta$, as in the definition of PAC-learning. Since both the amount of deviation and the number of samples $ m$ show up in the exponent, the trick is in balancing the two values to get what we want.
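To make that balancing act concrete, here is a minimal Python sketch (a back-of-the-envelope illustration, not part of the original argument; the function name is made up) that inverts the averaged form of the bound: if we want the empirical average of $ m$ variables with values in $ [0,1]$ to be within $ \varepsilon$ of its mean with probability at least $ 1-\delta$, it suffices to have $ 2e^{-2m\varepsilon^2} \leq \delta$.

```python
from math import ceil, log

def hoeffding_samples(eps, delta):
    # Hypothetical helper: smallest m with 2*exp(-2*m*eps**2) <= delta, so the
    # empirical average of m variables in [0,1] lands within eps of its mean
    # with probability at least 1 - delta.
    return ceil(log(2 / delta) / (2 * eps ** 2))

print(hoeffding_samples(0.05, 0.01))  # roughly 1060 samples
```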

Realizability and finite hypothesis classes

Let’s recall the PAC model once more. We have a distribution $ D$ generating labeled examples $ (x, c(x))$, where $ c$ is an unknown function coming from some concept class $ C$. Our algorithm can draw a polynomial number of these examples, and it must produce a hypothesis $ h$ from some hypothesis class $ H$ (which may or may not contain $ c$). The guarantee we need is that, for any $ \delta, \varepsilon > 0$, the algorithm produces a hypothesis whose error on $ D$ is at most $ \varepsilon$, and this event happens with probability at least $ 1-\delta$. All of these probabilities are taken over the randomness in the algorithm’s choices and the distribution $ D$, and it has to work no matter what the distribution $ D$ is.

Let’s introduce some simplifications. First, we’ll assume that the hypothesis and concept classes $ H$ and $ C$ are finite. Second, we’ll assume that $ C \subset H$, so that you can actually hope to find a hypothesis of zero error. This is called realizability. We’ll relax the realizability assumption later in this post (and finiteness when we get to VC-dimension), but for now they make the analysis a bit cleaner. Finally, we’ll assume that we have an algorithm which, when given labeled examples, can find in polynomial time a hypothesis $ h \in H$ that is consistent with every example.

These assumptions give a trivial learning algorithm: draw a bunch of examples and output any consistent hypothesis. The question is, how many examples do we need to guarantee that the hypothesis we find has the prescribed generalization error? It will certainly grow with $ 1 / \varepsilon$, but we need to ensure it will only grow polynomially fast in this parameter. Indeed, realizability is such a strong assumption that we can prove a polynomial bound using even more basic probability theory than the Chernoff bound.

Theorem: An algorithm that efficiently finds a consistent hypothesis will PAC-learn any finite concept class provided it has at least $ m$ samples, where

$ \displaystyle m \geq \frac{1}{\varepsilon} \left ( \log |H| + \log \left ( \frac{1}{\delta} \right ) \right )$

Proof. All we need to do is bound the probability that a bad hypothesis (one with error more than $ \varepsilon$) is consistent with the given data. Now fix $ D, c, \delta, \varepsilon$, and draw $ m$ examples and let $ h$ be any hypothesis that is consistent with the drawn examples. Suppose that the bad thing happens, that $ \Pr_D(h(x) \neq c(x)) > \varepsilon$.

Because the examples are all drawn independently from $ D$, the chance that all $ m$ examples are consistent with $ h$ is

$ \displaystyle (1 - \Pr_{x \sim D}(h(x) \neq c(x)))^m < (1 - \varepsilon)^m$

What we’re saying here is, the probability that a specific bad hypothesis is consistent with all of your drawn examples is exponentially small in the number of samples (at a rate depending on the error tolerance). So if we apply the union bound, the probability that some bad hypothesis is consistent with the data is at most $ (1 - \varepsilon)^m S$, where $ S$ is the number of hypotheses the algorithm might produce.

A crude upper bound on the number of hypotheses you could produce is just the total number of hypotheses, $ |H|$. Even cruder, let’s use the inequality $ (1 - x) < e^{-x}$ to give the bound

$ \displaystyle (1 - \varepsilon)^m |H| < e^{-\varepsilon m} |H|$

Now we want to make sure that this probability, the probability of choosing a high-error (yet consistent) hypothesis, is at most $ \delta$. So we can set the above quantity less than $ \delta$ and solve for $ m$:

$ \displaystyle e^{-\varepsilon m} |H| \leq \delta$

Taking logs and solving for $ m$ gives the desired bound: $ -\varepsilon m + \log |H| \leq \log \delta$, i.e. $ m \geq \frac{1}{\varepsilon} \left( \log |H| + \log \frac{1}{\delta} \right)$.

$ \square$
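To see what the bound looks like numerically, here is a small sketch (illustrative only; the function name and the numbers are made up) that evaluates the sample bound from the theorem.

```python
from math import ceil, log

def occam_samples(num_hypotheses, eps, delta):
    # Sample bound from the theorem: m >= (1/eps) * (log|H| + log(1/delta)).
    return ceil((log(num_hypotheses) + log(1 / delta)) / eps)

# A class of a million hypotheses, 5% error tolerance, 99% confidence:
print(occam_samples(10**6, 0.05, 0.01))  # about 369 samples
```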

An obvious objection is: what if you aren’t working with a hypothesis class where you can guarantee that you’ll find a consistent hypothesis? Well, in that case we’ll need to inspect the definition of PAC again and reevaluate our measures of error. It turns out we’ll get a similar theorem as above, but with the stipulation that we’re only achieving error within epsilon of the error of the best available hypothesis.

But before we go on, this theorem has some deep philosophical interpretations. In particular, suppose that, before drawing your data, you could choose to work with one of two finite hypothesis classes $ H_1, H_2$, with $ |H_1| > |H_2|$. If you can find a consistent hypothesis no matter which hypothesis class you use, then this theorem says that your generalization guarantees are much stronger if you start with the smaller hypothesis class.

In other words, all else being equal, the smaller set of hypotheses is better. For this reason, the theorem is sometimes called the “Occam’s Razor” theorem. We’ll see a generalization of this theorem in the next section.

Unrealizability and an extra epsilon

Now suppose that $H$ doesn’t contain any hypotheses with error less than $ \varepsilon$. What can we hope to do in this case? One thing is that we can hope to find a hypothesis whose error is within $ \varepsilon$ of the minimal error of any hypothesis in $ H$. Moreover, we might not have any consistent hypotheses for some data samples! So rather than require an algorithm to produce an $ h \in H$ that is perfectly consistent with the data, we just need it to produce a hypothesis that has minimal empirical error, in the sense that it is as close to consistent as the best hypothesis in $ H$ on the data you happened to draw. It seems like such a strategy would find you a hypothesis that’s close to the best one in $ H$, but we need to prove it and determine how many samples we need to draw to succeed.

So let’s make some definitions to codify this. For a given hypothesis, call $ \textup{err}(h)$ the true error of $ h$ on the distribution $ D$. Our assumption is that there may be no hypotheses in $ H$ with $ \textup{err}(h) = 0$. Next we’ll call $ \hat{\textup{err}}(h)$ the empirical error of $ h$, that is, the fraction of the drawn examples that $ h$ labels incorrectly.

Definition: We say a concept class $ C$ is agnostically learnable using the hypothesis class $ H$ if for all $ c \in C$ and all distributions $ D$ (and all $ \varepsilon, \delta > 0$), there is a learning algorithm $ A$ which produces a hypothesis $ h$ that with probability at least $ 1 - \delta$ satisfies

$ \displaystyle \text{err}(h) \leq \min_{h' \in H} \text{err}(h') + \varepsilon$

and everything runs in the same sort of polynomial time as for vanilla PAC-learning. This is called the agnostic setting or the unrealizable setting, because we make no assumption that $ H$ contains a hypothesis of zero error.

We seek to prove that all concept classes are agnostically learnable with a finite hypothesis class, provided you have an algorithm that can minimize empirical error. But actually we’ll prove something stronger.

Theorem: Let $ H$ be a finite hypothesis class and $ m$ the number of samples drawn. Then for any $ \delta > 0$, with probability $ 1-\delta$ the following holds:

$ \displaystyle \forall h \in H, \hat{\text{err}}(h) \leq \text{err}(h) + \sqrt{\frac{\log |H| + \log(2 / \delta)}{2m}}$

In other words, we can precisely quantify how the empirical error converges to the true error as the number of samples grows. But this holds for all hypotheses in $ H$, so this provides a uniform bound of the difference between true and empirical error for the entire hypothesis class.

Proving this requires the Chernoff bound. Fix a single hypothesis $ h \in H$. If you draw an example $ x$, call $ Z$ the random variable which is 1 when $ h(x) \neq c(x)$, and 0 otherwise. So if you draw $ m$ samples and call the $ i$-th variable $ Z_i$, the empirical error of the hypothesis is $ \frac{1}{m}\sum_i Z_i$. Moreover, the true error is the expectation of this quantity, since $ \mathbf{E}[\frac{1}{m} \sum_i Z_i] = \frac{1}{m} \sum_i \mathbf{E}[Z_i] = \Pr_{x \sim D}(h(x) \neq c(x)) = \text{err}(h)$.

So what we’re asking is the probability that the empirical error deviates from the true error by a lot. Let’s call “a lot” some parameter $ \varepsilon/2 > 0$ (the reason for dividing by two will become clear in the corollary to the theorem). Then plugging things into the Chernoff-Hoeffding bound gives a bound on the probability of the “bad event,” that the empirical error deviates too much.

$ \displaystyle \Pr[|\hat{\text{err}}(h) - \text{err}(h)| > \varepsilon / 2] < 2e^{-\frac{\varepsilon^2m}{2}}$

Now to get a bound on the probability that some hypothesis is bad, we apply the union bound and use the fact that $ |H|$ is finite to get

$ \displaystyle \Pr[\exists h \in H, \; |\hat{\text{err}}(h) - \text{err}(h)| > \varepsilon / 2] < 2|H|e^{-\frac{\varepsilon^2m}{2}}$

Now say we want to bound this probability by $ \delta$. We set $ 2|H|e^{-\varepsilon^2m/2} \leq \delta$, solve for $ m$, and get

$ \displaystyle m \geq \frac{2}{\varepsilon^2}\left ( \log |H| + \log \frac{2}{\delta} \right )$

This gives us a concrete quantification of the tradeoff between $ m, \varepsilon, \delta, $ and $ |H|$. Indeed, if we pick $ m$ to be this large, then solving for $ \varepsilon / 2$ gives the exact inequality from the theorem.

$ \square$
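For concreteness, here is a small sketch (illustrative only; the function names and the numbers are made up) that evaluates both directions of the tradeoff: the uniform deviation guaranteed by the theorem for a given $ m$, and the number of samples sufficient for a target $ \varepsilon$.

```python
from math import ceil, log, sqrt

def uniform_deviation(num_hypotheses, m, delta):
    # The deviation term from the theorem: sqrt((log|H| + log(2/delta)) / (2m)).
    return sqrt((log(num_hypotheses) + log(2 / delta)) / (2 * m))

def agnostic_samples(num_hypotheses, eps, delta):
    # Sample bound from the corollary below: m >= (2/eps^2)(log|H| + log(2/delta)).
    return ceil(2 * (log(num_hypotheses) + log(2 / delta)) / eps ** 2)

m = agnostic_samples(10**6, 0.05, 0.01)
print(m)                                   # about 15292 samples
print(uniform_deviation(10**6, m, 0.01))   # about 0.025, i.e. eps / 2
```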

Now we know that if we pick enough samples (polynomially many in all the parameters), and our algorithm can find a hypothesis $ h$ of minimal empirical error, then we get the following corollary:

Corollary: For any $ \varepsilon, \delta > 0$, the algorithm that draws $ m \geq \frac{2}{\varepsilon^2}(\log |H| + \log(2/ \delta))$ examples and finds any hypothesis of minimal empirical error will, with probability at least $ 1-\delta$, produce a hypothesis that is within $ \varepsilon$ of the best hypothesis in $ H$.

Proof. By the previous theorem, with the desired probability, for all $ h' \in H$ we have $ |\hat{\text{err}}(h') - \text{err}(h')| < \varepsilon/2$. Let $ g = \arg\min_{h' \in H} \text{err}(h')$ be a hypothesis of minimal true error, and let $ h$ be the hypothesis the algorithm outputs. Because $ h$ has minimal empirical error, $ \hat{\text{err}}(h) \leq \hat{\text{err}}(g)$, and so

$ \displaystyle \text{err}(h) < \hat{\text{err}}(h) + \varepsilon/2 \leq \hat{\text{err}}(g) + \varepsilon/2 < \text{err}(g) + \varepsilon$

In words, the true error of the algorithm’s hypothesis is within $ \varepsilon$ of the error of the best hypothesis in $ H$, as desired.

$ \square$
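As a sanity check on the corollary, here is a toy simulation (entirely hypothetical: the threshold class, the noise rate, and the sample size are made up for illustration). We take a finite class of threshold classifiers on $ [0,1]$, generate labels from a noisy threshold so that no hypothesis has zero error, run empirical risk minimization, and compare the true error of the chosen hypothesis to the best error in the class.

```python
import random

random.seed(0)
thresholds = [i / 100 for i in range(101)]   # |H| = 101 threshold classifiers
noise = 0.1                                  # labels are flipped 10% of the time

def true_error(t):
    # Exact true error of h_t(x) = [x >= t] against the noisy source: on the
    # region where h_t disagrees with the clean threshold at 0.3 it errs with
    # probability 1 - noise, elsewhere it errs with probability noise.
    disagree = abs(t - 0.3)
    return disagree * (1 - noise) + (1 - disagree) * noise

def draw_sample(m):
    data = []
    for _ in range(m):
        x = random.random()
        label = x >= 0.3
        if random.random() < noise:
            label = not label
        data.append((x, label))
    return data

data = draw_sample(5000)

def empirical_error(t):
    return sum((x >= t) != y for x, y in data) / len(data)

erm = min(thresholds, key=empirical_error)   # any hypothesis of minimal empirical error
best = min(thresholds, key=true_error)
print(true_error(erm), true_error(best))     # the two errors should differ by a small eps
```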

Next time

Both of these theorems tell us something about the generalization guarantees for learning with hypothesis classes of a certain size. But this isn’t exactly the most reasonable measure of the “complexity” of a family of hypotheses. For example, one could have a hypothesis class with a billion intervals on $ \mathbb{R}$ (say you’re trying to learn intervals, or thresholds, or something easy), and the guarantees we proved in this post are nowhere near optimal.

So the question is: say you have a potentially infinite class of hypotheses, but the hypotheses are all “simple” in some way. First, what is the right notion of simplicity? And second, how can you get guarantees based on that analogous to these? We’ll discuss this next time when we define the VC-dimension.

Until then!

Probabilistic Bounds — A Primer

Probabilistic arguments are a key tool for the analysis of algorithms in machine learning theory and probability theory. They also assume a prominent role in the analysis of randomized and streaming algorithms, where one imposes a restriction on the amount of storage space an algorithm is allowed to use for its computations (usually sublinear in the size of the input).

While a whole host of probabilistic arguments are used, one theorem in particular (or family of theorems) is ubiquitous: the Chernoff bound. In its simplest form, the Chernoff bound gives an exponential bound on the deviation of sums of random variables from their expected value.

This is perhaps most important to algorithm analysis in the following mindset. Say we have a program whose output is a random variable $ X$. Moreover suppose that the expected value of $ X$ is the correct output of the algorithm. Then we can run the algorithm multiple times and take a median (or some sort of average) across all runs. The probability that the algorithm gives a wildly incorrect answer is the probability that more than half of the runs give values which are wildly far from their expected value. Chernoff’s bound ensures this will happen with small probability.

So this post is dedicated to presenting the main versions of the Chernoff bound that are used in learning theory and randomized algorithms. Unfortunately the proof of the Chernoff bound in its full glory is beyond the scope of this blog. However, we will give short proofs of weaker, simpler bounds as a straightforward application of this blog’s previous work laying down the theory.

If the reader has not yet intuited it, this post will rely heavily on the mathematical formalisms of probability theory. We will assume our reader is familiar with the material from our first probability theory primer, and it certainly wouldn’t hurt to have read our conditional probability theory primer, though we won’t use conditional probability directly. We will refrain from using measure-theoretic probability theory entirely (some day my colleagues in analysis will like me, but not today).

Two Easy Bounds of Markov and Chebyshev

The first bound we’ll investigate is almost trivial in nature, but comes in handy. Suppose we have a random variable $ X$ which is non-negative (as a function). Markov’s inequality is the statement that, for any constant $ a > 0$,

$ \displaystyle \textup{P}(X \geq a) \leq \frac{\textup{E}(X)}{a}$

In words, the probability that $ X$ grows larger than some fixed constant is bounded by a quantity that is inversely proportional to the constant.

The proof is quite simple. Let $ \chi_a$ be the indicator random variable for the event that $ X \geq a$ ($ \chi_a = 1$ when $ X \geq a$ and zero otherwise). As with all indicator random variables, the expected value of $ \chi_a$ is the probability that the event happens (if this is mysterious, use the definition of expected value). So $ \textup{E}(\chi_a) = \textup{P}(X \geq a)$, and linearity of expectation allows us to include a factor of $ a$:

$ \textup{E}(a \chi_a) = a \textup{P}(X \geq a)$

The rest of the proof is simply the observation that $ \textup{E}(a \chi_a) \leq \textup{E}(X)$. Indeed, as random variables we have the inequality $ a \chi_a \leq X$. Whenever $ X < a$, the value of $ a \chi_a = 0$ while $ X$ is nonnegative by definition. And whenever $ a \leq X$, the value of $ a \chi_a = a$ while $ X$ is by assumption at least $ a$. It follows that $ \textup{E}(a \chi_a) \leq \textup{E}(X)$.

This last point is a simple property of expectation we omitted from our first primer. It usually goes by monotonicity of expectation, and we prove it here. First, if $ X \geq 0$ then $ \textup{E}(X) \geq 0$ (this is trivial). Second, if $ 0 \leq X \leq Y$, then define a new random variable $ Z = Y-X$. Since $ Z \geq 0$ and using linearity of expectation, it must be that $ \textup{E}(Z) = \textup{E}(Y) - \textup{E}(X) \geq 0$. Hence $ \textup{E}(X) \leq \textup{E}(Y)$. Note that we do require that $ X$ has a finite expected value for this argument to work, but if this is not the case then Markov’s inequality is nonsensical anyway.

Markov’s inequality by itself is not particularly impressive or useful. For example, if $ X$ is the number of heads in a hundred coin flips, Markov’s inequality assures us that the probability of getting at least 99 heads is at most 50/99, which is about 1/2. Shocking. We know that the true probability is much closer to $ 2^{-100}$, so Markov’s inequality is a bust.
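Here is a quick numerical check of that claim (a throwaway sketch, not part of the original argument): the exact probability of at least 99 heads in 100 fair flips next to the Markov bound.

```python
from math import comb

n = 100
exact = sum(comb(n, k) for k in range(99, n + 1)) / 2 ** n   # P(X >= 99), about 8e-29
markov = (n / 2) / 99                                        # E(X) / 99, about 0.505
print(exact, markov)
```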

However, it does give us a more useful bound as a corollary. This bound is known as Chebyshev’s inequality, and its use is sometimes referred to as the second moment method because it gives a bound based on the variance of a random variable (instead of the expected value, the “first moment”).

The statement is as follows.

Chebyshev’s Inequality: Let $ X$ be a random variable with finite expected value and positive variance. Then we can bound the probability that $ X$ deviates from its expected value by a quantity that is proportional to the variance of $ X$. In particular, for any $ \lambda > 0$,

$ \displaystyle \textup{P}(|X - \textup{E}(X)| \geq \lambda) \leq \frac{\textup{Var}(X)}{\lambda^2}$

And without any additional assumptions on $ X$, this bound is sharp.

Proof. The proof is a simple application of Markov’s inequality. Let $ Y = (X - \textup{E}(X))^2$, so that $ \textup{E}(Y) = \textup{Var}(X)$. Then by Markov’s inequality

$ \textup{P}(Y \geq \lambda^2) \leq \frac{\textup{E}(Y)}{\lambda^2}$

Since $ Y$ is nonnegative, $ |X - \textup{E}(X)| = \sqrt{Y}$, and so $ \textup{P}(Y \geq \lambda^2) = \textup{P}(|X - \textup{E}(X)| \geq \lambda)$. The theorem is proved. $ \square$

Chebyshev’s inequality shows up in so many different places (and usually in rather dry, technical bits), that it’s difficult to give a good example application.  Here is one that shows up somewhat often.

Say $ X$ is a nonnegative integer-valued random variable, and we want to argue about when $ X = 0$ versus when $ X > 0$, given that we know $ \textup{E}(X)$. No matter how large $ \textup{E}(X)$ is, it can still be possible that $ \textup{P}(X = 0)$ is arbitrarily close to 1. As a colorful example, let $ X$ be the number of alien lifeforms discovered in the next ten years. One might argue that $ \textup{E}(X)$ can be arbitrarily large: if some unexpected scientific and technological breakthroughs occur tomorrow, we could discover an unbounded number of lifeforms. On the other hand, we are very likely not to discover any, and probability theory allows for such a random variable to exist.

If we know everything about $ \textup{Var}(X)$, however, we can get more informed bounds.

Theorem: If $ \textup{E}(X) \neq 0$, then $ \displaystyle \textup{P}(X = 0) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$.

Proof. Simply choose $ \lambda = \textup{E}(X)$ and apply Chebyshev’s inequality.

$ \displaystyle \textup{P}(X = 0) \leq \textup{P}(|X - \textup{E}(X)| \geq \textup{E}(X)) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$

The first inequality follows because the event $ X = 0$ is contained in the event $ |X - \textup{E}(X)| \geq \textup{E}(X)$. $ \square$

This theorem says more. If we know that $ \textup{Var}(X)$ is significantly smaller than $ \textup{E}(X)^2$, then $ X > 0$ is more certain to occur. More precisely, and more computationally minded, suppose we have a sequence of random variables $ X_n$ so that $ \textup{E}(X_n) \to \infty$ as $ n \to \infty$. Then the theorem says that if $ \textup{Var}(X_n) = o(\textup{E}(X_n)^2)$, then $ \textup{P}(X_n > 0) \to 1$. Remembering one of our very early primers on asymptotic notation, $ f = o(g)$ means that $ f$ grows asymptotically slower than $ g$, and in terms of this fraction $ \textup{Var}(X) / \textup{E}(X)^2$, this means that the denominator dominates the fraction so that the whole thing tends to zero.

The Chernoff Bound

The Chernoff bound takes advantage of an additional hypothesis: our random variable is a sum of independent coin flips. We can use this to get exponential bounds on the deviation of the sum. More rigorously,

Theorem: Let $ X_1 , \dots, X_n$ be independent random $ \left \{ 0,1 \right \}$-valued variables, and let $ X = \sum X_i$. Suppose that $ \mu = \textup{E}(X)$. Then the probability that $ X$ deviates from $ \mu$ by more than a factor of $ \lambda > 0$ is bounded from above:

$ \displaystyle \textup{P}(X > (1+\lambda)\mu) \leq \frac{e^{\lambda \mu}}{(1+\lambda)^{(1+\lambda)\mu}}$

The proof is beyond the scope of this post, but we point the interested reader to these lecture notes.

We can apply the Chernoff bound in an easy example. Say all $ X_i$ are fair coin flips, and we’re interested in the probability of getting more than 3/4 of the coins heads. Here $ \mu = n/2$ and $ \lambda = 1/2$, so the probability is bounded from above by

$ \displaystyle \left ( \frac{e}{(3/2)^3} \right )^{n/4} = \left ( \frac{8e}{27} \right )^{n/4} \approx (0.95)^n$

So as the number of coin flips grows, the probability of seeing more than 3/4 heads decays exponentially fast to zero. This is important because if we want to test to see if, say, the coins are biased toward flipping heads, we can simply run an experiment with $ n$ sufficiently large. If we observe that more than 3/4 of the flips give heads, then we proclaim the coins are biased and we can be assured we are correct with high probability. Of course, after seeing 3/4 or more heads we’d be really confident that the coin is biased. A more realistic approach is to define some $ \varepsilon$ that is small enough so as to say, “if some event occurs whose probability is smaller than $ \varepsilon$, then I call shenanigans.” Then decide how many coins and what bound one would need to make the bad event have probability approximately $ \varepsilon$. Finding this balance is one of the more difficult aspects of probabilistic algorithms, and as we’ll see later all of these quantities are left as variables and the correct values are discovered in the course of the proof.
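Here is a rough numerical sketch (illustrative only, with made-up values of $ n$) comparing the Chernoff bound above to the exact binomial tail probability.

```python
from math import comb, e

def chernoff_bound(n):
    # The bound above with mu = n/2 and lambda = 1/2: (8e/27)^(n/4).
    return (8 * e / 27) ** (n / 4)

def exact_tail(n):
    # P(more than 3n/4 heads in n fair flips), computed exactly.
    return sum(comb(n, k) for k in range(3 * n // 4 + 1, n + 1)) / 2 ** n

for n in (20, 100, 400):
    print(n, chernoff_bound(n), exact_tail(n))
```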

Chernoff-Hoeffding Inequality

The Hoeffding inequality (named after the Finnish-born statistician Wassily Hoeffding) is a variant of the Chernoff bound, but often the bounds are collectively known as Chernoff-Hoeffding inequalities. The form that Hoeffding is known for can be thought of as a simplification and a slight generalization of Chernoff’s bound above.

Theorem: Let $ X_1, \dots, X_n$ be independent random variables whose values are within some range $ [a,b]$. Call $ \mu_i = \textup{E}(X_i)$, $ X = \sum_i X_i$, and $ \mu = \textup{E}(X) = \sum_i \mu_i$. Then for all $ t > 0$,

$ \displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2t^2 / (n(b-a)^2)}$

For example, if we are interested in the sum of $ n$ rolls of a fair six-sided die, then the probability that we deviate from $ (7/2)n$ by more than $ 5 \sqrt{n \log n}$ is bounded by $ 2e^{-2 \log n} = 2/n^2$. Supposing we want to know how many rolls we need so that this failure probability drops below 0.01, we just do the algebra:

$ 2n^{-2} < 0.01$
$ n^2 > 200$
$ n > \sqrt{200} \approx 14$

So with 15 rolls we can be confident that the sum of the rolls will lie between 20 and 85. It’s not the best possible bound we could come up with, because we’re completely ignoring the known structure of dice rolls (that they follow a uniform distribution!). The benefit is that it’s a quick and easy bound that works for any independent random variables bounded in that range.
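Here is a small sketch (illustrative, reusing the same numbers as above) that evaluates the Hoeffding bound for the dice example and prints the resulting interval for 15 rolls.

```python
from math import exp, log, sqrt

def dice_bound(n, t):
    # Hoeffding bound for the sum of n fair die rolls, values in [1, 6].
    return 2 * exp(-2 * t ** 2 / (n * (6 - 1) ** 2))

n = 15
t = 5 * sqrt(n * log(n))
print(dice_bound(n, t))            # 2 / n^2, about 0.0089 < 0.01
print(3.5 * n - t, 3.5 * n + t)    # roughly the interval (20.6, 84.4)
```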

Another version of this theorem concerns the average of the $ X_i$, and is only a minor modification of the above.

Theorem: If $ X_1, \dots, X_n$ are as above, and $ X = \frac{1}{n} \sum_i X_i$, with $ \mu = \frac{1}{n}(\sum_i \mu_i)$, then for all $ t > 0$, we get the following bound

$ \displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2nt^2/(b-a)^2}$

The only difference here is the extra factor of $ n$ in the exponent. So the probability of deviation is exponentially small both in the square of the deviation ($ t^2$) and in the number of trials.
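As a quick illustration (a made-up simulation, not from the original argument), here is a sketch that estimates the deviation probability for the average of $ n$ fair coin flips and compares it to the bound. The observed frequency should sit below the bound, and typically well below it, since the bound ignores the shape of the distribution.

```python
import random
from math import exp

random.seed(1)
n, t, trials = 200, 0.1, 20000
deviations = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) > t
    for _ in range(trials)
)
print(deviations / trials, 2 * exp(-2 * n * t ** 2))  # observed roughly 0.004, bound about 0.037
```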

This theorem comes up very often in learning theory, in particular to prove Boosting works. Mathematicians will joke about how all theorems in learning theory are just applications of Chernoff-Hoeffding-type bounds. We’ll of course be seeing it again as we investigate boosting and the PAC-learning model in future posts, so we’ll see the theorems applied to their fullest extent then.

Until next time!