The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm


Christos Papadimitriou, who studies multiplicative weights in the context of biology.

Hard to believe

Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of $ n$ objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
   Pick an object at random proportionally to the weights
   Some event happens
   Increase the weight of the chosen object if it does well in the event
   Otherwise decrease the weight

The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step $ t$ is $ w_t$ before the event, and $ G$ represents how well the object did in the event, then we’ll update the weight according to the rule:

$ \displaystyle w_{t+1} = w_t (1 + G)$

Think of this as increasing the weight by a small multiple of the object’s performance on a given round.

Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.

This is how we use multiplicative weights: if we number the experts $ 1, \dots, N$, we give each expert a weight $ w_i$ which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss $ G$. Then we update the weight of the chosen expert by multiplying it by $ (1 + G / 1000)$. Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be made precise later).
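
Here’s a minimal sketch of that experts game in Python. Everything about the simulated environment below (the per-expert reliability numbers and the Gaussian daily gains) is made up purely for illustration; the only piece taken from the description above is the update rule $ (1 + G/1000)$.

import random

def pickExpert(weights):
    # sample an index proportionally to the weights
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

numExperts, numDays = 10, 500
weights = [1.0] * numExperts
# hypothetical "true" reliability of each expert, unknown to the algorithm
reliability = [random.uniform(-0.2, 0.8) for _ in range(numExperts)]

for day in range(numDays):
    expert = pickExpert(weights)
    # the day's gain, clamped to the range [-1000, 1000] dollars
    gain = max(-1000, min(1000, random.gauss(1000 * reliability[expert], 300)))
    weights[expert] *= (1 + gain / 1000)

# the heaviest expert is our guess at the most reliable one
print(max(range(numExperts), key=lambda i: weights[i]))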

In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

[Screenshot linking to the interactive MWUA demo]

This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

  1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
  2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example. The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
  3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
  4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
  5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
  6. You can get easy $ \log(n)$-approximation algorithms for many NP-hard problems, such as set cover.

Additional, more technical examples can be found in this survey of Arora et al.

In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove its main theoretical guarantees, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository.

The generic MWUA algorithm

Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality.

In general we have some set $ X$ of objects and some set $ Y$ of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table $ M$ whose rows are objects, whose columns are outcomes, and whose $ i,j$ entry $ M(i,j)$ is the reward produced by object $ x_i$ when the outcome is $ y_j$. We will also write this as $ M(x, y)$ for object $ x$ and outcome $ y$. The only assumption we’ll make on the rewards is that the values $ M(x, y)$ are bounded by some small constant $ B$ (by small I mean $ B$ should not require exponentially many bits to write down as compared to the size of $ X$). In symbols, $ M(x,y) \in [0,B]$. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table $ M$ just exists for analysis, and the algorithm does not know its values. Moreover, while the values in $ M$ are static, the choice of outcome $ y$ for a given round may be nondeterministic.

The MWUA algorithm randomly chooses an object $ x \in X$ in every round, observing the outcome $ y \in Y$, and collecting the reward $ M(x,y)$ (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round.

Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   ...

We define for object $ x$ a nonnegative number $ w_x$ we call a “weight.” The weights will change over time so we’ll also sub-script a weight with a round number $ t$, i.e. $ w_{x,t}$ is the weight of object $ x$ in round $ t$. Initially, all the weights are $ 1$. Then MWUA continues in rounds. We start each round by drawing an object at random with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      thisRoundReward = reward(chosenObject, outcome)

      ...

Sampling objects in this way is the same as associating a distribution $ D_t$ to each round, where if $ S_t = \sum_{x \in X} w_{x,t}$ then the probability of drawing $ x$, which we denote $ D_t(x)$, is $ w_{x,t} / S_t$. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.

Next comes the weight update step. Let’s call our learning rate variable parameter $ \varepsilon$. In round $ t$ say we have object $ x_t$ and outcome $ y_t$, then the reward is $ M(x_t, y_t)$. We update the weight of the chosen object $ x_t$ according to the formula:

$ \displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)$

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects $ x \in X$. This turns into the following Python snippet, where we hide the division by $ B$ into the choice of learning rate:

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   cumulativeReward = 0
   outcomes = []

   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      outcomes.append(outcome)
      thisRoundReward = reward(chosenObject, outcome)
      cumulativeReward += thisRoundReward

      for i in range(len(weights)):
         weights[i] *= (1 + learningRate * reward(objects[i], outcome))

   # return the weights along with the history, which the LP solver below uses
   return weights, cumulativeReward, outcomes

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round $ t$ can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post.

But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus $ \log(n)$, where $ n$ is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:

  1. Define potential function, in our case $ S_t$.
  2. State what seems like trivial facts about the potential function to write $ S_{t+1}$ in terms of $ S_t$, and hence get general information about $ S_T$ for some large $ T$.
  3. Theorem is proved.
  4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill.

In this proof our potential function is the sum of the weights of the objects in a given round, $ S_t = \sum_{x \in X} w_{x, t}$. Now the lemma.

Lemma: Let $ B$ be the bound on the size of the rewards, and $ 0 < \varepsilon < 1/2$ a learning parameter. Recall that $ D_t(x)$ is the probability that MWUA draws object $ x$ in round $ t$. Write the expected reward for MWUA for round $ t$ as the following (using only the definition of expected value):

$ \displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)$

 Then the claim of the lemma is:

$ \displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}$

Proof. Expand $ S_{t+1} = \sum_{x \in X} w_{x, t+1}$ using the definition of the MWUA update:

$ \displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)$

Now distribute $ w_{x, t}$ and split into two sums:

$ \displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)$

Using the fact that $ D_t(x) = \frac{w_{x,t}}{S_t}$, we can replace $ w_{x,t}$ with $ D_t(x) S_t$, which allows us to get $ R_t$

$ \displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}$

And then using the fact that $ (1 + x) \leq e^x$ (Taylor series), we can bound the last expression by $ S_te^{\varepsilon R_t / B}$, as desired.

$ \square$

Now using the lemma, we can get a hold on $ S_T$ for a large $ T$, namely that

$ \displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}$

If $ |X| = n$ then $ S_1=n$, simplifying the above. Moreover, the sum of the weights in round $ T$ is certainly greater than any single weight, so that for every fixed object $ x \in X$,

$ \displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}$

(The second inequality follows from the update rule together with the fact that $ 1 + \varepsilon z \geq (1+\varepsilon)^z$ for $ z \in [0,1]$.)

Squeezing $ S_T$ between these two inequalities and taking logarithms (to simplify the exponents) gives

$ \displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t$

Multiply through by $ B$, divide by $ \varepsilon$, rearrange, and use the fact that when $ 0 < \varepsilon < 1/2$ we have $ \log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2$ (Taylor series) to get

$ \displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}$

The bracketed term is the payoff of object $ x$, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object $ x \in X$, and hence to the best one. This proves the theorem.

$ \square$

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity $ B \log(n) / \varepsilon$ hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed $ \delta$. The answer is roughly: let $ \varepsilon = O(\delta / B)$ and pick $ T = O(B^2 \log(n) / \delta^2)$. See this survey for more.
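
If you want a ballpark for these settings in code, here is a small helper; the big-O hides unspecified constants, so treat the output as order-of-magnitude guidance rather than an exact prescription.

import math

def suggestParameters(B, n, delta):
    # epsilon = O(delta / B) and T = O(B^2 log(n) / delta^2), with the
    # unspecified constants simply set to 1 here
    epsilon = delta / B
    numRounds = math.ceil(B**2 * math.log(n) / delta**2)
    return epsilon, numRounds

# e.g. rewards bounded by B=10, n=50 objects, target average error delta=0.5
print(suggestParameters(10, 50, 0.5))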

MWUA for linear programs

Now we’ll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product $ c \cdot x$, where $ c$ is a fixed vector and $ x = (x_1, x_2, \dots, x_n)$ is a vector of non-negative variables the algorithm gets to choose. The choices for $ x$ are also constrained by a set of $ m$ linear inequalities, $ A_i \cdot x \geq b_i$, where $ A_i$ is a fixed vector and $ b_i$ is a scalar for $ i = 1, \dots, m$. This is usually summarized by putting all the $ A_i$ in a matrix, $ b_i$ in a vector, as

$ x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}$

We can further simplify the constraints by assuming we know the optimal value $ Z = c \cdot x_{\textup{OPT}}$ in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint $ Ax \geq b$, the “easy feasible region” of possible $ x$’s includes $ \{ x \mid x \geq 0, c \cdot x = Z \}$.

In order to fit linear programming into the MWUA framework we have to define two things.

  1. The objects: the set of linear inequalities $ A_i \cdot x \geq b_i$.
  2. The rewards: the error of a constraint for a special input vector $ x_t$.

Number 2 is curious (why would we give a reward for error?) but it’s crucial and we’ll discuss it momentarily.

The special input $ x_t$ depends on the weights in round $ t$ (which is allowed, recall). Specifically, if the weights are $ w = (w_1, \dots, w_m)$, we ask for a vector $ x_t$ in our “easy feasible region” which satisfies

$ \displaystyle (A^T w) \cdot x_t \geq w \cdot b$

For this post we call the implementation of procuring such a vector the “oracle,” since it can be seen as the black-box problem of, given a vector $ \alpha$ and a scalar $ \beta$ and a convex region $ R$, finding a vector $ x \in R$ satisfying $ \alpha \cdot x \geq \beta$. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, $ \alpha = A^T w, \beta = w \cdot b$, are particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector $ A^T w$ is a weighted average of the constraints in $ A$, and $ w \cdot b$ is a weighted average of the thresholds. So this inequality is a “weighted average” inequality (a nonnegative combination of the original constraints, which becomes a convex combination after normalizing by the total weight). In particular, if no such $ x$ exists, then the original linear program has no solution. Indeed, given a solution $ x^*$ to the original linear program, each constraint $ A_i \cdot x^* \geq b_i$ still holds after multiplying both sides by the nonnegative weight $ w_i$, and summing these over $ i$ gives exactly $ (A^T w) \cdot x^* \geq w \cdot b$.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don’t waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA’s attention on them.

At the end, our final output is an average of the $ x_t$ produced in each round, i.e. $ x^* = \frac{1}{T}\sum_t x_t$. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We’ll spend the rest of this post implementing the scheme outlined above.

Implementing the oracle

Fix the convex region $ R = \{ c \cdot x = Z, x \geq 0 \}$ for a known optimal value $ Z$. Define $ \textup{oracle}(\alpha, \beta)$ as the problem of finding an $ x \in R$ such that $ \alpha \cdot x \geq \beta$.

For the case of this linear region $ R$, we can simply find the index $ i$ which maximizes $ \alpha_i Z / c_i$. If this value exceeds $ \beta$, we can return the vector with that value in the $ i$-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the “no solution” part, say $ n=2$ and you have $ x = (x_1, x_2)$ a solution to $ \alpha \cdot x \geq \beta$. Then for whichever index makes $ \alpha_i Z / c_i$ bigger, say $ i=1$, you can increase $ \alpha \cdot x$ without changing $ c \cdot x = Z$ by replacing $ x_1$ with $ x_1 + (c_2/c_1)x_2$ and $ x_2$ with zero. I.e., we’re moving the solution $ x$ along the line $ c \cdot x = Z$ until it reaches a vertex of the region bounded by $ c \cdot x = Z$ and $ x \geq 0$. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.

import numpy

class InfeasibleException(Exception):
    pass

def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle

Implementing the core solver

The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of $ c$.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to $ A$ and $ b$. Note that neither $ c$ nor the optimal value show up here.

    def reward(i, specialVector):
        constraint = A[i]
        threshold = b[i]
        return threshold - numpy.dot(constraint, specialVector)

    def observeOutcome(_, weights, __):
        weights = numpy.array(weights)
        weightedVector = A.transpose().dot(weights)
        weightedThreshold = weights.dot(b)
        return oracle(weightedVector, weightedThreshold)

Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is not sophisticated (though it could be made more so). It takes a max range for the search and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.

def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0.     0.987  1.026]
3.00000000425
[  5.20000072e-02   8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

Discussion

One of the nice aspects of MWUA is that it’s completely transparent. If you want to know why a decision was made, you can simply look at the weights and the history of rewards of the objects. There’s also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology.

This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an “online” linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.

The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.

If you’ve been reading this entire post screaming “this is just gradient descent!” then you’re right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.

Even though so many people have been advocating for MWUA over the past decade, it’s surprising that it doesn’t show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn’t even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don’t have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven’t made it to classrooms yet.

Until next time!

Big Dimensions, and What You Can Do About It

Data is abundant, data is big, and big is a problem. Let me start with an example. Let’s say you have a list of movie titles and you want to learn their genre: romance, action, drama, etc. And maybe in this scenario IMDB doesn’t exist so you can’t scrape the answer. Well, the title alone is almost never enough information. One nice way to get more data is to do the following:

  1. Pick a large dictionary of words, say the most common 100,000 non stop-words in the English language.
  2. Crawl the web looking for documents that include the title of a film.
  3. For each film, record the counts of all other words appearing in those documents.
  4. Maybe remove instances of “movie” or “film,” etc.

After this process you have a length-100,000 vector of integers associated with each movie title. IMDB’s database has around 1.5 million listed movies, and if we have a 32-bit integer per vector entry, that’s 600 GB of data to get every movie.

One way to try to find genres is to cluster this (unlabeled) dataset of vectors, and then manually inspect the clusters and assign genres. With a really fast computer we could simply run an existing clustering algorithm on this dataset and be done. Of course, clustering 600 GB of data takes a long time, but there’s another problem. The geometric intuition that we use to design clustering algorithms degrades as the length of the vectors in the dataset grows. As a result, our algorithms perform poorly. This phenomenon is called the “curse of dimensionality” (“curse” isn’t a technical term), and we’ll return to the mathematical curiosities shortly.

A possible workaround is to try to come up with faster algorithms or be more patient. But a more interesting mathematical question is the following:

Is it possible to condense high-dimensional data into smaller dimensions and retain the important geometric properties of the data?

This goal is called dimension reduction. Indeed, all of the chatter on the internet is bound to encode redundant information, so for our movie title vectors it seems the answer should be “yes.” But the questions remain, how does one find a low-dimensional condensification? (Condensification isn’t a word, the right word is embedding, but embedding is overloaded so we’ll wait until we define it) And what mathematical guarantees can you prove about the resulting condensed data? After all, it stands to reason that different techniques preserve different aspects of the data. Only math will tell.

In this post we’ll explore this so-called “curse” of dimensionality, explain the formality of why it’s seen as a curse, and implement a wonderfully simple technique called “the random projection method” which preserves pairwise distances between points after the reduction. As usual, all of the code, data, and tests used in the making of this post are on Github.

Some curious issues, and the “curse”

We start by exploring the curse of dimensionality with experiments on synthetic data.

In two dimensions, take a circle centered at the origin with radius 1 and its bounding square.


The circle fills up most of the area in the square, in fact it takes up exactly $ \pi$ out of 4 which is about 78%. In three dimensions we have a sphere and a cube, and the ratio of sphere volume to cube volume is a bit smaller, $ 4 \pi /3$ out of a total of 8, which is just over 52%. What about in a thousand dimensions? Let’s try by simulation.

import random

def randUnitCube(n):
   return [(random.random() - 0.5)*2 for _ in range(n)]

def sphereCubeRatio(n, numSamples):
   randomSample = [randUnitCube(n) for _ in range(numSamples)]
   return sum(1 for x in randomSample if sum(a**2 for a in x) <= 1) / numSamples

The result is as we computed for small dimension,

>>> sphereCubeRatio(2,10000)
0.7857
>>> sphereCubeRatio(3,10000)
0.5196

And much smaller for larger dimension

>>> sphereCubeRatio(20,100000) # 100k samples
0.0
>>> sphereCubeRatio(20,1000000) # 1M samples
0.0
>>> sphereCubeRatio(20,2000000)
5e-07

Forget a thousand dimensions, for even twenty dimensions, a million samples wasn’t enough to register a single random point inside the unit sphere. This illustrates one concern, that when we’re sampling random points in the $ d$-dimensional unit cube, we need at least $ 2^d$ samples to ensure we’re getting an even distribution from the whole space. In high dimensions, this fact basically rules out a naive Monte Carlo approximation, where you sample random points to estimate the probability of an event too complicated to sample from directly. A machine learning viewpoint of the same problem is that in dimension $ d$, if your machine learning algorithm requires a representative sample of the input space in order to make a useful inference, then you require $ 2^d$ samples to learn.

Luckily, we can answer our original question because there is a known formula for the volume of a sphere in any dimension. Rather than give the closed form formula, which involves the gamma function and is incredibly hard to parse, we’ll state the recursive form. Call $ V_i$ the volume of the unit sphere in dimension $ i$. Then $ V_0 = 1$ by convention, $ V_1 = 2$ (it’s an interval), and $ V_n = \frac{2 \pi V_{n-2}}{n}$. If you unpack this recursion you can see that the numerator looks like $ (2\pi)^{n/2}$ and the denominator looks like a factorial, except it skips every other number. So for an even dimension $ n$ the denominator looks like $ 2 \cdot 4 \cdot \dots \cdot n$, which grows faster than any fixed exponential and eventually dominates the numerator. So in fact the total volume of the sphere vanishes as the dimension grows! (In addition to the ratio vanishing!)

import math

def sphereVolume(n):
   values = [0] * (n+1)
   for i in range(n+1):
      if i == 0:
         values[i] = 1
      elif i == 1:
         values[i] = 2
      else:
         values[i] = 2*math.pi / i * values[i-2]

   return values[-1]

This should be counterintuitive. I think most people would guess, when asked about how the volume of the unit sphere changes as the dimension grows, that it stays the same or gets bigger. But already at a hundred dimensions the volume is vanishingly small, and by a thousand dimensions it’s too small to represent with a standard floating point number.

>>> sphereVolume(20)
0.025806891390014047
>>> sphereVolume(100)
2.3682021018828297e-40
>>> sphereVolume(1000)
0.0

The scary thing is not just that this value drops, but that it drops exponentially quickly. A consequence is that, if you’re trying to cluster data points by looking at points within a fixed distance $ r$ of one point, you have to carefully measure how big $ r$ needs to be to cover the same proportional volume as it would in low dimension.
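
To make this a little more concrete, here is a small illustration (using the sphereVolume function above): it computes the radius at which a ball’s volume equals a fixed one percent of the unit cube’s volume $ 2^d$. The one-percent target is arbitrary, chosen only to show the trend.

def radiusForVolumeFraction(d, fraction=0.01):
    # sphereVolume(d) * r**d is the volume of a radius-r ball in dimension d,
    # and the cube [-1,1]^d has volume 2**d; solve for r
    return 2 * (fraction / sphereVolume(d)) ** (1.0 / d)

for d in [2, 5, 10, 20, 50]:
    print(d, radiusForVolumeFraction(d))

Already by dimension ten the required radius exceeds 1, so a ball centered in the cube has to extend beyond the cube’s faces just to match one percent of its volume.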

Here’s a related issue. Say I take a bunch of points generated uniformly at random in the unit cube.

import math
from itertools import combinations

def dist(x, y):
   return math.sqrt(sum((a-b)**2 for (a,b) in zip(x, y)))

def distancesRandomPoints(n, numSamples):
   randomSample = [randUnitCube(n) for _ in range(numSamples)]
   pairwiseDistances = [dist(x,y) for (x,y) in combinations(randomSample, 2)]
   return pairwiseDistances

In two dimensions, the histogram of distances between points looks like this

[Histogram of pairwise distances between random points in two dimensions]

However, as the dimension grows the distribution of distances changes. It evolves like the following animation, in which each frame is an increase in dimension from 2 to 100.

[Animation: the histogram of pairwise distances as the dimension increases from 2 to 100]

The shape of the distribution doesn’t appear to change all that much after the first few frames, but the center of the distribution tends to infinity (in fact, it grows like $ \sqrt{n}$), while the variance appears to stay constant. The histogram also becomes noisier as the dimension grows, again because we should be sampling exponentially many more points as the dimension grows (but we don’t). In other words, as the dimension grows the average distance grows and the spread of the distribution stays the same. So at a thousand dimensions the average distance is about 26, tightly concentrated between 24 and 28. When the average is a thousand, the distribution is tight between 998 and 1002. If one were to normalize this data, it would appear that random points are all becoming equidistant from each other.
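
If you’d like to check these claims yourself, here is a quick experiment using the distancesRandomPoints function above; the sample size is kept small so it runs in a reasonable amount of time, and the exact numbers will wobble from run to run.

import statistics

for dimension in [2, 10, 100, 1000]:
    distances = distancesRandomPoints(dimension, 100)
    print(dimension, statistics.mean(distances), statistics.stdev(distances))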

So in addition to the issues of runtime and sampling, the geometry of high-dimensional space looks different from what we expect. To get a better understanding of “big data,” we have to update our intuition from low-dimensional geometry with analysis and mathematical theorems that are much harder to visualize.

The Johnson-Lindenstrauss Lemma

Now we turn to proving dimension reduction is possible. There are a few methods one might first think of, such as looking for suitable subsets of coordinates, or sums of subsets, but these would all appear to take a long time or they simply don’t work.

Instead, the key technique is to take a random linear subspace of a certain dimension, and project every data point onto that subspace. No searching required. The fact that this works is called the Johnson-Lindenstrauss Lemma. To set up some notation, we’ll call $ d(v,w)$ the usual distance between two points.

Lemma [Johnson-Lindenstrauss (1984)]: Given a set $ X$ of $ n$ points in $ \mathbb{R}^d$, project the points in $ X$ to a randomly chosen subspace of dimension $ c$. Call the projection $ \rho$. For any $ \varepsilon > 0$, if $ c$ is at least $ \Omega(\log(n) / \varepsilon^2)$, then with probability at least 1/2 the distances between points in $ X$ are preserved up to a factor of $ (1+\varepsilon)$. That is, with good probability every pair $ v,w \in X$ will satisfy

$ \displaystyle \| v-w \|^2 (1-\varepsilon) \leq \| \rho(v) - \rho(w) \|^2 \leq \| v-w \|^2 (1+\varepsilon)$

Before we do the proof, which is quite short, it’s important to point out that the target dimension $ c$ does not depend on the original dimension! It only depends on the number of points in the dataset, and logarithmically so. That makes this lemma seem like pure magic, that you can take data in an arbitrarily high dimension and put it in a much smaller dimension.

On the other hand, if you include all of the hidden constants in the bound on the dimension, it’s not that impressive. If your data have a million dimensions and you want to preserve the distances up to 1% ($ \varepsilon = 0.01$), the bound is bigger than a million! If you decrease the preservation $ \varepsilon$ to 10% (0.1), then you get down to about 12,000 dimensions, which is more reasonable. At 45% the bound drops to around 1,000 dimensions. Here’s a plot showing the theoretical bound on $ c$ in terms of $ \varepsilon$ for $ n$ fixed to a million.

[Plot of the theoretical bound on $ c$ as a function of $ \varepsilon$, with $ n$ fixed to one million]

But keep in mind, this is just a theoretical bound for potentially misbehaving data. Later in this post we’ll see if the practical dimension can be reduced more than the theory allows. As we’ll see, an algorithm run on the projected data is still effective even if the projection goes well beyond the theoretical bound. Because the theorem is known to be tight in the worst case (see the notes at the end) this speaks more to the robustness of the typical algorithm than to the robustness of the projection method.

A second important note is that this technique does not necessarily avoid all the problems with the curse of dimensionality. We mentioned above that one potential problem is that “random points” are roughly equidistant in high dimensions. Johnson-Lindenstrauss actually preserves this problem because it preserves distances! As a consequence, you won’t see strictly better algorithm performance if you project (which we suggested is possible in the beginning of this post). But you will alleviate slow runtimes if the runtime depends exponentially on the dimension. Indeed, if you replace the dimension $ d$ with the logarithm of the number of points $ \log n$, then $ 2^d$ becomes linear in $ n$, and $ 2^{O(d)}$ becomes polynomial.

Proof of the J-L lemma

Let’s prove the lemma.

Proof. To start we make note that one can sample from the uniform distribution on dimension-$ c$ linear subspaces of $ \mathbb{R}^d$ by choosing the entries of a $ c \times d$ matrix $ A$ independently from a normal distribution with mean 0 and variance 1. Then, to project a vector $ x$ by this matrix (call the projection $ \rho$), we can compute

$ \displaystyle \rho(x) = \frac{1}{\sqrt{c}}A x$

Now fix $ \varepsilon > 0$ and fix two points in the dataset $ x,y$. We want an upper bound on the probability that the following is false

$ \displaystyle \| x-y \|^2 (1-\varepsilon) \leq \| \rho(x) - \rho(y) \|^2 \leq \| x-y \|^2 (1+\varepsilon)$

Since that expression is a pain to work with, let’s rearrange it by calling $ u = x-y$, and rearranging (using the linearity of the projection) to get the equivalent statement.

$ \left | \| \rho(u) \|^2 - \|u \|^2 \right | \leq \varepsilon \| u \|^2$

And so we want a bound on the probability that this event does not occur, meaning the inequality switches directions.

Once we get such a bound (it will depend on $ c$ and $ \varepsilon$) we need to ensure that this bound is true for every pair of points. The union bound allows us to do this, but it also requires that the probability of the bad thing happening tends to zero faster than $ 1/\binom{n}{2}$. That’s where the $ \log(n)$ will come into the bound as stated in the theorem.

Continuing with our use of $ u$ for notation, define $ X$ to be the random variable $ \frac{c}{\| u \|^2} \| \rho(u) \|^2$. By expanding the notation and using the linearity of expectation, you can show that the expected value of $ X$ is $ c$, meaning that in expectation, distances are preserved. We are on the right track, and just need to show that the distribution of $ X$, and thus the possible deviations in distances, is tightly concentrated around $ c$. In full rigor, we will show

$ \displaystyle \Pr [X \geq (1+\varepsilon) c] < e^{-(\varepsilon^2 - \varepsilon^3) \frac{c}{4}}$
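
(As a quick aside, the claim above that $ \mathbb{E}[X] = c$ is the following short computation; it uses only that the entries of $ A$ are independent with mean zero and variance one, so all cross terms vanish in expectation:

$ \displaystyle \mathbb{E}[X] = \frac{c}{\| u \|^2} \mathbb{E} \left [ \| \rho(u) \|^2 \right ] = \frac{1}{\| u \|^2} \sum_{i=1}^c \mathbb{E} \left [ (Au)_i^2 \right ] = \frac{1}{\| u \|^2} \sum_{i=1}^c \sum_{j=1}^d u_j^2 \, \mathbb{E}[A_{ij}^2] = \frac{1}{\| u \|^2} \sum_{i=1}^c \| u \|^2 = c$

Now back to the concentration bound.)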

Let $ A_i$ denote the $ i$-th row of $ A$. Define by $ X_i$ the quantity $ \langle A_i, u \rangle / \| u \|$. This is a weighted sum of the entries of $ A_i$ by the entries of $ u$. But since we chose the entries of $ A$ independently from the standard normal distribution, and since a linear combination of independent normally distributed random variables is again normally distributed, $ X_i$ is normal with mean zero and variance $ \sum_j u_j^2 / \| u \|^2 = 1$, i.e. $ X_i$ is a $ N(0,1)$ random variable. Moreover, the rows of $ A$ are independent, so the $ X_i$ are independent. This allows us to decompose $ X$ as

$ X = \frac{c}{\| u \|^2} \| \rho(u) \|^2 = \frac{\| Au \|^2}{\| u \|^2}$

Expanding further,

$ X = \sum_{i=1}^c \frac{\langle A_i, u \rangle^2}{\|u\|^2} = \sum_{i=1}^c X_i^2$

Now the event $ X \geq (1+\varepsilon) c$ can be expressed in terms of the nonnegative variable $ e^{\lambda X}$, where $ 0 < \lambda < 1/2$ is a parameter, to get

$ \displaystyle \Pr[X \geq (1+\varepsilon) c] = \Pr[e^{\lambda X} \geq e^{(1+\varepsilon)c \lambda}]$

This will become useful because the sum $ X = \sum_i X_i^2$ will split into a product momentarily. First we apply Markov’s inequality, which says that for any nonnegative random variable $ Y$, $ \Pr[Y \geq t] \leq \mathbb{E}[Y] / t$. This lets us write

$ \displaystyle \Pr[e^{\lambda X} \geq e^{(1+\varepsilon) c \lambda}] \leq \frac{\mathbb{E}[e^{\lambda X}]}{e^{(1+\varepsilon) c \lambda}}$

Now we can split up the exponent $ \lambda X$ into $ \sum_{i=1}^c \lambda X_i^2$, and using the i.i.d.-ness of the $ X_i^2$ we can rewrite the RHS of the inequality as

$ \left ( \frac{\mathbb{E}[e^{\lambda X_1^2}]}{e^{(1+\varepsilon)\lambda}} \right )^c$

A similar statement using $ -\lambda$ is true for the $ (1-\varepsilon)$ part, namely that

$ \displaystyle \Pr[X \leq (1-\varepsilon)c] \leq \left ( \frac{\mathbb{E}[e^{-\lambda X_1^2}]}{e^{-(1-\varepsilon)\lambda}} \right )^c$

The last thing that’s needed is to bound $ \mathbb{E}[e^{\lambda X_i^2}]$, but since $ X_i \sim N(0,1)$, we can use the known density function for a normal distribution, and integrate to get the exact value $ \mathbb{E}[e^{\lambda X_1^2}] = \frac{1}{\sqrt{1-2\lambda}}$ (valid for $ \lambda < 1/2$). Including this in the bound gives us a closed-form bound in terms of $ \lambda, c, \varepsilon$. Using standard calculus the optimal $ \lambda \in (0,1/2)$ is $ \lambda = \frac{\varepsilon}{2(1+\varepsilon)}$. This gives

$ \displaystyle \Pr[X \geq (1+\varepsilon) c] \leq ((1+\varepsilon)e^{-\varepsilon})^{c/2}$

Using the Taylor series expansion for $ e^x$, one can show the bound $ 1+\varepsilon < e^{\varepsilon - (\varepsilon^2 - \varepsilon^3)/2}$, which simplifies the final upper bound to $ e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.

Doing the same thing for the $ (1-\varepsilon)$ version gives an equivalent bound, and so the total bound is doubled, i.e. $ 2e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.

As we said at the beginning, applying the union bound means we need

$ \displaystyle 2e^{-(\varepsilon^2 - \varepsilon^3) c/4} < \frac{1}{\binom{n}{2}}$

Solving this for $ c$ gives $ c \geq \frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, as desired.

$ \square$

Projecting in Practice

Let’s write a python program to actually perform the Johnson-Lindenstrauss dimension reduction scheme. This is sometimes called the Johnson-Lindenstrauss transform, or JLT.

First we define a random subspace by sampling an appropriately-sized matrix with normally distributed entries, and a function that performs the projection onto a given subspace (for testing).

import random
import math
import numpy

def randomSubspace(subspaceDimension, ambientDimension):
   return numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))

def project(v, subspace):
   subspaceDimension = len(subspace)
   return (1 / math.sqrt(subspaceDimension)) * subspace.dot(v)

We have a function that computes the theoretical bound on the optimal dimension to reduce to.

def theoreticalBound(n, epsilon):
   return math.ceil(8*math.log(n) / (epsilon**2 - epsilon**3))

And then performing the JLT is simply matrix multiplication

def jlt(data, subspaceDimension):
   ambientDimension = len(data[0])
   A = randomSubspace(subspaceDimension, ambientDimension)
   return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T
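
Before turning to real data, here is a quick sanity check of jlt on synthetic data; the dimensions and the number of points below are arbitrary, chosen just to exercise the code.

# 50 random points in 10,000 dimensions, projected down to 500 dimensions
syntheticData = numpy.random.random((50, 10000))
projected = jlt(syntheticData, 500)

# compare one pairwise distance before and after the projection
print(numpy.linalg.norm(syntheticData[0] - syntheticData[1]))
print(numpy.linalg.norm(projected[0] - projected[1]))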

The high-dimensional dataset we’ll use comes from a data mining competition called KDD Cup 2001. The dataset we used deals with drug design, and the goal is to determine whether an organic compound binds to something called thrombin. Thrombin has something to do with blood clotting, and I won’t pretend I’m an expert. The dataset, however, has over a hundred thousand features for about 2,000 compounds. Here are a few approximate target dimensions we can hope for as epsilon varies.

>>> [("%.2f" % (1/x), theoreticalBound(n=2000, epsilon=1/x))
       for x in [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]]
[('0.50', 487), ('0.33', 821), ('0.25', 1298), ('0.20', 1901),
 ('0.17', 2627), ('0.14', 3477), ('0.12', 4448), ('0.11', 5542),
 ('0.10', 6757), ('0.07', 14659), ('0.05', 25604)]

Going down from a hundred thousand dimensions to a few thousand decreases the size of the dataset by about 95%, by any measure. We can also observe how the distribution of overall distances varies as the size of the subspace we project to varies.

The animation proceeds from 5000 dimensions down to 2 (when the plot is at its bulkiest closer to zero).

The last three frames are for 10, 5, and 2 dimensions respectively. As you can see the histogram starts to beef up around zero. To be honest I was expecting something a bit more dramatic like a uniform-ish distribution. Of course, the distribution of distances is not all that matters. Another concern is the worst case change in distances between any two points before and after the projection. We can see that indeed, when we project to the dimension specified in the theorem, the distances are within the prescribed bounds.

def checkTheorem(oldData, newData, epsilon):
   numBadPoints = 0

   for (x,y), (x2,y2) in zip(combinations(oldData, 2), combinations(newData, 2)):
      oldNorm = numpy.linalg.norm(x-y)**2
      newNorm = numpy.linalg.norm(x2-y2)**2

      if newNorm == 0 or oldNorm == 0:
         continue

      if abs(newNorm / oldNorm - 1) > epsilon:
         numBadPoints += 1

   return numBadPoints

if __name__ == "__main__":
   from data import thrombin
   train, labels = thrombin.load()

   numPoints = len(train)
   epsilon = 0.2
   subspaceDim = theoreticalBound(numPoints, epsilon)
   ambientDim = len(train[0])
   newData = jlt(train, subspaceDim)

   print(checkTheorem(train, newData, epsilon))

This program prints zero every time I try running it, which is the poor man’s way of saying it works “with high probability.” We can also plot statistics about the number of pairs of data points that are distorted by more than $ \varepsilon$ as the subspace dimension shrinks. We ran this on the following set of subspace dimensions with $ \varepsilon = 0.1$ and took average/standard deviation over twenty trials:

   dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]

The result is the following chart, whose x-axis is the dimension projected to (so the left hand is the most extreme projection to 2, 5, 10 dimensions), the y-axis is the number of distorted pairs, and the error bars represent a single standard deviation away from the mean.

[Chart: number of distorted pairs vs. projected dimension for the thrombin dataset, with one-standard-deviation error bars]

This chart provides good news about this dataset because the standard deviations are low. It tells us something that mathematicians often ignore: the predictability of the tradeoff that occurs once you go past the theoretically perfect bound. In this case, the standard deviations tell us that it’s highly predictable. Moreover, since this tradeoff curve measures pairs of points, we might conjecture that the distortion is localized around a single set of points that got significantly “rattled” by the projection. This would be an interesting exercise to explore.

Now all of these charts are really playing with the JLT and confirming the correctness of our code (and hopefully our intuition). The real question is: how well does a machine learning algorithm perform on the original data when compared to the projected data? If the algorithm only “depends” on the pairwise distances between the points, then we should expect nearly identical accuracy in the unprojected and projected versions of the data. To show this we’ll use an easy learning algorithm, the k-nearest-neighbors classification method. The problem, however, is that there are very few positive examples in this particular dataset. So looking for the majority label of the nearest $ k$ neighbors for any $ k > 2$ unilaterally results in the “all negative” classifier, which has 97% accuracy. This happens before and after projecting.

To compensate for this, we modify k-nearest-neighbors slightly by having the label of a predicted point be 1 if any label among its nearest neighbors is 1. So it’s not a majority vote, but rather a logical OR of the labels of nearby neighbors. Our point in this post is not to solve the problem well, but rather to show how an algorithm (even a not-so-good one) can degrade as one projects the data into smaller and smaller dimensions. Here is the code.

def nearestNeighborsAccuracy(data, labels, k=10):
   from sklearn.neighbors import NearestNeighbors
   trainData, trainLabels, testData, testLabels = randomSplit(data, labels) # cross validation
   model = NearestNeighbors(n_neighbors=k).fit(trainData)
   distances, indices = model.kneighbors(testData)
   predictedLabels = []

   for x in indices:
      xLabels = [trainLabels[i] for i in x[1:]]
      predictedLabel = max(xLabels)
      predictedLabels.append(predictedLabel)

   totalAccuracy = sum(x == y for (x,y) in zip(testLabels, predictedLabels)) / len(testLabels)
   falsePositive = (sum(x == 0 and y == 1 for (x,y) in zip(testLabels, predictedLabels)) /
      sum(x == 0 for x in testLabels))
   falseNegative = (sum(x == 1 and y == 0 for (x,y) in zip(testLabels, predictedLabels)) /
      sum(x == 1 for x in testLabels))

   return totalAccuracy, falsePositive, falseNegative

And here is the accuracy of this modified k-nearest-neighbors algorithm run on the thrombin dataset. The horizontal line represents the accuracy of the produced classifier on the unmodified data set. The x-axis represents the dimension projected to (left-hand side is the lowest), and the y-axis represents the accuracy. The mean accuracy over fifty trials was plotted, with error bars representing one standard deviation. The complete code to reproduce the plot is in the Github repository.

[Chart: accuracy of the modified k-nearest-neighbors classifier vs. projected dimension on the thrombin dataset]

Likewise, we plot the proportion of false positive and false negatives for the output classifier. Note that a “positive” label made up only about 2% of the total data set. First the false positives

[Chart: false positive rate vs. projected dimension]

Then the false negatives

[Chart: false negative rate vs. projected dimension]

As we can see from these three charts, things don’t really change that much (for this dataset) even when we project down to around 200-300 dimensions. Note that for these parameters the “correct” theoretical choice for dimension was on the order of 5,000 dimensions, so this is a 95% savings from the naive approach, and 99.75% space savings from the original data. Not too shabby.

Notes

The $ \Omega(\log(n))$ worst-case dimension bound is asymptotically tight, though there is some small gap in the literature that depends on $ \varepsilon$. This result is due to Noga Alon, the very last result (Section 9) of this paper. [Update: as djhsu points out in the comments, this gap is now closed thanks to Larsen and Nelson]

We did dimension reduction with respect to preserving the Euclidean distance between points. One might naturally wonder if you can achieve the same dimension reduction with a different metric, say the taxicab metric or a $ p$-norm. In fact, you cannot achieve anything close to logarithmic dimension reduction for the taxicab ($ l_1$) metric. This result is due to Brinkman-Charikar in 2004.

The code we used to compute the JLT is not particularly efficient. There are much more efficient methods. One of them, borrowing its namesake from the Fast Fourier Transform, is called the Fast Johnson-Lindenstrauss Transform. The technique is due to Ailon-Chazelle from 2009, and it involves something called “preconditioning a sparse projection matrix with a randomized Fourier transform.” I don’t know precisely what that means, but it would be neat to dive into that in a future post.

The central focus in this post was whether the JLT preserves distances between points, but one might be curious as to whether the points themselves are well approximated. The answer is an enthusiastic no. If the data were images, the projected points would look nothing like the original images. However, it appears the degradation tradeoff is measurable (by some accounts perhaps linear), and there appears to be some work (also this by the same author) when restricting to sparse vectors (like word-association vectors).

Note that the JLT is not the only method for dimensionality reduction. We previously saw principal component analysis (applied to face recognition), and in the future we will cover a related technique called the Singular Value Decomposition. It is worth noting that another common technique specific to nearest-neighbor is called “locality-sensitive hashing.” Here the goal is to project the points in such a way that “similar” points land very close to each other. Say, if you were to discretize the plane into bins, these bins would form the hash values and you’d want to maximize the probability that two points with the same label land in the same bin. Then you can do things like nearest-neighbors by comparing bins.

Another interesting note, if your data is linearly separable (like the examples we saw in our age-old post on Perceptrons), then you can use the JLT to make finding a linear separator easier. First project the data onto the dimension given in the theorem. With high probability the points will still be linearly separable. And then you can use a perceptron-type algorithm in the smaller dimension. If you want to find out which side a new point is on, you project and compare with the separator in the smaller dimension.

Beyond its interest for practical dimensionality reduction, the JLT has had many other interesting theoretical consequences. More generally, the idea of “randomly projecting” your data onto some small dimensional space has allowed mathematicians to get some of the best-known results on many optimization and learning problems, perhaps the most famous of which is called MAX-CUT; the result is by Goemans-Williamson and it led to a mathematical constant being named after them, $ \alpha_{GW} =.878567 \dots$. If you’re interested in more about the theory, Santosh Vempala wrote a wonderful (and short!) treatise dedicated to this topic.


The Inequality

Math and computer science are full of inequalities, but there is one that shows up more often in my work than any other. Of course, I’m talking about

$ \displaystyle 1+x \leq e^{x}$

This is The Inequality. I’ve been told on many occasions that the entire field of machine learning reduces to The Inequality combined with the Chernoff bound (which is proved using The Inequality).

Why does it show up so often in machine learning? Mostly because in analyzing an algorithm you want to bound the probability that some bad event happens. Bad events are usually phrased similarly to

$ \displaystyle \prod_{i=1}^m (1-p_i)$

And applying The Inequality we can bound this from above by

$ \displaystyle\prod_{i=1}^m (1-p_i) \leq \prod_{i=1}^m e^{-p_i} = e^{-\sum_{i=1}^m p_i}$

The point is that usually $ m$ is the size of your dataset, which you get to choose, and by picking larger $ m$ you make the probability of the bad event vanish exponentially quickly in $ m$. (Here $ p_i$ is unrelated to how I am about to use $ p_i$ as weights).
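
As a tiny worked instance, if each $ p_i \geq p$ for some fixed $ p > 0$, then the bound above becomes

$ \displaystyle \prod_{i=1}^m (1-p_i) \leq e^{-\sum_{i=1}^m p_i} \leq e^{-pm}$

so choosing $ m \geq \frac{1}{p} \log(1/\delta)$ forces the probability of the bad event below any desired $ \delta$.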

Of course, The Inequality has much deeper implications than bounds for the efficiency and correctness of machine learning algorithms. To convince you of the depth of this simple statement, let’s see its use in an elegant proof of the arithmetic geometric inequality.

Theorem: (The arithmetic-mean geometric-mean inequality, general version): For all non-negative real numbers $ a_1, \dots, a_n$ and all positive $ p_1, \dots, p_n$ such that $ p_1 + \dots + p_n = 1$, the following inequality holds:

$ \displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq p_1 a_1 + \dots + p_n a_n$

Note that when all the $ p_i = 1/n$ this is the standard AM-GM inequality.

Proof. This proof is due to George Polya (in Hungarian, Pólya György).

We start by modifying The Inequality $ 1+x \leq e^x$ by a shift of variables $ x \mapsto x-1$, so that the inequality now reads $ x \leq e^{x-1}$. We can apply this to each $ a_i$ giving $ a_i \leq e^{a_i - 1}$, and in fact,

$ \displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq e^{\sum_{i=1}^n (p_ia_i - p_i)} = e^{\left ( \sum_{i=1}^n p_ia_i \right ) - 1}$

Now we have something quite curious: if we call $ A$ the sum $ p_1a_1 + \dots + p_na_n$, the above shows that $ a_1^{p_1} \cdots a_n^{p_n} \leq e^{A-1}$. Moreover, again because $ A \leq e^{A-1}$, the right hand side of the inequality we’re trying to prove is also bounded by $ e^{A-1}$. So we know that both sides of our desired inequality (and in particular, the larger of the two) are bounded from above by $ e^{A-1}$. This seems like a conundrum until we introduce the following beautiful idea: normalize by the thing you think should be the larger of the two sides of the inequality.

Define new variables $ b_i = a_i / A$ and notice that $ \sum_i p_i b_i = 1$ just by unraveling the definition. Call this sum $ B = \sum_i p_i b_i$. Now we know that

$ \displaystyle b_1^{p_1} \cdots b_n^{p_n} = \left ( \frac{a_1}{A} \right )^{p_1} \cdots \left ( \frac{a_n}{A} \right )^{p_n} \leq e^{B - 1} = e^0 = 1$

Now we unpack the pieces and multiply through by $ A^{p_1}A^{p_2} \cdots A^{p_n} = A^{p_1 + \dots + p_n} = A$; the result is exactly the AM-GM inequality.

$ \square$

Even deeper, there is only one case when The Inequality is tight, i.e. when $ 1+x = e^x$, and that is $ x=0$. This lets us extract a full characterization of the case of equality from the proof above. Indeed, the crucial step was that each $ b_i = a_i / A$ satisfies $ b_i \leq e^{b_i - 1}$, and that is an equality only when $ b_i = 1$, i.e. when $ a_i = A$. Spending a few seconds thinking about this gives the characterization: equality holds if and only if $ a_1 = a_2 = \dots = a_n = A$.
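If you want a numerical sanity check of the theorem and its equality case (the numbers below are made up), here it is:

import math

def weighted_gm(a, p):
    return math.prod(ai ** pi for ai, pi in zip(a, p))

def weighted_am(a, p):
    return sum(pi * ai for ai, pi in zip(a, p))

a, p = [2.0, 8.0, 5.0], [0.5, 0.3, 0.2]            # weights sum to 1
print(weighted_gm(a, p), "<=", weighted_am(a, p))   # strict inequality here

a_equal = [4.0, 4.0, 4.0]
print(weighted_gm(a_equal, p), "==", weighted_am(a_equal, p))  # equality iff all a_i are equal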

So this is excellent: the arithmetic-geometric mean inequality is a deep theorem with applications all over mathematics and statistics. Adding another layer of indirection for impressiveness, one can use the AM-GM inequality to prove the Cauchy-Schwarz inequality rather directly. Sadly, the Wikipedia page for the Cauchy-Schwarz inequality hardly does justice to its massive number of applications. For example, many novel techniques in geometry and number theory are proved directly from C-S. More, in fact, than I can hope to learn.

Of course, no article about The Inequality could be complete without a proof of The Inequality.

Theorem: For all $ x \in \mathbb{R}$, $ 1+x \leq e^x$.

Proof. The proof starts by proving a simpler theorem, named after Bernoulli, that $ 1+nx \leq (1+x)^n$ for every $ x \in [-1, \infty)$ and every $ n \in \mathbb{N}$. This is relatively straightforward by induction. The base case is trivial, and

$ \displaystyle (1+x)^{n+1} = (1+x)(1+x)^n \geq (1+x)(1+nx) = 1 + (n+1)x + nx^2$

And because $ nx^2 \geq 0$, we get Bernoulli’s inequality.

Now for any $ z \geq 0$ we can set $ x = z/n$, and get $ (1+z) = (1+nx) \leq (1+\frac{z}{n})^n$ for every $ n$. Taking limits of both sides as $ n \to \infty$, the right hand side becomes the definition of $ e^z$, and limits preserve non-strict inequalities, so $ 1+z \leq e^z$. The same argument works for $ -1 < z < 0$, since then $ z/n \geq -1$ and Bernoulli’s inequality still applies; and for $ z \leq -1$ the claim is trivial because $ 1+z \leq 0 < e^z$. This proves the theorem.

$ \square$

What other insights can we glean about The Inequality? For one, it’s a truncated version of the Taylor series approximation

$ \displaystyle e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$

Indeed, the Taylor remainder theorem tells us that the first two terms approximate $ e^x$ around zero with an error of $ \frac{e^{\xi}}{2} x^2 \geq 0$ for some $ \xi$ between $ 0$ and $ x$. In other words, $ 1+x$ is a lower bound on $ e^x$ around zero. It is perhaps miraculous that this extends to a lower bound everywhere, until you realize that exponentials grow extremely quickly and lines do not.

One might wonder whether we can improve our approximation with higher order approximations. Indeed we can, but we have to be a bit careful. In particular, $ 1+x+x^2/2 \leq e^x$ is only true for nonnegative $ x$ (because the remainder term is now a multiple of $ x^3$, which can be negative), but if we stop at an odd-degree term we win: $ 1+x+x^2/2 + x^3/6 \leq e^x$ is true for all $ x$.
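Here is a quick check of those claims at an (arbitrarily chosen) negative point:

import math

x = -3.0
print(1 + x <= math.exp(x))                        # True: The Inequality
print(1 + x + x**2/2 <= math.exp(x))               # False: the even truncation fails for x < 0
print(1 + x + x**2/2 + x**3/6 <= math.exp(x))      # True: the odd truncation holds everywhere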

What is really surprising about The Inequality is that, at least in the applications I work with, we rarely see higher order approximations used. For most applications, the difference between an error term that is quadratic and one that is cubic or quartic is not worth the extra work in analyzing the result. You get the same theorem: that something vanishes exponentially quickly.

If you’re interested in learning more about the theory of inequalities, I wholeheartedly recommend The Cauchy-Schwarz Master Class. This book is wonderfully written, and chock full of fun exercises. I know because I do exercises from books like this one on planes and trains. It’s my kind of sudoku 🙂

The Boosting Margin, or Why Boosting Doesn’t Overfit

There’s a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph:

overfitting

Let me explain. The vertical axis represents the error of a hypothesis. The horizontal axis represents the complexity of the hypothesis. The blue curve represents the error of a machine learning algorithm’s output on its training data, and the red curve represents the generalization of that hypothesis to the real world. The overfitting phenomenon is marked in the middle of the graph: before that point the training error and generalization error both go down, but after it the training error continues to fall while the generalization error rises.

The explanation is a sort of numerical version of Occam’s Razor that says more complex hypotheses can model a fixed data set better and better, but at some point a simpler hypothesis better models the underlying phenomenon that generates the data. To optimize a particular learning algorithm, one wants to choose the complexity of the model so as to hit the minimum of the red curve.

This is where things get juicy. Boosting, which we covered in gruesome detail previously, has a natural measure of complexity represented by the number of rounds you run the algorithm for. Each round adds one additional “weak learner” weighted vote. So running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn’t overfit on many datasets. In fact, and this is a shocking fact, researchers observed that boosting would hit zero training error, and as they kept running it for more rounds, the generalization error kept going down! It seemed like the complexity could grow arbitrarily without penalty.

Schapire, Freund, Bartlett, and Lee proposed a theoretical explanation for this based on the notion of a margin, and the goal of this post is to go through the details of their theorem and proof. Remember that the standard AdaBoost algorithm produces a set of weak hypotheses $ h_i(x)$ and a corresponding weight $ \alpha_i \in [-1,1]$ for each round $ i=1, \dots, T$. The classifier at the end is a weighted majority vote of all the weak learners (roughly: weak learners with high error on “hard” data points get less weight).

Definition: The signed confidence of a labeled example $ (x,y)$ is the weighted sum:

$ \displaystyle \textup{conf}(x) = \sum_{i=1}^T \alpha_i h_i(x)$

The margin of $ (x,y)$ is the quantity $ \textup{margin}(x,y) = y \textup{conf}(x)$. The notation implicitly depends on the outputs of the AdaBoost algorithm via “conf.”
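Concretely, the confidence and margin are nothing more than a weighted sum; here is a tiny sketch with made-up weak learners and (normalized) weights standing in for AdaBoost’s output:

# Made-up weak learners: each maps an example to +1 or -1.
weak_learners = [lambda x: 1 if x > 0 else -1,
                 lambda x: 1 if x > 2 else -1,
                 lambda x: -1 if x > 5 else 1]
alphas = [0.5, 0.3, 0.2]    # made-up weights, normalized to sum to 1

def conf(x):
    return sum(a * h(x) for a, h in zip(alphas, weak_learners))

def margin(x, y):
    return y * conf(x)      # positive exactly when the weighted vote agrees with the label

print(conf(3.0), margin(3.0, +1), margin(3.0, -1))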

We use the product of the label and the confidence because of the observation that $ y \cdot \textup{conf}(x) \leq 0$ if and only if the classifier is incorrect. The theorem we’ll prove in this post is

Theorem: With high probability over a random choice of training data, for any $ 0 < \theta < 1$ the generalization error of boosting is bounded from above by

$ \displaystyle \Pr_{\textup{train}}[\textup{margin}(x,y) \leq \theta] + O \left ( \frac{1}{\theta} (\textup{typical error terms}) \right )$

In words, the generalization error of the boosting hypothesis is bounded by the distribution of margins observed on the training data. To state and prove the theorem more generally we have to return to the details of PAC-learning. Here and in the rest of this post, $ \Pr_D$ denotes $ \Pr_{x \sim D}$, the probability over a random example drawn from the distribution $ D$, and $ \Pr_S$ denotes the probability over a random (training) set of examples drawn from $ D$.

Theorem: Let $ S$ be a set of $ m$ random examples chosen from the distribution $ D$ generating the data. Assume the weak learner corresponds to a finite hypothesis space $ H$ of size $ |H|$, and let $ \delta > 0$. Then with probability at least $ 1 - \delta$ (over the choice of $ S$), every weighted-majority vote function $ f$ satisfies the following generalization bound for every $ \theta > 0$.

$ \displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right )$

In other words, this phenomenon is a fact about voting schemes, not boosting in particular. From now on, a “majority vote” function $ f(x)$ will mean to take the sign of a sum of the form $ \sum_{i=1}^N a_i h_i(x)$, where $ a_i \geq 0$ and $ \sum_i a_i = 1$. This is the “convex hull” of the set of weak learners $ H$. If $ H$ is infinite (in our proof it will be finite, but we’ll state a generalization afterward), then only finitely many of the $ a_i$ in the sum may be nonzero.

To prove the theorem, we’ll start by defining a class of functions corresponding to “unweighted majority votes with duplicates:”

Definition: Let $ C_N$ be the set of functions $ f(x)$ of the form $ \frac{1}{N} \sum_{i=1}^N h_i(x)$ where $ h_i \in H$ and the $ h_i$ may contain duplicates (some of the $ h_i$ may be equal to some other of the $ h_j$).

Now every majority vote function $ f$ can be written as a weighted sum of $ h_i$ with weights $ a_i$ (I’m using $ a$ instead of $ \alpha$ to distinguish arbitrary weights from those weights arising from Boosting). So any such $ f(x)$ defines a natural distribution over $ H$ where you draw function $ h_i$ with probability $ a_i$. I’ll call this distribution $ A_f$. If we draw from this distribution $ N$ times and take an unweighted sum, we’ll get a function $ g(x) \in C_N$. Call the random process (distribution) generating functions in this way $ Q_f$. In diagram form, the logic goes

$ f \to $ weights $ a_i \to$ distribution $ A_f$ over $ H \to$ function in $ C_N$ by drawing $ N$ times according to $ A_f$.

The main fact about the relationship between $ f$ and $ Q_f$ is that each is completely determined by the other. Obviously $ Q_f$ is determined by $ f$ because we defined it that way, but $ f$ is also completely determined by $ Q_f$ as follows:

$ \displaystyle f(x) = \mathbb{E}_{g \sim Q_f}[g(x)]$

Proving the equality is an exercise for the reader.
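If you’d rather see the relationship in a simulation than prove it, here is a small sketch (with made-up weights and hypotheses) showing that averaging many draws from $ Q_f$ recovers $ f(x)$:

import random

hypotheses = [lambda x: 1, lambda x: -1, lambda x: 1 if x > 0 else -1]   # made-up H
a = [0.2, 0.3, 0.5]                                                       # weights summing to 1

def f(x):
    return sum(ai * h(x) for ai, h in zip(a, hypotheses))

def draw_g(N, rng):
    # One draw from Q_f: pick N hypotheses i.i.d. according to the weights, then average.
    chosen = rng.choices(hypotheses, weights=a, k=N)
    return lambda x: sum(h(x) for h in chosen) / N

rng = random.Random(0)
x = 1.0
samples = [draw_g(50, rng)(x) for _ in range(2000)]
print(f(x), sum(samples) / len(samples))    # the empirical mean of g(x) hugs f(x)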

Proof of Theorem. First we’ll split the probability $ \Pr_D[y f(x) \leq 0]$ into two pieces, and then bound each piece.

First a probability reminder. If we have two events $ A$ and $ B$ (in what’s below, these will be $ yg(x) \leq \theta/2$ and $ yf(x) \leq 0$), we can split up $ \Pr[A]$ into $ \Pr[A \textup{ and } B] + \Pr[A \textup{ and } \overline{B}]$ (where $ \overline{B}$ is the complement of $ B$). This is called the law of total probability. Moreover, because $ \Pr[A \textup{ and } B] = \Pr[A | B] \Pr[B]$ and because these quantities are all at most 1, it’s true that $ \Pr[A \textup{ and } B] \leq \Pr[A \mid B]$ (the conditional probability) and that $ \Pr[A \textup{ and } B] \leq \Pr[B]$.

Back to the proof. Notice that for any $ g(x) \in C_N$ and any $ \theta > 0$, we can write $ \Pr_D[y f(x) \leq 0]$ as a sum:

$ \displaystyle \Pr_D[y f(x) \leq 0] =\\ \Pr_D[yg(x) \leq \theta/2 \textup{ and } y f(x) \leq 0] + \Pr_D[yg(x) > \theta/2 \textup{ and } y f(x) \leq 0]$

Now I’ll loosen the first term by removing the second event (that only makes the whole probability bigger) and loosen the second term by relaxing it to a conditional:

$ \displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_D[y g(x) \leq \theta / 2] + \Pr_D[yg(x) > \theta/2 \mid yf(x) \leq 0]$

Now because the inequality is true for every $ g(x) \in C_N$, it’s also true if we take an expectation of the RHS over any distribution we choose. We’ll choose the distribution $ Q_f$ to get

$ \displaystyle \Pr_D[yf(x) \leq 0] \leq T_1 + T_2$

And $ T_1$ (term 1) is

$ \displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]$

And $ T_2$ is

$ \displaystyle \Pr_{x \sim D, g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0] = \mathbb{E}_D[\Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]]$

We can rewrite the probabilities using expectations because (1) the variables being drawn in the distributions are independent, and (2) the probability of an event is the expectation of the indicator function of the event.

Now we’ll bound the terms $ T_1, T_2$ separately. We’ll start with $ T_2$.

Fix $ (x,y)$ and look at the quantity inside the expectation of $ T_2$.

$ \displaystyle \Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]$

This should intuitively be very small for the following reason. We’re sampling $ g$ according to a distribution whose expectation is $ f$, and we know that $ yf(x) \leq 0$. Of course $ yg(x)$ is unlikely to be large.

Mathematically we can prove this by transforming the thing inside the probability to a form suitable for the Chernoff bound. Since we are conditioning on $ yf(x) = \mathbb{E}[yg(x)] \leq 0$, saying $ yg(x) > \theta / 2$ implies that $ |yg(x) - \mathbb{E}[yg(x)]| > \theta /2$, i.e. that some random variable which is a sum of independent random variables (the $ h_i$) deviates from its expectation by at least $ \theta/2$. Since the $ y$’s are all $ \pm 1$ and constant inside the expectation, they can be removed from the absolute value to get

$ \displaystyle \leq \Pr_{g \sim Q_f}[g(x) - \mathbb{E}[g(x)] > \theta/2]$

The Chernoff bound allows us to bound this by an exponential in the number of random variables in the sum, i.e. $ N$. It turns out the bound is $ e^{-N \theta^2 / 8}$.
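To see this concentration numerically, here is a toy simulation of the symmetric case where each vote is $ \pm 1$ with equal probability (so $ \mathbb{E}[g(x)] = 0$); the empirical tail should sit comfortably below the bound:

import math, random

rng = random.Random(1)
N, theta = 200, 0.5

def draw_g_value():
    # g(x) when E[g(x)] = 0: the average of N independent +/-1 votes.
    return sum(rng.choice([1, -1]) for _ in range(N)) / N

trials = 20000
tail = sum(draw_g_value() > theta / 2 for _ in range(trials)) / trials
print(tail, "<=", math.exp(-N * theta**2 / 8))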

Now recall $ T_1$

$ \displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]$

For $ T_1$, we don’t want to bound it absolutely like we did for $ T_2$, because there is nothing stopping the classifier $ f$ from being a bad classifier and having lots of error. Rather, we want to bound it in terms of the probability that $ yf(x) \leq \theta$. We’ll do this in two steps. In step 1, we’ll go from $ \Pr_D$ of the $ g$’s to $ \Pr_S$ of the $ g$’s.

Step 1: For any fixed $ g, \theta$, if we take a sample $ S$ of size $ m$, then consider the event in which the sample probability deviates from the true distribution by some value $ \varepsilon_N$, i.e. the event

$ \displaystyle \Pr_D[yg(x) \leq \theta /2] > \Pr_{S, x \sim S}[yg(x) \leq \theta/2] + \varepsilon_N$

The claim is this happens with probability at most $ e^{-2m\varepsilon_N^2}$. This is again the Chernoff bound in disguise, because the expected value of $ \Pr_S$ is $ \Pr_D$, and the probability over $ S$ is an average of random variables (it’s a slightly different form of the Chernoff bound; see this post for more). From now on we’ll drop the $ x \sim S$ when writing $ \Pr_S$.

The bound above holds true for any fixed $ g,\theta$, but we want a bound over all $ g$ and $ \theta$. To do that we use the union bound. Note that there are only $ (N+1)$ possible choices for a nonnegative $ \theta$ because $ g(x)$ is an average of $ N$ values each of which is $ \pm1$. And there are only $ |C_N| \leq |H|^N$ possibilities for $ g(x)$. So the union bound says the above event will occur with probability at most $ (N+1)|H|^N e^{-2m\varepsilon_N^2}$.

If we want the event to occur with probability at most $ \delta_N$, we can judiciously pick

$ \displaystyle \varepsilon_N = \sqrt{(1/2m) \log ((N+1)|H|^N / \delta_N)}$

And since the bound holds in general, we can take expectation with respect to $ Q_f$ and nothing changes. This means that for any $ \delta_N$, our chosen $ \varepsilon_N$ ensures that the following is true with probability at least $ 1-\delta_N$:

$ \displaystyle \Pr_{D, g \sim Q_f}[yg(x) \leq \theta/2] \leq \Pr_{S, g \sim Q_f}[yg(x) \leq \theta/2] + \varepsilon_N$

Now for step 2, we relate the probability that $ yg(x) \leq \theta/2$ on a sample to the probability that $ yf(x) \leq \theta$ on the same sample.

Step 2: The first claim is that

$ \displaystyle \Pr_{S, g \sim Q_f}[yg(x) \leq \theta / 2] \leq \Pr_{S} [yf(x) \leq \theta] + \mathbb{E}_{S}[\Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) > \theta]]$

What we did was break up the LHS into two “and”s, when $ yf(x) > \theta$ and $ yf(x) \leq \theta$ (this was still an equality). Then we loosened the first term to $ \Pr_{S}[yf(x) \leq \theta]$ since that is only more likely than both $ yg(x) \leq \theta/2$ and $ yf(x) \leq \theta$. Then we loosened the second term again using the fact that a probability of an “and” is bounded by the conditional probability.

Now we have the probability of $ yg(x) \leq \theta / 2$ bounded by the probability that $ yf(x) \leq \theta$ plus some stuff. We just need to bound the “plus some stuff” absolutely and then we’ll be done. The argument is the same as our previous use of the Chernoff bound: we assume $ yf(x) > \theta$, and yet $ yg(x) \leq \theta / 2$. So the deviation of $ yg(x)$ from its expectation is large, and the probability that happens is exponentially small in the amount of deviation. The bound you get is

$ \displaystyle \Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) > \theta] \leq e^{-N\theta^2 / 8}.$

And again we use the union bound to ensure the failure of this bound for any $ N$ will be very small. Specifically, if we want the total failure probability to be at most $ \delta$, then we need to pick some $ \delta_j$’s so that $ \delta = \sum_{j=1}^{\infty} \delta_j$. Choosing $ \delta_N = \frac{\delta}{N(N+1)}$ works, since these sum to exactly $ \delta$.

Putting everything together, we get that with probability at least $ 1-\delta$, for every $ \theta$ and every $ N$, the following bound on the failure probability of $ f(x)$ holds:

$ \displaystyle \Pr_{x \sim D}[yf(x) \leq 0] \leq \Pr_{S, x \sim S}[yf(x) \leq \theta] + 2e^{-N \theta^2 / 8} + \sqrt{\frac{1}{2m} \log \left ( \frac{N(N+1)^2 |H|^N}{\delta} \right )}.$

This claim is true for every $ N$, so we can pick $ N$ that minimizes it. Doing a little bit of behind-the-scenes calculus that is left as an exercise to the reader, a tight choice of $ N$ is $ (4/ \theta)^2 \log(m/ \log |H|)$. And this gives the statement of the theorem.

$ \square$

We proved this for finite hypothesis classes, and if you know what VC-dimension is, you’ll know that it’s a central tool for reasoning about the complexity of infinite hypothesis classes. An analogous theorem can be proved in terms of the VC dimension. In that case, calling $ d$ the VC-dimension of the weak learner’s output hypothesis class, the bound is

$ \displaystyle \Pr_D[yf(x) \leq 0] \leq \Pr_S[yf(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log(1/\delta)} \right )$

How can we interpret these bounds with so many parameters floating around? That’s where asymptotic notation comes in handy. If we fix $ \theta \leq 1/2$ and $ \delta = 0.01$, then the big-O part of the theorem simplifies to $ \sqrt{(\log |H| \cdot \log m) / m}$, which is easier to think about since $ (\log m)/m$ goes to zero very fast.
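To get a feel for how fast that term vanishes, here is the computation for a few dataset sizes, assuming (arbitrarily) $ |H| = 1000$:

import math

H_size = 1000    # assumed size of the weak learner's hypothesis space
for m in [10**3, 10**4, 10**5, 10**6]:
    print(m, math.sqrt(math.log(H_size) * math.log(m) / m))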

Now the theorem we just proved was about any weighted majority function. The question still remains: why is AdaBoost good? That follows from another theorem, which we’ll state and leave as an exercise (it essentially follows by unwrapping the definition of the AdaBoost algorithm from last time).

Theorem: Suppose that during AdaBoost the weak learners produce hypotheses with training errors $ \varepsilon_1, \dots , \varepsilon_T$. Then for any $ \theta$,

$ \displaystyle \Pr_{(x,y) \sim S} [yf(x) \leq \theta] \leq 2^T \prod_{t=1}^T \sqrt{\varepsilon_t^{(1-\theta)} (1-\varepsilon_t)^{(1+\theta)}}$

Let’s interpret this for some concrete numbers. Say that $ \theta = 0$ and $ \varepsilon_t$ is any fixed value less than $ 1/2$. In this case the term inside the product becomes $ \sqrt{\varepsilon (1-\varepsilon)} < 1/2$ and the whole bound tends exponentially quickly to zero in the number of rounds $ T$. On the other hand, if we raise $ \theta$ to about 1/3, then in order to keep the bound tending to zero we would need $ \varepsilon < \frac{1}{4} ( 3 - \sqrt{5} )$, which is about 20% error.
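Plugging in numbers makes the tradeoff concrete; the error rates and round counts below are arbitrary choices, with the same error assumed in every round:

def margin_bound(eps, theta, T):
    # The theorem's bound with a constant per-round error eps:
    # (2 * sqrt(eps^(1-theta) * (1-eps)^(1+theta)))^T
    per_round = 2 * (eps ** (1 - theta) * (1 - eps) ** (1 + theta)) ** 0.5
    return per_round ** T

print(margin_bound(eps=0.25, theta=0.0, T=100))    # tends to zero very quickly
print(margin_bound(eps=0.25, theta=1/3, T=100))    # exceeds 1 (vacuous): 25% error is too much at theta = 1/3
print(margin_bound(eps=0.15, theta=1/3, T=100))    # shrinks again once the error is below ~19%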

If you’re interested in learning more about Boosting, there is an excellent book by Freund and Schapire (the inventors of boosting) called Boosting: Foundations and Algorithms. There they include a tighter analysis based on the idea of Rademacher complexity. The bound I presented in this post is nice because the proof doesn’t require any machinery past basic probability, but if you want to reach the cutting edge of knowledge about boosting you need to invest in the technical stuff.

Until next time!