Singular Value Decomposition Part 2: Theorem, Proof, Algorithm

I’m just going to jump right into the definitions and rigor, so if you haven’t read the previous post motivating the singular value decomposition, go back and do that first. This post will be theorem, proof, algorithm, data. The data set we test on is a corpus of a thousand CNN news stories. All of the data, code, and examples used in this post are in a github repository, as usual.

We start with the best-approximating $ k$-dimensional linear subspace.

Definition: Let $ X = \{ x_1, \dots, x_m \}$ be a set of $ m$ points in $ \mathbb{R}^n$. The best approximating $ k$-dimensional linear subspace of $ X$ is the $ k$-dimensional linear subspace $ V \subset \mathbb{R}^n$ which minimizes the sum of the squared distances from the points in $ X$ to $ V$.

Let me clarify what I mean by minimizing the sum of squared distances. First we’ll start with the simple case: we have a vector $ x \in X$, and a candidate line $ L$ (a 1-dimensional subspace) that is the span of a unit vector $ v$. The squared distance from $ x$ to the line spanned by $ v$ is the squared length of $ x$ minus the squared length of the projection of $ x$ onto $ v$. Here’s a picture.

[Figure: the projection $ y = \textup{proj}_v(x)$ of $ x$ onto the line $ L$ spanned by $ v$, with the difference vector $ z = x - y$ drawn in pink.]

I’m saying that the pink vector $ z$ in the picture is the difference of the black and green vectors $ x-y$, and that the “distance” from $ x$ to $ v$ is the length of the pink vector. The reason is just the Pythagorean theorem: the vector $ x$ is the hypotenuse of a right triangle whose other two sides are the projected vector $ y$ and the difference vector $ z$.

Let’s throw down some notation. I’ll call $ \textup{proj}_v: \mathbb{R}^n \to \mathbb{R}^n$ the linear map that takes as input a vector $ x$ and produces as output the projection of $ x$ onto $ v$. In fact we have a brief formula for this when $ v$ is a unit vector. If we call $ x \cdot v$ the usual dot product, then $ \textup{proj}_v(x) = (x \cdot v)v$. That’s $ v$ scaled by the inner product of $ x$ and $ v$. In the picture above, since the line $ L$ is the span of the vector $ v$, that means that $ y = \textup{proj}_v(x)$ and $ z = x -\textup{proj}_v(x) = x-y$.

The dot-product formula is useful for us because it allows us to compute the squared length of the projection by taking a dot product $ |x \cdot v|^2$. So then a formula for the distance of $ x$ from the line spanned by the unit vector $ v$ is

$ \displaystyle (\textup{dist}_v(x))^2 = \left ( \sum_{i=1}^n x_i^2 \right ) - |x \cdot v|^2$

This formula is just a restatement of the Pythagorean theorem for perpendicular vectors.

$ \displaystyle \sum_{i} x_i^2 = (\textup{proj}_v(x))^2 + (\textup{dist}_v(x))^2$

In particular, the difference vector we originally called $ z$ has squared length $ \textup{dist}_v(x)^2$. The vector $ y$, which is perpendicular to $ z$ and is also the projection of $ x$ onto $ L$, has squared length $ (\textup{proj}_v(x))^2$. And the Pythagorean theorem tells us that summing those two squared lengths gives you the squared length of the hypotenuse $ x$.
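
If you want a quick numerical sanity check of these formulas, here is a minimal sketch with numpy (the particular $ x$ and $ v$ are arbitrary choices for illustration).

import numpy as np

x = np.array([3.0, 4.0, 0.0])
v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # a unit vector spanning the line L

projection = np.dot(x, v) * v                # proj_v(x) = (x . v) v
difference = x - projection                  # the vector z = x - y from the picture

# Pythagorean identity: |x|^2 equals |proj_v(x)|^2 + |z|^2
print(np.dot(x, x), np.dot(projection, projection) + np.dot(difference, difference))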

If we were trying to find the best approximating 1-dimensional subspace for a set of data points $ X$, then we’d want to minimize the sum of the squared distances for every point $ x \in X$. Namely, we want the $ v$ that solves $ \min_{|v|=1} \sum_{x \in X} (\textup{dist}_v(x))^2$.

With some slight algebra we can make our life easier. The short version: minimizing the sum of squared distances is the same thing as maximizing the sum of squared lengths of the projections. The longer version: let’s go back to a single point $ x$ and the line spanned by $ v$. The Pythagorean theorem told us that

$ \displaystyle \sum_{i} x_i^2 = (\textup{proj}_v(x))^2 + (\textup{dist}_v(x))^2$

The squared length of $ x$ is constant. It’s an input to the algorithm and it doesn’t change through a run of the algorithm. So we get the squared distance by subtracting $ (\textup{proj}_v(x))^2$ from a constant number,

$ \displaystyle \sum_{i} x_i^2 - (\textup{proj}_v(x))^2 = (\textup{dist}_v(x))^2$

which means if we want to minimize the squared distance, we can instead maximize the squared projection. Maximizing the subtracted thing minimizes the whole expression.

It works the same way if you’re summing over all the data points in $ X$. In fact, we can say it much more compactly this way. If the rows of $ A$ are your data points, then $ Av$ contains as each entry the (signed) dot products $ x_i \cdot v$. And the squared norm of this vector, $ |Av|^2$, is exactly the sum of the squared lengths of the projections of the data onto the line spanned by $ v$. The last thing is that maximizing a square is the same as maximizing its square root, so we can switch freely between saying our objective is to find the unit vector $ v$ that maximizes $ |Av|$ and that which maximizes $ |Av|^2$.
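
Here’s a quick sketch of that last claim in numpy: the squared norm $ |Av|^2$ equals the sum of the squared projection lengths of the rows onto $ v$ (the matrix and unit vector below are arbitrary illustrative choices).

import numpy as np

A = np.array([[2.0, 5.0, 3.0],
              [1.0, 2.0, 1.0],
              [4.0, 1.0, 1.0]])       # rows are data points
v = np.array([1.0, 2.0, 2.0]) / 3.0   # a unit vector

sumOfSquaredProjections = sum(np.dot(row, v) ** 2 for row in A)
print(np.linalg.norm(np.dot(A, v)) ** 2, sumOfSquaredProjections)  # the two numbers agree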

At this point you should be thinking,

Great, we have written down an optimization problem: $ \max_{v : |v|=1} |Av|$. If we could solve this, we’d have the best 1-dimensional linear approximation to the data contained in the rows of $ A$. But (1) how do we solve that problem? And (2) you promised a $ k$-dimensional approximating subspace. I feel betrayed! Swindled! Bamboozled!

Here’s the fantastic thing. We can solve the 1-dimensional optimization problem efficiently (we’ll do it later in this post), and (2) is answered by the following theorem.

The SVD Theorem: Computing the best $ k$-dimensional subspace reduces to $ k$ applications of the one-dimensional problem.

We will prove this after we introduce the terms “singular value” and “singular vector.”

Singular values and vectors

As I just said, we can get the best $ k$-dimensional approximating linear subspace by solving the one-dimensional maximization problem $ k$ times. The singular vectors of $ A$ are defined recursively as the solutions to these sub-problems. That is, I’ll call $ v_1$ the first singular vector of $ A$, and it is:

$ \displaystyle v_1 = \arg \max_{v, |v|=1} |Av|$

And the corresponding first singular value, denoted $ \sigma_1(A)$, is the maximal value of the optimization objective, i.e. $ |Av_1|$. (I will use this term frequently, that $ |Av|$ is the “objective” of the optimization problem.) Informally speaking, $ (\sigma_1(A))^2$ represents how much of the data was captured by the first singular vector. Meaning, how close the vectors are to lying on the line spanned by $ v_1$. Larger values imply the approximation is better. In fact, if all the data points lie on a line, then $ (\sigma_1(A))^2$ is the sum of the squared norms of the rows of $ A$.

Now here is where we see the reduction from the $ k$-dimensional case to the 1-dimensional case. To find the best 2-dimensional subspace, you first find the best one-dimensional subspace (spanned by $ v_1$), and then find the best 1-dimensional subspace again, but only considering those subspaces that are the spans of unit vectors perpendicular to $ v_1$. The notation for “vectors $ v$ perpendicular to $ v_1$” is $ v \perp v_1$. Restating, the second singular vector $ v_2$ is defined as

$ \displaystyle v_2 = \arg \max_{v \perp v_1, |v| = 1} |Av|$

And the SVD theorem implies the subspace spanned by $ \{ v_1, v_2 \}$ is the best 2-dimensional linear approximation to the data. Likewise $ \sigma_2(A) = |Av_2|$ is the second singular value. Its squared magnitude tells us how much of the data that was not “captured” by $ v_1$ is captured by $ v_2$. Again, if the data lies in a 2-dimensional subspace, then the span of $ \{ v_1, v_2 \}$ will be that subspace.

We can continue this process. Recursively define $ v_k$, the $ k$-th singular vector, to be the vector which maximizes $ |Av|$, when $ v$ is considered only among the unit vectors which are perpendicular to $ \textup{span} \{ v_1, \dots, v_{k-1} \}$. The corresponding singular value $ \sigma_k(A)$ is the value of the optimization problem.

As a side note, because of the way we defined the singular values as the objective values of “nested” optimization problems, the singular values are decreasing, $ \sigma_1(A) \geq \sigma_2(A) \geq \dots \geq \sigma_n(A) \geq 0$. This is obvious: you only pick $ v_2$ in the second optimization problem because you already picked $ v_1$ which gave a bigger singular value, so $ v_2$’s objective can’t be bigger.

If you keep doing this, one of two things happens. Either you reach $ v_n$, at which point there are no perpendicular directions left to choose from (the domain is $ n$-dimensional), and the $ v_i$ form an orthonormal basis of $ \mathbb{R}^n$. This means the matrix $ A$ has full rank: the data does not lie in any smaller-dimensional subspace. This is what you’d expect from real data.

Alternatively, you could get to a stage $ v_k$ with $ k < n$ and when you try to solve the optimization problem you find that every perpendicular $ v$ has $ Av = 0$. In this case, the data actually does lie in a $ k$-dimensional subspace, and the first-through-$ k$-th singular vectors you computed span this subspace.

Let’s do a quick sanity check: how do we know that the singular vectors $ v_i$ form a basis? Well, formally they only form a basis of the row space of $ A$, i.e., a basis of the subspace spanned by the data contained in the rows of $ A$. But either way the point is that each $ v_{i+1}$ adds a new dimension beyond the previous $ v_1, \dots, v_i$, because we’re choosing $ v_{i+1}$ to be orthogonal to all the previous $ v_i$. So the answer to our sanity check is “by construction.”

Back to the singular vectors, the discussion from the last post tells us intuitively that the data is probably never in a small subspace.  You never expect the process of finding singular vectors to stop before step $ n$, and if it does you take a step back and ask if something deeper is going on. Instead, in real life you specify how much of the data you want to capture, and you keep computing singular vectors until you’ve passed the threshold. Alternatively, you specify the amount of computing resources you’d like to spend by fixing the number of singular vectors you’ll compute ahead of time, and settle for however good the $ k$-dimensional approximation is.

Before we get into any code or solve the 1-dimensional optimization problem, let’s prove the SVD theorem.

Proof of the SVD theorem

Recall we’re trying to prove that the first $ k$ singular vectors provide a linear subspace $ W$ which maximizes the squared-sum of the projections of the data onto $ W$. For $ k=1$ this is trivial, because we defined $ v_1$ to be the solution to that optimization problem. The case of $ k=2$ contains all the important features of the general inductive step. Let $ W$ be any best-approximating 2-dimensional linear subspace for the rows of $ A$. We’ll show that the subspace spanned by the two singular vectors $ v_1, v_2$ is at least as good (and hence equally good).

Let $ w_1, w_2$ be any orthonormal basis for $ W$ and let $ |Aw_1|^2 + |Aw_2|^2$ be the quantity that we’re trying to maximize (and which $ W$ maximizes by assumption). Moreover, we can pick the basis vector $ w_2$ to be perpendicular to $ v_1$. To prove this we consider two cases: either $ v_1$ is already perpendicular to $ W$, in which case any orthonormal basis of $ W$ works, or else $ v_1$ isn’t perpendicular to $ W$ and you can choose $ w_1$ to be the unit vector in the direction of $ \textup{proj}_W(v_1)$ and choose $ w_2$ to be any unit vector in $ W$ perpendicular to $ w_1$. In the latter case $ w_2$ is perpendicular to $ v_1$, because $ v_1 = \textup{proj}_W(v_1) + (v_1 - \textup{proj}_W(v_1))$ and $ w_2$ is perpendicular to both pieces.

Now since $ v_1$ maximizes $ |Av|$, we have $ |Av_1|^2 \geq |Aw_1|^2$. Moreover, since $ w_2$ is perpendicular to $ v_1$, the way we chose $ v_2$ also makes $ |Av_2|^2 \geq |Aw_2|^2$. Hence the objective $ |Av_1|^2 + |Av_2|^2 \geq |Aw_1|^2 + |Aw_2|^2$, as desired.

For the general case of $ k$, suppose inductively that the first $ k$ singular vectors span a best $ k$-dimensional approximating subspace, and let $ W$ be any best $ (k+1)$-dimensional subspace. Since $ W$ has dimension $ k+1$ and $ \textup{span} \{ v_1, \dots, v_k \}$ has dimension $ k$, we can pick a unit vector $ w_{k+1} \in W$ that is perpendicular to all of $ v_1, v_2, \dots, v_k$, and extend it to an orthonormal basis $ w_1, \dots, w_{k+1}$ of $ W$. The inductive hypothesis gives $ |Aw_1|^2 + \dots + |Aw_k|^2 \leq |Av_1|^2 + \dots + |Av_k|^2$, the choice of $ v_{k+1}$ gives $ |Aw_{k+1}|^2 \leq |Av_{k+1}|^2$, and the rest of the proof is just like the 2-dimensional case.

$ \square$

Now remember that in the last post we started with the definition of the SVD as a decomposition of a matrix $ A = U\Sigma V^T$? And then we said that this is a certain kind of change of basis? Well the singular vectors $ v_i$ together form the columns of the matrix $ V$ (the rows of $ V^T$), and the corresponding singular values $ \sigma_i(A)$ are the diagonal entries of $ \Sigma$. When $ A$ is understood we’ll abbreviate the singular value as $ \sigma_i$.

To reiterate with the thoughts from last post, the process of applying $ A$ is exactly recovered by the process of first projecting onto the (full-rank space of) singular vectors $ v_1, \dots, v_k$, scaling each coordinate of that projection according to the corresponding singular values, and then applying this $ U$ thing we haven’t talked about yet.

So let’s determine what $ U$ has to be. The way we picked $ v_i$ to make $ A$ diagonal gives us an immediate suggestion: use the $ Av_i$ as the columns of $ U$. Indeed, define $ u_i = Av_i$, the images of the singular vectors under $ A$. We can swiftly show the $ u_i$ form a basis of the image of $ A$. The reason is because if $ v = \sum_i c_i v_i$ (using all $ n$ of the singular vectors $ v_i$), then by linearity $ Av = \sum_{i} c_i Av_i = \sum_i c_i u_i$. It is also easy to see why the $ u_i$ are orthogonal (prove it as an exercise). Let’s further make sure the $ u_i$ are unit vectors and redefine them as $ u_i = \frac{1}{\sigma_i}Av_i$.

If you put these thoughts together, you can say exactly what $ A$ does to any given vector $ x$. Since the $ v_i$ form an orthonormal basis, $ x = \sum_i (x \cdot v_i) v_i$, and then applying $ A$ gives

$ \displaystyle \begin{aligned}Ax &= A \left ( \sum_i (x \cdot v_i) v_i \right ) \\  &= \sum_i (x \cdot v_i) A v_i \\ &= \sum_i (x \cdot v_i) \sigma_i u_i \end{aligned}$

If you’ve been closely reading this blog in the last few months, you’ll recognize a very nice way to write the last line of the above equation. It’s an outer product. So depending on your favorite symbols, you’d write this as either $ A = \sum_{i} \sigma_i u_i \otimes v_i$ or $ A = \sum_i \sigma_i u_i v_i^T$. Or, if you like expressing things as matrix factorizations, as $ A = U\Sigma V^T$. All three are describing the same object.
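
Here’s a tiny numpy check of that equivalence, summing outer products to rebuild the matrix (the matrix below is an arbitrary illustration, and numpy hands back $ V^T$ as its third output).

import numpy as np

A = np.array([[2.0, 5.0, 3.0],
              [1.0, 2.0, 1.0],
              [4.0, 1.0, 1.0],
              [3.0, 5.0, 2.0]])

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
reconstruction = sum(sigma[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(sigma)))
print(np.allclose(A, reconstruction))  # True: the outer products sum back to A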

Let’s move on to some code.

A black box example

Before we implement SVD from scratch (an urge that commands me from the depths of my soul!), let’s see a black-box example that uses existing tools. For this we’ll use the numpy library.

Recall our movie-rating matrix from the last post:

[Figure: the movie-rating matrix from the last post.]

The code to compute the svd of this matrix is as simple as it gets:

import numpy as np
from numpy.linalg import svd

movieRatings = [
    [2, 5, 3],
    [1, 2, 1],
    [4, 1, 1],
    [3, 5, 2],
    [5, 3, 1],
    [4, 5, 5],
    [2, 4, 2],
    [2, 2, 5],
]

U, singularValues, V = svd(movieRatings)  # note: numpy returns V^T as its third output

Printing these values out gives

[[-0.39458526  0.23923575 -0.35445911 -0.38062172 -0.29836818 -0.49464816 -0.30703202 -0.29763321]
 [-0.15830232  0.03054913 -0.15299759 -0.45334816  0.31122898  0.23892035 -0.37313346  0.67223457]
 [-0.22155201 -0.52086121  0.39334917 -0.14974792 -0.65963979  0.00488292 -0.00783684  0.25934607]
 [-0.39692635 -0.08649009 -0.41052882  0.74387448 -0.10629499  0.01372565 -0.17959298  0.26333462]
 [-0.34630257 -0.64128825  0.07382859 -0.04494155  0.58000668 -0.25806239  0.00211823 -0.24154726]
 [-0.53347449  0.19168874  0.19949342 -0.03942604  0.00424495  0.68715732 -0.06957561 -0.40033035]
 [-0.31660464  0.06109826 -0.30599517 -0.19611823 -0.01334272  0.01446975  0.85185852  0.19463493]
 [-0.32840223  0.45970413  0.62354764  0.1783041   0.17631186 -0.39879476  0.06065902  0.25771578]]
[ 15.09626916   4.30056855   3.40701739]
[[-0.54184808 -0.67070995 -0.50650649]
 [-0.75152295  0.11680911  0.64928336]
 [ 0.37631623 -0.73246419  0.56734672]]

Now this is a bit weird, because the matrices $ U, V$ are the wrong shape! Remember, there are only supposed to be three vectors since the input matrix has rank three. So what gives? This is a distinction that goes by the name “full” versus “reduced” SVD. The idea goes back to our original statement that $ U \Sigma V^T$ is a decomposition with $ U, V^T$ both orthogonal and square matrices. But in the derivation we did in the last section, the $ U$ and $ V$ were not square. The singular vectors $ v_i$ could potentially stop before even becoming full rank.

In order to get to square matrices, what people sometimes do is take the two bases $ v_1, \dots, v_k$ and $ u_1, \dots, u_k$ and arbitrarily choose ways to complete them to a full orthonormal basis of their respective vector spaces. In other words, they just make the matrix square by filling it with data for no reason other than that it’s sometimes nice to have a complete basis. We don’t care about this. To be honest, I think the only place this comes in useful is in the desire to be particularly tidy in a mathematical formulation of something.

We can still work with it programmatically. By fudging around a bit with numpy’s shapes to get a diagonal matrix, we can reconstruct the input rating matrix from the factors.

# pad Σ with rows of zeros so the shapes line up: U is 8x8, Σ is 8x3, V^T is 3x3
Sigma = np.vstack([
    np.diag(singularValues),
    np.zeros((5, 3)),
])

print(np.round(movieRatings - np.dot(U, np.dot(Sigma, V)), decimals=10))

And the output is, as one expects, a matrix of all zeros. Meaning that we decomposed the movie rating matrix, and built it back up from the factors.

We can actually get the SVD as we defined it (with rectangular matrices) by passing a special flag to numpy’s svd.

U, singularValues, V = svd(movieRatings, full_matrices=False)
print(U)
print(singularValues)
print(V)

Sigma = np.diag(singularValues)
print(np.round(movieRatings - np.dot(U, np.dot(Sigma, V)), decimals=10))

And the result

[[-0.39458526  0.23923575 -0.35445911]
 [-0.15830232  0.03054913 -0.15299759]
 [-0.22155201 -0.52086121  0.39334917]
 [-0.39692635 -0.08649009 -0.41052882]
 [-0.34630257 -0.64128825  0.07382859]
 [-0.53347449  0.19168874  0.19949342]
 [-0.31660464  0.06109826 -0.30599517]
 [-0.32840223  0.45970413  0.62354764]]
[ 15.09626916   4.30056855   3.40701739]
[[-0.54184808 -0.67070995 -0.50650649]
 [-0.75152295  0.11680911  0.64928336]
 [ 0.37631623 -0.73246419  0.56734672]]
[[-0. -0. -0.]
 [-0. -0.  0.]
 [ 0. -0.  0.]
 [-0. -0. -0.]
 [-0. -0. -0.]
 [-0. -0. -0.]
 [-0. -0. -0.]
 [ 0. -0. -0.]]

This makes the reconstruction less messy, since we can just multiply everything without having to add extra rows of zeros to $ \Sigma$.

What do the singular vectors and values tell us about the movie rating matrix? (Besides nothing, since it’s a contrived example.) You’ll notice that the first singular value $ \sigma_1 > 15$ while the other two singular values are around $ 4$. This tells us that the first singular vector covers a large part of the structure of the matrix. I.e., a rank-1 matrix would be a pretty good approximation to the whole thing. As an exercise to the reader, write a program that evaluates this claim (how good is “good”?).
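
Here’s one way you might start on that exercise (a sketch, and only one reasonable measure among many): compare the Frobenius norm of the error of the rank-1 approximation $ \sigma_1 u_1 v_1^T$ to the norm of the original matrix.

import numpy as np

movieRatings = np.array([
    [2, 5, 3], [1, 2, 1], [4, 1, 1], [3, 5, 2],
    [5, 3, 1], [4, 5, 5], [2, 4, 2], [2, 2, 5],
], dtype='float64')

U, sigma, Vt = np.linalg.svd(movieRatings, full_matrices=False)
rankOne = sigma[0] * np.outer(U[:, 0], Vt[0, :])

# relative error in Frobenius norm; roughly 0.34 for this matrix
print(np.linalg.norm(movieRatings - rankOne) / np.linalg.norm(movieRatings))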

The greedy optimization routine

Now we’re going to write SVD from scratch. We’ll first implement the greedy algorithm for the 1-d optimization problem, and then we’ll perform the inductive step to get a full algorithm. Then we’ll run it on the CNN data set.

The method we’ll use to solve the 1-dimensional problem isn’t necessarily industry strength (see this document for a hint of what industry strength looks like), but it is simple conceptually. It’s called the power method. Now that we have our decomposition theorem, understanding how the power method works is quite easy.

Let’s work in the language of a matrix decomposition $ A = U \Sigma V^T$, more for practice with that language than anything else (using outer products would give us the same result with slightly different computations). Then let’s observe $ A^T A$, wherein we’ll use the fact that $ U$ is orthonormal and so $ U^TU$ is the identity matrix:

$ \displaystyle A^TA = (U \Sigma V^T)^T(U \Sigma V^T) = V \Sigma U^TU \Sigma V^T = V \Sigma^2 V^T$

So we can completely eliminate $ U$ from the discussion, and look at just $ V \Sigma^2 V^T$. And what’s nice about this matrix is that we can compute its eigenvectors, and eigenvectors turn out to be exactly the singular vectors. The corresponding eigenvalues are the squared singular values. This should be clear from the above derivation. If you apply $ (V \Sigma^2 V^T)$ to any $ v_i$, the only parts of the product that aren’t zero are the ones involving $ v_i$ with itself, and the scalar $ \sigma_i^2$ factors in smoothly. It’s dead simple to check.
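
Here’s a quick numpy check of that claim (a sketch; it assumes the singular values are distinct, and the eigenvectors may come back with flipped signs).

import numpy as np

A = np.array([[2.0, 5.0, 3.0],
              [1.0, 2.0, 1.0],
              [4.0, 1.0, 1.0],
              [3.0, 5.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(np.dot(A.T, A))  # ascending eigenvalue order
_, singularValues, Vt = np.linalg.svd(A)                    # descending order

print(np.allclose(np.sqrt(eigenvalues[::-1]), singularValues))   # eigenvalues are the squared singular values
print(np.allclose(np.abs(eigenvectors[:, ::-1].T), np.abs(Vt)))  # eigenvectors match singular vectors up to sign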

Theorem: Let $ x$ be a random unit vector and let $ B = A^TA = V \Sigma^2 V^T$. Then with high probability, $ \lim_{s \to \infty} B^s x$ is in the span of the first singular vector $ v_1$. If we normalize $ B^s x$ to a unit vector at each $ s$, then furthermore the limit is $ v_1$.

Proof. Start with a random unit vector $ x$, and write it in terms of the singular vectors $ x = \sum_i c_i v_i$. That means $ Bx = \sum_i c_i \sigma_i^2 v_i$. If you recursively apply this logic, you get $ B^s x = \sum_i c_i \sigma_i^{2s} v_i$. In particular, the dot product of $ (B^s x)$ with any $ v_j$ is $ c_j \sigma_j^{2s}$.

What this means is that so long as the first singular value $ \sigma_1$ is sufficiently larger than the second one $ \sigma_2$, and in turn all the other singular values, the part of $ B^s x$  corresponding to $ v_1$ will be much larger than the rest. Recall that if you expand a vector in terms of an orthonormal basis, in this case $ B^s x$ expanded in the $ v_i$, the coefficient of $ B^s x$ on $ v_j$ is exactly the dot product. So to say that $ B^sx$ converges to being in the span of $ v_1$ is the same as saying that the ratio of these coefficients, $ |(B^s x \cdot v_1)| / |(B^s x \cdot v_j)| \to \infty$ for any $ j$. In other words, the coefficient corresponding to the first singular vector dominates all of the others. And so if we normalize, the coefficient of $ B^s x$ corresponding to $ v_1$ tends to 1, while the rest tend to zero.

Indeed, this ratio is just $ (\sigma_1 / \sigma_j)^{2s}$ and the base of this exponential is bigger than 1.

$ \square$

If you want to be a little more precise and find bounds on the number of iterations required to converge, you can. The worry is that your random starting vector is “too close” to one of the smaller singular vectors $ v_j$, so that if the ratio of $ \sigma_1 / \sigma_j$ is small, then the “pull” of $ v_1$ won’t outweigh the pull of $ v_j$ fast enough. Choosing a random unit vector allows you to ensure with high probability that this doesn’t happen. And conditioned on it not happening (or measuring “how far the event is from happening” precisely), you can compute a precise number of iterations required to converge. The last two pages of these lecture notes have all the details.

We won’t compute a precise number of iterations. Instead we’ll just compute until the angle between $ B^{s+1}x$ and $ B^s x$ is very small. Here’s the algorithm

import numpy as np
from numpy.linalg import norm

from random import normalvariate
from math import sqrt

def randomUnitVector(n):
    unnormalized = [normalvariate(0, 1) for _ in range(n)]
    theNorm = sqrt(sum(x * x for x in unnormalized))
    return [x / theNorm for x in unnormalized]

def svd_1d(A, epsilon=1e-10):
    ''' The one-dimensional SVD '''

    n, m = A.shape
    x = randomUnitVector(m)
    lastV = None
    currentV = x
    B = np.dot(A.T, A)

    iterations = 0
    while True:
        iterations += 1
        lastV = currentV
        currentV = np.dot(B, lastV)
        currentV = currentV / norm(currentV)

        if abs(np.dot(currentV, lastV)) > 1 - epsilon:
            print("converged in {} iterations!".format(iterations))
            return currentV

We start with a random unit vector $ x$, and then loop computing $ x_{t+1} = Bx_t$, renormalizing at each step. The condition for stopping is that the magnitude of the dot product between $ x_t$ and $ x_{t+1}$ (since they’re unit vectors, this is the cosine of the angle between them) is very close to 1.

And using it on our movie ratings example:

if __name__ == "__main__":
    movieRatings = np.array([
        [2, 5, 3],
        [1, 2, 1],
        [4, 1, 1],
        [3, 5, 2],
        [5, 3, 1],
        [4, 5, 5],
        [2, 4, 2],
        [2, 2, 5],
    ], dtype='float64')

    print(svd_1d(movieRatings))

With the result

converged in 6 iterations!
[-0.54184805 -0.67070993 -0.50650655]

Note that the sign of the vector may be different from numpy’s output because we start with a random vector to begin with.

The recursive step, getting from $ v_1$ to the entire SVD, is equally straightforward. Say you start with the matrix $ A$ and you compute $ v_1$. You can use $ v_1$ to compute $ u_1$ and $ \sigma_1(A)$. Then you want to ensure you’re ignoring all vectors in the span of $ v_1$ for your next greedy optimization, and to do this you can simply subtract the rank 1 component of $ A$ corresponding to $ v_1$. I.e., set $ A’ = A - \sigma_1(A) u_1 v_1^T$. Then it’s easy to see that $ \sigma_1(A’) = \sigma_2(A)$ and basically all the singular vectors shift indices by 1 when going from $ A$ to $ A’$. Then you repeat.

If that’s not clear enough, here’s the code.

def svd(A, epsilon=1e-10):
    n, m = A.shape
    svdSoFar = []

    for i in range(m):
        matrixFor1D = A.copy()

        for singularValue, u, v in svdSoFar[:i]:
            matrixFor1D -= singularValue * np.outer(u, v)

        v = svd_1d(matrixFor1D, epsilon=epsilon)  # next singular vector
        u_unnormalized = np.dot(A, v)
        sigma = norm(u_unnormalized)  # next singular value
        u = u_unnormalized / sigma

        svdSoFar.append((sigma, u, v))

    # transform it into matrices of the right shape
    singularValues, us, vs = [np.array(x) for x in zip(*svdSoFar)]

    return singularValues, us.T, vs

And we can run this on our movie rating matrix to get the following

>>> theSVD = svd(movieRatings)
>>> theSVD[0]
array([ 15.09626916,   4.30056855,   3.40701739])
>>> theSVD[1]
array([[ 0.39458528, -0.23923093,  0.35446407],
       [ 0.15830233, -0.03054705,  0.15299815],
       [ 0.221552  ,  0.52085578, -0.39336072],
       [ 0.39692636,  0.08649568,  0.41052666],
       [ 0.34630257,  0.64128719, -0.07384286],
       [ 0.53347448, -0.19169154, -0.19948959],
       [ 0.31660465, -0.0610941 ,  0.30599629],
       [ 0.32840221, -0.45971273, -0.62353781]])
>>> theSVD[2]
array([[ 0.54184805,  0.67071006,  0.50650638],
       [ 0.75151641, -0.11679644, -0.64929321],
       [-0.37632934,  0.73246611, -0.56733554]])

Checking this against our numpy output (here npSVD is the tuple returned by numpy’s svd call with full_matrices=False) shows it’s within a reasonable level of precision (considering the power method took on the order of ten iterations!).

>>> np.round(np.abs(npSVD[0]) - np.abs(theSVD[1]), decimals=5)
array([[ -0.00000000e+00,  -0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,  -0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,  -1.00000000e-05,   1.00000000e-05],
       [  0.00000000e+00,   0.00000000e+00,  -0.00000000e+00],
       [  0.00000000e+00,  -0.00000000e+00,   1.00000000e-05],
       [ -0.00000000e+00,   0.00000000e+00,  -0.00000000e+00],
       [  0.00000000e+00,  -0.00000000e+00,   0.00000000e+00],
       [ -0.00000000e+00,   1.00000000e-05,  -1.00000000e-05]])
>>> np.round(np.abs(npSVD[2]) - np.abs(theSVD[2]), decimals=5)
array([[  0.00000000e+00,   0.00000000e+00,  -0.00000000e+00],
       [ -1.00000000e-05,  -1.00000000e-05,   1.00000000e-05],
       [  1.00000000e-05,   0.00000000e+00,  -1.00000000e-05]])
>>> np.round(np.abs(npSVD[1]) - np.abs(theSVD[0]), decimals=5)
array([ 0.,  0., -0.])

So there we have it. We added an extra little bit to the svd function, an argument $ k$ which stops computing the svd after it reaches rank $ k$.
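
Concretely, here’s a sketch of that version (the code in the repository may differ in the details); it reuses svd_1d, np, and norm from the snippets above.

def svd(A, k=None, epsilon=1e-10):
    ''' Compute the first k singular triples of A with the power method.
        A sketch of the repository version; defaults to computing all of them. '''
    n, m = A.shape
    svdSoFar = []
    if k is None:
        k = m  # the same behavior as the loop above

    for i in range(k):
        matrixFor1D = A.copy()

        # subtract off the components already found, as described above
        for singularValue, u, v in svdSoFar:
            matrixFor1D -= singularValue * np.outer(u, v)

        v = svd_1d(matrixFor1D, epsilon=epsilon)  # next singular vector
        u_unnormalized = np.dot(A, v)
        sigma = norm(u_unnormalized)  # next singular value
        u = u_unnormalized / sigma

        svdSoFar.append((sigma, u, v))

    singularValues, us, vs = [np.array(x) for x in zip(*svdSoFar)]
    return singularValues, us.T, vs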

CNN stories

One interesting use of the SVD is in topic modeling. Topic modeling is the process of taking a bunch of documents (news stories, or emails, or movie scripts, whatever) and grouping them by topic, where the algorithm gets to choose what counts as a “topic.” Topic modeling is just the name that natural language processing folks use instead of clustering.

The SVD can help one model topics as follows. First you construct a matrix $ A$ called a document-term matrix whose rows correspond to words in some fixed dictionary and whose columns correspond to documents. The $ (i,j)$ entry of $ A$ contains the number of times word $ i$ shows up in document $ j$. Or, more precisely, some quantity derived from that count, like a normalized count. See this table on wikipedia for a list of options related to that. We’ll just pick one arbitrarily for use in this post.

The point isn’t how we normalize the data, but what the SVD of $ A = U \Sigma V^T$ means in this context. Recall that the domain of $ A$, as a linear map, is a vector space whose dimension is the number of stories. We think of the vectors in this space as documents, or rather as an “embedding” of the abstract concept of a document using the counts of how often each word shows up in a document as a proxy for the semantic meaning of the document. Likewise, the codomain is the space of all words, and each word is embedded by which documents it occurs in. If we compare this to the movie rating example, it’s the same thing: a movie is the vector of ratings it receives from people, and a person is the vector of ratings of various movies.

Say you take a rank 3 approximation to $ A$. Then you get three singular vectors $ v_1, v_2, v_3$ which form a basis for a subspace of words, i.e., the “idealized” words. These idealized words are your topics, and you can compute where a “new word” falls by looking at which documents it appears in (writing it as a vector in the domain) and saying its “topic” is the closest of the $ v_1, v_2, v_3$. The same process applies to new documents. You can use this to cluster existing documents as well.

The dataset we’ll use for this post is a relatively small corpus of a thousand CNN stories picked from 2012. Here’s an excerpt from one of them

$ cat data/cnn-stories/story479.txt
3 things to watch on Super Tuesday
Here are three things to watch for: Romney's big day. He's been the off-and-on frontrunner throughout the race, but a big Super Tuesday could begin an end game toward a sometimes hesitant base coalescing behind former Massachusetts Gov. Mitt Romney. Romney should win his home state of Massachusetts, neighboring Vermont and Virginia, ...

So let’s first build this document-term matrix, with the normalized values, and then we’ll compute its SVD and see what the topics look like.

Step 1 is cleaning the data. We used a bunch of routines from the nltk library that boil down to this loop:

    # pos_tag and WordNetLemmatizer come from nltk; tokenize and wordnetPos are
    # small helper functions defined in the repository.
    for filename, documentText in documentDict.items():
        tokens = tokenize(documentText)
        tagged_tokens = pos_tag(tokens)
        wnl = WordNetLemmatizer()
        stemmedTokens = [wnl.lemmatize(word, wordnetPos(tag)).lower()
                         for word, tag in tagged_tokens]

This turns the Super Tuesday story into a list of words (with repetition):

["thing", "watch", "three", "thing", "watch", "big", ... ]

You’ll notice the name Romney doesn’t show up in the list of words. I’m only keeping the words that show up in the top 100,000 most common English words, and then lemmatizing all of the words to their roots. It’s not a perfect data cleaning job, but it’s simple and good enough for our purposes.

Now we can create the document term matrix.

from collections import Counter
import numpy as np

def makeDocumentTermMatrix(data):
    words = allWords(data)  # get the set of all unique words (a helper from the repository)

    wordToIndex = dict((word, i) for i, word in enumerate(words))
    indexToWord = dict(enumerate(words))
    indexToDocument = dict(enumerate(data))

    matrix = np.zeros((len(words), len(data)))
    for docID, document in enumerate(data):
        docWords = Counter(document['words'])
        for word, count in docWords.items():
            matrix[wordToIndex[word], docID] = count

    return matrix, (indexToWord, indexToDocument)

This creates a matrix with the raw integer counts. But what we need is a normalized count. The idea is that a common word like “thing” shows up disproportionately more often than “election,” and we don’t want raw magnitude of a word count to outweigh its semantic contribution to the classification. This is the applied math part of the algorithm design. So what we’ll do (and this technique together with SVD is called latent semantic indexing) is normalize each entry so that it measures both the frequency of a term in a document and the relative frequency of a term compared to the global frequency of that term. There are many ways to do this, and we’ll just pick one. See the github repository if you’re interested.
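
For concreteness, here’s a sketch of one common choice, a tf-idf-style weighting; the normalize function in the repository may use a different variant.

import numpy as np

def normalize(matrix):
    ''' matrix[i, j] is the raw count of word i in document j.
        (A tf-idf-style weighting; one choice among many.) '''
    numDocuments = matrix.shape[1]

    # term frequency: the fraction of a document's words that are word i
    termFrequency = matrix / np.maximum(matrix.sum(axis=0), 1)

    # inverse document frequency: words appearing in fewer documents get more weight
    documentCounts = np.count_nonzero(matrix, axis=1)
    inverseDocumentFrequency = np.log(numDocuments / np.maximum(documentCounts, 1))

    return termFrequency * inverseDocumentFrequency[:, np.newaxis]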

So now let’s compute a rank 10 decomposition and see how to cluster the results.

    data = load()
    matrix, (indexToWord, indexToDocument) = makeDocumentTermMatrix(data)
    matrix = normalize(matrix)
    sigma, U, V = svd(matrix, k=10)

This uses our svd, not numpy’s. Though numpy’s routine is much faster, it’s fun to see things work with code written from scratch. The result is too large to display here, but I can report the singular values.

>>> sigma
array([ 42.85249098,  21.85641975,  19.15989197,  16.2403354 ,
        15.40456779,  14.3172779 ,  13.47860033,  13.23795002,
        12.98866537,  12.51307445])

Now we take our original inputs and project them onto the subspace spanned by the singular vectors. This is the part that represents each word (resp., document) in terms of the idealized words (resp., documents), the singular vectors. Then we can apply a simple k-means clustering algorithm to the result, and observe the resulting clusters as documents.

    projectedDocuments = np.dot(matrix.T, U)
    projectedWords = np.dot(matrix, V.T)

    documentCenters, documentClustering = cluster(projectedDocuments)
    wordCenters, wordClustering = cluster(projectedWords)

    wordClusters = [
        [indexToWord[i] for (i, x) in enumerate(wordClustering) if x == j]
        for j in range(len(set(wordClustering)))
    ]

    documentClusters = [
        [indexToDocument[i]['text']
         for (i, x) in enumerate(documentClustering) if x == j]
        for j in range(len(set(documentClustering)))
    ]
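
The cluster helper used above is just a thin wrapper around k-means. Here’s a sketch of what it might look like using scikit-learn (the version in the repository may be implemented differently, e.g. in how it chooses the number of clusters).

from sklearn.cluster import KMeans

def cluster(vectors, numClusters=10):
    kmeans = KMeans(n_clusters=numClusters).fit(vectors)
    return kmeans.cluster_centers_, kmeans.labels_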

And now we can inspect individual clusters. Right off the bat we can tell the clusters aren’t quite right simply by looking at the sizes of each cluster.

>>> Counter(wordClustering)
Counter({1: 9689, 2: 1051, 8: 680, 5: 557, 3: 321, 7: 225, 4: 174, 6: 124, 9: 123})
>>> Counter(documentClustering)
Counter({7: 407, 6: 109, 0: 102, 5: 87, 9: 85, 2: 65, 8: 55, 4: 47, 3: 23, 1: 15})

What looks wrong to me is the size of the largest word cluster. If we could group words by topic, then this is saying there’s a topic with over nine thousand words associated with it! Inspecting it even closer, it includes words like “vegan,” “skunk,” and “pope.” On the other hand, some word clusters are spot on. Examine, for example, the fifth cluster which includes words very clearly associated with crime stories.

>>> wordClusters[4]
['account', 'accuse', 'act', 'affiliate', 'allegation', 'allege', 'altercation', 'anything', 'apartment', 'arrest', 'arrive', 'assault', 'attorney', 'authority', 'bag', 'black', 'blood', 'boy', 'brother', 'bullet', 'candy', 'car', 'carry', 'case', 'charge', 'chief', 'child', 'claim', 'client', 'commit', 'community', 'contact', 'convenience', 'court', 'crime', 'criminal', 'cry', 'dead', 'deadly', 'death', 'defense', 'department', 'describe', 'detail', 'determine', 'dispatcher', 'district', 'document', 'enforcement', 'evidence', 'extremely', 'family', 'father', 'fear', 'fiancee', 'file', 'five', 'foot', 'friend', 'front', 'gate', 'girl', 'girlfriend', 'grand', 'ground', 'guilty', 'gun', 'gunman', 'gunshot', 'hand', 'happen', 'harm', 'head', 'hear', 'heard', 'hoodie', 'hour', 'house', 'identify', 'immediately', 'incident', 'information', 'injury', 'investigate', 'investigation', 'investigator', 'involve', 'judge', 'jury', 'justice', 'kid', 'killing', 'lawyer', 'legal', 'letter', 'life', 'local', 'man', 'men', 'mile', 'morning', 'mother', 'murder', 'near', 'nearby', 'neighbor', 'newspaper', 'night', 'nothing', 'office', 'officer', 'online', 'outside', 'parent', 'person', 'phone', 'police', 'post', 'prison', 'profile', 'prosecute', 'prosecution', 'prosecutor', 'pull', 'racial', 'racist', 'release', 'responsible', 'return', 'review', 'role', 'saw', 'scene', 'school', 'scream', 'search', 'sentence', 'serve', 'several', 'shoot', 'shooter', 'shooting', 'shot', 'slur', 'someone', 'son', 'sound', 'spark', 'speak', 'staff', 'stand', 'store', 'story', 'student', 'surveillance', 'suspect', 'suspicious', 'tape', 'teacher', 'teen', 'teenager', 'told', 'tragedy', 'trial', 'vehicle', 'victim', 'video', 'walk', 'watch', 'wear', 'whether', 'white', 'witness', 'young']

As sad as it makes me to see that ‘black’ and ‘slur’ and ‘racial’ appear in this category, it’s a reminder that naively using the output of a machine learning algorithm can perpetuate racism.

Here’s another interesting cluster corresponding to economic words:

>>> wordClusters[6]
['agreement', 'aide', 'analyst', 'approval', 'approve', 'austerity', 'average', 'bailout', 'beneficiary', 'benefit', 'bill', 'billion', 'break', 'broadband', 'budget', 'class', 'combine', 'committee', 'compromise', 'conference', 'congressional', 'contribution', 'core', 'cost', 'currently', 'cut', 'deal', 'debt', 'defender', 'deficit', 'doc', 'drop', 'economic', 'economy', 'employee', 'employer', 'erode', 'eurozone', 'expire', 'extend', 'extension', 'fee', 'finance', 'fiscal', 'fix', 'fully', 'fund', 'funding', 'game', 'generally', 'gleefully', 'growth', 'hamper', 'highlight', 'hike', 'hire', 'holiday', 'increase', 'indifferent', 'insistence', 'insurance', 'job', 'juncture', 'latter', 'legislation', 'loser', 'low', 'lower', 'majority', 'maximum', 'measure', 'middle', 'negotiation', 'offset', 'oppose', 'package', 'pass', 'patient', 'pay', 'payment', 'payroll', 'pension', 'plight', 'portray', 'priority', 'proposal', 'provision', 'rate', 'recession', 'recovery', 'reduce', 'reduction', 'reluctance', 'repercussion', 'rest', 'revenue', 'rich', 'roughly', 'sale', 'saving', 'scientist', 'separate', 'sharp', 'showdown', 'sign', 'specialist', 'spectrum', 'spending', 'strength', 'tax', 'tea', 'tentative', 'term', 'test', 'top', 'trillion', 'turnaround', 'unemployed', 'unemployment', 'union', 'wage', 'welfare', 'worker', 'worth']

One can also inspect the stories, though the clusters are harder to print out here. Interestingly the first cluster of documents are stories exclusively about Trayvon Martin. The second cluster is mostly international military conflicts. The third cluster also appears to be about international conflict, but what distinguishes it from the second cluster is that every story in the third cluster discusses Syria.

>>> len([x for x in documentClusters[1] if 'Syria' in x]) / len(documentClusters[1])
0.05555555555555555
>>> len([x for x in documentClusters[2] if 'Syria' in x]) / len(documentClusters[2])
1.0

Anyway, you can explore the data more at your leisure (and tinker with the parameters to improve it!).

Issues with the power method

Though I mentioned that the power method isn’t an industry strength algorithm I didn’t say why. Let’s revisit that before we finish. The problem is that the convergence rate of even the 1-dimensional problem depends on the ratio of the first and second singular values, $ \sigma_1 / \sigma_2$. If that ratio is very close to 1, then the convergence will take a long time and need many many matrix-vector multiplications.

One way to alleviate that is to do the trick where, to compute a large power of a matrix, you iteratively square $ B$. But that requires computing a matrix square (instead of a bunch of matrix-vector products), and that requires a lot of time and memory if the matrix isn’t sparse. When the matrix is sparse, you can actually do the power method quite quickly, from what I’ve heard and read.
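
To make the repeated-squaring idea concrete, here’s a sketch (the helper name is mine, purely for illustration); note that each squaring is a full matrix-matrix product, which is exactly the cost being traded off.

import numpy as np
from numpy.linalg import norm

def powerIterateBySquaring(B, x, numSquarings=10):
    # computes (up to normalization) B^(2^numSquarings) x by squaring B repeatedly
    for _ in range(numSquarings):
        B = np.dot(B, B)
        B = B / norm(B)   # rescale to avoid overflow; the scale washes out below
    result = np.dot(B, x)
    return result / norm(result)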

But nevertheless, the industry standard methods involve computing a particular matrix decomposition that is not only faster than the power method, but also numerically stable. That means that the algorithm’s runtime and accuracy don’t depend on slight changes in the entries of the input matrix. Indeed, you can have two matrices where $ \sigma_1 / \sigma_2$ is very close to 1, but changing a single entry will make that ratio much larger. The power method’s convergence depends on that ratio, so it’s not numerically stable in this sense, while the industry standard technique is insensitive to such changes. This technique involves something called Householder reflections. So while the power method was great for a proof of concept, there’s much more work to do if you want true SVD power.

Until next time!

Big Dimensions, and What You Can Do About It

Data is abundant, data is big, and big is a problem. Let me start with an example. Let’s say you have a list of movie titles and you want to learn their genre: romance, action, drama, etc. And maybe in this scenario IMDB doesn’t exist so you can’t scrape the answer. Well, the title alone is almost never enough information. One nice way to get more data is to do the following:

  1. Pick a large dictionary of words, say the most common 100,000 non stop-words in the English language.
  2. Crawl the web looking for documents that include the title of a film.
  3. For each film, record the counts of all other words appearing in those documents.
  4. Maybe remove instances of “movie” or “film,” etc.

After this process you have a length-100,000 vector of integers associated with each movie title. IMDB’s database has around 1.5 million listed movies, and if we have a 32-bit integer per vector entry, that’s 600 GB of data to get every movie.

One way to try to find genres is to cluster this (unlabeled) dataset of vectors, and then manually inspect the clusters and assign genres. With a really fast computer we could simply run an existing clustering algorithm on this dataset and be done. Of course, clustering 600 GB of data takes a long time, but there’s another problem. The geometric intuition that we use to design clustering algorithms degrades as the length of the vectors in the dataset grows. As a result, our algorithms perform poorly. This phenomenon is called the “curse of dimensionality” (“curse” isn’t a technical term), and we’ll return to the mathematical curiosities shortly.

A possible workaround is to try to come up with faster algorithms or be more patient. But a more interesting mathematical question is the following:

Is it possible to condense high-dimensional data into smaller dimensions and retain the important geometric properties of the data?

This goal is called dimension reduction. Indeed, all of the chatter on the internet is bound to encode redundant information, so for our movie title vectors it seems the answer should be “yes.” But the questions remain, how does one find a low-dimensional condensification? (Condensification isn’t a word, the right word is embedding, but embedding is overloaded so we’ll wait until we define it) And what mathematical guarantees can you prove about the resulting condensed data? After all, it stands to reason that different techniques preserve different aspects of the data. Only math will tell.

In this post we’ll explore this so-called “curse” of dimensionality, explain the formality of why it’s seen as a curse, and implement a wonderfully simple technique called “the random projection method” which preserves pairwise distances between points after the reduction. As usual, all the code, data, and tests used in the making of this post are on Github.

Some curious issues, and the “curse”

We start by exploring the curse of dimensionality with experiments on synthetic data.

In two dimensions, take a circle centered at the origin with radius 1 and its bounding square.

[Figure: the unit circle inscribed in its bounding square.]

The circle fills up most of the area in the square, in fact it takes up exactly $ \pi$ out of 4 which is about 78%. In three dimensions we have a sphere and a cube, and the ratio of sphere volume to cube volume is a bit smaller, $ 4 \pi /3$ out of a total of 8, which is just over 52%. What about in a thousand dimensions? Let’s try by simulation.

import random

def randUnitCube(n):
   return [(random.random() - 0.5)*2 for _ in range(n)]

def sphereCubeRatio(n, numSamples):
   randomSample = [randUnitCube(n) for _ in range(numSamples)]
   return sum(1 for x in randomSample if sum(a**2 for a in x) <= 1) / numSamples

The result is as we computed for small dimension,

>>> sphereCubeRatio(2,10000)
0.7857
>>> sphereCubeRatio(3,10000)
0.5196

And much smaller for larger dimension

>>> sphereCubeRatio(20,100000) # 100k samples
0.0
>>> sphereCubeRatio(20,1000000) # 1M samples
0.0
>>> sphereCubeRatio(20,2000000)
5e-07

Forget a thousand dimensions, for even twenty dimensions, a million samples wasn’t enough to register a single random point inside the unit sphere. This illustrates one concern, that when we’re sampling random points in the $ d$-dimensional unit cube, we need at least $ 2^d$ samples to ensure we’re getting an even distribution from the whole space. In high dimensions, this fact basically rules out a naive Monte Carlo approximation, where you sample random points to estimate the probability of an event too complicated to sample from directly. A machine learning viewpoint of the same problem is that in dimension $ d$, if your machine learning algorithm requires a representative sample of the input space in order to make a useful inference, then you require $ 2^d$ samples to learn.

Luckily, we can answer our original question because there is a known formula for the volume of a sphere in any dimension. Rather than give the closed form formula, which involves the gamma function and is incredibly hard to parse, we’ll state the recursive form. Call $ V_i$ the volume of the unit sphere in dimension $ i$. Then $ V_0 = 1$ by convention, $ V_1 = 2$ (it’s an interval), and $ V_n = \frac{2 \pi V_{n-2}}{n}$. If you unpack this recursion you can see that the numerator looks like $ (2\pi)^{n/2}$ and the denominator looks like a factorial, except it skips every other number. So an even dimension would look like $ 2 \cdot 4 \cdot \dots \cdot n$, and this grows larger than a fixed exponential. So in fact the total volume of the sphere vanishes as the dimension grows! (In addition to the ratio vanishing!)

import math

def sphereVolume(n):
   values = [0] * (n+1)
   for i in range(n+1):
      if i == 0:
         values[i] = 1
      elif i == 1:
         values[i] = 2
      else:
         values[i] = 2*math.pi / i * values[i-2]

   return values[-1]

This should be counterintuitive. I think most people would guess, when asked about how the volume of the unit sphere changes as the dimension grows, that it stays the same or gets bigger.  But at a hundred dimensions, the volume is already getting too small to fit in a float.

>>> sphereVolume(20)
0.025806891390014047
>>> sphereVolume(100)
2.3682021018828297e-40
>>> sphereVolume(1000)
0.0

The scary thing is not just that this value drops, but that it drops exponentially quickly. A consequence is that, if you’re trying to cluster data points by looking at points within a fixed distance $ r$ of one point, you have to carefully measure how big $ r$ needs to be to cover the same proportional volume as it would in low dimension.

Here’s a related issue. Say I take a bunch of points generated uniformly at random in the unit cube.

import math
from itertools import combinations

def dist(x, y):
   # Euclidean distance (a helper; assumed, since it isn't shown in the original snippet)
   return math.sqrt(sum((a - b)**2 for (a, b) in zip(x, y)))

def distancesRandomPoints(n, numSamples):
   randomSample = [randUnitCube(n) for _ in range(numSamples)]
   pairwiseDistances = [dist(x,y) for (x,y) in combinations(randomSample, 2)]
   return pairwiseDistances

In two dimensions, the histogram of distances between points looks like this

[Figure: histogram of pairwise distances between random points in the 2-dimensional unit cube.]

However, as the dimension grows the distribution of distances changes. It evolves like the following animation, in which each frame is an increase in dimension from 2 to 100.

[Animation: the histogram of pairwise distances as the dimension increases from 2 to 100.]

The shape of the distribution doesn’t appear to be changing all that much after the first few frames, but the center of the distribution tends to infinity (in fact, it grows like $ \sqrt{d}$, where $ d$ is the dimension). The variance also appears to stay constant. This chart also becomes more variable as the dimension grows, again because we should be sampling exponentially many more points as the dimension grows (but we don’t). In other words, as the dimension grows the average distance grows and the tightness of the distribution stays the same. So at a thousand dimensions the average distance is about 26, tightly concentrated between 24 and 28. When the average is a thousand, the distribution is tight between 998 and 1002. If one were to normalize this data, it would appear that random points are all becoming equidistant from each other.
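
You can see this drift with a quick experiment using the functions above (a sketch; the sample sizes are kept small so it runs quickly).

import statistics

# uses randUnitCube and distancesRandomPoints defined above
for n in [2, 10, 100, 1000]:
   distances = distancesRandomPoints(n, 200)
   print(n, round(statistics.mean(distances), 2), round(statistics.stdev(distances), 2))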

So in addition to the issues of runtime and sampling, the geometry of high-dimensional space looks different from what we expect. To get a better understanding of “big data,” we have to update our intuition from low-dimensional geometry with analysis and mathematical theorems that are much harder to visualize.

The Johnson-Lindenstrauss Lemma

Now we turn to proving dimension reduction is possible. There are a few methods one might think of first, such as looking for suitable subsets of coordinates, or taking sums of subsets, but these would all appear to take a long time or simply not work.

Instead, the key technique is to take a random linear subspace of a certain dimension, and project every data point onto that subspace. No searching required. The fact that this works is called the Johnson-Lindenstrauss Lemma. To set up some notation, we’ll call $ d(v,w)$ the usual distance between two points.

Lemma [Johnson-Lindenstrauss (1984)]: Given a set $ X$ of $ n$ points in $ \mathbb{R}^d$, project the points in $ X$ to a randomly chosen subspace of dimension $ c$. Call the projection $ \rho$. For any $ \varepsilon > 0$, if $ c$ is at least $ \Omega(\log(n) / \varepsilon^2)$, then with probability at least 1/2 the distances between points in $ X$ are preserved up to a factor of $ (1+\varepsilon)$. That is, with good probability every pair $ v,w \in X$ will satisfy

$ \displaystyle \| v-w \|^2 (1-\varepsilon) \leq \| \rho(v) - \rho(w) \|^2 \leq \| v-w \|^2 (1+\varepsilon)$

Before we do the proof, which is quite short, it’s important to point out that the target dimension $ c$ does not depend on the original dimension! It only depends on the number of points in the dataset, and logarithmically so. That makes this lemma seem like pure magic, that you can take data in an arbitrarily high dimension and put it in a much smaller dimension.

On the other hand, if you include all of the hidden constants in the bound on the dimension, it’s not that impressive. If you have a million data points and you want to preserve the distances up to 1% ($ \varepsilon = 0.01$), the bound is bigger than a million dimensions! If you decrease the preservation $ \varepsilon$ to 10% (0.1), then you get down to about 12,000 dimensions, which is more reasonable. At 45% the bound drops to around 1,000 dimensions. Here’s a plot showing the theoretical bound on $ c$ in terms of $ \varepsilon$ for $ n$ fixed to a million.

[Plot: the theoretical bound on $ c$ as a function of $ \varepsilon$, with $ n$ fixed to a million.]

But keep in mind, this is just a theoretical bound for potentially misbehaving data. Later in this post we’ll see if the practical dimension can be reduced more than the theory allows. As we’ll see, an algorithm run on the projected data is still effective even if the projection goes well beyond the theoretical bound. Because the theorem is known to be tight in the worst case (see the notes at the end) this speaks more to the robustness of the typical algorithm than to the robustness of the projection method.

A second important note is that this technique does not necessarily avoid all the problems with the curse of dimensionality. We mentioned above that one potential problem is that “random points” are roughly equidistant in high dimensions. Johnson-Lindenstrauss actually preserves this problem because it preserves distances! As a consequence, you won’t see strictly better algorithm performance if you project (which we suggested is possible in the beginning of this post). But you will alleviate slow runtimes if the runtime depends exponentially on the dimension. Indeed, if you replace the dimension $ d$ with the logarithm of the number of points $ \log n$, then $ 2^d$ becomes linear in $ n$, and $ 2^{O(d)}$ becomes polynomial.

Proof of the J-L lemma

Let’s prove the lemma.

Proof. To start we make note that one can sample from the uniform distribution on dimension-$ c$ linear subspaces of $ \mathbb{R}^d$ by choosing the entries of a $ c \times d$ matrix $ A$ independently from a normal distribution with mean 0 and variance 1. Then, to project a vector $ x$ by this matrix (call the projection $ \rho$), we can compute

$ \displaystyle \rho(x) = \frac{1}{\sqrt{c}}A x$

Now fix $ \varepsilon > 0$ and fix two points in the dataset $ x,y$. We want an upper bound on the probability that the following is false

$ \displaystyle \| x-y \|^2 (1-\varepsilon) \leq \| \rho(x) - \rho(y) \|^2 \leq \| x-y \|^2 (1+\varepsilon)$

Since that expression is a pain to work with, let’s rearrange it by calling $ u = x-y$, and rearranging (using the linearity of the projection) to get the equivalent statement.

$ \left | \| \rho(u) \|^2 - \|u \|^2 \right | \leq \varepsilon \| u \|^2$

And so we want a bound on the probability that this event does not occur, meaning the inequality switches directions.

Once we get such a bound (it will depend on $ c$ and $ \varepsilon$) we need to ensure that this bound is true for every pair of points. The union bound allows us to do this, but it also requires that the probability of the bad thing happening tends to zero faster than $ 1/\binom{n}{2}$. That’s where the $ \log(n)$ will come into the bound as stated in the theorem.

Continuing with our use of $ u$ for notation, define $ X$ to be the random variable $ \frac{c}{\| u \|^2} \| \rho(u) \|^2$. By expanding the notation and using the linearity of expectation, you can show that the expected value of $ X$ is $ c$, meaning that in expectation, distances are preserved. We are on the right track, and just need to show that the distribution of $ X$, and thus the possible deviations in distances, is tightly concentrated around $ c$. In full rigor, we will show

$ \displaystyle \Pr [X \geq (1+\varepsilon) c] < e^{-(\varepsilon^2 - \varepsilon^3) \frac{c}{4}}$

Let $ A_i$ denote the $ i$-th row of $ A$. Define by $ X_i$ the quantity $ \langle A_i, u \rangle / \| u \|$. This is a weighted sum of the entries of $ A_i$ by the entries of $ u$. But since we chose the entries of $ A$ from the normal distribution, and since a weighted sum of independent normally distributed random variables is also normally distributed (here with variance $ \sum_j u_j^2 / \| u \|^2 = 1$), $ X_i$ is an $ N(0,1)$ random variable. Moreover, the rows of $ A$ are independent, so the $ X_i$ are independent. This allows us to decompose $ X$ as

$ X = \frac{c}{\| u \|^2} \| \rho(u) \|^2 = \frac{\| Au \|^2}{\| u \|^2}$

Expanding further,

$ X = \sum_{i=1}^c \frac{\langle A_i, u \rangle^2}{\|u\|^2} = \sum_{i=1}^c X_i^2$

Now the event $ X \geq (1+\varepsilon) c$ can be expressed in terms of the nonnegative variable $ e^{\lambda X}$, where $ 0 < \lambda < 1/2$ is a parameter, to get

$ \displaystyle \Pr[X \geq (1+\varepsilon) c] = \Pr[e^{\lambda X} \geq e^{(1+\varepsilon)c \lambda}]$

This will become useful because the sum $ X = \sum_i X_i^2$ will split into a product momentarily. First we apply Markov’s inequality, which says that for any nonnegative random variable $ Y$, $ \Pr[Y \geq t] \leq \mathbb{E}[Y] / t$. This lets us write

$ \displaystyle \Pr[e^{\lambda X} \geq e^{(1+\varepsilon) c \lambda}] \leq \frac{\mathbb{E}[e^{\lambda X}]}{e^{(1+\varepsilon) c \lambda}}$

Now we can split up the exponent $ \lambda X$ into $ \sum_{i=1}^c \lambda X_i^2$, and using the i.i.d.-ness of the $ X_i^2$ we can rewrite the RHS of the inequality as

$ \left ( \frac{\mathbb{E}[e^{\lambda X_1^2}]}{e^{(1+\varepsilon)\lambda}} \right )^c$

A similar statement using $ -\lambda$ is true for the $ (1-\varepsilon)$ part, namely that

$ \displaystyle \Pr[X \leq (1-\varepsilon)c] \leq \left ( \frac{\mathbb{E}[e^{-\lambda X_1^2}]}{e^{-(1-\varepsilon)\lambda}} \right )^c$

The last thing that’s needed is to bound $ \mathbb{E}[e^{\lambda X_i^2}]$, but since $ X_i \sim N(0,1)$, we can use the known density function for a normal distribution, and integrate to get the exact value $ \mathbb{E}[e^{\lambda X_1^2}] = \frac{1}{\sqrt{1-2\lambda}}$. Including this in the bound gives us a closed-form bound in terms of $ \lambda, c, \varepsilon$. Using standard calculus the optimal $ \lambda \in (0,1/2)$ is $ \lambda = \frac{\varepsilon}{2(1+\varepsilon)}$. This gives

$ \displaystyle \Pr[X \geq (1+\varepsilon) c] \leq ((1+\varepsilon)e^{-\varepsilon})^{c/2}$
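
For completeness, here is the integral behind that moment generating function; combining the exponents in the Gaussian density gives

$ \displaystyle \mathbb{E}[e^{\lambda X_1^2}] = \int_{-\infty}^{\infty} e^{\lambda t^2} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(1-2\lambda) t^2 / 2} \, dt = \frac{1}{\sqrt{1-2\lambda}}$

which is finite precisely because $ \lambda < 1/2$.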

Using the Taylor series expansion for $ e^x$, one can show the bound $ 1+\varepsilon < e^{\varepsilon - (\varepsilon^2 - \varepsilon^3)/2}$, which simplifies the final upper bound to $ e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.

Doing the same thing for the $ (1-\varepsilon)$ version gives an equivalent bound, and so the total bound is doubled, i.e. $ 2e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.
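
If you'd like a sanity check on that bound before trusting the algebra, here is a small simulation sketch (mine, not from the post's repository) that estimates the two-sided tail of $ X = \sum_{i=1}^c X_i^2$ by Monte Carlo and compares it to $ 2e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.

import numpy

def empiricalTailVsBound(c, epsilon, trials=20000):
   # X is a sum of c squared standard normals, i.e. a chi-squared variable with c degrees of freedom
   samples = numpy.random.normal(0, 1, size=(trials, c))
   X = (samples ** 2).sum(axis=1)
   empirical = numpy.mean((X >= (1 + epsilon) * c) | (X <= (1 - epsilon) * c))
   bound = 2 * numpy.exp(-(epsilon**2 - epsilon**3) * c / 4)
   return empirical, bound

# e.g. empiricalTailVsBound(500, 0.2) should report an empirical frequency
# well below the bound (up to sampling error).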

As we said at the beginning, applying the union bound means we need

$ \displaystyle 2e^{-(\varepsilon^2 - \varepsilon^3) c/4} < \frac{1}{\binom{n}{2}}$

Solving this for $ c$ gives $ c \geq \frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, as desired.

$ \square$

Projecting in Practice

Let’s write a python program to actually perform the Johnson-Lindenstrauss dimension reduction scheme. This is sometimes called the Johnson-Lindenstrauss transform, or JLT.

First we define a random subspace by sampling an appropriately-sized matrix with normally distributed entries, and a function that performs the projection onto a given subspace (for testing).

import random
import math
import numpy

def randomSubspace(subspaceDimension, ambientDimension):
   # a c-by-n matrix of independent N(0,1) entries; its rows span a random subspace
   return numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))

def project(v, subspace):
   # scale by 1/sqrt(c) so that squared norms are preserved in expectation
   subspaceDimension = len(subspace)
   return (1 / math.sqrt(subspaceDimension)) * subspace.dot(v)

We have a function that computes the theoretical bound on the optimal dimension to reduce to.

def theoreticalBound(n, epsilon):
   return math.ceil(8*math.log(n) / (epsilon**2 - epsilon**3))

And then performing the JLT is simply matrix multiplication.

def jlt(data, subspaceDimension):
   # data is a numpy array whose rows are the data points; returns the projected rows
   ambientDimension = len(data[0])
   A = randomSubspace(subspaceDimension, ambientDimension)
   return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T

The high-dimensional dataset we’ll use comes from a data mining competition called KDD Cup 2001. The dataset we used deals with drug design, and the goal is to determine whether an organic compound binds to something called thrombin. Thrombin has something to do with blood clotting, and I won’t pretend I’m an expert. The dataset, however, has over a hundred thousand features for about 2,000 compounds. Here are a few approximate target dimensions we can hope for as epsilon varies.

>>> [('%.2f' % (1/x), theoreticalBound(n=2000, epsilon=1/x))
...     for x in [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]]
[('0.50', 487), ('0.33', 821), ('0.25', 1298), ('0.20', 1901),
 ('0.17', 2627), ('0.14', 3477), ('0.12', 4448), ('0.11', 5542),
 ('0.10', 6757), ('0.07', 14659), ('0.05', 25604)]

Going down from a hundred thousand dimensions to a few thousand decreases the size of the dataset by about 95%, by any measure. We can also observe how the distribution of overall distances varies as the size of the subspace we project to varies.

The animation proceeds from 5000 dimensions down to 2 (when the plot is at its bulkiest closer to zero).


The last three frames are for 10, 5, and 2 dimensions respectively. As you can see the histogram starts to beef up around zero. To be honest I was expecting something a bit more dramatic like a uniform-ish distribution. Of course, the distribution of distances is not all that matters. Another concern is the worst case change in distances between any two points before and after the projection. We can see that, indeed, when we project to the dimension specified in the theorem, the distances are within the prescribed bounds.

from itertools import combinations

def checkTheorem(oldData, newData, epsilon):
   numBadPoints = 0

   for (x,y), (x2,y2) in zip(combinations(oldData, 2), combinations(newData, 2)):
      oldNorm = numpy.linalg.norm(x-y)**2
      newNorm = numpy.linalg.norm(x2-y2)**2

      if newNorm == 0 or oldNorm == 0:
         continue

      if abs(newNorm / oldNorm - 1) > epsilon:
         numBadPoints += 1

   return numBadPoints

if __name__ == "__main__":
   from data import thrombin
   train, labels = thrombin.load() 

   numPoints = len(train)
   epsilon = 0.2
   subspaceDim = theoreticalBound(numPoints, epsilon)
   ambientDim = len(train[0])
   newData = jlt(train, subspaceDim)

   print(checkTheorem(train, newData, epsilon))

This program prints zero every time I try running it, which is the poor man’s way of saying it works “with high probability.” We can also plot statistics about the number of pairs of data points that are distorted by more than $ \varepsilon$ as the subspace dimension shrinks. We ran this on the following set of subspace dimensions with $ \varepsilon = 0.1$ and took average/standard deviation over twenty trials:

   dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]
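
For reference, the experiment is roughly the following loop; the plotting details are in the Github repository, and numTrials here is the twenty trials mentioned above.

def distortionStats(data, dims, epsilon=0.1, numTrials=20):
   # for each target dimension, count distorted pairs over several random projections
   stats = []
   for d in dims:
      counts = [checkTheorem(data, jlt(data, d), epsilon) for _ in range(numTrials)]
      stats.append((d, numpy.mean(counts), numpy.std(counts)))
   return stats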

The result is the following chart, whose x-axis is the dimension projected to (so the left hand is the most extreme projection to 2, 5, 10 dimensions), the y-axis is the number of distorted pairs, and the error bars represent a single standard deviation away from the mean.

thrombin-worst-case

This chart provides good news about this dataset because the standard deviations are low. It tells us something that mathematicians often ignore: the predictability of the tradeoff that occurs once you go past the theoretically perfect bound. In this case, the standard deviations tell us that it’s highly predictable. Moreover, since this tradeoff curve measures pairs of points, we might conjecture that the distortion is localized around a single set of points that got significantly “rattled” by the projection. This would be an interesting exercise to explore.

Now all of these charts are really playing with the JLT and confirming the correctness of our code (and hopefully our intuition). The real question is: how well does a machine learning algorithm perform on the original data when compared to the projected data? If the algorithm only “depends” on the pairwise distances between the points, then we should expect nearly identical accuracy in the unprojected and projected versions of the data. To show this we’ll use an easy learning algorithm, the k-nearest-neighbors method. The problem, however, is that there are very few positive examples in this particular dataset. So looking for the majority label of the nearest $ k$ neighbors for any $ k > 2$ invariably results in the “all negative” classifier, which has 97% accuracy. This happens before and after projecting.

To compensate for this, we modify k-nearest-neighbors slightly by having the label of a predicted point be 1 if any label among its nearest neighbors is 1. So it’s not a majority vote, but rather a logical OR of the labels of nearby neighbors. Our point in this post is not to solve the problem well, but rather to show how an algorithm (even a not-so-good one) can degrade as one projects the data into smaller and smaller dimensions. Here is the code.

def nearestNeighborsAccuracy(data, labels, k=10):
   from sklearn.neighbors import NearestNeighbors
   trainData, trainLabels, testData, testLabels = randomSplit(data, labels) # cross validation
   model = NearestNeighbors(n_neighbors=k).fit(trainData)
   distances, indices = model.kneighbors(testData)
   predictedLabels = []

   for x in indices:
      xLabels = [trainLabels[i] for i in x[1:]]
      predictedLabel = max(xLabels)   # max over 0/1 labels acts as a logical OR
      predictedLabels.append(predictedLabel)

   totalAccuracy = sum(x == y for (x,y) in zip(testLabels, predictedLabels)) / len(testLabels)
   falsePositive = (sum(x == 0 and y == 1 for (x,y) in zip(testLabels, predictedLabels)) /
      sum(x == 0 for x in testLabels))
   falseNegative = (sum(x == 1 and y == 0 for (x,y) in zip(testLabels, predictedLabels)) /
      sum(x == 1 for x in testLabels))

   return totalAccuracy, falsePositive, falseNegative
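
The randomSplit helper above lives in the repository; a minimal version might look like the following (the test fraction is my choice, and the repository's version may differ).

def randomSplit(data, labels, testFraction=0.3):
   # shuffle the indices and carve off a held-out test set
   indices = list(range(len(data)))
   random.shuffle(indices)
   cutoff = int(len(data) * (1 - testFraction))
   trainIndices, testIndices = indices[:cutoff], indices[cutoff:]
   return ([data[i] for i in trainIndices], [labels[i] for i in trainIndices],
           [data[i] for i in testIndices], [labels[i] for i in testIndices])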

And here is the accuracy of this modified k-nearest-neighbors algorithm run on the thrombin dataset. The horizontal line represents the accuracy of the produced classifier on the unmodified data set. The x-axis represents the dimension projected to (left-hand side is the lowest), and the y-axis represents the accuracy. The mean accuracy over fifty trials was plotted, with error bars representing one standard deviation. The complete code to reproduce the plot is in the Github repository.

thrombin-knn-accuracy

Likewise, we plot the proportion of false positive and false negatives for the output classifier. Note that a “positive” label made up only about 2% of the total data set. First the false positives

thrombin-knn-fp

Then the false negatives

thrombin-knn-fn

As we can see from these three charts, things don’t really change that much (for this dataset) even when we project down to around 200-300 dimensions. Note that for these parameters the “correct” theoretical choice for dimension was on the order of 5,000 dimensions, so this is a 95% savings from the naive approach, and 99.75% space savings from the original data. Not too shabby.

Notes

The $ \Omega(\log(n))$ worst-case dimension bound is asymptotically tight, though there is some small gap in the literature that depends on $ \varepsilon$. This result is due to Noga Alon, the very last result (Section 9) of this paper. [Update: as djhsu points out in the comments, this gap is now closed thanks to Larsen and Nelson]

We did dimension reduction with respect to preserving the Euclidean distance between points. One might naturally wonder if you can achieve the same dimension reduction with a different metric, say the taxicab metric or a $ p$-norm. In fact, you cannot achieve anything close to logarithmic dimension reduction for the taxicab ($ l_1$) metric. This result is due to Brinkman-Charikar in 2004.

The code we used to compute the JLT is not particularly efficient. There are much more efficient methods. One of them, borrowing its namesake from the Fast Fourier Transform, is called the Fast Johnson-Lindenstrauss Transform. The technique is due to Ailon-Chazelle from 2009, and it involves something called “preconditioning a sparse projection matrix with a randomized Fourier transform.” I don’t know precisely what that means, but it would be neat to dive into that in a future post.

The central focus in this post was whether the JLT preserves distances between points, but one might be curious as to whether the points themselves are well approximated. The answer is an enthusiastic no. If the data were images, the projected points would look nothing like the original images. However, it appears the degradation tradeoff is measurable (by some accounts perhaps linear), and there appears to be some work (also this by the same author) when restricting to sparse vectors (like word-association vectors).

Note that the JLT is not the only method for dimensionality reduction. We previously saw principal component analysis (applied to face recognition), and in the future we will cover a related technique called the Singular Value Decomposition. It is worth noting that another common technique specific to nearest-neighbor is called “locality-sensitive hashing.” Here the goal is to project the points in such a way that “similar” points land very close to each other. Say, if you were to discretize the plane into bins, these bins would form the hash values and you’d want to maximize the probability that two points with the same label land in the same bin. Then you can do things like nearest-neighbors by comparing bins.
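
To make the bin metaphor concrete, here is a toy sketch of that idea for points in the plane, using a randomly shifted grid (real locality-sensitive hashing schemes are more sophisticated and come with probabilistic guarantees).

def gridHash(point, cellSize, offset):
   # the hash of a 2D point is the pair of integer coordinates of its (shifted) grid cell
   return (int((point[0] + offset[0]) // cellSize),
           int((point[1] + offset[1]) // cellSize))

def buildGridBins(points, cellSize):
   offset = (random.uniform(0, cellSize), random.uniform(0, cellSize))
   bins = {}
   for p in points:
      bins.setdefault(gridHash(p, cellSize, offset), []).append(p)
   return bins, offset

# nearest-neighbor candidates for a query q are the points sharing q's bin:
#    bins, offset = buildGridBins(points2d, cellSize=1.0)
#    candidates = bins.get(gridHash(q, 1.0, offset), [])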

Another interesting note, if your data is linearly separable (like the examples we saw in our age-old post on Perceptrons), then you can use the JLT to make finding a linear separator easier. First project the data onto the dimension given in the theorem. With high probability the points will still be linearly separable. And then you can use a perceptron-type algorithm in the smaller dimension. If you want to find out which side a new point is on, you project and compare with the separator in the smaller dimension.
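
As a rough sketch of that recipe, using randomSubspace from earlier (the perceptron loop is the textbook version, and the function names here are mine):

def perceptron(points, labels, maxRounds=100):
   # labels are +1/-1; returns a weight vector w, classifying new points by the sign of w . x
   w = numpy.zeros(len(points[0]))
   for _ in range(maxRounds):
      mistakes = 0
      for x, y in zip(points, labels):
         if y * w.dot(x) <= 0:
            w = w + y * x
            mistakes += 1
      if mistakes == 0:
         break
   return w

def projectAndLearn(data, labels, subspaceDimension):
   # keep the projection matrix around so that new points can be projected consistently
   A = randomSubspace(subspaceDimension, len(data[0]))
   scale = 1 / math.sqrt(subspaceDimension)
   projected = scale * A.dot(numpy.array(data).T).T
   w = perceptron(projected, labels)
   return lambda x: numpy.sign(w.dot(scale * A.dot(x)))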

Beyond its interest for practical dimensionality reduction, the JLT has had many other interesting theoretical consequences. More generally, the idea of “randomly projecting” your data onto some small dimensional space has allowed mathematicians to get some of the best-known results on many optimization and learning problems, perhaps the most famous of which is called MAX-CUT; the result is by Goemans-Williamson and it led to a mathematical constant being named after them, $ \alpha_{GW} = 0.878567 \dots$. If you’re interested in more about the theory, Santosh Vempala wrote a wonderful (and short!) treatise dedicated to this topic.

randomprojectionbook

Load Balancing and the Power of Hashing

Here’s a bit of folklore I often hear (and retell) that’s somewhere between a joke and deep wisdom: if you’re doing a software interview that involves some algorithms problem that seems hard, your best bet is to use hash tables.

More succinctly put: Google loves hash tables.

As someone with a passion for math and theoretical CS, I find this kind of silly and reductionist. But if you actually work with terabytes of data that can’t fit on a single machine, it also makes sense.

But to understand why hash tables are so applicable, you should have at least a fuzzy understanding of the math that goes into it, which is surprisingly unrelated to the actual act of hashing. Instead it’s the guarantees that a “random enough” hash provides that makes it so useful. The basic intuition is that if you have an algorithm that works well assuming the input data is completely random, then you can probably get a good guarantee by preprocessing the input by hashing.

In this post I’ll explain the details, and show the application to an important problem that one often faces in dealing with huge amounts of data: how to allocate resources efficiently (load balancing). As usual, all of the code used in the making of this post is available on Github.

Next week, I’ll follow this post up with another application of hashing to estimating the number of distinct items in a set that’s too large to store in memory.

Families of Hash Functions

To emphasize which specific properties of hash functions are important for a given application, we start by introducing an abstraction: a hash function is just some computable function that accepts strings as input and produces numbers between 1 and $ n$ as output. We call the set of allowed inputs $ U$ (for “Universe”). A family of hash functions is just a set of possible hash functions to choose from. We’ll use a scripty $ \mathscr{H}$ for our family, and so every hash function $ h$ in $ \mathscr{H}$ is a function $ h : U \to \{ 1, \dots, n \}$.

You can use a single hash function $ h$ to maintain an unordered set of objects in a computer. The reason this is a problem worth solving is that if you store items sequentially in a list, then determining whether a specific item is already in the list potentially requires checking every item (or doing something fancier). In any event, without hashing you have to spend some non-negligible amount of time searching. With hashing, you can choose the location of an element $ x \in U$ based on the value of its hash $ h(x)$. If you pick your hash function well, then you’ll have very few collisions and can deal with them efficiently. The relevant section on Wikipedia has more about the various techniques to deal with collisions in hash tables specifically, but we want to move beyond that in this post.
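
As a tiny illustration of that single-hash-function use case, here is a sketch of an unordered set with chaining (bins are 0-indexed for the sake of list indexing, and a real hash table would also handle resizing).

class ChainedHashSet(object):
   def __init__(self, h, numBins):
      self.h = h   # a hash function mapping inputs to {0, ..., numBins - 1}
      self.bins = [[] for _ in range(numBins)]

   def add(self, x):
      bucket = self.bins[self.h(x)]
      if x not in bucket:
         bucket.append(x)

   def contains(self, x):
      # only the items that collided with x under h need to be checked
      return x in self.bins[self.h(x)]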

Here, instead, we have a whole family of hash functions. So what’s the use of having many hash functions? You can pick one at random from a “good” family. While this doesn’t seem so magical, it has the informal property that it makes arbitrary data “random enough,” so that an algorithm which you designed to work with truly random data will also work with the hashes of arbitrary data. Moreover, even if an adversary knows $ \mathscr{H}$ and knows that you’re picking a hash function at random, there’s no way for the adversary to manufacture problems by feeding bad data. With overwhelming probability the worst-case scenario will not occur. Our first example of this is in load-balancing.

Load balancing and 2-uniformity

You can imagine load balancing in two ways, concretely and mathematically. In the concrete version you have a public-facing server that accepts requests from users, and forwards them to a back-end server which processes them and sends a response to the user. When you have a billion users and a million servers, you want to forward the requests in such a way that no server gets too many requests, or else the users will experience delays. Moreover, you’re worried that the League of Tanzanian Hackers is trying to take down your website by sending you requests in a carefully chosen order so as to screw up your load balancing algorithm.

The mathematical version of this problem usually goes with the metaphor of balls and bins. You have some collection of $ m$ balls and $ n$ bins in which to put the balls, and you want to put the balls into the bins. But there’s a twist: an adversary is throwing balls at you, and you have to put them into the bins before the next ball comes, so you don’t have time to remember (or count) how many balls are in each bin already. You only have time to do a small bit of mental arithmetic, sending ball $ i$ to bin $ f(i)$ where $ f$ is some simple function. Moreover, whatever rule you pick for distributing the balls in the bins, the adversary knows it and will throw balls at you in the worst order possible.

silk-balls.jpg
A young man applying his knowledge of balls and bins. That’s totally what he’s doing.

There is one obvious approach: why not just pick a uniformly random bin for each ball? The problem here is that we need the choice to be persistent. That is, if the adversary throws the same ball at us a second time, we need to put it in the same bin as the first time, and it doesn’t count toward the overall load. This is where the ball/bin metaphor breaks down. In the request/server picture, there is data specific to each user stored on the back-end server between requests (a session), and you need to make sure that data is not lost for some reasonable period of time. And if we were to save a uniform random choice after each request, we’d need to store a number for every request, which is too much. In short, we need the mapping to be persistent, but we also want it to be “like random” in effect.

So what do you do? The idea is to take a “good” family of hash functions $ \mathscr{H}$, pick one $ h \in \mathscr{H}$ uniformly at random for the whole game, and when you get a request/ball $ x \in U$ send it to server/bin $ h(x)$. Note that in this case, the adversary knows your universal family $ \mathscr{H}$ ahead of time, and it knows your algorithm of committing to some single randomly chosen $ h \in \mathscr{H}$, but the adversary does not know which particular $ h$ you chose.

The property of a family of hash functions that makes this strategy work is called 2-universality.

Definition: A family of functions $ \mathscr{H}$ from some universe $ U$ to $ \{ 1, \dots, n \}$ is called 2-universal if, for every two distinct $ x, y \in U$, the probability over the random choice of a hash function $ h$ from $ \mathscr{H}$ that $ h(x) = h(y)$ is at most $ 1/n$. In notation,

$ \displaystyle \Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq \frac{1}{n}$

I’ll give an example of such a family shortly, but let’s apply this to our load balancing problem. Our load-balancing algorithm would fail if, with even some modest probability, there is some server that receives many more than its fair share ($ m/n$) of the $ m$ requests. If $ \mathscr{H}$ is 2-universal, then we can compute an upper bound on the expected load of a given server, say server 1. Specifically, pick any element $ x$ which hashes to 1 under our randomly chosen $ h$. Then we can compute an upper bound on the expected number of other elements that hash to 1. In this computation we’ll only use the fact that expectation splits over sums, and the definition of 2-universal. Call $ \mathbf{1}_{h(y) = 1}$ the random variable which is zero when $ h(y) \neq 1$ and one when $ h(y) = 1$, and call $ X = \sum_{y} \mathbf{1}_{h(y) = 1}$, where the sum ranges over the $ m$ requests. In words, $ X$ simply represents the number of requests that hash to 1. Then

$ \displaystyle \mathbb{E}[X] = \sum_{y} \mathbb{E}[\mathbf{1}_{h(y) = 1}] = 1 + \sum_{y \neq x} \Pr[h(y) = h(x)] \leq 1 + \frac{m-1}{n}$

So in expectation, server 1 gets its fair share of the requests. And clearly this doesn’t depend on the output hash being 1; it works for any server. There are two obvious questions.

  1. How do we measure the risk that, despite the expectation we computed above, some server is overloaded?
  2. If it seems like (1) is on track to happen, what can you do?

For 1 we’re asking to compute, for a given deviation $ t$, the probability that $ X - \mathbb{E}[X] > t$. This makes more sense if we jump to multiplicative factors, since it’s usually okay for a server to bear twice or three times its usual load, but not, say, $ \sqrt{n}$ times its usual load. (Industry experts, please correct me if I’m wrong! I’m far from an expert on the practical details of load balancing.)

So we want to know what is the probability that $ X - \mathbb{E}[X] > t \cdot \mathbb{E}[X]$ for some small number $ t$, and we want this to get small quickly as $ t$ grows. This is where the Chebyshev inequality becomes useful. For those who don’t want to click the link, for our situation Chebyshev’s inequality is the statement that, for any random variable $ X$

$ \displaystyle \Pr[|X - \mathbb{E}[X]| > t\mathbb{E}[X]] \leq \frac{\textup{Var}[X]}{t^2 \mathbb{E}^2[X]}.$

So all we need to do is compute the variance of the load of a server. It’s a bit of a hairy calculation to write down, but rest assured it doesn’t use anything fancier than the linearity of expectation and 2-universality. Let’s dive in. We start by writing the definition of variance as an expectation, and then we split $ X$ up into its parts, expand the product and group the parts.

$ \displaystyle \textup{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$

The easy part is $ (\mathbb{E}[X])^2$: it’s just $ (1 + (m-1)/n)^2$. The hard part is $ \mathbb{E}[X^2]$, so let’s compute that.

$ \displaystyle \mathbb{E}[X^2] = \mathbb{E}\left[ \left( \sum_{y} \mathbf{1}_{h(y) = 1} \right)^2 \right] = \sum_{y} \Pr[h(y) = 1] + 2 \sum_{\{y, z\} : y \neq z} \Pr[h(y) = h(z) = 1]$

In order to continue (and get a reasonable bound) we need an additional property of our hash family which is not immediately spelled out by 2-universality. Specifically, we need that for every input $ x$ and every server $ i$, $ \Pr_h[h(x) = i] = O(\frac{1}{n})$, where the probability is over the random choice of the hash function. In other words, the family should spread each individual input evenly across the servers.

The reason this helps is because we can split $ \Pr[h(x) = h(y) = 1]$  into $ \Pr[h(x) = h(y) \mid h(x) = 1] \cdot \Pr[h(x) = 1]$. Using 2-universality to bound the left term, this quantity is at most $ 1/n^2$, and since there are $ \binom{m}{2}$ total terms in the double sum above, the whole thing is at most $ O(m/n + m^2 / n^2) = O(m^2 / n^2)$. Note that in our big-O analysis we’re assuming $ m$ is much bigger than $ n$.

Sweeping some of the details inside the big-O, this means that our variance is $ O(m^2/n^2)$, and so our bound on the deviation of $ X$ from its expectation by a multiplicative factor of $ t$ is at most $ O(1/t^2)$.

Now we computed a bound on the probability that a single server is not overloaded, but if we want to extend that to the worst-case server, the typical probability technique is to take the union bound over all servers. This means we just add up all the individual bounds and ignore how they relate. So the probability that some server has a load more than a multiplicative factor of $ t$ is bounded from above by $ O(n/t^2)$. This is only less than one when $ t = \Omega(\sqrt{n})$, so all we can say with this analysis is that (with some small constant probability) no server will have a load worse than $ \sqrt{n}$ times the expected load.

So we have this analysis that seems not so good. If we have a million servers then the worst load on one server could potentially be a thousand times higher than the expected load. This doesn’t scale, and the problem could be in any (or all) of three places:

  1. Our analysis is weak, and we should use tighter bounds because the true max load is actually much smaller.
  2. Our hash families don’t have strong enough properties, and we should beef those up to get tighter bounds.
  3. The whole algorithm sucks and needs to be improved.

It turns out all three are true. One heuristic solution is easy and avoids all math. Have some second server (which does not process requests) count hash collisions. When some server’s load exceeds a factor of $ t$ times the expected load, send a message to the load balancer to randomly pick a new hash function from $ \mathscr{H}$, and for any requests that don’t have existing sessions (this is included in the request data), use the new hash function. Once the old sessions expire, switch any new incoming requests from those IPs over to the new hash function.
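
Here is a sketch of that heuristic in code. The bookkeeping is collapsed into one class for brevity, and the threshold and session handling are stand-ins for what a real load balancer would do.

class RehashingBalancer(object):
   def __init__(self, hashFamily, numServers, maxLoadFactor=3):
      self.hashFamily = hashFamily
      self.numServers = numServers
      self.maxLoadFactor = maxLoadFactor
      self.h = hashFamily.draw()
      self.loads = [0] * numServers
      self.sessions = {}   # requests with existing sessions keep their old server

   def route(self, request):
      if request in self.sessions:
         return self.sessions[request]

      server = self.h(request)
      self.sessions[request] = server
      self.loads[server] += 1

      expectedLoad = sum(self.loads) / self.numServers
      if self.loads[server] > self.maxLoadFactor * max(1, expectedLoad):
         # too much deviation: draw a fresh hash function for future sessionless requests
         self.h = self.hashFamily.draw()

      return server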

But there are much better solutions out there. Unfortunately their analyses are too long for a blog post (they fill multiple research papers). Fortunately their descriptions and guarantees are easy to describe, and they’re easy to program. The basic idea goes by the name “the power of two choices,” which we explored on this blog in a completely different context of random graphs.

In more detail, the idea is that you start by picking two random hash functions $ h_1, h_2 \in \mathscr{H}$, and when you get a new request, you compute both hashes, inspect the load of the two servers indexed by those hashes, and send the request to the server with the smaller load.

This has the disadvantage of requiring bidirectional talk between the load balancer and the server, rather than obliviously forwarding requests. But the advantage is an exponential decrease in the worst-case maximum load. In particular, the following theorem holds for the case where the hashes are fully random.

Theorem: Suppose one places $ m$ balls into $ n$ bins in order according to the following procedure: for each ball pick two uniformly random and independent integers $ 1 \leq i,j \leq n$, and place the ball into whichever of bins $ i, j$ currently contains fewer balls. If there are ties pick the bin with the smaller index. Then with high probability the largest bin has no more than $ \Theta(m/n) + O(\log \log (n))$ balls.

This theorem appears to have been proved in a few different forms, with the best analysis being by Berenbrink et al. You can improve the constant on the $ \log \log n$ by computing more than 2 hashes. How does this relate to a good family of hash functions, which is not quite fully random? Let’s explore the answer by implementing the algorithm in python.

An example of universal hash functions, and the load balancing algorithm

In order to implement the load balancer, we need to have some good hash functions under our belt. We’ll go with the simplest example of a hash function that’s easy to prove nice properties for. Specifically each hash in our family just performs some arithmetic modulo a random prime.

Definition: Pick any prime $ p > m$, and for any $ 1 \leq a < p$ and $ 0 \leq b < p$ define $ h_{a,b}(x) = ((ax + b) \mod p) \mod m$. Let $ \mathscr{H} = \{ h_{a,b} \mid 1 \leq a < p, 0 \leq b < p \}$.

This family of hash functions is 2-universal.

Theorem: For every $ x \neq y \in \{0, \dots, p-1\}$,

$ \Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq 1/m$

Proof. To say that $ h(x) = h(y)$ is to say that $ ax+b = ay+b + i \cdot m \mod p$ for some integer $ i$. I.e., the two remainders of $ ax+b$ and $ ay+b$ are equivalent mod $ m$. The $ b$’s cancel and we can solve for $ a$

$ a = im (x-y)^{-1} \mod p$

Since $ a \neq 0$, there are $ p-1$ possible choices for $ a$. Moreover, there is no point in picking $ i$ bigger than $ p/m$ since we’re working modulo $ p$. So there are $ (p-1)/m$ possible values for the right hand side of the above equation. So if we choose $ a$ uniformly at random (remember, $ x-y$ is fixed ahead of time, so the only choices are $ a$ and $ i$), then there is a $ (p-1)/m$ out of $ p-1$ chance that the equality holds, which is at most $ 1/m$. (To be exact you should account for taking a floor of $ (p-1)/m$ when $ m$ does not evenly divide $ p-1$, but it only decreases the overall probability.)

$ \square$

If $ m$ and $ p$ were equal then this would be even more trivial: it’s just the fact that there is a unique line passing through any two distinct points. While that’s obviously true from standard geometry, it is also true when you work with arithmetic modulo a prime. In fact, it works using arithmetic over any field.

Implementing these hash functions is easier than shooting fish in a barrel.

import random

def draw(p, m):
   # pick a random member of the family: x -> ((a*x + b) mod p) mod m
   a = random.randint(1, p-1)
   b = random.randint(0, p-1)

   return lambda x: ((a*x + b) % p) % m

To encapsulate the process a little bit we implemented a UniversalHashFamily class which computes a random probable prime to use as the modulus and stores $ m$. The interested reader can see the Github repository for more.
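
That class isn't printed in this post, but a minimal sketch of it might look like the following (I use a naive primality test where the repository uses a probable-prime check, so the details may differ).

def isPrime(p):
   # naive trial division; plenty fast for the small demo primes used here
   if p < 2:
      return False
   return all(p % d != 0 for d in range(2, int(p ** 0.5) + 1))

class UniversalHashFamily(object):
   def __init__(self, numBins, primeBounds):
      # pick a random prime in the given range as the modulus
      primes = [q for q in range(primeBounds[0], primeBounds[1]) if isPrime(q)]
      self.prime = random.choice(primes)
      self.numBins = numBins

   def draw(self):
      return draw(self.prime, self.numBins)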

If we try to run this and feed in a large range of inputs, we can see how the outputs are distributed. In this example $ m$ is a hundred thousand and $ n$ is a hundred (it’s not two terabytes, but give me some slack it’s a demo and I’ve only got my desktop!). So the expected bin size for any 2-universal family is just about 1,000.

>>> m = 100000
>>> n = 100
>>> H = UniversalHashFamily(numBins=n, primeBounds=[n, 2*n])
>>> results = []
>>> for simulation in range(100):
...    bins = [0] * n
...    h = H.draw()
...    for i in range(m):
...       bins[h(i)] += 1
...    results.append(max(bins))
...
>>> max(bins) # a single run
1228
>>> min(bins)
613
>>> max(results) # the max bin size over all runs
1228
>>> min(results)
1227

Indeed, the max stays reasonably close to the expected value.

But this example is misleading, because the point of this was that some adversary would try to screw us over by picking a worst-case input. If the adversary knew exactly which $ h$ was chosen (which it doesn’t) then the worst case input would be the set of all inputs that have the given hash output value. Let’s see it happen live.

>>> h = H.draw()
>>> badInputs = [i for i in range(m) if h(i) == 9]
>>> len(badInputs)
1227
>>> testInputs(n,m,badInputs,hashFunction=h)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1227, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

The expected size of a bin is 12, but as expected this is 100 times worse (linearly worse in $ n$). But if we instead pick a random $ h$ after the bad inputs are chosen, the result is much better.

>>> testInputs(n,m,badInputs) # randomly picks a hash
[19, 20, 20, 19, 18, 18, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10]

However, if you re-ran this test many times, you’d eventually get unlucky and draw the hash function for which this actually is the worst input, and get a single huge bin. Other times you can get a bad hash in which two or three bins have all the inputs.

An interesting question is, what is really the worst-case input for this algorithm? I suspect it’s characterized by some choice of hash output values, taking all inputs for the chosen outputs. If this is the case, then there’s a tradeoff between the number of inputs you pick and how egregious the worst bin is. As an exercise to the reader, empirically estimate this tradeoff and find the best worst-case input for the adversary. Also, for your choice of parameters, estimate by simulation the probability that the max bin is three times larger than the expected value.

Now that we’ve played around with the basic hashing algorithm and made a family of 2-universal hashes, let’s see the power of two choices. Recall, this algorithm picks two random hash functions and sends an input to the bin with the smallest size. This obviously generalizes to $ k$ choices, although the theoretical guarantee only improves by a constant factor, so let’s implement the more generic version.

class ChoiceHashFamily(object):
   def __init__(self, hashFamily, queryBinSize, numChoices=2):
      self.queryBinSize = queryBinSize   # callback: bin index -> current bin size
      self.hashFamily = hashFamily
      self.numChoices = numChoices

   def draw(self):
      hashes = [self.hashFamily.draw()
                   for _ in range(self.numChoices)]

      def chooseBin(x):
         # hash x with every drawn function and pick the least-loaded candidate bin,
         # breaking ties by the smaller index
         indices = [h(x) for h in hashes]
         counts = [self.queryBinSize(i) for i in indices]
         count, index = min(zip(counts, indices))
         return index

      return chooseBin
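
To wire this up, the queryBinSize callback just needs to read the current bin counts. Here is one illustrative way to do it (this snippet is mine, not the repository's testInputs), reusing n, H, and badInputs from the session above.

bins = [0] * n
chooser = ChoiceHashFamily(H, queryBinSize=lambda i: bins[i], numChoices=2)
h = chooser.draw()
for x in badInputs:
   bins[h(x)] += 1   # the chosen bin reflects the loads at the time of insertion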

And if we test this with the bad inputs (as used previously, all the inputs that hash to 9), as a typical output we get

>>> bins
[15, 16, 15, 15, 16, 14, 16, 14, 16, 15, 16, 15, 15, 15, 17, 14, 16, 14, 16, 16, 15, 16, 15, 16, 15, 15, 17, 15, 16, 15, 15, 15, 15, 16, 15, 14, 16, 14, 16, 15, 15, 15, 14, 16, 15, 15, 15, 14, 17, 14, 15, 15, 14, 16, 13, 15, 14, 15, 15, 15, 14, 15, 13, 16, 14, 16, 15, 15, 15, 16, 15, 15, 13, 16, 14, 15, 15, 16, 14, 15, 15, 15, 11, 13, 11, 12, 13, 14, 13, 11, 11, 12, 14, 14, 13, 10, 16, 12, 14, 10]

And a typical list of bin maxima is

>>> results
[16, 16, 16, 18, 17, 365, 18, 16, 16, 365, 18, 17, 17, 17, 17, 16, 16, 17, 18, 16, 17, 18, 17, 16, 17, 17, 18, 16, 18, 17, 17, 17, 17, 18, 18, 17, 17, 16, 17, 365, 17, 18, 16, 16, 18, 17, 16, 18, 365, 16, 17, 17, 16, 16, 18, 17, 17, 17, 17, 17, 18, 16, 18, 16, 16, 18, 17, 17, 365, 16, 17, 17, 17, 17, 16, 17, 16, 17, 16, 16, 17, 17, 16, 365, 18, 16, 17, 17, 17, 17, 17, 18, 17, 17, 16, 18, 18, 17, 17, 17]

Those big bumps are the times when we picked an unlucky hash function; the resulting load is scarily large, although this bad event would be proportionally less likely as you scale up. But in the good case the load is clearly more even than in the previous example, and the max load shrinks further as you pick among a larger set of randomly chosen hashes (though, as noted above, the theoretical guarantee only improves by a constant factor).

Couple this with the technique of switching hash functions when you start to observe a large deviation, and you have yourself an elegant solution.

In addition to load balancing, hashing has a ton of applications. Remember, the main signal that you may want to use hashing is having an algorithm that works well when the input data is random. This comes up in streaming and sublinear algorithms, in data structure design and analysis, and many other places. We’ll be covering those applications in future posts on this blog.

Until then!

One definition of algorithmic fairness: statistical parity

If you haven’t read the first post on fairness, I suggest you go back and read it because it motivates why we’re talking about fairness for algorithms in the first place. In this post I’ll describe one of the existing mathematical definitions of “fairness,” its origin, and discuss its strengths and shortcomings.

Before jumping in I should remark that nobody has found a definition which is widely agreed as a good definition of fairness in the same way we have for, say, the security of a random number generator. So this post is intended to be exploratory rather than dictating The Facts. Rather, it’s an idea with some good intuitive roots which may or may not stand up to full mathematical scrutiny.

Statistical parity

Here is one way to define fairness.

Your population is a set $ X$ and there is some known subset $ S \subset X$ that is a “protected” subset of the population. For discussion we’ll say $ X$ is people and $ S$ is people who dye their hair teal. We are afraid that banks give fewer loans to the teals because of hair-colorism, despite teal-haired people being just as creditworthy as the general population on average.

Now we assume that there is some distribution $ D$ over $ X$ which represents the probability that any individual will be drawn for evaluation. In other words, some people will just have no reason to apply for a loan (maybe they’re filthy rich, or don’t like homes, cars, or expensive colleges), and so $ D$ takes that into account. Generally we impose no restrictions on $ D$, and the definition of fairness will have to work no matter what $ D$ is.

Now suppose we have a (possibly randomized) classifier $ h:X \to \{-1,1\}$ giving labels to $ X$. When given a person $ x$ as input $ h(x)=1$ if $ x$ gets a loan and $ -1$ otherwise. The bias, or statistical disparity, of $ h$ on $ S$ with respect to $ X,D$ is the following quantity. In words, it is the difference between the probability that a random individual drawn from $ S$ is labeled 1 and the probability that a random individual from the complement $ S^C$ is labeled 1.

$ \textup{bias}_h(X,S,D) = \Pr[h(x) = 1 | x \in S^{C}] - \Pr[h(x) = 1 | x \in S]$

The probability is taken both over the distribution $ D$ and the random choices made by the algorithm. This is the statistical equivalent of the legal doctrine of adverse impact. It measures the difference in the rates at which the majority and protected classes get a particular outcome. When that difference is small, the classifier is said to have “statistical parity,” i.e. to conform to this notion of fairness.

Definition: A hypothesis $ h:X \to \{-1,1\}$ is said to have statistical parity on $ D$ with respect to $ S$ up to bias $ \varepsilon$ if $ |\textup{bias}_h(X,S,D)| < \varepsilon$.

If a hypothesis achieves statistical parity, then it treats the general population statistically similarly to the protected class. For example, if 30% of normal-hair-colored people get loans, statistical parity requires roughly 30% of teals to also get loans.

It’s pretty simple to write a program to compute the bias. First we’ll write a function that computes the bias of a given set of labels. We’ll determine whether a data point $ x \in X$ is in the protected class by specifying a specific value of a specific index. I.e., we’re assuming the feature selection has already happened by this point.

# labelBias: [[float]], [int], int, obj -> float
# compute the signed bias of a set of labels on a given dataset
def labelBias(data, labels, protectedIndex, protectedValue):
   protectedClass = [(x,l) for (x,l) in zip(data, labels)
      if x[protectedIndex] == protectedValue]
   elseClass = [(x,l) for (x,l) in zip(data, labels)
      if x[protectedIndex] != protectedValue]

   if len(protectedClass) == 0 or len(elseClass) == 0:
      raise Exception("One of the classes is empty!")

   protectedProb = sum(1 for (x,l) in protectedClass if l == 1) / len(protectedClass)
   elseProb = sum(1 for (x,l) in elseClass if l == 1) / len(elseClass)

   return elseProb - protectedProb

Then generalizing this to an input hypothesis is a one-liner.

# signedBias: [[float]], h, int, obj -> float
# compute the signed bias of a hypothesis on a given dataset
def signedBias(data, h, protectedIndex, protectedValue):
   return labelBias(data, [h(x) for x in data], protectedIndex, protectedValue)

Now we can load the census data from the UCI machine learning repository and compute some biases in the labels. The data points in this dataset correspond to demographic features of people from a census survey, and the labels are +1 if the individual’s salary is at least 50k, and -1 otherwise. I wrote some helpers to load the data from a file (which you can see in this post’s Github repo).

if __name__ == "__main__":
   from data import adult
   train, test = adult.load(separatePointsAndLabels=True)

   # [(test name, (index, value))]
   tests = [('gender', (1,0)),
            ('private employment', (2,1)),
            ('asian race', (33,1)),
            ('divorced', (12, 1))]

   for (name, (index, value)) in tests:
      print("'%s' bias in training data: %.4f" %
         (name, labelBias(train[0], train[1], index, value)))

(I chose ‘asian race’ instead of just ‘asian’ because there are various ‘country of origin’ features that are for countries in Asia.)

Running this gives the following.

anti-'female' bias in training data: 0.1963
anti-'private employment' bias in training data: 0.0731
anti-'asian race' bias in training data: -0.0256
anti-'divorced' bias in training data: 0.1582

Here a positive value means it’s biased against the quoted thing, a negative value means it’s biased in favor of the quoted thing.

Now let me define a stupidly trivial classifier that predicts 1 if the country of origin is India and zero otherwise. If I do this and compute the gender bias of this classifier on the training data I get the following.

>>> indian = lambda x: x[47] == 1
>>> len([x for x in train[0] if indian(x)]) / len(train[0]) # fraction of Indians
0.0030711587481956942
>>> signedBias(train[0], indian, 1, 0)
0.0030631816119030884

So this says that predicting based on being of Indian origin (which probably has very low accuracy, since many non-Indians make at least $50k) does not bias significantly with respect to gender.

We can generalize statistical parity in various ways, such as using some other specified set $ T$ in place of $ S^C$, or looking at discrepancies among $ k$ different sub-populations or with $ m$ different outcome labels. In fact, the mathematical name for this measurement (which is a distance between distributions) is the total variation distance. The form we sketched here is a simple case that just works for the binary-label two-class scenario.
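
For the binary two-class case described here, the absolute value of the bias is exactly the total variation distance between the output distribution of $ h$ conditioned on $ S$ and the one conditioned on $ S^C$, where in general

$ \displaystyle d_{TV}(P, Q) = \max_{A} \left | P(A) - Q(A) \right | = \frac{1}{2} \sum_{z} \left | P(z) - Q(z) \right |$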

Now it is important to note that statistical parity says nothing about the truth about the protected class $ S$. I mean two things by this. First, you could have some historical data you want to train a classifier $ h$ on, and usually you’ll be given training labels for the data that tell you whether $ h(x)$ should be $ 1$ or $ -1$. In the absence of discrimination, getting high accuracy with respect to the training data is enough. But if there is some historical discrimination against $ S$ then the training labels are not trustworthy. As a consequence, achieving statistical parity for $ S$ necessarily reduces the accuracy of $ h$. In other words, when there is bias in the data, accuracy is measured in favor of encoding the bias. Studying fairness from this perspective means you study the tradeoff between high accuracy and low statistical disparity. However, this is also why statistical parity says nothing about whether the individuals that $ h$ treats differently (differently compared to the training labels) were the correct individuals to treat differently. If the labels alone are all we have to work with, and we don’t know the true labels, then we’d need to apply domain-specific knowledge, which is suddenly out of scope of machine learning.

Second, nothing says optimizing for statistical parity is the correct thing to do. In other words, it may be that teal-haired people are truly less creditworthy (jokingly, maybe there is a hidden innate characteristic causing both uncreditworthiness and a desire to dye your hair!) and by enforcing statistical parity you are going against a fact of Nature. Though there are serious repercussions for suggesting such things in real life, my point is that statistical parity does not address anything outside the desire for an algorithm to exhibit a certain behavior. The obvious counterargument is that if, as a society, we have decided that teal-hairedness should be protected by law regardless of Nature, then we’re defining statistical parity to be correct. We’re changing our optimization criterion and as algorithm designers we don’t care about anything else. We care about what guarantees we can prove about algorithms, and the utility of the results.

The third side of the coin is that if all we care about is statistical parity, then we’ll have a narrow criterion for success that can be gamed by an actively biased adversary.

Statistical parity versus targeted bias

Statistical parity has some known pitfalls. In their paper “Fairness Through Awareness” (Section 3.1 and Appendix A), Dwork, et al. argue convincingly that these are primarily issues of individual fairness and targeted discrimination. They give six examples of “evils” including a few that maintain statistical parity while not being fair from the perspective of an individual. Here are my two favorite ones to think about (using teal-haired people and loans again):

  1. Self-fulfilling prophecy: The bank intentionally gives a few loans to teal-haired people who are (for unrelated reasons) obviously uncreditworthy, so that in the future they can point to these examples to justify discriminating against teals. This can appear even if the teals are chosen uniformly at random, since the average creditworthiness of a random teal-haired person is lower than a carefully chosen normal-haired person.
  2. Reverse tokenism: The bank intentionally does not give loans to some highly creditworthy normal-haired people, let’s call one Martha, so that when a teal complains that they are denied a loan, the bank can point to Martha and say, “Look how qualified she is, and we didn’t even give her a loan! You’re much less qualified.” Here Martha is the “token” example used to justify discrimination against teals.

I like these two examples for two reasons. First, they illustrate how hard coming up with a good definition is: it’s not clear how to encapsulate both statistical parity and resistance to this kind of targeted discrimination. Second, they highlight that discrimination can be both unintentional and intentional. Since computer scientists tend to work with worst-case guarantees, this makes me think the right definition will be resilient to some level of adversarial discrimination. But again, these two examples are not formalized, and it’s not even clear to what extent existing algorithms suffer from manipulations of these kinds. For instance, many learning algorithms are relatively resilient to changing the desired label of a single point.

In any case, the thing to take away from this discussion is that there is not yet an accepted definition of “fairness,” and there seems to be a disconnect between what it means to be fair for an individual versus a population. There are some other proposals in the literature, and I’ll just mention one: Dwork et al. propose that individual fairness mean that “similar individuals are treated similarly.” I will cover this notion (and what’s known about it) in a future post.

Until then!