Data is abundant, data is big, and big is a problem. Let me start with an example. Let’s say you have a list of movie titles and you want to learn their genre: romance, action, drama, etc. And maybe in this scenario IMDB doesn’t exist so you can’t scrape the answer. Well, the title alone is almost never enough information. One nice way to get more data is to do the following:

- Pick a large dictionary of words, say the most common 100,000 non stop-words in the English language.
- Crawl the web looking for documents that include the title of a film.
- For each film, record the counts of all other words appearing in those documents.
- Maybe remove instances of “movie” or “film,” etc.

After this process you have a length-100,000 vector of integers associated with each movie title. IMDB’s database has around 1.5 million listed movies, and if we have a 32-bit integer per vector entry, that’s 600 GB of data to get every movie.
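That 600 GB figure is a quick back-of-envelope multiplication:

```python
# Back-of-envelope for the storage estimate above: 100,000 word counts
# per movie, ~1.5 million movies, 4 bytes (32-bit integer) per count.
vector_length = 100_000
num_movies = 1_500_000
bytes_per_entry = 4

total_gigabytes = vector_length * num_movies * bytes_per_entry / 10**9
print(total_gigabytes)  # 600.0
```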

One way to try to find genres is to cluster this (unlabeled) dataset of vectors, and then manually inspect the clusters and assign genres. With a really fast computer we could simply run an existing clustering algorithm on this dataset and be done. Of course, clustering 600 GB of data takes a long time, but there’s another problem. The geometric intuition that we use to design clustering algorithms *degrades* as the length of the vectors in the dataset grows. As a result, our algorithms perform poorly. This phenomenon is called the “curse of dimensionality” (“curse” isn’t a technical term), and we’ll return to the mathematical curiosities shortly.

A possible workaround is to try to come up with faster algorithms or be more patient. But a more interesting mathematical question is the following:

Is it possible to condense high-dimensional data into smaller dimensions and retain the important geometric properties of the data?

This goal is called *dimension reduction*. Indeed, all of the chatter on the internet is bound to encode redundant information, so for our movie title vectors it seems the answer should be “yes.” But the questions remain, how does one *find* a low-dimensional condensification? (Condensification isn’t a word, the right word is embedding, but embedding is overloaded so we’ll wait until we define it) And what mathematical guarantees can you prove about the resulting condensed data? After all, it stands to reason that different techniques preserve different aspects of the data. Only math will tell.

In this post we’ll explore this so-called “curse” of dimensionality, explain the formality of why it’s seen as a curse, and implement a wonderfully simple technique called “the random projection method” which preserves pairwise distances between points after the reduction. As usual, all the code, data, and tests used in the making of this post are on Github.

## Some curious issues, and the “curse”

We start by exploring the curse of dimensionality with experiments on synthetic data.

In two dimensions, take a circle centered at the origin with radius 1 and its bounding square.

The circle fills up most of the area in the square, in fact it takes up exactly $\pi$ out of 4, which is about 78%. In three dimensions we have a sphere and a cube, and the ratio of sphere volume to cube volume is a bit smaller, $\frac{4\pi}{3}$ out of a total of 8, which is just over 52%. What about in a thousand dimensions? Let’s try by simulation.

[code language="python"]
import random

def randUnitCube(n):
    return [(random.random() - 0.5)*2 for _ in range(n)]

def sphereCubeRatio(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    return sum(1 for x in randomSample if sum(a**2 for a in x) <= 1) / numSamples
[/code]

The result is as we computed for small dimension,

[code language="python"]
>>> sphereCubeRatio(2, 10000)
0.7857
>>> sphereCubeRatio(3, 10000)
0.5196
[/code]

And much smaller for larger dimension

[code language="python"]
>>> sphereCubeRatio(20, 100000) # 100k samples
0.0
>>> sphereCubeRatio(20, 1000000) # 1M samples
0.0
>>> sphereCubeRatio(20, 2000000)
5e-07
[/code]

Forget a thousand dimensions, for even *twenty* dimensions, a million samples wasn’t enough to register a single random point inside the unit sphere. This illustrates one concern: when we’re sampling random points in the $d$-dimensional unit cube, we need a number of samples exponential in $d$ to ensure we’re getting an even distribution from the whole space. In high dimensions, this fact basically rules out a naive Monte Carlo approximation, where you sample random points to estimate the probability of an event too complicated to sample from directly. A machine learning viewpoint of the same problem is that, in dimension $d$, if your machine learning algorithm requires a representative sample of the input space in order to make a useful inference, then you require exponentially many samples in $d$ to learn.

Luckily, we can answer our original question because there is a known formula for the volume of a sphere in any dimension. Rather than give the closed form formula, which involves the gamma function and is incredibly hard to parse, we’ll state the recursive form. Call $V_n$ the volume of the unit sphere in dimension $n$. Then $V_0 = 1$ by convention, $V_1 = 2$ (it’s an interval), and $V_n = \frac{2 \pi V_{n-2}}{n}$. If you unpack this recursion you can see that the numerator looks like $(2\pi)^{n/2}$ and the denominator looks like a factorial, except it skips every other number. So an even-dimension denominator would look like $2 \cdot 4 \cdots n$, and this grows larger than any fixed exponential. So in fact the total volume of the sphere vanishes as the dimension grows! (In addition to the ratio vanishing!)

[code language="python"]
import math

def sphereVolume(n):
    values = [0] * (n+1)
    for i in range(n+1):
        if i == 0:
            values[i] = 1
        elif i == 1:
            values[i] = 2
        else:
            values[i] = 2*math.pi / i * values[i-2]

    return values[-1]
[/code]

This should be counterintuitive. I think most people would guess, when asked about how the volume of the unit sphere changes as the dimension grows, that it stays the same or gets bigger. But at a hundred dimensions, the volume is already getting too small to fit in a float.

[code language="python"]
>>> sphereVolume(20)
0.025806891390014047
>>> sphereVolume(100)
2.3682021018828297e-40
>>> sphereVolume(1000)
0.0
[/code]

The scary thing is not just that this value drops, but that it drops *exponentially quickly*. A consequence is that, if you’re trying to cluster data points by looking at points within a fixed distance $r$ of one point, you have to carefully measure how big $r$ needs to be to cover the same proportional volume as it would in low dimension.
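To make this concrete, a ball of radius $r$ has $r^d$ times the volume of the unit ball in dimension $d$, so the radius needed to capture any fixed fraction of the unit ball’s volume creeps toward 1 as $d$ grows. A quick sketch (the choice of one half is arbitrary):

```python
# In dimension d, a ball of radius r has r**d times the unit ball's volume.
# So the radius capturing half the unit ball's volume solves r**d == 0.5,
# i.e. r = 0.5**(1/d), which approaches 1 as d grows.
for d in [2, 10, 100, 1000]:
    print(d, 0.5 ** (1 / d))
```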

Here’s a related issue. Say I take a bunch of points generated uniformly at random in the unit cube.

[code language="python"]
from itertools import combinations

def dist(x, y):
    return math.sqrt(sum((a-b)**2 for (a,b) in zip(x, y)))

def distancesRandomPoints(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    pairwiseDistances = [dist(x,y) for (x,y) in combinations(randomSample, 2)]
    return pairwiseDistances
[/code]

In two dimensions, the histogram of distances between points looks like this

However, as the dimension grows the distribution of distances changes. It evolves like the following animation, in which each frame is an increase in dimension from 2 to 100.

The shape of the distribution doesn’t appear to be changing all that much after the first few frames, but the center of the distribution tends to infinity (in fact, it grows like $\sqrt{d}$). The variance, on the other hand, appears to stay constant. The chart also becomes more variable as the dimension grows, again because we should be sampling exponentially many more points as the dimension grows (but we don’t). In other words, as the dimension grows the average distance grows and the tightness of the distribution stays the same. So at a thousand dimensions the average distance is about 26, tightly concentrated between 24 and 28. When the average is a thousand, the distribution is tight between 998 and 1002. If one were to normalize this data, it would appear that random points are all becoming equidistant from each other.
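These claims are easy to check numerically. Here is a self-contained sketch (using numpy rather than the pure-Python functions above, with an arbitrary 200 samples per dimension):

```python
import numpy as np

rng = np.random.default_rng(0)
numPoints = 200

for d in [2, 10, 100, 1000]:
    points = rng.uniform(-1, 1, size=(numPoints, d))
    sq = (points ** 2).sum(axis=1)
    # pairwise squared distances via ||x||^2 + ||y||^2 - 2<x, y>
    d2 = np.clip(sq[:, None] + sq[None, :] - 2 * points @ points.T, 0, None)
    dists = np.sqrt(d2[np.triu_indices(numPoints, k=1)])
    # the mean grows like sqrt(2d/3); the standard deviation stays put
    print(d, round(dists.mean(), 2), round(dists.std(), 2))
```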

So in addition to the issues of runtime and sampling, the geometry of high-dimensional space looks different from what we expect. To get a better understanding of “big data,” we have to update our intuition from low-dimensional geometry with analysis and mathematical theorems that are much harder to visualize.

## The Johnson-Lindenstrauss Lemma

Now we turn to proving dimension reduction is possible. There are a few methods one might first think of, such as looking for suitable subsets of coordinates, or taking sums of subsets, but these all appear to take a long time or they simply don’t work.

Instead, the key technique is to take a *random* linear subspace of a certain dimension, and project every data point onto that subspace. No searching required. The fact that this works is called the *Johnson-Lindenstrauss Lemma*. To set up some notation, we’ll write $\| x - y \|$ for the usual Euclidean distance between two points $x, y$.

**Lemma [Johnson-Lindenstrauss (1984)]:** Given a set $X$ of $n$ points in $\mathbb{R}^d$, project the points in $X$ to a randomly chosen subspace of dimension $c$. Call the projection $\rho$. For any $0 < \varepsilon < 1$, if $c$ is at least $\frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, then with probability at least 1/2 the distances between points in $X$ are preserved up to a factor of $(1 \pm \varepsilon)$. That is, with good probability every pair $x, y \in X$ will satisfy

$(1 - \varepsilon) \| x - y \|^2 \leq \| \rho(x) - \rho(y) \|^2 \leq (1 + \varepsilon) \| x - y \|^2$

Before we do the proof, which is quite short, it’s important to point out that the target dimension does not depend on the original dimension! It only depends on the number of points in the dataset, and logarithmically so. That makes this lemma seem like pure magic, that you can take data in an arbitrarily high dimension and put it in a much smaller dimension.

On the other hand, if you include all of the hidden constants in the bound on the dimension, it’s not *that* impressive. If your data have a million dimensions and you want to preserve the distances up to 1% ($\varepsilon = 0.01$), the bound is *bigger* than a million! If you decrease the preservation to 10% ($\varepsilon = 0.1$), then you get down to about 12,000 dimensions, which is more reasonable. At 45% ($\varepsilon = 0.45$) the bound drops to around 1,000 dimensions. Here’s a plot showing the theoretical bound on $c$ in terms of $\varepsilon$, for $n$ fixed to a million.
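These numbers come from evaluating $\frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$ directly (the same formula implemented as theoreticalBound later in the post); a quick check:

```python
import math

def theoreticalBound(n, epsilon):
    # the dimension bound from the lemma: 8 log(n) / (eps^2 - eps^3)
    return math.ceil(8 * math.log(n) / (epsilon**2 - epsilon**3))

for eps in [0.01, 0.1, 0.45]:
    # over a million, about 12,000, and about 1,000 respectively
    print(eps, theoreticalBound(10**6, eps))
```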

But keep in mind, this is just a *theoretical* bound for potentially misbehaving data. Later in this post we’ll see if the practical dimension can be reduced more than the theory allows. As we’ll see, an algorithm run on the projected data is still effective even if the projection goes well beyond the theoretical bound. Because the theorem is known to be tight in the worst case (see the notes at the end) this speaks more to the robustness of the typical algorithm than to the robustness of the projection method.

A second important note is that this technique does not necessarily avoid *all* the problems with the curse of dimensionality. We mentioned above that one potential problem is that “random points” are roughly equidistant in high dimensions. Johnson-Lindenstrauss actually *preserves* this problem because it preserves distances! As a consequence, you won’t see strictly better algorithm performance if you project (which we suggested is possible in the beginning of this post). But you will alleviate slow runtimes if the runtime depends exponentially on the dimension. Indeed, if you replace the dimension $d$ with the logarithm of the number of points $O(\log n)$, then a runtime of $2^d$ becomes linear in $n$, and one of $2^{O(d)}$ becomes polynomial in $n$.

## Proof of the J-L lemma

Let’s prove the lemma.

*Proof.* To start we make note that one can sample from the uniform distribution on dimension-$c$ linear subspaces of $\mathbb{R}^d$ by choosing the entries of a $c \times d$ matrix $A$ independently from a normal distribution with mean 0 and variance 1. Then, to project a vector $x$ by this matrix (call the projection $\rho$), we can compute

$\rho(x) = \frac{1}{\sqrt{c}} A x$

Now fix $\varepsilon > 0$ and fix two points $x, y$ in the dataset $X$. We want an upper bound on the probability that the following is **false**:

$(1 - \varepsilon) \| x - y \|^2 \leq \| \rho(x) - \rho(y) \|^2 \leq (1 + \varepsilon) \| x - y \|^2$

Since that expression is a pain to work with, let’s rearrange it by calling $u = \frac{x - y}{\| x - y \|}$ (a unit vector), and rearranging (using the linearity of the projection) to get the equivalent statement:

$1 - \varepsilon \leq \| \rho(u) \|^2 \leq 1 + \varepsilon$

And so we want a bound on the probability that this event does *not* occur, meaning the inequality switches directions:

$\Pr \left[ \| \rho(u) \|^2 > 1 + \varepsilon \ \text{ or } \ \| \rho(u) \|^2 < 1 - \varepsilon \right]$

Once we get such a bound (it will depend on $c$ and $\varepsilon$) we need to ensure that this bound is true for every pair of points. The union bound allows us to do this, but it also requires that the probability of the bad thing happening tends to zero faster than $1 / \binom{n}{2}$. That’s where the $\log(n)$ will come into the bound as stated in the theorem.

Continuing with our use of $u$ for notation, define $X$ to be the random variable $c \| \rho(u) \|^2 = \| A u \|^2$. By expanding the notation and using the linearity of expectation, you can show that the expected value of $X$ is $c$, meaning that in expectation, distances are preserved. We are on the right track, and just need to show that the distribution of $X$, and thus the possible deviations in distances, is tightly concentrated around $c$. In full rigor, we will show

$\Pr \left[ |X - c| > \varepsilon c \right] \leq 2 e^{-\frac{(\varepsilon^2 - \varepsilon^3) c}{4}}$
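To spell out the expectation computation: writing $A_i$ for the $i$-th column of $A$ (as vectors), $u$ for the unit vector above, and $X = \|Au\|^2$, the independence of the entries of $A$ (each with mean 0 and variance 1) gives

```latex
\mathbb{E}[X] = \sum_{i=1}^{c} \mathbb{E}\left[ \langle A_i, u \rangle^2 \right]
             = \sum_{i=1}^{c} \sum_{j=1}^{d} u_j^2 \, \mathbb{E}\left[ A_{ij}^2 \right]
             = \sum_{i=1}^{c} \| u \|^2
             = c,
```

where the cross terms vanish because $\mathbb{E}[A_{ij} A_{ik}] = 0$ for $j \neq k$.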

Let $A_i$ denote the $i$-th column of $A$. Define the quantity $X_i = \langle A_i, u \rangle$. This is a weighted average of the entries of $A_i$ by the entries of $u$. But since we chose the entries of $A$ from the normal distribution, and since a weighted sum of normally distributed random variables is also normally distributed, $X_i$ is a $N(0, 1)$ random variable (here we use that $u$ is a unit vector). Moreover, each column is independent. This allows us to decompose $X$ as

$X = \| A u \|^2 = \sum_{i=1}^{c} X_i^2$

Expanding further,

$\Pr \left[ X > (1 + \varepsilon) c \right] = \Pr \left[ \sum_{i=1}^{c} X_i^2 > (1 + \varepsilon) c \right]$

Now the event can be expressed in terms of the nonnegative variable $e^{t \sum_{i=1}^{c} X_i^2}$, where $0 < t < 1/2$ is a parameter, to get

$\Pr \left[ X > (1 + \varepsilon) c \right] = \Pr \left[ e^{t \sum_{i=1}^{c} X_i^2} > e^{t (1 + \varepsilon) c} \right]$

This will become useful because the sum in the exponent will split into a product momentarily. First we apply Markov’s inequality, which says that for any nonnegative random variable $Y$, $\Pr[Y \geq a] \leq \mathbb{E}[Y] / a$. This lets us write

$\Pr \left[ e^{t \sum_{i=1}^{c} X_i^2} > e^{t (1 + \varepsilon) c} \right] \leq \frac{\mathbb{E} \left[ e^{t \sum_{i=1}^{c} X_i^2} \right]}{e^{t (1 + \varepsilon) c}}$

Now we can split up the exponent into the product $\prod_{i=1}^{c} e^{t X_i^2}$, and using the i.i.d.-ness of the $X_i^2$ we can rewrite the RHS of the inequality as

$\frac{\mathbb{E} \left[ e^{t X_1^2} \right]^c}{e^{t (1 + \varepsilon) c}}$

A similar statement using $-t$ is true for the $\Pr \left[ X < (1 - \varepsilon) c \right]$ part, namely that

$\Pr \left[ X < (1 - \varepsilon) c \right] \leq \frac{\mathbb{E} \left[ e^{-t X_1^2} \right]^c}{e^{-t (1 - \varepsilon) c}}$

The last thing that’s needed is to bound $\mathbb{E} \left[ e^{t X_1^2} \right]$, but since $X_1 \sim N(0, 1)$, we can use the known density function for a normal distribution, and integrate to get the exact value $\mathbb{E} \left[ e^{t X_1^2} \right] = \frac{1}{\sqrt{1 - 2t}}$. Including this in the bound gives us a closed-form bound in terms of $t$. Using standard calculus the optimal $t \in (0, 1/2)$ is $t = \frac{\varepsilon}{2(1 + \varepsilon)}$. This gives

$\Pr \left[ X > (1 + \varepsilon) c \right] \leq \left( (1 + \varepsilon) e^{-\varepsilon} \right)^{c/2}$

Using the Taylor series expansion for $\log(1 + \varepsilon)$, one can show the bound $(1 + \varepsilon) e^{-\varepsilon} < e^{-\frac{\varepsilon^2 - \varepsilon^3}{2}}$, which simplifies the final upper bound to $e^{-\frac{(\varepsilon^2 - \varepsilon^3) c}{4}}$.

Doing the same thing for the $(1 - \varepsilon)$ version gives an equivalent bound, and so the total bound is doubled, i.e. $\Pr \left[ |X - c| > \varepsilon c \right] \leq 2 e^{-\frac{(\varepsilon^2 - \varepsilon^3) c}{4}}$.
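As a side check (not part of the proof), this concentration bound is easy to verify by simulation: since $u$ is a unit vector, $Au$ is a standard Gaussian vector in $\mathbb{R}^c$, so $X = \|Au\|^2$ is a chi-squared random variable with $c$ degrees of freedom, which we can sample directly.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
c, epsilon, trials = 100, 0.3, 100000

# X = ||Au||^2 for a unit vector u is chi-squared with c degrees of freedom.
X = rng.chisquare(c, size=trials)

empirical = np.mean(np.abs(X - c) > epsilon * c)
bound = 2 * math.exp(-(epsilon**2 - epsilon**3) * c / 4)
print(empirical, bound)  # the empirical tail probability falls below the bound
```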

As we said at the beginning, applying the union bound means we need

$\binom{n}{2} \cdot 2 e^{-\frac{(\varepsilon^2 - \varepsilon^3) c}{4}} < \frac{1}{2}$

Solving this for $c$ gives $c \geq \frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, as desired. $\square$

## Projecting in Practice

Let’s write a python program to actually perform the Johnson-Lindenstrauss dimension reduction scheme. This is sometimes called the Johnson-Lindenstrauss transform, or JLT.

First we define a random subspace by sampling an appropriately-sized matrix with normally distributed entries, and a function that performs the projection onto a given subspace (for testing).

[code language="python"]
import random
import math
import numpy

def randomSubspace(subspaceDimension, ambientDimension):
    return numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))

def project(v, subspace):
    subspaceDimension = len(subspace)
    return (1 / math.sqrt(subspaceDimension)) * subspace.dot(v)
[/code]

We have a function that computes the theoretical bound on the optimal dimension to reduce to.

[code language="python"]
def theoreticalBound(n, epsilon):
    return math.ceil(8*math.log(n) / (epsilon**2 - epsilon**3))
[/code]

And then performing the JLT is simply matrix multiplication

[code language="python"]
def jlt(data, subspaceDimension):
    ambientDimension = len(data[0])
    A = randomSubspace(subspaceDimension, ambientDimension)
    return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T
[/code]
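Before moving to real data, here is a quick sanity check of the transform on synthetic Gaussian data (dimensions chosen arbitrarily for illustration, with the definitions repeated so the snippet is self-contained):

```python
import math
import numpy

# Repeated here so the snippet stands alone.
def jlt(data, subspaceDimension):
    ambientDimension = len(data[0])
    A = numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))
    return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T

# 50 Gaussian points in 10,000 dimensions, projected down to 1,000.
data = numpy.random.normal(0, 1, size=(50, 10000))
newData = jlt(data, 1000)

before = numpy.linalg.norm(data[0] - data[1])
after = numpy.linalg.norm(newData[0] - newData[1])
print(before, after)  # the two agree to within a few percent, with high probability
```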

The high-dimensional dataset we’ll use comes from a data mining competition called KDD Cup 2001. The dataset we used deals with drug design, and the goal is to determine whether an organic compound binds to something called thrombin. Thrombin has something to do with blood clotting, and I won’t pretend I’m an expert. The dataset, however, has over a hundred thousand features for about 2,000 compounds. Here are a few approximate target dimensions we can hope for as epsilon varies.

[code language="python"]
>>> [("%.2f" % (1/x), theoreticalBound(n=2000, epsilon=1/x))
     for x in [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]]
[('0.50', 487), ('0.33', 821), ('0.25', 1298), ('0.20', 1901),
 ('0.17', 2627), ('0.14', 3477), ('0.12', 4448), ('0.11', 5542),
 ('0.10', 6757), ('0.07', 14659), ('0.05', 25604)]
[/code]

Going down from a hundred thousand dimensions to a few thousand decreases the size of the dataset by about 95%, by any measure a big savings. We can also observe how the distribution of overall distances varies as the size of the subspace we project to varies.

The last three frames are for 10, 5, and 2 dimensions respectively. As you can see the histogram starts to beef up around zero. To be honest I was expecting something a bit more dramatic like a uniform-ish distribution. Of course, the distribution of distances is not all that matters. Another concern is the worst case change in distances between any two points before and after the projection. We can see that indeed when we project to the dimension specified in the theorem, that the distances are within the prescribed bounds.

[code language="python"]
def checkTheorem(oldData, newData, epsilon):
    numBadPoints = 0

    for (x,y), (x2,y2) in zip(combinations(oldData, 2), combinations(newData, 2)):
        oldNorm = numpy.linalg.norm(x-y)**2
        newNorm = numpy.linalg.norm(x2-y2)**2

        if newNorm == 0 or oldNorm == 0:
            continue

        if abs(newNorm / oldNorm - 1) > epsilon:
            numBadPoints += 1

    return numBadPoints

if __name__ == "__main__":
    from data import thrombin
    train, labels = thrombin.load()

    numPoints = len(train)
    epsilon = 0.2
    subspaceDim = theoreticalBound(numPoints, epsilon)
    ambientDim = len(train[0])
    newData = jlt(train, subspaceDim)

    print(checkTheorem(train, newData, epsilon))
[/code]

This program prints zero every time I try running it, which is the poor man’s way of saying it works “with high probability.” We can also plot statistics about the number of pairs of data points that are distorted by more than $\varepsilon$ as the subspace dimension shrinks. We ran this on the following set of subspace dimensions with $\varepsilon = 0.2$ and took average/standard deviation over twenty trials:

[code language="python"]
dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]
[/code]

The result is the following chart, whose x-axis is the dimension projected to (so the left hand is the most extreme projection to 2, 5, 10 dimensions), the y-axis is the number of distorted pairs, and the error bars represent a single standard deviation away from the mean.

This chart provides good news about this dataset because the standard deviations are low. It tells us something that mathematicians often ignore: the predictability of the tradeoff that occurs once you go past the theoretically perfect bound. In this case, the standard deviations tell us that it’s highly predictable. Moreover, since this tradeoff curve measures pairs of points, we might conjecture that the distortion is localized around a single set of points that got significantly “rattled” by the projection. This would be an interesting exercise to explore.

Now all of these charts are really playing with the JLT and confirming the correctness of our code (and hopefully our intuition). The real question is: how well does a machine learning algorithm perform on the original data when compared to the projected data? If the algorithm only “depends” on the pairwise distances between the points, then we should expect nearly identical accuracy in the unprojected and projected versions of the data. To show this we’ll use an easy learning algorithm, the k-nearest-neighbors clustering method. The problem, however, is that there are very few positive examples in this particular dataset. So looking for the majority label of the nearest neighbors for any $k$ unilaterally results in the “all negative” classifier, which has 97% accuracy. This happens before and after projecting.

To compensate for this, we modify k-nearest-neighbors slightly by having the label of a predicted point be 1 if *any* label among its nearest neighbors is 1. So it’s not a majority vote, but rather a logical OR of the labels of nearby neighbors. Our point in this post is not to solve the problem well, but rather to show how an algorithm (even a not-so-good one) can degrade as one projects the data into smaller and smaller dimensions. Here is the code.

[code language="python"]
def nearestNeighborsAccuracy(data, labels, k=10):
    from sklearn.neighbors import NearestNeighbors
    trainData, trainLabels, testData, testLabels = randomSplit(data, labels) # cross validation
    model = NearestNeighbors(n_neighbors=k).fit(trainData)
    distances, indices = model.kneighbors(testData)
    predictedLabels = []

    for x in indices:
        xLabels = [trainLabels[i] for i in x[1:]]
        predictedLabel = max(xLabels)  # max of 0/1 labels = logical OR
        predictedLabels.append(predictedLabel)

    totalAccuracy = sum(x == y for (x,y) in zip(testLabels, predictedLabels)) / len(testLabels)
    falsePositive = (sum(x == 0 and y == 1 for (x,y) in zip(testLabels, predictedLabels)) /
                     sum(x == 0 for x in testLabels))
    falseNegative = (sum(x == 1 and y == 0 for (x,y) in zip(testLabels, predictedLabels)) /
                     sum(x == 1 for x in testLabels))

    return totalAccuracy, falsePositive, falseNegative
[/code]

And here is the accuracy of this modified k-nearest-neighbors algorithm run on the thrombin dataset. The horizontal line represents the accuracy of the produced classifier on the unmodified data set. The x-axis represents the dimension projected to (left-hand side is the lowest), and the y-axis represents the accuracy. The mean accuracy over fifty trials was plotted, with error bars representing one standard deviation. The complete code to reproduce the plot is in the Github repository.

Likewise, we plot the proportion of false positive and false negatives for the output classifier. Note that a “positive” label made up only about 2% of the total data set. First the false positives

Then the false negatives

As we can see from these three charts, things don’t *really* change that much (for this dataset) even when we project down to around 200-300 dimensions. Note that for these parameters the “correct” theoretical choice for dimension was on the order of 5,000 dimensions, so this is a 95% savings from the naive approach, and 99.75% space savings from the original data. Not too shabby.

## Notes

The worst-case dimension bound is asymptotically tight, though there is some small gap in the literature that depends on $\varepsilon$. This result is due to Noga Alon, the very last result (Section 9) of this paper. [Update: as djhsu points out in the comments, this gap is now closed thanks to Larsen and Nelson]

We did dimension reduction with respect to preserving the Euclidean distance between points. One might naturally wonder if you can achieve the same dimension reduction with a different metric, say the taxicab metric or a $p$-norm. In fact, you *cannot* achieve anything close to logarithmic dimension reduction for the taxicab ($\ell_1$) metric. This result is due to Brinkman-Charikar in 2004.

The code we used to compute the JLT is not particularly efficient. There are much more efficient methods. One of them, borrowing its namesake from the Fast Fourier Transform, is called the Fast Johnson-Lindenstrauss Transform. The technique is due to Ailon-Chazelle from 2009, and it involves something called “preconditioning a sparse projection matrix with a randomized Fourier transform.” I don’t know precisely what that means, but it would be neat to dive into that in a future post.

The central focus in this post was whether the JLT preserves distances between points, but one might be curious as to whether the points themselves are well approximated. The answer is an enthusiastic *no.* If the data were images, the projected points would look nothing like the original images. However, it appears the degradation tradeoff is measurable (by some accounts perhaps linear), and there appears to be some work (also this by the same author) when restricting to sparse vectors (like word-association vectors).

Note that the JLT is not the only method for dimensionality reduction. We previously saw principal component analysis (applied to face recognition), and in the future we will cover a related technique called the Singular Value Decomposition. It is worth noting that another common technique specific to nearest-neighbor is called “locality-sensitive hashing.” Here the goal is to project the points in such a way that “similar” points land very close to each other. Say, if you were to discretize the plane into bins, these bins would form the hash values and you’d want to maximize the probability that two points with the same label land in the same bin. Then you can do things like nearest-neighbors by comparing bins.
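To illustrate the binning idea in the previous paragraph, here is a toy sketch (a fixed grid, purely for intuition; real locality-sensitive hashing schemes use randomized hash families, and must also check neighboring bins):

```python
# A toy grid-based hash: nearby points tend to land in the same bin.
def gridHash(point, binWidth):
    # Discretize each coordinate to the index of its bin.
    return tuple(int(coordinate // binWidth) for coordinate in point)

buckets = {}
for p in [(0.1, 0.2), (0.15, 0.22), (5.0, 5.1)]:
    buckets.setdefault(gridHash(p, binWidth=1.0), []).append(p)

# The two nearby points share a bucket; the far-away point is alone.
print(buckets)  # {(0, 0): [(0.1, 0.2), (0.15, 0.22)], (5, 5): [(5.0, 5.1)]}
```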

Another interesting note, if your data is linearly separable (like the examples we saw in our age-old post on Perceptrons), then you can use the JLT to make finding a linear separator easier. First project the data onto the dimension given in the theorem. With high probability the points will still be linearly separable. And then you can use a perceptron-type algorithm in the smaller dimension. If you want to find out which side a new point is on, you project and compare with the separator in the smaller dimension.
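Here is a sketch of that recipe on hypothetical synthetic data (the margin is exaggerated so that separability comfortably survives the projection, and the perceptron is the textbook update rule, not code from this post’s repository):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, c = 100, 1000, 200  # points, ambient dimension, projected dimension

# Hypothetical linearly separable data: label by the sign of the first
# coordinate, pushed away from the boundary so there is a wide margin.
X = rng.normal(size=(n, d))
X[:, 0] += np.where(X[:, 0] > 0, 20.0, -20.0)
y = np.sign(X[:, 0])

# Project with a random Gaussian matrix, as in the lemma.
A = rng.normal(size=(c, d)) / np.sqrt(c)
Xsmall = X @ A.T

# Textbook perceptron, run entirely in the projected space.
w = np.zeros(c)
for _ in range(100):
    for xi, yi in zip(Xsmall, y):
        if yi * (w @ xi) <= 0:
            w += yi * xi

accuracy = np.mean(np.sign(Xsmall @ w) == y)
print(accuracy)  # expect perfect (or near-perfect) accuracy on this data
```

To classify a new point, project it with the same matrix $A$ and compare against $w$ in the small dimension.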

Beyond its interest for practical dimensionality reduction, the JLT has had many other interesting theoretical consequences. More generally, the idea of “randomly projecting” your data onto some small dimensional space has allowed mathematicians to get some of the best-known results on many optimization and learning problems, perhaps the most famous of which is called MAX-CUT; the result is by Goemans-Williamson and it led to a mathematical constant being named after them, the Goemans-Williamson constant $\alpha_{GW} \approx 0.878$. If you’re interested in more about the theory, Santosh Vempala wrote a wonderful (and short!) treatise dedicated to this topic.

Is an advantage of the technique you explain that the projection can be done on the fly? As each large-dimensional vector comes in, can it be projected before adding it to the database? I don’t know how I would get that advantage using my finite geometry techniques, but the finished product would be as compact. The idea I would like to point out is that if you think of the vectors as regions, then you can identify the vectors using a partial ordering of the regions. See the Nov 2014 post in my blog for how to get a partial ordering of the regions, no matter how many dimensions are involved. A partial ordering will not give the distance between any two regions, but it will give the number of hops from a start region to any other region. By choosing several different partial orderings with different start regions, a vector of hop-counts provides a sort of triangulated hop-count position. For purposes of similarity, isn’t it the number of hops more interesting than the distance?

That and the simplicity of the method.

The statement of the lemma does not mention how the subspace is picked at random. This sounds a bit like the Bertrand paradox. (The proof makes it clear, though.)

I wonder if this concept is related to how biological neural networks are formed. The body has countless sensory inputs (i.e. very high dimensional space), and in general individual neurons have thousands or tens of thousands of synaptic inputs. If I’m reading this correctly, it would seem that the individual neurons aren’t missing out on much by randomly sampling from the input throwing away 99+% of it. I wonder if this is done at all in artificial neural networks?

Thanks for the Post!

The theoreticalBound() function has a minimum for epsilon=2/3. That’s a little weird, no?

For a fixed n, that’s indeed the minimum epsilon. It is weird, but (1) it’s just an upper bound and (2) as we saw in the post, you can usually expect to do much better than the best theoretical guarantee. In fact, this phenomenon happens a lot with theoretical bounds. It happened in my series of posts on bandit learning, at least.

Just a little suggestion – possibly add axis titles and plot title to your plots. Easier to read and to follow which plot means what.

… Plus frame number, or the value of the relevant parameter

I would like to point out a potential fundamental flaw in the analysis and an experiment to see if that is the case or not.

We are simulating distances with random data but real data is not random! So I argue that distances with real datasets do not suffer the same phenomena we observe with random data. We can call this a form of the “the manifold hypothesis” if we want.

The experiment should be to really mine words from documents and see how distances between documents vary as we increase the total number of words, we can start with 100 words and then 1000, 10000, 1000000.

We have to be careful about things like the dimensionality curse or the no free lunch theorem because we deal with real data and not with any data so we are interested in how distances and algorithms behave in the real world.

I hope this triggers a follow up of this article, which I enjoyed a lot because I’m a huge fan of the JL lemma. (how nerdy was that?)

And you were talking about only the Titles of movies! We have a lot lot lot of other data around us! Ah! Mind blowing. And thanks for the post and intro on the Lemma

Actually there is no more gap in the JLT dimension, as of last year: https://arxiv.org/abs/1609.02094

Thanks, I’ve updated the post!

Thanks for the great post! I have one question: when you say “Let A_i denote the i-th column of A”, do you mean the i-th row? Because if A is a c x d matrix, then a column is a c x 1 vector. And u is a d x 1 vector. So I don’t think it’s possible to take the inner product if A_i is a column of A.

I think it would work either way, but in general I consider the columns of matrices to be vectors in the abstract sense, forgetting their interpretation as a column. The inner product is not defined in a way that is sensitive to whether the inputs are *thought of* as columns or rows, even though many authors will use $x^T y$ as a shorthand for $\langle x, y \rangle$.

Sorry, what I meant is that I’m concerned that since A_i is a c-dimensional vector and u is a d-dimensional vector, you can’t take their inner product since the dimensions don’t match. It’s a nitpicky detail, but since I’m a relative novice at this stuff it’s kind of confusing me.

Here is the simplest fast-Johnson-Lindenstrauss I could come up with in numpy:

x = np.array(range(1000))

g = np.random.randn(1000)

small_x = np.fft.irfft(np.random.choice(np.fft.rfft(x * g), size=10))

Now we should have that the norm of `x` and `small_x` is nearly the same.
Now we should have that the norm of `x` and `small_x` is nearly the same.