
# Big Dimensions, and What You Can Do About It

Data is abundant, data is big, and big is a problem. Let me start with an example. Let’s say you have a list of movie titles and you want to learn their genre: romance, action, drama, etc. And maybe in this scenario IMDB doesn’t exist so you can’t scrape the answer. Well, the title alone is almost never enough information. One nice way to get more data is to do the following:

- Pick a large dictionary of words, say the most common 100,000 non stop-words in the English language.
- Crawl the web looking for documents that include the title of a film.
- For each film, record the counts of all other words appearing in those documents.
- Maybe remove instances of “movie” or “film,” etc.

After this process you have a length-100,000 vector of integers associated with each movie title. IMDB’s database has around 1.5 million listed movies, and if we have a 32-bit integer per vector entry, that’s 600 GB of data to get every movie.

One way to try to find genres is to cluster this (unlabeled) dataset of vectors, and then manually inspect the clusters and assign genres. With a really fast computer we could simply run an existing clustering algorithm on this dataset and be done. Of course, clustering 600 GB of data takes a long time, but there’s another problem. The geometric intuition that we use to design clustering algorithms *degrades* as the length of the vectors in the dataset grows. As a result, our algorithms perform poorly. This phenomenon is called the “curse of dimensionality” (“curse” isn’t a technical term), and we’ll return to the mathematical curiosities shortly.

A possible workaround is to try to come up with faster algorithms or be more patient. But a more interesting mathematical question is the following:

Is it possible to condense high-dimensional data into smaller dimensions and retain the important geometric properties of the data?

This goal is called *dimension reduction*. Indeed, all of the chatter on the internet is bound to encode redundant information, so for our movie title vectors it seems the answer should be “yes.” But the questions remain, how does one *find* a low-dimensional condensification? (Condensification isn’t a word, the right word is embedding, but embedding is overloaded so we’ll wait until we define it) And what mathematical guarantees can you prove about the resulting condensed data? After all, it stands to reason that different techniques preserve different aspects of the data. Only math will tell.

In this post we’ll explore this so-called “curse” of dimensionality, explain the formality of why it’s seen as a curse, and implement a wonderfully simple technique called “the random projection method” which preserves pairwise distances between points after the reduction. As usual, all the code, data, and tests used in the making of this post are on Github.

## Some curious issues, and the “curse”

We start by exploring the curse of dimensionality with experiments on synthetic data.

In two dimensions, take a circle centered at the origin with radius 1 and its bounding square.

The circle fills up most of the area in the square, in fact it takes up exactly $\pi$ out of 4, which is about 78%. In three dimensions we have a sphere and a cube, and the ratio of sphere volume to cube volume is a bit smaller, $4\pi/3$ out of a total of 8, which is just over 52%. What about in a thousand dimensions? Let’s try by simulation.

```python
import random

def randUnitCube(n):
    return [(random.random() - 0.5) * 2 for _ in range(n)]

def sphereCubeRatio(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    return sum(1 for x in randomSample if sum(a**2 for a in x) <= 1) / numSamples
```

The result is as we computed for small dimension,

```python
>>> sphereCubeRatio(2, 10000)
0.7857
>>> sphereCubeRatio(3, 10000)
0.5196
```

And much smaller for larger dimension

```python
>>> sphereCubeRatio(20, 100000)  # 100k samples
0.0
>>> sphereCubeRatio(20, 1000000)  # 1M samples
0.0
>>> sphereCubeRatio(20, 2000000)
5e-07
```

Forget a thousand dimensions, for even *twenty* dimensions, a million samples wasn’t enough to register a single random point inside the unit sphere. This illustrates one concern: when we’re sampling random points in the $d$-dimensional unit cube, we need a number of samples exponential in $d$ to ensure we’re getting an even distribution from the whole space. In high dimensions, this fact basically rules out a naive Monte Carlo approximation, where you sample random points to estimate the probability of an event too complicated to sample from directly. A machine learning viewpoint of the same problem is that in dimension $d$ you will usually require a number of samples exponential in $d$ in order to infer anything useful.

Luckily, we can answer our original question because there is a known formula for the volume of a sphere in any dimension. Rather than give the closed form formula, which involves the gamma function and is incredibly hard to parse, we’ll state the recursive form. Call $V_n$ the volume of the unit sphere in dimension $n$. Then by convention $V_1 = 2$ (it’s an interval), and $V_n = \frac{2\pi}{n} V_{n-2}$. If you unpack this recursion you can see that the numerator looks like $(2\pi)^{n/2}$ and the denominator looks like a factorial, except it skips every other number. So an even dimension’s denominator looks like $2 \cdot 4 \cdot 6 \cdots n$, and this grows faster than any fixed exponential. So in fact the total volume of the sphere vanishes as the dimension grows! (In addition to the ratio vanishing!)

```python
import math

def sphereVolume(n):
    values = [0] * (n + 1)
    for i in range(n + 1):
        if i == 0:
            values[i] = 1
        elif i == 1:
            values[i] = 2
        else:
            values[i] = 2 * math.pi / i * values[i-2]

    return values[-1]
```

This should be counterintuitive. I think most people would guess, when asked about how the volume of the unit sphere changes as the dimension grows, that it stays the same or gets bigger. But at a hundred dimensions the volume is already around $10^{-40}$, and by a thousand dimensions it underflows a floating point number to zero.

```python
>>> sphereVolume(20)
0.025806891390014047
>>> sphereVolume(100)
2.3682021018828297e-40
>>> sphereVolume(1000)
0.0
```

The scary thing is not just that this value drops, but that it drops *exponentially quickly*. A consequence is that, if you’re trying to cluster data points by looking at all points within a fixed distance $\varepsilon$ of a given point, you’ll have to let $\varepsilon$ grow with the dimension before that ball captures anything at all (the volume of a radius-$\varepsilon$ ball is $\varepsilon^d$ times the vanishing number above).

Here’s a related issue. Say I take a bunch of points generated uniformly at random in the unit cube.

```python
from itertools import combinations

def dist(x, y):
    # Euclidean distance between two points given as lists of coordinates
    return sum((a - b)**2 for (a, b) in zip(x, y)) ** 0.5

def distancesRandomPoints(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    pairwiseDistances = [dist(x, y) for (x, y) in combinations(randomSample, 2)]
    return pairwiseDistances
```

In two dimensions, the histogram of distances between points looks like this

However, as the dimension grows the distribution of distances changes. It evolves like the following animation, in which each frame is an increase in dimension from 2 to 100.

The shape of the distribution doesn’t appear to be changing all that much after the first few frames, but the center of the distribution tends to infinity (in fact, it grows like $\sqrt{d}$; for the cube $[-1,1]^d$ the mean distance is about $\sqrt{2d/3}$). The variance, on the other hand, appears to stay constant. The chart also becomes more variable as the dimension grows, again because we should be sampling exponentially many more points as the dimension grows (but we don’t). In other words, as the dimension grows the average distance grows and the tightness of the distribution stays the same. So at a thousand dimensions the average distance is about 26, tightly concentrated between 24 and 28. When the average is a thousand, the distribution is tight between 998 and 1002. If one were to normalize this data, it would appear that random points are all becoming equidistant from each other.
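To put numbers to this, here is a quick sketch (reusing `randUnitCube` and `dist` from above) that prints the mean and standard deviation of pairwise distances for a few dimensions; the mean should grow like $\sqrt{d}$ while the spread stays roughly flat.

```python
import statistics
from itertools import combinations

for d in [2, 10, 100, 1000]:
    points = [randUnitCube(d) for _ in range(100)]
    distances = [dist(x, y) for (x, y) in combinations(points, 2)]
    print(d, statistics.mean(distances), statistics.stdev(distances))
```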

So in addition to the issues of runtime and sampling, the geometry of high-dimensional space looks different from what we expect. To get a better understanding of “big data,” we have to update our intuition from low-dimensional geometry with analysis and mathematical theorems that are much harder to visualize.

## The Johnson-Lindenstrauss Lemma

Now we turn to proving dimension reduction is possible. There are a few methods one might think of first, such as looking for suitable subsets of coordinates, or taking sums of subsets, but these would all appear to take a long time or they simply don’t work.

Instead, the key technique is to take a *random* linear subspace of a certain dimension, and project every data point onto that subspace. No searching required. The fact that this works is called the *Johnson-Lindenstrauss Lemma*. To set up some notation, we’ll write $\|x - y\|$ for the usual Euclidean distance between two points $x$ and $y$.

**Lemma [Johnson-Lindenstrauss (1984)]:** Given a set of $n$ points in $\mathbb{R}^d$, project the points to a randomly chosen subspace of dimension $k$. Call the projection $\rho$. For any $\varepsilon > 0$, if $k$ is at least $\frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, then with probability at least 1/2 the distances between points are preserved up to a factor of $(1 + \varepsilon)$. That is, with good probability every pair $x, y$ will satisfy

$$\|x - y\|^2 (1 - \varepsilon) \leq \|\rho(x) - \rho(y)\|^2 \leq \|x - y\|^2 (1 + \varepsilon)$$

Before we do the proof, which is quite short, it’s important to point out that the target dimension $k$ does not depend on the original dimension $d$! It only depends on the number of points $n$ in the dataset, and logarithmically so. That makes this lemma seem like pure magic: you can take data in an arbitrarily high dimension and put it in a much smaller dimension.

On the other hand, if you include all of the hidden constants in the bound on the dimension, it’s not *that* impressive. If you have a million data points in a million dimensions and you want to preserve the distances up to 1% ($\varepsilon = 0.01$), the bound is *bigger* than a million! If you loosen the preservation to 10% ($\varepsilon = 0.1$), then you get down to about 12,000 dimensions, which is more reasonable. At 45% the bound drops to around 1,000 dimensions. Here’s a plot showing the theoretical bound on $k$ in terms of $\varepsilon$, for $n$ fixed to a million.
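Those numbers come straight from the bound; here is the computation using the same formula we’ll code up later in this post, with $n$ fixed to a million.

```python
import math

def theoreticalBound(n, epsilon):
    return math.ceil(8 * math.log(n) / (epsilon**2 - epsilon**3))

for epsilon in [0.01, 0.1, 0.45]:
    print(epsilon, theoreticalBound(10**6, epsilon))

# roughly: 0.01 -> about 1.1 million, 0.1 -> about 12,000, 0.45 -> about 1,000
```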

But keep in mind, this is just a *theoretical* bound for potentially misbehaving data. Later in this post we’ll see if the practical dimension can be reduced more than the theory allows. As we’ll see, an algorithm run on the projected data is still effective even if the projection goes well beyond the theoretical bound. Because the theorem is known to be tight in the worst case (see the notes at the end) this speaks more to the robustness of the typical algorithm than to the robustness of the projection method.

A second important note is that this technique does not necessarily avoid *all* the problems with the curse of dimensionality. We mentioned above that one potential problem is that “random points” are roughly equidistant in high dimensions. Johnson-Lindenstrauss actually *preserves* this problem because it preserves distances! As a consequence, you won’t see strictly better algorithm performance if you project (which we suggested is possible in the beginning of this post). But you will alleviate slow runtimes if the runtime depends exponentially on the dimension. Indeed, if you replace the dimension $d$ with $\log n$, then a runtime of $2^d$ becomes linear in $n$, and a runtime of $2^{O(d)}$ becomes polynomial in $n$.

## Proof of the J-L lemma

Let’s prove the lemma.

*Proof.* To start we make note that one can sample from the uniform distribution on dimension-$k$ linear subspaces of $\mathbb{R}^d$ by choosing the entries of a $k \times d$ matrix $A$ independently from a normal distribution with mean 0 and variance 1. Then, to project a vector $x$ by this matrix (call the projection $\rho$), we can compute

$$\rho(x) = \frac{1}{\sqrt{k}} A x$$

Now fix $\varepsilon > 0$ and fix two points $x, y$ in the dataset. We want an upper bound on the probability that the following is **false**:

$$\|x - y\|^2 (1 - \varepsilon) \leq \|\rho(x) - \rho(y)\|^2 \leq \|x - y\|^2 (1 + \varepsilon)$$

Since that expression is a pain to work with, let’s rewrite it by calling $z = x - y$ and using the linearity of the projection to get the equivalent statement

$$(1 - \varepsilon) \|z\|^2 \leq \|\rho(z)\|^2 \leq (1 + \varepsilon) \|z\|^2$$

And so we want a bound on the probability that this event does *not* occur, meaning the inequality switches directions.

Once we get such a bound (it will depend on $\varepsilon$ and $k$) we need to ensure that this bound is true for every pair of points. The union bound allows us to do this, but it also requires that the probability of the bad thing happening for a single pair tends to zero faster than $1/\binom{n}{2}$. That’s where the $\frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$ will come into the bound as stated in the theorem.

Continuing with our use of $z$ for notation, define $X$ to be the random variable $\frac{k}{\|z\|^2} \|\rho(z)\|^2$. By expanding the notation and using the linearity of expectation, you can show that the expected value of $X$ is $k$, meaning that in expectation, distances are preserved. We are on the right track, and just need to show that the distribution of $X$, and thus the possible deviations in distances, is tightly concentrated around $k$. In full rigor, we will show

$$\Pr[X \geq (1 + \varepsilon) k] + \Pr[X \leq (1 - \varepsilon) k] \leq 2 e^{-\frac{(\varepsilon^2 - \varepsilon^3) k}{4}}$$

Let $A_i$ denote the $i$-th row of $A$. Define $X_i$ to be the quantity $\frac{\langle A_i, z \rangle}{\|z\|}$. This is a weighted average of the entries of $A_i$ by the entries of $z$. But since we chose the entries of $A$ from the normal distribution, and since a weighted average of normally distributed random variables is also normally distributed (and here the weights $z_j / \|z\|$ have squared sum 1, so the variance is still 1), $X_i$ is an $N(0,1)$ random variable. Moreover, the $X_i$ are independent because the rows of $A$ are independent. This allows us to decompose $X$ as

$$X = \frac{k}{\|z\|^2} \|\rho(z)\|^2 = \frac{\|Az\|^2}{\|z\|^2}$$

Expanding further,

$$X = \sum_{i=1}^k \frac{\langle A_i, z \rangle^2}{\|z\|^2} = \sum_{i=1}^k X_i^2$$

Now the event $X \geq (1 + \varepsilon) k$ can be expressed in terms of the nonnegative variable $e^{tX}$, where $t > 0$ is a parameter to be chosen later, to get

$$\Pr[X \geq (1 + \varepsilon) k] = \Pr[e^{tX} \geq e^{t (1 + \varepsilon) k}]$$

This will become useful because the sum in the exponent will split into a product momentarily. First we apply Markov’s inequality, which says that for any nonnegative random variable $Y$ and any $a > 0$, $\Pr[Y \geq a] \leq \mathbb{E}[Y] / a$. This lets us write

$$\Pr[e^{tX} \geq e^{t(1 + \varepsilon)k}] \leq \frac{\mathbb{E}[e^{tX}]}{e^{t(1 + \varepsilon)k}}$$

Now we can split up the exponent $tX$ into $t \sum_{i=1}^k X_i^2$, and using the i.i.d.-ness of the $X_i^2$ we can rewrite the RHS of the inequality as

$$\frac{\left( \mathbb{E}[e^{t X_1^2}] \right)^k}{e^{t(1 + \varepsilon)k}}$$

A similar statement using $-t$ is true for the $(1 - \varepsilon)$ part, namely that

$$\Pr[X \leq (1 - \varepsilon) k] \leq \frac{\left( \mathbb{E}[e^{-t X_1^2}] \right)^k}{e^{-t(1 - \varepsilon)k}}$$

The last thing that’s needed is to bound $\mathbb{E}[e^{t X_1^2}]$, but since $X_1 \sim N(0,1)$, we can use the known density function for a normal distribution and integrate to get the exact value $\mathbb{E}[e^{t X_1^2}] = \frac{1}{\sqrt{1 - 2t}}$ (for $t < 1/2$). Including this in the bound gives us a closed-form bound in terms of $t$. Using standard calculus the optimal choice is $t = \frac{\varepsilon}{2(1 + \varepsilon)}$. This gives

$$\Pr[X \geq (1 + \varepsilon) k] \leq \left( (1 + \varepsilon) e^{-\varepsilon} \right)^{k/2}$$

Using the Taylor series expansion for $e^x$, one can show the bound $1 + \varepsilon < e^{\varepsilon - (\varepsilon^2 - \varepsilon^3)/2}$, which simplifies the final upper bound to $e^{-(\varepsilon^2 - \varepsilon^3) k / 4}$.

Doing the same thing for the $(1 - \varepsilon)$ version gives an equivalent bound, and so the total bound is doubled, i.e.

$$\Pr[X \geq (1 + \varepsilon) k] + \Pr[X \leq (1 - \varepsilon) k] \leq 2 e^{-(\varepsilon^2 - \varepsilon^3) k / 4}$$

As we said at the beginning, applying the union bound over all $\binom{n}{2}$ pairs of points means we need

$$2 e^{-(\varepsilon^2 - \varepsilon^3) k / 4} < \frac{1}{\binom{n}{2}}$$

Solving this for $k$ gives $k \geq \frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, as desired. $\square$
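If you want a quick numerical sanity check of this concentration before we get to real data, here is a small simulation of my own (a sketch, not part of the experiments below): it estimates how often a random projection to dimension $k$ distorts the squared length of a single fixed vector by more than a $(1 \pm \varepsilon)$ factor.

```python
import numpy

def distortionRate(d, k, epsilon, trials=200):
    # Empirical probability that projecting to dimension k changes the squared
    # norm of one fixed vector by more than a (1 +/- epsilon) factor.
    z = numpy.random.normal(size=d)
    zNormSq = z.dot(z)
    failures = 0

    for _ in range(trials):
        A = numpy.random.normal(0, 1, size=(k, d))
        projNormSq = numpy.sum((A.dot(z) / k**0.5) ** 2)
        if abs(projNormSq / zNormSq - 1) > epsilon:
            failures += 1

    return failures / trials

for k in [50, 200, 800]:
    # the failure rate should drop quickly as k grows
    print(k, distortionRate(d=1000, k=k, epsilon=0.2))
```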

## Projecting in Practice

Let’s write a python program to actually perform the Johnson-Lindenstrauss dimension reduction scheme. This is sometimes called the Johnson-Lindenstrauss transform, or JLT.

First we define a random subspace by sampling an appropriately-sized matrix with normally distributed entries, and a function that performs the projection onto a given subspace (for testing).

```python
import random
import math
import numpy

def randomSubspace(subspaceDimension, ambientDimension):
    return numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))

def project(v, subspace):
    subspaceDimension = len(subspace)
    return (1 / math.sqrt(subspaceDimension)) * subspace.dot(v)
```

We have a function that computes the theoretical bound on the optimal dimension to reduce to.

```python
def theoreticalBound(n, epsilon):
    return math.ceil(8 * math.log(n) / (epsilon**2 - epsilon**3))
```

And then performing the JLT is simply matrix multiplication

```python
def jlt(data, subspaceDimension):
    ambientDimension = len(data[0])
    A = randomSubspace(subspaceDimension, ambientDimension)
    return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T
```
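As a quick sanity check on synthetic data (assuming the input is a numpy array with one row per point):

```python
>>> data = numpy.random.normal(size=(100, 5000))
>>> jlt(data, 200).shape
(100, 200)
```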

The high-dimensional dataset we’ll use comes from a data mining competition called KDD Cup 2001. The dataset we used deals with drug design, and the goal is to determine whether an organic compound binds to something called thrombin. Thrombin has something to do with blood clotting, and I won’t pretend I’m an expert. The dataset, however, has over a hundred thousand features for about 2,000 compounds. Here are a few approximate target dimensions we can hope for as epsilon varies.

```python
>>> [('%.2f' % (1/x), theoreticalBound(n=2000, epsilon=1/x)) for x in [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]]
[('0.50', 487), ('0.33', 821), ('0.25', 1298), ('0.20', 1901), ('0.17', 2627), ('0.14', 3477), ('0.12', 4448), ('0.11', 5542), ('0.10', 6757), ('0.07', 14659), ('0.05', 25604)]
```

Going down from a hundred thousand dimensions to a few thousand decreases the size of the dataset by about 95%, by any measure. We can also observe how the distribution of overall distances varies as the size of the subspace we project to varies.

The last three frames are for 10, 5, and 2 dimensions respectively. As you can see the histogram starts to beef up around zero. To be honest I was expecting something a bit more dramatic, like a uniform-ish distribution. Of course, the distribution of distances is not all that matters. Another concern is the worst-case change in distances between any two points before and after the projection. We can see that indeed, when we project to the dimension specified in the theorem, the distances are within the prescribed bounds.

```python
def checkTheorem(oldData, newData, epsilon):
    numBadPoints = 0

    for (x, y), (x2, y2) in zip(combinations(oldData, 2), combinations(newData, 2)):
        oldNorm = numpy.linalg.norm(x - y)**2
        newNorm = numpy.linalg.norm(x2 - y2)**2

        if newNorm == 0 or oldNorm == 0:
            continue

        if abs(newNorm / oldNorm - 1) > epsilon:
            numBadPoints += 1

    return numBadPoints

if __name__ == "__main__":
    from data import thrombin
    train, labels = thrombin.load()

    numPoints = len(train)
    epsilon = 0.2
    subspaceDim = theoreticalBound(numPoints, epsilon)
    ambientDim = len(train[0])
    newData = jlt(train, subspaceDim)

    print(checkTheorem(train, newData, epsilon))
```

This program prints zero every time I try running it, which is the poor man’s way of saying it works “with high probability.” We can also plot statistics about the number of pairs of data points that are distorted by more than the allowed $(1 \pm \varepsilon)$ factor as the subspace dimension shrinks. We ran this on the following set of subspace dimensions, with a fixed $\varepsilon$, and took the average and standard deviation over twenty trials:

```python
dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]
```

The result is the following chart, whose x-axis is the dimension projected to (so the left hand is the most extreme projection to 2, 5, 10 dimensions), the y-axis is the number of distorted pairs, and the error bars represent a single standard deviation away from the mean.

This chart provides good news about this dataset because the standard deviations are low. It tells us something that mathematicians often ignore: the predictability of the tradeoff that occurs once you go past the theoretically perfect bound. In this case, the standard deviations tell us that it’s highly predictable. Moreover, since this tradeoff curve measures pairs of points, we might conjecture that the distortion is localized around a single set of points that got significantly “rattled” by the projection. This would be an interesting exercise to explore.

Now all of these charts are really playing with the JLT and confirming the correctness of our code (and hopefully our intuition). The real question is: how well does a machine learning algorithm perform on the original data when compared to the projected data? If the algorithm only “depends” on the pairwise distances between the points, then we should expect nearly identical accuracy in the unprojected and projected versions of the data. To show this we’ll use an easy learning algorithm, the k-nearest-neighbors method. The problem, however, is that there are very few positive examples in this particular dataset. So looking for the majority label of the $k$ nearest neighbors for any $k$ unilaterally results in the “all negative” classifier, which has 97% accuracy. This happens before and after projecting.

To compensate for this, we modify k-nearest-neighbors slightly by having the label of a predicted point be 1 if *any* label among its nearest neighbors is 1. So it’s not a majority vote, but rather a logical OR of the labels of nearby neighbors. Our point in this post is not to solve the problem well, but rather to show how an algorithm (even a not-so-good one) can degrade as one projects the data into smaller and smaller dimensions. Here is the code.

```python
def nearestNeighborsAccuracy(data, labels, k=10):
    from sklearn.neighbors import NearestNeighbors
    trainData, trainLabels, testData, testLabels = randomSplit(data, labels)  # cross validation
    model = NearestNeighbors(n_neighbors=k).fit(trainData)
    distances, indices = model.kneighbors(testData)
    predictedLabels = []

    for x in indices:
        xLabels = [trainLabels[i] for i in x[1:]]
        predictedLabel = max(xLabels)
        predictedLabels.append(predictedLabel)

    totalAccuracy = sum(x == y for (x, y) in zip(testLabels, predictedLabels)) / len(testLabels)
    falsePositive = (sum(x == 0 and y == 1 for (x, y) in zip(testLabels, predictedLabels)) /
                     sum(x == 0 for x in testLabels))
    falseNegative = (sum(x == 1 and y == 0 for (x, y) in zip(testLabels, predictedLabels)) /
                     sum(x == 1 for x in testLabels))

    return totalAccuracy, falsePositive, falseNegative
```
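The experiment loop behind the charts below looks roughly like the following sketch (the exact script, with plotting, is in the Github repository; this version assumes the thrombin loader and the helpers defined above).

```python
from data import thrombin

train, labels = thrombin.load()
dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]
numTrials = 50

for subspaceDim in dims:
    results = []
    for _ in range(numTrials):
        projected = jlt(train, subspaceDim)
        results.append(nearestNeighborsAccuracy(projected, labels))

    accuracies = [accuracy for (accuracy, fp, fn) in results]
    mean = sum(accuracies) / numTrials
    stdDev = (sum((a - mean)**2 for a in accuracies) / numTrials) ** 0.5
    print(subspaceDim, mean, stdDev)
```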

And here is the accuracy of this modified k-nearest-neighbors algorithm run on the thrombin dataset. The horizontal line represents the accuracy of the produced classifier on the unmodified data set. The x-axis represents the dimension projected to (left-hand side is the lowest), and the y-axis represents the accuracy. The mean accuracy over fifty trials was plotted, with error bars representing one standard deviation. The complete code to reproduce the plot is in the Github repository.

Likewise, we plot the proportion of false positive and false negatives for the output classifier. Note that a “positive” label made up only about 2% of the total data set. First the false positives

Then the false negatives

As we can see from these three charts, things don’t *really* change that much (for this dataset) even when we project down to around 200-300 dimensions. Note that for these parameters the “correct” theoretical choice for dimension was on the order of 5,000 dimensions, so this is a 95% savings from the naive approach, and 99.75% space savings from the original data. Not too shabby.

## Notes

The worst-case dimension bound is asymptotically tight, though there is some small gap in the literature that depends on $\varepsilon$. This result is due to Noga Alon, the very last result (Section 9) of this paper.

We did dimension reduction with respect to preserving the Euclidean distance between points. One might naturally wonder if you can achieve the same dimension reduction with a different metric, say the taxicab metric or a general $\ell_p$-norm. In fact, you *cannot* achieve anything close to logarithmic dimension reduction for the taxicab ($\ell_1$) metric. This result is due to Brinkman-Charikar in 2004.

The code we used to compute the JLT is not particularly efficient. There are much more efficient methods. One of them, borrowing its namesake from the Fast Fourier Transform, is called the Fast Johnson-Lindenstrauss Transform. The technique is due to Ailon-Chazelle from 2009, and it involves something called “preconditioning a sparse projection matrix with a randomized Fourier transform.” I don’t know precisely what that means, but it would be neat to dive into that in a future post.

The central focus in this post was whether the JLT preserves distances between points, but one might be curious as to whether the points themselves are well approximated. The answer is an enthusiastic *no.* If the data were images, the projected points would look nothing like the original images. However, it appears the degradation tradeoff is measurable (by some accounts perhaps linear), and there appears to be some work (also this by the same author) when restricting to sparse vectors (like word-association vectors).

Note that the JLT is not the only method for dimensionality reduction. We previously saw principal component analysis (applied to face recognition), and in the future we will cover a related technique called the Singular Value Decomposition. It is worth noting that another common technique specific to nearest-neighbor is called “locality-sensitive hashing.” Here the goal is to project the points in such a way that “similar” points land very close to each other. Say, if you were to discretize the plane into bins, these bins would form the hash values and you’d want to maximize the probability that two points with the same label land in the same bin. Then you can do things like nearest-neighbors by comparing bins.

Another interesting note, if your data is linearly separable (like the examples we saw in our age-old post on Perceptrons), then you can use the JLT to make finding a linear separator easier. First project the data onto the dimension given in the theorem. With high probability the points will still be linearly separable. And then you can use a perceptron-type algorithm in the smaller dimension. If you want to find out which side a new point is on, you project and compare with the separator in the smaller dimension.

Beyond its interest for practical dimensionality reduction, the JLT has had many other interesting theoretical consequences. More generally, the idea of “randomly projecting” your data onto some small dimensional space has allowed mathematicians to get some of the best-known results on many optimization and learning problems, perhaps the most famous of which is called MAX-CUT; the result is by Goemans-Williamson and it led to a mathematical constant being named after them, $\alpha_{GW} \approx 0.878$. If you’re interested in more about the theory, Santosh Vempala wrote a wonderful (and short!) treatise dedicated to this topic.

# Concrete Examples of Quantum Gates

So far in this series we’ve seen a lot of motivation and defined basic ideas of what a quantum circuit is. But on rereading my posts, I think we would all benefit from some concreteness.

## “Local” operations

So by now we’ve understood that quantum circuits consist of a sequence of gates $A_1, \dots, A_k$, where each $A_i$ is an 8-by-8 matrix that operates “locally” on some choice of three (or fewer) qubits. And in your head you imagine starting with some state vector $v$ and applying each $A_i$ locally to its three qubits until the end, when you measure the state and get some classical output.

But the point I want to make is that $A_i$ actually changes the whole state vector $v$, because the three qubits it acts “locally” on are part of the entire basis. Here’s an example. Suppose we have three qubits and they’re in the state

$$v = \frac{1}{\sqrt{14}} \left( e_{001} + 2 e_{011} - 3 e_{101} \right)$$

Recall we abbreviate basis states by subscripting them by binary strings, so $e_{011} = e_0 \otimes e_1 \otimes e_1$, and a valid state is any unit-length vector over the $2^3 = 8$ possible basis elements. As a vector, this state is

$$v = \frac{1}{\sqrt{14}} (0, 1, 0, 2, 0, -3, 0, 0)$$

Say we apply the gate that swaps the first and third qubits. “Locally” this gate has the following matrix:

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

where we index the rows and columns by the relevant strings in lexicographic order: 00, 01, 10, 11. So this operation leaves $e_{00}$ and $e_{11}$ the same while swapping $e_{01}$ and $e_{10}$. However, as an operation on three qubits the matrix looks quite different. And it’s sort of hard to describe a general way to write it down as a matrix because of the choice of indices. There are three different perspectives.

**Perspective 1:** if the qubits being operated on are sequential (like the third, fourth, and fifth qubits), then we can write the full matrix as $I_{2^a} \otimes A \otimes I_{2^b}$, where $A$ is the small matrix of the gate, a tensor product of matrices means the Kronecker product, and $a + 3 + b = n$ (the number of qubits adds up). Then the final operation looks like a “tiled product” of identity matrices by $A$, but it’s a pain to write out. Let me hurt myself for your sake, dear reader:

$$I_{2^a} \otimes A \otimes I_{2^b} = \begin{pmatrix} A \otimes I_{2^b} & & \\ & \ddots & \\ & & A \otimes I_{2^b} \end{pmatrix}$$

where there are $2^a$ copies of $A \otimes I_{2^b}$ along the diagonal.

And each copy of $A \otimes I_{2^b}$ looks like

$$A \otimes I_{2^b} = \begin{pmatrix} A_{11} I_{2^b} & \cdots & A_{1m} I_{2^b} \\ \vdots & \ddots & \vdots \\ A_{m1} I_{2^b} & \cdots & A_{mm} I_{2^b} \end{pmatrix}$$

where $m$ is the dimension of $A$ (eight, for a three-qubit gate).

That’s a mess, but if you write it out for our example of swapping the first and third qubits of a three-qubit register you get the following:

$$\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

And this makes sense: the gate moves any entry of the state vector whose first and third qubit values are different. This is what happens to our state:

$$v = \frac{1}{\sqrt{14}} \left( e_{001} + 2 e_{011} - 3 e_{101} \right) \mapsto \frac{1}{\sqrt{14}} \left( e_{100} + 2 e_{110} - 3 e_{101} \right)$$

**Perspective 2:** just assume every operation works on the first three qubits, and wrap each operation $A$ in between an operation $T$ that swaps the first three qubits with the desired three. So, like $T^{-1} A T$ for a swap operation $T$. Then the matrix form looks a bit simpler, and it just means we permute the rows and columns of the matrix form we gave above so that it has the form $A \otimes I_{2^{n-3}}$. This allows one to retain a shred of sanity when trying to envision the matrix for an operation that acts on three qubits that are not sequential. The downside is that to actually use this perspective in an analysis you have to carry around the extra baggage of these permutation matrices. So one might use this as a simplifying assumption (a “without loss of generality” statement).

**Perspective 3:** ignore matrices and write things down in a summation form. So if $\sigma$ is the permutation that swaps 1 and 3 and leaves the other indices unchanged, we can write the general operation on a state $v = \sum_{x \in \{0,1\}^n} a_x e_x$ as $v \mapsto \sum_{x \in \{0,1\}^n} a_x e_{\sigma(x)}$, where $\sigma(x)$ means applying $\sigma$ to the positions of the bit string $x$.
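Perspective 3 also translates most directly into code. Here is a small numpy sketch of my own (not from the original code) that applies the swap-first-and-third-qubits gate to a state vector by permuting basis indices, without ever building the $2^n \times 2^n$ matrix.

```python
import numpy

def swapQubits(state, i, j):
    """Swap qubits i and j (0-indexed) of an n-qubit state vector of length 2^n."""
    n = int(numpy.log2(len(state)))
    newState = numpy.zeros_like(state)

    for index in range(len(state)):
        bits = [(index >> (n - 1 - k)) & 1 for k in range(n)]  # big-endian bits of the basis label
        bits[i], bits[j] = bits[j], bits[i]
        newIndex = sum(b << (n - 1 - k) for (k, b) in enumerate(bits))
        newState[newIndex] = state[index]

    return newState

v = (1 / 14**0.5) * numpy.array([0, 1, 0, 2, 0, -3, 0, 0])
print(swapQubits(v, 0, 2))  # the coefficients of e_001 and e_011 move to e_100 and e_110
```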

The third option is probably the nicest way to do things, but it’s important to keep the matrix view in mind for many reasons. Just one quick reason: “errors” in quantum gates (that are meant to approximately compute something) compound linearly in the number of gates because the operations are linear. This is a key reason that allows one to design quantum analogues of error correcting codes.

So we’ve established that the basic (atomic) quantum gates are “local” in the sense that they operate on a fixed number of qubits, but they are not local in the sense that they can screw up the entire state vector.

## A side note on the meaning of “local”

When I was chugging through learning this stuff (and I still have far to go), I wanted to come up with an alternate characterization of the word “local” so that I would feel better about using the word “local.” Mathematicians are as passionate about word choice as programmers are about text editors. In particular, for a long time I was ignorantly convinced that quantum gates that act on a small number of qubits don’t affect the *marginal distribution* of measurement outcomes for the other qubits. That is, I thought that if $A$ acts on qubits 1, 2, 3, then $Av$ and $v$ have the same probability of a measurement producing a 1 in index 4, 5, etc., *conditioned on fixing a measurement outcome for qubits 1, 2, 3*. In notation: if $v$ is a state vector, call $M(v)$ the random variable, whose values are binary strings, representing the process of measuring $v$ and getting a string. Then my claim was that the following was true for every choice of bits $b_1, b_2, b_3$ and every index $k \geq 4$:

$$\Pr[M(v)_k = 1 \mid M(v)_1 = b_1, M(v)_2 = b_2, M(v)_3 = b_3] = \Pr[M(Av)_k = 1 \mid M(Av)_1 = b_1, M(Av)_2 = b_2, M(Av)_3 = b_3]$$

You could try to prove this, and you would fail because it’s false. In fact, it’s even false if $A$ acts on only *a single* qubit! Because it’s so tedious to write out all of the notation, I decided to write a program to illustrate the counterexample. (The most brazenly dedicated readers will try to prove this false fact and identify where the proof fails.)

```python
import numpy

H = (1 / (2**0.5)) * numpy.array([[1, 1], [1, -1]])
I = numpy.identity(4)
A = numpy.kron(H, I)
```

Here $H$ is the 2 by 2 *Hadamard matrix*, which operates on a single qubit and maps $e_0 \mapsto \frac{e_0 + e_1}{\sqrt{2}}$ and $e_1 \mapsto \frac{e_0 - e_1}{\sqrt{2}}$. This matrix is famous for many reasons, but one simple use as a quantum gate is to generate uniform random coin flips. In particular, measuring $He_0$ outputs 1 and 0 with equal probability.

So in the code sample above, $A = H \otimes I_4$ is the mapping which applies the Hadamard operation to the first qubit and leaves the other qubits alone.

Then we compute some arbitrary input state vector $w$

```python
def normalize(z):
    return (1.0 / (sum(abs(z)**2) ** 0.5)) * z

v = numpy.arange(1, 9)
w = normalize(v)
```

And now we write a function to compute the probability of some query conditioned on some fixed bits. We simply sum up the square norms of all of the relevant indices in the state vector.

```python
import math
import itertools

def condProb(state, query={}, fixed={}):
    num = 0
    denom = 0
    dim = int(math.log2(len(state)))

    for x in itertools.product([0, 1], repeat=dim):
        if any(x[index] != b for (index, b) in fixed.items()):
            continue

        i = sum(d << i for (i, d) in enumerate(reversed(x)))
        denom += abs(state[i])**2
        if all(x[index] == b for (index, b) in query.items()):
            num += abs(state[i]) ** 2

    if num == 0:
        return 0

    return num / denom
```

So if the query is `query = {1:0}` and the fixed thing is `fixed = {0:0}`, then this will compute the probability that a measurement results in the second qubit being zero, conditioned on the first qubit also being zero.

And the result:

```python
Aw = A.dot(w)
query = {1: 0}
fixed = {0: 0}
print((condProb(w, query, fixed), condProb(Aw, query, fixed)))
# (0.16666666666666666, 0.29069767441860467)
```

So they are not equal in general.

Also, in general we won’t work explicitly with full quantum gate matrices, since for $n$ qubits they have size $2^n \times 2^n$, which is big. But for finding counterexamples to guesses and false intuition, it’s a great tool.

## Some important gates on 1-3 qubits

Let’s close this post with concrete examples of quantum gates. Based on the above discussion, we can write out the small (2-by-2, 4-by-4, or 8-by-8) matrix form of each gate and understand that it can be applied to any choice of qubits in the state of a quantum program. Gates are most interesting when they’re operating on entangled qubits, and that will come out when we visit our first quantum algorithm next time, but for now we will just discuss at a naive level how they operate on the basis vectors.

### Hadamard gate:

We introduced the Hadamard gate already, but I’ll reiterate it here.

Let $H$ be the following 2 by 2 matrix, which operates on a single qubit and maps $e_0 \mapsto \frac{e_0 + e_1}{\sqrt{2}}$ and $e_1 \mapsto \frac{e_0 - e_1}{\sqrt{2}}$.

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

One can use $H$ to generate uniform random coin flips. In particular, measuring $He_0$ outputs 1 and 0 with equal probability.

### Quantum NOT gate:

Let $X$ be the 2 x 2 matrix formed by swapping the columns of the identity matrix.

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

This gate is often called the “Pauli-X” gate by physicists. This matrix is far too simple to be named after a person, and I can only imagine it is still named after a person for the layer of obfuscation that so often makes people feel smarter (same goes for the Pauli-Y and Pauli-Z gates, but we’ll get to those when we need them).

If we’re thinking of $e_0$ as the boolean value “false” and $e_1$ as the boolean value “true”, then the quantum NOT gate simply swaps those two states. In particular, note that composing a Hadamard and a quantum NOT gate can have interesting effects: $XH(e_0) = H(e_0)$, but $XH(e_1) = -H(e_1)$. In the second case, the minus sign is the culprit. Which brings us to…
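Here is a quick numpy check of that claim (a small sketch):

```python
import numpy

H = (1 / 2**0.5) * numpy.array([[1, 1], [1, -1]])
X = numpy.array([[0, 1], [1, 0]])
e0, e1 = numpy.array([1, 0]), numpy.array([0, 1])

print(X.dot(H.dot(e0)), H.dot(e0))    # the same vector: [0.707, 0.707]
print(X.dot(H.dot(e1)), -H.dot(e1))   # the same vector: [-0.707, 0.707]
```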

### Phase shift gate:

Given an angle $\theta$, we can “shift the phase” of one qubit by an angle of $\theta$ using the 2 x 2 matrix

$$R_{\theta} = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}$$

“Phase” is a term physicists like to use for angles. Since the coefficients of a quantum state vector are complex numbers, and since complex numbers can be thought of geometrically as vectors with direction and magnitude, it makes sense to “rotate” the coefficient of a single qubit. So $R_{\theta}$ does nothing to $e_0$, and it rotates the coefficient of $e_1$ by an angle of $\theta$.

Continuing in our theme of concreteness, if I have the state vector $v = \frac{1}{\sqrt{2}}(e_{00} + e_{11})$ and I apply a rotation of $\pi$ to the second qubit, then my operation is the matrix $I_2 \otimes R_{\pi}$, which maps $e_{00} \mapsto e_{00}$ and $e_{11} \mapsto -e_{11}$. That would map the state $v$ to $\frac{1}{\sqrt{2}}(e_{00} - e_{11})$.

If we instead used the rotation by $\pi/2$ we would get the output state $\frac{1}{\sqrt{2}}(e_{00} + i e_{11})$.

### Quantum AND/OR gate:

In the last post in this series we gave the quantum AND gate and left the quantum OR gate as an exercise. Rather than write out the matrix again, let me remind you of this gate using a description of the effect on the basis elements $e_{z_1 z_2 z_3}$, where each $z_i$ is a bit. Recall that we need three qubits in order to make the operation reversible (which is a consequence of all quantum gates being unitary matrices). Some notation: $\oplus$ is the XOR of two bits, $\wedge$ is AND, and $\vee$ is OR. The quantum AND gate maps

$$e_{z_1 z_2 z_3} \mapsto e_{z_1 z_2 (z_3 \oplus (z_1 \wedge z_2))}$$

In words, the third coordinate is XORed with the AND of the first two coordinates. We think of the third coordinate as a “scratchwork” qubit which is maybe prepared ahead of time to be in state zero.

Similarly, the quantum OR gate maps $e_{z_1 z_2 z_3} \mapsto e_{z_1 z_2 (z_3 \oplus (z_1 \vee z_2))}$. As we saw last time, these combined with the quantum NOT gate (and some modest number of scratchwork qubits) allow quantum circuits to simulate any classical circuit.
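Since these gates just permute basis vectors, their matrices are easy to generate programmatically. Here is a short sketch of my own that builds the 8-by-8 quantum AND matrix directly from the rule above.

```python
import numpy

def quantumAnd():
    M = numpy.zeros((8, 8))
    for z1 in (0, 1):
        for z2 in (0, 1):
            for z3 in (0, 1):
                # column e_{z1 z2 z3} maps to row e_{z1 z2 (z3 XOR (z1 AND z2))}
                inputIndex = 4*z1 + 2*z2 + z3
                outputIndex = 4*z1 + 2*z2 + (z3 ^ (z1 & z2))
                M[outputIndex][inputIndex] = 1
    return M
```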

### Controlled-* gate:

The last example in this post is a meta-gate that represents a conditional branching. If we’re given a gate $A$ acting on $k$ qubits, then we define the *controlled-A* to be an operation which acts on $k + 1$ qubits. Let’s call the added qubit “qubit zero.” Then controlled-A does nothing if qubit zero is in state 0, and applies $A$ if qubit zero is in state 1. Qubit zero is generally called the “control qubit.”

The matrix representing this operation decomposes into blocks if the control qubit is actually the first qubit (or you rearrange):

$$\begin{pmatrix} I_{2^k} & 0 \\ 0 & A \end{pmatrix}$$

A common example of this is the controlled-NOT gate, often abbreviated CNOT, and it has the matrix

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
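And as a quick check (a sketch), building controlled-X from the block decomposition above reproduces exactly that matrix:

```python
import numpy

I2 = numpy.identity(2)
X = numpy.array([[0, 1], [1, 0]])
zeros = numpy.zeros((2, 2))

CNOT = numpy.block([[I2, zeros], [zeros, X]])
print(CNOT)
# [[1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]
#  [0. 0. 1. 0.]]
```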

## Looking forward

Okay let’s take a step back and evaluate our life choices. So far we’ve spent a few hours of our time motivating quantum computing, explaining the details of qubits and quantum circuits, and seeing examples of concrete quantum gates and studying measurement. I’ve hopefully hammered into your head the notion that quantum states which aren’t pure tensors (i.e. entangled) are where the “weirdness” of quantum computing comes from. But we haven’t seen any examples of quantum algorithms yet!

Next time we’ll see our first example of an algorithm that is genuinely quantum. We won’t tackle factoring yet, but we will see quantum “weirdness” in action.

Until then!

# Reflections and a public resolution

When I started blogging my goal was to record the cool things I learned about in math and computer science so that I didn’t forget them. Then it turned into an excuse for me to learn new cool things, a way to publicize my research, and an unexpected reputation.

It seems that Math ∩ Programming has become something of a resource. In a recent survey, I found out that my readers are as young as 15 and as old as 70. And they come from all over the world. It seems that already, even before graduating with my PhD, I’ve made a bigger dent in the world with my blog than my research papers likely ever will.

And I have no intention to stop. These last few months have been slow due to job searching, grant applications, and a horde of research projects. The graph isomorphism brouhaha didn’t help my productivity either, but it was worth it on a personal level :)

Indeed, the more I blog the more ideas I get. Here are just a few titles of unfinished drafts in my queue:

- The group theoretic view of quantum gates
- Singular value decomposition
- Matrix completion and recommender systems
- The unreasonable effectiveness of the Multiplicative Weights Update Algorithm
- Cryptographic hardness assumptions — a primer
- VC-dimension, Occam’s razor, and the geometry of hypotheses
- Big dimensions, and what you can do about it
- The quantum Fourier transform
- Linear programming and the most affordable healthy diet, part 2
- What’s up with graph Laplacians?
- Persistent homology
- Byzantine generals
- Support vector machines

And I’ve been meaning to spend some time on convex optimization techniques, and maybe some deep learning while I’m at it. Feel free to vote for your favorite topics and I’ll try to prioritize accordingly.

I have a few blog-related projects in mind for this year. The one that I want to publicly commit to, my new years resolution, is to finish and publish my book. This should be a surprise because I haven’t announced it anywhere. The tentative title is, **“A Programmer’s Introduction to the Culture of Mathematics.”** It’s intended to be a book introducing mathematics “from scratch,” but with the assumption that the reader is a competent programmer. So I’ll treat the reader like an experienced coder instead of a calculus student (or worse, a researcher). I’ll rely on explanations and analogies via code. In each chapter I’ll implement a full program illustrating what you learned in the chapter, with the code on github, too. I’ll explain every bit of notation, and discuss why the strange aspects of math are the way they are. This book isn’t intended to be a terse reference, but rather a book that you read from end to end. It’s also not meant to be a new “approach” to math. Instead this is more of a travel guide for the mathematical foreigner, with translations and tours and an explanation of mathematical tastes.

I’ve already written a hundred or so pages, and I’d estimate the book is around 1/3 to 1/2 done. I’m resolving to finish it this year. I plan to set up a mailing list within the next month or two for it so those interested can keep informed on my progress.

# Hashing to Estimate the Size of a Stream

**Problem: **Estimate the number of distinct items in a data stream that is too large to fit in memory.

**Solution: **(in python)

```python
import random

def randomHash(modulus):
    a, b = random.randint(0, modulus-1), random.randint(0, modulus-1)
    def f(x):
        return (a*x + b) % modulus
    return f

def average(L):
    return sum(L) / len(L)

def numDistinctElements(stream, numParallelHashes=10):
    modulus = 2**20
    hashes = [randomHash(modulus) for _ in range(numParallelHashes)]
    minima = [modulus] * numParallelHashes
    currentEstimate = 0

    for i in stream:
        hashValues = [h(i) for h in hashes]
        for i, newValue in enumerate(hashValues):
            if newValue < minima[i]:
                minima[i] = newValue

        currentEstimate = modulus / average(minima)

        yield currentEstimate
```

**Discussion:** The technique used here is to use random hash functions. The central idea is the same as the general principle presented in our recent post on hashing for load balancing. In particular, if you have an algorithm that works under the assumption that the data is uniformly random, then the same algorithm will work (up to a good approximation) if you process the data through a randomly chosen hash function.

So if we assume the data in the stream consists of uniformly random real numbers between zero and one, what we would do is the following. Maintain a single number representing the minimum element in the list, and update it every time we encounter a smaller number in the stream. A simple probability calculation or an argument by symmetry shows that the expected value of the minimum is $\frac{1}{n+1}$, where $n$ is the number of distinct items. So your estimate would be $\frac{1}{\min}$. (The extra +1 does not change much, as we’ll see.) One can spend some time thinking about the variance of this estimate (indeed, our earlier post is great guidance for how such a calculation would work), but since the data is not random we need to do more work. If the elements are actually integers between zero and $m$, then this estimate can be scaled by $m$ and everything basically works out the same.
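To see that idea in isolation, here is a tiny sketch of mine (not part of the solution above) of the minimum-based estimate on genuinely uniform random data:

```python
import random

n = 10000  # the true number of distinct items
data = [random.random() for _ in range(n)]

# E[min] = 1/(n+1), so 1/min is a (noisy) estimate of n
estimate = 1 / min(data)
print(n, estimate)
```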

Processing the data through a hash function chosen randomly from a 2-universal family (and we proved in the aforementioned post that this modulus thing is 2-universal) makes the outputs “essentially random” enough to have the above technique work with some small loss in accuracy. And to reduce variance, you can process the stream in parallel with many random hash functions. This rough sketch results in the code above. Indeed, before I state a formal theorem, let’s see the above code in action. First on truly random data:

```python
S = [random.randint(1, 2**20) for _ in range(10000)]

for k in range(10, 301, 10):
    for est in numDistinctElements(S, k):
        pass
    print(abs(est))

# output
18299.75567190227
7940.7497160166595
12034.154552410098
12387.19432959244
15205.56844547564
8409.913113220158
8057.99978043693
9987.627098464103
10313.862295081966
9084.872639057356
10952.745228373375
10360.569781803211
11022.469475216301
9741.250165892501
11474.896038520465
10538.452261306533
10068.793492995934
10100.266495424627
9780.532155130093
8806.382800033594
10354.11482578643
10001.59202254498
10623.87031408308
9400.404915767062
10710.246772348424
10210.087633885101
9943.64709187974
10459.610972568578
10159.60175069326
9213.120899718839
```

As you can see the output is never off by more than a factor of 2. Now with “adversarial data.”

```python
S = range(10000)  # [random.randint(1,2**20) for _ in range(10000)]

for k in range(10, 301, 10):
    for est in numDistinctElements(S, k):
        pass
    print(abs(est))

# output
12192.744186046511
15935.80547112462
10167.188106011634
12977.425742574258
6454.364151175674
7405.197740112994
11247.367453263867
4261.854392115023
8453.228233608026
7706.717624577393
7582.891328643745
5152.918628936483
1996.9365093316926
8319.20208545846
3259.0787592465967
6812.252720480753
4975.796789951151
8456.258064516129
8851.10133724288
7317.348220516398
10527.871485943775
3999.76974425661
3696.2999065091117
8308.843106180666
6740.999794281012
8468.603733730935
5728.532232608959
5822.072220349402
6382.349459544548
8734.008940222673
```

The estimates here are off by a factor of up to 5, and this estimate seems to get better as the number of hash functions used increases. The formal theorem is this:

**Theorem:** If $S$ is the set of distinct items in the stream, $n = |S|$, and $m > 100n$ is the modulus, then with probability at least 2/3 the estimate $m / \min$ is between $n/6$ and $6n$.

We omit the proof (see below for references and better methods). As a quick analysis: since we’re only storing a constant number of integers at any given step, the algorithm has space requirement $O(\log m)$, and each step takes time polynomial in $\log m$ to update (since we have to compute a multiplication and a modulus with numbers of size at most $m$).

This method is just the first ripple in a lake of research on this topic. The general area is called “streaming algorithms,” or “sublinear algorithms.” This particular problem, called *cardinality estimation*, is related to a family of problems called *estimating frequency moments. *The literature gets pretty involved in the various tradeoffs between space requirements and processing time per stream element.

As far as estimating cardinality goes, the first major results were due to Flajolet and Martin in 1983, where they provided a slightly more involved version of the above algorithm, which uses logarithmic space.

Later revisions to the algorithm (2003) got the space requirement down to $O(\log \log n)$, which is exponentially better than our solution. And further tweaks and analysis improved the accuracy to a relative error of roughly $1.04 / \sqrt{m}$, where $m$ is the number of registers used. This is called the HyperLogLog algorithm, and it has been tested in practice at Google.

Finally, a theoretically optimal algorithm (achieving an arbitrarily good estimate with logarithmic space) was presented and analyzed by Kane et al in 2010.