Hamming’s Code

Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

  1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
  2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary $ \{ 0,1 \}$ and still grok the important stuff.

Richard Hamming, inventor of Hamming codes. [image source]

What is a code?

The formal definition of a code is simple: a code $ C$ is just a subset of $ \{ 0,1 \}^n$ for some $ n$. Elements of $ C$ are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length $ k$, so that our messages are in $ \{ 0,1 \}^k$. Then we’re really viewing a code $ C$ as the image of some encoding function $ \textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n$. We can define $ C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $ \textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $ |C| = 2^k$, and we can call $ k = \log |C|$ the message length of $ C$ even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with $ \{ 0,1 \}$, the alphabet of your encoded messages could be an arbitrary set $ \Sigma$. So then a code $ C$ would be a subset of tuples in $ \Sigma^n$, and we would call $ q = |\Sigma|$.

So we have these parameters $ n, k, q$, and we need one more. This is the minimum distance of a code, which we’ll denote by $ d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message $ y$ (possibly with noisy bits) we look for the (unencoded) message $ x$ whose encoding $ \textup{Enc}(x)$ is as close to $ y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.
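
To make the decoding rule concrete, here is a minimal Python sketch of nearest-codeword decoding (the function names are ours, and the brute-force search over all codewords is only reasonable for tiny codes):

def hamming_distance(x, y):
    """The number of coordinates in which two equal-length tuples differ."""
    return sum(a != b for a, b in zip(x, y))

def nearest_codeword(y, code):
    """Decode a received word y by picking the codeword closest to it."""
    return min(code, key=lambda c: hamming_distance(c, y))

Scanning all $ 2^k$ codewords is of course exponential; finding codes whose structure admits fast decoding is one of the recurring themes of coding theory.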

So coding theorists turn this mess of parameters into notation.

Definition: A code $ C$ is called an $ (n, k, d)_q$-code if

  • $ C \subset \Sigma^n$ for some alphabet $ \Sigma$,
  • $ k = \log |C|$,
  • $ C$ has minimum distance $ d$, and
  • the alphabet $ \Sigma$ has size $ q$.

The basic goals of coding theory are:

  1. For which values of these four parameters do codes exist?
  2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $ k$ for $ d=3$, and we’ll state a characterization theorem for optimizing $ k$ for a general $ d$. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $ \{ 0,1 \}^n$, that is, tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $ \mathbb{F}_q$ for a prime power $ q$, i.e. have your vector space be $ \mathbb{F}_q^n$. We’ll go back and forth between describing a binary code ($ q=2$) over $ \{ 0,1 \}$ and a code in $\mathbb{F}_q^n$. So to say a code is linear means:

  • The zero vector is a codeword.
  • The sum of any two codewords is a codeword.
  • Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an $ (n,k,d)_q$-code $ C$ is a $ k \times n$ matrix $ G$ whose rows form a basis for $ C$.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages $ x \in \{0,1 \}^k$ by left multiplication $ xG$. Intuitively, we can think of the bits of $ x$ as describing the coefficients of the chosen linear combination of the rows of $ G$, which uniquely describes an element of the subspace. Note that because a $ k$-dimensional subspace of $ \{ 0,1 \}^n$ has $ 2^k$ elements, we’re not abusing notation by calling $ k = \log |C|$ both the message length and the dimension.
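
As a quick illustration, here is encoding by left multiplication in Python with numpy; the particular matrix below is a hypothetical example written in the systematic form $ [I \mid A]$ discussed further down (and, up to reordering coordinates, it generates the $ (7,4)$ Hamming code we construct below).

import numpy as np

# A generator matrix for a (7, 4)-code, in systematic form [I | A].
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

x = np.array([1, 0, 1, 1])   # a 4-bit message
codeword = x @ G % 2         # encode by left multiplication, arithmetic mod 2
print(codeword)              # [1 0 1 1 0 1 0]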

For the second description of $ C$, we’ll remind the reader that every linear subspace $ C$ has a unique orthogonal complement $ C^\perp$, which is the subspace of vectors that are orthogonal to every vector in $ C$.

Definition: Let $ H^T$ be a generator matrix for $ C^\perp$. Then $ H$ is called a parity check matrix.

Note $ H$ has the basis for $ C^\perp$ as columns. This means it has dimensions $ n \times (n-k)$. Moreover, it has the property that $ x \in C$ if and only if the left multiplication $ xH = 0$. Having zero dot product with all columns of $ H$ characterizes membership in $ C$.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute $ yH$ on your received message $ y$, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that $ y$ coincided with a different codeword than the one that was sent? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.

So that raises an obvious question: if you give me the generator matrix of a linear code, can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and from that perspective it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector $ x$, denoted $ wt(x)$, is the number of nonzero entries in $ x$.

Proposition: The minimum distance of a linear code $ C$ is the minimum Hamming weight over all nonzero vectors $ x \in C$.

Proof. Consider a nonzero $ x \in C$. The zero vector is a codeword and $ wt(x)$ is by definition the Hamming distance between $ x$ and zero, so the minimum distance is at most the minimum weight. Conversely, if $ x,y$ are two distinct codewords, then $ x-y$ is a nonzero codeword and $ wt(x-y)$ is the Hamming distance between $ x$ and $ y$, so the minimum distance is at least the minimum weight.

$ \square$

So now we can define our first code, the Hamming code. It will be an $ (n, k, 3)_2$-code. The construction is quite simple. We have fixed $ d=3, q=2$, and we will also fix $ l = n-k$. One can think of this as fixing $ n$ and maximizing $ k$, but it will only work for $ n$ of a special form.

We’ll construct the Hamming code by describing a parity-check matrix $ H$. In fact, we’re going to see what conditions the minimum distance $ d=3$ imposes on $ H$, and find out those conditions are actually sufficient to get $ d=3$. We’ll start with the condition $ d \geq 2$. To ensure $ d \geq 2$, we need that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if $ e_i$ is a vector with all zeros except a one in position $ i$, then $ e_i H = h_i$ is the $ i$-th row of $ H$. We need $ e_i H \neq 0$, so this imposes the condition that no row of $ H$ can be zero. It’s easy to see that this is sufficient for $ d \geq 2$.

Likewise for $ d \geq 3$: given a vector $ y = e_i + e_j$ for some positions $ i \neq j$, the product $ yH = h_i + h_j$ must not be zero. But because our sums are mod 2, saying that $ h_i + h_j \neq 0$ is the same as saying $ h_i \neq h_j$. Again it’s an if and only if. So we have the two conditions.

  • No row of $ H$ may be zero.
  • All rows of $ H$ must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large can $ n$ be if the vectors have length $ n-k = l$? That’s just the number of distinct nonzero binary strings of length $ l$, which is $ 2^l - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every $ l > 0$, there is a $ (2^l - 1, 2^l - l - 1, 3)_2$-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If $ x \in C$ and $ e$ is an error vector with $ wt(e) = 1$, say a flipped bit in position $ i$, then the incoming message would be $ y = x + e$. Now compute $ yH = xH + eH = 0 + eH = h_i$. That is, whichever row of $ H$ you get tells you the index of the error, so you can flip the corresponding bit of $ y$ and correct it. If you order the rows lexicographically like we said, then $ h_i = i$ as a binary number. Very slick.
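
Here is a short Python sketch of the whole construction and the syndrome-decoding step just described, for $ l = 3$ (so $ n = 7$); enumerating all of $ \{0,1\}^7$ to find the codewords is only for illustration.

import itertools
import numpy as np

l = 3
n = 2**l - 1   # 7

# Row i (1-indexed) of H is the binary expansion of i, so the rows are distinct and nonzero.
H = np.array([[(i >> j) & 1 for j in range(l)] for i in range(1, n + 1)])

# Brute-force the code: all y in {0,1}^7 with yH = 0 (mod 2).
code = [y for y in (np.array(t) for t in itertools.product([0, 1], repeat=n))
        if not (y @ H % 2).any()]
assert len(code) == 2**(n - l)   # 16 codewords, so k = 4

# Flip one bit of a codeword and correct it from the syndrome.
x = code[5]
y = x.copy()
y[3] ^= 1                        # introduce an error in position 3 (0-indexed)
syndrome = y @ H % 2             # equals row 3 of H, the binary expansion of 4
position = int(sum(int(bit) << j for j, bit in enumerate(syndrome))) - 1
y[position] ^= 1                 # flip the offending bit back
assert np.array_equal(y, x)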

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number $ n-k$ “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix $ G$ of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $ G$ and get a new generator matrix that looks like $ [I \mid A]$ where $ I$ is the identity matrix of the appropriate size and $ A$ is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $ A$. It’s a different encoding function on $ \{ 0,1\}^k$, but it has the same image in $ \{ 0,1 \}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
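
Here is a sketch of that row reduction over $ \mathbb{F}_2$ in Python; it assumes the first $ k$ columns of $ G$ happen to be linearly independent, and in general one would also permute columns, which only relabels the coordinates of the code.

import numpy as np

def standard_form(G):
    """Row-reduce a binary generator matrix into systematic form [I | A] (mod 2)."""
    G = G.copy() % 2
    k, n = G.shape
    for col in range(k):
        pivot = next(r for r in range(col, k) if G[r, col] == 1)
        G[[col, pivot]] = G[[pivot, col]]       # swap the pivot row into place
        for r in range(k):
            if r != col and G[r, col] == 1:
                G[r] = (G[r] + G[col]) % 2      # clear the rest of the column
    return G

print(standard_form(np.array([[1, 1, 0, 1], [0, 1, 1, 1]])))
# [[1 0 1 0]
#  [0 1 1 1]]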

If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds $ \Theta(\log n)$ check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of $ n$ requires $ \log n$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.

The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: maximizing $ k$ given a fixed choice of $ n$, $ d$, and $ q$. To see this, let $ V_n(r)$ denote the volume of a ball of radius $ r$ in the space $ \mathbb{F}_2^n$. I.e., if you fix any string (doesn’t matter which) $ x$, then $ V_n(r)$ is the size of the set $ \{ y : d(x,y) \leq r \}$, where $ d(x,y)$ is the Hamming distance.

There is a theorem called the Hamming bound, which limits how many disjoint balls of radius $ r$ you can pack inside $ \mathbb{F}_2^n$.

Theorem: If an $ (n,k,d)_2$-code exists, then

$ \displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n$

Proof. The proof is quite simple. To say a code $ C$ has distance $ d$ means that any two distinct codewords $ x, y \in C$ differ in at least $ d$ positions. In other words, the balls of radius $ r = \lfloor (d-1)/2 \rfloor$ centered at $ x$ and $ y$ are disjoint: if some string $ z$ were in both, the triangle inequality would give $ d(x,y) \leq 2 \lfloor (d-1)/2 \rfloor \leq d-1$, a contradiction. (E.g., when $ d=3$ the balls of radius 1 around distinct codewords cannot overlap.) Now $ |C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are only $ 2^n$ strings in $ \mathbb{F}_2^n$, establishing the desired inequality.

$ \square$

Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
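
It’s not a proof, but a few lines of Python make for a satisfying sanity check that the Hamming code’s parameters meet the bound with equality:

from math import comb

def ball_volume(n, r):
    """V_n(r): the number of strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

for l in range(2, 10):
    n, k = 2**l - 1, 2**l - l - 1
    assert 2**k * ball_volume(n, 1) == 2**n   # equality in the Hamming bound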

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, and some of them are even nonlinear. But some perfect codes are “trivial.” For example, when $ d=1$ you can just use the identity encoding to get the code $ C = \mathbb{F}_2^n$. You can also have a code consisting of a single codeword. There are also codes that encode by repeating the message multiple times; these are called “repetition codes.” All three of these kinds of examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietavainen ’73]: Let $ C$ be a nontrivial perfect $ (n,k,d)_q$-code. Then the parameters must either be those of a Hamming code, or one of the following two:

  • A $ (23, 12, 7)_2$-code
  • A $ (11, 6, 5)_3$-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three linear codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we’ll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We’ll define the Reed-Solomon codes and show they optimize this bound as well. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!

A Proofless Introduction to Information Theory

There are two basic problems in information theory that are very easy to explain. Two people, Alice and Bob, want to communicate over a digital channel over some long period of time, and they know the probability that certain messages will be sent ahead of time. For example, English language sentences are more likely than gibberish, and “Hi” is much more likely than “asphyxiation.” The problems are:

  1. Say communication is very expensive. Then the problem is to come up with an encoding scheme for the messages which minimizes the expected length of an encoded message and guarantees the ability to unambiguously decode a message. This is called the noiseless coding problem.
  2. Say communication is not expensive, but error prone. In particular, each bit $ i$ of your message is erroneously flipped with some known probability $ p$, and all the errors are independent. Then the question is, how can one encode their messages so as to guarantee (with high probability) the ability to decode any sent message? This is called the noisy coding problem.

There are actually many models of “communication with noise” that generalize (2), such as models based on Markov chains. We are not going to cover them here.

Here is a simple example for the noiseless problem. Say you are just sending binary digits as your messages, and you know that the string “00000000” (eight zeros) occurs half the time, and all other eight-bit strings occur equally likely in the other half. It would make sense, then, to encode the “eight zeros” string as a 0, and prefix all other strings with a 1 to distinguish them from zero. You would save on average $ 7 \cdot 1/2 + (-1) \cdot 1/2 = 3$ bits in every message.

One amazing thing about these two problems is that they were posed and solved in the same paper by Claude Shannon in 1948. One byproduct of his work was the notion of entropy, which in this context measures the “information content” of a message, or the expected “compressibility” of a single bit under the best encoding. For the extremely dedicated reader of this blog, note this differs from Kolmogorov complexity in that we’re not analyzing the compressibility of a string by itself, but rather when compared to a distribution. So really we should think of (the domain of) the distribution as being compressed, not the string.

Claude Shannon. Image credit: Wikipedia

Entropy and noiseless encoding

Before we can state Shannon’s theorems we have to define entropy.

Definition: Suppose $ D$ is a distribution on a finite set $ X$, and I’ll use $ D(x)$ to denote the probability of drawing $ x$ from $ D$. The entropy of $ D$, denoted $ H(D)$, is defined as

$ H(D) = \sum_{x \in X} D(x) \log \frac{1}{D(x)}$

It is strange to think about this sum in the abstract, so let’s suppose $ D$ is a biased coin flip with bias $ 0 \leq p \leq 1$ of landing heads. Then we can plot the entropy as follows (all logarithms are base 2).

Image source: Wikipedia

The horizontal axis is the bias $ p$, and the vertical axis is the value of $ H(D)$, which with some algebra is $ -p \log p - (1-p) \log (1-p)$. From the graph above we can see that the entropy is maximized when $ p=1/2$ and minimized at $ p=0, 1$. You can verify all of this with calculus, and you can prove that the uniform distribution maximizes entropy in general as well.

So what is this saying? High entropy means a message is hard to compress, while low entropy leaves a lot of room for compression. Indeed, if our message consisted of the results of 10 such coin flips and $ p$ was close to 1, we would be able to compress a lot by encoding strings with lots of 1’s using few bits. On the other hand, if $ p=1/2$ we couldn’t get any compression at all. All strings would be equally likely.
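
Here is the definition in a couple of lines of Python (the helper name is ours):

from math import log2

def entropy(probabilities):
    """H(D) = sum over the support of D of D(x) * log2(1 / D(x))."""
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    print(p, round(entropy([p, 1 - p]), 3))
# The biased coin: 1 bit of entropy at p = 0.5, zero entropy at p = 0 or p = 1.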

Shannon’s famous theorem shows that the entropy of the distribution is actually all that matters. Some quick notation: $ \{ 0,1 \}^*$ is the set of all binary strings.

Theorem (Noiseless Coding Theorem) [Shannon 1948]: For every finite set $ X$ and distribution $ D$ over $ X$, there are encoding and decoding functions $ \textup{Enc}: X \to \{0,1 \}^*, \textup{Dec}: \{ 0,1 \}^* \to X$ such that

  1. The encoding/decoding actually works, i.e. $ \textup{Dec}(\textup{Enc}(x)) = x$ for all $ x$.
  2. The expected length of an encoded message is between $ H(D)$ and $ H(D) + 1$.

Moreover, no encoding scheme can do better.

Item 2 and the last sentence are the magical parts. In other words, if you know your distribution over messages, you precisely know how long to expect your messages to be. And you know that you can’t hope to do any better!
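
As a sanity check (certainly not a proof), we can revisit the eight-zeros example from the beginning of the post and compare the entropy of that distribution to the expected length of the ad hoc encoding we gave:

from math import log2

# "00000000" has probability 1/2; the other 255 eight-bit strings split the rest evenly.
probabilities = [1 / 2] + [1 / 510] * 255
H = sum(p * log2(1 / p) for p in probabilities)

# The earlier encoding: one bit for "00000000", nine bits (a prepended 1) for everything else.
expected_length = (1 / 2) * 1 + (1 / 2) * 9

print(round(H, 3), expected_length)   # about 4.997 versus 5.0

So even that simple scheme lands inside the window $ [H(D), H(D)+1]$ promised by the theorem.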

As the title of this post says, we aren’t going to give a proof here. Wikipedia has a proof if you’re really interested in the details.

Noisy Coding

The noisy coding problem is more interesting because in a certain sense (that was not solved by Shannon) it is still being studied today in the field of coding theory. The interpretation of the noisy coding problem is that you want to be able to recover from white noise errors introduced during transmission. The concept is called error correction. To restate what we said earlier, we want to recover from error with probability asymptotically close to 1, where the probability is over the errors.

It should be intuitively clear that you can’t do so without your encoding “blowing up” the length of the messages. Indeed, if your encoding does not blow up the message length then a single error will confound you, since many valid messages would differ by only a single bit. So the questions are: does such an encoding exist, and if so, how much do we need to blow up the message length? Shannon’s second theorem answers both.

Theorem (Noisy Coding Theorem) [Shannon 1948]: For any constant noise rate $ p < 1/2$, there is an encoding scheme $ \textup{Enc} : \{ 0,1 \}^k \to \{0,1\}^{ck}, \textup{Dec} : \{ 0,1 \}^{ck} \to \{ 0,1\}^k$ with the following property. If $ x$ is the message sent by Alice, and $ y$ is the message received by Bob (i.e. $ \textup{Enc}(x)$ with random noise), then $ \Pr[\textup{Dec}(y) = x] \to 1$ as a function of $ n=ck$. In addition, if we denote by $ H(p)$ the entropy of the distribution of an error on a single bit, then choosing any $ c > \frac{1}{1-H(p)}$ guarantees the existence of such an encoding scheme, and no scheme exists for any smaller $ c$.

This theorem formalizes a “yes” answer to the noisy coding problem, but moreover it characterizes the blowup needed for such a scheme to exist. The deep fact is that it only depends on the noise rate.
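
To get a feel for the tradeoff, here is a toy simulation in Python of the most naive scheme, a repetition code over a channel that flips each bit with probability $ p = 0.1$. This is emphatically not Shannon’s construction: repetition drives the error down only by letting the rate $ 1/c$ go to zero, which is exactly the waste the theorem says you can avoid.

import random

def send_bit(bit, repetitions, p):
    """Send one bit `repetitions` times through a noisy channel and majority-decode."""
    received = [bit ^ (random.random() < p) for _ in range(repetitions)]
    return int(sum(received) > repetitions / 2)

random.seed(0)
p, trials = 0.1, 100_000
for c in [1, 3, 5, 9, 15]:
    errors = sum(send_bit(0, c, p) for _ in range(trials))   # decoding a sent 0 as 1 is an error
    print(f"repetitions={c:2d}  empirical error rate={errors / trials:.4f}")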

A word about the proof: it’s probabilistic. That is, Shannon proved such an encoding scheme exists by picking $ \textup{Enc}$ to be a random function (!). Then $ \textup{Dec}(y)$ finds (nonconstructively) the string $ x$ such that the number of bits different between $ \textup{Enc}(x)$ and $ y$ is minimized. This “number of bits that differ” measure is called the Hamming distance. Then he showed using relatively standard probability tools that this scheme has the needed properties with high probability, the implication being that some scheme has to exist for such a probability to even be positive. The sharp threshold for $ c$ takes a bit more work. If you want the details, check out the first few lectures of Madhu Sudan’s MIT class.

The non-algorithmic nature of his solution is what opened the door to more research. The question has moved past “Are there any encodings that work?” to the more interesting “What is the algorithmic cost of constructing such an encoding?” It became a question of complexity, not computability. Moreover, the guarantees people wanted were strengthened to worst case guarantees. In other words, if I can guarantee at most 12 errors, is there an encoding scheme that will allow me to always recover the original message, and not just with high probability? One can imagine that if your message contains nuclear codes or your bank balance, you’d definitely want to have 100% recovery ability.

Indeed, two years later Richard Hamming spawned the theory of error correcting codes and defined codes that can always correct a single error. This theory has expanded and grown over the last sixty years, and these days the algorithmic problems of coding theory have deep connections to most areas of computer science, including learning theory, cryptography, and quantum computing.

We’ll cover Hamming’s basic codes next time, and then move on to Reed-Solomon codes and others. Until then!

Information Distance — A Primer

This post assumes familiarity with our primer on Kolmogorov complexity. We recommend the uninformed reader begin there. We will do our best to keep consistent notation across both posts.

Kolmogorov Complexity as a Metric

Over the past fifty years mathematicians have been piling up more and more theorems about Kolmogorov complexity, and for good reason. One of the main interpretations of the Kolmogorov complexity function $ K$ is that for a given string $ x$, $ K(x)$ is the best theoretical compression of $ x$ under any compression scheme. So a negative result about $ K$ can provide useful bounds on how good a real-world compressor can be. It turns out that these properties also turn $ K$ into a useful tool for machine learning. The idea is summarized as follows:

Let $ x,y$ be binary strings, and as usual let’s fix some universal programming language $ L$ in which to write all of our programs. Let $ p(x,y)$ be the shortest program which computes both $ y$ when given $ x$ as an input, and $ x$ given $ y$. We would imagine that if $ x,y$ are unrelated, then the length of the program $ |p(x,y)|$ would be roughly $ K(x) + K(y)$, simply by running the shortest program to output $ x$ using no inputs, followed by the same thing for $ y$. As usual there will be some additive constant term independent of both $ x$ and $ y$. We denote this by $ c$ or $ O(1)$ interchangeably.

We would further imagine that if $ x,y$ are related (that is, if there is some information about $ x$ contained in $ y$ or vice versa), then the program $ p(x,y)$ would utilize that information and hence be shorter than $ K(x) + K(y)$. It turns out that there is an even better way to characterize $ p$, and with a few modifications we can turn the length of $ p$ into something similar to a metric on the set of all strings.

This metric has some strikingly attractive features. We will see that it is “universal” with respect to a certain class of distance functions (which is unfortunately not the class of all metrics). In particular, for any of these functions $ f$, the length $ |p(x,y)|$ will be at worst a small amount larger than $ f(x,y)$. In words, if $ x$ and $ y$ are similar according to any of these distance functions, then they will be similar according to $ p$. Of course the devil is in the details, but this is the right idea to have in mind while we wade through the computations.

An Aside on Metrics, and Properties of Kolmogorov Complexity

In recent posts on this blog we’ve covered a number of important examples of metrics and investigated how a metric creates structure in a space. But as powerful and rare as fruitful metrics are, we have barely scratched the surface of the vast amount of literature on the subject.

As usual with our computations in Kolmogorov complexity, all of our equalities will be true up to some kind of additive sloppiness. Most of the time it will be an additive constant $ O(1)$ which is independent of anything else in the equation. We will usually omit the constant with that implicit understanding, and instead we will specify the times when it is an exact equality (or when the additive sloppiness is something other than a constant).

And so, unavoidably, the “metric” we define won’t be a true metric. It will only satisfy the metric properties (positive definite, symmetric, triangle inequality) up to a non-constant additive sloppiness. This will be part of the main theorem of this post.

Before we can reach the heart of the matter (and as a nice warm-up), we need to establish a few more properties of $ K$. Recall that by $ K(x|y)$ we mean the shortest program which computes $ x$ when provided $ y$ as an auxiliary input. We call this the conditional complexity of $ x$ given $ y$. Further, recall that $ K(x,y)$ is the length of the shortest program which outputs both $ x$ and $ y$, and a way to distinguish between the two (if everything is in binary, the distinguishing part is nontrivial; should the reader be interested, this sort of conversation is made for comment threads). Finally, the comma notation works for auxiliary inputs as well: $ K(x|y,z)$ is the length of the shortest program outputting $ x$ when given $ y,z$ and a way to distinguish them as input.

For example, the conditional Kolmogorov complexity $ K(1^n | n) = c$ is constant: given the length $ n$ as input, a constant-size program can print the string $ 1^n$. On the other hand, if $ x,y$ are random strings (their bits are generated independently and uniformly at random), then $ K(y|x) = K(y)$; there is no information about $ y$ contained in $ x$.

Definition: Let $ x$ be a (binary) string. We denote by $ x^*$ the shortest program which computes $ x$. That is, $ K(x) = |x^*|$. If there are two shortest programs which compute $ x$, then $ x^*$ refers to the first in the standard enumeration of all programs.

As a quick aside, the “standard enumeration” is simple: treat a binary string as if it were a natural number written in base 2, and enumerate all strings in increasing order of their corresponding number. The choice of enumeration is irrelevant, though; all that matters is that it is consistent throughout our theory.

Proposition: Kolmogorov complexity has the following properties up to additive constants:

  1. $ K(x|y^*) = K(x|y,K(y))$
  2. $ K(x|y^*) \leq K(x|y)$, and $ K(x|y) \leq K(x|y^*) + O(\log(K(y)))$
  3. $ K(x,y) = K(x) + K(y|x^*)$

The first item simply states that giving $ y^*$ as input to a program is the same as giving $ y$ and $ K(y)$. This is not hard to prove. If $ p$ is the shortest program computing $ x$ from $ y,K(y)$, then we can modify it slightly to work with $ y^*$ instead. Just add to the beginning of $ p$ the following instructions:

Compute K(y) as the length of the input y*
Simulate y* and record its output y

Since $ y^*$ is a finite string and represents a terminating program, these two steps produce the values needed to run $ p$. Moreover, the program description is constant in length, independent of $ y^*$.

On the other hand, if $ q$ is a program computing $ x$ from $ y^*$, we are tasked with finding $ y^*$ given $ y, K(y)$. The argument uses a standard but slightly more involved technique in theoretical computer science called dovetailing. In particular, since we know the length of $ y^*$, and there are only finitely many programs of the same length, we can get a list $ p_1, p_2, \dots p_n$ of all programs of length $ K(y)$. We then interleave the simulation of each of these programs; that is, we run the first step of all of the $ p_i$, then the second, third, and so on. Once we find a program which halts and outputs $ y$ (and we are guaranteed that one will do so) we can stop. In pseudocode, this is just the subroutine:

L = [all programs of length K(y) in lexicographic order]
i = 1
while True:
   for program in L:
      run step i of program
      if program terminates and outputs y:
         return program
   i = i + 1

The fact that this algorithm will terminate proves the claim.

The second item in the proposition has a similar proof, and we leave it as an exercise to the reader. (Hint: the logarithm in the second part of the statement comes from the hard-coding of a binary representation of the number $ K(y)$)

The third item, that $ K(x,y) = K(x) + K(y|x^*)$, has a much more difficult proof, and its consequences are far-reaching. We will use it often in our computations. The intrepid reader will see Theorem 3.9.1 in the text of Li & Vitanyi for a complete proof, but luckily one half of the proof is trivial. That is, the proof that $ K(x,y) \leq K(x) + K(y|x^*) + c$ is similar to the argument we used above. Let $ p,q$ be the shortest programs computing $ x$ and $ y$ given $ x^*$, respectively. We can combine them into a program computing $ x$ and $ y$. First run $ p$ to compute $ x$ and compute the length of $ p$. As we saw, these two pieces of data are equivalent to $ x^*$, and so we can compute $ y$ using $ q$ as above, adding at most a constant amount of program text to do so.

This property is so important it has a name.

Lemma: (Symmetry of information)

$ \displaystyle K(x,y) = K(x) + K(y|x^*) = K(y) + K(x|y^*)$

This is true (and named appropriately) since there is symmetry in the quantity $ K(x,y) = K(y,x)$. Note in particular that this doesn’t hold without the star: $ K(x,y) = K(x) + K(y|x) + O(\log(K(x)))$. Those readers who completed the exercise above will know where the logarithm comes from.

The Almost-Triangle Inequality

The first application of the symmetry of information is (surprisingly) a variant of the triangle inequality. Specifically, the function $ f(x,y) = K(x|y^*)$ satisfies the metric inequalities up to an additive constant sloppiness.

$ \displaystyle K(x|y^*) \leq K(x|z^*) + K(z|y^*) + c$

where $ c$ does not depend on $ x, y, z$. To prove this, see that

$ \displaystyle K(x,z | y^*) = K(x,y,z) - K(y) \leq K(z) + K(x|z^*) + K(y|z^*) - K(y)$

The first equality is by the symmetry of information $ K(x,y,z) = K(y) + K(x,z|y^*)$, and the second follows from the fact that $ K(x,y,z) \leq K(z) + K(x|z^*) + K(y|z^*)$. This is the same argument we used to prove the $ \leq$ case of the symmetry of information lemma.

Now we can rearrange the terms and use the symmetry of information twice, $ K(z) + K(y|z^*) = K(y,z)$ and $ K(y,z) – K(y) = K(z|y^*)$, to reach the final result.

This is interesting because it’s our first indication that Kolmogorov complexity can play a role in a metric. But there are some issues: $ K(x|y)$ is in general not symmetric. We need to come up with a symmetric quantity to use instead. There are quite a few details to this process (see this paper if you want to know them all), but the result is quite nice.

Theorem: Let $ E(x,y)$ be the length of the shortest program which computes $ x$ given $ y$ as input and $ y$ given $ x$. Then

$ \displaystyle E(x,y) = \max (K(x|y), K(y|x)) + O(\log(M))$

where $ M = \max(K(x|y), K(y|x))$.

That is, our intuitive idea of what the “information distance” from $ x$ to $ y$ should be coincides up to an additive logarithmic factor with the maximum of the conditional Kolmogorov complexities. If two strings are “close” with respect to $ E$, then there is a lot of mutual information between them. In the same paper listed above, the researchers (Bennett et al.) prove that $ E$ is a “metric” (up to additive constants) and so this gives a reasonable estimate for the true information distance in terms of conditional Kolmogorov complexities.

However, $ E$ is not the final metric used in applications, but just an inspiration for other functions. This is where the story gets slightly more complicated.

Normalized Information Distance(s)

At this point we realize that the information distance $ E$ defined above is not as good as we’d like it to be. One of its major deficiencies is that it does not compute relative distances very well. That is, it doesn’t handle strings of varying size as well as it should.

For example, take $ x$ to be a random string of length $ n$ for arbitrary $ n$. The quantity $ E(x, \varepsilon)$, where $ \varepsilon$ is the empty string, is just $ K(x) + c$ (if the input is empty, compute $ x$, otherwise output the empty string). But in a sense there is no information about $ \varepsilon$ in any string. In other words, $ \varepsilon$ is maximally dissimilar to all nonempty strings. But according to $ E$, the empty string is variably dissimilar to other strings: it’s “less similar” to strings with higher Kolmogorov complexity. This is counter-intuitive, and hence undesirable.

Unfortunately the literature is littered with alternative distance functions, and the researchers involved spend little effort relating them to each other (this is part of the business of defining things “up to sloppiness”). We are about to define the principal example we will be concerned with, and we will discuss its relationship with its computationally-friendly cousins at the end.

The link between all of these examples is normalization. That is (again up to minor additive sloppiness we’ll make clear shortly) the distance functions take values in $ [0,1]$, and a value of 0 means the strings are maximally similar, and a value of 1 implies maximal dissimilarity.

Definition: Let $ \Sigma = \left \{ 0,1 \right \}^*$ be the set of binary strings. A normalized distance $ f$ is a function $ \Sigma \times \Sigma \to [0,1]$ which is symmetric and satisfies the following density condition for all $ x \in \Sigma$ and all $ 0 \leq e \leq 1$:

$ \displaystyle |\left \{ y : f(x,y) \leq e \right \}| < 2^{eK(x) + 1}$

That is, there is a restriction on the number of strings that are close to $ x$. There is a sensible reason for such a convoluted condition: this is the Kolmogorov-complexity analogue of the Kraft inequality. One of the picky details we’ve blatantly left out in our discussion of Kolmogorov complexity is that the programs we’re allowed to write must collectively form a prefix-code. That is, no program is a proper prefix of another program. If the implications of this are unclear (or confusing), the reader may safely ignore it. It is purely a tool for theoretical analysis, and the full details are again in the text of Li & Vitanyi. We will come back to discuss other issues with this density condition later (in the mean time, think about why it’s potentially dubious), but now let us define our similarity metric.

Definition: The normalized information distance $ d(x,y)$ is defined by

$ \displaystyle d(x,y) = \frac{\max(K(x|y^*), K(y|x^*))}{\max(K(x), K(y))}$

The reason we switched from $ K(x|y)$ to $ K(x|y^*)$ will become apparent in our calculations (we will make heavy use of the symmetry of information, which does not hold by a constant factor for $ K(x|y)$).

Quickly note that this alleviates the empty-string problem we had with the non-normalized metric: $ d(x,\varepsilon) = K(x)/K(x) = 1$, so the empty string is maximally dissimilar to every nonempty $ x$, regardless of what $ x$ is.

We will prove two theorems about this function:

Theorem 1: (Metric Axioms) $ d(x,y)$ satisfies the metric axioms up to additive $ O(1/M)$ precision, where $ M$ is the maximum of the Kolmogorov complexities of the strings involved in the (in)equality.

Theorem 2: (Universality) $ d(x,y)$ is universal with respect to the class of computable normalized distance functions. That is, if $ f$ is a normalized distance, then for all $ x,y$ we have the following inequality:

$ d(x,y) \leq f(x,y) + O(1/M)$

where this time $ M$ is the minimum of the Kolmogorov complexities of the strings involved.

We should note that in fact theorem 2 holds for even more general normalized distance functions, the so-called “upper semi-computable” functions. Skipping the rigorous definition, this just means that one can recursively approximate the true value by giving a consistently improved upper bound which converges to the actual value. It is not hard to see that $ K$ is an upper semi-computable function, although it is unknown whether $ d$ is (and many believe it is not).

The proof of the first theorem is straightforward but notationally dense.

Proof of Theorem 1 (Metric Axioms): The value $ d(x,x) = K(x|x^*)/K(x) = O(1/K(x))$, since $ K(x|x^*) = K(x|x,K(x))$ is trivially constant, and $ d(x,y) \geq 0$ since Kolmogorov complexity is non-negative. Moreover, $ d(x,y)$ is exactly symmetric, so the proof boils down to verifying the triangle inequality holds.

Let $ x,y,z$ be strings. We gave a proof above that $ K(x|y^*) \leq K(x|z^*) + K(z|y^*) + O(1)$. We will modify this inequality to achieve our desired result, and there are two cases:

Case 1: $ K(z) \leq \max(K(x), K(y))$. Take the maximum of each side of the two inequalities for $ K(x|y^*), K(y|x^*)$ to get

$ \displaystyle \max(K(x|y^*), K(y|x^*)) \leq \max(K(x|z^*) + K(z|y^*) , K(y|z^*) + K(z|x^*)) + O(1)$

We can further increase the right hand side by taking termwise maxima

$ \displaystyle \max(K(x|y^*), K(y|x^*)) \leq \max(K(x|z^*), K(z|x^*)) + \max(K(y|z^*), K(z|y^*)) + O(1)$

Now divide through by $ \max(K(x), K(y))$ to get

$ \displaystyle \frac{\max(K(x|y^*), K(y|x^*))}{\max(K(x), K(y))} \leq \frac{\max(K(x|z^*), K(z|x^*))}{\max(K(x), K(y))} + \frac{\max(K(y|z^*), K(z|y^*))}{\max(K(x), K(y))} + O(1/M)$

Finally, since $ K(z)$ is smaller than the max of $ K(x), K(y)$, we can replace the  $ K(y)$ in the denominator of the first term of the right hand side by $ K(z)$. This will only possibly increase the fraction, and for the same reason we can replace $ K(x)$ by $ K(z)$ in the second term. This achieves the triangle inequality up to $ O(1/M)$, as desired.

Case 2: $ K(z) = \max(K(x), K(y), K(z))$. Without loss of generality we may also assume $ K(x) \geq K(y)$, for the other possibility has an identical argument. Now we can boil the inequality down to something simpler. We already know the denominators have to all be $ K(z)$ in the right hand side, and $ K(x)$ in the left. Moreover, we claim $ K(z|x^*) \geq K(x|z^*)$. This is by the symmetry of information:

$ \displaystyle K(x,z) = K(x|z^*) + K(z) = K(z|x^*) + K(x) \leq K(z|x^*) + K(z)$

Subtracting $ K(z)$ establishes the claim, and similarly we have $ K(z|y^*) \geq K(y|z^*)$. So the triangle inequality reduces to

$ \displaystyle \frac{K(x|y^*)}{K(x)} \leq \frac{K(z|x^*)}{K(z)} + \frac{K(z|y^*)}{K(z)} + O(1/K(z))$

Applying our original inequality again to get $ K(x|y^*) \leq K(x|z^*) + K(z|y^*) + O(1)$, we may divide through by $ K(x)$ and there are two additional cases.

$ \displaystyle \frac{K(x|y^*)}{K(x)} \leq \frac{K(x|z^*) + K(z|y^*) + O(1)}{K(x)}$

If the right-hand side is less than or equal to 1, then adding a constant $ c$ to the top and bottom of the fraction only increases the value of the fraction, and doesn’t violate the inequality. So we choose to add $ K(z)-K(x)$ to the top and bottom of the right-hand side and again using the symmetry of information, we get exactly the required value.

If the right-hand side is greater than 1, then adding any constant to the top and bottom decreases the value of the fraction, but it still remains greater than 1. Since $ K(x|y^*) \leq K(x)$ (a simple exercise), we see that the left-hand side is at most 1, and our same trick of adding $ K(z) - K(x)$ works. $ \square$

The proof of the universality theorem is considerably more elegant.

Proof of Theorem 2 (Universality): Let $ f$ be any normalized distance function, and set $ e = f(x,y)$. Suppose further that $ K(x) \leq K(y)$.

Let us enumerate all strings $ v$ such that $ f(x,v) \leq e$. In particular, since $ e = f(x,y)$, $ y$ is included in this enumeration. By the density condition, the number of such strings is at most $ 2^{eK(x) + 1}$. The index of $ y$ in this enumeration can be used as an effective description of $ y$ when given $ x$ as input. That is, there is a program which includes in its description the index of $ y$ and outputs $ y$ given $ x$. Since the number of bits needed to describe the index of $ y$ is at most $ \log(2^{eK(x) + 1}) = eK(x) + 1$, we have

$ \displaystyle K(y|x) \leq eK(x) + 1$

Since we assumed $ K(x) \leq K(y)$, the symmetry of information lemma gives us $ K(x|y^*) \leq K(y|x^*)$. And now

$ \displaystyle d(x,y) = \frac{K(y|x^*)}{K(y)} \leq \frac{K(y|x) + O(1)}{K(y)} \leq \frac{eK(x) + O(1)}{K(y)}$

Since $ K(x) \leq K(y)$, we can replace the denominator of the last expression with $ K(x)$ (only increasing the fraction) to get $ d(x,y) \leq e + O(1/K(x))$. But $ e$ was just $ f(x,y)$, so this completes the proof of this case.

In the case $ K(y) < K(x)$, the proof is similar (enumerating the index of $ x$ instead), and at the end we get

$ \displaystyle d(x,y) = \frac{K(x|y^*)}{K(x)} \leq \frac{eK(y) + O(1)}{K(y)} = f(x,y) + O(1/K(y))$

The theorem is proved. $ \square$

Why Normalized Distance Functions?

The practical implications of the two theorems are immense. What we’re saying is that if we can represent some feature of string similarity by a normalized distance function, then that feature will be captured automatically by the normalized information distance $ d$. The researchers who discovered normalized information distance (and proved its universality) argue that in fact upper semi-computable normalized distance functions encapsulate all real-world metrics we would ever care about! Of course, there is still the problem that Kolmogorov complexity is uncomputable, but we can certainly come up with reasonable approximations (we will see precisely this in our next post).

And these same researchers have shown that approximations to $ d$ do represent a good deal of universality in practice. They’ve applied the same idea to fields as varied as genome clustering, language clustering, and music clustering. We will of course investigate the applications for ourselves on this blog, but their results seem to apply to data mining in any field.

But still this raises the obvious question (which goes unaddressed in any research article this author has read): does every metric have a sensible interpretation (or modification) as a normalized distance function? That awkward density condition seems particularly suspect, and is at the core of this author’s argument that the answer is no.

Consider the following example. Let $ f$ be a normalized distance function, and fix $ e = 1$. The density condition says that for any $ x$ we want, the number of strings which are within distance 1 of $ x$ is bounded by $ 2^{K(x) + 1}$. In particular, this quantity is finite, so there can only be finitely many strings which are within distance 1 of $ x$. But there are infinitely many strings, so this is a contradiction!

Even if we rule out this (arguably trivial) case of $ e=1$, we still run into problems. Let $ e = 1 - \varepsilon$ for any sufficiently small $ \varepsilon > 0$. Then fix $ x = 0$ (the string consisting of the single bit 0). The number of strings which are within distance $ e$ of $ x$ is bounded by $ 2^{eK(x) + 1} < 2^{K(x) + 1}$, which is again finite (and quite small, since $ K(0)$ is about as simple as it gets). In other words, there are only a finite number of strings that are not maximally dissimilar to $ 0$. But one can easily come up with an infinite number of strings which share something in common with $ 0$: just use $ 0^n$ for any $ n$ you please. It is ludicrous to say that every metric should call $ 0$ as dissimilar to $ 0^n$ as the empty string is to a random string of a thousand bits.

In general, this author doesn’t find it likely that one can take any arbitrary $ f(x,y)$ which is both symmetric and has values in $ [0,1]$ and modify it to satisfy the density condition. Indeed, this author has yet to see any example of a natural normalized similarity metric. There is one which is a modification of Hamming distance, but it is relatively awkward and involves the Kolmogorov complexity of the strings involved. If the reader has any ideas to the contrary, please share them in the comments.

So it appears that the class of normalized distance functions is not as large as we might wish, and in light of this the universality theorem is not as impressive. On the other hand, there is no denying the success of applying the normalized information distance to complex real-world problems. Something profound is going on, but from this author’s viewpoint more theoretical work is needed to establish why.

Friendly Cousins of Normalized Information Distance

In practice we want to compute $ K(x|y^*)$ in terms of quantities we can actually approximate. Due to the symmetry of information, we can rewrite the metric formula as

$ \displaystyle d(x,y)=\frac{K(x,y) – \min(K(x), K(y))}{\max(K(x), K(y))}$

Indeed, since our main interpretation of $ K(x)$ is as the size of the smallest “compressed version” of the string $ x$, it would seem that we can approximate the function $ K$ by using real-world compression algorithms. And for the $ K(x,y)$ part, we recognize that (due to the need to specify a way to distinguish between the outputs $ x,y$)

$ K(x,y) \leq K(xy) + O(\log(\max(K(x), K(y))))$,

where $ K(xy)$ is the Kolmogorov complexity of the concatenation of the two strings. So if we’re willing to forgive additive logarithmic sloppiness (technically, $ O(\log(K(x))/K(x))$ sloppiness, which goes to zero asymptotically), we can approximate normalized information distance as

$ \displaystyle d(x,y) = \frac{K(xy) – \min(K(x), K(y))}{\max(K(x), K(y))}$
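
Here is a minimal sketch of that approximation in Python, using zlib’s compressed length as a crude stand-in for $ K$ (this compressor version is usually called the normalized compression distance; real experiments use stronger compressors, but the shape of the computation is the same, and the example strings are made up):

import random
import zlib

def C(s: bytes) -> int:
    """Compressed length of s, standing in for the Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """The compressor approximation of d(x, y) given above."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 19 + b"the quick brown cat sits "
r = bytes(random.randrange(256) for _ in range(len(a)))

print(round(ncd(a, b), 3))   # small: a and b share nearly all of their information
print(round(ncd(a, r), 3))   # near 1: a and the random string share very little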

In the literature researchers will also simplify the metric by removing the “star” notation

$ \displaystyle d(x,y) = \frac{\max(K(x|y), K(y|x))}{\max(K(x), K(y))}$

Unfortunately these two things aren’t equivalent. As we saw in our “basic properties” of $ K(x|y)$,

$ K(x|y) \leq K(x|y^*) + O(\log(K(y)))$

Indeed, it is not the case that $ K(x|y) = K(x|y^*)$. An easy counterexample is to compare $ K(K(x) | x)$ with $ K(K(x) | x^*)$. We have already proven that the right-hand side is always constant, but the left-hand side is not: an exercise in Li & Vitanyi shows there is an infinite family of strings $ x$ for which $ K(K(x) | x) \geq \log(|x|)$.

And so these two metrics cannot be equal, but they are close. In fact, denoting the non-star version by $ d_2$ and the regular version by $ d_1$, we have $ d_2(x,y) \leq d_1(x,y) + O(1)$. This changes the metric properties and the universality claim, because $ O(1/K)$ precision is stronger than $ O(1)$ precision. Indeed, the true constant is always less than 1 (e.g. when $ K(y) > K(x)$ it is $ K(y^*)/K(y)$), but this means the metric can potentially take values in the range $ [0,2]$, which is edging further and further away from the notion of normalization we originally strove for.

Finally, the last example of a cousin metric is

$ \displaystyle d_3(x,y) = \frac{K(x|y^*) + K(y|x^*)}{K(x,y)}$

We will leave it to the reader to verify this function again satisfies the metric inequalities (in the same way that the original normalized information distance does). On the other hand, it only satisfies universality up to a factor of 2. So while it still may give some nice results in practice (and it is easy to see how to approximate this), the first choice of normalized information distance was theoretically more precise.

Applications

We’ve just waded through a veritable bog of theory, but we’ve seen some big surprises along the way. Next time we’ll put these theoretical claims to the test by seeing how well we can cluster and classify data using the normalized information distance (and introducing as little domain knowledge as possible). Until then!