# Methods of Proof — Diagonalization

A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four”: direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever-growing supply of proof methods. There are books written about the “probabilistic method,” I recently went to a lecture where the “linear algebra method” was showcased, and there has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics. There are many more.

So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.

## Diagonalization

Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.

The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $n$ to $2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.

Theorem: There is no bijection from the natural numbers $\mathbb{N}$ to the real numbers $\mathbb{R}$.

Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection $f: \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $k$ and I will spit out $f(k)$, with the property that different $k$ give different $f(k)$, and every real number is hit by some natural number $k$ (this is just what it means to be a one-to-one mapping).

First let me just do some setup. I claim that all we need to do is show that there is no bijection between $\mathbb{N}$ and the real numbers between 0 and 1. In particular, I claim there is a bijection from $(0,1)$ to all real numbers, so if there were a bijection from $\mathbb{N} \to (0,1)$ we could compose the two to get a bijection from $\mathbb{N}$ to $\mathbb{R}$. To show there is a bijection from $(0,1) \to \mathbb{R}$, note first that $x \mapsto 1/x$ is a bijection from the open interval $(0,1)$ to $(1,\infty)$. With a little bit of extra work (read, messy details) you can extend this idea to all real numbers. Here’s a sketch: map $(0, 1/2]$ onto $[0, \infty)$ by $x \mapsto 1/x - 2$, and map $(1/2, 1)$ onto $(-\infty, 0)$ by $x \mapsto 2 - 1/(1-x)$. Each piece is a bijection onto its image, and the two images together cover every real number exactly once, so gluing the pieces gives a bijection from $(0,1)$ to $\mathbb{R}$.
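To make the gluing concrete, here is one explicit piecewise bijection from $(0,1)$ to $\mathbb{R}$ built out of the map $x \mapsto 1/x$, together with its inverse. The particular pieces are our own choice for illustration; many variants work.

```python
def to_reals(x):
    # A piecewise bijection (0,1) -> R built from x -> 1/x:
    # (0, 1/2] maps onto [0, inf), and (1/2, 1) maps onto (-inf, 0).
    assert 0 < x < 1
    if x <= 0.5:
        return 1.0 / x - 2.0
    return 2.0 - 1.0 / (1.0 - x)

def from_reals(y):
    # The inverse map R -> (0,1).
    if y >= 0:
        return 1.0 / (y + 2.0)
    return 1.0 - 1.0 / (2.0 - y)

# Round trips land back where they started (up to float error).
for x in [0.01, 0.5, 0.7, 0.99]:
    assert abs(from_reals(to_reals(x)) - x) < 1e-9
for y in [-1000.0, -0.5, 0.0, 3.7, 1000.0]:
    assert abs(to_reals(from_reals(y)) - y) < 1e-6
```

Each piece is strictly monotone onto its half of the line, so together they hit every real number exactly once.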

Okay, setup is done. We just have to show there is no bijection between $(0,1)$ and the natural numbers.

The reason I did all that setup is so that I can use the fact that every real number in $(0,1)$ has an infinite binary expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of $f(1)$ as a row in a table (an infinite row), below it the expansion of $f(2)$, below that $f(3)$, and so on, with the decimal points lined up. To keep track of individual digits, I’ll index the digits of $f(1)$ by $b_{1,1}, b_{1,2}, b_{1,3}, \dots$, the digits of $f(2)$ by $b_{2,1}, b_{2,2}, b_{2,3}, \dots$, and so on, where each $b_{i,j}$ is either 0 or 1. The table looks like this:

$\displaystyle \begin{matrix} f(1) &=& 0. & b_{1,1} & b_{1,2} & b_{1,3} & \cdots \\ f(2) &=& 0. & b_{2,1} & b_{2,2} & b_{2,3} & \cdots \\ f(3) &=& 0. & b_{3,1} & b_{3,2} & b_{3,3} & \cdots \\ \vdots & & & & \vdots & & \end{matrix}$

It’s a bit heavy on notation, but trust me, the subscripts are helpful.

Now by the assumption that $f$ is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.

Here’s how I’ll come up with such a number $N$ (this is the diagonalization part). It starts with “0.”, and its first digit after the decimal point is $1-b_{1,1}$. That is, we flip the bit $b_{1,1}$ to get the first digit of $N$. The second digit is $1-b_{2,2}$, the third is $1-b_{3,3}$, and so on. In general, digit $i$ of $N$ is $1-b_{i,i}$.

Now we show that $N$ isn’t in the table. If it were, then it would have to be $N = f(m)$ for some $m$, i.e. be the $m$-th row in the table. By the way we built the table, the $m$-th digit of $f(m)$ is $b_{m,m}$. But we defined $N$ so that its $m$-th digit is $1-b_{m,m}$. This is very embarrassing for $N$ (it’s a contradiction!). So $N$ isn’t in the table.

$\square$
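The diagonal construction itself can be played with directly. Below is a toy sketch: a few made-up rows (each a function from digit position to a bit, invented purely for illustration), and the flipped-diagonal number $N$, which provably disagrees with row $m$ at digit $m$:

```python
# A toy "table": each row maps a digit index to a bit (0 or 1).
rows = [
    lambda j: 0,                    # 0.0000...
    lambda j: 1,                    # 0.1111...
    lambda j: j % 2,                # 0.0101...
    lambda j: 1 if j == 0 else 0,   # 0.1000...
]

def N(i):
    # Digit i of the diagonal number: flip the i-th digit of row i.
    return 1 - rows[i](i)

# N differs from row m at digit m, so N appears in none of the rows.
for m in range(len(rows)):
    assert N(m) != rows[m](m)
```

The same check succeeds no matter which rows you write down, and that is the whole point of flipping the diagonal.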

It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?

## The Halting Problem

The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.

The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program $P$ and an input $x$ to that program, will $P$ ever stop running when given $x$ as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!

So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.

Theorem: The halting problem cannot be solved by Turing machines.

Proof. Suppose to the contrary that $T$ is a program that solves the halting problem. We’ll use $T$ as a black box to come up with a new program I’ll call meta-$T$, defined in pseudo-python as follows.

    def metaT(P):
        # T(P, x) is assumed to answer: does program P halt on input x?
        if T(P, P):
            while True:      # T says P halts on itself, so loop forever
                pass
        else:
            return "success!"  # T says P loops on itself, so halt


In words, meta-$T$ accepts as input the source code of a program $P$, and then uses $T$ to tell if $P$ halts (when given its own source code as input). Based on the result, it behaves the opposite of $P$; if $P$ halts then meta-$T$ loops infinitely and vice versa. It’s a little meta, right?

Now let’s do something crazy: let’s run meta-$T$ on itself! That is, run

    metaT(metaT)


So meta. The question is what is the output of this call? The meta-$T$ program uses $T$ to determine whether meta-$T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $T$‘s answer! Likewise, if $T$ says that metaT(metaT) should loop infinitely, that will cause meta-$T$ to halt, a contradiction. So $T$ cannot be correct, and the halting problem can’t be solved.

$\square$

This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.

But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $\mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
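You can even toy with this enumeration in Python: list strings over a small alphabet in length-then-lexicographic order and keep only the ones the compiler accepts. (The tiny alphabet here is an artificial choice just to keep the search small.)

```python
import itertools

ALPHABET = sorted("x=1+ ")  # an artificially tiny alphabet

def valid_programs(max_length):
    # All strings over ALPHABET in length-then-lex order that parse as Python.
    for n in range(1, max_length + 1):
        for chars in itertools.product(ALPHABET, repeat=n):
            candidate = "".join(chars)
            try:
                compile(candidate, "<candidate>", "exec")
                yield candidate
            except SyntaxError:
                pass  # throw out syntactically invalid strings

programs = list(valid_programs(3))
assert "x=1" in programs      # a real (tiny) program
assert "x=" not in programs   # not syntactically valid
```

Pairing each surviving program with its position in this list is exactly the bijection with $\mathbb{N}$ described above.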

The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.

For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:

010101010101010101...
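Under one concrete (and hypothetical) choice of encoding — binary strings listed in length-then-lexicographic order, with “even” meaning the string ends in a 0 — the bits of this representation can be computed directly:

```python
import itertools

def inputs():
    # All binary strings in increasing lexicographic order: "", "0", "1", "00", ...
    yield ""
    for n in itertools.count(1):
        for bits in itertools.product("01", repeat=n):
            yield "".join(bits)

def is_even(s):
    # Our encoding choice: a binary string is "even" iff it ends in 0.
    return s.endswith("0")

def problem_bits(decide, k):
    # The first k bits of the problem's infinite binary representation.
    gen = inputs()
    return "".join("1" if decide(next(gen)) else "0" for _ in range(k))

assert problem_bits(is_even, 8) == "01010101"
```

Any other yes/no problem gets an infinite bit string in the same way; only the `decide` function changes.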

Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us, we can represent the halting problem as an infinite table of bits. If the columns of the table are indexed by all programs $P_1, P_2, \dots$ (in lex order), and the rows by all inputs $x_1, x_2, \dots$ (in lex order), then the entry at position $(i,j)$ is a bit $b_{i,j}$, which is 1 if $P_j(x_i)$ halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.

Now we assume for contradiction’s sake that some program solves the halting problem, i.e. that every entry of the table is computable. Now we’ll construct the answers output by meta-$T$ by flipping each bit of the diagonal of the table. The point is that meta-$T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$T$. Then we argue that the entry of the table for $(\textup{meta-}T, \textup{meta-}T)$ contradicts its definition, and we’re done!

So these are two of the most high-profile uses of the method of diagonalization. It’s a great addition to your proof repertoire.

Until next time!

# Information Distance — A Primer

This post assumes familiarity with our primer on Kolmogorov complexity. We recommend the uninformed reader begin there. We will do our best to keep consistent notation across both posts.

## Kolmogorov Complexity as a Metric

Over the past fifty years mathematicians have been piling up more and more theorems about Kolmogorov complexity, and for good reason. One of the main interpretations of the Kolmogorov complexity function $K$ is that for a given string $x$, $K(x)$ is the best theoretical compression of $x$ under any compression scheme. So a negative result about $K$ can provide useful bounds on how good a real-world compressor can be. It turns out that these properties also turn $K$ into a useful tool for machine learning. The idea is summarized as follows:

Let $x,y$ be binary strings, and as usual let’s fix some universal programming language $L$ in which to write all of our programs. Let $p(x,y)$ be the shortest program which computes both $y$ when given $x$ as an input, and $x$ given $y$. We would imagine that if $x,y$ are unrelated, then the length of the program $|p(x,y)|$ would be roughly $K(x) + K(y)$, simply by running the shortest program to output $x$ using no inputs, followed by the same thing for $y$. As usual there will be some additive constant term independent of both $x$ and $y$. We denote this by $c$ or $O(1)$ interchangeably.

We would further imagine that if $x,y$ are related (that is, if there is some information about $x$ contained in $y$ or vice versa), then the program $p(x,y)$ would utilize that information and hence be shorter than $K(x) + K(y)$. It turns out that there is an even better way to characterize $p$, and with a few modifications we can turn the length of $p$ into something similar to a metric on the set of all strings.

This metric has some strikingly attractive features. We will see that it is “universal” with respect to a certain class of distance functions (which is unfortunately not the class of all metrics). In particular, for any of these functions $f$, the length $|p(x,y)|$ will be at worst a small amount larger than $f(x,y)$. In words, if $x$ and $y$ are similar according to any of these distance functions, then they will be similar according to $p$. Of course the devil is in the details, but this is the right idea to have in mind while we wade through the computations.

## An Aside on Metrics, and Properties of Kolmogorov Complexity

In recent posts on this blog we’ve covered a number of important examples of metrics and investigated how a metric creates structure in a space. But as powerful and rare as fruitful metrics are, we have barely scratched the surface of the vast amount of literature on the subject.

As usual with our computations in Kolmogorov complexity, all of our equalities will be true up to some kind of additive sloppiness. Most of the time it will be an additive constant $O(1)$ which is independent of anything else in the equation. We will usually omit the constant with that implicit understanding, and instead we will specify the times when it is an exact equality (or when the additive sloppiness is something other than a constant).

And so, unavoidably, the “metric” we define won’t be a true metric. It will only satisfy the metric properties (positive definite, symmetric, triangle inequality) up to a non-constant additive sloppiness. This will be part of the main theorem of this post.

Before we can reach the heart of the matter (and as a nice warm-up), we need to establish a few more properties of $K$. Recall that by $K(x|y)$ we mean the shortest program which computes $x$ when provided $y$ as an auxiliary input. We call this the conditional complexity of $x$ given $y$. Further, recall that $K(x,y)$ is the length of the shortest program which outputs both $x$ and $y$, and a way to distinguish between the two (if everything is in binary, the distinguishing part is nontrivial; should the reader be interested, this sort of conversation is made for comment threads). Finally, the comma notation works for auxiliary inputs as well: $K(x|y,z)$ is the length of the shortest program outputting $x$ when given $y,z$ and a way to distinguish them as input.

For example, the conditional Kolmogorov complexity $K(1^n | n) = c$ is constant: knowing the length $n$ provides all but a constant amount of information about the string $1^n$. On the other hand, if $x,y$ are random strings (their bits are generated independently and uniformly at random), then $K(y|x) = K(y)$; there is no information about $y$ contained in $x$.

Definition: Let $x$ be a (binary) string. We denote by $x^*$ the shortest program which computes $x$. That is, $K(x) = |x^*|$. If there are two shortest programs which compute $x$, then $x^*$ refers to the first in the standard enumeration of all programs.

As a quick aside, the “standard enumeration” is simple: treat a binary string as if it were a natural number written in base 2, and enumerate all strings in increasing order of their corresponding number. The choice of enumeration is irrelevant, though; all that matters is that it is consistent throughout our theory.

Proposition: Kolmogorov complexity has the following properties up to additive constants:

1. $K(x|y^*) = K(x|y,K(y))$
2. $K(x|y^*) \leq K(x|y)$, and $K(x|y) \leq K(x|y^*) + O(\log(K(y)))$
3. $K(x,y) = K(x) + K(y|x^*)$

The first item simply states that giving $y^*$ as input to a program is the same as giving $y$ and $K(y)$. This is not hard to prove. If $p$ is the shortest program computing $x$ from $y,K(y)$, then we can modify it slightly to work with $y^*$ instead. Just add to the beginning of $p$ the following instructions:

    Compute K(y) as the length of the input y*
    Simulate y* and record its output y

Since $y^*$ is a finite string and represents a terminating program, these two steps produce the values needed to run $p$. Moreover, the program description is constant in length, independent of $y^*$.

On the other hand, if $q$ is a program computing $x$ from $y^*$, we are tasked with finding $y^*$ given $y, K(y)$. The argument uses a standard but slightly more complicated technique in theoretical computer science called dovetailing. In particular, since we know the length of $y^*$, and there are only finitely many programs of the same length, we can get a list $p_1, p_2, \dots, p_n$ of all programs of length $K(y)$. We then interleave the simulation of each of these programs; that is, we run the first step of all of the $p_i$, then the second, third, and so on. Once we find a program which halts and outputs $y$ (and we are guaranteed that one will do so) we can stop. In pseudocode, this is just the subroutine:

    L = [all programs of length K(y) in lexicographic order]
    i = 1
    while True:
        for program in L:
            run step i of program
            if program terminates and outputs y:
                return program
        i = i + 1

The fact that this algorithm will terminate proves the claim.
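To see dovetailing in action, here is a runnable sketch where “programs” are modeled as Python generators, one step per `next` call. The candidate programs are invented for illustration:

```python
def dovetail(candidates, target):
    # Interleave the simulation of all candidates, one step each per
    # round, until some candidate halts with the target output.
    running = [(i, make()) for i, make in enumerate(candidates)]
    while running:
        survivors = []
        for i, gen in running:
            try:
                next(gen)                    # run one step of program i
                survivors.append((i, gen))
            except StopIteration as halt:
                if halt.value == target:
                    return i                 # program i halts and outputs target
        running = survivors

def loops_forever():
    while True:
        yield

def halts_with_7():
    yield
    return 7

def slowly_halts_with_42():
    for _ in range(1000):
        yield
    return 42

# Even though one candidate never halts, dovetailing finds the right one.
assert dovetail([loops_forever, halts_with_7, slowly_halts_with_42], 42) == 2
```

Because each round runs only one step of each program, the non-halting candidate never blocks the search, which is the whole reason for interleaving.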

The second item in the proposition has a similar proof, and we leave it as an exercise to the reader. (Hint: the logarithm in the second part of the statement comes from the hard-coding of a binary representation of the number $K(y)$)

The third item, that $K(x,y) = K(x) + K(y|x^*)$, has a much more difficult proof, and its consequences are far-reaching. We will use it often in our computations. The intrepid reader will see Theorem 3.9.1 in the text of Li & Vitanyi for a complete proof, but luckily one half of the proof is trivial. That is, the proof that $K(x,y) \leq K(x) + K(y|x^*) + c$ is similar to the argument we used above. Let $p,q$ be the shortest programs computing $x$, and $y$ given $x^*$, respectively. We can combine them into a program computing both $x$ and $y$. First run $p$ to compute $x$, and record the length of $p$. As we saw, these two pieces of data are equivalent to $x^*$, and so we can compute $y$ using $q$ as above, adding at most a constant amount of program text to do so.

This property is so important it has a name.

Lemma: (Symmetry of information)

$\displaystyle K(x,y) = K(x) + K(y|x^*) = K(y) + K(x|y^*)$

This is true (and named appropriately) since there is symmetry in the quantity $K(x,y) = K(y,x)$. Note in particular that this doesn’t hold without the star: $K(x,y) = K(x) + K(y|x) + O(\log(K(x)))$. Those readers who completed the exercise above will know where the logarithm comes from.

## The Almost-Triangle Inequality

The first application of the symmetry of information is (surprisingly) a variant of the triangle inequality. Specifically, the function $f(x,y) = K(x|y^*)$ satisfies the metric inequalities up to an additive constant sloppiness.

$\displaystyle K(x|y^*) \leq K(x|z^*) + K(z|y^*) + c$

where $c$ does not depend on $x, y, z$. To prove this, see that

$\displaystyle K(x,z | y^*) = K(x,y,z) - K(y) \leq K(z) + K(x|z^*) + K(y|z^*) - K(y)$

The first equality is by the symmetry of information $K(x,y,z) = K(y) + K(x,z|y^*)$, and the second follows from the fact that $K(x,y,z) \leq K(z) + K(x|z^*) + K(y|z^*)$. This is the same argument we used to prove the $\leq$ case of the symmetry of information lemma.

Now we can rearrange the terms and use the symmetry of information twice, $K(z) + K(y|z^*) = K(y,z)$ and $K(y,z) - K(y) = K(z|y^*)$, to reach the final result.
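Written out in full, the rearrangement is the chain

$\displaystyle \begin{aligned} K(x|y^*) &\leq K(x,z|y^*) + O(1) \\ &\leq K(z) + K(x|z^*) + K(y|z^*) - K(y) + O(1) \\ &= K(x|z^*) + \left( K(y,z) - K(y) \right) + O(1) \\ &= K(x|z^*) + K(z|y^*) + O(1) \end{aligned}$

where the first step just extracts $x$ from a program computing the pair $(x,z)$, at constant extra cost.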

This is interesting because it’s our first indication that Kolmogorov complexity can play a role in a metric. But there are some issues: $K(x|y)$ is in general not symmetric. We need to come up with a symmetric quantity to use instead. There are quite a few details to this process (see this paper if you want to know them all), but the result is quite nice.

Theorem: Let $E(x,y)$ be the length of the shortest program which computes $x$ given $y$ as input and $y$ given $x$. Then

$\displaystyle E(x,y) = \max (K(x|y), K(y|x)) + O(\log(M))$

where $M = \max(K(x|y), K(y|x))$.

That is, our intuitive idea of what the “information distance” from $x$ to $y$ should be coincides up to an additive logarithmic factor with the maximum of the conditional Kolmogorov complexities. If two strings are “close” with respect to $E$, then there is a lot of mutual information between them. In the same paper listed above, the researchers (Bennett et al.) prove that $E$ is a “metric” (up to additive constants) and so this gives a reasonable estimate for the true information distance in terms of conditional Kolmogorov complexities.

However, $E$ is not the final metric used in applications, but just an inspiration for other functions. This is where the story gets slightly more complicated.

## Normalized Information Distance(s)

At this point we realize that the information distance $E$ defined above is not as good as we’d like it to be. One of its major deficiencies is that it does not compute relative distances very well. That is, it doesn’t handle strings of varying size as well as it should.

For example, take $x$ to be a random string of length $n$ for arbitrary $n$. The quantity $E(x, \varepsilon)$, where $\varepsilon$ is the empty string, is just $K(x) + c$ (if the input is empty, compute $x$; otherwise output the empty string). But in a sense there is no information about $\varepsilon$ in any string. In other words, $\varepsilon$ is maximally dissimilar to all nonempty strings. But according to $E$, the empty string is variably dissimilar to other strings: it’s “less similar” to strings with higher Kolmogorov complexity. This is counter-intuitive, and hence undesirable.

Unfortunately the literature is littered with alternative distance functions, and the researchers involved spend little effort relating them to each other (this is part of the business of defining things “up to sloppiness”). We are about to define the principal example we will be concerned with, and we will discuss its relationship with its computationally-friendly cousins at the end.

The link between all of these examples is normalization. That is (again up to minor additive sloppiness we’ll make clear shortly) the distance functions take values in $[0,1]$, and a value of 0 means the strings are maximally similar, and a value of 1 implies maximal dissimilarity.

Definition: Let $\Sigma = \left \{ 0,1 \right \}^*$ be the set of binary strings. A normalized distance $f$ is a function $\Sigma \times \Sigma \to [0,1]$ which is symmetric and satisfies the following density condition for all $x \in \Sigma$ and all $0 \leq e \leq 1$:

$\displaystyle |\left \{ y : f(x,y) \leq e \right \}| < 2^{eK(x) + 1}$

That is, there is a restriction on the number of strings that are close to $x$. There is a sensible reason for such a convoluted condition: this is the Kolmogorov-complexity analogue of the Kraft inequality. One of the picky details we’ve blatantly left out in our discussion of Kolmogorov complexity is that the programs we’re allowed to write must collectively form a prefix code. That is, no program is a proper prefix of another program. If the implications of this are unclear (or confusing), the reader may safely ignore it. It is purely a tool for theoretical analysis, and the full details are again in the text of Li & Vitanyi. We will come back to discuss other issues with this density condition later (in the meantime, think about why it’s potentially dubious), but now let us define our similarity metric.

Definition: The normalized information distance $d(x,y)$ is defined by

$\displaystyle d(x,y) = \frac{\max(K(x|y^*), K(y|x^*))}{\max(K(x), K(y))}$

The reason we switched from $K(x|y)$ to $K(x|y^*)$ will become apparent in our calculations (we will make heavy use of the symmetry of information, which does not hold by a constant factor for $K(x|y)$).

Quickly note that this alleviates our empty string problem we had with the non-normalized metric. $d(x,\varepsilon) = K(x)/K(x) = 1$, so they are maximally dissimilar regardless of what $x$ is.

Theorem 1: (Metric Axioms) $d(x,y)$ satisfies the metric axioms up to additive $O(1/M)$ precision, where $M$ is the maximum of the Kolmogorov complexities of the strings involved in the (in)equality.

Theorem 2: (Universality) $d(x,y)$ is universal with respect to the class of computable normalized distance functions. That is, if $f$ is a normalized distance, then for all $x,y$ we have the following inequality:

$d(x,y) \leq f(x,y) + O(1/M)$

where this time $M$ is the minimum of the Kolmogorov complexities of the strings involved.

We should note that in fact theorem 2 holds for even more general normalized distance functions, the so-called “upper semi-computable” functions. Skipping the rigorous definition, this just means that one can recursively approximate the true value by giving a consistently improved upper bound which converges to the actual value. It is not hard to see that $K$ is an upper semi-computable function, although it is unknown whether $d$ is (and many believe it is not).

The proof of the first theorem is straightforward but notationally dense.

Proof of Theorem 1 (Metric Axioms): The value $d(x,x) = K(x|x^*)/K(x) = O(1/K(x))$, since $K(x|x^*) = K(x|x,K(x))$ is trivially constant, and $d(x,y) \geq 0$ since Kolmogorov complexity is non-negative. Moreover, $d(x,y)$ is exactly symmetric, so the proof boils down to verifying the triangle inequality holds.

Let $x,y,z$ be strings. We gave a proof above that $K(x|y^*) \leq K(x|z^*) + K(z|y^*) + O(1)$. We will modify this inequality to achieve our desired result, and there are two cases:

Case 1: $K(z) \leq \max(K(x), K(y))$. Take the maximum of each side of the two inequalities for $K(x|y^*), K(y|x^*)$ to get

$\displaystyle \max(K(x|y^*), K(y|x^*)) \leq \max(K(x|z^*) + K(z|y^*) , K(y|z^*) + K(z|x^*)) + O(1)$

We can further increase the right hand side by taking termwise maxima

$\displaystyle \max(K(x|y^*), K(y|x^*)) \leq \max(K(x|z^*), K(z|x^*)) + \max(K(y|z^*), K(z|y^*)) + O(1)$

Now divide through by $\max(K(x), K(y))$ to get

$\displaystyle \frac{\max(K(x|y^*), K(y|x^*))}{\max(K(x), K(y))} \leq \frac{\max(K(x|z^*), K(z|x^*))}{\max(K(x), K(y))} + \frac{\max(K(y|z^*), K(z|y^*))}{\max(K(x), K(y))} + O(1/M)$

Finally, since $K(z)$ is smaller than the max of $K(x), K(y)$, we can replace the  $K(y)$ in the denominator of the first term of the right hand side by $K(z)$. This will only possibly increase the fraction, and for the same reason we can replace $K(x)$ by $K(z)$ in the second term. This achieves the triangle inequality up to $O(1/M)$, as desired.

Case 2: $K(z) = \max(K(x), K(y), K(z))$. Without loss of generality we may also assume $K(x) \geq K(y)$, for the other possibility has an identical argument. Now we can boil the inequality down to something simpler. We already know the denominators have to all be $K(z)$ in the right hand side, and $K(x)$ in the left. Moreover, we claim $K(z|x^*) \geq K(x|z^*)$. This is by the symmetry of information:

$\displaystyle K(x,z) = K(x|z^*) + K(z) = K(z|x^*) + K(x) \leq K(z|x^*) + K(z)$

Subtracting $K(z)$ establishes the claim, and similarly we have $K(z|y^*) \geq K(y|z^*)$. So the triangle inequality reduces to

$\displaystyle \frac{K(x|y^*)}{K(x)} \leq \frac{K(z|x^*)}{K(z)} + \frac{K(z|y^*)}{K(z)} + O(1/K(z))$

Applying our original inequality again to get $K(x|y^*) \leq K(x|z^*) + K(z|y^*) + O(1)$, we may divide through by $K(x)$ and there are two additional cases.

$\displaystyle \frac{K(x|y^*)}{K(x)} \leq \frac{K(x|z^*) + K(z|y^*) + O(1)}{K(x)}$

If the right-hand side is less than or equal to 1, then adding a constant $c$ to the top and bottom of the fraction only increases the value of the fraction, and doesn’t violate the inequality. So we choose to add $K(z)-K(x)$ to the top and bottom of the right-hand side and again using the symmetry of information, we get exactly the required value.

If the right-hand side is greater than 1, then adding any constant to the top and bottom decreases the value of the fraction, but it still remains greater than 1. Since $K(x|y^*) \leq K(x)$ (a simple exercise), we see that the left-hand side is at most 1, and our same trick of adding $K(z) - K(x)$ works. $\square$

The proof of the universality theorem is considerably more elegant.

Proof of Theorem 2 (Universality): Let $f$ be any normalized distance function, and set $e = f(x,y)$. Suppose further that $K(x) \leq K(y)$.

Let us enumerate all strings $v$ such that $f(x,v) \leq e$. In particular, since $e = f(x,y)$, $y$ is included in this enumeration. By the density condition, the number of such strings is at most $2^{eK(x) + 1}$. The index of $y$ in this enumeration can be used as an effective description of $y$ when given $x$ as input. That is, there is a program which includes in its description the index of $y$ and outputs $y$ given $x$. Since the number of bits needed to describe the index of $y$ is at most $\log(2^{eK(x) + 1}) = eK(x) + 1$, we have

$\displaystyle K(y|x) \leq eK(x) + 1$

Again the symmetry of information lemma gives us $K(x|y^*) \leq K(y|x^*)$. And now

$\displaystyle d(x,y) = \frac{K(y|x^*)}{K(y)} \leq \frac{K(y|x) + O(1)}{K(y)} \leq \frac{eK(x) + O(1)}{K(y)}$

Since $K(x) \leq K(y)$, we can replace the denominator of the last expression with $K(x)$ (only increasing the fraction) to get $d(x,y) \leq e + O(1/K(x))$. But $e$ was just $f(x,y)$, so this completes the proof of this case.

In the case $K(y) < K(x)$, the proof is similar (enumerating the index of $x$ instead), and at the end we get

$\displaystyle d(x,y) = \frac{K(x|y^*)}{K(x)} \leq \frac{eK(y) + O(1)}{K(y)} = f(x,y) + O(1/K(y))$

The theorem is proved. $\square$

## Why Normalized Distance Functions?

The practical implications of the two theorems are immense. What we’re saying is that if we can represent some feature of string similarity by a normalized distance function, then that feature will be captured automatically by the normalized information distance $d$. The researchers who discovered normalized information distance (and proved its universality) argue that in fact upper semi-computable normalized distance functions encapsulate all real-world metrics we would ever care about! Of course, there is still the problem that Kolmogorov complexity is uncomputable, but we can certainly come up with reasonable approximations (we will see precisely this in our next post).
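As a taste of what such an approximation looks like (a crude, hypothetical stand-in, not the true $d$): replace $K$ with the output length of an off-the-shelf compressor, which yields the so-called normalized compression distance. A rough sketch using zlib:

```python
import hashlib
import zlib

def C(data: bytes) -> int:
    # Compressed length: a crude, computable stand-in for K.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance, in the spirit of d(x, y):
    # small when one string helps compress the other, near 1 otherwise.
    return (C(x + y) - min(C(x), C(y))) / max(C(x), C(y))

patterned = b"0110" * 250  # highly compressible
junk = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))  # looks random

# A string is far closer to itself than to unrelated junk.
assert ncd(patterned, patterned) < ncd(patterned, junk)
```

Applications swap in domain-appropriate compressors, which is the flavor of approximation alluded to above.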

And these same researchers have shown that approximations to $d$ do represent a good deal of universality in practice. They’ve applied the same idea to fields as varied as genome clustering, language clustering, and music clustering. We will of course investigate the applications for ourselves on this blog, but their results seem to apply to data mining in any field.

But still this raises the obvious question (which goes unaddressed in any research article this author has read): does every metric have a sensible interpretation (or modification) as a normalized distance function? That awkward density condition seems particularly suspect, and is at the core of this author’s argument that the answer is no.

Consider the following example. Let $f$ be a normalized distance function, and fix $e = 1$. The density condition says that for any $x$ we want, the number of strings which are within distance 1 of $x$ is bounded by $2^{K(x) + 1}$. In particular, this quantity is finite, so there can only be finitely many strings which are within distance 1 of $x$. But there are infinitely many strings, so this is a contradiction!

Even if we rule out this (arguably trivial) case of $e=1$, we still run into problems. Let $e = 1 - \varepsilon$ for any sufficiently small $\varepsilon > 0$. Then fix $x = 0$ (the string consisting of the single bit 0). The number of strings which are within distance $e$ of $x$ is bounded by $2^{eK(x) + 1} < 2^{K(x) + 1}$, which is again finite (and quite small, since $K(0)$ is about as small as it gets). In other words, there are only finitely many strings that are not maximally dissimilar to $0$. But one can easily come up with an infinite number of strings which share something in common with $0$: just use $0^n$ for any $n$ you please. It is ludicrous to say that every metric should call $0$ as dissimilar to $0^n$ as the empty string is to a random string of a thousand bits.

In general, this author doesn’t find it likely that one can take any arbitrary $f(x,y)$ which is both symmetric and has values in $[0,1]$ and modify it to satisfy the density condition. Indeed, this author has yet to see any example of a natural normalized similarity metric. There is one which is a modification of Hamming distance, but it is relatively awkward and involves the Kolmogorov complexity of the strings involved. If the reader has any ideas to the contrary, please share them in the comments.

So it appears that the class of normalized distance functions is not as large as we might wish, and in light of this the universality theorem is not as impressive. On the other hand, there is no denying the success of applying the normalized information distance to complex real-world problems. Something profound is going on, but from this author’s viewpoint more theoretical work is needed to establish why.

## Friendly Cousins of Normalized Information Distance

In practice we want to compute $K(x|y^*)$ in terms of quantities we can actually approximate. Due to the symmetry of information, we can rewrite the metric formula as

$\displaystyle d(x,y)=\frac{K(x,y) - \min(K(x), K(y))}{\max(K(x), K(y))}$

Indeed, since our main interpretation of $K(x)$ is as the size of the smallest “compressed version” of the string $x$, it would seem that we can approximate the function $K$ by using real-world compression algorithms. And for the $K(x,y)$ part, we recognize that (due to the need to specify a way to distinguish between the outputs $x,y$)

$K(x,y) \leq K(xy) + O(\log(\max(K(x), K(y))))$,

where $K(xy)$ is the Kolmogorov complexity of the concatenation of the two strings. So if we’re willing to forgive additive logarithmic sloppiness (technically, $O(\log(K(x))/K(x))$ sloppiness, which goes to zero asymptotically), we can approximate normalized information distance as

$\displaystyle d(x,y) = \frac{K(xy) - \min(K(x), K(y))}{\max(K(x), K(y))}$
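This approximation is exactly what makes the metric usable in practice. As a hedged sketch (the choice of zlib as the compressor, and all names here, are our own, not taken from any particular paper), the resulting "compression distance" looks like this in Python:

```python
import random
import zlib

def C(s: bytes) -> int:
    # Stand-in for K(s): the length of s compressed with zlib at level 9.
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # The approximation above, with a real compressor in place of K.
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = random.Random(0)
zeros = b"0" * 1000
nearly = b"0" * 999 + b"1"
noise = bytes(rng.randrange(256) for _ in range(1000))
# Highly similar strings should score near 0, unrelated ones near 1.
```

Since zlib is a crude stand-in for $K$ (it cannot exploit most kinds of structure), the values are only meaningful relative to each other: `ncd(zeros, nearly)` comes out far smaller than `ncd(zeros, noise)`.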

In the literature researchers will also simplify the metric by removing the “star” notation

$\displaystyle d(x,y) = \frac{\max(K(x|y), K(y|x))}{\max(K(x), K(y))}$

Unfortunately these two things aren’t equivalent. As we saw in our “basic properties” of $K(x|y)$,

$K(x|y) \leq K(x|y^*) + O(\log(K(y)))$

Indeed, it is not the case that $K(x|y) = K(x|y^*)$. An easy counterexample comes from comparing $K(K(x) | x)$ with $K(K(x) | x^*)$. We have already proven that the right-hand side is always constant, but the left-hand side cannot be. An exercise in Li & Vitanyi shows there is an infinite family of strings $x$ for which $K(K(x) | x) \geq \log(|x|)$.

And so these two metrics cannot be equal, but they are close. In fact, denoting the non-star version by $d_2$ and the regular version by $d_1$, we have $d_2(x,y) \leq d_1(x,y) + O(1)$. This changes the metric properties and the universality claim, because $O(1/K)$ precision is stronger than $O(1)$ precision. Indeed, the true constant is always less than 1 (e.g. when $K(y) > K(x)$ it is $K(y^*)/K(y)$), but this means the metric can potentially take values in the range $[0,2]$, which is edging further and further away from the notion of normalization we originally strove for.

Finally, the last example of a cousin metric is

$\displaystyle d_3(x,y) = \frac{K(x|y^*) + K(y|x^*)}{K(x,y)}$

We will leave it to the reader to verify this function again satisfies the metric inequalities (in the same way that the original normalized information distance does). On the other hand, it only satisfies universality up to a factor of 2. So while it still may give some nice results in practice (and it is easy to see how to approximate this), the first choice of normalized information distance was theoretically more precise.
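Since $d_3$ only involves $K(x|y^*)$, $K(y|x^*)$, and $K(x,y)$, the same compression trick applies here. The sketch below (our own, with zlib again standing in for $K$) uses the symmetry of information to replace $K(x|y^*)$ with $K(x,y) - K(y)$ and $K(y|x^*)$ with $K(x,y) - K(x)$, up to the usual additive fudge:

```python
import random
import zlib

def C(s: bytes) -> int:
    # Stand-in for K(s): compressed length under zlib.
    return len(zlib.compress(s, 9))

def d3_approx(x: bytes, y: bytes) -> float:
    # (K(x|y*) + K(y|x*)) / K(x,y), via symmetry of information,
    # with C(xy) standing in for K(x,y).
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (2 * cxy - cx - cy) / cxy

rng = random.Random(0)
zeros = b"0" * 1000
noise = bytes(rng.randrange(256) for _ in range(1000))
# d3_approx(zeros, zeros) comes out close to 0,
# d3_approx(zeros, noise) close to 1.
```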

## Applications

We’ve just waded through a veritable bog of theory, but we’ve seen some big surprises along the way. Next time we’ll put these theoretical claims to the test by seeing how well we can cluster and classify data using the normalized information distance (and introducing as little domain knowledge as possible). Until then!

# Low Complexity Art

## The Art of Omission

Whether in painting, fiction, film, landscape architecture, or paper folding, art is often said to be the art of omission. Simplicity breeds elegance, and engages the reader at a deep, aesthetic level.

A prime example is the famous six-word story written by Ernest Hemingway:

For sale: baby shoes, never worn.

He called it his best work, and rightfully so. To say so much with so few words is a monumental feat that authors have been trying to recreate since Hemingway’s day. Unsurprisingly, some mathematicians (for whom the art of proof had better not omit anything!) want to apply their principles to describe elegance.

## Computation and Complexity

This study of artistic elegance will be from a computational perspective, and it will be based loosely on the paper of the same name. While we include the main content of the paper in a condensed form, we will deviate in two important ways: we alter an axiom with justification, and we provide a working implementation for the reader’s use. We do not require extensive working knowledge of theoretical computation; the informed reader should simply be aware that everything here is, in principle, performed on a Turing machine, though the details are unimportant.

So let us begin with the computational characterization of simplicity. Unfortunately, due to our own lack of knowledge of the subject, we will overlook the underlying details and take them for granted. [At some point in the future, we will provide a primer on Kolmogorov complexity. We just ordered a wonderful book on it, and can’t wait to dig into it!]

Here we recognize that all digital images are strings of bits, and so when we speak of the complexity of a string, in addition to meaning strings in general, we specifically mean the complexity of an image.

Definition: The Kolmogorov complexity of a string is the length of the shortest program which generates it.

In order to specify “length” appropriately, we must fix some universal description language, so that all programs have the same frame of reference. Any Turing-complete programming language will do, so let us choose Python for the following examples. More specifically, there exists a universal Turing machine $U$, for which any program on any machine may be translated (compiled) into an equivalent program for $U$ by a program of fixed size. Hence, once a fixed machine is specified (in this case Python), Kolmogorov complexity is an objective measure over the class of all outputs: a different choice of language changes the complexity of any given string by at most an additive constant.

Here is a simple example illustrating Kolmogorov complexity: consider the string of one hundred zeros. This string is obviously not very “complex,” in the sense that one could write a very short program to generate it. In Python:

print "0" * 100

One can imagine that a compiler which optimizes for brevity would output rather short assembly code as well, with a single print instruction and a conditional branch, and some constants. On the other hand, we want to call a string like

“00111010010000101101001110101000111101”

complex, because it follows no apparent pattern. Indeed, in Python the shortest program to output this string is just to print the string itself:

print "00111010010000101101001110101000111101"

And so we see that this random string of ones and zeros has a higher Kolmogorov complexity than the string of all zeros. In other words, the boring string of all zeros is “simple,” while the other is “complicated.”
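We can make this comparison concrete by measuring the candidate programs themselves: the length of any program that outputs a string is an upper bound on the string's Kolmogorov complexity (relative to our fixed choice of Python). A minimal sketch:

```python
# Upper-bound complexity by program length: the patterned string has a
# generating program far shorter than itself, while the patternless one
# is (as far as we can tell) printed verbatim, with no savings at all.
zeros = "0" * 100
noise = "00111010010000101101001110101000111101"

prog_zeros = 'print("0" * 100)'
prog_noise = 'print("' + noise + '")'

assert len(prog_zeros) < len(zeros)   # 16 characters vs 100
assert len(prog_noise) > len(noise)   # quoting adds overhead, saves nothing
```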

Kolmogorov himself proved that there is no algorithm to compute Kolmogorov complexity (the number itself) for any input. In other words, the problem of determining exact Kolmogorov complexity is undecidable (by reduction from the halting problem; see the Turing machines primer). So we will not try in vain to actually get a number for the Kolmogorov complexity of arbitrary programs, although it is easy to count the lengths of these provably short examples, and instead we speak of complexity in terms of bounds and relativity.

## Kolmogorov Meets Picasso

To apply this to art, we want to ask, “for a given picture, what is the length of the shortest program that outputs it?” This will tell us whether a picture is simple or complex. Unfortunately for us, most pictures are neither generated by programs, nor do they have obvious programmatic representations. More feasibly, we can ask, “can we come up with pictures which have low Kolmogorov complexity and are also beautiful?” This is truly a tough task.

To do so, we must first invent an encoding for pictures, and write a program to interpret the encoding. That’s the easy part. Then, the true test, we must paint a beautiful picture.

We don’t pretend to be capable of such artistry. However, there are some who have created an encoding based on circles and drawn very nice pictures with it. Here we will present those pictures as motivation, and then develop a very similar encoding method, providing the code and examples for the reader to play with.

Jürgen Schmidhuber, a long-time proponent of low-complexity art, spent a very long time (on the order of thousands of sketches) creating drawings using his circle encoding method, and here are some of his results:

Marvelous. Our creations will be much uglier. But we admit, one must start somewhere, and it might as well be where we feel most comfortable: mathematics and programming.

## Magnificence Meets Method

There are many possible encodings for drawings. We will choose one which is fairly easy to implement, and based on intersecting circles. The strokes in a drawing are arcs of these circles. We call the circles used to generate drawings legal circles, while the arcs are legal arcs. Here is an axiomatic specification of how to generate legal circles:

1. Arbitrarily define a circle $C$ with radius 1 as legal. All other circles are generated with respect to this circle. Define a second legal circle whose center is on $C$, and also has radius 1.
2. Wherever two legal circles of equal radius intersect, a third circle of equal radius is centered at the point of intersection.
3. Every legal circle of radius $r$ has at its center another legal circle of radius $r/2$.

A legal arc is then simply any arc of a legal circle, and a legal drawing is any list of legal arcs, where each arc has a width corresponding to some fixed set of values. Now we generate all circles which intersect the interior of the base circle $C$, and sort them first by radius, then by $x$ coordinate, then by $y$ coordinate. Given this order on the circles, we may number them from 1 to $n$, and specify a particular circle by its index in the list. In this way, we have defined a coordinate space of arcs, with points of the form (center, thickness, arc-start, arc-end), where the arc-start and arc-end coordinates are measured in radians.

We describe the programmatic construction of these circles later. For now, here is the generated picture of all circles which intersect the unit circle up to radius $2^{-5}$:

The legal circles

In addition, we provide an animation showing the different layers:

And another animation displaying the list of circles sorted by index in increasing order. As an animated GIF this file is rather large (5MB), so we link to it separately.

As we construct smaller and smaller circles, the interior of the base circle is covered up by a larger proportion of legally usable area. By using obscenely small circles, we may theoretically construct any drawing. On the other hand, what we care about is how much information is needed to do so.

Because of our nice well ordering on circles, those circles with very small radii will have huge indices! Indeed, there are about four circles of radius $2^{-i-1}$ for each circle of radius $2^{-i}$ in any fixed area. Then, we can measure the complexity of a drawing by how many characters its list of legal arcs requires. Clearly, a rendition of Starry Night would have a large number of high-indexed circles, and hence have high Kolmogorov complexity. (On second thought, I wonder how hard it would be to get a rough sketch of a Starry-Night-esque picture in this circle encoding…it might not be all that complex).

Note that Schmidhuber defines things slightly differently. In particular, he requires that the endpoints of a legal arc must be the intersection points of two other legal arcs, making the arc-start and arc-end coordinates integers instead of radian measures. We respectfully disagree with this axiom, and we explain why here:

Which of the two arcs is more “complex”?

Of the two arcs in the picture to the left, which would you say is more complex, the larger or the smaller? We observe that two arcs of the same circle, regardless of how long or short they are, should not be significantly different in complexity.

Schmidhuber, on the other hand, implicitly claims that arcs which begin or terminate at non-standard locations (locations which only correspond to the intersections of sufficiently small circles) should be deemed more complex. But this can be a difference as small as $\pi/100$, and it drastically alters the complexity. We consider this specification unrealistic, at least to the extent to which human beings consider complexity in art. So we stick to radians.

Indeed, our model does alter the complexity for some radian measures, simply because finely specifying fractions requires more bits than integral values. But the change in complexity is hardly as drastic.

In addition, Schmidhuber allows for region shading between legal arcs. Since we did not find an easy way to implement this in Mathematica, we skipped it as extraneous.

## Such Stuff as Programs are Made of

We implemented this circle encoding in Mathematica. The reader is encouraged to download and experiment with the full notebook, available from this blog’s Github page. We will explain the important bits here.

First, we have a function to compute all the circles whose centers lie on a given circle:

borderCircleCenters[{x_, y_}, r_] :=
Table[{x + r Cos[i 2 Pi/6], y + r Sin[i 2 Pi/6]}, {i, 0, 5}];

We arbitrarily picked the first legal circle to be the unit circle, defined with center (0,0), while the second has center (1,0). This made generating all legal circles a relatively simple search task. In addition, we recognize that any other choice of second circle yields simply a rotation of this configuration, so one may rotate the final drawing to accommodate a different initialization step.

Second, we have the brute-force search of all circles. We loop through all circles in a list, generating the six border circles appropriately, and then filtering out the ones we need, repeating until we have all the circles which intersect the interior of the unit circle. Note our inefficiencies: we search out as far as radius 2 to find small circles which do not necessarily intersect the unit circle, and we calculate the border circles of each circle many times. On the other hand, finding all circles as small as radius $2^{-5}$ takes about a minute on an Intel Atom processor, which is not so slow as to need excessive tuning for a prototype’s sake.

getAllCenters[r_] := Module[{centers, borderCenters, searchR,
ord, rt},
ord[{a_, b_}, {c_, d_}] := If[a != c, a < c, b < d]; (* by x, then y *)
centers = {{0, 0}};

rt = Power[r, 1/2];
While[Norm[centers[[-1]]] <= Min[2, 1 + rt],
borderCenters = Map[borderCircleCenters[#, r] &, centers];
centers = centers \[Union] Flatten[borderCenters, 1]];

Sort[Select[centers, Norm[#] < 1 + r &], ord]
];

Finally, we have a function to extract from the resulting list of all centers the center and radius of a given index, and a function to convert a coordinate to its graphical representation:

(* extracts a pair {center, radius} given the
index of the circle; the name getCircle is
ours -- see the notebook for the original *)
getCircle[index_] := Module[{row, length, counter},
row = 1;
length = Length[layeredCenters[[row]]];
counter = index;

While[counter > length,
counter -= length;
row++;
length = Length[layeredCenters[[row]]];
];

{layeredCenters[[row, counter]], 1/2^(row - 1)}
];

drawArc[{index_, thickness_, arcStart_, arcEnd_}] :=
Module[{center, radius},
{center, radius} = getCircle[index];
Graphics[{Thickness[thickness],
Circle[center, radius, {arcStart, arcEnd}]},
ImagePadding -> 5, PlotRange -> {{-1, 1}, {-1, 1}},
ImageSize -> {400, 400}]
];

And a front-end style function, which takes a list of coordinates and draws the resulting picture:

paint[coordinates_] := Show[Map[drawArc, coordinates]];

Any omitted details (at least one global variable name) are clarified in the notebook.

Now, with our paintbrush in hand, we unveil our very first low-complexity piece of art. Behold! Surprised Mr. Moustache Witnessing a Collapsing Soufflé:

Surprised Mr. Moustache, © Jeremy Kun, 2011

Its coordinates are:

{{7, 0.005, 0, 2 Pi}, {197, 0.002, 0, 2 Pi},
{299, 0.002, 0, 2 Pi}, {783, 0.002, 0, 2 Pi},
{2140, 0.001, 0, 2 Pi}, {3592, 0.001, 0, 2 Pi},
{22, 0.004, 8 Pi/6, 10 Pi/6}, {29, 0.004, 4 Pi/3, 5 Pi/3},
{21, 0.004, Pi/3, 2 Pi/3}, {28, 0.004, Pi/3, 2 Pi/3}}

Okay, so it’s lame, and took all of ten minutes to create (guess-and-check on the indices is quick, thanks to Mathematica’s interpreter). But it has low Kolmogorov complexity! And that’s got to count for something, right?

Even if you disagree with our obviously inspired artistic genius, the Mathematica framework for creating such drawings is free and available for anyone to play with. So please, should you have any artistic talent at all (and access to Mathematica), we would love to see your low-complexity art! If we somehow come across three days of being locked in a room with access to nothing but a computer and a picture of Starry Night, we might attempt to recreate a sketch of it for this blog. But until then, we will explore other avenues.

Happy sketching!

Addendum: Note that the outstanding problem here is how to algorithmically take a given picture (or specification of what one wants to draw), and translate it into this system of coordinates. As of now, no such algorithm is known, and hence we call the process of making a drawing art. We may attempt to find such a method in the future, but it is likely hard, and if we produced an algorithm even a quarter as good as we might hope, we would likely publish a paper first, and blog about it second.

# Turing Machines – A Primer

We assume the reader is familiar with the concepts of determinism and finite automata, or has read the corresponding primer on this blog.

## The Mother of All Computers

Last time we saw some models for computation, and saw in turn how limited they were. Now, we open Pandora’s hard drive:

Definition: A Turing machine is a tuple $(S, \Gamma, \Sigma, s_0, A, R, \tau)$, where

• $S$ is a set of states,
• $\Gamma$ is a set of tape symbols, including a special blank symbol $b$,
• $\Sigma \subset \Gamma$ is a set of input symbols, not including $b$,
• $s_0$ is the initial state,
• $A \subset S$ is a set of accepting states,
• $R \subset S$ is a set of rejecting states,
• $\tau: (S - (A \cup R)) \times \Gamma \to S \times \Gamma \times \left \{ L, R \right \}$ is a partial function called the transition function, where $L, R$ correspond to “left shift” and “right shift,” respectively.

There are a few extra components we must address to clearly see how a Turing machine operates. First, the Turing machine has a tape of cells, infinite in length, upon which the machine may read and write letters from $\Gamma$. The process of reading a letter, analogous to our encounter with pushdown automata, is encapsulated in the $\Gamma$ component of the domain of $\tau$. In other words, this machine no longer is “fed” input in sequence. Rather, input is initially written to the tape, and the Turing machine receives this input by reading the tape. The rest of the tape (the complement of the finitely many cells containing input) is filled with $b$. Similarly, the process of writing to the tape is encapsulated in the $\Gamma$ component of the codomain of $\tau$.

The “shifting” part of $\tau$ requires another explanation. First, we restrict the Turing machine to being able to see only one cell of the tape at a time. In order to better visualize this, we invent a read-write head for the machine, which can by construction only process one cell at a time. Hence, a single step of computation goes: read a symbol from the cell currently under the read-write head, transition from one state to another, write a symbol to the same cell, then shift the read-write head one cell to the left or right.

Finally, the transition function is not defined on accepting or rejecting states. Once the machine enters one such state, it halts and outputs its respective determination.

Now, we could provide a detailed example of a Turing machine, with every aspect of the above definition accounted for. However, that is an unnecessarily bloated endeavor, and we leave such obsequiousness to Wikipedia, instead focusing on the bigger picture at hand. We gratefully take the liberty to stand on the lemma-encrusted shoulders of giants, and simply describe algorithms that are provably encodable on a Turing machine. The nature of permissible algorithms will become clearer as we give more examples.
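To make the machinery concrete without the bloat, here is a hypothetical minimal simulator (in Python; the example machine and all names are our own, not from the definition above): a sparse tape of symbols with blank $b$, and a transition function on (state, symbol) pairs, exactly as in the tuple.

```python
def run_tm(transitions, accept, reject, tape_input, start="s0", blank="b",
           max_steps=10_000):
    """Simulate a one-tape Turing machine. `transitions` maps
    (state, symbol) -> (new state, symbol to write, 'L' or 'R')."""
    tape = dict(enumerate(tape_input))  # sparse tape; unset cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state in accept:
            return True   # halt and accept
        if state in reject:
            return False  # halt and reject
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    raise RuntimeError("no accept/reject state reached (possibly looping)")

# Example machine: accept inputs consisting of an even number of 1s,
# scanning right while toggling between two parity states.
evens = {
    ("s0", "1"): ("s1", "1", "R"),
    ("s1", "1"): ("s0", "1", "R"),
    ("s0", "b"): ("acc", "b", "R"),
    ("s1", "b"): ("rej", "b", "R"),
}
```

Running `run_tm(evens, {"acc"}, {"rej"}, "1111")` accepts, while `"111"` rejects; the empty input is accepted as well (zero is even).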

## The Halting Problem

We now find ourselves capable of performing a very important new operation: infinitely looping. Specifically, it is not hard to design a Turing machine which never enters an accepting or rejecting state. Simply have one non-accept/reject state, $s$, and if we are in state $s$, shift left and write a 1. Despite having a finite input, this operation will never cease, nor will it ever be in the same configuration twice. This was never possible with a DFA, NFA, or PDA, because computation always ended with the last input symbol!

We require some new terminology to deal with this:

Definition: If a Turing machine halts on a given input, either accepting or rejecting, then it decides the input. We call an acceptance problem decidable if there exists some Turing machine that halts on every input for that problem. If no such Turing machine exists, we call the problem undecidable over the class of Turing machines.

In particular, we may describe our algorithms as vaguely as we wish, as long as it is clear that each step is provably decidable. Further, we may now write algorithms which loop over some decidable condition:

while the number of '1's on the tape is even:
move the head to a blank cell
write 'x' to the tape

accept

Notice that the above algorithm halts if and only if the tape begins with an odd number of ‘1’s written to it, and it never rejects.
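One way to convince yourself of this is a quick step-bounded simulation (a hypothetical Python sketch of ours, not part of the original algorithm): writing ‘x’ to a blank cell never changes the number of ‘1’s, so the loop condition is invariant.

```python
def simulate(tape, max_steps=100):
    # Directly mirror the loop: while the number of '1's is even,
    # move to a blank cell and write 'x'; otherwise accept.
    cells = list(tape)
    for _ in range(max_steps):
        if cells.count("1") % 2 != 0:
            return "accept"
        cells.append("x")  # the count of '1's never changes here
    return "gave up"       # evidently looping forever
```

Here `simulate(["1", "1", "1"])` accepts immediately, while `simulate(["1", "1"])` hits the step bound, standing in for an infinite loop.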

Now we are threatened with a very dangerous question: how can we know a Turing machine will halt, accepting or rejecting appropriately? Rather than tackle this hard question, we will use it to our advantage to prove some amazing things. But first, we need to build up more machinery.

A Turing machine may additionally simulate storage: it may box off arbitrarily large portions of the tape to contain data we wish to save, including bounded numbers, characters (or numerical representations of characters), and larger compound data structures.

Finally, and this requires a small leap of faith, we may encode within a Turing machine descriptions of other Turing machines, and then process them. Indeed, we must accept that these descriptions are finite, for any Turing machine with infinite description would be effectively useless. Then, we may develop some fixed process for encoding a Turing machine as a string of 1’s and 0’s (say, a collapsed table of its state transitions). This is a function from the set of Turing machines to the set of descriptions, and we denote the encoding of $T$ as $[T]$.

Before actually using Turing machines as inputs to other Turing machines, we glean some important information about encodings. Since the set of finite strings (encodings) over any fixed alphabet is countable, we conclude that there are only countably many possible Turing machines. However, the set of subsets (possibly infinite) of the same fixed alphabet is uncountably large. Since every Turing machine can only decide one problem, there must exist uncountably many problems which are undecidable by the class of Turing machines! Now, if we really wish, we may encode Turing machines by a single natural number, with respect to a fixed bijection with $\mathbb{N}$. For a refresher on countability and uncountability, see our primer on the subject.

Since we may encode the logic of one Turing machine, say $T$, within another, say $U$, we may use the tape and head of $U$ to simulate $T$ on a given input! We leave it as an exercise to the reader to figure out how to manage the tape when it must contain an encoding of $T$ and still simulate the tape of $T$. We call $U$ a universal Turing machine, or UTM. Now we see that Turing machines can reason about other Turing machines. Brilliant!

But now that we’ve established the existence of undecidable problems, we are given the task of finding one. We do so by construction, and arrive at the famous halting problem.

We denote an encoding of a Turing machine $T$ and an input $w$ to $T$ together as a pair $[T,w]$. Then, we construct the set of halting machine-input pairs:

$H = \left \{ [T,w] | T \textup{ is a Turing machine, } w \textup{ is an input to } T, \textup{ and } T \textup{ halts on } w \right \}$

We claim that this problem is undecidable, and prove it by contradiction. Proof. Suppose $U$ is a Turing machine which decides acceptance in $H$. Construct another Turing machine $V$ as follows.

On input [T] (T is a Turing machine):
run U on [T,T]
if U rejects, accept
if U accepts, loop infinitely

Before the crux of the proof, let us recall that $U$ simply determines whether $T$ halts on an input. Then, when we run $V$ on $[T]$, we have the sub-computation of deciding whether $T$ halts when run on its own description. In this case, $V$ accepts when $T$ loops infinitely when run on itself, and $V$ loops infinitely otherwise.

Now (the magic!) run $V$ on $[V]$. If $V$ accepts, that means $V$, when run on itself, does not halt (i.e. $U$ rejects $[V,V]$), a contradiction. On the other hand, if $V$ loops infinitely, then $U$ rejects $[V,V]$, implying $V$ accepts, a contradiction.

Thus, we have proven that $V$ both halts and does not halt when run on itself! This glaring contradiction implies that $V$ cannot exist. But we built $V$ up from $U$ without logical error, so we conclude that $U$ cannot exist, and the theorem is proved.

## Wrapping Up

The theory of computing goes much further than the halting problem. Indeed, most undecidable problems are proven so by reducing them to the halting problem (if one can decide problem $X$ then one can decide the halting problem, a contradiction). But beyond decidability, there is a large field of study in computational efficiency, in which all studied algorithms are run on a Turing machine. Further, studies of complexity and alternative computational models abound, including a persistent problem of classifying “how hard” problems are to compute. The interested reader should Google “P vs. NP” for more information. Unfortunately, an adequate description of the various time classes and problems therein is beyond the scope of this blog. All we require is a working knowledge of the terminology used in speaking of Turing machines, and an idea of what kinds of algorithms can be implemented on one.

That’s all for now!