Methods of Proof — Diagonalization

A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four”: direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever-growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was showcased. There has even been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and there are many more.

So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.

Diagonalization

Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.

The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $ n$ to $ 2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.

Theorem: There is no bijection from the natural numbers $ \mathbb{N}$ to the real numbers $ \mathbb{R}$.

Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection $ f: \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $ k$ and I will spit out $ f(k)$, with the property that different $ k$ give different $ f(k)$, and every real number is hit by some natural number $ k$ (this is just what it means to be a bijection).

First let me just do some setup. I claim that all we need to do is show that there is no bijection between $ \mathbb{N}$ and the real numbers between 0 and 1. In particular, I claim there is a bijection from $ (0,1)$ to all real numbers, so if there were a bijection from $ \mathbb{N} \to (0,1)$ then we could compose the two to get a bijection from $ \mathbb{N} \to \mathbb{R}$. To show there is a bijection from $ (0,1) \to \mathbb{R}$, consider the map sending $ x$ to $ 1/x - 1/(1-x)$. This function is continuous and strictly decreasing on $ (0,1)$ (its derivative $ -1/x^2 - 1/(1-x)^2$ is always negative), it shoots off to $ +\infty$ as $ x$ approaches 0, and it plunges to $ -\infty$ as $ x$ approaches 1. Strict monotonicity makes it injective, and continuity (via the intermediate value theorem) makes it hit every real number, so it is a bijection.
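If a formula like that feels like magic, here is a quick numeric sanity check in Python (a finite sample, of course, not a proof):

def f(x):
    # The claimed bijection from (0,1) to all real numbers.
    return 1 / x - 1 / (1 - x)

xs = [i / 1000 for i in range(1, 1000)]
values = [f(x) for x in xs]
assert all(a > b for a, b in zip(values, values[1:]))  # strictly decreasing
print(f(0.001), f(0.999))  # roughly +999 and -999: huge in both directions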

Okay, setup is done. We just have to show there is no bijection between $ (0,1)$ and the natural numbers.

The reason I did all that setup is so that I can use the fact that every real number in $ (0,1)$ has an infinite binary expansion whose only nonzero digits are after the binary point. And so I’ll write down the expansion of $ f(1)$ as a row in a table (an infinite row), and below it I’ll write down the expansion of $ f(2)$, below that $ f(3)$, and so on, with the binary points lined up. (One wrinkle worth knowing about: some numbers have two binary expansions, like $ 0.0111\dots = 0.1000\dots$, so a fully rigorous version of this proof has to fix a convention for which expansion to use; we’ll sweep that detail under the rug here.) The table looks like this.

$ \displaystyle \begin{matrix} f(1) &=& 0.dddd \dots \\ f(2) &=& 0.dddd \dots \\ f(3) &=& 0.dddd \dots \\ \vdots & & \vdots \end{matrix}$

The $ d$’s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of $ f(1)$ by $ b_{1,1}, b_{1,2}, b_{1,3}, \dots$, the digits of $ f(2)$ by $ b_{2,1}, b_{2,2}, b_{2,3}, \dots$, and so on. This makes the table look like this:

$ \displaystyle \begin{matrix} f(1) &=& 0.b_{1,1}b_{1,2}b_{1,3} \dots \\ f(2) &=& 0.b_{2,1}b_{2,2}b_{2,3} \dots \\ f(3) &=& 0.b_{3,1}b_{3,2}b_{3,3} \dots \\ \vdots & & \vdots \end{matrix}$

It’s a bit harder to read, but trust me the notation is helpful.

Now by the assumption that $ f$ is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.

Here’s how I’ll come up with such a number $ N$ (this is the diagonalization part). It starts with 0., and its first digit after the binary point is $ 1-b_{1,1}$. That is, we flip the bit $ b_{1,1}$ to get the first digit of $ N$. The second digit is $ 1-b_{2,2}$, the third is $ 1-b_{3,3}$, and so on. In general, digit $ i$ is $ 1-b_{i,i}$.

Now we show that $ N$ isn’t in the table. If it were, then it would have to be $ N = f(m)$ for some $ m$, i.e. the $ m$-th row in the table. Then by the way we built the table, the $ m$-th digit of $ N$ would be $ b_{m,m}$. But we defined $ N$ so that its $ m$-th digit is actually $ 1-b_{m,m}$. This is very embarrassing for $ N$ (it’s a contradiction!). So $ N$ isn’t in the table.

$ \square$
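To make the construction concrete, here is a toy version of the diagonal flip in Python, run on a finite table (the table entries are made up purely for illustration):

def flip_diagonal(table):
    # table[i][j] is the j-th digit of the i-th number; each digit is 0 or 1.
    # The result differs from row i at digit i, so it matches no row.
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
new_row = flip_diagonal(table)  # [1, 0, 1, 1]
assert all(new_row[i] != table[i][i] for i in range(len(table)))

Of course, the real proof needs the table (and each row) to be infinite, but the one-digit-per-row disagreement is exactly the same.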

It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?

The Halting Problem

The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.

The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program $ P$ and an input $ x$ to that program, will $ P$ ever stop running when given $ x$ as input? What I mean by “decide” is that any program claiming to solve the halting problem must itself halt on every possible input, outputting the correct answer. A “halting problem solver” can’t loop infinitely!

So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.

Theorem: The halting problem cannot be solved by Turing machines.

Proof. Suppose to the contrary that $ T$ is a program that solves the halting problem. We’ll use $ T$ as a black box to come up with a new program I’ll call meta-$ T$, defined below in Python, treating $ T$ as an assumed black-box function.

def metaT(P):
    # T is the assumed halting-problem solver: T(P, x) returns True if
    # the program P halts on input x, and False otherwise.
    if T(P, P):
        # P halts when fed its own source code, so do the opposite: loop forever.
        while True:
            pass
    else:
        # P loops forever on its own source code, so halt instead.
        return "success!"

In words, meta-$ T$ accepts as input the source code of a program $ P$, and then uses $ T$ to tell whether $ P$ halts when given its own source code as input. It then does the opposite: if $ P$ halts on its own source code then meta-$ T$ loops infinitely, and vice versa. It’s a little meta, right?

Now let’s do something crazy: let’s run meta-$ T$ on itself! That is, run

metaT(metaT)

So meta. The question is what is the output of this call? The meta-$ T$ program uses $ T$ to determine whether meta-$ T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$ T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $ T$’s answer! Likewise, if $ T$ says that metaT(metaT) should loop infinitely, that will cause meta-$ T$ to halt, a contradiction. So $ T$ cannot be correct, and the halting problem can’t be solved.

$ \square$

This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.

But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $ \mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
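To see the countability of programs concretely, here is a sketch in Python of enumerating every binary string in length-then-lexicographic order; filtering this list down to syntactically valid programs is the step we’re waving our hands at:

from itertools import count, product

def all_strings(alphabet="01"):
    # Yield every finite string over the alphabet: shortest first,
    # lexicographic within each length. Every string shows up at some
    # finite position, which is exactly what countability means.
    yield ""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)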

The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.

For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:

010101010101010101...

Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us, we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table has a 1 at entry $ (x,P)$ if $ P(x)$ halts and a 0 otherwise.


$ \displaystyle \begin{matrix} & P_1 & P_2 & P_3 & \dots \\ x_1 & b_{1,1} & b_{1,2} & b_{1,3} & \dots \\ x_2 & b_{2,1} & b_{2,2} & b_{2,3} & \dots \\ x_3 & b_{3,1} & b_{3,2} & b_{3,3} & \dots \\ \vdots & \vdots & \vdots & \vdots & \end{matrix}$

Here $ b_{i,j}$ is 1 if $ P_j(x_i)$ halts and 0 otherwise. The table encodes the answers to the halting problem for all possible programs and inputs.

Now we assume for contradiction’s sake that some program solves the halting problem, i.e. that every entry of the table is computable. Then we can construct the answers output by meta-$ T$ by flipping each bit of the diagonal of the table. The point is that meta-$ T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$ T$. Then we argue that the entry of the table for $ (\textup{meta-}T, \textup{meta-}T)$ contradicts its definition, and we’re done!

So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool to have in your proof repertoire.

Until next time!

Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove it’s not possible to do something. Perhaps it’s part of human nature to feel that nothing is truly beyond the realm of possibility. But perhaps it’s more surprising that the main line of attack to prove something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder whether this will always be the case for any party. And so the problem is born: prove or disprove that at every party of $ n$ people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are $ n$ people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is $ n-1$ (if you’re friends with everybody). So there are $ n$ distinct numbers, and $ n$ people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.
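For the skeptical programmer, here is a quick empirical check in Python: a random search for counterexamples (which, as the proof promises, never finds one):

import itertools
import random

def has_degree_collision(n, edges):
    # Count each person's friends and check whether two counts coincide.
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return len(set(degree)) < n

for trial in range(1000):
    n = random.randint(2, 8)
    pairs = list(itertools.combinations(range(n), 2))
    edges = [e for e in pairs if random.random() < 0.5]
    assert has_degree_collision(n, edges)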

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication $ p \to q$:

p  q  p->q
T  T   T
T  F   F
F  T   T
F  F   T

We notice that an implication can only be false if the hypothesis $ p$ is true and the consequence $ q$ is false. This is the motivation for a proof by contradiction: if we show this case can’t happen, then there’s no other option: the statement $ p \to q$ must be true. In other words, if supposing “p and not q” is true implies something which we know to be false, then by the very same truth table argument it must be that either “q” is true or “p” is false. In any of these cases “p implies q” is true.
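If you like, this equivalence can be verified mechanically; here is a check in Python over all four truth assignments:

from itertools import product

implies = lambda p, q: (not p) or q  # the truth table for p -> q
for p, q in product([True, False], repeat=2):
    # "p -> q" holds exactly when the bad case "p and not q" is ruled out.
    assert implies(p, q) == (not (p and not q))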

But all of this truth table juggling really takes us away from the heart of the method. Let’s do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if $ x$ is a number such that $ x^2 = 2$, then it cannot be true that $ x = p/q$ where $ p,q$ are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that $ x = p/q$ is a ratio of integers. Then we can square both sides to get $ 2 = x^2 = p^2 / q^2$, and rearrange to arrive at $ 2q^2 = p^2$. Now comes the tricky part: in the prime factorization of a square number, every prime occurs to an even power. In particular, the largest power of 2 dividing any square number is even (and $ 2^0$ counts as an even power). Now in the equation $ 2q^2 = p^2$ the right hand side is a square, so the largest power of 2 dividing it is even, while the left hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of 2). But the two sides are equal! They can’t possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that $ x$ is rational.
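The parity argument is easy to check by computer on a finite sample (a sanity check, not a proof):

def largest_power_of_two(n):
    # The exponent of the largest power of 2 dividing n (for n > 0).
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

for q in range(1, 500):
    assert largest_power_of_two(q * q) % 2 == 0      # squares: even exponent
    assert largest_power_of_two(2 * q * q) % 2 == 1  # twice a square: odd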

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.

Oftentimes the jump from “suppose to the contrary” to the contradiction requires only a relatively small piece of insight, but at other times it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our “having no friends” contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it’s used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced: [1] [2] [3] [4]) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like “1 = 0”, oftentimes the “contradiction” at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: $ p_1 , \dots, p_n$. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let $ q = p_1 \dots p_n + 1$. As the reader will prove in the exercises below, every number greater than 1 has a prime divisor, but it is not hard to see that no $ p_i$ divides $ q$. This is because no number except 1 can divide both $ x$ and $ x-1$ (one can prove this fact by contradiction if it is not obvious), and every $ p_i$ divides $ q-1$. So $ q$ must have some prime divisor which was not in the list we started with. This contradicts the claim that we had a complete list of primes to begin with. And so there must be infinitely many primes.
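Here is the construction in Python (the helper name and the trial-division factoring are ours, just for illustration):

from math import prod

def prime_outside(primes):
    # Multiply the purportedly complete list of primes, add one, and return
    # the smallest divisor greater than 1 of the result. That divisor is
    # prime, and it cannot appear in the original list.
    q = prod(primes) + 1
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d
        d += 1
    return q  # q itself is prime

print(prime_outside([2, 3, 5, 7, 11, 13]))  # 59, since 30031 = 59 * 509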

Here are some exercises to practice the proof by contradiction:

  1. Prove that the base 2 logarithm of 3 is irrational.
  2. More generally, prove that $ \log_a(b)$ is irrational if there is any prime $ p$ dividing $ a$ but not $ b$, or vice versa.
  3. Prove that every natural number $ n \geq 2$ is a product of primes, the existence half of the fundamental theorem of arithmetic (hint: inspect the smallest failing example).

A Few More Words on Functions and Sets

Last time we defined what it means for a function $ f: X \to Y$ on sets to be injective: different things in $ X$ get mapped to different things in $ Y$. There is another interesting notion called surjectivity, which says that everything in $ Y$ gets “hit” by something in $ X$.

Definition: A function $ f: X \to Y$ is surjective if for every element $ y \in Y$ there is an $ x \in X$ for which $ f(x) = y$. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we can use surjections to prove something nice about the sets they involve. If $ f:X \to Y$ is a surjection, then $ X$ has at least as many elements as $ Y$. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if $ f: X \to Y$ is injective, then there are at least as many elements of $ Y$ as there are of $ X$. If a single function is both injective and surjective, the two counts combine to show that $ X$ and $ Y$ have exactly the same size.

Definition: A function $ f: X \to Y$ which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection $ X \to Y$, then $ X$ and $ Y$ have the same number of elements. The converse is also true, if two finite sets have the same size one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven’t covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!

Definition: Two arbitrary sets $ X,Y$ are said to have the same cardinality if there exists a bijection $ f : X \to Y$. If there is a bijection $ f: \mathbb{N} \to X$ then $ X$ is said to have countably infinite cardinality, or simply countably infinite. If no such bijection exists (and $ X$ is not a finite set), then $ X$ is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the regular natural numbers. If $ X$ is the set of even natural numbers, just construct a function $ \mathbb{N} \to X$ by sending $ x \mapsto 2x$. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the “maps to” arrow $ \mapsto$. The reader can think of this as the mathematician’s version of a lambda expression. So the above map would be written in Python as lambda x: 2*x.
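As a finite sanity check of that map (a sample, not a proof over all of $ \mathbb{N}$):

double = lambda x: 2 * x  # the map from the text

sample = range(100)
image = [double(n) for n in sample]
assert len(set(image)) == len(image)          # no two inputs collide: injective
assert set(image) == {2 * n for n in sample}  # every even number in range is hit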

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as all natural numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by $ \mathbb{Q}$, and typically this proof is done by first finding a bijection from $ \mathbb{N} \to \mathbb{Z}$ and then from $ \mathbb{Z} \to \mathbb{Q}$. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will be satisfied for bijections.

Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem ‘non-trivial’ might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don’t use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, ‘trivial’ is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we’re learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called “highly non-trivial” if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we should first say what the real numbers are. In fact we won’t work with the entire set of real numbers, but with a “small” subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set $ [0,1]$ is the set of all infinite sequences of zeroes and ones, interpreted as the set of all binary expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence to either be an infinite tuple (falling back on our definition of a tuple as a set), or we can define it to be a function $ f : \mathbb{N} \to \left \{ 0, 1 \right \}$. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence $ (b_i) = (b_1, b_2, \dots)$ is a function $ b : \mathbb{N} \to \left \{ 0, 1 \right \}$ where we denote by $ b_i$ the value $ b(i)$.
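In Python terms, an infinite binary sequence is just a function from positive integers to bits. For instance, the expansion $ 0.010101\dots$ (the number 1/3 in binary) would be:

third = lambda i: 0 if i % 2 == 1 else 1  # b_i is 0 for odd i, 1 for even i

print([third(i) for i in range(1, 9)])  # [0, 1, 0, 1, 0, 1, 0, 1]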

So now we can state the theorem.

Theorem: The set $ [0,1]$ of infinite binary sequences is uncountably infinite. That is, there is no bijection $ \mathbb{N} \to [0,1]$.

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that $ f : \mathbb{N} \to [0,1]$ is a bijection. Then we can list the values of $ f$ in a table. Since we want to use $ b_i$ for all of the values of $ f$, we will call

$ \displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots$

This gives us the following infinite table:

$ \displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}$

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking $ c_i = b_{i,i}$ for all $ i$. That is, we are using all of the diagonal elements of the table above. Now take each $ c_i$ and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply $ b \mapsto 1-b$ to each entry). The important fact about this new sequence is that it differs from every row in this table. By the way we constructed it, no matter which $ n$ one chooses, this sequence differs from the table entry $ f(n)$ at digit $ n$ (and perhaps elsewhere). Because of this, it can’t occur as a row in the table. So we just proved our function $ f$ isn’t surjective, contradicting our original hypothesis, and proving the theorem.
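Here is the same construction in Python, treating sequences as functions exactly as in the definition above; the example table standing in for the hypothetical bijection $ f$ is made up for illustration:

def diagonalize(f):
    # f(n) is the n-th sequence in the table, itself a function i -> bit.
    # The new sequence disagrees with f(n) at position n, for every n.
    return lambda i: 1 - f(i)(i)

# A stand-in table: row n is the sequence of binary digits of n.
table = lambda n: (lambda i: (n >> (i - 1)) & 1)

c = diagonalize(table)
assert all(c(n) != table(n)(n) for n in range(1, 1000))  # c matches no row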

The discovery of this fact was an important step forward in the history of mathematics. The particular technique though, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It’s quite a bit more specialized of a proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lend themselves nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there is an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the last of the four basic methods of proof: proof by induction.

Until then!