Methods of Proof — Induction

In this final post on the basic four methods of proof (but perhaps not our last post on proof methods), we consider the proof by induction.

Proving Statements About All Natural Numbers

Induction comes in many flavors, but the goal never changes. We use induction when we want to prove something is true about all natural numbers. These statements will look something like this:

For all natural numbers n, 1 + 2 + \dots + n = n(n+1)/2.

Of course, there are many ways to prove this fact, but induction relies on one key idea: if we know the statement is true for some specific number n, then that gives us information about whether the statement is true for n+1. If this isn’t true about the problem, then proof by induction is hopeless.

Let’s see how we can apply it to the statement above (though we haven’t yet said what induction is, this example will pave the way for a formal description of the technique). The first thing we notice is that indeed, if we know something about the first n numbers, then we can just add n+1 to it to get the sum of the first n+1 numbers. That is,

\displaystyle 1 + \dots + n + (n+1) = (1 + \dots + n) + (n+1)

Reiterating our key point, this is how we know induction is a valid strategy: the statement written for a fixed n translates naturally into the statement about n+1. Now suppose we know the theorem is true for n. Then we can rewrite the above sum as follows:

\displaystyle 1 + \dots + n + (n+1) = \frac{n(n+1)}{2} + (n+1)

With some algebra, we can write the right-hand side as a single fraction:

\displaystyle 1 + \dots + (n+1) = \frac{n(n+1) + 2(n+1)}{2}

and factoring the numerator gives

\displaystyle 1 + \dots + (n+1) = \frac{(n+1)(n+2)}{2}

Indeed, this is precisely what we’re looking for! It’s what happens when you replace n by n+1 in the original statement of the problem.

At this point we’re very close to being finished. We proved that if the statement is true for n, then it’s true for n+1. And by the same reasoning, it will be true for n+2, n+3, and all numbers after n. But this raises the obvious question: what’s the smallest number that it’s true for?

For this problem, it’s easy to see the answer is n=1. A mathematician would say: the statement is trivially true for n=1 (here trivial means there is no thinking required to show it: you just plug in n=1 and verify). And so by our reasoning, the statement is true for n=2, n=3, and so on forever. This is the spirit of mathematical induction.
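Before we formalize the technique, it’s worth a sanity check. The following snippet (a check on finitely many values, which is evidence rather than proof — induction is what covers all n at once; the name gauss_sum is just a label for the right-hand side) verifies the formula directly:

```python
# Check that 1 + 2 + ... + n == n(n+1)/2 for the first hundred values of n.
def gauss_sum(n):
    return n * (n + 1) // 2

for n in range(1, 101):
    assert sum(range(1, n + 1)) == gauss_sum(n)

print(gauss_sum(100))  # 5050
```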

Formal Nonsense

Now that we’ve got a taste of how to use induction in practice, let’s formally write down the rules for induction. Let’s have a statement which depends on a number n, and call it p(n). This is written as a function because it actually is one (naively). It’s a function from the set of natural numbers to the set of all mathematical statements. In our example above, p(n) was the statement that the equality 1 + \dots + n = n(n+1)/2 holds.

We can plug in various numbers into this function, such as p(1) being the statement “1 = 1(1+1)/2 holds,” or p(n+1) being “1 + \dots + (n+1) = (n+1)(n+1+1)/2 holds.”

The basic recipe for induction is then very simple. Say you want to prove that p(n) is true for all n \geq 1. First you prove that p(1) is true (this is called the base case), and then you prove the implication p(n) \to p(n+1) for an arbitrary n. Each of these pieces can be proved with any method one wishes (direct, contrapositive, contradiction, etc.). Once they are proven, we get that p(n) is true for all n automatically.

Indeed, we can even prove it. A rigorous proof requires a bit of extra knowledge, but we can easily understand the argument: it’s just a proof by contradiction. Here’s a quick sketch. Let X be the set of all natural numbers n for which p(n) is false. Suppose to the contrary that X is not empty. Every nonempty set of natural numbers has a smallest element, so let’s call m the smallest number for which p(m) is false. Note that m cannot be 1, since we proved p(1) is true. So m-1 is a natural number with m-1 < m, and by the minimality of m, p(m-1) must be true. But we proved that whenever p(n) is true then so is p(n+1), so p(m-1 + 1) = p(m) is true, a contradiction.

Aside from getting practice, that’s all there is to proof by induction. One more caveat is that the base case can be some number other than 1. For instance, it is true that n! > 2^n, but only for n \geq 4. In this case, we prove p(4) is true, and prove p(n) \to p(n+1) with the extra assumption that n \geq 4. The same inductive argument applies.
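We can see the shifted base case concretely. A quick check (exploration only, not part of any proof) shows the inequality fails for n = 1, 2, 3 and holds from n = 4 onward:

```python
import math

# The inequality n! > 2^n fails below the base case n = 4...
for n in range(1, 4):
    assert math.factorial(n) <= 2 ** n   # 1 <= 2, 2 <= 4, 6 <= 8

# ...and holds from n = 4 onward (checked here over a finite range).
for n in range(4, 50):
    assert math.factorial(n) > 2 ** n
```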

Here are some exercises the reader can practice with, and afterward we will explore some variants of induction.

  1. Prove that n! > 2^n for all n \geq 4.
  2. Prove that for all n \geq 1 the following equality holds: 1/(1 \cdot 2) + 1/(2 \cdot 3) + \dots + 1/(n \cdot (n+1)) = n/(n+1).
  3. Prove that for every natural number n, a set of n elements has 2^n subsets (including the empty subset).

This last exercise gives a hint that induction can prove more than arithmetic formulas. Indeed, if we have any way to associate a mathematical object with a number, we can potentially use induction to prove things about those objects. Unfortunately, we don’t have any mathematical objects to work with (except for sets and functions), and so we will stick primarily to proving facts about numbers.

One interesting observation about proof by induction is very relevant to programmers: it’s just recursion. That is, if we want to prove a statement p(n), it suffices to prove it for p(n-1) and do some “extra computation” to arrive at the statement for p(n). And of course, we want to make sure the recursion terminates, so we add in the known result for p(1).
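The analogy is easy to make literal. Here is the inductive proof of our summation formula transcribed as a recursive function: the base case of the recursion is the base case of the induction, and the recursive call plays the role of the inductive hypothesis.

```python
def sum_to(n):
    if n == 1:                  # base case: p(1)
        return 1
    return sum_to(n - 1) + n    # inductive step: get p(n) from p(n-1)

# The recursion terminates for the same reason the induction is valid.
assert sum_to(10) == 10 * 11 // 2
```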

Variations on Induction, and Connections to Dynamic Programming

The first variation of induction is simultaneous induction on multiple quantities. That is, we can formulate a statement p(n,m) which depends on two natural numbers independently of one another. The base case is a bit trickier, but paralleling the above remark about recursion, double-induction follows the same pattern as a two-dimensional dynamic programming algorithm. The base cases would consist of all p(1,m) and all p(n,1), and the inductive step to get p(n,m) requires p(n-1,m) and p(n,m-1) (and potentially p(n-1, m-1) or others; it depends on the problem).

Unfortunately, natural instances where double induction is useful (or anywhere close to necessary) are rare. Here is a tricky but elementary example. Call

\displaystyle C(m,n) = \frac{(2m)!(2n)!}{m!n!(m+n)!},

where the exclamation point denotes the factorial function. We will outline a proof that C(m,n) is always an integer for all m, n \geq 0. If we look at the base cases, C(0,n), C(m,0) (recalling that 0! = 1), we get (2n)!/(n! n!), and this happens to be in the form of a binomial coefficient (here, the number of ways to choose n objects from a collection of 2n objects), and binomial coefficients are known to be integers. Now the inductive step relies on the fact that C(m,n-1) and C(m+1, n-1) are both integers. If this is true then

\displaystyle C(m,n) = 4C(m,n-1) - C(m+1, n-1),

which is obviously again an integer.

If we write these values in a table, we can see how the induction progresses in a “dynamic programming” fashion:

[Figure: a table of the values C(m,n), filled in column by column like a dynamic programming table]

In order to fill in column n (prove the statement for those values of n), we need to have filled in the entire column n-1 (for indeed, we rely on the inductive hypothesis for both the m and m+1 rows). But since our base case was the entire n=0 column, we can fill the entire table. In fact, we have just described a dynamic programming algorithm for computing the value of C(m,n) in mn steps. The correctness of the algorithm is indeed an inductive proof of the theorem.
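That dynamic program can be sketched directly. This is a rough illustration with hypothetical names (super_catalan_table is ours, not standard); the closed-form function C is included only to cross-check the table:

```python
from math import factorial

def super_catalan_table(max_m, max_n):
    # Base column n = 0: central binomial coefficients (2m)! / (m! m!).
    # We pad with extra rows because each new column is one row shorter.
    rows = max_m + max_n + 1
    cols = [[factorial(2 * m) // (factorial(m) ** 2) for m in range(rows)]]
    for n in range(1, max_n + 1):
        prev = cols[-1]
        # Inductive step: C(m, n) = 4*C(m, n-1) - C(m+1, n-1)
        cols.append([4 * prev[m] - prev[m + 1] for m in range(len(prev) - 1)])
    return cols  # cols[n][m] == C(m, n), an integer by construction

def C(m, n):  # the closed form, for cross-checking only
    return (factorial(2 * m) * factorial(2 * n)
            // (factorial(m) * factorial(n) * factorial(m + n)))

table = super_catalan_table(5, 5)
assert all(table[n][m] == C(m, n) for n in range(6) for m in range(6))
```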

Perhaps uninterestingly, this is asymptotically slower than the naive algorithm of computing C(m,n) directly from the factorials (2n)!, (2m)!, (n+m)!; that would take a linear number of steps in n, assuming n > m. In passing, this author wonders whether, when the numbers are really large, the lack of division and multiplication in the dynamic program (multiplying by 4 using bit shifting instead) would overtake the naive algorithm. But C(m,n) is certainly not interesting enough in its own right for anyone to care :)

At this point, we have noticed that we sometimes use induction and assume that many smaller instances of the statement are true. Indeed, why not inductively assume that the statement holds for all smaller n? This would certainly give the prover more tools to work with. This technique is sometimes called strong induction, in the sense that we assume a stronger inductive hypothesis when we’re trying to prove p(n+1). It may not be entirely obvious (especially to one not well versed in the minutiae of formal logic) that this is actually equivalent to normal induction, but it is. In fact, the concept of “strong” induction is entirely pedagogical in nature. Most working mathematicians will not mention the difference in their proofs.

The last variant we’ll mention about induction is that of transfinite induction. The concept is that if you have any set X which is well-ordered (essentially this means: every nonempty subset has a smallest element, which is what allowed us to prove induction works earlier in this post), then we can perform induction on its elements. In this way, we can “parameterize” statements by elements of an arbitrary well-ordered set, so that instead of p(n) being a function from natural numbers to mathematical statements, it’s a function from X to mathematical statements. One somewhat common example of when X is something besides natural numbers is when we use the so-called cardinal numbers. We have already seen two distinct infinite cardinal numbers in this series: the cardinality of the integers and the cardinality of the real numbers (indeed, “cardinal number” just means a number which is the cardinality of a set). It turns out that there are many more kinds of cardinal numbers, and you can do induction on them, but this rarely shows up outside of mathematical logic.

And of course, we should close this post on an example of when induction goes wrong. For this we point the reader to our proof gallery, and the false proof that all horses are the same color. It’s quite an amusing joke, and hopefully it will stimulate the reader’s mind to recognize the pitfalls that can occur in a proof by induction.

So those are the basic four proof techniques! Fortunately for the reader, pretty much all proofs presented on this blog follow one of these four techniques. I imagine many of my readers skip over the proofs entirely (if only I could put proofs in animated gifs, with claims illustrated by grumpy cats!). So hopefully, if you have been intimidated or confused by the structure of the proofs on this blog, this will aid you in your future mathematical endeavors.  Butchering an old phrase for the sake of a bad pun, the eating of the pudding is in the proof. :)

Until next time!

Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove it’s not possible to do something. Perhaps this is part of human nature, the feeling that nothing is so impossible as to escape the realm of possibility. But perhaps it’s more surprising that the main line of attack to prove something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder if it will always be the case for any party. And so the problem is born: prove or disprove that at every party of n people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are n people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is n-1 (if you’re friends with everybody). So there are n distinct numbers, and n people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.
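Since the statement is about every party, we can also test it empirically on random friendship graphs. This is a sanity check only (the proof above is what covers all cases), and the helper name has_degree_collision is just ours:

```python
import itertools
import random

def has_degree_collision(n, edges):
    # Count each person's friends and ask whether two counts coincide.
    degree = [0] * n
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return len(set(degree)) < n

random.seed(0)
for _ in range(200):
    n = random.randint(2, 12)
    pairs = itertools.combinations(range(n), 2)
    edges = [e for e in pairs if random.random() < 0.5]
    assert has_degree_collision(n, edges)  # never fails, by the theorem
```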

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication p \to q:

p  q  p->q
T  T   T
T  F   F
F  T   T
F  F   T

We notice that an implication can only be false if the hypothesis p is true and the consequence q is false. This is the motivation for a proof by contradiction: if we show this case can't happen, then there's no other option: the statement p \to q must be true. In other words, if supposing "p and not q" is true implies something which we know to be false, then by the very same truth table argument it must be that either "q" is true or "p" is false. In any of these cases "p implies q" is true.

But all of this truth table juggling really takes us away from the heart of the method. Let's do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if x is a number such that x^2 = 2, then it cannot be true that x = p/q where p,q are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that x = p/q is a ratio of integers. Then we can square both sides to get 2 = x^2 = p^2 / q^2, and rearrange to arrive at 2q^2 = p^2. Now comes the tricky part: in a square number, every prime factor occurs an even number of times, since squaring doubles each exponent in the prime factorization. In particular, the largest power of 2 dividing any square number is even (and 2^0 counts as an even power). Now in the equation 2q^2 = p^2 the right-hand side is a square, so the largest power of 2 dividing it is even; but the left-hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of 2). But the two sides are equal! They can't possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that x is rational.

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.
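The parity fact at the heart of the proof — the largest power of 2 dividing a square is even, while for twice a square it is odd — can be checked mechanically (a finite spot-check only, with a helper name of our own choosing):

```python
def two_adic_valuation(n):
    # The exponent of the largest power of 2 dividing n.
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

for q in range(1, 500):
    assert two_adic_valuation(q * q) % 2 == 0       # squares: even exponent
    assert two_adic_valuation(2 * q * q) % 2 == 1   # twice a square: odd
```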

Oftentimes the jump from "suppose to the contrary" to the contradiction requires a relatively small piece of insight, but in other cases it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our "having no friends" contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it's used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced: [1] [2] [3] [4]) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like "1 = 0", oftentimes the "contradiction" at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: p_1 , \dots, p_n. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let q = p_1 \dots p_n + 1. Indeed, as the reader will prove in the exercises below, every number has a prime divisor, but it is not hard to see that no p_i divides q. This is because no matter what some number x is, no number except 1 can divide both x and x-1 (one can prove this fact by contradiction if it is not obvious), and we already know that all the p_i divide q-1 . So q must have some prime divisor which was not in the list we started with. This contradicts that we had a complete list of primes to begin with. And so there must be infinitely many primes.
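The argument is constructive enough to run. Starting from a (pretend-complete) finite list of primes, the number q = p_1 \cdots p_n + 1 always yields a prime divisor missing from the list:

```python
def smallest_prime_divisor(n):
    # Trial division; the first divisor found is necessarily prime.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
q = 1
for p in primes:
    q *= p
q += 1  # q = 30031; no prime in the list divides it

new_prime = smallest_prime_divisor(q)
assert new_prime not in primes  # indeed, 30031 = 59 * 509
```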

Here are some exercises to practice the proof by contradiction:

  1. Prove that the base 2 logarithm of 3 is irrational.
  2. More generally, prove that \log_a(b) is irrational if there is any prime p dividing a but not b, or vice versa.
  3. Prove the fundamental theorem of arithmetic, that every natural number n \geq 2 is a product of primes (hint: inspect the smallest failing example).

A Few More Words on Functions and Sets

Last time we defined what it means for a function f: X \to Y on sets to be injective: different things in X get mapped to different things in Y. Indeed, there is another interesting notion called surjectivity, which says that f "hits" everything in Y by something in X.

Definition: A function f: X \to Y is surjective if for every element y \in Y there is an x \in X for which f(x) = y. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we use surjections to prove something nice about the sets they involve. If f: X \to Y is a surjection, then X has at least as many elements as Y. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if f: X \to Y is injective, then there are at least as many elements of Y as there are of X. If we combine the two notions, we can see that X and Y have exactly the same size.

Definition: A function f: X \to Y which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection X \to Y, then X and Y have the same number of elements. The converse is also true: if two finite sets have the same size, one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven't covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!

Definition: Two arbitrary sets X,Y are said to have the same cardinality if there exists a bijection f : X \to Y. If there is a bijection f: \mathbb{N} \to X then X is said to have countably infinite cardinality, or simply countably infinite. If no such bijection exists (and X is not a finite set), then X is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the regular natural numbers. If X is the set of even natural numbers, just construct a function \mathbb{N} \to X by sending x \mapsto 2x. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the "maps to" arrow \mapsto. The reader can think of this as the mathematician's version of a lambda expression. So the above map would be written in Python as: lambda x: 2*x.
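To make the lambda analogy concrete, here is the map x \mapsto 2x as code, with a finite spot-check of injectivity (a check on an initial segment is, of course, not a proof for all of \mathbb{N}):

```python
double = lambda x: 2 * x  # the map x "maps to" 2x

images = [double(x) for x in range(1000)]
assert len(images) == len(set(images))   # no collisions: injective here
assert all(y % 2 == 0 for y in images)   # every image is an even number
```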

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as natural numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by \mathbb{Q}, and typically this proof is done by first finding a bijection from \mathbb{N} \to \mathbb{Z} and then from \mathbb{Z} \to \mathbb{Q}. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will hold for bijections.

Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem 'non-trivial' might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don't use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, 'trivial' is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we're learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called "highly non-trivial" if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we have to first say what the real numbers are. Instead of constructing them in full rigor, we won't actually work with the entire set of real numbers, but with a "small" subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set [0,1] is the set of all infinite sequences of zeroes and ones, interpreted as the set of all binary decimal expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence to either be an infinite tuple (falling back on our definition of a tuple as a set), or we can define it to be a function f : \mathbb{N} \to \left \{ 0, 1 \right \}. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence (b_i) = (b_1, b_2, \dots) is a function b : \mathbb{N} \to \left \{ 0, 1 \right \} where we denote by b_i the value b(i).

So now we can state the theorem.

Theorem: The set [0,1] of infinite binary sequences is uncountably infinite. That is, there is no bijection \mathbb{N} \to [0,1].

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that f : \mathbb{N} \to [0,1] is a bijection. Then we can list the values of f in a table. Since we want to use b_i for all of the values of f, we will call

\displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots

This gives us the following infinite table:

\displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking c_i = b_{i,i} for all i. That is, we are using all of the diagonal elements of the table above. Now take each c_i and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply b \mapsto 1-b to each entry). The important fact about this new sequence is that it differs from every entry in this table. By the way we constructed it, no matter which n one chooses, this number differs from the table entry f(n) at digit n (and perhaps elsewhere). Because of this, it can't occur as an entry in the table. So we just proved our function f isn't surjective, contradicting our original hypothesis, and proving the theorem.
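The diagonal construction is easy to carry out on a finite mock-up of the table (a toy illustration; the real argument works on the genuinely infinite table):

```python
import random

random.seed(1)
n = 20
# A finite stand-in for the infinite table: n rows of n random bits.
table = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Flip each diagonal bit: the new sequence disagrees with row i at digit i.
diagonal = [1 - table[i][i] for i in range(n)]

for i, row in enumerate(table):
    assert diagonal[i] != row[i]  # so the diagonal sequence is not row i
    assert diagonal != row
```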

The discovery of this fact was an important step forward in the history of mathematics. The particular technique, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It's a much more specialized proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting that diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lend themselves nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there is an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the last of the four basic methods of proof: proof by induction.

Until then!

Methods of Proof — Contrapositive

In this post we’ll cover the second of the “basic four” methods of proof: the contrapositive implication. We will build off our material from last time and start by defining functions on sets.

Functions as Sets

So far we have become comfortable with the definition of a set, but the most common way to use sets is to construct functions between them. As programmers we readily understand the nature of a function, but how can we define one mathematically? It turns out we can do it in terms of sets, but let us recall the desired properties of a function:

  • Every input must have an output.
  • Every input can only correspond to one output (the functions must be deterministic).

One might try at first to define a function in terms of subsets of size two. That is, if A, B are sets then a function f: A \to B would be completely specified by

\displaystyle \left \{ \left \{ x, y \right \} : x \in A, y \in B \right \}

where to enforce those two bullets, we must impose the condition that every x \in A occurs in one and only one of those subsets. Notationally, we would say that y = f(x) means \left \{ x, y \right \} is a member of the function. Unfortunately, this definition fails miserably when A = B, because we have no way to distinguish the input from the output.

To compensate for this, we introduce a new type of object called a tuple. A tuple is just an ordered list of elements, which we write using round brackets, e.g. (a,b,c,d,e).

As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided:

\displaystyle (a,b) = \left \{ a, \left \{ a, b \right \} \right \}

And so a function f: A \to B is defined to be a set of ordered pairs where the first thing in the pair is an input and the second is an output:

\displaystyle f = \left \{ (x, y) : x \in A, y \in B \right \}

subject to the same condition as before: each x value from A must occur in one and only one pair. And again by way of notation we say y = f(x) if the pair (x,y) is a member of f as a set. Note that the concept of a function having “input and output” is just an interpretation. A function can be viewed, independent of any computational ideas, as just a set of pairs. Often enough we might not even know how to compute a function (or it might be provably uncomputable!), but we can still work with it abstractly.
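For the programmer, the set-of-pairs definition is directly executable. Here is a small sketch (the helper name is_function is our own) that checks the two bullet-point conditions for a finite function:

```python
def is_function(pairs, A, B):
    inputs = [x for (x, _) in pairs]
    return (all(x in A and y in B for (x, y) in pairs)
            and set(inputs) == set(A)              # every input has an output
            and len(inputs) == len(set(inputs)))   # ...and only one output

A, B = {1, 2, 3}, {"a", "b"}
assert is_function({(1, "a"), (2, "a"), (3, "b")}, A, B)
assert not is_function({(1, "a"), (1, "b"), (3, "b")}, A, B)  # 1 has two outputs
```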

It is also common to call functions “maps,” and to define “map” to mean a special kind of function (that is, with extra conditions) depending on the mathematical field one is working in. Even in other places on this blog, “map” might stand for a continuous function, or a homomorphism. Don’t worry if you don’t know these terms off hand; they are just special cases of functions as we’ve defined them here. For the purposes of this series on methods of proof, “function” and “map” and “mapping” mean the same thing: regular old functions on sets.

Injections

One of the most important and natural properties of a function is that of injectivity.

Definition: A function f: A \to B is an injection if whenever a \neq a' are distinct members of A, then f(a) \neq f(a'). The adjectival version of the word injection is injective.

As a quick side note, it is often the convention for mathematicians to use a capital letter to denote a set, and a lower-case letter to denote a generic element of that set. Moreover, the apostrophe on the a' is called a prime (so a' is spoken, “a prime”), and it’s meant to denote a variation on the non-prime’d variable a in some way. In this case, the variation is that a' \neq a.

So even if we had not explicitly mentioned where the a, a' objects came from, the knowledgeable mathematician (which the reader is obviously becoming) would be reasonably certain that they come from A. Similarly, if I were to lackadaisically present b out of nowhere, the reader would infer it must come from B.

One simple and commonly used example of an injection is the so-called inclusion function. If A \subset B are sets, then there is a canonical function representing this subset relationship, namely the function i: A \to B defined by i(a) = a. It should be clear that non-equal things get mapped to non-equal things, because the function doesn’t actually do anything except change perspective on where the elements are sitting: two nonequal things sitting in A are still nonequal in B.

Another example is that of multiplication by two as a map on natural numbers. More rigorously, define f: \mathbb{N} \to \mathbb{N} by f(x) = 2x. It is clear that whenever x \neq y as natural numbers then 2x \neq 2y. For one, x, y must have differing prime factorizations, and so must 2x, 2y because we added the same prime factor of 2 to both numbers. Did you catch the quick proof by direct implication there? It was sneaky, but present.
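On a finite domain, injectivity is just “no repeated outputs,” which makes for a one-line check (a finite-domain sketch only — it cannot certify injectivity on all of \mathbb{N}):

```python
def is_injective(f, domain):
    outputs = [f(x) for x in domain]
    return len(outputs) == len(set(outputs))  # distinct inputs, distinct outputs

assert is_injective(lambda x: 2 * x, range(100))       # doubling is injective
assert not is_injective(lambda x: x % 10, range(100))  # remainders collide
```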

Now the property of being an injection can be summed up by a very nice picture:

A picture example of an injective function.

The arrows above represent the pairs (x,f(x)), and the fact that no two arrows end in the same place makes this function an injection. Indeed, drawing pictures like this can give us clues about the true nature of a proposed fact. If the fact is false, it’s usually easy to draw a picture like this showing so. If it’s true, then the pictures will support it and hopefully make the proof obvious. We will see this in action in a bit (and perhaps we should expand upon it later with a post titled, “Methods of Proof — Proof by Picture”).

There is another, more subtle concept associated with injectivity, and this is where its name comes from. The word “inject” gives one the mental picture that we’re literally placing one set A inside another set B without changing the nature of A. We are simply realizing it as being inside of B, perhaps with different names for its elements. This interpretation becomes much clearer when one investigates sets with additional structure, such as groups, rings, or topological spaces. Here the word “injective mapping” much more literally means placing one thing inside another without changing the former’s structure in any way except for relabeling.

In any case, mathematicians have the bad (but time-saving) habit of implicitly identifying a set with its image under an injective mapping. That is, if f :A \to B is an injective function, then one can view A as the same thing as f(A) \subset B. That is, they have the same elements except that f renames the elements of A as elements of B. The abuse comes in when they start saying A \subset B even when this is not strictly the case.

Here is an example of this abuse that many programmers commit without perhaps noticing it. Suppose X is the set of all colors that can be displayed on a computer (as an abstract set; the elements are “this particular green,” “that particular pinkish mauve”). Now let Y be the set of all finite hexadecimal numbers. Then there is an obvious injective map from X \to Y sending each color to its 6-digit hex representation. The lazy mathematician would say “Well, then, we might as well say X \subset Y, for this is the obvious way to view X as a set of hexadecimal numbers.” Of course there are other ways (try to think of one, and then try to find an infinite family of them!), but the point is that this is the only way that anyone really uses, and that the other ways are all just “natural relabelings” of this way.

The precise way to formulate this claim is as follows, and it holds for arbitrary sets and arbitrary injective functions. If g, g': X \to Y are two such ways to inject X inside of Y, then there is a function h: Y \to Y such that the composition hg is precisely the map g'. If this is mysterious, we have some methods the reader can use to understand it more fully: give examples for simplified versions (what if there were only three colors?), draw pictures of “generic looking” set maps, and attempt a proof by direct implication.

Proof by Contrapositive

Often times in mathematics we will come across a statement we want to prove that looks like this:

If X does not have property A, then Y does not have property B.

Indeed, we already have: to prove a function f: X \to Y is injective we must prove:

If x is not equal to y, then f(x) is not equal to f(y).

A proof by direct implication can be quite difficult because the statement gives us very little to work with. If we assume that X does not have property A, then we have nothing to grasp and jump-start our proof. The main (and in this author’s opinion, the only) benefit of a proof by contrapositive is that one can turn such a statement into a constructive one. That is, we can write “p implies q” as “not q implies not p” to get the equivalent claim:

If Y has property B then X has property A.

This rewriting is called the “contrapositive form” of the original statement. It’s not only easier to parse, but also probably easier to prove because we have something to grasp at from the beginning.

To the beginning mathematician, it may not be obvious that “if p then q” is equivalent to “if not q then not p” as logical statements. To show that they are requires a small detour into the idea of a “truth table.”

In particular, we have to specify what it means for “if p then q” to be true or false as a whole. There are four possibilities: p can be true or false, and q can be true or false. We can write all of these possibilities in a table.

p  q
T  T
T  F
F  T
F  F

If we were to complete this table for "if p then q," we'd have to specify exactly which of the four cases correspond to the statement being true. Of course, if the p part is true and the q part is true, then "p implies q" should also be true. We have seen this already in proof by direct implication. Next, if p is true and q is false, then it certainly cannot be the case that truth of p implies the truth of q. So this would be a false statement. Our truth table so far looks like

p  q  p->q
T  T   T
T  F   F
F  T   ?
F  F   ?

The next question is what to do if the premise p of "if p then q" is false. Should the statement as a whole be true or false? Rather than enter a belated philosophical discussion, we will zealously define an implication to be true if its hypothesis is false. This is a well-accepted idea in mathematics called vacuous truth. And although it seems to make awkward statements true (like "if 2 is odd then 1 = 0"), it is rarely a confounding issue (and more often forms the punchline of a few good math jokes). So we can complete our truth table as follows

p q  p->q
T T   T
T F   F
F T   T
F F   T

Now here's where contraposition comes into play. If we're interested in determining when "not q implies not p" is true, we can add these to the truth table as extra columns:

p  q  p->q  not q   not p   not q -> not p
T  T   T      F       F           T
T  F   F      T       F           F
F  T   T      F       T           T
F  F   T      T       T           T

As we can see, the two columns corresponding to "p implies q" and "not q implies not p" assume precisely the same truth values in all possible scenarios. In other words, the two statements are logically equivalent.
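Since the table is finite, the equivalence can also be checked mechanically. Here is a sketch in Python (the `implies` helper is ours, encoding the truth table above):

```python
from itertools import product

def implies(p, q):
    # "p -> q" is false only when p is true and q is false; vacuous truth
    # makes it true whenever p is false.
    return (not p) or q

# Check every row of the truth table.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("p -> q and (not q) -> (not p) agree on all four rows")
```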

And so our proof technique for contrapositive becomes: rewrite the statement in its contrapositive form, and proceed to prove it by direct implication.

Examples and Exercises

Our first example will be completely straightforward and require nothing but algebra. Let's show that the function f(x) = 7x - 4 is injective. Contrapositively, we want to prove that if f(x) = f(x') then x = x'. Assuming the hypothesis, we start by supposing 7x - 4 = 7x' - 4. Applying algebra, we get 7x = 7x', and dividing by 7 shows that x = x' as desired. So f is injective.

This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with \neq the same way it does with equality. In fact, many of the things we take for granted about equality fail with inequality (for instance, if a \neq b and b \neq c it need not be the case that a \neq c). The contrapositive method allows us to use our algebraic skills in a straightforward way.

Next let's prove that the composition of two injective functions is injective. That is, if f: X \to Y and g: Y \to Z are injective functions, then the composition gf : X \to Z  defined by gf(x) = g(f(x)) is injective.

In particular, we want to prove that if x \neq x' then g(f(x)) \neq g(f(x')). Contrapositively, this is the same as proving that if g(f(x)) = g(f(x')) then x=x'. Well by the fact that g is injective, we know that (again contrapositively) whenever g(y) = g(y') then y = y', so it must be that f(x) = f(x'). But by the same reasoning f is injective and hence x = x'. This proves the statement.

This was a nice symbolic proof, but we can see the same fact in a picturesque form as well:

A composition of two injections is an injection.

If we maintain that any two arrows in the diagram can't have the same head, then following two paths starting at different points in X will never land us at the same place in Z. Since f is injective we have to travel to different places in Y, and since g is injective we have to travel to different places in Z. Unfortunately, this proof cannot replace the formal one above, but it can help us understand it from a different perspective (which can often make or break a mathematical idea).

Expanding upon this idea we give the reader a challenge: Let A, B, C be finite sets of the same size. Prove or disprove that if f: A \to B and g: B \to C are (arbitrary) functions, and if the composition gf is injective, then both of f, g must be injective.

Another exercise which has a nice contrapositive proof: prove that if A,B are finite sets and f:A \to B is an injection, then A has at most as many elements as B. This one is particularly susceptible to a "picture proof" like the one above. Although the formal name for the fact one uses to prove this is the pigeonhole principle, it's really just a simple observation.

Aside from inventing similar exercises with numbers (e.g., if ab is odd then both a and b are odd), this is all there is to the contrapositive method. It's just a direct proof disguised behind a fact about truth tables. Of course, as is usual in more advanced mathematical literature, authors will seldom announce the use of contraposition. The reader just has to be watchful enough to notice it.

Though we haven't talked about either the real numbers \mathbb{R} or proofs of existence and impossibility, we can still pose this interesting question: is there an injective function from \mathbb{R} \to \mathbb{N}? In truth there is not, but as of yet we don't have the proof technique required to show it. This will be our next topic in the series: the proof by contradiction.

Until then!

Methods of Proof — Direct Implication

I recently posted an exploratory piece on why programmers who are genuinely interested in improving their mathematical skills can quickly lose stamina or be deterred. My argument was essentially that they don’t focus enough on mastering the basic methods of proof before attempting to read research papers that assume such knowledge. Also, there are a number of confusing (but in the end helpful) idiosyncrasies in mathematical culture that are often unexplained. Together these can cause enough confusion to stymie even the most dedicated reader. I have certainly experienced it enough to call the feeling familiar.

Now I’m certainly not trying to assert that all programmers need to learn mathematics to improve their craft, nor that learning mathematics will be helpful to any given programmer. All I claim is that someone who wants to understand why theorems are true, or how to tweak mathematical work to suit their own needs, cannot succeed without a thorough understanding of how these results are developed in the first place. Function definitions and variable declarations may form the scaffolding of a C program while the heart of the program may only be contained in a few critical lines of code. In the same way, the heart of a proof is usually quite small and the rest is scaffolding. One surely cannot understand or tweak a program without understanding the scaffolding, and the same goes for mathematical proofs.

And so we begin this series focusing on methods of proof, and we begin in this post with the simplest such methods. I call them the “basic four,” and they are:

  • Proof by direct implication
  • Proof by contradiction
  • Proof by contrapositive, and
  • Proof by induction.

This post will focus on the first one, while introducing some basic notation we will use in the future posts. Mastering these proof techniques does take some practice, and it helps to have some basic mathematical content with which to practice on. We will choose the content of set theory because it’s the easiest field in terms of definitions, and its syntax is the most widely used in all but the most pure areas of mathematics. Part of the point of this primer is to spend time demystifying notation as well, so we will cover the material at a leisurely (for an experienced mathematician: aggravatingly slow) pace.

Recalling the Basic Definitions of Sets

Before one can begin to prove theorems about a mathematical object, one must have a thorough understanding of what the object is. In higher mathematics these objects can become prohibitively complicated and abstract, but for our purposes they are quite familiar.

A set is just a collection of things. Obviously there is a more mathematically rigorous formalization of what a set is, but for the purposes of learning to do proofs this is unnecessarily verbose and not constructive. For finite sets in particular and many kinds of infinite sets, absolutely nothing problematic can happen. Somehow, paradoxes only occur in really large sets. We will never need such monstrosities in our theoretical investigations or their applications. So in this post we'll just work with finite sets and infinite sets of integers so as to better focus on the syntactical scaffolding of a proof.

There are many ways to define sets. One of the easiest is simply to list its elements:

\displaystyle X = \left \{ 1,4,9,16,25 \right \}

The bracket notation simply means the stuff inside is a set. This particular set contains some square numbers. The order here is irrelevant, and sets are not allowed to have repetition. The programmer may think of this as a hash set, a Python set, or simply a list in which duplication is forbidden. The time it takes to apply operations to sets is of no concern for us, so any perspective is valid and multiple perspectives are encouraged for a deeper understanding. There are other ways to describe sets which were the inspiration for list comprehensions in many programming languages. For instance, one way to use the notation is

\displaystyle X = \left \{ n^2 : n = 1, \dots, 5 \right \}

Speaking out loud as I write this, I would say, “Let X be the set of n-squared as n goes from 1 to 5.” This would be a slightly more compact way of writing the set above containing the first five squares. The colon there simply separates the expression to be evaluated at each step from the iteration part. The corresponding list comprehension in Python would be

[n*n for n in range(1,6)]

Cousin to the names from programming, this mathematical notation is called "set-builder notation." Interestingly enough, mathematicians tend not to use strict set-builder notation. Except when the most stringent of formalities is required, they use words over these iterative expressions. This is part of the flexibility mathematics allows in defining things. We might not even know an iterative method for computing numbers, so we could say something like:

\displaystyle \left \{ n : n \textup{ is prime } \right \}

or

\displaystyle X = \left \{ n : n \textup{ is a square and } n \leq 1000 \right \}

There is one bit of non-rigorousness already here! Can you spot it?

The trick is that we don't have a definition of what it means to be a square. In particular, the observant reader would note that all positive numbers are "squares" if we allow for real numbers, and if we allow for complex numbers we get the negative numbers, too! This is a groan-inducing "gotcha," but it expresses how important context is in mathematical proof. Depending on the context, this set could be finite or infinite. Now suppose you encounter such a statement in a real work of mathematics, and there is no obvious clarification. You consider the two possible alternatives: either the set they're defining is all numbers, or they actually mean a specific set of integers which are squares. The active reader must infer that the former is pointless (why describe all numbers in this stupid way?), so the author must mean the latter. Of course, it is not always so obvious what the author intends to convey, but if all one has available is the printed word, this extra step of analyzing the author's own constructions is necessary.

And so let's formalize this just a little more.

\displaystyle X = \left \{ n : n \textup{ is the square of a positive integer and } n \leq 1000 \right \}
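Mirroring the earlier list-comprehension example, this formalized set translates almost directly to a Python set comprehension:

```python
# { n : n is the square of a positive integer and n <= 1000 }
X = {n * n for n in range(1, 32) if n * n <= 1000}

print(sorted(X)[:5])  # [1, 4, 9, 16, 25]
print(max(X))         # 961, which is 31^2 (32^2 = 1024 exceeds 1000)
```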

If we don't know how to express a condition, we can always just state it in words. The reader should practice unpacking set-builder expressions in the same way that one can unpack triply-nested Python list comprehensions, but don't let it become a hindrance to expressing an idea. It's just that set-builder notation is often easier to parse than words in proofs which construct sets in nit-picky ways. It's even common for someone presenting a proof to spend a good two minutes or more describing the contents of a tricky set-builder expression before continuing with a proof (the other day I was describing a technical construction to a colleague and we spent the better part of fifteen minutes interpreting and critiquing it).

One interesting bit of notation dictates membership of something inside a set. This is the "set membership" operator, denoted by the \in symbol (it comes from an odd way of writing a lower-case epsilon; for a short historical deviation, see this stackexchange question I asked on this topic). A programmer would probably feel most comfortable thinking of it as an expression evaluating to true or false, as in the Python code

7 in [1,3,4,5,8,9]

which evaluates to False. The semantics are identical in mathematics, only the notation doubles as a test and as an assertion of membership. That is, we could translate the above Python expression into mathematical notation:

Let X = \left \{ 1,3,4,5,8,9 \right \}. Then 7 \notin X.

The slash through the \in represents the test's negation. We could have also said, "Then it is not the case that 7 \in X."

Or we could declare that X must satisfy the property that 7 \in X (assuming nothing else we know about X would stop that from happening):

 Let X be any set such that 7 \in X.

Here we are defining this object X to be as general as possible while still having 7 as a member (perhaps we want to prove something special about all sets which contain 7, although I can't imagine what that might be).

The last definition we want to make before we start doing proofs is that of the notation for some common sets. The set of natural numbers is

\mathbb{N} = \left \{ 1, 2, 3, \dots \right \},

and the set of integers is likewise

\mathbb{Z} = \left \{ \dots, -2,-1,0,1,2, \dots \right \}.

The blackboard-bold N stands for "natural" and the blackboard-bold Z stands for "Zahlen," the German word for "numbers." (Zero is not a natural number, and if you don't like it you can go become a logician.)

Subsets, and Direct Implication

The first thing mathematicians want to do when they encounter a new mathematical object is to come up with useful ways to relate them to each other. One such way is by one set "containing" another. Of course, the sense that we mean "contain" requires specification. For on one hand, we could be saying that X, a set, contains as a member another set Y (nothing stops sets from being elements of other sets). But this is not what we mean. Instead, we want to say that X contains every element that Y does. This calls for a new definition:

Definition: A set Y is a subset of a set X if every element of Y is also an element of X. We denote the fact that Y is a subset of X with the "subset symbol" Y \subset X.

We often say, "Y contains X" to mean X \subset Y, or alternatively that, "X is contained in Y."

Just as a quick example, the set \left \{ 5,8,12 \right \} \subset \left \{ 1, \dots, 20 \right \} because the latter contains 5, 8, and 12 as well as all of its other contents.

This new relationship gives us a bit to play with. For the question now becomes: how can we prove that one set is a subset of another set? Of course if they are finite then we can just check every single element by hand, but as programmers we know better than to waste our time with manual verification. The technique becomes more obvious when we start to play with some examples.

Let me define two sets with set-builder notation as follows:

\displaystyle X = \left \{ n^2 : n \in \mathbb{N} \right \}

\displaystyle Y = \left \{ n^4 : n \in \mathbb{N} \right \}

Take a moment to be sure you understand what the members of these sets are before continuing.

The salient question is, "Is Y \subset X? And if so how can we prove it?" This is where direct implication comes in. What we're asking is whether it's true that no matter which element you pick from Y, it will be a member of X. In the most formal language, we are trying to prove the implication that for any number y, if y \in Y then y \in X.

This is a logical statement of the form p \to q (read "p implies q"). We will see three ways to prove a statement like this in our methods of proof series, but the most direct way to do it is called direct implication. In particular, we assume that p is true, and using only logical steps, reach the fact that q is true. That is, we go directly from "p" to "q." This method of proof is so common that there is usually no mention of it in a proof before one applies it! It is, in a sense, the "obvious" way to go about proving something.

And so let's prove the subset question above has an affirmative answer. To do this, we can start by fixing an arbitrary element y \in Y. By "arbitrary" we just mean that whatever follows in the proof will be true if you replace y by any concrete element of Y. Since there are infinitely many of them, this is really the only way we can proceed.

Now comes the tricky part. This is quite a simple problem, and the reader has probably already figured out the solution, but in general there is no way to proceed from here without creativity. Once one has figured out the scaffolding, one needs to make the "leap" from something which is obviously the "p" to something which is obviously the "q." This can take the form of playing with the problem until an idea sparks, or it can result in applying a number of previously proved facts.

For this problem, we can achieve the leap by expanding the definitions of what it means to be in each of these sets, and apply a lick of algebra. We know that y \in Y is a fourth-power of a number. That is, it is y = n^4 for some natural number n. By our knowledge of algebra,

n^4 = n \cdot n \cdot n \cdot n = (n \cdot n)^2

And so y is indeed the square of a number (it is the square of n^2), and hence y \in X. This concludes the proof that Y \subset X.
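We can sanity-check this on a finite sample in Python. The `is_square` helper is ours, using the integer square root from the standard library to avoid floating-point error; again, a finite check illustrates the theorem rather than proving it:

```python
from math import isqrt

def is_square(y):
    # Integer square-root test: y is a perfect square iff isqrt(y)^2 == y.
    r = isqrt(y)
    return r * r == y

# Every fourth power is also a perfect square, since n^4 = (n^2)^2.
fourth_powers = [n ** 4 for n in range(1, 100)]
print(all(is_square(y) for y in fourth_powers))  # True
```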

Let's do one more direct implication proof about subsets, and have it be a tad more abstract. That is, we won't know anything concrete about the members of the sets we're working with.

Proposition: Let X, Y, Z be arbitrary sets. If X \subset Y and Y \subset Z, then X \subset Z.

Before we continue, rewrite the thing we're trying to prove as a p \to q type statement. Then we'll start by assuming the "p" is true and try to conclude the "q." In fact, this statement can be really expanded into one of the form

((a \to b) \textup{ and } (b \to c)) \to (a \to c)

In doing this we're making a serious notational mess of a truly simple statement, bordering on an act of violence. On the other hand, it helps to be able to write things out in complete detail when one doesn't understand one part of it. The active reader will determine exactly what the propositions a, b, c represent in the above implication, and this will make the following proof routine to follow.

Another important point to make is that if the proposition is still unclear at this point (or it is unclear how to proceed), then the best line of attack is to write down lots and lots of examples. This is the kind of thing that almost always goes unsaid in any mathematical proof, because one simply assumes that if the reader does not understand what is being asked then they will work with examples until they understand.

For instance, we can come up with a very simple example with finite sets:

X = \left \{ 1, 5, 9 \right \}

Y = \left \{ 1, 2, 5, 7, 9 \right \}

Z = \left \{ 1,2,3,4,5,6,7,8,9 \right \}

In this special case the statement of the theorem is obvious: the conditions hold as expected, and the result holds as well: X \subset Z. We can also do it with some infinite sets:

X = \left \{ 4,8,12,16, \dots \right \}

Y = \left \{ 2, 4, 6, 8, 10, \dots \right \}

Z = \left \{ 1, 2, 3, 4, 5, \dots \right \}

Indeed, there is a very simple picture proof hiding under the surface here! It is just:

A picture of the transitivity of subsets: X nested inside Y, which is nested inside Z.

From this picture it should be obvious: if X is contained in Y and Y contained in Z then X is obviously contained in Z. With all of this intuitive knowledge under our belt, we can proceed to the formal proof of the statement.

Proof. Suppose that x is an arbitrary element of X. We want to prove that x \in Z. Applying the first fact that X \subset Y, we know that every element of X is an element of Y, so x \in Y. Further, by the fact that Y \subset Z, it follows that x \in Z, as desired. \square

This was a bit more formal and a bit more condensed than our last proof, but the syntactical structure was the same. We wanted to prove a p \to q fact, and we started by assuming the "p" part and deriving the "q" part. All in a day's work!
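As an aside for the programmer: Python's built-in sets implement the subset relation directly as the `<=` operator, so the finite example from earlier can be checked in code (a demonstration of the proposition, not a replacement for the proof):

```python
# The finite example from above: multiples of 4, even numbers, all naturals
# up to 16.
X = {4, 8, 12, 16}
Y = {2, 4, 6, 8, 10, 12, 14, 16}
Z = set(range(1, 17))

print(X <= Y and Y <= Z)  # both hypotheses hold
print(X <= Z)             # and so does the conclusion
```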

Here are some more examples of statements that you can prove by direct implication:

  1. Let A, B, C be sets. Show that if A \subset B and B \subset C and C \subset A, then A = B = C (where by equality of sets we mean they have precisely the same elements; as a side note, can you define equality in terms of set containment?).
  2. The union of two sets A, B is the set A \cup B defined by \left \{ x : x \in A \textup{ or } x \in B \right \}. Prove A \subset A \cup B.
  3. The intersection of two sets A, B is the set A \cap B defined by \left \{ x : x \in A \textup{ and } x \in B \right \}. Prove A \cap B \subset A.
  4. Let A,B,C be sets, and suppose that A \subset B. Prove that A \cap C \subset B \cap C and A \cup C \subset B \cup C.

Here are two more exercises from number theory that might require a bit more inspiration:

  1. Prove that every odd integer is a difference of two square integers.
  2. Prove that if a,b are integers then a^2 + b^2 \geq 2ab.

As usual, the first step should be writing down examples, and then attempting a proof.

Proofs by direct implication are very often straightforward, but as a reader we must be cognizant of when it's being applied. As is often the case in mathematics, the precise method of proof goes unstated except in pedagogy.

Just to get a taste of where else proof by direct implication can show up, we will prove something about functions (in a programming language).

Definition: Let f be a function in a programming language. We call f injective if different inputs to f produce different outputs (and every input produces some output). For the sake of this post, we assume the method of comparing things for equality is fixed across all objects in the language (say, via Java's object comparison method equals(), or Python's overloadable == operator).

Proposition: If f, g are injective functions which can be composed (all outputs of f are valid inputs to g), then the composition gf defined by gf(x) = g(f(x)) is injective.

Proof. Let x,y be two different inputs to f. They are also inputs to gf. Then the values f(x), f(y) are different, and they are also inputs to g. So g(f(x)) and g(f(y)) are different, proving gf is injective. \square
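A sketch of the proposition in Python, reusing the injective f(x) = 7x - 4 from the contrapositive examples and brute-force checking on a finite sample (the `compose` and `is_injective_on` helpers are ours, and the finite check only illustrates the proof above):

```python
def compose(g, f):
    """Return the composition gf, where gf(x) = g(f(x))."""
    return lambda x: g(f(x))

def is_injective_on(f, domain):
    # Injective on a finite domain iff no output is repeated.
    outputs = [f(x) for x in domain]
    return len(outputs) == len(set(outputs))

f = lambda x: 7 * x - 4   # injective (proved earlier by contrapositive)
g = lambda x: x ** 3      # injective on the integers
gf = compose(g, f)

inputs = range(-50, 51)
print(is_injective_on(f, inputs) and is_injective_on(g, inputs))  # True
print(is_injective_on(gf, inputs))                                # True
```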

Next Time, and a Reference

Next time we'll investigate the next simplest way to prove statements of the form p \to q: the proof by contrapositive implication. This will involve a slight digression into the general meaning of p \to q. After that we'll cover contradiction, and then proofs by induction.

Of course, this series is woefully incomplete by the nature of blogs. For those who are interested in proof techniques for the sake of transitioning to advanced mathematics, I recommend Paul Halmos's Naive Set Theory. For those interested in getting a better idea of the beautiful nature of proofs (that is, mathematics done for the sake of finding clever and beautiful arguments) and want a more casual read, I recommend Paul Lockhart's Measurement. Both are valuable and well-written texts.

Until next time!