Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense, proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove it’s not possible to do something. Perhaps it’s part of human nature to believe that nothing is so impossible as to be beyond the realm of possibility. But perhaps it’s more surprising that the main line of attack in proving something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder whether this will always be the case for any party. And so the problem is born: prove or disprove that at every party of n people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are n people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is n-1 (if you’re friends with everybody). So there are n distinct numbers, and n people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.
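For small parties, one can convince oneself of the theorem by brute force. Here is a quick sketch (the function names are my own) that enumerates every possible friendship graph on n people and checks that some two people always share a friend count:

```python
from itertools import combinations

def has_degree_collision(n, edges):
    # Count each person's friends and check whether two counts coincide.
    degree = [0] * n
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return len(set(degree)) < n

def check_all_parties(n):
    # Enumerate all 2^(n choose 2) friendship graphs on n people.
    pairs = list(combinations(range(n), 2))
    return all(
        has_degree_collision(n, [p for i, p in enumerate(pairs) if mask >> i & 1])
        for mask in range(2 ** len(pairs))
    )
```

For n = 5 this checks all 1,024 possible parties; of course, no amount of brute force replaces the proof above, which covers every n at once.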

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication p \to q:

p  q  p->q
T  T   T
T  F   F
F  T   T
F  F   T

We notice that an implication can only be false if the hypothesis p is true and the consequence q is false. This is the motivation for a proof by contradiction: if we show this case can’t happen, then there’s no other option: the statement p \to q must be true. In other words, if supposing “p and not q” is true implies something which we know to be false, then by the very same truth table argument it must be that either “q” is true or “p” is false. In any of these cases “p implies q” is true.

But all of this truth table juggling really takes us away from the heart of the method. Let’s do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if x is a number such that x^2 = 2, then it cannot be true that x = p/q where p,q are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that x = p/q is a ratio of integers. Then we can square both sides to get 2 = x^2 = p^2 / q^2, and rearrange to arrive at 2q^2 = p^2. Now comes the tricky part: if a number divides p, then its square divides p^2, and so the largest power of 2 dividing any square number must be even (and 2^0 counts as an even power). Now in the equation 2q^2 = p^2 the right hand side is a square, so the largest power of 2 dividing it is even; but the left hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of 2). But the two sides are equal! They can’t possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that x is rational.
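One can watch the parity argument in action numerically. This sketch (the helper name is my own) computes the largest power of 2 dividing a number, and confirms that it is always even for squares and always odd for twice a square:

```python
def largest_power_of_two(n):
    # Return the exponent k of the largest 2^k dividing n (n >= 1).
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Squares carry an even power of 2; twice a square carries an odd power.
for q in range(1, 50):
    assert largest_power_of_two(q * q) % 2 == 0
    assert largest_power_of_two(2 * q * q) % 2 == 1
```

So if 2q^2 = p^2 ever held for integers, the two sides would disagree on this parity, which is impossible.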

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.

Oftentimes the jump from “suppose to the contrary” to the contradiction requires a relatively small piece of insight, but at other times it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our “having no friends” contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it’s used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced: [1] [2] [3] [4]) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like “1 = 0”, oftentimes the “contradiction” at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: p_1 , \dots, p_n. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let q = p_1 \dots p_n + 1. Indeed, as the reader will prove in the exercises below, every number has a prime divisor, but it is not hard to see that no p_i divides q. This is because no matter what some number x is, no number except 1 can divide both x and x-1 (one can prove this fact by contradiction if it is not obvious), and we already know that all the p_i divide q-1 . So q must have some prime divisor which was not in the list we started with. This contradicts that we had a complete list of primes to begin with. And so there must be infinitely many primes.
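This proof is constructive enough to run. Given any finite list of primes, the sketch below (the names are my own) forms q = p_1 \dots p_n + 1 and extracts a prime divisor of q, which is guaranteed to be missing from the list:

```python
def smallest_prime_factor(q):
    # Trial division; if nothing up to sqrt(q) divides q, then q is prime.
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d
        d += 1
    return q

def prime_not_in(primes):
    # Multiply the list together, add one, and factor the result.
    q = 1
    for p in primes:
        q *= p
    q += 1
    new_p = smallest_prime_factor(q)
    assert all(new_p % p != 0 for p in primes)  # no p_i divides q, since each divides q - 1
    return new_p
```

For example, prime_not_in([2, 3, 5, 7, 11, 13]) factors 30031 = 59 × 509 and returns 59, a prime absent from the starting list. Note that q itself need not be prime, only its prime divisors need to be new.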

Here are some exercises to practice the proof by contradiction:

  1. Prove that the base 2 logarithm of 3 is irrational.
  2. More generally, prove that \log_a(b) is irrational if there is any prime p dividing a but not b, or vice versa.
  3. Prove the fundamental theorem of arithmetic, that every natural number n \geq 2 is a product of primes (hint: inspect the smallest failing example).

A Few More Words on Functions and Sets

Last time we defined what it means for a function f: X \to Y on sets to be injective: different things in X get mapped to different things in Y. There is another interesting notion called surjectivity, which says that f “hits” everything in Y with something in X.

Definition: A function f: X \to Y is surjective if for every element y \in Y there is an x \in X for which f(x) = y. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we can use surjections to prove something nice about the sets they involve: if f:X \to Y is a surjection, then X has at least as many elements as Y. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if f: X \to Y is injective, then there are at least as many elements of Y as there are of X. Combining the two notions, if a single function X \to Y is both injective and surjective, then X and Y have exactly the same size.

Definition: A function f: X \to Y which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection X \to Y, then X and Y have the same number of elements. The converse is also true, if two finite sets have the same size one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven’t covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!
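For finite sets these properties are entirely mechanical to check. Here is a minimal sketch (the function names are my own), representing a function as a Python callable restricted to an explicit finite domain:

```python
def is_injective(f, domain):
    # No two domain elements map to the same value.
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)

def is_surjective(f, domain, codomain):
    # Every codomain element is hit by some domain element.
    return {f(x) for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)
```

For instance, is_bijective(lambda x: 2 * x, {1, 2, 3}, {2, 4, 6}) holds, while lambda x: x * x fails injectivity on {-1, 1}.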

Definition: Two arbitrary sets X,Y are said to have the same cardinality if there exists a bijection f : X \to Y. If there is a bijection f: \mathbb{N} \to X then X is said to have countably infinite cardinality, or simply countably infinite. If no such bijection exists (and X is not a finite set), then X is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the natural numbers themselves. If X is the set of even natural numbers, just construct a function \mathbb{N} \to X by sending x \mapsto 2x. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the “maps to” arrow \mapsto. The reader can think of this as the mathematician’s version of a lambda expression; the above map would be written in Python as lambda x: 2*x.

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as not even numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by \mathbb{Q}, and typically this proof is done by first finding a bijection from \mathbb{N} \to \mathbb{Z} and then from \mathbb{Z} \to \mathbb{Q}. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will be satisfied for bijections.

Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem ‘non-trivial’ might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don’t use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, ‘trivial’ is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we’re learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called “highly non-trivial” if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we first have to say what the real numbers are. We won’t actually work with the entire set of real numbers, but with a “small” subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set [0,1] is the set of all infinite sequences of zeroes and ones, interpreted as the set of all binary decimal expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence to either be an infinite tuple (falling back on our definition of a tuple as a set), or we can define it to be a function f : \mathbb{N} \to \left \{ 0, 1 \right \}. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence (b_i) = (b_1, b_2, \dots) is a function b : \mathbb{N} \to \left \{ 0, 1 \right \} where we denote by b_i the value b(i).

So now we can state the theorem.

Theorem: The set [0,1] of infinite binary sequences is uncountably infinite. That is, there is no bijection \mathbb{N} \to [0,1].

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that f : \mathbb{N} \to [0,1] is a bijection. Then we can list the values of f in a table. Since we want to use b_i for all of the values of f, we will call

\displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots

This gives us the following infinite table:

\displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking c_i = b_{i,i} for all i. That is, we are using all of the diagonal elements of the table above. Now take each c_i and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply b \mapsto 1-b to each entry). The important fact about this new sequence is that it differs from every entry in this table. By the way we constructed it, no matter which n one chooses, this sequence differs from the table entry f(n) at digit n (and perhaps elsewhere). Because of this, it can’t occur as an entry in the table. So we just proved our function f isn’t surjective, contradicting our original hypothesis, and proving the theorem.
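The construction can be carried out concretely on any finite chunk of the table. In this sketch (the names are my own), the table holds the first three digits of the first three sequences, and flipping the diagonal produces a prefix guaranteed to differ from row i at digit i:

```python
def diagonal_complement(table):
    # Flip the diagonal bits; the result differs from row i at position i.
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0],  # first three digits of f(1)
    [1, 1, 1],  # first three digits of f(2)
    [0, 0, 0],  # first three digits of f(3)
]
evil_sequence = diagonal_complement(table)  # differs from every row
```

No finite computation captures the full infinite table, of course, but the rule is the same at every index, which is exactly why the constructed sequence escapes the entire list.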

The discovery of this fact was an important step forward in the history of mathematics. The particular technique, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It’s a more specialized proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting that diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lend themselves nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there is an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the final of the four basic methods of proof: proof by induction.

Until then!

Set Theory – A Primer

A student’s first exposure to rigorous mathematics is often through set theory, as originally studied by Georg Cantor. Following Cantor, we will not treat set theory axiomatically (as in ZF set theory); rather, we will take the definition of a set for granted, and allow any operation to be performed on a set. This will be clear when we present examples, and it will be clear why this is a bad idea when we present paradoxes.

The Basics of Sets

Definition: A set S is a collection of distinct objects, each of which is called an element of S. For a potential element x, we denote its membership in S and lack thereof by the infix symbols \in, \notin, respectively. The proposition x \in S is true if and only if x is an element of S.

Definition: The cardinality of S, denoted |S|, is the number of elements in S.

The elements of a set can in principle be anything: numbers, equations, cats, morals, and even (especially) other sets. There is no restriction on their size, and the order in which we list the objects is irrelevant. For now, we will stick to sets of numbers; later we will move on to sets of other sets.

There are many ways to construct sets, and for finite sets it suffices to list the objects:

S = \left \{ 0, 2,4,6,8,10 \right \}

Clearly this set has cardinality six. The left and right curly braces are standard notation for the stuff inside a set. Another shorter construction is to simply state the contents of a set (let S be the set of even numbers between 0 and 10, inclusive). Sometimes, it will be very important that we construct the sets in a more detailed way than this, because, as we will see, sets can become rather convoluted.

We may construct sets using an implied pattern, i.e. for the positive evens:

E = \left \{ 2, 4, 6, 8, \dots \right \}

For now, we simply allow that this set has infinite cardinality, though we will revisit this notion in more detail later. In this way we define two basic sets of numbers:

\mathbb{N} = \left \{ 1, 2, 3, \dots \right \} \\ \mathbb{Z} = \left \{ 0, -1, 1, -2, 2, \dots \right \}

We name \mathbb{N} the natural numbers, and \mathbb{Z} the integers. Yet another construction allows us to populate a set with all values that satisfy a particular equation or proposition. We denote this \left \{ \textup{variable} | \textup{condition} \right \}. For example, we may define \mathbb{Q}, the rational numbers (fractional numbers) as follows:

\displaystyle \mathbb{Q} = \left \{ \frac{p}{q} | p \in \mathbb{Z} \textup{ and } q \in \mathbb{Z}, q \neq 0 \right \}

This is not quite a complete description of how rational numbers work: some fractions are “equivalent” to other fractions, like 2/4 and 1/2. There is an additional piece of data called a “relation” that’s imposed on this set, and any two things which are related are considered equivalent. We’re not going to go into the details of this, but the interested reader should look up an equivalence relation.

Next, we want to describe certain pieces of sets:

Definition: A set R is a subset of a set S, denoted R \subset S, if all elements of R are in S. i.e., for all x, x \in R implies x \in S.

And we recognize that under our standard equality of numbers (i.e. \frac{4}{1} = 4), we have \mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q}.

We may now define equality on sets, extending the natural idea that two sets are equal if they contain precisely the same elements:

Definition: Two sets R,S are equal if R \subset S and S \subset R.

A natural set to construct now is the power set of a given set:

Definition: The power set of S, denoted P(S), is the set of all subsets of S, i.e. P(S) = \left \{ R | R \subset S \right \}.

Elements of this set are sets themselves, and there are two trivial yet important sets to recognize in P(S): S itself, since S \subset S, and the empty set \left \{ \right \}, which contains no elements and is vacuously a subset of every set.

For a finite set S, power sets are strictly larger in size, since there exists a singleton set \left \{ x \right \} \in P(S) for each x \in S. As an exercise for the reader, determine the size of P(S) for any finite set S, expressed in terms of |S|. For infinite sets, we simply admit that their power sets are also infinite, since we don’t yet have a way to describe “larger” infinities.
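For experimenting with that exercise, here is a small sketch (the function name is my own) that builds P(S) for a finite S, using frozensets so that the subsets can themselves be elements of a set:

```python
from itertools import combinations

def power_set(s):
    # Collect all subsets of each size r = 0, 1, ..., |S|.
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}
```

Printing len(power_set(S)) for a few small sets should suggest the formula the exercise asks for.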

Building Sets From Other Sets

There are a couple of nice operations on sets, which are rather trivial to define.

Definition: The union of two sets R,S, denoted R \cup S, is \left \{ x | x \in S \textup{ or } x \in R \right \}.

Definition: The intersection of two sets R,S, denoted R \cap S, is \left \{ x | x \in S \textup{ and } x \in R \right \}.

As an exercise, try to prove that |S \cup R| = |S| + |R| - |S \cap R|.
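Python’s built-in sets implement both operations directly (| for union, & for intersection), so the exercise’s identity can at least be sanity-checked on examples before proving it:

```python
R = {1, 2, 3, 4}
S = {3, 4, 5, 6, 7}

union = R | S          # every element in R or S
intersection = R & S   # every element in both R and S

# The inclusion-exclusion identity |S ∪ R| = |S| + |R| - |S ∩ R|:
assert len(union) == len(S) + len(R) - len(intersection)
```

The intuition for the proof is visible in the example: summing |S| and |R| counts the shared elements twice, so the intersection must be subtracted once.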

The next definition requires one to remember what an ordered tuple (a_1,a_2, \dots , a_n) is. Specifically, a tuple is just like a set, except that it allows repetition and respects the order in which its elements are presented. So (a,b) \neq (b,a) \neq (a,a,b).

Definition: The direct product (or simply product) of two sets R,S, denoted R \times S, is \left \{ (x,y) | x \in R \textup{ and } y \in S \right \}.

This is just how we define the Cartesian plane \mathbb{R}^2 = \mathbb{R} \times \mathbb{R} as ordered pairs of real numbers. We can extend this even further by defining S^n to be the set of all n-tuples of elements in S.

Functions, and Their -Jections

Now that we have sets and ways to build interesting sets, we may define mathematical objects which do stuff with them.

Definition: A relation \sim on R and S is a subset of R \times S. Notationally, we write a \sim b as shorthand for (a,b) \in \sim .

Relations are natural generalizations of = on numbers. In general relations need no additional properties, but they are not very interesting unless they have some. For more discussion on relations, we point the reader to the Wikipedia page on equivalence relations. As an exercise for the reader, prove that set equality (defined above) is an equivalence relation, as expected.

Now, we get to the meat of our discussion on sets: functions.

Definition: A function f : S \to R is a relation on S and R, a subset of S \times R, with the additional property that for each x \in S, there is exactly one element of the form (x, \cdot ) \in f.

Colloquially, functions ‘accept’ a value from S and output something in R. This is why we may only have one ordered pair for each x \in S, because functions are deterministic. Furthermore, we must be able to put every value from S into our function, so no values x \in S may be without a corresponding element of f.

We have special notation for functions, which was established long before set theory was invented. If x \in S, we write f(x) to denote the corresponding element y in (x,y) \in f. In addition, we say that a value x maps to y to mean f(x)=y. If the function is implicitly understood, we sometimes just write x \mapsto y. Then we have the following definitions:

Definition: The domain of a function f: S \to R, sometimes denoted \textup{dom}(f), is the set of input values S. The codomain, denoted \textup{codom}(f), is the set R. Since not every value in R must be in a pair in f, we call the subset of values of R which are produced by some input in S the range of f. Rigorously, the range of f is \left \{ f(x) | x \in S \right \}.

Now we may speak of some “interesting” functions:

Definition: A function f : S \to R is a surjection if its range is equal to its codomain. In other words, for every y \in R, there is some x \in S with f(x) = y, or, equivalently, f(S) = R.

Note that f being a surjection on finite sets implies that the domain is at least as big as the codomain. Though it seems trivial, we can use functions in this way to reason about the cardinalities of their domains and codomains.

Definition: A function f : S \to R is an injection if no two different values x \in S map to the same y \in R. In other words, if f(a) = f(b), then a=b.

Similarly, for finite domains/codomains an injection forces the codomain to be at least as big as the domain in cardinality.

Now, we may combine the two properties to get a very special kind of function.

Definition: A function f: S \to R is a bijection if it is both a surjection and an injection.

A bijection specifically represents a “relabeling” of a given set, in that each element in the domain has exactly one corresponding element in the codomain, and each element in the codomain has exactly one corresponding element in the domain. Thus, the bijection represents changing the label x into the label f(x).

Note that for finite sets, since a bijection is both a surjection and an injection, the domain and codomain of a bijection must have the same cardinality! What’s better, we can extend this idea to infinite sets.

To Infinity, and Beyond! (Literally)

Definition: Two infinite sets have equal cardinality if there exists a bijection between them.

Now we will prove that two different infinite sets, the natural numbers and the integers, have equal cardinality. This is surprising: the sets are not equal, and indeed one is a proper subset of the other, yet they have equal cardinality! So here we go.

Define f : \mathbb{N} \to \mathbb{Z} as follows. Let f(1) = 0, f(2) = 1, f(3) = -1 \dots. Continuing in this way, we see that f(2k+1) = -k, and f(2k) = k for all k \in \mathbb{N}. This is clearly a bijection, and hence |\mathbb{N}| = |\mathbb{Z}|.
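This pairing is concrete enough to implement and test on an initial segment (a quick sketch; the even/odd split matches the formulas above):

```python
def f(n):
    # f(2k) = k and f(2k + 1) = -k, so f(1) = 0, f(2) = 1, f(3) = -1, ...
    return n // 2 if n % 2 == 0 else -(n // 2)

# The first 2m + 1 naturals hit exactly the integers -m, ..., m, once each.
m = 50
assert {f(n) for n in range(1, 2 * m + 2)} == set(range(-m, m + 1))
```

The set comparison checks both properties at once on this segment: the 2m + 1 inputs produce 2m + 1 distinct outputs (injectivity) covering all of -m, ..., m (surjectivity onto that window).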

We can extend this construction to a bijection between any infinite set S of positive numbers and the set \left \{ \pm x | x \in S \right \}.

Let’s try to push bijections a bit further. Let’s see if we can construct a bijection between the natural numbers and the positive rationals (and hence the set of all rationals). If integers seemed bigger than the naturals, then the rational numbers must be truly huge. As it turns out, the rationals also have cardinality equal to the natural numbers!

It suffices to show that the natural numbers are equal in cardinality to the nonnegative rationals. Here is a picture describing the bijection:

We arrange the rationals into a grid, such that each blue dot above corresponds to some \frac{p}{q}, where p is the x-coordinate of the grid, and q is the y-coordinate. Then, we assign to each blue dot a nonnegative integer in the diagonal fashion described by the sequence of arrows. Note these fractions are not necessarily in lowest terms, so some rational numbers correspond to more than one blue dot. To fix this, we simply eliminate the points (p,q) whose greatest common divisor is not 1. Then, in assigning the blue dots numbers, we proceed in the same fashion, skipping the places where we deleted bad points.
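The diagonal walk is straightforward to code. This sketch (names are my own) enumerates the positive rationals diagonal by diagonal, skipping fractions not in lowest terms just as the bad blue dots are skipped:

```python
from math import gcd

def first_rationals(count):
    # Walk the diagonals p + q = 2, 3, 4, ... and keep only lowest terms.
    found = []
    s = 2
    while len(found) < count:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                found.append((p, q))
                if len(found) == count:
                    break
        s += 1
    return found
```

The first few pairs come out as 1/1, 1/2, 2/1, 1/3, 3/1, and a duplicate like 2/4 never appears; every positive rational in lowest terms is eventually reached, which is the heart of the bijection.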

This bijection establishes that the natural numbers and rationals have identical cardinality. Despite how big the rationals seem, they are just a relabeling of the natural numbers! Astounding.

With this result it seems like every infinite set has cardinality equal to the natural numbers. It should be totally easy to find a bijection between the naturals and the real numbers \mathbb{R}.

Unfortunately, try as we might, no such bijection exists. This was a huge result proven by Georg Cantor in his study of infinite sets, and its proof has become a staple of every mathematics education, called Cantor’s Diagonalization Proof.

First, we recognize that every real number between zero and one has a representation in base 2 as an infinite sequence of 0’s and 1’s, and it suffices to show that no bijection exists onto this subset. Thus, if there were such a bijection between the natural numbers and the reals, we could list these reals in order of their corresponding naturals, as:

1 \mapsto d_{1,1}d_{1,2}d_{1,3} \dots \\ 2 \mapsto d_{2,1}d_{2,2}d_{2,3} \dots \\ 3 \mapsto d_{3,1}d_{3,2}d_{3,3} \dots
\vdots

Here each d_{i,j} \in \left \{ 0,1 \right \} corresponds to the jth digit of the ith number in the list. Now we may build a real number r = d_{1,1}d_{2,2}d_{3,3} \dots, the diagonal elements of this matrix. If we take r and flip each digit from a 0 to a 1 and vice versa, we get the complement of r, call it r'. Notice that r' differs from every real number at some digit, because the ith real number shares digit i with r, and hence differs from r' at the same place. But r' is a real number, so it must occur somewhere in this list! Call that place k \mapsto r'. Then, r' differs from r' at digit k, a contradiction.

Hence, no such bijection can exist. This is amazing! We have just found two infinite sets which differ in size! Even though the natural numbers are infinitely large, there are so many mind-bogglingly more real numbers that it is a larger infinity! We need new definitions to make sense of this:

Definition: An infinite set S is countably infinite if there exists a bijection \mathbb{N} \to S. If no such bijection exists, we call it uncountably infinite, or just uncountable.

Georg Cantor went on to prove this in general: there cannot exist a bijection from any set onto its power set (we may realize the reals as the power set of the naturals by a similar decimal expansion). He did so simply by extending the diagonalization argument. But since we are dealing with infinite sets, we need even more refined notions of “how hugely infinite” these infinite sets can be. These new measures of set cardinality were also invented by Cantor, and they are called transfinite numbers. Their investigation is beyond the scope of this post, but we encourage the reader to follow up on this fascinating subject.

We have still only scratched the surface of set theory, and we have even left out a lot of basic material to expedite our discussion of uncountability. There is a huge amount of debate that resulted from Cantor’s work, and it inspired many to further pick apart the foundations of mathematics, leading to more rigorous formulations of set theory, and extensions or generalizations such as category theory.

Sets of Sets of Sets, and so on Ad Insanitum

We wrap up this post with a famous paradox, which makes one question whether all of the operations performed in set theory are justified. It is called Russell’s Paradox, after Bertrand Russell.

Suppose we define a set S, which contains itself as an element. This does not break any rules of Cantor’s set theory, because we said a set could be any collection of objects. We may even speak of the set of all sets which contain themselves as elements.

Now let us define the set of all sets which do not contain themselves. Call this set X. It must be true that either X \in X or X \notin X. If X \in X, then X is contained in itself, obviously, but then by the definition of X, it does not contain itself as an element. This is a contradiction, so X \notin X. However, if X \notin X, then X satisfies the definition of a set which does not contain itself, so X \in X. Again, a contradiction.

This problem has no resolution within Cantor’s world of sets. For this reason (and other paradoxes based on wild constructions of sets), many have come to believe that Cantor’s set theory is not well-founded. That is not to say his arguments and famous results are wrong, but rather that they need to be reproved within a more constrained version of set theory, in which such paradoxes cannot happen.

Such a set theory was eventually found that bypasses Russell’s paradox, and it is called Zermelo-Fraenkel set theory. But even that was not enough! Additional statements, like the Axiom of Choice (which nevertheless leads to some counter-intuitive theorems), were found that can be neither proved nor disproved from the other axioms of ZF set theory.

Rather than give up all that work on axiomatizing set theory, most mathematicians today accept the Axiom of Choice, work around any oddities that arise, and do their mathematical work in the resulting theory, ZFC (ZF + Choice).

So along with paradoxical curiosities, we have laid all the foundation necessary for reasoning about countability and uncountability, which has already shown up numerous times on this blog.

Until next time!