Zero-One Laws for Random Graphs

Last time we saw a number of properties of graphs, such as connectivity, where the probability that an Erdős–Rényi random graph $ G(n,p)$ satisfies the property is asymptotically either zero or one. And this zero or one depends on whether the parameter $ p$ is above or below a universal threshold (that depends only on $ n$ and the property in question).

To remind the reader, the Erdős–Rényi random “graph” $ G(n,p)$ is a distribution over graphs that you draw from by including each edge independently with probability $ p$. Last time we saw that the existence of an isolated vertex has a sharp threshold at $ (\log n) / n$, meaning if $ p$ is asymptotically smaller than the threshold there will certainly be isolated vertices, and if $ p$ is larger there will certainly be no isolated vertices. We also gave a laundry list of other properties with such thresholds.

One might want to study this phenomenon in general. Even if we might not be able to find all the thresholds we want for a given property, can we classify which properties have thresholds and which do not?

The answer turns out to be mostly yes! For large classes of properties, there are proofs that say things like, “either this property holds with probability tending to one, or it holds with probability tending to zero.” These are called “zero-one laws,” and they’re sort of meta theorems. We’ll see one such theorem in this post relating to constant edge-probabilities in random graphs, and we’ll remark on another at the end.

Sentences about graphs in first order logic

A zero-one law generally works by defining a class of properties, and then applying a generic first/second moment-type argument to every property in the class.

So first we define what kinds of properties we’ll discuss. We’ll pick a large class: anything that can be expressed in first-order logic in the language of graphs. That is, any finite logical statement that uses existential and universal quantifiers over variables, and whose only relation (test) is whether an edge exists between two vertices. We’ll call this test $ e(x,y)$. So you write some sentence $ P$ in this language, you take a graph $ G$, and you can ask whether $ P(G) = 1$, i.e. whether the graph satisfies the sentence.

This seems like a really large class of properties, and it is, but let’s think carefully about what kinds of properties can be expressed this way. Clearly the existence of a triangle can be written this way, it’s just the sentence

$ \exists x,y,z : e(x,y) \wedge e(y,z) \wedge e(x,z)$

I’m using $ \wedge$ for AND, and $ \vee$ for OR, and $ \neg$ for NOT. Similarly, one can express the existence of a clique of size $ k$, or the existence of an independent set of size $ k$, or a path of a fixed length, or whether there is a vertex of maximal degree $ n-1$.
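If you like to think in code, the triangle sentence above can be evaluated by brute force over all triples of vertices. Here is a Python sketch (the edge-set representation and the names are just for illustration):

```python
from itertools import combinations

def e(G, x, y):
    """The edge relation e(x,y): True if {x,y} is an edge of G."""
    return frozenset((x, y)) in G

def has_triangle(vertices, G):
    """Brute-force evaluation of: exists x,y,z with e(x,y) & e(y,z) & e(x,z)."""
    return any(e(G, x, y) and e(G, y, z) and e(G, x, z)
               for x, y, z in combinations(vertices, 3))

# A 4-cycle has no triangle; adding a chord creates one.
V = [0, 1, 2, 3]
C4 = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert not has_triangle(V, C4)
assert has_triangle(V, C4 | {frozenset((0, 2))})
```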

Here’s a question: can we write a formula which will be true for a graph if and only if it’s connected? Well such a formula seems like it would have to know about how many vertices there are in the graph, so it could say something like “for all $ x,y$ there is a path from $ x$ to $ y$.” It seems like you’d need a family of such formulas that grows with $ n$ to make anything work. But this isn’t a proof; the question remains whether there is some other tricky way to encode connectivity.

But as it turns out, connectivity is not a property you can express in the first-order language of graphs. We won’t prove it here, but we will note at the end of the article that connectivity belongs to a different class of properties for which one can prove a similar zero-one law.

The zero-one law for first order logic

So the theorem about first-order expressible sentences is as follows.

Theorem: Let $ P$ be a property of graphs that can be expressed in the first order language of graphs (with the $ e(x,y)$ relation). Then for any constant $ p$, the probability that $ P$ holds in $ G(n,p)$ has a limit of zero or one as $ n \to \infty$.

Proof. We’ll prove the simpler case of $ p=1/2$; the general case is analogous. What we’ll do is define a countably infinite family of first-order formulas $ \varphi_{k,l}$, and argue that they form a sort of “basis” for all first-order sentences about graphs.

First let’s describe the $ \varphi_{k,l}$. For any $ k,l \in \mathbb{N}$, the sentence asserts that for every set of $ k$ vertices and every disjoint set of $ l$ vertices, there is some other vertex adjacent to all of the first $ k$ but to none of the last $ l$.

$ \displaystyle \varphi_{k,l} : \forall x_1, \dots, x_k, y_1, \dots, y_l \; \exists z : \\ e(z,x_1) \wedge \dots \wedge e(z,x_k) \wedge \neg e(z,y_1) \wedge \dots \wedge \neg e(z,y_l)$.

(Here we implicitly restrict attention to tuples where the $ x_i$ and $ y_j$ are all distinct; one could add the corresponding inequalities to the formula.)

In other words, these formulas encapsulate every possible incidence pattern for a single vertex. It is a strange set of formulas, but they have a very nice property we’re about to get to. So for a fixed $ \varphi_{k,l}$, what is the probability that it’s false on $ n$ vertices? We want an upper bound which tends to zero, showing that each $ \varphi_{k,l}$ is true with probability tending to 1.
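For the programmatically inclined, here is a sketch in Python of checking a $ \varphi_{k,l}$ on a concrete graph by brute force (feasible only for small $ k, l$; the adjacency-matrix representation and function names are illustrative):

```python
import random
from itertools import combinations

def phi_holds(n, k, l, adj):
    """Check phi_{k,l}: for every disjoint k-set xs and l-set ys, some other
    vertex z is adjacent to everything in xs and to nothing in ys."""
    V = set(range(n))
    for xs in combinations(V, k):
        for ys in combinations(V - set(xs), l):
            candidates = V - set(xs) - set(ys)
            if not any(all(adj[z][x] for x in xs) and
                       not any(adj[z][y] for y in ys)
                       for z in candidates):
                return False
    return True

# Sanity check: the complete graph satisfies no phi_{k,l} with l >= 1,
# since every candidate z is adjacent to every y.
n = 6
complete = [[i != j for j in range(n)] for i in range(n)]
assert not phi_holds(n, 1, 1, complete)

# A draw from G(m, 1/2) with m = 40 satisfies phi_{1,1} with high probability.
random.seed(1)
m = 40
adj = [[False] * m for _ in range(m)]
for i in range(m):
    for j in range(i + 1, m):
        adj[i][j] = adj[j][i] = random.random() < 0.5
print(phi_holds(m, 1, 1, adj))  # very likely True, though not guaranteed
```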

Computing the probability: there are $ \binom{n}{k} \binom{n-k}{l}$ ways to choose the two sets. For a fixed choice of sets and a fixed candidate vertex $ z$, the probability that $ z$ has the required adjacencies is $ 2^{-(k+l)}$, so the probability that $ z$ is not good is $ 1 - 2^{-(k+l)}$. Since the candidates are independent, taking a product over all $ n - (k+l)$ choices of $ z$ gives the probability that every candidate fails for a fixed pair of sets. A union bound over the choices of sets then gives an upper bound on the probability that $ \varphi_{k,l}$ is false:

$ \displaystyle \binom{n}{k}\binom{n-k}{l} (1-2^{-(k+l)})^{n-k-l}$

And $ k, l$ are constant, so the left two terms are polynomials while the rightmost term is an exponentially small function, and this implies that the whole expression tends to zero, as desired.
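We can also watch this happen numerically. Below is a quick Python evaluation of the union bound; note that for fixed $ k, l$ the polynomial factor dominates at first, so the bound only becomes small once $ n$ is fairly large:

```python
from math import comb

def bound(n, k, l):
    """Union bound on P(phi_{k,l} fails) in G(n, 1/2): choose the k-set and
    the l-set, then every one of the n-k-l other vertices fails independently."""
    return comb(n, k) * comb(n - k, l) * (1 - 2 ** -(k + l)) ** (n - k - l)

# For fixed k, l the polynomial factors eventually lose to the exponential decay.
for n in [400, 800, 1600]:
    print(n, bound(n, 2, 3))

assert bound(1600, 2, 3) < bound(800, 2, 3) < bound(400, 2, 3)
assert bound(1600, 2, 3) < 1e-3
```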

Break from proof.

A bit of model theory

So what we’ve proved so far is that the probability of every formula of the form $ \varphi_{k,l}$ being satisfied in $ G(n,1/2)$ tends to 1.

Now look at the set of all such formulas

$ \displaystyle \Phi = \{ \varphi_{k,l} : k,l \in \mathbb{N} \}$

We ask: is there any graph which satisfies all of these formulas? Certainly it cannot be finite, because a graph on $ n$ vertices cannot satisfy $ \varphi_{k,l}$ once $ k + l \geq n$. But indeed, there is a countably infinite graph that works. It’s called the Rado graph.

The Rado graph has some really interesting properties, such as that it contains every finite and countably infinite graph as induced subgraphs. Basically this means, as far as countably infinite graphs go, it’s the big momma of all graphs. It’s the graph in a very concrete sense of the word. It satisfies all of the formulas in $ \Phi$, and in fact it’s uniquely determined by this, meaning that if any other countably infinite graph satisfies all the formulas in $ \Phi$, then that graph is isomorphic to the Rado graph.

But for our purposes (proving a zero-one law), there’s a better perspective than graph theory on this object. In the logic perspective, the set $ \Phi$ is called a theory, meaning a set of statements that you consider “axioms” in some logical system. And we’re asking whether there is any model realizing the theory. That is, is there some mathematical structure with a semantic interpretation (some object built from numbers, or sets, or whatever) that satisfies all the axioms?

A good analogy comes from the rational numbers, because they satisfy a similar property among all ordered sets. In fact, the rational numbers are the unique countable, ordered set with the property that it has no biggest/smallest element and is dense. That is, in the ordering there is always another element between any two elements you want. So the theorem says if you have two countable sets with these properties, then they are actually isomorphic as ordered sets, and they are isomorphic to the rational numbers.

So, while we won’t prove that the Rado graph is a model for our theory $ \Phi$, we will use that fact to great benefit. One consequence of having a theory with a model is that the theory is consistent, meaning it can’t imply any contradictions. Another fact is that this theory $ \Phi$ is complete. Completeness means that any formula or its negation is logically implied by the theory. Note these are syntactic implications (using the standard deduction rules of first-order logic), and have nothing to do with the model interpreting the theory.

The proof that $ \Phi$ is complete actually follows from the uniqueness of the Rado graph as the only countable model of $ \Phi$. Suppose to the contrary that $ \Phi$ is not complete. Then there has to be some formula $ \psi$ such that neither $ \psi$ nor its negation is provable from $ \Phi$. Now extend $ \Phi$ in two ways: by adding $ \psi$, and by adding $ \neg \psi$. Both of the new theories are still consistent and countable, and by a theorem from logic (the Löwenheim–Skolem theorem) they both have countable models. But both of these models are also countable models of $ \Phi$, so they must both be the Rado graph. But this is very embarrassing for them, because we assumed they disagree on the truth of $ \psi$.

So now we can go ahead and prove the zero-one law theorem.

Return to proof.

Given an arbitrary first-order sentence $ \varphi$, either $ \varphi$ or its negation can be derived from $ \Phi$; without loss of generality suppose it’s $ \varphi$. Take all the formulas from the theory needed to derive $ \varphi$, and note that since any proof is finite, it uses only finitely many of the $ \varphi_{k,l}$. Each of these $ \varphi_{k,l}$ is true with probability tending to 1, so their conjunction is as well, and hence so is the statement they imply, i.e., $ \varphi$ itself. And we’re done!

$ \square$

If you don’t like model theory, there is another “purely combinatorial” proof of the zero-one law using something called Ehrenfeucht–Fraïssé games. It is a bit longer, though.

Other zero-one laws

One might naturally ask two questions: what if your probability is not constant, and what other kinds of properties have zero-one laws? Both great questions.

For the first, there are some extra theorems. I’ll just describe one that has always seemed very strange to me. If your probability is of the form $ p = n^{-\alpha}$ but $ \alpha$ is irrational, then the zero-one law still holds! This is a theorem of Shelah and Spencer, and it really makes you wonder why irrational numbers would be so well behaved while rational numbers are not 🙂

For the second question, there is another theorem about monotone properties of graphs. Monotone properties come in two flavors, so called “increasing” and “decreasing.” I’ll describe increasing monotone properties and the decreasing counterpart should be obvious. A property is called monotone increasing if adding edges can never destroy the property. That is, with an empty graph you don’t have the property (or maybe you do), and as you start adding edges eventually you suddenly get the property, but then adding more edges can’t cause you to lose the property again. Good examples of this include connectivity, or the existence of a triangle.

So the theorem is that there is an analogous threshold theorem for monotone properties: every monotone property has a threshold function. Great!

It’s not so often that you get to see these neat applications of logic and model theory to graph theory and (by extension) computer science. But when you do get to apply them they seem very powerful and mysterious. I think it’s a good thing.

Until next time!

Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove it’s not possible to do something. Perhaps this is part of human nature, that nothing seems so impossible as to escape the realm of possibility. But perhaps it’s more surprising that the main line of attack to prove something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder if this will always be the case for any party. And so the problem is born: prove or disprove that at every party of $ n \geq 2$ people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are $ n$ people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is $ n-1$ (if you’re friends with everybody). So there are $ n$ distinct numbers, and $ n$ people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.
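The claim is also easy to test empirically before (or after) proving it. A Python sketch, sampling random “friendship graphs” of various sizes:

```python
import random
from itertools import combinations

def has_repeated_degree(n, edges):
    """Check the party claim: some two of the n guests have the same
    number of friends (i.e., the degree sequence has a repeat)."""
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return len(set(degree)) < n

# Sample random parties; the theorem says this always holds for n >= 2.
random.seed(0)
for trial in range(100):
    n = random.randint(2, 12)
    edges = [p for p in combinations(range(n), 2) if random.random() < 0.5]
    assert has_repeated_degree(n, edges)
```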

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication $ p \to q$:

p  q  p->q
T  T   T
T  F   F
F  T   T
F  F   T

We notice that an implication can only be false if the hypothesis $ p$ is true and the consequence $ q$ is false. This is the motivation for a proof by contradiction: if we show this case can’t happen, then there’s no other option: the statement $ p \to q$ must be true. In other words, if supposing “p and not q” is true implies something which we know to be false, then by the very same truth table argument it must be that either “q” is true or “p” is false. In any of these cases “p implies q” is true.
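The table above is small enough to generate in code, which makes a nice sanity check that $ p \to q$ is equivalent to “not $ p$, or $ q$”:

```python
# Enumerate the truth table of p -> q, using the equivalence (not p) or q.
for p in (True, False):
    for q in (True, False):
        implies = (not p) or q
        print(p, q, implies)

# The implication is false in exactly one row: p true and q false.
rows = [((not p) or q) for p in (True, False) for q in (True, False)]
assert rows == [True, False, True, True]
```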

But all of this truth table juggling really takes us away from the heart of the method. Let’s do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if $ x$ is a number such that $ x^2 = 2$, then it cannot be true that $ x = p/q$ where $ p,q$ are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that $ x = p/q$ is a ratio of integers. Then we can square both sides to get $ 2 = x^2 = p^2 / q^2$, and rearrange to arrive at $ 2q^2 = p^2$. Now comes the tricky part: if a number is a divisor of $ p$, then its square must divide $ p^2$; this is true of all square numbers. In particular, it must be the case that the largest power of 2 dividing any square number is even (and $ 2^0$ counts as an even power). Now in the equation $ 2q^2 = p^2$ the right hand side is a square, so the largest power of two dividing it is even, and the right hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of two). But the two sides are equal! They can’t possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that $ x$ is rational.
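The key fact in this proof, that a square has an even largest power of 2 while twice a square has an odd one, is easy to verify computationally. A small Python sketch (the function name is just illustrative):

```python
def v2(n):
    """The exponent of the largest power of 2 dividing n (the 2-adic valuation)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# For any integers p, q: v2(p^2) = 2*v2(p) is even, while
# v2(2*q^2) = 1 + 2*v2(q) is odd. So 2*q^2 = p^2 is impossible.
for p in range(1, 200):
    assert v2(p * p) % 2 == 0
for q in range(1, 200):
    assert v2(2 * q * q) % 2 == 1
```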

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.

Oftentimes the jump from “suppose to the contrary” to the contradiction requires a relatively small piece of insight, but at other times it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our “having no friends” contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it’s used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced: [1] [2] [3] [4]) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like “1 = 0”, oftentimes the “contradiction” at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: $ p_1 , \dots, p_n$. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let $ q = p_1 \dots p_n + 1$. Indeed, as the reader will prove in the exercises below, every number has a prime divisor, but it is not hard to see that no $ p_i$ divides $ q$. This is because no matter what some number $ x$ is, no number except 1 can divide both $ x$ and $ x-1$ (one can prove this fact by contradiction if it is not obvious), and we already know that all the $ p_i$ divide $ q-1$ . So $ q$ must have some prime divisor which was not in the list we started with. This contradicts that we had a complete list of primes to begin with. And so there must be infinitely many primes.
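The construction in this proof is concrete enough to run. A Python sketch (note the witness $ q$ need not itself be prime; it merely has a prime divisor outside the list):

```python
def euclid_witness(primes):
    """Given a purported complete finite list of primes, produce
    q = (product of the list) + 1. No prime in the list divides q,
    so q's prime divisors all lie outside the list."""
    q = 1
    for p in primes:
        q *= p
    q += 1
    # Each p divides q - 1, so q % p == 1 for every p in the list.
    assert all(q % p == 1 for p in primes)
    return q

print(euclid_witness([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509, both new primes
```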

Here are some exercises to practice the proof by contradiction:

  1. Prove that the base 2 logarithm of 3 is irrational.
  2. More generally, prove that $ \log_a(b)$ is irrational if there is any prime $ p$ dividing $ a$ but not $ b$, or vice versa.
  3. Prove the fundamental theorem of arithmetic, that every natural number $ n \geq 2$ is a product of primes (hint: inspect the smallest failing example).

A Few More Words on Functions and Sets

Last time we defined what it means for a function $ f: X \to Y$ on sets to be injective: different things in $ X$ get mapped to different things in $ Y$. Indeed, there is another interesting notion called surjectivity, which says that $ f$ “hits” everything in $ Y$ by something in $ X$.

Definition: A function $ f: X \to Y$ is surjective if for every element $ y \in Y$ there is an $ x \in X$ for which $ f(x) = y$. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we can use surjections to prove something nice about the sets involved. If $ f:X \to Y$ is a surjection, then $ X$ has at least as many elements as $ Y$. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if $ f: X \to Y$ is injective, then there are at least as many elements of $ Y$ as there are of $ X$. Combining the two notions: if there is a function which is both injective and surjective, then $ X$ and $ Y$ have exactly the same size.

Definition: A function $ f: X \to Y$ which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection $ X \to Y$, then $ X$ and $ Y$ have the same number of elements. The converse is also true, if two finite sets have the same size one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven’t covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!

Definition: Two arbitrary sets $ X,Y$ are said to have the same cardinality if there exists a bijection $ f : X \to Y$. If there is a bijection $ f: \mathbb{N} \to X$ then $ X$ is said to have countably infinite cardinality, or simply countably infinite. If no such bijection exists (and $ X$ is not a finite set), then $ X$ is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the regular natural numbers. If $ X$ is the set of even natural numbers, just construct a function $ \mathbb{N} \to X$ by sending $ x \mapsto 2x$. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the “maps to” arrow $ \mapsto$. The reader can think of this as the mathematician’s version of a lambda expression. So the above map would be written in Python as lambda x: 2*x.

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as all natural numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by $ \mathbb{Q}$, and typically this proof is done by first finding a bijection from $ \mathbb{N} \to \mathbb{Z}$ and then from $ \mathbb{Z} \to \mathbb{Q}$. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will be satisfied for bijections.
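As a concrete taste of the first step, here is one standard bijection $ \mathbb{N} \to \mathbb{Z}$, sketched in Python (this particular alternating map is one of many possible choices):

```python
def nat_to_int(n):
    """A bijection N -> Z (with N starting at 1): 1, 2, 3, 4, 5, ...
    maps to 0, 1, -1, 2, -2, ..."""
    return n // 2 if n % 2 == 0 else -(n // 2)

# Check injectivity and surjectivity on a finite window.
image = [nat_to_int(n) for n in range(1, 22)]
assert len(set(image)) == len(image)        # no repeats: injective on this prefix
assert set(image) == set(range(-10, 11))    # hits every integer in [-10, 10]
```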

Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem ‘non-trivial’ might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don’t use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, ‘trivial’ is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we’re learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called “highly non-trivial” if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we first have to say what the real numbers are. In fact, we won’t work with the entire set of real numbers, but with a “small” subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set $ [0,1]$ is the set of all infinite sequences of zeroes and ones, interpreted as the set of all binary decimal expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence to either be an infinite tuple (falling back on our definition of a tuple as a set), or we can define it to be a function $ f : \mathbb{N} \to \left \{ 0, 1 \right \}$. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence $ (b_i) = (b_1, b_2, \dots)$ is a function $ b : \mathbb{N} \to \left \{ 0, 1 \right \}$ where we denote by $ b_i$ the value $ b(i)$.

So now we can state the theorem.

Theorem: The set $ [0,1]$ of infinite binary sequences is uncountably infinite. That is, there is no bijection $ \mathbb{N} \to [0,1]$.

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that $ f : \mathbb{N} \to [0,1]$ is a bijection. Then we can list the values of $ f$ in a table. Since we want to use $ b_i$ for all of the values of $ f$, we will call

$ \displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots$

This gives us the following infinite table:

$ \displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}$

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking $ c_i = b_{i,i}$ for all $ i$. That is, we are using all of the diagonal elements of the table above. Now take each $ c_i$ and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply $ b \mapsto 1-b$ to each entry). The important fact about this new sequence is that it differs from every entry in this table. By the way we constructed it, no matter which $ n$ one chooses, this number differs from the table entry $ f(n)$ at digit $ n$ (and perhaps elsewhere). Because of this, it can’t occur as an entry in the table. So we just proved our function $ f$ isn’t surjective, contradicting our original hypothesis, and proving the theorem.
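The diagonal construction is easy to carry out in code for any finite prefix of a purported enumeration. A Python sketch with a toy “table” (the representation of rows as functions is just one convenient choice):

```python
def diagonal_flip(table, n):
    """Given the first n rows of a purported enumeration of binary
    sequences (each row a function index -> {0,1}), build the first n
    bits of the flipped-diagonal sequence c."""
    return [1 - table(i)(i) for i in range(n)]

# A toy "enumeration": row n is the sequence whose i-th bit is (n + i) % 2.
table = lambda n: (lambda i: (n + i) % 2)

c = diagonal_flip(table, 10)
# c differs from row n in position n, so c equals no row of the table.
for n in range(10):
    assert c[n] != table(n)(n)
```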

The discovery of this fact was an important step forward in the history of mathematics. The particular technique though, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It’s quite a bit more specialized of a proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lends itself nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there are an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the final of the four basic methods of proof: proof by induction.

Until then!

False Proof – The Reals are Countable

It seems that false proofs are quickly becoming some of the most popular posts on Math ∩ Programming. I have been preparing exciting posts on applications of graph coloring, deck stacking, and serial killers. Unfortunately, each requires resources which exist solely on my home desktop, which is currently dismantled in California while I am on vacation in Costa Rica. Until I return from the tropics, I will continue with more of the ever-popular false proofs.

Problem: Show the real numbers $ \mathbb{R}$ are countable. [For a primer on set theory and countability, see our post on the subject].

“Solution”: Recall from our post on well-orderings that every set has a well-order (assuming the axiom of choice). So $ \mathbb{R}$ has a well-ordering. Call it $ <$. Recall additionally that by the definition of a well-order, every nonempty subset of $ \mathbb{R}$ has a least element with respect to $ <$. We construct a sequence $ x_i \in \mathbb{R}$ that uses this well-order in the following way:

Let $ x_1$ be the smallest element of $ \mathbb{R}$ with respect to $ <$. Let $ x_2$ be the smallest element of $ \mathbb{R} – \left \{ x_1 \right \}$. Let $ x_i$ be the smallest element of $ \mathbb{R} – \left \{ x_1, x_2, \dots, x_{i-1} \right \}$.

Define $ f: \mathbb{N} \to \mathbb{R}$ by $ f(i)=x_i$. Clearly this map is injective, and we argue that it is surjective because any element $ y \in \mathbb{R}$ is the smallest element of the subset of $ \mathbb{R}$ consisting of $ y$ and all elements strictly greater than $ y$ with respect to $ <$. So this map is a bijection, and the reals are countable.

Explanation: The reader should be extremely wary of this argument, because it does not only claim that the reals are countable. If the argument above held, then we could apply it to any set (since every set has a well-order!). So something is definitely fishy here.

The problem with analyzing this argument is that the well-order of the reals is not known explicitly. However, our noses point us to recognize that the only place a flaw could exist is in the claim of surjectivity. We now expose the flaw by showing that the same argument fails to enumerate the integers $ \mathbb{Z}$ with respect to a well-order of our choosing.

Define a well-order $ <_{\xi}$ on $ \mathbb{Z}$ by letting non-positive numbers be larger than all positive numbers, and ordering negative numbers by increasing absolute value. In other words, we order the integers like $ 1,2, \dots, n, \dots, 0, -1, -2, \dots, -n, \dots$. We leave it as an exercise to the reader to prove $ <_{\xi}$ is a well-order.

Notice that with the argument above, we never choose -1, or any negative number, as an $ x_i$. Specifically, the sequence of $ x_i$ is precisely the natural numbers. Hence, we cannot enumerate all of $ \mathbb{Z}$ in this fashion. Clearly the argument fails for $ \mathbb{R}$ as well.
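One can simulate the order $ <_{\xi}$ in code to see the failure concretely, using a sort key (the name xi_key is just illustrative):

```python
def xi_key(n):
    """Sort key realizing the well-order 1, 2, ..., 0, -1, -2, ...:
    positives first in the usual order, then non-positives by absolute value."""
    return (0, n) if n > 0 else (1, -n)

# A finite window of Z, sorted under the well-order <_xi.
window = sorted(range(-5, 6), key=xi_key)
assert window == [1, 2, 3, 4, 5, 0, -1, -2, -3, -4, -5]

# Repeatedly taking the "smallest remaining element" only ever selects
# positives: the i-th element chosen is i + 1, and 0 is never reached.
```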

This false proof is significant in that we get some intuition about well-orders. In particular, a function from the natural numbers to a set $ S$ is also a way to “order” $ S$. However, a well-order is more general and more expressive than a bijection from $ \mathbb{N} \to S$, in that we can construct such an order on any set. As we have seen, with a well-order we don’t have to worry about a contiguous sequence of elements from smallest to greatest. We are allowed the power to say “all of these guys are bigger than all of these guys,” regardless of how many are in each group.

And, as always, with more power comes the responsibility to manage counter-intuitive results, either by accepting them through rigorous proof or rejecting false constructions like the one above.

Set Theory – A Primer

It is often the case that a student’s first exposure to rigorous mathematics is through set theory, as originally studied by Georg Cantor. Following Cantor’s naive approach, we will not treat set theory axiomatically (as in ZF set theory), but rather we will take the definition of a set for granted, and allow any operation to be performed on a set. This will be clear when we present examples, and it will be clear why this is a bad idea when we present paradoxes.

The Basics of Sets

Definition: A set $ S$ is a collection of distinct objects, each of which is called an element of $ S$. For a potential element $ x$, we denote its membership in $ S$ and lack thereof by the infix symbols $ \in, \notin$, respectively. The proposition $ x \in S$ is true if and only if $ x$ is an element of $ S$.

Definition: The cardinality of $ S$, denoted $ |S|$, is the number of elements in $ S$.

The elements of a set can in principle be anything: numbers, equations, cats, morals, and even (especially) other sets. There is no restriction on their size, and the order in which we list the objects is irrelevant. For now we will stick to sets of numbers, and later we will move on to sets of other sets.

There are many ways to construct sets, and for finite sets it suffices to list the objects:

$ S = \left \{ 0, 2,4,6,8,10 \right \}$

Clearly this set has cardinality six. The left and right curly braces are standard notation for the stuff inside a set. Another shorter construction is to simply state the contents of a set (let $ S$ be the set of even numbers between 0 and 10, inclusive). Sometimes, it will be very important that we construct the sets in a more detailed way than this, because, as we will see, sets can become rather convoluted.
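For readers who like to experiment, Python’s built-in sets mirror these conventions directly; a small illustrative sketch:

```python
S = {0, 2, 4, 6, 8, 10}   # a finite set, given by listing its elements
assert len(S) == 6        # the cardinality |S|

# Order and repetition are irrelevant: all of these denote the same set.
assert S == {10, 8, 6, 4, 2, 0} == {0, 0, 2, 4, 6, 8, 10, 10}
```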

We may construct sets using an implied pattern, i.e. for the positive evens:

$ E = \left \{ 2, 4, 6, 8, \dots \right \}$

For now, we simply allow that this set has infinite cardinality, though we will revisit this notion in more detail later. In this way we define two basic sets of numbers:

$ \mathbb{N} = \left \{ 1, 2, 3, \dots \right \} \\ \mathbb{Z} = \left \{ 0, -1, 1, -2, 2, \dots \right \}$

We name $ \mathbb{N}$ the natural numbers, and $ \mathbb{Z}$ the integers. Yet another construction allows us to populate a set with all values that satisfy a particular equation or proposition. We denote this $ \left \{ \textup{variable} | \textup{condition} \right \}$. For example, we may define $ \mathbb{Q}$, the rational numbers (fractional numbers) as follows:

$ \displaystyle \mathbb{Q} = \left \{ \frac{p}{q} | p \in \mathbb{Z} \textup{ and } q \in \mathbb{Z}, q \neq 0 \right \}$

This is not quite a complete description of how rational numbers work: some fractions are “equivalent” to other fractions, like 2/4 and 1/2. There is an additional piece of data called a “relation” that’s imposed on this set, and any two things which are related are considered equivalent. We’re not going to go into the details of this, but the interested reader should look up an equivalence relation.
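Python’s standard-library `Fraction` type bakes this equivalence relation in by always reducing to lowest terms, which makes a nice sanity check:

```python
from fractions import Fraction

a = Fraction(2, 4)   # equivalent to 1/2 under the relation
b = Fraction(1, 2)
assert a == b                                    # 2/4 and 1/2 are identified
assert (a.numerator, a.denominator) == (1, 2)    # stored in lowest terms
```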

Next, we want to describe certain pieces of sets:

Definition: A set $ R$ is a subset of a set $ S$, denoted $ R \subset S$, if all elements of $ R$ are in $ S$. i.e., for all $ x, x \in R$ implies $ x \in S$.

And we recognize that under our standard equality of numbers (i.e. $ \frac{4}{1} = 4$), we have $ \mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q}$.

We may now define equality on sets, extending the natural idea that two sets are equal if they contain precisely the same elements:

Definition: Two sets $ R,S$ are equal if $ R \subset S$ and $ S \subset R$.

A natural set to construct now is the power set of a given set:

Definition: The power set of $ S$, denoted $ P(S)$, is the set of all subsets of $ S$, i.e. $ P(S) = \left \{ R | R \subset S \right \}$.

Elements of this set are sets themselves, and there are two trivial yet important members of $ P(S)$ to recognize: $ S$ itself, since $ S \subset S$, and the empty set $ \left \{ \right \}$, which contains no elements and is vacuously a subset of every set.

For a finite set $ S$, power sets are strictly larger in size, since there exists a singleton set $ \left \{ x \right \} \in P(S)$ for each $ x \in S$. As an exercise for the reader, determine the size of $ P(S)$ for any finite set $ S$, expressed in terms of $ |S|$. For infinite sets, we simply admit that their power sets are also infinite, since we don’t yet have a way to describe “larger” infinities.
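As a quick empirical check on this (without spoiling the exercise’s formula), here is a sketch that builds $ P(S)$ for a small set; the helper `power_set` is our own construction from standard-library pieces:

```python
from itertools import chain, combinations

def power_set(s):
    """Return the set of all subsets of s, each as a frozenset."""
    elems = list(s)
    subsets = chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))
    return {frozenset(c) for c in subsets}

P = power_set({1, 2, 3})
assert frozenset() in P and frozenset({1, 2, 3}) in P   # {} and S itself
assert len(P) > 3    # strictly larger than |S|, as claimed
```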

Building Sets From Other Sets

We can now define a few natural operations for building sets from other sets.

Definition: The union of two sets $ R,S$, denoted $ R \cup S$, is $ \left \{ x | x \in S \textup{ or } x \in R \right \}$.

Definition: The intersection of two sets $ R,S$, denoted $ R \cap S$, is $ \left \{ x | x \in S \textup{ and } x \in R \right \}$.

As an exercise, try to prove that $ |S \cup R| = |S| + |R| - |S \cap R|$.
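A quick numeric check of that identity on a small example (a sketch, not a proof):

```python
R = {1, 2, 3, 4}
S = {3, 4, 5}

union = R | S          # {1, 2, 3, 4, 5}
intersection = R & S   # {3, 4}

# The inclusion-exclusion identity from the exercise, on this example:
assert len(union) == len(R) + len(S) - len(intersection)
```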

The next definition requires one to remember what an ordered tuple $ (a_1,a_2, \dots , a_n)$ is. Specifically, an ordered tuple is just like a set, except it allows repetition and respects the order in which its elements are presented. So $ (a,b) \neq (b,a) \neq (a,a,b)$.

Definition: The direct product (or simply product) of two sets $ R,S$, denoted $ R \times S$, is $ \left \{ (x,y) | x \in R \textup{ and } y \in S \right \}$.

This is just like defining the Cartesian plane $ \mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$ as ordered pairs of real numbers. We can extend this even further by defining $ S^n$ to be the set of all $ n$-tuples of elements in $ S$.
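The direct product is likewise a one-liner with the standard library’s `itertools.product`; a small sketch:

```python
from itertools import product

R = {1, 2}
S = {'a', 'b', 'c'}

RxS = set(product(R, S))   # all ordered pairs (x, y) with x in R, y in S
assert len(RxS) == len(R) * len(S)
assert (1, 'a') in RxS and ('a', 1) not in RxS   # order matters in pairs

S3 = set(product(S, repeat=3))   # the n-fold product S^n as n-tuples
assert len(S3) == len(S) ** 3
```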

Functions, and Their -Jections

Now that we have sets and ways to build interesting sets, we may define mathematical objects which do stuff with them.

Definition: A relation $ \sim$ on $ R$ and $ S$, is a subset of $ R \times S$. Denotationally, we write $ a \sim b$ as shorthand for $ (a,b) \in \sim $.

Relations are natural generalizations of $ =$ on numbers. In general relations need no additional properties, but they are not very interesting unless they do. For more discussion on relations, we point the reader to the Wikipedia page on equivalence relations. As an exercise to the reader, prove that set equality (defined above) is an equivalence relation, as expected.

Now, we get to the meat of our discussion on sets: functions.

Definition: A function $ f : S \to R$ is a relation on $ S$ and $ R$, i.e. a subset of $ S \times R$, with the additional property that for each $ x \in S$ there is exactly one element of the form $ (x, \cdot ) \in f$.

Colloquially, functions ‘accept’ a value from $ S$ and output something in $ R$. This is why we may only have one ordered pair for each $ x \in S$, because functions are deterministic. Furthermore, we must be able to put every value from $ S$ into our function, so no values $ x \in S$ may be without a corresponding element of $ f$.

We have special notation for functions, which was established long before set theory was invented. If $ x \in S$, we write $ f(x)$ to denote the corresponding element $ y$ in $ (x,y) \in f$. In addition, we say that a value $ x$ maps to $ y$ to mean $ f(x)=y$. If the function is implicitly understood, we sometimes just write $ x \mapsto y$. Then we have the following definitions:

Definition: The domain of a function $ f: S \to R$, sometimes denoted $ \textup{dom}(f)$, is the set of input values $ S$. The codomain, denoted $ \textup{codom}(f)$, is the set $ R$. Since not every value in $ R$ must be in a pair in $ f$, we call the subset of values of $ R$ which are produced by some input in $ S$ the range of $ f$. Rigorously, the range of $ f$ is $ \left \{ f(x) | x \in S \right \}$.
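Taking the definition literally, a function really is a set of ordered pairs; this sketch models $ f(x) = x^2$ that way and recovers its domain and range:

```python
# f : S -> R as a literal set of ordered pairs, here f(x) = x^2.
S = {-2, -1, 0, 1, 2}
f = {(x, x * x) for x in S}

# Exactly one pair (x, .) for each x in S: f is a genuine function.
assert {x for (x, _) in f} == S
assert len(f) == len(S)

# The range is the set of outputs actually produced, a subset of the codomain.
rng = {y for (_, y) in f}
assert rng == {0, 1, 4}
```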

Now we may speak of some “interesting” functions:

Definition: A function $ f : S \to R$ is a surjection if its range is equal to its codomain. In other words, for every $ y \in R$, there is some $ x \in S$ with $ f(x) = y$, or, equivalently, $ f(S) = R$.

Note that $ f$ being a surjection on finite sets implies that the domain is at least as big as the codomain. Though it seems trivial, we can use functions in this way to reason about the cardinalities of their domains and codomains.

Definition: A function $ f : S \to R$ is an injection if no two different values $ x \in S$ map to the same $ y \in R$. In other words, if $ f(a) = f(b)$, then $ a=b$.

Similarly, for finite domains/codomains an injection forces the codomain to be at least as big as the domain in cardinality.

Now, we may combine the two properties to get a very special kind of function.

Definition: A function $ f: S \to R$ is a bijection if it is both a surjection and an injection.

A bijection specifically represents a “relabeling” of a given set, in that each element in the domain has exactly one corresponding element in the codomain, and each element in the codomain has exactly one corresponding element in the domain. Thus, the bijection represents changing the label $ x$ into the label $ f(x)$.

Note that for finite sets, since a bijection is both a surjection and an injection, the domain and codomain of a bijection must have the same cardinality! What’s better, is we can extend this to infinite sets.
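For finite sets these properties are easy to test mechanically. A sketch, modeling a function as a Python dict from domain elements to outputs (the helper names are our own):

```python
def is_injective(f):
    """No two inputs share an output."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    """Every element of the codomain is hit by some input."""
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

f = {1: 'a', 2: 'b', 3: 'c'}
assert is_bijective(f, {'a', 'b', 'c'})    # a relabeling of {1, 2, 3}
assert not is_injective({1: 'a', 2: 'a'})
assert not is_surjective({1: 'a'}, {'a', 'b'})
```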

To Infinity, and Beyond! (Literally)

Definition: Two infinite sets have equal cardinality if there exists a bijection between them.

Now we will prove that two different infinite sets, the natural numbers and the integers, have equal cardinality. This is surprising: even though one is a proper subset of the other, they have equal cardinality! So here we go.

Define $ f : \mathbb{N} \to \mathbb{Z}$ as follows. Let $ f(1) = 0, f(2) = 1, f(3) = -1 \dots$. Continuing in this way, we see that $ f(2k+1) = -k$ for all $ k \geq 0$ and $ f(2k) = k$ for all $ k \geq 1$. This is clearly a bijection, and hence $ |\mathbb{N}| = |\mathbb{Z}|$.
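The bijection from the proof can be written down and spot-checked directly; a small sketch:

```python
def f(n):
    """The bijection N -> Z from the text: f(2k) = k, f(2k+1) = -k."""
    return n // 2 if n % 2 == 0 else -(n // 2)

# f(1) = 0, f(2) = 1, f(3) = -1, f(4) = 2, ...
assert [f(n) for n in range(1, 8)] == [0, 1, -1, 2, -2, 3, -3]

# On {1, ..., 100} the outputs are 100 distinct integers: no collisions.
assert len({f(n) for n in range(1, 101)}) == 100
```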

The same trick gives a bijection between any countably infinite set $ S$ of positive numbers and the set $ \left \{ \pm x | x \in S \right \}$.

Let’s try to push bijections a bit further. Let’s see if we can construct a bijection between the natural numbers and the positive rationals (and hence the set of all rationals). If integers seemed bigger than the naturals, then the rational numbers must be truly huge. As it turns out, the rationals also have cardinality equal to the natural numbers!

It suffices to show that the natural numbers are equal in cardinality to the nonnegative rationals. Here is a picture describing the bijection:

We arrange the positive rationals into a grid, so that each blue dot above corresponds to some $ \frac{p}{q}$, where $ p$ is the x-coordinate of the grid, and $ q$ is the y-coordinate. Then, we assign to each blue dot a natural number in the diagonal fashion described by the sequence of arrows. Note these fractions are not necessarily in lowest terms, so some rational numbers correspond to more than one blue dot. To fix this, we simply eliminate the points $ (p,q)$ whose greatest common divisor is not 1. Then, in assigning numbers to the blue dots, we proceed in the same fashion, skipping the places where we deleted bad points.

This bijection establishes that the natural numbers and rationals have identical cardinality. Despite how big the rationals seem, they are just a relabeling of the natural numbers! Astounding.
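One can mimic the picture in code: walk the grid diagonal by diagonal, skipping the non-reduced fractions. (This sketch traverses each diagonal in a fixed direction rather than zig-zagging as the arrows do, which changes the order within a diagonal but not the existence of the bijection.)

```python
from math import gcd

def rationals(count):
    """List the first `count` positive rationals (p, q) in lowest terms,
    walking the grid diagonal by diagonal (constant p + q)."""
    out = []
    d = 2                          # each diagonal satisfies p + q = d
    while len(out) < count:
        for p in range(1, d):
            q = d - p
            if gcd(p, q) == 1:     # skip the deleted "bad" dots
                out.append((p, q))
                if len(out) == count:
                    break
        d += 1
    return out

first = rationals(7)
assert first == [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3)]
assert len(set(rationals(100))) == 100    # no rational is listed twice
```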

With this result it seems like every infinite set has cardinality equal to the natural numbers. Surely, then, it should be just as easy to find a bijection between the naturals and the real numbers $ \mathbb{R}$.

Unfortunately, try as we might, no such bijection exists. This was a huge result proven by Georg Cantor in his study of infinite sets, and its proof has become a staple of every mathematics education, called Cantor’s Diagonalization Proof.

First, we recognize that every real number in the interval $ [0,1]$ has a representation in base 2 as an infinite sequence of 0’s and 1’s (its binary expansion after the point). Thus, if there were a bijection between the natural numbers and the reals, we could in particular list these reals in order of their corresponding naturals, as:

$ 1 \mapsto d_{1,1}d_{1,2}d_{1,3} \dots \\ 2 \mapsto d_{2,1}d_{2,2}d_{2,3} \dots \\ 3 \mapsto d_{3,1}d_{3,2}d_{3,3} \dots$
$ \vdots$

Here each $ d_{i,j} \in \left \{ 0,1 \right \}$ is the $ j$th digit of the $ i$th number in the list. Now build a real number $ r = d_{1,1}d_{2,2}d_{3,3} \dots$ from the diagonal entries of this array. If we take $ r$ and flip each digit from a 0 to a 1 and vice versa, we get the complement of $ r$, call it $ r’$. Notice that $ r’$ differs from every number in the list: the $ i$th listed number agrees with $ r$ at digit $ i$, and hence differs from $ r’$ at that same place. But $ r’$ is a real number, so it must occur somewhere in this list! Call that place $ k \mapsto r’$. Then $ r’$ differs from $ r’$ at digit $ k$, a contradiction.

Hence, no such bijection can exist. This is amazing! We have just found two infinite sets which differ in size! Even though the natural numbers are infinitely large, there are so mind-bogglingly many more real numbers that they form a larger infinity. We need new definitions to make sense of this:

Definition: An infinite set $ S$ is countably infinite if there exists a bijection $ \mathbb{N} \to S$. If no such bijection exists, we call it uncountably infinite, or just uncountable.
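The diagonal construction itself is finite at heart; this sketch applies it to a finite list of binary strings and always produces a string missing from the list:

```python
def diagonal_complement(rows):
    """Flip the i-th digit of the i-th row to build a 0/1 string that
    differs from every row in the list at some position."""
    return ''.join('1' if row[i] == '0' else '0'
                   for i, row in enumerate(rows))

rows = ['0110', '1010', '0011', '1111']
r_prime = diagonal_complement(rows)

assert r_prime not in rows
# It disagrees with the i-th row exactly at digit i:
assert all(r_prime[i] != rows[i][i] for i in range(len(rows)))
```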

Georg Cantor went on to prove this in general: there cannot exist a bijection from any set onto its power set (and we may realize the reals as essentially the power set of the naturals via binary expansions). He did so by extending the diagonalization argument. But since we are dealing with infinite sets, we need even finer measures of “how hugely infinite” these infinite sets can be. These new measures of set cardinality were also invented by Cantor, and they are called transfinite numbers. Their investigation is beyond the scope of this post, but we encourage the reader to follow up on this fascinating subject.

We have still only scratched the surface of set theory, and we have even left out a lot of basic material to expedite our discussion of uncountability. There is a huge amount of debate that resulted from Cantor’s work, and it inspired many to further pick apart the foundations of mathematics, leading to more rigorous formulations of set theory, and extensions or generalizations such as category theory.

Sets of Sets of Sets, and so on Ad Insanitum

We wrap up this post with a famous paradox, which makes one question whether all of the operations performed in set theory are justified. It is called Russell’s Paradox, after Bertrand Russell.

Suppose we define a set $ S$, which contains itself as an element. This does not break any rules of Cantor’s set theory, because we said a set could be any collection of objects. We may even speak of the set of all sets which contain themselves as elements.

Now let us define the set of all sets which do not contain themselves. Call this set $ X$. It must be true that either $ X \in X$ or $ X \notin X$. If $ X \in X$, then $ X$ is contained in itself, obviously, but then by the definition of $ X$, it does not contain itself as an element. This is a contradiction, so $ X \notin X$. However, if $ X \notin X$, then $ X$ satisfies the definition of a set which does not contain itself, so $ X \in X$. Again, a contradiction.

This problem has no resolution within Cantor’s world of sets. For this reason (and other paradoxes based on wild constructions of sets), many have come to believe that Cantor’s set theory is not well-founded. That is not to say his arguments and famous results are wrong, but rather that they need to be reproved within a more constrained version of set theory, in which such paradoxes cannot happen.

Such a set theory was eventually found that bypassed Russell’s paradox, and it is called Zermelo-Fraenkel set theory. But even that was not enough! Additional statements, like the Axiom of Choice (which nevertheless does lead to some counter-intuitive theorems), were found which cannot be proved or disproved by the other axioms of ZF set theory.

Rather than give up all that work on axiomatizing set theory, most mathematicians today accept the Axiom of Choice and work around any oddities that arise, doing their mathematical work in ZFC (ZF + Choice).

So along with paradoxical curiosities, we have laid all the foundation necessary for reasoning about countability and uncountability, which has already shown up numerous times on this blog.

Until next time!