False Proof – All Numbers are Describable in at Most Twenty Words

Problem: Show that every natural number can be unambiguously described in fewer than twenty words.

“Solution”: Suppose to the contrary that not every natural number can be so described. Let $ S$ be the set of all natural numbers which are describable in fewer than twenty words. Consider $ R = \mathbb{N}-S$, the set of all natural numbers which cannot be described in fewer than twenty words.

Since $ R$ is a subset of the natural numbers, which is well-ordered, it has a unique smallest element which we call $ r$. Now, we may describe $ r$ unambiguously with the sentence: “the smallest natural number which cannot be unambiguously described in fewer than twenty words.” Since this description uses only fourteen words, we see that $ r \in S$, a contradiction. Hence, $ R$ must be the empty set. $ \square$

Explanation: Let’s analyze this a bit further. Suppose that every number were indeed unambiguously describable in fewer than twenty words. Since there are only finitely many descriptions of fewer than twenty words (as we count below), finitely many descriptions would have to pick out infinitely many distinct numbers!

In particular, suppose there are $ n$ words in the English language, which for the sake of argument we take to be the number of words in the Oxford English Dictionary. Then there are at most $ \displaystyle \sum \limits_{k=1}^{19} n^k$ phrases of fewer than twenty words. This is a large number, but it is indeed finite. Even if every such phrase unambiguously described a natural number, there’s no way that we could get them all! This proof is clearly nonsense.
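For a rough sense of scale, here is this bound computed in Mathematica; the dictionary size of 170,000 words is a hypothetical figure for illustration, not an official count.

(* An upper bound on the number of phrases of fewer than twenty words,
   using a hypothetical dictionary of n = 170000 words. *)
n = 170000;
Sum[n^k, {k, 1, 19}]
(* an astronomically large, but finite, number: roughly 2.4 * 10^99 *)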

This apparent paradox is in the same vein as Russell’s paradox, which we cover at the end of our set theory primer. Indeed, in its paradox form, this problem is sometimes called the Richard-Berry paradox. The problem is with what kinds of sets we construct. While in Russell’s paradox the problem was with the elements of our set, here it is with the proposition used to determine membership. This proof gives evidence that the English language (and any human language, really) is not rigorous enough for the purposes of mathematics. It also convinces us that naive set theory is problematic.

We find the meat of the problem when we finally get down to the definition of a description. In short (and I don’t want to spoil upcoming content on this blog), a description of a number is a program which computes the number (on some fixed universal Turing machine $ U$). Note that this assumes that whatever a human can compute a computer can too, which is a controversial statement better known as the Church-Turing Thesis. If we further define a number’s “definition” as the shortest program which computes it, then this statement transforms into “x is the smallest integer whose definition requires at least 100 characters” (with length now measured in characters of the program rather than in words). As it turns out, when this English description is appropriately reformulated on a Turing machine, it is not an effective description: the statement is undecidable. In fact, once we get into the theory of Kolmogorov complexity, we will find that Berry’s paradox can be used to prove Gödel’s incompleteness theorem!

The problem of description is a deep and historically debated one. The various approaches and sub-fields are encapsulated in the study of information theory. One such framework for descriptions is provided by Kolmogorov complexity, which we are diligently working toward understanding. Keep an eye out for our future primer on the subject, which will serve as extra background for the already-written post on low-complexity art and for future topics yet unknown.

False Proof – The Reals are Countable

It seems that false proofs are quickly becoming some of the most popular posts on Math ∩ Programming. I have been preparing exciting posts on applications of graph coloring, deck stacking, and serial killers. Unfortunately, each requires resources which exist solely on my home desktop, which is currently dismantled in California while I am on vacation in Costa Rica. Until I return from the tropics, I will continue with more of the ever-popular false proofs.

Problem: Show the real numbers $ \mathbb{R}$ are countable. [For a primer on set theory and countability, see our post on the subject].

“Solution”: Recall from our post on well-orderings that every set has a well-order (assuming the axiom of choice). So $ \mathbb{R}$ has a well-ordering. Call it $ <$. Recall additionally that by the definition of a well-order, every nonempty subset of $ \mathbb{R}$ has a least element with respect to $ <$. We construct a sequence $ x_i \in \mathbb{R}$ that uses this well-order in the following way:

Let $ x_1$ be the smallest element of $ \mathbb{R}$ with respect to $ <$. Let $ x_2$ be the smallest element of $ \mathbb{R} - \left \{ x_1 \right \}$. In general, let $ x_i$ be the smallest element of $ \mathbb{R} - \left \{ x_1, x_2, \dots, x_{i-1} \right \}$.

Define $ f: \mathbb{N} \to \mathbb{R}$ by $ f(i)=x_i$. Clearly this map is injective, and we argue that it is surjective because any element $ y \in \mathbb{R}$ is the smallest element of the subset of $ \mathbb{R}$ consisting of $ y$ together with all elements strictly greater than $ y$ with respect to $ <$, and so it must eventually be chosen. So this map is a bijection, and the reals are countable.

Explanation: The reader should be extremely wary of this argument, because it claims far more than the countability of the reals. If the argument above held, then we could apply it to any set (since every set has a well-order!). So something is definitely fishy here.

The problem with analyzing this argument is that the well-order of the reals is not known explicitly. However, our noses point us to the only place a flaw could hide: the claim of surjectivity. We now expose the flaw by showing that the same construction, applied to the integers $ \mathbb{Z}$ with a well-order of our choosing, fails to enumerate all of $ \mathbb{Z}$.

Define a well-order $ <_{\xi}$ on $ \mathbb{Z}$ by letting non-positive numbers be larger than all positive numbers, and ordering non-positive numbers by increasing absolute value. In other words, we order the integers like $ 1,2, \dots, n, \dots, 0, -1, -2, \dots, -n, \dots$. We leave it as an exercise to the reader to prove $ <_{\xi}$ is a well-order.
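For concreteness, here is this ordering as a comparison function, sketched in Mathematica (the name xiLess is ours):

(* A sketch of the well-order <ξ: positive integers first, in the usual order,
   followed by 0, -1, -2, ... ordered by increasing absolute value. *)
xiLess[x_Integer, y_Integer] :=
   Which[
      x > 0 && y <= 0, True,      (* every positive comes before every non-positive *)
      x <= 0 && y > 0, False,
      x > 0 && y > 0, x < y,      (* two positives: usual order *)
      True, Abs[x] < Abs[y]       (* two non-positives: increasing absolute value *)
   ];

For example, Sort[{-2, 3, 0, 1, -1}, xiLess] gives {1, 3, 0, -1, -2}.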

Notice that with the argument above, we never choose $ 0$, $ -1$, or any other non-positive number as an $ x_i$. Specifically, the sequence of $ x_i$ is precisely $ 1, 2, 3, \dots$. Hence, we cannot enumerate all of $ \mathbb{Z}$ in this fashion. Clearly the argument fails for $ \mathbb{R}$ as well.

This false proof is significant in that we get some intuition about well-orders. In particular, a bijection from the natural numbers to a set $ S$ is one way to “order” $ S$. However, a well-order is more general and more expressive than a bijection $ \mathbb{N} \to S$, in that we can construct such an order on any set. As we have seen, with a well-order we don’t have to worry about a contiguous sequence of elements from smallest to greatest. We are allowed the power to say “all of these guys are bigger than all of those guys,” regardless of how many are in each group.

And, as always, with more power comes the responsibility to manage counter-intuitive results, either by accepting them through rigorous proof or rejecting false constructions like the one above.

Well Orderings and Search

Binary Search

Binary search is perhaps the first and most basic nontrivial algorithm a student learns. For the mathematicians out there, binary search is a fast procedure to determine whether a sorted list contains a particular element. Here is a pseudocode implementation:

# Binary Search:
# Given a list L, sorted via the total order <, and a sought
# element x, return true iff L contains x.

function binarySearch(L, x, <):
   # base cases: an empty slice cannot contain x
   if (length(L) == 0):
      return false
   if (length(L) == 1):
      return L[0] == x
   middleIndex = floor(length(L) / 2)
   if (L[middleIndex] == x):
      return true

   # inductive step, with ellipsis notation meaning slices of L
   # from the beginning and to the end, respectively
   if (x < L[middleIndex]):
      return binarySearch(L[...middleIndex-1], x, <)
   else:
      return binarySearch(L[middleIndex+1...], x, <)

Colloquially, this is the optimal strategy in a game of “guess the number,” where the guesser is told whether her guess is correct, too high, or too low. Try the middle number in the range of possible numbers. If the guess is too high, try the number at the 1/4 mark of the ordering; otherwise try the 3/4 mark, continuing this process until the number is guessed. This algorithm is obviously made for recursion (and for the advanced programmers out there, we can only hope our working language supports tail-call optimization).

Binary search’s runtime is rather easy to analyze. At each step of the algorithm, we either finish, or cut the problem size in half. It takes no stretch of the imagination to see that binary search runs in $ O(\log n)$, where $ n$ is the length of the input list. In other words, after at most $ \log_2 n$ steps, we will have reduced the input list to size 1, and can definitively say whether the list as a whole contains the sought element.
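For those who prefer to see the recurrence spelled out: the running time satisfies $ T(n) \leq T(\lceil n/2 \rceil) + O(1)$, and unrolling this inequality $ \log_2 n$ times gives $ T(n) = O(\log n)$.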

Notice that the success of the algorithm depends on the fact that the list is sorted, but the specific total order $ <$ is irrelevant. We will investigate this idea further, but first we need some deeper maths.

Well and Total Orders

For those of us who aren’t set theorists, it isn’t fair to talk about total orders and well orders without defining them. So here comes a definition:

Definition: A strict total order $ <$ is a relation on a set $ S$ with the following statements holding for all $ a, b, c \in S$:

  • It is never the case that $ a < a$. (anti-reflexivity)
  • Exactly one of the following is true: $ a < b, b < a,$ or $ a = b$. (trichotomy)
  • If $ a < b$ and $ b < c$, then $ a < c$. (transitivity)

These conditions create an environment in which sorting is possible: we need to be able to compare any two elements (trichotomy), we need to be able to inspect two elements at a time and know that our analysis generalizes to the whole list (transitivity), and we can’t break the world of strictness (anti-reflexivity).

Aside: The use of a strict total order as opposed to a non-strict total order is irrelevant, because each strict total order corresponds bijectively to a non-strict total order: given $ <$, define $ a \leq b$ to mean $ a < b$ or $ a = b$, and conversely, given $ \leq$, define $ a < b$ to mean $ a \leq b$ and $ a \neq b$. Hence, there are two equivalent formulations of sorting with respect to strict and non-strict total orders, and we may choose one arbitrarily.

Now, we may elevate a total order $ <$ to a well order if every non-empty subset $ R \subset S$ has a least element with respect to $ <$. We computer scientists only sort finite lists, and every total order on a finite set is automatically a well order. However, the reader may be interested in the mathematical need for such a distinction, so here it is:

Consider the integers $ \mathbb{Z}$ with the standard ordering $ <$. While $ \mathbb{Z}$ itself has no smallest element, neither does any subset which has infinitely many negative numbers, such as the evens or odds. More generally, any open interval in the real numbers $ \mathbb{R}$ obviously doesn’t have a least element with respect to the natural order. In contrast, we rely on the crucial fact that the set of natural numbers $ \mathbb{N}$ is well-ordered to apply mathematical induction.

Interestingly enough, a theorem due to Ernst Zermelo states that every set can be well ordered, and it is equivalent to the Axiom of Choice. While many people have a hard time visualizing a well-ordering of the real numbers $ \mathbb{R}$, we simply resign ourselves (as mystical as it is) to the fact that one exists out there somewhere, though we may never know what it is.

As another aside, it turns out that we only need one of the inequalities in $ (<, \leq, >, \geq)$ and the standard logical operations and (infix &&), or (infix ||), and not (prefix !) in order to define the other three (and indeed, to define $ =, \neq$ as well). This is a computer science trick that comes in handy, as we’ll see later, so here is the described equivalence. Given $ <$, we define the remaining operations as follows:

  • $ x > y$ is equivalent to $ y < x$
  • $ x \geq y$ is equivalent to $ !(x < y)$
  • $ x \leq y$ is equivalent to $ !(y < x)$
  • $ x = y$ is equivalent to $ !(x < y) \textup{ and } !(y < x)$
  • $ x \neq y$ is equivalent to $ x < y \textup{ or } y < x$

So if we are interested in sorting a set via some procedure, all we need from the user is the $ <$ operation, and then we may compare any way our heart desires.
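As a minimal sketch of this trick in Mathematica, where lt stands for the user-supplied $ <$ (the names gt, geq, leq, and neq are ours):

(* Deriving the remaining comparisons from a single strict total order lt. *)
gt[x_, y_, lt_] := lt[y, x];
geq[x_, y_, lt_] := !lt[x, y];
leq[x_, y_, lt_] := !lt[y, x];
eq[x_, y_, lt_] := !lt[x, y] && !lt[y, x];
neq[x_, y_, lt_] := lt[x, y] || lt[y, x];

For example, eq[2, 2, Less] gives True, and geq[3, 5, Less] gives False. The eq here is exactly the eq used in the Mathematica binary search below.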

Defining a New Well Order

Consider a deck of cards which is initially sorted (with some arbitrary ordering on the suits), and then is “cut” at some arbitrary point, with the bottom part placed on top of the deck. We may simplify this “cut” operation to a list of, say, ten numbers, and provide the following example of a cut:

(5,6,9,10,11,13,1,3,3,4)

To fix a convention, we say the “cut point” of this list is 5 (the smallest value in the front segment), not 4 (the largest value in the back segment).
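For concreteness, a cut of a sorted list is just a rotation of it; here is a quick sketch in Mathematica (the rotation amount 4 is our choice for this example):

(* The example cut list arises by rotating the sorted list. *)
sorted = {1, 3, 3, 4, 5, 6, 9, 10, 11, 13};
RotateLeft[sorted, 4]
(* gives {5, 6, 9, 10, 11, 13, 1, 3, 3, 4}, whose cut point is 5 *)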

We have a few (naive) options to search through cut data: we may sort it with respect to the natural total order on $ \mathbb{N}$, and then search through it; we may stick the elements of the list into a hash set (a constant-time lookup table), and then query existence that way; or we may traverse the list element-by-element looking for a particular value.

The problem with the first two methods, though they determine existence, is that they don’t tell us where the value is in the list. If this is not important, and we are searching many times (compared to the size of the list) for many different values, then a hash table would be the correct choice. However, if we are searching only a few times, and need to know where the value is hidden, all three of the above approaches are slow and inelegant.

Enter well orders. You may have noticed that a cut list of numbers has a very simple well ordering in terms of the natural order on $ \mathbb{N}$. Verbally, if two numbers lie on opposite sides of the cut point, then the one which is larger in the usual order is the smaller of the two in the new order. Otherwise, we may appeal to the usual ordering. Here it is in pseudocode:

# here cutPoint is assumed to be in scope; the next version
# accepts it as an explicit parameter
function lessThan(x, y):
   if (y < cutPoint <= x):
      return true
   else if (x < cutPoint <= y):
      return false
   else:
      return x < y

And we may compress these if statements into a single condition:

function lessThan(cutPoint, x, y):
   return y < cutPoint <= x || (!(x < cutPoint <= y) && x < y)

function makeCutOrdering(cutPoint):
   return lambda x,y: lessThan(cutPoint, x, y)

So we have found that a cut list of numbers is in fact sorted with respect to this new well order. Forget about preprocessing the data: we can just do a binary search using this new ordering! Here’s a Mathematica script that does just that. Here we assume constant-time list length calculations and fast slicing (which Mathematica has). Note that list indexing and slicing use double square bracket [[ ]] syntax, while function application uses single square brackets [ ]. It should look very similar to the earlier pseudocode for binary search.

eq[x_, y_, lt_] := !lt[x,y] && !lt[y,x];

(* base cases; the empty list can arise from the slices below *)
BinarySearch[{}, sought_, lt_] := False;
BinarySearch[{oneElt_}, sought_, lt_] := eq[oneElt, sought, lt];

(* inductive step *)
BinarySearch[lst_List, sought_, lt_] :=
   Module[{size = Length[lst], midVal, midNdx},
      midNdx = Floor[size/2];
      midVal = lst[[midNdx]];
      If[eq[midVal, sought, lt],
         True,
         If[lt[sought, midVal],
            BinarySearch[lst[[ ;; midNdx - 1]], sought, lt],
            BinarySearch[lst[[midNdx + 1 ;; ]], sought, lt]
         ]
      ]
   ];

Notice that if we use the standard $ <$ (in Mathematica, the function Less), then the BinarySearch function reverts to a standard binary search. Marvelous! Now we have a reusable piece of code that searches through any well-ordered set, so long as we supply the correct well order.
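For example, here is a hypothetical usage with the cut ordering from earlier rewritten in Mathematica (the name cutLess is ours):

(* The cut ordering as a Mathematica comparison function. *)
cutLess[cutPoint_] := Function[{x, y},
   Which[
      y < cutPoint <= x, True,
      x < cutPoint <= y, False,
      True, x < y
   ]];

cutList = {5, 6, 9, 10, 11, 13, 1, 3, 3, 4};
BinarySearch[cutList, 3, cutLess[5]]   (* True *)
BinarySearch[cutList, 7, cutLess[5]]   (* False; 7 is not in the list *)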

The lesson to take from this is: know your data! If your input list is not sorted, but still structured in some way, then there’s a good chance it is sorted with respect to a non-natural total order. For example, many operating systems order filenames which end in numbers oddly (e.g. “file1”, “file11”, “file12”, “file2”), and in certain places in the world, financial calendars are ordered differently (in Australia, the fiscal year starts in July). So take advantage of that, and you’ll never need to write binary search again.