# Ramsey Number Lower Bound

Define the Ramsey number $R(k,m)$ to be the minimum number $n$ of vertices of the complete graph $K_n$ such that for any two-coloring (red, blue) of the edges of $K_n$, one of two things happens:

• There is a red $k$-clique; that is, a complete subgraph of $k$ vertices for which all edges are red.
• There is a blue $m$-clique.

It is known that these numbers are always finite, but it is very difficult to compute them exactly.

Problem: Prove that the Ramsey number $R(m,m) > n$ whenever $n,m$ satisfy

$\displaystyle \binom{n}{m}2^{1-\binom{m}{2}} < 1$

Solution: Color the edges of $K_n$ uniformly at random; that is, each edge is independently colored red with probability 1/2 (and blue otherwise). For any complete subgraph $G = K_m$, let $A_G$ be the event that $G$ is monochromatic (its edges are either all red or all blue).

Since $G$ has $\binom{m}{2}$ edges and exactly two colorings of $G$ are monochromatic (all red or all blue), the probability that $A_G$ occurs (where $G$ is fixed ahead of time) is easy to compute:

$\displaystyle \textup{Pr}(A_G) = 2\left (\frac{1}{2} \right)^{\binom{m}{2}} = 2^{1-\binom{m}{2}}$

Since there are $\binom{n}{m}$ possible subgraphs with $m$ vertices, the union bound shows that the probability that the event $A_G$ occurs for some $G$ is at most

$\displaystyle \binom{n}{m}2^{1-\binom{m}{2}}$

Whenever this quantity is strictly less than 1 (which it is by assumption), there is a positive probability that no event $A_G$ occurs. That is, there is a positive probability that a random coloring has no monochromatic subgraph $K_m$. So such a coloring must exist, and the Ramsey number $R(m,m)$ must be larger than $n$. $\square$
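To get a feel for what the inequality buys, here is a quick numeric sketch (the helper names `bound_holds` and `best_lower_bound` are ours, not standard) that finds the largest $n$ the bound certifies for a given $m$:

```python
from math import comb, log2

def bound_holds(n, m):
    """True when C(n, m) * 2^(1 - C(m, 2)) < 1, i.e. the proof gives R(m, m) > n."""
    # Compare logarithms to avoid astronomically large intermediate values.
    return log2(comb(n, m)) + 1 - comb(m, 2) < 0

def best_lower_bound(m):
    """The largest n for which the inequality certifies R(m, m) > n."""
    n = m
    while bound_holds(n + 1, m):
        n += 1
    return n

print(best_lower_bound(4))  # → 6, i.e. the bound certifies R(4,4) > 6
```

For small $m$ the certified value is far from the truth (it is known that $R(4,4) = 18$), but as $m$ grows the inequality forces $R(m,m)$ to grow exponentially.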

Discussion: This proof (originally due to Erdős) is a classic example of the so-called probabilistic method. In particular, we create a probability space from the object we wish to study, and then we make claims about the probability of joint events.

While it seems quite simple in nature, the probabilistic method has been successfully applied to a wide variety of problems in mathematics. For instance, there is an elegant proof in complexity theory that $\textup{BPP} \subseteq \textup{P/poly}$ which uses this same method. The probabilistic method has been applied to loads of problems in combinatorics, number theory, and graph theory, and it forms the foundation of random graph theory (the setting in which one studies models of social networks). Perhaps unsurprisingly, there is also a proof of the fundamental theorem of algebra that uses the probabilistic method.

# There are Infinitely Many Primes (Erdős)

Problem: Prove there are infinitely many primes.

Solution: Denote by $\pi(n)$ the number of primes less than or equal to $n$. We will give a lower bound on $\pi(n)$ which increases without bound as $n \to \infty$.

Note that every number $n$ can be factored as the product of a square-free number $r$ (a number not divisible by any square greater than 1) and a square $s^2$. In particular, to find $s^2$, note that 1 is a square dividing $n$ and that only finitely many squares divide $n$, so there must be a largest one; then $r = n/s^2$. We will give a bound on the number of such products $rs^2$ which are less than or equal to $n$.
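This factorization is easy to compute in practice. A minimal sketch (the function name `squarefree_part` is ours), which peels off square factors one prime at a time rather than searching for the largest square directly:

```python
from math import isqrt

def squarefree_part(n):
    """Split n as n = r * s**2 with r square-free; return (r, s)."""
    s = 1
    for d in range(2, isqrt(n) + 1):
        while n % (d * d) == 0:  # divide out the square d^2 as often as possible
            n //= d * d
            s *= d
    return n, s  # what remains of n is the square-free part r

r, s = squarefree_part(360)
print(r, s)  # → 10 6, since 360 = 10 * 6^2
```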

If we want to count the square-free numbers up to $n$, we can observe that each square-free number is a product of distinct primes, and so (as in our false proof that there are finitely many primes) each square-free number corresponds to a subset of primes. At worst, these primes can be as large as $n$ (for example, if $n$ itself is prime), so there are no more than $2^{\pi(n)}$ such subsets.
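A small empirical check of this subset bound (both helper functions are ours; trial division is plenty for small $n$):

```python
from math import isqrt

def primes_up_to(n):
    """All primes <= n by trial division."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, isqrt(p) + 1))]

def is_squarefree(k):
    """True when no square greater than 1 divides k."""
    return all(k % (d * d) for d in range(2, isqrt(k) + 1))

n = 100
squarefree_count = sum(1 for k in range(1, n + 1) if is_squarefree(k))
print(squarefree_count, 2 ** len(primes_up_to(n)))  # 61 square-free numbers vs 2^25 subsets
assert squarefree_count <= 2 ** len(primes_up_to(n))
```

The bound is enormously generous, but generosity is fine: the proof only needs the inequality, not a tight count.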

Similarly, there are at most $\sqrt{n}$ square numbers less than $n$, since if $x > \sqrt{n}$ then $x^2 > n$.

At worst, the two factors $r$ and $s^2$ can be chosen independently of each other, so the total number of products $rs^2$ is at most $2^{\pi(n)}\sqrt{n}$. Since each of the $n$ numbers up to $n$ has such a factorization, we get

$2^{\pi(n)}\sqrt{n} \geq n$

The rest is algebra: divide by $\sqrt{n}$ and take logarithms base 2 to see that $\pi(n) \geq \frac{1}{2} \log_2(n)$. Since $\log_2(n)$ is unbounded as $n$ grows, so is $\pi(n)$. Hence, there are infinitely many primes.
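The final inequality $\pi(n) \geq \frac{1}{2}\log_2(n)$ is easy to check numerically; a minimal sketch with a simple sieve (`prime_pi` is our name, not a standard library function):

```python
from math import log2

def prime_pi(n):
    """Count primes <= n with a sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

# The true count dwarfs the bound, as expected from such a generous estimate.
for n in (10, 100, 1000, 10 ** 5):
    assert prime_pi(n) >= 0.5 * log2(n)
    print(n, prime_pi(n), round(0.5 * log2(n), 2))
```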

Discussion: This is a classic analytic argument originally discovered by Paul Erdős. One of the classical ways to investigate the properties of prime numbers is to estimate $\pi(n)$. In fact, much of the work of the famous number theorists of the past involved approximating $\pi(n)$ in terms of logarithms, usually by finding good upper and lower bounds and limits. Erdős’s proof is entirely in this spirit, although far more accurate lower and upper bounds are known. In this proof we count many pairs which are not actually valid factorizations (many larger choices of $r, s^2$ have product exceeding $n$). But for the purpose of proving there are infinitely many primes, this bound is about as elegant as one can find.

# Double Angle Trigonometric Formulas

Problem: Derive the double angle identities

$\sin(2\theta) = 2\sin(\theta)\cos(\theta)\\ \cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)$

Solution: Recall from linear algebra how one rotates a point in the plane. The matrix of rotation (derived by seeing where $(1,0)$ and $(0,1)$ go under a rotation by $\theta$, and writing those coordinates in the columns) is

$A = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$

Next, note that to rotate a point twice by $\theta$, we simply multiply the point (as a vector) by $A$ twice. That is, multiply by $A^2$:

$AAv = A^2v$

Computing $A^2$ gives the following matrix:

$A^2 = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & -2\sin(\theta)\cos(\theta) \\ 2\sin(\theta)\cos(\theta) & \cos^2(\theta) - \sin^2(\theta) \end{pmatrix}$

But rotating twice by $\theta$ is the same as rotating once by $2\theta$, so we have the equality:

$\begin{pmatrix} \cos(2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta) \end{pmatrix} = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & -2\sin(\theta)\cos(\theta) \\ 2\sin(\theta)\cos(\theta) & \cos^2(\theta) - \sin^2(\theta) \end{pmatrix}$

The matrices are equal, so they must be equal entrywise, giving the identities we desire. $\square$
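The matrix identity (and hence both formulas) can be checked numerically at any angle; a minimal sketch in plain Python, with a hand-rolled 2×2 multiply since the point is just verification:

```python
from math import sin, cos, isclose

def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7  # an arbitrary test angle
A = [[cos(theta), -sin(theta)],
     [sin(theta),  cos(theta)]]
A2 = matmul2(A, A)  # rotate twice by theta

B = [[cos(2 * theta), -sin(2 * theta)],
     [sin(2 * theta),  cos(2 * theta)]]  # rotate once by 2*theta

assert all(isclose(A2[i][j], B[i][j]) for i in range(2) for j in range(2))
print("double angle identities hold at theta =", theta)
```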

Discussion: There are (painful, messy) ways to derive these identities by drawing triangles on the unit circle and cultishly chanting “soh-cah-toa.” The key idea in this proof is that one might study geometric transformations themselves, and that is a truly mature viewpoint of mathematics. Specifically, over the last two hundred years the field of mathematics has shifted focus from the study of mathematical “things” to the study of transformations of mathematical things. This proof is an elementary example of the power such a perspective can provide. If you want to be really high-brow, start asking about transformations of transformations of things, and transformations of those transformations, and recurse until you’re doing something original.

# False Proof – 2 = 4, As the Limit of an Infinite Power Tower

Problem: Prove that $2 = 4$.

Solution: Consider the value of the following infinitely iterated exponent:

$\displaystyle \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdot^{\cdot^{\cdot}}}}}$

Let $a_n = \sqrt{2} \uparrow \uparrow n$, that is, the above power tower where we stop at the $n$-th term. Then $a_n$ is clearly an increasing sequence, and moreover $a_n \leq 4$ by a trivial induction argument: $\sqrt{2} \leq 4$ and if $a_n \leq 4$ then $a_{n+1} = (\sqrt{2})^{a_n} \leq (\sqrt{2})^{4} = 4$.

So by a basic result in analysis (a bounded, increasing sequence converges), the sequence $a_n$ converges to a limit. Let’s call this limit $x$, and try to solve for it.

It is not hard to see that $x$ is defined by the equation:

$\displaystyle (\sqrt{2})^x = x,$

simply because the infinite power tower starting at the second level is the same as the infinite power tower starting at the first level. However, we notice that $x=2$ and $x=4$ both satisfy this relationship. Hence,

$\displaystyle 2 = \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdot^{\cdot^{\cdot}}}}} = 4$

And hence $2=4$, as desired. $\square$

Explanation: The issue here is relatively subtle, but one trained in real analysis should have no trouble spotting the problem.

The first alarm is that we are claiming the limit of a sequence is not unique, which is impossible. Inspecting the argument more closely, we trace the flaw to our claim that $x$ is defined by the equation $(\sqrt{2})^x = x$. It does, in fact, satisfy this equation, but there is a world of difference between satisfying an equation and being defined by satisfying an equation. Defining a number this way only makes sense when there is a unique solution to the equation. But as we just saw, there are multiple valid solutions here: 2 and 4 (I don’t immediately see any others; it’s clear no other powers of two will work).

Analogously, one might erroneously say that $i$ is defined by satisfying the equation $x^2 = -1$, but this does not imply $i = -i$, since the equation has two distinct complex roots. This is a slightly murky analogy, as complex conjugation gives an automorphism of $\mathbb{C}$. One could reasonably argue that $i$ and $-i$ “play the same role” in this field, while 2 and 4 play wildly different roles as real numbers. But nevertheless, two numbers are not equal simply by virtue of satisfying the same equation.

Moreover, while we said the sequence is bounded from above by 4, an identical induction (using $(\sqrt{2})^2 = 2$) shows it is bounded from above by 2. And so one can immediately conclude that the limit of the sequence cannot be 4.
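One can watch the sequence settle at 2 rather than 4; a minimal sketch iterating $a_{n+1} = (\sqrt{2})^{a_n}$:

```python
from math import sqrt

x = sqrt(2)  # a_1
for _ in range(100):
    x = sqrt(2) ** x  # a_{n+1} = (sqrt 2)^(a_n)

print(round(x, 10))  # → 2.0
```

The convergence is geometric: near the fixed point 2, each step shrinks the error by roughly a factor of $\ln 2 \approx 0.69$.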

In fact, this infinite power tower $x^{x^{\cdot^{\cdot^{\cdot}}}}$ is continuous as a function of $x$, with domain $[e^{-e}, \sqrt[e]{e}]$ and range $[e^{-1},e]$. For more information about interesting number-theoretic properties of infinite power towers, see this paper.