# The Inequality

Math and computer science are full of inequalities, but there is one that shows up more often in my work than any other. Of course, I’m talking about $\displaystyle 1+x \leq e^{x}$

This is The Inequality. I’ve been told on many occasions that the entire field of machine learning reduces to The Inequality combined with the Chernoff bound (which is proved using The Inequality).

Why does it show up so often in machine learning? Mostly because when analyzing an algorithm you want to bound the probability that some bad event happens, and such probabilities usually take the form of a product $\displaystyle \prod_{i=1}^m (1-p_i)$

And applying The Inequality we can bound this from above by $\displaystyle\prod_{i=1}^m (1-p_i) \leq \prod_{i=1}^m e^{-p_i} = e^{-\sum_{i=1}^m p_i}$

The point is that usually $m$ is the size of your dataset, which you get to choose, and by picking a larger $m$ you make the probability of the bad event vanish exponentially quickly in $m$. (The $p_i$ here are unrelated to the $p_i$ I'm about to use as weights below.)
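To see this bound in action, here's a quick numerical sanity check; the particular $p_i$ values are arbitrary, chosen only for illustration.

```python
import math
import random

# Compare the "bad event" product to its exponential upper bound,
# using The Inequality applied termwise: 1 - p <= e^{-p}.
random.seed(0)
m = 1000
ps = [random.uniform(0.001, 0.01) for _ in range(m)]

product = 1.0
for p in ps:
    product *= (1 - p)

bound = math.exp(-sum(ps))

# The product is bounded above by e^{-sum(p_i)}, which shrinks
# exponentially as m (and hence the sum) grows.
assert product <= bound
```

Increasing `m` drives `bound` (and with it the probability of the bad event) to zero exponentially fast.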

Of course, The Inequality has much deeper implications than bounds for the efficiency and correctness of machine learning algorithms. To convince you of the depth of this simple statement, let’s see its use in an elegant proof of the arithmetic-mean geometric-mean inequality.

Theorem: (The arithmetic-mean geometric-mean inequality, general version): For all non-negative real numbers $a_1, \dots, a_n$ and all positive $p_1, \dots, p_n$ such that $p_1 + \dots + p_n = 1$, the following inequality holds: $\displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq p_1 a_1 + \dots + p_n a_n$

Note that when all the $p_i = 1/n$ this is the standard AM-GM inequality.
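Before proving it, here's a quick numerical spot check of the statement on random inputs (the ranges and sizes below are arbitrary):

```python
import math
import random

# Check the weighted AM-GM inequality
#   a_1^{p_1} ... a_n^{p_n} <= p_1 a_1 + ... + p_n a_n
# for random nonnegative a_i and random weights p_i summing to 1.
random.seed(1)
for _ in range(1000):
    n = random.randint(2, 10)
    a = [random.uniform(0.0, 100.0) for _ in range(n)]
    raw = [random.uniform(0.01, 1.0) for _ in range(n)]
    total = sum(raw)
    p = [w / total for w in raw]  # normalize so the weights sum to 1

    geometric = math.prod(ai ** pi for ai, pi in zip(a, p))
    arithmetic = sum(pi * ai for ai, pi in zip(a, p))
    assert geometric <= arithmetic + 1e-6  # tolerance for float rounding
```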

Proof. This proof is due to George Polya (in Hungarian, Pólya György).

We start by modifying The Inequality $1+x \leq e^x$ by a shift of variables $x \mapsto x-1$, so that the inequality now reads $x \leq e^{x-1}$. Applying this to each $a_i$ gives $a_i \leq e^{a_i - 1}$; raising both sides to the power $p_i$ and taking the product over $i$ (using $p_1 + \dots + p_n = 1$ in the last step), we get $\displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq e^{\sum_{i=1}^n p_ia_i - p_i} = e^{\left ( \sum_{i=1}^n p_ia_i \right ) - 1}$

Now we have something quite curious: if we call $A$ the sum $p_1a_1 + \dots + p_na_n$, the above shows that $a_1^{p_1} \cdots a_n^{p_n} \leq e^{A-1}$. Moreover, because $A \leq e^{A-1}$ as well, the right hand side of the inequality we’re trying to prove is also bounded by $e^{A-1}$. So both sides of our desired inequality (and in particular, the max of the two) are bounded from above by $e^{A-1}$. This seems like a conundrum until we introduce the following beautiful idea: normalize by the thing you think should be the larger of the two sides of the inequality.

Define new variables $b_i = a_i / A$ and notice that $\sum_i p_i b_i = 1$ just by unraveling the definition. Call this sum $B = \sum_i p_i b_i$. Now we know that $b_1^{p_1} \cdots b_n^{p_n} = \left ( \frac{a_1}{A} \right )^{p_1} \cdots \left ( \frac{a_n}{A} \right )^{p_n} \leq e^{B - 1} = e^0 = 1$

Now we unpack the pieces: multiplying through by $A^{p_1}A^{p_2} \cdots A^{p_n} = A^{p_1 + \dots + p_n} = A$, the result is exactly the AM-GM inequality. $\square$

Even deeper, there is only one case when The Inequality is tight, i.e. when $1+x = e^x$, and that is $x=0$ (equivalently, $x = e^{x-1}$ only when $x = 1$). This allows us to extract from the proof above a full characterization of the case of equality. Indeed, the crucial step was the bound $b_i \leq e^{b_i - 1}$ for each $b_i = a_i / A$, which is tight only when $b_i = 1$, i.e. when $a_i = A$. Spending a few seconds thinking about this gives the characterization: equality holds if and only if $a_1 = a_2 = \dots = a_n = A$.
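A tiny numeric illustration of the equality case (the particular values are arbitrary):

```python
import math

# When all a_i equal some common value, both sides of AM-GM agree.
a = [3.7] * 5
p = [1 / 5] * 5
geo = math.prod(ai ** pi for ai, pi in zip(a, p))
arith = sum(pi * ai for ai, pi in zip(a, p))
assert abs(geo - arith) < 1e-9

# Perturbing even one a_i breaks equality: the inequality becomes strict.
a2 = [3.7, 3.7, 3.7, 3.7, 5.0]
geo2 = math.prod(ai ** pi for ai, pi in zip(a2, p))
arith2 = sum(pi * ai for ai, pi in zip(a2, p))
assert geo2 < arith2
```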

So this is excellent: the arithmetic-geometric inequality is a deep theorem with applications all over mathematics and statistics. Adding another layer of indirection for impressiveness, one can use the AM-GM inequality to prove the Cauchy-Schwarz inequality rather directly. Sadly, the Wikipedia page for the Cauchy-Schwarz inequality hardly does justice to its massive number of applications. For example, many novel techniques in geometry and number theory are proved directly from C-S. More, in fact, than I can hope to learn.

Of course, no article about The Inequality could be complete without a proof of The Inequality.

Theorem: For all $x \in \mathbb{R}$, $1+x \leq e^x$.

Proof. The proof starts by proving a simpler theorem, named after Bernoulli, that $1+nx \leq (1+x)^n$ for every $x \in [-1, \infty)$ and every $n \in \mathbb{N}$. This is relatively straightforward by induction. The base case is trivial, and for the inductive step, $\displaystyle (1+x)^{n+1} = (1+x)(1+x)^n \geq (1+x)(1+nx) = 1 + (n+1)x + nx^2$ where the restriction $x \geq -1$ guarantees $1+x \geq 0$, so multiplying by it preserves the inequality.

And because $nx^2 \geq 0$, we get Bernoulli’s inequality.

Now for any $z \geq 0$ we can set $x = z/n$, and get $(1+z) = (1+nx) \leq (1+\frac{z}{n})^n$ for every $n$. Taking the limit of both sides as $n \to \infty$, the right hand side becomes the definition of $e^z$, giving $1+z \leq e^z$. For $z < 0$, once $n > |z|$ we have $z/n \in (-1, 0)$, so Bernoulli’s inequality still applies and the same limit argument goes through. This proves the theorem. $\square$
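Both steps of this proof can be checked numerically; here's a sketch with some arbitrary test values:

```python
import math

# Bernoulli's inequality: 1 + n*x <= (1+x)^n for x >= -1.
for x in [-0.9, -0.5, 0.0, 0.3, 2.0]:
    for n in range(1, 20):
        assert 1 + n * x <= (1 + x) ** n + 1e-12  # float tolerance

# The sequence (1 + z/n)^n increases up to e^z as n grows.
z = 1.5
approximations = [(1 + z / n) ** n for n in [1, 10, 100, 10000]]
assert approximations == sorted(approximations)   # increasing in n
assert all(a <= math.exp(z) for a in approximations)
assert abs(approximations[-1] - math.exp(z)) < 1e-3
```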

What other insights can we glean about The Inequality? For one, it’s a truncated version of the Taylor series approximation $\displaystyle e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$

Indeed, the Taylor remainder theorem tells us that the first two terms approximate $e^x$ around zero with error $\frac{e^{\xi}}{2} x^2 \geq 0$ for some $\xi$ between $0$ and $x$. In other words, $1+x$ is a lower bound on $e^x$ around zero. It is perhaps miraculous that this extends to a lower bound everywhere, until you realize that exponentials grow extremely quickly and lines do not.

One might wonder whether we can improve our approximation with higher order approximations. Indeed we can, but we have to be a bit careful. In particular, $1+x+x^2/2 \leq e^x$ is only true for nonnegative $x$ (because for negative $x$ the next remainder term, of order $x^3$, is negative), but if we stop at an odd-degree term we win: $1+x+x^2/2 + x^3/6 \leq e^x$ is true for all $x$.
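A quick check of both claims on a few sample points (chosen arbitrarily):

```python
import math

# The quadratic truncation 1 + x + x^2/2 overshoots e^x for some
# negative x, so it is not a lower bound everywhere.
x = -5.0
assert 1 + x + x**2 / 2 > math.exp(x)

# Stopping at the cubic (odd-degree) term gives a lower bound everywhere.
for x in [-10.0, -1.0, -0.1, 0.0, 0.1, 1.0, 10.0]:
    cubic = 1 + x + x**2 / 2 + x**3 / 6
    assert cubic <= math.exp(x)
```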

What is really surprising about The Inequality is that, at least in the applications I work with, we rarely see higher order approximations used. For most applications, the difference between an error term which is quadratic and one which is cubic or quartic is often not worth the extra work in analyzing the result. You get the same theorem: that something vanishes exponentially quickly.

If you’re interested in learning more about the theory of inequalities, I wholeheartedly recommend The Cauchy-Schwarz Master Class. This book is wonderfully written, and chock full of fun exercises. I know because I do exercises from books like this one on planes and trains. It’s my kind of sudoku 🙂

# Cauchy-Schwarz Inequality (and Amplification)

Problem: Prove that for vectors $v, w$ in an inner product space, the following inequality holds: $\displaystyle |\left \langle v, w \right \rangle | \leq \| v \| \| w \|$

Solution: There is an elementary proof of the Cauchy-Schwarz inequality (see the Wikipedia article), and this proof is essentially the same. What makes this proof stand out is its insightful technique, which I first read about on Terry Tao’s blog. He calls it “textbook,” and maybe it is for an analyst, but it’s still very elegant.

We start by observing another inequality we know to be true, that $\| v - w \|^2 = \left \langle v - w, v - w \right \rangle \geq 0$, since norms are by definition nonnegative. By the properties of a complex inner product we can expand to get $\displaystyle \| v \|^2 - 2 \textup{Re}(\left \langle v,w \right \rangle) + \| w \|^2 \geq 0$

or equivalently $\displaystyle \textup{Re}(\left \langle v,w \right \rangle) \leq \frac{1}{2} \| v \|^2 + \frac{1}{2} \| w \|^2$

This inequality is close to the one we’re looking for, but ‘weaker’ because the inequality we seek squeezes inside the inequality we have. That is, $\displaystyle \textup{Re}(\left \langle v,w \right \rangle) \leq |\left \langle v, w \right \rangle | \leq \| v \| \| w \| \leq \frac{1}{2} \| v \|^2 + \frac{1}{2} \| w \|^2$

The first inequality is trivial (the real part of a complex number is at most its modulus), the second is the inequality we seek to prove, and the third is a consequence of the arithmetic-geometric mean inequality. And so we have an inequality we’d like to “tighten” to get the true theorem. We do this by tightening each side of the inequality separately, and we do each by exploiting symmetries in the expressions involved.

First, we observe that norms of vectors are preserved by (complex) rotations $v \mapsto e^{i \theta}v$, but the real part of the inner product is not. Since this inequality is true no matter which vectors we choose, we can choose $\theta$ to our advantage. That is, $\displaystyle \textup{Re}(\left \langle e^{i \theta}v, w \right \rangle) \leq \frac{1}{2} \| e^{i \theta}v \|^2 + \frac{1}{2} \| w \|^2$

And by properties of inner products and norms (pulling out scalars) and the fact that $|e^{i\theta}| = 1$, we can simplify to $\displaystyle \textup{Re}(e^{i \theta}\left \langle v,w \right \rangle) \leq \frac{1}{2}\| v \|^2 + \frac{1}{2} \| w \|^2$

where $\theta$ is arbitrary. Since we want to make the left hand side as large as possible, we choose $\theta$ so that $e^{i \theta}\left \langle v,w \right \rangle$ is real and nonnegative. Then the real part is just the absolute value of $\left \langle v,w \right \rangle$ itself, and we have $\displaystyle |\left \langle v,w \right \rangle | \leq \frac{1}{2} \| v \|^2 + \frac{1}{2} \| w \|^2$

Now we tighten the right hand side by exploiting a symmetry in inner products: for real $\lambda > 0$, the transformation $(v,w) \mapsto (\lambda v, \frac{1}{\lambda} w)$ preserves the left hand side (this holds even for complex $\lambda$, since $|\lambda / \bar{\lambda}| = 1$) but not the right. And so by the same reasoning, we can transform the above inequality into $\displaystyle |\left \langle v,w \right \rangle | \leq \frac{\lambda^2}{2} \| v \|^2 + \frac{1}{2 \lambda^2} \| w \|^2$

And by plugging in $\lambda = \sqrt{\| w \| / \| v \|}$ (indeed, this minimizes the expression for nonzero $v,w$) we get exactly the Cauchy-Schwarz inequality, as desired. $\square$
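Both amplification steps can be verified numerically. Here's a sketch with random complex vectors; the helper names `inner` and `norm` are my own, assuming the standard complex inner product (conjugate-linear in the second argument):

```python
import cmath
import math
import random

def inner(v, w):
    # Standard complex inner product, conjugate-linear in the second slot.
    return sum(vi * wi.conjugate() for vi, wi in zip(v, w))

def norm(v):
    return math.sqrt(inner(v, v).real)

random.seed(2)
n = 4
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
w = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Step 1: the rotation e^{i theta} with theta = -arg<v, w> makes the
# inner product real and nonnegative, equal to its absolute value.
ip = inner(v, w)
theta = -cmath.phase(ip)
rotated = inner([cmath.exp(1j * theta) * vi for vi in v], w)
assert abs(rotated.imag) < 1e-9
assert abs(rotated.real - abs(ip)) < 1e-9

# Step 2: the optimal scaling collapses both right-hand terms
# to ||v|| ||w|| / 2, recovering Cauchy-Schwarz.
lam = math.sqrt(norm(w) / norm(v))
rhs = (lam**2 / 2) * norm(v)**2 + (1 / (2 * lam**2)) * norm(w)**2
assert abs(rhs - norm(v) * norm(w)) < 1e-9
assert abs(ip) <= norm(v) * norm(w) + 1e-9
```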

This technique is termed “amplification” by Tao, and in his blog post he gives quite a few more advanced examples in harmonic and functional analysis (which are far beyond the scope of this blog). The asymmetrical symmetry we took advantage of is a sort of “arbitrage” (again Terry’s clever choice of words) to take a weak fact and boost it to a stronger fact. And while the details of this proof are quite trivial, the technique of actively looking for one-sided symmetries is difficult to forget.