# Methods of Proof — Induction

In this final post on the basic four methods of proof (but perhaps not our last post on proof methods), we consider the proof by induction.

## Proving Statements About All Natural Numbers

Induction comes in many flavors, but the goal never changes. We use induction when we want to prove something is true about all natural numbers. These statements will look something like this:

*For all natural numbers $n$, $1 + 2 + \dots + n = n(n+1)/2$.*

Of course, there are many ways to prove this fact, but induction relies on one key idea: if we know the statement is true for some specific number $n$, then that gives us information about whether the statement is true for $n+1$. If this isn’t true about the problem, then proof by induction is hopeless.

Let’s see how we can apply it to the italicized statement above (though we haven’t yet said what induction is, this example will pave the way for a formal description of the technique). The first thing we notice is that indeed, if we know something about the first $n$ numbers, then we can just add $n+1$ to it to get the sum of the first $n+1$ numbers. That is,

$\displaystyle 1 + \dots + n + (n+1) = (1 + \dots + n) + (n+1)$

Reiterating our key point, this is how we know induction is a valid strategy: the statement written for a fixed $n$ translates naturally into the statement about $n+1$. Now suppose we know the theorem is true for $n$. Then we can rewrite the above sum as follows:

$\displaystyle 1 + \dots + n + (n+1) = \frac{n(n+1)}{2} + (n+1)$

With some algebra, we can write the right-hand side as a single fraction:

$\displaystyle 1 + \dots + (n+1) = \frac{n(n+1) + 2(n+1)}{2}$

and factoring the numerator gives

$\displaystyle 1 + \dots + (n+1) = \frac{(n+1)(n+2)}{2}$

Indeed, this is precisely what we’re looking for! It’s what happens when you replace $n$ by $n+1$ in the original statement of the problem.

At this point we’re very close to being finished. We proved that if the statement is true for $n$, then it’s true for $n+1$. And by the same reasoning, it will be true for $n+2, n+3,$ and all numbers after $n$. But this raises the obvious question: what’s the smallest number that it’s true for?

For this problem, it’s easy to see the answer is $n=1$. A mathematician would say: the statement is trivially true for $n=1$ (here trivial means there is no thinking required to show it: you just plug in $n=1$ and verify). And so by our reasoning, the statement is true for $n=2, n=3,$ and so on forever. This is the spirit of mathematical induction.

## Formal Nonsense

Now that we’ve got a taste of how to use induction in practice, let’s formally write down the rules for induction. Let’s have a statement which depends on a number $n$, and call it $p(n)$. This is written as a function because it actually is one (naively). It’s a function from the set of natural numbers to the set of all mathematical statements. In our example above, $p(n)$ was the statement that the equality $1 + \dots + n = n(n+1)/2$ holds.

We can plug in various numbers into this function, such as $p(1)$ being the statement “$1 = 1(1+1)/2$ holds,” or $p(n+1)$ being “$1 + \dots + (n+1) = (n+1)(n+1+1)/2$ holds.”

The basic recipe for induction is then very simple. Say you want to prove that $p(n)$ is true for all $n \geq 1$. First you prove that $p(1)$ is true (this is called the base case), and then you prove the implication $p(n) \to p(n+1)$ for an arbitrary $n$. Each of these pieces can be proved with any method one wishes (direct, contrapositive, contradiction, etc.). Once they are proven, we get that $p(n)$ is true for all $n$ automatically.

Indeed, we can even prove that induction works. A fully rigorous proof requires a bit of extra knowledge, but we can easily understand the argument: it’s just a proof by contradiction. Here’s a quick sketch. Let $X$ be the set of all natural numbers $n$ for which $p(n)$ is false. Suppose to the contrary that $X$ is not empty. Every nonempty set of natural numbers has a smallest element, so let $m$ be the smallest number for which $p(m)$ is false. Since we proved $p(1)$ directly, $m$ must be larger than 1, so $m-1$ is a natural number smaller than $m$, and by the minimality of $m$ the statement $p(m-1)$ must be true. But we proved that whenever $p(n)$ is true then so is $p(n+1)$, so $p(m-1+1) = p(m)$ is true, a contradiction.

Besides practicing proof by induction, that’s all there is to it. One more caveat is that the base case can be some number other than 1. For instance, it is true that $n! > 2^n$, but only for $n \geq 4$. In this case, we prove $p(4)$ is true, and we prove $p(n) \to p(n+1)$ with the extra assumption that $n \geq 4$. The same inductive argument then gives $p(n)$ for all $n \geq 4$.
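As a quick sanity check (not a proof), a few lines of Python make it clear why the base case has to start at 4:

```python
from math import factorial

# Check where n! first exceeds 2^n.
for n in range(1, 8):
    print(n, factorial(n) > 2 ** n)
# n = 1, 2, 3 print False; from n = 4 onward the inequality holds,
# which is why the induction starts at the base case p(4).
```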

Here are some exercises the reader can practice with, and afterward we will explore some variants of induction.

1. Prove that $n! > 2^n$ for all $n \geq 4$.
2. Prove that for all $n \geq 1$ the following equality holds: $1/(1 \cdot 2) + 1/(2 \cdot 3) + \dots + 1/(n \cdot (n+1)) = n/(n+1)$.
3. Prove that for every natural number $n$, a set of $n$ elements has $2^n$ subsets (including the empty subset).

This last exercise gives a hint that induction can prove more than arithmetic formulas. Indeed, if we have any way to associate a mathematical object with a number, we can potentially use induction to prove things about those objects. Unfortunately, we don’t have any mathematical objects to work with (except for sets and functions), and so we will stick primarily to proving facts about numbers.

One interesting observation about proof by induction is very relevant to programmers: it’s just recursion. That is, if we want to prove a statement $p(n)$, it suffices to prove it for $p(n-1)$ and do some “extra computation” to arrive at the statement for $p(n)$. And of course, we want to make sure the recursion terminates, so we add in the known result for $p(1)$.
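To make the analogy concrete, here is a small sketch in Python (the function name is my own) whose structure mirrors the inductive proof above: a base case for $n = 1$ and a recursive step that assumes the result for $n-1$.

```python
def triangular(n):
    """Compute 1 + 2 + ... + n the way the induction is structured."""
    if n == 1:                     # base case: p(1)
        return 1
    return triangular(n - 1) + n   # inductive step: p(n-1) plus the "extra computation"

# Sanity check against the closed form we proved by induction.
assert all(triangular(n) == n * (n + 1) // 2 for n in range(1, 100))
```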

## Variations on Induction, and Connections to Dynamic Programming

The first variation of induction is simultaneous induction on multiple quantities. That is, we can formulate a statement $p(n,m)$ which depends on two natural numbers independently of one another. The base case is a bit trickier, but paralleling the above remark about recursion, double-induction follows the same pattern as a two-dimensional dynamic programming algorithm. The base cases would consist of all $p(1,m)$ and all $p(n,1)$, and the inductive step to get $p(n,m)$ requires $p(n-1,m)$ and $p(n,m-1)$ (and potentially $p(n-1, m-1)$ or others; it depends on the problem).

Unfortunately, natural instances where double induction is useful (or anywhere close to necessary) are rare. Here is a tricky but elementary example. Call

$\displaystyle C(m,n) = \frac{(2m)!(2n)!}{m!n!(m+n)!}$,

where the exclamation point denotes the factorial function. We will outline a proof that $C(m,n)$ is always an integer for all $m, n \geq 0$. If we look at the base cases, $C(0,n), C(m,0)$ (recalling that $0! = 1$), we get $(2n)!/(n!\,n!)$, and this happens to be in the form of a binomial coefficient (here, the number of ways to choose $n$ objects from a collection of $2n$ objects), and binomial coefficients are known to be integers. Now the inductive step relies on the fact that $C(m,n-1)$ and $C(m+1, n-1)$ are both integers. If this is true then

$\displaystyle C(m,n) = 4C(m,n-1) - C(m+1, n-1)$,

which is obviously again an integer.

If we write these values in a table, we can see how the induction progresses in a “dynamic programming” fashion:

In order to fill the values in the next $n$ column (prove the statement for those values of $n$), we need to fill the entire $n-1$ column (for indeed, we rely on the inductive hypothesis for both the $m+1$ and $m$ row). But since our base case was the entire $n=0$ column, we can fill the entire table. In fact, we have just described a dynamic programming algorithm for computing the value of $C(m,n)$ in $mn$ steps. The correctness of the algorithm is indeed an inductive proof of the theorem.
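Here is one way to write the dynamic program down as a Python sketch (the function name and table layout are my own; the base column and the recurrence are the ones from the proof). Note that to reach column $n$ we need the previous column filled one row further down, so the base column is made long enough to begin with.

```python
from math import comb

def C_table(M, N):
    """Fill a table of C(m, n) = (2m)!(2n)!/(m! n! (m+n)!) for m <= M, n <= N,
    using only integer arithmetic via C(m, n) = 4*C(m, n-1) - C(m+1, n-1)."""
    # Base case: column n = 0, where C(m, 0) = binomial(2m, m) is an integer.
    # Extra rows are filled because column n needs column n-1 one row deeper.
    table = {(m, 0): comb(2 * m, m) for m in range(M + N + 1)}
    for n in range(1, N + 1):
        for m in range(M + N - n + 1):
            table[(m, n)] = 4 * table[(m, n - 1)] - table[(m + 1, n - 1)]
    return table

# Every entry is produced by integer additions and multiplications only,
# which is exactly the content of the inductive proof of integrality.
print(C_table(3, 3)[(3, 3)])   # 20
```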

Perhaps uninterestingly, this is asymptotically slower than the naive algorithm of computing $C(m,n)$ directly from the factorials $(2m)!, (2n)!, (m+n)!$, which takes a number of steps linear in $n$ (assuming $n > m$). In passing, this author wonders whether, when the numbers are really large, the lack of division and multiplication in the dynamic program (multiplying by 4 using bit shifting instead) would overtake the naive algorithm. But $C(m,n)$ is certainly not interesting enough in its own right for anyone to care.

At this point, we have noticed that we sometimes use induction and assume that many smaller instances of the statement are true. Indeed, why not inductively assume that the statement holds for all smaller $n$? This would certainly give the prover more tools to work with. This technique is sometimes called strong induction, in the sense that we assume a stronger inductive hypothesis when we’re trying to prove $p(n+1)$. It may not be entirely obvious (especially to one well versed in the minutiae of formal logic) that this is actually equivalent to normal induction, but it is. In fact, the concept of “strong” induction is entirely pedagogical in nature. Most working mathematicians will not mention the difference in their proofs.

The last variant we’ll mention is transfinite induction. The idea is that if you have any set $X$ which is well-ordered (essentially this means: every nonempty subset of $X$ has a smallest element, which is what let us prove that induction works earlier in the post), then we can perform induction on its elements. In this way, we can “parameterize” statements by elements of an arbitrary well-ordered set, so that instead of $p(n)$ being a function from natural numbers to mathematical statements, it’s a function from $X$ to mathematical statements. One somewhat common example of when $X$ is something besides the natural numbers is when we use the so-called cardinal numbers. We have already seen two distinct infinite cardinal numbers in this series: the cardinality of the integers and the cardinality of the real numbers (indeed, “cardinal number” just means a number which is the cardinality of a set). It turns out that there are many more kinds of cardinal numbers, and you can do induction on them, but this rarely shows up outside of mathematical logic.

And of course, we should close this post on an example of when induction goes wrong. For this we point the reader to our proof gallery, and the false proof that all horses are the same color. It’s quite an amusing joke, and hopefully it will stimulate the reader’s mind to recognize the pitfalls that can occur in a proof by induction.

So those are the basic four proof techniques! Fortunately for the reader, pretty much all proofs presented on this blog follow one of these four techniques. I imagine many of my readers skip over the proofs entirely (if only I could put proofs in animated gifs, with claims illustrated by grumpy cats!). So hopefully, if you have been intimidated or confused by the structure of the proofs on this blog, this will aid you in your future mathematical endeavors.  Butchering an old phrase for the sake of a bad pun, the eating of the pudding is in the proof.

Until next time!

# Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

## Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove that something is not possible to do. Perhaps it’s part of human nature to believe that nothing is truly impossible. But perhaps it’s more surprising that the main line of attack to prove something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder whether this will always be the case for any party. And so the problem is born: prove or disprove that at every party of $n$ people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are $n$ people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is $n-1$ (if you’re friends with everybody). So there are $n$ distinct numbers, and $n$ people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

## A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication $p \to q$:

| $p$ | $q$ | $p \to q$ |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

We notice that an implication can only be false if the hypothesis $p$ is true and the consequence $q$ is false. This is the motivation for a proof by contradiction: if we show this case can’t happen, then there’s no other option: the statement $p \to q$ must be true. In other words, if supposing “p and not q” is true implies something which we know to be false, then by the very same truth table argument it must be that either “q” is true or “p” is false. In any of these cases “p implies q” is true.

But all of this truth table juggling really takes us away from the heart of the method. Let’s do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if $x$ is a number such that $x^2 = 2$, then it cannot be true that $x = p/q$ where $p,q$ are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that $x = p/q$ is a ratio of integers. Then we can square both sides to get $2 = x^2 = p^2 / q^2$, and rearrange to arrive at $2q^2 = p^2$. Now comes the tricky part: if a number divides $p$, then its square divides $p^2$; in particular, the largest power of 2 dividing any square number is even (and $2^0$ counts as an even power). Now in the equation $2q^2 = p^2$ the right-hand side is a square, so the largest power of 2 dividing it is even; the left-hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of 2). But the two sides are equal! They can’t possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that $x$ is rational.
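For readers who want to see the parity argument in action, here is a small illustrative sketch (the function name is my own): the largest power of 2 dividing $p^2$ always has an even exponent, while the one dividing $2q^2$ always has an odd exponent.

```python
def exponent_of_two(n):
    """Return the exponent of the largest power of 2 dividing n > 0."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Squares have an even exponent; twice a square has an odd exponent,
# so p^2 = 2*q^2 is impossible for positive integers p, q.
assert all(exponent_of_two(p * p) % 2 == 0 for p in range(1, 200))
assert all(exponent_of_two(2 * q * q) % 2 == 1 for q in range(1, 200))
```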

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.

Oftentimes the jump from “suppose to the contrary” to the contradiction requires a relatively small piece of insight, but at other times it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our “having no friends” contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it’s used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like “1 = 0,” oftentimes the “contradiction” at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: $p_1 , \dots, p_n$. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let $q = p_1 \dots p_n + 1$. Indeed, as the reader will prove in the exercises below, every number has a prime divisor, but it is not hard to see that no $p_i$ divides $q$. This is because no matter what some number $x$ is, no number except 1 can divide both $x$ and $x-1$ (one can prove this fact by contradiction if it is not obvious), and we already know that all the $p_i$ divide $q-1$ . So $q$ must have some prime divisor which was not in the list we started with. This contradicts that we had a complete list of primes to begin with. And so there must be infinitely many primes.
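The construction in the proof is easy to play with in code. Here is a sketch (the helper name is my own): given a purported finite list of all primes, the number $q$ it produces is not divisible by any prime on the list.

```python
from math import prod

def euclid_number(primes):
    """Multiply the given primes together and add 1, as in the proof."""
    return prod(primes) + 1

primes = [2, 3, 5, 7, 11, 13]
q = euclid_number(primes)                 # 30031, which factors as 59 * 509
assert all(q % p != 0 for p in primes)
# q itself need not be prime, but any prime divisor of q is missing from the list.
```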

Here are some exercises to practice the proof by contradiction:

1. Prove that the base 2 logarithm of 3 is irrational.
2. Prove, more generally, that $\log_a(b)$ is irrational if there is any prime $p$ dividing $a$ but not $b$, or vice versa.
3. Prove the fundamental theorem of arithmetic, that every natural number $n \geq 2$ is a product of primes (hint: inspect the smallest failing example).

## A Few More Words on Functions and Sets

Last time we defined what it means for a function $f: X \to Y$ on sets to be injective: different things in $X$ get mapped to different things in $Y$. Indeed, there is another interesting notion called surjectivity, which says that $f$ “hits” everything in $Y$ by something in $X$.

Definition: A function $f: X \to Y$ is surjective if for every element $y \in Y$ there is an $x \in X$ for which $f(x) = y$. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we can use surjections to prove something nice about the sets involved. If $f:X \to Y$ is a surjection, then $X$ has at least as many elements as $Y$. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if $f: X \to Y$ is injective, then there are at least as many elements of $Y$ as there are of $X$. If a single function is both injective and surjective, combining the two facts shows that $X$ and $Y$ have exactly the same size.

Definition: A function $f: X \to Y$ which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection $X \to Y$, then $X$ and $Y$ have the same number of elements. The converse is also true, if two finite sets have the same size one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven’t covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!

Definition: Two arbitrary sets $X,Y$ are said to have the same cardinality if there exists a bijection $f : X \to Y$. If there is a bijection $f: \mathbb{N} \to X$ then $X$ is said to have countably infinite cardinality, or simply countably infinite. If no such bijection exists (and $X$ is not a finite set), then $X$ is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the regular natural numbers. If $X$ is the set of even natural numbers, just construct a function $\mathbb{N} \to X$ by sending $x \mapsto 2x$. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the “maps to” arrow $\mapsto$. The reader can think of this as the mathematician’s version of a lambda expression. So the above map would be written in Python as lambda x: 2*x.
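As a tiny illustration (the names are my own), the map and an inverse for it can be written side by side; having a two-sided inverse is exactly what makes $x \mapsto 2x$ a bijection onto the even naturals.

```python
to_even = lambda x: 2 * x       # natural numbers -> even naturals
from_even = lambda y: y // 2    # even naturals -> natural numbers

# The maps undo each other on every input we try, witnessing the bijection.
assert all(from_even(to_even(x)) == x for x in range(1000))
assert all(to_even(from_even(y)) == y for y in range(0, 2000, 2))
```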

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as natural numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by $\mathbb{Q}$, and typically this proof is done by first finding a bijection from $\mathbb{N} \to \mathbb{Z}$ and then from $\mathbb{Z} \to \mathbb{Q}$. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will be satisfied for bijections.

## Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem ‘non-trivial’ might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don’t use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, ‘trivial’ is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we’re learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called “highly non-trivial” if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we would first have to say what the real numbers are. Instead of doing that in full generality, we won’t work with the entire set of real numbers, but with a “small” subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set $[0,1]$ is the set of all infinite sequences of zeroes and ones, interpreted as the set of all infinite binary expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence to either be an infinite tuple (falling back on our definition of a tuple as a set), or we can define it to be a function $f : \mathbb{N} \to \left \{ 0, 1 \right \}$. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence $(b_i) = (b_1, b_2, \dots)$ is a function $b : \mathbb{N} \to \left \{ 0, 1 \right \}$ where we denote by $b_i$ the value $b(i)$.

So now we can state the theorem.

Theorem: The set $[0,1]$ of infinite binary sequences is uncountably infinite. That is, there is no bijection $\mathbb{N} \to [0,1]$.

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that $f : \mathbb{N} \to [0,1]$ is a bijection. Then we can list the values of $f$ in a table. Since we want to use $b_i$ for all of the values of $f$, we will call

$\displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots$

This gives us the following infinite table:

$\displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}$

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking $c_i = b_{i,i}$ for all $i$. That is, we are using all of the diagonal elements of the table above. Now take each $c_i$ and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply $b \mapsto 1-b$ to each entry). The important fact about this new sequence is that it differs from every entry in this table. By the way we constructed it, no matter which $n$ one chooses, this sequence differs from the table entry $f(n)$ at digit $n$ (and perhaps elsewhere). Because of this, it can’t occur as an entry in the table. So we just proved our function $f$ isn’t surjective, contradicting our original hypothesis, and proving the theorem.
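Viewing each infinite binary sequence as a function from indices to bits, as in the definition above, the diagonal construction is a one-liner. The enumeration $f$ below is just an arbitrary stand-in (my own choice) for the hypothetical bijection:

```python
# An arbitrary stand-in for the hypothetical bijection: f(n) is the n-th
# sequence, itself a function from positions i = 1, 2, ... to bits.
def f(n):
    return lambda i: (n >> (i - 1)) & 1

def diagonal_flip(f):
    """Build a sequence differing from f(n) at position n, for every n."""
    return lambda n: 1 - f(n)(n)

c = diagonal_flip(f)
# For any n we like, c disagrees with the n-th listed sequence at digit n,
# so c cannot appear anywhere in the list.
assert all(c(n) != f(n)(n) for n in range(1, 100))
```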

The discovery of this fact was an important step forward in the history of mathematics. The particular technique though, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It’s quite a bit more specialized of a proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lend themselves nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there is an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the final of the four basic methods of proof: proof by induction.

Until then!

# Why there is no Hitchhiker’s Guide to Mathematics for Programmers

For those who aren’t regular readers: as a followup to this post, there are four posts detailing the basic four methods of proof, with intentions to detail some more advanced proof techniques in the future. You can find them on this blog’s primers page.

## Do you really want to get better at mathematics?

Remember when you first learned how to program? I do. I spent two years experimenting with Java programs on my own in high school. Those two years collectively contain the worst and most embarrassing code I have ever written. My programs absolutely reeked of programming no-nos. Hundred-line functions and even thousand-line classes, magic numbers, unreachable blocks of code, ridiculous code comments, a complete disregard for sensible object orientation, negligence of nearly all logic, and type-coercion that would make your skin crawl. I committed every naive mistake in the book, and for all my obvious shortcomings I considered myself a hot-shot programmer! At least I was learning a lot, and I was a hot-shot programmer in a crowd of high-school students interested in game programming.

Even after my first exposure and my commitment to get a programming degree in college, it was another year before I knew what a stack frame or a register was, two more before I was anywhere near competent with a terminal, three more before I fully appreciated functional programming, and to this day I still have an irrational fear of networking and systems programming (the first time I manually edited the call stack I couldn’t stop shivering with apprehension and disgust at what I was doing).

I just made this function call return to a *different* place than where it was called from.

In a class on C++ programming I was programming a Checkers game, and my task at the moment was to generate a list of all possible jump-moves that could be made on a given board. This naturally involved a depth-first search and a couple of recursive function calls, and once I had something I was pleased with, I compiled it and ran it on my first non-trivial example. Lo and behold (even having followed test-driven development!), I was hit hard in the face by a segmentation fault. It took hundreds of test cases and more than twenty hours of confusion before I found the error: I was passing a reference when I should have been passing a pointer. In particular (and this is the aggravating part, as most programmers know), the fix required the change of about 4 characters. Twenty hours of work for four characters! Once I begrudgingly verified it worked (of course it worked, it was so obvious in hindsight), I promptly took the rest of the day off to play Starcraft.

Of course, as every code-savvy reader will agree, all of this drama is part of the process of becoming a strong programmer. One must study the topics incrementally, make plentiful mistakes and learn from them, and spend uncountably many hours in a state of stuporous befuddlement before one can be considered an experienced coder. This gives rise to all sorts of programmer culture, unix jokes, and reverence for the masters of C that make the programming community so lovely to be a part of. It’s like a secret club where you know all the handshakes. And should you forget one, a crafty use of awk and sed will suffice.

“Semicolons of Fury” was the name of my programming team in the ACM collegiate programming contest. We placed Cal Poly third in the Southern California Regionals, and in my opinion our success was due in large part to the dynamics of our team. I (center, in blue) have since gotten a more stylish haircut.

Now imagine someone comes along and says,

“I’m really interested in learning to code, but I don’t plan to write any programs and I absolutely abhor tracing program execution. I just want to use applications that others have written, like Chrome and iTunes.”

You would laugh at them! And the first thing that would pass through your mind is either, “This person would give up programming after the first twenty minutes,” or “I would be doing the world a favor by preventing this person from ever writing a program. This person belongs in some other profession.” This lies in stark opposition to the common chorus that everyone should learn programming. After all, it’s a constructive way to think about problem solving and a highly employable skill. In today’s increasingly technological world, it literally pays to know your computer as more than a web browser. (Ironically, I’m writing this on my Chromebook, but in my defense it has a terminal with ssh. Perhaps more ironically, all of my real work is done with paper and pencil.)

Unfortunately this sentiment is mirrored among most programmers who claim to be interested in mathematics. Mathematics is fascinating and useful and doing it makes you smarter and better at problem solving. But a lot of programmers think they want to do mathematics, and they either don’t know what “doing mathematics” means, or they don’t really mean they want to do mathematics. The appropriate translation of the above quote for mathematics is:

“Mathematics is useful and I want to be better at it, but I won’t write any original proofs and I absolutely abhor reading other people’s proofs. I just want to use the theorems others have proved, like Fermat’s Last Theorem and the undecidability of the Halting Problem.”

Of course no non-mathematician is really going to understand the current proof of Fermat’s Last Theorem, just as no fledgling programmer is going to attempt to write a (quality) web browser. The point is that the sentiment is in the wrong place. Mathematics is cousin to programming in terms of the learning curve, obscure culture, and the amount of time one spends confused. And mathematics is as much about writing proofs as software development is about writing programs (it’s not everything, but without it you can’t do anything). Honestly, it sounds ridiculously obvious to say it directly like this, but the fact remains that people feel like they can understand the content of mathematics without being able to write or read proofs.

I want to devote the rest of this post to exploring some of the reasons why this misconception exists. My main argument is that the reasons have more to do with the culture of mathematics than the actual difficulty of the subject. Unfortunately, as of the time of this writing I don’t have a proposed “solution”; all I can claim is that part of the problem is that programmers can have mistaken views of what mathematics involves. I don’t propose a way to make mathematics easier for programmers, although I do try to make the content on my blog as clear as possible (within reason). I honestly do believe that the struggle and confusion builds mathematical character, just as the arduous bug-hunt builds programming character. If you want to be good at mathematics, there is no other way.

All I want to do with this article is to detail why mathematics can be so hard for beginners, to explain a few of the secret handshakes, and hopefully to bring an outsider a step closer to becoming an insider. So read on, and welcome to the community.

## Travelling far and wide

Perhaps one of the most prominent objections to devoting a lot of time to mathematics is that it can be years before you ever apply mathematics to writing programs. On one hand, this is an extremely valid concern. If you love writing programs and designing software, then mathematics is nothing more than a tool to help you write better programs.

But on the other hand, the very nature of mathematics is what makes it so applicable, and the only way to experience nature is to ditch the city entirely. Indeed, I provide an extended example of this in my journalesque post on introducing graph theory to high school students: the point of the whole exercise is to filter out the worldly details and distill the problem into a pristine mathematical form. Only then can we see its beauty and wide applicability.

Here is a more concrete example. Suppose you were trying to encrypt the contents of a message so that nobody could read it even if they intercepted the message in transit. Your first ideas would doubtlessly be the same as those of our civilization’s past: substitution ciphers, Vigenere ciphers, the Enigma machine, etc. Regardless of what method you come up with, your first thought would most certainly not be, “prime numbers so big they’ll make your pants fall down.” Of course, the majority of encryption methods today rely on very deep facts (or rather, conjectures) about prime numbers and other mathematical objects (“group presentations so complicated they’ll orient your Mobius band,” anyone?). But it took hundreds of years of number theory to get there, and countless deviations into other fields and dead-ends.

Of course there are other examples much closer to contemporary fashionable programming techniques. One such example is boosting. While we have yet to investigate boosting on this blog, the basic idea is that one can combine a bunch of classifiers, each of which performs just barely better than random guessing, and collectively they will be arbitrarily close to perfect. In a field dominated by practical applications, this result is purely the product of mathematical analysis.

And of course boosting in turn relies on the mathematics of probability theory, which in turn relies on set theory and measure theory, which in turn relies on real analysis, and so on. One could get lost for a lifetime in this mathematical landscape! And indeed, the best way to get a good view of it all is to start at the bottom. To learn mathematics from scratch. The working programmer simply doesn’t have time for that.

## What is it really, that people have such a hard time learning?

Most of the complaints about mathematics come understandably from notation and abstraction. And while I’ll have more to say on that below, I’m fairly certain that the main obstacle is a lack of familiarity with the basic methods of proof.

While methods of proof are semantical by nature, in practice they form a scaffolding for all of mathematics, and as such one could better characterize them as syntactical. I’m talking, of course, about the four basics: direct implication, proof by contradiction, contrapositive, and induction. These are the loops, if statements, pointers, and structs of rigorous argument, and there is simply no way to understand the mathematics without a native fluency in this language.

The “Math Major Sloth” is fluent. Why aren’t you?

So much of mathematics is built up by chaining together a multitude of absolutely trivial statements which are amenable to proof by the basic four. I’m not kidding when I say they are absolutely trivial. A professor of mine once said,

If it’s not completely trivial, then it’s probably not true.

I can’t agree more with this statement. Of course, there are many sophisticated proofs in mathematics, but an overwhelming majority of (very important) facts fall in the trivial category. That being said, trivial can sometimes be relative to one’s familiarity with a subject, but that doesn’t make the sentiment any less right. Drawing up a shopping list is trivial once you’re comfortable with a pencil and paper and you know how to write (and you know what the words mean). There are certainly works of writing that require a lot more than what it takes to write a shopping list. Likewise, when we say something is trivial in mathematics, it’s because there’s no content to the proof outside of using definitions and a typical application of the basic four methods of proof. This is the “holding a pencil” part of writing a shopping list.

And as you probably know, there are many many more methods of proof than just the basic four. Proof by construction, by exhaustion, case analysis, and even picture proofs have a place in all fields of mathematics. More relevantly for programmers, there are algorithm termination proofs, probabilistic proofs, loop invariants to design and monitor, and the ubiquitous NP-hardness proofs (I’m talking about you, Travelling Salesman Problem!). There are many books dedicated to showcasing such techniques, and rightly so. Clever proofs are what mathematicians strive for above all else, and once a clever proof is discovered, the immediate first step is to try to turn it into a general method for proving other facts. Fully fleshing out such a process (over many years, showcasing many applications and extensions) is what makes one a world-class mathematician.

An entire book dedicated to the probabilistic method of proof, invented by Paul Erdős and sown into the soil of mathematics over the course of his lifetime.

Another difficulty faced by programmers new to mathematics is the inability to check your proof absolutely. With a program, you can always write test cases and run them to ensure they all pass. If your tests are solid and plentiful, the computer will catch your mistakes and you can go fix them.

There is no corresponding “proof checker” for mathematics. There is no compiler to tell you that it’s nonsensical to construct the set of all sets, or that it’s a type error to quotient a set by something that’s not an equivalence relation. The only way to get feedback is to seek out other people who do mathematics and ask their opinion. Done solo, mathematics involves a lot of backtracking, revising mistaken assumptions, and stretching an idea to its breaking point to see that it didn’t even make sense to begin with. This is “bug hunting” in mathematics, and it can often completely destroy a proof and make one start over from scratch. It feels like writing a few hundred lines of code only to have the final program run “rm -rf *” on the directory containing it. It can be really. really. depressing.

It is an interesting pedagogical question in my mind whether there is a way to introduce proofs and the language of mature mathematics in a way that stays within a stone’s throw of computer programs. It seems like a worthwhile effort, but I can’t think of anyone who has sought to replace a classical mathematics education entirely with one based on computation.

## Mathematical syntax

Another major reason programmers are unwilling to give mathematics an honest effort is the culture of mathematical syntax: it’s ambiguous, and there’s usually nobody around to explain it to you. Let me start with an example of why this is not a problem in programming. Let’s say we’re reading a Python program and we see an expression like this:

foo[2]

The nature of (most) programming languages dictates that there are a small number of ways to interpret what’s going on in here:

1. foo could be a list/tuple, and we’re accessing the third element in it.
2. foo could be a dictionary, and we’re looking up the value associated with the key 2.
3. foo could be a string, and we’re extracting the third character.
4. foo could be a custom-defined object, whose __getitem__ method is defined somewhere else and we can look there to see exactly what it does.

There are probably other times this notation can occur (although I’d be surprised if number 4 didn’t by default capture all possible uses), but the point is that any programmer reading this program knows enough to intuit that square brackets mean “accessing an item inside foo with identifier 2.” Part of the reason that programs can be very easy to read is precisely because someone had to write a parser for a programming language, and so they had to literally enumerate all possible uses of any expression form.
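For instance, case 4 might look like the following throwaway sketch (the class is hypothetical, not from any real library); whatever __getitem__ does is what foo[2] will mean:

```python
class Foo:
    def __getitem__(self, key):
        # Whatever we put here is what the square brackets will mean.
        return "you asked for item {}".format(key)

foo = Foo()
print(foo[2])   # prints: you asked for item 2
```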

The other extreme is the syntax of mathematics. The daunting fact is that there is no bound to what mathematical notation can represent, and much of mathematical notation is inherently ad hoc. For instance, if you’re reading a math paper and you come across an expression that looks like this

$\delta_i^j$

The possibilities of what this could represent are literally endless. Just to give the unmathematical reader a taste: $\delta_i$ could be an entry of a sequence of numbers of which we’re taking arithmetic $j^\textup{th}$ powers. The use of the letter delta could signify a slightly nonstandard way to write the Kronecker delta function, for which $\delta_i^j$ is one precisely when $i=j$ and zero otherwise. The superscript $j$ could represent dimension. Indeed, I’m currently writing an article in which I use $\delta^k_n$ to represent $k$-dimensional simplex numbers, specifically because I’m relating the numbers to geometric objects called simplices, and the letter for those is  a capital $\Delta$. The fact is that using notation in a slightly non-standard way does not invalidate a proof in the way that it can easily invalidate a program’s correctness.

What’s worse is that once mathematicians get comfortable with a particular notation, they will often “naturally extend” or even silently drop things like subscripts and assume their reader understands and agrees with the convenience! For example, here is a common difficulty that beginners face in reading math that involves use of the summation operator. Say that I have a finite set of numbers whose sum I’m interested in. The most rigorous way to express this is not far off from programming:

Let $S = \left \{ x_1, \dots, x_n \right \}$ be a finite set of things. Then their sum is finite:

$\displaystyle \sum_{i=1}^n x_i$

The programmer would say “great!” Assuming I know what “+” means for these things, I can start by adding $x_1 + x_2$, add the result to $x_3$, and keep going until I have the whole sum. This is really just a left fold of the plus operator over the list $S$.
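In Python the “left fold of plus” reading is literal; here is a quick sketch (the list is a stand-in for $x_1, \dots, x_n$):

```python
from functools import reduce
import operator

S = [1.5, 2, 7, 0.25]             # stand-in for x_1, ..., x_n
total = reduce(operator.add, S)   # ((x_1 + x_2) + x_3) + x_4
assert total == sum(S)            # Python's built-in sum is the same left fold
```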

But for mathematicians, the notation is far more flexible. For instance, I could say

Let $S$ be finite. Then $\sum_{x \in S} x$ is finite.

Things are now more vague. We need to remember that the $\in$ symbol means “in.” We have to realize that the strict syntax of having an iteration variable $i$ is no longer in effect. Moreover, the order in which the things are summed (which for a left fold is strictly prescribed) is arbitrary. If you asked any mathematician, they’d say “well of course it’s arbitrary, in an abelian group addition is commutative so the order doesn’t matter.” But realize, this is yet another fact that the reader must be aware of to be comfortable with the expression.

But it still gets worse.

In the case of the capital Sigma, there is nothing syntactically stopping a mathematician from writing

$\displaystyle \sum_{\sigma \in \Sigma} f_{\Sigma}(\sigma)$

Though experienced readers may chuckle, they will have no trouble understanding what is meant here. That is, syntactically this expression is unambiguous enough to avoid an outcry: $\Sigma$ just happens to also be a set, and saying $f_{\Sigma}$ means that the function $f$ is constructed in a way that depends on the choice of the set $\Sigma$. This often shows up in computer science literature, as $\Sigma$ is a standard letter to denote an alphabet (such as the binary alphabet $\left \{ 0,1 \right \}$).

One can even take it a step further and leave out the set we’re iterating over, as in

$\displaystyle \sum_{\sigma} f_{\Sigma}(\sigma)$

since it’s understood that the lowercase letter ($\sigma$) is usually an element of the set denoted by the corresponding uppercase letter ($\Sigma$). If you don’t know greek and haven’t seen that coincidence enough times to recognize it, you would quickly get lost. But programmers must realize: this is just the mathematician’s secret handshake. A mathematician would be just as bewildered and confused upon seeing some of the pointer arithmetic hacks C programmers invent, or the always awkward infinite for loop.

for (;;) {
;
}

And once the paper you’re reading is over, and you start reading a new paper, chances are their conventions and notation will be ever-so-slightly different, and you have to keep straight what means what. It’s as if the syntax of a programming language changed depending on who was writing the program!

Perhaps understandably, the frustration that most mathematicians feel when dealing with varying syntax across different papers and books is collectively called “technicalities.” And the more advanced the mathematics becomes, the ability to fluidly transition between high-level intuition and technical details is all but assumed.

The upshot of this whole conversation is that the reader of a mathematical proof must hold in mind a vastly larger body of absorbed (and often frivolous) knowledge than the reader of a computer program.

At this point you might see all of this as my complaining, but in truth I’m saying this notational flexibility and ambiguity is a benefit. Once you get used to doing mathematics, you realize that technical syntax can make something which is essentially simple seem much more difficult than it is. In other words, we absolutely must have a way to make things completely rigorous, but in developing and presenting proofs the most important part is to make the audience understand the big picture, see intuition behind the symbols, and believe the proofs. For better or worse, mathematical syntax is just a means to that end, and the more abstract the mathematics becomes, the more flexibility mathematicians need to keep themselves afloat in a tumultuous sea of notation.

## You’re on your own, unless you’re around mathematicians

That brings me to my last point: reading mathematics is much more difficult than conversing about mathematics in person. The reason for this is once again cultural.

Imagine you’re reading someone else’s program, and they’ve defined a number of functions like this (pardon the single-letter variable names, I just don’t like “foo” and “bar”).

def splice(L):
...

def join(*args):
...

def flip(x, y):
...

There are two parts to understanding how these functions work. The first part is that someone (or a code comment) explains to you in a high level what they do to an input. The second part is to weed out the finer details. These “finer details” are usually completely spelled out by the documentation, but it’s still a good practice to experiment with it yourself (there is always the possibility for bugs, of course).

In mathematics there is no unified documentation, just a collective understanding, scattered references, and spoken folklore. You’re lucky if a textbook has a table of notation in the appendix. You are expected to derive the finer details and catch the errors yourself. Even if you are told the end result of a proposition, it is often followed by, “The proof is trivial.” This is the mathematician’s version of piping output to /dev/null, and literally translates to, “You’re expected to be able to write the proof yourself, and if you can’t then you’re not ready to continue.”

Indeed, the opposite problems are familiar to a beginning programmer when they aren’t in a group of active programmers. Why is it that people give up or don’t enjoy programming? Is it because they have a hard time getting honest help from rudely abrupt moderators on help websites like stackoverflow? Is it because often when one wants to learn the basics, they are overloaded with the entirety of the documentation and the overwhelming resources of the internet and all its inhabitants? Is it because compiler errors are nonsensically exact, but very rarely helpful? Is it because when you learn it alone, you are bombarded with contradicting messages about what you should be doing and why (and often for the wrong reasons)?

All of these issues definitely occur, and I see them contribute to my students’ confusion in my introductory Python class all the time. They try to look on the web for information about how to solve a very basic problem, and they come back to me saying they were told it’s more secure to do it this way, or more efficient to do it this way, or that they need to import something called the “heapq module.” When really the goal is not to solve the problem in the best way possible or in the shortest amount of code, but to show them how to use the tools they already know about to construct a program that works. Without a guiding mentor it’s extremely easy to get lost in the jungle of people who think they know what’s best.

As far as I know there is no solution to this problem faced by the solo programming student (or the solo anything student). And so it stands for mathematics: without others doing mathematics with you, it’s very hard to identify your issues and see how to fix them.

## Proofs, Syntax, and Community

For the programmer who is truly interested in improving their mathematical skills, the first line of attack should now be obvious. Become an expert at applying the basic methods of proof. Second, spend as much time as it takes to clear up what mathematical syntax means before you attempt to interpret the semantics. And finally, find others who are interested in seriously learning some mathematics, and work on exercises (perhaps a weekly set) with them. Start with something basic like set theory, and write your own proofs and discuss each others’ proofs. Treat the sessions like code review sessions, and be the compiler to your partner’s program. Test their arguments to the extreme, and question anything that isn’t obvious or trivial. It’s not uncommon for easy questions with simple answers and trivial proofs to create long and drawn out discussions before everyone agrees it’s obvious. Embrace this and use it to improve.

Short of returning to your childhood and spending more time doing recreational mathematics, that is the best advice I can give.

Until next time!

# Learning Programming — Finger-Painting and Killing Zombies

Zmob, my first (and only) original game.

By the end, the breadth and depth of our collective knowledge was far beyond what anyone could expect from any high school course in any subject.

## Education Versus Exploration

I’m a lab TA for an introductory Python programming course this semester, and it’s been…depressing. I remember my early days of programming, when the possibilities seemed endless and adding new features to my programs was exciting and gratifying, and I brimmed with pride at every detail, and I boasted to my friends of the amazing things I did, and I felt powerful. The world was literally at my fingertips. I could give substance to any idea I cared to entertain and any facet of life I wanted to explore. I had developed an insatiable thirst for programming that has lasted to this very day.

My younger self, if programming were more noodley.

The ironic thing is that today I look back on the programs I wrote and cringe with unending embarrassment. My old code was the artistic equivalent of a finger-painting made by a kindergartener. Sure, it might look kind of like a horse, but only because the kid has no idea what he’s doing. The programs I wrote were bug-ridden, hard to read, poorly organized, and a veritable spaghetti-slop of logic.

But I can’t deny how much fun I had, and how much I learned. I will describe all of that in more detail below, but first I’d like to contrast my experience with the students I’m teaching this semester. Their labs are no more than mindlessly shuffling around data, and implementing dry, boring functions like Horner’s method for evaluating polynomials. And (amazingly) their projects are even less interesting. I think the biggest difference is that my students don’t have to actually solve any problems by writing programs. And so their experience is boiled down to following directions and pretending to be a computer in order to follow their own program’s logic.

I’m certainly not saying that following directions and simulating a computer in your head aren’t important things to be a good programmer. What I’m saying is that my students get no gratification from their work. Their results are just as dry as the problem, and the majority of the joy I see among them is when they finish a problem and don’t have to think about it anymore (even if their solution is completely wrong).

The course has other problems with it. For instance, the professor teaches the students C paradigms instead of Python paradigms (I don’t think he ever learned the right way to do things in Python), and he confuses them with talk of stack frames and registers and all sorts of irrelevant architectural details. Remember, these students have never programmed before, and some started the course just barely competent with a computer. I didn’t know what a stack frame was until I had three years of programming under my belt (two of those years were the early, experimental years).

All of this has gotten me thinking pretty regularly about how I might teach my own course, if I might ever have one. This post will roughly be an outline of how my own computer science education began. I’ll distill the most important aspects of it: the things that made me want to keep programming and the things that taught me deep ideas in natural contexts.

## My First Time was with Java

My high school (Campolindo High, in Moraga, CA) was blessed with a computer science course. With my early exposure to computers (at 3 years old, by my parents’ accounts), my love of video games, and my basic grasp of HTML, it seemed inevitable that I belonged in the class. In retrospect, it was perhaps the most beneficial course I ever took, followed closely by Honors/AP English, German, and public policy. Not only did it provide me with the aforementioned thirst for programming, but it planted a mathematical seed in my mind that would flourish years later like a giant bean stalk, which I’m still in the process of climbing today.

Right off the bat the course was different. The teacher’s name was Mr. Maters, and by the end of the first week we ceased to have lectures. Mr. Maters showed us barely enough to get a simple program running with input and output, and then we were left to our own devices.

There were roughly two options for getting credit. The first was to follow an outline of exercises and small projects from a book on GUI programming in Java. Most students followed these exercises for the first two months or so, and I did, at least until I had made a stupid little pizza shop program that let you order pizzas.

The second option was wide open. The students could do whatever they wanted, and Mr. Maters jokingly told us, “At the end of each quarter I’ll judge your worth, and if I deem you deserve an A, you’ll get an A, but if I deem otherwise… you’ll get an F!”

Of course, Mr. Maters was the nicest guy you ever met. He would calmly sit at his computer in the front of the lab, maintaining a help queue of students with questions. He would quietly listen to a student’s question, and then shed some insight into their problem. Mr. Maters would get a better idea of a student’s progress by how frequent and how pointed their questions were, and more often than not students who were waiting in the queue would solve their own problems before getting to the front.

Most of the students in the class chose the “wide open” route, and that meant designing games. I’m not talking about good games, mind you, I’m talking about games made by high schoolers. I made the ugliest Whack-a-Mole you ever saw, a lifeless AI for a Battleship game, and a video poker game that featured Mr. Maters’s face on the back of every card (and replacing the faces of the kings, queens, and jacks). For the last one, I even collaborated with other students who made card games by jointly designing the Maters-themed deck. Collaboration would become a bigger theme the second year I took the course (yes, I took the same course twice), but before we get there, there were some other indispensable features I want to mention.

First, the lab room was set up so that Mr. Maters could remotely control any computer in the room from his desk. The program he used was given the reverent name “Vision,” and the slackers feared its power. Vision allowed Mr. Maters to look at our code while we were asking him questions at the front, and also helped him monitor students’ progress. Second, we were allowed a shared drive on the school’s network so that we could instantly pass files back and forth between lab computers. This had a few direct learning benefits, like sharing code examples, sprites, and sound files we used in our programs. But more importantly it gave a sense of culture to the class. We occasionally had contests where every student in the class submitted a picture of Maters’s face photoshopped into some ridiculous and funny scene (really, MS-Painted into a scene). Recall, these were the early days of internet memes, and naturally we youngsters were at the forefront of it.

Third, we were given “exploration” days every so often, on which we were freed from any obligation to work. We could play games, browse around online, or just sit and talk. More often than not we ended up playing LAN Unreal Tournament, but by the end of my second year I chose to spend those days working on my programs too; my games turned out to be more fun to work on than others were to play.

All of these instilled a sense of relaxation in the course. We weren’t being taught to the midterms or the AP exam (although I did take the AP eventually, and I scored quite well considering I didn’t study at all for it). We weren’t even being told to work on some days. The course was more of a community, and even as we teased Mr. Maters we revered him as a mentor.

Like me, most students who took a first year in the course stuck around for a second year. And that was when the amazing projects started to come.

## Zmob

The second year in the computer science class was all games all the time. Moreover, we started by looking at real-time games, the kinds of side-scrolling platformers we loved to play ourselves (yeah, Super Mario Brothers and Donkey Kong Country). I tried my hand at one, but before long I was lost in figuring out how to make the collisions work. Making the levels and animating the character and making the screen scroll were all challenging but not beyond my reach.

One of my early side-scrollers based on the Starfox series.

But once I got fed up with getting him to jump on blocks, I found a better project: Zmob (short for Zombie Mob). It was inspired by collaboration. I helped a friend nail down how to draw two circles in a special way: one was large and fixed, and the other was smaller, always touching the first, and the line between their two centers went through the position of the mouse. In other words, the smaller circle represented the “orientation” of the pair of circles, and it was always facing toward the mouse. It was a basic bit of trigonometry, but once I figured out how to do it I decided a top-down zombie shooting game would be fun to work on. So I started on it. Here’s the opening screen of an early version (typos and errors are completely preserved):

The intro screen to Zmob 1.0

In the game you control the black circle, and the grey circle is supposed to represent the gun. Zombies (blue circles) are spawned regularly at random positions and they travel at varying speeds directly toward your character. You can run around, and if you hold down Shift you can run faster than them for a time. And of course, you shoot them until they die, and the game ends when you die. The number of zombies spawned increases as you go on, and your ammunition is limited (although you can pick up more ammo after you get a certain number of kills), so you will eventually die. The goal is to get a high score.
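That gun circle is the same orientation trick I described above: it sits on the edge of the player circle, along the line from the player’s center to the mouse. Here’s a minimal sketch of the idea in Java (the class, method, and parameter names are mine for illustration, not the original code):

```java
/** Sketch of the orientation trick (hypothetical names, not the original code). */
final class GunMath {
    /**
     * Center of the small "gun" circle: it lies on the edge of the player circle,
     * along the line from the player's center toward the mouse.
     */
    static double[] gunCenter(double playerX, double playerY, double playerRadius,
                              double mouseX, double mouseY) {
        double angle = Math.atan2(mouseY - playerY, mouseX - playerX);
        return new double[] {
            playerX + playerRadius * Math.cos(angle),  // gun center x
            playerY + playerRadius * Math.sin(angle)   // gun center y
        };
    }
}
```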

The game plays more like reverse-shepherding than a shooter, and while it might be hard, I don’t think anyone but me would play it for more than ten minutes at a time.

The important part was that I had a lot of ideas, and I needed to figure out how to make those ideas a reality. I wanted the zombies to not be able to overlap each other. I wanted a gun that poisoned zombies and when a poisoned zombie touched a healthy zombie, the healthy one became poisoned. I wanted all sorts of things to happen, and the solutions naturally became language features of Java that I ended up using.

The poison gun. White zombies are poisoned, while blue zombies are healthy.

For instance, at first I just represented the zombies as circles. There was no information that made any two zombies different, so I could store them as a list of x,y coordinates. Once I wanted to give them a health bar, and give them variable speeds, and poison them, I was forced to design a zombie class, so that I could give each zombie an internal state (poisoned or not, fast or slow, etc.). I followed this up by making a player class, an item class, and a bullet class.
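A toy version of what that refactoring looks like (the field names are my own guesses, not the original code): once a zombie carries its own state, a bare list of coordinates no longer cuts it.

```java
// Each zombie now carries its own state instead of being a bare (x, y) pair.
class Zombie {
    double x, y;        // position on screen
    double speed;       // varies from zombie to zombie
    int health;         // drawn as a health bar above the zombie
    boolean poisoned;   // poisoned zombies infect healthy ones on contact

    Zombie(double x, double y, double speed, int health) {
        this.x = x;
        this.y = y;
        this.speed = speed;
        this.health = health;
        this.poisoned = false;
    }
}
```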

And the bullets turned out to be my favorite part. I wanted every bullet on the screen to be updated just by me calling an “update()” function. It turns out this was the equivalent of making a bullet into an interface which each specialized bullet class implemented. Already I saw the need for, and the elegance of, object oriented programming, something that was totally lost on me when I made those stupid “Shape” interfaces they have you do in basic tutorials. I was solving a problem I actually needed to solve, and an understanding of inheritance was forever branded into my mind.
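In sketch form (again, hypothetical names rather than the original code), the point is that the game loop never cares which kind of bullet it’s holding:

```java
// Every bullet knows how to advance itself by one frame.
interface Bullet {
    void update();
}

class PistolBullet implements Bullet {
    double x, y;    // position on screen
    double dx, dy;  // velocity per frame

    public void update() {
        x += dx;
        y += dy;
    }
}

// Somewhere in the game loop, one call updates every bullet, whatever its type:
//   for (Bullet b : bullets) { b.update(); }
```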

And the bullet logic was a joy in itself. The first three guns I made were boring: a pistol, a machine gun, and a shotgun. Each sprayed little black circles in the expected way. I wanted to break out and make a cool gun. I called my first idea the wave beam.

The wave beam: sinusoidal bullets.

The idea behind the wave beam is that the bullets travel along a sinusoidal curve in the direction they were shot. This left me with a huge difficulty, though: how does one rotate a sine wave by an arbitrary angle? I had x and y coordinates for the bullets, but all of the convoluted formulas I randomly tried using sines and cosines and tangents ended up failing miserably. The best I could get was a sort of awkwardly-stretched sideways sine.

After about a week of trying with no success, I finally went to my statistics teacher at the time (whom I still keep in touch with) and I asked him if he knew of any sort of witchcraft mathemagic that would do what I wanted.

After a moment’s thought, he pulled out a textbook and showed me a page on rotation matrices. To my seventeen-year-old eyes, the formula was as mysterious as an ancient rune:
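For the record, the formula on that page would have been the standard two-dimensional rotation matrix, which sends a point $(x, y)$ to its rotation by an angle $\theta$:

$\displaystyle \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \cos \theta - y \sin \theta \\ x \sin \theta + y \cos \theta \end{pmatrix}$

Applied to every point $(t, \sin t)$ of a sine wave, this rotates the whole curve by $\theta$.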

My particular code ended up looking like:

```java
// angle is the direction the bullet was fired; frame parameterizes
// how far along the wave the bullet has traveled
x += frame*Math.cos(angle) + Math.sin(frame)*Math.sin(angle);
y += frame*Math.sin(angle) + Math.sin(frame)*Math.cos(angle);
```

When I ran the code, it worked so perfectly I shouted out loud. After my week of struggle and botched attempts to figure this out, this solution was elegant and beautiful and miraculous. After that, I turned to calculus to make jumping look more natural in my Fox side-scroller. I experimented with other matrix operations like shearing and stretching. By the end of that year, I had a better understanding of a “change of basis” (though I didn’t know the words for it) than most of the students I took linear algebra with in college! It was just a different coordinate system for space; there were rotated coordinates, fat and stretchy coordinates, along with skinny and backward coordinates. I tried all sorts of things in search of fun gameplay.

And it wasn’t just mathematics that I learned ahead of my time. By the end of the year I had “finished” the game. I designed a chain gun that set off chain reactions when it hit zombies, and I gave the game a face lift with new graphics for the player and zombies. I even designed a smart tile-layout system to measure the size of the screen and display the background appropriately. I had gotten tired of measuring the sizes by hand, so I wrote a program to measure them for me. That sounds trivial, but it’s really the heart of problem solving in computer science.

The whole class “beta tested” it, i.e., spent a few days of class just playing it to have fun and find bugs. And they found lots of bugs. Overt ones (divide by zero errors making bullets go crazy) and subtler ones (if you time everything right, the zombies can’t get close enough to hurt you, and just keep bumping into each other).

One pretty important issue that came up was speed. Once I added images, I decided to use a Java library to rotate the images on every frame so they were pointing in the right direction. Now some people say Java is slow, but this part was really slow, especially when it got up to a hundred or more zombies. My solution, it just so happened, was a paradigm in computer science called caching. You pre-compute all of the rotations you’ll ever need ahead of time, and then store them somewhere. In fact, what I really did was called lazy-loading, a slightly more sophisticated technique that involved only storing the computed rotations once they were needed.
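Here’s a sketch of that idea in Java (the class, its names, and the degree-rounding are my own choices for illustration, not the original code): a rotated copy of a sprite is computed the first time a given angle is requested, then reused on every later frame.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

// Lazy-loading cache: a rotated copy of the sprite is computed only the first
// time a given (rounded) angle is requested, then reused forever after.
class RotationCache {
    private final BufferedImage original;
    private final Map<Integer, BufferedImage> rotations = new HashMap<>();

    RotationCache(BufferedImage original) {
        this.original = original;
    }

    BufferedImage get(double angleRadians) {
        // Round to whole degrees so there are at most 360 cached images.
        int degrees = (((int) Math.round(Math.toDegrees(angleRadians))) % 360 + 360) % 360;
        return rotations.computeIfAbsent(degrees, d -> rotate(original, d));
    }

    // Rotate about the image center; corners may clip, which is fine for a sketch.
    private static BufferedImage rotate(BufferedImage img, int degrees) {
        BufferedImage out = new BufferedImage(img.getWidth(), img.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.rotate(Math.toRadians(degrees), img.getWidth() / 2.0, img.getHeight() / 2.0);
        g.drawImage(img, 0, 0, null);
        g.dispose();
        return out;
    }
}
```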

And I never learned the name of this technique until I got to a third-year college course in dynamic web programming, when we discussed the Hibernate object-relational mapping library for databases! Just like with linear algebra, my personalized problems resulted in me reinventing or rediscovering important concepts far earlier than I would have learned them otherwise. I was giving myself a deep understanding of the concepts and what sorts of problems they could solve. This is distinctly different from the sort of studying that goes on in college: students memorize the name of a concept and what it means, but only the top students get a feel for why it’s important and when to use it.

## An Honest Evaluation in Retrospect

I’ll admit it, I was more dedicated to my work than the average kid. A small portion of the class was only engaged in the silly stuff. Some students didn’t have a goal in mind, or they did but didn’t pursue it with my kind of vigor. We didn’t have access to many good examples outside of our own web browsing and the mediocre books Mr. Maters had on hand. The choice of Java perhaps made for a steep learning curve for some, but I think in the end it did more good than harm.

But on the other hand, those of us who did work well got to explore and absorb the material at our own pace. We got to struggle with problems we actually wanted to solve, leading to new insights. One of my classmates made a chat client and a networked version of Tron, while others made role-playing games, musical applications, encryption algorithms, painting programs, and much more. By the end, the breadth and depth of our collective knowledge was far beyond what anyone could expect from any high school course in any subject. I don’t say that lightly; I spent a lot of time analyzing literature and debating contemporary issues and memorizing German vocabulary and fine-tuning essays and doing biology experiments, but programming was different. It was engaging and artistic and technical and logical and visceral. Moreover, it’s a skill that makes me marketable. I could drop out of graduate school today and find a comfortable job as a software engineer in any major city and probably in any industry that makes software. That class was truly what set me on the path to where I am today.

And worst of all, it absolutely breaks my heart to hear my students say “I didn’t think programming would be like this. I’m just not cut out for it.” The best response I can muster is “Don’t judge programming by this class. It can be fun, truly.”

## What They Need

It’s become woefully clear to me that to keep students interested in programming, they need a few things:

1. Instant gratification

My students spend way too much time confused about their code. They need to have some way to make a change and see its effects immediately. They need teaching tools designed by Bret Victor (skip ahead to 10:30 in the video to see what I mean, but the whole thing is worth watching). And they need to work on visual programs: drawing programs and games and music; programs whose effects they can experience in a non-intellectual way, as opposed to checking whether they’re computing polynomial derivatives correctly.

2. Projects that are relevant, or at least fun.

Just like when I was learning, students need to be able to explore. Let them work on their own projects, and as a teacher have enough knowledge to instruct them when they get stuck (or better yet, brainstorm with them). If everyone having a customized project is out of the question, at least have them work on something relevant. The last two projects in the class I teach were regrettably based on file input/output and matrix sums. Why not let them work on a video game, a search engine (it might sound complicated, but that’s the introductory course over at Udacity), some drawing or animation, a chat client, a Sudoku solver, or even pulling data from Facebook with the Graph API? All of these things can be sufficiently abstracted so that a student at any level can handle them, and each requires certain constructs (basic networking for a chat client, matrix work for a Sudoku solver, file I/O in parts of a search engine, etc.). Despite the wealth of interesting things they could have students do, it seems to me that the teachers just don’t want to come up with interesting projects, so they have their students compute matrix sums over and over and over again.

3. The ability to read others’ code.

This is an integral part of learning. Not only should students be able to write code, but they must be able to read foreign code. They have to be able to look at examples and extract the important parts to use in their own original work. They have to be able to collaborate with their classmates, work on a shared project, and brainstorm new ideas while discussing bugs. They have to be able to criticize code as they might criticize a movie or a restaurant. Students should be very opinionated about software, and they should strive to find the right way to do things, openly lampooning pieces of code that are bloated or disorganized (okay, maybe not too harshly, but they should at least take note).

These three things lie at the heart of computer science and software development, and all of the other crap (the stack frames and lazy-loading and Linux shells) can wait until students are already hooked and eager to learn more. Moreover, it can wait until they have a choice to pursue the area that requires knowledge of the Linux shell or web frameworks or networking security or graphics processing. I learned all of the basics and then some without ever touching a Linux terminal or knowing what a bit was. I don’t doubt my current students could do the same.

And once students get neck deep in code (after spending a year or two writing spaghetti code programs like I did), they can start to see beauty in the elegant ways one can organize things, and the expressive power one has to write useful programs. In some sense programming is like architecture: a good program has beauty in form and function. That’s the time when they should start thinking about systems programming and networking, because that’s the time when they can weigh the new paradigms against their own. They can criticize and discuss and innovate, or at least appreciate how nice it is and apply the ideas to whatever zombie-related project they might be working on at the time.

I contend that every computer science curriculum should have multiple courses that function as blank canvases, and there should be one early in the pipeline (if not as the very first course). I think the reason classes aren’t taught this way is the same reason that mathematics education is what it is: teaching things right is hard work! As sad as it sounds, professors (especially at a research institution) don’t have time to design elaborate projects for their students.

And as long as I’m in the business of teaching, I’ll work to change that. I’ll design courses to be fun, and help my future coworkers who fail to do so. Even in highly structured courses, I’ll give students an open-ended project.

So add that onto my wish list as a high school teacher: next to “Math Soup for the Teenage Soul,” a class called “Finger-paint Programming” (or “Programming on a Canvas”? “How to Kill Zombies”? Other suggested titles are welcome in the comments).