Deconstructing the Common Core Mathematical Standard

Ever since I started to get a real picture of what mathematics is about, I’ve viewed middle school and high school mathematics education with a bit of snobbery. I’ve read Paul Lockhart’s treatise, “A Mathematician’s Lament,” on the dystopia of cultural attitudes toward mathematics in pre-collegiate education. I’ve written responses to articles in the Atlantic authored by economists who, after getting PhDs in quantitative economics, still talk about math as if it’s just a bag of tricks. I’ve even taught guest lectures at high schools and middle schools to prove by example that an engaging, thought-provoking mathematics education is possible for 8th graders and up. I regularly tell my calculus students that half the things we make them do are completely pointless for their lives, while trying very hard to highlight the truly deep concepts and the few tools they might have reason to use.

But it’s generally agreed that something’s wrong with mathematics education in the US. There are a lot of questions to ask about why: are teachers not trained? Are students too focused on sports? Are Americans becoming intellectually lazy?

These all have their place in the debate, but the question I want to focus on in this article is: are policy makers designing good standards? I often hear about fantastic teachers who are stifled by administrators and standardized testing, so the popular answer is no. Moreover, though I haven’t done a principled study of this (again, my snobbishness peeking out), my impression is that even the fantastic math teachers at the most prestigious schools are still forced to hold the real mathematical learning in extracurriculars like math circles, or math symposiums.

And so the next natural step in analyzing the state of mathematics education in the US is to look at the standards in detail from a mathematical perspective. There was a nice article in the Washington Post detailing one ludicrous exam given to first graders in New York (exams for six-year-olds!). Here’s a snapshot:

[Image: a snapshot of the first-grade exam]

This is wrong on so many levels.

This prompted me to actually look at the text of the Common Core State Standards in Mathematics, which is the currently accepted standard for most states. I have heard a lot about the political debate over efficacy and testing and assessment, but almost nothing about the mathematical content of the standards. Do they actually promote critical thinking skills and mathematical problem solving, as they claim? Do they differ enough from Lockhart’s dystopia?

My conclusion is:

While the Common Core indicates movement toward the right attitude on mathematics education, the attitudes aren’t reflected in the content of the standards themselves.

The big distinction I want to make in this article is the (perhaps counterintuitive) notion that mathematical thinking skills are largely unrelated to knowledge of mathematical facts or the ability to perform mechanical computations. The reason we teach mathematics to build critical thinking skills is that mathematics provides examples of problems that call for those skills in a form as simple as possible, boiled down to their true essence. And the text of the Common Core mostly ignores this.

I cannot claim that the writers of the standard don’t understand the mathematics deeply enough to realize this, and it would be too pompous even by my standards to imply that I know better than the thousands of educators that worked on this document. It could be the case that it’s instead the result of bureaucracy and partisanship, and the designers of the Common Core felt they could only make progress in certain areas. But even so, all we are left with is the document itself, and I want to give a principled (but more or less unstructured) inspection of its technical content.

There are some exceptions to my conclusion, and I will detail them as they come up, but the general picture is still this: the intent is better than it used to be but the implementation is still wrong. At the end, I’ll discuss why this is important and what I think should be done instead.

Preliminaries

Now, before we jump into the text of the standards themselves, I want to point out a few resources provided for teachers and discuss them briefly. First, there is a 3-minute intro video by the Common Core about why the standards are important.

Putting aside the animation style, this video sends some disturbing messages. First, that getting a lot of money is what defines and creates success. Not having a deep understanding of the problems you’re facing, not having strong relationships, not helping people, but money. All of this focus on competition and money suggests that the standards are primarily business oriented. I would argue that this is not a useful position for education, but that is a digression. Suffice it to say, the Common Core people should know that the most successful mathematician of all time, Paul Erdős, was homeless and had but a few hundred dollars to his name at any given time. He instead survived (indeed, excelled to legendary status) on his deep understanding of problem solving and his strong relationships with other mathematicians. This is an extreme example but it makes my point clear: collaboration, not competition, breeds success.

The second misconception expressed in the video is that mathematics (indeed, all learning) is like a staircase, and you have to learn the concepts in, say, 6th grade before you can learn anything in 7th. Beyond basic technical proficiency, this is simply not true for the kinds of skills we want to develop in our students. And the standards for basic technical proficiency in mathematics have never really changed: be competent in arithmetic and know what a variable is by the end of grade school; be competent in basic algebraic manipulation by the end of middle school or freshman year of high school. Then the rest of high school is about being exposed to other areas, most often geometry, trigonometry, calculus, and stats, none of which depends on another too heavily (at the level taught in high school). The details of exactly what technical skills are required of high school students are unclear. Why? Because as one goes from elementary school to middle school to high school, the focus on learned skills should gradually change from mechanical abilities to big ideas, and that transition arguably begins some time in middle school.

One of the common core “big ideas” we’ll look at later is that of similarity in geometric figures. But it’s clear that you can go through your entire life’s work without thinking about similar triangles, and it’s hardly relevant to most disciplines. Why then, should we teach it? Well, there’s a very good reason, but it’s at the heart of what the Common Core standards are missing, so we’ll save the explanation for later.

The video does make one good point: standards are not learning, and learning only happens with great teachers. But really, meeting standards isn’t learning either! It’s just evidence of learning, which is extremely hard to measure. But even worse is meeting the wrong standards, and that’s what I’m afraid the Common Core is promoting.

Speaking of which, here are the actual standards themselves for the reader to peruse. The Common Core website has the full standard, also available as a pdf, and the Chicago Public School administration released a lengthy pdf explaining the standards in detail more or less specific to the school system. CPS also provides example lessons on their website, and I’m pleasantly surprised that a handful of the lessons try to flesh out the more insightful ideas I find lacking in the Common Core itself.

So let’s dive right on in.

Common Core: Too Narrow and Too General

The standard starts out by describing a set of general guidelines, which I largely agree with. They are:

  1. Make sense of problems and persevere in solving them.
  2. Reason abstractly and quantitatively.
  3. Construct viable arguments and critique the reasoning of others.
  4. Model with mathematics.
  5. Use appropriate tools strategically.
  6. Attend to precision.
  7. Look for and make use of structure.
  8. Look for and express regularity in repeated reasoning.

But true as they are, the descriptions of these tenets are either far too narrow or far too general. Take, for example, this excerpt from the description of the (arguably MOST mathematical) skill “Look for and express regularity in repeated reasoning,” also known as “reasoning about patterns.”

Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. … Noticing the regularity in the way terms cancel when expanding $ (x-1)(x+1), (x-1)(x^2 + x + 1), $ and $ (x-1)(x^3 + x^2 + x + 1)$ might lead them to the general formula for the sum of a geometric series. As they work to solve a problem, mathematically proficient students maintain an oversight of the process while attending to the details. They continually evaluate the reasonableness of their intermediate results.
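
For reference, the general pattern the excerpt gestures at is worth stating explicitly:

$ \displaystyle (x-1)(x^n + x^{n-1} + \dots + x + 1) = x^{n+1} - 1,$

which, after dividing both sides by $ x - 1$, is exactly the formula for the sum of a finite geometric series: $ 1 + x + \dots + x^n = \frac{x^{n+1} - 1}{x - 1}$ for $ x \neq 1$.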

Yes, a mathematically proficient student will be able to infer a general pattern here. But it’s phrased in a way that makes it seem like expanding products of polynomials is the central goal while the real mathematical skill is just “looking for a shortcut” in calculations. Moreover, the sentence that follows is equally applicable to any profession. Indeed, while working to cook a meal, a proficient chef maintains an oversight of the process while attending to details, and continually evaluates the reasonableness of their intermediate results. Finding shortcuts is not mathematical, nor is it culinary. But reasoning about those shortcuts is, whether or not they’re correct. I have plenty of calculus students who “find shortcuts” that simply aren’t true, but don’t bother to think about them, and are hence exercising no mathematical abilities. It’s a fine distinction that the Common Core seems to ignore at some times and embrace at others.

The best example of this is in “Construct viable arguments and critique the reasoning of others.” Here they wonderfully lay out the kind of logical reasoning students should learn in mathematics. How I wish that all mathematics education was based around this sole principle! The problem is that none of these thoughts are reflected in the standards themselves! Instead, the standards generally simply request that a student “knows” a particular argument, not that they generate original ideas or critique the ideas of others. The generation of new mathematical questions and arguments is, without a doubt, the best way to learn mathematical thinking.

Indeed, let’s take a closer look at the standards themselves.

Inspecting the Standards Themselves

The list of Common Core Standards breaks mathematical abilities down by grade level and by area. I think the Washington Post article gives a very good critique of the lowest grade standards, so let’s focus on high school level. This is where I claim the true big ideas must shine through, if they come up anywhere at all.

The high school standards are broken up into areas by subject:

  • Number and Quantity
  • Algebra
  • Functions
  • Modeling
  • Geometry
  • Statistics and Probability

So far so good, I guess. I’m quite pleased to see statistics recognized here. Let’s start with “Number and Quantity.” This section is broken into “The Real Number System,” “Quantities,” “The Complex Number System,” and “Vector and Matrix Quantities.”

The first one, “The Real Number System,” already shows some huge red flags. There are three standards here, and I quote:

  1. Extend the properties of exponents to rational exponents.
    1. Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define $ 5^{1/3}$ to be the cube root of 5 because we want $ (5^{1/3})^3 = 5^{(1/3) \cdot 3}$ to hold, so $ (5^{1/3})^3$ must equal 5.
    2. Rewrite expressions involving radicals and rational exponents using the properties of exponents.
  2. Use properties of rational and irrational numbers.
    1. Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number and an irrational number is irrational.

This is supposed to indicate that someone understands real numbers? Number 1 only shows that one knows how to do arithmetic with exponents, asking the student to know a very specific argument, and number 2 is just memorizing some basic properties of rational numbers. There are some HUGE questions left unasked. Here are a few:

  1. What is a real number? How does it differ from other kinds of numbers?
  2. Is infinity a real number? If so, how does it fit with the definition of a real number? If not, is it some other kind of number?
  3. What does it mean to be rational and irrational?
  4. What are some examples of irrational numbers? Why are those examples irrational?
  5. Are there more irrational numbers than rational numbers? Vice versa? Are they “equal” in size?
  6. If we can’t know it exactly, how would you estimate the value of a number like $ \pi^4$?

To be fair, the grade 8 standards address some of these questions, but in an odd way. Rather than say that students should know that real numbers can be (sort of) defined by a finite integer part and an infinite decimal expansion, it says

Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number.

Does that mean that a number can have multiple decimal expansions? Does every decimal expansion also correspond to a number? Or can a decimal expansion represent multiple numbers? What about the number whose “decimal expansion” has an infinite number of 1’s before the decimal point? Does that mean that infinity is a real number? As you can see, these are some very basic questions about real numbers, which are arguably more stimulating and important than being able to convert back and forth between decimal expansions and rational numbers (as the 8th grade standard requires, but nobody actually does for numbers harder than 1/3).

The important point I want to make here is that the truly “Big Ideas” underlying this topic are as follows, and they’re only halfway related to numbers themselves:

  1. Understand the importance of precise definitions, and be able to apply those definitions to simple questions, such as “Prove that 1/3 is a real number,” and “Argue why infinity is not a real number.”
  2. Understand that we can define notation as we see fit, e.g. $ 5^{1/3}$, and more deeply that mathematical concepts are invented via definitions.
  3. Understand the concept of a correspondence between two collections. Know how to argue that a correspondence is or is not bijective (by any other name); a small programmatic sketch of this idea follows the list.
  4. Understand basic proofs of impossibility.
  5. Understand the concept of approximation, and understand how to quickly get rough approximations of quantities. Extend this to estimate concrete real-world quantities, like the number of pianos in your hometown.
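
To make the third idea concrete, here is a minimal sketch in Python (the names and the scenario are mine, purely illustrative) of what it means to check that a correspondence between two finite collections is injective, surjective, or bijective:

    def is_injective(mapping, domain):
        # No two elements of the domain map to the same thing.
        images = [mapping[x] for x in domain]
        return len(images) == len(set(images))

    def is_surjective(mapping, domain, codomain):
        # Every element of the codomain is hit by something in the domain.
        return set(mapping[x] for x in domain) == set(codomain)

    def is_bijective(mapping, domain, codomain):
        # A perfect matching: injective and surjective at once.
        return is_injective(mapping, domain) and is_surjective(mapping, domain, codomain)

    # The kindergarten "matching" standard, restated: two groups have the same
    # size exactly when some correspondence between them is bijective.
    apples = ["a1", "a2", "a3"]
    children = ["ann", "ben", "cal"]
    handout = {"a1": "ann", "a2": "ben", "a3": "cal"}
    assert is_bijective(handout, apples, children)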

And these ideas are the actual ideas that the Common Core is looking for, the ones that apply across all of mathematics and actually relate to real critical thinking skills. But it seems that the only place in the standard where they address the idea of a correspondence is in a Kindergarten “Counting” standard:

CCSS.Math.Content.K.CC.C.6 Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.

“Number and Quantity” goes on to describe the importance of consistently using units and rounding measurements to the right number of digits; memorizing properties of complex numbers (which any introductory college course will redo from scratch anyway); and more rote manipulation of vectors and matrices that few high school students have any reason to know. Other big ideas apply here, ideas from geometry and the idea of correspondence in particular, but the standards still focus on mechanical abilities.

The Common Core folks argue in quite a few places that knowing the reasons for why certain mathematical facts are true is what constitutes a true understanding of those facts. And yes, I agree. But there’s much more to the story. If we want students to know why we define $ 5^{1/3}$ as we do, to make a nice extension of the rules of exponent arithmetic, it’s certainly a deeper understanding than just memorizing how to do the arithmetic itself. But it’s just another kind of memorization! It’s memorization of a specific mathematical reason for a specific mathematical fact. It’s a better kind of memorization than we used to require, but is it critical thinking or problem solving? It’s hard to say whether or not it requires more of the mental faculty we want it to, but if it’s not then we lose, and if it is, then this is a pretty indirect way of going about it.

Again, my big point here is that the requirements of the Standard overlook the deep underlying mathematical thinking skills that we hope are being developed when we ask them to know whatever it is we want them to know. These big concepts like correspondence and impossibility and approximation should be the central focus. The particular rules of exponents and the specific properties of irrational numbers, these are tools and sidenotes that accentuate fluency in the big concepts as applied to solving problems. Almost nobody needs to know facts about irrational numbers in their careers, but relating things by correspondence is a truly useful mathematical skill.

Taking a Step Back

So let’s pause for a moment and give some counterpoint. I could just be focusing super narrowly on one or two topics that I feel the Common Core misrepresents, and using that gripe to claim the entire Common Core is crap when it actually has lots of merit in other areas.

While I do think that the standard addresses a few topics well (more on that later), I claim the pattern of “Number and Quantity” is endemic. Take for example the section on Geometry, Measurement & Dimension. I was really hopeful here that one of the standards would be “Understand what dimension means,” but no dice. Instead it’s the same old memorization of formulas for volumes of geometric shapes.

Even worse, when students are asked to derive these formulas, the standard says

Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri’s principle, and informal limit arguments.

I’d be surprised if many of my readers had even heard of Cavalieri’s principle before reading this, but this is the pattern striking again. The standard expects the students to know something very specific: not just how to use Cavalieri’s principle, but how to apply it just to these special objects, while ignoring the underlying principles at work. A true understanding of measurement and volume would be:

Reason about the volume of a solid you’ve never seen before.

But more deeply, Cavalieri’s principle is just another kind of correspondence argument applied to geometry. And I see this right away without muddling through that terribly written Wikipedia page because I have a solid understanding of the notion of a correspondence and I can recognize it at work.

The “dissection” argument is another deep principle that is mostly ignored in the standard, so let me spell it out. One way to solve problems is to break them down into simpler problems you know how to solve, and to figure out how to piece them back together again. Any high school student can understand this technique because by high school they’ve learned to put their pants on one leg at a time. But this idea alone accounts for a wide breadth of mathematical solutions to problems (the day it was applied to signal processing is often credited as the day the Age of Information began!). So they should be seeing (and using) this technique applied to many problems, be they about algebra or geometry or cats. It shouldn’t be hidden away in a single (perhaps memorized) geometric argument.

And finally, the “limit argument” (called exhaustion in some educational circles, but I don’t like this name) is an application of approximation. The idea is that a sequence of increasingly good lies will eventually give you the truth, and it’s one of the deepest thoughts that mankind has ever had! So indeed, the “Big Ideas” across this standard are big ideas, but the writers of the standard neglect to point out their significance in favor of very specific and arguably pointless factual requirements.

The geometry section is full of other similar nonsense: using the laws of sines and cosines, the same geometry proofs that Paul Lockhart derides (page 19), and memorizing minutiae about the equations of parabolas, ellipses, and hyperbolas. They say the big ideas are similarity, transformations, and symmetry (indeed these are big ideas!) but then largely revert to the same old awful kinds of rote memorization and symbol pushing we’ve grown to hate about math.

A real course in geometry, measurement, and mathematical problem solving might even follow Lockhart’s book, Measurement. In fact, if students really absorbed the contents of this book, that would constitute an entire high school mathematics education. Why do I say that? Because this book focuses on developing mathematical thinking skills in a way that no high school education I’ve heard of has attempted. This book emphasizes methods and exploration over facts, and teaches readers to conjecture and reason without telling them how to do everything. It provides numerous exercises without clear answers and has no solution manual. And I claim that any facts required by the standards that are not covered there could be taught to a student who is comfortable with this book in one month or less. That is, all of the “standards” simply fall out of the more important deeper concepts, and we should be working forward from the deep ideas. We should use things like the law of sines as examples of these deep principles in action, but knowing or not knowing the law of sines or when to use it gives little indication of critical thinking.

I could continue with algebra, and the other sections, but I think my point is clear. The standards are filled with the same arbitrary choices of technical facts, and the deep ideas, the kinds of thinking we want to develop, are absent.

Modeling and Other Big Ideas

There is one aspect of mathematical problem solving that I think the Common Core addresses well, and that is modeling. That is, students need to be able to take a poorly defined problem, whether it’s “analyzing the stopping distance for a car” or asking what constitutes a number, and boil it down to its essence. This means making and questioning assumptions, debating the quality of a model, testing and revising, and interpreting results in a principled way. This is arguably the only kind of mathematics that non-mathematicians do outside of academia, and I feel that the description in the Common Core does justice to its importance. Even better, they admit there can be no “list” of facts the students are expected to know about specific models or tools. Here the Common Core gives in to the truth that discussion and original arguments are the key to developing fluency. In my imagination there were a select few key players lobbying for this to be included in the Core, and I say bravo to you, well done!

But there are some other big ideas that the Common Core misses entirely.

One of these is the idea of generalization. That is, one core part of mathematical thinking is to take a solution to a given problem and extend it to more general patterns and problems. I see only vague allusions to this concept in the Common Core (students are expected to know, for example, how to generate a sequence when given a pattern). But this is literally the core stuff of mathematical problem solving: if you can’t solve a hard problem, try to simplify it until you can solve it, and then try to generalize your solution back to the original problem. This is why it makes sense to think about matrices and polynomials as “generalizations of integers,” because natural facts about integers extend (or don’t extend as the case may be) to these more general settings. Students should be comfortable facing problems that may require simplification and mathematically “feeling around” for insights.

The second idea is that of the algorithm. I’m not talking about programming, that’s a different story. I’m talking about procedures that anyone might follow to get something done. People follow algorithms all day, and some of the most natural problems (and interesting problems for students to think about) are algorithmic in nature: how to guarantee you win a game, how to find the quickest way to get somewhere, how to win the heart of that cute guy or girl. Indeed, students are expected to follow algorithms all over the Common Core, from approximating irrationals by rationals to solving algebra and making inferences. The only non-algorithmic aspect of the Common Core is modeling, and here they provide an algorithm for how to do it! And so it makes sense to study exactly what makes an algorithm an algorithm, when algorithms apply, and more deeply what makes an algorithm good.

[Image: the Common Core’s flowchart describing the modeling process]

I find it hard to believe a mathematician would ever make such a diagram…

The last point I want to make is that true mathematical understanding arises from trying to solve problems that you are not told how to solve ahead of time, and recognizing when these big ideas apply and when they do not. Students love to solve puzzles for their own sake, and they don’t need to be embedded in stupid “real world” applications like computing mortgage payments. Indeed, this is what Sergio Correa did in his financially destitute school in Mexico, and his students have made progress beyond belief (see, Common Core people? Money is not the problem or the solution!). It’s okay for problems to be left unsolved by students for days, weeks, or even years, and students need to be comfortable with identifying their own lack of understanding.

I want to expand on the idea a bit more. Taking it to the extreme, you could ask a more daring question: should students be exposed to problems they cannot possibly solve? My answer is yes! Emphatically, yes! A thousand times, yes! Students need to be exposed to many kinds of problems they cannot solve to be prepared for a world in which most problems don’t have known solutions (or else they wouldn’t be problems in the first place). Here are a few examples:

1. They should be exposed to problems that can be solved in principle but are too hard to solve with the techniques they know well.

For example: elementary level students who are just beginning to learn about variables should be asked to add up all the numbers between 1 and 100. They should be encouraged to try it by hand until they’re convinced it’s too hard, and they should be rewarded if they actually do manage to do it by hand. They should then be encouraged to think of other, cleverer, ways to solve the problem. No idea is crazier than adding it up by hand, and so much time (at least a full class session) should be spent puzzling over what in the world could possibly be done. Finally, an elegant solution should be shown that reduces the problem to multiplication, and the use of variables highlighted (let S be the sum of these numbers, even though we don’t know what it is…). And then the problem can be extended to a general sum of the first few integers, sums of squares, and so on.
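
For concreteness, here is the classic solution alluded to above: write the sum forwards and backwards, and add the two copies column by column.

$ \displaystyle S = 1 + 2 + \dots + 99 + 100$

$ \displaystyle S = 100 + 99 + \dots + 2 + 1$

Each of the 100 columns sums to 101, so $ 2S = 100 \cdot 101$ and hence $ S = \frac{100 \cdot 101}{2} = 5050$. The same trick gives the general formula $ 1 + 2 + \dots + n = \frac{n(n+1)}{2}$.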

2. They should be exposed to problems that cannot be solved with any technique they know but will foreshadow their education in future classes.

When students are learning about the slope of a linear function, they must be encouraged to wonder how one could reason about the steepness of nonlinear things. For it’s obvious that some nonlinear functions are steeper than others at different places, but how can we use a single number to compare them like we do for slopes of lines? The answer is that we cannot! The “correct” way is to invent calculus, but of course the calculus way of doing it involves extending the usual notion of slopes of lines by taking limits. The students will not know this, nor will they find it out by the end of their algebra class, but it should linger in their minds as a motivating question: there are always more unanswered questions! What about the “steepness” of surfaces? Can we talk about the “steepness” of time? Students should readily ask and be asked such intriguing questions (again, this is generalization at work). The frustrating irony is that educators are effectively forced to do this foreshadowing, but rather than have their students wonder about the steepness of curves, they make them memorize the form of a “difference quotient” with little indication of why, other than that “it will be useful later.” Seeing the solution before they even hear the problem? It’s completely backwards!

3. They should be asked obvious technical questions that appear not to have any technique at all.

For example, they might be asked the difficult question: is $ \pi$ a rational number? Indeed, this is an extremely natural question to ask, since $ \pi$ is defined to be a ratio of two numbers: the circumference and diameter of a circle. But despite the fact that there are many proofs using a variety of techniques, almost all proofs that $ \pi$ is irrational are beyond the abilities of high school students to follow, and not even familiar to the average college math major. I certainly couldn’t prove it off the top of my head. This is quite different from the previous kind of problem, because there it was obvious that you can reason about the steepness of nonlinear functions; the students just don’t know how to formulate it rigorously. But here, the rigorous question is understood (can $ \pi$ be represented as a quotient of integers?) but it’s unclear whether the problem is easy or hard to solve, and it turns out to be hard.

4. They should be exposed to problems that NOBODY knows how to solve.

When students learn about rational numbers (and if they know about $ e$, which I doubt they should before calculus but I’ll use it in my example anyway), they could be asked whether $ \pi + e$ is rational. This is an open problem in mathematics. If they’re learning about prime numbers, they should be asked whether every positive integer can be written as a sum of two primes. Every even positive integer? Every even integer greater than 2? And so they go through the process of refining “stupid” questions (with obvious “no” answers) into deep open conjectures. And then it can be connected to other ideas: can an algorithm answer this question? How long might it take? Can we try to correspond integers to something else? Can we give an approximation argument? There are so many simple open problems in number theory that it baffles me that many students are never exposed to them.
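
To tie this to the algorithmic questions above, here is a minimal sketch in Python (my own illustration, not part of any standard) that tests the refined conjecture, that every even integer greater than 2 is a sum of two primes, on small cases. No amount of such checking settles the open problem, and that is exactly the point:

    def is_prime(n):
        # Trial division; fine for small n.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_witness(n):
        # Return a pair of primes summing to the even number n, or None.
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    # The conjecture survives every even number up to 10,000.
    assert all(goldbach_witness(n) is not None for n in range(4, 10001, 2))
    print(goldbach_witness(100))  # (3, 97)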

The point of all this is that mathematics, and mathematical problem solving skills, are not just about picking the right tool from the set of tools you’ve been taught. It’s about recognizing when any tool you’ve ever heard of even applies! More deeply, it’s about debating with your colleagues that problems can and cannot be solved using certain methods, and giving principled reasons why you think so. This sounds like “modeling,” and indeed the strategies used for modeling also apply to pure mathematics, a fact that most people don’t realize. Critical thinking and mathematical problem solving is more akin to art and debate than to mechanical computation. And the most interesting problems are the most natural questions one could ask, not the contrived “compute the volume of an oddly shaped wine glass” questions. Those are busy work questions, not open for discussion and interpretation. And the students know it.

Why This Matters

A reasonable objection to my rant goes as follows: why does it matter that the Common Core isn’t super clear about these “big ideas” I’m claiming are so central? If the teachers are knowledgeable they’ll know what is important and what isn’t important, and how to teach the material in the manner that best promotes learning.

The problem is one of intention and misdirection. If the teachers are not rock-solid in their own understanding, then this Common Core, promoted by The World’s Leading Education Experts, can easily narrow their teaching to just what’s in the Core. More disturbing are the people who don’t know mathematics well: the principals, policy makers, and standardized test writers who take these guidelines at face value. Even if a teacher has a good reason to favor one area like modeling over memorizing facts about hyperbolas, they will be met with the same kind of obtuse opposition by administrators seeking short-term business goals. The standards exist, one might argue, to explain to these people (the people who wouldn’t know mathematical reasoning if it hit them in the face) that the teachers are teaching ideas much deeper than the rules of matrix multiplication. The Common Core represents this adequately with regards to modeling, but little else.

The Common Core also claims that the standards should be separated from specific curriculum and pedagogy, and one would reasonably argue that what I’m presenting here is pedagogy, not standards. Regardless of whether you agree or disagree with this, it still remains that the Common Core is designed to influence curriculum and pedagogy. And so even if the Common Core must have “facts” as standards, if it fails to emphasize the deep ideas underlying the factual obligations then it fails to influence pedagogy in the right way. In doing so, it reinforces obviously bad practices like teaching to the test.

The important thing to realize is that the correct pedagogy is already basically known: from a young age students should explore and reason and puzzle without horse blinders. Sometimes there are some dry factual things they cannot escape, but such is true of everything. So the separation of mathematical church and state (pedagogy and standards) claimed by the Common Core seems to be entirely a political one. It would infringe on the freedom of the teachers to impose pedagogical constraints, especially ones that only work for some environments. If this necessarily causes deficiencies of a global set of standards, then it is simply the wrong approach.

Again, I cannot say for sure whether the writers of the standard don’t understand the mathematics well enough, and it would be pointlessly arrogant to imply my own superiority. I hate to think it’s a bureaucracy issue, and that the designers felt the only progress they could make was to emphasize modeling as well as they did. If this is the truth then it is a sad one, because where I and many of the teachers sit, our country is stuck with the results.

We don’t need more compartmentalization by subject and grade. We do need a recognition of the deep critical thinking skills we want to teach. “Abstract reasoning” is not a specific enough goal to warrant policy. We need to admit to our teachers and our students exactly what we’re trying to get them to learn. And then we can organize education based on increasingly sophisticated applications of those ideas, to thinking about shapes, numbers, modeling, to whatever you want. Then students won’t forget about counting as “matching” after kindergarten ends, or only consider approximations related to irrational numbers. They will instead see these ideas blossom over time into the mental Swiss Army knives that they are. And they will use these ideas as a foundation to acquire whatever factual knowledge they might need to succeed in their careers.

Methods of Proof — Induction

In this final post on the basic four methods of proof (but perhaps not our last post on proof methods), we consider the proof by induction.

Proving Statements About All Natural Numbers

Induction comes in many flavors, but the goal never changes. We use induction when we want to prove something is true about all natural numbers. These statements will look something like this:

For all natural numbers n, $ 1 + 2 + \dots + n = n(n+1)/2$.

Of course, there are many ways to prove this fact, but induction relies on one key idea: if we know the statement is true for some specific number $ n$, then that gives us information about whether the statement is true for $ n+1$. If this isn’t true about the problem, then proof by induction is hopeless.

Let’s see how we can apply it to the italicized statement above (though we haven’t yet said what induction is, this example will pave the way for a formal description of the technique). The first thing we notice is that indeed, if we know something about the first $ n$ numbers, then we can just add $ n+1$ to it to get the sum of the first $ n+1$ numbers. That is,

$ \displaystyle 1 + \dots + n + (n+1) = (1 + \dots + n) + (n+1)$

Reiterating our key point, this is how we know induction is a valid strategy: the statement written for a fixed $ n$ translates naturally into the statement about $ n+1$. Now suppose we know the theorem is true for $ n$. Then we can rewrite the above sum as follows:

$ \displaystyle 1 + \dots + n + (n+1) = \frac{n(n+1)}{2} + (n+1)$

With some algebra, we can write the right-hand side as a single fraction:

$ \displaystyle 1 + \dots + (n+1) = \frac{n(n+1) + 2(n+1)}{2}$

and factoring the numerator gives

$ \displaystyle 1 + \dots + (n+1) = \frac{(n+1)(n+2)}{2}$

Indeed, this is precisely what we’re looking for! It’s what happens when you replace $ n$ by $ n+1$ in the original statement of the problem.

At this point we’re very close to being finished. We proved that if the statement is true for $ n$, then it’s true for $ n+1$. And by the same reasoning, it will be true for $ n+2, n+3, $ and all numbers after $ n$. But this raises the obvious question: what’s the smallest number that it’s true for?

For this problem, it’s easy to see the answer is $ n=1$. A mathematician would say: the statement is trivially true for $ n=1$ (here trivial means there is no thinking required to show it: you just plug in $ n=1$ and verify). And so by our reasoning, the statement is true for $ n=2, n=3, $ and so on forever. This is the spirit of mathematical induction.

Formal Nonsense

Now that we’ve got a taste of how to use induction in practice, let’s formally write down the rules for induction. Let’s have a statement which depends on a number $ n$, and call it $ p(n)$. This is written as a function because it actually is one (naively). It’s a function from the set of natural numbers to the set of all mathematical statements. In our example above, $ p(n)$ was the statement that the equality $ 1 + \dots + n = n(n+1)/2$ holds.

We can plug in various numbers into this function, such as $ p(1)$ being the statement “$ 1 = 1(1+1)/2$ holds,” or $ p(n+1)$ being “$ 1 + \dots + (n+1) = (n+1)(n+1+1)/2$ holds.”

The basic recipe for induction is then very simple. Say you want to prove that $ p(n)$ is true for all $ n \geq 1$. First you prove that $ p(1)$ is true (this is called the base case), and then you prove the implication $ p(n) \to p(n+1)$ for an arbitrary $ n$. Each of these pieces can be proved with any method one wishes (direct, contrapositive, contradiction, etc.). Once they are proven, we get that $ p(n)$ is true for all $ n$ automatically.

Indeed, we can even prove it. A rigorous proof requires a bit of extra knowledge, but we can easily understand the argument: it’s just a proof by contradiction. Here’s a quick sketch. Let $ X$ be the set of all natural numbers $ n$ for which $ p(n)$ is false. Suppose to the contrary that $ X$ is not empty. Every nonempty set of natural numbers has a smallest element, so let’s call $ m$ the smallest number for which $ p(m)$ is false. Since we proved $ p(1)$ is true, we know $ m > 1$, so $ m-1$ is a natural number smaller than $ m$, and $ p(m-1)$ must be true (as $ m$ was the smallest failure). But we proved that whenever $ p(n)$ is true then so is $ p(n+1)$, so $ p(m-1+1) = p(m)$ is true, a contradiction.

Besides practicing proof by induction, that’s all there is to it. One more caveat is that the base case can be some number other than 1. For instance, it is true that $ n! > 2^n$, but only for $ n \geq 4$. In this case, we prove $ p(4)$ is true, and $ p(n) \to p(n+1)$ with the extra assumption that $ n \geq 4$. The same inductive result applies.

Here are some exercises the reader can practice with, and afterward we will explore some variants of induction.

  1. Prove that $ n! > 2^n$ for all $ n \geq 4$.
  2. Prove that for all $ n \geq 1$ the following equality holds: $ 1/(1 \cdot 2) + 1/(2 \cdot 3) + \dots + 1/(n \cdot (n+1)) = n/(n+1)$.
  3. Prove that for every natural number $ n$, a set of $ n$ elements has $ 2^n$ subsets (including the empty subset).

This last exercise gives a hint that induction can prove more than arithmetic formulas. Indeed, if we have any way to associate a mathematical object with a number, we can potentially use induction to prove things about those objects. Unfortunately, we don’t have any mathematical objects to work with (except for sets and functions), and so we will stick primarily to proving facts about numbers.

One interesting observation about proof by induction is very relevant to programmers: it’s just recursion. That is, if we want to prove a statement $ p(n)$, it suffices to prove it for $ p(n-1)$ and do some “extra computation” to arrive at the statement for $ p(n)$. And of course, we want to make sure the recursion terminates, so we add in the known result for $ p(1)$.
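
To make the analogy concrete, here is a minimal sketch in Python (my own illustration) of the recursion corresponding to the inductive proof above: the base case of the recursion is the base case of the induction, and the recursive call plays the role of the inductive hypothesis.

    def sum_up_to(n):
        # Recursively compute 1 + 2 + ... + n, mirroring the inductive proof.
        if n == 1:
            return 1                 # base case: p(1)
        return sum_up_to(n - 1) + n  # inductive step: p(n-1) gives p(n)

    # The inductive proof guarantees agreement with the closed form for all n.
    assert all(sum_up_to(n) == n * (n + 1) // 2 for n in range(1, 200))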

Variations on Induction, and Connections to Dynamic Programming

The first variation of induction is simultaneous induction on multiple quantities. That is, we can formulate a statement $ p(n,m)$ which depends on two natural numbers independently of one another. The base case is a bit trickier, but paralleling the above remark about recursion, double-induction follows the same pattern as a two-dimensional dynamic programming algorithm. The base cases would consist of all $ p(1,m)$ and all $ p(n,1)$, and the inductive step to get $ p(n,m)$ requires $ p(n-1,m)$ and $ p(n,m-1)$ (and potentially $ p(n-1, m-1)$ or others; it depends on the problem).

Unfortunately, natural instances where double induction is useful (or anywhere close to necessary) are rare. Here is a tricky but elementary example. Call

$ \displaystyle C(m,n) = \frac{(2m)!(2n)!}{m!n!(m+n)!}$,

where the exclamation point denotes the factorial function. We will outline a proof that $ C(m,n)$ is always an integer for all $ m, n \geq 0$. If we look at the base cases $ C(0,n)$ and $ C(m,0)$ (recalling that 0! = 1), we get $ (2n)!/(n!\,n!)$ and $ (2m)!/(m!\,m!)$, and these happen to be binomial coefficients (here, the number of ways to choose $ n$ objects from a collection of $ 2n$ objects), and binomial coefficients are known to be integers. Now the inductive step relies on the fact that $ C(m,n-1)$ and $ C(m+1, n-1)$ are both integers. If this is true then

$ \displaystyle C(m,n) = 4C(m,n-1) - C(m+1, n-1)$,

which is obviously again an integer.

If we write these values in a table, we can see how the induction progresses in a “dynamic programming” fashion:

[Image: a table of the values $ C(m,n)$, filled in column by column as in dynamic programming]

In order to fill the values in the next $ n$ column (prove the statement for those values of $ n$), we need to fill the entire $ n-1$ column (for indeed, we rely on the inductive hypothesis for both the $ m+1$ and $ m$ row). But since our base case was the entire $ n=0$ column, we can fill the entire table. In fact, we have just described a dynamic programming algorithm for computing the value of $ C(m,n)$ in $ mn$ steps. The correctness of the algorithm is indeed an inductive proof of the theorem.
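
Here is a minimal sketch of that dynamic program in Python (the function names are mine; note the bookkeeping assumption, spelled out in the comments, that the base column must extend $ N$ rows past the part of the table we want):

    from math import comb, factorial

    def super_catalan_table(M, N):
        # Fill C(m, n) = (2m)!(2n)! / (m! n! (m+n)!) for 0 <= m <= M, 0 <= n <= N
        # via the recurrence C(m, n) = 4*C(m, n-1) - C(m+1, n-1).
        # Filling column n up to row M needs column n-1 up to row M+1,
        # so the base column (n = 0) must extend up to row M + N.
        col = [comb(2 * m, m) for m in range(M + N + 1)]  # C(m, 0) = binomial(2m, m)
        table = {(m, 0): col[m] for m in range(M + 1)}
        for n in range(1, N + 1):
            col = [4 * col[m] - col[m + 1] for m in range(len(col) - 1)]
            for m in range(M + 1):
                table[(m, n)] = col[m]
        return table

    # Sanity check against the closed form.
    def C(m, n):
        return (factorial(2 * m) * factorial(2 * n)
                // (factorial(m) * factorial(n) * factorial(m + n)))

    values = super_catalan_table(4, 4)
    assert all(values[m, n] == C(m, n) for (m, n) in values)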

Perhaps uninterestingly, this is asymptotically slower than the naive algorithm of computing $ C(m,n)$ directly by computing $ (2n)!, (2m)!, (n+m)!$ directly; this would take a linear number of steps in $ n$, assuming $ n > m$. In passing, this author wonders if, when the numbers are really large, the lack of division and multiplication in the dynamic program (multiplying by 4 using bit shifting instead) would overtake the naive algorithm. But $ C(m,n)$ is certainly not interesting enough in its own right for anyone to care 🙂

At this point, we have noticed that we sometimes use induction and assume that many smaller instances of the statement are true. Indeed, why not inductively assume that the statement holds for all smaller $ n$? This would certainly give the prover more tools to work with. Indeed, this technique is sometimes called strong induction, in the sense that we assume a stronger inductive hypothesis when we’re trying to prove $ p(n+1)$. It may not be entirely obvious (especially to one well versed in the minutiae of formal logic) that this is actually equivalent to normal induction, but it is. In fact, the concept of “strong” induction is entirely pedagogical in nature. Most working mathematicians will not mention the difference in their proofs.

The last variant we’ll mention about induction is that of transfinite induction. The concept is that if you have any set $ X$ which is well-ordered (essentially, this means the argument we used to justify induction earlier in this post still applies), then we can perform induction on its elements. In this way, we can “parameterize” statements by elements of an arbitrary well-ordered set, so that instead of $ p(n)$ being a function from natural numbers to mathematical statements, it’s a function from $ X$ to mathematical statements. One somewhat common example of when $ X$ is something besides natural numbers is when we use the so-called cardinal numbers. We have already seen two distinct infinite cardinal numbers in this series: the cardinality of the integers and the cardinality of the real numbers (indeed, “cardinal number” just means a number which is the cardinality of a set). It turns out that there are many more kinds of cardinal numbers, and you can do induction on them, but this rarely shows up outside of mathematical logic.

And of course, we should close this post on an example of when induction goes wrong. For this we point the reader to our proof gallery, and the false proof that all horses are the same color. It’s quite an amusing joke, and hopefully it will stimulate the reader’s mind to recognize the pitfalls that can occur in a proof by induction.

So those are the basic four proof techniques! Fortunately for the reader, pretty much all proofs presented on this blog follow one of these four techniques. I imagine many of my readers skip over the proofs entirely (if only I could put proofs in animated gifs, with claims illustrated by grumpy cats!). So hopefully, if you have been intimidated or confused by the structure of the proofs on this blog, this will aid you in your future mathematical endeavors.  Butchering an old phrase for the sake of a bad pun, the eating of the pudding is in the proof. 🙂

Until next time!

Methods of Proof — Contradiction

In this post we’ll expand our toolbox of proof techniques by adding the proof by contradiction. We’ll also expand on our knowledge of functions on sets, and tackle our first nontrivial theorem: that there is more than one kind of infinity.

Impossibility and an Example Proof by Contradiction

Many of the most impressive results in all of mathematics are proofs of impossibility. We see these in lots of different fields. In number theory, plenty of numbers cannot be expressed as fractions. In geometry, certain geometric constructions are impossible with a straight-edge and compass. In computing theory, certain programs cannot be written. And in logic even certain mathematical statements can’t be proven or disproven.

In some sense proofs of impossibility are the hardest proofs, because it’s unclear to the layman how anyone could prove it’s not possible to do something. Perhaps this is part of human nature, the feeling that nothing is truly impossible. But perhaps it’s more surprising that the main line of attack in proving something is impossible is to assume it’s possible, and see what follows as a result. This is precisely the method of proof by contradiction:

Assume the claim you want to prove is false, and deduce that something obviously impossible must happen.

There is a simple and very elegant example that I use to explain this concept to high school students in my guest lectures.

Imagine you’re at a party of a hundred people, and any pair of people are either mutual friends or not mutual friends. Being a mathematical drama queen, you decide to count how many friends each person has at the party. You notice that there are two people with the same number of friends, and you wonder if it will always be the case for any party. And so the problem is born: prove or disprove that at every party of $ n$ people, there exist two people with the same number of friends at the party.

If we believe this is true, and we can’t seem to find a direct proof, then we can try a proof by contradiction. That is, we assume that there are not two people with the same number of friends. Equivalently, we can assume that everybody has a distinct number of friends. Well what could the possibilities be? On one hand, if there are $ n$ people at the party, then the minimum number of friends one could have is zero (if you’re quite lonely), and the maximum is $ n-1$ (if you’re friends with everybody). So there are $ n$ distinct numbers, and $ n$ people, and everyone has to have a different number. That means someone has to have zero friends, and someone has to be friends with everybody. But this can’t possibly be true: if you’re friends with everyone (and we’re only counting mutual friendships) then nobody can be friendless. Thus, we have arrived at a contradiction, and our original assumption must have been incorrect. That is, every party has two people with the same number of friends at the party.
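
Before moving on, the claim is easy to sanity-check experimentally. Here is a minimal sketch in Python (mine, and of course no substitute for the proof) that builds random parties and looks for a repeated friend count:

    import random

    def has_repeated_friend_count(n, p=0.5):
        # Build a random party: each pair independently becomes mutual friends.
        friends = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    friends[i][j] = friends[j][i] = True
        counts = [sum(row) for row in friends]
        # Fewer distinct counts than people means two people share a count.
        return len(set(counts)) < n

    # The theorem says this never fails for any party of n >= 2 people.
    assert all(has_repeated_friend_count(random.randint(2, 50)) for _ in range(500))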

There are certainly other proofs of this fact (I know of a direct proof which is essentially the same proof as the one given above), and there are more mathematical ways to think about the problem. But this is a wonderful example of a proof which requires little else than the method of contradiction.

A Reprise on Truth Tables, and More Examples

Just as with our post on contrapositive implication, we can analyze proof by contradiction from the standpoint of truth tables. Recall the truth table for an implication $ p \to q$:

p  q  p->q
T  T   T
T  F   F
F  T   T
F  F   T

We notice that an implication can only be false if the hypothesis $ p$ is true and the consequence $ q$ is false. This is the motivation for a proof by contradiction: if we show this case can’t happen, then there’s no other option: the statement $ p \to q$ must be true. In other words, if supposing “p and not q” is true implies something which we know to be false, then by the very same truth table argument it must be that either “q” is true or “p” is false. In any of these cases “p implies q” is true.

But all of this truth table juggling really takes us away from the heart of the method. Let’s do some proofs.

First, we will prove that the square root of 2 is not a rational number. That is, we are proving the statement that if $ x$ is a number such that $ x^2 = 2$, then it cannot be true that $ x = p/q$ where $ p,q$ are integers.

Suppose to the contrary (this usually marks the beginning of a proof by contradiction) that $ x = p/q$ is a ratio of integers. Then we can square both sides to get $ 2 = x^2 = p^2 / q^2$, and rearrange to arrive at $ 2q^2 = p^2$. Now comes the tricky part: if a number is a divisor of $ p$, then its square must divide $ p^2$; in particular, the largest power of 2 dividing any square number is even (and $ 2^0$ counts as an even power). Now in the equation $ 2q^2 = p^2$ the right hand side is a square, so the largest power of 2 dividing it is even, while the left hand side is two times a square, so the largest power of 2 dividing it is odd (2 times an even power of 2 gives an odd power of 2). But the two sides are equal! They can’t possibly have different largest powers of 2 dividing them. So we have arrived at a contradiction, and it must not be the case that $ x$ is rational.
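
For readers who like to poke at a proof computationally, here is a small Python sketch (my own, not part of the argument) of the parity observation the proof hinges on:

    import random

    def power_of_two_in(x):
        # The exponent of 2 in the prime factorization of x.
        k = 0
        while x % 2 == 0:
            x //= 2
            k += 1
        return k

    # The power of 2 dividing p^2 is always even, while the power of 2
    # dividing 2*q^2 is always odd, so 2*q^2 = p^2 can never hold.
    for _ in range(1000):
        p, q = random.randint(1, 10**6), random.randint(1, 10**6)
        assert power_of_two_in(p * p) % 2 == 0
        assert power_of_two_in(2 * q * q) % 2 == 1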

This is quite a nice result, and a true understanding of the proof comes when you see why it fails for numbers which actually do have rational square roots (for instance, try it for the square root of 9 or 36/25). But the use of the method is clear: we entered a magical fairy land where the square root of 2 is a rational number, and by using nothing but logical steps, we proved that this world is a farce. It cannot exist.

Oftentimes the jump from “suppose to the contrary” to the contradiction requires a relatively small piece of insight, but at other times it is quite large. In our example above, the insight was related to divisors (or prime factorizations) of a number, and these are not at all as obvious to the layman as our “having no friends” contradiction earlier.

For instance, here is another version of the square root of two proof, proved by contradiction, but this time using geometry. Another example is on tiling chessboards with dominoes (though the application of the proof by contradiction in this post is more subtle; can you pick out exactly when it’s used?). Indeed, most proofs of the fundamental theorem of algebra (these are much more advanced: [1] [2] [3] [4]) follow the same basic recipe: suppose otherwise, and find a contradiction.

Instead of an obviously ridiculous statement like “1 = 0,” oftentimes the “contradiction” at the end of a proof will contradict the original hypothesis that was assumed. This happens in a famous proof that there are infinitely many prime numbers.

Indeed, if we suppose that there are finitely many prime numbers, we can write them all down: $ p_1 , \dots, p_n$. That is, we are assuming that this is a list of all prime numbers. Since the list is finite, we can multiply them all together and add 1: let $ q = p_1 \dots p_n + 1$. Indeed, as the reader will prove in the exercises below, every number has a prime divisor, but it is not hard to see that no $ p_i$ divides $ q$. This is because no matter what some number $ x$ is, no number except 1 can divide both $ x$ and $ x-1$ (one can prove this fact by contradiction if it is not obvious), and we already know that all the $ p_i$ divide $ q-1$. So $ q$ must have some prime divisor which was not in the list we started with. This contradicts that we had a complete list of primes to begin with. And so there must be infinitely many primes.
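
Here is a small Python sketch (an illustration of mine) of Euclid’s construction in action; notice that the new number’s smallest prime factor is missing from the starting list:

    def smallest_prime_factor(x):
        # The smallest divisor greater than 1 of any x >= 2 is prime.
        d = 2
        while d * d <= x:
            if x % d == 0:
                return d
            d += 1
        return x

    # Pretend, for the sake of contradiction, that this list holds all primes.
    primes = [2, 3, 5, 7, 11, 13]
    q = 1
    for p in primes:
        q *= p
    q += 1  # q = 2*3*5*7*11*13 + 1 = 30031

    assert all(q % p != 0 for p in primes)  # no listed prime divides q
    print(smallest_prime_factor(q))         # 59, a prime missing from the list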

Here are some exercises to practice the proof by contradiction:

  1. Prove that the base 2 logarithm of 3 is irrational.
  2. More generally, prove that $ \log_a(b)$ is irrational if there is any prime $ p$ dividing $ a$ but not $ b$, or vice versa.
  3. Prove the fundamental theorem of arithmetic, that every natural number $ n \geq 2$ is a product of primes (hint: inspect the smallest failing example).

A Few More Words on Functions and Sets

Last time we defined what it means for a function $ f: X \to Y$ on sets to be injective: different things in $ X$ get mapped to different things in $ Y$. Indeed, there is another interesting notion called surjectivity, which says that $ f$ “hits” everything in $ Y$ by something in $ X$.

Definition: A function $ f: X \to Y$ is surjective if for every element $ y \in Y$ there is an $ x \in X$ for which $ f(x) = y$. A surjective function is called a surjection. A synonym often used in place of surjective is onto.

For finite sets, we can use surjections to prove something nice about the sets they involve. If $ f:X \to Y$ is a surjection, then $ X$ has at least as many elements as $ Y$. The reader can prove this easily by contradiction. In our previous post we proved an analogous proposition for injective functions: if $ f: X \to Y$ is injective, then there are at least as many elements of $ Y$ as there are of $ X$. If we combine the two notions, we can see that $ X$ and $ Y$ have exactly the same size.

Definition: A function $ f: X \to Y$ which is both injective and surjective is called a bijection. The adjectival form of bijection is bijective.

So for finite sets, if there exists a bijection $ X \to Y$, then $ X$ and $ Y$ have the same number of elements. The converse is also true, if two finite sets have the same size one can make a bijection between them (though a strictly formal proof of this would require induction, which we haven’t covered yet). The main benefit of thinking about size this way is that it extends to infinite sets!

Definition: Two arbitrary sets $ X,Y$ are said to have the same cardinality if there exists a bijection $ f : X \to Y$. If there is a bijection $ f: \mathbb{N} \to X$ then $ X$ is said to have countably infinite cardinality, or simply to be countably infinite. If no such bijection exists (and $ X$ is not a finite set), then $ X$ is said to be uncountably infinite.

So we can say two infinite sets have the same cardinality if we can construct a bijection between them. For instance, we can prove that the even natural numbers have the same cardinality as the regular natural numbers. If $ X$ is the set of even natural numbers, just construct a function $ \mathbb{N} \to X$ by sending $ x \mapsto 2x$. This is manifestly surjective and injective (one can prove it with whatever method one wants, but it is obviously true). A quick note on notation: when mathematicians want to define a function without giving it a name, they use the “maps to” arrow $ \mapsto$. The reader can think of this as the mathematician’s version of a lambda expression. So the above map would be written in Python as lambda x: 2*x.

So we have proved, as curious as it sounds to say it, that there are just as many even numbers as all natural numbers. Even more impressive, one can construct a bijection between the natural numbers and the rational numbers. Mathematicians denote the latter by $ \mathbb{Q}$, and typically this proof is done by first finding a bijection from $ \mathbb{N} \to \mathbb{Z}$ and then from $ \mathbb{Z} \to \mathbb{Q}$. We are implicitly using the fact that a composition of two bijections is a bijection. The diligent reader has already proved this for injections, so if one can also prove it for surjections, by definition it will be satisfied for bijections.
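
One standard choice for the first of these bijections, sketched in Python (and taking $ \mathbb{N}$ to start at zero), zig-zags between the non-negative and negative integers; the inverse map witnesses that it really is a bijection:

def nat_to_int(n):
    # 0, 1, 2, 3, 4, ... maps to 0, -1, 1, -2, 2, ...
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def int_to_nat(z):
    # The inverse, undoing nat_to_int.
    return 2 * z if z >= 0 else -2 * z - 1

print([nat_to_int(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]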

Diagonalization, and a Non-Trivial Theorem

We now turn to the last proof of this post, and our first non-trivial theorem: that there is no bijection between the set of real numbers and the set of natural numbers. Before we start, we should mention that calling this theorem ‘non-trivial’ might sound insulting to the non-mathematician; the reader has been diligently working to follow the proofs in these posts and completing exercises, and they probably all feel non-trivial. In fact, mathematicians don’t use trivial with the intent to insult (most of the time) or to say something is easy or not worth doing. Instead, ‘trivial’ is used to say that a result follows naturally, that it comes from nothing but applying the definitions and using the basic methods of proof. Of course, since we’re learning the basic methods of proof nothing can really be trivial, but if we say a theorem is non-trivial that means the opposite: there is some genuinely inspired idea that sits at the focal point of the proof, more than just direct or indirect inference. Even more, a proof is called “highly non-trivial” if there are multiple novel ideas or a menagerie of complicated details to keep track of.

In any case, we have to first say what the real numbers are. Or rather, we won’t work with the entire set of real numbers, but with a “small” subset: the real numbers between zero and one. We will also view these numbers in a particular representation that should be familiar to the practicing programmer.

Definition: The set $ [0,1]$ is the set of all infinite sequences of zeroes and ones, interpreted as the set of all binary decimal expansions of numbers between zero and one.

If we want to be rigorous, we can define an infinite sequence either to be an infinite tuple (falling back on our definition of a tuple as a set), or to be a function $ f : \mathbb{N} \to \left \{ 0, 1 \right \}$. Taking the latter view, we add one additional piece of notation:

Definition: An infinite binary sequence $ (b_i) = (b_1, b_2, \dots)$ is a function $ b : \mathbb{N} \to \left \{ 0, 1 \right \}$ where we denote by $ b_i$ the value $ b(i)$.

So now we can state the theorem.

Theorem: The set $ [0,1]$ of infinite binary sequences is uncountably infinite. That is, there is no bijection $ \mathbb{N} \to [0,1]$.

The proof, as we said, is non-trivial, but it starts off in a familiar way: we assume there is such a bijection. Suppose to the contrary that $ f : \mathbb{N} \to [0,1]$ is a bijection. Then we can list the values of $ f$ in a table. Since we want to use $ b_i$ for all of the values of $ f$, we will call

$ \displaystyle f(n) = (b_{n,i}) = b_{n,1}, b_{n,2}, \dots$

This gives us the following infinite table:

$ \displaystyle \begin{matrix} f(1) &=& b_{1,1}, & b_{1,2}, & \dots \\ f(2) &=& b_{2,1}, & b_{2,2}, & \dots \\ f(3) &=& b_{3,1}, & b_{3,2}, & \dots \\ \vdots & & \vdots & & \end{matrix}$

Now here is the tricky part. We are going to define a new binary sequence which we can guarantee does not show up in this list. This will be our contradiction, because we assumed at first that this list consisted of all of the binary sequences.

The construction itself is not so hard. Start by taking $ c_i = b_{i,i}$ for all $ i$. That is, we are using all of the diagonal elements of the table above. Now take each $ c_i$ and replace it with its opposite (i.e., flip each bit in the sequence, or equivalently apply $ b \mapsto 1-b$ to each entry). The important fact about this new sequence is that it differs from every entry in the table. By the way we constructed it, no matter which $ n$ one chooses, this number differs from the table entry $ f(n)$ at digit $ n$ (and perhaps elsewhere). Because of this, it can’t occur as an entry in the table. So we just proved our function $ f$ isn’t surjective, contradicting our original hypothesis, and proving the theorem.
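
To make the construction concrete, here is a Python sketch. We cannot store an infinite table, so we represent each row as a function from index to bit, mirroring our definition of a sequence as a function $ b : \mathbb{N} \to \left \{ 0, 1 \right \}$ (the particular table below is an arbitrary stand-in):

def table(n, i):
    # Row n of a purported list of all binary sequences; this returns its i-th bit.
    return (n * i) % 2

def diagonal_flip(i):
    # The new sequence: its i-th bit is the flipped diagonal entry.
    return 1 - table(i, i)

# The new sequence differs from row n at position n, for every n we check:
print(all(diagonal_flip(n) != table(n, n) for n in range(100)))  # True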

The discovery of this fact was an important step forward in the history of mathematics. The particular technique though, using the diagonal entries of the table and changing each one, comes with a name of its own: the diagonalization argument. It’s quite a bit more specialized of a proof technique than, say, the contrapositive implication, but it shows up in quite a range of mathematical literature (for instance, diagonalization is by far the most common way to prove that the Halting problem is undecidable). It is worth noting diagonalization was not the first known way to prove this theorem, just the cleanest.

The fact itself has interesting implications that lend themselves nicely to confusing normal people. For instance, it implies not only that there is more than one kind of infinity, but that there is an infinity of infinities. Barring a full discussion of how far down the set-theory rabbit hole one can go, we look forward to next time, when we meet the final of the four basic methods of proof: proof by induction.

Until then!

Why there is no Hitchhiker’s Guide to Mathematics for Programmers

For those who aren’t regular readers: as a followup to this post, there are four posts detailing the basic four methods of proof, with the intention of detailing some more advanced proof techniques in the future. You can find them on this blog’s primers page.

Do you really want to get better at mathematics?

Remember when you first learned how to program? I do. I spent two years experimenting with Java programs on my own in high school. Those two years collectively contain the worst and most embarrassing code I have ever written. My programs absolutely reeked of programming no-nos. Hundred-line functions and even thousand-line classes, magic numbers, unreachable blocks of code, ridiculous code comments, a complete disregard for sensible object orientation, negligence of nearly all logic, and type-coercion that would make your skin crawl. I committed every naive mistake in the book, and for all my obvious shortcomings I considered myself a hot-shot programmer! At least I was learning a lot, and I was a hot-shot programmer in a crowd of high-school students interested in game programming.

Even after my first exposure and my commitment to get a programming degree in college, it was another year before I knew what a stack frame or a register was, two before I was anywhere near competent with a terminal, three before I learned to appreciate functional programming, and to this day I still have an irrational fear of networking and systems programming (the first time I manually edited the call stack I couldn’t stop shivering with apprehension and disgust at what I was doing).

I just made this function call return to a *different* place than where it was called from.

In a class on C++ programming I was writing a Checkers game, and my task at the moment was to generate a list of all possible jump-moves that could be made on a given board. This naturally involved a depth-first search and a couple of recursive function calls, and once I had something I was pleased with, I compiled it and ran it on my first non-trivial example. Lo and behold (even having followed test-driven development!), I was hit hard in the face by a segmentation fault. It took hundreds of test cases and more than twenty hours of confusion before I found the error: I was passing a reference when I should have been passing a pointer. This was not a bug in syntax or semantics (I understood pointers and references well enough) but a design error. And the aggravating part, as most programmers know, was that the fix required the change of about four characters. Twenty hours of work for four characters! Once I begrudgingly verified it worked (of course it worked, it was so obvious in hindsight), I promptly took the rest of the day off to play Starcraft.

Of course, as every code-savvy reader will agree, all of this drama is part of the process of becoming a strong programmer. One must study the topics incrementally, make plentiful mistakes and learn from them, and spend uncountably many hours in a state of stuporous befuddlement before one can be considered an experienced coder. This gives rise to all sorts of programmer culture, unix jokes, and reverence for the masters of C that make the programming community so lovely to be a part of. It’s like a secret club where you know all the handshakes. And should you forget one, a crafty use of awk and sed will suffice.

"Semicolons of Fury" was the name of my programming team in the ACM collegiate programming contest. We placed Cal Poly third in the Southern California Regionals.

“Semicolons of Fury” was the name of my programming team in the ACM collegiate programming contest. We placed Cal Poly third in the Southern California Regionals, and in my opinion our success was due in large part to the dynamics of our team. I (center, in blue) have since gotten a more stylish haircut.

Now imagine someone comes along and says,

“I’m really interested in learning to code, but I don’t plan to write any programs and I absolutely abhor tracing program execution. I just want to use applications that others have written, like Chrome and iTunes.”

You would laugh at them! And the first thing that would pass through your mind is either, “This person would give up programming after the first twenty minutes,” or “I would be doing the world a favor by preventing this person from ever writing a program. This person belongs in some other profession.” This lies in stark opposition to the common chorus that everyone should learn programming. After all, it’s a constructive way to think about problem solving and a highly employable skill. In today’s increasingly technological world, it literally pays to know your computer better than a web browser. (Ironically, I’m writing this on my Chromebook, but in my defense it has a terminal with ssh. Perhaps more ironically, all of my real work is done with paper and pencil.)

Unfortunately this sentiment is mirrored among most programmers who claim to be interested in mathematics. Mathematics is fascinating and useful and doing it makes you smarter and better at problem solving. But a lot of programmers think they want to do mathematics, and they either don’t know what “doing mathematics” means, or they don’t really mean they want to do mathematics. The appropriate translation of the above quote for mathematics is:

“Mathematics is useful and I want to be better at it, but I won’t write any original proofs and I absolutely abhor reading other people’s proofs. I just want to use the theorems others have proved, like Fermat’s Last Theorem and the undecidability of the Halting Problem.”

Of course no non-mathematician is really going to understand the current proof of Fermat’s Last Theorem, just as no fledgling programmer is going to attempt to write a (quality) web browser. The point is that the sentiment is in the wrong place. Mathematics is cousin to programming in terms of the learning curve, obscure culture, and the amount of time one spends confused. And mathematics is as much about writing proofs as software development is about writing programs (it’s not everything, but without it you can’t do anything). Honestly, it sounds ridiculously obvious to say it directly like this, but the fact remains that people feel like they can understand the content of mathematics without being able to write or read proofs.

I want to devote the rest of this post to exploring some of the reasons why this misconception exists. My main argument is that the reasons have more to do with the culture of mathematics than the actual difficulty of the subject. Unfortunately, as of this writing I don’t have a proposed “solution.” All I can claim is that part of the problem is that programmers can have mistaken views of what mathematics involves. I don’t propose a way to make mathematics easier for programmers, although I do try to make the content on my blog as clear as possible (within reason). I honestly do believe that the struggle and confusion build mathematical character, just as the arduous bug-hunt builds programming character. If you want to be good at mathematics, there is no other way.

All I want to do with this article is to detail why mathematics can be so hard for beginners, to explain a few of the secret handshakes, and hopefully to bring an outsider a step closer to becoming an insider. And I want to stress that this is not a call for all programmers to learn mathematics. Far from it! I just happen to notice that, for good reason, the proportion of programmers who are interested in mathematics is larger than in most professions. And as a member of both communities, I want to shed light on why mathematics can be difficult for an otherwise smart and motivated software engineer.

So read on, and welcome to the community.

Travelling far and wide

Perhaps one of the most prominent objections to devoting a lot of time to mathematics is that it can be years before you ever apply mathematics to writing programs. On one hand, this is an extremely valid concern. If you love writing programs and designing software, then mathematics is nothing more than a tool to help you write better programs.

But on the other hand, the very nature of mathematics is what makes it so applicable, and the only way to experience nature is to ditch the city entirely. Indeed, I provide an extended example of this in my journalesque post on introducing graph theory to high school students: the point of the whole exercise is to filter out the worldly details and distill the problem into a pristine mathematical form. Only then can we see its beauty and wide applicability.

Here is a more concrete example. Suppose you were trying to encrypt the contents of a message so that nobody could read it even if they intercepted the message in transit. Your first ideas would doubtlessly be the same as those of our civilization’s past: substitution ciphers, Vigenere ciphers, the Enigma machine, etc. Regardless of what method you come up with, your first thought would most certainly not be, “prime numbers so big they’ll make your pants fall down.” Of course, the majority of encryption methods today rely on very deep facts (or rather, conjectures) about prime numbers, elliptic curves, and other mathematical objects (“group presentations so complicated they’ll orient your Mobius band,” anyone?). But it took hundreds of years of number theory to get there, and countless deviations into other fields and dead-ends. It’s not that the methods themselves are particularly complicated, but the way they’re often presented (and this is unavoidable if you’re interested in new mathematical breakthroughs) is in the form of classical mathematical literature.

Of course there are other examples much closer to contemporary fashionable programming techniques. One such example is boosting. While we have yet to investigate boosting on this blog [update: yes we have], the basic idea is that one can combine a bunch of classifiers, each of which performs just barely better than 50% accuracy, and collectively the ensemble will be arbitrarily close to perfectly accurate. In a field dominated by practical applications, this result is purely the product of mathematical analysis.
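
Boosting proper adaptively reweights training examples, which is beyond the scope of a quick aside, but a toy majority-vote sketch in Python conveys the flavor of how barely-better-than-random guesses combine (assuming, crucially, that the guesses are independent):

import random

def weak_classifier(x):
    # Correct with probability 0.55: just barely better than a coin flip.
    truth = x > 0
    return truth if random.random() < 0.55 else not truth

def majority_vote(x, n_voters=501):
    # Combine many independent weak guesses by taking the majority answer.
    votes = sum(weak_classifier(x) for _ in range(n_voters))
    return votes > n_voters // 2

trials = [random.uniform(-1, 1) for _ in range(1000)]
accuracy = sum(majority_vote(x) == (x > 0) for x in trials) / 1000
print(accuracy)  # roughly 0.98 or better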

And of course boosting in turn relies on the mathematics of probability theory, which in turn relies on set theory and measure theory, which in turn relies on real analysis, and so on. One could get lost for a lifetime in this mathematical landscape! And indeed, the best way to get a good view of it all is to start at the bottom. To learn mathematics from scratch. The working programmer simply doesn’t have time for that.

What is it, really, that people have such a hard time learning?

Most of the complaints about mathematics come understandably from notation and abstraction. And while I’ll have more to say on that below, I’m fairly certain that the main obstacle is a lack of familiarity with the basic methods of proof.

While methods of proof are semantical by nature, in practice they form a scaffolding for all of mathematics, and as such one could better characterize them as syntactical. I’m talking, of course, about the four basics: direct implication, proof by contradiction, contrapositive, and induction. These are the loops, if statements, pointers, and structs of rigorous argument, and there is simply no way to understand the mathematics without a native fluency in this language.

The “Math Major Sloth” is fluent. Why aren’t you?

So much of mathematics is built up by chaining together a multitude of absolutely trivial statements which are amenable to proof by the basic four. I’m not kidding when I say they are absolutely trivial. A professor of mine once said,

If it’s not completely trivial, then it’s probably not true.

I can’t agree more with this statement. Of course, there are many sophisticated proofs in mathematics, but an overwhelming majority of (very important) facts fall in the trivial category. That being said, trivial can sometimes be relative to one’s familiarity with a subject, but that doesn’t make the sentiment any less right. Drawing up a shopping list is trivial once you’re comfortable with a pencil and paper and you know how to write (and you know what the words mean). There are certainly works of writing that require a lot more than what it takes to write a shopping list. Likewise, when we say something is trivial in mathematics, it’s because there’s no content to the proof outside of using definitions and a typical application of the basic four methods of proof. This is the “holding a pencil” part of writing a shopping list.

And as you probably know, there are many, many more methods of proof than just the basic four. Proof by construction, by exhaustion, case analysis, and even picture proofs have a place in all fields of mathematics. More relevantly for programmers, there are algorithm termination proofs, probabilistic proofs, loop invariants to design and monitor, and the ubiquitous NP-hardness proofs (I’m talking about you, Travelling Salesman Problem!). There are many books dedicated to showcasing such techniques, and rightly so. Clever proofs are what mathematicians strive for above all else, and once a clever proof is discovered, the immediate first step is to try to turn it into a general method for proving other facts. Fully fleshing out such a process (over many years, showcasing many applications and extensions) is what makes one a world-class mathematician.

Another difficulty faced by programmers new to mathematics is the inability to check your proof absolutely. With a program, you can always write test cases and run them to ensure they all pass. If your tests are solid and plentiful, the computer will catch your mistakes and you can go fix them.

There is no corresponding “proof checker” for mathematics. There is no compiler to tell you that it’s nonsensical to construct the set of all sets, or that it’s a type error to quotient a set by something that’s not an equivalence relation. The only way to get feedback is to seek out other people who do mathematics and ask their opinion. Done solo, mathematics involves a lot of backtracking, revising mistaken assumptions, and stretching an idea to its breaking point only to see that it didn’t even make sense to begin with. This is “bug hunting” in mathematics, and it can often completely destroy a proof and make one start over from scratch. It feels like writing a few hundred lines of code only to have the final program run “rm -rf *” on the directory containing it. It can be really. really. depressing.

It is an interesting pedagogical question in my mind whether there is a way to introduce proofs and the language of mature mathematics in a way that stays within a stone’s throw of computer programs. It seems like a worthwhile effort, but I can’t think of anyone who has sought to replace a classical mathematics education entirely with one based on computation.

Mathematical syntax

Another major reason programmers are unwilling to give mathematics an honest effort is the culture of mathematical syntax: it’s ambiguous, and there’s usually nobody around to explain it to you. Let me start with an example of why this is not a problem in programming. Let’s say we’re reading a Python program and we see an expression like this:

foo[2]

The nature of (most) programming languages dictates that there are a small number of ways to interpret what’s going on here:

  1. foo could be a list/tuple, and we’re accessing the third element in it.
  2. foo could be a dictionary, and we’re looking up the value associated with the key 2.
  3. foo could be a string, and we’re extracting the third character.
  4. foo could be a custom-defined object, whose __getitem__ method is defined somewhere else and we can look there to see exactly what it does.

There are probably other contexts in which this notation can occur (although I’d be surprised if number 4 didn’t by default capture all possible uses), but the point is that any programmer reading this program knows enough to intuit that square brackets mean “accessing an item inside foo with identifier 2.” Part of the reason that programs can be very easy to read is precisely that someone had to write a parser for the programming language, and so they had to literally enumerate all possible uses of any expression form.
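
To illustrate case 4 from the list above, here is a hypothetical class that gives square brackets its own meaning:

class Cycle:
    # A container whose indexing wraps around modulo its length.
    def __init__(self, items):
        self.items = items

    def __getitem__(self, i):
        return self.items[i % len(self.items)]

foo = Cycle(['a', 'b', 'c'])
print(foo[2])  # 'c'
print(foo[5])  # 'c' again, since 5 mod 3 is 2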

The other extreme is the syntax of mathematics. The daunting fact is that there is no bound to what mathematical notation can represent, and much of mathematical notation is inherently ad hoc. For instance, if you’re reading a math paper and you come across an expression that looks like this

$ \delta_i^j$

The possibilities of what this could represent are literally endless. Just to give the unmathematical reader a taste: $ \delta_i$ could be an entry of a sequence of numbers whose $ j^\textup{th}$ powers we’re taking. The use of the letter delta could signify a slightly nonstandard way to write the Kronecker delta function, for which $ \delta_i^j$ is one precisely when $ i=j$ and zero otherwise. The superscript $ j$ could represent dimension. Indeed, I’m currently writing an article in which I use $ \delta^k_n$ to represent $ k$-dimensional simplex numbers, specifically because I’m relating the numbers to geometric objects called simplices, and the letter for those is a capital $ \Delta$. The fact is that using notation in a slightly non-standard way does not invalidate a proof in the way that it can easily invalidate a program’s correctness.

What’s worse is that once mathematicians get comfortable with a particular notation, they will often “naturally extend” or even silently drop things like subscripts and assume their reader understands and agrees with the convenience! For example, here is a common difficulty that beginners face in reading math that involves use of the summation operator. Say that I have a finite set of numbers whose sum I’m interested in. The most rigorous way to express this is not far off from programming:

Let $ S = \left \{ x_1, \dots, x_n \right \}$ be a finite set of things. Then their sum is finite:

$ \displaystyle \sum_{i=1}^n x_i$

The programmer would say “great!” Assuming I know what “+” means for these things, I can start by adding $ x_1 + x_2$, add the result to $ x_3$, and keep going until I have the whole sum. This is really just a left fold of the plus operator over the list $ S$.
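
Indeed, rendered in Python the indexed sum is literally a fold:

from functools import reduce
import operator

S = [1.5, 2, 7, 0.5]  # any finite list of things with a "+"
print(reduce(operator.add, S))  # 11.0, a left fold of + over S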

But for mathematicians, the notation is far more flexible. For instance, I could say

Let $ S$ be finite. Then $ \sum_{x \in S} x$ is finite.

Things are now more vague. We need to remember that the $ \in$ symbol means “in.” We have to realize that the strict syntax of having an iteration variable $ i$ is no longer in effect. Moreover, the order in which the things are summed (which for a left fold is strictly prescribed) is arbitrary. If you asked any mathematician, they’d say “well of course it’s arbitrary, in an abelian group addition is commutative so the order doesn’t matter.” But realize, this is yet another fact that the reader must be aware of to be comfortable with the expression.
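
The Python analogue of the set-indexed sum makes the same silent assumption: iterating over a set produces its elements in an order we do not control, and we sum anyway:

S = {3, 1, 2}
print(sum(x for x in S))  # 6, whatever order Python happens to iterate in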

But it still gets worse.

In the case of the capital Sigma, there is nothing syntactically stopping a mathematician from writing

$ \displaystyle \sum_{\sigma \in \Sigma} f_{\Sigma}(\sigma)$

Though experienced readers may chuckle, they will have no trouble understanding what is meant here. That is, syntactically this expression is unambiguous enough to avoid an outcry: $ \Sigma$ just happens to also be a set, and saying $ f_{\Sigma}$ means that the function $ f$ is constructed in a way that depends on the choice of the set $ \Sigma$. This often shows up in computer science literature, as $ \Sigma$ is a standard letter to denote an alphabet (such as the binary alphabet $ \left \{ 0,1 \right \}$).

One can even take it a step further and leave out the set we’re iterating over, as in

$ \displaystyle \sum_{\sigma} f_{\Sigma}(\sigma)$

since it’s understood that the lowercase letter ($ \sigma$) is usually an element of the set denoted by the corresponding uppercase letter ($ \Sigma$). If you don’t know Greek and haven’t seen that convention enough times to recognize it, you would quickly get lost. But programmers must realize: this is just the mathematician’s secret handshake. A mathematician would be just as bewildered and confused upon seeing some of the pointer arithmetic hacks C programmers invent, or the always awkward infinite for loop, if they had not had enough experience dealing with the syntax of standard for loops.

for (;;) {
   ;
}

In fact, a mathematician would look at this in disgust! The fact that the C programmer has need for something as pointless as an “empty statement” should be viewed as a clumsy inelegance in the syntax of the programming language (says the mathematician). Since mathematicians have the power to change their syntax at will, they would argue there’s no good reason not to change it, if it were a mathematical expression, to something simpler.

And once the paper you’re reading is over, and you start reading a new paper, chances are their conventions and notation will be ever-so-slightly different, and you have to keep straight what means what. It’s as if the syntax of a programming language changed depending on who was writing the program!

Perhaps understandably, the varying syntax across different papers and books that mathematicians must constantly wrangle is collectively dismissed as “technicalities.” And the more advanced the mathematics becomes, the more the ability to fluidly transition between high-level intuition and technical details is simply assumed of the reader.

The upshot of this whole conversation is that the reader of a mathematical proof must hold in mind a vastly larger body of absorbed (and often frivolous) knowledge than the reader of a computer program.

At this point you might see all of this as my complaining, but in truth I’m saying this notational flexibility and ambiguity is a benefit. Once you get used to doing mathematics, you realize that technical syntax can make something which is essentially simple seem much more difficult than it is. In other words, we absolutely must have a way to make things completely rigorous, but in developing and presenting proofs the most important part is to make the audience understand the big picture, see the intuition behind the symbols, and believe the proofs. For better or worse, mathematical syntax is just a means to that end, and the more abstract the mathematics becomes, the more flexibility mathematicians need to keep themselves afloat in a tumultuous sea of notation.

You’re on your own, unless you’re around mathematicians

That brings me to my last point: reading mathematics is much more difficult than conversing about mathematics in person. The reason for this is once again cultural.

Imagine you’re reading someone else’s program, and they’ve defined a number of functions like this (pardon the single-letter variable names; as long as one is willing to be vague I prefer them to “foo/bar/baz”).

def splice(L):
   ...

def join(*args):
   ...

def flip(x, y):
   ...

There are two parts to understanding how these functions work. The first part is having someone (or a code comment) explain to you at a high level what they do to an input. The second part is to weed out the finer details. These “finer details” are usually completely spelled out by the documentation, but it’s still good practice to experiment yourself (there is always the possibility of bugs or unexpected features, of course).

In mathematics there is no unified documentation, just a collective understanding, scattered references, and spoken folklore. You’re lucky if a textbook has a table of notation in the appendix. You are expected to derive the finer details and catch the errors yourself. Even if you are told the end result of a proposition, it is often followed by, “The proof is trivial.” This is the mathematician’s version of piping output to /dev/null, and literally translates to, “You’re expected to be able to write the proof yourself, and if you can’t then maybe you’re not ready to continue.”

Indeed, analogous problems are familiar to a beginning programmer who isn’t part of a group of active programmers. Why is it that people give up or don’t enjoy programming? Is it because they have a hard time getting honest help from rudely abrupt moderators on help websites like stackoverflow? Is it because, when one wants to learn the basics, one is overloaded with the entirety of the documentation and the overwhelming resources of the internet and all its inhabitants? Is it because compiler errors are nonsensically exact, but very rarely helpful? Is it because, learning alone, you are bombarded with contradicting messages about what you should be doing and why (and often for the wrong reasons)?

All of these issues definitely occur, and I see them contribute to my students’ confusion in my introductory Python class all the time. They try to look on the web for information about how to solve a very basic problem, and they come back to me saying they were told it’s more secure to do it this way, or more efficient to do it that way, or that they need to import something called the “heapq module,” when really the goal is not to solve the problem in the best way possible or in the shortest amount of code, but to show them how to use the tools they already know about to construct a program that works. Without a guiding mentor it’s extremely easy to get lost in the jungle of people who think they know what’s best.

As far as I know there is no solution to this problem faced by the solo programming student (or the solo anything student). And so it stands for mathematics: without others doing mathematics with you, it’s very hard to identify your issues and see how to fix them.

Proofs, Syntax, and Community

For the programmer who is truly interested in improving their mathematical skills, the first line of attack should now be obvious. Become an expert at applying the basic methods of proof. Second, spend as much time as it takes to clear up what mathematical syntax means before you attempt to interpret the semantics. And finally, find others who are interested in seriously learning some mathematics, and work on exercises (perhaps a weekly set) with them. Start with something basic like set theory, and write your own proofs and discuss each others’ proofs. Treat the sessions like code review sessions, and be the compiler to your partner’s program. Test their arguments to the extreme, and question anything that isn’t obvious or trivial. It’s not uncommon for easy questions with simple answers and trivial proofs to create long and drawn out discussions before everyone agrees it’s obvious. Embrace this and use it to improve.

Short of returning to your childhood and spending more time doing recreational mathematics, that is the best advice I can give.

Until next time!