For a while I’ve been meaning to do some more advanced posts on optimization problems of all flavors. One technique that comes up over and over again is Lagrange multipliers, so this post is going to be a leisurely reminder of that technique. I often forget how to do these basic calculus-type things, so it’s good practice.
We will assume something about the reader's knowledge, but it's a short list: know how to operate with vectors and the dot product, know how to take a partial derivative, and know that in single-variable calculus the local maxima and minima of a differentiable function occur when the derivative vanishes. All of the functions we'll work with in this post will have infinitely many derivatives (i.e., they will be smooth). So things will be nice.
The gradient of a multivariable function is the natural extension of the derivative of a single-variable function. If $f(x_1, \dots, x_n)$ is a differentiable function of $n$ variables, the data of the gradient of $f$ consists of all of the partial derivatives $\partial f / \partial x_i$. It's usually written as a vector

$$\nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right)$$
To make things easier for ourselves, we'll just write $f(x)$ for the function and understand $x$ to be a vector in $\mathbb{R}^n$.
We can also think of $\nabla f$ as a function which takes in vectors and spits out vectors, by plugging in the input vector into each $\partial f / \partial x_i$. And the reason we do this is because it lets us describe the derivative of $f$ at a point $x$ as a linear map based on the gradient. That is, if we want to know how fast $f$ is growing along a particular vector $v$ and at a particular point $x$, we can just take a dot product of $v$ with $\nabla f(x)$. I like to call dot products inner products, and use the notation $\langle \nabla f(x), v \rangle$. Here $v$ is a vector in $\mathbb{R}^n$ which we think of as a "tangent vector" to the surface defined by $f$. And if we scale $v$ bigger or smaller, the value of the derivative scales with it (of course, because the derivative is a linear map!). Usually we use unit vectors to represent directions, but there's no reason we have to. Calculus textbooks often require this to define a "directional derivative," but perhaps it is better to understand the linear algebra over memorizing these arbitrary choices.
For example, let $f(x,y,z) = x^2 + y^2 + z^2$. Then $\nabla f(x,y,z) = (2x, 2y, 2z)$, and $\nabla f(1,1,1) = (2,2,2)$. Now if we pick a vector to go along, say, $v = (0,1,1)$, we get the derivative of $f$ along $v$ is $\langle (2,2,2), (0,1,1) \rangle = 4$.
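Here's a quick numerical sanity check of that computation, a minimal sketch in plain Python (the helper names are my own, chosen for illustration):

def f(x, y, z):
    return x**2 + y**2 + z**2

def grad_f(x, y, z):
    # the vector of partial derivatives of f (hypothetical helper for this example)
    return (2*x, 2*y, 2*z)

def directional_derivative(gradient, v):
    # the inner product <grad f(x), v>
    return sum(g * vi for g, vi in zip(gradient, v))

point = (1, 1, 1)
v = (0, 1, 1)
print(directional_derivative(grad_f(*point), v))  # 4

# compare with a finite difference of f along v
h = 1e-6
x, y, z = point
print((f(x + h*v[0], y + h*v[1], z + h*v[2]) - f(x, y, z)) / h)  # approximately 4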
Just as important as computing derivatives is finding where the derivative is zero, and the geometry of the gradient can help us here. Specifically, if we think of our function as a surface sitting in $\mathbb{R}^{n+1}$, it's not hard to see that the gradient vector $\nabla f(x)$ points in the direction of steepest ascent of $f$. How do we know this? Well if you fix a point $x$ and you're forced to use a vector $v$ of the same magnitude as $\nabla f(x)$, how can you maximize the inner product $\langle \nabla f(x), v \rangle$? Well, you just pick $v$ to be equal to $\nabla f(x)$, of course! This will turn the dot product into the square norm of $\nabla f(x)$.
More generally, the inner product $\langle w, v \rangle$ is geometrically the size of the projection of $v$ onto $w$ (scaled by the size of $w$), and the projection of a vector onto a direction other than its own can only be smaller in magnitude. Another way to see this is to know the "alternative" formula for the dot product

$$\langle w, v \rangle = \|w\| \|v\| \cos \theta$$

where $\theta$ is the angle between the vectors (in the plane they span). We might not know how to get that angle, and in this post we don't care, but we do know that $\cos \theta$ is between -1 and 1. And so if $w$ is fixed and we can't change the norm of $v$ but only its direction, we will maximize the dot product when the two vectors point in the same direction, when $\theta$ is zero.
All of this is just to say that the gradient at a point $x$ can be interpreted as having a specific direction. It's the direction of steepest ascent of the surface $f$, and its size tells you how steep $f$ is at that point. The opposite direction is the direction of steepest descent, and the orthogonal directions (when $\theta = \pi/2$) have derivative zero.
Now what happens if we're at a local minimum or maximum? Well it's necessary that $f$ is flat there, and so by our discussion above the derivatives in all directions must be zero. It's a basic linear algebra proof to show that this means the gradient is the zero vector. You can prove this by asking what sorts of vectors $w$ have a dot product of zero with all other vectors $v$.
Now once we have a local max or a local min, how do we tell which? The answer is actually a bit complicated, and it requires you to inspect the eigenvalues of the Hessian of $f$. We won't dally on eigenvalues except to explain the idea in brief: for an $n$-variable function $f$ the Hessian of $f$ at $x$ is an $n$-by-$n$ matrix where the $i,j$ entry is the value of $\frac{\partial^2 f}{\partial x_i \partial x_j}(x)$. It just so turns out that if this matrix has only positive eigenvalues, then $x$ is a local minimum. If the eigenvalues are all negative, it's a local max. If some are negative and some are positive, then it's a saddle point. And if zero is an eigenvalue then we're screwed and can't conclude anything without more work.
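Here is a sketch of that eigenvalue test in code (numpy is my own choice of tool here, not something from this post):

import numpy as np

def classify_critical_point(hessian):
    # Inspect the eigenvalues of the (symmetric) Hessian at a critical point.
    eigenvalues = np.linalg.eigvalsh(hessian)
    if all(eigenvalues > 0):
        return "local minimum"
    if all(eigenvalues < 0):
        return "local maximum"
    if any(eigenvalues > 0) and any(eigenvalues < 0):
        return "saddle point"
    return "inconclusive: zero is an eigenvalue"

# f(x, y) = x^2 - y^2 has a critical point at the origin, with Hessian diag(2, -2):
print(classify_critical_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point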
But all of this Hessian business isn’t particularly important for us, because most of our applications of the Lagrangian will work with functions where we already know that there is a unique global maximum or minimum. Finding where the gradient is zero is enough. As much as this author stresses the importance of linear algebra, we simply won’t need to compute any eigenvalues for this one.
What we will need to do is look at optimizing functions which are constrained by some equality conditions. This is where Lagrangians come into play.
Constrained by Equality
Oftentimes we will want to find a minimum or maximum of a function $f(x)$, but we will have additional constraints. The simplest kind is an equality constraint.
For example, we might want to find the maximum of the function $f(x,y,z) = xyz$ requiring that the point $(x,y,z)$ lies on the unit sphere. One could write this in a "canonical form"

maximize $xyz$
subject to $x^2 + y^2 + z^2 = 1$
Way back in the scientific revolution, Fermat discovered a technique to solve such problems that was later generalized by Lagrange. The idea is to combine these constraints into one function whose gradient provides enough information to find a maximum. Clearly such information needs to include two things: that the gradient of $f$ is zero, and that the constraint is satisfied.
First we rewrite the constraint as $g(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$, because when we're dealing with gradients we want things to be zero. Then we form the Lagrangian of the problem. We'll give a precise definition in a minute, but it looks like this:

$$L(x,y,z,\lambda) = xyz + \lambda (x^2 + y^2 + z^2 - 1)$$
That is, we've added a new variable $\lambda$ and added the two functions together. Let's see what happens when we take a gradient:

$$\frac{\partial L}{\partial x} = yz + 2 \lambda x, \quad \frac{\partial L}{\partial y} = xz + 2 \lambda y, \quad \frac{\partial L}{\partial z} = xy + 2 \lambda z, \quad \frac{\partial L}{\partial \lambda} = x^2 + y^2 + z^2 - 1$$
Now if we require the gradient to be zero, the last equation is simply the original constraint, and the first three equations say that $\nabla f(x,y,z) = -\lambda \nabla g(x,y,z)$. In other words, we're saying that the two gradients must point in the same direction for the function to provide a maximum. Solving for where these equations vanish gives some trivial solutions (one variable is $\pm 1$ and the rest zero, with $\lambda = 0$), and a solution defined by $x^2 = y^2 = z^2 = 1/3$, of which $x = y = z = 1/\sqrt{3}$ is clearly the maximal of the choices.
Indeed, this will work in general, and you can see a geometric and analytic proof in these notes.
Specifically, if we have an optimization problem defined by an objective function $f(x)$ to optimize, and a set of equality constraints $g_i(x) = 0$ for $i = 1, \dots, k$, then we can form the Lagrangian

$$L(x, \lambda_1, \dots, \lambda_k) = f(x) + \sum_{i=1}^k \lambda_i g_i(x)$$
And then a theorem of Lagrange is that all optimal solutions $x^*$ to the problem satisfy $\nabla L(x^*, \lambda_1, \dots, \lambda_k) = 0$ for some choice of $\lambda_i$. But then you have to go solve the system and figure out which of the solutions gives you your optimum.
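If you'd rather not grind through such a system by hand, a computer algebra system can do it. Here is a sketch using sympy (my choice of tool, not something from this post), applied to the unit-sphere example above:

from sympy import symbols, solve

x, y, z, lam = symbols('x y z lam', real=True)

# Lagrangian for: maximize xyz subject to x^2 + y^2 + z^2 = 1
L = x*y*z + lam * (x**2 + y**2 + z**2 - 1)

# Solve for where the gradient of L vanishes, and print each critical
# point alongside its objective value.
gradient = [L.diff(var) for var in (x, y, z, lam)]
for solution in solve(gradient, [x, y, z, lam], dict=True):
    print(solution, (x*y*z).subs(solution))

Picking out the solution with the largest value of $xyz$ recovers the maximum found above.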
As it turns out, there are some additional constraints you can add to your problem to guarantee your system has a solution. One nice condition is that $f$ is convex. A function $f$ is convex if any point on a line segment between the two points $(x, f(x))$ and $(y, f(y))$ has a value greater than (or equal to) the value of $f$ at the corresponding point in between. In other words, for all $0 \leq t \leq 1$:

$$f(tx + (1-t)y) \leq t f(x) + (1-t) f(y)$$
Some important examples of convex functions: exponentials, quadratics whose leading coefficient is positive, square norms of a vector variable, and linear functions.
Convex functions have this nice property that they have a unique local minimum value, and hence it must also be the global minimum. Why is this? Well if you have a local minimum $x$, and any other point $y$, then by virtue of $x$ being a local minimum there is some $t$ sufficiently close to 1 so that:

$$f(x) \leq f(tx + (1-t)y) \leq t f(x) + (1-t) f(y)$$

And rearranging we get

$$(1-t) f(x) \leq (1-t) f(y)$$

So $f(x) \leq f(y)$, and since $y$ was arbitrary, $x$ is the global minimum.
This alleviates our problem of having to sort through multiple solutions, and in particular it helps us to write programs to solve optimization problems: we know that techniques like gradient descent will never converge to a false local minimum.
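As a tiny illustration, here is bare-bones gradient descent on a convex single-variable function (a sketch; the step size and stopping rule are arbitrary choices of mine):

def gradient_descent(gradient, start, step=0.1, tolerance=1e-8):
    # Repeatedly step against the gradient until it (nearly) vanishes.
    point = start
    while abs(gradient(point)) >= tolerance:
        point = point - step * gradient(point)
    return point

# f(x) = (x - 3)^2 is convex, so its unique local minimum at x = 3 is global.
print(gradient_descent(lambda x: 2 * (x - 3), start=0.0))  # approximately 3.0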
That's all for now! The next question we might ask, with a hint of foreboding: what happens if we add inequality constraints?
In the last twenty years there has been a lot of research in a subfield of machine learning called Bandit Learning. The name comes from the problem of being faced with a large sequence of slot machines (once called one-armed bandits) each with a potentially different payout scheme. The problems in this field all focus on one central question:
If I have many available actions with uncertain outcomes, how should I act to maximize the quality of my results over many trials?
The deep question here is how to balance exploitation, the desire to choose an action which has paid off well in the past, with exploration, the desire to try options which may produce even better results. The ideas are general enough that it's hard not to find applications: choosing which drug to test in a clinical study, choosing which companies to invest in, choosing which ads or news stories to display to users, and even (as Richard Feynman once wondered) how to maximize your dining enjoyment.
In less recent times (circa the 1960s), this problem was posed and considered in the case where the payoff mechanisms had a very simple structure: each slot machine is a coin flip with a different probability of winning, and the player's goal is to find the best machine as quickly as possible. We called this the "stochastic" setting, and last time we saw a modern strategy called UCB1 which maintained statistical estimates on the payoffs of the actions and chose the action with the highest estimate. The underlying philosophy was "optimism in the face of uncertainty," and it gave us something provably close to optimal.
Unfortunately payoff structures are more complex than coin flips in the real world. Having “optimism” is arguably naive, especially when it comes to competitive scenarios like stock trading. Indeed the algorithm we’ll analyze in this post will take the polar opposite stance, that payoffs could conceivably operate in any manner. This is called the adversarial model, because even though the payoffs are fixed in advance of the game beginning, it can always be the case that the next choice you make results in the worst possible payoff.
One might wonder how we can hope to do anything in such a pessimistic model. As we’ll see, our notion of performing well is relative to the best single slot machine, and we will argue that this is the only reasonable notion of success. On the other hand, one might argue that real world payoffs are almost never entirely adversarial, and so we would hope that algorithms which do well theoretically in the adversarial model excel beyond their minimal guarantees in practice.
In this post we’ll explore and implement one algorithm for adversarial bandit learning, called Exp3, and in the next post we’ll see how it fares against UCB1 in some applications. Some prerequisites: since the main algorithm presented in this post is randomized, its analysis requires some familiarity with techniques and notation from probability theory. Specifically, we will assume that the reader is familiar with the content of this blog’s basic probability theory primers (1, 2), though the real difficulty in the analysis will be keeping up with all of the notation.
In case the reader is curious, Exp3 was invented in 2001 by Auer, Cesa-Bianchi, Freund, and Schapire. Here is their original paper, which contains lots of other mathematical goodies.
As usual, all of the code and data produced in the making of this blog post is available for download on this blog’s Github page.
Model Formalization and Notions of Regret
Before we describe the algorithm and analyze it, we have to set up the problem formally. The first few paragraphs of our last post give a high-level picture of general bandit learning, so we won't repeat that here. Recall, however, that we have to describe both the structure of the payoffs and how success is measured. So let's describe the former first.
Definition: An adversarial bandit problem is a pair $(K, \mathbf{x})$, where $K$ represents the number of actions (henceforth indexed by $i$), and $\mathbf{x}$ is an infinite sequence of payoff vectors $\mathbf{x}(1), \mathbf{x}(2), \dots$, where $\mathbf{x}(t) = (x_1(t), \dots, x_K(t))$ is a vector of length $K$ and $x_i(t) \in [0,1]$ is the reward of action $i$ on step $t$.
In English, the game is played in rounds (or "time steps") indexed by $t = 1, 2, \dots$, and the payoffs are fixed for each action and time before the game even starts. Note that we assume the reward of an action is a number in the interval $[0,1]$, but all of our arguments in this post can be extended to payoffs in some range $[a,b]$ by shifting by $a$ and dividing by $b-a$.
Let's specify what the player (algorithm designer) knows during the course of the game. First, the value of $K$ is given, and the total number of rounds is kept secret. In each round, the player has access to the history of rewards for the actions that were chosen by the algorithm in previous rounds, but not the rewards of unchosen actions. In other words, it will only ever know one $x_i(t)$ for each $t$. To set up some notation, if we call $i_1, \dots, i_t$ the list of chosen actions over $t$ rounds, then at step $t+1$ the player has access to the values of $x_{i_1}(1), \dots, x_{i_t}(t)$ and must pick $i_{t+1}$ to continue.
So to be completely clear, the game progresses as follows:
The player is given access to $K$.
For each time step $t$:
The player must pick an action $i_t \in \{ 1, \dots, K \}$.
The player observes the reward $x_{i_t}(t)$, which he may save for future use.
The problem gives no explicit limit on the amount of computation performed during each step, but in general we want it to run in polynomial time and not depend on the round number $t$. If the runtime even logarithmically depended on $t$, then we'd have a big problem using it for high-frequency applications. For example in ad serving, Google processes on the order of $10^9$ ads per day; so a logarithmic dependence wouldn't be that bad, but at some point in the distant future Google wouldn't be able to keep up (and we all want long-term solutions to our problems).
Note that the reward vectors must be fixed in advance of the algorithm running, but this still allows a lot of counterintuitive things. For instance, the payoffs can depend adversarially on the algorithm the player decides to use. If the player chooses the stupid strategy of always picking the first action, then the adversary can just make that the worst possible action to choose. However, the rewards cannot depend on the random choices made by the player during the game.
So now let's talk about measuring success. For an algorithm $A$ which chooses the sequence $i_1, \dots, i_t$ of actions, define $G_A(t)$ to be the sum of the observed rewards

$$G_A(t) = \sum_{s=1}^t x_{i_s}(s)$$
And because $A$ will often be randomized, this value is a random variable depending on the decisions made by $A$. As such, we will often only consider the payoff up to expectation. That is, we'll be interested in how $\mathbb{E}(G_A(t))$ relates to other possible courses of action. To be completely rigorous, the randomization is not over "choices made by an algorithm," but rather the probability distribution over sequences of actions that the algorithm induces. It's a fine distinction but a necessary one. In other words, we could define any sequence of actions $\mathbf{j} = (j_1, \dots, j_t)$ and define $G_{\mathbf{j}}(t)$ analogously as above:

$$G_{\mathbf{j}}(t) = \sum_{s=1}^t x_{j_s}(s)$$
Any algorithm and choice of reward vectors induces a probability distribution over sequences of actions in a natural way (if you want to draw from the distribution, just run the algorithm). So instead of conditioning our probabilities and expectations on previous choices made by the algorithm, we do it over histories of actions $h = (i_1, \dots, i_t)$.
An obvious question we might ask is: why can’t the adversary just make all the payoffs zero? (or negative!) In this event the player won’t get any reward, but he can emotionally and psychologically accept this fate. If he never stood a chance to get any reward in the first place, why should he feel bad about the inevitable result? What a truly cruel adversary wants is, at the end of the game, to show the player what he could have won, and have it far exceed what he actually won. In this way the player feels regret for not using a more sensible strategy, and likely turns to binge eating cookie dough ice cream. Or more likely he returns to the casino to lose more money. The trick that the player has up his sleeve is precisely the randomness in his choice of actions, and he can use its objectivity to partially overcome even the nastiest of adversaries.
Sadism aside, this thought brings us to a few mathematical notions of regret that the player algorithm may seek to minimize. The first, most obvious, and least reasonable is the worst-case regret. Given a stopping time $T$ and a sequence of actions $\mathbf{j} = (j_1, \dots, j_T)$, the expected regret of algorithm $A$ with respect to $\mathbf{j}$ is the difference $G_{\mathbf{j}}(T) - \mathbb{E}(G_A(T))$. This notion of regret measures the regret of a player if he knew what would have happened had he played $\mathbf{j}$. The expected worst-case regret of $A$ is then the maximum over all sequences $\mathbf{j}$ of the regret of $A$ with respect to $\mathbf{j}$. This notion of regret seems particularly unruly, especially considering that the payoffs are adversarial, but there are techniques to reason about it.
However, the focus of this post is on a slightly easier notion of regret, called weak regret, which instead compares the results of $A$ to the best single action over all rounds. That is, this quantity is just

$$\left( \max_j \sum_{t=1}^T x_j(t) \right) - \mathbb{E}(G_A(T))$$
We call the parenthetical term $G_{\max}(T)$. This kind of regret is a bit easier to analyze, and the main theorem of this post will give an upper bound on it for Exp3. The reader who read our last post on UCB1 will wonder why we make a big distinction here just to arrive at the same definition of regret that we had in the stochastic setting. But with UCB1 the best sequence of actions to take just happened to be to play the best action over and over again. Here, the payoff difference between the best sequence of actions and the best single action can be arbitrarily large.
Exp3 and an Upper Bound on Weak Regret
We now describe the Exp3 algorithm.
Exp3 stands for Exponential-weight algorithm for Exploration and Exploitation. It works by maintaining a list of weights for each of the actions, using these weights to decide randomly which action to take next, and increasing (decreasing) the relevant weights when a payoff is good (bad). We further introduce an egalitarianism factor $\gamma \in [0,1]$ which tunes the desire to pick an action uniformly at random. That is, if $\gamma = 1$, the weights have no effect on the choices at any step.
The algorithm is readily described in Python code, but we need to set up some notation used in the proof of the theorem. The pseudocode for the algorithm is as follows.
- Given $\gamma \in [0,1]$, initialize the weights $w_i(1) = 1$ for $i = 1, \dots, K$.
- In each round $t$:
  - Set $p_i(t) = (1 - \gamma) \frac{w_i(t)}{\sum_{j=1}^K w_j(t)} + \frac{\gamma}{K}$ for each $i$.
  - Draw the next action $i_t$ randomly according to the distribution of $p_i(t)$.
  - Observe reward $x_{i_t}(t)$.
  - Define the estimated reward $\hat{x}_{i_t}(t)$ to be $x_{i_t}(t) / p_{i_t}(t)$.
  - Set $w_{i_t}(t+1) = w_{i_t}(t) e^{\gamma \hat{x}_{i_t}(t) / K}$.
  - Set all other $w_j(t+1) = w_j(t)$.
The choices of these particular mathematical quantities (in steps 1, 4, and 5) are a priori mysterious, but we will explain them momentarily. In the proof that follows, we will extend $\hat{x}_i(t)$ to indices $i$ other than $i_t$ and define those values to be zero.
The Python implementation is perhaps more legible, and implements the possibly infinite loop as a generator:
import math

def exp3(numActions, reward, gamma):
    weights = [1.0] * numActions

    t = 0
    while True:
        probabilityDistribution = distr(weights, gamma)
        choice = draw(probabilityDistribution)
        theReward = reward(choice, t)

        estimatedReward = 1.0 * theReward / probabilityDistribution[choice]
        weights[choice] *= math.exp(estimatedReward * gamma / numActions) # important that we use estimated reward here!

        yield choice, theReward, estimatedReward, weights
        t = t + 1
Here the "reward" parameter refers to a callable which accepts as input the action chosen in round $t$ (the generator keeps track of $t$, assuming we'll play nice), and returns as output the reward for that choice. The distr and draw functions are also easily defined, with the former depending on the gamma parameter as follows:
def distr(weights, gamma=0.0):
    theSum = float(sum(weights))
    return tuple((1.0 - gamma) * (w / theSum) + (gamma / len(weights)) for w in weights)
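The draw function isn't shown in this excerpt; here is a minimal sketch of one way it might work (my own, sampling an index from a discrete probability distribution):

import random

def draw(probabilityDistribution):
    # Pick an index at random, weighted by the given probabilities.
    choice = random.uniform(0, sum(probabilityDistribution))
    index = 0
    for probability in probabilityDistribution:
        choice -= probability
        if choice <= 0:
            return index
        index += 1
    return index - 1  # guard against floating-point rounding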
There is one odd part of the algorithm above, and that's the "estimated reward" $\hat{x}_{i_t}(t) = x_{i_t}(t) / p_{i_t}(t)$. The intuitive reason to do this is to compensate for a potentially small probability of getting the observed reward. More formally, it ensures that the conditional expectation of the "estimated reward" is the actual reward. We will explore this formally during the proof of the main theorem.
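(As a preview of that computation: conditioned on the history of play, action $i$ is chosen with probability $p_i(t)$, and $\hat{x}_i(t) = 0$ whenever $i$ is not chosen, so

$$\mathbb{E}\left[ \hat{x}_i(t) \right] = p_i(t) \cdot \frac{x_i(t)}{p_i(t)} + (1 - p_i(t)) \cdot 0 = x_i(t).)$$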
As usual, the programs we write in this post are available on this blog’s Github page.
We can now state and prove the upper bound on the weak regret of Exp3. Note all logarithms are base $e$.
Theorem: For any $K > 0$ and $\gamma \in (0, 1]$, and any stopping time $T \in \mathbb{N}$,

$$G_{\max}(T) - \mathbb{E}(G_{\text{Exp3}}(T)) \leq (e-1) \gamma G_{\max}(T) + \frac{K \log K}{\gamma}$$
This is a purely analytical result because we don't actually know what $G_{\max}(T)$ is ahead of time. Also note how the factor of $\gamma$ occurs: in the first term, having a large $\gamma$ will result in a poor upper bound because it occurs in the numerator of that term: too much exploration means not enough exploitation. But it occurs in the denominator of the second term, meaning that not enough exploration can also produce an undesirably large regret. This theorem then provides a quantification of the tradeoff being made, although it is just an upper bound.
We present the proof in two parts. Part 1:
We made a notable mistake in part 1, claiming that $e^x \leq 1 + x + (e-2) x^2$ when $0 \leq x \leq 1$. In fact, this does follow from the Taylor series expansion of $e^x$, but it's not as straightforward as I made it sound. In particular, note that $e = \sum_{k \geq 0} \frac{1}{k!}$, and so $e - 2 = \sum_{k \geq 2} \frac{1}{k!}$. Using $x^2$ in place of $x^k$ gives

$$e^x = 1 + x + \sum_{k \geq 2} \frac{x^k}{k!} \leq 1 + x + x^2 \sum_{k \geq 2} \frac{1}{k!}$$

And since $0 \leq x \leq 1$, we have $x^k \leq x^2$ for every $k \geq 2$, so each term in the sum only grows when $x^k$ is replaced by $x^2$, and we're left with exactly $(e-2)x^2$. In other words, this is the tightest possible quadratic upper bound on $e^x$ over $[0,1]$. Pretty neat! On to part 2:
As usual, here is the entire canvas made over the course of both videos.
We can get a version of this theorem that is easier to analyze by picking a suitable choice of $\gamma$.
Corollary: Assume that $G_{\max}(T)$ is bounded by $g$, and that Exp3 is run with

$$\gamma = \min \left( 1, \sqrt{\frac{K \log K}{(e-1) g}} \right)$$

Then the weak regret of Exp3 is bounded by $2.63 \sqrt{g K \log K}$ for any reward vector $\mathbf{x}$.
Proof. Simply plug $\gamma$ into the bound in the theorem above, and note that $2 \sqrt{e-1} < 2.63$.
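To spell out that arithmetic: when the square root in the definition of $\gamma$ is at most 1, substituting it into the theorem's bound gives

$$(e-1) \gamma g + \frac{K \log K}{\gamma} = \sqrt{(e-1) g K \log K} + \sqrt{(e-1) g K \log K} = 2 \sqrt{e-1} \sqrt{g K \log K}$$

and $2\sqrt{e-1} \approx 2.62 < 2.63$.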
A Simple Test Against Coin Flips
Now that we’ve analyzed the theoretical guarantees of the Exp3 algorithm, let’s use our implementation above and see how it fares in practice. Our first test will use 10 coin flips (Bernoulli trials) for our actions, with the probabilities of winning (and the actual payoff vectors) defined as follows:
biases = [1.0 / k for k in range(2,12)]
rewardVector = [[1 if random.random() < bias else 0 for bias in biases] for _ in range(numRounds)]
rewards = lambda choice, t: rewardVector[t][choice]
If we are to analyze the regret of Exp3 against the best action, we must compute the payoffs for all actions ahead of time, and compute which is the best. It will almost certainly be the one with the largest probability of winning (the first in the list generated above), but the payoffs are random, so it might not be, and we have to compute it. Specifically, it's the following argmax:
bestAction = max(range(numActions), key=lambda action: sum([rewardVector[t][action] for t in range(numRounds)]))
Where the max function is used as “argmax” would be in mathematics.
We also have to pick a good choice of $\gamma$, and the corollary from the previous section gives us a good guide to the optimal $\gamma$: simply find a good upper bound on the reward of the best action, and use that. We can cheat a little here: we know the best action has a probability of 1/2 of paying out, and so the expected reward if we always did the best action is half the number of rounds. If we use, say, $g = 2T/3$ and compute $\gamma$ using the formula from the corollary, this will give us a reasonable (but perhaps not perfectly correct) upper bound.
Then we just run the exp3 generator for ten thousand rounds, and compute some statistics as we go:
bestUpperBoundEstimate = 2 * numRounds / 3
gamma = math.sqrt(numActions * math.log(numActions)
                  / ((math.e - 1) * bestUpperBoundEstimate))
gamma = 0.07  # override the formula's value with a hand-tuned gamma (see the discussion below)

cumulativeReward = 0
bestActionCumulativeReward = 0
weakRegret = 0

t = 0
for (choice, reward, est, weights) in exp3(numActions, rewards, gamma):
    cumulativeReward += reward
    bestActionCumulativeReward += rewardVector[t][bestAction]

    weakRegret = (bestActionCumulativeReward - cumulativeReward)
    regretBound = ((math.e - 1) * gamma * bestActionCumulativeReward
                   + (numActions * math.log(numActions)) / gamma)

    t += 1
    if t >= numRounds:
        break
At the end of one run of ten thousand rounds, the weights are overwhelmingly in favor of the best arm. The cumulative regret is 723, compared to the theoretical upper bound of 897. It's not too shabby, but by tinkering with the value of $\gamma$ we see that we can get regrets lower than 500 (when $\gamma$ is around 0.07, the value hard-coded above). Considering that the cumulative reward for the player is around 4,500 in this experiment, that means we spent only about 500 rounds out of ten thousand exploring non-optimal options (and also getting unlucky during said exploration). Not too shabby at all.
Here is a graph of a run of this experiment.
Note how the Exp3 algorithm never stops increasing its regret. This is in part because of the adversarial model; even if Exp3 finds the absolutely perfect action to take, it just can't get over the fact that the world might try to screw it over. As long as the parameter $\gamma$ is greater than zero, Exp3 will explore bad options just in case they turn out to be good. The benefit of this is that if the model changes over time Exp3 will adapt, but the downside is that the pessimism inherent in this worldview generally results in lower payoffs than other algorithms.
More Variations, and Future Plans
Right now we have two contesting models of how the world works: is it stochastic and independent, like the UCB1 algorithm would optimize for? Or does it follow Exp3's worldview that the payoffs are adversarial? Next time we'll run some real-world tests to see how each fares.
But before that, we should note that there are still more models we haven’t discussed. One extremely significant model is that of contextual bandits. That is, the real world settings we care about often come with some “context” associated with each trial. Ads being displayed to users have probabilities that should take into account the information known about the user, and medical treatments should take into account past medical history. While we will not likely investigate any contextual bandit algorithms on this blog in the near future, the reader who hopes to apply this work to his or her own exploits (no pun intended) should be aware of the additional reading.
Until next time!
Ever since I started to get a real picture of what mathematics is about I’ve viewed middle school and high school mathematics education like a bit of a snob. I’ve read the treatise of Paul Lockhart, “A Mathematician’s Lament,” on the dystopia of cultural attitudes toward mathematics in pre-collegiate education. I’ve written responses to articles in the Atlantic authored by economists who, after getting PhDs in quantitative economics, still talk about math as if it’s just a bag of tricks. I’ve even taught guest lectures at high schools and middle schools to prove by example that an engaging, thought-provoking mathematics education is possible for 8th graders and up. I regularly tell my calculus students that half the things we make them do are completely pointless for their lives, while trying very hard to highlight the truly deep concepts and the few tools they might have reason to use.
But it’s generally agreed that something’s wrong with mathematics education in the US. There are a lot of questions to ask about why: are teachers not trained? Are students too focused on sports? Are Americans becoming intellectually lazy?
These all have their place in the debate, but the question I want to focus on in this article is: are policy makers designing good standards? I often hear about fantastic teachers who are stifled by administrators and standardized testing, so the popular answer is no. Moreover, though I haven’t done a principled study of this (again, my snobbishness peeking out), my impression is that even the fantastic math teachers at the most prestigious schools are still forced to hold the real mathematical learning in extracurriculars like math circles, or math symposiums.
And so the next natural step in analyzing the state of mathematics education in the US is to look at the standards in detail from a mathematical perspective. There was a nice article in the Washington Post detailing one ludicrous exam given to first graders in New York (exams for 6 year olds!). Here’s a snapshot:
This prompted me to actually look at the text of the Common Core State Standards in Mathematics, which is the currently accepted standard for most states. I have heard a lot about the political debate over efficacy and testing and assessment, but almost nothing about the mathematical content of the standards. Do they actually promote critical thinking skills and mathematical problem solving, as they claim? Does it differ enough from Lockhart’s dystopia?
My conclusion is:
While the Common Core indicates movement toward the right attitude on mathematics education, the attitudes aren’t reflected in the content of the standards themselves.
The big distinction I want to make in this article is the (perhaps counterintuitive) notion that mathematical thinking skills are largely unrelated to knowledge of mathematical facts, or the ability to perform mechanical computations. The reason we teach mathematics to build critical thinking skills is that mathematics gives examples of those skills in action that are as simple and boiled down to their true essence as possible. And the text of the Common Core mostly ignores this.
I cannot claim that the writers of the standard don’t understand the mathematics deeply enough to realize this, and it would be too pompous even by my standards to imply that I know better than the thousands of educators that worked on this document. It could be the case that it’s instead the result of bureaucracy and partisanship, and the designers of the Common Core felt they could only make progress in certain areas. But even so, all we are left with is the document itself, and I want to give a principled (but more or less unstructured) inspection of its technical content.
There are some exceptions to my conclusion, and I will detail them as they come up, but the general picture is still this: the intent is better than it used to be but the implementation is still wrong. At the end, I’ll discuss why this is important and what I think should be done instead.
Now, before we jump in to the text of the standards themselves, I want to point out a few resources provided for teachers and discuss them briefly. First, there is a 3-minute intro video by the Common Core about why the standards are important
Putting aside the animation style, this video sends some disturbing messages. First, that getting a lot of money is what defines and creates success. Not having a deep understanding of the problems you’re facing, not having strong relationships, not helping people, but money. All of this focus on competition and money suggests that the standards are primarily business oriented. I would argue that this is not a useful position for education, but that would digress. Suffice it to say, the Common Core people should know that the most successful mathematician of all time, Paul Erdős, was homeless and had but a few hundred dollars to his name at any given time. He instead survived (indeed, excelled to legendary status) on his deep understanding of problem solving and his strong relationships with other mathematicians. This is an extreme example but it makes my point clear: collaboration, not competition, breeds success.
The second misconception expressed in the video is that mathematics (indeed, all learning) is like a staircase, and you have to learn the concepts in, say, 6th grade before you can learn anything in 7th. Beyond basic technical proficiency, this is simply not true for the kinds of skills we want to develop in our students. And the standards for basic technical proficiency in mathematics have never really changed: be competent in arithmetic and know what a variable is by the end of grade school; be competent in basic algebraic manipulation by the end of middle school or freshman year of high school. Then the rest of high school is about being exposed to other areas, most often geometry, trigonometry, calculus, and stats, none of which depends on another too heavily (at the level taught in high school). The details of exactly what technical skills are required of high school students are unclear. Why? Because as one goes from elementary school to middle school to high school, the focus on learned skills should gradually change from mechanical abilities to big ideas, and that transition arguably begins some time in middle school.
One of the common core “big ideas” we’ll look at later is that of similarity in geometric figures. But it’s clear that you can go through your entire life’s work without thinking about similar triangles, and it’s hardly relevant to most disciplines. Why then, should we teach it? Well, there’s a very good reason, but it’s at the heart of what the Common Core standards are missing, so we’ll save the explanation for later.
The video does make one good point: standards are not learning, and learning only happens with great teachers. But really, meeting standards isn't learning either! It's just evidence of learning, which is extremely hard to measure. But even worse is meeting the wrong standards, and that's what I'm afraid Common Core is promoting.
Speaking of which, here are the actual standards themselves for the reader to peruse. The Common Core website has the full standard, also available as a pdf, and CPS released a lengthy pdf explaining the standards in detail more or less specific to the school system. CPS also provides example lessons on their website, and I'm pleasantly surprised that a handful of the lessons try to flesh out the more insightful ideas I find lacking in the Common Core itself.
So let’s dive right on in.
Common Core: Too Narrow and Too General
The standard starts out by describing a set of general guidelines which I generally agree with. They are:
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Use appropriate tools strategically.
- Attend to precision.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
But even as they are true, the descriptions of these tenets are either far too narrow or far too general. Take, for example, this excerpt from the description of the (arguably MOST mathematical) skill "Look for and express regularity in repeated reasoning," also known as "Reasoning about patterns."
Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. … Noticing the regularity in the way terms cancel when expanding $(x-1)(x+1)$, $(x-1)(x^2+x+1)$, and $(x-1)(x^3+x^2+x+1)$ might lead them to the general formula for the sum of a geometric series. As they work to solve a problem, mathematically proficient students maintain oversight of the process while attending to the details. They continually evaluate the reasonableness of their intermediate results.
Yes, a mathematically proficient student will be able to infer a general pattern here. But it’s phrased in a way that makes it seem like expanding products of polynomials is the central goal while the real mathematical skill is just “looking for a shortcut” in calculations. Moreover, the sentence that follows is equally applicable to any profession. Indeed, while working to cook a meal, a proficient chef maintains an oversight of the process while attending to details, and continually evaluates the reasonableness of their intermediate results. Finding shortcuts is not mathematical, nor is it culinary. But reasoning about those shortcuts is, whether or not they’re correct. I have plenty of calculus students who “find shortcuts” that simply aren’t true, but don’t bother to think about them, and are hence exercising no mathematical abilities. It’s a fine distinction that the Common Core seems to ignore at some times and embrace at others.
The best example of this is in “Construct viable arguments and critique the reasoning of others.” Here they wonderfully lay out the kind of logical reasoning students should learn in mathematics. How I wish that all mathematics education was based around this sole principle! The problem is that none of these thoughts are reflected in the standards themselves! Instead, the standards generally simply request that a student “knows” a particular argument, not that they generate original ideas or critique the ideas of others. The generation of new mathematical questions and arguments is, without a doubt, the best way to learn mathematical thinking.
Indeed, let’s take a closer look at the standards themselves.
Inspecting the Standards Themselves
The list of Common Core Standards breaks mathematical abilities down by grade level and by area. I think the Washington Post article gives a very good critique of the lowest grade standards, so let’s focus on high school level. This is where I claim the true big ideas must shine through, if they come up anywhere at all.
The high school standards are broken up into areas by subject:
- Number and Quantity
- Algebra
- Functions
- Modeling
- Geometry
- Statistics and Probability
So far so good, I guess. I’m quite pleased to see statistics recognized here. Let’s start with “Number and Quantity.” This section is broken into “The Real Number System,” “Quantities,” “The Complex Number System,” and “Vector and Matrix Quantities.”
The first one, “The Real Number System,” already shows some huge red flags. There are three standards here, and I quote:
- Extend the properties of exponents to rational exponents.
- Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define $5^{1/3}$ to be the cube root of 5 because we want $(5^{1/3})^3 = 5^{(1/3) \cdot 3}$ to hold, so $(5^{1/3})^3$ must equal 5.
- Rewrite expressions involving radicals and rational exponents using the properties of exponents.
- Use properties of rational and irrational numbers
- Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number and an irrational number is irrational.
This is supposed to indicate that someone understands real numbers? Number 1 only shows that one knows how to do arithmetic with exponents, asking the student to know a very specific argument, and number 2 is just memorizing some basic properties of rational numbers. There are some HUGE questions left unasked. Here are a few:
- What is a real number? How does it differ from other kinds of numbers?
- Is infinity a real number? If so, how does it fit with the definition of a real number? If not, is it some other kind of number?
- What does it mean to be rational and irrational?
- What are some examples of irrational numbers? Why are those examples irrational?
- Are there more irrational numbers than rational numbers? Vice versa? Are they “equal” in size?
- If we can't know it exactly, how would you estimate the value of a number like $\pi$?
To be fair, the grade 8 standards address some of these questions, but in an odd way. Rather than say that students should know that real numbers can be (sort of) defined by a finite integer part and an infinite decimal expansion, it says
Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number.
Does that mean that a number can have multiple decimal expansions? Does every decimal expansion also correspond to a number? Or can a decimal expansion represent multiple numbers? What about the "number" whose "decimal expansion" has an infinite number of 1's before the decimal point? Does that mean that infinity is a real number? As you can see, these are some very basic questions about real numbers, which are arguably more stimulating and important than being able to convert back and forth between decimal expansions and rational numbers (as the 8th grade standard requires, but nobody actually does for numbers harder than 1/3).
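(For reference, the conversion the 8th grade standard asks for is the classic trick; for instance,

$$x = 0.\overline{12} \implies 100x = 12.\overline{12} \implies 99x = 12 \implies x = \frac{12}{99} = \frac{4}{33}.)$$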
The important point I want to make here is that the truly “Big Ideas” underlying this topic are as follows, and they’re only halfway related to numbers themselves:
- Understand the importance of precise definitions, and be able to apply those definitions to simple questions, such as “Prove that 1/3 is a real number,” and “Argue why infinity is not a real number.”
- Understand that we can define notation as we see fit, e.g. $5^{1/3} = \sqrt[3]{5}$, and more deeply that mathematical concepts are invented via definitions.
- Understand the concept of a correspondence between two collections. Know how to argue that a correspondence is or is not bijective (by any other name).
- Understand basic proofs of impossibility.
- Understand the concept of approximation, and understand how to quickly get rough approximations of quantities. Extend this to estimate concrete real-world quantities, like the number of pianos in your hometown.
And these ideas are the actual ideas that the Common Core is looking for, the ones that apply across all of mathematics and actually relate to real critical thinking skills. But it seems that the only place in the standard where they address the idea of a correspondence is in a Kindergarten “Counting” standard:
CCSS.Math.Content.K.CC.C.6 Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.
“Number and Quantity” goes on to describe the importance of consistently using units and rounding measurements to the right number of digits; memorizing properties of complex numbers (with regards to which any introductory college professor will start over anyway); and more rote manipulation of vectors and matrices that few high school students have any reason to know. Other big ideas apply here, ideas from geometry and the idea of correspondence in particular, but the standards still focus on mechanical abilities.
The Common Core folks argue in quite a few places that knowing the reasons for why certain mathematical facts are true is what constitutes a true understanding of those facts. And yes, I agree. But there's much more to the story. If we want students to know why we define $5^{1/3}$ as we do, to make a nice extension of the rules of exponent arithmetic, it's certainly a deeper understanding than just memorizing how to do the arithmetic itself. But it's just another kind of memorization! It's memorization of a specific mathematical reason for a specific mathematical fact. It's a better kind of memorization than we used to require, but is it critical thinking or problem solving? It's hard to say whether or not it requires more of the mental faculty we want it to, but if it's not then we lose, and if it is, then this is a pretty indirect way of going about it.
Again, my big point here is that the requirements of the Standard overlook the deep underlying mathematical thinking skills that we hope are being developed when we ask them to know whatever it is we want them to know. These big concepts like correspondence and impossibility and approximation should be the central focus. The particular rules of exponents and the specific properties of irrational numbers, these are tools and sidenotes that accentuate fluency in the big concepts as applied to solving problems. Almost nobody needs to know facts about irrational numbers in their careers, but relating things by correspondence is a truly useful mathematical skill.
Taking a Step Back
So let’s pause for a moment and give some counterpoint. I could just be focusing super narrowly on one or two topics that I feel the Common Core misrepresents, and using that gripe to claim the entire Common Core is crap when it actually has lots of merit in other areas.
While I do think that the standard addresses a few topics well (more on that later), I claim the pattern of “Number and Quantity,” is endemic. Take for example the section on Geometry, Measurement & Dimension. I was really hopeful here that one of the standards would be “Understand what dimension means,” but no dice. Instead it’s the same old memorization of formulas for volumes of geometric shapes.
Even worse, when asked to derive these formulas, the standard says
Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri’s principle, and informal limit arguments.
I’d be surprised if many of my readers had even heard of Cavalieri’s principle before reading this, but this is the pattern striking again. The standard expects the students to know something very specific: not just how to use Cavalieri’s principle, but how to apply it just to these special objects, while ignoring the underlying principles at work. A true understanding of measurement and volume would be:
Reason about the volume of a solid you’ve never seen before.
But more deeply, Cavalieri’s principle is just another kind of correspondence argument applied to geometry. And I see this right away without muddling through that terribly written Wikipedia page because I have a solid understanding of the notion of a correspondence and I can recognize it at work.
The “dissection” argument is another deep principle that is mostly ignored in the standard, so let me spell it out. One way to solve problems is to break them down into simpler problems you know how to solve, and to figure out how to piece them back together again. Any high school student can understand this technique because by high school they’ve learned to put their pants on one leg at a time. But this idea alone accounts for a wide breadth of mathematical solutions to problems (the day it was applied to signal processing is often credited as the day the Age of Information began!). So they should be seeing (and using) this technique applied to many problems, be they about algebra or geometry or cats. It shouldn’t be hidden away in a single (perhaps memorized) geometric argument.
And finally, the “limit argument” (called exhaustion in some educational circles, but I don’t like this name) is an application of approximation. The idea is that a sequence of increasingly good lies will eventually give you the truth, and it’s one of the deepest thoughts that mankind has ever had! So indeed, the “Big Ideas” across this standard are big ideas, but the writers of the standard neglect to point out their significance in favor of very specific and arguably pointless factual requirements.
The geometry section is full of other similar nonsense: using laws of sines and cosines, the same geometry proofs that Paul Lockhart derides (page 19), and memorizing minutiae about the equations of parabolas, ellipses, and hyperbolas. They say the big ideas are similarity, transformations, and symmetry (indeed these are big ideas!) but then largely revert to the same old awful kinds of rote memorization and symbol pushing we've grown to hate about math.
A real course in geometry, measurement, and mathematical problem solving might even follow Lockhart’s book, Measurement. In fact, if students really absorbed the contents of this book, that would constitute an entire high school mathematics education. Why do I say that? Because this book focuses on developing mathematical thinking skills in a way that no high school education I’ve heard of has attempted. This book emphasizes methods and exploration over facts, and teaches readers to conjecture and reason without telling them how to do everything. It provides numerous exercises without clear answers and has no solution manual. And I claim that any facts required by the standards that are not covered there could be taught to a student who is comfortable with this book in one month or less. That is, all of the “standards” simply fall out of the more important deeper concepts, and we should be working forward from the deep ideas. We should use things like the law of sines as examples of these deep principles in action, but knowing or not knowing the law of sines or when to use it gives little indication of critical thinking.
I could continue with algebra, and the other sections, but I think my point is clear. The standards are filled with the same arbitrary choices of technical facts, and the deep ideas, the kinds of thinking we want to develop, are absent.
Modeling and Other Big Ideas
There is one aspect of mathematical problem solving that I think the Common Core addresses well, and that is modeling. That is, students need to be able to take a poorly defined problem, whether it’s “analyzing the stopping distance for a car” or asking what constitutes a number, and boil it down to its essence. This means making and questioning assumptions, debating the quality of a model, testing and revising, and interpreting results in a principled way. This is arguably the only kind of mathematics that non-mathematicians do outside of academia, and I feel that the description in the Common Core does justice to its importance. Even better, they admit there can be no “list” of facts the students are expected to know about specific models or tools. Here the Common Core gives in to the truth that discussion and original arguments are the key to developing fluency. In my imagination there were a select few key players lobbying for this to be included in the Core, and I say bravo to you, well done!
But there are some other big ideas that the Common Core misses entirely.
One of these is the idea of generalization. That is, one core part of mathematical thinking is to take a solution to a given problem and extend it to more general patterns and problems. I see only vague allusions to this concept in the Common Core (students are expected to know, for example, how to generate a sequence when given a pattern). But this is literally the core stuff of mathematical problem solving: if you can’t solve a hard problem, try to simplify it until you can solve it, and then try to generalize your solution back to the original problem. This is why it makes sense to think about matrices and polynomials as “generalizations of integers,” because natural facts about integers extend (or don’t extend as the case may be) to these more general settings. Students should be comfortable facing problems that may require simplification and mathematically “feeling around” for insights.
The second idea is that of the algorithm. I'm not talking about programming; that's a different story. I'm talking about procedures that anyone might follow to get something done. People follow algorithms all day, and some of the most natural problems (and interesting problems for students to think about) are algorithmic in nature: how to guarantee you win a game, how to find the quickest way to get somewhere, how to win the heart of that cute guy or girl. Indeed, students are expected to follow algorithms all over the Common Core, from approximating irrationals by rationals to solving algebra and making inferences. The only non-algorithmic aspect of the Common Core is modeling, and here they provide an algorithm for how to do it! And so it makes sense to study exactly what makes an algorithm an algorithm, when algorithms apply, and more deeply what makes an algorithm good.
The last point I want to make is that true mathematical understanding arises from trying to solve problems that you are not told how to solve ahead of time, and recognizing when these big ideas apply and when they do not. Students love to solve puzzles for their own sake, and they don’t need to be embedded in stupid “real world” applications like computing mortgage payments. Indeed, this is what Sergio Correa did in his financially destitute school in Mexico, and his students have made progress beyond belief (see, Common Core people? Money is not the problem or the solution!). It’s okay for problems to be left unsolved by students for days, weeks, or even years, and students need to be comfortable with identifying their own lack of understanding.
I want to expand on the idea a bit more. Taking it to the extreme, you could ask a more daring question: should students be exposed to problems they cannot possibly solve? My answer is yes! Emphatically, yes! A thousand times, yes! Students need to be exposed to many kinds of problems they cannot solve to be prepared for a world in which most problems don't have known solutions (or else they wouldn't be problems in the first place). Here are a few examples:
1. They should be exposed to problems that can be solved in principle but are too hard to solve with the techniques they know well.
For example: elementary level students who are just beginning to learn about variables should be asked to add up all the numbers between 1 and 100. They should be encouraged to try it by hand until they’re convinced it’s too hard, and they should be rewarded if they actually do manage to do it by hand. They should then be encouraged to think of other, cleverer, ways to solve the problem. No idea is crazier than adding it up by hand, and so much time (at least a full class session) should be spent puzzling over what in the world could possibly be done. Finally, an elegant solution should be shown that reduces the problem to multiplication, and the use of variables highlighted (let S be the sum of these numbers, even though we don’t know what it is…). And then the problem can be extended to a general sum of the first few integers, sums of squares, and so on.
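(For reference, the elegant solution alluded to is the classic pairing trick: writing the sum forwards and backwards,

$$S = 1 + 2 + \dots + 100, \quad S = 100 + 99 + \dots + 1, \quad 2S = \underbrace{101 + 101 + \dots + 101}_{100 \text{ times}} = 10100,$$

so $S = 5050$.)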
2. They should be exposed to problems that cannot be solved with any technique they know but will foreshadow their education in future classes.
When students are learning about the slope of a linear function, they must be encouraged to wonder how one could reason about the steepness of nonlinear things. For it’s obvious that some nonlinear functions are steeper than others at different places, but how can we use a single number to compare them like we do for slopes of lines? The answer is that we cannot! The “correct” way is to invent calculus, but of course the calculus way of doing it involves extending the usual notion of slopes of lines by taking limits. The students will not know this, nor will they find it out by the end of their algebra class, but it should linger in their minds as a motivating question: there are always more unanswered questions! What about the “steepness” of surfaces? Can we talk about the “steepness” of time? Students should readily ask and be asked such intriguing questions (again, this is generalization at work).
3. They should be asked obvious technical questions that appear not to have any technique at all.
For example, they might be asked the difficult question: is $\pi$ a rational number? Indeed, this is an extremely natural question to ask, since $\pi$ is defined to be a ratio of two numbers: the circumference and diameter of a circle. But despite the fact that there are many proofs using a variety of techniques, almost all proofs that $\pi$ is irrational are beyond the abilities of high school students to follow, and not even familiar to the average college math major. I certainly couldn't prove it off the top of my head. This is quite different than the previous kind of problem, because there it was obvious that you can reason about the steepness of nonlinear functions; the students just don't know how to formulate it rigorously. But here, the rigorous question is understood (can $\pi$ be represented as a quotient of integers?) but it's unclear whether the problem is easy or hard to solve, and it turns out to be hard.
4. They should be exposed to problems that NOBODY knows how to solve.
When students learn about rational numbers (and if they know about $e$, which I doubt they should before calculus, but I'll use it in my example anyway), they could be asked whether $\pi + e$ is rational. This is an open problem in mathematics. If they're learning about prime numbers, they should be asked whether every positive integer can be written as a sum of two primes. Every even positive integer? Every even integer greater than 2? And so they go through the process of refining "stupid" questions (with obvious "no" answers) into deep open conjectures. And then it can be connected to other ideas: can an algorithm answer this question? How long might it take? Can we try to correspond integers to something else? Can we give an approximation argument? There are so many simple open problems in number theory that it baffles me that many students are never exposed to them.
The point of all this is that mathematics, and mathematical problem-solving skills in particular, are not just about picking the right tool from the set of tools you've been taught. It's about recognizing when any tool you've ever heard of even applies! More deeply, it's about debating with your colleagues whether problems can or cannot be solved using certain methods, and giving principled reasons why you think so. This sounds like "modeling," and indeed the strategies used for modeling also apply to pure mathematics, a fact that most people don't realize. Critical thinking and mathematical problem solving are more akin to art and debate than to mechanical computation. And the most interesting problems are the most natural questions one could ask, not the contrived "compute the volume of an oddly shaped wine glass" questions. Those are busywork questions, not open for discussion and interpretation. And the students know it.
Why This Matters
A reasonable objection to my rant goes as follows: why does it matter that the Common Core isn’t super clear about these “big ideas” I’m claiming are so central? If the teachers are knowledgeable they’ll know what is important and what isn’t important, and how to teach the material in the manner that best promotes learning.
The problem is one of intention and misdirection. If the teachers are not rock-solid in their own understanding, then this Common Core, promoted by The World's Leading Education Experts, can easily narrow their teaching to just what's in the Core. More disturbing are the people who don't know mathematics well: the principals, policy makers, and standardized test writers who take these guidelines at face value. Even if a teacher has a good reason to favor one area like modeling over memorizing facts about hyperbolas, they will be met with the same kind of obtuse opposition from administrators seeking short-term business goals. The standards exist, one might argue, to explain to these people (the people who wouldn't know mathematical reasoning if it hit them in the face) that the teachers are teaching ideas much deeper than the rules of matrix multiplication. The Common Core represents this adequately with regard to modeling, but little else.
The Common Core also claims that the standards should be separated from specific curriculum and pedagogy, and one would reasonably argue that what I’m presenting here is pedagogy, not standards. Regardless of whether you agree or disagree with this, it still remains that the Common Core is designed to influence curriculum and pedagogy. And so even if the Common Core must have “facts” as standards, if it fails to emphasize the deep ideas underlying the factual obligations then it fails to influence pedagogy in the right way. In doing so, it reinforces obviously bad practices like teaching to the test.
The important thing to realize is that the correct pedagogy is already basically known: from a young age students should explore and reason and puzzle without horse blinders. Sometimes there are dry factual things they cannot escape, but such is true of everything. So the separation of mathematical church and state (pedagogy and standards) claimed by the Common Core seems to be an entirely political one: imposing pedagogical constraints, especially ones that only work in some environments, would infringe on the freedom of the teachers. If that necessarily leaves a global set of standards deficient, then a global set of standards is simply the wrong approach.
Again, I cannot say for sure whether the writers of the standards understand the mathematics deeply enough, and it would be pointlessly arrogant to imply my own superiority. I hate to think it's a bureaucracy issue, and that the designers felt the only progress they could make was to emphasize modeling as well as they did. If that is the truth then it is a sad one, because from where I and many teachers sit, our country is stuck with the results.
What don’t need more compartmentalization by subject and grade. We do need a recognition of the deep critical thinking skills we want to teach. “Abstract reasoning” is not a specific enough goal to warrant policy. We need to admit to our teachers and our students exactly what we’re trying to get them to learn. And then we can organize education based on increasingly sophisticated applications of those ideas, to thinking about shapes, numbers, modeling, to whatever you want. Then students won’t forget about counting as “matching” after kindergarten ends, or only consider approximations related to irrational numbers. They will instead see these ideas blossom over time into the mental Swiss Army knives that they are. And they will use these ideas as a foundation to acquire whatever factual knowledge they might need to succeed in their careers.
It’s that time of year where senior undergraduates are considering whether to go to graduate school. And I wouldn’t be surprised if many students were afraid of the prospect, perhaps having read that popular genre of articles these days that tell you graduate school will turn you into an emotional wreck and that only a psychopathic masochist would put themselves through it.
The problem with these articles is that they're usually written by outliers, or by people who put themselves in situations with no other options. My time at UI Chicago, however, has given me nothing but options and excitement! So if you're thinking about graduate school in mathematics or theoretical computer science, here's my pitch for
Why you should come to UI Chicago and study theoretical computer science
Our department is social.
In fact, UI Chicago's mathematics department is the most social of any math department I've ever heard of. I think this is the biggest benefit for me. On my first day here, I was surprised that everyone was totally normal, not the typical weird antisocial stereotype one associates with people who like math. Our department has a huge list of seminars going on every day of the week, and a small party every Friday called "Tea" that has a large attendance. We often go out to bars and restaurants, and have other outings. We even have a Facebook group (for grad students only) and a ping pong league that the professors sometimes join. We currently have over 150 graduate students in our department, and I know around 70 by name.
We have world-class faculty.
Some of my colleagues came to UIC specifically to work with David Marker on model theory, or Lou Kauffman on knot theory. At least one researcher here has over two hundred publications! We have big names in algebraic geometry, hypergraph combinatorics, dynamical systems, low-dimensional topology, and a very active logic group. Our theoretical computer science group (mixed with our combinatorics group) is small but vibrant and growing fast. We just got three new mathematical computer science students this year, and I’m doing everything I can to convert some of the other students over to our side.
We’re in the middle of a thriving intellectual community.
Chicago is the center of the Midwest US, and there are a ton of universities not only in the city but within a few hours' drive. There are regular seminars and colloquia at the University of Chicago, Northwestern, and smaller institutions like the Toyota Technological Institute at Chicago. Then there are the universities of Wisconsin, Indiana, and Michigan, which all have nontrivial theoretical computer science groups (and of course other mathematics groups), and we get together for conferences like Midwest Theory Day.
Our department is not cutthroat competitive.
I hear rumors about top mathematics and computer science programs that (unintentionally) pit students against each other for the attention of a few glorified professors. That simply doesn’t happen here. Everyone is friendly and people regularly collaborate. You can approach any professor and ask to do a reading course with them or ask them what kinds of open problems they’re thinking about, and most of them will gladly sit down with you and explain all the neat ideas in their heads. Even the hardest, most sarcastic professors genuinely care about their students. I think, along with being social, this makes our department one of the friendliest and most stress-free places to get a PhD.
We’re in a great city.
Chicago is really fun! I don’t know what else to say about this.
Our department staff is very supportive.
Our director and assistant director of graduate studies are extremely helpful at getting new students situated and ensuring they have funding. It's not uncommon for students who start in the PhD program to decide after one or two years that a PhD is not right for them. Usually they will stop with the requirements for a master's degree, and there are no hard feelings. Students who do this are even encouraged to return if they decide they want to finish their PhD later. In the meantime, our department guarantees tuition waivers and stipends to all of its teaching assistants (and there are alternatives to teaching as well), so you can focus on your studies and not have to think too much about money.
And even more, if you decide to study theoretical computer science at UI Chicago you get a whole bunch of other benefits:
You get to hang out and do research with me!
(Okay maybe that’s not a serious benefit to consider)
Your post-grad school job opportunities widen.
Jobs are hard to come by for the purest of pure mathematics researchers. Research positions are in short supply, and unless you want to go into industry with an applied math degree, the remaining option is to teach at a 4-year institution. But if you study theoretical computer science, you are qualified to do all kinds of things. Work at industry research labs like Microsoft Research, Google Research, or Yahoo! Research. Work at government labs like Lincoln Labs and Lawrence Livermore National Labs, both of which I interned at. You can shoot for a professorship or do a postdoc like a regular mathematics PhD would. If you're handy with Python you could go into the software industry and get a high-demand job at any major company in cryptography or operations research (both of which depend on ideas from TCS). And you always keep the option of teaching at a 4-year institution.
You have many options for internships during summers.
My colleagues and I, and even my advisor, have done research internships during the summers at various research labs and industry companies. This is a particularly nice benefit of doing mathematical computer science in grad school, because it augments your normal graduate student stipend by enough to live much more comfortably than otherwise (that being said, for extra money a lot of my pure math colleagues tutor on the side, and tutoring commands a high price these days). It's not uncommon to receive additional funding through these opportunities as well.
You get to travel a lot.
The main publication venue in computer science is the conference, and that means there are conferences happening all over the world all the time. In fact, I just got back from a conference in Aachen, Germany; earlier this year I was at Berkeley and Stanford; I am helping to run a conference in Florida early next year; and I am looking at conferences in Beijing and Barcelona next summer. All of the trips you take to present your published research are paid for, so it's just pure awesome.
You enjoy the breadth of problems in computer science.
Computer science is unique in that it connects to almost every field of mathematics.
- Like statistics? There’s statistical machine learning and randomized algorithm design.
- Like real analysis and dynamical systems? There’s convex optimization, support vector machines, and tons of computational aspects of PDE’s.
- Like algebra or number theory? There’s cryptography.
- Like combinatorics? There’s combinatorial optimization.
- Like game theory? I just got back from a conference on algorithmic game theory.
- Like geometry and representation theory? There’s a Geometric Complexity Theory program working toward P vs NP.
- Like logic? You might be surprised to know that the cleanest proofs of the incompleteness theorems are via Turing machines.
- Like topology? There are researchers (not at UIC) working on computational topology, like persistent homology which we’ve been slowly covering on this blog.
The list just goes on and on, and this isn’t even mentioning the purely pure theoretical computer science topics which have a flavor of their own.
Programming options exist, but you aren’t forced to write programs.
Some of the greatest computer science researchers cannot write simple computer programs, and if you’re just interested in theory there is plenty of theory to go around. On the other hand, we have researchers in our department studying aspects of supercomputing, and options for collaboration with researchers in the (engineering) computer science department. Over there they’re studying things like biological networks, machine learning and robotics, and all kinds of hands-on applied stuff that you might be interested in if you read this blog.