# Adversarial Bandits and the Exp3 Algorithm

In the last twenty years there has been a lot of research in a subfield of machine learning called Bandit Learning. The name comes from the problem of being faced with a large sequence of slot machines (once called one-armed bandits) each with a potentially different payout scheme. The problems in this field all focus on one central question:

If I have many available actions with uncertain outcomes, how should I act to maximize the quality of my results over many trials?

The deep question here is how to balance exploitation, the desire to choose an action which has payed off well in the past, with exploration, the desire to try options which may produce even better results. The ideas are general enough that it’s hard not to find applications: choosing which drug to test in a clinical study, choosing which companies to invest in, choosing which ads or news stories to display to users, and even (as Richard Feynman once wondered) how to maximize your dining enjoyment.

Herbert Robbins, one of the first to study bandit learning algorithms.

In less recent times (circa the 1960s), this problem was posed and considered in the case where the payoff mechanisms had a very simple structure: each slot machine is a coin flip with a different probability $p$ of winning, and the player’s goal is to find the best machine as quickly as possible. We called this the “stochastic” setting, and last time we saw a modern strategy called UCB1 which maintained statistical estimates on the payoffs of the actions and chose the action with the highest estimate. The underlying philosophy was “optimism in the face of uncertainty,” and it gave us something provably close to optimal.

Unfortunately payoff structures are more complex than coin flips in the real world. Having “optimism” is arguably naive, especially when it comes to competitive scenarios like stock trading. Indeed the algorithm we’ll analyze in this post will take the polar opposite stance, that payoffs could conceivably operate in any manner. This is called the adversarial model, because even though the payoffs are fixed in advance of the game beginning, it can always be the case that the next choice you make results in the worst possible payoff.

One might wonder how we can hope to do anything in such a pessimistic model. As we’ll see, our notion of performing well is relative to the best single slot machine, and we will argue that this is the only reasonable notion of success. On the other hand, one might argue that real world payoffs are almost never entirely adversarial, and so we would hope that algorithms which do well theoretically in the adversarial model excel beyond their minimal guarantees in practice.

In this post we’ll explore and implement one algorithm for adversarial bandit learning, called Exp3, and in the next post we’ll see how it fares against UCB1 in some applications. Some prerequisites: since the main algorithm presented in this post is randomized, its analysis requires some familiarity with techniques and notation from probability theory. Specifically, we will assume that the reader is familiar with the content of this blog’s basic probability theory primers (1, 2), though the real difficulty in the analysis will be keeping up with all of the notation.

In case the reader is curious, Exp3 was invented in 2001 by Auer, Cesa-Bianchi, Freund, and Schapire. Here is their original paper, which contains lots of other mathematical goodies.

As usual, all of the code and data produced in the making of this blog post is available for download on this blog’s Github page.

## Model Formalization and Notions of Regret

Before we describe the algorithm and analyze it, we have to set up the problem formally. The first few paragraphs of our last post give a high-level picture of general bandit learning, so we won’t repeat that here. Recall, however, that we have to describe both the structure of the payoffs and how success is measured. So let’s describe the former first.

Definition: An adversarial bandit problem is a pair $(K, \mathbf{x})$, where $K$ represents the number of actions (henceforth indexed by $i$), and $\mathbf{x}$ is an infinite sequence of payoff vectors $\mathbf{x} = \mathbf{x}(1), \mathbf{x}(2), \dots$, where $\mathbf{x}(t) = (x_1(t), \dots, x_K(t))$ is a vector of length $K$ and $x_i(t) \in [0,1]$ is the reward of action $i$ on step $t$.

In English, the game is played in rounds (or “time steps”) indexed by $t = 1, 2, \dots$, and the payoffs are fixed for each action and time before the game even starts. Note that we assume the reward of an action is a number in the interval $[0,1]$, but all of our arguments in this post can be extended to payoffs in some range $[a,b]$ by shifting by $a$ and dividing by $b-a$.
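
The rescaling mentioned above is a one-line affine map. As a sketch (the helper name `rescale` is mine, not from the post):

```python
def rescale(x, a, b):
    """Map a reward x in [a, b] to [0, 1] by shifting by a and dividing by b - a."""
    return (x - a) / (b - a)
```

Running the algorithm on the rescaled rewards and multiplying back by $b - a$ recovers payoffs in the original range.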

Let’s specify what the player (algorithm designer) knows during the course of the game. First, the value of $K$ is given, and the total number of rounds is kept secret. In each round, the player has access to the history of rewards for the actions that were chosen by the algorithm in previous rounds, but not the rewards of unchosen actions. In other words, it will only ever know one $x_i(t)$ for each $t$. To set up some notation, if we call $i_1, \dots, i_t$ the list of chosen actions over $t$ rounds, then at step $t+1$ the player has access to the values of $x_{i_1}(1), \dots, x_{i_t}(t)$ and must pick $i_{t+1}$ to continue.

So to be completely clear, the game progresses as follows:

1. The player is given access to $K$.
2. For each time step $t$:
    1. The player must pick an action $i_t$.
    2. The player observes the reward $x_{i_t}(t) \in [0,1]$, which he may save for future use.

The problem gives no explicit limit on the amount of computation performed during each step, but in general we want it to run in polynomial time and not depend on the round number $t$. If the runtime depended even logarithmically on $t$, it could eventually become a problem for high-frequency applications. For example in ad serving, Google processes on the order of $10^9$ ads per day; a logarithmic dependence wouldn’t be that bad at first, but at some point in the distant future Google wouldn’t be able to keep up (and we all want long-term solutions to our problems).

Note that the reward vectors $\mathbf{x}(t)$ must be fixed in advance of the algorithm running, but this still allows a lot of counterintuitive things. For example, the payoffs can depend adversarially on the algorithm the player decides to use: if the player chooses the stupid strategy of always picking the first action, then the adversary can just make that the worst possible action to choose. However, the rewards cannot depend on the random choices made by the player during the game.

So now let’s talk about measuring success. For an algorithm $A$ which chooses the sequence $i_1, \dots, i_t$ of actions, define $G_A(t)$ to be the sum of the observed rewards

$\displaystyle G_A(t) = \sum_{s=1}^t x_{i_s}(s)$.

And because $A$ will often be randomized, this value is a random variable depending on the decisions made by $A$. As such, we will often only consider the payoff up to expectation. That is, we’ll be interested in how $\mathbb{E}(G_A(t))$ relates to other possible courses of action. To be completely rigorous, the randomization is not over “choices made by an algorithm,” but rather the probability distribution over sequences of actions that the algorithm induces. It’s a fine distinction but a necessary one. In other words, we could define any sequence of actions $\mathbf{j} = (j_1, \dots, j_t)$ and define $G_{\mathbf{j}}(t)$ analogously as above:

$\displaystyle G_{\mathbf{j}}(t) = \sum_{s=1}^t x_{j_s}(s)$.

Any algorithm and choice of reward vectors induces a probability distribution over sequences of actions in a natural way (if you want to draw from the distribution, just run the algorithm). So instead of conditioning our probabilities and expectations on previous choices made by the algorithm, we do it over histories of actions $i_1, \dots, i_t$.

An obvious question we might ask is: why can’t the adversary just make all the payoffs zero? (or negative!) In this event the player won’t get any reward, but he can emotionally and psychologically accept this fate. If he never stood a chance to get any reward in the first place, why should he feel bad about the inevitable result? What a truly cruel adversary wants is, at the end of the game, to show the player what he could have won, and have it far exceed what he actually won. In this way the player feels regret for not using a more sensible strategy, and likely turns to binge eating cookie dough ice cream. Or more likely he returns to the casino to lose more money. The trick that the player has up his sleeve is precisely the randomness in his choice of actions, and he can use its objectivity to partially overcome even the nastiest of adversaries.

The adversary would love to show you this bluff after you choose to fold your hand. What a jerk.

Sadism aside, this thought brings us to a few mathematical notions of regret that the player algorithm may seek to minimize. The first, most obvious, and least reasonable is the worst-case regret. Given a stopping time $T$ and a sequence of actions $\mathbf{j} = (j_1, \dots, j_T)$, the expected regret of algorithm $A$ with respect to $\mathbf{j}$ is the difference $G_{\mathbf{j}}(T) - \mathbb{E}(G_A(T))$. This notion of regret measures the regret of a player if he knew what would have happened had he played $\mathbf{j}$.  The expected worst-case regret of $A$ is then the maximum over all sequences $\mathbf{j}$ of the regret of $A$ with respect to $\mathbf{j}$. This notion of regret seems particularly unruly, especially considering that the payoffs are adversarial, but there are techniques to reason about it.

However, the focus of this post is on a slightly easier notion of regret, called weak regret, which instead compares the results of $A$ to the best single action over all rounds. That is, this quantity is just

$\displaystyle \left ( \max_{j} \sum_{t=1}^T x_j(t) \right ) - \mathbb{E}(G_A(T))$

We call the parenthetical term $G_{\textup{max}}(T)$. This kind of regret is a bit easier to analyze, and the main theorem of this post will give an upper bound on it for Exp3. The reader who read our last post on UCB1 will wonder why we make a big distinction here just to arrive at the same definition of regret that we had in the stochastic setting. But with UCB1 the best sequence of actions to take just happened to be to play the best action over and over again. Here, the payoff difference between the best sequence of actions and the best single action can be arbitrarily large.
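
The definition is straightforward to compute for a finished run. Here is a short sketch (the function name and input format are my own) that takes the full reward matrix and the player’s chosen actions, and returns the realized weak regret of that single run:

```python
def weak_regret(rewardVector, chosenActions):
    """Payoff of the best single action minus the payoff the player actually earned."""
    T = len(chosenActions)
    K = len(rewardVector[0])
    # G_max(T): best cumulative payoff among single actions played every round
    gMax = max(sum(rewardVector[t][j] for t in range(T)) for j in range(K))
    # the player's realized cumulative payoff G_A(T) for this run
    gPlayer = sum(rewardVector[t][chosenActions[t]] for t in range(T))
    return gMax - gPlayer
```

Averaging this quantity over many runs of a randomized algorithm estimates the expected weak regret in the definition above.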

## Exp3 and an Upper Bound on Weak Regret

We now describe the Exp3 algorithm.

Exp3 stands for Exponential-weight algorithm for Exploration and Exploitation. It works by maintaining a list of weights for each of the actions, using these weights to decide randomly which action to take next, and increasing (decreasing) the relevant weights when a payoff is good (bad). We further introduce an egalitarianism factor $\gamma \in [0,1]$ which tunes the desire to pick an action uniformly at random. That is, if $\gamma = 1$, the weights have no effect on the choices at any step.

The algorithm is readily described in Python code, but we need to set up some notation used in the proof of the theorem. The pseudocode for the algorithm is as follows.

Exp3

1. Given $\gamma \in [0,1]$, initialize the weights $w_i(1) = 1$ for $i = 1, \dots, K$.
2. In each round $t$:
    1. Set $\displaystyle p_i(t) = (1-\gamma)\frac{w_i(t)}{\sum_{j=1}^K w_j(t)} + \frac{\gamma}{K}$ for each $i$.
    2. Draw the next action $i_t$ randomly according to the distribution $(p_1(t), \dots, p_K(t))$.
    3. Observe the reward $x_{i_t}(t)$.
    4. Define the estimated reward $\hat{x}_{i_t}(t)$ to be $x_{i_t}(t) / p_{i_t}(t)$.
    5. Set $\displaystyle w_{i_t}(t+1) = w_{i_t}(t) e^{\gamma \hat{x}_{i_t}(t) / K}$.
    6. Set all other $w_j(t+1) = w_j(t)$.

The choices of these particular mathematical quantities (in steps 1, 4, and 5) are a priori mysterious, but we will explain them momentarily. In the proof that follows, we will extend $\hat{x}_{i_t}(t)$ to indices other than $i_t$ and define those values to be zero.

The Python implementation is perhaps more legible, and implements the possibly infinite loop as a generator:

```python
import math

def exp3(numActions, reward, gamma):
    weights = [1.0] * numActions

    t = 0
    while True:
        probabilityDistribution = distr(weights, gamma)
        choice = draw(probabilityDistribution)
        theReward = reward(choice, t)

        estimatedReward = 1.0 * theReward / probabilityDistribution[choice]
        weights[choice] *= math.exp(estimatedReward * gamma / numActions)  # important that we use the estimated reward here!

        yield choice, theReward, estimatedReward, weights
        t = t + 1
```


Here the `reward` parameter refers to a callable which accepts as input the action chosen in round $t$ and the round number itself, and returns as output the reward for that choice. The `distr` and `draw` functions are also easily defined, with the former depending on the gamma parameter as follows:

```python
def distr(weights, gamma=0.0):
    theSum = float(sum(weights))
    return tuple((1.0 - gamma) * (w / theSum) + (gamma / len(weights)) for w in weights)
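
The `draw` function is not shown here; a minimal sketch using inverse-transform sampling (the original implementation may differ in its details) is:

```python
import random

def draw(probabilityDistribution):
    """Sample an index according to the given discrete probability distribution."""
    choice = random.uniform(0, 1)
    cumulative = 0.0
    for i, p in enumerate(probabilityDistribution):
        cumulative += p
        if choice <= cumulative:
            return i
    return len(probabilityDistribution) - 1  # guard against floating point rounding
```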


There is one odd part of the algorithm above, and that’s the “estimated reward” $\hat{x}_{i_t}(t) = x_{i_t}(t) / p_{i_t}(t)$. The intuitive reason to do this is to compensate for a potentially small probability of getting the observed reward. More formally, it ensures that the conditional expectation of the “estimated reward” is the actual reward. We will explore this formally during the proof of the main theorem.
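
We can sanity-check this unbiasedness with a tiny simulation of my own devising: fix a selection probability $p$ and a true reward $x$, record the estimate $x/p$ on the rounds the action is drawn and zero otherwise, and compare the average to $x$:

```python
import random

random.seed(0)
p, x, trials = 0.25, 0.8, 200000
# the estimated reward is x/p when the action is drawn (probability p), else 0,
# so its expectation is p * (x / p) = x
avg = sum(x / p if random.random() < p else 0.0 for _ in range(trials)) / trials
```

With these values `avg` lands very close to $x = 0.8$, as the proof of the main theorem requires.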

As usual, the programs we write in this post are available on this blog’s Github page.

We can now state and prove the upper bound on the weak regret of Exp3. Note all logarithms are base $e$.

Theorem: For any $K > 0, \gamma \in (0, 1]$, and any stopping time $T \in \mathbb{N}$

$\displaystyle G_{\textup{max}}(T) - \mathbb{E}(G_{\textup{Exp3}}(T)) \leq (e-1)\gamma G_{\textup{max}}(T) + \frac{K \log K}{\gamma}$.

This is a purely analytical result because we don’t actually know what $G_{\textup{max}}(T)$ is ahead of time. Also note how the factor of $\gamma$ occurs: in the first term, having a large $\gamma$ will result in a poor upper bound because it occurs in the numerator of that term: too much exploration means not enough exploitation. But it occurs in the denominator of the second term, meaning that not enough exploration can also produce an undesirably large regret. This theorem then provides a quantification of the tradeoff being made, although it is just an upper bound.

Proof.

We present the proof in two parts. Part 1:

We made a notable mistake in part 1, claiming that $e^x \leq 1 + x + (e-2)x^2$ when $0 < x \leq 1$. In fact, this does follow from the Taylor series expansion of $e^x$, but it’s not as straightforward as I made it sound. In particular, note that $e^x = 1 + x + \frac{x^2}{2!} + \dots$, and so $e^1 = 2 + \sum_{k=2}^\infty \frac{1}{k!}$. Using the larger coefficient $(e-2) = \sum_{k=2}^\infty \frac{1}{k!}$ in place of $\frac{1}{2}$ gives

$\displaystyle 1 + x + \left ( \sum_{k=2}^{\infty} \frac{x^2}{k!} \right )$

And since $0 < x \leq 1$, each term in the sum will decrease when replaced by $\frac{x^k}{k!}$, and we’ll be left with exactly $e^x$. In other words, this is the tightest possible quadratic upper bound on $e^x$. Pretty neat! On to part 2:
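
For the skeptical, the inequality is also easy to check numerically on a grid over $(0,1]$ (an illustration, not part of the proof):

```python
import math

# gap between the quadratic upper bound and e^x; it should be nonnegative
# everywhere on (0, 1], with equality exactly at x = 1
xs = [k / 1000.0 for k in range(1, 1001)]
gaps = [1 + x + (math.e - 2) * x ** 2 - math.exp(x) for x in xs]
```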


$\square$

We can get a version of this theorem that is easier to analyze by picking a suitable choice of $\gamma$.

Corollary: Assume that $G_{\textup{max}}(T)$ is bounded by $g$, and that Exp3 is run with

$\displaystyle \gamma = \min \left ( 1, \sqrt{\frac{K \log K}{(e-1)g}} \right )$

Then the weak regret of Exp3 is bounded by $2.63 \sqrt{g K \log K}$ for any reward vector $\mathbf{x}$.

Proof. Simply plug this choice of $\gamma$ into the bound in the theorem above, and note that $2 \sqrt{e-1} < 2.63$. $\square$
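
The algebra is easy to verify numerically; with this choice of $\gamma$ the two terms of the theorem’s bound are equal (the values of $K$ and $g$ below are arbitrary test values of my choosing):

```python
import math

K, g = 10, 10000.0
gamma = min(1.0, math.sqrt(K * math.log(K) / ((math.e - 1) * g)))
# the theorem's bound (e-1)*gamma*g + K log K / gamma collapses to
# 2*sqrt((e-1) g K log K), which is at most 2.63*sqrt(g K log K)
bound = (math.e - 1) * gamma * g + K * math.log(K) / gamma
claimed = 2.63 * math.sqrt(g * K * math.log(K))
```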

## A Simple Test Against Coin Flips

Now that we’ve analyzed the theoretical guarantees of the Exp3 algorithm, let’s use our implementation above and see how it fares in practice. Our first test will use 10 coin flips (Bernoulli trials) for our actions, with the probabilities of winning (and the actual payoff vectors) defined as follows:

```python
biases = [1.0 / k for k in range(2, 12)]
rewardVector = [[1 if random.random() < bias else 0 for bias in biases] for _ in range(numRounds)]
rewards = lambda choice, t: rewardVector[t][choice]
```


If we are to analyze the regret of Exp3 against the best action, we must compute the payoffs for all actions ahead of time, and compute which is the best. It will almost certainly be the one with the largest probability of winning (the first in the list generated above), but because the realized payoffs are random it might not be, so we have to compute it. Specifically, it’s the following argmax:

```python
bestAction = max(range(numActions), key=lambda action: sum([rewardVector[t][action] for t in range(numRounds)]))
```


Here the `max` function with a `key` argument is used as “argmax” would be in mathematics.

We also have to pick a good choice of $\gamma$, and the corollary from the previous section gives us a good guide to the optimal $\gamma$: simply find a good upper bound on the reward of the best action, and use that. We can cheat a little here: we know the best action has a probability of 1/2 of paying out, and so the expected reward if we always did the best action is half the number of rounds. If we use, say, $g = 2T / 3$ and compute $\gamma$ using the formula from the corollary, this will give us a reasonable (but perhaps not perfectly correct) upper bound.

Then we just run the exp3 generator for $T = \textup{10,000}$ rounds, and compute some statistics as we go:

```python
bestUpperBoundEstimate = 2 * numRounds / 3
gamma = math.sqrt(numActions * math.log(numActions) / ((math.e - 1) * bestUpperBoundEstimate))

cumulativeReward = 0
bestActionCumulativeReward = 0
weakRegret = 0

t = 0
for (choice, reward, est, weights) in exp3(numActions, rewards, gamma):
    cumulativeReward += reward
    bestActionCumulativeReward += rewardVector[t][bestAction]

    weakRegret = bestActionCumulativeReward - cumulativeReward
    regretBound = (math.e - 1) * gamma * bestActionCumulativeReward + (numActions * math.log(numActions)) / gamma

    t += 1
    if t >= numRounds:
        break
```


At the end of one run of ten thousand rounds, the weights are overwhelmingly in favor of the best arm. The cumulative regret is 723, compared to the theoretical upper bound of 897. It’s not too shabby, but by tinkering with the value of $\gamma$ we see that we can get regrets lower than 500 (when $\gamma$ is around 0.07). Considering that the cumulative reward for the player is around 4,500 in this experiment, that means we spent only about 500 rounds out of ten thousand exploring non-optimal options (and also getting unlucky during said exploration). Not too shabby at all.

Here is a graph of a run of this experiment.

A run of Exp3 against Bernoulli rewards. The first graph represents the simple regret of the player algorithm against the best action; the blue line is the actual simple regret, and the green line is the theoretical O(sqrt(k log k)) upper bound. The second graph shows the weights of each action evolving over time. The blue line is the weight of the best action, while the green and red lines are the weights of the second and third best actions.

Note how the Exp3 algorithm never stops increasing its regret. This is in part because of the adversarial model; even if Exp3 finds the absolutely perfect action to take, it just can’t get over the fact that the world might try to screw it over. As long as the $\gamma$ parameter is greater than zero, Exp3 will explore bad options just in case they turn out to be good. The benefit of this is that if the model changes over time Exp3 will adapt, but the downside is that the pessimism inherent in this worldview generally results in lower payoffs than other algorithms.

## More Variations, and Future Plans

Right now we have two contesting models of how the world works: is it stochastic and independent, like the UCB1 algorithm would optimize for? Or does it follow Exp3’s world view that the payoffs are adversarial? Next time we’ll run some real-world tests to see how each fares.

But before that, we should note that there are still more models we haven’t discussed. One extremely significant model is that of contextual bandits. That is, the real world settings we care about often come with some “context” associated with each trial. Ads being displayed to users have probabilities that should take into account the information known about the user, and medical treatments should take into account past medical history. While we will not likely investigate any contextual bandit algorithms on this blog in the near future, the reader who hopes to apply this work to his or her own exploits (no pun intended) should be aware of the additional reading.

Until next time!

# Deconstructing the Common Core Mathematical Standard

Ever since I started to get a real picture of what mathematics is about I’ve viewed middle school and high school mathematics education like a bit of a snob. I’ve read the treatise of Paul Lockhart, “A Mathematician’s Lament,” on the dystopia of cultural attitudes toward mathematics in pre-collegiate education. I’ve written responses to articles in the Atlantic authored by economists who, after getting PhDs in quantitative economics, still talk about math as if it’s just a bag of tricks. I’ve even taught guest lectures at high schools and middle schools to prove by example that an engaging, thought-provoking mathematics education is possible for 8th graders and up. I regularly tell my calculus students that half the things we make them do are completely pointless for their lives, while trying very hard to highlight the truly deep concepts and the few tools they might have reason to use.

But it’s generally agreed that something’s wrong with mathematics education in the US. There are a lot of questions to ask about why: are teachers not trained? Are students too focused on sports? Are Americans becoming intellectually lazy?

These all have their place in the debate, but the question I want to focus on in this article is: are policy makers designing good standards? I often hear about fantastic teachers who are stifled by administrators and standardized testing, so the popular answer is no. Moreover, though I haven’t done a principled study of this (again, my snobbishness peeking out), my impression is that even the fantastic math teachers at the most prestigious schools are still forced to hold the real mathematical learning in extracurriculars like math circles, or math symposiums.

And so the next natural step in analyzing the state of mathematics education in the US is to look at the standards in detail from a mathematical perspective. There was a nice article in the Washington Post detailing one ludicrous exam given to first graders in New York (exams for 6 year olds!). Here’s a snapshot:

This is wrong on so many levels.

This prompted me to actually look at the text of the Common Core State Standards in Mathematics, which is the currently accepted standard for most states. I have heard a lot about the political debate over efficacy and testing and assessment, but almost nothing about the mathematical content of the standards. Do they actually promote critical thinking skills and mathematical problem solving, as they claim? Does it differ enough from Lockhart’s dystopia?

My conclusion is:

While the Common Core indicates movement toward the right attitude on mathematics education, the attitudes aren’t reflected in the content of the standards themselves.

The big distinction I want to make in this article is the (perhaps counterintuitive) notion that mathematical thinking skills are largely unrelated to knowledge of mathematical facts, or to the ability to perform mechanical computations. The reason we teach mathematics to gain critical thinking skills is that mathematics gives examples of when those skills are needed that are as simple and boiled down to their true essence as possible. And the text of the Common Core mostly ignores this.

I cannot claim that the writers of the standard don’t understand the mathematics deeply enough to realize this, and it would be too pompous even by my standards to imply that I know better than the thousands of educators that worked on this document. It could be the case that it’s instead the result of bureaucracy and partisanship, and the designers of the Common Core felt they could only make progress in certain areas. But even so, all we are left with is the document itself, and I want to give a principled (but more or less unstructured) inspection of its technical content.

There are some exceptions to my conclusion, and I will detail them as they come up, but the general picture is still this: the intent is better than it used to be but the implementation is still wrong. At the end, I’ll discuss why this is important and what I think should be done instead.

## Preliminaries

Now, before we jump into the text of the standards themselves, I want to point out a few resources provided for teachers and discuss them briefly. First, there is a 3-minute intro video by the Common Core about why the standards are important.

Putting aside the animation style, this video sends some disturbing messages. First, that getting a lot of money is what defines and creates success. Not having a deep understanding of the problems you’re facing, not having strong relationships, not helping people, but money. All of this focus on competition and money suggests that the standards are primarily business oriented. I would argue that this is not a useful position for education, but that would be a digression. Suffice it to say, the Common Core people should know that the most successful mathematician of all time, Paul Erdős, was homeless and had but a few hundred dollars to his name at any given time. He instead survived (indeed, excelled to legendary status) on his deep understanding of problem solving and his strong relationships with other mathematicians. This is an extreme example but it makes my point clear: collaboration, not competition, breeds success.

The second misconception expressed in the video is that mathematics (indeed, all learning) is like a staircase, and you have to learn the concepts in, say, 6th grade before you can learn anything in 7th. Beyond basic technical proficiency, this is simply not true for the kinds of skills we want to develop in our students. And the standards for basic technical proficiency in mathematics have never really changed: be competent in arithmetic and know what a variable is by the end of grade school; be competent in basic algebraic manipulation by the end of middle school or freshman year of high school. Then the rest of high school is about being exposed to other areas, most often geometry, trigonometry, calculus, and stats, none of which depends on another too heavily (at the level taught in high school). The details of exactly what technical skills are required for high school students are unclear. Why? Because as one goes from elementary school to middle school to high school, the focus on learned skills should gradually change from mechanical abilities to big ideas, and that transition arguably begins some time in middle school.

One of the common core “big ideas” we’ll look at later is that of similarity in geometric figures. But it’s clear that you can go through your entire life’s work without thinking about similar triangles, and it’s hardly relevant to most disciplines. Why then, should we teach it? Well, there’s a very good reason, but it’s at the heart of what the Common Core standards are missing, so we’ll save the explanation for later.

The video does make one good point: a standard is not learning, and learning only happens with great teachers. But really, meeting a standard isn’t learning either! It’s just evidence of learning, which is extremely hard to measure. But even worse is meeting the wrong standards, and that’s what I’m afraid the Common Core is promoting.

Speaking of which, here are the actual standards themselves for the reader to peruse. The Common Core website has the full standard, also available as a pdf, and CPS released a lengthy pdf explaining the standards in detail more or less specific to the school system. CPS also provides example lessons on their website, and I’m pleasantly surprised that a handful of the lessons try to flesh out the more insightful ideas I find lacking in the Common Core itself.

So let’s dive right on in.

## Common Core: Too Narrow and Too General

The standard starts out by describing a set of general guidelines which I generally agree with. They are:

1. Make sense of problems and persevere in solving them.
2. Reason abstractly and quantitatively.
3. Construct viable arguments and critique the reasoning of others.
4. Model with mathematics.
5. Use appropriate tools strategically.
6. Attend to precision.
7. Look for and make use of structure.
8. Look for and express regularity in repeated reasoning.

But true as they are, the descriptions of these tenets are either far too narrow or far too general. Take, for example, this excerpt from the description of the (arguably MOST mathematical) skill “Look for and express regularity in repeated reasoning,” also known as “reasoning about patterns.”

Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. … Noticing the regularity in the way terms cancel when expanding $(x-1)(x+1), (x-1)(x^2 + x + 1),$ and $(x-1)(x^3 + x^2 + x + 1)$ might lead them to the general formula for the sum of a geometric series. As they work to solve a problem, mathematically proficient students maintain an oversight of the process while attending to the details. They continually evaluate the reasonableness of their intermediate results.

Yes, a mathematically proficient student will be able to infer a general pattern here. But it’s phrased in a way that makes it seem like expanding products of polynomials is the central goal while the real mathematical skill is just “looking for a shortcut” in calculations. Moreover, the sentence that follows is equally applicable to any profession. Indeed, while working to cook a meal, a proficient chef maintains an oversight of the process while attending to details, and continually evaluates the reasonableness of their intermediate results. Finding shortcuts is not mathematical, nor is it culinary. But reasoning about those shortcuts is, whether or not they’re correct. I have plenty of calculus students who “find shortcuts” that simply aren’t true, but don’t bother to think about them, and are hence exercising no mathematical abilities. It’s a fine distinction that the Common Core seems to ignore at some times and embrace at others.

The best example of this is in “Construct viable arguments and critique the reasoning of others.” Here they wonderfully lay out the kind of logical reasoning students should learn in mathematics. How I wish that all mathematics education was based around this sole principle! The problem is that none of these thoughts are reflected in the standards themselves! Instead, the standards generally simply request that a student “knows” a particular argument, not that they generate original ideas or critique the ideas of others. The generation of new mathematical questions and arguments is, without a doubt, the best way to learn mathematical thinking.

Indeed, let’s take a closer look at the standards themselves.

## Inspecting the Standards Themselves

The list of Common Core Standards breaks mathematical abilities down by grade level and by area. I think the Washington Post article gives a very good critique of the lowest grade standards, so let’s focus on high school level. This is where I claim the true big ideas must shine through, if they come up anywhere at all.

The high school standards are broken up into areas by subject:

• Number and Quantity
• Algebra
• Functions
• Modeling
• Geometry
• Statistics and Probability

So far so good, I guess. I’m quite pleased to see statistics recognized here. Let’s start with “Number and Quantity.” This section is broken into “The Real Number System,” “Quantities,” “The Complex Number System,” and “Vector and Matrix Quantities.”

The first one, “The Real Number System,” already shows some huge red flags. There are three standards here, and I quote:

1. Extend the properties of exponents to rational exponents.
    1. Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define $5^{1/3}$ to be the cube root of 5 because we want $(5^{1/3})^3 = 5^{(1/3) \cdot 3}$ to hold, so $(5^{1/3})^3$ must equal 5.
    2. Rewrite expressions involving radicals and rational exponents using the properties of exponents.
2. Use properties of rational and irrational numbers.
    1. Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number and an irrational number is irrational.

This is supposed to indicate that someone understands real numbers? Number 1 only shows that one knows how to do arithmetic with exponents, asking the student to know a very specific argument, and number 2 is just memorizing some basic properties of rational numbers. There are some HUGE questions left unasked. Here are a few:

1. What is a real number? How does it differ from other kinds of numbers?
2. Is infinity a real number? If so, how does it fit with the definition of a real number? If not, is it some other kind of number?
3. What does it mean to be rational and irrational?
4. What are some examples of irrational numbers? Why are those examples irrational?
5. Are there more irrational numbers than rational numbers? Vice versa? Are they “equal” in size?
6. If we can’t know it exactly, how would you estimate the value of a number like $\pi^4$?
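That last question has a satisfying hands-on answer, and it previews the approximation idea I return to below. Here is one sketch of it in Python (the particular bounds $3.14 < \pi < 3.15$ are my own choice for illustration): since $x \mapsto x^4$ is increasing on positive numbers, rational bounds on $\pi$ squeeze $\pi^4$ using nothing but exact rational arithmetic.

```python
from fractions import Fraction

# If 3.14 < pi < 3.15, then 3.14^4 < pi^4 < 3.15^4, because
# x -> x^4 is increasing for positive x. All arithmetic here is exact.
lo, hi = Fraction(314, 100), Fraction(315, 100)
lo4, hi4 = lo ** 4, hi ** 4
print(float(lo4), float(hi4))  # pi^4 is squeezed between roughly 97.21 and 98.46
```

Tightening the bounds on $\pi$ tightens the squeeze, and a student can see exactly how fast the error shrinks.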

To be fair, the grade 8 standards address some of these questions, but in an odd way. Rather than say that students should know that real numbers can be (sort of) defined by a finite integer part and an infinite decimal expansion, it says

Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number.

Does that mean that a number can have multiple decimal expansions? Does every decimal expansion also correspond to a number? Or can a decimal expansion represent multiple numbers? What about the number whose “decimal expansion” has an infinite number of 1's before the decimal point? Does that mean that infinity is a real number? As you can see, these are some very basic questions about real numbers, which are arguably more stimulating and important than being able to convert back and forth between decimal expansions and rational numbers (as the 8th grade standard requires, but nobody actually does for numbers harder than 1/3).
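For what it’s worth, the conversion the 8th grade standard asks for does have one uniform recipe, which a short sketch makes concrete (the function name and interface here are my own invention for illustration): a repeating block of $k$ digits $d$ contributes $d / (10^k - 1)$, shifted past any non-repeating prefix.

```python
from fractions import Fraction

def repeating_to_fraction(prefix: str, repeating: str) -> Fraction:
    """Convert 0.<prefix>(<repeating>) to an exact fraction.

    E.g. 0.1(6) = 0.1666... corresponds to prefix="1", repeating="6".
    """
    k, m = len(repeating), len(prefix)
    # 0.abc(def) = abc/10^m + (1/10^m) * def/(10^k - 1)
    head = Fraction(int(prefix) if prefix else 0, 10 ** m)
    tail = Fraction(int(repeating), 10 ** k - 1) / 10 ** m
    return head + tail

print(repeating_to_fraction("", "3"))       # 1/3
print(repeating_to_fraction("", "142857"))  # 1/7
print(repeating_to_fraction("1", "6"))      # 1/6
```

Of course, the interesting mathematics is in *why* the $10^k - 1$ denominator works (it is a geometric series in disguise), not in executing the recipe.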

The important point I want to make here is that the truly “Big Ideas” underlying this topic are as follows, and they’re only halfway related to numbers themselves:

1. Understand the importance of precise definitions, and be able to apply those definitions to simple questions, such as “Prove that 1/3 is a real number,” and “Argue why infinity is not a real number.”
2. Understand that we can define notation as we see fit, e.g. $5^{1/3}$, and more deeply that mathematical concepts are invented via definitions.
3. Understand the concept of a correspondence between two collections. Know how to argue that a correspondence is or is not bijective (by any other name).
4. Understand basic proofs of impossibility.
5. Understand the concept of approximation, and understand how to quickly get rough approximations of quantities. Extend this to estimate concrete real-world quantities, like the number of pianos in your hometown.

And these ideas are the actual ideas that the Common Core is looking for, the ones that apply across all of mathematics and actually relate to real critical thinking skills. But it seems that the only place in the standard where they address the idea of a correspondence is in a Kindergarten “Counting” standard:

CCSS.Math.Content.K.CC.C.6 Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.

“Number and Quantity” goes on to describe the importance of consistently using units and rounding measurements to the right number of digits; memorizing properties of complex numbers (which any introductory college professor will re-teach from scratch anyway); and more rote manipulation of vectors and matrices that few high school students have any reason to know. Other big ideas apply here, ideas from geometry and the idea of correspondence in particular, but the standards still focus on mechanical abilities.

The Common Core folks argue in quite a few places that knowing the reasons for why certain mathematical facts are true is what constitutes a true understanding of those facts. And yes, I agree. But there’s much more to the story. If we want students to know why we define $5^{1/3}$ as we do, to make a nice extension of the rules of exponent arithmetic, it’s certainly a deeper understanding than just memorizing how to do the arithmetic itself. But it’s just another kind of memorization! It’s memorization of a specific mathematical reason for a specific mathematical fact. It’s a better kind of memorization than we used to require, but is it critical thinking or problem solving? It’s hard to say whether or not it requires more of the mental faculty we want it to, but if it’s not then we lose, and if it is, then this is a pretty indirect way of going about it.

Again, my big point here is that the requirements of the Standard overlook the deep underlying mathematical thinking skills that we hope are being developed when we ask them to know whatever it is we want them to know. These big concepts like correspondence and impossibility and approximation should be the central focus. The particular rules of exponents and the specific properties of irrational numbers, these are tools and sidenotes that accentuate fluency in the big concepts as applied to solving problems. Almost nobody needs to know facts about irrational numbers in their careers, but relating things by correspondence is a truly useful mathematical skill.

## Taking a Step Back

So let’s pause for a moment and give some counterpoint. I could just be focusing super narrowly on one or two topics that I feel the Common Core misrepresents, and using that gripe to claim the entire Common Core is crap when it actually has lots of merit in other areas.

While I do think that the standard addresses a few topics well (more on that later), I claim the pattern of “Number and Quantity” is endemic. Take for example the section on Geometry, Measurement & Dimension. I was really hopeful here that one of the standards would be “Understand what dimension means,” but no dice. Instead it’s the same old memorization of formulas for volumes of geometric shapes.

Even worse, when students are asked to derive these formulas, the standard says

Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri’s principle, and informal limit arguments.

I’d be surprised if many of my readers had even heard of Cavalieri’s principle before reading this, but this is the pattern striking again. The standard expects the students to know something very specific: not just how to use Cavalieri’s principle, but how to apply it just to these special objects, while ignoring the underlying principles at work. A true understanding of measurement and volume would be:

Reason about the volume of a solid you’ve never seen before.

But more deeply, Cavalieri’s principle is just another kind of correspondence argument applied to geometry. And I see this right away without muddling through that terribly written Wikipedia page because I have a solid understanding of the notion of a correspondence and I can recognize it at work.

The “dissection” argument is another deep principle that is mostly ignored in the standard, so let me spell it out. One way to solve problems is to break them down into simpler problems you know how to solve, and to figure out how to piece them back together again. Any high school student can understand this technique because by high school they’ve learned to put their pants on one leg at a time. But this idea alone accounts for a wide breadth of mathematical solutions to problems (the day it was applied to signal processing is often credited as the day the Age of Information began!). So they should be seeing (and using) this technique applied to many problems, be they about algebra or geometry or cats. It shouldn’t be hidden away in a single (perhaps memorized) geometric argument.

And finally, the “limit argument” (called exhaustion in some educational circles, but I don’t like this name) is an application of approximation. The idea is that a sequence of increasingly good lies will eventually give you the truth, and it’s one of the deepest thoughts that mankind has ever had! So indeed, the “Big Ideas” across this standard are big ideas, but the writers of the standard neglect to point out their significance in favor of very specific and arguably pointless factual requirements.

The geometry section is full of other similar nonsense: using laws of sines and cosines, the same geometry proofs that Paul Lockhart derides (page 19), and memorizing minutiae about the equations of parabolas, ellipses, hyperbolas. They say the big ideas are similarity, transformations, and symmetry (indeed these are big ideas!) but then largely revert to the same old awful kinds of rote memorization and symbol pushing we’ve grown to hate about math.

A real course in geometry, measurement, and mathematical problem solving might even follow Lockhart’s book, Measurement. In fact, if students really absorbed the contents of this book, that would constitute an entire high school mathematics education. Why do I say that? Because this book focuses on developing mathematical thinking skills in a way that no high school education I’ve heard of has attempted. This book emphasizes methods and exploration over facts, and teaches readers to conjecture and reason without telling them how to do everything. It provides numerous exercises without clear answers and has no solution manual. And I claim that any facts required by the standards that are not covered there could be taught to a student who is comfortable with this book in one month or less. That is, all of the “standards” simply fall out of the more important deeper concepts, and we should be working forward from the deep ideas. We should use things like the law of sines as examples of these deep principles in action, but knowing or not knowing the law of sines or when to use it gives little indication of critical thinking.

I could continue with algebra, and the other sections, but I think my point is clear. The standards are filled with the same arbitrary choices of technical facts, and the deep ideas, the kinds of thinking we want to develop, are absent.

## Modeling and Other Big Ideas

There is one aspect of mathematical problem solving that I think the Common Core addresses well, and that is modeling. That is, students need to be able to take a poorly defined problem, whether it’s “analyzing the stopping distance for a car” or asking what constitutes a number, and boil it down to its essence. This means making and questioning assumptions, debating the quality of a model, testing and revising, and interpreting results in a principled way. This is arguably the only kind of mathematics that non-mathematicians do outside of academia, and I feel that the description in the Common Core does justice to its importance. Even better, they admit there can be no “list” of facts the students are expected to know about specific models or tools. Here the Common Core gives in to the truth that discussion and original arguments are the key to developing fluency. In my imagination there were a select few key players lobbying for this to be included in the Core, and I say bravo to you, well done!

But there are some other big ideas that the Common Core misses entirely.

One of these is the idea of generalization. That is, one core part of mathematical thinking is to take a solution to a given problem and extend it to more general patterns and problems. I see only vague allusions to this concept in the Common Core (students are expected to know, for example, how to generate a sequence when given a pattern). But this is literally the core stuff of mathematical problem solving: if you can’t solve a hard problem, try to simplify it until you can solve it, and then try to generalize your solution back to the original problem. This is why it makes sense to think about matrices and polynomials as “generalizations of integers,” because natural facts about integers extend (or don’t extend as the case may be) to these more general settings. Students should be comfortable facing problems that may require simplification and mathematically “feeling around” for insights.

The second idea is that of the algorithm. I’m not talking about programming, that’s a different story. I’m talking about procedures that anyone might follow to get something done. People follow algorithms all day, and some of the most natural problems (and interesting problems for students to think about) are algorithmic in nature: how to guarantee you win a game, how to find the quickest way to get somewhere, how to win the heart of that cute guy or girl. Indeed, students are expected to follow algorithms all over the Common Core, from approximating irrationals by rationals to solving algebra and making inferences. The only non-algorithmic aspect of the Common Core is modeling, and here they provide an algorithm for how to do it! And so it makes sense to study exactly what makes an algorithm an algorithm, when algorithms apply, and more deeply what makes an algorithm good.

I find it hard to believe a mathematician would ever make such a diagram…

The last point I want to make is that true mathematical understanding arises from trying to solve problems that you are not told how to solve ahead of time, and recognizing when these big ideas apply and when they do not. Students love to solve puzzles for their own sake, and they don’t need to be embedded in stupid “real world” applications like computing mortgage payments. Indeed, this is what Sergio Correa did in his financially destitute school in Mexico, and his students have made progress beyond belief (see, Common Core people? Money is not the problem or the solution!). It’s okay for problems to be left unsolved by students for days, weeks, or even years, and students need to be comfortable with identifying their own lack of understanding.

I want to expand on the idea a bit more. Taking it to the extreme, you could ask a more daring question: should students be exposed to problems they cannot possibly solve? My answer is yes! Emphatically, yes! A thousand times, yes! Students need to be exposed to many kinds of problems they cannot solve to be prepared for a world in which most problems don’t have known solutions (or else they wouldn’t be problems in the first place). Here are a few examples:

1. They should be exposed to problems that can be solved in principle but are too hard to solve with the techniques they know well.

For example: elementary level students who are just beginning to learn about variables should be asked to add up all the numbers between 1 and 100. They should be encouraged to try it by hand until they’re convinced it’s too hard, and they should be rewarded if they actually do manage to do it by hand. They should then be encouraged to think of other, cleverer, ways to solve the problem. No idea is crazier than adding it up by hand, and so much time (at least a full class session) should be spent puzzling over what in the world could possibly be done. Finally, an elegant solution should be shown that reduces the problem to multiplication, and the use of variables highlighted (let S be the sum of these numbers, even though we don’t know what it is…). And then the problem can be extended to a general sum of the first few integers, sums of squares, and so on.

2. They should be exposed to problems that cannot be solved with any technique they know but will foreshadow their education in future classes.

When students are learning about the slope of a linear function, they must be encouraged to wonder how one could reason about the steepness of nonlinear things. For it’s obvious that some nonlinear functions are steeper than others at different places, but how can we use a single number to compare them like we do for slopes of lines? The answer is that we cannot! The “correct” way is to invent calculus, but of course the calculus way of doing it involves extending the usual notion of slopes of lines by taking limits. The students will not know this, nor will they find it out by the end of their algebra class, but it should linger in their minds as a motivating question: there are always more unanswered questions! What about the “steepness” of surfaces? Can we talk about the “steepness” of time? Students should readily ask and be asked such intriguing questions (again, this is generalization at work).

3. They should be asked obvious technical questions that appear not to have any technique at all.

For example, they might be asked the difficult question: is $\pi$ a rational number? Indeed, this is an extremely natural question to ask, since $\pi$ is defined to be a ratio of two numbers: the circumference and diameter of a circle. But despite the fact that there are many proofs using a variety of techniques, almost all proofs that $\pi$ is irrational are beyond the abilities of high school students to follow and not even familiar to the average college math major. I certainly couldn’t prove it off the top of my head. This is quite different than the previous kind of problem, because there it was obvious that you can reason about the steepness of nonlinear functions, the students just don’t know how to formulate it rigorously. But here, the rigorous question is understood (can $\pi$ be represented as a quotient of integers) but it’s unclear whether the problem is easy or hard to solve, and it turns out to be hard.

4. They should be exposed to problems that NOBODY knows how to solve.

When students learn about rational numbers (and if they know about $e$, which I doubt they should before calculus but I’ll use it in my example anyway), they could be asked whether $\pi + e$ is rational. This is an open problem in mathematics. If they’re learning about prime numbers, they should be asked whether every positive integer can be written as a sum of two primes. Every even positive integer? Every even integer greater than 2? And so they go through the process of refining “stupid” questions (with obvious “no” answers) into deep open conjectures. And then it can be connected to other ideas: can an algorithm answer this question? How long might it take? Can we try to correspond integers to something else? Can we give an approximation argument? There are so many simple open problems in number theory that it baffles me that many students are never exposed to them.
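The refined conjecture in that last example, that every even integer greater than 2 is a sum of two primes (Goldbach’s conjecture), is exactly the kind of open problem a student can probe computationally while learning about primes. A minimal sketch, with helper names of my own choosing:

```python
def is_prime(n: int) -> bool:
    """Trial division; perfectly adequate for classroom-sized numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 1000 has a witness; the conjecture itself
# (that this holds for ALL even n > 2) remains open.
assert all(goldbach_witness(n) is not None for n in range(4, 1001, 2))
```

Checking a million cases proves nothing, and realizing *why* it proves nothing is itself one of the big ideas.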

The point of all this is that mathematics, and mathematical problem solving skills, are not just about picking the right tool from the set of tools you’ve been taught. It’s about recognizing when any tool you’ve ever heard of even applies! More deeply, it’s about debating with your colleagues that problems can and cannot be solved using certain methods, and giving principled reasons why you think so. This sounds like “modeling,” and indeed the strategies used for modeling also apply to pure mathematics, a fact that most people don’t realize. Critical thinking and mathematical problem solving is more akin to art and debate than to mechanical computation. And the most interesting problems are the most natural questions one could ask, not the contrived “compute the volume of an oddly shaped wine glass” questions. Those are busy work questions, not open for discussion and interpretation. And the students know it.

## Why This Matters

A reasonable objection to my rant goes as follows: why does it matter that the Common Core isn’t super clear about these “big ideas” I’m claiming are so central? If the teachers are knowledgeable they’ll know what is important and what isn’t important, and how to teach the material in the manner that best promotes learning.

The problem is one of intention and misdirection. If the teachers are not rock-solid in their own understanding, then this Common Core, promoted by The World’s Leading Education Experts, can easily narrow their teaching to just what’s in the Core. More disturbing are the people who don’t know mathematics well: the principals, policy makers, and standardized test writers who really do take these guidelines at face value. Even if a teacher has a good reason to favor one area like modeling over memorizing facts about hyperbolas, they will be met with the same kind of obtuse opposition by administrators seeking short-term business goals. The standards exist, one might argue, to explain to these people (the people who wouldn’t know mathematical reasoning if it hit them in the face) that the teachers are teaching ideas much deeper than the rules of matrix multiplication. The Common Core represents this adequately with regards to modeling, but little else.

The Common Core also claims that the standards should be separated from specific curriculum and pedagogy, and one would reasonably argue that what I’m presenting here is pedagogy, not standards. Regardless of whether you agree or disagree with this, it still remains that the Common Core is designed to influence curriculum and pedagogy. And so even if the Common Core must have “facts” as standards, if it fails to emphasize the deep ideas underlying the factual obligations then it fails to influence pedagogy in the right way. In doing so, it reinforces obviously bad practices like teaching to the test.

The important thing to realize is that the correct pedagogy is already basically known: from a young age students should explore and reason and puzzle without horse blinders. Sometimes there are some dry factual things they cannot escape, but such is true of everything. So the separation of mathematical church and state (pedagogy and standards) claimed by the Common Core seems to be entirely a political one. It would infringe on the freedom of the teachers to impose pedagogical constraints, especially ones that only work for some environments. If this necessarily causes deficiencies of a global set of standards, then it is simply the wrong approach.

Again, I cannot say for sure whether the writers of the standard don’t understand the mathematics well enough, and it would be pointlessly arrogant to imply my own superiority. I hate to think it’s a bureaucracy issue, and that the designers felt the only progress they could make was to emphasize modeling as well as they did. If this is the truth then it is a sad one, because where I and many of the teachers sit, our country is stuck with the results.

We don’t need more compartmentalization by subject and grade. We do need a recognition of the deep critical thinking skills we want to teach. “Abstract reasoning” is not a specific enough goal to warrant policy. We need to admit to our teachers and our students exactly what we’re trying to get them to learn. And then we can organize education based on increasingly sophisticated applications of those ideas, to thinking about shapes, numbers, modeling, to whatever you want. Then students won’t forget about counting as “matching” after kindergarten ends, or only consider approximations related to irrational numbers. They will instead see these ideas blossom over time into the mental Swiss Army knives that they are. And they will use these ideas as a foundation to acquire whatever factual knowledge they might need to succeed in their careers.

# Optimism in the Face of Uncertainty: the UCB1 Algorithm

The software world is always atwitter with predictions on the next big piece of technology. And a lot of chatter focuses on what venture capitalists express interest in. As an investor, how do you pick a good company to invest in? Do you notice quirky names like “Kaggle” and “Meebo,” require deep technical abilities, or value a charismatic sales pitch?

This author personally believes we’re not thinking as big as we should be when it comes to innovation in software engineering and computer science, and that as a society we should value big pushes forward much more than we do. But making safe investments is almost always at odds with innovation. And so every venture capitalist faces the following question. When do you focus investment in those companies that have proven to succeed, and when do you explore new options for growth? A successful venture capitalist must strike a fine balance between this kind of exploration and exploitation. Explore too much and you won’t make enough profit to sustain yourself. Narrow your view too much and you will miss out on opportunities whose return surpasses any of your current prospects.

In life and in business there is no correct answer on what to do, partly because we just don’t have a good understanding of how the world works (or markets, or people, or the weather). In mathematics, however, we can meticulously craft settings that have solid answers. In this post we’ll describe one such scenario, the so-called multi-armed bandit problem, and a simple algorithm called UCB1 which performs close to optimally. Then, in a future post, we’ll analyze the algorithm on some real world data.

As usual, all of the code used in the making of this post is available for download on this blog’s Github page.

## Multi-Armed Bandits

The multi-armed bandit scenario is simple to describe, and it boils the exploration-exploitation tradeoff down to its purest form.

Suppose you have a set of $K$ actions labeled by the integers $\left \{ 1, 2, \dots, K \right \}$. We call these actions in the abstract, but in our minds they’re slot machines. We can then play a game where, in each round, we choose an action (a slot machine to play), and we observe the resulting payout. Over many rounds, we might explore the machines by trying some at random. Assuming the machines are not identical, we naturally play machines that seem to pay off well more frequently to try to maximize our total winnings.
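The game just described can be sketched as a simulation loop. The code below assumes Bernoulli payoffs for concreteness (the general setting allows any $[0,1]$-valued rewards), and the function names are illustrative rather than taken from this blog’s repository:

```python
import random

def play(policy, payoff_probs, rounds):
    """Run the bandit game for a number of rounds and return total winnings.

    policy(history) picks an action index each round, where history is the
    list of (action, reward) pairs observed so far. payoff_probs[i] is the
    hidden chance that machine i pays out 1 unit (Bernoulli payoffs are an
    assumption made here for illustration).
    """
    history = []
    total = 0
    for _ in range(rounds):
        i = policy(history)
        reward = 1 if random.random() < payoff_probs[i] else 0
        history.append((i, reward))
        total += reward
    return total

# A pure-exploration baseline: pick one of 3 machines uniformly at random.
winnings = play(lambda history: random.randrange(3), [0.2, 0.5, 0.8], rounds=1000)
```

Any policy, from the uniformly random baseline to something clever like UCB1, is just a different function plugged into the same loop, which is what makes comparing them on equal footing easy.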

This is the most general description of the game we could possibly give, and every bandit learning problem has these two components: actions and rewards. But in order to get to a concrete problem that we can reason about, we need to specify more details. Bandit learning is a large tree of variations and this is the point at which the field ramifies. We presently care about two of the main branches.

How are the rewards produced? There are many ways that the rewards could work. One nice option is to have the rewards for action $i$ be drawn from a fixed distribution $D_i$ (a different reward distribution for each action), and have the draws be independent across rounds and across actions. This is called the stochastic setting and it’s what we’ll use in this post. Just to pique the reader’s interest, here’s the alternative: instead of having the rewards be chosen randomly, have them be adversarial. That is, imagine a casino owner knows your algorithm and your internal beliefs about which machines are best at any given time. He then fixes the payoffs of the slot machines in advance of each round to screw you up! This sounds dismal, because the casino owner could just make all the machines pay nothing every round. But we can in fact design good algorithms for this case, though “good” will mean something different from absolute winnings. And so we must ask:

How do we measure success? In both the stochastic and the adversarial setting, we’re going to have a hard time coming up with any theorems about the performance of an algorithm if we care about how much absolute reward is produced. There’s nothing to stop the distributions from having terrible expected payouts, and nothing to stop the casino owner from intentionally giving us no payout. Indeed, the problem lies in our measurement of success. A better measurement, which we can apply to both the stochastic and adversarial settings, is the notion of regret. We’ll give the definition for the stochastic case, and investigate the adversarial case in a future post.

Definition: Given a player algorithm $A$ and a set of actions $\left \{1, 2, \dots, K \right \}$, the cumulative regret of $A$ in rounds $1, \dots, T$ is the difference between the expected reward of the best action (the action with the highest expected payout) and the expected reward of $A$ for the first $T$ rounds.

We’ll add some more notation shortly to rephrase this definition in symbols, but the idea is clear: we’re competing against the best action. Had we known it ahead of time, we would have just played it every single round. Our notion of success is not in how well we do absolutely, but in how well we do relative to what is feasible.

## Notation

Let’s go ahead and draw up some notation. As before the actions are labeled by integers $\left \{ 1, \dots, K \right \}$. The reward of action $i$ is a $[0,1]$-valued random variable $X_i$ distributed according to an unknown distribution and possessing an unknown expected value $\mu_i$. The game progresses in rounds $t = 1, 2, \dots$ so that in each round we have different random variables $X_{i,t}$ for the reward of a single action $i$ in round $t$. The $X_{i,t}$ are independent as both $t$ and $i$ vary, although when $i$ varies the distribution changes.

So if we were to play action 2 over and over for $T$ rounds, then the total payoff would be the random variable $G_2(T) = \sum_{t=1}^T X_{2,t}$. But by independence across rounds and the linearity of expectation, the expected payoff is just $\mu_2 T$. So we can describe the best action as the action with the highest expected payoff. Define

$\displaystyle \mu^* = \max_{1 \leq i \leq K} \mu_i$

We call the action which achieves the maximum $i^*$.

A policy is a randomized algorithm $A$ which picks an action in each round based on the history of chosen actions and observed rewards so far. Define $I_t$ to be the action played by $A$ in round $t$ and $P_i(n)$ to be the number of times we’ve played action $i$ in rounds $1 \leq t \leq n$. These are both random variables. Then the cumulative payoff for the algorithm $A$ over the first $T$ rounds, denoted $G_A(T)$, is just

$\displaystyle G_A(T) = \sum_{t=1}^T X_{I_t, t}$

and its expected value is simply

$\displaystyle \mathbb{E}(G_A(T)) = \mu_1 \mathbb{E}(P_1(T)) + \dots + \mu_K \mathbb{E}(P_K(T))$.

Here the expectation is taken over all random choices made by the policy and over the distributions of rewards, and indeed both of these can affect how many times a machine is played.
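To make the expectation identity concrete, here is a tiny numerical sanity check. The means and expected play counts below are invented for illustration, not drawn from any real policy:

```python
# Three machines with means mu and expected play counts E[P_i(T)] for some
# hypothetical policy over T = 100 rounds. The play counts must sum to T.
mu = [0.2, 0.5, 0.8]
expected_plays = [10.0, 20.0, 70.0]
T = 100

expected_payoff = sum(m * p for m, p in zip(mu, expected_plays))
expected_regret = max(mu) * T - expected_payoff

# In exact arithmetic: payoff = 0.2*10 + 0.5*20 + 0.8*70 = 68,
# so regret = 0.8*100 - 68 = 12.
assert abs(sum(expected_plays) - T) < 1e-9
assert abs(expected_regret - 12) < 1e-9
```

Notice that regret only cares about how often the suboptimal machines were played, weighted by how far their means fall short of $\mu^*$.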

Now the cumulative regret of a policy $A$ after the first $T$ steps, denoted $R_A(T)$ can be written as

$\displaystyle R_A(T) = G_{i^*}(T) - G_A(T)$

And the goal of the policy designer for this bandit problem is to minimize the expected cumulative regret, which by linearity of expectation is

$\mathbb{E}(R_A(T)) = \mu^*T - \mathbb{E}(G_A(T))$.
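To keep these definitions straight, here’s a quick Python sketch of the expected-regret formula (the arm means and play counts below are made up for illustration):

```python
def expected_regret(mus, expected_plays):
    # E(G_A(T)) = mu_1 E(P_1(T)) + ... + mu_K E(P_K(T))
    T = sum(expected_plays)
    expected_payoff = sum(m * p for m, p in zip(mus, expected_plays))
    # E(R_A(T)) = mu* T - E(G_A(T))
    return max(mus) * T - expected_payoff

# a uniformly random policy over K = 3 arms plays each about T/K times
# in expectation, so with T = 300 its expected regret is 240 - 150 = 90
print(expected_regret([0.2, 0.5, 0.8], [100, 100, 100]))  # 90.0 (up to rounding)
```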

Before we continue, we should note that there are theorems concerning lower bounds for expected cumulative regret. Specifically, for this problem it is known that no algorithm can guarantee an expected cumulative regret better than $\Omega(\sqrt{KT})$. It is also known that there are algorithms that guarantee no worse than $O(\sqrt{KT})$ expected regret. The algorithm we’ll see in the next section, however, only guarantees $O(\sqrt{KT \log T})$. We present it on this blog because of its simplicity and ubiquity in the field.

## The UCB1 Algorithm

The policy we examine is called UCB1, and it can be summed up by the principle of optimism in the face of uncertainty. That is, despite our lack of knowledge in what actions are best we will construct an optimistic guess as to how good the expected payoff of each action is, and pick the action with the highest guess. If our guess is wrong, then our optimistic guess will quickly decrease and we’ll be compelled to switch to a different action. But if we pick well, we’ll be able to exploit that action and incur little regret. In this way we balance exploration and exploitation.

The formalism is a bit more detailed than this, because we’ll need to ensure that we don’t rule out good actions that fare poorly early on. Our “optimism” comes in the form of an upper confidence bound (hence the acronym UCB). Specifically, we want to know with high probability that the true expected payoff of an action $\mu_i$ is less than our prescribed upper bound. One general (distribution independent) way to do that is to use the Chernoff-Hoeffding inequality.

As a reminder, suppose $Y_1, \dots, Y_n$ are independent random variables whose values lie in $[0,1]$ and whose expected values are $\mu_i$. Call $Y = \frac{1}{n}\sum_{i}Y_i$ and $\mu = \mathbb{E}(Y) = \frac{1}{n} \sum_{i} \mu_i$. Then the Chernoff-Hoeffding inequality gives an exponential upper bound on the probability that the value of $Y$ deviates from its mean. Specifically,

$\displaystyle \textup{P}(Y + a < \mu) \leq e^{-2na^2}$

For us, the $Y_i$ will be the payoff variables for a single action $j$ in the rounds for which we choose action $j$. Then the variable $Y$ is just the empirical average payoff for action $j$ over all the times we’ve tried it. Moreover, $a$ is the width of our one-sided confidence bound (which we’ll occasionally also use as a lower bound). We can then solve this inequality for $a$ to find a bound big enough to be confident that we’re within $a$ of the true mean.

Indeed, if we call $n_j$ the number of times we played action $j$ thus far, then $n = n_j$ in the inequality above, and using $a = a(j,T) = \sqrt{2 \log(T) / n_j}$ we get that $\textup{P}(Y > \mu + a) \leq T^{-4}$, which converges to zero very quickly as the number of rounds played grows. We’ll see this pop up again in the algorithm’s analysis below. But before that, note two things. First, if we don’t play an action $j$, its confidence width $a(j, T)$ grows with the number of rounds. This means that we never permanently rule out an action no matter how poorly it performs; if we get extremely unlucky with the optimal action, we will eventually be convinced to try it again. Second, the probability that our upper bound is wrong decreases in the number of rounds independently of how many times we’ve played the action. That is because the failure probability $T^{-4}$ depends only on $T$: our choice of $a(j,T)$ exactly cancels the dependence on $n_j$ in the exponent of the Chernoff-Hoeffding bound.
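To make the numbers concrete, here’s a small sketch of the confidence radius and its failure probability (the round count, play count, and true mean are hypothetical):

```python
import math, random

def radius(t, n_j):
    # a(j, t) = sqrt(2 log t / n_j), chosen so that the Chernoff-Hoeffding
    # failure probability e^(-2 n_j a^2) works out to exactly t^(-4)
    return math.sqrt(2 * math.log(t) / n_j)

t, n, mu = 1000, 50, 0.4  # made-up values for illustration
print(radius(t, n))       # about 0.53
print(t ** -4)            # failure probability bound: 1e-12

# empirical check: the true mean essentially never exceeds the upper bound
random.seed(0)
failures = sum(
    sum(random.random() < mu for _ in range(n)) / n + radius(t, n) < mu
    for _ in range(2000)
)
print(failures)  # 0
```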

With these two facts in mind, we can formally state the algorithm and intuitively understand why it should work.

UCB1:

1. Play each of the $K$ actions once, giving initial values for the empirical mean payoffs $\overline{x}_i$ of each action $i$.
2. For each round $t = K, K+1, \dots$:
   - Let $n_j$ represent the number of times action $j$ was played so far.
   - Play the action $j$ maximizing $\overline{x}_j + \sqrt{2 \log t / n_j}$.
   - Observe the reward $X_{j,t}$ and update the empirical mean for the chosen action.

And that’s it. Note that we’re being super stateful here: the empirical means $\overline{x}_j$ change over time, and we’ll leave this update implicit throughout the rest of our discussion (sorry, functional programmers, but the notation is horrendous otherwise).

Before we implement and test this algorithm, let’s go ahead and prove that it achieves nearly optimal regret. The reader uninterested in mathematical details should skip the proof, but the discussion of the theorem itself is important. If one wants to use this algorithm in real life, one needs to understand the guarantees it provides in order to adequately quantify the risk involved in using it.

Theorem: Suppose that UCB1 is run on the bandit game with $K$ actions, each of whose reward distribution $X_{i,t}$ has values in [0,1]. Then its expected cumulative regret after $T$ rounds is at most $O(\sqrt{KT \log T})$.

Actually, we’ll prove a more specific theorem. Let $\Delta_i$ be the difference $\mu^* - \mu_i$, where $\mu^*$ is the expected payoff of the best action, and let $\Delta$ be the minimal nonzero $\Delta_i$. That is, $\Delta_i$ represents how suboptimal an action is and $\Delta$ is the suboptimality of the second best action. These constants are called problem-dependent constants. The theorem we’ll actually prove is:

Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret $\mathbb{E}(R_{\textup{UCB1}}(T))$ is at most

$\displaystyle 8 \sum_{i : \mu_i < \mu^*} \frac{\log T}{\Delta_i} + \left ( 1 + \frac{\pi^2}{3} \right ) \left ( \sum_{j=1}^K \Delta_j \right )$

Okay, this looks like one nasty puppy, but it’s actually not that bad. The first term of the sum signifies that we expect to play any suboptimal machine about a logarithmic number of times, roughly scaled by how hard it is to distinguish from the optimal machine. That is, if $\Delta_i$ is small we will require more tries to know that action $i$ is suboptimal, and hence we will incur more regret. The second term represents a small constant number (the $1 + \pi^2 / 3$ part) that caps the number of times we’ll play suboptimal machines in excess of the first term due to unlikely events occurring. So the first term is like our expected losses, and the second is our risk.
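To get a feel for the magnitudes involved, here’s a sketch that evaluates this bound for a hypothetical problem instance (the gaps and horizon below are made up):

```python
import math

def ucb1_regret_bound(deltas, T):
    # 8 * sum over suboptimal arms of log(T) / Delta_i,
    # plus (1 + pi^2/3) * sum of all Delta_j
    main = 8 * sum(math.log(T) / d for d in deltas if d > 0)
    slack = (1 + math.pi ** 2 / 3) * sum(deltas)
    return main + slack

# three arms with gaps Delta = 0, 0.1, 0.4 over a million rounds:
# the logarithmic term dominates, and the "risk" term is a small constant
print(ucb1_regret_bound([0.0, 0.1, 0.4], 10 ** 6))  # about 1384
```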

But note that this is a worst-case bound on the regret. We’re not saying we will achieve this much regret, or anywhere near it, but that UCB1 simply cannot do worse than this. Our hope is that in practice UCB1 performs much better.

Before we prove the theorem, let’s see how to derive the $O(\sqrt{KT \log T})$ bound mentioned above. This will require familiarity with multivariable calculus, but such things must be endured like ripping off a band-aid. First consider the regret as a function $R(\Delta_1, \dots, \Delta_K)$ (excluding of course $\Delta_{i^*}$), and let’s look at the worst case bound by maximizing it. In particular, we’re just finding the problem with the parameters which screw our bound as badly as possible. The gradient of the regret function is given by

$\displaystyle \frac{\partial R}{\partial \Delta_i} = - \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3}$

and it’s zero if and only if for each $i$, $\Delta_i = \sqrt{\frac{8 \log T}{1 + \pi^2/3}} = O(\sqrt{\log T})$. However this is a minimum of the regret bound (the Hessian is diagonal and all its eigenvalues are positive). Plugging in the $\Delta_i = O(\sqrt{\log T})$ (which are all the same) gives a total bound of $O(K \sqrt{\log T})$. If we look at the only possible endpoint (all the $\Delta_i = 1$), then we get a bound of $O(K \log T)$. But this isn’t the $O(\sqrt{KT \log T})$ we promised, so what gives? Well, this upper bound grows arbitrarily large as the $\Delta_i$ go to zero. But at the same time, if all the $\Delta_i$ are small, then we shouldn’t be incurring much regret because we’ll be picking actions that are close to optimal!

Indeed, if we assume for simplicity that all the $\Delta_i = \Delta$ are the same, then another trivial regret bound is $\Delta T$ (why?). The true regret is hence the minimum of this regret bound and the UCB1 regret bound: as the UCB1 bound degrades we will eventually switch to the simpler bound. That will be a non-differentiable switch (and hence a critical point) and it occurs at $\Delta = O(\sqrt{(K \log T) / T})$. Hence the regret bound at the switch is $\Delta T = O(\sqrt{KT \log T})$, as desired.
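This switch is easy to see numerically. The sketch below (with made-up values of $K$ and $T$) compares the two bounds at the crossover point:

```python
import math

def ucb1_bound(delta, K, T):
    # the worst-case bound above with all K gaps equal to delta
    return 8 * K * math.log(T) / delta + (1 + math.pi ** 2 / 3) * K * delta

def trivial_bound(delta, T):
    # playing any arm costs at most delta regret per round
    return delta * T

K, T = 10, 10 ** 6
crossover = math.sqrt(8 * K * math.log(T) / T)  # Delta = O(sqrt(K log T / T))
best = min(ucb1_bound(crossover, K, T), trivial_bound(crossover, T))
print(best)                            # about 33000
print(math.sqrt(K * T * math.log(T)))  # the O(sqrt(K T log T)) scale: about 11800
```

Both numbers are of the same order, which is all a big-O bound promises.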

## Proving the Worst-Case Regret Bound

Proof. The proof works by finding a bound on $\mathbb{E}(P_i(T))$, the expected number of times UCB1 chooses action $i$ up to round $T$. Using the $\Delta$ notation, the expected regret is then just $\sum_i \Delta_i \mathbb{E}(P_i(T))$, and bounding the $\mathbb{E}(P_i(T))$’s will bound the regret.

Recall the notation for our upper bound $a(j, T) = \sqrt{2 \log T / P_j(T)}$ and let’s loosen it a bit to $a(y, T) = \sqrt{2 \log T / y}$ so that we’re allowed to “pretend” an action has been played $y$ times. Recall further that the random variable $I_t$ has as its value the index of the machine chosen. We denote by $\chi(E)$ the indicator random variable for the event $E$. And remember that we use an asterisk to denote a quantity associated with the optimal action (e.g., $\overline{x}^*$ is the empirical mean of the optimal action).

Indeed for any action $i$, the only way we know how to write down $P_i(T)$ is as

$\displaystyle P_i(T) = 1 + \sum_{t=K}^T \chi(I_t = i)$

The 1 is from the initialization where we play each action once, and the sum just counts the number of rounds in which we pick action $i$. Now we’re going to pull some number $m-1$ of plays out of that summation, keep it variable, and try to optimize over it. Since we might play the action fewer than $m$ times overall, this requires an inequality.

$P_i(T) \leq m + \sum_{t=K}^T \chi(I_t = i \textup{ and } P_i(t-1) \geq m)$

These indicator functions should be read as sentences: we’re just saying that we’re picking action $i$ in round $t$ and we’ve already played $i$ at least $m$ times. Now we’re going to focus on the inside of the summation, and come up with an event that happens at least as frequently as this one to get an upper bound. Specifically, saying that we’ve picked action $i$ in round $t$ means that the upper bound for action $i$ exceeds the upper bound for every other action. In particular, this means its upper bound exceeds the upper bound of the best action (and $i$ might coincide with the best action, but that’s fine). In notation this event is

$\displaystyle \overline{x}_i + a(P_i(t-1), t-1) \geq \overline{x}^* + a(P^*(t-1), t-1)$

Denote by $U_i(t)$ the upper bound $\overline{x}_i + a(P_i(t), t)$ for action $i$ in round $t$. Since this event must occur every time we pick action $i$ (though not necessarily vice versa), we have

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi(U_i(t-1) \geq U^*(t-1) \textup{ and } P_i(t-1) \geq m)$

We’ll do this process again but with a slightly more complicated event. If the upper bound of action $i$ exceeds that of the optimal machine, it is also the case that the maximum upper bound for action $i$ we’ve seen after the first $m$ trials exceeds the minimum upper bound we’ve seen on the optimal machine (ever). But on round $t$ we don’t know how many times we’ve played the optimal machine, nor do we even know how many times we’ve played machine $i$ (except that it’s more than $m$). So we try all possibilities and look at minima and maxima. This is a pretty crude approximation, but it will allow us to write things in a nicer form.

Denote by $\overline{x}_{i,s}$ the random variable for the empirical mean after playing action $i$ a total of $s$ times, and $\overline{x}^*_s$ the corresponding quantity for the optimal machine. Realizing everything in notation, the above argument proves that

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi \left ( \max_{m \leq s < t} \overline{x}_{i,s} + a(s, t-1) \geq \min_{0 < s' < t} \overline{x}^*_{s'} + a(s', t-1) \right )$

Indeed, at each $t$ for which the max is greater than the min, there will be at least one pair $s,s'$ for which the values of the quantities inside the max/min will satisfy the inequality. And so, even worse, we can just count the number of pairs $s, s'$ for which it happens. That is, we can expand the event above into the double sum which is at least as large:

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t-1) \geq \overline{x}^*_{s'} + a(s', t-1) \right )$

We can make one other odd inequality by increasing the sum to go from $t=1$ to $\infty$. This will become clear later, but it means we can replace $t-1$ with $t$ and thus have

$\displaystyle P_i(T) \leq m + \sum_{t=1}^\infty \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \right )$

Now that we’ve slogged through this mess of inequalities, we can actually get to the heart of the argument. Suppose that this event actually happens, that $\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)$. Then what can we say? Well, consider the following three events:

(1) $\displaystyle \overline{x}^*_{s'} \leq \mu^* - a(s', t)$
(2) $\displaystyle \overline{x}_{i,s} \geq \mu_i + a(s, t)$
(3) $\displaystyle \mu^* < \mu_i + 2a(s, t)$

In words, (1) is the event that the empirical mean of the optimal action is less than the lower confidence bound. By our Chernoff-Hoeffding argument earlier, this happens with probability at most $t^{-4}$. Likewise, (2) is the event that the empirical mean payoff of action $i$ is larger than the upper confidence bound, which also occurs with probability at most $t^{-4}$. We will see momentarily that (3) is impossible for a well-chosen $m$ (which is why we left it variable), but in any case the claim is that at least one of these three events must occur. For if they are all false, we have

$\displaystyle \begin{matrix} \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) & > & \mu^* - a(s',t) + a(s',t) = \mu^* \\ \textup{assumed} & (1) \textup{ is false} & \\ \end{matrix}$

and

$\begin{matrix} \mu_i + 2a(s,t) & > & \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \\ & (2) \textup{ is false} & \textup{assumed} \\ \end{matrix}$

But putting these two inequalities together gives us precisely that (3) is true:

$\mu^* < \mu_i + 2a(s,t)$

This proves the claim.

By the union bound, the probability that at least one of these events happens is at most $2t^{-4}$ plus whatever the probability of (3) being true is. But as we said, we’ll pick $m$ to make (3) always false. Indeed $m$ depends on which action $i$ is being played, and if $s \geq m > 8 \log T / \Delta_i^2$ then $2a(s,t) \leq \Delta_i$, and by the definition of $\Delta_i$ we have

$\mu^* - \mu_i - 2a(s,t) \geq \mu^* - \mu_i - \Delta_i = 0$.

Now we can finally piece everything together. The expected value of an indicator variable is just the probability of its event occurring, and so

$\displaystyle \begin{aligned} \mathbb{E}(P_i(T)) & \leq m + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t \textup{P}(\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)) \\ & \leq \left \lceil \frac{8 \log T}{\Delta_i^2} \right \rceil + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t 2t^{-4} \\ & \leq \frac{8 \log T}{\Delta_i^2} + 1 + \sum_{t=1}^\infty \sum_{s=1}^t \sum_{s' = 1}^t 2t^{-4} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + 2 \sum_{t=1}^\infty t^{-2} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3} \\ \end{aligned}$

The second line is the Chernoff-Hoeffding bound we argued above, the third and fourth lines are relatively obvious algebraic manipulations, and the last equality uses the classic solution to the Basel problem. Plugging this upper bound into the regret formula we gave in the first paragraph of the proof establishes the bound and proves the theorem.
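As a quick numerical sanity check on that last equality, a sketch of the partial sums of $\sum_t t^{-2}$ converging to $\pi^2/6$ (so twice the sum is $\pi^2/3$):

```python
import math

# partial sum of the Basel series; the tail beyond N is at most 1/N,
# so with N = 100000 the doubled sum is within 2e-5 of pi^2/3
partial = sum(t ** -2 for t in range(1, 100001))
print(abs(2 * partial - math.pi ** 2 / 3) < 1e-4)  # True
```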

$\square$

## Implementation and an Experiment

The algorithm is about as simple to write in code as it is in pseudocode. The confidence bound is trivial to implement (though note we index from zero):

```python
import math

def upperBound(step, numPlays):
    return math.sqrt(2 * math.log(step + 1) / numPlays)
```


And the full algorithm is quite short as well. We define a function ucb1, which accepts as input the number of actions and a function reward which accepts as input the index of the action and the time step, and draws from the appropriate reward distribution. Then implementing ucb1 is simply a matter of keeping track of empirical averages and an argmax. We implement the function as a Python generator, so one can observe the steps of the algorithm and keep track of the confidence bounds and the cumulative regret.

```python
def ucb1(numActions, reward):
    payoffSums = [0] * numActions
    numPlays = [1] * numActions
    ucbs = [0] * numActions

    # initialize empirical sums by playing each action once
    for t in range(numActions):
        payoffSums[t] = reward(t, t)
        yield t, payoffSums[t], ucbs

    t = numActions

    while True:
        ucbs = [payoffSums[i] / numPlays[i] + upperBound(t, numPlays[i])
                for i in range(numActions)]
        action = max(range(numActions), key=lambda i: ucbs[i])
        theReward = reward(action, t)
        numPlays[action] += 1
        payoffSums[action] += theReward

        yield action, theReward, ucbs
        t = t + 1
```


The heart of the algorithm is the second part, where we compute the upper confidence bounds and pick the action maximizing its bound.
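To see the generator in action, here is a minimal self-contained driver. The definitions above are repeated so the sketch runs on its own, and the two Bernoulli arms and their means are made up for illustration:

```python
import math, random
from itertools import islice

# repeated from the listings above so this sketch is standalone
def upperBound(step, numPlays):
    return math.sqrt(2 * math.log(step + 1) / numPlays)

def ucb1(numActions, reward):
    payoffSums = [0] * numActions
    numPlays = [1] * numActions
    ucbs = [0] * numActions
    for t in range(numActions):
        payoffSums[t] = reward(t, t)
        yield t, payoffSums[t], ucbs
    t = numActions
    while True:
        ucbs = [payoffSums[i] / numPlays[i] + upperBound(t, numPlays[i])
                for i in range(numActions)]
        action = max(range(numActions), key=lambda i: ucbs[i])
        theReward = reward(action, t)
        numPlays[action] += 1
        payoffSums[action] += theReward
        yield action, theReward, ucbs
        t = t + 1

# a toy run: two Bernoulli arms with (hypothetical) means 0.3 and 0.7
random.seed(0)
means = [0.3, 0.7]
reward = lambda i, t: 1 if random.random() < means[i] else 0

cumulative = 0
for action, r, _ in islice(ucb1(2, reward), 10000):
    cumulative += r
print(cumulative / 10000)  # close to 0.7, since the better arm dominates
```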

We tested this algorithm on synthetic data. There were ten actions and a million rounds, and the reward distributions for each action were uniform from $[0,1]$, biased by $1/k$ for some $5 \leq k \leq 15$. The regret and theoretical regret bound are given in the graph below.

The regret of ucb1 run on a simple example. The blue curve is the cumulative regret of the algorithm after a given number of steps. The green curve is the theoretical upper bound on the regret.

Note that both curves are logarithmic, and that the actual regret is quite a lot smaller than the theoretical regret. The code used to produce the example and image is available on this blog’s Github page.

## Next Time

One interesting assumption that UCB1 makes in order to do its magic is that the payoffs are stochastic and independent across rounds. Next time we’ll look at an algorithm that assumes the payoffs are instead adversarial, as we described earlier. Surprisingly, in the adversarial case we can do about as well as the stochastic case. Then, we’ll experiment with the two algorithms on a real-world application.

Until then!