In the last twenty years there has been a lot of research in a subfield of machine learning called Bandit Learning. The name comes from the problem of being faced with a large sequence of slot machines (once called one-armed bandits) each with a potentially different payout scheme. The problems in this field all focus on one central question:
If I have many available actions with uncertain outcomes, how should I act to maximize the quality of my results over many trials?
The deep question here is how to balance exploitation, the desire to choose an action which has paid off well in the past, with exploration, the desire to try options which may produce even better results. The ideas are general enough that it’s hard not to find applications: choosing which drug to test in a clinical study, choosing which companies to invest in, choosing which ads or news stories to display to users, and even (as Richard Feynman once wondered) how to maximize your dining enjoyment.
In less recent times (circa the 1960s), this problem was posed and considered in the case where the payoff mechanisms had a very simple structure: each slot machine is a coin flip with a different probability of winning, and the player’s goal is to find the best machine as quickly as possible. We called this the “stochastic” setting, and last time we saw a modern strategy called UCB1 which maintained statistical estimates on the payoffs of the actions and chose the action with the highest estimate. The underlying philosophy was “optimism in the face of uncertainty,” and it gave us something provably close to optimal.
Unfortunately payoff structures are more complex than coin flips in the real world. Having “optimism” is arguably naive, especially when it comes to competitive scenarios like stock trading. Indeed the algorithm we’ll analyze in this post will take the polar opposite stance, that payoffs could conceivably operate in any manner. This is called the adversarial model, because even though the payoffs are fixed in advance of the game beginning, it can always be the case that the next choice you make results in the worst possible payoff.
One might wonder how we can hope to do anything in such a pessimistic model. As we’ll see, our notion of performing well is relative to the best single slot machine, and we will argue that this is the only reasonable notion of success. On the other hand, one might argue that real world payoffs are almost never entirely adversarial, and so we would hope that algorithms which do well theoretically in the adversarial model excel beyond their minimal guarantees in practice.
In this post we’ll explore and implement one algorithm for adversarial bandit learning, called Exp3, and in the next post we’ll see how it fares against UCB1 in some applications. Some prerequisites: since the main algorithm presented in this post is randomized, its analysis requires some familiarity with techniques and notation from probability theory. Specifically, we will assume that the reader is familiar with the content of this blog’s basic probability theory primers (1, 2), though the real difficulty in the analysis will be keeping up with all of the notation.
In case the reader is curious, Exp3 was invented in 2001 by Auer, Cesa-Bianchi, Freund, and Schapire. Here is their original paper, which contains lots of other mathematical goodies.
As usual, all of the code and data produced in the making of this blog post is available for download on this blog’s Github page.
Model Formalization and Notions of Regret
Before we describe the algorithm and analyze it, we have to set up the problem formally. The first few paragraphs of our last post give a high-level picture of general bandit learning, so we won’t repeat that here. Recall, however, that we have to describe both the structure of the payoffs and how success is measured. So let’s describe the former first.
Definition: An adversarial bandit problem is a pair $(K, \mathbf{x})$, where $K$ represents the number of actions (henceforth indexed by $i$), and $\mathbf{x}$ is an infinite sequence of payoff vectors $\mathbf{x} = \mathbf{x}(1), \mathbf{x}(2), \dots$, where $\mathbf{x}(t) = (x_1(t), \dots, x_K(t))$ is a vector of length $K$ and $x_i(t) \in [0,1]$ is the reward of action $i$ on step $t$.
In English, the game is played in rounds (or “time steps”) indexed by $t = 1, 2, \dots$, and the payoffs are fixed for each action and time before the game even starts. Note that we assume the reward of an action is a number in the interval $[0,1]$, but all of our arguments in this post can be extended to payoffs in some range $[a, b]$ by shifting by $a$ and dividing by $b - a$.
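As a tiny illustration of this rescaling (the function name here is ours, not from the post):

```python
def rescale(x, a, b):
    # map a reward x in [a, b] to the interval [0, 1]
    return (x - a) / float(b - a)
```

Running the algorithm on the rescaled rewards and un-rescaling afterwards recovers the original payoffs.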
Let’s specify what the player (algorithm designer) knows during the course of the game. First, the value of $K$ is given, and the total number of rounds is kept secret. In each round, the player has access to the history of rewards for the actions that were chosen by the algorithm in previous rounds, but not the rewards of unchosen actions. In other words, it will only ever know one $x_{i_t}(t)$ for each $t$. To set up some notation, if we call $i_1, \dots, i_t$ the list of chosen actions over $t$ rounds, then at step $t+1$ the player has access to the values of $x_{i_1}(1), \dots, x_{i_t}(t)$ and must pick $i_{t+1}$ to continue.
So to be completely clear, the game progresses as follows:
- The player is given access to $K$.
- For each time step $t$:
  - The player must pick an action $i_t$.
  - The player observes the reward $x_{i_t}(t)$, which he may save for future use.
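To make the information constraints concrete, the loop above can be sketched in code as follows (the `playGame` harness and its interface are our own invention for illustration, not part of any bandit algorithm):

```python
import random

def playGame(numRounds, rewardVectors, pickAction):
    # pickAction receives the history of (action, reward) pairs observed so
    # far -- and nothing else -- and returns the index of the next action.
    history = []
    totalReward = 0

    for t in range(numRounds):
        action = pickAction(history)
        reward = rewardVectors[t][action]  # only the chosen action's payoff is revealed
        history.append((action, reward))
        totalReward += reward

    return totalReward

# e.g., a (bad) player that ignores history and picks uniformly among 3 actions
rewards = [[1, 0, 0] for _ in range(100)]
total = playGame(100, rewards, lambda history: random.randrange(3))
```

Note that the full reward vectors are known to the harness (the adversary), but the player only ever sees the rewards of its own choices.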
The problem gives no explicit limit on the amount of computation performed during each step, but in general we want it to run in polynomial time and not depend on the round number $t$. If the runtime even logarithmically depended on $t$, then we’d have a big problem using it for high-frequency applications. For example in ad serving, Google processes on the order of $10^9$ ads per day; so a logarithmic dependence wouldn’t be that bad, but at some point in the distant future Google wouldn’t be able to keep up (and we all want long-term solutions to our problems).
Note that the reward vectors must be fixed in advance of the algorithm running, but this still allows a lot of counterintuitive things. For example, the payoffs can depend adversarially on the particular algorithm the player decides to use: if the player chooses the stupid strategy of always picking the first action, then the adversary can just make that the worst possible action to choose. However, the rewards cannot depend on the random choices made by the player during the game.
So now let’s talk about measuring success. For an algorithm $A$ which chooses the sequence $i_1, \dots, i_t$ of actions, define $G_A(t)$ to be the sum of the observed rewards

$\displaystyle G_A(t) = \sum_{s=1}^t x_{i_s}(s)$
And because $A$ will often be randomized, this value is a random variable depending on the decisions made by $A$. As such, we will often only consider the payoff up to expectation. That is, we’ll be interested in how $\mathbb{E}(G_A(t))$ relates to other possible courses of action. To be completely rigorous, the randomization is not over “choices made by an algorithm,” but rather the probability distribution over sequences of actions that the algorithm induces. It’s a fine distinction but a necessary one. In other words, we could define any fixed sequence of actions $\mathbf{j} = (j_1, \dots, j_t)$ and define $G_{\mathbf{j}}(t)$ analogously as above:

$\displaystyle G_{\mathbf{j}}(t) = \sum_{s=1}^t x_{j_s}(s)$
Any algorithm and choice of reward vectors induces a probability distribution over sequences of actions in a natural way (if you want to draw from the distribution, just run the algorithm). So instead of conditioning our probabilities and expectations on previous choices made by the algorithm, we do it over histories of actions $\mathbf{i} = (i_1, \dots, i_t)$.
An obvious question we might ask is: why can’t the adversary just make all the payoffs zero? (or negative!) In this event the player won’t get any reward, but he can emotionally and psychologically accept this fate. If he never stood a chance to get any reward in the first place, why should he feel bad about the inevitable result? What a truly cruel adversary wants is, at the end of the game, to show the player what he could have won, and have it far exceed what he actually won. In this way the player feels regret for not using a more sensible strategy, and likely turns to binge eating cookie dough ice cream. Or more likely he returns to the casino to lose more money. The trick that the player has up his sleeve is precisely the randomness in his choice of actions, and he can use its objectivity to partially overcome even the nastiest of adversaries.
Sadism aside, this thought brings us to a few mathematical notions of regret that the player algorithm may seek to minimize. The first, most obvious, and least reasonable is the worst-case regret. Given a stopping time $T$ and a sequence of actions $\mathbf{j} = (j_1, \dots, j_T)$, the expected regret of algorithm $A$ with respect to $\mathbf{j}$ is the difference $G_{\mathbf{j}}(T) - \mathbb{E}(G_A(T))$. This notion of regret measures the regret of a player if he knew what would have happened had he played $\mathbf{j}$. The expected worst-case regret of $A$ is then the maximum over all sequences $\mathbf{j}$ of the regret of $A$ with respect to $\mathbf{j}$. This notion of regret seems particularly unruly, especially considering that the payoffs are adversarial, but there are techniques to reason about it.
However, the focus of this post is on a slightly easier notion of regret, called weak regret, which instead compares the results of $A$ to the best single action over all rounds. That is, this quantity is just

$\displaystyle \left( \max_{1 \leq j \leq K} \sum_{t=1}^T x_j(t) \right) - \mathbb{E}(G_A(T))$
We call the parenthetical term $G_{\max}(T)$. This kind of regret is a bit easier to analyze, and the main theorem of this post will give an upper bound on it for Exp3. The reader who read our last post on UCB1 will wonder why we make a big distinction here just to arrive at the same definition of regret that we had in the stochastic setting. But with UCB1 the best sequence of actions to take just happened to be to play the best action over and over again. Here, the payoff difference between the best sequence of actions and the best single action can be arbitrarily large: for instance, with two actions whose payoffs alternate between 0 and 1 in opposite phases, the best sequence earns $T$ while the best single action earns only about $T/2$.
Exp3 and an Upper Bound on Weak Regret
We now describe the Exp3 algorithm.
Exp3 stands for Exponential-weight algorithm for Exploration and Exploitation. It works by maintaining a list of weights $w_i$ for each of the actions, using these weights to decide randomly which action to take next, and increasing (decreasing) the relevant weights when a payoff is good (bad). We further introduce an egalitarianism factor $\gamma \in [0,1]$ which tunes the desire to pick an action uniformly at random. That is, if $\gamma = 1$, the weights have no effect on the choices at any step.
The algorithm is readily described in Python code, but we need to set up some notation used in the proof of the theorem. The pseudocode for the algorithm is as follows.
- Given $\gamma \in [0,1]$, initialize the weights $w_i(1) = 1$ for $i = 1, \dots, K$.
- In each round $t$:
  - Set $\displaystyle p_i(t) = (1-\gamma)\frac{w_i(t)}{\sum_{j=1}^K w_j(t)} + \frac{\gamma}{K}$ for each $i$.
  - Draw the next action $i_t$ randomly according to the distribution of $p_i(t)$.
  - Observe reward $x_{i_t}(t)$.
  - Define the estimated reward $\hat{x}_{i_t}(t)$ to be $x_{i_t}(t) / p_{i_t}(t)$.
  - Set $w_{i_t}(t+1) = w_{i_t}(t) e^{\gamma \hat{x}_{i_t}(t) / K}$, and set all other $w_j(t+1) = w_j(t)$.
The choices of these particular mathematical quantities (in steps 1, 4, and 5) are a priori mysterious, but we will explain them momentarily. In the proof that follows, we will extend $\hat{x}$ to indices other than $i_t$ and define those values to be zero.
The Python implementation is perhaps more legible, and implements the possibly infinite loop as a generator:
```python
import math

def exp3(numActions, reward, gamma):
    weights = [1.0] * numActions
    t = 0

    while True:
        probabilityDistribution = distr(weights, gamma)
        choice = draw(probabilityDistribution)
        theReward = reward(choice, t)

        # important that we use the estimated reward here!
        estimatedReward = 1.0 * theReward / probabilityDistribution[choice]
        weights[choice] *= math.exp(estimatedReward * gamma / numActions)

        yield choice, theReward, estimatedReward, weights
        t = t + 1
```
Here the “reward” argument refers to a callable which accepts as input the action chosen in round $t$ (it keeps track of $t$, assuming we’ll play nice), and returns as output the reward for that choice. The distr and draw functions are also easily defined, with the former depending on the gamma parameter as follows:
```python
def distr(weights, gamma=0.0):
    theSum = float(sum(weights))
    return tuple((1.0 - gamma) * (w / theSum) + (gamma / len(weights)) for w in weights)
```
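The draw function is not shown in the snippet above; one simple way to sample an index from a discrete distribution (a sketch, not necessarily the repository’s exact implementation) is:

```python
import random

def draw(probabilityDistribution):
    # pick a point uniformly in [0, total mass), then walk the
    # distribution until the running total passes it
    choice = random.uniform(0, sum(probabilityDistribution))
    choiceIndex = 0

    for probability in probabilityDistribution:
        choice -= probability
        if choice <= 0:
            return choiceIndex
        choiceIndex += 1

    return choiceIndex - 1  # guard against floating-point leftover
```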
There is one odd part of the algorithm above, and that’s the “estimated reward” $x_{i_t}(t) / p_{i_t}(t)$. The intuitive reason to do this is to compensate for a potentially small probability of getting the observed reward. More formally, dividing by $p_{i_t}(t)$ ensures that the conditional expectation of the “estimated reward” is the actual reward. We will explore this formally during the proof of the main theorem.
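Concretely, since action $j$ is chosen with probability $p_j(t)$ and $\hat{x}_j(t)$ is defined to be zero when $j$ is not chosen, conditioning on the history of choices up to round $t$ gives

```latex
\mathbb{E}\left( \hat{x}_j(t) \mid i_1, \dots, i_{t-1} \right)
  = p_j(t) \cdot \frac{x_j(t)}{p_j(t)} + (1 - p_j(t)) \cdot 0
  = x_j(t)
```

so the estimator is unbiased for every action, even the ones the algorithm rarely plays.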
We can now state and prove the upper bound on the weak regret of Exp3. Note all logarithms are base $e$.
Theorem: For any $K > 0$, $\gamma \in (0, 1]$, and any stopping time $T \in \mathbb{N}$,

$\displaystyle G_{\max}(T) - \mathbb{E}(G_{\text{Exp3}}(T)) \leq (e-1) \gamma G_{\max}(T) + \frac{K \ln K}{\gamma}$
This is a purely analytical result because we don’t actually know what $G_{\max}(T)$ is ahead of time. Also note how the factor of $\gamma$ occurs: in the first term, having a large $\gamma$ will result in a poor upper bound because it occurs in the numerator of that term: too much exploration means not enough exploitation. But it occurs in the denominator of the second term, meaning that not enough exploration can also produce an undesirably large regret. This theorem then provides a quantification of the tradeoff being made, although it is just an upper bound.
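As a quick sanity check on this tradeoff, we can evaluate the bound for a few values of $\gamma$ (the numbers below are made up for illustration):

```python
import math

def regretBound(gamma, gMax, numActions):
    # the upper bound from the theorem: (e-1) * gamma * G_max + K ln(K) / gamma
    return (math.e - 1) * gamma * gMax + numActions * math.log(numActions) / gamma

# with, say, G_max = 5000 and K = 10, both extremes lose to a middling gamma
for gamma in [0.005, 0.05, 0.5]:
    print(gamma, regretBound(gamma, 5000, 10))
```

Too little exploration blows up the second term, too much blows up the first, and the minimum sits in between.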
We present the proof in two parts. Part 1:
We made a notable mistake in part 1, claiming that $e^x \leq 1 + x + (e-2)x^2$ when $0 \leq x \leq 1$. In fact, this does follow from the Taylor series expansion of $e^x$, but it’s not as straightforward as I made it sound. In particular, note that $\displaystyle e^x = 1 + x + \sum_{k=2}^\infty \frac{x^k}{k!}$, and so $\displaystyle e^x - 1 - x = \sum_{k=2}^\infty \frac{x^k}{k!}$. Using $x^2$ in place of $x^k$ gives

$\displaystyle e^x - 1 - x \leq x^2 \sum_{k=2}^\infty \frac{1}{k!}$

And since $0 \leq x \leq 1$, each term in the sum can only increase when $x^k$ is replaced by $x^2$, and we’re left with exactly $(e-2)x^2$, because $\sum_{k=2}^\infty \frac{1}{k!} = e - 2$. In other words, this is the tightest possible quadratic upper bound on $e^x$. Pretty neat! On to part 2:
As usual, here is the entire canvas made over the course of both videos.
We can get a version of this theorem that is easier to analyze by picking a suitable choice of $\gamma$.
Corollary: Assume that $G_{\max}(T)$ is bounded by $g$, and that Exp3 is run with

$\displaystyle \gamma = \min \left( 1, \sqrt{\frac{K \ln K}{(e-1)g}} \right)$
Then the weak regret of Exp3 is bounded by $2.63 \sqrt{g K \ln K}$ for any reward vector $\mathbf{x}$.
Proof. Simply plug $\gamma$ into the bound in the theorem above, and note that $2 \sqrt{e-1} < 2.63$.
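In slightly more detail: with this choice of $\gamma$ (ignoring the $\min$ with 1), the two terms of the bound become equal, and using $G_{\max}(T) \leq g$,

```latex
(e-1)\gamma G_{\max}(T) + \frac{K \ln K}{\gamma}
  \leq \sqrt{(e-1) \, g K \ln K} + \sqrt{(e-1) \, g K \ln K}
  = 2\sqrt{e-1} \, \sqrt{g K \ln K}
  \leq 2.63 \sqrt{g K \ln K}
```

since $2\sqrt{e-1} \approx 2.628$.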
A Simple Test Against Coin Flips
Now that we’ve analyzed the theoretical guarantees of the Exp3 algorithm, let’s use our implementation above and see how it fares in practice. Our first test will use 10 coin flips (Bernoulli trials) for our actions, with the probabilities of winning (and the actual payoff vectors) defined as follows:
```python
import random

biases = [1.0 / k for k in range(2, 12)]
rewardVector = [[1 if random.random() < bias else 0 for bias in biases] for _ in range(numRounds)]
rewards = lambda choice, t: rewardVector[t][choice]
```
If we are to analyze the regret of Exp3 against the best action, we must compute the payoffs for all actions ahead of time, and compute which is the best. It will most likely be the one with the largest probability of winning (the first in the list generated above), but because the payoffs are random it might not be, so we have to compute it. Specifically, it’s the following argmax:
```python
bestAction = max(range(numActions),
                 key=lambda action: sum([rewardVector[t][action] for t in range(numRounds)]))
```
Where the max function is used as “argmax” would be in mathematics.
We also have to pick a good choice of $\gamma$, and the corollary from the previous section gives us a good guide to the optimal $\gamma$: simply find a good upper bound $g$ on the reward of the best action, and use that. We can cheat a little here: we know the best action has a probability of 1/2 of paying out, and so the expected reward if we always did the best action is half the number of rounds. If we use, say, $g = 2T/3$ and compute $\gamma$ using the formula from the corollary, this will give us a reasonable (but perhaps not perfectly correct) upper bound.
Then we just run the exp3 generator for $T = 10,000$ rounds, and compute some statistics as we go:
```python
bestUpperBoundEstimate = 2 * numRounds / 3
gamma = math.sqrt(numActions * math.log(numActions) / ((math.e - 1) * bestUpperBoundEstimate))
# gamma = 0.07  # uncomment to experiment with a hand-tuned value instead

cumulativeReward = 0
bestActionCumulativeReward = 0
weakRegret = 0

t = 0
for (choice, reward, est, weights) in exp3(numActions, rewards, gamma):
    cumulativeReward += reward
    bestActionCumulativeReward += rewardVector[t][bestAction]
    weakRegret = bestActionCumulativeReward - cumulativeReward
    regretBound = ((math.e - 1) * gamma * bestActionCumulativeReward
                   + (numActions * math.log(numActions)) / gamma)

    t += 1
    if t >= numRounds:
        break
```
At the end of one run of ten thousand rounds, the weights are overwhelmingly in favor of the best arm. The cumulative regret is 723, compared to the theoretical upper bound of 897. It’s not too shabby, but by tinkering with the value of $\gamma$ we see that we can get regrets lower than 500 (when $\gamma$ is around 0.07). Considering that the cumulative reward for the player is around 4,500 in this experiment, that means we spent only about 500 rounds out of ten thousand exploring non-optimal options (and also getting unlucky during said exploration). Not too shabby at all.
Here is a graph of a run of this experiment.
Note how the Exp3 algorithm never stops increasing its regret. This is in part because of the adversarial model; even if Exp3 finds the absolutely perfect action to take, it just can’t get over the fact that the world might try to screw it over. As long as the parameter $\gamma$ is greater than zero, Exp3 will explore bad options just in case they turn out to be good. The benefit of this is that if the model changes over time Exp3 will adapt, but the downside is that the pessimism inherent in this worldview generally results in lower payoffs than other algorithms.
More Variations, and Future Plans
Right now we have two contesting models of how the world works: is it stochastic and independent, like the UCB1 algorithm would optimize for? Or does it follow Exp3’s world view that the payoffs are adversarial? Next time we’ll run some real-world tests to see how each fares.
But before that, we should note that there are still more models we haven’t discussed. One extremely significant model is that of contextual bandits. That is, the real world settings we care about often come with some “context” associated with each trial. Ads being displayed to users have probabilities that should take into account the information known about the user, and medical treatments should take into account past medical history. While we will not likely investigate any contextual bandit algorithms on this blog in the near future, the reader who hopes to apply this work to his or her own exploits (no pun intended) should be aware of the additional reading.
Until next time!