Formulating the Support Vector Machine Optimization Problem

The hypothesis and the setup

This blog post has an interactive demo (mostly used toward the end of the post). The source for this demo is available in a Github repository.

Last time we saw how the inner product of two vectors gives rise to a decision rule: if w is the normal to a line (or hyperplane) L, the sign of the inner product \langle x, w \rangle tells you whether x is on the same side of L as w.

Let’s translate this to the parlance of machine learning. Let x \in \mathbb{R}^n be a training data point, and let y \in \{ 1, -1 \} be its label (green and red, in the images in this post). Suppose you want to find a hyperplane which separates all the points with -1 labels from those with +1 labels (assume for the moment that this is possible). For this and all examples in this post, we’ll use data in two dimensions, but the math will apply to any dimension.

problem_setup

Some data labeled red and green, which is separable by a hyperplane (line).

The hypothesis we’re proposing to separate these points is a hyperplane, i.e. a linear subspace that splits all of \mathbb{R}^n into two halves. The data that represents this hyperplane is a single vector w, the normal to the hyperplane, so that the hyperplane is defined by the solutions to the equation \langle x, w \rangle = 0.

As we saw last time, w encodes the following rule for deciding if a new point z has a positive or negative label.

\displaystyle h_w(z) = \textup{sign}(\langle w, z \rangle)

You’ll notice that this formula only works for the normals w of hyperplanes that pass through the origin, and generally we want to work with data that can be shifted elsewhere. We can resolve this by adding a fixed term b \in \mathbb{R}—often called a bias because statisticians came up with it—so that the shifted hyperplane is the set of solutions to \langle x, w \rangle + b = 0. The shifted decision rule is:

\displaystyle h_w(z) = \textup{sign}(\langle w, z \rangle + b)

Now the hypothesis is the pair of vector-and-scalar w, b.
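To make this concrete, here is a minimal sketch of the hypothesis in Python with numpy (not the eventual from-scratch implementation; the particular vectors are made up for illustration):

import numpy

def hypothesis(w, b):
    # the decision rule z -> sign(<w, z> + b) for the hyperplane defined by (w, b)
    def h(z):
        return numpy.sign(numpy.dot(w, z) + b)
    return h

# a made-up hyperplane and query point, just for illustration
h = hypothesis(numpy.array([1.0, 2.0]), -3.0)
print(h(numpy.array([4.0, 1.0])))   # 1.0, since <w, z> + b = 6 - 3 = 3 > 0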

The key intuitive idea behind the formulation of the SVM problem is that there are many possible separating hyperplanes for a given set of labeled training data. For example, here is a gif showing infinitely many choices.

svm_lots_of_choices.gif

The question is: how can we find the separating hyperplane that not only separates the training data, but generalizes as well as possible to new data? The assumption of the SVM is that a hyperplane which separates the points, but is also as far away from any training point as possible, will generalize best.

optimal_example.png

While contrived, it’s easy to see that the separating hyperplane is as far as possible from any training point.

More specifically, fix a labeled dataset of points (x_i, y_i), or more precisely:

\displaystyle D = \{ (x_i, y_i) \mid i = 1, \dots, m, x_i \in \mathbb{R}^{n}, y_i \in \{1, -1\}  \}

And a hypothesis defined by the normal w \in \mathbb{R}^{n} and a shift b \in \mathbb{R}. Let’s also suppose that (w,b) defines a hyperplane that correctly separates all the training data into the two labeled classes, and we just want to measure its quality. That measure of quality is the length of its margin.

Definition: The geometric margin of a hyperplane w with respect to a dataset D is the shortest distance from a training point x_i to the hyperplane defined by w.

The best hyperplane has the largest possible margin.

This margin can even be computed quite easily using our work from last post. The distance from x to the hyperplane defined by w is the same as the length of the projection of x onto w. And this is just computed by an inner product.

decision-rule-3

If the tip of the x arrow is the point in question, then a is its projection onto w; the length of a is the dot product \langle x, w \rangle (when w is a unit vector), and it is also the distance from x to the hyperplane L defined by w.
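Here is a hedged sketch of that margin computation in Python with numpy, using the point-to-hyperplane distance | \langle x, w \rangle + b | / \| w \| so the bias is handled too; the data points are made up:

import numpy

def geometric_margin(w, b, points):
    # the shortest distance from any point to the hyperplane <x, w> + b = 0
    w = numpy.array(w)
    return min(abs(numpy.dot(w, x) + b) for x in points) / numpy.linalg.norm(w)

points = [numpy.array(p) for p in [(1, 1), (2, 3), (-1, -2)]]   # made-up data
print(geometric_margin([1.0, 1.0], 0.0, points))   # 2 / sqrt(2), about 1.41, coming from the point (1, 1)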

A naive optimization objective

If we wanted to, we could stop now and define an optimization problem that would be very hard to solve. It would look like this:

\displaystyle \begin{aligned} & \max_{w} \min_{x_i} \left | \left \langle x_i, \frac{w}{\|w\|} \right \rangle + b \right | & \\ \textup{subject to \ \ } & \textup{sign}(\langle x_i, w \rangle + b) = \textup{sign}(y_i) & \textup{ for every } i = 1, \dots, m \end{aligned}

This formulation is hard to work with. The reason is that it’s horrifyingly nonlinear. In more detail:

  1. The constraints are nonlinear due to the sign comparisons.
  2. There’s a min and a max! A priori, we have to do this because we don’t know which point is going to be the closest to the hyperplane.
  3. The objective is nonlinear in two ways: the absolute value and the projection requires you to take a norm and divide.

The rest of this post (and indeed, a lot of the work in grokking SVMs) is dedicated to converting this optimization problem to one in which the constraints are all linear inequalities and the objective is a single, quadratic polynomial we want to minimize or maximize.

Along the way, we’ll notice some neat features of the SVM.

Trick 1: linearizing the constraints

To solve the first problem, we can use a trick. We want to know whether \textup{sign}(\langle x_i, w \rangle + b) = \textup{sign}(y_i) for a labeled training point (x_i, y_i). The trick is to multiply them together. If their signs agree, then their product will be positive, otherwise it will be negative.

So each constraint becomes:

\displaystyle (\langle x_i, w \rangle + b) \cdot y_i \geq 0

This is still linear because y_i is a constant (input) to the optimization problem. The variables are the coefficients of w.

The left hand side of this inequality is often called the functional margin of a training point, since, as we will see, it still works to classify x_i, even if w is scaled so that it is no longer a unit vector. Indeed, the sign of the inner product is independent of how w is scaled.
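As a quick sketch of this trick in code (Python with numpy again; the dataset and the helper name are made up for illustration), checking the linearized constraints over a whole dataset looks like this:

import numpy

def separates(w, b, data):
    # the linearized constraints: y_i * (<x_i, w> + b) >= 0 for every (x_i, y_i)
    return all(y * (numpy.dot(w, x) + b) >= 0 for x, y in data)

# made-up data: two green (+1) points and two red (-1) points
data = [((1.0, 3.0), 1), ((2.0, 4.0), 1), ((1.0, 0.5), -1), ((2.0, 1.0), -1)]
print(separates(numpy.array([0.0, 1.0]), -2.0, data))   # True: the horizontal line at height 2 separates them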

Trick 1.5: the optimal solution is midway between classes

This small trick is to notice that if w is the supposed optimal separating hyperplane, i.e. its margin is maximized, then it must necessarily be exactly halfway in between the closest points in the positive and negative classes.

In other words, if x_+ and x_- are the closest points in the positive and negative classes, respectively, then \langle x_{+}, w \rangle + b = -(\langle x_{-}, w \rangle + b). If this were not the case, you could adjust the bias, shifting the decision boundary along w until the two quantities are exactly equal, and doing so would increase the margin. The closest point, say x_+, would get farther away, and the closest point in the opposite class, x_-, would get closer, but never closer than x_+ originally was.

Trick 2: getting rid of the max + min

Resolving this problem essentially uses the fact that the hypothesis, which comes in the form of the normal vector w, has a degree of freedom in its length. To explain the details of this trick, we’ll set b=0 which simplifies the intuition.

Indeed, in the animation below, I can increase or decrease the length of w without changing the decision boundary.

svm_w_length.gif

I have to keep my hand very steady (because I was too lazy to program it so that it only increases/decreases in length), but you can see the point. The line is perpendicular to the normal vector, and it doesn’t depend on the length.

Let’s combine this with tricks 1 and 1.5. If we increase the length of w, that means the absolute values of the dot products \langle x_i, w \rangle used in the constraints will all increase by the same amount (without changing their sign). Indeed, for any vector a we have \langle a, w \rangle = \|w \| \cdot \langle a, w / \| w \| \rangle.

In this world, the inner product measurement of distance from a point to the hyperplane is no longer faithful. The true distance is \langle a, w / \| w \| \rangle, but the distance measured by \langle a, w \rangle is measured in units of 1 / \| w \|.

units.png

In this example, the two numbers next to the green dot represent the true distance of the point from the hyperplane, and the dot product of the point with the normal (respectively). The dashed lines are the solutions to <x, w> = 1. The magnitude of w is 2.2, the inverse of that is 0.46, and indeed 2.2 = 4.8 * 0.46 (we’ve rounded the numbers).

Now suppose we had the optimal hyperplane and its normal w. No matter how near (or far) the nearest positively labeled training point x is, we could scale the length of w to force \langle x, w \rangle = 1. This is the core of the trick. One consequence is that the actual distance from x to the hyperplane is \frac{1}{\| w \|} = \langle x, w / \| w \| \rangle.

units2.png

The same as above, but with the roles reversed. We’re forcing the inner product of the point with w to be 1. The true distance is unchanged.

In particular, if we force the closest point to have inner product 1, then all other points will have inner product at least 1. This has two consequences. First, our constraints change to \langle x_i, w \rangle \cdot y_i \geq 1 instead of \geq 0. Second, we no longer need to ask which point is closest to the candidate hyperplane! Because after all, we never cared which point it was, just how far away that closest point was. And now we know that it’s exactly 1 / \| w \| away. Indeed, if the optimal points weren’t at that distance, then that means the closest point doesn’t exactly meet the constraint, i.e. that \langle x, w \rangle > 1 for every training point x. We could then scale w shorter until \langle x, w \rangle = 1, hence increasing the margin 1 / \| w \|.

In other words (the coup de grâce): provided all the constraints are satisfied, the optimization objective is just to maximize 1 / \| w \|, a.k.a. to minimize \| w \|.
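If you want to see the scaling trick as code, here is a hedged sketch in Python with numpy. It assumes the given (w, b) already strictly separates the (made-up) data, so the smallest functional margin is positive:

import numpy

# a made-up dataset: (x_i, y_i) pairs with y_i in {1, -1}
data = [((1.0, 3.0), 1), ((2.0, 4.0), 1), ((1.0, 0.5), -1), ((2.0, 1.0), -1)]

def rescale_to_unit_functional_margin(w, b, data):
    # scale (w, b) so the closest point has y_i * (<x_i, w> + b) exactly 1;
    # the hyperplane is unchanged, and the geometric margin becomes 1 / ||w||
    closest = min(y * (numpy.dot(w, x) + b) for x, y in data)
    return w / closest, b / closest

w, b = rescale_to_unit_functional_margin(numpy.array([0.0, 2.0]), -4.0, data)
print(w, b, 1 / numpy.linalg.norm(w))   # [0. 1.] -2.0 1.0

After rescaling, the closest point meets its constraint with equality, and the geometric margin is exactly 1 / \| w \|.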

This intuition is clear from the following demonstration, which you can try for yourself. In it I have a bunch of positively and negatively labeled points, and the line in the center is the candidate hyperplane with normal w that you can drag around. Each training point has two numbers next to it. The first is the true distance from that point to the candidate hyperplane; the second is the inner product with w. The two blue dashed lines are the solutions to \langle x, w \rangle = \pm 1. To solve the SVM by hand, you have to ensure the second number is at least 1 for all green points, at most -1 for all red points, and then you have to make w as short as possible. As we’ve discussed, shrinking w moves the blue lines farther away from the separator, but in order to satisfy the constraints the blue lines can’t go further than any training point. Indeed, the optimum will have those blue lines touching a training point on each side.

svm_solve_by_hand

I bet you enjoyed watching me struggle to solve it. And while it’s probably not the optimal solution, the idea should be clear.

The final note is that, since we are now minimizing \| w \|, a formula which includes a square root, we may as well minimize its square \| w \|^2 = \sum_j w_j^2. We will also multiply the objective by 1/2, because when we eventually analyze this problem we will take a derivative, and the square in the exponent and the 1/2 will cancel.

The final form of the problem

Our optimization problem is now the following (including the bias again):

\displaystyle \begin{aligned} & \min_{w}  \frac{1}{2} \| w \|^2 & \\ \textup{subject to \ \ } & (\langle x_i, w \rangle + b) \cdot y_i \geq 1 & \textup{ for every } i = 1, \dots, m \end{aligned}

This is much simpler to analyze. The constraints are all linear inequalities (which, because of linear programming, we know are tractable to optimize). The objective to minimize, however, is not linear: it’s a convex quadratic function of the input variables—a sum of squares of the inputs.

Such problems are generally called quadratic programming problems (or QPs, for short). There are general methods to find solutions! However, they often suffer from numerical stability issues and have less-than-satisfactory runtime. Luckily, the form in which we’ve expressed the support vector machine problem is specific enough that we can analyze it directly, and find a way to solve it without appealing to general-purpose numerical solvers.
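Just to make the problem concrete before then, here is a hedged sketch that hands this exact QP to a general-purpose solver (scipy’s SLSQP routine), which is the kind of generic numerical approach we’d rather avoid in the end. The toy dataset is made up:

import numpy
from scipy.optimize import minimize

# a made-up, linearly separable toy dataset of (x_i, y_i) pairs with y_i in {1, -1}
data = [((1.0, 3.0), 1), ((2.0, 4.0), 1), ((1.0, 0.5), -1), ((2.0, 1.0), -1)]

def objective(z):               # z packs the variables as (w_1, w_2, b)
    w = z[:-1]
    return 0.5 * numpy.dot(w, w)

constraints = [
    {'type': 'ineq',            # scipy's convention: fun(z) >= 0
     'fun': lambda z, x=numpy.array(x), y=y: y * (numpy.dot(z[:-1], x) + z[-1]) - 1}
    for x, y in data
]

result = minimize(objective, x0=numpy.zeros(3), method='SLSQP', constraints=constraints)
w, b = result.x[:-1], result.x[-1]
print(w, b, 1 / numpy.linalg.norm(w))   # the normal, the bias, and the margin 1 / ||w||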

We will tackle this problem in a future post (planned for two posts after this one). Before we close, let’s just make a few more observations about the solution to the optimization problem.

Support Vectors

In Trick 1.5 we saw that the optimal separating hyperplane has to be exactly halfway between the two closest points of opposite classes. Moreover, we noticed that, provided we’ve scaled \| w \| properly, these closest points (there may be multiple for positive and negative labels) have to be exactly “distance” 1 away from the separating hyperplane.

Another way to phrase this without putting “distance” in scare quotes is to say that, if w is the normal vector of the optimal separating hyperplane, the closest points lie on the two lines \langle x_i, w \rangle + b = \pm 1.

Now that we have some intuition for the formulation of this problem, it isn’t a stretch to realize the following. While a dataset may include many points from either class on these two lines \langle x_i, w \rangle + b = \pm 1, the optimal hyperplane itself does not depend on any points other than these closest ones.

This fact is enough to give these closest points a special name: the support vectors.

We’ll actually prove that support vectors “are all you need” with full rigor and detail next time, when we cast the optimization problem in this post into the “dual” setting. To avoid vague names, the formulation described in this post is called the “primal” problem. The dual problem is derived from the primal problem, with special variables and constraints chosen based on the primal variables and constraints. Next time we’ll describe in brief detail what the dual does and why it’s important, but we won’t have nearly enough time to give a full understanding of duality in optimization (such a treatment would fill a book).

When we compute the dual of the SVM problem, we will see explicitly that the hyperplane can be written as a linear combination of the support vectors. As such, once you’ve found the optimal hyperplane, you can compress the training set into just the support vectors, and reproducing the same optimal solution becomes much, much faster. You can also use the support vectors to augment the SVM to incorporate streaming data (throw out all non-support vectors after every retraining).

Eventually, when we get to implementing the SVM from scratch, we’ll see all this in action.

Until then!

The Inner Product as a Decision Rule

The standard inner product of two vectors has some nice geometric properties. Given two vectors x, y \in \mathbb{R}^n, where by x_i I mean the i-th coordinate of x, the standard inner product (which I will interchangeably call the dot product) is defined by the formula

\displaystyle \langle x, y \rangle = x_1 y_1 + \dots + x_n y_n

This formula, simple as it is, produces a lot of interesting geometry. An important such property, one which is discussed in machine learning circles more than pure math, is that it is a very convenient decision rule.

In particular, say we’re in the Euclidean plane, and we have a line L passing through the origin, with w being a unit vector perpendicular to L (“the normal” to the line).

decision-rule-1

If you take any vector x, then the dot product \langle x, w \rangle is positive if x is on the same side of L as w, and negative otherwise. The dot product is zero if and only if x is exactly on the line L, including when x is the zero vector.

decision-rule-2

Left: the dot product of w and x is positive, meaning they are on the same side of L. Right: the dot product is negative, and they are on opposite sides.

Here is an interactive demonstration of this property. Click the image below to go to the demo, and you can drag the vector arrowheads and see the decision rule change.

decision-rule

Click above to go to the demo

The code for this demo is available in a github repository.
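If you want to poke at the rule outside the demo, here is a hedged sketch of it in plain Python (the function names are mine):

def inner_product(x, y):
    # the standard inner product <x, y> = x_1 y_1 + ... + x_n y_n
    return sum(xi * yi for xi, yi in zip(x, y))

def side_of_line(x, w):
    # +1 if x is on the same side of L as w, -1 on the other side, 0 if x is on L
    value = inner_product(x, w)
    return (value > 0) - (value < 0)

print(side_of_line((1, 2), (0, 1)))    # 1: the point is on the same side as the normal (0, 1)
print(side_of_line((1, -2), (0, 1)))   # -1: the other side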

It’s always curious, at first, that multiplying and summing produces such geometry. Why should this seemingly trivial arithmetic do anything useful at all?

The core fact that makes it work, however, is that the dot product tells you how one vector projects onto another. When I say “projecting” a vector x onto another vector w, I mean you take only the components of x that point in the direction of w. The demo shows what the result looks like using the red (or green) vector.

In two dimensions this is easy to see, as you can draw the triangle which has x as the hypotenuse, with w spanning one of the two legs of the triangle as follows:

decision-rule-3.png

If we call a the (vector) leg of the triangle parallel to w, while b is the dotted line (as a vector, parallel to L), then as vectors x = a + b. The projection of x onto w is just a.

Another way to think of this is that the projection is x, modified by removing any part of x that is perpendicular to w. Using some colorful language: you put your hands on either side of x and w, and then you squish x onto w along the line perpendicular to w (i.e., along b).

And if w is a unit vector, then the length of a—that is, the length of the projection of x onto w—is exactly the inner product \langle x, w \rangle.

Moreover, if the angle between x and w is larger than 90 degrees, the projected vector will point in the opposite direction of w, so it’s really a “signed” length.

decision-rule-4

Left: the projection points in the same direction as w. Right: the projection points in the opposite direction.

And this is precisely why the decision rule works. This 90-degree boundary is the line perpendicular to w.

More technically said: Let x, y \in \mathbb{R}^n be two vectors, and \langle x,y \rangle their dot product. Define by \| y \| the length of y, specifically \sqrt{\langle y, y \rangle}. Define \text{proj}_{y}(x) by first letting y' = \frac{y}{\| y \|}, and then letting \text{proj}_{y}(x) = \langle x,y' \rangle y'. In words, you scale y to a unit vector y', use it to compute the inner product, and then scale y' so that its length is \langle x, y' \rangle. Then

Theorem: Geometrically, \text{proj}_y(x) is the projection of x onto the line spanned by y.

This theorem is true for any n-dimensional vector space, since if you have two vectors you can simply apply the reasoning for 2-dimensions to the 2-dimensional plane containing x and y. In that case, the decision boundary for a positive/negative output is the entire n-1 dimensional hyperplane perpendicular to y (the vector being projected onto).

In fact, the usual formula for the angle between two vectors, i.e. the formula \langle x, y \rangle = \|x \| \cdot \| y \| \cos \theta, is a restatement of the projection theorem in terms of trigonometry. The \langle x, y' \rangle part of the projection formula (how much you scale the output) is equal to \| x \| \cos \theta. At the end of this post we have a proof of the cosine-angle formula above.
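Here is a hedged numeric sketch of the projection formula and the cosine identity, in Python with numpy; the two vectors are arbitrary, and the angle is computed independently with atan2 so the check isn’t circular:

import math
import numpy

def project(x, y):
    # proj_y(x) = <x, y'> y', where y' = y / ||y||
    y_unit = y / numpy.linalg.norm(y)
    return numpy.dot(x, y_unit) * y_unit

x, y = numpy.array([3.0, 1.0]), numpy.array([2.0, 0.0])
p = project(x, y)
print(p)                                          # [3. 0.], the component of x along y
print(numpy.isclose(numpy.dot(x - p, y), 0.0))    # True: what is left over is perpendicular to y

# checking <x, y> = ||x|| ||y|| cos(theta), with theta computed separately from atan2
theta = math.atan2(x[1], x[0]) - math.atan2(y[1], y[0])
print(numpy.isclose(numpy.dot(x, y),
                    numpy.linalg.norm(x) * numpy.linalg.norm(y) * math.cos(theta)))   # True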

Part of why this decision rule property is so important is that this is a linear function, and linear functions can be optimized relatively easily. When I say that, I specifically mean that there are many known algorithms for optimizing linear functions, which don’t have obscene runtime or space requirements. This is a big reason why mathematicians and statisticians start the mathematical modeling process with linear functions. They’re inherently simpler.

In fact, there are many techniques in machine learning—a prominent one is the so-called Kernel Trick—that exist solely to take data that is not inherently linear in nature (cannot be fruitfully analyzed by linear methods) and transform it into a dataset that is. Using the Kernel Trick as an example to foreshadow some future posts on Support Vector Machines, the idea is to take data which cannot be separated by a line, and transform it (usually by adding new coordinates) so that it can. Then the decision rule, computed in the larger space, is just a dot product. Irene Papakonstantinou neatly demonstrates this with paper folding and scissors. The tradeoff is that the size of the ambient space increases, and it might increase so much that it makes computation intractable. Luckily, the Kernel Trick avoids this by remembering where the data came from, so that one can take advantage of the smaller space to compute what would be the inner product in the larger space.

Next time we’ll see how this decision rule shows up in an optimization problem: finding the “best” hyperplane that separates an input set of red and blue points into monochromatic regions (provided that is possible). Finding this separator is a core subroutine of the Support Vector Machine technique, and therein lie interesting algorithms. After we see the core SVM algorithm, we’ll see how the Kernel Trick fits into the method to allow nonlinear decision boundaries.


Proof of the cosine angle formula

Theorem: The inner product \langle v, w \rangle is equal to \| v \| \| w \| \cos(\theta), where \theta is the angle between the two vectors.

Note that this angle is computed in the 2-dimensional subspace spanned by v, w, viewed as a typical flat plane, and this is a 2-dimensional plane regardless of the dimension of v, w.

Proof. If either v or w is zero, then both sides of the equation are zero and the theorem is trivial, so we may assume both are nonzero. Label a triangle with sides v,w and the third side v-w. Now the length of each side is \| v \|, \| w\|, and \| v-w \|, respectively. Assume for the moment that \theta is not 0 or 180 degrees, so that this triangle is not degenerate.

The law of cosines allows us to write

\displaystyle \| v - w \|^2 = \| v \|^2 + \| w \|^2 - 2 \| v \| \| w \| \cos(\theta)

Moreover, the left hand side is the inner product of v-w with itself, i.e. \| v - w \|^2 = \langle v-w , v-w \rangle. We’ll expand \langle v-w, v-w \rangle using two facts. The first is trivial from the formula: the inner product is symmetric, \langle v,w \rangle = \langle w, v \rangle. The second is that the inner product is linear in each input. In particular, for the first input: \langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle and \langle cx, z \rangle = c \langle x, z \rangle. The same holds for the second input by symmetry of the two inputs. Hence we can split up \langle v-w, v-w \rangle as follows.

\displaystyle \begin{aligned} \langle v-w, v-w \rangle &= \langle v, v-w \rangle - \langle w, v-w \rangle \\ &= \langle v, v \rangle - \langle v, w \rangle - \langle w, v \rangle +  \langle w, w \rangle \\ &= \| v \|^2 - 2 \langle v, w \rangle + \| w \|^2 \\ \end{aligned}

Combining our two offset equations, we can subtract \| v \|^2 + \| w \|^2 from each side and get

\displaystyle -2 \|v \| \|w \| \cos(\theta) = -2 \langle v, w \rangle,

which, after dividing by -2, proves the theorem if \theta \not \in \{0, 180 \}.

Now if \theta = 0 or 180 degrees, the vectors are parallel, so we can write one as a scalar multiple of the other. Say w = cv for c \in \mathbb{R}. In that case, \langle v, cv \rangle = c \| v \| \| v \|. Now \| w \| = | c | \| v \|, since a norm is a length and is hence non-negative (but c can be negative). Indeed, if v, w are parallel but pointing in opposite directions, then c < 0, so \cos(\theta) = -1, and c \| v \| = - \| w \|. Otherwise c > 0 and \cos(\theta) = 1. This allows us to write c \| v \| \| v \| = \| w \| \| v \| \cos(\theta), and this completes the final case of the theorem.

\square

The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm

papad

Christos Papadimitriou, who studies multiplicative weights in the context of biology.

Hard to believe

Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of n objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
   Pick an object at random proportionally to the weights
   Some event happens
   Increase the weight of the chosen object if it does well in the event
   Otherwise decrease the weight

The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step t is w_t before the event, and G represents how well the object did in the event, then we’ll update the weight according to the rule:

\displaystyle w_{t+1} = w_t (1 + G)

Think of this as increasing the weight by a small multiple of the object’s performance on a given round.

Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.

This is how we use multiplicative weights: if we number the experts 1, \dots, N, we give each expert a weight w_i which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss G. Then we update the weight of the chosen expert by multiplying it by (1 + G / 1000). Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be concrete later).
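Here is a hedged simulation sketch of that experts game in Python. The gain model (one secretly reliable expert, gains already scaled to [-1, 1]) is made up just to watch the weights separate, and it uses the full-information variant where every expert’s weight gets updated each day:

import random

def experts_game(numExperts=5, numDays=500, learningRate=0.5):
    weights = [1.0] * numExperts
    reliableExpert = 0   # secretly, expert 0 gives good advice more often than not

    for day in range(numDays):
        # each expert's gain for the day, already scaled to [-1, 1] (i.e., G / 1000)
        gains = [random.uniform(-1, 1) for _ in range(numExperts)]
        gains[reliableExpert] = random.uniform(0, 1)

        # full-information variant: we see every expert's gain, so update every weight
        for i in range(numExperts):
            weights[i] *= (1 + learningRate * gains[i])

    return weights

print(experts_game())   # expert 0's weight ends up dwarfing the others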

In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

mwua

This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

  1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
  2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example. The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
  3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
  4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
  5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
  6. You can get easy \log(n)-approximation algorithms for many NP-hard problems, such as set cover.

Additional, more technical examples can be found in this survey of Arora et al.

In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove its main theoretical guarantees, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository.

The generic MWUA algorithm

Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality.

In general we have some set X of objects and some set Y of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table M whose rows are objects, whose columns are outcomes, and whose i,j entry M(i,j) is the reward produced by object x_i when the outcome is y_j. We will also write this as M(x, y) for object x and outcome y. The only assumption we’ll make on the rewards is that the values M(x, y) are bounded by some small constant B (by small I mean B should not require exponentially many bits to write down as compared to the size of X). In symbols, M(x,y) \in [0,B]. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table M just exists for analysis, and the algorithm does not know its values. Moreover, while the values in M are static, the choice of outcome y for a given round may be nondeterministic.

The MWUA algorithm randomly chooses an object x \in X in every round, observing the outcome y \in Y, and collecting the reward M(x,y) (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round.

Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   ...

We define for object x a nonnegative number w_x we call a “weight.” The weights will change over time so we’ll also sub-script a weight with a round number t, i.e. w_{x,t} is the weight of object x in round t. Initially, all the weights are 1. Then MWUA continues in rounds. We start each round by drawing an example randomly with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      thisRoundReward = reward(chosenObject, outcome)

      ...

Sampling objects in this way is the same as associating a distribution D_t to each round, where if S_t = \sum_{x \in X} w_{x,t} then the probability of drawing x, which we denote D_t(x), is w_{x,t} / S_t. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.

Next comes the weight update step. Let’s call our learning rate parameter \varepsilon. In round t say we have object x_t and outcome y_t, then the reward is M(x_t, y_t). We update the weight of the chosen object x_t according to the formula:

\displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects x \in X. This turns into the following Python snippet, where we hide the division by B into the choice of learning rate:

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   cumulativeReward = 0
   outcomes = []

   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      outcomes.append(outcome)
      thisRoundReward = reward(chosenObject, outcome)
      cumulativeReward += thisRoundReward

      for i in range(len(weights)):
         weights[i] *= (1 + learningRate * reward(objects[i], outcome))

   return weights, cumulativeReward, outcomes

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round t can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post.

But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus \log(n), where n is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:

  1. Define potential function, in our case S_t.
  2. State what seems like trivial facts about the potential function to write S_{t+1} in terms of S_t, and hence get general information about S_T for some large T.
  3. Theorem is proved.
  4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill.

In this proof our potential function is the sum of the weights of the objects in a given round, S_t = \sum_{x \in X} w_{x, t}. Now the lemma.

Lemma: Let B be the bound on the size of the rewards, and 0 < \varepsilon < 1/2 a learning parameter. Recall that D_t(x) is the probability that MWUA draws object x in round t. Write the expected reward for MWUA for round t as the following (using only the definition of expected value):

\displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)

 Then the claim of the lemma is:

\displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}

Proof. Expand S_{t+1} = \sum_{x \in X} w_{x, t+1} using the definition of the MWUA update:

\displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)

Now distribute w_{x, t} and split into two sums:

\displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)

Using the fact that D_t(x) = \frac{w_{x,t}}{S_t}, we can replace w_{x,t} with D_t(x) S_t, which allows us to get R_t

\displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}

And then using the fact that (1 + x) \leq e^x (Taylor series), we can bound the last expression by S_te^{\varepsilon R_t / B}, as desired.

\square

Now using the lemma, we can get a hold on S_T for a large T, namely that

\displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}

If |X| = n then S_1=n, simplifying the above. Moreover, the sum of the weights in round T is certainly greater than any single weight, so that for every fixed object x \in X,

\displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}

Squeezing S_T between these two inequalities and taking logarithms (to simplify the exponents) gives

\displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t

Multiply through by B, divide by \varepsilon, rearrange, and use the fact that when 0 < \varepsilon < 1/2 we have \log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2 (Taylor series) to get

\displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}

The bracketed term is the payoff of object x, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object x \in X, and hence to the best one. This proves the theorem.

\square

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity B \log(n) / \varepsilon hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed \delta. The answer is roughly: let \varepsilon = O(\delta / B) and pick T = O(B^2 \log(n) / \delta^2). See this survey for more.
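In rough code, that last recipe looks like the following; the constants here are placeholders rather than the sharp ones from the survey:

import math

def suggested_parameters(B, n, delta):
    # rough translations of epsilon = O(delta / B) and T = O(B^2 log(n) / delta^2);
    # the constants are placeholders, not the sharp ones from the survey
    epsilon = delta / (2 * B)
    numRounds = int(math.ceil(B ** 2 * math.log(n) / delta ** 2))
    return epsilon, numRounds

print(suggested_parameters(B=1, n=100, delta=0.1))   # (0.05, 461)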

MWUA for linear programs

Now we’ll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product c \cdot x, where c is a fixed vector and x = (x_1, x_2, \dots, x_n) is a vector of non-negative variables the algorithm gets to choose. The choices for x are also constrained by a set of m linear inequalities, A_i \cdot x \geq b_i, where A_i is a fixed vector and b_i is a scalar for i = 1, \dots, m. This is usually summarized by putting all the A_i in a matrix, b_i in a vector, as

x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}

We can further simplify the constraints by assuming we know the optimal value Z = c \cdot x_{\textup{OPT}} in advance, which we can find by a binary search (more on this later). So, if we ignore the hard constraint Ax \geq b, the “easy feasible region” of possible x’s is \{ x \mid x \geq 0, c \cdot x = Z \}.

In order to fit linear programming into the MWUA framework we have to define two things.

  1. The objects: the set of linear inequalities A_i \cdot x \geq b_i.
  2. The rewards: the error of a constraint for a special input vector x_t.

Number 2 is curious (why would we give a reward for error?) but it’s crucial and we’ll discuss it momentarily.

The special input x_t depends on the weights in round t (which is allowed, recall). Specifically, if the weights are w = (w_1, \dots, w_m), we ask for a vector x_t in our “easy feasible region” which satisfies

\displaystyle (A^T w) \cdot x_t \geq w \cdot b

For this post we call the implementation of procuring such a vector the “oracle,” since it can be seen as the black-box problem of, given a vector \alpha and a scalar \beta and a convex region R, finding a vector x \in R satisfying \alpha \cdot x \geq \beta. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, \alpha = A^T w, \beta = w \cdot b, are particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector A^T w is a weighted average of the constraints in A, and w \cdot b is a weighted average of the thresholds. So this inequality is a “weighted average” inequality (specifically, a convex combination, since the weights are nonnegative). In particular, if no such x exists, then the original linear program has no solution. Indeed, given a solution x^* to the original linear program, each constraint, say A_1 \cdot x^* \geq b_1, still holds after multiplying both sides by the nonnegative weight w_1, and summing over all constraints gives the weighted-average inequality.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don’t waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA’s attention on them.

At the end, our final output is an average of the x_t produced in each round, i.e. x^* = \frac{1}{T}\sum_t x_t. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We’ll spend the rest of this post implementing the scheme outlined above.

Implementing the oracle

Fix the convex region R = \{ c \cdot x = Z, x \geq 0 \} for a known optimal value Z. Define \textup{oracle}(\alpha, \beta) as the problem of finding an x \in R such that \alpha \cdot x \geq \beta.

For the case of this linear region R, we can simply find the index i which maximizes \alpha_i Z / c_i. If this value exceeds \beta, we can return the vector with that value in the i-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the “no solution” part, say n=2 and you have x = (x_1, x_2) a solution to \alpha \cdot x \geq \beta. Then for whichever index makes \alpha_i Z / c_i bigger, say i=1, you can increase \alpha \cdot x without changing c \cdot x = Z by replacing x_1 with x_1 + (c_2/c_1)x_2 and x_2 with zero. I.e., we’re moving the solution x along the line c \cdot x = Z until it reaches a vertex of the region bounded by c \cdot x = Z and x \geq 0. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.

import numpy

class InfeasibleException(Exception):
    pass

def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle

Implementing the core solver

The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of c.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to A and b. Note that neither c nor the optimal value show up here.

    def reward(i, specialVector):
        constraint = A[i]
        threshold = b[i]
        return threshold - numpy.dot(constraint, specialVector)

    def observeOutcome(_, weights, __):
        weights = numpy.array(weights)
        weightedVector = A.transpose().dot(weights)
        weightedThreshold = weights.dot(b)
        return oracle(weightedVector, weightedThreshold)

Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is not sophisticated (though it could be made more so). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.

def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0.     0.987  1.026]
3.00000000425
[  5.20000072e-02   8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

Discussion

One of the nice aspects of MWUA is that it’s completely transparent. If you want to know why a decision was made, you can simply look at the weights and look at the history of rewards of the objects. There’s also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology.

This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an “online” linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.

The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.

If you’ve been reading this entire post screaming “this is just gradient descent!” then you’re right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.

Even though so many people have been advocating for MWUA over the past decade, it’s surprising that it doesn’t show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn’t even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don’t have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven’t made it to classrooms yet.

Until next time!

Bezier Curves and Picasso

Pablo Picasso

Pablo Picasso in front of The Kitchen, photo by Herbert List.

Simplicity and the Artist

Some of my favorite of Pablo Picasso’s works are his line drawings. He did a number of them about animals: an owl, a camel, a butterfly, etc. This piece called “Dog” is on my wall:

Dachshund-Picasso-Sketch

(Jump to interactive demo where we recreate “Dog” using the math in this post)

These paintings are extremely simple but somehow strike the viewer as deeply profound. They give the impression of being quite simple to design and draw. A single stroke of the hand and a scribbled signature, but what a masterpiece! It simultaneously feels like a hasty afterthought and a carefully tuned overture to a symphony of elegance. In fact, we know that Picasso’s process was deep. For example, in 1945-1946, Picasso made a series of eleven drawings (lithographs, actually) showing the progression of his rendition of a bull. The first few are more or less lifelike, but as the series progresses we see the bull boiled down to its essence, the final painting requiring a mere ten lines. Along the way we see drawings of a bull that resemble some of Picasso’s other works (number 9 reminding me of the sculpture at Daley Center Plaza in Chicago). Read more about the series of lithographs here.

the bull

Picasso’s, “The Bull.” Photo taken by Jeremy Kun at the Art Institute of Chicago in 2013. Click to enlarge.

Now I don’t pretend to be a qualified artist (I couldn’t draw a bull to save my life), but I can recognize the mathematical aspects of his paintings, and I can write a damn fine program. There is one obvious way to consider Picasso-style line drawings as a mathematical object, and it is essentially the Bezier curve. Let’s study the theory behind Bezier curves, and then write a program to draw them. The mathematics involved requires no background knowledge beyond basic algebra with polynomials, and we’ll do our best to keep the discussion low-tech. Then we’ll explore a very simple algorithm for drawing Bezier curves, implement it in Javascript, and recreate one of Picasso’s line drawings as a sequence of Bezier curves.

The Bezier Curve and Parameterizations

When asked to conjure a “curve” most people (perhaps plagued by their elementary mathematics education) will either convulse in fear or draw part of the graph of a polynomial. While these are fine and dandy curves, they only represent a small fraction of the world of curves. We are particularly interested in curves which are not part of the graphs of any functions.

Three French curves.

For instance, a French curve is a physical template used in (manual) sketching to aid the hand in drawing smooth curves. Tracing the edges of any part of these curves will usually give you something that is not the graph of a function. It’s obvious that we need to generalize our idea of what a curve is a bit. The problem is that many fields of mathematics define a curve to mean different things.  The curves we’ll be looking at, called Bezier curves, are a special case of  single-parameter polynomial plane curves. This sounds like a mouthful, but what it means is that the entire curve can be evaluated with two polynomials: one for the x values and one for the y values. Both polynomials share the same variable, which we’ll call t, and t is evaluated at real numbers.

An example should make this clear. Let’s pick two simple polynomials in t, say x(t) = t^2 and y(t) = t^3. If we want to find points on this curve, we can just choose values of t and plug them into both equations. For instance, plugging in t = 2 gives the point (4, 8) on our curve. Plotting all such values gives a curve that is definitely not the graph of a function:

example-curve

But it’s clear that we can write any single-variable function f(x) in this parametric form: just choose x(t) = t and y(t) = f(t). So these are really more general objects than regular old functions (although we’ll only be working with polynomials in this post).

Quickly recapping, a single-parameter polynomial plane curve is defined as a pair of polynomials x(t), y(t) in the same variable t. Sometimes, if we want to express the whole gadget in one piece, we can take the coefficients of common powers of t and write them as vectors in the x and y parts. Using the x(t) = t^2, y(t) = t^3 example above, we can rewrite it as

\mathbf{f}(t) = (0,1) t^3 + (1,0) t^2

Here the coefficients are points (which are the same as vectors) in the plane, and we represent the function f in boldface to emphasize that the output is a point. The linear-algebraist might recognize that pairs of polynomials form a vector space, and further combine them as (0, t^3) + (t^2, 0) = (t^2, t^3). But for us, thinking of points as coefficients of a single polynomial is actually better.

We will also restrict our attention to single-parameter polynomial plane curves for which the variable t is allowed to range from zero to one. This might seem like an awkward restriction, but in fact every finite single-parameter polynomial plane curve can be written this way (we won’t bother too much with the details of how this is done). For the purpose of brevity, we will henceforth call a “single-parameter polynomial plane curve where t ranges from zero to one” simply a “curve.”

Now there are some very nice things we can do with curves. For instance, given any two points in the plane P = (p_1, p_2), Q = (q_1, q_2) we can describe the straight line between them as a curve: \mathbf{L}(t) = (1-t)P + tQ. Indeed, at t=0 the value \mathbf{L}(t) is exactly P, at t=1 it’s exactly Q, and the equation is a linear polynomial in t. Moreover (without getting too much into the calculus details), the line \mathbf{L} travels at “unit speed” from P to Q. In other words, we can think of \mathbf{L} as describing the motion of a particle from P to Q over time, and at time 1/4 the particle is a quarter of the way there, at time 1/2 it’s halfway, etc. (An example of a straight line which doesn’t have unit speed is, e.g. (1-t^2) P + t^2 Q.)

More generally, let’s add a third point R. We can describe a path which goes from P to R, and is “guided” by Q in the middle. This idea of a “guiding” point is a bit abstract, but computationally no more difficult. Instead of travelling from one point to another at constant speed, we want to travel from one line to another at constant speed. That is, call the two curves describing lines from P \to Q and Q \to R \mathbf{L}_1, \mathbf{L_2}, respectively. Then the curve “guided” by Q can be written as a curve

\displaystyle \mathbf{F}(t) = (1-t)\mathbf{L}_1(t) + t \mathbf{L}_2(t)

Multiplying this all out gives the formula

\displaystyle \mathbf{F}(t) = (1-t)^2 P + 2t(1-t)Q + t^2 R

We can interpret this again in terms of a particle moving. At the beginning of our curve the value of t is small, and so we’re sticking quite close to the line \mathbf{L}_1. As time goes on, the point \mathbf{F}(t) moves along the line between the points \mathbf{L}_1(t) and \mathbf{L}_2(t), which are themselves moving. This traces out a curve which looks like this:

Screen Shot 2013-05-07 at 9.38.42 PM

This screenshot was taken from a wonderful demo by data visualization consultant Jason Davies. It expresses the mathematical idea quite superbly, and one can drag the three points around to see how it changes the resulting curve. One should play with it for at least five minutes.

The entire idea of a Bezier curve is a generalization of this principle: given a list P_0, \dots, P_n of points in the plane, we want to describe a curve which travels from the first point to the last, and is “guided” in between by the remaining points. A Bezier curve is a realization of such a curve (a single-parameter polynomial plane curve) which is the inductive continuation of what we described above: we travel at unit speed from the Bezier curve defined by all but the last point in the list to the Bezier curve defined by all but the first point. The base case is the straight-line segment (or the single point, if you wish). Formally,

Definition: Given a list of points in the plane P_0, \dots, P_n, we define the degree n Bezier curve recursively as

\begin{aligned} \mathbf{B}_{P_0}(t) &= P_0 \\ \mathbf{B}_{P_0 P_1 \dots P_n}(t) &= (1-t)\mathbf{B}_{P_0 P_1 \dots P_{n-1}}(t) + t \mathbf{B}_{P_1P_2 \dots P_n}(t) \end{aligned}

We call P_0, \dots, P_n the control points of \mathbf{B}.
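The recursive definition translates almost verbatim into code. Here is a sketch of evaluating a Bezier curve of any degree at a single value of t; it is our own, and deliberately naive, in that it recomputes the shared sub-curves instead of caching them:

// Interpolate between the curve on all but the last control point and the
// curve on all but the first control point.
function bezierPoint(controlPoints, t) {
   if (controlPoints.length === 1) {
      return controlPoints[0];
   }
   var first = bezierPoint(controlPoints.slice(0, -1), t);
   var last = bezierPoint(controlPoints.slice(1), t);
   return [(1 - t) * first[0] + t * last[0],
           (1 - t) * first[1] + t * last[1]];
}

Computing each intermediate point only once, rather than recursing blindly, is the essence of de Casteljau’s algorithm, which we will lean on below.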

While the concept of travelling at unit speed between two lower-order Bezier curves is the real heart of the matter (and allows us true computational insight), one can multiply all of this out (using the formula for binomial coefficients) and get an explicit formula. It is:

\displaystyle \mathbf{B}_{P_0 \dots P_n}(t) = \sum_{k=0}^n \binom{n}{k}(1-t)^{n-k}t^k P_k

And for example, a cubic Bezier curve with control points P_0, P_1, P_2, P_3 would have equation

\displaystyle (1-t)^3 P_0 + 3(1-t)^2t P_1 + 3(1-t)t^2 P_2 + t^3 P_3
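As a sanity check on this formula, here is a sketch of evaluating a cubic curve directly from the expanded form, with the curve given as a list of four [x, y] control points (the same representation the code later in the post uses):

// Evaluate (1-t)^3 P_0 + 3(1-t)^2 t P_1 + 3(1-t) t^2 P_2 + t^3 P_3.
function evaluateCubic(curve, t) {
   var s = 1 - t;
   var c0 = s * s * s, c1 = 3 * s * s * t, c2 = 3 * s * t * t, c3 = t * t * t;
   return [c0 * curve[0][0] + c1 * curve[1][0] + c2 * curve[2][0] + c3 * curve[3][0],
           c0 * curve[0][1] + c1 * curve[1][1] + c2 * curve[2][1] + c3 * curve[3][1]];
}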

Higher-degree Bezier curves can be quite complicated to picture geometrically. For instance, the following is a fifth-degree Bezier curve (with six control points).

BezierCurve

A degree five Bezier curve, credit Wikipedia.

The additional line segments drawn show the recursive nature of the curve. The simplest are the green points, which travel from control point to control point. Then the blue points travel on the line segments between green points, the pink travel along the line segments between blue, the orange between pink, and finally the red point travels along the line segment between the orange points.

Without the recursive structure of the problem (just seeing the curve) it would be a wonder how one could actually compute with these things. But as we’ll see, the algorithm for drawing a Bezier curve is very natural.

Bezier Curves as Data, and de Casteljau’s Algorithm

Let’s derive and implement the algorithm for painting a Bezier curve to a screen using only the ability to draw straight lines. For simplicity, we’ll restrict our attention to degree-three (cubic) Bezier curves. This is not much of a loss: the recursive definition expresses any higher-degree Bezier curve as a blend of lower-degree ones, and in practice cubic curves strike a good balance between computational efficiency and expressiveness. All of the code we present in this post will be in Javascript, and is available on this blog’s Github page.

So then a cubic Bezier curve is represented in a program by a list of four points. For example,

var curve = [[1,2], [5,5], [4,0], [9,3]];

Most graphics libraries (including the HTML5 canvas standard) provide a drawing primitive that can output Bezier curves given a list of four points. But suppose we aren’t given such a function. Suppose that we only have the ability to draw straight lines. How would one go about drawing an approximation to a Bezier curve? If such an algorithm exists (it does, and we’re about to see it) then we could make the approximation so fine that it is visually indistinguishable from a true Bezier curve.

The key property of Bezier curves that allows us to come up with such an algorithm is the following:

Any cubic Bezier curve \mathbf{B} can be split into two, end to end,
which together trace out the same curve as \mathbf{B}.

Let’s see exactly how this is done. Let \mathbf{B}(t) be a cubic Bezier curve with control points P_0, P_1, P_2, P_3, and let’s say we want to split it exactly in half. We notice that the formula for the curve simplifies nicely when we plug in t = 1/2:

\displaystyle \mathbf{B}(1/2) = \frac{1}{2^3}(P_0 + 3P_1 + 3P_2 + P_3)

Moreover, our recursive definition gave us a way to evaluate the point in terms of smaller-degree curves. But when these are evaluated at 1/2 their formulae are similarly easy to write down. The picture looks like this:

subdivision

The green points are the degree one curves, the pink points are the degree two curves, and the blue point is the cubic curve. We notice that, since each of the curves is evaluated at t=1/2, each of these points can be described as the midpoint of points we already know. So m_0 = (P_0 + P_1) / 2, q_0 = (m_0 + m_1)/2, etc.

In fact, the splitting of the two curves we want is precisely given by these points. That is, the “left” half of the curve is given by the curve \mathbf{L}(t) with control points P_0, m_0, q_0, \mathbf{B}(1/2), while the “right” half \mathbf{R}(t) has control points \mathbf{B}(1/2), q_1, m_2, P_3.

How can we be completely sure these are the same Bezier curves? Well, they’re just polynomials. We can compare them for equality by doing a bunch of messy algebra. But note that, since \mathbf{L}(t) only travels halfway along \mathbf{B}(t), checking they are the same amounts to equating \mathbf{L}(t) with \mathbf{B}(t/2), since as t ranges from zero to one, t/2 ranges from zero to one half. Likewise, we can compare \mathbf{B}((t+1)/2) with \mathbf{R}(t).

The algebra is very messy, but doable. As a test of this blog’s newest tools, here’s a screen cast of me doing the algebra involved in proving the two curves are identical.


Now that that’s settled, we have a nice algorithm for splitting a cubic Bezier (or any Bezier) into two pieces. In Javascript,

function subdivide(curve) {
   var firstMidpoints = midpoints(curve);
   var secondMidpoints = midpoints(firstMidpoints);
   var thirdMidpoints = midpoints(secondMidpoints);

   return [[curve[0], firstMidpoints[0], secondMidpoints[0], thirdMidpoints[0]],
          [thirdMidpoints[0], secondMidpoints[1], firstMidpoints[2], curve[3]]];
}

Here “curve” is a list of four points, as described at the beginning of this section, and the output is a list of two curves with the correct control points. The “midpoints” function used is quite simple, and we include it here for completeness:

function midpoints(pointList) {
   var midpoint = function(p, q) {
      return [(p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0];
   };

   var midpointList = new Array(pointList.length - 1);
   for (var i = 0; i < midpointList.length; i++) {
      midpointList[i] = midpoint(pointList[i], pointList[i+1]);
   }

   return midpointList;
}

It just accepts as input a list of points and computes their sequential midpoints. So a list of n points is turned into a list of n-1 points. As we saw, we need to call this function d times to compute the subdivision of a degree d Bezier curve.

As explained earlier, we can keep subdividing our curve over and over until each of the tiny pieces is essentially a straight line. That is, our function to draw a Bezier curve from the beginning will be as follows:

function drawCurve(curve, context) {
   if (isFlat(curve)) {
      drawSegments(curve, context);
   } else {
      var pieces = subdivide(curve);
      drawCurve(pieces[0], context);
      drawCurve(pieces[1], context);
   }
}

In words, as long as the curve isn’t “flat,” we want to subdivide and draw each piece recursively. If it is flat, then we can simply draw the three line segments of the curve and be reasonably sure that it will be a good approximation. The context variable sitting there represents the canvas to be painted to; it must be passed through to the “drawSegments” function, which simply paints straight line segments to the canvas.
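The original demo’s “drawSegments” isn’t shown in this post, but a minimal version for the HTML5 canvas might look like the following sketch, which simply connects the curve’s control points with straight lines:

// Draw the three straight segments connecting the control points of a
// (nearly flat) cubic curve on a canvas 2d context.
function drawSegments(curve, context) {
   context.beginPath();
   context.moveTo(curve[0][0], curve[0][1]);
   for (var i = 1; i < curve.length; i++) {
      context.lineTo(curve[i][0], curve[i][1]);
   }
   context.stroke();
}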

Of course this raises the obvious question: how can we tell if a Bezier curve is flat? There are many ways to do so. One could compute the angles of deviation (from a straight line) at each interior control point and add them up. Or one could compute the area of the enclosed quadrilateral. However, computing angles and areas is usually not very nice: angles take a long time to compute and areas have stability issues, and the algorithms which are stable are not very simple. We want a measurement which requires only basic arithmetic and perhaps a few logical conditions to check.

It turns out there is such a measurement. It’s originally attributed to Roger Willcocks, but it’s quite simple to derive by hand.

Essentially, we want to measure the “flatness” of a cubic Bezier curve by computing the distance of the actual curve at time t from where the curve would be at time t if the curve were a straight line.

Formally, given \mathbf{B}(t) with control points P_0, P_1, P_2, P_3 as usual, we can define the straight-line Bezier cubic as the colossal sum

\displaystyle \mathbf{S}(t) = (1-t)^3P_0 + 3(1-t)^2t \left ( \frac{2}{3}P_0 + \frac{1}{3}P_3 \right ) + 3(1-t)t^2 \left ( \frac{1}{3}P_0 + \frac{2}{3}P_3 \right ) + t^3 P_3

There’s nothing magical going on here. We’re simply writing down the cubic Bezier curve whose control points are P_0, \frac{2}{3}P_0 + \frac{1}{3}P_3, \frac{1}{3}P_0 + \frac{2}{3}P_3, P_3. One should think of these control points as sitting 0, 1/3, 2/3, and 1 of the way from P_0 to P_3 along a straight line.

Then we define the function d(t) = \left \| \mathbf{B}(t) - \mathbf{S}(t) \right \| to be the distance between the two curves at the same time t. The flatness value of \mathbf{B} is the maximum of d over all values of t. If this flatness value is below a certain tolerance level, then we call the curve flat.

With a bit of algebra we can simplify this expression. First, the value of t for which the distance is maximized is the same as when its square is maximized, so we can omit the square root computation at the end and take that into account when choosing a flatness tolerance.

Now let’s actually write out the difference as a single polynomial. First, we can cancel the 3’s in \mathbf{S}(t) and write the polynomial as

\displaystyle \mathbf{S}(t) = (1-t)^3 P_0 + (1-t)^2t (2P_0 + P_3) + (1-t)t^2 (P_0 + 2P_3) + t^3 P_3

and so \mathbf{B}(t) - \mathbf{S}(t) is (by collecting coefficients of the like terms (1-t)^it^j)

\displaystyle (1-t)^2t (3 P_1 - 2P_0 - P_3) + (1-t)t^2 (3P_2 - P_0 - 2P_3)

Factoring out the (1-t)t from both terms and setting a = 3P_1 - 2P_0 - P_3, b = 3P_2 - P_0 - 2P_3, we get

\displaystyle d^2(t) = \left \| (1-t)t ((1-t)a + tb) \right \|^2 = (1-t)^2t^2 \left \| (1-t)a + tb \right \|^2

Since the maximum of a product is at most the product of the maxima, we can bound the above quantity by the product of the two maxes. The reason we want to do this is because we can easily compute the two maxes separately. It wouldn’t be hard to compute the maximum without splitting things up, but this way ends up with fewer computational steps for our final algorithm, and the visual result is equally good.

Using some elementary single-variable calculus, the maximum value of (1-t)^2t^2 for 0 \leq t \leq 1 turns out to be 1/16 (attained at t = 1/2). And the squared norm of a vector is just the sum of the squares of its components. If a = (a_x, a_y) and b = (b_x, b_y), then the squared norm above is exactly

\displaystyle ((1-t)a_x + tb_x)^2 + ((1-t)a_y + tb_y)^2

And notice: for any real numbers z, w the quantity (1-t)z + tw is exactly the straight-line interpolation from z to w we know so well; it always lies between z and w. So its square, maximized over all t between zero and one, is at most the larger of the two endpoint squares z^2, w^2. Hence the max of our distance function d^2(t) is bounded by

\displaystyle \frac{1}{16} (\textup{max}(a_x^2, b_x^2) + \textup{max}(a_y^2, b_y^2))

And so our condition for being flat is that this bound is smaller than some allowable tolerance. We may safely factor the 1/16 into this tolerance bound, and so this is enough to write a function.

function isFlat(curve) {
   var tol = 10; // anything below 50 is roughly good-looking

   var ax = 3.0*curve[1][0] - 2.0*curve[0][0] - curve[3][0]; ax *= ax;
   var ay = 3.0*curve[1][1] - 2.0*curve[0][1] - curve[3][1]; ay *= ay;
   var bx = 3.0*curve[2][0] - curve[0][0] - 2.0*curve[3][0]; bx *= bx;
   var by = 3.0*curve[2][1] - curve[0][1] - 2.0*curve[3][1]; by *= by;

   return (Math.max(ax, bx) + Math.max(ay, by) <= tol);
}

And there we have it. We write a simple HTML page to access a canvas element and a few extra helper functions to draw the line segments when the curve is flat enough, and present the final result in this interactive demonstration (you can perturb the control points).
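For the curious, wiring the pieces together takes only a few lines. Here is a minimal sketch; the element id and the curve data are placeholders of our own, not taken from the actual demo:

// Grab a canvas element from the page and paint one cubic curve on it.
var canvas = document.getElementById('bezier-canvas');
var context = canvas.getContext('2d');
drawCurve([[10, 250], [100, 20], [300, 300], [450, 50]], context);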

The picture you see on that page (given below) is my rendition of Picasso’s “Dog” drawing as a sequence of nine Bezier curves. I think the resemblance is uncanny 🙂


Picasso’s “Dog,” redesigned as a sequence of nine bezier curves.

While we didn’t invent the drawing itself (and hence shouldn’t attach our signature to it), we did come up with the representation as a sequence of Bezier curves. It only seems fitting to present that as the work of art. Here we’ve distilled the representation down to a single file: the first line is the dimension of the canvas, and each subsequent line represents a cubic Bezier curve. Comments are included for readability.

bezier-dog-for-web

“Dog” Jeremy Kun, 2013. Click to enlarge.

Because standardizing things seems important, we define a new filetype “.bezier”, which has the format given above:

int int
(int) curve 
(int) curve
...

Where the first two ints specify the size of the canvas, the first (optional) int on each line specifies the width of the stroke, and a “curve” has the form

[int,int] [int,int] ... [int,int]

If the int at the beginning of a line is omitted, the stroke width defaults to three pixels.
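To make the format concrete, a hypothetical two-curve .bezier file (made up for illustration; this is not the actual “Dog” data) might look like:

500 400
2 [100,350] [150,100] [300,300] [350,50]
[350,50] [400,10] [450,100] [420,200]

The first curve is drawn with a stroke width of two pixels; the second omits the width and so defaults to three.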

In a general .bezier file we allow a curve to have arbitrarily many control points, though the code we gave above does not draw them that generally. As an exercise, write a program which accepts as input a .bezier file and produces as output an image of the drawing. This will require an extension of the algorithm above to draw Bezier curves of arbitrary degree, by looping the computation of midpoints and keeping track of which midpoints end up in the resulting subdivision (see the sketch below). Alternatively, one could write a program which accepts as input a .bezier file with only cubic Bezier curves, and produces as output an SVG file of the drawing (SVG natively supports Bezier curves only up to cubic). So a .bezier file is both a simplification (fewer features) and an extension (Bezier curves of arbitrary degree) of an SVG file.
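As a hint for that exercise, here is one possible sketch (our own, not from the post’s repository) of generalizing the cubic subdivide function to curves of any degree, by looping the midpoint computation described above:

function subdivideAny(curve) {
   var left = [curve[0]];
   var right = [curve[curve.length - 1]];
   var points = curve;

   // Each round of midpoints contributes its first point to the left half
   // and its last point to the right half; the final single point (the
   // curve evaluated at t = 1/2) is shared by both halves.
   while (points.length > 1) {
      points = midpoints(points);
      left.push(points[0]);
      right.unshift(points[points.length - 1]);
   }

   return [left, right];
}

For a cubic curve this returns exactly the two halves produced by the subdivide function earlier in the post.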

We didn’t go as deep into the theory of Bezier curves as we could have. If the reader is itching for more (and a more calculus-based approach), see this lengthy primer. It contains practically everything one could want to know about Bezier curves, with nice interactive demos written in Processing.

Low-Complexity Art

There are some philosophical implications of what we’ve done today with Picasso’s “Dog.” Previously on this blog we’ve investigated the idea of low-complexity art, and it’s quite relevant here. The thesis is that “beautiful” art has a small description length, and more formally the “complexity” of some object (represented by text) is the length of the shortest program that outputs that object given no inputs. More on that in our primer on Kolmogorov complexity. The fact that we can describe Picasso’s line drawings with a small number of Bezier curves (and a relatively short program to output the bezier curves) is supposed to be a deep statement about the beauty of the art itself. Obviously this is very subjective, but not without its proponents.

There has been a bit of recent interest in computers generating art. For instance, this recent programming competition (in Dutch) gave the task of generating art similar to the work of Piet Mondrian. The idea is that the more elegant the algorithm, the higher it would be scored. The winner used MD5 hashes to generate Mondrian pieces, and there were many many other impressive examples (the link above has a gallery of submissions).

In our earlier post on low-complexity art, we explored the possibility of representing all images within a coordinate system involving circles with shaded interiors. But it’s obvious that such a coordinate system wouldn’t be able to represent “Dog” with very low complexity. It seems that Bezier curves are a much more natural system of coordinates. Some of the advantages: the length of a line and slight perturbations of its control points don’t affect the resulting complexity. A cubic Bezier curve can be described by any set of four points, and more “intricate” (higher complexity) descriptions of curves require a larger number of points. Bezier curves can be scaled up arbitrarily, and this doesn’t significantly change the complexity of the curve (scaling by many orders of magnitude introduces only a small logarithmic increase in complexity). Curves with a larger stroke width are slightly more complex than those with a smaller one, and representing many small, sharp bends requires more curves than long, smooth arcs.

On the downside, it’s not so easy to represent a circle as a Bezier curve. In fact, it is impossible to do so exactly. Despite the simplicity of this object (it’s even defined as a single polynomial, albeit in two variables), the best one can do is approximate it. The same goes for ellipses. There are actually ways to overcome this (the concept of rational Bezier curves which are quotients of polynomials), but they add to the inherent complexity of the drawing algorithm and the approximations using regular Bezier curves are good enough.
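For the record, the standard cubic approximation to a quarter of the unit circle is remarkably good. Here is a sketch (our own) using the well-known constant k = (4/3)(\sqrt{2} - 1) \approx 0.5523, chosen so that the curve’s midpoint lands exactly on the circle:

// Control points of a cubic Bezier curve approximating the unit-circle arc
// from (1, 0) to (0, 1); the radial error is a small fraction of a percent.
var k = (4 / 3) * (Math.sqrt(2) - 1);
var quarterCircle = [[1, 0], [1, k], [k, 1], [0, 1]];

Four such arcs, one per quadrant, give a circle that is visually indistinguishable from a true one.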

And so we define the complexity of a drawing to be the number of bits in its .bezier file representation. Comments are ignored in this calculation.

The real prize, and what we’ll explore next time, is to find a way to generate art automatically. That is, to do one of two things:

  1. Given some sort of “seed,” write a program that produces a pseudo-random line drawing.
  2. Given an image, produce a .bezier file which accurately depicts the image as a line drawing.

We will attempt to explore these possibilities in the follow-up to this post. Depending on how things go, this may involve some local search algorithms, genetic algorithms, or other methods.

Until then!

Addendum: want to buy a framed print of the source code for “Dog”? Head over to our page on Society6.