Optimization Models for Subset Cover

In a recent newsletter article I complained about how researchers mislead about the applicability of their work. I gave SAT solvers as an example. People provided interesting examples in response, but what was new to me was the concept of SMT (Satisfiability Modulo Theories), an extension to SAT. SMT seems to have more practical uses than vanilla SAT (see the newsletter for details).

I wanted to take some time to explore SMT solvers, and I landed on Z3, an open-source SMT solver from Microsoft. In particular, I wanted to compare it to ILP (Integer Linear Programming) solvers, which I know relatively well. I picked a problem that I thought would work better for SAT-ish solvers than for ILPs: subset covering (explained in the next section). If ILP still wins against Z3, then that would not bode well for the claim that SMT is a production-strength solver.

All the code used for this post is on Github.

Subset covering

A subset covering is a kind of combinatorial design, which can be explained in terms of magic rings.

An adventurer stumbles upon a chest full of magic rings. Each ring has a magical property, but some pairs of rings, when worn together on the same hand, produce a combined special magical effect distinct to that pair.

The adventurer would like to try all pairs of rings to catalogue the magical interactions. With only five fingers, how can we minimize the time spent trying on rings?

Mathematically, the rings can be described as a set $ X$ of size $ n$. We want to choose a family $ F$ of subsets of $ X$, with each subset having size 5 (five fingers), such that each subset of $ X$ of size 2 (pairs of rings) is contained in some subset of $ F$. And we want $ F$ to be as small as possible.
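
To make the definition concrete, here is a small checking function (my own sketch, not from this post’s repo) verifying that the optimal cover for $ n=7, k=3, l=2$ found later in the post hits every pair:

from itertools import combinations

def is_cover(family, n, l):
    """Check that every size-l subset of range(n) is contained in some set of the family."""
    covered = set()
    for S in family:
        covered.update(combinations(sorted(S), l))
    return all(T in covered for T in combinations(range(n), l))

# The seven triples found by the solvers later in this post cover all pairs of {0, ..., 6}.
cover = [(0, 1, 3), (0, 2, 4), (0, 5, 6), (1, 2, 6), (1, 4, 5), (2, 3, 5), (3, 4, 6)]
print(is_cover(cover, n=7, l=2))  # True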

Subset covering is not a “production worthy” problem in itself. I can imagine it being useful in some production settings, but I haven’t heard of one where it’s actually used. For instance, suppose a cluster of machines has some bug occurring seemingly at random for some point-to-point RPCs, and in tracking down the problem you want to deploy a test change to subsets of servers and observe whether the bug occurs. Something like an experiment design problem.

If you generalize the “5” in “5 fingers” to an arbitrary positive integer $ k$, and the “2” in “2 rings” to $ l < k$, then we have the general subset covering problem. Define $ M(n, k, l)$ to be the minimal number of subsets of size $ k$ needed to cover all subsets of size $ l$. This problem was studied by Erdős, with a conjecture subsequently proved by Vojtěch Rödl, that asymptotically $ M(n,k,l)$ grows like $ \binom{n}{l} / \binom{k}{l}$. Additional work by Joel Spencer showed that a greedy algorithm is essentially optimal.
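
For the ring example ($ k=5$ fingers, $ l=2$ rings), this estimate works out to $ \binom{n}{2} / \binom{5}{2} = n(n-1)/20$, so for, say, $ n = 100$ rings one should expect to need on the order of 495 hands’ worth of ring combinations.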

However, the constructive algorithms in these proofs involve enumerating all $ \binom{n}{k}$ subsets of $ X$, which doesn’t scale very well. You can alternatively try a “random” method, which typically incurs a $ \log(r)$ factor of additional sets to cover a $ 1 - 1/r$ fraction of the needed subsets. This is practical, but imperfect.
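
Here’s a sketch of that random approach (my own illustration, not the post’s code): keep sampling random $ k$-subsets until all but a $ 1/r$ fraction of the $ l$-subsets are covered.

import random
from itertools import combinations

def random_cover(n, k, l, r):
    """Sample random k-subsets until all but a 1/r fraction of the l-subsets are covered."""
    uncovered = set(combinations(range(n), l))
    allowed_remaining = len(uncovered) / r
    chosen = []
    while len(uncovered) > allowed_remaining:
        S = tuple(sorted(random.sample(range(n), k)))
        chosen.append(S)
        uncovered -= set(combinations(S, l))
    return chosen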

To the best of my knowledge, there is no exact algorithm that both achieves the minimum and avoids constructing all $ \binom{n}{k}$ subsets. So let’s try using an SMT solver. I’ll be using the Python library for Z3.

Baseline: brute force Z3

For a baseline, let’s start with a simple Z3 model that enumerates all the possible subsets that could be chosen. This gives an exceedingly simple model against which to compare the more complex models.

Define boolean variables $ \textup{Choice}_S$ which is true if and only if the subset $ S$ is chosen (I call this a “choice set”). Define boolean variables $ \textup{Hit}_T$ which is true if the subset $ T$ (I call this a “hit set”) is contained in a chosen choice set. Then the subset cover problem can be defined by two sets of implications.

First, if $ \textup{Choice}_S$ is true, then so must all $ \textup{Hit}_T$ for $ T \subset S$. E.g., for $ S = \{ 1, 2, 3 \}$ and $ l=2$, we get

$ \displaystyle \begin{aligned} \textup{Choice}_{(1,2,3)} &\implies \textup{Hit}_{(1,2)} \\ \textup{Choice}_{(1,2,3)} &\implies \textup{Hit}_{(1,3)} \\ \textup{Choice}_{(1,2,3)} &\implies \textup{Hit}_{(2,3)} \end{aligned}$

In Python this looks like the following (note this program relies on some previously created lookups and data structures containing the variables):

for choice_set in choice_sets:
  for hit_set_key in combinations(choice_set.elements, hit_set_size):
    hit_set = hit_set_lookup[hit_set_key]
    implications.append(
      z3.Implies(choice_set.variable, hit_set.variable))

Second, if $ \textup{Hit}_T$ is true, it must be that some $ \textup{Choice}_S$ is true for some $ S$ containing $ T$ as a subset. For example,

$ \displaystyle \begin{aligned} \textup{Hit}_{(1,2)} &\implies \\ & \textup{Choice}_{(1,2,3,4)} \textup{ OR} \\ & \textup{Choice}_{(1,2,3,5)} \textup{ OR} \\ & \textup{Choice}_{(1,2,4,5)} \textup{ OR } \cdots \\ \end{aligned}$

In code,

for hit_set in hit_sets.values():
  relevant_choice_set_vars = [
    choice_set.variable
    for choice_set in hit_set_to_choice_set_lookup[hit_set]
  ]
  implications.append(
    z3.Implies(
      hit_set.variable, 
      z3.Or(*relevant_choice_set_vars)))

Next, in this experiment we’re allowing the caller to specify the number of choice sets to try, and the solver should return either SAT or UNSAT. From that, we can use a binary search to find the optimal number of sets to pick. Thus, we have to bound from above and below the number of $ \textup{Choice}_S$ variables allowed to be true. Z3 supports boolean cardinality constraints, apparently with a special solver to handle problems that have them. Otherwise, the process of encoding cardinality constraints as SAT formulas is not trivial (and is the subject of active research). But the code is simple enough:

args = [cs.variable for cs in choice_sets] + [parameters.num_choice_sets]
choice_sets_at_most = z3.AtMost(*args)
choice_sets_at_least = z3.AtLeast(*args)

Finally, we must assert that every $ \textup{Hit}_T$ is true.

solver = z3.Solver()
for hit_set in hit_sets.values():
  solver.add(hit_set.variable)

for impl in implications:
  solver.add(impl)

solver.add(choice_sets_at_most)
solver.add(choice_sets_at_least)

Running it for $ n=7, k=3, l=2$, and seven choice sets (which is optimal), we get

>>> SubsetCoverZ3BruteForce().solve(
  SubsetCoverParameters(
    num_elements=7,
    choice_set_size=3,
    hit_set_size=2,
    num_choice_sets=7))
[(0, 1, 3), (0, 2, 4), (0, 5, 6), (1, 2, 6), (1, 4, 5), (2, 3, 5), (3, 4, 6)]
SubsetCoverSolution(status=<SolveStatus.SOLVED: 1>, solve_time_seconds=0.018305063247680664)

Interestingly, Z3 struggles with even marginally larger instances. I tried the following; Z3 slows dramatically around $ n=12, k=6$ and exceeds my time limit beyond that:

from math import comb

for n in range(8, 16):
  k = int(n / 2)
  l = 3
  max_num_sets = int(2 * comb(n, l) / comb(k, l))
  params = SubsetCoverParameters(
    num_elements=n,
    choice_set_size=k,
    hit_set_size=l,
    num_choice_sets=max_num_sets)

  print_table(
    params,
    SubsetCoverZ3BruteForce().solve(params),
    header=(n == 8))

After taking a long time to generate the larger models, Z3 exceeds my 15-minute time limit, suggesting exponential growth:

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED   0.0271              8             4                3             28
SolveStatus.SOLVED   0.0346              9             4                3             42
SolveStatus.SOLVED   0.0735              10            5                3             24
SolveStatus.SOLVED   0.1725              11            5                3             33
SolveStatus.SOLVED   386.7376            12            6                3             22
SolveStatus.UNKNOWN  900.1419            13            6                3             28
SolveStatus.UNKNOWN  900.0160            14            7                3             20
SolveStatus.UNKNOWN  900.0794            15            7                3             26

An ILP model

Next we’ll see an ILP model for the same problem. Note there are two reasons I expect the ILP model to fall short. First, the best solver I have access to is SCIP, which, despite being quite good, is in my experience about an order of magnitude slower than commercial alternatives like Gurobi. Second, this sort of problem seems not to be well suited to ILPs. It would take quite a bit longer to explain why (maybe another post, if you’re interested), but in short: well-formed ILPs have easily found feasible solutions (this one does not), and the LP relaxation of the problem should be as tight as possible. I don’t think my formulation is very tight, but it’s possible there is a better formulation.

Anyway, the primary difference in my ILP model from brute force is that the number of choice sets is fixed in advance, and the members of the choice sets are model variables. This allows us to avoid enumerating all choice sets in the model.

In particular, $ \textup{Member}_{S,i} \in \{ 0, 1 \}$ is a binary variable that is 1 if and only if element $ i$ is assigned to be in set $ S$. And $ \textup{IsHit}_{T, S} \in \{0, 1\}$ is 1 if and only if the hit set $ T$ is a subset of $ S$. Here “$ S$” is an index over the subsets, rather than the set itself, because we don’t know what elements are in $ S$ while building the model.

For the constraints, each choice set $ S$ must have size $ k$:

$ \displaystyle \sum_{i \in X} \textup{Member}_{S, i} = k$

Each hit set $ T$ must be hit by at least one choice set:

$ \displaystyle \sum_{S} \textup{IsHit}_{T, S} \geq 1$

Now the tricky constraint. If a hit set $ T$ is hit by a specific choice set $ S$ (i.e., $ \textup{IsHit}_{T, S} = 1$) then all the elements in $ T$ must also be members of $ S$.

$ \displaystyle \sum_{i \in T} \textup{Member}_{S, i} \geq l \cdot \textup{IsHit}_{T, S}$

This one works by the fact that the left-hand side (LHS) is bounded from below by 0 and bounded from above by $ l = |T|$. Then $ \textup{IsHit}_{T, S}$ acts as a switch. If it is 0, then the constraint is vacuous since the LHS is always non-negative. If $ \textup{IsHit}_{T, S} = 1$, then the right-hand side (RHS) is $ l = |T|$ and this forces all variables on the LHS to be 1 to achieve it.

Because we fixed the number of choice sets as a parameter, the objective is a constant, and all we’re doing is looking for a feasible solution. The full code is here.
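
To give a flavor of the model in code, here is a rough sketch using OR-Tools’ pywraplp with SCIP as the backend (my own reconstruction; the names and structure differ from the repo’s code):

from itertools import combinations
from ortools.linear_solver import pywraplp

def subset_cover_ilp(n, k, l, num_choice_sets):
    solver = pywraplp.Solver.CreateSolver('SCIP')
    sets = range(num_choice_sets)
    # member[s, i] == 1 iff element i is assigned to choice set s
    member = {(s, i): solver.BoolVar(f'member_{s}_{i}') for s in sets for i in range(n)}
    hit_sets = list(combinations(range(n), l))
    # is_hit[T, s] == 1 iff hit set T is contained in choice set s
    is_hit = {(T, s): solver.BoolVar(f'hit_{T}_{s}') for T in hit_sets for s in sets}

    for s in sets:  # each choice set has exactly k members
        solver.Add(sum(member[s, i] for i in range(n)) == k)

    for T in hit_sets:
        # each hit set must be hit by at least one choice set
        solver.Add(sum(is_hit[T, s] for s in sets) >= 1)
        for s in sets:  # the "switch" constraint described above
            solver.Add(sum(member[s, i] for i in T) >= l * is_hit[T, s])

    status = solver.Solve()
    return status in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE)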

On the same simple example as the brute force model:

>>> SubsetCoverILP().solve(
  SubsetCoverParameters(
    num_elements=7,
    choice_set_size=3,
    hit_set_size=2,
    num_choice_sets=7))
[(0, 1, 3), (0, 2, 6), (0, 4, 5), (1, 2, 4), (1, 5, 6), (2, 3, 5), (3, 4, 6)]
SubsetCoverSolution(status=<SolveStatus.SOLVED: 1>, solve_time_seconds=0.1065816879272461)

It finds a solution of the same size in about 6x the runtime of the brute force Z3 model, though still well under one second.

On the “scaling” example it fares much worse. With a timeout of 15 minutes, it solves $ n=8$ decently fast, $ n=9$ and $ n=12$ slowly, and times out on the rest.

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED   1.9969              8             4                3             28
SolveStatus.SOLVED   306.4089            9             4                3             42
SolveStatus.UNKNOWN  899.8842            10            5                3             24
SolveStatus.UNKNOWN  899.4849            11            5                3             33
SolveStatus.SOLVED   406.9502            12            6                3             22
SolveStatus.UNKNOWN  902.7807            13            6                3             28
SolveStatus.UNKNOWN  900.0826            14            7                3             20
SolveStatus.UNKNOWN  900.0731            15            7                3             26

A Z3 Boolean Cardinality Model

The next model uses Z3. It keeps the concept of Member and Hit variables, but they are boolean instead of integer. It also replaces the linear constraints with implications. The constraint that forces a Hit set’s variable to be true when some Choice set contains all its elements is (for each $ S, T$)

$ \displaystyle \left ( \bigwedge_{i \in T} \textup{Member}_{S, i} \right ) \implies \textup{IsHit}_T$

Conversely, a Hit set’s variable being true implies that all of its elements are contained in some single choice set.

$ \displaystyle \textup{IsHit}_T \implies \bigvee_{S} \bigwedge_{i \in T} \textup{Member}_{S, i}$

Finally, we again use boolean cardinality constraints AtMost and AtLeast so that each choice set has the right size.
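
Putting the pieces together, a sketch of the model might look like this (my own reconstruction, not the repo’s code):

from itertools import combinations
import z3

def subset_cover_z3_cardinality(n, k, l, num_choice_sets):
    solver = z3.Solver()
    # member[s][i] is true iff element i is in choice set s
    member = [[z3.Bool(f'member_{s}_{i}') for i in range(n)] for s in range(num_choice_sets)]

    for s in range(num_choice_sets):  # each choice set has exactly k members
        solver.add(z3.AtMost(*member[s], k))
        solver.add(z3.AtLeast(*member[s], k))

    for T in combinations(range(n), l):
        is_hit = z3.Bool(f'hit_{T}')
        solver.add(is_hit)  # every hit set must be hit
        containments = [z3.And(*[member[s][i] for i in T]) for s in range(num_choice_sets)]
        for c in containments:  # containment forces the hit variable to be true
            solver.add(z3.Implies(c, is_hit))
        # a hit set being hit means some choice set contains all its elements
        solver.add(z3.Implies(is_hit, z3.Or(*containments)))

    return solver.check()  # z3.sat or z3.unsat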

The results are much better than the ILP: it solves all of the instances in under 3 seconds

status              solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED  0.0874              8             4                3             28
SolveStatus.SOLVED  0.1861              9             4                3             42
SolveStatus.SOLVED  0.1393              10            5                3             24
SolveStatus.SOLVED  0.2845              11            5                3             33
SolveStatus.SOLVED  0.2032              12            6                3             22
SolveStatus.SOLVED  1.3661              13            6                3             28
SolveStatus.SOLVED  0.8639              14            7                3             20
SolveStatus.SOLVED  2.4877              15            7                3             26

A Z3 integer model

Z3 supports implications whose antecedents and consequents are integer equalities, so we can try a model that leverages this: essentially convert the boolean model into one where the variables are 0-1 integers, and the constraints are implications between integer equalities (all of the form “variable = 1”).

I expect this to perform worse than the boolean model, even though the formulation is almost identical. The details of the model are here, and it’s so similar to the boolean model above that it needs no extra explanation.
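
Just for flavor, a single membership implication in the integer model might look like this (a hypothetical fragment, not the repo’s code):

import z3

member = z3.Int('member_S_i')    # 1 iff element i is in choice set S
is_hit = z3.Int('hit_T_S')       # 1 iff hit set T is contained in choice set S

solver = z3.Solver()
solver.add(0 <= member, member <= 1, 0 <= is_hit, is_hit <= 1)
# if T is hit by S, then each element i of T must be a member of S
solver.add(z3.Implies(is_hit == 1, member == 1))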

The runtime is much worse, but surprisingly it still does better than the ILP model.

status              solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED  2.1129              8             4                3             28
SolveStatus.SOLVED  14.8728             9             4                3             42
SolveStatus.SOLVED  7.6247              10            5                3             24
SolveStatus.SOLVED  25.0607             11            5                3             33
SolveStatus.SOLVED  30.5626             12            6                3             22
SolveStatus.SOLVED  63.2780             13            6                3             28
SolveStatus.SOLVED  57.0777             14            7                3             20
SolveStatus.SOLVED  394.5060            15            7                3             26

Harder instances

So far all the instances we’ve been giving the solvers are “easy” in a sense. In particular, we’ve guaranteed there’s a feasible solution, and it’s easy to find. We’re giving roughly twice as many sets as are needed. There are two ways to make this problem harder. One is to test on unsatisfiable instances, which can be harder because the solver has to prove it can’t work. Another is to test on satisfiable instances that are hard to find, such as those satisfiable instances where the true optimal number of choice sets is given as the input parameter. The hardest unsatisfiable instances are also the ones where the number of choice sets allowed is one less than optimal.

Let’s test those situations. Since $ M(7, 3, 2) = 7$, we can try with 7 choice sets and 6 choice sets.

For 7 choice sets (the optimal value), all the solvers do relatively well

method                    status  solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SubsetCoverILP            SOLVED  0.0843              7             3                2             7
SubsetCoverZ3Integer      SOLVED  0.0938              7             3                2             7
SubsetCoverZ3BruteForce   SOLVED  0.0197              7             3                2             7
SubsetCoverZ3Cardinality  SOLVED  0.0208              7             3                2             7

For 6, the ILP struggles to prove it’s infeasible, and the others do comparatively much better (at least 15x faster).

method                    status      solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SubsetCoverILP            INFEASIBLE  120.8593            7             3                2             6
SubsetCoverZ3Integer      INFEASIBLE  3.0792              7             3                2             6
SubsetCoverZ3BruteForce   INFEASIBLE  0.3384              7             3                2             6
SubsetCoverZ3Cardinality  INFEASIBLE  7.5781              7             3                2             6

This seems like hard evidence that Z3 is better than ILPs for this problem (and it is), but note that the same test on $ n=8$ fails for all models. They can all quickly prove $ 8 < M(8, 3, 2) \leq 11$, but time out after twenty minutes when trying to determine if $ M(8, 3, 2) \in \{ 9, 10 \}$. Note that $ k=3, l=2$ is the least complex choice for the other parameters, so it seems like there’s not much hope to find $ M(n, k, l)$ for any seriously large parameters, like, say, $ k=6$.

Thoughts

These experiments suggest what SMT solvers can offer above and beyond ILP solvers. Disjunctions and implications are notoriously hard to model in an ILP. You often need to add additional special variables, or do tricks like the one I did that only work in some situations and which can mess with the efficiency of the solver. With SMT, implications are trivial to model, and natively supported by the solver.

Despite reading everything I could find on Z3, I came across little advice on how to model problems so the solver runs faster. There is a ton of literature on this for ILP solvers, so if any readers see obvious problems with my SMT models, please chime in! I’d love to hear from you. Even without that, I am pretty impressed by how fast the solves finish for this subset cover problem (which, this experiment has shown me, is apparently a very hard problem).

However, there’s an elephant in the room. These models are all satisfiability/feasibility checks for a given number of choice sets. What is not tested here is optimization, in the sense of having the solver minimize the number of choice sets directly. In a few experiments on even simpler models, Z3’s optimizer was quite slow. And while I know how I’d model the ILP version of the optimization problem, given that it’s quite slow to find a feasible solution when the optimal number of sets is given as a parameter, it seems unlikely to be fast when asked to optimize. I will have to try that another time to be sure.

Also, I’d like to test the ILP models on Gurobi, but I don’t have a personal license. There’s also the possibility that I can come up with a much better ILP formulation, say, with a tighter LP relaxation. But these will have to wait for another time.

In the end, this experiment has given me some more food for thought, and concrete first-hand experience, on the use of SMT solvers.

Earthmover Distance

Problem: Compute distance between points with uncertain locations (given by samples, or differing observations, or clusters).

For example, if I have the following three “points” in the plane, as indicated by their colors, which is closer, blue to green, or blue to red?

[Image: example-points.png, three colored sample sets of points in the plane]

It’s not obvious, and there are multiple factors at work: the red points have fewer samples, but we can be more certain about the position; the blue points are less certain, but the closest non-blue point to a blue point is green; and the green points are equally plausibly “close to red” and “close to blue.” The centers of mass of the three sample sets form roughly an equilateral triangle. In our example the “points” don’t overlap, but of course they could. And in particular, there should probably be a nonzero distance between two points whose sample sets have the same center of mass, as below. The distance quantifies the uncertainty.

[Image: same-centers.png, two sample sets with the same center of mass but different spread]

All this is to say that it’s not obvious how to define a distance measure that is consistent with perceptual ideas of what geometry and distance should be.

Solution (Earthmover distance): Treat each sample set $ A$ corresponding to a “point” as a discrete probability distribution, so that each sample $ x \in A$ has probability mass $ p_x = 1 / |A|$. The distance between $ A$ and $ B$ is the optimal objective value of the following linear program.

Each $ x \in A$ corresponds to a pile of dirt of height $ p_x$, and each $ y \in B$ corresponds to a hole of depth $ p_y$. The cost of moving a unit of dirt from $ x$ to $ y$ is the Euclidean distance $ d(x, y)$ between the points (or whatever hipster metric you want to use).

Let $ z_{x, y}$ be a real variable corresponding to an amount of dirt to move from $ x \in A$ to $ y \in B$, with cost $ d(x, y)$. Then the constraints are:

  • Each $ z_{x, y} \geq 0$, so dirt only moves from $ x$ to $ y$.
  • Every pile $ x \in A$ must vanish, i.e. for each fixed $ x \in A$, $ \sum_{y \in B} z_{x,y} = p_x$.
  • Likewise, every hole $ y \in B$ must be completely filled, i.e. for each fixed $ y \in B$, $ \sum_{x \in A} z_{x,y} = p_y$.

The objective is to minimize the cost of doing this: $ \sum_{x, y \in A \times B} d(x, y) z_{x, y}$.

In Python, using the ortools library (a few docstrings omitted; full code on Github):

from collections import Counter, defaultdict
from math import sqrt

from ortools.linear_solver import pywraplp

# simple Euclidean distance between two points given as coordinate tuples
def euclidean_distance(x, y):
    return sqrt(sum((a - b)**2 for (a, b) in zip(x, y)))

def earthmover_distance(p1, p2):
    dist1 = {x: count / len(p1) for (x, count) in Counter(p1).items()}
    dist2 = {x: count / len(p2) for (x, count) in Counter(p2).items()}
    solver = pywraplp.Solver('earthmover_distance', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)

    variables = dict()

    # for each pile in dist1, the constraint that says all the dirt must leave this pile
    dirt_leaving_constraints = defaultdict(lambda: 0)

    # for each hole in dist2, the constraint that says this hole must be filled
    dirt_filling_constraints = defaultdict(lambda: 0)

    # the objective
    objective = solver.Objective()
    objective.SetMinimization()

    for (x, dirt_at_x) in dist1.items():
        for (y, capacity_of_y) in dist2.items():
            amount_to_move_x_y = solver.NumVar(0, solver.infinity(), 'z_{%s, %s}' % (x, y))
            variables[(x, y)] = amount_to_move_x_y
            dirt_leaving_constraints[x] += amount_to_move_x_y
            dirt_filling_constraints[y] += amount_to_move_x_y
            objective.SetCoefficient(amount_to_move_x_y, euclidean_distance(x, y))

    for x, linear_combination in dirt_leaving_constraints.items():
        solver.Add(linear_combination == dist1[x])

    for y, linear_combination in dirt_filling_constraints.items():
        solver.Add(linear_combination == dist2[y])

    status = solver.Solve()
    if status not in [solver.OPTIMAL, solver.FEASIBLE]:
        raise Exception('Unable to find feasible solution')

    return objective.Value()
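
As a quick usage example (hypothetical data, not from the post), two sample sets with the same center of mass but different spread get a nonzero distance, as promised earlier:

# four spread-out samples and one tight sample, both with center of mass (1, 1)
spread = [(0, 0), (2, 0), (0, 2), (2, 2)]
tight = [(1, 1)]
print(earthmover_distance(spread, tight))  # ~1.414: each quarter-unit of dirt moves sqrt(2)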

Discussion: I’ve heard about this metric many times as a way to compare probability distributions. For example, it shows up in an influential paper about fairness in machine learning, and a few other CS theory papers related to distribution testing.

One might ask: why not use other measures of dissimilarity for probability distributions (Chi-squared statistic, Kullback-Leibler divergence, etc.)? One answer is that these other measures only give useful information for pairs of distributions with the same support. An example from a talk of Justin Solomon succinctly clarifies what the Earthmover distance achieves.

[Image: slide from Justin Solomon’s talk]

Also, why not just model the samples using, say, a normal distribution, and then compute the distance based on the parameters of the distributions? That is possible, and in fact makes for a potentially more efficient technique, but you lose some information by doing this. Ignoring that your data might not be approximately normal (it might have some curvature), with Earthmover distance, you get point-by-point details about how each data point affects the outcome.

This kind of attention to detail can be very important in certain situations. One that I’ve been paying close attention to recently is the problem of studying gerrymandering from a mathematical perspective. Justin Solomon of MIT is a champion of the Earthmover distance (see his fascinating talk here for more, with slides) which is just one topic in a field called “optimal transport.”

This has the potential to be useful in redistricting because of the nature of the redistricting problem. As I wrote previously, discussions of redistricting are chock-full of geometry—or at least geometric-sounding language—and people are very concerned with the apparent “compactness” of a districting plan. But the underlying data used to perform redistricting isn’t very accurate. The people who build the maps don’t have precise data on voting habits, or even locations where people live. Census tracts might not be perfectly aligned, and data can just plain have errors and uncertainty in other respects. So the data that district-map-drawers care about is uncertain much like our point clouds. With a theory of geometry that accounts for uncertainty (and the Earthmover distance is the “distance” part of that), one can come up with more robust, better tools for redistricting.

Solomon’s website has a ton of resources about this, under the names “optimal transport” and “Wasserstein metric,” and his work extends from computing distances to computing important geometric quantities like the barycenter, and to computational advantages like parallelism.

Others in the field have come up with transparency techniques to make it clearer how the Earthmover distance relates to the geometry of the underlying space. This one is particularly fun because the explanations result in a path traveled from the start to the finish, and by setting up the underlying metric in just such a way, you can watch the distribution navigate a maze to get to its target. I like to imagine tiny ants carrying all that dirt.

[Image: a distribution traveling through a maze to reach its target]

Finally, work of Shirdhonkar and Jacobs provides approximation algorithms that allow linear-time computation, instead of the worst-case-cubic runtime of a linear solver.

Boolean Logic in Polynomials

Problem: Express a boolean logic formula using polynomials. I.e., if an input variable $ x$ is set to $ 0$, that is interpreted as false, while $ x=1$ is interpreted as true. The output of the polynomial should be 0 or 1 according to whether the formula is true or false as a whole.

Solution: You can do this using a single polynomial.

Illustrating with an example: the formula is $ \neg[(a \vee b) \wedge (\neg c \vee d)]$ also known as

not((a or b) and (not c or d))

The trick is to use multiplication for “and” and $ 1-x$ for “not.” So $ a \wedge b$ would be $ x_1 x_2$, and $ \neg z$ would be $ 1-z$. Indeed, if you have two binary variables $ x$ and $ y$ then $ xy$ is 1 precisely when both are 1, and zero when either variable is zero. Likewise, $ 1-x = 1$ if $ x$ is zero and zero if $ x$ is one.

Combine this with De Morgan’s rule to get any formula. $ a \vee b = \neg(\neg a \wedge \neg b)$ translates to $ 1 - (1-a)(1-b)$. For our example above,

$ \displaystyle f(a, b, c, d) = 1 - (1 - (1-a)(1-b))(1 - c(1-d))$

Which expands to

$ \displaystyle 1 - a - b + ab + (1-d)(ac + bc - abc)$

If you plug in $ a = 1, b = 0, c = 1, d = 0$ you get True in the original formula (because “not c or d” is False), and likewise the polynomial is

$ \displaystyle 1 - 1 - 0 + 0 + (1-0)(1 + 0 - 0) = 1$

You can verify the rest work yourself, using the following table as a guide:

0, 0, 0, 0 -> 1
0, 0, 0, 1 -> 1
0, 0, 1, 0 -> 1
0, 0, 1, 1 -> 1
0, 1, 0, 0 -> 0
0, 1, 0, 1 -> 0
0, 1, 1, 0 -> 1
0, 1, 1, 1 -> 0
1, 0, 0, 0 -> 0
1, 0, 0, 1 -> 0
1, 0, 1, 0 -> 1
1, 0, 1, 1 -> 0
1, 1, 0, 0 -> 0
1, 1, 0, 1 -> 0
1, 1, 1, 0 -> 1
1, 1, 1, 1 -> 0
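
If you’d rather not check these by hand, here’s a short snippet (not from the original post) that verifies the polynomial agrees with the formula on all sixteen inputs:

from itertools import product

def formula(a, b, c, d):
    return int(not ((a or b) and (not c or d)))

def polynomial(a, b, c, d):
    return 1 - (1 - (1 - a) * (1 - b)) * (1 - c * (1 - d))

for a, b, c, d in product([0, 1], repeat=4):
    assert formula(a, b, c, d) == polynomial(a, b, c, d)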

Discussion: This trick is used all over CS theory to embed boolean logic within polynomials, and it makes the name “boolean algebra” obvious, because it’s just a subset of normal algebra.

Moreover, since boolean satisfiability (the problem of algorithmically determining whether a boolean formula has a satisfying assignment, i.e., a choice of variables making it evaluate to true) is NP-hard, this trick can be used to show that certain problems relating to multivariable polynomials are also hard. For example, finding roots of multivariable polynomials (even if you know nothing about algebraic geometry) is hard, because you’d run into NP-hardness by simply considering the subset of polynomials coming from boolean formulas.

Here’s a more interesting example, related to the kinds of optimization problems that show up in modern machine learning. Say you want to optimize a polynomial $ f(x)$ subject to a set of quadratic equality constraints. This is NP-hard. Here’s why.

Let $ \varphi$ be a boolean formula and $ f_\varphi$ its corresponding polynomial. First, each variable $ x_i$ used in the polynomial can be restricted to binary values via the quadratic constraint $ x_i(x_i - 1) = 0$. Then maximizing $ f_\varphi$ subject to these constraints decides satisfiability: the maximum is 1 if $ \varphi$ has a satisfying assignment, and 0 otherwise.

You can even show NP-hardness when the function to optimize is only quadratic. As an exercise, one can express the subset sum problem as a quadratic programming problem using similar choices for the constraints. According to this writeup, you can even express subset sum as a quadratic program with linear constraints.
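
Concretely, here is one way such a reduction could go (my sketch; the linked writeup may do it differently): given numbers $ a_1, \dots, a_n$ and a target $ t$, consider

$ \displaystyle \min_x \sum_i x_i(1 - x_i) \quad \textup{subject to} \quad \sum_i a_i x_i = t, \quad 0 \leq x_i \leq 1$

The objective is quadratic, the constraints are linear, and the objective is always nonnegative; it attains zero exactly when every $ x_i$ is 0 or 1, i.e., exactly when some subset of the $ a_i$ sums to $ t$.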

The moral of the story is simply that multivariable polynomials can encode arbitrary boolean logic.

The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm

[Photo: Christos Papadimitriou, who studies multiplicative weights in the context of biology.]

Hard to believe

Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of $ n$ objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
   Pick an object at random proportionally to the weights
   Some event happens
   Increase the weight of the chosen object if it does well in the event
   Otherwise decrease the weight

The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step $ t$ is $ w_t$ before the event, and $ G$ represents how well the object did in the event, then we’ll update the weight according to the rule:

$ \displaystyle w_{t+1} = w_t (1 + G)$

Think of this as increasing the weight by a small multiple of the object’s performance on a given round.

Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.

This is how we use multiplicative weights: if we number the experts $ 1, \dots, N$, we give each expert a weight $ w_i$ which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss $ G$. Then we update the weight of the chosen expert by multiplying it by $ (1 + G / 1000)$. Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be made concrete later).
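
As a toy illustration of this update rule (my own snippet, separate from the post’s demo below), here is what happens when three hypothetical experts of varying quality are all updated each day:

import random

def experts_simulation(num_days=1000, learning_rate=0.1):
    win_probability = [0.5, 0.6, 0.7]   # three hypothetical experts of varying quality
    weights = [1.0] * len(win_probability)
    for _ in range(num_days):
        for i, p in enumerate(win_probability):
            gain = 1000 if random.random() < p else -1000    # dollars made following expert i
            weights[i] *= (1 + learning_rate * gain / 1000)  # the multiplicative update
    return weights

print(experts_simulation())  # the third expert's weight dwarfs the other two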

In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

[Image: link to the interactive MWUA demo]

This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

  1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
  2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example. The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm.
  3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error.
  4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology.
  5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion.
  6. You can get easy $ \log(n)$-approximation algorithms for many NP-hard problems, such as set cover.

Additional, more technical examples can be found in this survey of Arora et al.

In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove its main theoretical guarantee, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository.

The generic MWUA algorithm

Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality.

In general we have some set $ X$ of objects and some set $ Y$ of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table $ M$ whose rows are objects, whose columns are outcomes, and whose $ i,j$ entry $ M(i,j)$ is the reward produced by object $ x_i$ when the outcome is $ y_j$. We will also write this as $ M(x, y)$ for object $ x$ and outcome $ y$. The only assumption we’ll make on the rewards is that the values $ M(x, y)$ are bounded by some small constant $ B$ (by small I mean $ B$ should not require exponentially many bits to write down as compared to the size of $ X$). In symbols, $ M(x,y) \in [0,B]$. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table $ M$ just exists for analysis, and the algorithm does not know its values. Moreover, while the values in $ M$ are static, the choice of outcome $ y$ for a given round may be nondeterministic.

The MWUA algorithm randomly chooses an object $ x \in X$ in every round, observing the outcome $ y \in Y$, and collecting the reward $ M(x,y)$ (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round.

Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds.

def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   ...

We define for object $ x$ a nonnegative number $ w_x$ we call a “weight.” The weights will change over time so we’ll also sub-script a weight with a round number $ t$, i.e. $ w_{x,t}$ is the weight of object $ x$ in round $ t$. Initially, all the weights are $ 1$. Then MWUA continues in rounds. We start each round by drawing an example randomly with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      thisRoundReward = reward(chosenObject, outcome)

      ...

Sampling objects in this way is the same as associating a distribution $ D_t$ to each round, where if $ S_t = \sum_{x \in X} w_{x,t}$ then the probability of drawing $ x$, which we denote $ D_t(x)$, is $ w_{x,t} / S_t$. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.

Next comes the weight update step. Let’s call our learning rate parameter $ \varepsilon$. In round $ t$, say we have object $ x_t$ and outcome $ y_t$; then the reward is $ M(x_t, y_t)$. We update the weight of the chosen object $ x_t$ according to the formula:

$ \displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)$

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects $ x \in X$. This turns into the following Python snippet, where we hide the division by $ B$ into the choice of learning rate:

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
   weights = [1] * len(objects)
   cumulativeReward = 0
   outcomes = []

   for t in range(numRounds):
      chosenObjectIndex = draw(weights)
      chosenObject = objects[chosenObjectIndex]

      outcome = observeOutcome(t, weights, chosenObject)
      outcomes.append(outcome)
      thisRoundReward = reward(chosenObject, outcome)
      cumulativeReward += thisRoundReward

      for i in range(len(weights)):
         weights[i] *= (1 + learningRate * reward(objects[i], outcome))

   # return the final weights, the total reward collected, and the per-round outcomes
   return weights, cumulativeReward, outcomes

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round $ t$ can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post.

But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee. And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus $ \log(n)$, where $ n$ is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script:

  1. Define potential function, in our case $ S_t$.
  2. State what seems like trivial facts about the potential function to write $ S_{t+1}$ in terms of $ S_t$, and hence get general information about $ S_T$ for some large $ T$.
  3. Theorem is proved.
  4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill.

In this proof our potential function is the sum of the weights of the objects in a given round, $ S_t = \sum_{x \in X} w_{x, t}$. Now the lemma.

Lemma: Let $ B$ be the bound on the size of the rewards, and $ 0 < \varepsilon < 1/2$ a learning parameter. Recall that $ D_t(x)$ is the probability that MWUA draws object $ x$ in round $ t$. Write the expected reward for MWUA for round $ t$ as the following (using only the definition of expected value):

$ \displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)$

 Then the claim of the lemma is:

$ \displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}$

Proof. Expand $ S_{t+1} = \sum_{x \in X} w_{x, t+1}$ using the definition of the MWUA update:

$ \displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)$

Now distribute $ w_{x, t}$ and split into two sums:

$ \displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)$

Using the fact that $ D_t(x) = \frac{w_{x,t}}{S_t}$, we can replace $ w_{x,t}$ with $ D_t(x) S_t$, which allows us to get $ R_t$

$ \displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}$

And then using the fact that $ (1 + x) \leq e^x$ (Taylor series), we can bound the last expression by $ S_te^{\varepsilon R_t / B}$, as desired.

$ \square$

Now using the lemma, we can get a hold on $ S_T$ for a large $ T$, namely that

$ \displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}$

If $ |X| = n$ then $ S_1=n$, simplifying the above. Moreover, the sum of the weights in round $ T$ is certainly at least any single weight, and since $ M(x, y_t)/B \leq 1$, each update factor satisfies $ 1 + \varepsilon M(x, y_t)/B \geq (1+\varepsilon)^{M(x, y_t)/B}$, so that for every fixed object $ x \in X$,

$ \displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}$

Squeezing $ S_t$ between these two inequalities and taking logarithms (to simplify the exponents) gives

$ \displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t$

Multiply through by $ B$, divide by $ \varepsilon$, rearrange, and use the fact that when $ 0 < \varepsilon < 1/2$ we have $ \log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2$ (Taylor series) to get

$ \displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}$

The bracketed term is the payoff of object $ x$, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object $ x \in X$, and hence to the best one. This proves the theorem.

$ \square$

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity $ B \log(n) / \varepsilon$ hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed $ \delta$. The answer is roughly: let $ \varepsilon = O(\delta / B)$ and pick $ T = O(B^2 \log(n) / \delta^2)$. See this survey for more.

MWUA for linear programs

Now we’ll approximately solve a linear program using MWUA. Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product $ c \cdot x$, where $ c$ is a fixed vector and $ x = (x_1, x_2, \dots, x_n)$ is a vector of non-negative variables the algorithm gets to choose. The choices for $ x$ are also constrained by a set of $ m$ linear inequalities, $ A_i \cdot x \geq b_i$, where $ A_i$ is a fixed vector and $ b_i$ is a scalar for $ i = 1, \dots, m$. This is usually summarized by putting all the $ A_i$ in a matrix, $ b_i$ in a vector, as

$ x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}$

We can further simplify the constraints by assuming we know the optimal value $ Z = c \cdot x_{\textup{OPT}}$ in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint $ Ax \geq b$, the “easy feasible region” of possible $ x$’s includes $ \{ x \mid x \geq 0, c \cdot x = Z \}$.

In order to fit linear programming into the MWUA framework we have to define two things.

  1. The objects: the set of linear inequalities $ A_i \cdot x \geq b_i$.
  2. The rewards: the error of a constraint for a special input vector $ x_t$.

Number 2 is curious (why would we give a reward for error?) but it’s crucial and we’ll discuss it momentarily.

The special input $ x_t$ depends on the weights in round $ t$ (which is allowed, recall). Specifically, if the weights are $ w = (w_1, \dots, w_m)$, we ask for a vector $ x_t$ in our “easy feasible region” which satisfies

$ \displaystyle (A^T w) \cdot x_t \geq w \cdot b$

For this post we call the implementation of procuring such a vector the “oracle,” since it can be seen as the black-box problem of, given a vector $ \alpha$ and a scalar $ \beta$ and a convex region $ R$, finding a vector $ x \in R$ satisfying $ \alpha \cdot x \geq \beta$. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, $ \alpha = A^T w, \beta = w \cdot b$, are particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector $ A^T w$ is a weighted average of the constraints in $ A$, and $ w \cdot b$ is a weighted average of the thresholds. So this inequality is a “weighted average” inequality (specifically, a convex combination, since the weights are nonnegative). In particular, if no such $ x$ exists, then the original linear program has no solution. Indeed, given a solution $ x^*$ to the original linear program, each constraint, say $ A_1 \cdot x^* \geq b_1$, remains valid after multiplying both sides by $ w_1 \geq 0$, and summing over all constraints shows that $ x^*$ satisfies the averaged inequality.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don’t waste our effort on it. The byproduct of MWUA—the weights—identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA’s attention on them.

At the end, our final output is an average of the $ x_t$ produced in each round, i.e. $ x^* = \frac{1}{T}\sum_t x_t$. This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We’ll spend the rest of this post implementing the scheme outlined above.

Implementing the oracle

Fix the convex region $ R = \{ c \cdot x = Z, x \geq 0 \}$ for a known optimal value $ Z$. Define $ \textup{oracle}(\alpha, \beta)$ as the problem of finding an $ x \in R$ such that $ \alpha \cdot x \geq \beta$.

For the case of this linear region $ R$, we can simply find the index $ i$ which maximizes $ \alpha_i Z / c_i$. If this value is at least $ \beta$, we return the vector with $ Z / c_i$ in the $ i$-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the “no solution” part, say $ n=2$ and you have $ x = (x_1, x_2)$ a solution to $ \alpha \cdot x \geq \beta$. Then for whichever index makes $ \alpha_i Z / c_i$ bigger, say $ i=1$, you can increase $ \alpha \cdot x$ without changing $ c \cdot x = Z$ by replacing $ x_1$ with $ x_1 + (c_2/c_1)x_2$ and $ x_2$ with zero. I.e., we’re moving the solution $ x$ along the line $ c \cdot x = Z$ until it reaches a vertex of the region bounded by $ c \cdot x = Z$ and $ x \geq 0$. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.

import numpy

class InfeasibleException(Exception):
    pass

def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle

Implementing the core solver

The core solver implements the discussion from previously, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of $ c$.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to $ A$ and $ b$. Note that neither $ c$ nor the optimal value show up here.

    def reward(i, specialVector):
        constraint = A[i]
        threshold = b[i]
        return threshold - numpy.dot(constraint, specialVector)

    def observeOutcome(_, weights, __):
        weights = numpy.array(weights)
        weightedVector = A.transpose().dot(weights)
        weightedThreshold = weights.dot(b)
        return oracle(weightedVector, weightedThreshold)

Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is not particularly sophisticated (though it could be made more so). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.

def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0.     0.987  1.026]
3.00000000425
[  5.20000072e-02   8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

Discussion

One of the nice aspects of MWUA is that it’s completely transparent. If you want to know why a decision was made, you can simply look at the weights and the history of rewards of the objects. There’s also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology.

This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an “online” linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.

The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.

If you’ve been reading this entire post screaming “this is just gradient descent!” then you’re right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.

Even though so many people have been advocating for MWUA over the past decade, it’s surprising that it doesn’t show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting undergraduate in 2007, and I didn’t even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don’t have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven’t made it to classrooms yet.

Until next time!