A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four”: direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever-growing supply of proof methods. There are books written about the “probabilistic method,” I recently went to a lecture where the “linear algebra method” was displayed, there has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and there are many more.
So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.
Diagonalization
Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.
The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one correspondence between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $ n$ to $ 2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.
Theorem: There is no bijection from the natural numbers $ \mathbb{N}$ to the real numbers $ \mathbb{R}$.
Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection $ f: \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $ k$ and I will spit out $ f(k)$, with the property that different $ k$ give different $ f(k)$, and every real number is hit by some natural number $ k$ (this is just what it means to be a one-to-one correspondence).
First let me just do some setup. I claim that all we need to do is show that there is no bijection between $ \mathbb{N}$ and the real numbers between 0 and 1. In particular, I claim there is a bijection from $ (0,1)$ to all real numbers, so that a bijection from $ \mathbb{N} \to (0,1)$ could be composed with it to give a bijection from $ \mathbb{N} \to \mathbb{R}$, and conversely, composing our assumed $ f$ with its inverse gives a bijection from $ \mathbb{N} \to (0,1)$. To get a bijection from $ (0,1) \to \mathbb{R}$, note that a map like $ x \mapsto 1/x$ gets you partway there: it is a bijection from $ (0,1)$ to $ (1, \infty)$. To hit all of $ \mathbb{R}$ in one stroke, use $ x \mapsto \tan(\pi(x - 1/2))$ instead: as $ x$ ranges over $ (0,1)$, the argument $ \pi(x - 1/2)$ ranges over $ (-\pi/2, \pi/2)$, and on that interval the tangent function is a continuous, strictly increasing bijection onto all of $ \mathbb{R}$.
Okay, setup is done. We just have to show there is no bijection between $ (0,1)$ and the natural numbers.
The reason I did all that setup is so that I can use the fact that every real number in $ (0,1)$ has an infinite binary expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of $ f(1)$ as a row in a table (an infinite row), and below it I’ll write down the expansion of $ f(2)$, below that $ f(3)$, and so on, with the decimal points lined up. The table looks like this.

$ \displaystyle \begin{aligned} f(1) &= 0.dddddd \dots \\ f(2) &= 0.dddddd \dots \\ f(3) &= 0.dddddd \dots \\ &\vdots \end{aligned}$
The $ d$’s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of $ f(1)$ by $ b_{1,1}, b_{1,2}, b_{1,3}, \dots$, the digits of $ f(2)$ by $ b_{2,1}, b_{2,2}, b_{2,3}, \dots$, and so on. This makes the table look like this.

$ \displaystyle \begin{aligned} f(1) &= 0.b_{1,1} b_{1,2} b_{1,3} b_{1,4} \dots \\ f(2) &= 0.b_{2,1} b_{2,2} b_{2,3} b_{2,4} \dots \\ f(3) &= 0.b_{3,1} b_{3,2} b_{3,3} b_{3,4} \dots \\ &\vdots \end{aligned}$
It’s a bit harder to read, but trust me the notation is helpful.
Now by the assumption that $ f$ is a bijection, every real number in $ (0,1)$ shows up as a row in this table, and no number shows up twice. So if I can construct a number in $ (0,1)$ that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all the real numbers in $ (0,1)$ to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.
Here’s how I’ll come up with such a number $ N$ (this is the diagonalization part). It starts with 0., and its first digit after the decimal is $ 1-b_{1,1}$. That is, we flip the bit $ b_{1,1}$ to get the first digit of $ N$. The second digit is $ 1-b_{2,2}$, the third is $ 1-b_{3,3}$, and so on. In general, digit $ i$ is $ 1-b_{i,i}$.
Now we show that $ N$ isn’t in the table. If it were, then it would have to be $ N = f(m)$ for some $ m$, i.e. be the $ m$-th row in the table. Moreover, by the way we built the table, the $ m$-th digit of $ N$ would be $ b_{m,m}$. But we defined $ N$ so that its $ m$-th digit is actually $ 1-b_{m,m}$. This is very embarrassing for $ N$ (it’s a contradiction!). So $ N$ isn’t in the table. (One wrinkle we’re glossing over: some real numbers have two binary expansions, like $ 0.0111\dots = 0.1000\dots$, so to be fully rigorous one fixes a convention for which expansion each row gets and rules out that $ N$ is merely the other expansion of some number in the table. This can be done, but we won’t belabor the details here.)
$ \square$
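If you like to see things in code, the diagonal construction is tiny. Here’s a sketch in Python, where I’m representing the (infinite) table abstractly as a function table_digit(i, j) giving the $ j$-th binary digit of $ f(i)$ (a name I’m making up purely for illustration):

def diagonal_flip(table_digit):
    # table_digit(i, j) is the j-th binary digit of f(i).
    # The returned function gives the digits of N: its i-th digit
    # disagrees with the i-th digit of f(i), so N matches no row.
    return lambda i: 1 - table_digit(i, i)

No matter what table you hand it, the number whose digits come from diagonal_flip differs from row $ m$ of that table in digit $ m$.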
It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?
The Halting Problem
The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.
The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program $ P$ and an input $ x$ to that program, will $ P$ ever stop running when given $ x$ as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!
So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.
Theorem: The halting problem cannot be solved by Turing machines.
Proof. Suppose to the contrary that $ T$ is a program that solves the halting problem. We’ll use $ T$ as a black box to come up with a new program I’ll call meta-$ T$, defined in pseudo-python as follows.
def metaT(P):
    # Run the hypothetical halting-problem solver T on the pair (P, P).
    if T(P, P):  # T says that P halts when given its own source as input
        while True:  # loop infinitely
            pass
    else:
        return "success!"  # halt and output "success!"
In words, meta-$ T$ accepts as input the source code of a program $ P$, and then uses $ T$ to tell if $ P$ halts (when given its own source code as input). Based on the result, it behaves the opposite of $ P$; if $ P$ halts then meta-$ T$ loops infinitely and vice versa. It’s a little meta, right?
Now let’s do something crazy: let’s run meta-$ T$ on itself! That is, run
metaT(metaT)
So meta. The question is what is the output of this call? The meta-$ T$ program uses $ T$ to determine whether meta-$ T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$ T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $ T$’s answer! Likewise, if $ T$ says that metaT(metaT) should loop infinitely, that will cause meta-$ T$ to halt, a contradiction. So $ T$ cannot be correct, and the halting problem can’t be solved.
$ \square$
This theorem is deep because it says that you can’t possibly write a program that can always detect bugs in other programs. Infinite loops are just one special kind of bug.
But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $ \mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.
For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:
010101010101010101...
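To make the encoding concrete, here’s a hedged sketch in Python. The helper problem_bits is hypothetical (a name I’m inventing); it streams the bit string of a given yes/no problem by enumerating inputs in shortlex order:

from itertools import count, product

def problem_bits(decide):
    # Yield the infinite bit string of a yes/no problem, one bit per
    # input string in shortlex (length, then lexicographic) order.
    # `decide` maps strings to True/False and is assumed to handle
    # the empty string.
    for n in count(0):
        for digits in product("01", repeat=n):
            yield 1 if decide("".join(digits)) else 0

For the is-even problem above you might pass decide = lambda s: s != "" and int(s, 2) % 2 == 0.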
Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry $ (x,P)$ a 1 if $ P(x)$ halts and a 0 otherwise.
$ \displaystyle \begin{matrix} & P_1 & P_2 & P_3 & \dots \\ x_1 & b_{1,1} & b_{1,2} & b_{1,3} & \dots \\ x_2 & b_{2,1} & b_{2,2} & b_{2,3} & \dots \\ x_3 & b_{3,1} & b_{3,2} & b_{3,3} & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix}$

Here $ b_{i,j}$ is 1 if $ P_j(x_i)$ halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.
Now we assume for contradiction’s sake that some program solves the halting problem, i.e. that every entry of the table is computable. Then we construct the answers output by meta-$ T$ by flipping each bit of the diagonal of the table. The point is that meta-$ T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$ T$. Then we argue that the entry of the table for $ (\textup{meta-}T, \textup{meta-}T)$ contradicts its definition, and we’re done!
So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire.
A while back Peter Norvig posted a wonderful pair of articles about regex golf. The idea behind regex golf is to come up with the shortest possible regular expression that matches everything in one given list of strings but nothing in a second list.
“Regex Golf,” by Randall Munroe.
In the first article, Norvig runs a basic algorithm to recreate and improve the results from the comic, and in the second he beefs it up with some improved search heuristics. My favorite part about this topic is that regex golf can be phrased in terms of a problem called set cover. I noticed this when reading the comic, and was delighted to see Norvig use that as the basis of his algorithm.
The set cover problem shows up in other places, too. If you have a database of items labeled by users, and you want to find the smallest set of labels to display that covers every item in the database, you’re doing set cover. I hear there are applications in biochemistry and biology but haven’t seen them myself.
If you know what a set is (just think of the “set” or “hash set” type from your favorite programming language), then set cover has a simple definition.
Definition (The Set Cover Problem): You are given a finite set $ U$ called a “universe” and sets $ S_1, \dots, S_n$ each of which is a subset of $ U$. You choose some of the $ S_i$ to ensure that every $ x \in U$ is in one of your chosen sets, and you want to minimize the number of $ S_i$ you picked.
It’s called a “cover” because the sets you pick “cover” every element of $ U$. Let’s do a simple example. Let $ U = \{ 1,2,3,4,5 \}$ and

$ \displaystyle S_1 = \{ 1, 3, 4 \}, \quad S_2 = \{ 2, 5 \}, \quad S_3 = \{ 1, 2, 3, 4 \}$
Then the smallest possible number of sets you can pick is 2, and you can achieve this by picking both $ S_1, S_2$ or both $ S_2, S_3$. The connection to regex golf is that you pick $ U$ to be the set of strings you want to match, and you pick a set of regexes that match some of the strings in $ U$ but none of the strings you want to avoid matching (I’ll call them $ V$). If $ w$ is such a regex, then you can form the set $ S_w$ of strings that $ w$ matches. If you then find a small set cover using the regexes $ w_1, \dots, w_t$, you can “or” them together to get a single regex $ w_1 \mid w_2 \mid \dots \mid w_t$ that matches all of $ U$ but none of $ V$.
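Here’s a minimal sketch of that conversion in Python, assuming you already have a pool of candidate regexes in hand (how to generate good candidates is a separate question):

import re

def regex_golf_sets(matches, avoid, candidate_regexes):
    # Build a set cover instance: the universe is the strings to match,
    # and there is one set S_w per candidate regex w that matches
    # nothing in `avoid`.
    universe = set(matches)
    sets = {}
    for w in candidate_regexes:
        if any(re.search(w, s) for s in avoid):
            continue  # w matches a forbidden string, so discard it
        sets[w] = {s for s in matches if re.search(w, s)}
    return universe, sets

A set cover for this instance is exactly a collection of regexes you can “or” together.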
Set cover is what’s called NP-hard, and one implication is that we shouldn’t hope to find an efficient algorithm that will always give you the shortest regex for every regex golf problem. But despite this, there are approximation algorithms for set cover. What I mean by this is that there is a regex-golf algorithm $ A$ that outputs a subset of the regexes matching all of $ U$, and the number of regexes it outputs is such-and-such close to the minimum possible number. We’ll make “such-and-such” more formal later in the post.
What made me sad was that Norvig didn’t go any deeper than saying, “We can try to approximate set cover, and the greedy algorithm is pretty good.” It’s true, but the ideas are richer than that! Set cover is a simple example to showcase interesting techniques from theoretical computer science. And perhaps ironically, in Norvig’s second post a header promised the article would discuss the theory of set cover, but I didn’t see any of what I think of as theory. Instead he partially analyzes the structure of the regex golf instances he cares about. This is useful, but not really theoretical in any way unless he can say something universal about those instances.
I don’t mean to bash Norvig. His articles were great! And in-depth theory was way beyond scope. So this post is just my opportunity to fill in some theory gaps. We’ll do three things:
1. Show formally that set cover is NP-hard.
2. Prove the approximation guarantee of the greedy algorithm.
3. Show another (very different) approximation algorithm based on linear programming.
Along the way I’ll argue that by knowing (or at least seeing) the details of these proofs, one can get a better sense of what features to look for in the set cover instance you’re trying to solve. We’ll also see how set cover depicts the broader themes of theoretical computer science.
NP-hardness
The first thing we should do is show that set cover is NP-hard. Intuitively what this means is that we can take some hard problem $ P$ and encode instances of $ P$ inside set cover problems. This idea is called a reduction, because solving problem $ P$ will “reduce” to solving set cover, and the method we use to encode instances of $ P$ as set cover problems has only a small amount of overhead. This is one way to say that set cover is “at least as hard as” $ P$.
The hard problem we’ll reduce to set cover is called 3-satisfiability (3-SAT). In 3-SAT, the input is a formula whose variables are either true or false, and the formula is expressed as an AND of a bunch of clauses, each of which is an OR of three variables (or their negations). This is called 3-CNF form. A simple example:
$ \displaystyle (x \vee y \vee \neg z) \wedge (\neg x \vee w \vee y) \wedge (z \vee x \vee \neg w)$
The goal of the algorithm is to decide whether there is an assignment to the variables which makes the formula true. 3-SAT is one of the most fundamental problems we believe to be hard and, roughly speaking, by reducing it to set cover we include set cover in a class called NP-complete, and if any one of these problems can be solved efficiently, then they all can (this is the famous P versus NP problem, and an efficient algorithm would imply P equals NP).
So a reduction would consist of the following: you give me a formula $ \varphi$ in 3-CNF form, and I have to produce (in a way that depends on $ \varphi$!) a universe $ U$ and a choice of subsets $ S_i \subset U$ in such a way that
$ \varphi$ has a true assignment of variables if and only if the corresponding set cover problem has a cover using $ k$ sets.
In other words, I’m going to design a function $ f$ from 3-SAT instances to set cover instances, such that $ x$ is satisfiable if and only if $ f(x)$ has a set cover with $ k$ sets.
Why do I say it only for $ k$ sets? Well, if you can always answer this question then I claim you can find the minimum size of a set cover needed by doing a binary search for the smallest value of $ k$. So finding the minimum size of a set cover reduces to the problem of telling if there’s a set cover of size $ k$.
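In code, the binary search is a short sketch, where covers_with_at_most is a hypothetical black box answering “is there a cover of size at most $ k$?” (a question that is monotone in $ k$ by definition):

def min_cover_size(covers_with_at_most, n):
    # Binary search for the smallest k admitting a size-k cover,
    # assuming the n sets do cover the universe (so k = n works).
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        if covers_with_at_most(mid):
            hi = mid  # a cover of size <= mid exists; search lower
        else:
            lo = mid + 1
    return lo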
Now let’s do the reduction from 3-SAT to set cover.
If you give me $ \varphi = C_1 \wedge C_2 \wedge \dots \wedge C_m$ where each $ C_i$ is a clause and the variables are denoted $ x_1, \dots, x_n$, then I will choose as my universe $ U$ the set of all the clauses and indices of the variables (these are all just formal symbols). i.e.
$ \displaystyle U = \{ C_1, C_2, \dots, C_m, 1, 2, \dots, n \}$
The first part of $ U$ will ensure I make all the clauses true, and the last part will ensure I don’t pick a variable to be both true and false at the same time.
To show how this works I have to pick my subsets. For each variable $ x_i$, I’ll make two sets, one called $ S_{x_i}$ and one called $ S_{\neg x_i}$. They will both contain $ i$ in addition to the clauses which they make true when the corresponding literal is true (by literal I just mean the variable or its negation). For example, if $ C_j$ uses the literal $ \neg x_7$, then $ S_{\neg x_7}$ will contain $ C_j$ but $ S_{x_7}$ will not. Finally, I’ll set $ k = n$, the number of variables.
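Before the proof, here’s the construction written out as a Python sketch. I’m assuming clauses are given DIMACS-style as lists of nonzero integers, where the literal $ 7$ means $ x_7$ and $ -7$ means $ \neg x_7$:

def sat_to_set_cover(clauses, num_vars):
    # Universe: one symbol per clause and one per variable index.
    universe = {("clause", j) for j in range(len(clauses))}
    universe |= {("var", i) for i in range(1, num_vars + 1)}

    sets = {}
    for i in range(1, num_vars + 1):
        for literal in (i, -i):
            # The set for a literal holds its variable's index plus
            # every clause that this literal makes true.
            sets[literal] = {("var", i)} | {
                ("clause", j) for j, c in enumerate(clauses) if literal in c
            }
    return universe, sets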
Now to prove this reduction works I have to prove two things. First: if my starting formula has a satisfying assignment, then the set cover problem has a cover of size $ k$. Indeed, take the sets $ S_{y}$ for all literals $ y$ that are set to true in a satisfying assignment. There are exactly $ n$ true literals (one for each variable), so this gives $ n$ sets, and these sets clearly cover all of $ U$: every index $ i$ is covered because we picked one of $ S_{x_i}, S_{\neg x_i}$ for each $ i$, and every clause is covered because each clause has to be satisfied by some true literal or else the formula isn’t true.
The reverse direction is similar: if I have a set cover of size $ n$, I need to use it to come up with a satisfying truth assignment for the original formula. Indeed, the chosen sets can’t include both an $ S_{x_i}$ and its negation set $ S_{\neg x_i}$: there are $ n$ elements $ \{1, 2, \dots, n \} \subset U$ to cover, each index $ i$ lies only in the two sets $ S_{x_i}, S_{\neg x_i}$, and so just by counting, covering all the indices already accounts for all $ n$ chosen sets, one from each pair. And finally, since the cover also hits all the clauses, setting to true the literals corresponding to the chosen sets gives exactly a satisfying assignment.
Whew! So set cover is NP-hard because I encoded this logic problem 3-SAT within its rules. If we think 3-SAT is hard (and we do) then set cover must also be hard. So if we can’t hope to solve it exactly we should try to approximate the best solution.
The greedy approach
The method that Norvig uses in attacking the meta-regex golf problem is the greedy algorithm. The greedy algorithm is exactly what you’d expect: you maintain a list $ L$ of the subsets you’ve picked so far, and at each step you pick the set $ S_i$ that maximizes the number of new elements of $ U$ that aren’t already covered by the sets in $ L$. In python pseudocode:
def greedySetCover(universe, sets):
    chosenSets = []
    leftToCover = set(universe)
    unchosenSets = list(sets)

    # the elements of s that aren't yet covered
    covered = lambda s: leftToCover & s

    while leftToCover:
        if not unchosenSets:
            raise Exception("No set cover possible")

        nextSet = max(unchosenSets, key=lambda s: len(covered(s)))
        if not covered(nextSet):
            raise Exception("No set cover possible")

        unchosenSets.remove(nextSet)
        chosenSets.append(nextSet)
        leftToCover -= nextSet

    return chosenSets
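For example, running it on the instance from earlier in the post:

universe = {1, 2, 3, 4, 5}
sets = [{1, 3, 4}, {2, 5}, {1, 2, 3, 4}]
greedySetCover(universe, sets)  # [{1, 2, 3, 4}, {2, 5}]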
This is what theory has to say about the greedy algorithm:
Theorem: If it is possible to cover $ U$ by the sets in $ F = \{ S_1, \dots, S_n \}$, then the greedy algorithm always produces a cover that at worst has size $ O(\log(n)) \textup{OPT}$, where $ \textup{OPT}$ is the size of the smallest cover. Moreover, this is asymptotically the best any algorithm can do.
One simple fact we need from calculus is that the following sum, called the $ n$-th harmonic number $ H(n)$, is asymptotically the same as $ \log(n)$:

$ \displaystyle H(n) = 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n} = \sum_{i=1}^n \frac{1}{i} = \log(n) + O(1)$
Proof. [adapted from Wan] Let’s say the greedy algorithm picks sets $ T_1, T_2, \dots, T_k$ in that order. We’ll set up a little value system for the elements of $ U$. Specifically, the value of each $ T_i$ is 1, and in step $ i$ we evenly distribute this unit value across all newly covered elements of $ T_i$. So for $ T_1$ each covered element gets value $ 1/|T_1|$, and if $ T_2$ covers four new elements, each gets a value of 1/4. One can think of this “value” as a price, or energy, or unit mass, or whatever. It’s just an accounting system (albeit a clever one) we use to make some inequalities clear later.
In general call the value $ v_x$ of element $ x \in U$ the value assigned to $ x$ at the step where it’s first covered. In particular, the number of sets chosen by the greedy algorithm $ k$ is just $ \sum_{x \in U} v_x$. We’re just bunching back together the unit value we distributed for each step of the algorithm.
Now we want to compare the sets chosen by greedy to the optimal choice. Call a smallest set cover $ C_{\textup{OPT}}$. Let’s stare at the following inequality.

$ \displaystyle \sum_{x \in U} v_x \leq \sum_{S \in C_{\textup{OPT}}} \sum_{x \in S} v_x$

It’s true because each $ x$ counts for a $ v_x$ at most once in the left hand side, and in the right hand side the sets in $ C_{\textup{OPT}}$ must hit each $ x$ at least once but may hit some $ x$ more than once. Also remember the left hand side is equal to $ k$.
Now we want to show that the inner sum on the right hand side, $ \sum_{x \in S} v_x$, is at most $ H(|S|)$. This will in fact prove the entire theorem: because each set $ S_i$ has size at most $ n$, the inequality above will turn into
$ \displaystyle k \leq |C_{\textup{OPT}}| H(|S|) \leq |C_{\textup{OPT}}| H(n)$
And so $ k \leq \textup{OPT} \cdot O(\log(n))$, which is the statement of the theorem.
So we want to show that $ \sum_{x \in S} v_x \leq H(|S|)$. For each $ j$ define $ \delta_j(S)$ to be the number of elements in $ S$ not covered by $ T_1 \cup \dots \cup T_j$. Notice that $ \delta_{j-1}(S) - \delta_{j}(S)$ is the number of elements of $ S$ that are covered for the first time in step $ j$. If we call $ t_S$ the smallest integer $ j$ for which $ \delta_j(S) = 0$, then counting up the value distributed at each step gives

$ \displaystyle \sum_{x \in S} v_x = \sum_{i=1}^{t_S} \left( \delta_{i-1}(S) - \delta_{i}(S) \right) \cdot \frac{1}{|T_i \setminus (T_1 \cup \dots \cup T_{i-1})|}$

The rightmost term is just the value assigned to the relevant elements at step $ i$. Moreover, because at step $ i$ the greedy algorithm chose $ T_i$ to cover at least as many new elements as $ S$ would have covered, the fraction above is at most $ 1/\delta_{i-1}(S)$. The end is near. For brevity I’ll drop the $ (S)$ from $ \delta_j(S)$ and compute:

$ \displaystyle \sum_{x \in S} v_x \leq \sum_{i=1}^{t_S} (\delta_{i-1} - \delta_i) \frac{1}{\delta_{i-1}} \leq \sum_{i=1}^{t_S} \left( \frac{1}{\delta_{i-1}} + \frac{1}{\delta_{i-1} - 1} + \dots + \frac{1}{\delta_{i} + 1} \right) = \sum_{j=1}^{|S|} \frac{1}{j} = H(|S|)$

The second inequality holds because each of the $ \delta_{i-1} - \delta_i$ terms in the inner sum is at least $ 1/\delta_{i-1}$, and the final equality holds because $ \delta_0 = |S|$ and $ \delta_{t_S} = 0$, so the inner sums piece together each of $ 1/1, 1/2, \dots, 1/|S|$ exactly once.

$ \square$
This is basically the exact worst-case approximation that the greedy algorithm achieves. In fact, Petr Slavik proved in 1996 that the greedy gives you a set of size exactly $ (\log n - \log \log n + O(1)) \textup{OPT}$ in the worst case.
This is also the best approximation that any set cover algorithm can achieve, provided that P is not NP. This result was basically known in 1994, but it wasn’t until 2013 and the use of some very sophisticated tools that the best possible bound was found with the smallest assumptions.
In the proof we used that $ |S| \leq n$ to bound things, but if we knew that our sets $ S_i$ (i.e. subsets matched by a regex) had sizes bounded by, say, $ B$, the same proof would show that the approximation factor is $ \log(B)$ instead of $ \log n$. However, in order for that to be useful you need $ B$ to be a constant, or at least to grow more slowly than any polynomial in $ n$, since e.g. $ \log(n^{0.1}) = 0.1 \log n$. In fact, taking a second look at Norvig’s meta regex golf problem, some of his instances had this property! Which means the greedy algorithm gives a much better approximation ratio for certain meta regex golf problems than it does for the worst case general problem. This is one instance where knowing the proof of a theorem helps us understand how to specialize it to our interests.
Norvig’s frequency table for president meta-regex golf. The left side counts the size of each set (defined by a regex).
The linear programming approach
So we just said that you can’t possibly do better than the greedy algorithm for approximating set cover. There must be nothing left to say, job well done, right? Wrong! Our second analysis, based on linear programming, shows that instances with special features can have better approximation results.
In particular, if we’re guaranteed that each element $ x \in U$ occurs in at most $ B$ of the sets $ S_i$, then the linear programming approach will give a $ B$-approximation, i.e. a cover whose size is at worst larger than OPT by a multiplicative factor of $ B$. In the case that $ B$ is constant, we can beat our earlier greedy algorithm.
The technique is now a classic one in optimization, called LP-relaxation (LP stands for linear programming). The idea is simple. Most optimization problems can be written as integer linear programs: you have $ n$ variables $ x_1, \dots, x_n \in \{ 0, 1 \}$ and you want to maximize (or minimize) a linear function of the $ x_i$ subject to some linear constraints. The thing you’re trying to optimize is called the objective. While in general solving integer linear programs is NP-hard, we can relax the “integer” requirement to $ 0 \leq x_i \leq 1$, or something similar. The resulting linear program, called the relaxed program, can be solved efficiently: in practice usually with the simplex algorithm, or with worst-case polynomial-time guarantees using interior point or ellipsoid methods.
The output of solving the relaxed program is an assignment of real numbers for the $ x_i$ that optimizes the objective function. A key fact is that the solution to the relaxed linear program will be at least as good as the solution to the original integer program, because the optimal solution to the integer program is a valid candidate for the optimal solution to the linear program. Then the idea is that if we use some clever scheme to round the $ x_i$ to integers, we can measure how much this degrades the objective and prove that it doesn’t degrade too much when compared to the optimum of the relaxed program, which means it doesn’t degrade too much when compared to the optimum of the integer program as well.
If this sounds wishy washy and vague don’t worry, we’re about to make it super concrete for set cover.
We’ll make a binary variable $ x_i$ for each set $ S_i$ in the input, and $ x_i = 1$ if and only if we include it in our proposed cover. Then the objective function we want to minimize is $ \sum_{i=1}^n x_i$. If we call our elements $ X = \{ e_1, \dots, e_m \}$, then we need to write down a linear constraint that says each element $ e_j$ is hit by at least one set in the proposed cover. These constraints have to depend on the sets $ S_i$, but that’s not a problem. One good constraint for element $ e_j$ is

$ \displaystyle \sum_{i : e_j \in S_i} x_i \geq 1$
In words, the only way that an $ e_j$ will not be covered is if all the sets containing it have their $ x_i = 0$, and this constraint rules that out. And we need one of these constraints for each $ j$. Putting it together, the integer linear program is

$ \displaystyle \begin{aligned} \min \sum_{i=1}^n x_i & \\ \textup{s.t. } \sum_{i : e_j \in S_i} x_i &\geq 1 \textup{ for each } j = 1, \dots, m \\ x_i &\in \{ 0, 1 \} \textup{ for each } i = 1, \dots, n \end{aligned}$
The integer program for set cover.
Once we understand this formulation of set cover, the relaxation is trivial. We just replace the last constraint, $ x_i \in \{ 0, 1 \}$, with the inequalities $ 0 \leq x_i \leq 1$.
For a given candidate assignment $ x$ to the $ x_i$, call $ Z(x)$ the objective value (in this case $ \sum_i x_i$). Now we can be more concrete about the guarantees of this relaxation method. Let $ \textup{OPT}_{\textup{IP}}$ be the optimal value of the integer program and $ x_{\textup{IP}}$ a corresponding assignment to $ x_i$ achieving the optimum. Likewise let $ \textup{OPT}_{\textup{LP}}, x_{\textup{LP}}$ be the optimal things for the linear relaxation. We will prove:
Theorem: There is a deterministic algorithm that rounds $ x_{\textup{LP}}$ to integer values $ x$ so that the objective value $ Z(x) \leq B \textup{OPT}_{\textup{IP}}$, where $ B$ is the maximum number of sets that any element $ e_j$ occurs in. So this gives a $ B$-approximation of set cover.
Proof. Let $ B$ be as described in the theorem, and call $ y = x_{\textup{LP}}$ to make the indexing notation easier. The rounding algorithm is to set $ x_i = 1$ if $ y_i \geq 1/B$ and zero otherwise.
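As a concrete (and hedged) sketch of the whole pipeline, here’s the relaxation and rounding using scipy’s LP solver. I’m assuming the input sets really do cover the universe, so the LP is feasible:

import numpy as np
from scipy.optimize import linprog

def lp_round_set_cover(universe, sets):
    # universe: list of elements; sets: list of Python sets.
    n = len(sets)
    # B = the maximum number of sets any single element occurs in
    B = max(sum(1 for s in sets if e in s) for e in universe)

    # Minimize sum_i x_i subject to, for each element e:
    #   sum_{i : e in S_i} x_i >= 1, written as -sum <= -1 for linprog.
    c = np.ones(n)
    A_ub = np.array([[-1.0 if e in s else 0.0 for s in sets]
                     for e in universe])
    b_ub = -np.ones(len(universe))
    y = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1)).x

    # Round: keep set i whenever y_i >= 1/B (with a little slack
    # for floating point error).
    return [i for i in range(n) if y[i] >= 1.0 / B - 1e-9]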
To prove the theorem we need to show two things hold about this new candidate solution $ x$:
1. The choice of all $ S_i$ for which $ x_i = 1$ covers every element.
2. The number of sets chosen (i.e. $ Z(x)$) is at most $ B$ times more than $ \textup{OPT}_{\textup{LP}}$.
Since $ \textup{OPT}_{\textup{LP}} \leq \textup{OPT}_{\textup{IP}}$, if we can prove number 2 we get $ Z(x) \leq B \textup{OPT}_{\textup{LP}} \leq B \textup{OPT}_{\textup{IP}}$, which is the theorem.
So let’s prove 1. Fix any $ j$ and we’ll show that element $ e_j$ is covered by some set in the rounded solution. Call $ B_j$ the number of times element $ e_j$ occurs in the input sets. By definition $ B_j \leq B$, so $ 1/B_j \geq 1/B$. Recall $ y$ was the optimal solution to the relaxed linear program, and so it must satisfy the linear constraint for each $ e_j$: $ \sum_{i : e_j \in S_i} y_i \geq 1$. There are $ B_j$ terms in this sum and they sum to at least 1, so not all terms can be smaller than $ 1/B_j$ (otherwise they’d sum to something less than 1). In other words, some variable $ y_i$ in the sum is at least $ 1/B_j \geq 1/B$, and so the corresponding $ x_i$ is set to 1 in the rounded solution, corresponding to a set $ S_i$ that contains $ e_j$. This finishes the proof of 1.
Now let’s prove 2. We know that for each $ i$ with $ x_i = 1$, the corresponding variable satisfies $ y_i \geq 1/B$, and in particular $ 1 \leq y_i B$. Now we can simply bound the sum.

$ \displaystyle Z(x) = \sum_{i : x_i = 1} 1 \leq \sum_{i : x_i = 1} B y_i \leq B \sum_{i=1}^n y_i = B \cdot \textup{OPT}_{\textup{LP}}$
The second inequality is true because some of the $ x_i$ are zero, but we can ignore them when we upper bound and just include all the $ y_i$. This proves part 2 and the theorem.
$ \square$
I’ve got some more postscripts to this proof:
The proof works equally well when the sets are weighted, i.e. your cost for picking a set is not 1 for every set but depends on some arbitrarily given constants $ w_i \geq 0$.
We gave a deterministic algorithm rounding $ y$ to $ x$, but one can get the same result (with high probability) using a randomized algorithm. The idea is to flip a coin with bias $ y_i$ roughly $ \log(n)$ times and set $ x_i = 1$ if and only if the coin lands heads at least once. The guarantee is no better than what we proved, but for some other problems randomness can help you get approximations where we don’t know of any deterministic algorithms to get the same guarantees. I can’t think of any off the top of my head, but I’m pretty sure they’re out there.
For step 1 we showed that at least one term in the inequality for $ e_j$ would be rounded up to 1, and this guaranteed we covered all the elements. A natural question is: why not also round up at most one term of each of these inequalities? It might be that in the worst case you don’t get a better guarantee, but it would be a quick extra heuristic you could use to post-process a rounded solution.
Solving linear programs is slow. There are faster methods based on so-called “primal-dual” methods that use information about the dual of the linear program to construct a solution to the problem. Goemans and Williamson have a nice self-contained chapter on their website about this with a ton of applications.
Additional Reading
Williamson and Shmoys have a large textbook called The Design of Approximation Algorithms. One problem is that this field is like a big heap of unrelated techniques, so it’s not like the book will build up some neat theoretical foundation that works for every problem. Rather, it’s messy and there are lots of details, but there are definitely diamonds in the rough, such as the problem of (and algorithms for) coloring 3-colorable graphs with “approximately 3” colors, and the infamous unique games conjecture.
I wrote a post a while back giving conditions which, if a problem satisfies those conditions, the greedy algorithm will give a constant-factor approximation. This is much better than the worst case $ \log(n)$-approximation we saw in this post. Moreover, I also wrote a post about matroids, which is a characterization of problems where the greedy algorithm is actually optimal.
Set cover is one of the main tools that IBM’s AntiVirus software uses to detect viruses. Similarly to the regex golf problem, they find a set of strings that occur in the source code of some viruses but not (usually) in good programs. Then they look for a small set of strings that covers all the viruses, and their virus scan just has to search binaries for those strings. Hopefully the size of your set cover is really small compared to the number of viruses you want to protect against. I can’t find a reference that details this, but that is understandable because it is proprietary software.
Problem: Given a massive data stream of $ n$ values in $ \{ 1, 2, \dots, m \}$ and the guarantee that one value occurs more than $ n/2$ times in the stream, determine exactly which value does so.
Solution: (in Python)
def majority(stream):
    held = next(stream)   # the current candidate for the majority value
    counter = 1           # how many unpaired copies of `held` we've seen

    for item in stream:
        if item == held:
            counter += 1
        elif counter == 0:
            held = item   # everything so far is paired off; start fresh
            counter = 1
        else:
            counter -= 1  # pair this item with one held copy

    return held
Discussion: Let’s prove correctness. Say that $ s$ is the unknown value that occurs more than $ n/2$ times. The idea of the algorithm is that if you could pair up elements of your stream so that distinct values are paired up, and then you “kill” these pairs, then $ s$ will always survive: there aren’t enough non-$ s$ values to pair with every copy of $ s$. The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count of how many unpaired copies of that value it has seen). Then when it comes across a different element, it decrements the counter, implicitly accounting for one new pair.
Let’s analyze the complexity of the algorithm. Clearly the algorithm only uses a single pass through the data. Next, if the stream has size $ n$, then this algorithm uses $ O(\log(n) + \log(m))$ space. Indeed, if the stream entirely consists of a single value (say, a stream of all 1’s) then the counter will be $ n$ at the end, which takes $ \log(n)$ bits to store. On the other hand, if there are $ m$ possible values then storing the largest requires $ \log(m)$ bits.
Finally, the guarantee that one value occurs more than $ n/2$ times is necessary. If it doesn’t hold, the algorithm could output anything (including the most infrequent element!). Moreover, without this guarantee every algorithm that solves the problem must use at least $ \Omega(n)$ space in the worst case. In particular, say that $ m=n$, the first $ n/2$ items are all distinct, and the last $ n/2$ items are all the same one, the majority value $ s$. If you do not know $ s$ in advance, then you must keep at least one bit of information for each symbol that occurred in the first half of the stream, because any of them could be $ s$. So the guarantee allows us to bypass that barrier.
This algorithm can be generalized to detect $ k$ items with frequency above some threshold $ n/(k+1)$ using space $ O(k \log n)$. The idea is to keep $ k$ counters instead of one, adding new elements when any counter is zero. When you see an element not being tracked by your $ k$ counters (which are all positive), you decrement all the counters by 1. This is like a $ k$-to-one matching rather than a pairing.
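Here’s a sketch of that generalization (often called the Misra-Gries algorithm) in the same style as the code above. It returns candidates only; a second pass over the stream is needed to check which candidates actually exceed the threshold:

def misra_gries(stream, k):
    counters = {}  # value -> count of unpaired copies; at most k entries
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # "Kill" a (k+1)-tuple of distinct values: this item plus
            # one copy of each of the k tracked values.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)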
Greedy algorithms are by far one of the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.”
Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem?
The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history.
History
Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms.
Minimum Spanning Trees
It is often natural in an undirected graph $ G = (V,E)$ to find a connected subset of edges that touches every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet), it costs a lot of money to lay down cable, so it’s in the interest of the internet service providers to design analogous backbones into their infrastructure.
A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph, then removing any edge on that cycle preserves the property that you can get from any vertex to any other (and trees are exactly the maximal subgraphs without cycles). As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex, and it suggests the connection to linear algebra that we’ll describe later. One simple property worth remembering: in a tree there is a unique path between any two vertices.
An example of a spanning tree
When your edges $ e \in E$ have nonnegative weights $ w_e \in \mathbb{R}^{\geq 0}$, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree $ T$ is just the sum of the weights of its edges, and the definition is important enough to offset.
Definition: A minimum spanning tree $ T$ of a weighted graph $ G$ (with weights $ w_e \geq 0$ for $ e \in E$) is a spanning tree which minimizes the quantity
$ w(T) = \sum_{e \in T} w_e$
There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest $ F$ in $ G$, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning $ F$ is just all the vertices with no edges. And then at each step we add to $ F$ the edge $ e$ whose weight is smallest and also does not introduce any cycles into $ F$. If the input graph $ G$ is connected then this will always produce a minimal spanning tree.
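Here’s a sketch of Kruskal’s algorithm in Python, using a simple union-find structure to test whether adding an edge would create a cycle. The (weight, u, v) edge format is just a convention I’m choosing for the sketch:

def kruskal(vertices, edges):
    # edges: a list of (weight, u, v) tuples; assumes G is connected.
    parent = {v: v for v in vertices}  # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    tree = []
    for weight, u, v in sorted(edges, key=lambda e: e[0]):
        root_u, root_v = find(u), find(v)
        if root_u != root_v:  # adding (u, v) creates no cycle
            parent[root_u] = root_v
            tree.append((weight, u, v))
    return tree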
Theorem: Kruskal’s algorithm produces a minimal spanning tree of a connected graph.
Proof. Call $ F_t$ the forest produced at step $ t$ of the algorithm. Then $ F_0$ is the set of all vertices of $ G$ and $ F_{n-1}$ is the final forest output by Kruskal’s (as a quick exercise, prove all spanning trees on $ n$ vertices have $ n-1$ edges, so we will stop after $ n-1$ rounds). It’s clear that $ F_{n-1}$ is a tree because the algorithm guarantees no $ F_i$ will have a cycle. And any tree with $ n-1$ edges is necessarily a spanning tree, because if some vertex were left out then there would be $ n-1$ edges on a subgraph of $ n-1$ vertices, necessarily causing a cycle somewhere in that subgraph.
Now we’ll prove that $ F_{n-1}$ has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree $ T$ whose cost is strictly less than that of $ F_{n-1}$ (we can also suppose that $ T$ is minimal, but this is not necessary). Pick the minimal weight edge $ e \in T$ that is not in $ F_{n-1}$. Adding $ e$ to $ F_{n-1}$ introduces a unique cycle $ C$. This cycle has some useful properties. First, $ e$ has the highest cost of any edge on $ C$: otherwise, Kruskal’s algorithm would have chosen $ e$ before the heavier edges of $ C$ (when $ e$ was considered and rejected, its endpoints were already connected by cheaper, already-chosen edges, and those edges are exactly $ C$ minus $ e$). Second, there is another edge in $ C$ that’s not in $ T$ (because $ T$ is a tree it can’t contain the entire cycle). Call such an edge $ e’$. Now we can remove $ e’$ from $ F_{n-1}$ and add $ e$. This cannot decrease the total cost of $ F_{n-1}$, and the swap produces a tree with one more edge in common with $ T$ than before. Repeating the process we described would eventually transform $ F_{n-1}$ into $ T$ exactly while never decreasing the total cost, and this contradicts the assumption that $ T$ had strictly lower weight than $ F_{n-1}$.
$ \square$
Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost.
Columns of Matrices
Now let’s consider a different kind of problem. Say I give you a matrix like this one:

$ \displaystyle A = \begin{pmatrix} 2 & 0 & 1 & -1 & 0 \\ 0 & -4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 7 \end{pmatrix}$
In the standard interpretation of linear algebra, this matrix represents a linear function $ f$ from one vector space $ V$ to another $ W$, with the basis $ (v_1, \dots, v_5)$ of $ V$ being represented by columns and the basis $ (w_1, w_2, w_3)$ of $ W$ being represented by the rows. Column $ j$ tells you how to write $ f(v_j)$ as a linear combination of the $ w_i$, and in so doing uniquely defines $ f$.
Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of $ V$ under $ f$? By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of $ A$ can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined.
If we were to give the columns of $ A$ costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of $ A$, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set.
So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph $ G = (V,E)$, there is a natural linear map from $ E$ to $ V$, so that a linear dependence among the columns (edges) corresponds to a cycle in $ G$. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists.
The map is easy to construct: for each edge $ e = (v_i,v_j)$ you add a column with a 1 in the $ j$-th row and a $ -1$ in the $ i$-th row. Then taking a (signed) sum of a collection of edge columns gives you zero if and only if the edges contain a cycle. So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over $ \mathbb{R}$, but you can make it exactly the same kind of independence if you change your field from real numbers to $ \mathbb{Z}/2\mathbb{Z}$. We won’t do this because it will detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective.
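For concreteness, here’s a sketch of the incidence matrix construction, with vertices labeled $ 0, \dots, n-1$:

import numpy as np

def incidence_matrix(num_vertices, edges):
    # One column per edge (i, j): a -1 in row i and a 1 in row j.
    A = np.zeros((num_vertices, len(edges)))
    for col, (i, j) in enumerate(edges):
        A[i, col] = -1.0
        A[j, col] = 1.0
    return A

A set of edges contains a cycle exactly when the corresponding columns of this matrix are linearly dependent.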
So with the recognition of how similar these notions of independence are, we are ready to define matroids.
The Matroid
So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: the collection of independent sets is nonempty, and every subset of an independent set is independent. We will call any family of subsets with these properties an independence system.
Definition: Let $ X$ be a finite set. An independence system over $ X$ is a family $ \mathscr{I}$ of subsets of $ X$ with the following two properties.
1. $ \mathscr{I}$ is nonempty.
2. If $ I \in \mathscr{I}$, then so is every subset of $ I$.
This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few ways one could go, but here’s one nice property that turns out to be enough.
Definition: A matroid $ M = (X, \mathscr{I})$ is a set $ X$ and an independence system $ \mathscr{I}$ over $ X$ with the following property:
If $ A, B$ are in $ \mathscr{I}$ with $ |A| = |B| + 1$, then there is an element $ a \in A \setminus B$ such that $ B \cup \{ a \} \in \mathscr{I}$.
In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid):
For every subset $ Y \subset X$, all maximal independent sets contained in $ Y$ have equal size.
Proof. For one direction, if you have two maximal sets $ A, B \subset Y \subset X$ that are not the same size (say $ A$ is bigger), then you can take any subset of $ A$ whose size is exactly $ |B| + 1$ (it’s independent, since subsets of independent sets are independent), and use the extension property to make $ B$ larger, a contradiction. For the other direction, say that I know all maximal independent sets of any $ Y \subset X$ have the same size, and you give me $ A, B \subset X$ with $ |A| = |B| + 1$. I need to find an $ a \in A \setminus B$ that I can add to $ B$ and keep it independent. What I do is take the subset $ Y = A \cup B$. Now the sizes of $ A, B$ don’t change, but $ B$ can’t be maximal inside $ Y$ because it’s smaller than $ A$ ($ A$ might not be maximal either, but it’s still independent). And the only way to extend $ B$ inside $ Y$ is by adding something from $ A$, as desired.
$ \square$
So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense.
But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid $ M = (X, \mathscr{I})$ has a nonnegative real number $ w(x)$ associated with each $ x \in X$. And suppose we had a black-box function to determine if a given set $ S \subset X$ is independent. Then the greedy algorithm maintains a set $ B$, and at every step adds a minimum weight element that maintains the independence of $ B$. If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid.
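In code, the greedy algorithm on an independence system is just a few lines. Here’s a sketch, where is_independent is the black-box oracle from the discussion above:

def greedy_matroid(X, weight, is_independent):
    B = set()
    for x in sorted(X, key=weight):  # cheapest elements first
        if is_independent(B | {x}):
            B.add(x)
    return B

Kruskal’s algorithm from earlier is exactly this sketch, with is_independent checking that a set of edges has no cycle.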
The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously,
Theorem: Suppose that $ M = (X, \mathscr{I})$ is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm to iteratively add the cheapest element of $ X$ that maintains independence. Then the greedy algorithm produces a maximally independent set $ S$ of minimal cost for every nonnegative cost function on $ X$, if and only if $ M$ is a matroid.
It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”:
Part 1: If $ M$ is a matroid, then greedy works perfectly no matter the cost function. Part 2: If greedy works perfectly for every cost function, then $ M$ is a matroid.
Proof of Part 1.
Call the cost function $ w : X \to \mathbb{R}^{\geq 0}$, and suppose that the greedy algorithm picks elements $ B = \{ x_1, x_2, \dots, x_r \}$ (in that order). It’s easy to see that $ w(x_1) \leq w(x_2) \leq \dots \leq w(x_r)$. Now if you give me any list of $ r$ independent elements $ y_1, y_2, \dots, y_r \in X$ that has $ w(y_1) \leq \dots \leq w(y_r)$, I claim that $ w(x_i) \leq w(y_i)$ for all $ i$. This proves what we want, because if there were a basis of size $ r$ with smaller weight, sorting its elements by weight would give a list contradicting this claim.
To prove the claim, suppose to the contrary that it were false, and for some $ k$ we have $ w(x_k) > w(y_k)$. Moreover, pick the smallest $ k$ for which this is true. Note $ k > 1$: greedy’s first pick $ x_1$ is a cheapest element forming an independent singleton, so $ w(x_1) \leq w(y_1)$. So we can look at the special sets $ S = \{ x_1, \dots, x_{k-1} \}$ and $ T = \{ y_1, \dots, y_k \}$. Now $ |T| = |S|+1$, so by the matroid property there is some $ j$ between $ 1$ and $ k$ so that $ S \cup \{ y_j \}$ is an independent set (and $ y_j$ is not in $ S$). But then $ w(y_j) \leq w(y_k) < w(x_k)$, and so the greedy algorithm would have picked $ y_j$ before it picks $ x_k$ (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim.
Proof of Part 2.
We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let $ A,B$ be independent sets with $ |A| = |B| + 1$, but for every $ a \in A \setminus B$ adding $ a$ to $ B$ never gives you an independent set.
Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of $ A \cap B$ a tiny weight we’ll call $ w_1$. For elements of $ B \setminus A$ we’ll use $ w_2$, and for $ A \setminus B$ we’ll use $ w_3$, with $ w_4$ for everything else. In a more compact notation:

$ \displaystyle w(x) = \begin{cases} w_1 & \textup{if } x \in A \cap B \\ w_2 & \textup{if } x \in B \setminus A \\ w_3 & \textup{if } x \in A \setminus B \\ w_4 & \textup{otherwise} \end{cases}$
We need two things for this weight function to screw up the greedy algorithm. The first is that $ w_1 < w_2 < w_3 < w_4$, so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of $ A \cap B$, and then all of $ B \setminus A$, and by assumption it won’t be able to pick anything from $ A \setminus B$; but since $ B$ is assumed to be non-maximal, greedy has to pick at least one element from $ X \setminus (A \cup B)$ and pay $ w_4$ for it.
So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains $ A$ (and we know that there has to be some maximally independent set containing $ A$). In other words, if we call $ m$ the size of a maximally independent set, we want

$ \displaystyle |A \cap B| w_1 + |B \setminus A| w_2 + (m - |B|) w_4 > |A \cap B| w_1 + |A \setminus B| w_3 + (m - |A|) w_4$
This can be rearranged (using the fact that $ |A| = |B|+1$) to
$ \displaystyle w_4 > |A \setminus B| w_3 - |B \setminus A| w_2$
The point here is that the greedy picks too many elements of weight $ w_4$, since if we were to start by taking all of $ A$ (instead of all of $ B$), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof.
So we just need to make $ w_4$ large enough to make this inequality hold, while still maintaining $ w_2 < w_3$. There are probably many ways to do this, and here’s one. Pick some $ 0 < \varepsilon < 1$, and set

$ \displaystyle w_1 = 0, \quad w_2 = \frac{\varepsilon}{3|B \setminus A|}, \quad w_3 = \frac{\varepsilon}{|A \setminus B|}, \quad w_4 = 2\varepsilon$
It’s trivial that $ w_1 < w_2$ and $ w_3 < w_4$. For the rest we need some observations. First, the fact that $ |A \setminus B| = |B \setminus A| + 1$ implies that $ w_2 < w_3$. Second, both $ A \setminus B$ and $ B \setminus A$ are nonempty, since otherwise the second property of independence systems would contradict our assumption that augmenting $ B$ with elements of $ A$ breaks independence. Using this, we can plug the weights into the required inequality and verify it directly:

$ \displaystyle |A \setminus B| w_3 - |B \setminus A| w_2 = \varepsilon - \frac{\varepsilon}{3} = \frac{2\varepsilon}{3} < 2\varepsilon = w_4$

$ \square$
As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number $ N$ that is larger than any of the original weights. So these two problems really are the same thing.
This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for.
But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round.
Another, perhaps subtler, issue is that the size of the ground set $ X$ might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with $ X$ implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element.
There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of $ X$ to find the minimum-cost guy to add. What if you just have to have faster runtime than $ O(n^2)$? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time.