When Greedy Algorithms are Perfect: the Matroid

Greedy algorithms are among the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.”

Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem?

The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history.

History

Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms.

Minimum Spanning Trees

It is often natural in an undirected graph G = (V,E) to find a connected subset of edges that touch every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet), since it costs a lot of money to lay down cable, it’s in the interest of internet service providers to design analogous backbones into their infrastructure.

A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph then removing any edge on that cycle doesn’t break connectivity (and trees are exactly the maximal subgraphs without cycles). As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex, and it suggests the connection to linear algebra that we’ll describe later. A simple and useful property of a tree is that there is a unique path between any two vertices in it.

An example of a spanning tree

When your edges e \in E have nonnegative weights w_e \in \mathbb{R}^{\geq 0}, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree T is just the sum of the weights of its edges, and the definition is important enough to set apart.

Definition: A minimum spanning tree T of a weighted graph G (with weights w_e \geq 0 for e \in E) is a spanning tree which minimizes the quantity

w(T) = \sum_{e \in T} w_e

There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest F in G, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning F is just all the vertices with no edges. And then at each step we add to F the edge e whose weight is smallest and also does not introduce any cycles into F. If the input graph G is connected then this will always produce a minimal spanning tree.
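To make this concrete, here’s a minimal Haskell sketch of Kruskal’s algorithm. The types and names are our own choices, and a serious implementation would use a union-find structure with path compression instead of relabeling a whole Map at each merge; this is just the algorithm as stated.

import Data.List (sortOn)
import qualified Data.Map as Map

type Vertex = Int
type Edge = (Vertex, Vertex, Double)  -- (endpoint, endpoint, weight)

-- Maintain a map from each vertex to a component label; an edge is safe
-- to add exactly when its endpoints lie in different components.
kruskal :: [Vertex] -> [Edge] -> [Edge]
kruskal vertices edges = go (Map.fromList [(v, v) | v <- vertices])
                            (sortOn (\(_, _, w) -> w) edges)
  where
    go _ [] = []
    go comp ((u, v, w) : rest)
      | cu == cv  = go comp rest               -- would create a cycle; skip
      | otherwise = (u, v, w) : go comp' rest  -- add the edge, merge components
      where
        cu = comp Map.! u
        cv = comp Map.! v
        comp' = Map.map (\c -> if c == cv then cu else c) comp

If the input graph is connected this returns the n-1 edges of a minimal spanning tree; otherwise it returns a minimal spanning forest.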

Theorem: Kruskal’s algorithm produces a minimal spanning tree of a connected graph.

Proof. Let n = |V| and call F_t the forest produced at step t of the algorithm. Then F_0 is the set of all vertices of G with no edges, and F_{n-1} is the final forest output by Kruskal’s (as a quick exercise, prove that all spanning trees on n vertices have n-1 edges, so we will stop after n-1 rounds). It’s clear that F_{n-1} is a tree because the algorithm guarantees no F_i will have a cycle. And any cycle-free subgraph with n-1 edges is necessarily a spanning tree, because if some vertex were left out then there would be n-1 edges on a subgraph with at most n-1 vertices, necessarily causing a cycle somewhere in that subgraph.

Now we’ll prove that F_{n-1} has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree T whose cost is strictly less than that of F_{n-1} (we can also suppose that T is minimal, but this is not necessary). Pick the minimal weight edge e \in T that is not in F_{n-1}. Adding e to F_{n-1} introduces a unique cycle C in F_{n-1} \cup \{ e \}. This cycle has some strange properties. First, e has the highest cost of any edge on C. For if some edge of C were heavier, then at the time Kruskal’s algorithm considered e (before that heavier edge), adding e could not have created a cycle (any cycle it created would have to be C itself, and C’s heaviest edge hadn’t been added yet), so the algorithm would have chosen e. Second, there is another edge in C that’s not in T (because T is a tree it can’t contain the entire cycle). Call such an edge e'. Now we can remove e' from F_{n-1} and add e. This can only increase the total cost of F_{n-1}, but this transformation produces a tree with one more edge in common with T than before. Repeating the process we described would eventually transform F_{n-1} into T exactly, while only increasing the total cost along the way. This contradicts our assumption that T had strictly lower weight than F_{n-1}.

\square

Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost.

Columns of Matrices

Now let’s consider a different kind of problem. Say I give you a matrix like this one:

\displaystyle A = \begin{pmatrix} 2 & 0 & 1 & -1 & 0 \\ 0 & -4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 7 \end{pmatrix}

In the standard interpretation of linear algebra, this matrix represents a linear function f from one vector space V to another W, with the basis (v_1, \dots, v_5) of V being represented by columns and the basis (w_1, w_2, w_3) of W being represented by the rows. Column j tells you how to write f(v_j) as a linear combination of the w_i, and in so doing uniquely defines f.

Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of V under f? By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of A can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined.

If we were to give the columns of A costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of A, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set.
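Greedy will handle this problem too, as soon as it has a way to test independence. Here’s a small sketch of such a test over the rationals, via Gaussian elimination; it’s our own helper, just to show that the required “black box” is nothing exotic.

-- Test whether a list of vectors (all the same length) is linearly
-- independent, by eliminating each vector's pivot coordinate from the rest.
independent :: [[Rational]] -> Bool
independent [] = True
independent (v : vs)
  | all (== 0) v = False   -- a vector was eliminated to zero: dependent
  | otherwise    = independent (map reduce vs)
  where
    pivot = length (takeWhile (== 0) v)   -- first nonzero coordinate of v
    reduce w = let c = (w !! pivot) / (v !! pivot)
               in zipWith (\wi vi -> wi - c * vi) w v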

So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph G = (V,E), there is a natural linear map from E to V, so that a linear dependence among the columns (edges) corresponds to a cycle in G. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists.

The map is easy to construct: for each edge e = (v_i,v_j) you add a column with a 1 in the j-th row and a -1 in the i-th row. Then a collection of edge columns has a linear dependence if and only if the corresponding edges contain a cycle (traverse the cycle and add the columns, flipping the sign whenever you walk an edge backwards; everything cancels). So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over \mathbb{R}, but you can make it exactly the same kind of independence if you change your field from real numbers to \mathbb{Z}/2\mathbb{Z}. We won’t do this because it will detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective.
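For instance, take the triangle graph on vertices v_1, v_2, v_3 with edges e_1 = (v_1, v_2), e_2 = (v_2, v_3), e_3 = (v_3, v_1). Its incidence matrix is

\displaystyle \begin{pmatrix} -1 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}

and the three columns sum to zero, which is exactly the linear dependence corresponding to the cycle e_1, e_2, e_3.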

So with the recognition of how similar these notions of independence are, we are ready to define matroids.

The Matroid

So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: the empty set is independent, and every subset of an independent set is independent. We will call any family of subsets with these properties an independence system.

Definition: Let X be a finite set. An independence system over X is a family \mathscr{I} of subsets of X with the following two properties.

  1. \mathscr{I} is nonempty.
  2. If I \in \mathscr{I}, then so is every subset of I.

This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few options, but here’s one nice property that turns out to be enough.

Definition: A matroid M = (X, \mathscr{I}) is a set X and an independence system \mathscr{I} over X with the following property:

If A, B are in \mathscr{I} with |A| = |B| + 1, then there is an element a \in A \setminus B such that B \cup \{ a \} \in \mathscr{I}.

In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid):

For every subset Y \subset X, all maximal independent sets contained in Y have equal size.

Proof. For one direction, if you have two maximal independent sets A, B \subset Y \subset X that are not the same size (say A is bigger), then you can take any subset of A whose size is exactly |B| + 1 (it is independent, being a subset of an independent set), and use the extension property to enlarge B inside Y, contradicting B’s maximality. For the other direction, say that I know all maximal independent sets of any Y \subset X have the same size, and you give me independent sets A, B \subset X with |A| = |B| + 1. I need to find an a \in A \setminus B that I can add to B and keep it independent. What I do is take the subset Y = A \cup B. Now B can’t be maximal inside Y, because maximal independent sets in Y all have the same size and A witnesses that this size is at least |A| > |B| (A might not be maximal itself, but it extends to something at least as big inside Y). And the only way to extend B inside Y is by adding something from A, as desired.

\square

So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense.

But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid M = (X, \mathscr{I}) has a nonnegative real number w(x) associated with each x \in X. And suppose we had a black-box function to determine if a given set S \subset X is independent. Then the greedy algorithm maintains a set B, and at every step adds a minimum weight element that maintains the independence of B. If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid.
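The algorithm is short enough to state exactly; here’s a Haskell sketch, where indep stands in for the black-box independence oracle and the names are our own. Sorting once by weight and scanning is equivalent to repeatedly taking the cheapest element that keeps the set independent, since an element that breaks independence now will break it for every superset too.

import Data.List (sortOn)

matroidGreedy :: ([a] -> Bool) -> (a -> Double) -> [a] -> [a]
matroidGreedy indep weight ground = go [] (sortOn weight ground)
  where
    go b [] = b
    go b (x : xs)
      | indep (x : b) = go (x : b) xs   -- x keeps B independent: take it
      | otherwise     = go b xs         -- x can never become useful: drop it

Taking indep to be the “no cycles” test on edge sets recovers Kruskal’s algorithm from earlier.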

The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously,

Theorem: Suppose that M = (X, \mathscr{I}) is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm as the one that iteratively adds the cheapest element of X that maintains independence. Then the greedy algorithm produces a maximally independent set S of minimal cost for every nonnegative cost function on X, if and only if M is a matroid.

It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”:

Part 1: If M is a matroid, then greedy works perfectly no matter the cost function.
Part 2: If greedy works perfectly for every cost function, then M is a matroid.

Proof of Part 1.

Call the cost function w : X \to \mathbb{R}^{\geq 0}, and suppose that the greedy algorithm picks elements B = \{ x_1, x_2, \dots, x_r \} (in that order). It’s easy to see that w(x_1) \leq w(x_2) \leq \dots \leq w(x_r). Now if you give me any list of r independent elements y_1, y_2, \dots, y_r \in X that has w(y_1) \leq \dots \leq w(y_r), I claim that w(x_i) \leq w(y_i) for all i. This proves what we want, because if there were a basis of size r with smaller weight, sorting its elements by weight would give a list contradicting this claim.

To prove the claim, suppose to the contrary that it were false, and for some k we have w(x_k) > w(y_k). Moreover, pick the smallest k for which this is true. Note k > 1, and so we can look at the special sets S = \{ x_1, \dots, x_{k-1} \} and T = \{ y_1, \dots, y_k \}. Now |T| = |S|+1, so by the matroid property there is some j between 1 and k so that S \cup \{ y_j \} is an independent set (and y_j is not in S). But then w(y_j) \leq w(y_k) < w(x_k), and so the greedy algorithm would have picked y_j before it picks x_k (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim.

Proof of Part 2.

We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let A,B be independent sets with |A| = |B| + 1, but for every a \in A \setminus B adding a to B never gives you an independent set.

Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of A \cap B a tiny weight we’ll call w_1. For elements of B - A we’ll use w_2, and for A - B we’ll use w_3, with w_4 for everything else. In a more compact notation:

\displaystyle w(x) = \begin{cases} w_1 & \textup{if } x \in A \cap B \\ w_2 & \textup{if } x \in B - A \\ w_3 & \textup{if } x \in A - B \\ w_4 & \textup{otherwise} \end{cases}

We need two things for this weight function to screw up the greedy algorithm. The first is that w_1 < w_2 < w_3 < w_4, so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of A \cap B, and then all of B - A, and by assumption it won’t be able to pick anything from A - B. So as long as B is not itself maximal, greedy has to pick at least one element from X - (A \cup B) and pay w_4 for it. (If B happens to be maximal, a similar weighting with the roles of A and B adjusted, making the elements of A the cheap ones, produces the same kind of failure; we’ll stick to the non-maximal case here.)

So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains A (and we know that there has to be some maximal independent set containing A). In other words, if we call m the size of a maximally independent set, we want

\displaystyle |A \cap B| w_1 + |B-A|w_2 + (m - |B|)w_4 > |A \cap B|w_1 + |A-B|w_3 + (m-|A|)w_4

This can be rearranged (using the fact that |A| = |B|+1) to

\displaystyle w_4 > |A-B|w_3 - |B-A|w_2

The point here is that the greedy picks too many elements of weight w_4, since if we were to start by taking all of A (instead of all of B), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof.

So we just need to make w_4 large enough to make this inequality hold, while still maintaining w_2 < w_3. There are probably many ways to do this, and here’s one. Pick some 0 < \varepsilon < 1, and set

\displaystyle w_1 = 0, \quad w_2 = \frac{\varepsilon}{|B-A|}, \quad w_3 = \frac{1 + \varepsilon}{|A-B|}, \quad w_4 = 2

It’s trivial that w_1 < w_2 and w_3 < w_4. For the rest we need some observations. First, the fact that |A-B| = |B-A| + 1 (together with \varepsilon < 1) implies that w_2 < w_3. Second, both A - B and B - A are nonempty: the former because |A| = |B| + 1, and the latter because otherwise B \subset A, and then the second property of independence systems would make B \cup \{ a \} independent for any a \in A - B, contradicting our assumption that augmenting B with elements of A breaks independence. Using this, we can divide by these quantities to get

\displaystyle w_4 = 2 > 1 = \frac{|A-B|(1 + \varepsilon)}{|A-B|} - \frac{|B-A|\varepsilon}{|B-A|}

This proves the claim and finishes the proof.

\square

As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number N that is larger than any of the original weights. So these two problems really are the same thing.

This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for.

But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round.

Another, perhaps subtler, issue is that the size of the ground set X might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with X implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element.

There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of X to find the minimum-cost guy to add. What if you need a runtime faster than O(n^2)? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time.

Until then!


Parameterizing the Vertex Cover Problem

I’m presenting a paper later this week at the Mathematical Foundations of Computer Science 2014 conference in Budapest, Hungary. This conference is an interesting mix of logic and algorithms that aims to bring together researchers from these areas to discuss their work. And right away the first session on the first day focused on an area I know is important but have little experience with: fixed parameter complexity. From what I understand it’s not that popular of a topic at major theory conferences in the US (there appears to be only one paper on it at this year’s FOCS conference), but the basic ideas are worth knowing.

The basic idea is pretty simple: some hard computational problems become easier (read, polynomial-time solvable) if you fix some parameters involved to constants. Preferably small constants. For example, finding cliques of size k in a graph is NP-hard if k is a parameter, but if you fix k to a constant then you can check all possible subsets of size k in O(n^k) time. This is kind of a silly example because there are much faster ways to find triangles than checking all O(n^3) subsets of vertices, but part of the point of fixed-parameter complexity is to find the fastest algorithms in these fixed-parameter settings. Since in practice parameters are often small [citation needed], this analysis can provide useful practical algorithmic alternatives to heuristics or approximate solutions.

One important tool in the theory of fixed-parameter tractability is the idea of a kernel. I think it’s an unfortunate term because it’s massively overloaded in mathematics, but the idea is to take a problem instance with the parameter k, and carve out “easy” regions of the instance (often reducing k as you go) until the runtime of the trivial brute force algorithm only depends on k and not on the size of the input. The point is that the solution you get on this “carved out” instance is either the same as the original, or can be extended back to the original with little extra work. There is a more formal definition we’ll state, but there is a canonical example that gives a great illustration.

Consider the vertex cover problem. That is, you give me a graph G = (V,E) and a number k and I have to determine if there is a subset of \leq k vertices of G that touch all of the edges in E. As with k-clique, one can naively check all subsets of size k, but that takes O(n^k) time, with k in the exponent of n. The kernel approach we’ll show now is much smarter, and it’s what actually makes the problem fixed-parameter tractable.

What you do is the following. As long as your graph has a vertex of degree > k, you remove it and reduce k by 1. This is because a vertex of degree > k will always be chosen for a vertex cover. If it’s not, then you need to include all of its neighbors to cover its edges, but there are > k neighbors and your vertex cover is constrained by size k. And so you can automatically put this high-degree vertex in your cover, and use induction on the smaller graph.

Once you can’t remove any more vertices there are two cases. In the case that there are more than k^2 edges, you output that there is no vertex cover of size at most k. Indeed, if you only get k vertices in your cover and you removed all vertices of degree > k, then each can cover at most k edges, giving a total of at most k^2. Otherwise, if there are at most k^2 edges, then you can remove all the isolated vertices and show that there are only \leq 2k^2 vertices left. This is because each edge touches only two vertices, so in the worst case they’re all distinct. This smaller subgraph is called a kernel of the vertex cover problem, and the fact that its size depends only on k is the key. So you can look at all 2^{2k^2} = O(1) subsets (a constant as far as n is concerned) to determine if there’s a cover of the size you want. If you find a cover of the kernel, you add back in all the large-degree vertices you deleted and you’re done.
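To make the procedure concrete, here’s a minimal Haskell sketch of this kernelization, assuming a simple graph given as a list of edges over Int vertices (the types and names are ours). It either certifies “no cover of size k” or returns the kernel’s edges, the reduced budget, and the forced high-degree vertices to add back to any cover you find in the kernel.

import qualified Data.Map as Map

type Edge = (Int, Int)

kernelize :: Int -> [Edge] -> Maybe ([Edge], Int, [Int])
kernelize k edges
  | k < 0 = Nothing
  | Just v <- highDegree =   -- v has degree > k, so it must be in the cover
      fmap (\(es, k', vs) -> (es, k', v : vs)) $
        kernelize (k - 1) [e | e@(a, b) <- edges, a /= v, b /= v]
  | length edges > k * k = Nothing   -- too many edges for k vertices to cover
  | otherwise = Just (edges, k, [])
  where
    degrees = Map.fromListWith (+) (concat [[(a, 1 :: Int), (b, 1)] | (a, b) <- edges])
    highDegree = case Map.keys (Map.filter (> k) degrees) of
                   []      -> Nothing
                   (v : _) -> Just v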

Now, even for small k this is a pretty bad algorithm (k=5 gives 2^{50} subsets to inspect), but with more detailed analysis you can do significantly better. In particular, the best known bound reduces vertex cover to a kernel of size 2k - c \log(k) vertices for any constant c you specify. Getting \log(k) vertices is known to imply P = NP, and with more detailed complexity assumptions it’s even hard to get a graph with fewer than O(k^{2-\varepsilon}) edges for any \varepsilon > 0. These are all relatively recent results whose associated papers I have not read.

Even with these hardness results, there are two reasons why this kind of analysis is useful. The first is that it gives us a clearer picture of the complexity of these problems. In particular, the reduction we showed for vertex cover gives an O(2^{2k^2} + n + m)-time algorithm, which you can then compare directly to the trivial O(n^k) time brute force algorithm and measure the difference. Indeed, if 2k < \log_2(n) then 2^{2k^2} < n^k and the kernelized approach is faster.

The second reason is that the kernel approach usually results in simple and quick checks for negative answers to a problem. In particular, if you want to check for vertex covers of size k in a graph in the real world, this analysis shows that the first thing you should do is run the reduction and check whether more than k^2 edges remain. If so, you can immediately give a “no” answer. So useful kernels can provide insight into the structure of a problem that can be turned into heuristic tools even when it doesn’t help you solve the problem exactly.

So now let’s just see the prevailing definition of a “kernelization” of a problem. This comes from the text of Downey and Fellows.

Definition: A kernelization of a parameterized problem L (formally, a language where each string x is paired with a positive integer k) is a \textup{poly}(|x|, k)-time algorithm that converts instances (x,k) into instances (x', k') with the following three properties.

  • (x,k) is a yes instance of L if and only if (x', k') is.
  • |x'| \leq f(k) for some computable function f: \mathbb{N} \to \mathbb{N}.
  • k' \leq g(k) for some computable function g: \mathbb{N} \to \mathbb{N}.

The output (x', k') is called a kernel, and the problem is said to admit a polynomial kernel if f(k) = O(k^c) for some constant c.

So we showed that vertex cover admits a polynomial kernel (in fact, a quadratic one).

Now the nice theorem is that a problem is fixed-parameter tractable if and only if it admits a kernel (of some computable size, not necessarily a polynomial one). Finding a kernel is conceptually easier because, like in vertex cover, it allows you to introduce additional assumptions on the structure of the instances you’re working with. But more importantly from a theoretical standpoint, measuring the size and complexity of kernels for NP-hard problems gives us a way to discriminate among problems within NP. That and the chance to get some more practical tools for NP-hard problems makes parameterized complexity more interesting than it sounds at first.

Until next time!

When Greedy Algorithms are Good Enough: Submodularity and the (1 – 1/e)-Approximation

Greedy algorithms are among the simplest and most intuitive algorithms known to humans. Their name essentially gives their description: do the thing that looks best right now, and repeat until nothing looks good anymore or you’re forced to stop. Some of the best situations in computer science are also when greedy algorithms are optimal or near-optimal. There is a beautiful theory of this situation, known as the theory of matroids. We haven’t covered matroids on this blog (at some point we will), but in this post we will focus on the next best thing: when the greedy algorithm guarantees a reasonably good approximation to the optimal solution.

This situation isn’t hard to formalize, and we’ll make it as abstract as possible. Say you have a set of objects X, and you’re looking to find the “best” subset S \subset X. Here “best” is just measured by a fixed (known, efficiently computable) objective function f : 2^X \to \mathbb{R}. That is, f accepts as input subsets of X and outputs numbers so that better subsets have larger numbers. Then the goal is to find a subset S maximizing f(S).

In this generality the problem is clearly impossible. You’d have to check all subsets to be sure you didn’t miss the best one. So what conditions do we need on either X or f or both that makes this problem tractable? There are plenty you could try, but one very rich property is submodularity.

The Submodularity Condition

I think the simplest way to explain submodularity is in terms of coverage. Say you’re starting a new radio show and you have to choose which radio stations to broadcast from to reach the largest number of listeners. For simplicity say each radio station has one tower it broadcasts from, and you have a good estimate of the number of listeners you would reach if you broadcast from a given tower. For more simplicity, say it costs the same to broadcast from each tower, and your budget restricts you to a maximum of ten stations to broadcast from. So the question is: how do you pick towers to maximize your overall reach?

The hidden condition here is that some towers overlap in which listeners they reach. So if you broadcast from two towers in the same city, a listener who has access to both will just pick one or the other. In other words, there’s a diminished benefit to picking two overlapping towers if you already have chosen one.

In our version of the problem, picking both of these towers has some small amount of “overkill.”

This “diminishing returns” condition is a general idea you can impose on any function that takes in subsets of a given set and produces numbers. If X is a set then for what seems like a strange reason we denote the set of all subsets of X by 2^X. So we can state this condition more formally,

Definition: Let X be a finite set. A function f: 2^X \to \mathbb{R} is called submodular if for all subsets S \subset T \subset X and all x \in X \setminus T,

\displaystyle f(S \cup \{ x \}) - f(S) \geq f(T \cup \{ x \}) - f(T)

In other words, if f measures “benefit,” then the marginal benefit of adding x to S is at least as high as the marginal benefit of adding it to T. Since S \subset T and x are all arbitrary, this is as general as one could possibly make it.
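For a quick sanity check, here’s a toy instance of the tower problem with numbers of our own choosing. Let three towers have coverage sets C_1 = \{1,2\}, C_2 = \{2,3\}, C_3 = \{3,4\}, and let f(S) = |\bigcup_{i \in S} C_i| count covered listeners. Take S = \{1\}, T = \{1,2\}, and x = 3. The marginal benefit of x over S is f(\{1,3\}) - f(\{1\}) = 4 - 2 = 2, while over T it is f(\{1,2,3\}) - f(\{1,2\}) = 4 - 3 = 1. The bigger set already covers listener 3, so the third tower helps it less, which is exactly the inequality in the definition.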

Before we start doing things with submodular functions, let’s explore some basic properties. The first is an equivalent definition of submodularity.

Proposition: f is submodular if and only if for all A, B \subset X, it holds that

\displaystyle f(A \cap B) + f(A \cup B) \leq f(A) + f(B).

Proof. If we assume f has the condition from this proposition, then we can set A=T, B=S \cup \{ x \}, and the formula just works out: A \cap B = S (since x \notin T) and A \cup B = T \cup \{ x \}. Conversely, if we have the condition from the definition, then using the fact that A \cap B \subset B we can inductively apply the inequality to each element of A \setminus B, adding them one at a time to both B and A \cap B, to get

\displaystyle f(A \cup B) - f(B) \leq f(A) - f(A \cap B)

\square

Next, we can tweak and combine submodular functions to get more submodular functions. In particular, non-negative linear combinations of sub-modular functions are submodular. In other words, if f_1, \dots, f_k are submodular on the same set X, and \alpha_1, \dots, \alpha_k are all non-negative reals, then \alpha_1 f_1 + \dots + \alpha_k f_k is also a submodular function on X. It’s an easy exercise in applying the definition to see why this is true. This is important because when we’re designing objectives to maximize, we can design them by making some simple submodular pieces, and then picking an appropriate combination of those pieces.

The second property we need to impose on a submodular function is monotonicity. That is, as your sets get more elements added to them, their value under f only goes up. In other words, f is monotone if S \subset T implies f(S) \leq f(T). An interesting property of functions that are both submodular and monotone is that the truncation of such a function is also submodular and monotone. In other words, \textup{min}(f(S), c) is still submodular when f is monotone submodular and c is a constant.

Submodularity and Monotonicity Give 1 – 1/e

The wonderful thing about submodular functions is that we have a lot of great algorithmic guarantees for working with them. We’ll prove right now that the coverage problem (while it might be hard to solve in general) can be approximated pretty well by the greedy algorithm.

Here’s the algorithmic setup. I give you a finite set X and an efficient black-box to evaluate f(S) for any subset S \subset X you want. I promise you that f is monotone and submodular. Now I give you an integer k between 1 and the size of X, and your task is to quickly find a set S of size k for which f(S) is maximal among all subsets of size k. That is, you design an algorithm that will work for any k, X, f and runs in polynomial time in the sizes of X, k.

In general this problem is NP-hard, meaning you’re not going to find a solution that works in the worst case (if you do, don’t call me; just claim your million dollar prize). So how well can we approximate the optimal value for f(S) by a different set of size k? The beauty is that, if your function is monotone and submodular, you can guarantee to get within 63% of the optimum. The hope (and reality) is that in practice it will often perform much better, but still this is pretty good! More formally,

Theorem: Let f be a monotone, submodular, non-negative function on X. The greedy algorithm, which starts with S as the empty set and at every step picks an element x which maximizes the marginal benefit f(S \cup \{ x \}) - f(S), provides a set S that achieves a (1- 1/e)-approximation of the optimum.

We’ll prove this in just a little bit more generality, and the generality is quite useful. If we call S_1, S_2, \dots, S_l the sets chosen by the greedy algorithm (where now we might run the greedy algorithm for l > k steps), then for all l, k, we have

\displaystyle f(S_l) \geq \left ( 1 - e^{-l/k} \right ) \max_{T: |T| \leq k} f(T)

This allows us to run the algorithm for more than k steps to get a better approximation by sets of larger size, and quantify how much better the guarantee on that approximation would be. It’s like an algorithmic way of hedging your risk. So let’s prove it.

Proof. Let’s set up some notation first. Fix your l and k, call S_i the set chosen by the greedy algorithm at step i, and call S^* the optimal subset of size k. Further call \textup{OPT} the value of the best set f(S^*). Call x_1^*, \dots, x_k^* the elements of S^* (the order is irrelevant). Now for every i < l monotonicity gives us f(S^*) \leq f(S^* \cup S_i). We can unravel this into a sum of marginal gains of adding single elements. The first step is

\displaystyle f(S^* \cup S_i) = f(S^* \cup S_i) - f(\{ x_1^*, \dots, x_{k-1}^* \} \cup S_i) + f(\{ x_1^*, \dots, x_{k-1}^* \} \cup S_i)

The second step removes x_{k-1}^* from the last term, the third removes x_{k-2}^*, and so on until we have removed all of S^* and get this sum

\displaystyle f(S^* \cup S_i) = f(S_i) + \sum_{j=1}^k \left ( f(S_i \cup \{ x_1^*, \dots, x_j^* \}) - f(S_i \cup \{ x_1^*, \dots, x_{j-1}^* \} ) \right )

Now, applying submodularity, we can change all of these marginal benefits of “adding one more S^* element to S_i already with some S^* stuff” to “adding one more S^* element to just S_i.” In symbols, the equation above is at most

\displaystyle f(S_i) + \sum_{x \in S^*} \left ( f(S_i \cup \{ x \}) - f(S_i) \right )

and because S_{i+1} is greedily chosen to maximize the benefit of adding a single element, the above is at most

\displaystyle f(S_i) + \sum_{x \in S^*} \left ( f(S_{i+1}) - f(S_i) \right ) = f(S_i) + k(f(S_{i+1}) - f(S_i))

Chaining all of these together, we have f(S^*) - f(S_i) \leq k(f(S_{i+1}) - f(S_i)). If we call a_{i} = f(S^*) - f(S_i), then this inequality can be rewritten as a_{i+1} \leq (1 - 1/k) a_{i}. Now by induction we can relate a_l \leq (1 - 1/k)^l a_0. Now use the fact that a_0 \leq f(S^*) and the common inequality 1-x \leq e^{-x} to get

\displaystyle a_l = f(S^*) - f(S_l) \leq e^{-l/k} f(S^*)

And rearranging gives f(S_l) \geq (1 - e^{-l/k}) f(S^*).

\square

Setting l=k gives the approximation bound we promised. But note that allowing the greedy algorithm to run longer can give much stronger guarantees, though it requires you to sacrifice the cardinality constraint. 1 - 1/e is about 63%, but doubling the size of S gives about an 86% approximation guarantee. This is great for people in the real world, because you can quantify the gains you’d get by relaxing the constraints imposed on you (which are rarely set in stone).

So this is really great! We have quantifiable guarantees on a stupidly simple algorithm, and the setting is super general. And so if you have your problem and you manage to prove your function is submodular (this is often the hardest part), then you are likely to get this nice guarantee.
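Since the algorithm in the theorem is so short, here it is as a Haskell sketch, with a coverage objective in the style of the radio tower example. The function names and the toy coverage setup are our own; f is the promised black box.

import Data.List (maximumBy)
import Data.Ord (comparing)
import qualified Data.Set as Set

-- Greedily pick k elements, each maximizing the marginal gain in f.
greedyMax :: Ord a => (Set.Set a -> Double) -> [a] -> Int -> Set.Set a
greedyMax f ground k = go Set.empty k
  where
    go s 0 = s
    go s steps =
      case filter (`Set.notMember` s) ground of
        []         -> s
        candidates ->
          let best = maximumBy (comparing (\x -> f (Set.insert x s))) candidates
          in go (Set.insert best s) (steps - 1)

-- Coverage: f(S) counts listeners reached by at least one chosen tower.
-- This objective is monotone and submodular.
coverage :: [(Int, Set.Set Int)] -> Set.Set Int -> Double
coverage towers chosen = fromIntegral $ Set.size $
  Set.unions [listeners | (t, listeners) <- towers, t `Set.member` chosen]

A call like greedyMax (coverage towers) (map fst towers) 10 then picks ten towers with the (1 - 1/e) guarantee.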

Extensions and Variations

This result on monotone submodular functions is just one part of a vast literature on finding approximation algorithms for submodular functions in various settings. In closing this post we’ll survey some of the highlights and provide references.

What we did in this post was maximize a monotone submodular function subject to a cardinality constraint |S| \leq k. There are three basic variations we could do: we could drop constraints and see whether we can still get guarantees, we could look at minimization instead of maximization, and we could modify the kinds of constraints we impose on the solution.

There are a ton of different kinds of constraints, and we’ll discuss two. The first is where you need to get a certain value f(S) \geq q, and you want to find the smallest set that achieves this value. Laurence Wolsey (who proved a lot of these theorems) showed in 1982 that a slight variant of the greedy algorithm can achieve a set whose size is a multiplicative factor of 1 + \log (\max_x f(\{ x \})) worse than the optimum.

The second kind of constraint is a generalization of a cardinality constraint called a knapsack constraint. This means that each item x \in X has a cost, and you have a finite budget with which to spend on elements you add to S. One might expect this natural extension of the greedy algorithm to work: pick the element which maximizes the ratio of increasing the value of f to the cost (within your available budget). Unfortunately this algorithm can perform arbitrarily poorly, but there are two fun caveats. The first is that if you do both this augmented greedy algorithm and the greedy algorithm that ignores costs, then at least one of these can’t do too poorly. Specifically, one of them has to get at least a 30% approximation. This was shown by Leskovec et al in 2007. The second is that if you’re willing to spend more time in your greedy step by choosing the best subset of size 3, then you can get back to the 1-1/e approximation. This was shown by Sviridenko in 2004.

Now we could try dropping the monotonicity constraint. In this setting cardinality constraints are also superfluous, because it could be that the very large sets have low values. Now it turns out that if f has no other restrictions (in particular, if it’s allowed to be negative), then even telling whether there’s a set S with f(S) > 0 is NP-hard, but the optimum could be arbitrarily large and positive when it exists. But if you require that f is non-negative, then you can get a 1/3-approximation, if you’re willing to add randomness you can get 2/5 in expectation, and with more subtle constraints you can get up to a 1/2 approximation. Anything better is NP-hard. Feige, Mirrokni, and Vondrák have a nice FOCS paper on this.

Next, we could remove the monotonicity property and try to minimize the value of f(S). It turns out that this problem always has an efficient solution, but the only algorithm I have heard of to solve it involves a very sophisticated technique called the ellipsoid algorithm. This is heavily related to linear programming and convex optimization, something which I hope to cover in more detail on this blog.

Finally, there are many interesting variations in the algorithmic procedure. For example, one could require that the elements are provided in some order (the streaming setting), and you have to pick at each step whether to put the element in your set or not. Alternatively, the objective functions might not be known ahead of time and you have to try to pick elements to jointly maximize them as they are revealed. These two settings have connections to bandit learning problems, which we’ve covered before on this blog. See this survey of Krause and Golovin for more on the connections, which also contains the main proof used in this post.

Indeed, despite the fact that many of the big results were proved in the 80’s, the analysis of submodular functions is still a big research topic. There was even a paper posted just the other day on the arXiv about its relation to ad serving! And wouldn’t you know, they proved a (1-1/e)-approximation for their setting. There’s just something about 1-1/e.

Until next time!

The Mathematics of Secret Sharing

Here’s a simple puzzle with a neat story. A rich old woman is drafting her will and wants to distribute her expansive estate equally amongst her five children. But her children are very greedy, and the woman knows that if she leaves her will unprotected her children will resort to nefarious measures to try to get more than their fair share. In one fearful scenario, she worries that the older four children will team up to bully the youngest child entirely out of his claim! She desperately wants them to cooperate, so she decides to lock the will away, and the key is a secret integer N. The question is, how can she distribute this secret number to her children so that the only way they can open the safe is if they are all present and willing?

A mathematical way to say this is: how can she distribute some information to her children so that, given all of their separate pieces of information, they can reconstruct the key, but for every choice of fewer than 5 children, there is no way to reliably recover the key? This is called the secret sharing problem. More generally, say we have an integer N called the secret, a number of participants k, and a number required for reconstruction r. Then a secret sharing protocol is the data of a method for distributing information and a method for reconstructing the secret. The distributing method is an algorithm D that accepts as input N, k, r and produces as output a list of k numbers D(N, k, r) = (x_1, x_2, \dots, x_k). These are the numbers distributed to the k participants. Then the reconstruction method is a function R which accepts as input r numbers (y_1, \dots, y_r) and outputs a number M. We want two properties to hold:

  • The reconstruction function R outputs N when given any r of the numbers output by D.
  • One cannot reliably reconstruct N with fewer than r of the numbers output by D.

The question is: does an efficient secret sharing protocol exist for every possible choice of r \leq k? In fact it does, and the one we’ll describe in this post is far more secure than the word “reliable” suggests. It will be so hard as to be mathematically impossible to reconstruct the secret from fewer than the desired number of pieces. Independently discovered by Adi Shamir in 1979, the protocol we’ll see in this post is wonderfully simple, and as we describe it we’ll build up a program to implement it. This time we’ll work in the Haskell programming language, and you can download the program from this blog’s Github page. And finally, a shout out to my friend Karishma Chadha who worked together with me on this post. She knows Haskell a lot better than I do.

Polynomial Interpolation

The key to the secret sharing protocol is a beautiful fact about polynomials. Specifically, if you give me k+1 points in the plane with distinct x values, then there is a unique degree k polynomial that passes through the points. Just as importantly (and as a byproduct of this fact), there are infinitely many degree k+1 polynomials that pass through the same points. For example, if I give you the points (1,2), (2,4), (-2,2), the only quadratic (degree 2) polynomial that passes through all of them is 1 + \frac{1}{2}x + \frac{1}{2} x^2.

The proof that you can always find such a polynomial is pretty painless, so let’s take it slowly and write a program as we go. Suppose you give me some list of k+1 points (x_0, y_0), \dots, (x_k, y_k) and no two x values are the same. The proof has two parts. First we have to prove existence, that some degree k polynomial passes through the points, and then we have to prove that the polynomial is unique. The uniqueness part is easier, so let’s do the existence part first.

Let’s start with just one point (x_0, y_0). What’s a degree zero polynomial that passes through it? Just the constant function f(x) = y_0. For two points (x_0, y_0), (x_1, y_1) it’s similarly easy, since we all probably remember from basic geometry that there’s a unique line passing through any two points. But let’s write the line in a slightly different way:

\displaystyle f(x) = \frac{(x-x_1)}{x_0-x_1}y_0 + \frac{(x-x_0)}{x_1-x_0} y_1

Why write it this way? Because now it should be obvious that the polynomial passes through our two points: if I plug in x_0 then the second term is zero and the first term is just y_0(x_0 - x_1) / (x_0 - x_1) = y_0, and likewise for x_1.

For example, if we’re given (1, 3), (2, 5) we get:

\displaystyle f(x) = \frac{(x - 2)}{(1-2)} \cdot 3 + \frac{(x-1)}{(2-1)} \cdot 5

Plugging in x = 1 cancels the second term out, leaving f(1) = \frac{1-2}{1-2} \cdot 3 = 3, and plugging in x = 2 cancels the first term, leaving f(2) = \frac{(2-1)}{(2-1)} \cdot 5 = 5.

Now the hard step is generalizing this to three points. But the suggestive form above gives us a hint on how to continue.

\displaystyle f(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}y_0+\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}y_1+ \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}y_2

Notice that the numerators of the terms take on the form y_j \prod_{i \ne j} (x-x_i), that is, the product y_j (x-x_0)(x-x_1) \cdots (x-x_n) with the factor (x - x_j) left out. Thus, if we plug in x_i, all terms cancel out to 0 except one, which has the form

\displaystyle y_i \cdot \frac{\prod_{j \neq i} (x-x_j)}{\prod_{j \neq i} (x_i - x_j)}

Here, the fraction on the right side of the term cancels out to 1 when x_i is plugged in, leaving only y_i, the desired result. Now that we’ve written the terms in this general product form, we can easily construct examples for any number of points. We just do a sum of terms that look like this, one for each y value. Try writing this out as a summation, if you feel comfortable with notation.
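For reference, writing it out ourselves, the whole interpolating polynomial is

\displaystyle f(x) = \sum_{i=0}^{k} y_i \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}

which is often called the Lagrange interpolation formula.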

Let’s go further and write an algorithm to construct the polynomial for us. Some preliminaries: we encode a polynomial as a list of coefficients in degree-increasing order, so that 1 + 3x + 5x^3 is represented by [1,3,0,5].
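One housekeeping note: the snippets below are pieces of a single program (the complete version is on this blog’s Github page), so they assume a few imports that the post doesn’t show. Based on the functions used, a plausible set is:

import Data.Ratio (numerator)
import Data.Char (chr, ord)
import Data.Bits (shift, (.&.))
import System.Random (StdGen, getStdGen, randomRs)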

type Point = (Rational, Rational)
type Polynomial = [Rational] --Polynomials are represented in ascending degree order

Then we can write some simple functions for adding and multiplying polynomials

addPoly :: Polynomial -> Polynomial -> Polynomial
addPoly [] [] = []
addPoly [] xs = xs
addPoly xs [] = xs
addPoly (x:xs) (y:ys) = (x+y) : (addPoly xs ys)

multNShift :: Polynomial -> (Rational, Int) -> Polynomial
multNShift xs (y, shift) =
    (replicate shift 0) ++ ( map ((*) y) xs)

multPoly :: Polynomial -> Polynomial -> Polynomial
multPoly [] [] = []
multPoly [] _ = []
multPoly _ [] = []
multPoly xs ys = foldr addPoly [] $ map (multNShift ys) $ zip xs [0..]

In short, multNShift multiplies a polynomial by a monomial (like 3x^2 (1 + 7x + 2x^4)), and multPoly does the usual distribution of terms, using multNShift to do most of the hard work. Then to construct the polynomial we need one more helper function to extract all elements of a list except a specific entry:

allBut :: Integer -> [a] -> [a]
allBut i list = snd $ unzip $ filter (\ (index,_) -> i /= index) $ zip [0..] list

And now we can construct a polynomial from a list of points in the same way we did mathematically.

findPolynomial :: [Point] -> Polynomial
findPolynomial points =
   let term (i, (xi,yi)) =
          let prodTerms = map (\ (xj, _) -> [-xj/(xi - xj), 1/(xi - xj)]) $ allBut i points
          in multPoly [yi] $ foldl multPoly [1] prodTerms
   in foldl addPoly [] $ map term $ zip [0..] points

Here the sub-function term constructs the i-th term of the polynomial, and the remaining expression adds up all the terms. Remember that due to our choice of representation the awkward 1 sitting in the formula signifies the presence of x. And that’s it! An example of its use to construct 3x - 1:

*Main> findPolynomial [(1,2), (2,5)]
[(-1) % 1,3 % 1]

Now the last thing we need to do is show that the polynomial we constructed in this way is unique. Here’s a proof.

Suppose there are two degree n polynomials f(x) and g(x) that pass through the n+1 given data points (x_0, y_0), (x_1, y_1), \dots , (x_n, y_n). Let h(x) = f(x) - g(x), and we want to show that h(x) is the zero polynomial. This proves that f(x) is unique because the only assumptions we made at the beginning were that f,g both passed through the given points. Now since both f and g are degree n polynomials, h is a polynomial of degree at most n. It is also true that h(x_i) = f(x_i) - g(x_i) = y_i - y_i = 0 for 0 \leq i \leq n. Thus, we have (at least) n+1 roots of this polynomial of degree at most n. But a nonzero polynomial of degree at most n can’t have that many roots! In more detail: if a nonzero degree \leq n polynomial really could have n+1 distinct roots, then you could factor it into at least n+1 linear terms like h(x) = (x - x_0)(x - x_1) \dots (x - x_n). But since there are n+1 copies of x, h would need to be a degree n+1 polynomial! The only way to resolve this contradiction is if h is actually the zero polynomial, and thus h(x) = f(x) - g(x) = 0, i.e. f(x) = g(x).

This completes the proof. Now that we know these polynomials exist and are unique, it makes sense to give them a name. So for a given set of k+1 points, call the unique degree k polynomial that passes through them the interpolating polynomial for those points.

Secret Sharing with Interpolating Polynomials

Once you think to use interpolating polynomials, the connection to secret sharing seems almost obvious. If you want to distribute a secret to k people so that r of them can reconstruct it here’s what you do:

  1. Pick a random polynomial p of degree r-1 so that the secret is p(0).
  2. Distribute the points (1, p(1)), (2, p(2)), \dots, (k, p(k)).

Then the reconstruction function is: take the points provided by at least r participants, use them to reconstruct p, and output p(0). That’s it! Step 1 might seem hard at first, but you can just notice that p(0) is equivalent to the constant term of the polynomial, so you can pick r-1 random numbers for the other coefficients of p and output them. In Haskell,

makePolynomial :: Rational -> Int -> StdGen -> Polynomial
makePolynomial secret r generator =
  secret : map toRational (take (r-1) $ randomRs (1, (numerator(2*secret))) generator)

share :: Rational -> Integer -> Int -> IO [Point]
share secret k r = do
  generator <- getStdGen
  let poly = makePolynomial secret r generator
      ys = map (eval poly) $ map toRational [1..k]
  return $ zip [1..] ys

In words, we initialize the Haskell standard generator (which wraps the results inside an IO monad), then we construct a polynomial by letting the first coefficient be the secret and choosing random coefficients for the rest. And findPolynomial is the reconstruction function.
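One helper used above that the post never shows is eval, which evaluates a polynomial at a point. It lives in the repository; here is a plausible one-liner using Horner’s rule:

-- Evaluate a polynomial (coefficients in ascending degree order) at x.
eval :: Polynomial -> Rational -> Rational
eval poly x = foldr (\c acc -> c + x * acc) 0 poly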

Finally, just to flesh the program out a little more, we write a function that encodes or decodes a string as an integer.

encode :: String -> Integer
encode str = let nums = zip [0..] $ map (toInteger . ord) str
                 integers = map (\(i, n) -> shift n (i*8)) nums
             in foldl (+) 0 integers

decode :: Integer -> String
decode 0 = ""
decode num = if num < 0
             then error "Can't decode a negative number"
             else chr (fromInteger (num .&. 255)) : (decode $ shift num (-8))
-- the mask 255 extracts the low byte, matching the 8-bit shifts in encode

And then we have a function that shows the whole process in action.

example msg k r =
   let secret = toRational $ encode msg
   in do putStrLn $ show $ numerator secret
         points <- share secret k r
         putStrLn $ show $ map (\ (x, y) -> (numerator x, numerator y)) points
         let subset = take r points
             encodedSecret = eval (findPolynomial subset) 0
         putStrLn $ show $ numerator encodedSecret
         putStrLn $ decode $ numerator encodedSecret

And a function call:

*Main> example "Hello world!" 10 5
10334410032606748633331426632
[(1,34613972928232668944107982702),(2,142596447049264820443250256658),(3,406048862884360219576198642966),(4,916237517700482382735379150124),(5,1783927975542901326260203400662),(6,3139385067235193566437068631142),(7,5132372890379242119499357692158),(8,7932154809355236501627439048336),(9,11727493455321672728948666778334),(10,16726650726215353317537380574842)]
10334410032606748633331426632
Hello world!

Security

The final question to really close this problem with a nice solution is, “How secure is this protocol?” That is, if you didn’t know the secret but you had r-1 numbers, could you find a way to recover the secret, oh, say, 0.01% of the time?

Pleasingly, the answer is a solid no. This protocol has something way stronger, what’s called information-theoretic security. In layman’s terms, this means it cannot possibly be broken, period, unless you can take advantage of some weakness in the random number generator, which we assume is cryptographically secure. With that assumption the security proof is trivial. Here it goes.

Pick a number M that isn’t the secret N. It’s any number you want. And say you only have r-1 of the correct numbers y_1, \dots, y_{r-1}. Then there is a final number y_r so that the protocol reconstructs M instead of N. This is no matter which of the unused x-values you pick, no matter what M and r-1 numbers you started with. This is simply because adding in (0, M) defines a new polynomial q, and you can use any point on q as your r-th number.
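To see this concretely, here’s a tiny worked instance with numbers of our own choosing. Say r = 3 and the dealer used p(x) = 5 + 2x + x^2, so the secret is p(0) = 5 and the first three shares are (1,8), (2,13), (3,20). An attacker holding only (1,8) and (2,13) who wants to believe the secret is M = 7 can interpolate through (0,7), (1,8), (2,13) to get q(x) = 7 - x + 2x^2, and the “missing” third share (3, q(3)) = (3, 22) is perfectly consistent with everything they know. Any two shares are consistent with every possible secret.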

Here’s what this means. A person trying to break the secret sharing protocol would have no way to tell if they did it correctly! If the secret is a message, then a bad reconstruction could produce any message. In information theory terms, knowing r-1 of the numbers provides no information about the actual message. In our story from the beginning of the post, no matter how much computing power one of the greedy children may have, the only algorithm they have to open the safe is to try every combination. The mother could make the combination have length in the millions of digits, or even better, the mother could encode the will as an integer and distribute that as the secret. I imagine there are some authenticity issues there, since one could claim to have reconstructed a false will, signatures and all, but there appear to be measures to account for this.

One might wonder if this is the only known secret sharing protocol, and the answer is no. Essentially, any time you have an existence and uniqueness theorem in mathematics, and the objects you’re working with are efficiently constructible, then you have the potential for a secret sharing protocol. There are two more on Wikipedia. But people don’t really care to find new ones anymore because the known protocols are as good as it gets.

On a broader level, the existence of efficient secret sharing protocols is an important fact used in the field of secure multiparty computation. Here the goal is for a group of individuals to compute a function depending on secret information from all of them, without revealing their secret information to anyone. A classic example of this is to compute the average of seven salaries without revealing any of the salaries. This was a puzzle featured on Car Talk, and it has a cute answer. See if you can figure it out.

Until next time!