An Update on “Coloring Resilient Graphs”

A while back I announced a preprint of a paper on coloring graphs with certain resilience properties. I'm pleased to announce that it's been accepted to Mathematical Foundations of Computer Science (MFCS) 2014, which is being held in Budapest this year. Since we first published the preprint we've actually proved some additional results about resilience, and so I'll expand on some of the details here. I think it makes for a nicer overall picture, and in my opinion it gives a little more justification that resilient coloring is interesting, at least in contrast to other resilience problems.

Resilient SAT

Recall that a "resilient" yes-instance of a combinatorial problem is one which remains a yes-instance when you add or remove some constraints. The way we formalized this for SAT was by fixing variables to arbitrary values. Then the question is: how resilient does an instance need to be before we can actually find a certificate for it efficiently? In more detail,

Definition: r-resilient k-SAT formulas are satisfiable formulas in k-CNF form (conjunctions of clauses, where each clause is a disjunction of k literals) such that for all choices of r variables, every way to fix those variables yields a satisfiable formula.

For example, the following 3-CNF formula is 1-resilient:

\displaystyle (a \vee b \vee c) \wedge (a \vee \overline{b} \vee \overline{c}) \wedge (\overline{a} \vee \overline{b} \vee c)
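To make the definition concrete, here's a brute-force resilience checker (a sketch of mine, not from the paper). It represents a formula as a list of clauses of signed integers, where a positive integer i stands for x_i and a negative one for \overline{x_i}:

from itertools import combinations, product

# The example formula above, in the signed-integer encoding.
exampleFormula = [(1, 2, 3), (1, -2, -3), (-1, -2, 3)]

def satisfiable(clauses, n, fixed):
   # Brute force over all assignments consistent with the fixed variables.
   for bits in product([False, True], repeat=n):
      assignment = dict(enumerate(bits, 1))  # variable -> truth value
      assignment.update(fixed)
      if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
         return True
   return False

def isResilient(clauses, n, r):
   # r-resilient: every way of fixing every choice of r variables
   # leaves the formula satisfiable.
   return all(satisfiable(clauses, n, dict(zip(vs, vals)))
              for vs in combinations(range(1, n + 1), r)
              for vals in product([False, True], repeat=r))

Running isResilient(exampleFormula, 3, 1) returns True, while isResilient(exampleFormula, 3, 3) returns False, as the next paragraph explains must happen.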

The idea is that resilience may impose enough structure on a SAT formula that it becomes easy to tell if it's satisfiable at all. Unfortunately for SAT (though this is definitely not the case for coloring), there are only two possibilities. Either the instances are so resilient that they never existed in the first place (they're vacuously trivial), or the instances are NP-hard. The first case is easy: there are no k-resilient k-SAT formulas. Indeed, if you're allowed to fix k variables to arbitrary values, then you can just pick a clause and fix its variables so that every literal in it is false. So no formula can ever remain satisfiable under that condition.

The second case is when the resilience is strictly less than the clause size, i.e. r-resilient k-SAT for 0 \leq r < k. In this case the problem of finding a satisfying assignment is NP-hard. We’ll show this via a sequence of reductions which start at 3-SAT, and they’ll involve two steps: increasing the clause size and resilience, and decreasing the clause size and resilience. The trick is in balancing which parts are increased and decreased. I call the first step the “blowing up” lemma, and the second part the “shrinking down” lemma.

Blowing Up and Shrinking Down

Here's the intuition behind the blowing up lemma. If you give me a regular (unresilient) 3-SAT formula \varphi, what I can do is make a copy of \varphi with a new set of variables and OR the two together. Call this \varphi^1 \vee \varphi^2. This is satisfiable exactly when the original formula is: if you give me a satisfying assignment for the ORed formula, I can just see which of the two copies is satisfied and use that sub-assignment for \varphi, and conversely if you can satisfy \varphi it doesn't matter what truth values you choose for the new set of variables. And further you can transform the ORed formula into a 6-SAT formula in polynomial time: just distribute the OR across the ANDs (the distributive law), which turns each pair of clauses, one from each copy, into a single clause of size 6.

Now the choice of a new set of variables is what gives us some resilience. If you fix one variable to the value of your choice, I can always just work with the other set of variables. Your manipulation doesn't change the satisfiability of the ORed formula, because I've added all of this redundancy. So we took a 3-SAT formula and turned it into a 1-resilient 6-SAT formula.

The idea generalizes to the blowing up lemma, which says that you can measure the effects of a blowup no matter what you start with. More formally, if s is the number of copies of variables you make, k is the clause size of the starting formula \varphi, and r is the resilience of \varphi, then blowing up gives you an [(r+1)s - 1]-resilient (sk)-SAT formula. The argument is almost identical to the example above, except the resilience claim is more general. Specifically, if you fix fewer than (r+1)s variables, then the pigeonhole principle guarantees that one of the s copies of variables has at most r fixed values, and we can just work with that set of variables (i.e., this small part of the big ORed formula is satisfiable if \varphi was r-resilient).
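Here's a sketch of the blowup in the same signed-integer encoding as before (the renaming convention is mine): copy j renames variable i to i + jn, and distributing the OR over the ANDs produces one size-sk clause for every way of choosing one clause from each copy.

from itertools import product

def blowUp(clauses, n, s):
   # Make s copies of the formula on disjoint sets of variables.
   copies = [[tuple(l + n*j if l > 0 else l - n*j for l in c) for c in clauses]
             for j in range(s)]

   # Distribute the OR over the ANDs: one big clause for each way of
   # picking one clause from each copy.
   return [tuple(l for c in choice for l in c) for choice in product(*copies)]

For instance, blowUp(exampleFormula, 3, 2) is a 6-SAT formula on six variables, and isResilient from the earlier snippet confirms it is (at least) 3-resilient, matching (r+1)s - 1 = 3 for r = 1, s = 2.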

The shrinking down lemma is another trick that is similar to the reduction from k-SAT to 3-SAT. There you take a clause like v \vee w \vee x \vee y \vee z and add new variables z_i to break up the clause into clauses of size 3 as follows:

\displaystyle (v \vee w \vee z_1) \wedge (\neg z_1 \vee x \vee z_2) \wedge (\neg z_2 \vee y \vee z)

These are equisatisfiable because your choice of truth values for the z_i tells me which of these sub-clauses has to contain a true literal among the old variables. E.g., if you choose z_1 = T, z_2 = T then you have to pick either y or z to be true. And it's clear that if you're willing to double the number of variables (a linear blowup) you can always get a k-clause down to an AND of 3-clauses.

So the shrinking down reduction does the same thing, except we only split clauses in half. For a clause C, call C[:k/2] the first half of a clause and C[k/2:] the second half (you can see how my Python training corrupts my notation preference). Then to shrink a clause C_i down from size k to size \lceil k/2 \rceil + 1 (1 for the new variable), add a variable z_i and break C_i into

\displaystyle (C_i[:k/2] \vee z_i) \wedge (\neg z_i \vee C_i[k/2:])

and just AND these together for all clauses. Call the original formula \varphi and the transformed one \psi. The formulas are logically equivalent for the same reason that the k-to-3-SAT reduction works, and it’s already in the right CNF form. So resilience is all we have to measure. The claim is that the resilience is q = \min(r, \lfloor k/2 \rfloor), where r is the resilience of \varphi.

The reason for this is that if all the fixed variables are old variables (not z_i), then nothing changes and the resilience of the original \varphi keeps us safe. Fixing a z_i has no effect except to force us to satisfy one of the two halves of its clause using only regular variables, so in effect every fixed z_i costs us a fixed regular variable as well. Because we can't guarantee anything once more than r regular variables are fixed, we can't allow more than r of the z_i to be fixed either. And because the new clauses have size \lceil k/2 \rceil + 1, we can't do this more than \lfloor k/2 \rfloor times or else we risk ruining an entire clause. Taking the smaller of these two bounds gives the definition of q, and this proves the shrinking down lemma.
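In the same encoding as the earlier snippets, the shrinking down transformation is only a few lines (a sketch; each clause gets a fresh variable numbered after the existing ones):

def shrinkDown(clauses, n):
   # Split each clause in half around a fresh variable z_i; return the
   # new clause list along with the new total number of variables.
   newClauses, z = [], n
   for c in clauses:
      z += 1
      half = len(c) // 2
      newClauses.append(c[:half] + (z,))
      newClauses.append((-z,) + c[half:])
   return newClauses, z

Applying it to the blown-up 6-SAT formula from before produces a 4-SAT formula whose claimed resilience is \min(3, \lfloor 6/2 \rfloor) = 3, i.e., (k-1)-resilient k-SAT for k = 4.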

Resilient SAT is always hard

The blowing up and shrinking down lemmas can be used to show that r-resilient k-SAT is NP-hard for all r < k. What we do is reduce from 3-SAT to an r-resilient k-SAT instance in such a way that the 3-SAT formula is satisfiable if and only if the transformed formula is resiliently satisfiable.

What makes these two lemmas work together is that shrinking down halves the clause size while only capping the resilience, and blowing up increases resilience just barely more than it increases clause size. So we can combine these to climb from 3-SAT up to some high resilience and clause size, and then iteratively shrink down until we hit our target.

One might worry that it will take an exponential number of reductions (or a few reductions of exponential size) to get from 3-SAT to the (r,k) of our choice, but we have a construction that does it in at most four steps, with only a linear initial blowup from 3-SAT to r-resilient 3(r+1)-SAT. Then, to deal with the odd ceilings and floors in the shrinking down lemma, you have to find a suitable larger k to reduce to (by padding with useless variables, which cannot make the problem easier). And you choose this k so that you only need at most two applications of shrinking down to get to (k-1)-resilient k-SAT. Our preprint has the gory details (including an inelegant part that is not worth writing here), but in the end you show that (k-1)-resilient k-SAT is hard, and since that's the maximal amount of resilience before the problem becomes vacuously trivial, all smaller resilience values are also hard.

So how does this relate to coloring?

I'm happy about this result not just because it answers an open question I'm honestly curious about, but also because it shows that resilient coloring is more interesting. Basically this proves that satisfiability is so hard that no amount of resilience can make it easier in the worst case. But coloring has a gradient of difficulty. Once you get to order k^2 resilience for k-colorable graphs, the coloring problem can be solved efficiently by a greedy algorithm (and it's not a vacuously empty class of graphs). As a side note, we also use the hardness of resilient SAT to get the hardness results we have for coloring.

If you really want to stretch the implications, you might argue that this says something like “coloring is somewhat easier than SAT,” because we found a quantifiable axis along which SAT remains difficult while coloring crumbles. The caveat is that fixing colors of vertices is not exactly comparable to fixing values of truth assignments (since we are fixing lots of instances by fixing a variable), but at least it’s something concrete.

Coloring is still mostly open, and recently I’ve been going to talks where people are discussing startlingly similar ideas for things like Hamiltonian cycles. So that makes me happy.

Until next time!


Community Detection in Graphs — a Casual Tour

Graphs are among the most interesting and useful objects in mathematics. Any situation or idea that can be described by objects with connections is a graph, and one of the most prominent examples of a real-world graph that one can come up with is a social network.

Recall, if you aren’t already familiar with this blog’s gentle introduction to graphs, that a graph G is defined by a set of vertices V, and a set of edges E, each of which connects two vertices. For this post the edges will be undirected, meaning connections between vertices are symmetric.

One of the most common topics to talk about for graphs is the notion of a community. But what does one actually mean by that word? It’s easy to give an informal definition: a subset of vertices C such that there are many more edges between vertices in C than from vertices in C to vertices in V - C (the complement of C). Try to make this notion precise, however, and you open a door to a world of difficult problems and open research questions. Indeed, nobody has yet come to a conclusive and useful definition of what it means to be a community. In this post we’ll see why this is such a hard problem, and we’ll see that it mostly has to do with the word “useful.” In future posts we plan to cover some techniques that have found widespread success in practice, but this post is intended to impress upon the reader how difficult the problem is.

The simplest idea

The simplest thing to do is to say a community is a subset of vertices which are completely connected to each other. In the technical parlance, a community is a subgraph which forms a clique. Sometimes an n-clique is also called a complete graph on n vertices, denoted K_n. Here’s an example of a 5-clique in a larger graph:

"Where's Waldo" for graph theorists: a clique hidden in a larger graph.

“Where’s Waldo” for graph theorists: a clique hidden in a larger graph.

Indeed, it seems reasonable that if we can reliably find communities at all, then we should be able to find cliques. But as fate would have it, this problem is known to be computationally intractable. In more detail, the problem of finding the largest clique in a graph is NP-hard. That essentially means we don't have any better algorithms to find cliques in general graphs than to try all possible subsets of the vertices and check to see which, if any, form cliques. In fact it's much worse: this problem is known to be hard to approximate to any reasonable factor in the worst case (the error of the approximation grows polynomially with the size of the graph!). So we can't even hope to find a clique half the size of the biggest, or a thousandth the size!

But we have to take these impossibility results with a grain of salt: they only say things about the worst case graphs. And when we’re looking for communities in the real world, the worst case will never show up. Really, it won’t! In these proofs, “worst case” means that they encode some arbitrarily convoluted logic problem into a graph, so that finding the clique means solving the logic problem. To think that someone could engineer their social network to encode difficult logic problems is ridiculous.

So what about an “average case” graph? To formulate this typically means we need to consider graphs randomly drawn from a distribution.

Random graphs

The simplest kind of "randomized" graph you could have is the following. You fix some set of vertices, and then run an experiment: for each pair of vertices you flip a coin, and if the coin is heads you place an edge and otherwise you don't. This defines a distribution on graphs called G(n, 1/2), which we can generalize to G(n, p) for a coin with bias p. With a slight abuse of notation, we call G(n, p) the Erdős–Rényi random graph (it's not a graph but a distribution on graphs). We explored this topic from a more mathematical perspective earlier on this blog.
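Sampling from this distribution is a two-line experiment. Here's a sketch, representing a graph as a set of edges on vertices 0, \dots, n-1:

import random
from itertools import combinations

def erdosRenyi(n, p):
   # Flip a coin with bias p for each pair of vertices.
   return set((u, v) for u, v in combinations(range(n), 2)
              if random.random() < p)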

So we can sample from this distribution and ask questions like: what's the probability of the largest clique being size at least 20? Indeed, cliques in Erdős–Rényi random graphs are so well understood that we know exactly how they work. For example, if p=1/2 then the size of the largest clique is guaranteed (with overwhelming probability as n grows) to have size k(n) or k(n)+1, where k(n) is about 2 \log_2 n. Just as much is known about other values of p as well as other properties of G(n,p); see Wikipedia for a short list.

In other words, if we wanted to find the largest clique in an Erdős–Rényi random graph, we could check all subsets of size roughly 2\log(n), which would take about (n / \log(n))^{\log(n)} time. This is pretty terrible, and I’ve never heard of an algorithm that does better (contrary to the original statement in this paragraph that showed I can’t count). In any case, it turns out that the Erdős–Rényi random graph, and using cliques to represent communities, is far from realistic. There are many reasons why this is the case, but here’s one example that fits with the topic at hand. If I thought the world’s social network was distributed according to G(n, 1/2) and communities were cliques, then I would be claiming that the largest community is of size 65 or 66. Estimated world population: 7 billion, 2 \log(7 \cdot 10^9) \sim 65. Clearly this is ridiculous: there are groups of larger than 66 people that we would want to call “communities,” and there are plenty of communities that don’t form bona-fide cliques.

Another avenue shows that things are still not as easy as they seem in Erdős–Rényi land. This is the so-called planted clique problem. That is, you draw a graph G from G(n, 1/2). You give G to me and I pick a random but secret subset of r vertices and I add enough edges to make those vertices form an r-clique. Then I ask you to find the r-clique. Clearly it doesn’t make sense when r < 2 \log (n) because you won’t be able to tell it apart from the guaranteed cliques in G. But even worse, nobody knows how to find the planted clique when r is even a little bit smaller than \sqrt{n} (like, r = n^{9/20} even). Just to solidify this with some numbers, we don’t know how to reliably find a planted clique of size 60 in a random graph on ten thousand vertices, but we do when the size of the clique goes up to 100. The best algorithms we know rely on some sophisticated tools in spectral graph theory, and their details are beyond the scope of this post.
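Generating an instance of the planted clique problem is just as easy as sampling G(n, p); here's a sketch reusing erdosRenyi (and its imports) from above. Finding the clique again, of course, is the hard part.

def plantClique(n, r, p=0.5):
   # Draw G from G(n, p), then fill in all edges among a secret r-subset.
   edges = erdosRenyi(n, p)
   clique = random.sample(range(n), r)
   edges |= set(tuple(sorted(pair)) for pair in combinations(clique, 2))
   return edges, set(clique)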

So Erdős–Rényi graphs seem to have no hope. What’s next? There are a couple of routes we can take from here. We can try to change our random graph model to be more realistic. We can relax our notion of communities from cliques to something else. We can do both, or we can do something completely different.

Other kinds of random graphs

There is an interesting model of Barabási and Albert, often called the “preferential attachment” model, that has been described as a good model of large, quickly growing networks like the internet. Here’s the idea: you start off with a two-clique G = K_2, and at each time step t you add a new vertex v to G, and new edges so that the probability that the edge (v,w) is added to G is proportional to the degree of w (as a fraction of the total number of edges in G). Here’s an animation of this process:

Image source: Wikipedia
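If you'd like to play with the process yourself, here's a minimal sketch that adds one edge per new vertex (the general model adds several). It uses the fact that a uniformly random endpoint of a uniformly random edge is distributed proportionally to degree:

import random

def preferentialAttachment(n):
   # Start with a 2-clique and attach each new vertex to an existing
   # vertex chosen with probability proportional to its degree.
   edges = [(0, 1)]
   for v in range(2, n):
      w = random.choice(random.choice(edges))
      edges.append((v, w))
   return edges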

The significance of this random model is that it creates graphs with a small number of hubs, and a large number of low-degree vertices. In other words, the preferential attachment model tends to "make the rich richer." Another perspective is that the degree distribution of such a graph is guaranteed to fit a so-called power-law distribution. Informally, this means that the fraction of vertices with degree d decays like an inverse polynomial in d, so that a small number of very high-degree vertices account for a significant share of the edges. This is sometimes called a "fat-tailed" distribution. Since power-law distributions are observed in a wide variety of natural settings, some have used this as justification for working in the preferential attachment setting. On the other hand, this model is known to have no significant community structure (by any reasonable definition, certainly not having cliques of nontrivial size), and this has been used as evidence against the model. I am not aware of any work done on planting dense subgraphs in graphs drawn from a preferential attachment model, but I think it's likely to be trivial and uninteresting. On the other hand, Bubeck et al. have looked at changing the initial graph (the "seed") from a 2-clique to something else, and seeing how that affects the overall limiting distribution.

Another model that often shows up is a model that allows one to make a random graph starting with any fixed degree distribution, not just a power law. There are a number of models that do this in some fashion, and you'll hear a lot of hyphenated names thrown around like Chung-Lu and Molloy-Reed and Newman-Strogatz-Watts. The one we'll describe is quite simple. Say you start with a set of vertices V, and a number d_v for each vertex v, such that the sum of all the d_v is even. This condition is required because in any graph the sum of the degrees of the vertices is twice the number of edges. Then you imagine each vertex v having d_v "edge-stubs." The name suggests a picture like the one below:


Each node has a prescribed number of “edge stubs,” which are randomly connected to form a graph.

Now you pick two edge stubs at random and connect them. One usually allows self-loops and multiple edges between vertices, so that it’s okay to pick two edge stubs from the same vertex. You keep doing this until all the edge stubs are accounted for, and this is your random graph. The degrees were fixed at the beginning, so the only randomization is in which vertices are adjacent. The same obvious biases apply, that any given vertex is more likely to be adjacent to high-degree vertices, but now we get to control the biases with much more precision.
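As a sketch, the whole construction amounts to a shuffle (self-loops and multi-edges allowed, as noted above):

import random

def configurationModel(degrees):
   # One "stub" per unit of degree; pair the stubs up uniformly at random.
   stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
   assert len(stubs) % 2 == 0, "the degree sum must be even"
   random.shuffle(stubs)
   return list(zip(stubs[::2], stubs[1::2]))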

The reason such a model is useful is that when you’re working with graphs in the real world, you usually have statistical information available. It’s simple to compute the degree of each vertex, and so you can use this random graph as a sort of “prior” distribution and look for anomalies. In particular, this is precisely how one of the leading measures of community structure works: the measure of modularity. We’ll talk about this in the next section.

Other kinds of communities

Here’s one easy way to relax our notion of communities. Rather than finding complete subgraphs, we could ask about finding very dense subgraphs (ignoring what happens outside the subgraph). We compute density as the average degree of vertices in the subgraph.

If we impose no bound on the size of the subgraph an algorithm is allowed to output, then there is an efficient algorithm for finding the densest subgraph in a given graph. The general exact solution involves solving a linear programming problem and a little extra work, but luckily there is a greedy algorithm that can get within half of the optimal density. You start with all the vertices S_n = V, and remove a vertex of minimal degree to get S_{n-1}. Continue down to S_1, and then compute the density of each S_i along the way. The best one is guaranteed to be at least half of the optimal density. See this paper of Moses Charikar for a more formal analysis.
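Here's a sketch of that greedy algorithm, taking the density of S to be |E(S)| / |S| (half the average degree, which has the same maximizer):

def greedyDensestSubgraph(vertices, edges):
   # Repeatedly peel off a minimum-degree vertex, remembering the densest
   # subgraph seen along the way; this is within a factor 2 of optimal.
   S, E = set(vertices), set(edges)
   best, bestDensity = set(S), len(E) / float(len(S))
   while len(S) > 1:
      v = min(S, key=lambda u: sum(1 for e in E if u in e))
      S.remove(v)
      E = set(e for e in E if v not in e)
      density = len(E) / float(len(S))
      if density > bestDensity:
         best, bestDensity = set(S), density
   return best, bestDensity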

One problem with this is that the size of the densest subgraph might be too big. Unfortunately, if you fix the size of the dense subgraph you’re looking for (say, you want to find the densest subgraph of size at most k where k is an input), then the problem once again becomes NP-hard and suffers from the same sort of inapproximability theorems as finding the largest clique.

A more important issue with this is that a dense subgraph isn’t necessarily a community. In particular, we want communities to be dense on the inside and sparse on the outside. The densest subgraph analysis, however, might rate the following graph as one big dense subgraph instead of two separately dense communities with some modest (but not too modest) amount of connections between them.


What are the correct communities here?

Indeed, we want a quantifiable notion of "dense on the inside and sparse on the outside." One such formalization is called modularity. Modularity works as follows. If you give me some partition of the vertices of G into two sets, modularity measures how well this partition reflects two separate communities. It's the definition of "community" here that makes it interesting. Rather than ask about densities exactly, you can compare the densities to the expected densities in a given random graph model.

In particular, we can use the fixed-degree distribution model from the last section. If we know the degrees of all the vertices ahead of time, we can compute the probability that we see some number of edges going between the two pieces of the partition relative to what we would see at random. If the difference is large (and largely biased toward fewer edges across the partition and more edges within the two subsets), then we say it has high modularity. This involves a lot of computations — the whole measure can be written as a quadratic form via one big matrix — but the idea is simple enough. We intend to write more about modularity and implement the algorithm on this blog, but the excited reader can see the original paper of M.E.J. Newman.
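To make the idea concrete, here's a sketch of the computation for a given partition (communities given as a map from vertex to label); it's just the quadratic form written as two loops:

def modularity(vertices, edges, community):
   # Q = (1/2m) * sum over same-community pairs (i, j) of
   #     A_ij - degree(i) * degree(j) / (2m)
   m = len(edges)
   degree = dict((v, 0) for v in vertices)
   for u, v in edges:
      degree[u] += 1
      degree[v] += 1
   adjacent = set(edges) | set((v, u) for (u, v) in edges)
   Q = 0.0
   for i in vertices:
      for j in vertices:
         if community[i] == community[j]:
            Q += ((i, j) in adjacent) - degree[i] * degree[j] / (2.0 * m)
   return Q / (2.0 * m)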

Now modularity is very popular, but it too has shortcomings. First, even though you can compute the modularity of a given partition, there's still the problem of finding the partition that globally maximizes modularity. Sadly, this is known to be NP-hard. Moreover, it's known to be NP-hard even if you're just trying to find a partition into two pieces that maximizes modularity, and even still when the graph is regular (every vertex has the same degree).

Still worse, while there are some readily accepted heuristics that often “do well enough” in practice, we don’t even know how to approximate modularity very well. Bhaskar DasGupta has a line of work studying approximations of maximum modularity, and he has proved that for dense graphs you can’t even approximate modularity to within any constant factor. That is, the best you can do is have an approximation that gets worse as the size of the graph grows. It’s similar to the bad news we had for finding the largest clique, but not as bad. For example, when the graph is sparse it’s known that one can approximate modularity to within a \log(n) factor of the optimum, where n is the number of vertices of the graph (for cliques the factor was like n^c for some c, and this is drastically worse).

Another empirical issue is that modularity seems to fail to find small communities. That is, if your graph has some large communities and some small communities, strictly maximizing the modularity is not the right thing to do. So we’ve seen that even the leading method in the field has some issues.

Something completely different

The last method I want to sketch is in the realm of “something completely different.” The notion is that if we’re given a graph, we can run some experiment on the graph, and the results of that experiment can give us insight into where the communities are.

The experiment I'm going to talk about is the random walk. Say you have a vertex v in a graph G and you want to find the vertices "closest" to v, that is, those most likely to be in the same community as v. What you can do is run a random walk starting at v. By a "random walk" I mean you start at v, you pick a neighbor at random and move to it, then repeat. You can compute statistics about the vertices you visit in a sample of such walks, and the vertices that you visit most often are those you say are "in the same community as v." One important parameter is how long the walk is, but it's generally believed to be best if you keep it between 3-6 steps.

Of course, this is not a partition of the vertices, so it’s not a community detection algorithm, but you can turn it into one. Run this process for each vertex, and use it to compute a “distance” between all the pairs of vertices. Then you compute a tree of partitions by lumping the closest pairs of vertices into the same community, one at a time, until you’ve got every vertex. At each step of the way, you compute the modularity of the partition, and when you’re done you choose the partition that maximizes modularity. This algorithm as a whole is called the walktrap clustering algorithm, and was introduced by Pons and Latapy in 2005.
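A sampled sketch of the first step might look like the following (walktrap proper computes exact t-step transition probabilities and a degree-weighted distance, but sampling conveys the idea). Here neighbors maps each vertex to a list of its neighbors:

import random
from collections import defaultdict

def walkProfile(neighbors, v, walkLength=4, numWalks=1000):
   # Estimate how often a short random walk from v ends at each vertex;
   # frequent endpoints are candidates for v's community.
   counts = defaultdict(int)
   for _ in range(numWalks):
      u = v
      for _ in range(walkLength):
         u = random.choice(neighbors[u])
      counts[u] += 1
   return dict((u, c / float(numWalks)) for u, c in counts.items())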

This sounds like a really great idea, because it’s intuitive: there’s a relatively high chance that the friends of your friends are also your friends. It’s also really great because there is an easily measurable tradeoff between runtime and quality: you can tune down the length of the random walk, and the number of samples you take for each vertex, to speed up the runtime but lower the quality of your statistical estimates. So if you’re working on huge graphs, you get a lot of control and a clear idea of exactly what’s going on inside the algorithm (something which is not immediately clear in a lot of these papers).

Unfortunately, I’m not aware of any concrete theoretical guarantees for walktrap clustering. The one bit of theoretical justification I’ve read over the last year is that you can relate the expected distances you get to certain spectral properties of the graph that are known to be related to community structure, but the lower bounds on maximizing modularity already suggest (though they do not imply) that walktrap won’t do that well in the worst case.

So many algorithms, so little time!

I have only brushed the surface of the literature on community detection, and the things I have discussed are heavily biased toward what I’ve read about and used in my own research. There are methods based on information theory, label propagation, and obscure physics processes like “spin glass” (whatever that is, it sounds frustrating).

And we have only been talking about perfect community structure. What if you want to allow people to be in multiple communities, or have communities at varying levels of granularity (e.g. a sports club within a school versus the whole student body of that school)? What if we want to allow people to be "members" of a community at varying degrees of intensity? How do we deal with noisy signals in our graphs? For example, if we get our data from observing people talk, are two people who have heated arguments considered to be in the same community? Since a lot of social network data comes from sources like Twitter and Facebook where arguments are rampant, how do we distinguish between useful and useless data? More subtly, how do we determine useful information if a group within the social network is trying to mask its discovery? That is, how do we deal with adversarial noise in a graph?

And all of this is just on static graphs! What about graphs that change over time? You can keep making the problem more and more complicated as it gets more realistic.

With the huge wealth of research that has already been done just on the simplest case, and the difficult problems and known barriers to success even for the simple problems, it seems almost intimidating to even begin to try to answer these questions. But maybe that’s what makes them fascinating, not to mention that governments and big businesses pour many millions of dollars into this kind of research.

In the future of this blog we plan to derive and implement some of the basic methods of community detection. This includes, as a first outline, the modularity measure and the walktrap clustering algorithm. Considering that I’m also going to spend a large part of the summer thinking about these problems (indeed, with some of the leading researchers and upcoming stars under the sponsorship of the American Mathematical Society), it’s unlikely to end there.

Until next time!

An Un-PAC-learnable Problem

In a previous post we introduced a learning model called Probably Approximately Correct (PAC). We saw an example of a concept class that was easy to learn: intervals on the real line (and more generally, if you did the exercise, axis-aligned rectangles in a fixed dimension).

But PAC learning wouldn’t be an interesting model if every concept class was PAC-learnable. So as a technical aside in our study of learning theory, this post presents the standard example of a problem that isn’t learnable in the PAC model. Afterward we’ll see that allowing the learner to be more expressive can be helpful, and by doing so we can make this unlearnable problem learnable.

3-Term DNF Formulas

Readers of this blog will probably have encountered a boolean formula before. A boolean formula is just a syntactic way to describe some condition (like, exactly one of these two things has to be true) using variables and logical connectives. The best way to recall it is by example: the following boolean formula encodes the “exclusive or” of two variables.

\displaystyle (x \wedge \overline{y}) \vee (\overline{x} \wedge y)

The wedge \wedge denotes a logical AND and the vee \vee denotes a logical OR. A bar above a variable represents a negation of a variable. (Please don’t ask me why the official technical way to write AND and OR is in all caps, I feel like I’m yelling math at people.)

In general a boolean formula has literals, which we can always denote by an x_i or the negation \overline{x_i}, and connectives \wedge and \vee, and parentheses to denote order. It’s a simple fact that any logical formula can be encoded using just these tools, but rather than try to learn general boolean formulas we look at formulas in a special form.

Definition: A formula is in three-term disjunctive normal form (DNF) if it has the form C_1 \vee C_2 \vee C_3 where each C_i is an AND of some number of literals.

Readers who enjoyed our P vs NP primer will recall a related form of formulas: the 3-CNF form, where the "three" meant that each clause had exactly three literals and the "C" means the clauses are connected with ANDs. This is a sort of dual normal form: there are only three clauses, each clause can have any number of variables, and the roles of AND and OR are switched. In fact, if you just distribute the \vee's across the \wedge's in a 3-term DNF formula (the distributive law), you'll get an equivalent 3-CNF formula. The restriction of our hypotheses to 3-term DNFs will be the crux of the difficulty: it's not that we can't learn DNF formulas, we just can't learn them if we are forced to express our hypothesis as a 3-term DNF as well.
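The distribution is mechanical enough to write down as a sketch, with each term given as a list of literals:

from itertools import product

def threeTermDNFToCNF(terms):
   # (AND T_1) OR (AND T_2) OR (AND T_3) becomes an AND of clauses, one
   # clause for each way of picking a single literal from each term.
   return list(product(*terms))

For example, threeTermDNFToCNF([['a'], ['b', 'c'], ['d']]) returns [('a', 'b', 'd'), ('a', 'c', 'd')], the CNF of a \vee (b \wedge c) \vee d.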

The way we’ll prove that 3-term DNF formulas “can’t be learned” in the PAC model is by an NP-hardness reduction. That is, we’ll show that if we could learn 3-term DNFs in the PAC model, then we’d be able to efficiently solve NP-hard problems with high probability. The official conjecture we’d be violating is that RP is different from NP. RP is the class of problems that you can solve in polynomial time with randomness if you can never have false positives, and the probability of a false negative is at most 1/2. Our “RP” algorithm will be a PAC-learning algorithm.

The NP-complete problem we’ll reduce from is graph 3-coloring. So if you give me a graph, I’ll produce an instance of the 3-term DNF PAC-learning problem in such a way that finding a hypothesis with low error corresponds to a valid 3-coloring of the graph. Since PAC-learning ensures that you are highly likely to find a low-error hypothesis, the existence of a PAC-learning algorithm will constitute an RP algorithm to solve this NP-complete problem.

In more detail, an “instance” of the 3-term DNF problem comes in the form of a distribution over some set of labeled examples. In this case the “set” is the set of all possible truth assignments to the variables, where we fix the number of variables to suit our needs, along with a choice of a target 3-term DNF to be learned. Then you’d have to define the distribution over these examples.

But we'll actually do something a bit slicker. We'll take our graph G, we'll construct a set S_G of labeled truth assignments, and we'll define the distribution D to be the uniform distribution over those truth assignments used in S_G. Then, if there happens to be a 3-term DNF that coincidentally labels the truth assignments in S_G exactly how we labeled them, and we set the allowed error \varepsilon to be small enough, a PAC-learning algorithm will find a consistent hypothesis (and it will correspond to a valid 3-coloring of G). Otherwise, no algorithm would be able to come up with a low-error hypothesis, so if our purported learning algorithm outputs a bad hypothesis we'd know (with high probability) that it was not bad luck but that the examples are not consistent with any 3-term DNF (and hence there is no valid 3-coloring of G).

This general outline has nothing to do with graphs, and so you may have guessed that the technique is commonly used to prove learning problems are hard: come up with a set of labeled examples, and a purported PAC-learning algorithm would have to come up with a hypothesis consistent with all the examples, which translates back to a solution to your NP-hard problem.

The Reduction

Now we can describe the reduction from graphs to labeled examples. The intuition is simple: each term in the 3-term DNF should correspond to a color class, the positive examples will correspond to vertices, and the negative examples will correspond to edges, so that two adjacent vertices can never be "colored" by the same term.

For a graph G with n nodes v_1, \dots, v_n and a set of m undirected edges E, we construct a set of examples with positive labels S^+ and one with negative examples S^-. The examples are truth assignments to n variables, which we label x_1, \dots, x_n, and we identify a truth assignment with the \left \{ 0,1 \right \}-valued vector (x_1, x_2, \dots, x_n) in the usual way (true is 1, false is 0).

The positive examples S^+ are simple: for each v_i add a truth assignment with x_i = F and x_j = T for all j \neq i. I.e., the binary vector is (1, \dots, 1, 0, 1, \dots, 1), with the zero in the i-th position.

The negative examples S^- come from the edges. For each edge (v_i, v_j) \in E, we add the example with a zero in the i-th and j-th components and ones everywhere else. Here is an example graph and the corresponding positive and negative examples:

[Figure: PAC-reduction]
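In code, the construction of the examples is only a few lines (a sketch; examples are 0-1 tuples as above, with vertices indexed from zero):

def graphToExamples(n, edges):
   # One positive example per vertex (a zero only in that position), and
   # one negative example per edge (zeros in both endpoints' positions).
   positives = [tuple(0 if j == i else 1 for j in range(n)) for i in range(n)]
   negatives = [tuple(0 if j in (u, v) else 1 for j in range(n))
                for (u, v) in edges]
   return positives, negatives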

Claim: G is 3-colorable if and only if the corresponding examples are consistent with some 3-term DNF formula \varphi.

Again, consistent just means that \varphi is satisfied by every truth assignment in S^+ and unsatisfied by every example in S^-. Since we chose our distribution to be uniform over S^+ \cup S^-, we don’t care what \varphi does elsewhere.

Indeed, if G is three-colorable we can fix some valid 3-coloring with colors red, blue, and yellow. We can construct a 3-term DNF that does what we need. Let T_R be the AND of all the literals x_i for which vertex v_i is not red. If v_i is red, the corresponding example in S^+ will satisfy T_R, because its only zero is in the i-th position and x_i does not appear in T_R. Similarly, no example in S^- will make T_R true, because to do so both vertices in the corresponding edge would have to be red.

To drive this last point home say there are three vertices and your edge is (v_1,v_2). Then the corresponding negative example is (0,0,1). Unless both v_1 and v_2 are colored red, one of x_1, x_2 will have to be ANDed as part of T_R. But the example has a zero for both x_1 and x_2, so T_R would not be satisfied.

Doing the same thing for blue and yellow, and ORing the three terms together, we get T_R \vee T_B \vee T_Y. Since the argument is symmetric for the other colors, this gives a consistent 3-term DNF.

On the other hand, say there is a consistent 3-term DNF \varphi. We need to construct a three-coloring of G. It goes in largely the same way: label the clauses \varphi = T_R \vee T_B \vee T_Y for Red, Blue, and Yellow, and then color a vertex v_i the color of the clause that is satisfied by the corresponding example in S^+. There must be some such clause because \varphi is consistent with S^+, and if there are multiple you can pick a valid color arbitrarily. Now we argue why no edge can be monochromatic. Suppose there were such an edge (v_i, v_j), and both v_i and v_j are colored, say, blue. Look at the clause T_B: since v_i and v_j are both blue, the positive examples corresponding to those vertices (with a 0 in a single position and 1's everywhere else) both make T_B true. Since those two positive examples differ in both their i-th and j-th positions, T_B can't contain any of the literals x_i, \overline{x_i}, x_j, \overline{x_j}. But then the negative example for the edge would satisfy T_B, because it has 1's everywhere except positions i and j! This means that the formula doesn't consistently classify the negative examples, a contradiction. This proves the Claim.

Now we just need to show a few more details to finish the proof. In particular, we need to observe that the number of examples we generate is polynomial in the size of the graph G; that the learning algorithm would still run in polynomial time in the size of the input graph (indeed, this depends on our choice of the learning parameters); and that we only need to pick \delta < 1/2 and \varepsilon \leq 1/(2|S^+ \cup S^-|) in order to enforce that an efficient PAC-learner would generate a hypothesis consistent with all the examples. Indeed, if a hypothesis errs on even one example, it will have error at least 1 / |S^+ \cup S^-|, which is too big.

Everything’s not Lost

This might seem a bit depressing for PAC-learning, that we can’t even hope to learn 3-term DNF formulas. But we will give a sketch of why this is not a problem with PAC but a problem with DNFs.

In particular, the difficulty comes in forcing a PAC-learning algorithm to express its hypothesis as a 3-term DNF, as opposed to what we might argue is a more natural representation. As we observed, distributing the ORs in a 3-term DNF produces a 3-CNF formula (an AND of clauses where each clause is an OR of exactly three literals). Indeed, one can PAC-learn 3-CNF formulas efficiently, and it suffices to show that one can learn formulas which are just ANDs of literals. Then you can blow up the number of variables by only a polynomial factor to get 3-CNFs. ANDs of literals are just called "conjunctions," so the problem is to PAC-learn conjunctions. The idea that works is the same one as in our first post on PAC where we tried to learn intervals: just pick the "smallest" hypothesis that is consistent with all the examples you've seen so far. We leave a formal proof as an (involved) exercise to the reader.
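For the curious, here is a sketch of that learner for conjunctions: start with the AND of all 2n literals and delete any literal contradicted by a positive example (negative examples can be ignored, since the hypothesis only ever errs by being too restrictive).

def learnConjunction(n, positiveExamples):
   # A literal is a pair (i, b): b = True means x_i, b = False means its
   # negation. Each example is a tuple of n booleans.
   hypothesis = set((i, b) for i in range(n) for b in (True, False))
   for example in positiveExamples:
      hypothesis = set((i, b) for (i, b) in hypothesis if example[i] == b)
   return hypothesis  # the AND of the surviving literals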

The important thing to note is that a concept class C (the thing we’re trying to learn) might be hard to learn if you’re constrained to work within C. If you’re allowed more expressive hypotheses (in this case, arbitrary boolean formulas), then learning C suddenly becomes tractable. This compels us to add an additional caveat to the PAC definition from our first post.

Definition: A concept class \mathsf{C} over a set X is efficiently PAC-learnable using the hypothesis class \mathsf{H} if there exists an algorithm A(\varepsilon, \delta) with access to a query function for \mathsf{C} and runtime O(\text{poly}(1/\varepsilon, 1/\delta)), such that for all c \in \mathsf{C}, all distributions D over X, and all 0 < \delta , \varepsilon < 1/2, the probability that A produces a hypothesis h \in \mathsf{H} with error at most \varepsilon is at least 1-\delta.

And with that we’ll end this extended side note. The next post in this series will introduce and analyze a fascinating notion of dimension for concept classes, the Vapnik-Chervonenkis dimension.

Until then!

Stable Marriages and Designing Markets

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it's quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man, someone will have to settle for less than her top choice. So if we define happiness in this naive way, the problem is obviously not solvable in general.

Now what if instead of aiming for each individual’s maximum happiness we instead shoot for mutual contentedness? That is, what if “happiness” here means that nobody will ever have an incentive to cheat on their spouse? It turns out that for a mathematical version of this condition, we can always find a suitable set of marriages! These mathematical formalisms include some assumptions, such as that preferences never change and that no new individuals are added to the population. But it is nevertheless an impressive theorem that we can achieve stability no matter what everyone’s preferences are. In this post we’ll give the classical algorithm which constructs so-called “stable marriages,” and we’ll prove its correctness. Then we’ll see a slight generalization of the algorithm, in which the marriages are “polygamous,” and we’ll apply it to the problem of assigning students to internships.

As usual, all of the code used in this post is available for download at this blog’s Github page.

Historical Notes

The original algorithm for computing stable marriages was discovered by Lloyd Shapley and David Gale in the early 1960s. Shapley and Alvin Roth went on to dedicate much of their careers to designing markets and applying the stable marriage problem and its generalizations to such problems. In 2012 they jointly received the Nobel prize in economics for their work on this problem. If you want to know more about what "market design" means and why it's needed (and you have an hour to spare), consider watching the talk below by Alvin Roth at the Simons Institute's 2013 Symposium on the Visions of the Theory of Computing. Roth spends most of his time discussing the state of one particular economy, medical students and residence positions at hospitals, which he was asked to redesign. It's quite a fascinating tale, although some of the deeper remarks assume knowledge of the algorithm we cover in this post.

Alvin Roth went on to apply the ideas presented in the video to economic systems in Boston and New York City public schools, kidney exchanges, and others. They all had the same sort of structure: both parties have preferences and stability makes sense. So he actually imposed the protocol we’re about to describe in order to guarantee that the process terminates to a stable arrangement (and automating it saves everyone involved a lot of time, stress, and money! Watch the video above for more on that).

The Monogamous Stable Marriage Algorithm

Let’s formally set up the problem. Let X = \left \{ 1, 2, \dots, n \right \} be a set of n suitors and Y = \left \{ 1,2,\dots ,n \right \} be a set of n “suited.” Let \textup{pref}_{X \to Y}: X \to S_n be a list of preferences for the suitors. In words, \textup{pref}_{X \to Y} accepts as input a suitor, and produces as output an ordering on the suited members of Y. We denote the output set as S_n, which the group theory folks will recognize as the permutation group on 1, \dots, n. Likewise, there is a function \textup{pref}_{Y \to X}: Y \to S_n describing the preferences of each of the suited.

An example will help clarify these stuffy definitions. If X = \left \{ 1, 2, 3 \right \} and Y = \left \{ 1, 2, 3 \right \}, then to say that

\textup{pref}_{X \to Y}(2) = (3, 1, 2)

is to say that the second suitor prefers the third member of Y the most, and then the first member of Y, and then the second. The programmer might imagine that the datum of the problem consists of two dictionaries (one for X and one for Y) whose keys are integers and whose values are lists of integers which contain 1 through n in some order.

A solution to the problem, then, is a way to match (or marry) suitors with suited. Specifically, a matching is a bijection m: X \to Y, so that x is matched with m(x). The reason we use a bijection is because the marriages are monogamous: only one suitor can be matched with one suited and vice versa. Later we’ll see this condition dropped so we can apply it to a more realistic problem of institutions (suited) which can accommodate many applicants (suitors). Because suitor and suited are awkward to say, we’ll use the familiar, antiquated, and politically incorrect terms “men and women.”

Now if we're given a monogamous matching m, a pair x \in X, y \in Y is called unstable for m if both x,y prefer each other over their partners assigned by m. That is, (x,y) is unstable for m if y appears before m(x) in the preference list for x, \textup{pref}_{X \to Y}(x), and likewise x appears before m^{-1}(y) in \textup{pref}_{Y \to X}(y).

Another example to clarify: again let X = Y = \left \{ 1,2,3 \right \} and suppose for simplicity that our matching m pairs m(i) = i. If man 2 has the preference list (3,2,1) and woman 3 has the preference list (2,1,3), then 2 and 3 together form an unstable pair for m, because they would rather be with each other over their current partners. That is, they have a mutual incentive to cheat on their spouses. We say that the matching is unstable or admits an unstable pair if there are any unstable pairs for it, and we call the entire matching stable if it doesn’t admit any unstable pairs.


Unlike real life, mathematically unstable marriages need not feature constant arguments.

So the question at hand is: is there an algorithm which, given access to the two sets of preferences, can efficiently produce a stable matching? We can also wonder whether a stable matching is guaranteed to exist, and the answer is yes. In fact, we'll prove this and produce an efficient algorithm in one fell swoop.

The central concept of the algorithm is called deferred acceptance. The gist is like this. The algorithm operates in rounds. During each round, each man will “propose” to a woman, and each woman will pick the best proposal available. But the women will not commit to their pick. They instead reject all other suitors, who go on to propose to their second choices in the next round. At that stage each woman (who now may have a more preferred suitor than in the first round) may replace her old pick with a new one. The process continues in this manner until each man is paired with a woman. In this way, each of the women defers accepting any proposal until the end of the round, progressively increasing the quality of her choice. Likewise, the men progressively propose less preferred matches as the rounds progress.

It's easy to argue such a process must eventually converge. Indeed, the contrary would mean there's some sort of cycle in the order of proposals, but each man proposes only to strictly less preferred women than in any previous round, and the women can only strictly increase the quality of their held pick. Mathematically, we're using an important tool called monotonicity: some quantity can only increase or decrease as time goes on, and since the quantity is bounded, we must eventually reach a local maximum. From there, we can prove that any local maximum satisfies the property we want (here, that the matching is stable), and we win. Indeed, suppose to the contrary that we have a pair (x,y) which is unstable for the matching m produced at the end of this process. Then it must have been the case that x proposed to y in some earlier round. But y ended up with some other suitor x' = m^{-1}(y) whom she prefers less than x, and the only way that can happen is if at some point y traded her held suitor for a less preferred one. Since her choices are monotonic this cannot happen, so no unstable pairs can exist.

Rather than implement the algorithm in pseudocode, let's write the entire algorithm in Python to make the ideas completely concrete.

Python Implementation

We start off with some simple data definitions for the two parties which, in the renewed interest of generality, we refer to as Suitor and Suited.

class Suitor(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.rejections = 0 # num rejections is also the index of the next option
      self.id = id

   def preference(self):
      return self.prefList[self.rejections]

   def __repr__(self):
      return repr(self.id)

A Suitor is simple enough: he has an id representing his “index” in the set of Suitors, and a preference list prefList which in its i-th position contains the Suitor’s i-th most preferred Suited. This is identical to our mathematical representation from earlier, where a list like (2,3,1) means that the Suitor prefers the second Suited most and the first Suited least. Knowing the algorithm ahead of time, we add an additional piece of data: the number of rejections the Suitor has seen so far. This will double as the index of the Suited that the Suitor is currently proposing to. Indeed, the preference function provides a thin layer of indirection allowing us to ignore the underlying representation, so long as one updates the number of rejections appropriately.

Now for the Suited.

class Suited(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.held = None
      self.currentSuitors = set()
      self.id = id

   def __repr__(self):
      return repr(self.id)

A Suited likewise has a list of preferences and an id, but in addition she has a held attribute for the currently held Suitor, and a set currentSuitors of Suitors that are currently proposing to her. Hence we can define a reject method which accepts no inputs, and returns a set of rejected suitors, while updating the woman's state to hold onto her most preferred suitor.

   def reject(self):
      if len(self.currentSuitors) == 0:
         return set()

      if self.held is not None:
         self.currentSuitors.add(self.held)

      self.held = min(self.currentSuitors, key=lambda suitor: self.prefList.index(suitor.id))
      rejected = self.currentSuitors - set([self.held])
      self.currentSuitors = set()

      return rejected

The call to min does all the work: finding the Suitor that appears first in her preference list. The rest is bookkeeping. Now the algorithm for finding a stable marriage, following the deferred acceptance algorithm, is simple.

# monogamousStableMarriage: [Suitor], [Suited] -> {Suitor -> Suited}
# construct a stable (monogamous) marriage between suitors and suiteds
def monogamousStableMarriage(suitors, suiteds):
   unassigned = set(suitors)

   while len(unassigned) > 0:
      for suitor in unassigned:
         suiteds[suitor.preference()].currentSuitors.add(suitor)
      unassigned = set()

      for suited in suiteds:
         unassigned |= suited.reject()

      for suitor in unassigned:
         suitor.rejections += 1

   return dict([(suited.held, suited) for suited in suiteds])

All the Suitors are unassigned to begin with. Each iteration of the loop corresponds to a round of the algorithm: the Suitors are added to the currentSuitors list of their next most preferred Suited. Then the Suiteds “simultaneously” reject some Suitors, whose rejection counts are upped by one and returned to the pool of unassigned Suitors. Once every Suited has held onto a Suitor we’re done.

Given a matching, we can define a function that verifies by brute force that the marriage is stable.

# verifyStable: [Suitor], [Suited], {Suitor -> Suited} -> bool
# check that the assignment of suitors to suited is a stable marriage
def verifyStable(suitors, suiteds, marriage):
   import itertools
   suitedToSuitor = dict((v,k) for (k,v) in marriage.items())
   precedes = lambda L, item1, item2: L.index(item1) < L.index(item2)

   def suitorPrefers(suitor, suited):
      return precedes(suitor.prefList, suited.id, marriage[suitor].id)

   def suitedPrefers(suited, suitor):
      return precedes(suited.prefList, suitor.id, suitedToSuitor[suited].id)

   for (suitor, suited) in itertools.product(suitors, suiteds):
      if suited != marriage[suitor] and suitorPrefers(suitor, suited) and suitedPrefers(suited, suitor):
         return False, (suitor.id, suited.id)

   return True

Indeed, we can test the algorithm on an instance of the problem.

>>> suitors = [Suitor(0, [3,5,4,2,1,0]), Suitor(1, [2,3,1,0,4,5]),
...            Suitor(2, [5,2,1,0,3,4]), Suitor(3, [0,1,2,3,4,5]),
...            Suitor(4, [4,5,1,2,0,3]), Suitor(5, [0,1,2,3,4,5])]
>>> suiteds = [Suited(0, [3,5,4,2,1,0]), Suited(1, [2,3,1,0,4,5]),
...            Suited(2, [5,2,1,0,3,4]), Suited(3, [0,1,2,3,4,5]),
...            Suited(4, [4,5,1,2,0,3]), Suited(5, [0,1,2,3,4,5])]
>>> marriage = monogamousStableMarriage(suitors, suiteds)
>>> marriage
{3: 0, 4: 4, 5: 1, 1: 2, 2: 5, 0: 3}
>>> verifyStable(suitors, suiteds, marriage)
True

We encourage the reader to check this by hand (this one only took two rounds). Even better, answer the question of whether the algorithm could ever require n steps to converge for 2n individuals, where you get to pick the preference list to try to make this scenario happen.

Stable Marriages with Capacity

We can extend this algorithm to work for “polygamous” marriages in which one Suited can accept multiple Suitors. In fact, the two problems are entirely the same! Just imagine duplicating a Suited with large capacity into many Suiteds with capacity of 1. This particular reduction is not very efficient, but it allows us to see that the same proof of convergence and correctness applies. We can then modify our classes and algorithm to account for it, so that (for example) instead of a Suited “holding” a single Suitor, she holds a set of Suitors. We encourage the reader to try extending our code above to the polygamous case as an exercise, and we’ve provided the solution in the code repository for this post on this blog’s Github page.
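If you want a hint, here's one shape the modified Suited might take (a sketch of mine; the repository's version may differ): she holds a set of Suitors up to her capacity, and reject keeps the most preferred of her held and current Suitors.

class CapacitatedSuited(Suited):
   def __init__(self, id, prefList, capacity):
      Suited.__init__(self, id, prefList)
      self.held = set()
      self.capacity = capacity

   def reject(self):
      # Keep the `capacity` most preferred of the held and current Suitors.
      pool = self.held | self.currentSuitors
      ranked = sorted(pool, key=lambda suitor: self.prefList.index(suitor.id))
      self.held = set(ranked[:self.capacity])
      self.currentSuitors = set()
      return set(ranked[self.capacity:])

The main loop goes through unchanged; only the construction of the final matching dictionary needs to account for held being a set.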

Ways to Make it Harder

When you study algorithmic graph problems as much as I do, you start to get disheartened. It seems like every problem is NP-hard or worse. So when we get a situation like this, a nice, efficient algorithm with very real consequences and interpretations, you start to get very excited. In between our heaves of excitement, we imagine all the other versions of this problem that we could solve and Nobel prizes we could win. Unfortunately the landscape is bleaker than that, and most extensions of stable marriage problems are NP-complete.

For example, what if we allow ties? That is, one man can be equally happy with two women. This is NP-complete. However, it turns out this extension can be formulated as an integer programming problem, and standard optimization techniques can be used to approximate a solution.

What if, thinking about the problem in terms of medical students and residencies, we allow people to pick their preferences as couples? Some med students are married, after all, and prefer to be close to their spouse even if it means they have a less preferred residency. NP-hard again. See page 53 (pdf page 71) of these notes for a more detailed investigation. The problem is essentially that there is not always a stable matching, and so even determining whether there is one is NP-complete.

So there are a lot of ways to enrich the problem, and there’s an interesting line between tractable and hard in the worst case. As a (relatively difficult) exercise, try to solve the “roommates” version of the problem, where there is no male/female distinction (anyone can be matched with anyone). It turns out to have a tractable solution, and the algorithm is similar to the one outlined in this post.

Until next time!

PS. I originally wrote this post about a year ago when I was contacted by someone in industry who agreed to provide some (anonymized) data listing the preferences of companies and interns applying to work at those companies. Not having heard from them for almost a year, I figure it’s a waste to let this finished post collect dust at the risk of not having an interesting data set. But if you, dear reader, have any data you’d like to provide that fits into the framework of stable marriages, I’d love to feature your company/service on my blog (and solve the matching problem) in exchange for the data. The only caveat is that the data would have to be public, so you would have to anonymize it.