On Coloring Resilient Graphs

I’m pleased to announce that another paper of mine is finished. This one is submitted to ICALP, which is being held in Copenhagen this year (this whole research thing is exciting!). This is joint work with my advisor, Lev Reyzin. As with my first paper, I’d like to explain things here on my blog a bit more informally than a scholarly article allows.

A Recent History of Graph Coloring

One of the first important things you learn when you study graphs is that coloring graphs is hard. Remember that coloring a graph with k colors means that you assign each vertex a color (a number in \left \{ 1, 2, \dots, k \right \}) so that no vertex is adjacent to a vertex of the same color (no edge is monochromatic). In fact, even deciding whether a graph can be colored with just 3 colors (not to mention finding such a coloring) has no known polynomial time algorithm. It’s what’s called NP-hard, which means that almost everyone believes it’s hopeless to solve efficiently in the worst case.

One might think that there’s some sort of gradient to this problem, that as the graphs get more “complicated” it becomes algorithmically harder to figure out how colorable they are. There are some notions of “simplicity” and “complexity” for graphs, but they hardly fall on a gradient. Just to give the reader an idea, here are some ways to make graph coloring easy:

  • Make sure your graph is planar. Then deciding 4-colorability is easy because the answer is always yes.
  • Make sure your graph is triangle-free and planar. Then finding a 3-coloring is easy.
  • Make sure your graph is perfect (which again requires knowledge about how colorable it is).
  • Make sure your graph has tree-width or clique-width bounded by a constant.
  • Make sure your graph doesn’t have a certain kind of induced subgraph (such as having no induced paths of length 4 or 5).

Let me emphasize that these results are very difficult and tricky to compare. The properties are inherently discrete (either perfect or imperfect, planar or not planar). The fact that the world has not yet agreed upon a universal measure of complexity for graphs (or at least one that makes graph coloring easy to understand) is not a criticism of the chef but a testament to the challenge and intrigue of the dish.

Coloring general graphs is much bleaker, where the focus has turned to approximations. You can’t “approximate” the answer to whether a graph is colorable, so now the key here is that we are actually trying to find an approximate coloring. In particular, if you’re given some graph G and you don’t know the minimum number of colors needed to color it (say it’s \chi(G), this is called the chromatic number), can you easily color it with what turns out to be, say, 2 \chi(G) colors?

Garey and Johnson (the gods of NP-hardness) proved this problem is again hard. In fact, they proved that unless P=NP, no efficient algorithm can guarantee using fewer than twice the optimal number of colors. This might not seem so bad in practice, but the story gets worse. This lower bound was improved by Zuckerman, building on the work of Håstad, to depend on the size of the graph! That is, unless P=NP, all efficient algorithms will use asymptotically more than \chi(G) n^{1 - \varepsilon} colors for any \varepsilon > 0 in the worst case, where n is the number of vertices of G. So the best you can hope for is being off by something like a multiplicative factor of n / \log n. You can actually achieve this (it’s nontrivial and takes a lot of work), but it carries that aura of pity for the hopeful graph colorer.

The next avenue is to assume you know the chromatic number of your graph, and see how well you can do then. For example: if you are given the promise that a graph G is 3-colorable, can you efficiently find a coloring with 8 colors? The best would be if you could find a coloring with 4 colors, but this is already known to be NP-hard.

The best upper bounds, algorithms to find approximate colorings of 3-colorable graphs, also pitifully depend on the size of the graph. Remember I say pitiful not to insult the researchers! This decades-long line of work was extremely difficult and deserves the highest praise. It’s just frustrating that the best known algorithm to color a 3-colorable graph requires up to n^{0.2} colors. At least it bypasses the barrier of n^{1 - \varepsilon} mentioned above, so we know that knowing the chromatic number actually does help.

The lower bounds are a bit more hopeful; it’s known to be NP-hard to color a k-colorable graph using 2^{\sqrt[3]{k}} colors if k is sufficiently large. There are a handful of other linear lower bounds that work for all k \geq 3, but to my knowledge this is the best asymptotic result. The big open problem (which I doubt many people have their eye on considering how hard it seems) is to find an upper bound depending only on k. I wonder offhand whether a ridiculous bound like k^{k^k} colors would be considered progress, and I bet it would.

Our Idea: Resilience

So without big breakthroughs on the front of approximate graph coloring, we propose a new front for investigation. The idea is that we consider graphs which are not only colorable, but remain colorable under the adversarial operation of adding a few new edges. More formally,

Definition: A graph G = (V,E) is called r-resiliently k-colorable if two properties hold

  1. G is k-colorable.
  2. For any set E' of r edges disjoint from E, the graph G' = (V, E \cup E') is k-colorable.

The simplest nontrivial example of this is 1-resiliently 3-colorable graphs. That is, a graph that is 3-colorable and remains 3-colorable no matter which new edge you add. And the question we ask of this example: is there a polynomial time algorithm to 3-color a 1-resiliently 3-colorable graph? We prove in our paper that this is actually NP-hard, but it’s not a trivial thing to see.

The chief benefit of thinking about resiliently colorable graphs is that it provides a clear gradient of complexity from general graphs (zero-resilient) to the empty graph (which is (\binom{k+1}{2} - 1)-resiliently k-colorable). We know that the most complex case is NP-hard, and maximally resilient graphs are trivially colorable. So finding the boundary where resilience makes things easy can shed new light on graph coloring.

Indeed, we argue in the paper that lots of important graphs have stronger resilience properties than one might expect. For example, here are the resilience properties of some famous graphs.

From left to right: the Petersen graph, 2-resiliently 3-colorable; the Dürer graph, 4-resiliently 4-colorable; the Grötzsch graph, 4-resiliently 4-colorable; and the Chvátal graph, 3-resiliently 4-colorable. These are all maximally resilient (no graph is more resilient than stated) and chromatic (no graph is colorable with fewer colors).


If I were of a mind to do applied graph theory, I would love to know about the resilience properties of graphs that occur in the wild. For example, the reader probably knows the problem of register allocation is a natural graph coloring problem. I would love to know the resilience properties of such graphs, with the dream that they might be resilient enough on average to admit efficient coloring algorithms.

Unfortunately the only way that I know how to compute resilience properties is via brute-force search, and of course this only works for small graphs and small k. If readers are interested I could post such a program (I wrote it in vanilla python), but for now I’ll just post a table I computed on the proportion of small graphs that have various levels of resilience (note this includes graphs that vacuously satisfy the definition).
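To make the brute-force idea concrete, here is a minimal sketch of such a checker in Python (this is my own illustrative version, not the program used for the tables; the function names and edge-list representation are assumptions). It only works for small graphs and small k, since it enumerates all colorings and all ways to add r edges.

```python
from itertools import combinations, product

def is_colorable(n, edges, k):
    """Brute force: try every assignment of k colors to vertices 0..n-1."""
    return any(all(c[u] != c[v] for (u, v) in edges)
               for c in product(range(k), repeat=n))

def is_resilient(n, edges, k, r):
    """Check whether the graph is r-resiliently k-colorable: it must be
    k-colorable, and stay k-colorable after adding any r missing edges."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [e for e in combinations(range(n), 2)
                 if frozenset(e) not in edge_set]
    return (is_colorable(n, edges, k) and
            all(is_colorable(n, edges + list(extra), k)
                for extra in combinations(non_edges, r)))

# A 4-cycle: adding any chord creates an odd cycle, so it is 2-colorable
# but not 1-resiliently 2-colorable; with 3 colors it tolerates one chord.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_resilient(4, c4, 2, 1))  # False
print(is_resilient(4, c4, 3, 1))  # True
```

Note that when a graph has fewer than r missing edges, the inner `all` is vacuously true, matching the "vacuously satisfy" caveat above.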

Percentage of k-colorable graphs on 6 vertices which are r-resilient
k\r       1       2       3       4
  ----------------------------------------
3       58.0    22.7     5.9     1.7
4       93.3    79.3    58.0    35.3
5       99.4    98.1    94.8    89.0
6      100.0   100.0   100.0   100.0

Percentage of k-colorable graphs on 7 vertices which are r-resilient
k\r       1       2       3       4
  ----------------------------------------
3       38.1     8.2     1.2     0.3
4       86.7    62.6    35.0    14.9
5       98.7    95.6    88.5    76.2
6       99.9    99.7    99.2    98.3

Percentage of k-colorable graphs on 8 vertices which are r-resilient
k\r       1       2       3       4
  ----------------------------------------
3       21.3     2.1     0.2     0.0
4       77.6    44.2    17.0     4.5

The idea is this: if this trend continues, that only some small fraction of all 3-colorable graphs are, say, 2-resiliently 3-colorable graphs, then it should be easy to color them. Why? Because resilience imposes structure on the graphs, and that structure can hopefully be realized in a way that allows us to color easily. We don’t know how to characterize that structure yet, but we can give some structural implications for sufficiently resilient graphs.

For example, a 7-resiliently 5-colorable graph can’t have any subgraphs on 6 vertices with \binom{6}{2} - 7 edges, or else we can add enough edges to get a 6-clique, which isn’t 5-colorable. This gives an obvious general property about the sizes of subgraphs in resilient graphs, but as a more concrete instance let’s think about 2-resiliently 3-colorable graphs G. This property says that no set of 4 vertices may have more than 4 = \binom{4}{2} - 2 edges in G. This rules out 4-cycles and non-isolated triangles, but is it enough to make 3-coloring easy? We can say that G is a triangle-free graph plus a bunch of disjoint triangles, but it’s known that 3-coloring triangle-free graphs is NP-hard in general, so this structure alone doesn’t obviously help. Moreover, 2-resilience isn’t enough to make G planar. It’s not hard to construct a non-planar counterexample, but proving it’s 2-resilient is a tedious task I relegated to my computer.

Speaking of which, the problem of how to determine whether a k-colorable graph is r-resiliently k-colorable is open. Is this problem even in NP? It certainly seems not to be, but if it had a nice characterization or even stronger necessary conditions than above, we might be able to use them to find efficient coloring algorithms.

In our paper we begin to fill in a table whose completion would characterize the NP-hardness of coloring resilient graphs

table

The known complexity of k-coloring r-resiliently k-colorable graphs

Ignoring the technical notion of 2-to-1 hardness (it’s technical), the paper accomplishes this as follows. First, we prove some relationships between cells. In particular, if a cell is NP-hard then so are all the cells to the left and below it. So our Theorem 1, that 3-coloring 1-resiliently 3-colorable graphs is NP-hard, gives us the entire black region, though more trivial arguments give all except the (3,1) cell. Also, if a cell is in P (it’s easy to k-color graphs with that resilience), then so are all cells above and to its right. We prove that k-coloring \binom{k}{2}-resiliently k-colorable graphs is easy. This is trivial: no vertex may have degree greater than k-1, and the greedy algorithm can color such graphs with k colors. So that gives us the entire light gray region.
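The greedy argument for the light gray region is easy to sketch in code: since no vertex of a \binom{k}{2}-resiliently k-colorable graph has degree greater than k-1, some color among the k is always unused by a vertex’s neighbors. A minimal sketch (the function name and adjacency-dict representation are my own, not from the paper):

```python
def greedy_color(adj, k):
    """Greedy k-coloring. If every vertex has degree at most k-1, some
    color in range(k) is always unused among already-colored neighbors,
    so the algorithm never fails to find a proper coloring."""
    color = {}
    for v in adj:
        used = {color[w] for w in adj[v] if w in color}
        color[v] = next(c for c in range(k) if c not in used)
    return color

# A 5-cycle has maximum degree 2, so greedy with 3 colors always succeeds.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_color(c5, 3)
```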

One additional lower bound comes from the fact that it’s NP-hard to 2^{\sqrt[3]{k}}-color a k-colorable graph. In particular, we prove that if you have any function f(k) that makes it NP-hard to f(k)-color a k-colorable graph, then it is NP-hard to f(k)-color an (f(k) - k)-resiliently f(k)-colorable graph. The exponential lower bound hence gives us a nice linear lower bound, and so we have the following “sufficiently zoomed out” picture of the table.

zoomed-out

The zoomed out version of the classification table above.

The paper contains the details of how these observations are proved, in addition to the NP-hardness proof for 1-resiliently 3-colorable graphs. This leaves the following open problems:

  • Get an unconditional, concrete linear resilience lower bound for hardness.
  • Find an algorithm that colors graphs that are less resilient than O(k^2). Even determining specific cells like (4,5) or (5,9) would likely give enough insight for this.
  • Classify the tantalizing (3,2) cell (determine if it’s hard or easy to 3-color a 2-resiliently 3-colorable graph) or even better the (4,2) cell.
  • Find a way to relate resilient coloring back to general coloring. For example, if such and such cell is hard, then you can’t approximate k-coloring to within so many colors.

But Wait, There’s More!

Though this paper focuses on graph coloring, our idea of resilience doesn’t stop there (and this is one reason I like it so much!). One can imagine a notion of resilience for almost any combinatorial problem. If you’re trying to satisfy boolean formulas, you can define resilience to mean that you fix the truth value of some variable (we do this in the paper to build up to our main NP-hardness result of 3-coloring 1-resiliently 3-colorable graphs). You can define resilient set cover to allow the removal of some sets. And any other sort of graph-based problem (Traveling salesman, max cut, etc) can be resiliencified by adding or removing edges, whichever makes the problem more constrained.

So this resilience notion is quite general, though it’s hard to define precisely in a general fashion. There is a general framework called Constraint Satisfaction Problems (CSPs), but resilience here seems too general. A CSP is literally just a bunch of objects which can be assigned some set of values, and a set of constraints (k-ary 0-1-valued functions) that need to all be true for the problem to succeed. If we were to define resilience by “adding any constraint” to a given CSP, then there’s nothing to stop us from adding the negation of an existing constraint (or even the tautologically unsatisfiable constraint!). This kind of resilience would be a vacuous definition, and even if we try to rule out these edge cases, I can imagine plenty of weird things that might happen in their stead. That doesn’t mean there isn’t a nice way to generalize resilience to CSPs, but it would probably involve some sort of “constraint class” of acceptable constraints, and I don’t know a reasonable property to impose on the constraint class to make things work.

So there’s lots of room for future work here. It’s exciting to think where it will take me.

Until then!


Elliptic Curves as Elementary Equations

Finding solutions to systems of polynomial equations is one of the oldest and deepest problems in all of mathematics. This is broadly the domain of algebraic geometry, and mathematicians wield some of the most sophisticated and abstract tools available to attack these problems.

The elliptic curve straddles the elementary and advanced mathematical worlds in an interesting way. On one hand, it’s easy to describe in elementary terms: it’s the set of solutions to a cubic function of two variables. But despite how simple they seem, deep theorems govern their behavior, and many natural questions about elliptic curves are still wide open. Since elliptic curves provide us with some of the strongest and most widely used encryption protocols, understanding elliptic curves more deeply would give insight into the security (or potential insecurity) of these protocols.

Our first goal in this series is to treat elliptic curves as mathematical objects, and derive the elliptic curve group as the primary object of study. We’ll see what “group” means next time, and afterward we’ll survey some of the vast landscape of unanswered questions. But this post will be entirely elementary, and will gently lead into the natural definition of the group structure on an elliptic curve.

Elliptic Curves as Equations

The simplest way to describe an elliptic curve is as the set of all solutions to a specific kind of polynomial equation in two real variables, x,y. Specifically, the equation has the form:

\displaystyle y^2 = x^3 + ax + b

Where a,b are real numbers such that

\displaystyle -16(4a^3 + 27b^2) \neq 0

One would naturally ask, “Who the hell came up with that?” A thorough answer requires a convoluted trip through 19th and 20th-century mathematical history, but it turns out that this is a clever form of a very natural family of equations. We’ll elaborate on this in another post, but for now we can give an elementary motivation.

Say you have a pyramid of spheres whose layers are squares, like the one below

pyramid-spheres

We might wonder when it’s the case that we can rearrange these spheres into a single square. Clearly you can do it for a pyramid of height 1 because a single ball is also a 1×1 square (and one of height zero if you allow a 0×0 square). But are there any others?

This question turns out to be a question about an elliptic curve. First, recall that the number of spheres in such a pyramid is given by

\displaystyle 1 + 4 + 9 + 16 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}

And so we’re asking if there are any positive integers y such that

\displaystyle y^2 = \frac{x(x+1)(2x+1)}{6}

Here is a graph of this equation in the plane. As you admire it, though, remember that we’re chiefly interested in integer solutions.

pyramid-ec

The equation doesn’t quite have the special form we mentioned above, but the reader can rest assured (and we’ll prove it later) that one can transform our equation into that form without changing the set of solutions. In the meantime let’s focus on the question: are there any integer-valued points on this curve besides (0,0) and (1,1)? The method we use to answer this question comes from ancient Greece, and is due to Diophantus. The idea is that we can use the two points we already have to construct a third point. This method is important because it forms the basis for our entire study of elliptic curves.

Take the line passing through (0,0) and (1,1), given by the equation y = x, and compute the intersection of this line and the original elliptic curve. The “intersection” simply means to solve both equations simultaneously. In this case it’s

\begin{aligned} y^2 &= \frac{x(x+1)(2x+1)}{6} \\ y &= x \end{aligned}

It’s clear what to do: just substitute the latter in for the former. That is, solve

\displaystyle x^2 = \frac{x(x+1)(2x+1)}{6}

Rearranging this into a single polynomial and multiplying through by 3 gives

\displaystyle x^3 - \frac{3x^2}{2} + \frac{x}{2} = 0

Factoring cubics happens to be easy, but let’s instead use a different trick that will come up again later. Let’s use a fact that is taught in elementary algebra and precalculus courses and promptly forgotten, that the sum of the roots of any polynomial is \frac{-a_{n-1}}{a_n}, where a_{n} is the leading coefficient and a_{n-1} is the next coefficient. Here a_n = 1, so the sum of the roots is 3/2. This is useful because we already know two roots, namely the solutions 0 and 1 we used to define the system of equations in the first place. So the third root satisfies

\displaystyle r + 0 + 1 = \frac{3}{2}

And it’s r = 1/2, giving the point (1/2, 1/2) since the line was y=x. Because of the symmetry of the curve, we also get the point (1/2, -1/2).

Here’s a zoomed-in picture of what we just did to our elliptic curve. We used the two pink points (which gave us the dashed line) to find the purple point.

line-intersection-example

The bad news is that these two new points don’t have integer coordinates. So it doesn’t answer our question. The good news is that now we have more points! So we can try this trick again to see if it will give us still more points, and hope to find some that are integer valued. (It sounds like a hopeless goal, but just hold out a bit longer). If we try this trick again using (1/2, -1/2) and (1,1), we get the equation

\displaystyle (3x - 2)^2 = \frac{x(x+1)(2x+1)}{6}

And redoing all the algebraic steps we did before gives us the solution x=24, y=70. In other words, we just proved that

\displaystyle 1^2 + 2^2 + \dots + 24^2 = 70^2
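The chord-and-sum-of-roots trick above can be carried out with exact rational arithmetic. Here is a sketch (the helper names are mine) that recovers both (1/2, 1/2) and (24, 70): substituting y = m(x - x_1) + y_1 into the curve gives a cubic with leading coefficient 1/3 and x^2 coefficient 1/2 - m^2, so the three roots sum to 3m^2 - 3/2.

```python
from fractions import Fraction as F

def curve(x):
    # Right-hand side of y^2 = x(x+1)(2x+1)/6
    return x * (x + 1) * (2 * x + 1) / 6

def chord_third_point(p1, p2):
    """Given two rational points on y^2 = x(x+1)(2x+1)/6, find the third
    intersection of their chord with the curve via the sum of the roots."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)        # slope of the chord
    # Roots of the resulting cubic sum to 3m^2 - 3/2; two are x1 and x2.
    x3 = 3 * m * m - F(3, 2) - x1 - x2
    y3 = m * (x3 - x1) + y1
    assert y3 * y3 == curve(x3)      # sanity check: the point lies on the curve
    return (x3, y3)

p = chord_third_point((F(0), F(0)), (F(1), F(1)))
print(p)  # (Fraction(1, 2), Fraction(1, 2))
q = chord_third_point((F(1, 2), F(-1, 2)), (F(1), F(1)))
print(q)  # (Fraction(24, 1), Fraction(70, 1))
```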

Great! Here’s another picture showing what we just did.

line-intersection-example-2

In reality we don’t care about this little puzzle. Its solution might be a fun distraction (and even more distracting: try to prove there aren’t any other integer solutions), but it’s not the real treasure. The mathematical gem is the method of finding the solution. We can ask the natural question: if you have two points on an elliptic curve, and you take the line between those two points, will you always get a third point on the curve?

Certainly the answer is no. See this example of two points whose line is vertical.

vertical-line

But with some mathematical elbow grease, we can actually force it to work! That is, we can define things just right so that the line between any two points on an elliptic curve will always give you another point on the curve. This sounds like mysterious black magic, but it lights the way down a long mathematical corridor of new ideas, and is required to make sense of using elliptic curves for cryptography.

Shapes of Elliptic Curves

Before we continue, let’s take a little detour to get a good feel for the shapes of elliptic curves. We have defined elliptic curves by a special kind of equation (we’ll give it a name in a future post). During most of our study we won’t be able to make any geometric sense of these equations. But for now, we can pretend that we’re working over real numbers and graph these equations in the plane.

Elliptic curves in the form y^2 = x^3 + ax + b have a small handful of different shapes that we can see as a,b vary:

ECdynamics1ECdynamics2

The problem is when we cross the point at which the rounded part pinches off in the first animation, and the circular component appears in the second. At those precise moments, the curve becomes “non-smooth” (or singular), and for reasons we’ll see later this is bad. The condition from the beginning of the article (that -16(4a^3 + 27b^2) \neq 0) ensures that these two cases are excluded from consideration, and it’s one crucial part of our “elbow grease” to ensure that lines behave nicely.
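The smoothness condition is trivial to test; a minimal sketch (the function name is mine):

```python
def is_smooth(a, b):
    """An elliptic curve y^2 = x^3 + ax + b is non-singular exactly when
    the discriminant -16(4a^3 + 27b^2) is nonzero."""
    return -16 * (4 * a**3 + 27 * b**2) != 0

print(is_smooth(-1, 1))  # True: the canonical example below is smooth
print(is_smooth(0, 0))   # False: y^2 = x^3 has a cusp at the origin
print(is_smooth(-3, 2))  # False: y^2 = x^3 - 3x + 2 has a self-intersection
```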

The “canonical” shape of the elliptic curve is given by the specific example y^2 = x^3 - x + 1. It’s the example that should pop up whenever you imagine an elliptic curve, and it’s the example we’ll use for all of our pictures.

canonical-EC

So in the next post we’ll roll up our sleeves and see exactly how “drawing lines” can be turned into an algebraic structure on an elliptic curve.

Until then!

Anti-Coordination Games and Stable Graph Colorings

My First Paper

I’m pleased to announce that my first paper, titled “Anti-Coordination Games and Stable Colorings,” has been accepted for publication! The venue is the Symposium on Algorithmic Game Theory, which will take place in Aachen, Germany this October. A professor of mine once told me that everyone puts their first few publications on a pedestal, so I’ll do my best to keep things down to earth by focusing on the contents of the paper and not my swirling cocktail of pride. The point of this post is to explain the results of my work to a non-research-level audience; the research level folks will likely feel more comfortable reading the paper itself. So here we’ll spend significantly longer explaining the proofs and the concepts, and significantly less time on previous work.

I will assume familiarity with basic graph theory (we have a gentle introduction to that here) and NP-completeness proofs (again, see our primer). We’ll give a quick reminder about the latter when we get to it.

Anti-Coordination Games on Graphs

The central question in the paper is how to find stable strategy profiles for anti-coordination games played on graphs. This section will flesh out exactly what all of that means.

The easiest way to understand the game is in terms of fashion. Imagine there is a group of people. Every day they choose their outfits individually and interact with their friends. If any pair of friends happens to choose the same clothing, then they both suffer some embarrassment. We can alternatively say that whenever two friends anti-coordinate their outfits, they each get some kind of reward. If not being embarrassed is your kind of reward, then these really are equivalent. Not every pair of people are friends, so perhaps the most important aspect of this problem is how the particular friendship network considered affects their interactions. This kind of game is called an anti-coordination game, and the network of friends makes it a “game on a graph.” We’ll make this more rigorous shortly.

We can ask questions like, if everyone is acting independently and greedily will their choices converge over time to a single choice of outfit? If so how quickly? How much better could a centralized fashion-planner who knows the entire friendship network fare in choosing outfits? Is the problem of finding a best strategy for picking outfits computationally hard? What if some pairs of people want to coordinate their outfits and others don’t? What if caring about another’s fashion is only one-sided in some cases?

Already this problem is rooted in the theory of social networks, but the concept of an anti-coordination game played on a graph is quite broad, and the relevance of this model to the real world comes from the generality of a graph. For example, one may consider the trading networks of various countries; in this case not all countries are trading partners, and it is beneficial to produce different commodities than your trading partners so that you actually benefit from the interaction. Likewise, neighboring radio towers want to emit signals on differing wavelengths to minimize interference, and commuters want to pick different roadways to minimize traffic. These are all examples of this model which we’re about to formalize.

In place of our “network of friends,” the game entails a graph G = (V,E) in which each player is represented by a vertex, and there is an edge between two vertices whenever the corresponding players are trying to anti-coordinate. We will use the terms player and vertex interchangeably. For now the graph is undirected, but later we will relax this assumption and work with directed graphs. In place of “outfits” we’ll have a generic set of strategies denoted by the numbers 1, \dots, k, and each vertex will choose a strategy from this set. In one round of the game, each vertex v chooses a strategy, and this defines a function f : V \to \left \{ 1, \dots, k \right \} from the set of vertices to the set of strategies. Then the payoff of a vertex v in a round, which we denote \mu_f(v), is the number of neighbors of v which have chosen a different strategy than v. That is, it is

\displaystyle \mu_f(v) = \sum_{(v,w) \in E} \mathbf{1}_{\left \{ f(v) \neq f(w) \right \}}

Where \mathbf{1}_{A} denotes the indicator function for the event A, which assumes a value of 1 when the event occurs and 0 otherwise. Here is an example of an instance of the game. We have three strategies, denoted by colors, and the payoff for the vertex labeled v is three.

game-example
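The payoff \mu_f(v) is straightforward to compute from the definition; here is a minimal sketch (the dict-based representation of the graph and strategy function is my own):

```python
def payoff(adj, f, v):
    """mu_f(v): the number of neighbors of v choosing a different
    strategy than v. `adj` maps vertices to neighbor lists, and `f`
    maps vertices to strategies."""
    return sum(1 for w in adj[v] if f[v] != f[w])

# A star with center 0: the center anti-coordinates with one of its
# two neighbors, so its payoff is 1.
adj = {0: [1, 2], 1: [0], 2: [0]}
f = {0: "red", 1: "blue", 2: "red"}
print(payoff(adj, f, 0))  # 1
```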

If this game is played over many many rounds, we can ask if there is a so-called Nash equilibrium. That is, is there a choice of strategies for the players so that no single player will have an incentive to change in future rounds, assuming nobody else changes? We restrict even further to thinking about pure strategy Nash equilibria, which means there are no probabilistic choices made in choosing a strategy. Equivalently, a pure strategy equilibrium is just a choice of a strategy for each vertex which doesn’t change across rounds. In terms of the graph, we call a strategy function f which is a Nash equilibrium a stable equilibrium (or, as will be made clear in the next paragraph, a stable coloring). It must satisfy the property that no vertex can increase its payoff by switching to a different strategy. So our question now becomes: how can we find a stable coloring which is as good as possible for all players involved? Slightly more generally, we call a Nash equilibrium a strictly stable equilibrium (or a strictly stable coloring) if every vertex would strictly decrease its payoff by switching to another strategy. As opposed to a plain old stable coloring, where a player could keep the same payoff by switching strategies, here any attempt to switch necessarily yields a worse payoff. Though it’s not at all clear now, we will see that this distinction is the difference between computational tractability and infeasibility.

We can see a very clear connection between this game and graph coloring. Here an edge produces a payoff of 1 for each of its two vertices if and only if it’s properly colored. And so if the strategy choice function f is also a proper coloring, this will produce the largest possible payoff for all vertices in the graph. But it may not be the case that (for a fixed set of strategies) the graph is properly colorable, and we already know that finding a proper coloring with more than two colors is a computationally hard problem. So this isn’t a viable avenue for solving our fashion game. In any case, the connection is close enough that we interchangeably call the strategy choice function f a coloring of G.

As an interesting side note, a slight variation of this game was actually tested on humans (with money as payoff!) to see how well they could do. Each human player was only shown the strategies of their neighbors, and received $5 for every round in which they collectively arrived at a proper coloring of the graph. See this article in Science for more details.

Since our game allows for the presence of improperly colored edges, we could instead propose to find an assignment of colors to vertices which maximizes the sum of the payoffs of all players. In this vein, we define the social welfare of a graph and a coloring, denoted W(G,f), to be the sum of the payoffs for all vertices \sum_v \mu_f(v). This is a natural quantity one wants to analyze. Unfortunately, even in the case of two strategies, this quantity is computationally difficult (NP-hard) to maximize. It’s a version of the MAX-CUT problem, in which we try to separate the graph into two sets X, Y such that the largest number of edges crosses from X to Y. The correspondence between the two problems is seen by having X represent those vertices which get strategy 1 and Y represent strategy 2.

So we can’t hope to find an efficient algorithm maximizing social welfare. The next natural question is: can we find stable or strictly stable colorings at all? Will they even necessarily exist? The answers to these questions form the main results of our paper.

An Algorithm for Stable Colorings, and the Price of Anarchy

It turns out that there is a very simple greedy algorithm for finding stable colorings of a graph. We state it in the form of a proposition. By stable k-coloring we mean a stable coloring of the graph with k colors (strategies).

Proposition: For every graph G and every k \geq 1, G admits a stable k-coloring, and such a coloring can be found in polynomial time.

Proof. The proof operates by using the social welfare function as a so-called potential function. That is, a change in a player’s strategy which results in a higher payoff results in a higher value of the social welfare function. It is easy to see why: if a player v changes to a color that helps him, then it will result in more properly colored edges (adjacent to v) than there were before. This means that more of v‘s neighbors receive an additional 1 unit of payoff than those that lost 1 as a result of v‘s switch. We call a vertex which has the potential to improve its payoff unhappy, and one which cannot improve its payoff happy.

And so our algorithm to find a stable coloring simply finds some unhappy vertex, switches its color to the least common color among its neighbors, and repeats the process until all vertices are happy. Each switch increases the social welfare by at least one, and the welfare can never exceed 2|E|, so the process terminates after polynomially many switches. The result is a local maximum of the social welfare function, and that is the very definition of a stable coloring.

\square
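Here is a minimal sketch of this greedy best-response procedure in Python. The representation of the graph as an adjacency list and all names are my own choices for illustration, not code from the paper:

```python
import random

def stable_coloring(adj, k):
    """Greedily find a stable k-coloring of a graph given as an
    adjacency list {vertex: [neighbors]}."""
    coloring = {v: random.randrange(k) for v in adj}

    def payoff(v, color):
        # A vertex's payoff is its number of neighbors with a different color.
        return sum(1 for u in adj[v] if coloring[u] != color)

    changed = True
    while changed:
        changed = False
        for v in adj:
            # The least common color among v's neighbors maximizes v's payoff.
            best = max(range(k), key=lambda c: payoff(v, c))
            if payoff(v, best) > payoff(v, coloring[v]):
                coloring[v] = best  # social welfare strictly increases
                changed = True
    return coloring
```

Since the social welfare is a bounded integer-valued potential function, the while loop is guaranteed to terminate.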

So that was nice, but we might ask: how much worse is the social welfare arrived at by this algorithm than the optimal social welfare? How much do we stand to lose by accepting the condemnation of NP-hardness and settling for the greedy solution we found? More precisely, if we call Q the set of stable colorings and C the set of all possible colorings, what is the value of

\displaystyle \frac{\max_{c' \in C} W(G, c')}{\min_{c \in Q} W(G, c)}

This is a well-studied quantity in many games, called the price of anarchy. The name comes from the thought: what do we stand to gain by having a central authority, who can see the entire network topology and decide what is best for us, manage our strategies? The alternative (anarchy) is to have each player in the game act as selfishly and rationally as possible without complete information. For our game the price of anarchy depends on the choice of graph, but we can bound it tightly, and the bound tends to 1 as the number of strategies grows large; asymptotically, there is no price of anarchy. We formally state the result as a proposition:

Proposition: For any graph, the price of anarchy for the k-strategy anti-coordination game is at most k/(k-1), and this value is actually achieved by some instances of the game.

Proof. The pigeonhole principle says that every vertex can always achieve at least a (k-1)/k fraction of its maximum possible payoff. Specifically, among the d_i neighbors of a vertex v_i, some color occurs at most d_i/k times, so choosing a least frequently occurring color gives v_i a payoff of at least d_i(k-1)/k. Since every stable coloring has to satisfy the condition that no vertex can do better than the strategy it already has, even in the worst stable coloring every vertex has already chosen such a minority color. Since the maximum possible social welfare is twice the number of edges, 2|E|, and the sum of the degrees is \sum_i d_i = 2|E|, the price of anarchy is at most

\displaystyle \frac{2|E|}{\frac{k-1}{k} \sum_i d_i} = \frac{k}{k-1}

Indeed, we can’t do any better than this in general, because the following graph gives an example where the price of anarchy exactly meets this bound.

An instance of the anti-coordination game with 5 strategies which meets the upper bound on price of anarchy.


This example can easily be generalized to work with arbitrary k. We leave the details as an exercise to the reader.

\square
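To make the definitions concrete, here is a brute-force check of the proposition on a tiny graph. All names are my own, and this only works for very small instances since it enumerates every coloring:

```python
from itertools import product

def price_of_anarchy(edges, n, k):
    """Max welfare over all colorings divided by min welfare over
    stable colorings, computed exhaustively (tiny graphs only)."""
    adj = {v: [] for v in range(n)}
    for (u, v) in edges:
        adj[u].append(v)
        adj[v].append(u)

    def payoff(c, v, color):
        return sum(1 for u in adj[v] if c[u] != color)

    def welfare(c):
        return sum(payoff(c, v, c[v]) for v in range(n))

    def stable(c):
        # No vertex can strictly improve by switching to any color.
        return all(payoff(c, v, c[v]) >= payoff(c, v, col)
                   for v in range(n) for col in range(k))

    colorings = list(product(range(k), repeat=n))
    best = max(welfare(c) for c in colorings)
    worst_stable = min(welfare(c) for c in colorings if stable(c))
    return best / worst_stable
```

On the triangle with two colors every stable coloring is already optimal, so the ratio is 1; the proposition only promises the ratio never exceeds k/(k-1).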

Strictly Stable Colorings are Hard to Find

Perhaps surprisingly, the relatively minor change of requiring strictness is enough to make the problem intractable. We’ll give an explicit proof of this, but first let’s recall what it means to be intractable.

Recall that a problem is in NP if there is an efficient (read, polynomial-time) algorithm which can verify a solution to the problem is actually a solution. For example, the problem of proper graph k-coloring is in NP, because if someone gives you a purported coloring all you have to do is verify that each of the O(n^2) edges are properly colored. Similarly, the problem of strictly stable coloring is in NP; all one need do is verify that no choice of a different color for any vertex improves its payoff, and it is trivial to come up with an algorithm which checks this.

Next, call a problem A NP-hard if a solution to A allows you to solve any problem in NP. More formally, A being NP-hard means that for any problem B in NP there is a polynomial-time reduction from B to A in the following (rough) sense: there is a polynomial-time computable function (i.e. deterministic program) f which takes inputs for B and transforms them into inputs for A such that:

w is a solvable instance of B if and only if f(w) is a solvable instance of A.

This is not a completely formal definition (see this primer on NP-completeness for a more serious treatment), but it’s good enough for this post. In order to prove a problem C is NP-hard, all you need to do is come up with a polynomial-time reduction from a known NP-hard problem A to your new problem C. Then for any problem B in NP, the reduction from B to A can be composed with the reduction from A to C, proving that C is NP-hard.

Finally, we call a problem NP-complete if it is both in NP and NP-hard. One natural question to ask is: if we don’t already know of any NP-hard problems, how can we prove anything is NP-hard? The answer is: it’s very hard, but it was done once and we don’t need to do it again (but if you really want to, see these notes). As a result, we have generated a huge list of problems that are NP-complete, and unless P = NP none of these problems has a polynomial-time algorithm. We need two examples of NP-hard problems for this paper: graph coloring, and boolean satisfiability. Since we assume the reader is familiar with the former, we recall the latter.

Given a set of variables x_i, we can form a boolean formula over those variables of the form \varphi = C_1 \wedge C_2 \wedge \dots \wedge C_m where each clause C_i is a disjunction of three literals (negated or unnegated variables). For example, C_i = (x_2 \vee \bar{x_5} \vee \bar{x_9}) might be one clause. Here each x_i takes the value true or false, the horizontal bar denotes negation, the wedge \wedge means “and,” and the vee \vee means “or.” We call this particular form conjunctive normal form. A formula \varphi is called satisfiable if there is a choice of true and false assignments to the variables which makes the entire logical formula true. The problem of determining whether there is any satisfying assignment of such a formula, called 3-SAT, is NP-hard.
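As a concrete (if hopelessly inefficient) illustration of the problem, here is a brute-force satisfiability checker. The encoding of a literal as a (variable index, is_negated) pair is my own choice, not a standard:

```python
from itertools import product

def satisfiable(clauses, num_vars):
    """Brute-force SAT check: a clause is a list of literals, each
    literal a pair (variable index, is_negated). A literal is true
    when the variable's value differs from its negation flag."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] != neg for (v, neg) in clause)
               for clause in clauses):
            return True
    return False
```

For example, the clause (x_2 \vee \bar{x_5} \vee \bar{x_9}) would be written [(2, False), (5, True), (9, True)]. Of course this loop takes 2^n steps in the worst case, which is exactly the behavior NP-hardness leads us to expect.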

Going back to strictly stable equilibria and anti-coordination games, we will prove that the problem of determining whether a graph has a strictly stable coloring with k colors is NP-hard. As a consequence, finding such an equilibrium is NP-hard. Since the problem is also in NP, it is in fact NP-complete.

Theorem: For all k \geq 2, the problem of determining whether a graph G has a strictly stable coloring with k colors is NP-complete.

Proof.  The hardest case is k = 2, but k \geq 3 is a nice warmup to understand how a reduction works, so we start there.

The k \geq 3 part works by reducing from graph coloring. That is, our reduction will take an input to the graph k-coloring problem (a graph G whose k-colorability is in question) and we produce a graph G' such that G is k-colorable if and only if G' has a strictly stable coloring with k colors. Since graph coloring is hard for k \geq 3, this will prove our problem is NP-hard. More specifically, we will construct G' in such a way that the strictly stable colorings also happen to be proper colorings! So finding a strictly stable coloring of G' will immediately give us a proper coloring of G.

The construction of G' is quite straightforward. We start with G' = G, and then for each edge e = (u,v) we add a new subgraph which we call H_e that looks like:

coloring-reduction-H-e

By K_{k-2} we mean the complete graph on k-2 vertices (all possible edges are present), and the vertices u,v are adjacent to all vertices of the H_e = K_{k-2} part. That is, the graph H_e \cup \left \{ u,v \right \} is the complete graph on k vertices. Now if the original graph G was k-colorable, then we can use the same colors for the corresponding vertices in G', and extend to a proper coloring (and hence a strictly stable equilibrium) of all of G'. Indeed, for each H_e we can give each vertex of the K_{k-2} part a distinct color, avoiding the two colors used for u,v, and we’ll have a proper coloring.

On the other hand, if G' has a strictly stable equilibrium, then no edge e which originally came from G can be improperly colored. For if some edge e = (u,v) were improperly colored, then some vertex in the corresponding H_e would not be strictly stable. To see this, notice that among the k vertices of H_e \cup \left \{ u,v \right \} at most k-1 colors are used, so some color is missing, and any vertex can switch to the missing color without hurting its payoff. That is, the coloring might be stable, but it won’t be strictly so. So strictly stable colorings of G' are the same as proper colorings, and restricting one to the subgraph G \subset G' shows G is k-colorable, completing the reduction.
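The construction of G' for k \geq 3 is mechanical enough to sketch in code. This is my own edge-list representation for illustration, not the paper’s implementation:

```python
def add_gadgets(n, edges, k):
    """Build G' from G = (n vertices, edge list): for each original
    edge, attach a fresh K_{k-2} whose vertices are adjacent to both
    endpoints, so gadget + {u, v} forms a complete graph on k vertices."""
    new_edges = list(edges)
    next_vertex = n
    for (u, v) in edges:
        gadget = list(range(next_vertex, next_vertex + (k - 2)))
        next_vertex += k - 2
        # The gadget is complete internally...
        for i in range(len(gadget)):
            for j in range(i + 1, len(gadget)):
                new_edges.append((gadget[i], gadget[j]))
        # ...and every gadget vertex is adjacent to both u and v.
        for w in gadget:
            new_edges.append((u, w))
            new_edges.append((v, w))
    return next_vertex, new_edges
```

For a single edge and k = 4 this produces exactly the complete graph K_4, as the proof requires.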

Well that was a bit of a cheap trick, but it shows the difficulty of working with strictly stable equilibria: preventing vertices from changing with no penalty is hard! What’s worse is that it’s still hard even if there are only two colors. The reduction here is a lot more complicated, so we’ll give a sketch of how it works.

The reduction is from 3-SAT. So given a boolean formula \varphi = C_1 \wedge \dots \wedge C_m we produce a graph G so that \varphi has a satisfying assignment if and only if G has a strictly stable coloring with two colors. The principal part of the reduction is the following gadget which represents the logic inherent in a clause. We pulled the figure directly from our paper, since the caption gives a good explanation of how it works.

gadget-figure-k2

To reiterate, the two “appendages” labeled by x correspond to the literal x, and the choice of colors for these vertices correspond to truth assignments in \varphi. In particular, if the two vertices have the same color, then the literal is assigned true. Of course, we have to ensure that any x‘s showing up in other clause gadgets agree, and any \bar{x}‘s will have opposite truth values. That’s what the following two gadgets do:

negationgadgets

The gadget on the left enforces x’s to have the same truth assignment across gadgets (otherwise the center vertex won’t be in strict equilibrium). The gadget on the right enforces two literals to be opposites.

And if we stitch the clauses together in a particular way (using the two gadgets above) then we will guarantee consistency across all of the literals. All that’s left to check is that the clauses do what they’re supposed to. That is, we need it to be the case that if all of the literals in a clause gadget are “false,” then we can’t complete the coloring to be strictly stable, and otherwise we can. Indeed, the following diagram gives all possible cases of this up to symmetry:

clause-gadget-lemma-proof

The last figure deserves an explanation: if the three literals are all false, then we can pick any color we wish for v_1, and its two remaining neighbors must both have the same color (or else v_1 is not in strict equilibrium). Call this color a, and using the same reasoning call the corresponding colors for v_2 and v_3 b and c, respectively. Now by the pigeonhole principle, either a=b, a=c, or b=c. Suppose without loss of generality that a=b; then the edge labeled (a,b) will have the a part not in strict equilibrium (it will have two neighbors of its same color and only one of the other color). This shows that no strict equilibrium can exist.

The reduction then works by taking a satisfying assignment for the variables, coloring the literals in G appropriately, and extending to a strictly stable equilibrium of all of G. Conversely, if G has a strictly stable coloring, then the literals must be consistent and each clause must be fully colorable, which the above diagram shows is the same as the clauses being satisfiable. So all of \varphi is satisfiable and we’re done (excluding a few additional details we describe in the paper).

\square

Directed Graphs and Cooperation

That was the main result of our paper, but we go on to describe some interesting generalizations. Since this post is getting quite long, we’ll just give a quick overview of the interesting parts.

The rest of the paper is dedicated to directed graphs, where we define the payoff of a directed edge (u,v) to go to the u player if u and v anti-coordinate, but v gets nothing. Here the computational feasibility is even worse than it was in the undirected case, but the structure is noticeably more interesting. For the former, not only is it NP-hard to compute strictly stable colorings, it’s even NP-hard to do so in the non-strict case! One large part of the reason for this is that stable colorings might not even exist: a directed 3-cycle has no stable 2-coloring. We use this fact as a tool in our reductions to prove the following theorem.

Theorem:  For all k \geq 2, determining whether a directed graph has a stable k-coloring is NP-complete.

See section 5 of our paper for a full proof.

To address the interesting structure that arises in the directed case, we observe that we can use a directed graph to simulate the desire of one vertex to actually cooperate with another. To see this for two colors, instead of adding an edge (u,v) we add a proxy vertex u' and directed edges (u,u'), (u',v). To be in equilibrium, the proxy has no choice but to anti-coordinate with v, and so u, by anti-coordinating with its proxy, is pushed to coordinate with v. This can be extended to k colors by using an appropriately (acyclically) directed copy of K_{k-1}.
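A tiny sketch of this phenomenon, using my own ad hoc representation of the directed game: enumerating the stable colorings of the three-vertex proxy gadget confirms that u and v end up coordinated in every equilibrium.

```python
from itertools import product

def directed_payoff(arcs, coloring, v):
    # In the directed game, the arc (a, b) pays a one unit when colors differ.
    return sum(1 for (a, b) in arcs if a == v and coloring[a] != coloring[b])

def equilibria(arcs, vertices, k=2):
    """All stable colorings: no vertex can strictly improve its payoff
    by unilaterally switching colors (brute force, tiny graphs only)."""
    result = []
    for colors in product(range(k), repeat=len(vertices)):
        c = dict(zip(vertices, colors))
        if all(directed_payoff(arcs, c, v) >=
               directed_payoff(arcs, {**c, v: alt}, v)
               for v in vertices for alt in range(k)):
            result.append(c)
    return result

# The gadget: a proxy p sitting between u and v.
arcs = [('u', 'p'), ('p', 'v')]
eqs = equilibria(arcs, ['u', 'p', 'v'])
```

In every equilibrium the proxy anti-coordinates with v, forcing u to match v’s color.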

Thoughts, and Future Work

While the results in this paper are nice, and I’m particularly proud that I came up with a novel NP-hardness reduction, it is unfortunate that they are all hardness results. Because of the ubiquity of NP-hard problems, it’s far more impressive to have an algorithm which actually does something (approximate a good solution, do well under some relaxed assumption, do well in expectation with some randomness) than to prove something is NP-hard. To get an idea of the tone set by researchers, NP-hardness results are often called “negative” results (in the sense that they give a “no” answer to the question of whether there is an efficient algorithm), while finding an algorithm that does something is called a positive result. That being said, the technique of using two separate vertices to represent a single literal in a reduction proof is a nice trick, and I have since used it in another research paper, so I’m happy with my work.

On the positive side, though, there is some interesting work to be done. We could look at varying types of payoff structures, where instead of a binary payoff it is a function of the colors involved (say, |i - j|). Another interesting direction is to consider distributed algorithms (where each player operates independently and in parallel) and see what kinds of approximations of the optimal payoff can be achieved in that setting. Yet another direction favored by a combinatorialist is to generalize the game to hypergraphs, which makes me wonder what type of payoff structure is appropriate (payoff of 1 for a rainbow edge? a non-monochromatic edge?). There is also some more work that can be done in inspecting the relationship between cooperation and anti-cooperation in the directed version. Though I don’t have any immediate open questions about it, it’s a very interesting phenomenon.

In any event, I’m currently scheduled to give three talks about the results in this paper (one at the conference venue in Germany, and two at my department’s seminars). Here’s to starting off my research career!

The Erdős-Rényi Random Graph

During the 1950s the famous mathematicians Paul Erdős and Alfréd Rényi put forth the concept of a random graph, and in the subsequent years of study they transformed the world of combinatorics. The random graph is the perfect example of a good mathematical definition: it’s simple, has surprisingly intricate structure, and yields many applications.

In this post we’ll explore basic facts about random graphs, slowly detail a proof on their applications to graph theory, and explore their more interesting properties computationally (a prelude to proofs about their structure). We assume the reader is familiar with the definition of a graph, which we’ve written about at length for non-mathematical audiences, and has some familiarity with undergraduate-level probability and combinatorics for the more mathematical sections of the post. We’ll do our best to remind the reader of these prerequisites as we go, and we welcome any clarification questions in the comment section.

The Erdős-Rényi Model

The definition of a random graph is simple enough that we need not defer it to the technical section of the article.

Definition: Given a positive integer n and a probability value 0 \leq p \leq 1, define the graph G(n,p) to be the undirected graph on n vertices whose edges are chosen as follows. For each pair of vertices v,w, the edge (v,w) is included independently with probability p.

Of course, there is no single random graph. What we’ve described here is a process for constructing a graph. We create a set of n vertices, and for each possible pair of vertices we flip a coin (often a biased coin) to determine if we should add an edge connecting them. Indeed, every graph can be made by this process if one is sufficiently lucky (or unlucky), but it’s very unlikely that we will have no edges at all if p is large enough. So G(n,p) is really a probability distribution over the set of all possible graphs on n vertices. We commit a horrendous abuse of notation by saying G or G(n,p) is a random graph instead of saying that G is sampled from the distribution. The reader will get used to it in time.

Why Do We Care?

Random graphs of all sorts (not just Erdős’s model) find applications in two very different worlds. The first is pure combinatorics, and the second is in the analysis of networks of all kinds.

In combinatorics we often wonder if graphs exist with certain properties. For instance, in graph theory we have the notion of graph colorability: can we color the vertices of a graph with k colors so that none of its edges are monochromatic? (See this blog’s primer on graph coloring for more) Indeed, coloring is known to be a very difficult problem on general graphs. The problem of determining whether a graph can be colored with a fixed number of colors has no known efficient algorithm; it is NP-complete. Even worse, much of our intuition about graphs fails for graph coloring. We would expect that sparse-looking graphs can be colored with fewer colors than dense graphs. One naive way to measure sparsity of a graph is to measure the length of its shortest cycle (recall that a cycle is a path which starts and ends at the same vertex). This measurement is called the girth of a graph. But Paul Erdős proved using random graphs, as we will momentarily, that for any choice of integers g,k there are graphs of girth \geq g which cannot be colored with fewer than k colors. Preventing short cycles in graphs doesn’t make coloring easier.
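As an aside, the girth can be computed exactly by running a breadth-first search from every vertex: the first non-tree edge encountered closes a shortest cycle through the root. Here is a sketch (my own code, assuming an adjacency-list representation):

```python
from collections import deque
import math

def girth(adj):
    """Length of the shortest cycle in an undirected graph given as
    {vertex: set of neighbors}; math.inf if the graph is acyclic."""
    best = math.inf
    for s in adj:
        dist = {s: 0}
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif parent[u] != w:
                    # A non-tree edge closes a cycle through the root s.
                    best = min(best, dist[u] + dist[w] + 1)
    return best
```

On a 5-cycle this returns 5, and on the complete graph K_4 it returns 3.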

The role that random graphs play in this picture is to give us ways to ensure the existence of graphs with certain properties, even if we don’t know how to construct an example of such a graph. Indeed, for every theorem proved using random graphs, there is a theorem (or open problem) concerning how to algorithmically construct those graphs which are known to exist.

Pure combinatorics may not seem very useful to the real world, but models of random graphs (even those beyond the Erdős-Rényi model) are quite relevant. Here is a simple example. One can take a Facebook user u and form a graph of that user’s network of immediate friends N(u) (excluding u itself), where vertices are people and two people are connected by an edge if they are mutual friends; call this the user’s friendship neighborhood. It turns out that the characteristics of the average Facebook user’s friendship neighborhood resemble a random graph. So understanding random graphs helps us understand the structure of small networks of friends. If we’re particularly insightful, we can do quite useful things like identify anomalies, such as duplicitous accounts, which deviate quite far from the expected model. They can also help discover trends or identify characteristics that can allow for more accurate ad targeting. For more details on how such an idea is translated into mathematics and code, see Modularity (we plan to talk about modularity on this blog in the near future; lots of great linear algebra there!).

Random graphs, when they exhibit observed phenomena, have important philosophical consequences. From a bird’s-eye view, there are two camps of scientists. The first are those who care about leveraging empirically observed phenomena to solve problems. Many statisticians fit into this realm: they do wonderful magic with data fit to certain distributions, but they often don’t know and don’t care whether the data they use truly has their assumed properties. The other camp is those who want to discover generative models for the data with theoretical principles. This is more like theoretical physics, where we invent an arguably computational notion of gravity whose consequences explain our observations.

For applied purposes, the Erdős-Rényi random graph model is in the second camp. In particular, if something fits in the Erdős-Rényi model, then it’s both highly structured (as we will make clear in the sequel) and “essentially random.” Bringing back the example of Facebook, this says that most people in the average user’s immediate friendship neighborhood are essentially the same and essentially random in their friendships among the friends of u. This is not quite correct, but it’s close enough to motivate our investigation of random graph models. Indeed, even Paul Erdős in his landmark paper mentioned that equiprobability among all vertices in a graph is unrealistic. See this survey for a more thorough (and advanced!) overview, and we promise to cover models which better represent social phenomena in the future.

So let’s go ahead and look at a technical proof using random graphs from combinatorics, and then write some programs to generate random graphs.

Girth and Chromatic Number, and Counting Triangles

As a fair warning, this proof has a lot of moving parts. Skip to the next section if you’re eager to see some programs.

Say we have a k and a g, and we wonder whether a graph can exist which simultaneously has no cycles of length less than g (the girth) and needs at least k colors to color. The following theorem settles this question affirmatively.  A bit of terminology: the chromatic number of a graph G, denoted \chi(G), is the smallest number of colors needed to properly color G.

Theorem: For any natural numbers k,g, there exist graphs of chromatic number at least k and girth at least g.

Proof. Taking our cue from random graphs, let’s see what the probability is that a random graph G(n,p) on n vertices will have our desired properties. Or easier, what’s the chance that it will not have the right properties? This is essentially a fancy counting argument, but it’s nicer if we phrase it in the language of probability theory.

The proof has a few twists and turns for those uninitiated to the probabilistic method of proof. First, we will look at an arbitrary G(n,p) (where n,p are variable) and ask two questions about it: what is the expected number of short cycles, and what is the expected “independence number” (which we will see is related to coloring). We’ll then pick a value of p, depending crucially on n, which makes both of these expectations small. Next, we’ll use the fact that if the probability that G(n,p) doesn’t have our properties is strictly less than 1, then there has to be some instance in our probability space which has those properties (if no instance had the property, then the probability would be one!). Though we will not know what the graphs look like, their existence is enough to prove the theorem.

So let’s start with cycles. If we’re given a desired girth of g, the expected number of cycles of length \leq g in G(n,p) can be bounded by (np)^{g+1}/(np-1). To see this, the two main points are how to count the number of ways to pick j vertices in order to form a cycle, and a typical fact about sums of powers. Indeed, we can think of a cycle of length j as a way to seat a choice of j people around a circular table. There are \binom{n}{j} possible groups of people, and (j-1)! ways to seat one group. If we fix j and let n grow, then the product \binom{n}{j}(j-1)! will eventually be smaller than n^j (actually, this happens almost immediately in almost all cases). For each such arrangement, the probability that all j edges needed to form the cycle occur is p^j, since the edges occur independently of each other with probability p.

So the expected number of cycles on j vertices is at most

\displaystyle \binom{n}{j}(j-1)!p^j

And by the reasoning above we can bound this by n^jp^j. Summing over all numbers j = 3, \dots, g (using linearity of expectation), we bound the expected number of cycles of length \leq g from above:

\displaystyle \sum_{j=3}^g n^j p^j < \sum_{j=0}^g n^j p^j = \frac{(np)^{g+1}}{np - 1}

Since we want relatively few cycles to occur, we want it to be the case that the last quantity, (np)^{g+1}/(np-1), grows much more slowly than n. One trick is to pick p depending on n. If p = n^l, the numerator becomes n^{(l+1)(g+1)}, and for this to grow more slowly than n it suffices that (l+1)(g+1) < 1. Solving this we get that -1 < l < \frac{1}{g+1} - 1 < 0. Pick such an l (it doesn’t matter which), and keep this in mind: for our choice of p, the expected number of short cycles is asymptotically negligible compared to n.
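As a quick numeric sanity check of the geometric-series bound \sum_{j=3}^g (np)^j < (np)^{g+1}/(np-1) (valid when np > 1, which holds for our choice of p), here is a small computation:

```python
def cycle_bound_check(n, p, g):
    """Compare the truncated sum of (np)^j with the closed-form bound."""
    x = n * p
    assert x > 1, "the geometric bound needs np > 1"
    partial = sum(x**j for j in range(3, g + 1))
    bound = x**(g + 1) / (x - 1)
    return partial, bound
```

For n = 1000, p = 0.01, g = 5 (so np = 10) this gives 111000 against a bound of about 111111, which matches the inequality with room to spare.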

On the other hand, we want to make sure that such a graph has high chromatic number. To do this we’ll look at a related property: the size of the largest independent set. An independent set of a graph G = (V,E) is a set of vertices S \subset V so that there are no edges in E between vertices of S. We call \alpha(G) the size of the largest independent set. The values \alpha(G) and \chi(G) are related, because in a proper coloring each color class is an independent set, so the n vertices of G are covered by \chi(G) independent sets, each of size at most \alpha(G). This proves the inequality \chi(G) \alpha(G) \geq n, or equivalently \chi(G) \geq n / \alpha(G). So if we want to ensure \chi(G) is large, it suffices to show \alpha(G) is small (rigorously, \alpha(G) \leq n / k implies \chi(G) \geq k).
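For tiny graphs we can verify the inequality \chi(G)\alpha(G) \geq n by brute force. A sketch (exponential-time code, for illustration only):

```python
from itertools import product, combinations

def chromatic_number(n, edges):
    # Smallest k admitting a proper k-coloring, by exhaustive search.
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for (u, v) in edges):
                return k

def independence_number(n, edges):
    # Largest vertex subset containing no edge, by exhaustive search.
    for size in range(n, 0, -1):
        for s in combinations(range(n), size):
            s = set(s)
            if all(not (u in s and v in s) for (u, v) in edges):
                return size
    return 0
```

On the 5-cycle this gives \chi = 3 and \alpha = 2, and indeed 3 \cdot 2 \geq 5.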

To bound the probability of a large independent set we use the union bound: the probability that some set of r vertices is independent is at most the number of such sets times the probability that any fixed one of them has no internal edges. We let r be arbitrary and look at independent sets of size r. Since there are \binom{n}{r} sets, each of which is independent with probability (1-p)^{\binom{r}{2}}, the probability that there is an independent set of size r is bounded by

\displaystyle \textup{P}(\alpha(G) \geq r) \leq \binom{n}{r}(1-p)^{\binom{r}{2}}

We use the fact that 1-x < e^{-x} for all x to translate the (1-p) part. Combining this with the usual \binom{n}{r} \leq n^r we get the probability of having an independent set of size r at most

\displaystyle \textup{P}(\alpha(G) \geq r) \leq \displaystyle n^re^{-pr(r-1)/2}

Now again we want to pick r so that this quantity goes to zero asymptotically, and it’s not hard to see that r = \frac{3}{p}\log(n) is good enough. With a little arithmetic the probability is at most n^{(3-r)/2}, which tends to zero since r grows with n.
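A numeric sanity check of the two estimates used here, \binom{n}{r} \leq n^r and 1-p \leq e^{-p} (my own throwaway function, requiring Python 3.8+ for math.comb):

```python
import math

def independent_set_bounds(n, p, r):
    """Exact union-bound term vs. the relaxed closed form n^r e^{-pr(r-1)/2}."""
    exact = math.comb(n, r) * (1 - p) ** math.comb(r, 2)
    relaxed = n ** r * math.exp(-p * r * (r - 1) / 2)
    return exact, relaxed
```

For any valid n, p, r the exact term is dominated by the relaxed one, as the inequalities promise.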

So now we have two statements: the expected number of short cycles goes to zero, and the probability that there is an independent set of size at least r goes to zero. If we pick a large enough n, then the expected number of short cycles is less than n/5, and using Markov’s inequality we see that the probability that there are more than n/2 cycles of length at most g is strictly less than 1/2. At the same time, if we pick a large enough n then \alpha(G) \geq r with probability strictly less than 1/2. Combining these two (once more with the union bound), we get

\textup{P}(\textup{at least } n/2 \textup{ cycles of length } \leq g \textup{ or } \alpha(G) \geq r) < 1

Now we can conclude that for all sufficiently large n there has to be a graph on n vertices which has neither of these two properties! Pick one and call it G. Now G may still have a few (fewer than n/2) cycles of length \leq g, but we can fix that by removing a vertex from each short cycle (it doesn’t matter which). Call this new graph G'; it has girth greater than g and at least n/2 vertices. How does this operation affect independent sets, i.e. what is \alpha(G')? Well removing vertices can only decrease the size of the largest independent set, so \alpha(G') \leq \alpha(G) < r. By our earlier inequality, and calling n' \geq n/2 the number of vertices of G', we can make a statement about the chromatic number:

\displaystyle \chi(G') \geq \frac{n'}{\alpha(G')} \geq \frac{n/2}{3\log(n)/p} = \frac{n/2}{3n^{-l} \log(n)} = \frac{n^{1+l}}{6 \log(n)}

Since -1 < l < 0 the numerator grows asymptotically faster than the denominator, and so for sufficiently large n the chromatic number will be larger than any k we wish. Hence we have found a graph with girth at least g and chromatic number at least k.

\square

Connected Components

The statistical properties of a random graph are often quite easy to reason about. For instance, the degree of each vertex in G(n,p) is p(n-1) in expectation, which is essentially np. Local properties like this are easy, but global properties are a priori very mysterious. One natural question we can ask in this vein is: when is G(n,p) connected? We would very much expect the answer to depend on how p changes in relation to n. For instance, p might look like p(n) = 1/n^2 or \log(n) / n or something similar. We could ask the following question:

As n tends to infinity, what limiting proportion of random graphs G(n,p) are connected?

Certainly for some p(n) which are egregiously small (for example, p(n) = 0), the answer will be that no graphs are connected. On the other extreme, if p(n) = 1 then all graphs will be connected (and complete graphs, at that!). So our goal is to study the transition phase between when the graphs are disconnected and when they are connected. A priori this boundary could be a gradual slope, where the proportion grows from zero to one, or it could be a sharp jump. Next time, we’ll formally state and prove the truth, but for now let’s see if we can figure out which answer to expect by writing an exploratory program.
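Before exploring connectivity, a quick simulation (my own throwaway code, separate from the graph classes below) confirms the easy local statistic mentioned above: the average degree of a sample of G(n,p) concentrates tightly around np.

```python
import random

def mean_degree(n, p, seed=0):
    """Sample one G(n,p) and return its average vertex degree."""
    rng = random.Random(seed)
    # Flip a biased coin for each of the C(n,2) possible edges.
    edge_count = sum(1 for i in range(n) for j in range(i)
                     if rng.random() < p)
    return 2 * edge_count / n  # each edge contributes to two degrees
```

For n = 400 and p = 0.1 the expectation is p(n-1) = 39.9, and samples typically land within about a unit of it.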

We wrote the code for this post in Python, and as usual it is all available for download on this blog’s Github page.

We start with a very basic Node class to represent each vertex in a graph, and a function to generate random graphs

import random
class Node:
   def __init__(self, index):
      self.index = index
      self.neighbors = []

   def __repr__(self):
      return repr(self.index)

def randomGraph(n,p):
   vertices = [Node(i) for i in range(n)]
   edges = [(i,j) for i in range(n) for j in range(i) if random.random() < p]

   for (i,j) in edges:
      vertices[i].neighbors.append(vertices[j])
      vertices[j].neighbors.append(vertices[i])

   return vertices

The randomGraph function creates each possible edge independently with probability p, and then constructs the corresponding graph. Next we have a familiar sight: the depth-first search. We use it to compute the graph components one by one (until all vertices have been found in some run of a DFS).

def dfsComponent(node, visited):
   for v in node.neighbors:
      if v not in visited:
         visited.add(v)
         dfsComponent(v, visited)

def connectedComponents(vertices):
   components = []
   cumulativeVisited = set()

   for v in vertices:
      if v not in cumulativeVisited:
         componentVisited = set([v])
         dfsComponent(v, componentVisited)

         components.append(componentVisited)
         cumulativeVisited |= componentVisited

   return components

The dfsComponent function simply searches in depth-first fashion, adding every vertex it finds to the "visited" set.  The connectedComponents function keeps track of the list of components found so far, as well as the cumulative set of all vertices found in any run of dfsComponent. Hence, as we iterate through the vertices we can ignore vertices we've found in previous runs of dfsComponent. The "x |= y" notation is Python shorthand for updating x in place via a union with y.
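To convince ourselves this behaves as advertised, here's a tiny hand-checked example (again not from the original post; the definitions are repeated so the snippet runs on its own): a path on three vertices plus an isolated fourth vertex should yield two components, of sizes 3 and 1.

```python
# Reproduced from above so this snippet runs standalone.
class Node:
   def __init__(self, index):
      self.index = index
      self.neighbors = []

def dfsComponent(node, visited):
   for v in node.neighbors:
      if v not in visited:
         visited.add(v)
         dfsComponent(v, visited)

def connectedComponents(vertices):
   components = []
   cumulativeVisited = set()
   for v in vertices:
      if v not in cumulativeVisited:
         componentVisited = set([v])
         dfsComponent(v, componentVisited)
         components.append(componentVisited)
         cumulativeVisited |= componentVisited
   return components

# A path 0-1-2 plus an isolated vertex 3.
nodes = [Node(i) for i in range(4)]
for (i, j) in [(0, 1), (1, 2)]:
   nodes[i].neighbors.append(nodes[j])
   nodes[j].neighbors.append(nodes[i])

print(sorted(len(c) for c in connectedComponents(nodes)))  # [1, 3]
```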

Finally, we can make a graph of the largest component of (independently generated) random graphs as the probability of an edge varies.

def sizeOfLargestComponent(vertices):
   return max(len(c) for c in connectedComponents(vertices))

def graphLargestComponentSize(n, theRange):
   return [(p, sizeOfLargestComponent(randomGraph(n, p))) for p in theRange]
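Before plotting anything, it's worth checking the extremes (a standalone sanity check of mine, not part of the post's code): with p = 0 the largest component is a single vertex, and with p = 1 the whole graph is one component.

```python
import random

# Reproduced from above so this snippet runs standalone.
class Node:
   def __init__(self, index):
      self.index = index
      self.neighbors = []

def randomGraph(n, p):
   vertices = [Node(i) for i in range(n)]
   edges = [(i, j) for i in range(n) for j in range(i) if random.random() < p]
   for (i, j) in edges:
      vertices[i].neighbors.append(vertices[j])
      vertices[j].neighbors.append(vertices[i])
   return vertices

def dfsComponent(node, visited):
   for v in node.neighbors:
      if v not in visited:
         visited.add(v)
         dfsComponent(v, visited)

def connectedComponents(vertices):
   components = []
   cumulativeVisited = set()
   for v in vertices:
      if v not in cumulativeVisited:
         componentVisited = set([v])
         dfsComponent(v, componentVisited)
         components.append(componentVisited)
         cumulativeVisited |= componentVisited
   return components

def sizeOfLargestComponent(vertices):
   return max(len(c) for c in connectedComponents(vertices))

n = 30
print(sizeOfLargestComponent(randomGraph(n, 0)))  # 1: every vertex is isolated
print(sizeOfLargestComponent(randomGraph(n, 1)))  # 30: the complete graph is one component
```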

Running this code and plotting it for p varying from zero to 0.5 gives the following graph.

[Figure: largest component size vs. edge probability p for n = 50, with p from 0 to 0.5]

The blue line is the size of the largest component, and the red line gives a moving average estimate of the data.  As we can see, there is a very sharp jump peaking at p = 0.1, at which point the whole graph is connected. It would appear there is a relatively quick "phase transition" happening here. Let's zoom in closer on the interesting part.

[Figure: the same n = 50 data, zoomed in on the transition]

It looks like the transition begins around 0.02 and finishes around 0.1. Interesting… Let's change the parameters a bit, and increase the size of the graph. Here's the same chart (in the same range of p values) for a graph with a hundred vertices.

[Figure: largest component size vs. p for n = 100, zoomed in on the transition]

Now the phase transition appears to have shifted to about (0.01, 0.05), roughly halving both endpoints of the previous interval. The plot thickens… Once more, let's move up to a graph on 500 vertices.

[Figure: largest component size vs. p for n = 500]

Again it’s too hard to see, so let’s zoom in.

[Figure: the n = 500 data, zoomed in on the transition]

This one looks like the transition starts at 0.002 and ends at 0.01. Both endpoints are a 5-fold decrease from the previous interval, and we increased the number of vertices by a factor of 5. Could this be a pattern? Here's a conjecture to formalize it:

Conjecture: The random graph G(n,p) enters a phase transition at p=1/n and becomes connected almost surely at p=5/n.

This is not quite rigorous enough to be a true conjecture, but it sums up our intuition that we’ve learned so far. Just to back this up even further, here’s an animation showing the progression of the phase transition as n = 20 \dots 500 in steps of twenty. Note that the p range is changing to maintain our conjectured window.

[Figure: animation of the phase transition as n grows from 20 to 500 in steps of twenty]

Looks pretty good. Next time we’ll see some formal mathematics validating our intuition (albeit reformulated in a nicer way), and we’ll continue to investigate other random graph models.
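As one final exploratory check (this experiment is my own addition, not part of the post's code; the function names and the grid-search procedure are ad hoc), we can estimate empirically where connectivity kicks in for several values of n. This sketch uses an iterative DFS rather than the recursive one above, to sidestep Python's recursion limit on larger, denser graphs.

```python
import random

random.seed(1)  # fixed seed so the experiment is reproducible

def randomAdjacency(n, p):
   # Adjacency lists; each possible edge is included independently with probability p.
   adj = [[] for _ in range(n)]
   for i in range(n):
      for j in range(i):
         if random.random() < p:
            adj[i].append(j)
            adj[j].append(i)
   return adj

def isConnected(adj):
   # Iterative DFS from vertex 0; the graph is connected iff DFS reaches everything.
   visited = {0}
   stack = [0]
   while stack:
      v = stack.pop()
      for w in adj[v]:
         if w not in visited:
            visited.add(w)
            stack.append(w)
   return len(visited) == len(adj)

def empiricalThreshold(n, trials=10, step=0.005):
   # Smallest p on a coarse grid at which all sampled graphs were connected.
   p = step
   while p < 1:
      if all(isConnected(randomAdjacency(n, p)) for _ in range(trials)):
         return p
      p += step
   return 1.0

for n in [50, 100, 200]:
   print(n, empiricalThreshold(n))
```

The measured cutoff clearly shrinks as n grows, in line with the intuition behind the conjecture, though a coarse grid search like this can't distinguish a window of width c/n from the true asymptotics.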

Until then!