Serial Dictatorships and House Allocation

I was recently an invited speaker in a series of STEM talks at Moraine Valley Community College. My talk was called “What can algorithms tell us about life, love, and happiness?” and it’s on YouTube now, so you can go watch it. The central theme of the talk was the lens of computation: algorithms and theoretical computer science can provide novel explanations for the natural phenomena we observe in the world.

One of the main stories I told in the talk is about stable marriages and the deferred acceptance algorithm, which we covered previously on this blog. Among the applications I gave were kidney exchanges and school allocation. I said in the talk that these are variants of the stable marriage problem, but it’s not clear exactly how the two are related. This post will fill that gap and showcase some of the unity in the field of mechanism design.

Mechanism design, which is sometimes called market design, has a grand vision. There is a population of players with individual incentives, and given some central goal the designer wants to come up with a game where the self-interest of the players will lead them to efficiently achieve the designer’s goals. That’s what we’re going to do today with a class of problems called allocation problems.

As usual, all of the code we used in this post is available in a repository on this blog’s Github page.

Allocating houses with dictators

In stable marriages we had $ n$ men and $ n$ women and we wanted to pair them off one to one in a way that there were no mutual incentives to cheat. Let’s modify this scenario so that only one side has preferences and the other does not. The analogy here is that we have $ n$ people and $ n$ houses, but what do we want to guarantee? It doesn’t make sense to say that people will cheat on each other, but it does make sense to ask that there’s no way for people to swap houses and have everyone be at least as happy as before. Let’s formalize this.

Let $ A$ be a set of people (agents) and $ H$ be a set of houses, with $ n = |A| = |H|$. A matching is a one-to-one map $ A \to H$. Each agent is assumed to have a strict preference over houses, and if we’re given two houses $ h_1, h_2$ and $ a \in A$ prefers $ h_1$ over $ h_2$, we express that by saying $ h_1 >_a h_2$. If we want to include the possibility that $ h_1 = h_2$, we write $ h_1 \geq_a h_2$. I.e., either they’re the same house, or $ a$ strictly prefers $ h_1$.

Definition: A matching $ M: A \to H$ is called pareto-optimal if there is no other matching $ N$ with both of the following properties:

  • Every agent is at least as happy in $ N$ as in $ M$, i.e. for every $ a \in A$, $ N(a) \geq_a M(a)$.
  • Some agent is strictly happier in $ N$, i.e. there exists an $ a \in A$ with $ N(a) >_a M(a)$.

We say a matching $ N$ “pareto-dominates” another matching $ M$ if these two properties hold. As a side note, if you like abstract algebra you might notice that you can take matchings and form them into a lattice where the comparison is pareto-domination. If you go deep into the theory of lattices, you can use some nice fixed-point theorems to (non-constructively) prove the existence of optimal allocations in this context and for stable marriages. See this paper if you’re interested. Of course, we will give efficient algorithms to achieve our goals, which is how I prefer to live life.
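To make the definition concrete, here is a small sketch of a pareto-domination check. The function and variable names are mine, not from the post’s repository, and preference lists here are ordered most preferred first.

```python
# paretoDominates: does matching N pareto-dominate matching M?
# preferences[a] is agent a's list of houses, most preferred first.
def paretoDominates(N, M, preferences):
    rank = lambda a, h: preferences[a].index(h)
    atLeastAsHappy = all(rank(a, N[a]) <= rank(a, M[a]) for a in N)
    someoneHappier = any(rank(a, N[a]) < rank(a, M[a]) for a in N)
    return atLeastAsHappy and someoneHappier

prefs = {0: ['a', 'b'], 1: ['b', 'a']}
paretoDominates({0: 'a', 1: 'b'}, {0: 'b', 1: 'a'}, prefs)  # True: both agents improve
```

Note that a matching never pareto-dominates itself, since the second condition requires a strict improvement.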

The mechanism we’ll use to find such an optimal matching is extremely simple, and it’s called the serial dictatorship.

First you pick an arbitrary ordering of the agents and all houses are marked “available.” Then the first agent in the ordering picks their top choice, and you remove their choice from the available houses. Continue in this way down the list until you get to the end, and the outcome is guaranteed to be pareto-optimal.

Theorem: Serial dictatorship always produces a pareto-optimal matching.

Proof. Let $ M$ be the output of the algorithm. Suppose the theorem is false, i.e. that there is some $ N$ that pareto-dominates $ M$. Let $ a$ be the first agent in the chosen ordering who gets a strictly better house in $ N$ than in $ M$. The house $ a$ gets in $ N$, call it $ N(a)$, has to be a house that was unavailable at the time in the algorithm when $ a$ got to pick (otherwise $ a$ would have picked $ N(a)$ during the algorithm!). This means that in $ N$, $ a$ receives the house picked during the algorithm by some agent $ b \in A$ whose turn comes before $ a$’s. But by assumption, $ a$ was the first agent to get a strictly better house, so $ b$ cannot improve in $ N$; and since $ b$ loses $ M(b)$ to $ a$ and preferences are strict, $ b$ has to end up with a strictly worse house. This contradicts that every agent is at least as happy in $ N$ as in $ M$, so $ N$ cannot pareto-dominate $ M$.

$ \square$

It’s easy enough to implement this in Python. Each agent will be represented by its list of preferences, each object will be an integer, and the matching will be a dictionary. The only thing we need to do is pick a way to order the agents, and we’ll just pick a random ordering.

import random

# serialDictatorship: [[int]], [int] -> {int: int}
# construct a pareto-optimal allocation of objects to agents.
# each preference list is ordered from least preferred to most preferred,
# so the latest available entry in a list is that agent's top choice.
def serialDictatorship(agents, objects, seed=None):
   if seed is not None:
      random.seed(seed)

   agentPreferences = agents[:]
   random.shuffle(agentPreferences)
   allocation = dict()
   availableHouses = set(objects)

   for agentIndex, preference in enumerate(agentPreferences):
      # the most preferred available house appears latest in the list
      allocation[agentIndex] = max(availableHouses, key=preference.index)
      availableHouses.remove(allocation[agentIndex])

   return allocation

And a test

agents = [['d','a','c','b'], # 4th in my chosen seed
          ['a','d','c','b'], # 3rd
          ['a','d','b','c'], # 2nd
          ['d','a','c','b']] # 1st
objects = ['a','b','c','d']
allocation = serialDictatorship(agents, objects, seed=1)
test({0: 'b', 1: 'c', 2: 'd', 3: 'a'}, allocation)

This algorithm is so simple it’s almost hard to believe. But it gets better, because under some reasonable conditions, it’s the only algorithm that solves this problem.

Theorem [Svensson 98]: Serial dictatorship is the only algorithm that produces a pareto-optimal matching and also has the following three properties:

  • Strategy-proof: no agent can improve their outcomes by lying about their preferences at the beginning.
  • Neutral: the outcome of the algorithm is unchanged if you permute the items (i.e., it does not depend on the index of the item in some list).
  • Non-bossy: No agent can change the outcome of the algorithm without also changing the object they receive.

And if we drop any one of these conditions there are other mechanisms that satisfy the rest. This theorem was proved in this paper by Lars-Gunnar Svensson in 1998, and it’s not particularly long or complicated. The proof of the main theorem is about a page. It would be a great exercise in reading mathematics to go through the proof and summarize the main idea (you could even leave a comment with your answer!).
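To see strategy-proofness in action, here is a self-contained toy check, independent of the code above: it uses a fixed agent ordering (no shuffle) and preference lists ordered most preferred first, and brute-forces every possible misreport by one agent. All names are mine, for illustration only.

```python
from itertools import permutations

# a bare-bones serial dictatorship with a fixed agent order;
# prefLists[i] ranks houses most preferred first
def dictatorship(prefLists, houses):
    available = set(houses)
    allocation = {}
    for i, prefs in enumerate(prefLists):
        allocation[i] = min(available, key=prefs.index)  # top available pick
        available.remove(allocation[i])
    return allocation

truePrefs = [['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c']]
houses = ['a', 'b', 'c']
truthful = dictatorship(truePrefs, houses)

# agent 1 tries every possible misreport; no lie earns them a house
# they truly prefer over their truthful outcome
for lie in permutations(houses):
    outcome = dictatorship([truePrefs[0], list(lie), truePrefs[2]], houses)
    assert truePrefs[1].index(outcome[1]) >= truePrefs[1].index(truthful[1])
```

The intuition the loop confirms: with the ordering fixed, an agent’s available set doesn’t depend on their own report, so truthfully picking the best of what’s available can’t be beaten.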

Allocation with existing ownership

Now we switch to a slightly different problem. There are still $ n$ houses and $ n$ agents, but now every agent already “owns” a house. The question becomes: can they improve their situation by trading houses? It shouldn’t be immediately obvious whether this is possible, because a trade can happen in a “cycle” like the following:


Here A prefers the house of B, and B prefers the house of C, and C prefers the house of A, so they’d all benefit from doing a three-way cyclic trade. You can easily imagine the generalization to larger cycles.

This model was studied by Shapley and Scarf in 1974 (the same Shapley of the deferred acceptance algorithm for stable marriages). Just as you’d expect, our goal is to find an optimal (re)allocation of houses to agents in which no cycle of agents stands to improve. That is, there is no subset of agents that can jointly improve their standing by trading among themselves. In formalizing this we call an “optimal” matching a core matching. Again $ A$ is a set of agents, and $ H$ is a set of houses.

Definition: A matching $ M: A \to H$ is called a core matching if there is no subset $ B \subset A$ and no matching $ N: A \to H$ with the following properties:

  • For every $ b \in B$, $ N(b)$ is owned by some other agent in $ B$ (trading only happens within $ B$).
  • Every agent $ b$ in $ B$ is at least as happy as before, i.e. $ N(b) \geq_b M(b)$ for all $ b$.
  • Some agent in $ B$ strictly improves, i.e. for some $ b, N(b) >_b M(b)$.

We also call an algorithm individually rational if it ensures that every agent gets a house that is at least as good as their starting house. It should be clear that an algorithm which produces a core matching is individually rational, because for any agent $ a$ we can set $ B = \{a\}$, i.e. force $ a$ to consider not trading at all, and being a core matching says that’s not better for $ a$. Likewise, core matchings are also pareto-optimal by setting $ B = A$.
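Individual rationality is easy to check mechanically. A small sketch (the function and variable names are mine, not from the post’s repository; preference lists are most preferred first):

```python
# individuallyRational: every agent's assigned house is at least as good,
# by their own ranking, as the house they started with
def individuallyRational(preferences, initialHouse, allocation):
    return all(preferences[a].index(allocation[a]) <= preferences[a].index(initialHouse[a])
               for a in preferences)

preferences = {'x': ['h2', 'h1'], 'y': ['h1', 'h2']}
initialHouse = {'x': 'h1', 'y': 'h2'}
# swapping makes both agents strictly better off, so it is individually rational
individuallyRational(preferences, initialHouse, {'x': 'h2', 'y': 'h1'})  # True
```

The no-trade allocation is always individually rational too, which mirrors the $ B = \{a\}$ argument above.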

It might seem like the idea of a “core” solution to an allocation problem is more general, and you’re right. You can define it for a very general setting of cooperative games and prove the existence of core matchings in that setting. See Wikipedia for more. As is our prerogative, we’ll achieve the same thing by constructing core matchings with an algorithm.

Indeed, the following theorem is due to Shapley & Scarf.

Theorem [Shapley-Scarf 74]: There is a core matching for every choice of preferences. Moreover, one can be found by an efficient algorithm.

Proof. The mechanism we’ll define is called the top trading cycles algorithm. We operate in rounds, and the first round goes as follows.

Form a directed graph with nodes in $ A \cup H$. That is, there is one node for each agent and one node for each house. Then we start by having each agent “point” to its most preferred house, and each house “point” to its original owner. That is, we add in directed edges from agents to their top pick, and from houses to their owners. For example, say there are five agents $ A = \{ a, b, c, d, e \}$ and houses $ H = \{ 1,2,3,4,5 \}$ with $ a$ owning $ 1$, and $ b$ owning $ 2$, etc., but their favorite picks go backwards, so that $ a$ prefers house $ 5$ most, and $ b$ prefers $ 4$ most, $ c$ prefers $ 3$ (which $ c$ also owns), etc. Then the “pointing picture” in the first round looks like this.


The claim about such a graph is that there is always some directed cycle. In the example above, there are three. And moreover, we claim that no two cycles can share an edge. It’s easy to see there has to be a cycle: you can start at any agent and just follow the single outgoing edge until you find yourself repeating some vertices. By the fact that there is only one edge going out of any vertex, it follows that no two cycles could share an edge (or else in the last edge they share, there’d have to be a fork, i.e. two outgoing edges).

In the example above, you can start from A and follow the only edge and you get the cycle A -> 5 -> E -> 1 -> A. Similarly, starting at 4 would give you 4 -> D -> 2 -> B -> 4.
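Because every vertex has exactly one outgoing edge, cycle-finding needs nothing fancy. Here is the example above as a plain successor map, a sketch independent of the Graph class used later in this post:

```python
# the round-one pointing picture: agents a..e point at their top house,
# houses 1..5 point at their owners
succ = {'a': 5, 'b': 4, 'c': 3, 'd': 2, 'e': 1,
        1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}

def findCycle(start):
    seen = []
    v = start
    while v not in seen:        # follow the unique outgoing edge
        seen.append(v)
        v = succ[v]
    return seen[seen.index(v):] # the cycle we eventually ran into

findCycle('a')  # ['a', 5, 'e', 1]
findCycle(4)    # [4, 'd', 2, 'b']
```

Starting from any vertex, the walk must repeat a vertex, and slicing off the pre-cycle “tail” leaves exactly the cycle.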

The point is that when you remove a cycle, you can have the agents in that cycle do the trade indicated by the cycle and remove the entire cycle from the graph. The consequence of this is that you have some agents who were pointing to houses that are removed, and so these agents revise their outgoing edge to point at their next most preferred available house. You can then continue removing cycles in this way until all the agents have been assigned a house.

The proof that this is a core matching is analogous to the proof that serial dictatorships are pareto-optimal. If there were some subset $ B$ and some other matching $ N$ under which $ B$ does better, then one of these agents has to be the first to be removed in a cycle during the algorithm’s run. At that moment, every house owned by a member of $ B$ is still in the graph (a house is only removed along with the cycle containing its owner), and that agent got their most preferred house among everything remaining. So trading only within $ B$ cannot give that agent anything strictly better, and the argument propagates down the removal order just as before.

$ \square$

This algorithm is commonly called the Top Trading Cycles algorithm, because it splits the set of agents and houses into a disjoint union of cycles, each of which is the best trade possible for every agent involved.

Implementing the Top Trading Cycles algorithm in code requires us to be able to find cycles in graphs, but that isn’t so hard. I implemented a simple data structure for a graph with helper functions that are specific to our kind of graph (i.e., every vertex has outdegree 1, so the algorithm to find cycles is simpler than something like Tarjan’s algorithm). You can see the data structure in this post’s github repository. An example of using it:

>>> G = Graph([1,'a',2,'b',3,'c',4,'d',5,'e',6,'f'])
>>> G.addEdges([(1,'a'), ('a',2), (2,'b'), ('b',3), (3,'c'), ('c',1),
            (4,'d'), ('d',5), (5,'e'), ('e',4), (6,'f'), ('f',6)])
>>> G['d']
>>> G['d'].outgoingEdges
{('d', 5)}
>>> G['d'].anyNext() # return the target of any outgoing edge from 'd'
>>> G.delete('e')
>>> G[4].incomingEdges

Next we implement a function to find a cycle, and a function to extract the agents from a cycle. For the latter we can assume the cycle is just represented by any vertex on the cycle (again, because our graphs always have outdegree exactly 1).

# anyCycle: graph -> vertex
# find any vertex involved in a cycle
def anyCycle(G):
   visited = set()
   v = G.anyVertex()

   while v not in visited:
      visited.add(v)
      v = v.anyNext()

   return v

# getAgents: graph, vertex, set -> set(vertex)
# get the set of agents on the cycle containing the given vertex
def getAgents(G, cycle, agents):
   # make sure starting vertex is a house
   if cycle.vertexId in agents:
      cycle = cycle.anyNext()

   startingHouse = cycle
   currentVertex = startingHouse.anyNext()
   theAgents = set()

   while currentVertex not in theAgents:
      theAgents.add(currentVertex)
      # hop two edges: agent -> house -> that house's owner (the next agent)
      currentVertex = currentVertex.anyNext()
      currentVertex = currentVertex.anyNext()

   return theAgents

Finally, implementing the algorithm is just bookkeeping. After setting up the initial graph, the core of the routine is

def topTradingCycles(agents, houses, agentPreferences, initialOwnership):
   # form the initial graph: a vertex for each agent and each house, an edge
   # from each agent to their top choice, and from each house to its owner
   # ...

   allocation = dict()
   while len(G.vertices) > 0:
      cycle = anyCycle(G)
      cycleAgents = getAgents(G, cycle, agents)

      # assign agents in the cycle their choice of house,
      # then remove the cycle from the graph
      for a in cycleAgents:
         h = a.anyNext().vertexId
         allocation[a.vertexId] = h
         G.delete(a)
         G.delete(h)

      for a in agents:
         if a in G.vertices and G[a].outdegree() == 0:
            # update preferences
            # ...

            G.addEdge(a, preferredHouse(a))

   return allocation

This mutates the graph in each round by deleting any cycle that was found, and adding new edges when the top choice of some agent is removed. Finally, to fill in the ellipses we just need to say how we represent the preferences. The input agentPreferences is a dictionary mapping agents to a list of all houses in order of preference (most preferred first). So again we can represent the “top available pick” by an index and update that index when agents lose their top pick.

# maps agent to an index of the list agentPreferences[agent]
currentPreferenceIndex = dict((a,0) for a in agents)
preferredHouse = lambda a: agentPreferences[a][currentPreferenceIndex[a]]

Then to update we just have to replace the currentPreferenceIndex for each disappointed agent by its next best option.

      for a in agents:
         if a in G.vertices and G[a].outdegree() == 0:
            while preferredHouse(a) not in G.vertices:
               currentPreferenceIndex[a] += 1
            G.addEdge(a, preferredHouse(a))

And that’s it! We included a small suite of test cases which you can run if you want to play around with it more.

One final nice thing about this algorithm is that it almost generalizes the serial dictatorship algorithm. What you do is rather than have each house point to its original owner, you just have all houses point to the first agent in the pre-specified ordering. Then a cycle will always have length 2, the first agent gets their preferred house, and in the next round the houses now point to the second agent in the ordering, and so on.

Kidney exchange

We still need one more ingredient to see the bridge from allocation problems to kidney exchanges. The setting is like this: say Manuel needs a kidney transplant, and he’s lucky enough that his sister-in-law Anastasia wants to donate her kidney to Manuel. However, it turns out that Anastasia doesn’t have the right blood/antibody type for a donation, and so even though she has a kidney to give, she can’t give it to Manuel. Now one might say “just sell your kidney and use the money to buy a kidney with the right type!” Turns out that’s illegal; at some point we as a society decided that it’s immoral to sell organs. But it is legal to exchange a kidney for a kidney. So if Manuel and Anastasia can find a pair of people both of whom happen to have the right blood types, they can arrange for a swap.

But finding two people both of whom have the right blood types is unlikely, and we can actually do far better! We can turn this into a housing allocation problem as follows. Anyone with a kidney to donate is a “house,” and anyone who needs a kidney is an “agent.” To start off, we say that each agent “owns” the kidney of their willing donor. The preferences of each agent are determined by which kidney donors have the right blood type (with ties split, say, by geographical distance). Then when you run the top trading cycles algorithm, you find cycles in which Anastasia, instead of donating to Manuel, donates to another person who has the right blood type. On the other end of the cycle, Manuel receives a kidney from someone with the right blood type.
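To see the mechanics end to end, here is a toy, dictionary-based top trading cycles, deliberately separate from the Graph-based implementation above; all patient and donor names are made up for illustration.

```python
def miniTTC(prefs, owner):
    # prefs: patient -> donors ranked most preferred (most compatible) first
    # owner: donor -> the patient who "owns" that donor's kidney
    allocation, live = {}, set(owner)
    while live:
        # each live patient points at their top remaining donor
        top = {owner[h]: next(x for x in prefs[owner[h]] if x in live)
               for h in live}
        # follow patient -> donor -> donor's owner until a vertex repeats
        a, seen = next(iter(top)), []
        while a not in seen:
            seen.append(a)
            a = owner[top[a]]
        for agent in seen[seen.index(a):]:   # remove the cycle we found
            allocation[agent] = top[agent]
            live.remove(top[agent])
    return allocation

prefs = {'manuel': ['raj', 'anastasia'], 'priya': ['anastasia', 'raj']}
owner = {'anastasia': 'manuel', 'raj': 'priya'}
miniTTC(prefs, owner)  # the two-way swap: manuel gets raj's kidney, priya gets anastasia's
```

Here Manuel’s paired donor Anastasia matches Priya, and Priya’s paired donor Raj matches Manuel, so the algorithm finds the length-2 cycle and arranges the swap.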

The big twist is that not everyone who needs a kidney knows someone willing to donate. So there are agents who are “new” to the market and don’t already own a house. Moreover, maybe you have someone who is willing to donate a kidney but isn’t asking for anything in return.

Because of this the algorithm changes slightly. You can no longer guarantee the existence of a cycle (though you can still guarantee that no two cycles will share an edge). But as new people are added to the graph, cycles will eventually form and you can make the trades. There are a few extra details if you want to ensure that everyone is being honest (if you’re thinking about it like a market in the economic sense, where people could be lying about their preferences).

The resulting mechanism is called You Request My House I Get Your Turn (YRMHIGYT). In short, the idea is that you pick an order on the agents, say for kidney exchanges it’s the order in which the patients are diagnosed. And you have them add edges to the graph in that order. At each step you look for a cycle, and when one appears you remove it as usual. The twist, and the source of the name, is that when someone who has no house requests a house which is already owned, the agent who owns the house gets to jump forward in the queue. This turns out to make everything “fair” (in that everyone is guaranteed to get a house at least as good as the one they own) and one can prove analogous optimality theorems to the ones we did for serial dictatorship.

This mechanism was implemented by Alvin Roth in the US hospital system, and by some measure it has saved many lives. If you want to hear more about the process and how successful the kidney exchange program is, you can listen to this Freakonomics podcast episode where they interviewed Al Roth and some of the patients who benefited from this new allocation market.

It would be an excellent exercise to go deeper into the guts of the kidney exchange program (see this paper by Alvin Roth et al.), and implement the matching system in code. At the very least, implementing the YRMHIGYT mechanism is only a minor modification of our existing Top Trading Cycles code.

Until next time!

Stable Marriages and Designing Markets

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way, the problem is obviously not solvable in general.

Now what if instead of aiming for each individual’s maximum happiness we instead shoot for mutual contentedness? That is, what if “happiness” here means that nobody will ever have an incentive to cheat on their spouse? It turns out that for a mathematical version of this condition, we can always find a suitable set of marriages! These mathematical formalisms include some assumptions, such as that preferences never change and that no new individuals are added to the population. But it is nevertheless an impressive theorem that we can achieve stability no matter what everyone’s preferences are. In this post we’ll give the classical algorithm which constructs so-called “stable marriages,” and we’ll prove its correctness. Then we’ll see a slight generalization of the algorithm, in which the marriages are “polygamous,” and we’ll apply it to the problem of assigning students to internships.

As usual, all of the code used in this post is available for download at this blog’s Github page.

Historical Notes

The original algorithm for computing stable marriages was discovered by Lloyd Shapley and David Gale in the early 1960’s. Shapley and Alvin Roth went on to dedicate much of their careers to designing markets and applying the stable marriage problem and its generalizations to such problems. In 2012 they jointly received the Nobel prize in economics for their work on this problem. If you want to know more about what “market design” means and why it’s needed (and you have an hour to spare), consider watching the talk below by Alvin Roth at the Simons Institute’s 2013 Symposium on the Visions of the Theory of Computing. Roth spends most of his time discussing the state of one particular economy, medical students and residence positions at hospitals, which he was asked to redesign. It’s quite a fascinating tale, although some of the deeper remarks assume knowledge of the algorithm we cover in this post.

Alvin Roth went on to apply the ideas presented in the video to economic systems in Boston and New York City public schools, kidney exchanges, and others. They all had the same sort of structure: both parties have preferences and stability makes sense. So he actually imposed the protocol we’re about to describe in order to guarantee that the process terminates to a stable arrangement (and automating it saves everyone involved a lot of time, stress, and money! Watch the video above for more on that).

The Monogamous Stable Marriage Algorithm

Let’s formally set up the problem. Let $ X = \left \{ 1, 2, \dots, n \right \}$ be a set of $ n$ suitors and $ Y = \left \{ 1,2,\dots ,n \right \}$ be a set of $ n$ “suited.” Let $ \textup{pref}_{X \to Y}: X \to S_n$ be a list of preferences for the suitors. In words, $ \textup{pref}_{X \to Y}$ accepts as input a suitor, and produces as output an ordering on the suited members of $ Y$. We denote the output set as $ S_n$, which the group theory folks will recognize as the permutation group on $ 1, \dots, n$. Likewise, there is a function $ \textup{pref}_{Y \to X}: Y \to S_n$ describing the preferences of each of the suited.

An example will help clarify these stuffy definitions. If $ X = \left \{ 1, 2, 3 \right \}$ and $ Y = \left \{ 1, 2, 3 \right \}$, then to say that

$ \textup{pref}_{X \to Y}(2) = (3, 1, 2)$

is to say that the second suitor prefers the third member of $ Y$ the most, and then the first member of $ Y$, and then the second. The programmer might imagine that the datum of the problem consists of two dictionaries (one for $ X$ and one for $ Y$) whose keys are integers and whose values are lists of integers which contain 1 through $ n$ in some order.
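In code, that might look like the following. Only the entry for suitor 2 comes from the example above; the other lists are filler for illustration.

```python
# two dictionaries of preferences; keys are members of X (resp. Y),
# values are orderings of the other side. Only prefXtoY[2] is from the
# example in the text; the rest are made up.
prefXtoY = {1: (1, 2, 3), 2: (3, 1, 2), 3: (2, 3, 1)}
prefYtoX = {1: (2, 1, 3), 2: (1, 3, 2), 3: (3, 2, 1)}

prefXtoY[2][0]  # suitor 2's most preferred suited: 3
```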

A solution to the problem, then, is a way to match (or marry) suitors with suited. Specifically, a matching is a bijection $ m: X \to Y$, so that $ x$ is matched with $ m(x)$. The reason we use a bijection is because the marriages are monogamous: only one suitor can be matched with one suited and vice versa. Later we’ll see this condition dropped so we can apply it to a more realistic problem of institutions (suited) which can accommodate many applicants (suitors). Because suitor and suited are awkward to say, we’ll use the familiar, antiquated, and politically incorrect terms “men and women.”

Now if we’re given a monogamous matching $ m$, a pair $ x \in X, y \in Y$ is called unstable for $ m$ if both $ x,y$ prefer each other over their partners assigned by $ m$. That is, $ (x,y)$ is unstable for $ m$ if $ y$ appears before $ m(x)$ in the preference list for $ x$, $ \textup{pref}_{X \to Y}(x)$, and likewise $ x$ appears before $ m^{-1}(y)$ in $ \textup{pref}_{Y \to X}(y)$.

Another example to clarify: again let $ X = Y = \left \{ 1,2,3 \right \}$ and suppose for simplicity that our matching $ m$ pairs $ m(i) = i$. If man 2 has the preference list $ (3,2,1)$ and woman 3 has the preference list $ (2,1,3)$, then 2 and 3 together form an unstable pair for $ m$, because they would rather be with each other over their current partners. That is, they have a mutual incentive to cheat on their spouses. We say that the matching is unstable or admits an unstable pair if there are any unstable pairs for it, and we call the entire matching stable if it doesn’t admit any unstable pairs.
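We can check the example mechanically. Only man 2’s list and woman 3’s list are from the text; the remaining lists are arbitrary filler.

```python
# m pairs i with i; prefX[2] and prefY[3] are from the example above
prefX = {1: (1, 2, 3), 2: (3, 2, 1), 3: (1, 2, 3)}
prefY = {1: (1, 2, 3), 2: (1, 2, 3), 3: (2, 1, 3)}
m = {1: 1, 2: 2, 3: 3}
mInverse = {v: k for k, v in m.items()}

def unstablePair(x, y):
    # both strictly prefer each other to their assigned partners
    return (prefX[x].index(y) < prefX[x].index(m[x]) and
            prefY[y].index(x) < prefY[y].index(mInverse[y]))

unstablePair(2, 3)  # True: they'd rather be with each other
```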

Unlike real life, mathematically unstable marriages need not have constant arguments.


So the question at hand is: is there an algorithm which, given access to the two sets of preferences, can efficiently produce a stable matching? We can also wonder whether a stable matching is guaranteed to exist, and the answer is yes. In fact, we’ll prove this and produce an efficient algorithm in one fell swoop.

The central concept of the algorithm is called deferred acceptance. The gist is like this. The algorithm operates in rounds. During each round, each man will “propose” to a woman, and each woman will pick the best proposal available. But the women will not commit to their pick. They instead reject all other suitors, who go on to propose to their second choices in the next round. At that stage each woman (who now may have a more preferred suitor than in the first round) may replace her old pick with a new one. The process continues in this manner until each man is paired with a woman. In this way, each of the women defers accepting any proposal until the end of the round, progressively increasing the quality of her choice. Likewise, the men progressively propose less preferred matches as the rounds progress.

It’s easy to argue such a process must eventually converge. Indeed, the contrary would mean there’s some sort of cycle in the order of proposals, but each man proposes only to strictly less preferred women than in any previous round, and the women can only strictly increase the quality of their held pick. Mathematically, we’re using an important tool called monotonicity: some quantity can only increase (or only decrease) as time goes on, and since the quantity is bounded, we must eventually reach a local maximum. From there, we can prove that any local maximum satisfies the property we want (here, that the matching is stable), and we win. Indeed, suppose to the contrary that we have a pair $ (x,y)$ which is unstable for the matching $ m$ produced at the end of this process. Then it must have been the case that $ x$ proposed to $ y$ in some earlier round. But $ y$ has as her final match some other suitor $ x' = m^{-1}(y)$ whom she prefers less than $ x$. Though she may have never picked $ x$ at any point in the algorithm, she can only end up with the worse choice $ x'$ if at some point $ y$ chose a suitor that was less preferred than the suitor she already had. Since her choices are monotonic this cannot happen, so no unstable pairs can exist.

Rather than mathematically implement the algorithm in pseudocode, let’s produce the entire algorithm in Python to make the ideas completely concrete.

Python Implementation

We start off with some simple data definitions for the two parties which, in the renewed interest of generality, we refer to as Suitor and Suited.

class Suitor(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.rejections = 0 # num rejections is also the index of the next option = id

   def preference(self):
      return self.prefList[self.rejections]

   def __repr__(self):
      return repr(

A Suitor is simple enough: he has an id representing his “index” in the set of Suitors, and a preference list prefList which in its $ i$-th position contains the Suitor’s $ i$-th most preferred Suited. This is identical to our mathematical representation from earlier, where a list like $ (2,3,1)$ means that the Suitor prefers the second Suited most and the first Suited least. Knowing the algorithm ahead of time, we add an additional piece of data: the number of rejections the Suitor has seen so far. This will double as the index of the Suited that the Suitor is currently proposing to. Indeed, the preference function provides a thin layer of indirection allowing us to ignore the underlying representation, so long as one updates the number of rejections appropriately.

Now for the Suited.

class Suited(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.held = None
      self.currentSuitors = set() = id

   def __repr__(self):
      return repr(

A Suited likewise has a list of preferences and an id, but in addition she has a held attribute for the currently held Suitor, and a set currentSuitors of Suitors that are currently proposing to her. Hence we can define a reject method which accepts no inputs, and returns a set of rejected suitors, while updating the woman’s state to hold onto her most preferred suitor.

   def reject(self):
      if len(self.currentSuitors) == 0:
         return set()

      if self.held is not None:
         self.currentSuitors.add(self.held)

      self.held = min(self.currentSuitors, key=lambda suitor: self.prefList.index(
      rejected = self.currentSuitors - set([self.held])
      self.currentSuitors = set()

      return rejected

The call to min does all the work: finding the Suitor that appears first in her preference list. The rest is bookkeeping. Now the algorithm for finding a stable marriage, following the deferred acceptance algorithm, is simple.

# monogamousStableMarriage: [Suitor], [Suited] -> {Suitor -> Suited}
# construct a stable (monogamous) marriage between suitors and suiteds
def monogamousStableMarriage(suitors, suiteds):
   unassigned = set(suitors)

   while len(unassigned) > 0:
      for suitor in unassigned:
      unassigned = set()

      for suited in suiteds:
         unassigned |= suited.reject()

      for suitor in unassigned:
         suitor.rejections += 1

   return dict([(suited.held, suited) for suited in suiteds])

All the Suitors are unassigned to begin with. Each iteration of the loop corresponds to a round of the algorithm: the Suitors are added to the currentSuitors list of their next most preferred Suited. Then the Suiteds “simultaneously” reject some Suitors, whose rejection counts are upped by one and returned to the pool of unassigned Suitors. Once every Suited has held onto a Suitor we’re done.

Given a matching, we can define a function that verifies by brute force that the marriage is stable.

# verifyStable: [Suitor], [Suited], {Suitor -> Suited} -> bool
# check that the assignment of suitors to suited is a stable marriage
def verifyStable(suitors, suiteds, marriage):
   import itertools
   suitedToSuitor = dict((v,k) for (k,v) in marriage.items())
   precedes = lambda L, item1, item2: L.index(item1) < L.index(item2)

   def suitorPrefers(suitor, suited):
      return precedes(suitor.prefList, suited.id, marriage[suitor].id)

   def suitedPrefers(suited, suitor):
      return precedes(suited.prefList, suitor.id, suitedToSuitor[suited].id)

   for (suitor, suited) in itertools.product(suitors, suiteds):
      if suited != marriage[suitor] and suitorPrefers(suitor, suited) and suitedPrefers(suited, suitor):
         return False, (suitor, suited)

   return True

Indeed, we can test the algorithm on an instance of the problem.

>>> suitors = [Suitor(0, [3,5,4,2,1,0]), Suitor(1, [2,3,1,0,4,5]),
...            Suitor(2, [5,2,1,0,3,4]), Suitor(3, [0,1,2,3,4,5]),
...            Suitor(4, [4,5,1,2,0,3]), Suitor(5, [0,1,2,3,4,5])]
>>> suiteds = [Suited(0, [3,5,4,2,1,0]), Suited(1, [2,3,1,0,4,5]),
...            Suited(2, [5,2,1,0,3,4]), Suited(3, [0,1,2,3,4,5]),
...            Suited(4, [4,5,1,2,0,3]), Suited(5, [0,1,2,3,4,5])]
>>> marriage = monogamousStableMarriage(suitors, suiteds)
>>> marriage
{3: 0, 4: 4, 5: 1, 1: 2, 2: 5, 0: 3}
>>> verifyStable(suitors, suiteds, marriage)
True

We encourage the reader to check this by hand (this one only took two rounds). Even better, answer the question of whether the algorithm could ever require $ n$ steps to converge for $ 2n$ individuals, where you get to pick the preference list to try to make this scenario happen.

Stable Marriages with Capacity

We can extend this algorithm to work for “polygamous” marriages in which one Suited can accept multiple Suitors. In fact, the two problems are entirely the same! Just imagine duplicating a Suited with large capacity into many Suiteds with capacity of 1. This particular reduction is not very efficient, but it allows us to see that the same proof of convergence and correctness applies. We can then modify our classes and algorithm to account for it, so that (for example) instead of a Suited “holding” a single Suitor, she holds a set of Suitors. We encourage the reader to try extending our code above to the polygamous case as an exercise, and we’ve provided the solution in the code repository for this post on this blog’s Github page.
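As a hint for that exercise, here is a minimal sketch of how a capacity-limited Suited might look (this is an illustration, not the repository’s actual implementation; it assumes Suitor objects with an id attribute, as in the monogamous version):

```python
class CapacitatedSuited(object):
   def __init__(self, id, prefList, capacity):
      self.id = id
      self.prefList = prefList
      self.capacity = capacity
      self.held = set()           # now a set of held Suitors, not a single one
      self.currentSuitors = set()

   def __repr__(self):
      return repr(self.id)

   def reject(self):
      # pool the currently held Suitors with the new proposals, keep
      # the `capacity` most preferred, and reject everyone else
      pool = self.held | self.currentSuitors
      ranked = sorted(pool, key=lambda s: self.prefList.index(s.id))
      self.held = set(ranked[:self.capacity])
      self.currentSuitors = set()
      return pool - self.held
```

The main loop of the algorithm is unchanged; only the rejection bookkeeping needs to know about capacity.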

Ways to Make it Harder

When you study algorithmic graph problems as much as I do, you start to get disheartened. It seems like every problem is NP-hard or worse. So when we get a situation like this, a nice, efficient algorithm with very real consequences and interpretations, you start to get very excited. In between our heaves of excitement, we imagine all the other versions of this problem that we could solve and Nobel prizes we could win. Unfortunately the landscape is bleaker than that, and most extensions of stable marriage problems are NP-complete.

For example, what if we allow ties? That is, one man can be equally happy with two women. This is NP-complete. However, it turns out this extension can be formulated as an integer programming problem, and standard optimization techniques can be used to approximate a solution.
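For concreteness, here is a sketch of one standard integer-programming encoding of stability (written for the strict-preference case; formulations with ties adjust the preference relation $\succ$ accordingly). The binary variable $x_{ij}$ indicates that man $i$ is matched to woman $j$:

```latex
\begin{align*}
\sum_j x_{ij} &= 1 && \text{for each man } i, \\
\sum_i x_{ij} &= 1 && \text{for each woman } j, \\
x_{ij} + \sum_{j' \succ_i j} x_{ij'} + \sum_{i' \succ_j i} x_{i'j} &\ge 1 && \text{for each pair } (i,j), \\
x_{ij} &\in \{0, 1\}.
\end{align*}
```

The third family of constraints rules out blocking pairs: for every pair $(i,j)$, either they are matched to each other, or $i$ is matched to some $j'$ he prefers to $j$, or $j$ is matched to some $i'$ she prefers to $i$.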

What if, thinking about the problem in terms of medical students and residencies, we allow people to pick their preferences as couples? Some med students are married, after all, and prefer to be close to their spouse even if it means they have a less preferred residency. NP-hard again. See page 53 (pdf page 71) of these notes for a more detailed investigation. The problem is essentially that there is not always a stable matching, and so even determining whether there is one is NP-complete.

So there are a lot of ways to enrich the problem, and there’s an interesting line between tractable and hard in the worst case. As a (relatively difficult) exercise, try to solve the “roommates” version of the problem, where there is no male/female distinction (anyone can be matched with anyone). It turns out to have a tractable solution, and the algorithm is similar to the one outlined in this post.

Until next time!

PS. I originally wrote this post about a year ago when I was contacted by someone in industry who agreed to provide some (anonymized) data listing the preferences of companies and interns applying to work at those companies. Not having heard from them for almost a year, I figure it’s a waste to let this finished post collect dust at the risk of not having an interesting data set. But if you, dear reader, have any data you’d like to provide that fits into the framework of stable marriages, I’d love to feature your company/service on my blog (and solve the matching problem) in exchange for the data. The only caveat is that the data would have to be public, so you would have to anonymize it.

Blog Referral – Economics, Arbitrage Schemes, and Team Fortress 2

A sample of tradable items from Valve’s “Team Fortress 2.”

I don’t often put dedicated referrals to other blogs here. If I see blogs on relevant topics, I usually want to implement the ideas myself and write an original post that delves deeper into the mathematics involved. And more often than not, the post I’d refer to shies away from or completely ignores the mathematics (the best part!), so I feel justified in not referring to the original motivating article.

But I recently found an absolutely fascinating blog, and a topic, which is a novel exception. The blog is called Valve Economics, and it’s so intriguing precisely because I don’t know anything about economics, and because Valve keeps all their juicy raw data private. I’d like to take a moment to share it with you.

Here’s Some Background

There’s an excellent video game company out there called Valve, which has made some of the (universally accepted) best video games of all time, including some I personally have enjoyed.

A scene from Portal 2, my personal favorite Valve game.

Valve also has a wonderful content distribution network called Steam, which doubles as a platform for their in-game economies. One prominent example of this is their popular online shooter Team Fortress 2. They have what’s known as a barter economy: goods are exchanged directly for other goods, without any universal intermediary currency.

Now the thing that makes this economy so fascinating is not what you’d expect. The guns, hats, glasses, and game mechanics are largely irrelevant. The astounding feature of the Team Fortress 2 economy, and of all virtual economies, is that every single transaction is recorded and saved. Let me isolate that statement, because it deserves the emphasis.

Valve has access to the data of every single transaction ever made in the Team Fortress 2 economy.

This opens up a dreamland of possibilities for economists, programmers, social scientists, and a whole host of businessmen.

Consider the current state of economics. The world has been experiencing befuddling economic woes. The “Fed”s of the world have been lowering interest rates to encourage lending to no avail. Governments have been spending to encourage a resistant job market to grow. Countries have been clawing through massive heaps of debt, and economists have been speculating about the collapse of the Euro, one of the world’s largest currencies.

In the midst of all this economists are more or less at a loss. Classical and contemporary economic models fail alike, and the usual tools to combat recession have been all but exhausted. And, not even considering the recent corruption scandals, banks have not contributed much.

One huge factor in the difficulty for economists to conduct a meaningful analysis is that there is simply not enough data. It’s an unavoidable fact of life that many transactions are not recorded. Migrant workers are hired under the table, big corporations reroute their income through circuitous digital pathways to avoid larger tax burdens, and a vast amount of exchanges are simply not available (recorded or not) for economists to use. They are forced to rely on macroscopic estimates and statistical simplifications to predict the future of an increasingly complex system. And so economists measure the growth of most goods-based markets by sampling a “basket” of commonly traded goods and measuring their value. But that doesn’t tell economists what people are buying, and most companies wouldn’t give up their sales data to an outsider anyway.

But Valve’s virtual economy suffers none of those ailments. Since every transaction goes through the Steam network (and through Valve servers), each one can be logged, stored, and retrieved at the flick of a SQL query. As an analogy to a real-world economy, imagine if every single container of milk that was ever bought, sold, traded, or given freely (even in, say, the last year) was recorded in a single spreadsheet. It’s ludicrous, right? And that’s just for one product! In the Team Fortress economy, Valve has this data for every one of their many goods.

But outside of Valve, and aside from being a euphoria-inducing, data-happy wonderland, what good is studying a virtual economy? I think the answer lies in economic knowledge itself. In the Team Fortress economy, the folks at Valve can perform experiments and measure the results. They can provide hard evidence that a particular technique will have a particular effect on their users, and this can provide valuable insight into real world economies. Because remember, these are real people making decisions on the value of various goods. Steam has about 90 million users. At peak hours, there are more than 5 million users playing games, about 60,000 of which are playing Team Fortress 2. Imagine the entire country of Singapore on Steam at the same time, or Ireland, or Costa Rica. Now imagine if the entire population of Flagstaff, Arizona was playing Team Fortress. And now consider the entire country of Germany having Steam accounts. In those reference frames, it’s not hard to imagine the relevance of this digital economy.

Enter Yanis Varoufakis

Valve recently hired a Greek economist by the name of Yanis Varoufakis to analyze their virtual economies, and he’s recording his insights and methods on his new blog, Valve Economics. He’s already shown some fascinating results on the fluctuation of potential for arbitrage schemes: the opportunity for someone to buy cheap goods and immediately resell them for a higher price (a trader’s dream), which also measures the divergence of an economy from its equilibrium. Finally, it seems that the need for generalized statistical models evaporates once you have all the data. The mathematics and economics he explains is completely elementary, and statistics largely gives way to large-scale computational techniques (which he only hints at in his posts). The resulting analysis is concise, clear, and elegant.

Varoufakis’s graph showing arbitrage potential in Team Fortress 2. Height represents potential for arbitrage, and thickness represents volume of transactions.

Varoufakis (a complete stranger to video games) has done a boatload of work in economics, been at the center of the debate over Greece’s debt problems, and now works full-time for Valve exploring virtual economies.

Varoufakis provides more details on the structure of the economy in this interview (starting about 33:00) on Left Business Observer radio, where he elaborates on content creation, bursting bubbles, philosophical economic debates, and economic regulation. I highly encourage the interested reader to listen, because he provides many details and analogies that are omitted on his blog.

Even with only two posts, I’m hooked. And as long as his posts are, I want more: more detail, more economic background, more history, more on his programmatic methods, more of it all! I’d love to learn more about economics from a mature point of view (and in particular mathematical and computational economics), and Varoufakis presents his work in an accessible, engaging way. But even more so, I’d love to get my hands on these awesome datasets.

If any readers know of a good text that would give an introduction to mathematical economics or computational economics, please let me know in the comments. More importantly, if anyone knows of freely available datasets from similarly isolated economies, please let me know! Without data to experiment on, the analytic techniques are largely useless.

Until next time!