Silent Duels and an Old Paper of Restrepo

Two men start running at each other with loaded pistols, ready to shoot!

It’s a foggy morning for a duel. Newton and Leibniz have decided this macabre contest is the only way to settle their dispute over who invented Calculus. Each pistol is fitted with a silencer and has a single bullet. Neither can tell when the other has attempted a shot, unless, of course, they are hit.

Newton and Leibniz both know that the closer they get to their target, the higher the chance of a successful shot. Eventually they’ll be standing nose-to-nose: a guaranteed hit! But the longer each waits, the more opportunity they give their opponent to fire. If they both fire and miss, mild embarrassment ensues and they resolve to try again tomorrow. When should they shoot to maximize their chance at victory?

This is the so-called silent duel problem, and as you might have guessed it can be phrased without any violence.

Two players compete to succeed in taking some action in the interval [0,1]. They are given a function P(t) that describes the probability of success if the action is taken at time t. Since the two men are “running” at each other, P(t) is assumed to be increasing, with P(0) = 0, P(1) = 1. If Player 1 succeeds in their action first, Player 1 gets a dollar (1 unit of utility) from Player 2; if Player 2 succeeds, Player 1 loses a dollar to Player 2. What strategy should they use to maximize their expected payoff?
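To see where the tension comes from, here is Player 1’s expected payoff when both players use pure strategies, Player 1 firing at time x and Player 2 at time y. The shorthand K(x, y) is mine, not necessarily the paper’s; it follows directly from the rules above.

$$ K(x, y) = \begin{cases} P(x) - (1 - P(x))\,P(y) & \text{if } x < y, \\ P(x)(1 - P(y)) - P(y)(1 - P(x)) & \text{if } x = y, \\ (1 - P(y))\,P(x) - P(y) & \text{if } x > y. \end{cases} $$

Each case weighs a hit against the surviving opponent’s chance to hit back. The “silent” assumption means a miss gives the opponent nothing to react to, so a strategy is just a choice of firing time (possibly randomized), not a reaction plan. No fixed firing time is safe against a thinking opponent, which is (as I recall) why the optimal strategies are mixed, and why the solution involves those integrals and discontinuities.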

Yet another phrasing of the problem is that a beautiful young woman is arriving at a train station, and two suitors are competing to pick her up. If she arrives and nobody is there to pick her up, she waits for the first suitor to arrive. If a suitor arrives and the woman is not there, the suitor assumes she has already been picked up and leaves. I like the duel version better, because what self-respecting woman can’t arrange her own ride these days? Either way, neither example has aged well. We should come up with a modern version where people are racing to McDonald’s to get Mulan Szechuan Sauce.

I originally heard about this problem in a game theory course I took in undergrad, coincidentally the same class where I met my wife. See section 3 of the course notes by Anthony Mendes, which neatly describes how to solve the silent duel when P(x) = x. I remember spending a lot of time confused about this problem, and wrote out my solution over and over again until I felt I understood it.

Almost ten years later, I found a renewed interest in the silent duel when a colleague posed the following variant (having no leads on how to solve it). A government agency releases daily financial data concerning the market every morning at 6AM, and gives API access to it. In this version I’ll say that this data describes the demand for wheat and sheep. If you can get this data before anyone else, even an extra few milliseconds gives you an edge in the market for that day. You can buy up all the wheat if there’s a shortage, or short sheep futures if there’s a surplus.

There are two caveats. First caveat: 6AM is not precise because your clock deviates from the data provider’s clock. Maybe there’s a person who has to hit a button, and they took an extra few seconds to take a bite of their morning donut. If you call the API too early, it will respond, “Please try again later.” If you call the API after the data has been released, you receive the data immediately. Second caveat: since everyone is racing to get this data first, the API rate limits you to 6 requests per minute. If you go over, your account is blocked for 12 hours and you can’t get the data at all that day. You need a Scrooge-McDuckian vault of money to afford a new account, so you’re stuck with the one.
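To make the constraints concrete, here’s a minimal sketch of the client side, with a hypothetical fetch() standing in for the real API call. The function names and timing details are made up for illustration; this is not any particular provider’s API.

```python
import time

MAX_CALLS = 6  # the rate limit: more than 6 calls in the minute blocks the account for 12 hours

def attempt_fetch(call_times, fetch):
    """Call `fetch()` at each pre-chosen timestamp, stopping at the first success.

    `call_times` is a sorted list of at most MAX_CALLS unix timestamps inside
    the release window. `fetch()` is a stand-in for the real API call: it
    returns the data if it has been released, and None ("please try again
    later") otherwise.
    """
    for t in call_times[:MAX_CALLS]:
        time.sleep(max(0.0, t - time.time()))  # wait until the chosen call time
        data = fetch()
        if data is not None:
            return data  # success: no need to spend the remaining calls
    return None  # all six "shots" missed; try again tomorrow
```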

Assuming you have enough time to watch when the data gets released, you can construct a cumulative distribution P(t), which for a time t describes the probability that the data has been released before time t, i.e., the probability of success if you call the API at time t. If we assume the distribution falls within a single minute around 6AM, then we see a striking similarity between this problem and a silent duel with six shots. Perhaps it’s not exactly the same, since there are many more than two players in the game, but it’s close. Perhaps you can assume two players, but you (Player 1) get six shots and your opponent (Player 2) gets some larger number.
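Here’s a rough sketch of how you might estimate P(t) empirically, assuming you’ve logged past release times (in seconds after 6AM) over many days. The function name and the sample data are purely illustrative.

```python
import bisect

def empirical_cdf(observed_release_times):
    """Return a function P where P(t) estimates the probability the data is out by time t.

    `observed_release_times` is a list of past release times, measured in
    seconds after 6AM on each observed day.
    """
    samples = sorted(observed_release_times)
    n = len(samples)

    def P(t):
        # fraction of observed days on which the release happened at or before t
        return bisect.bisect_right(samples, t) / n

    return P

# Example usage with made-up data: releases observed 2-20 seconds after 6AM.
P = empirical_cdf([2.1, 3.5, 3.9, 5.0, 7.2, 11.8, 19.6])
print(P(4.0))   # ~0.43: estimated chance a call at 6:00:04 succeeds
print(P(20.0))  # 1.0
```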

I was downright twitterpated to see a natural problem fit so neatly and unexpectedly into a bit of math I remembered, but hadn’t thought about for a decade. The solution was too detailed for me to remember it on the spot (I recall it involved some integrals and curious discontinuities), so I told my colleague I’d go find the paper that solves this problem.

Thus began my journey down the rabbit hole. The first hurdle was that I didn’t know what to call the problem. I found my professor’s notes from the course, but they didn’t provide a definitive name beyond “interval games.” After combing through some textbooks and, more helpfully, crawling back along the graph of citations, I discovered the name silent duel—and noisy duel for the variant where the shooters can hear each other’s attempts. However, no textbook I looked at actually provided a full explanation or proof of the solution. They just said, “optimal strategies have been proven to exist,” or detailed a simplification involving one or two bullets. And after a few more hours of looking I found the title of the original paper that solved this problem in the generality I wanted.

Rodrigo Restrepo. Tactical Problems Involving Several Actions. Contributions to the Theory of Games, Vol. III. 1957.

Unfortunately, I wasn’t able to find a digital copy. I did find a copy being sold on Amazon by a third-party seller. Apparently this seller bought old journal proceedings in bulk from the Bell Laboratories library after they closed down. I bought a copy, and according to the Amazon listing there are only 20-odd copies left.

I was also pleased to see the many recognizable names on the cover.

  • Rabin: Turing Award winner who invented nondeterminism as a computing concept (among many other accomplishments)
  • Gale & Shapley: inventors of the Stable Marriage algorithm, the latter of whom won the Nobel Prize in Economics for subsequent work applying it.
  • Berge: one of the leaders who established graph theory and combinatorics as mathematical disciplines in their own right
  • Karlin: a big name in math for social sciences (think of Arrow’s Impossibility Theorem).
  • Milnor: Fields medalist and heavyweight in differential topology.

I thought about how many of these old papers might be lost to history with no digital record. It’s a shame, because the silent duel is a cool problem, absent from many books, and, prompted by my recent discussions, applicable to software! Rodrigo Restrepo in particular seems to have had no PhD students. He might be a faculty emeritus at the University of British Columbia studying mathematical biology, but I wasn’t able to locate a website (or even a photo!) to cross-check publications. If any UBC math faculty read this, perhaps they can provide more details about who Dr. Restrepo is.

All of this culminated in the inevitable next steps. Buy the manuscript, re-typeset the paper in TeX, grok the theorem and the construction, put the paper on arXiv to make it accessible for the foreseeable future, and then use my newfound knowledge to corner the market on sheep futures once and for all!

I drafted the TeX rewrite (it still has a few typos) and started working through the paper. Then I realized I had committed to publishing my book by the end of 2018. I forced myself to put it aside, and now I’ve returned to study it. I’ll detail my exploration of the paper and the code to implement the solution in subsequent posts. I intend those posts to be as much a narrative of my process of working through a paper as they are about the math itself (to be honest, the paper could be clearer, but I chalk that up to pre-computer-era descriptions of algorithms). In general, I’d like to explore more and different ways to share and explore math on the internet.

In the meantime, intrepid readers can venture forth to see the draft on Github.

Until next time!

A Programmer’s Introduction to Mathematics

For the last four years I’ve been working on a book for programmers who want to learn mathematics. It’s finally done, and you can buy it today.

The website for the book is pimbook.org, which has purchase links—paperback and ebook—and a preview of the first pages. You can see more snippets later in the book on the Amazon listing’s “Look Inside” feature.

If you’re a programmer who wants to learn math, this book is written specifically for you! Why? Because programming and math are naturally complementary, and programmers have a leg up in learning math. Many of the underlying modes of thought in mathematics are present in programming, or are otherwise easy to explain by analogies and contrasts to familiar concepts in software. I leverage that in the book so that you can internalize the insights quickly, and appreciate the nuance more deeply than most books can allow. This book is a bridge from the world of programming to the world of math from the mathematician’s perspective. As far as I know, no other book provides this.

Programs make math more interesting and applicable than otherwise. Typical math writers often hold computation and algorithms at a healthy distance. Not us. We embrace computation as a prize and a principle worth fighting for. Each chapter of the book culminates in an exciting program that applies the mathematical insights from the chapter to an interesting application. The applications include cryptographic schemes, machine learning, drawing hyperbolic tessellations, and a Nobel-prize winning algorithm from economics.

The exercises of the book also push you beyond the book itself. There’s so much math out there that you can’t learn it from a single book. Perspectives and elaborations are spread throughout books, papers, blog posts, wikis, lecture notes, math magazines, and your own scratch paper. This book will prepare you to read a variety of sources by introducing you to the standard language of math, and also push you to engage with those resources.

Finally, this book includes a healthy dose of culture. Quotes and passages from the writings of famous mathematicians, contextual explanations of cultural attitudes, and a light dose of history will provide a peek into why mathematics is the way it is today, and why at times it can seem so confounding to an outsider. Through all this, I will show what progress means for math, what attitudes and patterns will help you along the way, and how to stay sane.

Of course, I couldn’t have written the book without the encouragement and support of you, my readers. Thank you for reading, commenting, and supporting me all these years.

Order the book today! I can’t wait to hear what you think 🙂

Table of Contents for PIM

I am down to the home stretch for publishing my upcoming book, “A Programmer’s Introduction to Mathematics.” I don’t have an exact publication date—I’m self publishing—but after months of editing, I’ve only got two chapters left in which to apply edits that I’ve already marked up in my physical copy. That and some notes from external reviewers, and adding jokes and anecdotes and fun exercises as time allows.

I’m committing to publishing by the end of the year. When that happens I’ll post here and also on the book’s mailing list. Here’s a sneak preview of the table of contents, and a shot of the cover design (still a work in progress).

[Photo: the work-in-progress cover design]

Hanabi: a card game for logicians

Mathematics students often hear about the classic “blue-eyed islanders” puzzle early in their career. If you haven’t seen it, read Terry Tao’s excellent writeup linked above. The solution uses induction and the idea of common knowledge—I know X, and you know that I know X, and I know that you know that I know X, and so on—to make a striking inference from a seemingly useless piece of information.

Recreational mathematics is also full of puzzles involving prisoners wearing colored hats, where they can see others’ hats but not their own, and their goal is to each determine (often with high probability) the color of their own hat. Sometimes they are given the opportunity to convey a limited amount of information.

Over the last eight years I’ve been delighted by the renaissance of independent tabletop board and card games. Games leaning mathematical, like Set, have had a special place in my heart. Sadly, many games of incomplete information fall to an onslaught of logic. One example is the popular game The Resistance (a.k.a. Avalon), in which players with unknown allegiances must either deduce which players are spies, or remain hidden as a spy while foiling a joint goal. With enough mathematicians playing, it can be easy to dictate a foolproof strategy: “If we follow these steps, we can be 100% sure of victory against the spies, so anyone who disagrees with this plan or deviates from it is guaranteed to be a spy.” Though we’re clearly digging a grave for our fun, it’s hard to close Pandora’s logic box once it’s open. So I’m always on the lookout for games that resist being trivialized.

Enter Hanabi.


A friend recently introduced me to the game, which channels the soul and complexion of the blue-eyed islanders and hat-donning prisoners into a delightful card game.

The game has simple rules: each player gets a hand that they may not see, but which is visible to all other players. The hands come from the following set of cards (with more 1’s than 2’s, and the fewest 5’s), and players work together, aiming to place cards from 1-5 in order in each color. It’s like solitaire, where stacks of different colors may progress independently, but a 2 must be placed before a 3.

[Image: the deck of Hanabi cards, showing the counts of each rank in each color]

Then the players take turns, and on each turn a player may do one of the following:

  1. Choose a card from your hand to play. If the chosen card cannot be played (e.g., it’s a red 3 but only a red 1 is on the table), everyone gets a strike. Three strikes ends the game in a loss.
  2. Use an information token (limited in supply) to give one piece of information to one other player; the allowed types of information are explained below.
  3. Choose a card from your hand to discard, and regain an information token for future use.

The information you can give to a player consists of choosing a single feature (a specific rank or color) and pointing to all cards in that player’s hand that have that feature. Example: “these two cards are green,” or “this card is a 4.” House rules dictate whether “no cards are blue” is a valid piece of information. Officially (and I like to think it’s in the spirit of the blue-eyed islanders puzzle, where “someone has blue eyes”) you must be able to point at something to reveal information about it.
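The literal rules are simple enough to model directly. Here’s a tiny sketch of the play check and the clue mechanic in Python; the card representation and function names are my own, purely for illustration, not any official spec.

```python
from collections import namedtuple

Card = namedtuple("Card", ["color", "rank"])  # e.g., Card("blue", 2)

def is_playable(card, stacks):
    """A card is playable if it's the next rank for its color.
    `stacks` maps each color to the highest rank played so far (0 if none)."""
    return stacks.get(card.color, 0) == card.rank - 1

def give_clue(hand, feature):
    """Return the positions of all cards in `hand` matching a clued feature.
    `feature` is either a color (str) or a rank (int); the clue must point
    at at least one card to be legal under the official rules."""
    positions = [i for i, card in enumerate(hand)
                 if feature in (card.color, card.rank)]
    if not positions:
        raise ValueError("A clue must identify at least one card.")
    return positions

# Example: with a blue 1 on the table, a blue 2 is playable but a blue 3 is not.
stacks = {"blue": 1}
print(is_playable(Card("blue", 2), stacks))  # True
print(is_playable(Card("blue", 3), stacks))  # False (playing it would be a strike)
print(give_clue([Card("red", 4), Card("blue", 2), Card("blue", 4)], "blue"))  # [1, 2]
```

Of course, none of this captures the hard part of the game, which is what a clue implies beyond the cards it literally points at.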

So the game involves some randomness (the draw), and some resource management (the information tokens), but the heart of the game is figuring out how to convey as much information as possible in a single clue.

Just like the blue-eyed islanders puzzle, giving a public piece of information to one player can indicate much more. Imagine there are 4 players. I can see your hand, but if, after looking at your hand, I decide instead to give Blair a clue, that tells you that what’s in Blair’s hand is more valuable for me to reveal to her than what’s in your hand would be to reveal to you.

Another trick: say I know I have a 4, and say it’s the beginning of the game where 4’s are not playable, and the board has a blue 1 on it. If you play before me and you tell me that that same 4 and a second card are both blue, what does that tell me? It was certainly somewhat redundant: you told me more information about a card I knew was not playable, and seemingly not super-helpful information about a second card. After some reflection you can often infer that not only is the second card a blue 2, but also that you have at least one more 2 elsewhere in your hand that’s not immediately playable. That’s a lot of information!

The idea of common knowledge takes it down a rabbit hole that I haven’t quite gotten my head around, but which makes the game continually fun. If I know that you know that I can infer the above scenario with the blue 2, then you not giving me that clue tells me that either that situation isn’t present in my hand, or else that whatever information you’re instead giving to Matthieu is a higher priority. The more the group can understand to be commonly inferable (say, discussing strategies before starting the game), the more one can take advantage of common knowledge. The game starts to feel like a logical olympiad, where your worst enemy is your fallible memory, and if people aren’t playing at the same level, relying too much on an inference your teammate didn’t intend can cause grave mistakes!

It’s a guaranteed hit at your next gathering of logic-loving mathemalites!