A Working Mathematician’s Guide to Parsing

Our hero, a mathematician, is writing notes in LaTeX and needs to convert them to a format that her blog platform accepts. She’s used to using dollar sign delimiters for math mode, but her blog requires \( \) and \[ \]. Find-and-replace fails because it doesn’t know which dollar sign starts math mode and which ends it. She knows there’s some computer stuff out there that could help, but she doesn’t have the damn time to sort through it all. Paper deadlines, argh!

If you want to skip ahead to a working solution, and you can run basic python scripts, see the GitHub repository for this post, which includes details on how to run it and what the output looks like. It’s about 30 lines of code, and maybe 10 of those lines are not obvious. Alternatively, copy/paste the code posted inline in the section “First attempt: regular expressions.”

In this post I’ll guide our hero through the world of text manipulation, explain the options for solving a problem like this, and finally explain how to build the program in the repository from scratch. This article assumes you have access to a machine that has basic programming tools pre-installed on it, such as python and perl.

The problem with LaTeX

LaTeX is great, don’t get me wrong, but people who don’t have experience writing computer programs that operate on human input tend to write sloppy LaTeX. They don’t anticipate that they might need to programmatically modify the file because the need was never there before. The fact that many LaTeX compilers are relatively forgiving with syntax errors exacerbates the issue.

The most common way to enter math mode is with dollar signs, as in

Now let $\varepsilon > 0$ and set $\delta = 1/\varepsilon$.

For math equations that must be offset, one often first learns to use double-dollar-signs, as in

First we claim that $$0 \to a \to b \to c \to 0$$ is a short exact sequence

The specific details that make it hard to find and convert from this delimiter type to another are:

  1. Math mode can be broken across lines, but need not be.
  2. A simple search and replace for $ would conflict with $$.
  3. The fact that the start and end are symmetric means a simple search and replace for $$ fails: you can’t tell whether to replace it with \[ or \] without knowing the context of where it occurs in the document.
  4. You can insert a dollar sign in LaTeX using \$ and it will not enter math mode. (I do not solve this problem in my program, but leave it as an exercise to the reader to modify each solution to support this)
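For concreteness, here is a small hypothetical input file (not from the repository) exhibiting all four cases at once:

```latex
Now let $\varepsilon > 0$ and note that
$$ \int_0^1 f(x)\, dx
   = \varepsilon $$
by the claim above. The proof costs \$5.
```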

First attempt: regular expressions

The first thing most programmers will tell you when you have a text manipulation problem is to use regular expressions (or regexes). Regular expressions are text patterns that a program called a regular expression engine uses to find matching pieces of text in a document. Often the goal is to modify the matched text somehow, but sometimes it’s just to find where the text occurs and generate a report.

In their basic form, regular expressions are based on a very clean theory called regular languages, which is a kind of grammar equivalent to “structure that can be recognized by a finite state machine.”

[Aside: some folks prefer to distinguish between regular expressions as implemented by software systems (regex) and regular expressions as a representation of a regular language; as it turns out, features added to regex engines make them strictly stronger than what can be represented by the theory of regular languages. In this post I will use “regex” and “regular expressions” both for practical implementations, because programmers and software don’t talk about the theory, and use “regular languages” for the CS theory concept]

The problem is that practical regular expressions are difficult and nit-picky, especially when there are exceptional cases to consider. Even matching something as simple as a date can require a finicky expression that’s hard for humans to read, and to debug when it’s incorrect. Here is a regular expression for a line in a file that contains a date by itself:

^\s*(19|20)\d\d[- /.](0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])\s*$
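To see how such a pattern behaves in practice, here is a quick check with Python’s re module (the sample dates are made up):

```python
import re

# The date-line pattern from above, anchored to a whole line.
DATE_LINE = re.compile(
    r'^\s*(19|20)\d\d[- /.](0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])\s*$')

print(bool(DATE_LINE.match('2019-04-20')))  # True
print(bool(DATE_LINE.match('2019-13-01')))  # False: there is no month 13
```

Note how hard it is to see at a glance that months 10, 11, and 12 are covered by the alternative 1[012].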

Worse, regular expressions are rife with “gotchas” having to do with escape characters. For example, parentheses are used for something called “capturing,” so you have to use \( to insert a literal parenthesis. If having to use “\$” in LaTeX bothers you, you’ll hate regular expressions.

Another issue comes from history. There was a time when computers only allowed you to edit a file one line at a time. Many programming tools and frameworks invented during this time continue to be used today (you may have heard of sed, a popular regular expression find/replace program—one I use almost daily). These tools struggle with problems that span many lines of a file, because they simply weren’t designed for that. Problem (1) above suggests this limitation will bite us here.

Yet another issue is in slight discrepancies between regex engines. Perl, python, sed, etc., all have slight variations and “nonstandard” features. As all programmers know, every visible behavior of a system will eventually be depended on by some other system.

But the real core problem is that regular expressions weren’t designed to know about the context in which a match occurs; they are designed for character-at-a-time pattern matching. Over the years, regex engines have added features to handle context (which make them more powerful than the original, formal definition of a regular language, and in some ways more powerful than what parsers can handle!), but the more complicated you make a regular expression, the more likely it will misbehave on odd inputs, and the less likely others can use it without bugs or modification for their particular use case. Software engineers care very much about such things, though mathematicians needing a one-off solution may not.

One redeeming feature of regular expressions is that—by virtue of being so widely used in industry—there are many tools to work with them. Every major programming language has a regular expression engine built in. And many websites help explain how regexes work. regexr.com is one I like to use. Here is an example of using that website to replace offset mathmode delimiters. Note the “Explain” button, which traces the regex engine as it looks for matches.

[Screenshot: regexr.com replacing offset mathmode delimiters, with the “Explain” trace shown]

So let’s apply this to our problem: we can use two regular expressions to solve it. I’m using the perl programming language because its regex engine supports multiline matches. macOS and most Linux systems come with perl pre-installed.

 perl -0777 -pe 's/\$\$(.*?)\$\$/\\[\1\\]/gs' < test.tex | perl -0777 -pe 's/\$(.*?)\$/\\(\1\\)/gs' > output.tex

Now let’s explain, starting with the core expressions being matched and replaced.

s/X/Y/ tells a regex engine to “substitute regex matches of X with Y”. In the first regex X is \$\$(.*?)\$\$, which breaks down as

  1. \$\$ match two literal dollar signs
  2. ( capture a group of characters signified by the regex between here and the matching closing parenthesis
    1. .* zero or more of any character
  2. ? looking at the “zero or more” in the previous step, match as few characters as possible while still letting the whole pattern succeed
  3. ) stop the capture group, and save it as group 1
  4. \$\$ match two more literal dollar signs

Then Y is the chosen replacement. We’re processing offset mathmode, so we want \[ \]. Y is \\[\1\\], which means

  1. \\ a literal backslash
  2. [ a literal open bracket
  3. \1 the first capture group from the matched expression
  4. \\ a literal backslash
  5. ] a literal close bracket

All together we have s/\$\$(.*?)\$\$/\\[\1\\]/, but then we add the final s and g characters, which act as configuration. The “s” tells the regex engine to allow the dot . to match newlines (so a pattern can span multiple lines) and the “g” tells the regex engine to apply the substitution globally, to every match it sees—as opposed to just the first.

Finally, the full first command is

perl -0777 -pe 's/\$\$(.*?)\$\$/\\[\1\\]/gs' < test.tex 

This tells perl to read in the entire test.tex file and apply the regex to it. Broken down

  1. perl run perl
  2. -0777 read the entire file into one string. If you omit it, perl will apply the regex to each line separately.
  3. -p will make perl automatically “read input and print output” without having to tell it to with a “print” statement
  4. e tells perl to run the following command line argument as a program.
  5. < test.tex tells perl to use the file test.tex as input to the program (as input to the regex engine, in this case).

Then we pipe the output of this first perl command to a second one that does a very similar replacement for inline math mode.

<first_perl_command> | perl -0777 -pe 's/\$(.*?)\$/\\(\1\\)/gs'

The vertical bar | tells the shell executing the commands to take the output of the first program and feed it as input to the second program, allowing us to chain together sequences of programs. The second command does the same thing as the first, but replacing $ with \( and \). Note, it was crucial we had this second program occur after the offset mathmode regex, since $ would match $$.
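The same two-pass pipeline can be sketched with Python’s re module, for readers who prefer it to perl (a minimal translation, not the repository’s program):

```python
import re

def convert_delimiters(tex):
    # Offset math mode first, so the $-pass below never sees a $$ pair.
    tex = re.sub(r'\$\$(.*?)\$\$', r'\\[\1\\]', tex, flags=re.S)
    # Then inline math mode; re.S lets . span newlines, like perl's /s.
    tex = re.sub(r'\$(.*?)\$', r'\\(\1\\)', tex, flags=re.S)
    return tex

print(convert_delimiters('Let $x=0$ and $$a + b$$.'))
# Let \(x=0\) and \[a + b\].
```

The order of the two re.sub calls matters for exactly the same reason the order of the perl commands does.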

Exercise: Adapt this solution to support Problem (4), support for literal \$ dollar signs. Hint: you can either try to upgrade the regular expression to not be tricked into thinking \$ is a delimiter, or you can add extra programs before that prevent \$ from being a problem. Warning: this exercise may cause a fit.

It can feel like a herculean accomplishment to successfully apply regular expressions to a problem. You can appreciate the now-classic programmer joke from the webcomic xkcd:

[xkcd comic: “Regular Expressions”]

However, as you can tell, getting regular expressions right is hard and takes practice. It’s great when someone else solves your problem exactly, and you can copy/paste for a one-time fix. But debugging regular expressions that don’t quite work can be excruciating. There is another way!

Second attempt: using a parser generator

While regular expressions have a clean theory of “character by character stateless processing”, they are limited. It’s not possible to express the concept of “memory” in a regular expression, and the simplest example of this is the problem of counting. Suppose you want to find strings that constitute valid, balanced parentheticals. E.g., this is balanced:

(hello (()there)() wat)

But this is not

(hello ((there )() wat)

This is impossible for regexes to handle because counting the opening parentheses is required to match the closing parens, and regexes can’t count arbitrarily high. If you want to parse and manipulate structures like this, that have balance and nesting, regex will only bring you heartache.
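Here is a sketch of what that “counting” looks like in code; with only one kind of parenthesis, the stack a parser would use degenerates to a counter:

```python
def balanced(s):
    # depth is the "memory" that regular expressions lack:
    # it counts currently-open parentheses.
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a close with no matching open
                return False
    return depth == 0          # every open was eventually closed

print(balanced('(hello (()there)() wat)'))   # True
print(balanced('(hello ((there )() wat)'))   # False
```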

The next level up from regular expressions and regular languages are the two equivalent theories of context-free grammars and pushdown automata. A pushdown automaton is literally a finite state machine (the machine underlying a regular expression) equipped with a simple kind of memory called a stack. Rather than dwell on the mechanics, we’ll see how context-free grammars work, since if you can express your document as a context-free grammar, a tool called a parser generator will give you a parsing program for free. Then a few simple lines of code allow you to manipulate the parsed representation and produce the output document.

The standard (abstract) notation of a context-free grammar is called Extended Backus-Naur Form (EBNF). It’s a “metasyntax”, i.e., a syntax for describing syntax. In EBNF, you describe rules and terminals. Terminals are constant patterns, like

OFFSETOPEN = "\["
OFFSETCLOSE = "\]"

A rule is an “or” of sequences of other rules or terminals. It’s much easier to show an example:

offset_mathmode = OFFSETOPEN char OFFSETCLOSE
char = "a" | "b" | "c"

The above describes the structure of any string that looks like offset math mode, but with a single “a” or a single “b” or a single “c” inside, e.g, “\[b\]”. You can see some more complete examples on Wikipedia, though they use a slightly different notation.

With some help from a practical library’s built-in identifiers for things like “arbitrary text” we can build a grammar that covers all of the ways to do latex math mode.

latex = content
content = content mathmode content | TEXT | EMPTY
mathmode = INLINE TEXT INLINE
         | INLINEOPEN TEXT INLINECLOSE
         | OFFSETDOLLAR TEXT OFFSETDOLLAR
         | OFFSETOPEN TEXT OFFSETCLOSE


Here we’re taking advantage of the fact that we can’t nest mathmode inside of mathmode in LaTeX (you probably can, but I’ve never seen it), by defining the mathmode rule to contain only text, and not other instances of the “content” rule. This rules out some ambiguities, such as whether “$x$ y $z$” is a nested mathmode or not.

We may not need the counting powers of context-free grammars, yet EBNF is easier to manage than regular expressions. You can apply context-sensitive rules to matches, whereas with regexes that would require coordination between separate passes. The order of operations is less sensitive; because the parser generator knows about all patterns you want to match in advance, it will match longer terminals before shorter—more ambiguous—terminals. And if we wanted to do operations on all four kinds of math mode, this allows us to do so without complicated chains of regular expressions.

The history of parsers is long and storied, and the theory of generating parsing programs from specifications like EBNF is basically considered a solved problem. However, there are a lot of parser generators out there. And, like regular expression engines, they each have their own flavor of EBNF—or, as is more popular nowadays, they have you write your EBNF using the features of the language the parser generator is written in. And finally, a downside of using a parser generator is that you then have to write a program to operate on the parsed representation (which also differs by implementation).

We’ll demonstrate this process by using a Python library that, in my opinion, stays pretty faithful to the EBNF heritage. It’s called lark and you can pip-install it as

pip install lark-parser

Note: the hard-core industry standard parser generators are antlr, lex, and yacc. I would not recommend them for small parsing jobs, but if you’re going to do this as part of a company, they are weighty, weathered, well-documented—unlike lark.

Lark is used entirely inside python, and you specify the EBNF-like grammar as a string. For example, ours is

tex: content+

?content: mathmode | text+

mathmode: OFFSETDOLLAR text+ OFFSETDOLLAR
        | OFFSETOPEN text+ OFFSETCLOSE
        | INLINEOPEN text+ INLINECLOSE
        | INLINE text+ INLINE

INLINE: "$"
INLINEOPEN: "\\("
INLINECLOSE: "\\)"
OFFSETDOLLAR: "$$"
OFFSETOPEN: "\\["
OFFSETCLOSE: "\\]"

?text: /./s

You can see the similarities with our “raw” EBNF. The main differences are the use of + for matching “one or more” of a rule, and the use of a regular expression to define the “text” rule as any character (here again the trailing “s” allows the dot to match newline characters). The backslashes are needed because backslash is an escape character in Python. Finally, the question mark tells lark to compress the tree when a rule only matches one item (you can see the difference by playing with our display-parsed-tree.py script, which shows the parsed representation of the input document). You can read more in lark’s documentation about the structure of the parsed tree as python objects (Tree for rule/terminal matches and Token for individual characters).

For the input “Let $x=0$”, the parsed tree is as follows (note that the ? makes lark collapse the many “text” matches):

Tree(tex,
  [Tree(content,
    [Token(__ANON_0, 'L'),
     Token(__ANON_0, 'e'),
     Token(__ANON_0, 't'),
     Token(__ANON_0, ' ')]),
   Tree(mathmode,
    [Token(INLINE, '$'),
     Token(__ANON_0, 'x'),
     Token(__ANON_0, '='),
     Token(__ANON_0, '0'),
     Token(INLINE, '$')]),
   Token(__ANON_0, '\n')])

So now we can write a simple python program that traverses this tree and converts the delimiters. The entire program is on Github, but the core is

def join_tokens(tokens):
    return ''.join(x.value for x in tokens)

def handle_mathmode(tree_node):
    '''Switch on the different types of math mode, convert
       the delimiters to the desired output, and concatenate the
       text between them.'''
    starting_delimiter = tree_node.children[0].type

    if starting_delimiter in ['INLINE', 'INLINEOPEN']:
        return '\\(' + join_tokens(tree_node.children[1:-1]) + '\\)'
    elif starting_delimiter in ['OFFSETDOLLAR', 'OFFSETOPEN']:
        return '\\[' + join_tokens(tree_node.children[1:-1]) + '\\]'
    else:
        raise Exception("Unsupported mathmode type %s" % starting_delimiter)

def handle_content(tree_node):
    '''Each child is a Token whose text we'd like to concatenate.'''
    return join_tokens(tree_node.children)

The rest of the program uses lark to create the parser, reads the file from standard input, processes the parsed representation, and outputs the converted document to standard output. You can use the program like this:

python convert-delimiters.py < input.tex > output.tex
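To experiment with the traversal logic without installing lark, you can mimic its tree with stand-ins (the namedtuples below are simplified stand-ins, not lark’s real classes):

```python
from collections import namedtuple

# Simplified stand-ins for lark's Tree and Token classes.
Tree = namedtuple('Tree', ['data', 'children'])
Token = namedtuple('Token', ['type', 'value'])

def join_tokens(tokens):
    return ''.join(x.value for x in tokens)

def handle_mathmode(tree_node):
    # Switch on the starting delimiter's terminal type.
    start = tree_node.children[0].type
    if start in ('INLINE', 'INLINEOPEN'):
        return '\\(' + join_tokens(tree_node.children[1:-1]) + '\\)'
    elif start in ('OFFSETDOLLAR', 'OFFSETOPEN'):
        return '\\[' + join_tokens(tree_node.children[1:-1]) + '\\]'
    raise ValueError('Unsupported mathmode type %s' % start)

# The mathmode subtree from the "Let $x=0$" example above.
node = Tree('mathmode', [
    Token('INLINE', '$'), Token('__ANON_0', 'x'),
    Token('__ANON_0', '='), Token('__ANON_0', '0'),
    Token('INLINE', '$')])
print(handle_mathmode(node))  # \(x=0\)
```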

Exercise: extend this grammar to support literal dollar signs via \$, passing them through to the output document unchanged.

What’s better?

I personally prefer regular expressions when the job is quick. If my text manipulation rule fits on one line, or can be expressed without requiring “look ahead” or “look behind” rules, regex is a winner. It’s also a winner when I only expect it to fail in a few exceptional cases that can easily be detected and fixed by hand. It’s faster to write a scrappy regex, and then open the output in a text editor and manually fix one or two mishaps, than it is to write a parser.

However, the longer I spend on a regular expression problem—and the more frustrated I get wrestling with it—the more I think I should have used a parser all along. This is especially true for massive jobs, such as converting delimiters in hundreds of blog articles, each thousands of words long, or making changes across all chapter files of a book.

When I need something in between rigid structure and quick-and-dirty, I actually turn to vim. Vim has this fantastic philosophy of “act, repeat, rewind” wherein you find an edit that applies to the thing you want to change, then you search for the next occurrence of the start, try to apply the change again, visually confirm it does the right thing, and if not go back and correct it manually. Learning vim is a major endeavor (for me it feels lifelong, as I’m always learning new things), but since I spend most of my working hours editing structured text the investment and philosophy has paid off.

Until next time!

DIY Tracking Apps with Google Forms

It’s pretty well known now that the average mobile app sucks up as much information as it can about you and sells your data to shadowy nefarious organizations. My wife recently installed a sort of “health” tracking app and immediately got three unwanted subscriptions to magazines with names like “Shape.” We have no idea how to cancel them, or if we’ll eventually get an invoice demanding payment. This is a best case scenario, with worse cases being that your GPS location is tracked, your phone number is sold to robocallers, or your information is leaked to make it easier for hackers to break into your email account. All because you play Mobile Legends: Bang Bang!

But being able to track and understand your habits is a good thing. It encourages you to be healthier, more financially responsible, or to do more ultimately gratifying activities outside of staring at your phone or computer. If a tracker app is the difference between an alcoholic sticking to their AA plan and a relapse, you shouldn’t have to give up your privacy for it.

Rather than sell my data for convenience, I’ve recently started to make my own tracking apps. Here’s how:

  1. Make a Google Form for entering data.
  2. Analyze that data in the linked spreadsheet.

Here’s an example I made as a demo, but which has a real analogue that I use to track bullshit work I have to do at my job, and how long it would take to avoid it. When the amount of time wasted exceeds the time for a permanent fix, I can justify delaying other work. It’s called the Churn Log.

[Screenshot: the Churn Log Google Form]

And the linked spreadsheet with the raw response data looks like:

[Screenshot: the linked spreadsheet of raw responses]

These are super fast to make, and have a number of important benefits:

  • I can make them for whatever purpose I want; I don’t need to wait for some software engineers to happen to make an app that fits my needs. One other example I made is a “gift idea log.” I don’t think anyone will ever make this app.
  • It lives on my phone just like other apps, since (on Android) you can save a link to a webpage as an icon as if it were a native app.
  • It’s fast and uses minimal data.
  • You can use it trivially with family members and friends.
  • I get an incentive to become a spreadsheet wizard, which makes me better at my job.

The downside is that it’s not as convenient as a fully automated app. For instance, a finance tracker app can connect to your credit card account to automatically extract purchase history and group it into food, bills, etc. But then again, if I just want a tracker for my food purchases I’d have to give up my book purchases, my alcohol purchases (so many expensive liqueurs), and my obsession with bowties. With the Google Form method, I can quickly enter some data when I’m checking out at the grocery store or paying a check when dining out, and then when I’m interested I can go into the spreadsheet, make a chart or compute some averages, and I have 90% of the insight I care about.

But wait, doesn’t Google then have all your data? Can’t it sell it and send you unwanted magazines?

You’re right that, technically, Google gets all the data you enter. But with Mobile Legends: Health Tracker! (not a real app) they get to pick the structure of your entered data, so they know exactly what you’re entering. Since Google Forms lets you build a form with arbitrary semantics, it’s virtually impossible that enough people will choose the exact same structure for Google to feasibly make sense of it.

And even if Google wanted to be evil and sell your self-tracked data, it wouldn’t be cost effective for Google to do so. The amount of work required to construct a lucrative interpretation of the random choices that humans make in building their own custom tracker app would far outweigh the gains from selling the data. The only reason that little apps like Mobile Legends Health Tracker can make money selling your data is that they suck up system metrics in a structured format whose semantics are known in advance. Disclosure: I work for Google, they aren’t paying me to write this—I honestly believe it’s a good idea—and having seen Google’s project management and incentive structure from the inside, I feel confident that custom tracker app data isn’t worthwhile enough to invest in parsing and exploiting. Not even to mention how much additional scrutiny Google gets from regulators.

While making apps like this I’ve actually learned a ton about spreadsheets that I never knew. For example, you can select an infinite range—e.g., an entire column—and make a chart that will auto-update as the empty cells get filled with new data. You can also create static references to named cells to act as configuration constants using dollar signs.
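For example (the cell addresses here are hypothetical), an open-ended range keeps a formula or chart auto-updating, while dollar signs make an absolute reference that stays pinned when a formula is copied down:

```
=AVERAGE(B2:B)    averages an entire column, auto-updating as new rows arrive
=B2*$D$1          $D$1 stays fixed on a configuration constant when copied
```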

Even better, Google Sheets has two ways to interact with it externally. You can write Google Apps Script, a flavor of Javascript that allows you to do things like send email alerts under certain conditions. E.g., if you tracked your dining budget you could get an email alert when you’re getting close to the limit. Or you could go full engineer and use the Google Sheets Python API to write whatever program you want to analyze your data. I sketched out a prototype scheduler app where the people involved entered their preferences via a Google Form, and I ran a Python script to pull the data and find a good arrangement that respected people’s preferences. That’s not a tracker app, but you can imagine arbitrarily complicated analysis of your own tracked data.

The beauty of this method is that it puts the power back in your hands, and has a gradual learning curve. If you or your friend has never written programs before, this gives an immediate and relevant application. You can start with simple spreadsheet tools (SUM and IF macros, charting and cross-sheet references), graduate to Apps script for scheduled checks and alerts, and finally to a fully fledged programming language, if the need arises.

I can’t think of a better way to induct someone into the empowering world of Automating Tedious Crap and gaining insights from data. We as programmers (and generally tech-inclined people) can help newcomers get set up. And my favorite part: most useful analyses require learning just a little bit of math and statistics 🙂

Silent Duels and an Old Paper of Restrepo

Two men start running at each other with loaded pistols, ready to shoot!

It’s a foggy morning for a duel. Newton and Leibniz have decided this macabre contest is the only way to settle their dispute over who invented Calculus. Each pistol is fitted with a silencer and has a single bullet. Neither can tell when the other has attempted a shot, unless, of course, they are hit.

Newton and Leibniz both know that the closer they get to their target, the higher the chance of a successful shot. Eventually they’ll be standing nose-to-nose: a guaranteed hit! But the longer each waits, the more opportunity they give their opponent to fire. If they both fire and miss, mild embarrassment ensues and they resolve to try again tomorrow. When should they shoot to maximize their chance at victory?

This is the so-called silent duel problem, and as you might have guessed it can be phrased without any violence.

Two players compete to succeed in taking some action in the interval [0,1]. They are given a function P(t) that describes the probability of success if the action is taken at time t. Since the two men are “running” at each other, P(t) is assumed to be increasing, with P(0) = 0, P(1) = 1. If Player 1 succeeds in their action first, Player 1 gets a dollar (1 unit of utility) from Player 2; if Player 2 succeeds, Player 1 loses a dollar to Player 2. What strategy should they use to maximize their expected payoff?

Yet another phrasing of the problem is that a beautiful young woman is arriving at a train station, and two suitors are competing to pick her up. If she arrives and nobody is there to pick her up, she waits for the first suitor to arrive. If a suitor arrives and the woman is not there, the suitor assumes she has already been picked up and leaves. I like the duel version better, because what self-respecting woman can’t arrange her own ride these days? Either way, neither example has aged well. We should come up with a modern version where people are racing to McDonald’s to get Mulan Szechuan Sauce.

I originally heard about this problem in a game theory course I took in undergrad, coincidentally the same class where I met my wife. See section 3 of the course notes by Anthony Mendes, which neatly describes how to solve the silent duel when P(x) = x. I remember spending a lot of time confused about this problem, and wrote out my solution over and over again until I felt I understood it.
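For the record, the classic answer in the P(t) = t case is that both players fire at a random time drawn from the density f(t) = 1/(4t³) on [1/3, 1]. Here is a Monte Carlo sanity check I sketched (my own code, not from the notes); the sampler comes from inverting the CDF F(t) = 9/8 − 1/(8t²):

```python
import random

def sample_fire_time(rng):
    # Inverse-CDF sampling for the density f(t) = 1/(4 t^3) on [1/3, 1]:
    # solving u = 9/8 - 1/(8 t^2) for t gives t = 1/sqrt(9 - 8u).
    u = rng.random()
    return 1.0 / (9 - 8 * u) ** 0.5

def duel(rng):
    # One silent duel with both players using the strategy above.
    # Returns Player 1's payoff: +1 win, -1 loss, 0 if both miss.
    t1, t2 = sample_fire_time(rng), sample_fire_time(rng)
    first, second = sorted([(t1, 1), (t2, 2)])
    if rng.random() < first[0]:      # earlier shooter hits with prob P(t) = t
        return 1 if first[1] == 1 else -1
    if rng.random() < second[0]:     # otherwise the later shooter gets a try
        return 1 if second[1] == 1 else -1
    return 0                         # mild embarrassment; try again tomorrow

rng = random.Random(0)
payoff = sum(duel(rng) for _ in range(100000)) / 100000
print(payoff)  # close to 0, as the symmetry of the game demands
```

The check that the average payoff is near zero only confirms symmetry; the real content of the solution is the support [1/3, 1] and the density itself.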

Almost ten years later, I found a renewed interest in the silent duel when a colleague posed the following variant (having no leads on how to solve it). A government agency releases daily financial data concerning the market every morning at 6AM, and gives API access to it. In this version I’ll say that this data describes the demand for wheat and sheep. If you can get this data before anyone else, even an extra few milliseconds gives you an edge in the market for that day. You can buy up all the wheat if there’s a shortage, or short sheep futures if there’s a surplus.

There are two caveats. First caveat: 6AM is not precise because your clock deviates from the data provider’s clock. Maybe there’s a person who has to hit a button, and they took an extra few seconds to take a bite of their morning donut. If you call the API too early, it will respond, “Please try again later.” If you call the API after the data has been released, you receive the data immediately. Second caveat: since everyone is racing to get this data first, the API rate limits you to 6 requests per minute. If you go over, your account is blocked for 12 hours and you can’t get the data at all that day. You need a Scrooge-McDuckian vault of money to afford a new account, so you’re stuck with the one.

Assuming you have enough time to watch when the data gets released, you can construct a cumulative distribution P(t), which for a time t gives the probability that the data has been released before time t; i.e., the probability of success if you call the API at time t. If we assume the distribution falls within a single minute around 6AM, then we see a striking similarity between this problem and a silent duel with six shots. Perhaps it’s not exactly the same, since there are many more than two players in the game, but it’s close. Perhaps you can assume two players, but you (Player 1) get six shots and your opponent (Player 2) gets some larger number.
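Constructing that P(t) from observations is simple; here is a sketch (the release times below are made up):

```python
import bisect

def empirical_cdf(release_times):
    # release_times: observed release offsets (seconds after 6AM),
    # one per day of watching. Returns P, where P(t) is the fraction
    # of observed days on which the data was already out by time t.
    xs = sorted(release_times)
    def P(t):
        return bisect.bisect_right(xs, t) / len(xs)
    return P

P = empirical_cdf([3.1, 4.5, 2.0, 8.2, 5.5])
print(P(5.0))  # 0.6: three of the five observed releases happened by t = 5
```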

I was downright twitterpated to see a natural problem fit so neatly and unexpectedly into a bit of math I remembered, but hadn’t thought about for a decade. The solution was too detailed for me to remember it on the spot (I recall it involved some integrals and curious discontinuities), so I told my colleague I’d go find the paper that solves this problem.

Thus began my journey down the rabbit hole. The first hurdle was that I didn’t know what to call the problem. I found my professor’s notes from the course, but they didn’t provide a definitive name beyond “interval games.” After combing through some textbooks and, more helpfully, crawling back along the graph of citations, I discovered the name silent duel—and noisy duel for the variant where the shooters can hear each other’s attempts. However, no textbook I looked at actually provided a full explanation or proof of the solution. They just said, “optimal strategies have been proven to exist,” or detailed a simplification involving one or two bullets. And after a few more hours of looking I found the title of the original paper that solved this problem in the generality I wanted.

Rodrigo Restrepo. Tactical Problems Involving Several Actions. Contributions to the Theory of Games, Vol. III. 1957.

Unfortunately, I wasn’t able to find a digital copy. I did find a copy being sold on Amazon by a third-party seller. Apparently this seller bought old journal proceedings in bulk from the Bell Laboratories library after they closed down. I bought a copy, and according to the Amazon listing there are only 20-odd copies left.

I was also pleased to see the many recognizable names on the cover.

  • Rabin: Turing Award winner who invented nondeterminism as a computing concept (among many other accomplishments)
  • Gale & Shapley: inventors of the Stable Marriage algorithm, the latter of whom won the Nobel Prize in Economics for subsequent work applying it.
  • Berge: one of the leaders who established graph theory and combinatorics as mathematical disciplines in their own right
  • Karlin: a big name in math for social sciences (think of Arrow’s Impossibility Theorem).
  • Milnor: Fields medalist and heavyweight in differential topology.

I thought about how many of these old papers might be lost to history with no digital record. It’s a shame, because the silent duel is a cool problem, absent from many books, and, prompted by my recent discussions, applicable to software! Rodrigo Restrepo in particular seems to have had no PhD students. He might be a faculty emeritus at the University of British Columbia studying mathematical biology, but I wasn’t able to locate a website (or even a photo!) to cross-check publications. If any UBC math faculty read this, perhaps they can provide more details about who Dr. Restrepo is.

All of this culminated in the inevitable next steps. Buy the manuscript, re-typeset the paper in TeX, grok the theorem and the construction, put the paper on arXiv to make it accessible for the foreseeable future, and then use my newfound knowledge to corner the market on sheep futures once and for all!

I drafted the TeX rewrite (it still has a few typos), and started working through the paper. Then I realized I had committed to publishing my book by the end of 2018. I forced myself to put it aside, and now I’ve returned to study it. I’ll detail my exploration of the paper and the code to implement the solution in subsequent posts. I intend those posts to be as much a narrative of my process of working through a paper as they are about the math itself (to be honest, the paper could be clearer, but I chalk that up to pre-computer-era descriptions of algorithms). In general, I’d like to explore more and different ways to share and explore math on the internet.

In the meantime, intrepid readers can venture forth to see the draft on Github.

Until next time!

A Programmer’s Introduction to Mathematics

For the last four years I’ve been working on a book for programmers who want to learn mathematics. It’s finally done, and you can buy it today.

The website for the book is pimbook.org, which has purchase links—paperback and ebook—and a preview of the first pages. You can see more snippets from later in the book via the Amazon listing’s “Look Inside” feature.

If you’re a programmer who wants to learn math, this book is written specifically for you! Why? Because programming and math are naturally complementary, and programmers have a leg up in learning math. Many of the underlying modes of thought in mathematics are present in programming, or are otherwise easy to explain by analogies and contrasts to familiar concepts in software. I leverage that in the book so that you can internalize the insights quickly, and appreciate the nuance more deeply than most books can allow. This book is a bridge from the world of programming to the world of math from the mathematician’s perspective. As far as I know, no other book provides this.

Programs make math more interesting and applicable than it would otherwise be. Typical math writers often hold computation and algorithms at a healthy distance. Not us. We embrace computation as a prize and a principle worth fighting for. Each chapter of the book culminates in an exciting program that applies the mathematical insights from the chapter to an interesting application. The applications include cryptographic schemes, machine learning, drawing hyperbolic tessellations, and a Nobel-prize-winning algorithm from economics.

The exercises also push you beyond the book itself. There’s so much math out there that you can’t learn it from a single book. Perspectives and elaborations are spread throughout books, papers, blog posts, wikis, lecture notes, math magazines, and your own scratch paper. This book will prepare you to read a variety of sources by introducing you to the standard language of math, and it will push you to engage with those resources.

Finally, this book includes a healthy dose of culture. Quotes and passages from the writings of famous mathematicians, contextual explanations of cultural attitudes, and a light dose of history will provide a peek into why mathematics is the way it is today, and why at times it can seem so confounding to an outsider. Through all this, I will show what progress means for math, what attitudes and patterns will help you along the way, and how to stay sane.

Of course, I couldn’t have written the book without the encouragement and support of you, my readers. Thank you for reading, commenting, and supporting me all these years.

Order the book today! I can’t wait to hear what you think 🙂