# Modulus Switching in LWE

The Learning With Errors problem is the basis of a few cryptosystems, and a foundation for many fully homomorphic encryption (FHE) schemes. In this article I’ll describe a technique used in some of these schemes called modulus switching.

In brief, an LWE sample is a vector of values in $\mathbb{Z}/q\mathbb{Z}$ for some $q$, and in LWE cryptosystems an LWE sample can be modified so that it hides a secret message $m$. Modulus switching allows one to convert an LWE encryption with entries in $\mathbb{Z}/q\mathbb{Z}$ to one with entries in some other $\mathbb{Z}/q'\mathbb{Z}$, i.e., change the modulus from $q$ to $q' < q$.

The reasons you'd want to do this are a bit involved, so I won't get into them here and will instead back-reference this article in the future.

## LWE encryption

Briefly, the LWE encryption scheme I’ll use has the following parameters:

- A plaintext space $\mathbb{Z}/q\mathbb{Z}$, where $q \geq 2$ is a positive integer. This is the space that the underlying message comes from.
- An LWE dimension $n \in \mathbb{N}$.
- A discrete Gaussian error distribution $D$ with a mean of zero and a fixed standard deviation.

An LWE secret key is defined as a vector in $\{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $a = (a_1, \dots, a_n)$, sampled uniformly over $(\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $b = \langle a, s \rangle + m + e$, where $e$ is drawn from $D$ and all arithmetic is done modulo $q$.

Without the error term, an attacker could determine the secret key from a polynomial-sized collection of LWE ciphertexts with something like Gaussian elimination. The set of samples looks like a linear (or affine) system, where the secret key entries are the unknown variables. With an error term, the problem of solving the system is believed to be hard, and only exponential time/space algorithms are known.

However, the error term in an LWE encryption encompasses all of the obstacles to FHE. For starters, if your message is $m=1$ and the error distribution is wide (say, a standard deviation of 10), then the error will completely obscure the message from the start. You can’t decrypt the LWE ciphertext because you can’t tell if the error generated in a particular instance was 9 or 10. So one thing people do is have a much smaller cleartext space (actual messages) and encode cleartexts as plaintexts by putting the messages in the higher-order bits of the plaintext space. E.g., you can encode 10-bit messages in the top 10 bits of a 32-bit integer, and leave the remaining 22 bits of the plaintext for the error distribution.
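To make the encoding concrete, here's a minimal sketch in Python. The helpers `encode` and `decode`, and the exact bit counts, are my own illustration of the idea, not part of any LWE library:

```python
# Store a 10-bit cleartext in the top bits of a 32-bit plaintext,
# leaving the low 22 bits as room for error.
MESSAGE_BITS = 10
ERROR_BITS = 32 - MESSAGE_BITS  # 22

def encode(cleartext: int) -> int:
    return cleartext << ERROR_BITS

def decode(plaintext: int) -> int:
    # Naive decode: only correct when the error is zero or small and
    # nonnegative. Real decryption rounds instead, as discussed below.
    return plaintext >> ERROR_BITS
```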

Moreover, for FHE you need to be able to add and multiply ciphertexts to get the corresponding sum/product of the underlying plaintexts. One can easily see that adding two LWE ciphertexts produces an LWE ciphertext of the sum of the plaintexts (multiplication is harder and beyond the scope of this article). Summing ciphertexts also sums the error terms together. So the error grows with each homomorphic operation, and eventually the error may overtake the message, at which point decryption fails. How to deal with this error accumulation is 99% of the difficulty of FHE.

Finally, because the error can be negative, even if you store a message in the high-order bits of the plaintext, you can't decrypt by simply clearing the low order error bits. In that case an error of -1 would result in a corrupted message. Instead, to decrypt, we round the value $b - \langle a, s \rangle = m + e$ to the nearest multiple of $2^k$, where $k$ is the number of bits "reserved" for error, as described above. In particular, decryption will only succeed if the error is small enough in absolute value. So to make this work in practice, one must coordinate the encoding scheme (how many bits to reserve for error), the dimension of the vector $a$, and the standard deviation of the error distribution.
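Putting the pieces together, here's a hedged sketch of the scheme as described above, with rounding-based decryption. The function names and parameter choices are my own, for illustration only:

```python
import random

def keygen(n: int) -> list:
    '''Sample a uniform binary LWE secret key.'''
    return [random.randint(0, 1) for _ in range(n)]

def encrypt(m: int, s: list, q: int, stdev: float):
    '''Encrypt a plaintext m already encoded into high-order bits.'''
    a = [random.randrange(q) for _ in s]
    e = round(random.gauss(0, stdev))
    b = (sum(ai * si for ai, si in zip(a, s)) + m + e) % q
    return (a, b)

def decrypt(ciphertext, s: list, q: int, error_bits: int) -> int:
    a, b = ciphertext
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % q  # = m + e mod q
    # Round to the nearest multiple of 2^error_bits, wrapping around q,
    # which removes the error as long as |e| < 2^(error_bits - 1).
    step = 2 ** error_bits
    return (round(noisy / step) % (q // step)) * step
```

With a 32-bit modulus and 22 error bits, a standard deviation of 10 leaves an enormous margin, so a round trip recovers the plaintext.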

## Modulus switching

With a basic outline of an LWE ciphertext, we can talk about modulus switching.

Start with an LWE ciphertext for the plaintext $m$. Call it $(a_1, \dots, a_n, b) \in (\mathbb{Z}/q\mathbb{Z})^{n+1}$, where

$\displaystyle b = \left ( \sum_{i=1}^n a_i s_i \right ) + m + e_{\textup{original}}$

Given $q' < q$, we would like to produce a vector $(a'_1, \dots, a'_n, b') \in (\mathbb{Z}/q'\mathbb{Z})^{n+1}$ (all that has changed is I've put a prime on all the terms to indicate which are changing, most notably the new modulus $q'$) that also encrypts $m$, without knowing $m$ or $e_{\textup{original}}$, i.e., without access to the secret key.

Failed attempt: why not simply reduce each entry in the ciphertext vector modulo $q'$? That would set $a'_i = a_i \mod q'$ and $b' = b \mod q'$. Despite the fact that this operation produces a perfectly valid equation, it won't work. The problem is that taking $m \mod q'$ destroys part or all of the underlying message. For example, say $x$ is a 12-bit number stored in the top 12 bits of the plaintext, i.e., $m = x \cdot 2^{20}$. If $q' = 2^{15}$, then the message is a multiple of $q'$ already, so the proposed modulus switch produces zero.

For this reason, we can't hope to perfectly encrypt $m$, as the output ciphertext entries may not have a modulus large enough to represent $m$ at all. Rather, we can only hope to encrypt something like "the message $x$ that's encoded in $m$, but with $x$ stored in lower order bits than $m$ originally used." In more succinct terms, we can hope to encrypt $m' = m q' / q$. Indeed, the operation $m \mapsto m q' / q$ shifts $m$ up by $\log_2(q')$ bits (temporarily exceeding the maximum allowable bit length) and then shifts it down by $\log_2(q)$ bits.

For example, say the number $x=7$ is stored in the top 3 bits of a 32-bit unsigned integer ($q = 2^{32}$), i.e., $m = 7 \cdot 2^{29}$, and $q' = 2^{10}$. Then $m q' / q = 7 \cdot 2^{29} \cdot 2^{10} / 2^{32} = 7 \cdot 2^{29+10-32} = 7 \cdot 2^7$, which stores the same underlying number $x=7$, but in the top three bits of a 10-bit plaintext. In particular, $x$ is in the same "position" in the plaintext space, while the plaintext space has shrunk around it.

Side note: because of this change to the cleartext-to-plaintext encoding, the decryption/decoding steps before and after a modulus switch are slightly different. In decryption you use different moduli, and in decoding you round to different powers of 2.

So the trick is instead to apply $z \mapsto z q' / q$ to all the entries of the LWE ciphertext vector. However, because entries like $a_i$ range over all of $\mathbb{Z}/q\mathbb{Z}$, this transformation will not necessarily result in an integer. So we can round the result to an integer and analyze that. The final proposal for a modulus switch is

$\displaystyle a'_i = \textup{round}(a_i q' / q)$

$\displaystyle b' = \textup{round}(b q' / q)$
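As a sanity check, here's the proposed switch as a hypothetical one-line helper, applied to the $x=7$ example from earlier:

```python
def mod_switch(value: int, q: int, q_new: int) -> int:
    '''Scale a single ciphertext entry from modulus q to q_new.'''
    return round(value * q_new / q) % q_new

# x = 7 stored as m = 7 * 2^29 under q = 2^32 maps to 7 * 2^7 under q' = 2^10.
m = 7 * 2 ** 29
assert mod_switch(m, 2 ** 32, 2 ** 10) == 7 * 2 ** 7
```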

Because the error growth of LWE ciphertexts permeates everything, in addition to proving this transformation produces a valid ciphertext, we also have to understand how it impacts the error term.

## Analyzing the modulus switch

The statement summarizing the last section:

Theorem: Let $\mathbf{c} = (a_1, \dots, a_n, b) \in (\mathbb{Z}/q\mathbb{Z})^{n+1}$ be an LWE ciphertext encrypting plaintext $m$ with error term $e_\textup{original}$. Let $q' < q$. Then $\mathbf{c}' = \textup{round}(\mathbf{c} q' / q)$ (where rounding is performed entrywise) is an LWE encryption of $m' = m q' / q$, provided $m'$ is an integer.

Proof. The only substantial idea is that $\textup{round}(x) = x + \varepsilon$, where $|\varepsilon| \leq 0.5$. This is true by the definition of rounding, but that particular way to express it allows us to group the error terms across a sum-of-rounded-things in isolation, and then everything else has a factor of $q’/q$ that can be factored out. Let’s proceed.

Let $\mathbf{c}' = (a'_1, \dots, a'_n, b')$, where $a'_i = \textup{round}(a_i q' / q)$ and likewise for $b'$. We need to show that $b' = \left ( \sum_{i=1}^n a'_i s_i \right ) + m q' / q + e_{\textup{new}}$, where $e_{\textup{new}}$ is a soon-to-be-derived error term.

Expanding $b’$ and using the “only substantial idea” above, we get

$\displaystyle b' = \textup{round}(b q' / q) = bq'/q + \varepsilon_b$

for some $\varepsilon_b$ with magnitude at most $1/2$. Continuing to expand, and noting that $b$ is related to the $a_i$ only modulo $q$, we have

$\displaystyle \begin{aligned} b' &= bq'/q + \varepsilon_b \\ &= \left ( \left ( \sum_{i=1}^n a_i s_i \right ) + m + e_{\textup{original}} \right ) \frac{q'}{q} + \varepsilon_b \mod q \end{aligned}$

Because we’re switching moduli, it makes sense to rewrite this over the integers, which means we add a term $Mq$ for some integer $M$ and continue to expand

$\displaystyle \begin{aligned} b' &= \left ( \left ( \sum_{i=1}^n a_i s_i \right ) + m + e_{\textup{original}} + Mq \right ) \frac{q'}{q} + \varepsilon_b \\ &= \left ( \sum_{i=1}^n \left ( a_i \frac{q'}{q} \right) s_i \right ) + m \frac{q'}{q} + e_{\textup{original}}\frac{q'}{q} + Mq \frac{q'}{q} + \varepsilon_b \\ &= \left ( \sum_{i=1}^n \left ( a_i \frac{q'}{q} \right) s_i \right ) + m' + e_{\textup{original}}\frac{q'}{q} + Mq' + \varepsilon_b \end{aligned}$

The terms with $a_i$ are still missing their rounding, so, just like $b'$, rewrite $a'_i = a_i q'/q + \varepsilon_i$ as $a_i q'/q = a'_i - \varepsilon_i$. Expanding, simplifying, and finally reducing modulo $q'$ gives

$\displaystyle \begin{aligned} b' &= \left ( \sum_{i=1}^n \left ( a'_i - \varepsilon_i \right) s_i \right ) + m' + e_{\textup{original}}\frac{q'}{q} + Mq' + \varepsilon_b \\ &= \left ( \sum_{i=1}^n a'_i s_i \right ) - \left ( \sum_{i=1}^n \varepsilon_i s_i \right) + m' + e_{\textup{original}}\frac{q'}{q} + Mq' + \varepsilon_b \\ &= \left ( \sum_{i=1}^n a'_i s_i \right ) + m' + Mq' + \left [ e_{\textup{original}}\frac{q'}{q} - \left ( \sum_{i=1}^n \varepsilon_i s_i \right) + \varepsilon_b \right ] \\ &= \left ( \sum_{i=1}^n a'_i s_i \right ) + m' + \left [ e_{\textup{original}}\frac{q'}{q} - \left ( \sum_{i=1}^n \varepsilon_i s_i \right) + \varepsilon_b \right ] \mod q' \end{aligned}$

Define the square bracketed term as $e_{\textup{new}}$, and we have proved the theorem.

$\square$

The error after modulus switching is laid out. It's the original error scaled, plus at most $n+1$ terms, each of which is at most $1/2$. However, note that this is larger than it appears. If the new modulus is, say, $q'=1024$, and the dimension is $n = 512$, then in the worst case the error right after modulus switching will leave us only $1$ bit left for the message. This is not altogether unrealistic, as production (128-bit) security parameters for LWE put $n$ around 600. But it is compensated for by the fact that the secret $s$ is chosen uniformly at random, and the errors are symmetric around zero. So in expectation only half the bits of $s$ will be set, and of those, about half will have a positive rounding error and half a negative one. Using these facts, you can bound the probability that the error exceeds, say, $\sqrt{n \log n}$ using a standard Hoeffding bound argument. I further believe that the error is bounded by $\sqrt{n}$. I have verified it empirically, but haven't been able to quite nail down a proof.
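To see the theorem in action, here's a toy end-to-end simulation (my own sketch with illustrative parameters, not from any library): encrypt, switch the modulus from $q = 2^{32}$ to $q' = 2^{20}$, then decrypt under the new modulus. With a 10-bit message and $n = 100$, the worst-case switching error of $(n+1)/2 = 50.5$ is far below the post-switch error budget of $2^9$, so decryption always succeeds.

```python
import random

n, q, q_new = 100, 2 ** 32, 2 ** 20

s = [random.randint(0, 1) for _ in range(n)]
x = 345                  # a 10-bit cleartext
m = x * 2 ** 22          # stored in the top 10 bits mod q

# Encrypt: b = <a, s> + m + e mod q
a = [random.randrange(q) for _ in range(n)]
e = round(random.gauss(0, 10))
b = (sum(ai * si for ai, si in zip(a, s)) + m + e) % q

# Modulus switch, entrywise rounding
a_new = [round(ai * q_new / q) % q_new for ai in a]
b_new = round(b * q_new / q) % q_new

# Decrypt under q', rounding to multiples of 2^10 (the new error bits)
noisy = (b_new - sum(ai * si for ai, si in zip(a_new, s))) % q_new
step = 2 ** 10
decrypted = (round(noisy / step) % (q_new // step)) * step
assert decrypted == x * 2 ** 10   # the switched plaintext m' = m q'/q
```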

Until next time!

# “Practical Math” Preview: Collect Sensitive Survey Responses Privately

This is a draft of a chapter from my in-progress book, Practical Math for Programmers: A Tour of Mathematics in Production Software.

Tip: Determine an aggregate statistic about a sensitive question, when survey respondents do not trust that their responses will be kept secret.

Solution:

```python
import random
from typing import List, Tuple

def respond_privately(true_answer: bool) -> bool:
    '''Respond to a yes/no survey question with plausible deniability.'''
    be_honest = random.random() < 0.5
    random_answer = random.random() < 0.5
    return true_answer if be_honest else random_answer

def aggregate_responses(responses: List[bool]) -> Tuple[float, float]:
    '''Return the estimated fraction of survey respondents that have a truthful
    Yes answer to the survey question, along with the variance of the estimate.
    '''
    yes_response_count = sum(responses)
    n = len(responses)
    mean = 2 * yes_response_count / n - 0.5
    # Use n-1 when estimating variance, as per Bessel's correction.
    variance = 3 / (4 * (n - 1))
    return (mean, variance)
```


In the late 1960’s, most abortions were illegal in the United States. Daniel G. Horvitz, a statistician at The Research Triangle Institute in North Carolina and a leader in survey design for social sciences, was tasked with estimating how many women in North Carolina were receiving illegal abortions. The goal was to inform state and federal policymakers about the statistics around abortions, many of which were unreported, even when done legally.

The obstacles were obvious. As Horvitz put it, “a prudent woman would not divulge to a stranger the fact that she was party to a crime for which she could be prosecuted.” [Abernathy70] This resulted in a strong bias in survey responses. Similar issues had plagued surveys of illegal activity of all kinds, including drug abuse and violent crime. Lack of awareness into basic statistics about illegal behavior led to a variety of misconceptions, such as that abortions were not frequently sought out.

Horvitz worked with biostatisticians James Abernathy and Bernard Greenberg to test out a new method to overcome this obstacle, without violating the respondent’s privacy or ability to plausibly deny illegal behavior. The method, called randomized response, was invented by Stanley Warner in 1965, just a few years earlier. [Warner65] Warner’s method was a bit different from what we present in this Tip, but both Warner’s method and the code sample above use the same strategy of adding randomization to the survey.

The mechanism, as presented in the code above, requires respondents to start by flipping a coin. If heads, they answer the sensitive question truthfully. If tails, they flip a second coin to determine how to answer the question—heads resulting in a “yes” answer, tails in a “no” answer. Naturally, the coin flips are private and controlled by the respondent. And so if a respondent answers “Yes” to the question, they may plausibly claim the “Yes” was determined by the coin, preserving their privacy. The figure below describes this process as a diagram.

Another way to describe the outcome is to say that each respondent’s answer is a single bit of information that is flipped with probability 1/4. This is halfway between two extremes on the privacy/accuracy tradeoff curve. The first extreme is a “perfectly honest” response, where the bit is never flipped and all information is preserved. The second extreme has the bit flipped with probability 1/2, which is equivalent to ignoring the question and choosing your answer completely at random, losing all information in the aggregate responses. In this perspective, the aggregate survey responses can be thought of as a digital signal, and the privacy mechanism adds noise to that signal.

It remains to determine how to recover the aggregate signal from these noisy responses. In other words, the surveyor cannot know any individual’s true answer, but they can, with some extra work, estimate statistics about the underlying population by correcting for the statistical bias. This is possible because the randomization is well understood. The expected fraction of “Yes” answers can be written as a function of the true fraction of “Yes” answers, and hence the true fraction can be solved for. In this case, where the random coin is fair, that formula is as follows (where $\mathbf{P}$ stands for “the probability of”).

$\displaystyle \mathbf{P}(\textup{Yes answer}) = \frac{1}{2} \mathbf{P}(\textup{Truthful yes answer}) + \frac{1}{4}$

And so we solve for $\mathbf{P}(\textup{Truthful yes answer})$

$\displaystyle \mathbf{P}(\textup{Truthful yes answer}) = 2 \mathbf{P}(\textup{Yes answer}) - \frac{1}{2}$

We can replace the true probability $\mathbf{P}(\textup{Yes answer})$ above with our fraction of “Yes” responses from the survey, and the result is an estimate $\hat{p}$ of $\mathbf{P}(\textup{Truthful yes answer})$. This estimate is unbiased, but has additional variance—beyond the usual variance caused by picking a finite random sample from the population of interest—introduced by the randomization mechanism.
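To see the bias correction in action, here's a small simulation. The `respond` function is a hypothetical stand-in for the coin-flip mechanism described above, and the true rate of 0.3 is an arbitrary choice for illustration:

```python
import random

def respond(truth: bool) -> bool:
    # First coin: heads (probability 1/2) -> answer honestly.
    # Otherwise, answer with a second fair coin: heads -> "yes".
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

random.seed(0)  # reproducibility for this example only
true_rate = 0.3
n = 200_000
responses = [respond(random.random() < true_rate) for _ in range(n)]
p_hat = 2 * sum(responses) / n - 0.5
# p_hat lands within a few thousandths of true_rate
```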

With a bit of effort, one can calculate that the variance of the estimate is

$\displaystyle \textup{Var}(\hat{p}) = \frac{3}{4n}$

And via Chebyshev’s inequality, which bounds the likelihood that an estimator is far away from its expectation, we can craft a confidence interval and determine the needed sample sizes. Specifically, the estimate $\hat{p}$ has additive error exceeding $q$ with probability at most $\textup{Var}(\hat{p}) / q^2$. This implies that for a confidence of $1-c$, one requires at least $n \geq 3 / (4 c q^2)$ samples. For example, to achieve error 0.01 with 90 percent confidence ($c=0.1$), one requires at least 75,000 responses.
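The sample-size formula can be checked directly (`required_samples` is a hypothetical helper, not from any survey library):

```python
def required_samples(error: float, c: float) -> float:
    '''Samples needed for additive error at most `error` with confidence
    1 - c, via the Chebyshev bound n >= 3 / (4 c error^2).'''
    return 3 / (4 * c * error ** 2)

required_samples(0.01, 0.1)  # about 75,000 responses
```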

Horvitz’s randomization mechanism didn’t use coin flips. Instead they used an opaque box with red or blue colored balls which the respondent, who was in the same room as the surveyor, would shake and privately reveal a random color through a small window facing away from the surveyor. The statistical principle is the same. Horvitz and his associates surveyed the women about their opinions of the privacy protections of this mechanism. When asked whether their friends would answer a direct question about abortion honestly, over 80% either believed their friends would lie, or were unsure. [footnote: A common trick in survey methodology when asking someone if they would be dishonest is to instead ask if their friends would be dishonest. This tends to elicit more honesty, because people are less likely to uphold a false perception of the moral integrity of others, and people also don’t realize that their opinion of their friends correlates with their own personal behavior and attitudes. In other words, liars don’t admit to lying, but they think lying is much more common than it really is.] But 60% were convinced there was no trick involved in the randomization, while 20% were unsure and 20% thought there was a trick. This suggests many people were convinced that Horvitz’s randomization mechanism provided the needed safety guarantees to answer honestly.

Horvitz’s survey was a resounding success, both for randomized response as a method and for measuring abortion prevalence. [Abernathy70] They estimated the abortion rate at about 22 per 100 conceptions, with a distinct racial bias—minorities were twice as likely as whites to receive an abortion. Comparing their findings to a prior nationwide study from 1955—the so-called Arden House estimate—which gave a range of between 200,000 and 1.2 million abortions per year, Horvitz’s team estimated more precisely that there were 699,000 abortions in 1955 in the United States, with a reported standard deviation of about 6,000, less than one percent. For 1967, the year of their study, they estimated 829,000.

Their estimate was referenced widely in the flurry of abortion law and court cases that followed due to a surging public interest in the topic. For example, it is cited in the 1970 California Supreme Court opinion for the case Ballard v. Anderson, which concerned whether a minor needs parental consent to receive an otherwise legal abortion. [Ballard71, Roemer71] It was also cited in amici curiae briefs submitted to the United States Supreme Court in 1971 for Roe v. Wade, the famous case that invalidated most U.S. laws making abortion illegal. One such brief was filed jointly by the country’s leading women’s rights organizations like the National Organization for Women. Citing Horvitz for this paragraph, it wrote, [Womens71]

> While the realities of law enforcement, social and public health problems posed by abortion laws have been openly discussed […] only within a period of not more than the last ten years, one fact appears undeniable, although unverifiable statistically. There are at least one million illegal abortions in the United States each year. Indeed, studies indicate that, if the local law still has qualifying requirements, the relaxation in the law has not diminished to any substantial extent the numbers in which women procure illegal abortions.

It’s unclear how the authors got this one million number (Horvitz’s estimate was 20% less for 1967), nor what they meant by “unverifiable statistically.” It may have been a misinterpretation of the randomized response technique. In any event, randomized response played a crucial role in providing a foundation for political debate.

Despite Horvitz’s success, and decades of additional research on crime, drug use, and other sensitive topics, randomized response mechanisms have often been applied poorly. In some cases, the required randomization is intricate, such as when it requires sampling a continuous random number. In these cases, a manual randomization mechanism is too complex for a respondent to use accurately. Software-assisted devices can help, but can also breed mistrust in the interviewee. See [Rueda16] for additional discussion of these pitfalls and what software packages exist for assisting in using randomized response. See [Fox16] for an analysis of the statistical differences between the variety of methods used between 1970 and 2010.

In other contexts, analogues to randomized response may not elicit the intended effect. In the 1950’s, Utah used death by firing squad as capital punishment. To ease the shooters’ guilty consciences, one of five marksmen was randomly given a blank, providing him some plausible deniability that he had delivered the killing shot. However, this approach failed on two counts. First, once a shot was fired, the marksman could tell whether the bullet was real based on the recoil. Second, a 20% chance of a blank was not enough to dissuade a guilty marksman from purposely missing. In the 1951 execution of Elisio Mares, all four real bullets missed the condemned man’s heart, hitting his chest, stomach, and hip. He died, but it was neither painless nor instant.

Of many lessons one might draw from the botched execution, one is that randomization mechanisms must take into account both the psychology of the participants as well as the severity of a failed outcome.

### References

@book{Fox16,
title = {{Randomized Response and Related Methods: Surveying Sensitive Data}},
author = {James Alan Fox},
edition = {2nd},
year = {2016},
doi = {10.4135/9781506300122},
}

@article{Abernathy70,
author = {Abernathy, James R. and Greenberg, Bernard G. and Horvitz, Daniel G.},
title = {{Estimates of induced abortion in urban North Carolina}},
journal = {Demography},
volume = {7},
number = {1},
pages = {19-29},
year = {1970},
month = {02},
issn = {0070-3370},
doi = {10.2307/2060019},
url = {https://doi.org/10.2307/2060019},
}

@article{Warner65,
author = {Stanley L. Warner},
journal = {Journal of the American Statistical Association},
number = {309},
pages = {63--69},
publisher = {{American Statistical Association, Taylor \& Francis, Ltd.}},
title = {{Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias}},
volume = {60},
year = {1965},
}

@article{Ballard71,
title = {{Ballard v. Anderson}},
journal = {California Supreme Court L.A. 29834},
year = {1971},
url = {https://caselaw.findlaw.com/ca-supreme-court/1826726.html},
}

@misc{Womens71,
title = {{Motion for Leave to File Brief Amici Curiae on Behalf of Women’s
Organizations and Named Women in Support of Appellants in Each Case,
and Brief Amici Curiae.}},
booktitle = {{Appellate Briefs for the case of Roe v. Wade}},
number = {WL 128048},
year = {1971},
publisher = {Supreme Court of the United States},
}

@article{Roemer71,
author = {R. Roemer},
journal = {Am J Public Health},
pages = {500--509},
title = {{Abortion law reform and repeal: legislative and judicial developments}},
volume = {61},
number = {3},
year = {1971},
}

@incollection{Rueda16,
title = {Chapter 10 - Software for Randomized Response Techniques},
editor = {Arijit Chaudhuri and Tasos C. Christofides and C.R. Rao},
series = {Handbook of Statistics},
publisher = {Elsevier},
volume = {34},
pages = {155-167},
year = {2016},
booktitle = {Data Gathering, Analysis and Protection of Privacy Through
Randomized Response Techniques: Qualitative and Quantitative Human
Traits},
doi = {10.1016/bs.host.2016.01.009},
author = {M. Rueda and B. Cobo and A. Arcos and R. Arnab},
}


# Cocktails

It’s April Cools! We’re taking back April Fools.

When I was younger I had a strange relationship with alcohol, not because of any sort of trauma, but because I found it decidedly boring and disgusting to the taste. I didn’t drink in high school, didn’t enjoy parties in college, and didn’t care for tailgating or other sports-based events where drinking was common. I also never enjoyed wine—red is too tannic, white is just meh—and almost all beer tastes awful to me (lambic is a delightful exception).

This attitude may have been largely because (in America) young drinkers are idiots and the goal is to get plastered quickly and cheaply. Underage drinkers in the US only have access to cheap liquor, and sugary sodas are the standard mixer. People who grow up in these circumstances largely seem to acclimate to the idea that alcoholic drinks should taste awful.

With this attitude, I was surprised to find myself drawn to making cocktails.

From what I can remember, the first good cocktail I had was some variation on an old fashioned at a fancy restaurant. It was the kind of drink that is only good if it’s well made. This piqued my interest—and I still love a good old fashioned—but what convinced me to start making cocktails was the variety of flavors and surprising combinations that could be made into a drink.

For example, one of my favorite cocktails to make when trying to impress a friend mixes snap peas, mint, caraway (Brennivin), and yuzu lemon—a milder, Japanese variety of lemon that almost tastes like a mix between a Eureka lemon and a pear—into a bright green drink with a naturally foamy top layer (due to shaking the citrus).

But literally any flavor you want can be the star of a cocktail. I make a pecan-pie flavored drink with Rivulet’s amazing liqueur, some aged rum, egg white, and pumpkin-pie spice. I made a drink showcasing hibiscus with a syrup made from hibiscus flowers, cinnamon, allspice, vanilla beans, ginger, and sugar water. I’ve made a smokey carrot cocktail (Mezcal, carrot juice, balsamic vinegar, black pepper), and a mango lassi cocktail (gin, mango liqueur, plain yogurt, simple syrup).

The flavors, in my opinion, largely come from two sources: liqueurs and syrups. Liqueurs often start as a purified ethanol spirit that is sweetened and infused with various ingredients, and then watered down to a low alcohol content, often 15%, not much more than a glass of wine. Many world-famous liqueurs originated as bitter herbal remedies as far back as the 1600’s. This was the case for Chartreuse, named after the Grande Chartreuse monastery in the Chartreuse mountains north of Grenoble, France. The monks originally developed Chartreuse because they were trying to make the “elixir of long life.” Elon Musk should try it. Because Yellow Chartreuse is my absolute favorite liqueur, I made a little pilgrimage to their distillery in Voiron, France, where they still sell Elixir Vegetal, the herbal remedy marketed as a cure-all tonic. I have a bottle as a novelty. It’s kept in a wooden outer bottle to prevent sunlight from making the liquid lose its color.

Anyway, point is liqueurs are delicious, even on their own or with tonic water. And there are liqueurs made from every conceivable fruit, peppers, birch bark, nuts, violet, you name it.

Syrups, on the other hand, allow you to infuse the flavor of anything by boiling it with equal parts water and sugar (or honey, or maple syrup). Tea, peanut butter, port, cardamom, whatever you want to experiment with, you can basically make the flavoring parts yourself and add them to a drink.

Beyond those two, texture is an interesting component for me personally. If you’ve ever had a whiskey sour or a pisco sour, you may know the nice foaminess in those drinks comes from “dry shaking” (shaking without ice) an egg white to emulsify and create a nice foam. It also tends to tone down harsher tannic flavors. You can achieve the same sort of effect with plain yogurt or the brine from a can of chickpeas. And then there are these techniques like “milk washing” and other forms of clarifying spirits, where you use liquids with special proteins to “wash off” the unwanted solids in another drink (like tannic solids in wine).

The rabbit hole goes a whole lot deeper. If you want to get into this and geek out a bit more on the science, check out Dave Arnold’s book Liquid Intelligence. It basically taught me all the advanced techniques I know, and many I have yet to muster the courage to try, like vacuum-infusing solids with liquor, using liquid nitrogen to freeze solids before muddling them, and plunging a RED HOT COPPER POKER into a drink to add flavor.

I also enjoyed The Aviary restaurant’s cocktail book, which included a ton of interesting recipes, along with new ideas like spherification (which I didn’t realize I would hate until after I made it, mainly because I hate boba and it has a similar texture).

At 22 I hadn’t yet figured out why alcohol and drinking culture was unappealing. At 32, I think I finally understand that an activity, hobby, or subject appeals to me principally through its sense of quality and craftsmanship. For the quality of the finished work, yes, but also in the attention to detail, and stewardship of the workspace and tools with which that work is done. Like the apocryphal 30-second Picasso napkin art, I strive to practice my crafts (mostly programming, writing, math) so that quality becomes second nature. Focusing on tools, methods, and patterns of work are critical to that.

A bonus of all this is that cocktails become an easy point of conversation when I am stuck in a drinking-culture setting. I offer to “bartend” small parties of my friends so that I can have something to do besides drink. It gives me an excuse to leave an awkward conversation and a nice excuse to join a new conversation. It usually leaves an impression and people remember me. And it’s the best excuse to get my friends to visit me (I have all the ingredients!) so that I don’t have to go anywhere to enjoy myself.

If you want to get into cocktails but aren’t sure where to get started, I’d recommend getting the minimal tools to make a shaken cocktail (Boston shaker tins, Hawthorne strainer, and a measuring jig), watching a video on how to use the tins to shake a cocktail with ice, then going to fancy restaurants, trying cocktails, writing down the ingredients, and buying them to experiment with recreating the drinks at home. A good default recipe uses 2 oz liquor, 3/4 oz – 1 oz sweet stuff (syrups and/or liqueurs), and 1 oz citrus juice, shaken over ice. Personally I default to a 1-1 honey-water simple syrup, and add a small pinch of salt to mixtures before I shake them. That’ll give you a solid but flexible base, from which particular choices of ingredients cover the gamut of gimlets, daiquiris, sours (all that + egg), margaritas, and more.

Anyhow. The math content will return shortly. Enjoy!

# Silent Duels—Constructing the Solution part 2

Previous posts in this series:

It’s been three years since the last post in this series, and the reason for the delay is that I got totally stuck on the implementation. I’m publishing this draft article as partial progress until I can find time to work on it again.

If you haven’t read the last post, please do. Code repo. Paper I’m working through. In this post I make progress on implementing a general construction of the optimal strategies from the paper, but at the end the first serious test case fails. At the end I’m stuck, and I’m not sure what to do next to fix it.

## Refactoring

Let’s start by refactoring the mess of code from the last post. Now is a good time for that, because our tinkering in the last post gave us confidence that we understand the basic mechanics. The central functions for the generic solution construction are in this file, as of this commit.

We’re using Sympy to represent functions and evaluate integrals, and it helps to name types appropriately. Note, I’ll be using python type annotations in the code, but the types don’t validate because of missing stubs in sympy. Call it room for improvement.

```python
from dataclasses import dataclass
from typing import NewType

from sympy import Lambda

SuccessFn = NewType('SuccessFn', Lambda)


@dataclass
class SilentDuelInput:
    '''Class containing the static input data to the silent duel problem.'''
    player_1_action_count: int
    player_2_action_count: int
    player_1_action_success: SuccessFn
    player_2_action_success: SuccessFn
```


Now we turn to the output. Recall, a strategy is a partition of $[0,1]$ into $n$ intervals—where $n$ is the number of actions specified in the input—with a probability distribution for each piece of the partition. This suggests a natural breakdown:

```python
@dataclass
class Strategy:
    '''
    A strategy is a list of action distribution functions, each of which
    describes the probability of taking an action on the interval of its
    support.
    '''
    action_distributions: List[ActionDistribution]
```


And ActionDistribution is defined as:

```python
@dataclass
class ActionDistribution:
    '''The interval on which this distribution occurs.'''
    support_start: float
    support_end: float

    '''
    The cumulative density function for the distribution.
    May be improper if point_mass > 0.
    '''
    cumulative_density_function: Lambda

    '''
    If nonzero, corresponds to an extra point mass at the support_end.
    Only used in the last action in the optimal strategy.
    '''
    point_mass: float = 0

    t = Symbol('t', nonnegative=True)

    def draw(self, uniform_random_01=DEFAULT_RNG):
        '''Return a random draw from this distribution.

        Args:
         - uniform_random_01: a callable that accepts zero arguments
           and returns a uniform random float between 0 and 1. Defaults
           to using python's standard random.random
        '''
        if self.support_start >= self.support_end:  # treat as a point mass
            return self.support_start

        uniform_random_number = uniform_random_01()

        if uniform_random_number > 1 - self.point_mass - EPSILON:
            return self.support_end

        return solve_unique_real(
            self.cumulative_density_function(self.t) - uniform_random_number,
            self.t,
            solution_min=self.support_start,
            solution_max=self.support_end,
        )
```


We represent the probability distribution as a cumulative density function $F(t)$, which for a given input $a$ outputs the total probability weight of values in the distribution that are less than $a$. One reason cumulative density functions are convenient is that they make sampling easy: pick a uniform random $a \in [0,1]$, and then solve $F(t) = a$. The resulting $t$ is the sampled value.
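This sampling trick can be illustrated in isolation (a minimal standalone sketch, not the post’s draw method), using the toy cumulative density $F(t) = t^2$ on $[0,1]$:

```python
import random

from sympy import Interval, Lambda, Symbol, solveset

# A toy cumulative density on [0, 1]: F(t) = t**2.
t = Symbol('t', nonnegative=True)
F = Lambda((t,), t**2)


def sample(uniform_random_01=random.random):
    # Pick a uniform random a in [0, 1] and solve F(t) = a for t.
    a = uniform_random_01()
    solutions = solveset(F(t) - a, t, domain=Interval(0, 1))
    return float(next(iter(solutions)))


# Deterministic check: with a = 0.25 the sampled value is sqrt(0.25).
print(sample(lambda: 0.25))  # 0.5
```

With a genuinely random input, repeated calls to `sample` produce draws distributed according to $F$.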

Cumulative density functions are related to their more often-used (in mathematics) counterpart, the probability density function $f(t)$, which measures the “instantaneous” probability. I.e., the probability of a range of values $(a,b)$ is given by $\int_a^b f(t) dt$. The probability density and cumulative density are related via $F(a) = \int_{-\infty}^a f(t) dt$. In our case, the lower bound of integration is finite since we’re working on a fixed subinterval of $[0,1]$.
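As a quick sympy check of this relationship (using a made-up density unrelated to the duel), the density $f(t) = 2t$ on $[0,1]$ integrates to the cumulative density $F(a) = a^2$:

```python
from sympy import Symbol, integrate

t = Symbol('t', nonnegative=True)
a = Symbol('a', nonnegative=True)

# A probability density on [0, 1]; its total mass is 1.
f = 2 * t
assert integrate(f, (t, 0, 1)) == 1

# The cumulative density is the integral from the lower bound of the support.
F = integrate(f, (t, 0, a))
print(F)  # a**2
```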

The helper function solve_unique_real abstracts the work of using sympy to solve an equation that the caller guarantees has a unique real solution in a given range. Its source can be found here. Uniqueness is guaranteed for cumulative density functions, because they’re strictly increasing functions of the input.
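For reference, a plausible version of this helper might look like the following (a sketch only; the actual implementation is the one linked above):

```python
from sympy import S, Symbol, solveset


def solve_unique_real(expr, var, solution_min=0, solution_max=1):
    '''Solve expr == 0 for var, assuming a unique real solution
    in [solution_min, solution_max].'''
    solutions = [
        s for s in solveset(expr, var, domain=S.Reals)
        if solution_min <= s <= solution_max
    ]
    assert len(solutions) == 1, "Expected unique solution, got {}".format(solutions)
    return solutions[0]


# Example: t**2 - 1/4 has roots ±1/2, but only 1/2 lies in [0, 1].
t = Symbol('t', real=True)
print(solve_unique_real(t**2 - S(1) / 4, t))  # 1/2
```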

Finally, the output is a strategy for each player.

```python
@dataclass
class SilentDuelOutput:
    p1_strategy: Strategy
    p2_strategy: Strategy
```


I also chose to make a little data structure to help maintain the joint progress of building up the partition and normalizing constants for each player.

```python
@dataclass
class IntermediateState:
    '''
    A list of the transition times computed so far. This field
    maintains the invariant of being sorted. Thus, the first element
    in the list is a_{i + 1}, the most recently computed value of
    player 1's transition times, and the last element is a_{n + 1} = 1.
    This value is set on initialization with new.
    '''
    player_1_transition_times: Deque[Expr]

    '''
    Same as player_1_transition_times, but for player 2 with b_j and b_m.
    '''
    player_2_transition_times: Deque[Expr]

    '''
    The values of h_i so far, the normalizing constants for the action
    probability distributions for player 1. Has the same sorting
    invariant as the transition time lists.
    '''
    player_1_normalizing_constants: Deque[Expr]

    '''
    Same as player_1_normalizing_constants, but for player 2,
    i.e., the k_j normalizing constants.
    '''
    player_2_normalizing_constants: Deque[Expr]

    @staticmethod
    def new():
        '''Create a new state object, and set a_{n+1} = 1, b_{m+1} = 1.'''
        return IntermediateState(
            player_1_transition_times=deque([1]),
            player_2_transition_times=deque([1]),
            player_1_normalizing_constants=deque([]),
            player_2_normalizing_constants=deque([]),
        )

    def add_p1(self, transition_time, normalizing_constant):
        self.player_1_transition_times.appendleft(transition_time)
        self.player_1_normalizing_constants.appendleft(normalizing_constant)

    def add_p2(self, transition_time, normalizing_constant):
        self.player_2_transition_times.appendleft(transition_time)
        self.player_2_normalizing_constants.appendleft(normalizing_constant)


Now that we’ve established the types, we move on to the construction.

## Construction

There are three parts to the construction:

1. Computing the correct $\alpha$ and $\beta$.
2. Finding the transition times and normalizing constants for each player.
3. Using the above to build the output strategies.

## Building the output strategy

We’ll start with the last one: computing the output given the right values. The construction is symmetric for each player, so we can have a single function called with different inputs.

```python
def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:
```


This function computes the construction Restrepo gives.

One difficulty is in expressing discontinuous breaks. The definition of $f^*$ is a product over $b_j > t$, but if a $b_j$ lies inside the action interval, that will cause a discontinuity. I don’t know of an easy way to express the $f^*$ product literally as written in sympy (let me know if you know of one), so instead I opted to construct a piecewise function manually, which sympy supports nicely.
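For the unfamiliar, sympy’s Piecewise takes (expression, condition) pairs and uses the first condition that holds; this tiny example (unrelated to the duel) shows the mechanics:

```python
from sympy import Piecewise, Rational, Symbol

t = Symbol('t')
# The first condition that evaluates to true determines the value.
f = Piecewise((t, t < 1), (t**2, t < 2), (0, True))

print(f.subs(t, Rational(1, 2)))  # 1/2  (first branch)
print(f.subs(t, Rational(3, 2)))  # 9/4  (second branch)
print(f.subs(t, 3))               # 0    (fallback branch)
```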

This results in the following function for building the final $f^*$ for a single action for one player. The piecewise function is to break the action interval into pieces according to the discontinuities introduced by the opponent’s transition times that lie in the same interval.

```python
def f_star(player_action_success: SuccessFn,
           opponent_action_success: SuccessFn,
           variable: Symbol,
           opponent_transition_times: Iterable[float]) -> Expr:
    '''Compute f^* as in Restrepo '57.

    The inputs can be chosen so that the appropriate f^* is built
    for either player. I.e., if we want to compute f^* for player 1,
    player_action_success should correspond to P, opponent_action_success
    to Q, and larger_transition_times to the b_j.

    If the inputs are switched appropriately, f^* is computed for player 2.
    '''
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    '''
    We compute f^* as a Piecewise function of the following form:

    [prod_{i=1}^m (1 - Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_1
    [prod_{i=2}^m (1 - Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_2
    [prod_{i=3}^m (1 - Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_3
      ...
    Q'(t) / (Q^2(t) P(t))                                  if t >= b_m
    '''
    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))

    piecewise_components = []
    for i, b_j in enumerate(opponent_transition_times):
        larger_transition_times = opponent_transition_times[i:]

        product = 1
        for b in larger_transition_times:
            product *= (1 - Q(b))

        term = product * non_product_term
        piecewise_components.append((term, variable < b_j))

    # the last term is when there are no larger transition times.
    piecewise_components.append((non_product_term, True))

    return Piecewise(*piecewise_components)
```


The piecewise components are probability densities, so we can integrate them piecewise to get the cumulative densities. There are a few helpers here to deal with piecewise functions. First, we define a mask_piecewise to modify the sympy-internal representation of a piecewise function to have specified values outside of a fixed interval (the interval on which the action occurs). For a pdf this should be zero, and for a cdf it should be 0 before the action interval and 1 after. There are also some nuanced bits where hidden calls to expr.simplify() allow the integration to happen much faster than otherwise.
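To make the idea concrete, here is a simplified stand-in for mask_piecewise (the real helper rewrites the Piecewise’s internal branches and calls simplify; this sketch just wraps the expression in two extra branches):

```python
from sympy import Piecewise, Rational, Symbol


def mask_piecewise(expr, var, domain_start, domain_end,
                   before_domain_val=0, after_domain_val=0):
    '''Force expr to take fixed values outside [domain_start, domain_end].'''
    return Piecewise(
        (before_domain_val, var < domain_start),
        (after_domain_val, var > domain_end),
        (expr, True),
    )


t = Symbol('t')
cdf = mask_piecewise(t**2, t, 0, 1, before_domain_val=0, after_domain_val=1)
print(cdf.subs(t, -2))              # 0, before the action interval
print(cdf.subs(t, Rational(1, 2)))  # 1/4, inside it
print(cdf.subs(t, 5))               # 1, after it
```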

```python
def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:
    '''
    Given the transition times for a player, compute the action cumulative
    density functions for the optimal strategy of the player.
    '''
    action_distributions = []
    x = Symbol('x', real=True)
    t = Symbol('t', real=True)

    # chop off the last transition time, which is always 1
    opponent_transition_times = [
        x for x in opponent_transition_times if x < 1
    ]

    pairs = subsequent_pairs(player_transition_times)
    for (i, (action_start, action_end)) in enumerate(pairs):
        normalizing_constant = player_normalizing_constants[i]

        dF = normalizing_constant * f_star(
            player_action_success,
            opponent_action_success,
            x,
            opponent_transition_times,
        )
        piece_pdf = mask_piecewise(dF, x, action_start, action_end)
        piece_cdf = integrate(piece_pdf, x, action_start, t)
        piece_cdf = mask_piecewise(
            piece_cdf,
            t,
            action_start,
            action_end,
            before_domain_val=0,
            after_domain_val=1
        )

        action_distributions.append(ActionDistribution(
            support_start=action_start,
            support_end=action_end,
            cumulative_density_function=Lambda((t,), piece_cdf),
        ))

    action_distributions[-1].point_mass = time_1_point_mass
    return Strategy(action_distributions=action_distributions)


def compute_player_strategies(silent_duel_input, intermediate_state, alpha, beta):
    p1_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_1_action_success,
        player_transition_times=intermediate_state.player_1_transition_times,
        player_normalizing_constants=intermediate_state.player_1_normalizing_constants,
        opponent_action_success=silent_duel_input.player_2_action_success,
        opponent_transition_times=intermediate_state.player_2_transition_times,
        time_1_point_mass=alpha,
    )
    p2_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_2_action_success,
        player_transition_times=intermediate_state.player_2_transition_times,
        player_normalizing_constants=intermediate_state.player_2_normalizing_constants,
        opponent_action_success=silent_duel_input.player_1_action_success,
        opponent_transition_times=intermediate_state.player_1_transition_times,
        time_1_point_mass=beta,
    )
    return SilentDuelOutput(p1_strategy=p1_strategy, p2_strategy=p2_strategy)
```


Note that “Lambda” is a sympy-internal anonymous function declaration that we’re using here to represent the cumulative density as a callable object. A Lambda supports function call notation by overloading __call__, which allows the resulting action distribution to be treated as if it were a function.
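A trivial demonstration of this behavior:

```python
from sympy import Lambda, Symbol

t = Symbol('t')
cdf = Lambda((t,), t**2)

# Lambda overloads __call__, so the stored expression can be
# evaluated like an ordinary function, on numbers or expressions.
print(cdf(3))      # 9
print(cdf(t + 1))  # (t + 1)**2
```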

## Finding the transition times

Next we move on to computing the transition times for a given $\alpha, \beta$. This step is largely the same as in the previous post in this series, but we’re refactoring the code to be simpler and more generic.

The outer loop will construct an intermediate state object, and handle the process that Restrepo describes of “taking the larger parameter” and then computing the next $a_i, b_j$ using the previously saved parameters. There is one caveat that Restrepo misses, when he writes, “for definiteness, we assume that $a_n > b_m$…in the next step…a new $b_m$ is computed…using the single parameter $a_n^*$.” As far as I can tell (from experimenting with symmetric examples), when the computed transition times are equal, keeping only one and following Restrepo’s algorithm faithfully results in an inconsistent output. I believe this is a minor mistake: when the transition times are equal, you must keep them both, and not recompute one using the other as a parameter.

```python
def compute_as_and_bs(duel_input: SilentDuelInput,
                      alpha: float = 0,
                      beta: float = 0) -> IntermediateState:
    '''
    Compute the a's and b's for the silent duel, given a fixed
    alpha and beta as input.
    '''
    t = Symbol('t0', nonnegative=True, real=True)

    p1_index = duel_input.player_1_action_count
    p2_index = duel_input.player_2_action_count
    intermediate_state = IntermediateState.new()

    while p1_index > 0 or p2_index > 0:
        # the larger of a_i, b_j is kept as a parameter, then the other will be
        # recomputed in the next iteration; e.g., a_{i-1} and b_j (the latter
        # using a_i in its f^*)
        (a_i, b_j, h_i, k_j) = compute_ai_and_bj(
            duel_input, intermediate_state, alpha=alpha, beta=beta
        )

        # there is one exception: if a_i == b_j, then the computation of f^* in
        # the next iteration (I believe) should not include the previously kept
        # parameter. I.e., in the symmetric version, if a_n is kept and the next
        # computation of b_m uses the previous a_n, then it will produce the
        # wrong value.
        #
        # I resolve this by keeping both parameters when a_i == b_j.
        if abs(a_i - b_j) < EPSILON and p1_index > 0 and p2_index > 0:
            # use the average of the two to avoid roundoff errors
            transition = (a_i + b_j) / 2
            intermediate_state.add_p1(transition, h_i)
            intermediate_state.add_p2(transition, k_j)
            p1_index -= 1
            p2_index -= 1
        elif (a_i > b_j and p1_index > 0) or p2_index == 0:
            intermediate_state.add_p1(a_i, h_i)
            p1_index -= 1
        elif (b_j > a_i and p2_index > 0) or p1_index == 0:
            intermediate_state.add_p2(b_j, k_j)
            p2_index -= 1

    return intermediate_state
```


It remains to compute an individual $a_i, b_j$ pair given an intermediate state. We refactored the loop body from the last post into a generic function that works for both $a_n, b_m$ (which need access to $\alpha, \beta$) and lower $a_i, b_j$. With the exception of “simple_f_star” there is nothing new happening here.

```python
def simple_f_star(player_action_success: SuccessFn,
                  opponent_action_success: SuccessFn,
                  variable: Symbol,
                  larger_transition_times: Iterable[float]) -> Expr:
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))

    product = 1
    for b in larger_transition_times:
        product *= (1 - Q(b))

    return product * non_product_term


def compute_ai_and_bj(duel_input: SilentDuelInput,
                      intermediate_state: IntermediateState,
                      alpha: float = 0,
                      beta: float = 0):
    '''
    Compute a pair of a_i and b_j transition times for both players,
    using the intermediate state of the algorithm computed so far.

    This function also computes a_n and b_m when the intermediate_state
    input has no larger transition times for the opposite player. In
    those cases, the integrals and equations being solved are slightly
    different; they include some terms involving alpha and beta. In all
    other cases, the alpha and beta parameters are unused.
    '''
    P: SuccessFn = duel_input.player_1_action_success
    Q: SuccessFn = duel_input.player_2_action_success
    t = Symbol('t0', nonnegative=True, real=True)
    a_i = Symbol('a_i', positive=True, real=True)
    b_j = Symbol('b_j', positive=True, real=True)

    p1_transitions = intermediate_state.player_1_transition_times
    p2_transitions = intermediate_state.player_2_transition_times

    # the left end of the transitions arrays contain the smallest
    # (most recently computed) transition time for each player.
    # these are set to 1 for an empty intermediate state, i.e. for a_n, b_m
    a_i_plus_one = p1_transitions[0]
    b_j_plus_one = p2_transitions[0]
    computing_a_n = a_i_plus_one == 1
    computing_b_m = b_j_plus_one == 1

    p1_fstar_parameters = list(p2_transitions)[:-1]  # ignore b_{m+1} = 1
    p1_fstar = simple_f_star(P, Q, t, p1_fstar_parameters)

    # the a_i part
    if computing_a_n:
        p1_integrand = ((1 + alpha) - (1 - alpha) * P(t)) * p1_fstar
        p1_integral_target = 2 * (1 - alpha)
    else:
        p1_integrand = (1 - P(t)) * p1_fstar
        p1_integral_target = 1 / intermediate_state.player_1_normalizing_constants[0]

    a_i_integrated = integrate(p1_integrand, t, a_i, a_i_plus_one)
    a_i = solve_unique_real(
        a_i_integrated - p1_integral_target,
        a_i,
        solution_min=0,
        solution_max=a_i_plus_one
    )

    # the b_j part
    p2_fstar_parameters = list(p1_transitions)[:-1]  # ignore a_{n+1} = 1
    p2_fstar = simple_f_star(Q, P, t, p2_fstar_parameters)

    if computing_b_m:
        p2_integrand = ((1 + beta) - (1 - beta) * Q(t)) * p2_fstar
        p2_integral_target = 2 * (1 - beta)
    else:
        p2_integrand = (1 - Q(t)) * p2_fstar
        p2_integral_target = 1 / intermediate_state.player_2_normalizing_constants[0]

    b_j_integrated = integrate(p2_integrand, t, b_j, b_j_plus_one)
    b_j = solve_unique_real(
        b_j_integrated - p2_integral_target,
        b_j,
        solution_min=0,
        solution_max=b_j_plus_one
    )

    # the h_i part
    h_i_integrated = integrate(p1_fstar, t, a_i, a_i_plus_one)
    h_i_numerator = (1 - alpha) if computing_a_n else 1
    h_i = h_i_numerator / h_i_integrated

    # the k_j part
    k_j_integrated = integrate(p2_fstar, t, b_j, b_j_plus_one)
    k_j_numerator = (1 - beta) if computing_b_m else 1
    k_j = k_j_numerator / k_j_integrated

    return (a_i, b_j, h_i, k_j)
```


You might be wondering what is going on with “simple_f_star”. Let me save that for the end of the post, as it’s related to how I’m currently stuck in understanding the construction.

## Binary searching for alpha and beta

Finally, we need to binary search for the desired output $a_1 = b_1$ as a function of $\alpha, \beta$. Following Restrepo’s claim, we first compute $a_1, b_1$ using $\alpha=0, \beta=0$. If $a_1 > b_1$, then we are looking for a $\beta > 0$, and vice versa for $\alpha$.

To facilitate the binary search, I decided to implement an abstract binary search routine (that only works for floating point domains). You can see the code here. The important part is that it abstracts the test (did you find it?) and the response (no, too low), so that we can have the core of this test be a computation of $a_1, b_1$.
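A sketch of the kind of routine I mean follows; the real code is at the link above, and the details here (BinarySearchResult, the iteration cap) are inferred from how the routine is used later in the post:

```python
from dataclasses import dataclass


@dataclass
class BinarySearchHint:
    found: bool
    tooLow: bool


@dataclass
class BinarySearchResult:
    found: bool
    value: float


def binary_search(test, param_min, param_max, max_iterations=100, callback=None):
    '''Bisect [param_min, param_max], letting `test` direct the search.'''
    current_min, current_max = param_min, param_max
    for _ in range(max_iterations):
        tested_value = (current_min + current_max) / 2
        if callback:
            callback({'current_min': current_min,
                      'current_max': current_max,
                      'tested_value': tested_value})
        hint = test(tested_value)
        if hint.found:
            return BinarySearchResult(found=True, value=tested_value)
        if hint.tooLow:
            current_min = tested_value
        else:
            current_max = tested_value
    return BinarySearchResult(found=False, value=(current_min + current_max) / 2)


# Example: search for sqrt(2) in [0, 2].
result = binary_search(
    lambda v: BinarySearchHint(found=abs(v * v - 2) < 1e-9, tooLow=v * v < 2),
    param_min=0, param_max=2,
)
print(result.value)  # ~1.41421356
```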

First the non-binary-search bits:

```python
def optimal_strategies(silent_duel_input: SilentDuelInput) -> SilentDuelOutput:
    '''Compute an optimal pair of corresponding strategies for the silent duel problem.'''
    # First compute a's and b's, and check to see if a_1 == b_1, in which case quit.
    intermediate_state = compute_as_and_bs(silent_duel_input, alpha=0, beta=0)
    a1 = intermediate_state.player_1_transition_times[0]
    b1 = intermediate_state.player_2_transition_times[0]
    if abs(a1 - b1) < EPSILON:
        return compute_player_strategies(
            silent_duel_input, intermediate_state, alpha=0, beta=0,
        )

    # Otherwise, binary search for an alpha/beta
    searching_for_beta = b1 < a1

    <snip>

    intermediate_state = compute_as_and_bs(
        silent_duel_input, alpha=final_alpha, beta=final_beta
    )
    player_strategies = compute_player_strategies(
        silent_duel_input, intermediate_state, final_alpha, final_beta
    )
    return player_strategies
```


The above first checks to see if a search is needed, and in either case uses our previously defined functions to compute the output strategies. The binary search part looks like this:

```python
if searching_for_beta:
    def test(beta_value):
        new_state = compute_as_and_bs(
            silent_duel_input, alpha=0, beta=beta_value
        )
        new_a1 = new_state.player_1_transition_times[0]
        new_b1 = new_state.player_2_transition_times[0]
        found = abs(new_a1 - new_b1) < EPSILON
        return BinarySearchHint(found=found, tooLow=new_b1 < new_a1)
else:  # searching for alpha
    def test(alpha_value):
        new_state = compute_as_and_bs(
            silent_duel_input, alpha=alpha_value, beta=0
        )
        new_a1 = new_state.player_1_transition_times[0]
        new_b1 = new_state.player_2_transition_times[0]
        found = abs(new_a1 - new_b1) < EPSILON
        return BinarySearchHint(found=found, tooLow=new_a1 < new_b1)

search_result = binary_search(
    test, param_min=0, param_max=1, callback=print
)
assert search_result.found

# the optimal (alpha, beta) pair has product zero.
final_alpha = 0 if searching_for_beta else search_result.value
final_beta = search_result.value if searching_for_beta else 0
```


## Put it all together

Let’s run the complete construction on some examples. I added a couple of print statements in the code and overloaded __str__ on the dataclasses to help. First we can verify we get the same result as our previous symmetric example:

```python
x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x)
duel_input = SilentDuelInput(
    player_1_action_count=3,
    player_2_action_count=3,
    player_1_action_success=P,
    player_2_action_success=Q,
)
print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate()
```

The output is:
Input: SilentDuelInput(player_1_action_count=3, player_2_action_count=3, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(_x, _x))
a_1 = 0.143 b_1 = 0.143
P1:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
P2:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0


This lines up: the players have the same strategy, and the transition times are 1/7, 1/5, and 1/3.

Next up, replace Q = Lambda((x,), x**2), and only have a single action for each player. This should require a binary search, but it will be straightforward to verify manually. I added a callback that prints the bounds during each iteration of the binary search so we can observe the progress. Also note that sympy integration is quite slow, so this binary search takes a minute or two.

Input: SilentDuelInput(player_1_action_count=1, player_2_action_count=1, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.48109 b_1 = 0.42716
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.37545 b_1 = 0.65730
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.40168 b_1 = 0.54770
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.41139 b_1 = 0.50000
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.48109 b_1 = 0.44754
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.41358 b_1 = 0.48860
{'current_min': 0.0625, 'current_max': 0.09375, 'tested_value': 0.078125}
a_1 = 0.41465 b_1 = 0.48297
{'current_min': 0.0625, 'current_max': 0.078125, 'tested_value': 0.0703125}
a_1 = 0.48109 b_1 = 0.45013
{'current_min': 0.0703125, 'current_max': 0.078125, 'tested_value': 0.07421875}
a_1 = 0.41492 b_1 = 0.48157
{'current_min': 0.0703125, 'current_max': 0.07421875, 'tested_value': 0.072265625}
a_1 = 0.48109 b_1 = 0.45078
{'current_min': 0.072265625, 'current_max': 0.07421875, 'tested_value': 0.0732421875}
a_1 = 0.41498 b_1 = 0.48122
{'current_min': 0.072265625, 'current_max': 0.0732421875, 'tested_value': 0.07275390625}
a_1 = 0.48109 b_1 = 0.45094
{'current_min': 0.07275390625, 'current_max': 0.0732421875, 'tested_value': 0.072998046875}
a_1 = 0.41500 b_1 = 0.48113
{'current_min': 0.07275390625, 'current_max': 0.072998046875, 'tested_value': 0.0728759765625}
a_1 = 0.48109 b_1 = 0.45098
{'current_min': 0.0728759765625, 'current_max': 0.072998046875, 'tested_value': 0.07293701171875}
a_1 = 0.41500 b_1 = 0.48111
{'current_min': 0.0728759765625, 'current_max': 0.07293701171875, 'tested_value': 0.072906494140625}
a_1 = 0.41500 b_1 = 0.48110
{'current_min': 0.0728759765625, 'current_max': 0.072906494140625, 'tested_value': 0.0728912353515625}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.0728759765625, 'current_max': 0.0728912353515625, 'tested_value': 0.07288360595703125}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288360595703125, 'current_max': 0.0728912353515625, 'tested_value': 0.07288742065429688}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288360595703125, 'current_max': 0.07288742065429688, 'tested_value': 0.07288551330566406}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288551330566406, 'current_max': 0.07288742065429688, 'tested_value': 0.07288646697998047}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288646697998047, 'current_max': 0.07288742065429688, 'tested_value': 0.07288694381713867}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288646697998047, 'current_max': 0.07288694381713867, 'tested_value': 0.07288670539855957}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288670539855957, 'current_max': 0.07288694381713867, 'tested_value': 0.07288682460784912}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288682460784912, 'tested_value': 0.07288676500320435}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288676500320435, 'tested_value': 0.07288673520088196}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288673520088196, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675010204315}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675010204315, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675755262375}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675755262375, 'current_max': 0.07288676500320435, 'tested_value': 0.07288676127791405}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288676127791405, 'tested_value': 0.0728867594152689}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.0728867594152689, 'tested_value': 0.07288675848394632}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675801828504}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675801828504, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675825111568}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675825111568, 'current_max': 0.07288675848394632, 'tested_value': 0.072886758367531}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675825111568, 'current_max': 0.072886758367531, 'tested_value': 0.07288675830932334}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675830932334, 'current_max': 0.072886758367531, 'tested_value': 0.07288675833842717}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675833842717, 'current_max': 0.072886758367531, 'tested_value': 0.07288675835297909}
a_1 = 0.48109 b_1 = 0.48109
a_1 = 0.48109 b_1 = 0.48109
P1:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572278)), (0.38/t**4, t < 1))
P2:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572086)), (0.35/t**4, t < 1)); Point mass of 0.0728868 at 1.000
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=0.927113241647021 point_mass=0.07288675835297909


This passes the sanity check of the output distributions having probability mass 1. P2 should also have a point mass at the end, because P2’s action success curve is $Q(x) = x^2$, which has less weight at the beginning and sharply increases at the end. This gives P2 a disadvantage, and pushes their action probability towards the end. According to Restrepo’s theorem, it is optimal for P2 to wait until the end about 7% of the time to guarantee a perfect shot. We can work through the example by hand, and turn the result into a unit test.

Note that we haven’t verified this example is correct by hand. We’re just looking at some sanity checks at this point.

## Where it falls apart

At this point I was feeling pretty good, and then the following example shows my implementation is broken:

```python
x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x**2)
duel_input = SilentDuelInput(
    player_1_action_count=2,
    player_2_action_count=2,
    player_1_action_success=P,
    player_2_action_success=Q,
)
print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate(err_on_fail=False)
```


The output shows that the resulting probability distribution does not have a total probability mass of 1. I.e., it’s not a distribution. Uh oh.

Input: SilentDuelInput(player_1_action_count=2, player_2_action_count=2, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.34405 b_1 = 0.28087
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.29894 b_1 = 0.45541
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.32078 b_1 = 0.36181
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.32660 b_1 = 0.34015
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.34292 b_1 = 0.29023
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.33530 b_1 = 0.30495
{'current_min': 0.09375, 'current_max': 0.125, 'tested_value': 0.109375}
a_1 = 0.32726 b_1 = 0.33741
{'current_min': 0.09375, 'current_max': 0.109375, 'tested_value': 0.1015625}
a_1 = 0.32758 b_1 = 0.33604
{'current_min': 0.09375, 'current_max': 0.1015625, 'tested_value': 0.09765625}
a_1 = 0.32774 b_1 = 0.33535
{'current_min': 0.09375, 'current_max': 0.09765625, 'tested_value': 0.095703125}
a_1 = 0.33524 b_1 = 0.30526
{'current_min': 0.095703125, 'current_max': 0.09765625, 'tested_value': 0.0966796875}
a_1 = 0.33520 b_1 = 0.30542
{'current_min': 0.0966796875, 'current_max': 0.09765625, 'tested_value': 0.09716796875}
a_1 = 0.32776 b_1 = 0.33526
{'current_min': 0.0966796875, 'current_max': 0.09716796875, 'tested_value': 0.096923828125}
a_1 = 0.32777 b_1 = 0.33522
{'current_min': 0.0966796875, 'current_max': 0.096923828125, 'tested_value': 0.0968017578125}
a_1 = 0.33520 b_1 = 0.30544
```
{'current_min': 0.0968017578125, 'current_max': 0.096923828125, 'tested_value': 0.09686279296875}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.09686279296875, 'tested_value': 0.096832275390625}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.096832275390625, 'tested_value': 0.0968170166015625}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.0968170166015625, 'tested_value': 0.09680938720703125}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680938720703125, 'tested_value': 0.09680557250976562}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680557250976562, 'tested_value': 0.09680366516113281}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680366516113281, 'current_max': 0.09680557250976562, 'tested_value': 0.09680461883544922}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680461883544922, 'tested_value': 0.09680414199829102}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680414199829102, 'tested_value': 0.09680390357971191}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680390357971191, 'tested_value': 0.09680378437042236}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680378437042236, 'current_max': 0.09680390357971191, 'tested_value': 0.09680384397506714}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384397506714, 'current_max': 0.09680390357971191, 'tested_value': 0.09680387377738953}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680387377738953, 'tested_value': 0.09680385887622833}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385887622833, 'tested_value': 0.09680385142564774}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384770035744}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384770035744, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384956300259}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384956300259, 'tested_value': 0.09680384863168001}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384863168001, 'tested_value': 0.09680384816601872}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384816601872, 'tested_value': 0.09680384793318808}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384793318808, 'tested_value': 0.09680384781677276}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384781677276, 'tested_value': 0.0968038477585651}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968038477585651, 'current_max': 0.09680384781677276, 'tested_value': 0.09680384778766893}
a_1 = 0.33520 b_1 = 0.33520
a_1 = 0.33520 b_1 = 0.33520
```
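The log above is the output of a bisection search: each step prints the current bracket, tests the midpoint, and narrows toward the value at which the two endpoints agree (here, the run converges when `a_1 = b_1 = 0.33520`, with `tested_value` ending up as the point mass reported below). As a minimal sketch of the pattern, with a hypothetical `predicate` standing in for the real solve-and-compare step:

```python
def bisect(predicate, current_min, current_max, tolerance=1e-10):
    """Narrow [current_min, current_max] to (approximately) the smallest
    value for which `predicate` flips from False to True."""
    while current_max - current_min > tolerance:
        tested_value = (current_min + current_max) / 2
        print({'current_min': current_min, 'current_max': current_max,
               'tested_value': tested_value})
        if predicate(tested_value):
            current_max = tested_value  # property holds: search the lower half
        else:
            current_min = tested_value  # property fails: search the upper half
    return current_max

# Toy usage: find sqrt(2) as the smallest x with x*x >= 2.
root = bisect(lambda x: x * x >= 2, 1.0, 2.0)
```

In the actual run, the tested value is a candidate point mass and the predicate presumably re-solves the game and compares the resulting endpoints `a_1` and `b_1`; those details are guesses from the log, not the real code.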
```
deque([0.33520049631043414, 0.45303439059299566, 1])
deque([0.33520049631043414, 0.4897058985296734, 1])
P1:
(0.335, 0.453): dF/dt = Piecewise((0, t < 0.335200496310434), (0.19/t**4, t < 0.453034390592996), (0, t > 0.453034390592996))
(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
P2:
(0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
(0.490, 1.000): dF/dt = Piecewise((0, t < 0.489705898529673), (0.36/t**4, t < 1), (0, t > 1)); Point mass of 0.0968038 at 1.000
Validating P1
Validating. prob_mass=0.999999999999130 point_mass=0
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
Validating P2
Validating. prob_mass=1.10285404591706 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
Validating. prob_mass=0.903196152212331 point_mass=0.09680384778766893
```
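Each validation step integrates the piecewise density and adds any point mass, checking that the total is 1. For densities of the form $c/t^4$ the integral has a closed form, $\int_a^b c\,t^{-4}\,dt = \frac{c}{3}\left(a^{-3} - b^{-3}\right)$, so the check can be sketched as follows (using the constants as printed in the log, which are rounded, so the totals only approximately match the logged `prob_mass` values):

```python
def piece_mass(c, a, b):
    """Mass of the density c / t**4 on the interval (a, b):
    the integral of c*t**-4 from a to b is (c/3) * (a**-3 - b**-3)."""
    return (c / 3) * (a ** -3 - b ** -3)

# P2's second action from the log: density 0.36/t**4 on
# (0.4897..., 1), plus a point mass at t = 1.
density_mass = piece_mass(0.36, 0.489705898529673, 1.0)
point_mass = 0.09680384778766893
total = density_mass + point_mass  # should be ~1 for a valid distribution
```

Because the printed coefficients are rounded to two significant figures, `total` lands near 1 but not exactly on it; the INVALID cases above are off by far more than rounding error, which is what the check is flagging.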


What’s fundamentally different about this example? The main thing I can identify is that this is the simplest example in which player 1 has an action whose range contains one of player 2’s transition times in its interior. It’s this action:

```
(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
...
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID
```


This is where the discontinuity of $f^*$ actually matters. In the previous examples, either there was only one action, and by design the starting times of the action ranges were equal, or else the game was symmetric, so that the players had the same action-range endpoints.

In my first implementation, I had ignored the discontinuities entirely, and because the game was symmetric, doing so didn’t affect the output distributions. This is what’s currently in the code as “simple_f_star.” Neither player’s action transitions fell inside the bounds of any of the other player’s action ranges, so I missed that the discontinuity was important.

In any event, in the three years since I first worked on this, I haven’t been able to figure out what I did wrong. I probably won’t come back to this problem for a while, and in the meantime perhaps some kind reader will have the patience to figure it out. You can see a log of my confusion in this GitHub issue, along with some of the strange workarounds I tried to get it to work.