It’s April Cools! We’re taking back April Fools.

When I was younger I had a strange relationship with alcohol, not because of any sort of trauma, but because I found it decidedly boring and disgusting to the taste. I didn’t drink in high school, didn’t enjoy parties in college, and didn’t care for tailgating or other sports-based events where drinking was common. I also never enjoyed wine—red is too tannic, white is just meh—and almost all beer tastes awful to me (lambic is a delightful exception).

This attitude may have been largely because (in America) young drinkers are idiots and the goal is to get plastered quickly and cheaply. Underage drinkers in the US only have access to cheap liquor, and sugary sodas are the standard mixer. People who grow up in these circumstances largely seem to acclimate to the idea that alcoholic drinks should taste awful.

With this attitude, I was surprised to find myself drawn to making cocktails.

From what I can remember, the first good cocktail I had was some variation on an old fashioned at a fancy restaurant. It was the kind of drink that is only good if it’s well made. This piqued my interest—and I still love a good old fashioned—but what convinced me to start making cocktails was the variety of flavors and surprising combinations that could be made into a drink.

For example, one of my favorite cocktails to make when trying to impress a friend mixes snap peas, mint, caraway (Brennivin), and yuzu lemon—a milder, Japanese variety of lemon that almost tastes like a mix between a Eureka lemon and a pear—into a bright green drink with a naturally foamy top layer (due to shaking the citrus).

“The Mendel,” my name for this cocktail whose central flavor is snap peas and mint.

But literally any flavor you want can be the star of a cocktail. I make a pecan-pie flavored drink with Rivulet’s amazing liqueur, some aged rum, egg white, and pumpkin-pie spice. I made a drink showcasing hibiscus with a syrup made from hibiscus flowers, cinnamon, allspice, vanilla beans, ginger, and sugar water. I’ve made a smokey carrot cocktail (Mezcal, carrot juice, balsamic vinegar, black pepper), and a mango lassi cocktail (gin, mango liqueur, plain yogurt, simple syrup).

The flavors, in my opinion, largely come from two sources: liqueurs and syrups. Liqueurs often start as purified ethanol spirit that is sweetened and infused with various ingredients, and then watered down to a low-alcohol content, often 15%, not much more than a glass of wine. Many world-famous liqueurs originated as bitter herbal remedies as far back as the 1600’s. This was the case for Chartreuse, named after the Grande Chartreuse monastery in the Chartreuse mountains north of Grenoble, France. These monks originally developed Chartreuse because they were trying to make the “elixir of long life.” Elon Musk should try it. Because Yellow Chartreuse is my absolute favorite liqueur, I made a little pilgrimage to their distillery in Voiron, France.

Elixir Vegetal, the herbal remedy still marketed as a cure-all tonic. I have a bottle as a novelty. They’re kept in a wooden outer bottle to prevent sunlight from making the liquid lose its color.

Anyway, point is liqueurs are delicious, even on their own or with tonic water. And there are liqueurs made from every conceivable fruit, peppers, birch bark, nuts, violet, you name it.

Syrups, on the other hand, allow you to infuse the flavor of anything by boiling it with equal parts water and sugar (or honey, or maple syrup). Tea, peanut butter, port, cardamom, whatever you want to experiment with, you can basically make the flavoring parts yourself and add them to a drink.

Beyond those two, texture is an interesting component for me personally. If you’ve ever had a whiskey sour or a pisco sour, you may know the nice foaminess in those drinks comes from “dry shaking” (shaking without ice) an egg white to emulsify and create a nice foam. It also tends to tone down harsher tannic flavors. You can achieve the same sort of effect with plain yogurt or the brine from a can of chickpeas. And then there are these techniques like “milk washing” and other forms of clarifying spirits, where you use liquids with special proteins to “wash off” the unwanted solids in another drink (like tannic solids in wine).

The rabbit hole goes a whole lot deeper. If you want to get into this and geek out a bit more on the science, check out Dave Arnold’s book Liquid Intelligence. It basically taught me all the advanced techniques I know, and many I have yet to muster the courage to try, like vacuum-infusing solids with liquor, using liquid nitrogen to freeze solids before muddling them, and plunging a RED HOT COPPER POKER into a drink to add flavor.

A page from Liquid Intelligence showing the red-hot copper poker technique.

I also enjoyed The Aviary restaurant’s cocktail book, which included a ton of interesting recipes, along with new ideas like spherification (which I didn’t realize I would hate until after I made it, mainly because I hate boba and it has a similar texture).

At 22 I hadn’t yet figured out why alcohol and drinking culture was unappealing. At 32, I think I finally understand: an activity, hobby, or subject appeals to me principally through its sense of quality and craftsmanship. For the quality of the finished work, yes, but also for the attention to detail and the stewardship of the workspace and tools with which that work is done. Like the apocryphal 30-second Picasso napkin art, I strive to practice my crafts (mostly programming, writing, math) so that quality becomes second nature. Focusing on tools, methods, and patterns of work is critical to that.

A bonus of all this is that cocktails become an easy point of conversation when I am stuck in a drinking-culture setting. I offer to “bartend” small parties of my friends so that I can have something to do besides drink. It gives me an excuse to leave an awkward conversation and a nice excuse to join a new conversation. It usually leaves an impression and people remember me. And it’s the best excuse to get my friends to visit me (I have all the ingredients!) so that I don’t have to go anywhere to enjoy myself.

If you want to get into cocktails but aren’t sure where to start, I’d recommend getting the minimal tools to make a shaken cocktail (Boston shaker tins, a Hawthorne strainer, and a measuring jig) and watching a video on how to use the tins to shake a cocktail with ice. Then go to fancy restaurants, try cocktails, write down the ingredients, buy them, and experiment with recreating the drinks at home. A good default recipe uses 2 oz liquor, 3/4 oz – 1 oz sweet stuff (syrups and/or liqueurs), and 1 oz citrus juice, shaken over ice. Personally I default to a 1:1 honey-water simple syrup, and I add a small pinch of salt to mixtures before I shake them. That gives you a solid but flexible base; the particular ingredient choices cover the gamut of gimlets, daiquiris, sours (all that + egg), margaritas, and more.

Anyhow. The math content will return shortly. Enjoy!

Silent Duels—Constructing the Solution part 2

Previous posts in this series:

Silent Duels and an Old Paper of Restrepo
Silent Duels—Parsing the Construction
Silent Duels—Constructing the Solution part 1

It’s been three years since the last post in this series, and the reason for the delay is that I got totally stuck on the implementation. I’m publishing this draft article as partial progress until I can find time to work on it again.

If you haven’t read the last post, please do. Code repo. Paper I’m working through. In this post I make progress on implementing a general construction of the optimal strategies from the paper, but the first serious test case fails, and I’m not sure what to do next to fix it.


Let’s start by refactoring the mess of code from the last post. Now is a good time for that, because our tinkering in the last post gave us confidence that we understand the basic mechanics. The central functions for the generic solution construction are in this file, as of this commit.

We’re using Sympy to represent functions and evaluate integrals, and it helps to name types appropriately. Note: I’ll be using Python type annotations in the code, but the types don’t validate because of missing stubs in sympy. Call it room for improvement.

from dataclasses import dataclass
from typing import NewType

from sympy import Lambda

SuccessFn = NewType('SuccessFn', Lambda)


@dataclass
class SilentDuelInput:
    '''Class containing the static input data to the silent duel problem.'''
    player_1_action_count: int
    player_2_action_count: int
    player_1_action_success: SuccessFn
    player_2_action_success: SuccessFn

Now we turn to the output. Recall, a strategy is a partition of [0,1] into n intervals—where n is the number of actions specified in the input—with a probability distribution for each piece of the partition. This suggests a natural breakdown

@dataclass
class Strategy:
    '''A strategy is a list of action distribution functions, each of which
    describes the probability of taking an action on the interval of its
    support.'''
    action_distributions: List[ActionDistribution]

And ActionDistribution is defined as:

@dataclass
class ActionDistribution:
    # The interval on which this distribution occurs.
    support_start: float
    support_end: float

    # The cumulative density function for the distribution.
    # May be improper if point_mass > 0.
    cumulative_density_function: Lambda

    # If nonzero, corresponds to an extra point mass at the support_end.
    # Only used in the last action in the optimal strategy.
    point_mass: float = 0

    t = Symbol('t', nonnegative=True)

    def draw(self, uniform_random_01=DEFAULT_RNG):
        '''Return a random draw from this distribution.

         - uniform_random_01: a callable that accepts zero arguments
           and returns a uniform random float between 0 and 1. Defaults
           to using python's standard random.random
        '''
        if self.support_start >= self.support_end:  # treat as a point mass
            return self.support_start

        uniform_random_number = uniform_random_01()

        if uniform_random_number > 1 - self.point_mass - EPSILON:
            return self.support_end

        # Invert the cdf: solve F(t) = u for t on the support interval.
        # (The bound keyword names are my assumption; see solve_unique_real
        # in the repo for the exact signature.)
        return solve_unique_real(
            self.cumulative_density_function(self.t) - uniform_random_number,
            self.t,
            solution_min=self.support_start,
            solution_max=self.support_end,
        )
We represent the probability distribution as a cumulative density function F(t), which for a given input a outputs the total probability weight of values in the distribution that are less than a. One reason cumulative density functions are convenient is that they make it easy to sample: pick a uniform random a \in [0,1], and then solve F(t) = a. The resulting t is the sampled value.

Cumulative density functions are related to their more often-used (in mathematics) counterpart, the probability density function f(t), which measures the “instantaneous” probability. I.e., the probability of a range of values (a,b) is given by \int_a^b f(t) dt. The probability density and cumulative density are related via F(a) = \int_{-\infty}^a f(t) dt. In our case, the lower bound of integration is finite since we’re working on a fixed subinterval of [0,1].
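To make that concrete, here is a tiny sympy example, independent of the duel code, that builds a cdf from the pdf f(t) = 2t on [0,1] and then samples from it by inverting the cdf:

import random

from sympy import Symbol, integrate, solve

t = Symbol('t', positive=True)
a = Symbol('a', positive=True)

pdf = 2 * t                       # a density supported on [0, 1]
cdf = integrate(pdf, (t, 0, a))   # F(a) = a**2

# Inverse-cdf sampling: pick u uniformly in [0, 1] and solve F(t) = u.
u = random.random()
sample = [s for s in solve(cdf - u, a) if s.is_real and 0 <= s <= 1][0]
print(float(sample))  # a draw from the density 2t on [0, 1]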

The helper function solve_unique_real abstracts the work of using sympy to solve an equation that the caller guarantees has a unique real solution in a given range. Its source can be found here. The uniqueness is guaranteed for our cumulative density functions, because on their support they are strictly increasing functions of the input.
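For reference, a minimal sketch of what solve_unique_real needs to do looks like the following; the version in the repo (linked above) is more careful about edge cases, and the keyword names here are my own.

from sympy import solve

def solve_unique_real(expr, variable, solution_min=0, solution_max=1):
    '''Solve expr == 0 for variable, assuming the caller guarantees a
    unique real solution in [solution_min, solution_max].'''
    solutions = [
        s for s in solve(expr, variable)
        if s.is_real and solution_min <= s <= solution_max
    ]
    assert len(solutions) == 1, "Expected a unique solution, got %s" % solutions
    return solutions[0]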

Finally, the output is a strategy for each player.

@dataclass
class SilentDuelOutput:
    p1_strategy: Strategy
    p2_strategy: Strategy

I also chose to make a little data structure to help maintain the joint progress of building up the partition and normalizing constants for each player.

@dataclass
class IntermediateState:
    # A list of the transition times computed so far. This field
    # maintains the invariant of being sorted. Thus, the first element
    # in the list is a_{i + 1}, the most recently computed value of
    # player 1's transition times, and the last element is a_{n + 1} = 1.
    # This value is set on initialization with `new`.
    player_1_transition_times: Deque[Expr]

    # Same as player_1_transition_times, but for player 2 with b_j and b_m.
    player_2_transition_times: Deque[Expr]

    # The values of h_i so far, the normalizing constants for the action
    # probability distributions for player 1. Has the same sorting
    # invariant as the transition time lists.
    player_1_normalizing_constants: Deque[Expr]

    # Same as player_1_normalizing_constants, but for player 2,
    # i.e., the k_j normalizing constants.
    player_2_normalizing_constants: Deque[Expr]

    @staticmethod
    def new():
        '''Create a new state object, and set a_{n+1} = 1, b_{m+1}=1.'''
        return IntermediateState(
            player_1_transition_times=deque([1]),
            player_2_transition_times=deque([1]),
            player_1_normalizing_constants=deque(),
            player_2_normalizing_constants=deque(),
        )

    def add_p1(self, transition_time, normalizing_constant):
        # Transition times are computed from largest to smallest, so
        # prepending preserves the sorted invariant.
        self.player_1_transition_times.appendleft(transition_time)
        self.player_1_normalizing_constants.appendleft(normalizing_constant)

    def add_p2(self, transition_time, normalizing_constant):
        self.player_2_transition_times.appendleft(transition_time)
        self.player_2_normalizing_constants.appendleft(normalizing_constant)
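To make the sorted invariant concrete, here's a small usage example. It relies on my reconstruction of add_p1 above, so treat it as illustrative.

state = IntermediateState.new()
state.add_p1(0.5, 2.0)
state.add_p1(0.25, 3.0)
# new values are prepended, and the sentinel a_{n+1} = 1 stays at the end
assert list(state.player_1_transition_times) == [0.25, 0.5, 1]
assert list(state.player_1_normalizing_constants) == [3.0, 2.0]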

Now that we’ve established the types, we move on to the construction.


There are three parts to the construction:

  1. Computing the correct \alpha and \beta.
  2. Finding the transition times and normalizing constants for each player.
  3. Using the above to build the output strategies.

Building the output strategy

We’ll start with the last one: computing the output given the right values. The construction is symmetric for each player, so we can have a single function called with different inputs.

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:

This function computes the construction Restrepo gives.

One difficulty is in expressing discontinuous breaks. The definition of f^* is a product over b_j > t, but if a b_j lies inside the action interval, that will cause a discontinuity. I don’t know of an easy way to express the f^* product literally as written in sympy (let me know if you know better), so instead I opted to construct a piecewise function manually, which sympy supports nicely.

This results in the following function for building the final f^* for a single action for one player. The piecewise function is to break the action interval into pieces according to the discontinuities introduced by the opponent’s transition times that lie in the same interval.

def f_star(player_action_success: SuccessFn,
           opponent_action_success: SuccessFn,
           variable: Symbol,
           opponent_transition_times: Iterable[float]) -> Expr:
    '''Compute f^* as in Restrepo '57.

    The inputs can be chosen so that the appropriate f^* is built
    for either player. I.e., if we want to compute f^* for player 1,
    player_action_success should correspond to P, opponent_action_success
    to Q, and opponent_transition_times to the b_j.
    If the inputs are switched appropriately, f^* is computed for player 2.
    '''
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    # We compute f^* as a Piecewise function of the following form:
    #
    #   [prod_{i=1}^m (1-Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_1
    #   [prod_{i=2}^m (1-Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_2
    #   [prod_{i=3}^m (1-Q(b_i))] * Q'(t) / (Q^2(t) P(t))    if t < b_3
    #   ...
    #   [1] * Q'(t) / (Q^2(t) P(t))                          if t >= b_m
    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))
    piecewise_components = []
    for i, b_j in enumerate(opponent_transition_times):
        larger_transition_times = opponent_transition_times[i:]

        product = 1
        for b in larger_transition_times:
            product *= (1 - Q(b))

        term = product * non_product_term
        piecewise_components.append((term, variable < b_j))

    # last term is when there are no larger transition times.
    piecewise_components.append((non_product_term, True))

    return Piecewise(*piecewise_components)

The piecewise components are probability densities, so we can integrate them piecewise to get the cumulative densities. There are a few helpers here to deal with piecewise functions. First, we define a mask_piecewise to modify the sympy-internal representation of a piecewise function to have specified values outside of a fixed interval (the interval on which the action occurs). For a pdf this should be zero, and for a cdf it should be 0 before the action interval and 1 after. There are also some nuanced bits where hidden calls to expr.simplify() allow the integration to happen much faster than otherwise.
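I won't copy the repo's mask_piecewise here, but a minimal sketch of the idea, assuming the input expression is already a sympy Piecewise and ignoring the simplify() calls, looks like this:

from sympy import Piecewise

def mask_piecewise(expr, variable, domain_start, domain_end,
                   before_domain_val=0, after_domain_val=0):
    '''Fix the value of a Piecewise expression outside [domain_start, domain_end].'''
    pieces = [
        (before_domain_val, variable < domain_start),
        (after_domain_val, variable > domain_end),
    ]
    # Piecewise picks the first condition that matches, so the original
    # pieces only apply inside the domain.
    pieces.extend(expr.args)
    return Piecewise(*pieces)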

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:
    '''Given the transition times for a player, compute the action cumulative
    density functions for the optimal strategy of the player.'''
    action_distributions = []
    x = Symbol('x', real=True)
    t = Symbol('t', real=True)

    # chop off the last transition time, which is always 1
    opponent_transition_times = [
        x for x in opponent_transition_times if x < 1
    ]

    pairs = subsequent_pairs(player_transition_times)
    for (i, (action_start, action_end)) in enumerate(pairs):
        normalizing_constant = player_normalizing_constants[i]

        dF = normalizing_constant * f_star(
            player_action_success,
            opponent_action_success,
            x,
            opponent_transition_times,
        )
        piece_pdf = mask_piecewise(dF, x, action_start, action_end)
        piece_cdf = integrate(piece_pdf, (x, action_start, t))
        piece_cdf = mask_piecewise(
            # the keyword names for the outside-the-domain values are my
            # assumption; the repo's mask_piecewise may name them differently
            piece_cdf, t, action_start, action_end,
            before_domain_val=0, after_domain_val=1,
        )

        action_distributions.append(ActionDistribution(
            support_start=action_start,
            support_end=action_end,
            cumulative_density_function=Lambda((t,), piece_cdf),
        ))

    action_distributions[-1].point_mass = time_1_point_mass
    return Strategy(action_distributions=action_distributions)

def compute_player_strategies(silent_duel_input, intermediate_state, alpha, beta):
    # alpha is player 1's point mass at t=1 and beta is player 2's; the
    # argument wiring here is reconstructed from the surrounding context.
    p1_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_1_action_success,
        player_transition_times=intermediate_state.player_1_transition_times,
        player_normalizing_constants=intermediate_state.player_1_normalizing_constants,
        opponent_action_success=silent_duel_input.player_2_action_success,
        opponent_transition_times=intermediate_state.player_2_transition_times,
        time_1_point_mass=alpha,
    )
    p2_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_2_action_success,
        player_transition_times=intermediate_state.player_2_transition_times,
        player_normalizing_constants=intermediate_state.player_2_normalizing_constants,
        opponent_action_success=silent_duel_input.player_1_action_success,
        opponent_transition_times=intermediate_state.player_1_transition_times,
        time_1_point_mass=beta,
    )
    return SilentDuelOutput(p1_strategy=p1_strategy, p2_strategy=p2_strategy)

Note that “Lambda” is a sympy-internal anonymous function declaration that we’re using here to represent functions. A Lambda also supports function call notation by overloading __call__, which allows the resulting action distribution to be treated as if it were a function.
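The loop in compute_strategy also leans on a small helper, subsequent_pairs, which turns the list of transition times into consecutive action intervals. A minimal version (the repo has its own) looks like this:

def subsequent_pairs(the_iterable):
    '''Yield adjacent pairs of a sequence: [a, b, c] -> (a, b), (b, c).'''
    iterator = iter(the_iterable)
    previous = next(iterator)
    for current in iterator:
        yield (previous, current)
        previous = current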

Finding the transition times

Next we move on to computing the transition times for a given \alpha, \beta. This step is largely the same as in the previous post in this series, but we’re refactoring the code to be simpler and more generic.

The outer loop will construct an intermediate state object, and handle the process that Restrepo describes of “taking the larger parameter” and then computing the next a_i, b_j using the previously saved parameters. There is one caveat that Restrepo misses when he writes, “for definiteness, we assume that a_n > b_m…in the next step…a new b_m is computed…using the single parameter a_n^*.” As best I can tell (after experimenting with symmetric examples), when the computed transition times are equal, keeping only one and following Restrepo’s algorithm faithfully results in an inconsistent output. I believe this is a minor mistake: if the transition times are equal, then you must keep them both, and not recompute one using the other as a parameter.

def compute_as_and_bs(duel_input: SilentDuelInput,
                      alpha: float = 0,
                      beta: float = 0) -> IntermediateState:
    '''Compute the a's and b's for the silent duel, given a fixed
    alpha and beta as input.'''
    t = Symbol('t0', nonnegative=True, real=True)

    p1_index = duel_input.player_1_action_count
    p2_index = duel_input.player_2_action_count
    intermediate_state = IntermediateState.new()

    while p1_index > 0 or p2_index > 0:
        # the larger of a_i, b_j is kept as a parameter, then the other will be repeated
        # in the next iteration; e.g., a_{i-1} and b_j (the latter using a_i in its f^*)
        (a_i, b_j, h_i, k_j) = compute_ai_and_bj(
            duel_input, intermediate_state, alpha=alpha, beta=beta
        )

        # there is one exception, if a_i == b_j, then the computation of f^* in the next
        # iteration (I believe) should not include the previously kept parameter. I.e.,
        # in the symmetric version, if a_n is kept and the next computation of b_m uses
        # the previous a_n, then it will produce the wrong value.
        # I resolve this by keeping both parameters when a_i == b_j.
        if abs(a_i - b_j) < EPSILON and p1_index > 0 and p2_index > 0:
            # use the average of the two to avoid roundoff errors
            transition = (a_i + b_j) / 2
            intermediate_state.add_p1(float(transition), float(h_i))
            intermediate_state.add_p2(float(transition), float(k_j))
            p1_index -= 1
            p2_index -= 1
        elif (a_i > b_j and p1_index > 0) or p2_index == 0:
            intermediate_state.add_p1(float(a_i), float(h_i))
            p1_index -= 1
        elif (b_j > a_i and p2_index > 0) or p1_index == 0:
            intermediate_state.add_p2(float(b_j), float(k_j))
            p2_index -= 1

    return intermediate_state

It remains to compute an individual a_i, b_j pair given an intermediate state. We refactored the loop body from the last post into a generic function that works for both a_n, b_m (which need access to \alpha, \beta) and lower a_i, b_j. With the exception of “simple_f_star” there is nothing new happening here.

def simple_f_star(player_action_success: SuccessFn,
                  opponent_action_success: SuccessFn,
                  variable: Symbol,
                  larger_transition_times: Iterable[float]) -> Expr:
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success

    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))

    product = 1
    for b in larger_transition_times:
        product *= (1 - Q(b))

    return product * non_product_term

def compute_ai_and_bj(duel_input: SilentDuelInput,
                      intermediate_state: IntermediateState,
                      alpha: float = 0,
                      beta: float = 0):
    '''Compute a pair of a_i and b_j transition times for both players,
    using the intermediate state of the algorithm computed so far.
    This function also computes a_n and b_m when the intermediate_state
    input has no larger transition times for the opposite player. In
    those cases, the integrals and equations being solved are slightly
    different; they include some terms involving alpha and beta. In all
    other cases, the alpha and beta parameters are unused.'''
    P: SuccessFn = duel_input.player_1_action_success
    Q: SuccessFn = duel_input.player_2_action_success
    t = Symbol('t0', nonnegative=True, real=True)
    a_i = Symbol('a_i', positive=True, real=True)
    b_j = Symbol('b_j', positive=True, real=True)

    p1_transitions = intermediate_state.player_1_transition_times
    p2_transitions = intermediate_state.player_2_transition_times

    # the left end of the transitions arrays contain the smallest
    # (latest computed) transition time for each player.
    # these are set to 1 for an empty intermediate state, i.e. for a_n, b_m
    a_i_plus_one = p1_transitions[0]
    b_j_plus_one = p2_transitions[0]
    computing_a_n = a_i_plus_one == 1
    computing_b_m = b_j_plus_one == 1

    p1_fstar_parameters = list(p2_transitions)[:-1]  # ignore b_{m+1} = 1
    p1_fstar = simple_f_star(P, Q, t, p1_fstar_parameters)
    # the a_i part
    if computing_a_n:
        p1_integrand = ((1 + alpha) - (1 - alpha) * P(t)) * p1_fstar
        p1_integral_target = 2 * (1 - alpha)
    else:
        p1_integrand = (1 - P(t)) * p1_fstar
        p1_integral_target = 1 / intermediate_state.player_1_normalizing_constants[0]

    a_i_integrated = integrate(p1_integrand, (t, a_i, a_i_plus_one))
    a_i = solve_unique_real(
        a_i_integrated - p1_integral_target,
        a_i,
        solution_min=0,
        solution_max=a_i_plus_one,  # the bound keyword names are my assumption
    )

    # the b_j part
    p2_fstar_parameters = list(p1_transitions)[:-1]  # ignore a_{n+1} = 1
    p2_fstar = simple_f_star(Q, P, t, p2_fstar_parameters)
    if computing_b_m:
        p2_integrand = ((1 + beta) - (1 - beta) * Q(t)) * p2_fstar
        p2_integral_target = 2 * (1 - beta)
    else:
        p2_integrand = (1 - Q(t)) * p2_fstar
        p2_integral_target = 1 / intermediate_state.player_2_normalizing_constants[0]

    b_j_integrated = integrate(p2_integrand, (t, b_j, b_j_plus_one))
    b_j = solve_unique_real(
        b_j_integrated - p2_integral_target,
        b_j,
        solution_min=0,
        solution_max=b_j_plus_one,
    )

    # the h_i part
    h_i_integrated = integrate(p1_fstar, (t, a_i, a_i_plus_one))
    h_i_numerator = (1 - alpha) if computing_a_n else 1
    h_i = h_i_numerator / h_i_integrated

    # the k_j part
    k_j_integrated = integrate(p2_fstar, (t, b_j, b_j_plus_one))
    k_j_numerator = (1 - beta) if computing_b_m else 1
    k_j = k_j_numerator / k_j_integrated

    return (a_i, b_j, h_i, k_j)

You might be wondering what is going on with “simple_f_star”. Let me save that for the end of the post, as it’s related to how I’m currently stuck in understanding the construction.

Binary searching for alpha and beta

Finally, we need to binary search for the desired output a_1 = b_1 as a function of \alpha, \beta. Following Restrepo’s claim, we first compute a_1, b_1 using \alpha=0, \beta=0. If a_1 > b_1, then we are looking for a \beta > 0, and vice versa for \alpha.

To facilitate the binary search, I decided to implement an abstract binary search routine (that only works for floating point domains). You can see the code here. The important part is that it abstracts the test (did you find it?) and the response (no, too low), so that we can have the core of this test be a computation of a_1, b_1.
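For reference, here's a minimal sketch of the search routine's interface. The BinarySearchHint fields and the binary_search arguments are taken from the calling code below; the result type and the iteration cap are my own.

from dataclasses import dataclass

@dataclass
class BinarySearchHint:
    found: bool
    tooLow: bool

@dataclass
class BinarySearchResult:
    found: bool
    value: float

def binary_search(test, param_min, param_max, callback=None, max_iterations=100):
    '''Bisect [param_min, param_max] until test reports it found the value.'''
    current_min, current_max = param_min, param_max
    tested_value = (current_min + current_max) / 2
    for _ in range(max_iterations):
        tested_value = (current_min + current_max) / 2
        if callback:
            callback({'current_min': current_min,
                      'current_max': current_max,
                      'tested_value': tested_value})
        hint = test(tested_value)
        if hint.found:
            return BinarySearchResult(found=True, value=tested_value)
        if hint.tooLow:
            current_min = tested_value
        else:
            current_max = tested_value
    return BinarySearchResult(found=False, value=tested_value)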

First the non-binary-search bits:

def optimal_strategies(silent_duel_input: SilentDuelInput) -> SilentDuelOutput:
    '''Compute an optimal pair of corresponding strategies for the silent duel problem.'''
    # First compute a's and b's, and check to see if a_1 == b_1, in which case quit.
    intermediate_state = compute_as_and_bs(silent_duel_input, alpha=0, beta=0)
    a1 = intermediate_state.player_1_transition_times[0]
    b1 = intermediate_state.player_2_transition_times[0]

    if abs(a1 - b1) < EPSILON:
        return compute_player_strategies(
            silent_duel_input, intermediate_state, alpha=0, beta=0,
        )

    # Otherwise, binary search for an alpha/beta
    searching_for_beta = b1 < a1

    # ... binary search for final_alpha and final_beta (shown in the next snippet) ...

    intermediate_state = compute_as_and_bs(
        silent_duel_input, alpha=final_alpha, beta=final_beta
    )
    player_strategies = compute_player_strategies(
        silent_duel_input, intermediate_state, final_alpha, final_beta
    )

    return player_strategies

The above first checks to see if a search is needed, and in either case uses our previously defined functions to compute the output strategies. The binary search part looks like this:

    if searching_for_beta:
        def test(beta_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=0, beta=beta_value
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_b1 < new_a1)
    else:  # searching for alpha
        def test(alpha_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=alpha_value, beta=0
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_a1 < new_b1)

    search_result = binary_search(
        test, param_min=0, param_max=1, callback=print
    )
    assert search_result.found

    # the optimal (alpha, beta) pair have product zero.
    final_alpha = 0 if searching_for_beta else search_result.value
    final_beta = search_result.value if searching_for_beta else 0

Put it all together

Let’s run the complete construction on some examples. I added a couple of print statements in the code and overloaded __str__ on the dataclasses to help. First we can verify we get the same result as our previous symmetric example:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x)

duel_input = SilentDuelInput(
    player_1_action_count=3,
    player_2_action_count=3,
    player_1_action_success=P,
    player_2_action_success=Q,
)

print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)

# output is:

Input: SilentDuelInput(player_1_action_count=3, player_2_action_count=3, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(_x, _x))
a_1 = 0.143 b_1 = 0.143
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))

(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0

This lines up: the players have the same strategy, and the transition times are 1/7, 1/5, and 1/3.

Next up, replace Q = Lambda((x,), x**2), and only have a single action for each player. This should require a binary search, but it will be straightforward to verify manually. I added a callback that prints the bounds during each iteration of the binary search to observe. Also note that sympy integration is quite slow, so this binary search takes a minute or two.

Input: SilentDuelInput(player_1_action_count=1, player_2_action_count=1, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.48109 b_1 = 0.42716
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.37545 b_1 = 0.65730
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.40168 b_1 = 0.54770
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.41139 b_1 = 0.50000
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.48109 b_1 = 0.44754
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.41358 b_1 = 0.48860
{'current_min': 0.0625, 'current_max': 0.09375, 'tested_value': 0.078125}
a_1 = 0.41465 b_1 = 0.48297
{'current_min': 0.0625, 'current_max': 0.078125, 'tested_value': 0.0703125}
a_1 = 0.48109 b_1 = 0.45013
{'current_min': 0.0703125, 'current_max': 0.078125, 'tested_value': 0.07421875}
a_1 = 0.41492 b_1 = 0.48157
{'current_min': 0.0703125, 'current_max': 0.07421875, 'tested_value': 0.072265625}
a_1 = 0.48109 b_1 = 0.45078
{'current_min': 0.072265625, 'current_max': 0.07421875, 'tested_value': 0.0732421875}
a_1 = 0.41498 b_1 = 0.48122
{'current_min': 0.072265625, 'current_max': 0.0732421875, 'tested_value': 0.07275390625}
a_1 = 0.48109 b_1 = 0.45094
{'current_min': 0.07275390625, 'current_max': 0.0732421875, 'tested_value': 0.072998046875}
a_1 = 0.41500 b_1 = 0.48113
{'current_min': 0.07275390625, 'current_max': 0.072998046875, 'tested_value': 0.0728759765625}
a_1 = 0.48109 b_1 = 0.45098
{'current_min': 0.0728759765625, 'current_max': 0.072998046875, 'tested_value': 0.07293701171875}
a_1 = 0.41500 b_1 = 0.48111
{'current_min': 0.0728759765625, 'current_max': 0.07293701171875, 'tested_value': 0.072906494140625}
a_1 = 0.41500 b_1 = 0.48110
{'current_min': 0.0728759765625, 'current_max': 0.072906494140625, 'tested_value': 0.0728912353515625}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.0728759765625, 'current_max': 0.0728912353515625, 'tested_value': 0.07288360595703125}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288360595703125, 'current_max': 0.0728912353515625, 'tested_value': 0.07288742065429688}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288360595703125, 'current_max': 0.07288742065429688, 'tested_value': 0.07288551330566406}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288551330566406, 'current_max': 0.07288742065429688, 'tested_value': 0.07288646697998047}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288646697998047, 'current_max': 0.07288742065429688, 'tested_value': 0.07288694381713867}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288646697998047, 'current_max': 0.07288694381713867, 'tested_value': 0.07288670539855957}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288670539855957, 'current_max': 0.07288694381713867, 'tested_value': 0.07288682460784912}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288682460784912, 'tested_value': 0.07288676500320435}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288676500320435, 'tested_value': 0.07288673520088196}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288673520088196, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675010204315}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675010204315, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675755262375}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675755262375, 'current_max': 0.07288676500320435, 'tested_value': 0.07288676127791405}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288676127791405, 'tested_value': 0.0728867594152689}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.0728867594152689, 'tested_value': 0.07288675848394632}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675801828504}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675801828504, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675825111568}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675825111568, 'current_max': 0.07288675848394632, 'tested_value': 0.072886758367531}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675825111568, 'current_max': 0.072886758367531, 'tested_value': 0.07288675830932334}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675830932334, 'current_max': 0.072886758367531, 'tested_value': 0.07288675833842717}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675833842717, 'current_max': 0.072886758367531, 'tested_value': 0.07288675835297909}
a_1 = 0.48109 b_1 = 0.48109
a_1 = 0.48109 b_1 = 0.48109
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572278)), (0.38/t**4, t < 1))

(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572086)), (0.35/t**4, t < 1)); Point mass of 0.0728868 at 1.000
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=0.927113241647021 point_mass=0.07288675835297909

This passes the sanity check of the output distributions having probability mass 1. P2 should also have a point mass at the end, because P2’s action success function is f(x) = x^2, which has less weight at the beginning and sharply increases at the end. This gives P2 a disadvantage, and pushes their action probability towards the end. According to Restrepo’s theorem, it is optimal for P2 to wait until the end about 7% of the time to guarantee a perfect shot. We can work through the example by hand, and turn the result into a unit test.

Note that we haven’t verified this example is correct by hand. We’re just looking at some sanity checks at this point.

Where it falls apart

At this point I was feeling pretty good, and then the following example shows my implementation is broken:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x**2)

duel_input = SilentDuelInput(
    player_1_action_count=2,
    player_2_action_count=2,
    player_1_action_success=P,
    player_2_action_success=Q,
)

print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)

The output shows that the resulting probability distribution does not have a total probability mass of 1. I.e., it’s not a distribution. Uh oh.

Input: SilentDuelInput(player_1_action_count=2, player_2_action_count=2, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.34405 b_1 = 0.28087
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.29894 b_1 = 0.45541
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.32078 b_1 = 0.36181
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.32660 b_1 = 0.34015
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.34292 b_1 = 0.29023
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.33530 b_1 = 0.30495
{'current_min': 0.09375, 'current_max': 0.125, 'tested_value': 0.109375}
a_1 = 0.32726 b_1 = 0.33741
{'current_min': 0.09375, 'current_max': 0.109375, 'tested_value': 0.1015625}
a_1 = 0.32758 b_1 = 0.33604
{'current_min': 0.09375, 'current_max': 0.1015625, 'tested_value': 0.09765625}
a_1 = 0.32774 b_1 = 0.33535
{'current_min': 0.09375, 'current_max': 0.09765625, 'tested_value': 0.095703125}
a_1 = 0.33524 b_1 = 0.30526
{'current_min': 0.095703125, 'current_max': 0.09765625, 'tested_value': 0.0966796875}
a_1 = 0.33520 b_1 = 0.30542
{'current_min': 0.0966796875, 'current_max': 0.09765625, 'tested_value': 0.09716796875}
a_1 = 0.32776 b_1 = 0.33526
{'current_min': 0.0966796875, 'current_max': 0.09716796875, 'tested_value': 0.096923828125}
a_1 = 0.32777 b_1 = 0.33522
{'current_min': 0.0966796875, 'current_max': 0.096923828125, 'tested_value': 0.0968017578125}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968017578125, 'current_max': 0.096923828125, 'tested_value': 0.09686279296875}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.09686279296875, 'tested_value': 0.096832275390625}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.096832275390625, 'tested_value': 0.0968170166015625}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.0968170166015625, 'tested_value': 0.09680938720703125}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680938720703125, 'tested_value': 0.09680557250976562}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680557250976562, 'tested_value': 0.09680366516113281}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680366516113281, 'current_max': 0.09680557250976562, 'tested_value': 0.09680461883544922}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680461883544922, 'tested_value': 0.09680414199829102}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680414199829102, 'tested_value': 0.09680390357971191}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680390357971191, 'tested_value': 0.09680378437042236}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680378437042236, 'current_max': 0.09680390357971191, 'tested_value': 0.09680384397506714}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384397506714, 'current_max': 0.09680390357971191, 'tested_value': 0.09680387377738953}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680387377738953, 'tested_value': 0.09680385887622833}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385887622833, 'tested_value': 0.09680385142564774}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384770035744}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384770035744, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384956300259}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384956300259, 'tested_value': 0.09680384863168001}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384863168001, 'tested_value': 0.09680384816601872}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384816601872, 'tested_value': 0.09680384793318808}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384793318808, 'tested_value': 0.09680384781677276}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384781677276, 'tested_value': 0.0968038477585651}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968038477585651, 'current_max': 0.09680384781677276, 'tested_value': 0.09680384778766893}
a_1 = 0.33520 b_1 = 0.33520
a_1 = 0.33520 b_1 = 0.33520
deque([0.33520049631043414, 0.45303439059299566, 1])
deque([0.33520049631043414, 0.4897058985296734, 1])
(0.335, 0.453): dF/dt = Piecewise((0, t < 0.335200496310434), (0.19/t**4, t < 0.453034390592996), (0, t > 0.453034390592996))
(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))

(0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
(0.490, 1.000): dF/dt = Piecewise((0, t < 0.489705898529673), (0.36/t**4, t < 1), (0, t > 1)); Point mass of 0.0968038 at 1.000
Validating P1
Validating. prob_mass=0.999999999999130 point_mass=0
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
Validating P2
Validating. prob_mass=1.10285404591706 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
Validating. prob_mass=0.903196152212331 point_mass=0.09680384778766893

What’s fundamentally different about this example? The main thing I can tell is that this is the simplest example for which one of player 1’s action intervals has a player 2 transition time in its interior. It’s this action:

(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID

This is where the discontinuity of f^* actually matters. In the previous examples, either there was only one action (and by design the starting times of the action ranges are equal), or else the game was symmetric, so that the players had the same action range endpoints.

In my first implementation, I had actually ignored the discontinuities entirely, and because the game was symmetric it didn’t impact the output distributions. This is what’s currently in the code as “simple_f_star.” Neither player’s action transitions fell inside the bounds of any of the other player’s action ranges, and so I missed that the discontinuity was important.

In any event, in the three years since I first worked on this, I haven’t been able to figure out what I did wrong. I probably won’t come back to this problem for a while, and in the meantime perhaps some nice reader has the patience to figure it out. You can see a log of my confusion in this GitHub issue, along with some of the strange workarounds I tried to get it to work.

My next book will be “Practical Math for Programmers”

tl;dr: I’m writing a new book, sign up for the announcements mailing list.

I’ve written exactly zero new technical blog posts this year because I’ve been spending all my writing efforts on my next book, Practical Math for Programmers (PMFP, subtitle: A Tour of Mathematics in Production Software).

I’ve written a little bit about it in my newsletter, Halfspace. There I rant, critique, brainstorm, and wax poetic about math and software. This blog will remain dedicated to technical details. If you like my casual writing—the complementary half of this space, hence the name—go sign up for that.

The concept for Practical Math grew from an ongoing frustration with academic mathematicians and computer scientists overselling the work in their research papers as “widely applicable.” I ranted about this at length in the newsletter, but it has bothered me and wasted my time throughout my education and career.

So this book will assemble a large list of applications of interesting math used in bona-fide production systems. I hope the title makes this goal abundantly obvious. It will serve the following purposes, beyond satiating my spite.

  • It will be a gateway drug to mathematics for people who are not interested in math (yet).
  • It will inspire the reader to draw analogies to problems they face in their own work.
  • It will be a clear and unambiguous answer to the question “when will we ever use this?”, hopefully satisfying skeptical students.
  • It will be much easier than pimbook to sample and read casually. It could be a bathroom reader for the stalls of software firms.

To achieve these, the book will follow three principles. It won’t be super heavy on mathematical details. No proofs, nice diagrams, and limited notation. It will also contain working demonstrations of each production application. Python 3, with a GitHub repo, full test suite, and code documentation to back it. And finally, each application section (I’m tentatively calling them “Tips”) will be only 3-5 pages long. I plan to write 60-75 Tips. Add essays, and this is a 200-300 page book. I’ve currently written drafts of 16 with two more in progress.

Though there will be technical details, the Tips will read more like a survey than a textbook. And to make it more appealing and interesting, I’ll focus on the story behind the problem that the math is solving, the evidence that it is used in production, and the references you can read if you want details. In particular, I will interview the practitioners who actually use or manage the software built around the math, or cover the history from primary sources when I can. To achieve working demonstrations for 60+ applications (~3 years of blogging for me), I will limit myself to demonstrations that can fit in under 1 page of Python code, I will simplify the scope of the demonstrations (within reason), and I will port or adapt reference implementations from open source software when they are available.

And as I would hope, readers who are enticed by mathematics from reading Practical Math might be convinced to journey through A Programmer’s Introduction to Mathematics.

As for the content, the Tips are still largely unstructured, though themes are showing up around the following topics that will probably evolve into Chapter divisions.

  • Measuring things. Either because the concept is fuzzy or approximations are required. Problems around attribution, problems in counting, detection, and budgeting.
  • Dealing with uncertainty. Either uncertainty in measurements or defining useful probabilistic models for decision making.
  • Safety. In terms of data corruption, privacy, redundancy, etc.
  • Efficiency. Techniques for improving the efficiency of things in place (without system redesigns). Or situations in which a weird bottleneck shows up that math helps smooth over.

And then there are topics that don’t fit anywhere, like Perlin noise generation or market design techniques. Maybe I’ll call that chapter Potpourri. If you have suggestions for applications you think would fit well in the book, please reach out! Especially if you have experience using the ideas in prod.

In between chapters I plan to write essays, such as one on the ethics of applying math to automated computer systems, one reflecting on the sorts of problems engineers face that could be avoided with just a modest amount of math, one on the value of analogy in engineering and math, and an updated and more professional version of “What’s in Production.”

I hope to have a first draft done this year (though I’m not on track), and publish some time in 2023.

The Gadget Decomposition in FHE

Lately I’ve been studying Fully Homomorphic Encryption, which is the miraculous ability to perform arbitrary computations on encrypted data without learning any information about the underlying message. It’s the most comprehensive private computing solution that can exist (and it does exist!).

The first FHE scheme by Craig Gentry was based on ideal lattices and was considered very complex (I never took the time to learn how it worked). Some later schemes (GSW = Gentry-Sahai-Waters) are based on matrix multiplication, and are conceptually much simpler. Even more recent FHE schemes build on GSW or use it as a core subroutine.

All of these schemes inject random noise into the ciphertext, and each homomorphic operation increases the noise. Once the noise gets too big, you can no longer decrypt the message, and so every now and then you must apply a process called “bootstrapping” that reduces noise. Bootstrapping also tends to be the performance bottleneck of any FHE scheme, and this bottleneck is why FHE is not considered practical yet.

To help reduce noise growth, many FHE schemes like GSW use a technical construction dubbed the gadget decomposition. Despite the terribly vague name, it’s a crucial tool for limiting noise growth. When it shows up in a paper, it’s usually remarked as “well known in the literature,” and the details you’d need to implement it are omitted. It’s one of those topics.

So I’ll provide some details. The code from this post is on GitHub.

Binary digit decomposition

To create an FHE scheme, you need to apply two homomorphic operations to ciphertexts: addition and multiplication. Most FHE schemes admit one of the two operations trivially. If the ciphertexts are numbers as in RSA, you multiply them as numbers and that multiplies the underlying messages, but addition is not known to be possible. If ciphertexts are vectors as in the “Learning With Errors” scheme (LWE)—the basis of many FHE schemes—you add them as vectors and that adds the underlying messages. (The “Error” in LWE is synonymous with “random noise”; I will use the term “noise.”) In LWE and most FHE schemes, a ciphertext hides the underlying message by adding random noise, and addition of two ciphertexts adds the corresponding noise. After too many unmitigated additions, the noise will grow so large it obstructs the message. So you stop computing, or you apply a bootstrapping operation to reduce the noise.

Most FHE schemes also allow you to multiply a ciphertext by an unencrypted constant A, but then the noise scales by a factor of A, which is undesirable if A is large. So you either need to limit the coefficients of your linear combinations by some upper bound, or use a version of the gadget decomposition.

The simplest version of the gadget decomposition works like this. Instead of encrypting a message m \in \mathbb{Z}, you would encrypt m, 2m, 4m, ..., 2^{k-1} m for some choice of k, and then to multiply A < 2^k you write the binary digits of A = \sum_{i=0}^{k-1} a_i 2^i and you compute \sum_{i=0}^{k-1} a_i \textup{Enc}(2^i m). If the noise in each encryption is E, and summing ciphertexts sums noise, then this trick reduces the noise growth from O(AE) to O(kE) = O(\log(A)E), at the cost of tracking k ciphertexts. (Calling the noise E is a bit of an abuse—in reality the error is sampled from a random distribution—but hopefully you see my point).

Some folks call the mapping \textup{PowersOf2}(m) = m \cdot (2^0, 2^1, 2^2, \dots, 2^{k-1}), and for the sake of this article let’s call the operation of writing a number A in terms of its binary digits \textup{Bin}(A) = (a_0, \dots, a_{k-1}) (note, the first digit is the least-significant bit, i.e., it’s a little-endian representation). Then PowersOf2 and Bin expand an integer product into a dot product, while shifting powers of 2 from one side to the other.

\displaystyle A \cdot m = \langle \textup{Bin}(A), \textup{PowersOf2}(m) \rangle

This inspired the following “proof by meme” that I can’t resist including.

Working out an example, if the message is m=7 and A = 100, k=8, then \textup{PowersOf2}(7) = (7, 14, 28, 56, 112, 224, 448, 896) and \textup{Bin}(A) = (0,0,1,0,0,1,1,0) (again, little-endian), and the dot product is

\displaystyle 28 \cdot 1 + 224 \cdot 1 + 448 \cdot 1 = 700 = 7 \cdot 2^2 + 7 \cdot 2^5 + 7 \cdot 2^6
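A few lines of Python confirm the arithmetic (purely illustrative, no encryption involved):

def powers_of_2(m, k):
    return [m * (1 << i) for i in range(k)]

def binary_digits(a, k):
    # little-endian binary digits of a
    return [(a >> i) & 1 for i in range(k)]

m, a, k = 7, 100, 8
dot = sum(d * p for d, p in zip(binary_digits(a, k), powers_of_2(m, k)))
assert dot == a * m == 700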

A generalized gadget construction

One can generalize the binary digit decomposition to different bases, or to vectors of messages instead of a single message, or to include a subset of the digits for varying approximations. I’ve been puzzling over an FHE scheme that does all three. In my search for clarity I came across a nice paper of Genise, Micciancio, and Polyakov called “Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More“, in which they state a nice general definition.

Definition: For any finite additive group A, an A-gadget of size w and quality \beta is a vector \mathbf{g} \in A^w such that any group element u \in A can be written as an integer combination u = \sum_{i=1}^w g_i x_i where \mathbf{x} = (x_1, \dots , x_w) has norm at most \beta.

The main groups considered in my case are A = (\mathbb{Z}/q\mathbb{Z})^n, where q is usually 2^{32} or 2^{64}, i.e., unsigned int sizes on computers for which we get free modulus operations. In this case, a (\mathbb{Z}/q\mathbb{Z})^n-gadget is a matrix G \in (\mathbb{Z}/q\mathbb{Z})^{n \times w}, and the representation x \in \mathbb{Z}^w of u \in (\mathbb{Z}/q\mathbb{Z})^n satisfies Gx = u.

Here n and q are fixed, and w, \beta are traded off to make the chosen gadget scheme more efficient (smaller w) or better at reducing noise (smaller \beta). An example of how this could work is shown in the next section by generalizing the binary digit decomposition to an arbitrary base B. This allows you to use fewer digits to represent the number A, but each digit may be as large as B and so the quality is \beta = O(B\sqrt{w}).

One commonly-used construction is to convert an A-gadget to an A^n-gadget using the Kronecker product. Let g \in A^w be an A-gadget of quality \beta. Then the following matrix is an A^n-gadget of size nw and quality \sqrt{n} \beta:

\displaystyle G = I_n \otimes \mathbf{g}^\top = \begin{pmatrix} g_1 & \dots & g_w & & & & & & & \\ & & & g_1 & \dots & g_w & & & & \\ & & & & & & \ddots & & & \\ & & & & & & & g_1 & \dots & g_w \end{pmatrix}

Blank spaces represent zeros, for clarity.

An example with A = (\mathbb{Z}/16\mathbb{Z}). The A-gadget is \mathbf{g} = (1,2,4,8). This has size 4 = \log(q) and quality \beta = 2 = \sqrt{1+1+1+1}. Then for an A^3-gadget, we construct

\displaystyle G = I_3 \otimes \mathbf{g}^\top = \begin{pmatrix} 1 & 2 & 4 & 8 & & & & & & & & \\ & & & & 1 & 2 & 4 & 8 & & & & \\ & & & & & & & & 1 & 2 & 4 & 8 \end{pmatrix}

Now given a vector (15, 4, 7) \in A^3 we write it as follows, where each little-endian representation is concatenated into a single vector.

\displaystyle \mathbf{x} = \begin{pmatrix} 1\\1\\1\\1\\0\\0\\1\\0\\1\\1\\1\\0 \end{pmatrix}

And finally,

\displaystyle G \mathbf{x} = \begin{pmatrix} 1 + 2 + 4 + 8 \\ 4 \\ 1 + 2 + 4 \end{pmatrix} = \begin{pmatrix} 15 \\ 4 \\ 7 \end{pmatrix}

To use the definition more rigorously, if we had to write the matrix above as a gadget “vector”, it would be in column order from left to right, \mathbf{g} = ((1,0,0), (2,0,0), \dots, (0,0,8)) \in A^{wn}. Since the vector \mathbf{x} can be at worst all 1’s, its norm is at most \sqrt{12} = \sqrt{nw} = \sqrt{n} \beta = 2 \sqrt{3}, as claimed above.
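Here's a quick numpy check of this example, again purely for illustration:

import numpy as np

q = 16
g = np.array([1, 2, 4, 8])                # the (Z/16Z)-gadget
G = np.kron(np.eye(3, dtype=int), g)      # the (Z/16Z)^3-gadget, a 3x12 matrix

u = np.array([15, 4, 7])
# concatenate the little-endian binary digits of each entry of u
x = np.array([(n >> i) & 1 for n in u for i in range(4)])

assert np.array_equal(G.dot(x) % q, u)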

A signed representation in base B

As we’ve seen, the gadget decomposition trades reducing noise for a larger ciphertext size. With integers modulo q = 2^{32}, this can be fine-tuned a bit more by using a larger base. Instead of PowersOf2 we could define PowersOfB, where B = 2^b and b divides 32. For example, with b = 8, B = 256, we would only need to track 4 ciphertexts. And the gadget decomposition of the number we’re multiplying by would be the little-endian digits of its base-B representation. The cost here is that the maximum entry of the decomposed representation is 255.
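In code, the unsigned base-B decomposition of a 32-bit integer with b = 8 is just its little-endian bytes. The function below is mine, for illustration; it mirrors the signed version shown later.

def unsigned_decomposition(x: int, base_log: int = 8, total_num_bits: int = 32) -> list:
    '''Little-endian base-2**base_log digits of x.'''
    digit_mask = (1 << base_log) - 1
    return [
        (x >> (i * base_log)) & digit_mask
        for i in range(total_num_bits // base_log)
    ]

assert unsigned_decomposition(0xDEADBEEF) == [0xEF, 0xBE, 0xAD, 0xDE]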

We can fine tune this a little bit more by using a signed base-B representation. To my knowledge this is not the same thing as what computer programmers normally refer to as a signed integer, nor does it have anything to do with the two’s complement representation of negative numbers. Rather, instead of the normal base-B digits n_i \in \{ 0, 1, \dots, B-1 \} for a number N = \sum_{i=0}^k n_i B^i, the signed representation chooses n_i \in \{ -B/2, -B/2 + 1, \dots, -1, 0, 1, \dots, B/2 - 1 \}.

Computing the digits is slightly more involved, and it works by shifting large coefficients by -B/2, and “absorbing” the impact of that shift into the next more significant digit. E.g., if B = 256 and N = 2^{11} - 1 (all 1s up to the 10th digit), then the unsigned little-endian base-B representation of N is (255, 7) = 255 + 7 \cdot 256. The corresponding signed base-B representation subtracts B from the first digit, and adds 1 to the second digit, resulting in (-1, 8) = -1 + 8 \cdot 256. This works in general because of the following “add zero” identity, where p and q are two successive unsigned digits in the unsigned base-B representation of a number.

\displaystyle \begin{aligned} pB^{k-1} + qB^k &= pB^{k-1} - B^k + qB^k + B^k \\ &= (p-B)B^{k-1} + (q+1)B^k \end{aligned}

Then if q+1 \geq B/2, you’d repeat and carry the 1 to the next higher coefficient.

The result of all this is that the maximum absolute value of a coefficient of the signed representation is halved from the unsigned representation, which reduces the noise growth at the cost of a slightly more complex representation (from an implementation standpoint). Another side effect is that the largest representable number is less than 2^{32}-1. If you try to apply this algorithm to such a large number, the largest digit would need to be shifted, but there is no successor to carry to. Rather, if there are k digits in the unsigned base-B representation, the maximum number representable in the signed version has all digits set to B/2 - 1. In our example with B=256 and 32 bits, the largest digit is 127. The formula for the max representable integer is \sum_{i=0}^{k-1} (B/2 - 1) B^i = (B/2 - 1)\frac{B^k - 1}{B-1}.

base, base_log, num_bits = 256, 8, 32   # B, b, and the word size
max_digit = base // 2 - 1               # 127
max_representable = (max_digit
    * (base ** (num_bits // base_log) - 1) // (base - 1))

A simple python implementation computes the signed representation, with code copied below, in which B=2^b is the base, and b = \log_2(B) is base_log.

from typing import List

def signed_decomposition(
  x: int, base_log: int, total_num_bits=32) -> List[int]:
    result = []
    base = 1 << base_log
    digit_mask = (1 << base_log) - 1
    base_over_2_threshold = 1 << (base_log - 1)
    carry = 0

    for i in range(total_num_bits // base_log):
        unsigned_digit = (x >> (i * base_log)) & digit_mask
        if carry:
            unsigned_digit += carry
            carry = 0

        signed_digit = unsigned_digit
        if signed_digit >= base_over_2_threshold:
            # shift this digit down by B and absorb the shift into the next digit
            signed_digit -= base
            carry = 1
        result.append(signed_digit)

    return result
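As a quick sanity check, the signed digits should reconstruct the original number, and each digit should have absolute value at most B/2. Using the example above:

digits = signed_decomposition(2**11 - 1, base_log=8)   # [-1, 8, 0, 0]
assert all(abs(d) <= 128 for d in digits)
assert sum(d * 256**i for i, d in enumerate(digits)) == 2**11 - 1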

In a future article I’d like to demonstrate the gadget decomposition in action in a practical setting called key switching, which allows one to convert an LWE ciphertext encrypted with key s_1 into an LWE ciphertext encrypted with a different key s_2. This operation increases noise, and so the gadget decomposition is used to reduce noise growth. Key switching is used in FHE because some operations (like bootstrapping) have the side effect of switching the encryption key.

Until then!