Google’s Fully Homomorphic Encryption Compiler — A Primer

Back in May of 2022 I transferred teams at Google to work on Fully Homomorphic Encryption (newsletter announcement). Since then I’ve been working on a variety of projects in the space, including being the primary maintainer on github.com/google/fully-homomorphic-encryption, which is an open source FHE compiler for C++. This article will be an introduction to how to use it to compile programs to FHE, as well as a quick overview of its internals.

If you’d like to contribute to this project, please reach out to me at mathintersectprogramming@gmail.com or at j2kun@mathstodon.xyz. I have a few procedural hurdles to overcome before I can accept external contributions (with appropriate git commit credit), but if there’s enough interest I will make time for it sooner as opposed to later.

Overview

The core idea of fully homomorphic encryption (henceforth FHE) is that you can encrypt data and then run programs on it without ever decrypting it. In the extreme, even if someone had physical access to the machine and could inspect the values of individual memory cells or registers while the program was running, they would not see any of the bits of the underlying data being operated on (without cracking the cryptosystem).

Our FHE compiler converts C++ programs that operate on plaintext to programs that operate on the corresponding FHE ciphertexts (since it emits high-level code that then needs to be further compiled, it could be described as a transpiler). More specifically, it converts a specific subset of valid C++ programs—more on what defines that subset later—to programs that perform the same computation on encrypted data via one of the supported FHE cryptosystem implementations. In this sense it’s close to a traditional compiler: parse the input, run a variety of optimization passes, and generate some output. However, as we’ll see in this article, the unique properties of FHE make the compiler more like hardware circuit toolchains.

The variety of FHE supported by the compiler today is called “gate bootstrapping.” I won’t have time to go into intense detail about the math behind it, but suffice it to say that this technique gives away performance in exchange for a simpler job of optimizing and producing a working program. What I will say is that this flavor of FHE encrypts each bit of its input into a separate ciphertext, and then represents the program as a boolean (combinational) circuit—composed of gates like AND, OR, XNOR, etc. Part of the benefit of the compiler is that it manages the mapping of higher-level types like integers, arrays, and structs to lists of encrypted booleans and back again.
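
To make the bit-level view concrete, here is a small plaintext sketch (my own illustration, not code from the compiler) of the mapping the compiler maintains: a 32-bit integer corresponds to 32 booleans, and in the FHE program each of those booleans lives in its own ciphertext.

#include <array>
#include <cstdint>

// Decompose a 32-bit integer into its 32 bits, least significant first.
// In the compiled FHE program, each of these bits is a separate ciphertext.
std::array<bool, 32> to_bits(int32_t x) {
  std::array<bool, 32> bits;
  for (int i = 0; i < 32; i++) {
    bits[i] = (static_cast<uint32_t>(x) >> i) & 1;
  }
  return bits;
}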

A few limitations result from this circuit-based approach, which will be woven throughout the rest of this tutorial. First, all loops must be fully unrolled and have statically-known bounds. Second, constructs like pointers and dynamic memory allocation are not supported. Third, all control flow is multiplexed, meaning that all branches of all if statements are evaluated, and only then is one result chosen. Finally, there are important practical considerations related to the bit-width of the types used and the expansion of cleartexts into ciphertexts, both of which impact the performance of the resulting program.
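
To illustrate the multiplexed control flow, here is a rough plaintext sketch (mine, not the compiler’s output) of what happens to an if statement: since the condition is encrypted, both branches are always computed, and a mux selects which result survives.

#include <cstdint>

// What you write:
int32_t relu(int32_t x) {
  if (x > 0) {
    return x;
  }
  return 0;
}

// Roughly what the circuit computes: both branch values are produced
// unconditionally, and the (encrypted) condition picks between them.
int32_t relu_multiplexed(int32_t x) {
  bool cond = (x > 0);
  int32_t then_value = x;
  int32_t else_value = 0;
  return cond ? then_value : else_value;
}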

On the other hand, combinational circuit optimization is a well-studied problem with off-the-shelf products that can be integrated (narrator: they did integrate some) into the FHE compiler to make the programs run faster.

Dependencies

tl;dr: check out the dockerfiles.

Google’s internal build system is called blaze, and its open source counterpart (equivalent in all except name) is called bazel. One of the first curious things you’ll notice about the compiler is that bazel is used both to build the project and to use the project (the latter I’d like to change). So you’ll need to install bazel, and an easy way to do that is to install bazelisk, which is the analogue of nvm for Node or pyenv for Python. You won’t need multiple versions of bazel—this is just the easiest way to install the latest version. I’ll be using Bazel 4.0.0, but newer versions should work just fine as well.

You’ll need a C compiler (I use gcc 12) because most of the project’s dependencies are built from source (see next paragraph), and a small number of external libraries and programs to support some of the circuit optimizer plugins. For Debian-based systems, this is the full list:

apt-get update && apt-get install -y \
  gcc \
  git \
  libtinfo5 \
  python \
  python3 \
  python3-pip \
  autoconf \
  libreadline-dev \
  flex \
  bison \
  wget

As mentioned above, all the other dependencies are built from source, and this will take a while the first time you build the project. So you might as well clone and get that build started while you read. The command below will build the project and all the example binaries, and then cache the intermediate build artifacts for future builds, only recompiling what has changed in the meantime. See the Bazel/Starlark section for more details on what this command is doing. Note: the one weird case is LLVM. If you use an exotic operating system (or a docker container, don’t get me started on why this is an issue) then bazel may choose to build LLVM from scratch, which will take an hour or two for the first build. It may also fail due to a missing dependency on your system, which will be extremely frustrating (this is the #1 complaint in our GitHub issues). But if you’re on a standard OS/architecture combination (as enumerated here), it will just fetch the right LLVM dependency and install it on your system.

git clone https://github.com/google/fully-homomorphic-encryption.git
cd fully-homomorphic-encryption
bazel build ...:all

A clean build on my home machine takes about 16 minutes.

Two end-to-end examples: add and string_cap

In this section I’ll show two end-to-end examples of using the compiler as an end user. The first will be for a dirt-simple program that adds two 32-bit integers. The second will be for a program that capitalizes the first character of each word in an ASCII string. The examples are already in the repository under transpiler/examples by the names simple_sum and string_cap.

Both examples follow the same pattern: compile a single function that serves as the entry point for the FHE part of the program, and provide a library and API to integrate it with a larger program.

First, simple_sum. Add a header and source file like you would in any standard C++ program, but with one extra line to tell the compiler which function is the entry point to be compiled (along with any functions it calls).

// add.h
int add(int a, int b);

// add.cc
#include "add.h"

#pragma hls_top
int add(int a, int b) {
  return a + b;
}

The line #pragma hls_top tells the compiler which function is the entry point. Incidentally, hls stands for “high level synthesis,” and the pragma itself comes from the XLS project, which we use as our parser and initial circuit builder. Here ‘top’ just means top level function.

Then, inside a file in the same directory called BUILD (see the Bazel/Starlark section next for an overview of the build system), create a build target that invokes the FHE compiler. In our case we’ll use the OpenFHE backend.

# BUILD
# loads the FHE compiler as an extension to Bazel.
load("//transpiler:fhe.bzl", "fhe_cc_library")

fhe_cc_library(
  name = "add_fhe_lib",
  src = "add.cc",
  hdrs = ["add.h"],
  encryption = "openfhe",  # backend cryptosystem library
  interpreter = True,      # use dynamic thread scheduling
  optimizer = "yosys",     # boolean circuit optimizer
)

The full options for this build rule (i.e., the documentation of the compiler’s main entry point) can be found in the docstring of the bazel macro. I picked the parameters that have what I think of as the best tradeoff between stability and performance.

If you run bazel build add_fhe_lib, you will see it build, but nothing else happens (see the “intermediate files” section for more on what’s happening behind the scenes). But if you typed something wrong in the build file, it would error out at this point. The build generates a header and cc file that contain the same API as add, but with different types for the arguments and extra arguments needed by the FHE library backend.

Next we need a main routine that uses the library. Since we’re using OpenFHE as our backend, it requires some configuration and the initial encryption of its inputs. The full code, with some slight changes for the blog, looks like this

#include <stdio.h>
#include <iostream>
#include <ostream>

#include "absl/strings/numbers.h"
#include "transpiler/codelab/add/add_fhe_lib.h"
#include "transpiler/data/openfhe_data.h"

constexpr auto kSecurityLevel = lbcrypto::MEDIUM;

int main(int argc, char** argv) {
  if (argc < 3) {
    fprintf(stderr, "Usage: add_main [int] [int]\n\n");
    return 1;
  }

  int x, y;
  if (!absl::SimpleAtoi(argv[1], &x)) {
    std::cout << "Bad int " << argv[1] << std::endl;
    return 1;
  }
  if (!absl::SimpleAtoi(argv[2], &y)) {
    std::cout << "Bad int " << argv[2] << std::endl;
    return 1;
  }
  std::cout << "Computing " << x << " + " << y << std::endl;

  // Set up backend context and encryption keys.
  auto context = lbcrypto::BinFHEContext();
  context.GenerateBinFHEContext(kSecurityLevel);
  auto sk = context.KeyGen();
  context.BTKeyGen(sk);

  OpenFhe<int> ciphertext_x = OpenFhe<int>::Encrypt(x, context, sk);
  OpenFhe<int> ciphertext_y = OpenFhe<int>::Encrypt(y, context, sk);
  OpenFhe<int> result(context);
  auto status = add(result, ciphertext_x, ciphertext_y, context);
  if (!status.ok()) {
    std::cout << "FHE computation failed: " << status << std::endl;
    return 1;
  }

  std::cout << "Result: " << result.Decrypt(sk) << "\n";
  return 0;
}

The parts that are not obvious boilerplate include:

Configuring the security level of the OpenFHE library (which is called BinFHE to signal it’s doing binary circuit FHE).

constexpr auto kSecurityLevel = lbcrypto::MEDIUM;

Setting up the initial OpenFHE secret key

 auto context = lbcrypto::BinFHEContext();
 context.GenerateBinFHEContext(kSecurityLevel);
 auto sk = context.KeyGen();
 context.BTKeyGen(sk);

Encrypting the inputs. This uses an API provided by the compiler (though because the project was a research prototype, I think the original authors never got around to unifying the “set up the secret key” part behind an API), pulled in via the include "transpiler/data/openfhe_data.h".

 OpenFhe<int> ciphertext_x = OpenFhe<int>::Encrypt(x, context, sk);
 OpenFhe<int> ciphertext_y = OpenFhe<int>::Encrypt(y, context, sk);

Then calling the FHE-enabled add function, and decrypting the results.

Then create another BUILD rule for the binary:

cc_binary(
    name = "add_openfhe_fhe_demo",
    srcs = [
        "add_openfhe_fhe_demo.cc",
    ],
    deps = [
        ":add_fhe_lib",
        "//transpiler/data:openfhe_data",
        "@com_google_absl//absl/strings",
        "@openfhe//:binfhe",
    ],
)

Running it with bazel:

$ bazel run add_openfhe_fhe_demo -- 5 7
Computing 5 + 7
Result: 12

Timing this on my system, it takes a little less than 7 seconds.

On to a more complicated example: string_cap, which will showcase loops and arrays. This was slightly simplified from the GitHub example. First the header and source files:

// string_cap.h
#define MAX_LENGTH 32
void CapitalizeString(char my_string[MAX_LENGTH]);

// string_cap.cc
#include "string_cap.h"

#pragma hls_top
void CapitalizeString(char my_string[MAX_LENGTH]) {
  bool last_was_space = true;
#pragma hls_unroll yes
  for (int i = 0; i < MAX_LENGTH; i++) {
    char c = my_string[i];
    if (last_was_space && c >= 'a' && c <= 'z') {
      my_string[i] = c - ('a' - 'A');
    }
    last_was_space = (c == ' ');
  }
}

Now there’s a bit to discuss. First, the string has a static length known at compile time. This is required because the FHE program is a boolean circuit. It defines wires for each of the inputs, and it must know how many wires to define. In this case it will be a circuit with 32 * 8 = 256 input wires, one for each bit of each character in the array.

The second new thing is the #pragma hls_unroll yes, which, like hls_top, tells the XLS compiler to fully unroll that loop. Because the FHE program is a static circuit, it cannot have any loops. XLS unrolls our loops for us, and incidentally, I learned recently that it uses the Z3 solver to first prove that the loops can be unrolled (which can lead to some slow compile times for complex programs). I’m not aware of other compilers that do this proving part. It looks like LLVM’s loop unroller just slingshots its CPU cycles into the sun if it’s asked to fully unroll an infinite loop.
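
To make “fully unrolled” concrete, here is a plaintext sketch (my own toy example, not the compiler’s output) of what unrolling does to a loop with a statically-known bound:

// Before: a loop with a static bound of 4.
void increment_all(char data[4]) {
#pragma hls_unroll yes
  for (int i = 0; i < 4; i++) {
    data[i] += 1;
  }
}

// After unrolling: straight-line code with no loop, which is the form a
// combinational circuit requires.
void increment_all_unrolled(char data[4]) {
  data[0] += 1;
  data[1] += 1;
  data[2] += 1;
  data[3] += 1;
}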

The main routine is similar to before:

#include <array>
#include <iostream>
#include <string>

#include "openfhe/binfhe/binfhecontext.h"
#include "transpiler/data/openfhe_data.h"
#include "transpiler/examples/string_cap/string_cap.h"
#include "transpiler/examples/string_cap/string_cap_openfhe_yosys_interpreted.h"

int main(int argc, char** argv) {
  if (argc < 2) {
    fprintf(stderr, "Usage: string_cap_openfhe_testbench string_input\n\n");
    return 1;
  }

  std::string input = argv[1];
  input.resize(MAX_LENGTH, '\0');
  std::string plaintext(input);

  auto cc = lbcrypto::BinFHEContext();
  cc.GenerateBinFHEContext(lbcrypto::MEDIUM);
  auto sk = cc.KeyGen();
  cc.BTKeyGen(sk);

  auto ciphertext = OpenFheArray<char>::Encrypt(plaintext, cc, sk);
  auto status = CapitalizeString(ciphertext, cc);
  if (!status.ok()) {
    std::cout << "FHE computation failed " << status << std::endl;
    return 1;
  }
  std::cout << "Decrypted result: " << ciphertext.Decrypt(sk) << std::endl;
}

The key differences are:

  • We resize the input to be exactly MAX_LENGTH, padding with null bytes.
  • We use OpenFheArray instead of OpenFhe to encode an array of characters.

And now omitting the binary’s build rule and running it, we get

$ bazel run string_cap_openfhe_yosys_interpreted_testbench -- 'hello there'
Decrypted result: Hello There

Interestingly, this also takes about 6 seconds to run on my machine (same as the “add 32-bit integers” program). The runtime would be the same for any input string up to 32 characters since, of course, the program processes all MAX_LENGTH characters without knowing which of them are null bytes.

An overview of Bazel and Starlark

The FHE compiler originated within Google in a curious way. It was created by dozens of volunteer contributors (20%-ers, as they say), many of whom worked on the XLS hardware synthesis toolchain, which is a core component of the compiler. Because of those constraints on contributors’ time, and because the work happened entirely inside Google, there wasn’t much bandwidth available to make the compiler independent of Google’s internal build tooling.

This brings us to Bazel and Starlark, which together form the user-facing façade of the compiler today. Bazel is the open source analogue of Google’s internal build system (“Blaze” is the internal tool), and Starlark is its Python-inspired scripting language. There are lots of opinions about Bazel that I won’t repeat here. Instead I will give a minimal overview of how it works with regards to the FHE compiler.

First some terminology. To work with Bazel you do the following.

  • Define a WORKSPACE file which defines all your project’s external dependencies, how to fetch their source code, and what bazel commands should be used to build them. This can be thought of as a top-level CMakeLists, except that it doesn’t contain any instructions for building the project beyond declaring the root of the project’s directory tree and the project’s name.
  • Define a set of BUILD files in each subdirectory, declaring the build targets that can be built from the source files in that directory (but not its subdirectories). This is analogous to CMakeLists files in subdirectories. Each build target can declare dependence on other build targets, and bazel build ensures the dependencies are built first, and caches the build results across a session. Many projects have a BUILD file in the project root to expose the project’s public libraries and APIs.
  • Use the built-in bazel rules like cc_library and cc_binary and cc_test to group files into libraries that can be built with bazel build, executable binaries that can also be run with bazel run, and tests that can also be run with bazel test. Most bazel rules boil down to calling some executable program like gcc or javac with specific arguments, while also keeping track of the accumulated dependency set of build artifacts in a “hermetic” location on the filesystem.
  • Write any additional bazel macros that chain together built-in bazel commands, e.g., for defining logical groupings of build commands that need to happen in a particular sequence. Macros look like Python functions that call individual bazel rules and possibly pass data between them. They’re written in .bzl files which are interpreted directly by bazel.

Generally, bazel builds targets in two phases. First—the analysis phase—it loads all the BUILD files and imported .bzl files, and scans for all the rules that were called. In particular, it runs the macros, because it needs to know what rules are called by the macros (and rules can be guarded by control flow, or their arguments can be generated dynamically, etc.). But it doesn’t run the build rules themselves. In doing this, it can build a complete graph of dependencies, and report errors about typos, missing dependencies, cycles, etc. Once the analysis phase is complete, it runs the underlying rules in dependency order, and caches the results. Bazel will only run a rule again if something changes with the files it depends on or its underlying dependencies.

The FHE compiler is written in Starlark, in the sense that the main entrypoint for the compiler is the Bazel macro fhe_cc_library. This macro chains together a bunch of rules that call the parser, circuit optimizer, and codegen steps, each one being its own Bazel rule. Each of these rules in turn declares/writes files that we can inspect—see the next section.

Here’s what fhe_cc_library looks like (showing only a subset of the control flow for brevity):

def fhe_cc_library(name, src, hdrs, copts = [], num_opt_passes = 1,
        encryption = "openfhe", optimizer = "xls", interpreter = False, library_name = None,
        **kwargs):
    """A rule for building FHE-based cc_libraries. [docstring ommitted]"""
    transpiled_xlscc_files = "{}.cc_to_xls_ir".format(name)
    library_name = library_name or name
    cc_to_xls_ir(
        name = transpiled_xlscc_files,
        library_name = library_name,
        src = src,
        hdrs = hdrs,
        defines = kwargs.get("defines", None),
    )

    # Below, adding a leading colon to the `src` argument points the source files attribute
    # to the files generated by a previously declared rule, with the rule's name serving as
    # the unique identifier.
    transpiled_structs_headers = "{}.xls_cc_transpiled_structs".format(name)
    xls_cc_transpiled_structs(
        name = transpiled_structs_headers,
        src = ":" + transpiled_xlscc_files,
        encryption = encryption,
    )

    if optimizer == "yosys":  # other branch omitted for brevity
        verilog = "{}.verilog".format(name)
        xls_ir_to_verilog(name = verilog, src = ":" + transpiled_xlscc_files)
        netlist = "{}.netlist".format(name)
        verilog_to_netlist(name = netlist, src = ":" + verilog, encryption = encryption)
        cc_fhe_netlist_library(
            name = name,
            src = ":" + netlist,
            encryption = encryption,
            interpreter = interpreter,
            transpiled_structs = ":" + transpiled_structs_headers,
            copts = copts,
            **kwargs
        )

The rules invoked by the macro include:

  • cc_to_xls_ir, which calls the parser xlscc and outputs an intermediate representation of the program as a high-level circuit. This step does the loop unrolling and other smarts related to converting C++ to a circuit.
  • xls_cc_transpiled_structs, which calls a binary that handles structs (this part is complicated and will not be covered in this article).
  • xls_ir_to_verilog, which converts the XLS IR to verilog so that it can be optimized using Yosys/ABC, a popular circuit design and optimization program.
  • verilog_to_netlist, which invokes Yosys to both optimize the circuit and convert it to the lowest-level IR, which is called a netlist.
  • cc_fhe_netlist_library, which calls the codegen step to generate C++ code from the netlist in the previous step.

All of this results in a C++ library (generated by the last step) that can be linked against an existing program and whose generated source we can inspect. Now let’s see what each generated file looks like.

The intermediate files generated by the compiler

Earlier I mentioned that bazel puts the intermediate files generated by each build rule into a “hermetic” location on the filesystem. That location is sym-linked from the workspace root by a link called bazel-bin.

$ ls -al . | grep bazel-bin
/home/j2kun/.cache/bazel/_bazel_j2kun/42987a3d4769c6105b2fa57d2291edc3/execroot/com_google_fully_homomorphic_encryption/bazel-out/k8-opt/bin

Within bazel-bin there’s a mirror of the project’s source tree, and in the directory for a build rule you can find all the generated files. For our 32-bit adder here’s what it looks like:

$ ls
_objs                                   add_test
add_fhe_lib.cc                          add_test-2.params
add_fhe_lib.entry                       add_test.runfiles
add_fhe_lib.generic.types.h             add_test.runfiles_manifest
add_fhe_lib.h                           libadd.a
add_fhe_lib.ir                          libadd.a-2.params
add_fhe_lib.netlist.v                   libadd.pic.a
add_fhe_lib.netlist.v.dot               libadd.pic.a-2.params
add_fhe_lib.opt.ir                      libadd.so
add_fhe_lib.types.h                     libadd.so-2.params
add_fhe_lib.v                           libadd_fhe_lib.a
add_fhe_lib.ys                          libadd_fhe_lib.a-2.params
add_fhe_lib_meta.proto                  libadd_fhe_lib.pic.a
add_openfhe_fhe_demo                    libadd_fhe_lib.pic.a-2.params
add_openfhe_fhe_demo-2.params           libadd_fhe_lib.so
add_openfhe_fhe_demo.runfiles           libadd_fhe_lib.so-2.params
add_openfhe_fhe_demo.runfiles_manifest

You can see the output .h and .cc files and their compiled .so files (the output build artifacts), but more importantly for us are the internal generated files. This is where we get to actually see the circuits generated.

The first one worth inspecting is add_fhe_lib.opt.ir, which is the output of the xlscc compiler plus an XLS-internal optimization step. This is the main part of how the compiler uses the XLS project: to convert an input program into a circuit. The file looks like:

package my_package

file_number 1 "./transpiler/codelab/add/add.cc"

top fn add(x: bits[32], y: bits[32]) -> bits[32] {
  ret add.3: bits[32] = add(x, y, id=3, pos=[(1,18,25)])
}

As you can see, it’s an XLS-defined intermediate representation (IR) of the main routine with some extra source code metadata. Because XLS-IR natively supports addition, the result is trivial. One interesting thing to note is that numbers are represented as bit arrays. In short, XLS-IR’s value type system supports only bits, arrays, and tuples, with tuples being the mechanism for supporting structs.

Next, the XLS-IR is converted to Verilog in add_fhe_lib.v, resulting in the (similarly trivial)

module add(
  input wire [31:0] x,
  input wire [31:0] y,
  output wire [31:0] out
);
  wire [31:0] add_6;
  assign add_6 = x + y;
  assign out = add_6;
endmodule

The next step is to run this verilog through Yosys, which is a mature circuit synthesis suite, and for our purposes encapsulates two tasks:

  • Convert higher-level operations to a specified set of boolean gates (that operate on individual bits)
  • Optimize the resulting circuit to be as small as possible

XLS can also do this, and if you want to see that you can change the build rule’s optimizer attribute from yosys to xls. But we’ve found that Yosys routinely produces 2-3x smaller circuits. The script that we give to Yosys can be found in fhe_yosys.bzl, which encapsulates the bazel macros and rules related to invoking Yosys. The output for our adder program is:

module add(x, y, out);
  wire _000_;
  wire _001_;
  wire _002_;
  [...]
  wire _131_;
  wire _132_;
  output [31:0] out;
  wire [31:0] out;
  input [31:0] x;
  wire [31:0] x;
  input [31:0] y;
  wire [31:0] y;
  nand2 _133_ (.A(x[12]), .B(y[12]), .Y(_130_));
  xor2 _134_ ( .A(x[12]), .B(y[12]), .Y(_131_));
  nand2 _135_ ( .A(x[11]), .B(y[11]), .Y(_132_));
  or2 _136_ ( .A(x[11]), .B(y[11]), .Y(_000_));
  nand2 _137_ ( .A(x[10]), .B(y[10]), .Y(_001_));
  xor2 _138_ ( .A(x[10]), .B(y[10]), .Y(_002_));
  nand2 _139_ ( .A(x[9]), .B(y[9]), .Y(_003_));
  or2 _140_ ( .A(x[9]), .B(y[9]), .Y(_004_));
  nand2 _141_ ( .A(x[8]), .B(y[8]), .Y(_005_));
  xor2 _142_ ( .A(x[8]), .B(y[8]), .Y(_006_));
  nand2 _143_ ( .A(x[7]), .B(y[7]), .Y(_007_));
  or2 _144_ ( .A(x[7]), .B(y[7]), .Y(_008_));
  [...]
  xor2 _291_ ( .A(_006_), .B(_035_), .Y(out[8]));
  xnor2 _292_ ( .A(x[9]), .B(y[9]), .Y(_128_));
  xnor2 _293_ ( .A(_037_), .B(_128_), .Y(out[9]));
  xor2 _294_ ( .A(_002_), .B(_039_), .Y(out[10]));
  xnor2 _295_ ( .A(x[11]), .B(y[11]), .Y(_129_));
  xnor2 _296_ ( .A(_041_), .B(_129_), .Y(out[11]));
  xor2 _297_ ( .A(_131_), .B(_043_), .Y(out[12]));
endmodule

This produces a circuit with a total of 165 gates.

The codegen step then produces an add_fhe_lib.cc file which loads this circuit into an interpreter that maps each gate operation (e.g., and2) to the corresponding call in the chosen backend cryptosystem library (see the source for the OpenFHE backend), and uses thread-pool scheduling on the CPU to speed up the evaluation of the circuit.
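
To give a sense of what “mapping a gate to a library call” means, here is a minimal hand-written sketch of evaluating a single AND gate with OpenFHE’s BinFHE API. This is my own illustration following OpenFHE’s standard boolean example, not the generated code, and the include path may differ depending on how you depend on OpenFHE.

#include "openfhe/binfhe/binfhecontext.h"

int main() {
  // Same setup as in the demo programs above.
  auto cc = lbcrypto::BinFHEContext();
  cc.GenerateBinFHEContext(lbcrypto::MEDIUM);
  auto sk = cc.KeyGen();
  cc.BTKeyGen(sk);  // bootstrapping keys, required for gate evaluation

  // Encrypt two single-bit inputs.
  auto ct_a = cc.Encrypt(sk, 1);
  auto ct_b = cc.Encrypt(sk, 0);

  // One gate of the circuit: an encrypted AND (bootstrapping happens inside).
  auto ct_out = cc.EvalBinGate(lbcrypto::AND, ct_a, ct_b);

  lbcrypto::LWEPlaintext result;
  cc.Decrypt(sk, ct_out, &result);  // result is 0
  return 0;
}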

For the string_cap circuit, the opt.ir shows off a bit more of XLS’s IR, including operations for sign extension, array indexing & slicing, and multiplexing (sel) branches. The resulting netlist after optimization is a 684-gate circuit (though many of those are “inverter” or “buffer” gates, which are effectively free for FHE).

The compiler also outputs a .dot file which can be rendered to an SVG (warning, the SVG is ~2.3 MiB). If you browse this circuit, you’ll see it is rather shallow and wide, and this allows the thread-pool scheduler to take advantage of the parallelism in the circuit to make it run fast. Meanwhile, the 32-bit adder, though it has roughly 25% the total number of gates, is a much deeper circuit and hence has less parallelism.

Supported C++ input programs and encryption overhead

This has so far been a tour of the compiler, but if you want to get started using the compiler to write programs, you’ll need to keep a few things in mind.

First, the subset of C++ supported by the compiler is rather small. As mentioned earlier, all data needs to have static sizes. This means, e.g., you can’t write a program that processes arbitrary images. Instead, you have to pick an upper bound on the image size, zero-pad the image appropriately before encrypting it, and then write the program to operate on that fixed image size. In the same vein, the integer types you choose have nontrivial implications for performance. To see this, replace the int type in the 32-bit adder with a char and inspect the resulting circuit.
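
For instance, here is a sketch of that experiment—the same adder with 8-bit types—which has a quarter of the input wires and should produce a substantially smaller circuit:

// add8.cc: the same program as before, but on 8-bit values.
#pragma hls_top
char add(char a, char b) {
  return a + b;
}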

Similarly, loops need static bounds on their iteration count. Or, more precisely, xlscc needs to be able to fully unroll every loop—which permits some forms of while loops and recursion that provably terminate. This can cause problems if the input code has loops with complex exit criteria (e.g., a break guarded by an if/else). It also requires you to think hard about how you write your loops, though future work will hopefully let the compiler do that thinking for you.

Finally, encrypting each bit of a plaintext message comes with a major tax on space usage. Each encryption of a single bit corresponds to a list of roughly 700 32-bit integers. If you want to encrypt a 100×100 pixel greyscale image, each pixel of which is an 8-bit integer (0-255), it will cost you 218 MiB to store all the pixels in memory. That’s roughly a 20,000x overhead. For comparison, the music video for Rick Astley’s “Never Gonna Give You Up” at 360p is about 9 MiB (pretty small for a 3 minute video!), but encrypted in FHE it would be 188 GiB, which (generously) corresponds to 20 feature-length films at 1080p. Some other FHE schemes have smaller ciphertext sizes, but at the cost of even larger in-memory requirements to run the computations. So if you want to run programs that operate on video, you can do it, but you will need to distribute the work appropriately, and find useful ways to reduce the data size as much as possible before encrypting it (such as working in lower resolution, greyscale, and a lower frame rate), which will also result in overall faster programs.
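
Spelling out the per-bit overhead: each plaintext bit becomes roughly 700 ciphertext integers of 32 bits each, so

$$ \frac{700 \times 32 \text{ ciphertext bits}}{1 \text{ plaintext bit}} \approx 22{,}400, $$

which is where the “roughly 20,000x” figure comes from.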

Until next time!

[Personal note]: Now that I’m more or less ramped up on the FHE domain, I’m curious to know what aspects of FHE my readers are interested in. Mathematical foundations? More practical demonstrations? Library tutorials? Circuit optimization? Please comment and tell me about what you’re interested in.

Carnival of Mathematics #209

Welcome to the 209th Carnival of Mathematics!

209 has a few distinctions, including being the smallest number with 6 representations as a sum of 3 positive squares:

$$\begin{aligned}209 &= 1^2 + 8^2 + 12^2 \\ &= 2^2 + 3^2 + 14^2 \\ &= 2^2 + 6^2 + 13^2 \\ &= 3^2 + 10^2 + 10^2 \\ &= 4^2 + 7^2 + 12^2 \\ &= 8^2 + 8^2 + 9^2 \end{aligned}$$

It is also the 43rd Ulam number, the number of partitions of 16 into relatively prime parts, and the number of partitions of 63 into squares.

Be sure to submit fun math you find in October to the next carnival host!

Math YouTubers

The Heidelberg Laureate Forum took place, which featured lectures from renowned mathematicians and computer scientists, like Rob Tarjan and Avi Wigderson on the CS theory side, as well as a panel discussion on post-quantum cryptography with none other than Vint Cerf, Whitfield Diffie, and Adi Shamir. All the videos are on YouTube.

Tom Edgar, who is behind the Mathematical Visual Proofs YouTube channel, published a video (using manim) exploring for which $n$ it is possible to divide a disk into $n$ equal pieces using a straightedge and compass. It was based on a proof from Roger Nelsen’s and Claudi Alsina’s book, “Icons of Mathematics”.

The folks at Ganit Charcha also published a talk “Fascinating Facts About Pi” from a Pi Day 2022 celebration. The video includes a question that was new to me about interpreting subsequences of pi digits as indexes and doing reverse lookups until you find a loop.

Henry Segerman published two nice videos, including one on an illusion of a square and circle in the same shape, and a preview of a genus-2 holonomy maze (Augh, my wallet! I have both of his original holonomy mazes and my houseguests love playing with them!)

Steve Mould published a nice video about the Chladni figures used (or adapted) in the new Lord of the Rings TV series’ title sequence.

The Simons Institute has been doing a workshop on graph limits, which aims to cover some of the theory about things like low-rank matrix completion, random graphs, and various models of networks. Their lectures are posted on their YouTube page.

Math Twitter

Peter Rowlett shared a nice activity with his son about distinct colorings of a square divided into four triangular regions.

Krystal Guo showed off her approach to LiveTeX’ing lectures.

Tamás Görbe gave a nice thread about a function that enumerates all rational numbers exactly once.

Every math club leader should be called the Prime Minister.

In doing research for my book, I was writing a chapter on balanced incomplete block designs, and I found a few nice tidbits in threads (thread 1, thread 2). A few here: Latin squares were on Islamic amulets from the 1200’s. The entire back catalog of “The Mathematical Scientist” journal is available on Google Drive, and through it I found an old article describing the very first use of Latin squares for experimental design, in which a man ran an experiment on what crop was best to feed his sheep during the winter months in France in the 1800’s. Finally, I determined that NFL season scheduling is done via integer linear programming.

Math Bloggers

Lúcás Meier published a nice article at the end of August (which I only discovered in September, it counts!) going over the details of his favorite cryptography paper “Unifying Zero-Knowledge Proofs of Knowledge”, by Ueli Maurer, which gives a single zero-knowledge protocol that generalizes Schnorr, Fiat-Shamir, and a few others for proving knowledge of logarithms and roots.

Raph Levien published a blog post about how to efficiently draw a decent approximation to the curve parallel to a given cubic Bezier curve. He has a previous blog post about fitting cubic Beziers to data, and a variety of other interesting graphics-inspired math articles in between articles about Rust and GPUs.

Cocktails

It’s April Cools! We’re taking back April Fools.

When I was younger I had a strange relationship with alcohol, not because of any sort of trauma, but because I found it decidedly boring and disgusting to the taste. I didn’t drink in high school, didn’t enjoy parties in college, and didn’t care for tailgating or other sports-based events where drinking was common. I also never enjoyed wine—red is too tannic, white is just meh—and almost all beer tastes awful to me (lambic is a delightful exception).

This attitude may have been largely because (in America) young drinkers are idiots and the goal is to get plastered quickly and cheaply. Underage drinkers in the US only have access to cheap liquor, and sugary sodas are the standard mixer. People who grow up in these circumstances largely seem to acclimate to the idea that alcoholic drinks should taste awful.

With this attitude, I was surprised to find myself drawn to making cocktails.

From what I can remember, the first good cocktail I had was some variation on an old fashioned at a fancy restaurant. It was the kind of drink that is only good if it’s well made. This piqued my interest—and I still love a good old fashioned—but what convinced me to start making cocktails was the variety of flavors and surprising combinations that could be made into a drink.

For example, one of my favorite cocktails to make when trying to impress a friend mixes snap peas, mint, caraway (Brennivin), and yuzu lemon—a milder, Japanese variety of lemon that almost tastes like a mix between a Eureka lemon and a pear—into a bright green drink with a naturally foamy top layer (due to shaking the citrus).

“The Mendel,” my name for this cocktail whose central flavor is snap peas and mint.

But literally any flavor you want can be the star of a cocktail. I make a pecan-pie flavored drink with Rivulet’s amazing liqueur, some aged rum, egg white, and pumpkin-pie spice. I made a drink showcasing hibiscus with a syrup made from hibiscus flowers, cinnamon, allspice, vanilla beans, ginger, and sugar water. I’ve made a smokey carrot cocktail (Mezcal, carrot juice, balsamic vinegar, black pepper), and a mango lassi cocktail (gin, mango liqueur, plain yogurt, simple syrup).

The flavors, in my opinion, largely come from two sources: liqueurs and syrups. Liqueurs often start as purified ethanol spirit that is sweetened and infused with various ingredients, and then watered down to a low-alcohol content, often 15%, not much more than a glass of wine. Many world-famous liqueurs originated as bitter herbal remedies as far back as the 1600’s. This was the case for Chartreuse, named after the Grande Chartreuse monastery in the Chartreuse mountains north of Grenoble, France. These monks originally developed Chartreuse because they were trying to make the “elixir of long life.” Elon Musk should try it. Because Yellow Chartreuse is my absolute favorite liqueur, I made a little pilgrimage to their distillery in Voiron, France.

Elixir Vegetal, the herbal remedy still marketed as a cure-all tonic. I have a bottle as a novelty. They’re kept in a wooden outer bottle to prevent sunlight from making the liquid lose its color.

Anyway, point is liqueurs are delicious, even on their own or with tonic water. And there are liqueurs made from every conceivable fruit, peppers, birch bark, nuts, violet, you name it.

Syrups, on the other hand, allow you to infuse the flavor of anything by boiling it with equal parts water and sugar (or honey, or maple syrup). Tea, peanut butter, port, cardamom, whatever you want to experiment with, you can basically make the flavoring parts yourself and add them to a drink.

Beyond those two, texture is an interesting component for me personally. If you’ve ever had a whiskey sour or a pisco sour, you may know the nice foaminess in those drinks comes from “dry shaking” (shaking without ice) an egg white to emulsify and create a nice foam. It also tends to tone down harsher tannic flavors. You can achieve the same sort of effect with plain yogurt or the brine from a can of chickpeas. And then there are these techniques like “milk washing” and other forms of clarifying spirits, where you use liquids with special proteins to “wash off” the unwanted solids in another drink (like tannic solids in wine).

The rabbit hole goes a whole lot deeper. If you want to get into this and geek out a bit more on the science, check out Dave Arnold’s book Liquid Intelligence. It basically taught me all the advanced techniques I know, and many I have yet to muster the courage to try, like vacuum-infusing solids with liquor, using liquid nitrogen to freeze solids before muddling them, and plunging a RED HOT COPPER POKER into a drink to add flavor.

A page from Liquid Intelligence showing the red-hot copper poker technique.

I also enjoyed The Aviary restaurant’s cocktail book, which included a ton of interesting recipes, along with new ideas like spherification (which I didn’t realize I would hate until after I made it, mainly because I hate boba and it has a similar texture).

At 22 I hadn’t yet figured out why alcohol and drinking culture was unappealing. At 32, I think I finally understand that an activity, hobby, or subject appeals to me principally through its sense of quality and craftsmanship. For the quality of the finished work, yes, but also in the attention to detail, and stewardship of the workspace and tools with which that work is done. Like the apocryphal 30-second Picasso napkin art, I strive to practice my crafts (mostly programming, writing, math) so that quality becomes second nature. Focusing on tools, methods, and patterns of work is critical to that.

A bonus of all this is that cocktails become an easy point of conversation when I am stuck in a drinking-culture setting. I offer to “bartend” small parties of my friends so that I can have something to do besides drink. It gives me an excuse to leave an awkward conversation and a nice excuse to join a new conversation. It usually leaves an impression and people remember me. And it’s the best excuse to get my friends to visit me (I have all the ingredients!) so that I don’t have to go anywhere to enjoy myself.

If you want to get into cocktails but aren’t sure where to get started, I’d recommend getting the minimal tools to make a shaken cocktail (Boston shaker tins, Hawthorne strainer, and a measuring jig), watching a video on how to use the tins to shake a cocktail with ice, then going to fancy restaurants, trying cocktails, writing down the ingredients, and buying them to experiment with recreating the drinks at home. A good default recipe uses 2 oz liquor, 3/4–1 oz sweet stuff (syrups and/or liqueurs), and 1 oz citrus juice, shaken over ice. Personally I default to a 1-1 honey-water simple syrup, and add a small pinch of salt to mixtures before I shake them. That’ll give you a solid but flexible base; particular choices of ingredients then cover the gamut of gimlets, daiquiris, sours (all that + egg), margaritas, and more.

Anyhow. The math content will return shortly. Enjoy!

Silent Duels—Constructing the Solution part 2

Previous posts in this series:

Silent Duels and an Old Paper of Restrepo
Silent Duels—Parsing the Construction
Silent Duels—Constructing the Solution part 1

It’s been three years since the last post in this series, and the reason for the delay is that I got totally stuck on the implementation. I’m publishing this draft article as partial progress until I can find time to work on it again.

If you haven’t read the last post, please do. Code repo. Paper I’m working through. In this post I make progress on implementing a general construction of the optimal strategies from the paper, but at the end the first serious test case fails. At the end I’m stuck, and I’m not sure what to do next to fix it.

Refactoring

Let’s start by refactoring the mess of code from the last post. Now is a good time for that, because our tinkering in the last post instilled some confidence that we understand the basic mechanics. The central functions for the generic solution construction are in this file, as of this commit.

We’re using Sympy to represent functions and evaluate integrals, and it helps to name types appropriately. Note, I’ll be using Python type annotations in the code, but the types don’t validate because of missing stubs in sympy. Call it room for improvement.

from dataclasses import dataclass
from typing import NewType

from sympy import Lambda

SuccessFn = NewType('SuccessFn', Lambda)


@dataclass
class SilentDuelInput:
    '''Class containing the static input data to the silent duel problem.'''
    player_1_action_count: int
    player_2_action_count: int
    player_1_action_success: SuccessFn
    player_2_action_success: SuccessFn

Now we turn to the output. Recall, a strategy is a partition of $[0,1]$ into $n$ intervals—where $n$ is the number of actions specified in the input—with a probability distribution for each piece of the partition. This suggests a natural breakdown:

@dataclass
class Strategy:
    '''
    A strategy is a list of action distribution functions, each of which
    describes the probability of taking an action on the interval of its
    support.
    '''
    action_distributions: List[ActionDistribution]

And ActionDistribution is defined as:

@dataclass
class ActionDistribution:
    '''The interval on which this distribution occurs.'''
    support_start: float
    support_end: float
    '''
    The cumulative density function for the distribution.
    May be improper if point_mass > 0.
    '''
    cumulative_density_function: Lambda
    '''
    If nonzero, corresponds to an extra point mass at the support_end.
    Only used in the last action in the optimal strategy.
    '''
    point_mass: float = 0
    t = Symbol('t', nonnegative=True)
    def draw(self, uniform_random_01=DEFAULT_RNG):
        '''Return a random draw from this distribution.
        Args:
         - uniform_random_01: a callable that accepts zero arguments
           and returns a uniform random float between 0 and 1. Defaults
           to using python's standard random.random
        '''
        if self.support_start >= self.support_end:  # treat as a point mass
            return self.support_start
        uniform_random_number = uniform_random_01()
        if uniform_random_number > 1 - self.point_mass - EPSILON:
            return self.support_end
        return solve_unique_real(
            self.cumulative_density_function(self.t) - uniform_random_number,
            self.t,
            solution_min=self.support_start,
            solution_max=self.support_end,
        )

We represent the probability distribution as a cumulative density function $F(t)$, which for a given input $a$ outputs the total probability weight of values in the distribution that are less than $a$. One reason density functions are convenient is that it makes it easy to sample: pick a uniform random $a \in [0,1]$, and then solve $F(t) = a$. The resulting $t$ is the sampled value.

Cumulative density functions are related to their more often-used (in mathematics) counterpart, the probability density function $f(t)$, which measures the “instantaneous” probability. I.e., the probability of a range of values $(a,b)$ is given by $\int_a^b f(t) dt$. The probability density and cumulative density are related via $F(a) = \int_{-\infty}^a f(t) dt$. In our case, the lower bound of integration is finite since we’re working on a fixed subinterval of $[0,1]$.
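
As a tiny worked example of that sampling procedure: if the density on $[0,1]$ were $f(t) = 2t$, then $F(t) = t^2$, and a uniform draw $a$ converts to a sample by solving

$$F(t) = t^2 = a \quad \Longrightarrow \quad t = \sqrt{a}.$$

The draw method above does the same thing, only numerically via solve_unique_real and restricted to the action interval.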

The helper function solve_unique_real abstracts the work of using sympy to solve an equation that the caller guarantees has a unique real solution in a given range. Its source can be found here. This uniqueness is guaranteed for cumulative distribution functions, because they’re strictly increasing functions of the input (on the support of the distribution).

Finally, the output is a strategy for each player.

@dataclass
class SilentDuelOutput:
    p1_strategy: Strategy
    p2_strategy: Strategy

I also chose to make a little data structure to help maintain the joint progress of building up the partition and normalizing constants for each player.

@dataclass
class IntermediateState:
    '''
    A list of the transition times computed so far. This field
    maintains the invariant of being sorted. Thus, the first element
    in the list is a_{i + 1}, the most recently computed value of
    player 1's transition times, and the last element is a_{n + 1} = 1.
    This value is set on initialization with `new`.
    '''
    player_1_transition_times: Deque[Expr]
    '''
    Same as player_1_transition_times, but for player 2 with b_j and b_m.
    '''
    player_2_transition_times: Deque[Expr]
    '''
    The values of h_i so far, the normalizing constants for the action
    probability distributions for player 1. Has the same sorting
    invariant as the transition time lists.
    '''
    player_1_normalizing_constants: Deque[Expr]
    '''
    Same as player_1_normalizing_constants, but for player 2,
    i.e., the k_j normalizing constants.
    '''
    player_2_normalizing_constants: Deque[Expr]
    @staticmethod
    def new():
        '''Create a new state object, and set a_{n+1} = 1, b_{m+1}=1.'''
        return IntermediateState(
            player_1_transition_times=deque([1]),
            player_2_transition_times=deque([1]),
            player_1_normalizing_constants=deque([]),
            player_2_normalizing_constants=deque([]),
        )
    def add_p1(self, transition_time, normalizing_constant):
        self.player_1_transition_times.appendleft(transition_time)
        self.player_1_normalizing_constants.appendleft(normalizing_constant)
    def add_p2(self, transition_time, normalizing_constant):
        self.player_2_transition_times.appendleft(transition_time)
        self.player_2_normalizing_constants.appendleft(normalizing_constant)

Now that we’ve established the types, we move on to the construction.

Construction

There are three parts to the construction:

  1. Computing the correct $\alpha$ and $\beta$.
  2. Finding the transition times and normalizing constants for each player.
  3. Using the above to build the output strategies.

Building the output strategy

We’ll start with the last one: computing the output given the right values. The construction is symmetric for each player, so we can have a single function called with different inputs.

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:

This function computes the construction Restrepo gives.

One difficulty is in expressing discontinuous breaks. The definition of $f^*$ is a product over $b_j > t$, but if a $b_j$ lies inside the action interval, that will cause a discontinuity. I don’t know of an easy way to express the $f^*$ product literally as written in sympy (let me know if you know better), so instead I opted to construct a piecewise function manually, which sympy supports nicely.

This results in the following function for building the final $f^*$ for a single action for one player. The piecewise function breaks the action interval into pieces according to the discontinuities introduced by the opponent’s transition times that lie in the same interval.

def f_star(player_action_success: SuccessFn,
           opponent_action_success: SuccessFn,
           variable: Symbol,
           opponent_transition_times: Iterable[float]) -> Expr:
    '''Compute f^* as in Restrepo '57.
    The inputs can be chosen so that the appropriate f^* is built
    for either player. I.e., if we want to compute f^* for player 1,
    player_action_success should correspond to P, opponent_action_success
    to Q, and opponent_transition_times to the b_j.
    If the inputs are switched appropriately, f^* is computed for player 2.
    '''
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success
    '''
    We compute f^* as a Piecewise function of the following form:
    [prod_{i=1}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_1
    [prod_{i=2}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_2
    [prod_{i=3}^m (1-Q(b_i))] * Q'(t) / Q^2(t)P(t)    if t < b_3
             .
             .
             .
    [1] *  Q'(t) / Q^2(t) P(t)                        if t >= b_m
    '''
    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))
    piecewise_components = []
    for i, b_j in enumerate(opponent_transition_times):
        larger_transition_times = opponent_transition_times[i:]
        product = 1
        for b in larger_transition_times:
            product *= (1 - Q(b))
        term = product * non_product_term
        piecewise_components.append((term, variable < b_j))
    # last term is when there are no larger transition times.
    piecewise_components.append((non_product_term, True))
    return Piecewise(*piecewise_components)

The piecewise components are probability densities, so we can integrate them piecewise to get the cumulative densities. There are a few helpers here to deal with piecewise functions. First, we define a mask_piecewise to modify the sympy-internal representation of a piecewise function to have specified values outside of a fixed interval (the interval on which the action occurs). For a pdf this should be zero, and for a cdf it should be 0 before the action interval and 1 after. There are also some nuanced bits where hidden calls to expr.simplify() allow the integration to happen much faster than otherwise.

def compute_strategy(
        player_action_success: SuccessFn,
        player_transition_times: List[float],
        player_normalizing_constants: List[float],
        opponent_action_success: SuccessFn,
        opponent_transition_times: List[float],
        time_1_point_mass: float = 0) -> Strategy:
    '''
    Given the transition times for a player, compute the action cumulative
    density functions for the optimal strategy of the player.
    '''
    action_distributions = []
    x = Symbol('x', real=True)
    t = Symbol('t', real=True)
    # chop off the last transition time, which is always 1
    opponent_transition_times = [
        x for x in opponent_transition_times if x < 1
    ]
    pairs = subsequent_pairs(player_transition_times)
    for (i, (action_start, action_end)) in enumerate(pairs):
        normalizing_constant = player_normalizing_constants[i]
        dF = normalizing_constant * f_star(
            player_action_success,
            opponent_action_success,
            x,
            opponent_transition_times,
        )
        piece_pdf = mask_piecewise(dF, x, action_start, action_end)
        piece_cdf = integrate(piece_pdf, x, action_start, t)
        piece_cdf = mask_piecewise(
            piece_cdf,
            t,
            action_start,
            action_end,
            before_domain_val=0,
            after_domain_val=1
        )
        action_distributions.append(ActionDistribution(
            support_start=action_start,
            support_end=action_end,
            cumulative_density_function=Lambda((t,), piece_cdf),
        ))
    action_distributions[-1].point_mass = time_1_point_mass
    return Strategy(action_distributions=action_distributions)


def compute_player_strategies(silent_duel_input, intermediate_state, alpha, beta):
    p1_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_1_action_success,
        player_transition_times=intermediate_state.player_1_transition_times,
        player_normalizing_constants=intermediate_state.player_1_normalizing_constants,
        opponent_action_success=silent_duel_input.player_2_action_success,
        opponent_transition_times=intermediate_state.player_2_transition_times,
        time_1_point_mass=alpha,
    )
    p2_strategy = compute_strategy(
        player_action_success=silent_duel_input.player_2_action_success,
        player_transition_times=intermediate_state.player_2_transition_times,
        player_normalizing_constants=intermediate_state.player_2_normalizing_constants,
        opponent_action_success=silent_duel_input.player_1_action_success,
        opponent_transition_times=intermediate_state.player_1_transition_times,
        time_1_point_mass=beta,
    )
    return SilentDuelOutput(p1_strategy=p1_strategy, p2_strategy=p2_strategy)

Note that “Lambda” is a sympy-internal anonymous function declaration that we’re using here to represent the CDF as a callable object. A Lambda supports function call notation by overloading __call__, which allows the action distribution’s CDF to be called later as if it were an ordinary function.

Finding the transition times

Next we move on to computing the transition times for a given $\alpha, \beta$. This step is largely the same as in the previous post in this series, but we’re refactoring the code to be simpler and more generic.

The outer loop will construct an intermediate state object, and handle the process that Restrepo describes of “taking the larger parameter” and then computing the next $a_i, b_j$ using the previously saved parameters. There is one caveat that Restrepo misses, when he writes, “for definiteness, we assume that $a_n > b_m$…in the next step…a new $b_m$ is computed…using the single parameter $a_n^*$.” To the best of what I can tell (from experimenting with symmetric examples), when the computed transition times are equal, keeping only one and following Restrepo’s algorithm faithfully will result in an inconsistent output. As far as I can tell, this is a minor mistake: if the transition times are equal then you must keep them both, and not recompute one using the other as a parameter.

def compute_as_and_bs(duel_input: SilentDuelInput,
                      alpha: float = 0,
                      beta: float = 0) -> IntermediateState:
    '''
    Compute the a's and b's for the silent duel, given a fixed
    alpha and beta as input.
    '''
    t = Symbol('t0', nonnegative=True, real=True)
    p1_index = duel_input.player_1_action_count
    p2_index = duel_input.player_2_action_count
    intermediate_state = IntermediateState.new()
    while p1_index > 0 or p2_index > 0:
        # the larger of a_i, b_j is kept as a parameter, then the other will be repeated
        # in the next iteration; e.g., a_{i-1} and b_j (the latter using a_i in its f^*)
        (a_i, b_j, h_i, k_j) = compute_ai_and_bj(
            duel_input, intermediate_state, alpha=alpha, beta=beta
        )
        # there is one exception, if a_i == b_j, then the computation of f^* in the next
        # iteration (I believe) should not include the previously kept parameter. I.e.,
        # in the symmetric version, if a_n is kept and the next computation of b_m uses
        # the previous a_n, then it will produce the wrong value.
        #
        # I resolve this by keeping both parameters when a_i == b_j.
        if abs(a_i - b_j) < EPSILON and p1_index > 0 and p2_index > 0:
            # use the average of the two to avoid roundoff errors
            transition = (a_i + b_j) / 2
            intermediate_state.add_p1(float(transition), float(h_i))
            intermediate_state.add_p2(float(transition), float(k_j))
            p1_index -= 1
            p2_index -= 1
        elif (a_i > b_j and p1_index > 0) or p2_index == 0:
            intermediate_state.add_p1(float(a_i), float(h_i))
            p1_index -= 1
        elif (b_j > a_i and p2_index > 0) or p1_index == 0:
            intermediate_state.add_p2(float(b_j), float(k_j))
            p2_index -= 1
    return intermediate_state

It remains to compute an individual $a_i, b_j$ pair given an intermediate state. We refactored the loop body from the last post into a generic function that works for both $a_n, b_m$ (which need access to $\alpha, \beta$) and lower $a_i, b_j$. With the exception of “simple_f_star” there is nothing new happening here.

def simple_f_star(player_action_success: SuccessFn,
                  opponent_action_success: SuccessFn,
                  variable: Symbol,
                  larger_transition_times: Iterable[float]) -> Expr:
    P: SuccessFn = player_action_success
    Q: SuccessFn = opponent_action_success
    non_product_term = diff(Q(variable), variable) / (Q(variable)**2 * P(variable))
    product = 1
    for b in larger_transition_times:
        product *= (1 - Q(b))
    return product * non_product_term

def compute_ai_and_bj(duel_input: SilentDuelInput,
                      intermediate_state: IntermediateState,
                      alpha: float = 0,
                      beta: float = 0):
    '''
    Compute a pair of a_i and b_j transition times for both players,
    using the intermediate state of the algorithm computed so far.
    This function also computes a_n and b_m when the intermediate_state
    input has no larger transition times for the opposite player. In
    those cases, the integrals and equations being solved are slightly
    different; they include some terms involving alpha and beta. In all
    other cases, the alpha and beta parameters are unused.
    '''
    P: SuccessFn = duel_input.player_1_action_success
    Q: SuccessFn = duel_input.player_2_action_success
    t = Symbol('t0', nonnegative=True, real=True)
    a_i = Symbol('a_i', positive=True, real=True)
    b_j = Symbol('b_j', positive=True, real=True)
    p1_transitions = intermediate_state.player_1_transition_times
    p2_transitions = intermediate_state.player_2_transition_times
    # the left end of the transitions arrays contain the smallest
    # (latest computed) transition time for each player.
    # these are set to 1 for an empty intermediate state, i.e. for a_n, b_m
    a_i_plus_one = p1_transitions[0]
    b_j_plus_one = p2_transitions[0]
    computing_a_n = a_i_plus_one == 1
    computing_b_m = b_j_plus_one == 1
    p1_fstar_parameters = list(p2_transitions)[:-1]  # ignore b_{m+1} = 1
    p1_fstar = simple_f_star(P, Q, t, p1_fstar_parameters)
    # the a_i part
    if computing_a_n:
        p1_integrand = ((1 + alpha) - (1 - alpha) * P(t)) * p1_fstar
        p1_integral_target = 2 * (1 - alpha)
    else:
        p1_integrand = (1 - P(t)) * p1_fstar
        p1_integral_target = 1 / intermediate_state.player_1_normalizing_constants[0]
    a_i_integrated = integrate(p1_integrand, t, a_i, a_i_plus_one)
    a_i = solve_unique_real(
        a_i_integrated - p1_integral_target,
        a_i,
        solution_min=0,
        solution_max=a_i_plus_one
    )
    # the b_j part
    p2_fstar_parameters = list(p1_transitions)[:-1]  # ignore a_{n+1} = 1
    p2_fstar = simple_f_star(Q, P, t, p2_fstar_parameters)
    if computing_b_m:
        p2_integrand = ((1 + beta) - (1 - beta) * Q(t)) * p2_fstar
        p2_integral_target = 2 * (1 - beta)
    else:
        p2_integrand = (1 - Q(t)) * p2_fstar
        p2_integral_target = 1 / intermediate_state.player_2_normalizing_constants[0]
    b_j_integrated = integrate(p2_integrand, t, b_j, b_j_plus_one)
    b_j = solve_unique_real(
        b_j_integrated - p2_integral_target,
        b_j,
        solution_min=0,
        solution_max=b_j_plus_one
    )
    # the h_i part
    h_i_integrated = integrate(p1_fstar, t, a_i, a_i_plus_one)
    h_i_numerator = (1 - alpha) if computing_a_n else 1
    h_i = h_i_numerator / h_i_integrated
    # the k_j part
    k_j_integrated = integrate(p2_fstar, t, b_j, b_j_plus_one)
    k_j_numerator = (1 - beta) if computing_b_m else 1
    k_j = k_j_numerator / k_j_integrated
    return (a_i, b_j, h_i, k_j)
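
In equation form, reading off the code above: when computing $a_n$ (the case with no larger transition times, so that $a_{n+1} = 1$), we solve

$\int_{a_n}^{1} \left[ (1+\alpha) - (1-\alpha) P(t) \right] f^*(t) \, dt = 2(1-\alpha)$

for $a_n$, and set $h_n = (1-\alpha) \big/ \int_{a_n}^{1} f^*(t) \, dt$. For the interior transition times we solve

$\int_{a_i}^{a_{i+1}} (1 - P(t)) f^*(t) \, dt = \frac{1}{h_{i+1}}$

and set $h_i = 1 \big/ \int_{a_i}^{a_{i+1}} f^*(t) \, dt$, where $f^*$ is the output of simple_f_star. The $b_j, k_j$ computations are identical with $P$ and $Q$ swapped and $\beta$ in place of $\alpha$.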

You might be wondering what is going on with “simple_f_star”. Let me save that for the end of the post, as it’s related to where I’m currently stuck in understanding the construction.

Binary searching for alpha and beta

Finally, we need to binary search for an $\alpha$ or $\beta$ that produces the desired output $a_1 = b_1$. Following Restrepo’s claim, we first compute $a_1, b_1$ using $\alpha=0, \beta=0$. If $a_1 > b_1$, then we are looking for a $\beta > 0$, and vice versa for $\alpha$.

To facilitate this, I implemented an abstract binary search routine (it only works for floating point domains). You can see the code here. The important part is that it abstracts the test (“did you find it?”) and the response (“no, too low”), so that the core of the search is just a computation of $a_1, b_1$.
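
The linked implementation isn’t reproduced in this post, but a minimal sketch consistent with how it’s used below might look like this (BinarySearchResult and max_iterations are my names here; the real version has a few more bells and whistles):

from dataclasses import dataclass
from typing import Callable

@dataclass
class BinarySearchHint:
    found: bool
    tooLow: bool

@dataclass
class BinarySearchResult:
    found: bool
    value: float

def binary_search(test: Callable[[float], BinarySearchHint],
                  param_min: float,
                  param_max: float,
                  callback=None,
                  max_iterations: int = 100) -> BinarySearchResult:
    '''Binary search a floating-point interval for a value accepted by test.'''
    current_min, current_max = param_min, param_max
    tested_value = (current_min + current_max) / 2
    for _ in range(max_iterations):
        tested_value = (current_min + current_max) / 2
        if callback:
            callback({
                'current_min': current_min,
                'current_max': current_max,
                'tested_value': tested_value,
            })
        hint = test(tested_value)
        if hint.found:
            return BinarySearchResult(found=True, value=tested_value)
        if hint.tooLow:
            current_min = tested_value
        else:
            current_max = tested_value
    return BinarySearchResult(found=False, value=tested_value)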

First the non-binary-search bits:

def optimal_strategies(silent_duel_input: SilentDuelInput) -> SilentDuelOutput:
    '''Compute an optimal pair of corresponding strategies for the silent duel problem.'''
    # First compute a's and b's, and check to see if a_1 == b_1, in which case quit.
    intermediate_state = compute_as_and_bs(silent_duel_input, alpha=0, beta=0)
    a1 = intermediate_state.player_1_transition_times[0]
    b1 = intermediate_state.player_2_transition_times[0]
    if abs(a1 - b1) < EPSILON:
        return compute_player_strategies(
            silent_duel_input, intermediate_state, alpha=0, beta=0,
        )
    # Otherwise, binary search for an alpha/beta
    searching_for_beta = b1 < a1
    <snip>
    intermediate_state = compute_as_and_bs(
        silent_duel_input, alpha=final_alpha, beta=final_beta
    )
    player_strategies = compute_player_strategies(
        silent_duel_input, intermediate_state, final_alpha, final_beta
    )
    return player_strategies

The above first checks to see if a search is needed, and in either case uses our previously defined functions to compute the output strategies. The binary search part looks like this:

    if searching_for_beta:
        def test(beta_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=0, beta=beta_value
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_b1 < new_a1)
    else:  # searching for alpha
        def test(alpha_value):
            new_state = compute_as_and_bs(
                silent_duel_input, alpha=alpha_value, beta=0
            )
            new_a1 = new_state.player_1_transition_times[0]
            new_b1 = new_state.player_2_transition_times[0]
            found = abs(new_a1 - new_b1) < EPSILON
            return BinarySearchHint(found=found, tooLow=new_a1 < new_b1)
    search_result = binary_search(
        test, param_min=0, param_max=1, callback=print
    )
    assert search_result.found
    # the optimal (alpha, beta) pair has product zero.
    final_alpha = 0 if searching_for_beta else search_result.value
    final_beta = search_result.value if searching_for_beta else 0

Putting it all together

Let’s run the complete construction on some examples. I added a couple of print statements in the code and overloaded __str__ on the dataclasses to help. First we can verify we get the same result as our previous symmetric example:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x)
duel_input = SilentDuelInput(
    player_1_action_count=3,
    player_2_action_count=3,
    player_1_action_success=P,
    player_2_action_success=Q,
)
print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate()
# output is:
Input: SilentDuelInput(player_1_action_count=3, player_2_action_count=3, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(_x, _x))
a_1 = 0.143 b_1 = 0.143
P1:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
P2:
(0.143, 0.200): dF/dt = Piecewise((0, (t > 0.2) | (t < 0.142857142857143)), (0.083/t**3, t < 0.2))
(0.200, 0.333): dF/dt = Piecewise((0, (t > 0.333333333333333) | (t < 0.2)), (0.13/t**3, t < 0.333333333333333))
(0.333, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.333333333333333)), (0.25/t**3, t < 1))
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0
Validating. prob_mass=1.00000000000000 point_mass=0

This lines up: the players have the same strategy, and the transition times are 1/7, 1/5, and 1/3.

Next up, replace Q = Lambda((x,), x**2), and give each player only a single action. This should require a binary search, but it will be straightforward to verify manually. I added a callback that prints the search bounds during each iteration of the binary search so we can watch it converge. Also note that sympy integration is quite slow, so this binary search takes a minute or two.
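
For concreteness, the modified setup is the same as the previous example, with the new Q and action counts:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x**2)
duel_input = SilentDuelInput(
    player_1_action_count=1,
    player_2_action_count=1,
    player_1_action_success=P,
    player_2_action_success=Q,
)
print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate()

The output: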

Input: SilentDuelInput(player_1_action_count=1, player_2_action_count=1, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.48109 b_1 = 0.42716
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.37545 b_1 = 0.65730
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.40168 b_1 = 0.54770
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.41139 b_1 = 0.50000
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.48109 b_1 = 0.44754
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.41358 b_1 = 0.48860
{'current_min': 0.0625, 'current_max': 0.09375, 'tested_value': 0.078125}
a_1 = 0.41465 b_1 = 0.48297
{'current_min': 0.0625, 'current_max': 0.078125, 'tested_value': 0.0703125}
a_1 = 0.48109 b_1 = 0.45013
{'current_min': 0.0703125, 'current_max': 0.078125, 'tested_value': 0.07421875}
a_1 = 0.41492 b_1 = 0.48157
{'current_min': 0.0703125, 'current_max': 0.07421875, 'tested_value': 0.072265625}
a_1 = 0.48109 b_1 = 0.45078
{'current_min': 0.072265625, 'current_max': 0.07421875, 'tested_value': 0.0732421875}
a_1 = 0.41498 b_1 = 0.48122
{'current_min': 0.072265625, 'current_max': 0.0732421875, 'tested_value': 0.07275390625}
a_1 = 0.48109 b_1 = 0.45094
{'current_min': 0.07275390625, 'current_max': 0.0732421875, 'tested_value': 0.072998046875}
a_1 = 0.41500 b_1 = 0.48113
{'current_min': 0.07275390625, 'current_max': 0.072998046875, 'tested_value': 0.0728759765625}
a_1 = 0.48109 b_1 = 0.45098
{'current_min': 0.0728759765625, 'current_max': 0.072998046875, 'tested_value': 0.07293701171875}
a_1 = 0.41500 b_1 = 0.48111
{'current_min': 0.0728759765625, 'current_max': 0.07293701171875, 'tested_value': 0.072906494140625}
a_1 = 0.41500 b_1 = 0.48110
{'current_min': 0.0728759765625, 'current_max': 0.072906494140625, 'tested_value': 0.0728912353515625}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.0728759765625, 'current_max': 0.0728912353515625, 'tested_value': 0.07288360595703125}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288360595703125, 'current_max': 0.0728912353515625, 'tested_value': 0.07288742065429688}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288360595703125, 'current_max': 0.07288742065429688, 'tested_value': 0.07288551330566406}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288551330566406, 'current_max': 0.07288742065429688, 'tested_value': 0.07288646697998047}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288646697998047, 'current_max': 0.07288742065429688, 'tested_value': 0.07288694381713867}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288646697998047, 'current_max': 0.07288694381713867, 'tested_value': 0.07288670539855957}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288670539855957, 'current_max': 0.07288694381713867, 'tested_value': 0.07288682460784912}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288682460784912, 'tested_value': 0.07288676500320435}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288670539855957, 'current_max': 0.07288676500320435, 'tested_value': 0.07288673520088196}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288673520088196, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675010204315}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675010204315, 'current_max': 0.07288676500320435, 'tested_value': 0.07288675755262375}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675755262375, 'current_max': 0.07288676500320435, 'tested_value': 0.07288676127791405}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288676127791405, 'tested_value': 0.0728867594152689}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.0728867594152689, 'tested_value': 0.07288675848394632}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675755262375, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675801828504}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675801828504, 'current_max': 0.07288675848394632, 'tested_value': 0.07288675825111568}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675825111568, 'current_max': 0.07288675848394632, 'tested_value': 0.072886758367531}
a_1 = 0.41501 b_1 = 0.48109
{'current_min': 0.07288675825111568, 'current_max': 0.072886758367531, 'tested_value': 0.07288675830932334}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675830932334, 'current_max': 0.072886758367531, 'tested_value': 0.07288675833842717}
a_1 = 0.48109 b_1 = 0.45099
{'current_min': 0.07288675833842717, 'current_max': 0.072886758367531, 'tested_value': 0.07288675835297909}
a_1 = 0.48109 b_1 = 0.48109
a_1 = 0.48109 b_1 = 0.48109
P1:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572278)), (0.38/t**4, t < 1))
P2:
(0.481, 1.000): dF/dt = Piecewise((0, (t > 1) | (t < 0.481089134572086)), (0.35/t**4, t < 1)); Point mass of 0.0728868 at 1.000
Validating P1
Validating. prob_mass=1.00000000000000 point_mass=0
Validating P2
Validating. prob_mass=0.927113241647021 point_mass=0.07288675835297909

This passes the sanity check of the output distributions having probability mass 1. P2 should also have a point mass at the end, because P2’s action success function is $f(x) = x^2$, which has less weight at the beginning and sharply increases toward the end. This puts P2 at a disadvantage and pushes their action probability toward the end. According to Restrepo’s theorem, it is optimal for P2 to wait until the end about 7% of the time to guarantee a perfect shot. We could work through the example by hand and turn the result into a unit test (sketched below).

Note that we haven’t verified this example is correct by hand. We’re just looking at some sanity checks at this point.
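
If we did, the unit test might look roughly like the following sketch. The expected values come from the output above; the trailing comment describes assertions that would need access to the strategy internals, whose shape I’m not assuming here.

from sympy import Lambda, Symbol

def test_single_action_asymmetric_duel():
    x = Symbol('x')
    duel_input = SilentDuelInput(
        player_1_action_count=1,
        player_2_action_count=1,
        player_1_action_success=Lambda((x,), x),
        player_2_action_success=Lambda((x,), x**2),
    )
    output = optimal_strategies(duel_input)
    # Both strategies must be genuine probability distributions.
    output.validate()
    # A hand computation would additionally pin down the shared transition
    # time (about 0.4811) and P2's point mass at t=1 (about 0.0729).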

Where it falls apart

At this point I was feeling pretty good, and then the following example showed me that my implementation is broken:

x = Symbol('x')
P = Lambda((x,), x)
Q = Lambda((x,), x**2)
duel_input = SilentDuelInput(
    player_1_action_count=2,
    player_2_action_count=2,
    player_1_action_success=P,
    player_2_action_success=Q,
)
print("Input: {}".format(duel_input))
output = optimal_strategies(duel_input)
print(output)
output.validate(err_on_fail=False)

The output shows that the resulting probability distribution does not have a total probability mass of 1. I.e., it’s not a distribution. Uh oh.

Input: SilentDuelInput(player_1_action_count=2, player_2_action_count=2, player_1_action_success=Lambda(_x, _x), player_2_action_success=Lambda(x, x**2))
a_1 = 0.34405 b_1 = 0.28087
Binary searching for beta
{'current_min': 0, 'current_max': 1, 'tested_value': 0.5}
a_1 = 0.29894 b_1 = 0.45541
{'current_min': 0, 'current_max': 0.5, 'tested_value': 0.25}
a_1 = 0.32078 b_1 = 0.36181
{'current_min': 0, 'current_max': 0.25, 'tested_value': 0.125}
a_1 = 0.32660 b_1 = 0.34015
{'current_min': 0, 'current_max': 0.125, 'tested_value': 0.0625}
a_1 = 0.34292 b_1 = 0.29023
{'current_min': 0.0625, 'current_max': 0.125, 'tested_value': 0.09375}
a_1 = 0.33530 b_1 = 0.30495
{'current_min': 0.09375, 'current_max': 0.125, 'tested_value': 0.109375}
a_1 = 0.32726 b_1 = 0.33741
{'current_min': 0.09375, 'current_max': 0.109375, 'tested_value': 0.1015625}
a_1 = 0.32758 b_1 = 0.33604
{'current_min': 0.09375, 'current_max': 0.1015625, 'tested_value': 0.09765625}
a_1 = 0.32774 b_1 = 0.33535
{'current_min': 0.09375, 'current_max': 0.09765625, 'tested_value': 0.095703125}
a_1 = 0.33524 b_1 = 0.30526
{'current_min': 0.095703125, 'current_max': 0.09765625, 'tested_value': 0.0966796875}
a_1 = 0.33520 b_1 = 0.30542
{'current_min': 0.0966796875, 'current_max': 0.09765625, 'tested_value': 0.09716796875}
a_1 = 0.32776 b_1 = 0.33526
{'current_min': 0.0966796875, 'current_max': 0.09716796875, 'tested_value': 0.096923828125}
a_1 = 0.32777 b_1 = 0.33522
{'current_min': 0.0966796875, 'current_max': 0.096923828125, 'tested_value': 0.0968017578125}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968017578125, 'current_max': 0.096923828125, 'tested_value': 0.09686279296875}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.09686279296875, 'tested_value': 0.096832275390625}
a_1 = 0.32777 b_1 = 0.33521
{'current_min': 0.0968017578125, 'current_max': 0.096832275390625, 'tested_value': 0.0968170166015625}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.0968170166015625, 'tested_value': 0.09680938720703125}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680938720703125, 'tested_value': 0.09680557250976562}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.0968017578125, 'current_max': 0.09680557250976562, 'tested_value': 0.09680366516113281}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680366516113281, 'current_max': 0.09680557250976562, 'tested_value': 0.09680461883544922}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680461883544922, 'tested_value': 0.09680414199829102}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680414199829102, 'tested_value': 0.09680390357971191}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680366516113281, 'current_max': 0.09680390357971191, 'tested_value': 0.09680378437042236}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680378437042236, 'current_max': 0.09680390357971191, 'tested_value': 0.09680384397506714}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384397506714, 'current_max': 0.09680390357971191, 'tested_value': 0.09680387377738953}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680387377738953, 'tested_value': 0.09680385887622833}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385887622833, 'tested_value': 0.09680385142564774}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384397506714, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384770035744}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.09680384770035744, 'current_max': 0.09680385142564774, 'tested_value': 0.09680384956300259}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384956300259, 'tested_value': 0.09680384863168001}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384863168001, 'tested_value': 0.09680384816601872}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384816601872, 'tested_value': 0.09680384793318808}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384793318808, 'tested_value': 0.09680384781677276}
a_1 = 0.32777 b_1 = 0.33520
{'current_min': 0.09680384770035744, 'current_max': 0.09680384781677276, 'tested_value': 0.0968038477585651}
a_1 = 0.33520 b_1 = 0.30544
{'current_min': 0.0968038477585651, 'current_max': 0.09680384781677276, 'tested_value': 0.09680384778766893}
a_1 = 0.33520 b_1 = 0.33520
a_1 = 0.33520 b_1 = 0.33520
deque([0.33520049631043414, 0.45303439059299566, 1])
deque([0.33520049631043414, 0.4897058985296734, 1])
P1:
(0.335, 0.453): dF/dt = Piecewise((0, t < 0.335200496310434), (0.19/t**4, t < 0.453034390592996), (0, t > 0.453034390592996))
(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
P2:
(0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
(0.490, 1.000): dF/dt = Piecewise((0, t < 0.489705898529673), (0.36/t**4, t < 1), (0, t > 1)); Point mass of 0.0968038 at 1.000
Validating P1
Validating. prob_mass=0.999999999999130 point_mass=0
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
Validating P2
Validating. prob_mass=1.10285404591706 point_mass=0   INVALID
Probability distribution does not have mass 1: (0.335, 0.490): dF/dt = Piecewise((0, t < 0.335200496310434), (0.17/t**4, t < 0.453034390592996), (0.3/t**4, t < 0.489705898529673), (0, t > 0.489705898529673))
Validating. prob_mass=0.903196152212331 point_mass=0.09680384778766893

What’s fundamentally different about this example? The main difference I can identify is that this is the simplest example in which one of player 1’s action ranges contains one of player 2’s transition times in its interior. It’s this action:

(0.453, 1.000): dF/dt = Piecewise((0, t < 0.453034390592996), (0.31/t**4, t < 0.489705898529673), (0.4/t**4, t < 1), (0, t > 1))
...
Validating. prob_mass=1.24303353980824 point_mass=0   INVALID

This is where the discontinuity of $f^*$ actually matters. In the previous examples, either there was only one action per player (and by design the starting times of the action ranges were equal), or the game was symmetric, so the players had the same action range endpoints.

In my first implementation, I had actually ignored the discontinuities entirely, and because the game was symmetric it didn’t impact the output distributions. This is what’s currently in the code as “simple_f_star.” Neither player’s action transitions fell inside the bounds of any of the other player’s action ranges, and so I missed that the discontinuity was important.
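
To make this concrete: simple_f_star computes

$f^*(t) = \frac{Q'(t)}{Q(t)^2 P(t)} \prod_{j} (1 - Q(b_j)),$

where the product runs over a fixed list of the opponent’s larger transition times, independent of $t$. My understanding is that the correct $f^*$ should only include the factors for those $b_j$ greater than $t$, so that $f^*$ gains or loses a factor of $(1 - Q(b_j))$ as $t$ crosses each $b_j$. When no opponent transition time falls strictly inside an integration range, the two definitions agree on that range, which is why the earlier examples worked anyway.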

In any event, in the three years since I first worked on this, I haven’t been able to figure out what I did wrong. I probably won’t come back to this problem for a while, so in the meantime perhaps some patient reader can figure it out. You can see a log of my confusion in this GitHub issue, along with some of the strange workarounds I tried to get it to work.