Featured Posts

Sample Extraction from RLWE to LWE

In this article I’ll derive a trick used in FHE called sample extraction. In brief, it allows one to partially convert a ciphertext in the Ring Learning With Errors (RLWE) scheme to the Learning With Errors (LWE) scheme.

Here are some other articles I’ve written about other FHE building blocks, though they are not prerequisites for this article.


The first two articles in the list above define the Learning With Errors problem (LWE). I will repeat the definition here:

LWE: The LWE encryption scheme has the following parameters:

  • A plaintext space $ \mathbb{Z}/q\mathbb{Z}$, where $ q \geq 2$ is a positive integer. This is the space that the underlying message $m$ comes from.
  • An LWE dimension $ n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $ D$ with a mean of zero and a fixed standard deviation.

An LWE secret key is defined as a vector $s \in \{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $a = (a_1, \dots, a_n)$, sampled uniformly over $(\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $b = \langle a, s \rangle + m + e$, where $m$ is the message, $e$ is drawn from $D$, and all arithmetic is done modulo $q$. Note: the message $m$ is usually represented by placing an even smaller message (say, a 4-bit message) in the highest-order bits of a 32-bit unsigned integer. Decryption then corresponds to computing $b - \langle a, s \rangle = m + e$ and rounding the result to recover $m$ while discarding $e$.
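To make the encoding and rounding concrete, here is a minimal Python sketch of LWE key generation, encryption, and decryption. It is purely illustrative and not secure; the parameter values (a 4-bit plaintext in the top bits of a 32-bit integer, and the error standard deviation) are arbitrary choices for the example.

import random

def keygen(n):
    """A binary LWE secret key of dimension n."""
    return [random.randint(0, 1) for _ in range(n)]

def encrypt(m, s, q=2**32, stddev=2**12, plaintext_bits=4):
    """Encrypt a plaintext_bits-bit message m in the top bits of a mod-q scalar."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = round(random.gauss(0, stddev))
    encoded = m << (q.bit_length() - 1 - plaintext_bits)  # place m in the high bits
    b = (sum(ai * si for ai, si in zip(a, s)) + encoded + e) % q
    return a, b

def decrypt(ciphertext, s, q=2**32, plaintext_bits=4):
    a, b = ciphertext
    shift = q.bit_length() - 1 - plaintext_bits
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % q  # = encoded m + e (mod q)
    # Round to the nearest multiple of 2^shift, discarding the error.
    return ((noisy + (1 << (shift - 1))) >> shift) % (1 << plaintext_bits)

s = keygen(512)
assert decrypt(encrypt(7, s), s) == 7

Decryption succeeds as long as the error stays below half the encoding step, which is why the error distribution's standard deviation must be chosen relative to the plaintext precision.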

Without the error term, an attacker could determine the secret key from a polynomial-sized collection of LWE ciphertexts with something like Gaussian elimination. The set of samples looks like a linear (or affine) system, where the secret key entries are the unknown variables. With an error term, the problem of solving the system is believed to be hard, and only exponential time/space algorithms are known.

RLWE: The Ring Learning With Errors (RLWE) problem is the natural analogue of LWE, where all scalars involved are replaced with polynomials over a (carefully) chosen ring.

Formally, the RLWE encryption scheme has the following parameters:

  • A ring $R = \mathbb{Z}/q\mathbb{Z}$, where $ q \geq 2$ is a positive integer. This is the space of coefficients of all polynomials in the scheme. I usually think of $q$ as $2^{32}$, i.e., unsigned 32-bit integers.
  • A plaintext space $R[x] / (x^N + 1)$, where $N$ is a power of 2. This is the space that the underlying message $m(x)$ comes from, and it is encoded as a list of $N$ integers forming the coefficients of the polynomial.
  • An RLWE dimension $n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $D$ with a mean of zero and a fixed standard deviation.

An RLWE secret key $s$ is defined as a list of $n$ polynomials with binary coefficients in $\mathbb{B}[x] / (x^N+1)$, where $\mathbb{B} = \{0, 1\}$. The coefficients are uniformly sampled, like in LWE. An RLWE ciphertext is defined as a vector of $n$ polynomials $a = (a_1(x), \dots, a_n(x))$, sampled uniformly over $(R[x] / (x^N+1))^n$, and a polynomial $b(x) = \langle a, s \rangle + m(x) + e(x)$, where $m(x)$ is the message (with a similar “store it in the top bits” trick as LWE), $e(x)$ is a polynomial with coefficients drawn from $D$, and all the products of the inner product are computed in $R[x] / (x^N+1)$. Decryption in RLWE involves computing $b(x) - \langle a, s \rangle$ and rounding appropriately to recover $m(x)$. Just like with LWE, the message is “hidden” in the noise added to an equation corresponding to the polynomial products (i.e., without the noise and with enough sample encryptions of the same message/secret key, you can solve the system and recover the message). For more notes on how polynomial multiplication ends up being trickier in this ring, see my negacyclic polynomial multiplication article.

The most common version of RLWE you will see in the literature sets the vector dimension $n=1$, and so the secret key $s$ is a single polynomial, the ciphertext is a single polynomial, and RLWE can be viewed as directly replacing the vector dot product in LWE with a polynomial product. However, making $n$ larger is believed to provide more security, and it can be traded off against making the polynomial degree smaller, which can be useful when tweaking parameters for performance (keeping the security level constant).
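For concreteness, here is a companion Python sketch of RLWE encryption using a schoolbook negacyclic product. Like the LWE sketch above, it is purely illustrative; it assumes the message polynomial's coefficients have already been encoded into the high bits.

import random

def negacyclic_mul(f, g, q):
    """Schoolbook product of f and g in (Z/qZ)[x] / (x^N + 1)."""
    N = len(f)
    out = [0] * N
    for i in range(N):
        for j in range(N):
            if i + j < N:
                out[i + j] = (out[i + j] + f[i] * g[j]) % q
            else:
                out[i + j - N] = (out[i + j - N] - f[i] * g[j]) % q  # x^N = -1
    return out

def rlwe_encrypt(m, s, q=2**32, stddev=2**12):
    """Encrypt a message polynomial m (a list of N coefficients, already encoded
    in the high bits) under s, a list of n polynomials with binary coefficients."""
    N = len(m)
    a = [[random.randrange(q) for _ in range(N)] for _ in s]
    b = [round(random.gauss(0, stddev)) % q for _ in range(N)]  # start with e(x)
    for a_i, s_i in zip(a, s):
        b = [(b_k + p_k) % q for b_k, p_k in zip(b, negacyclic_mul(a_i, s_i, q))]
    b = [(b_k + m_k) % q for b_k, m_k in zip(b, m)]
    return a, b

Decryption mirrors LWE: compute $b(x) - \langle a, s \rangle$ coefficient-wise and round each coefficient.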

Sample Extraction

Sample extraction is the trick of taking an RLWE encryption of $m(x) = m_0 + m_1 x + \dots + m_{N-1}x^{N-1}$, and outputting an LWE encryption of $m_0$. In our case, the degree $N$ and the dimension $n_{\textup{RLWE}}$ of the input RLWE ciphertext scheme are fixed, but we may pick the dimension $n_{\textup{LWE}}$ of the LWE scheme as we choose to make this trick work.

This is one of those times in math when it is best to “just work it out with a pencil.” It turns out there are no serious obstacles to our goal. We start with polynomials $a = (a_1(x), \dots, a_n(x))$ and $b(x) = \langle a, s \rangle + m(x) + e(x)$, and we want to produce a vector of scalars $x = (x_1, \dots, x_D)$ of some dimension $D$, a corresponding LWE secret key $s_{\textup{LWE}}$, and a scalar $b_{\textup{LWE}} = \langle x, s_{\textup{LWE}} \rangle + m_0 + e'$, where $e'$ may be different from the input error $e(x)$, but is hopefully not too much larger.

As with many of the articles in this series, we employ the so-called “phase function” to help with the analysis, which is just the partial decryption of an RLWE ciphertext without the rounding step: $\varphi(x) = b(x) - \langle a, s \rangle = m(x) + e(x)$. The idea is as follows: inspect the structure of the constant term of $\varphi(x)$, oh look, it’s an LWE encryption.

So let’s expand the constant term of $b(x) - \langle a, s \rangle$. Given a polynomial expression, I will use the notation $(-)[0]$ to denote the constant coefficient, and $(-)[k]$ for the $k$-th coefficient.

$$ \begin{aligned}(b(x) - \langle a, s \rangle)[0] &= b[0] - \left ( (a_1s_1)[0] + \dots + (a_n s_n)[0] \right ) \end{aligned}$$

Each entry in the dot product is a negacyclic polynomial product, so its constant term requires summing all the pairs of coefficients of $a_i$ and $s_i$ whose degrees sum to zero mod $N$, and flipping signs when there’s wraparound. In particular, a single product above for $a_i s_i$ has the form:

$$(a_is_i)[0] = s_i[0]a_i[0] - s_i[1]a_i[N-1] - s_i[2]a_i[N-2] - \dots - s_i[N-1]a_i[1]$$

Notice that I wrote the coefficients of $s_i$ in increasing order. This was on purpose, because if we re-write this expression $(a_is_i)[0]$ as a dot product, we get

$$(a_is_i)[0] = \left \langle (s_i[0], s_i[1], \dots, s_i[N-1]), (a_i[0], -a_i[N-1], \dots, -a_i[1])\right \rangle$$

In particular, the $a_i[k]$ are public, so we can sign-flip and reorder them easily in our conversion trick. But $s_i$ is unknown at the time the sample extraction needs to occur, so it helps if we can leave the secret key untouched. And indeed, when we apply the above expansion to all of the terms in the computation of $\varphi(x)[0]$, we end up manipulating the $a_i$’s a lot, but merely “flattening” the coefficients of $s = (s_1(x), \dots, s_n(x))$ into a single long vector.

So combining all of the above products, we see that $(b(x) - \langle a, s \rangle)[0]$ is already an LWE encryption $(x, b_{\textup{LWE}}) = ((x_1, \dots, x_D), b[0])$, with $x$ being the very long ($D = nN$) vector

$$\begin{aligned} x = (& a_1[0], -a_1[N-1], \dots, -a_1[1], \\ &a_2[0], -a_2[N-1], \dots, -a_2[1], \\ &\dots , \\ &a_n[0], -a_n[N-1], \dots, -a_n[1] ) \end{aligned}$$

And the corresponding secret key is

$$\begin{aligned} s_{\textup{LWE}} = (& s_1[0], s_1[1], \dots, s_1[N-1], \\ &s_2[0], s_2[1], \dots, s_2[N-1], \\ &\dots , \\ &s_n[0], s_n[1], \dots, s_n[N-1] ) \end{aligned}$$

And the error in this ciphertext is exactly the constant coefficient of the error polynomial $e(x)$ from the RLWE encryption, which is independent of the error of all the other coefficients.
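Here is a minimal Python sketch of the extraction, assuming the RLWE ciphertext is represented as plain coefficient lists (each $a_i$ and $b$ as a list of $N$ integers mod $q$). It only reorders and negates the public coefficients and flattens the secret key; no encrypted operations are needed.

def sample_extract(a, b, q):
    """Extract the LWE encryption of the constant coefficient m_0 from an RLWE
    ciphertext (a, b), where a is a list of n coefficient lists and b is one."""
    lwe_a = []
    for a_i in a:
        lwe_a.append(a_i[0])
        lwe_a.extend((-c) % q for c in reversed(a_i[1:]))  # (a_i[0], -a_i[N-1], ..., -a_i[1])
    return lwe_a, b[0]

def flatten_key(s):
    """The corresponding LWE secret key: the RLWE key's coefficients, flattened."""
    return [c for s_i in s for c in s_i]

Computing the LWE phase $b[0] - \langle x, s_{\textup{LWE}} \rangle \bmod q$ with the flattened key recovers $m_0 + e[0]$, exactly as derived above.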


This trick is a best case scenario. Unlike with key switching, we don’t need to encrypt the output LWE secret key to perform the conversion. And unlike modulus switching, there is no impact on the error growth in the conversion from RLWE to LWE. So in a sense, this trick is “perfect,” though it loses information about the other coefficients of $m(x)$ in the process. As it happens, the CGGI FHE scheme that these articles are building toward only uses the constant coefficient.

The only twist to think about is that the output LWE ciphertext is dependent on the RLWE scheme parameters. What if you wanted to get a smaller-dimensional LWE ciphertext as output? This is a realistic concern, as in the CGGI FHE scheme one starts from an LWE ciphertext of one dimension, goes to RLWE of another (larger) dimension, and needs to get back to LWE of the original dimension by the end.

To do this, you have two options: one is to pick the RLWE ciphertext parameters $n, N$ so that their product is the value you need. A second is to allow the RLWE parameters to be whatever you need for performance/security, and then employ a key switching operation after the sample extraction to get back to the LWE parameters you need.

It is worth mentioning—though I am far from fully understanding the methods—there are other ways to convert between LWE and RLWE. One can go from LWE to RLWE, or from a collection of LWEs to RLWE. Some methods can be found in this paper and its references.

Until next time!

Google’s Fully Homomorphic Encryption Compiler — A Primer

Back in May of 2022 I transferred teams at Google to work on Fully Homomorphic Encryption (newsletter announcement). Since then I’ve been working on a variety of projects in the space, including being the primary maintainer on github.com/google/fully-homomorphic-encryption, which is an open source FHE compiler for C++. This article will be an introduction to how to use it to compile programs to FHE, as well as a quick overview of its internals.

If you’d like to contribute to this project, please reach out to me at mathintersectprogramming@gmail.com or at j2kun@mathstodon.xyz. I have a few procedural hurdles to overcome before I can accept external contributions (with appropriate git commit credit), but if there’s enough interest I will make time for it sooner as opposed to later.


The core idea of fully homomorphic encryption (henceforth FHE) is that you can encrypt data and then run programs on it without ever decrypting it. In the extreme, even if someone had physical access to the machine and could inspect the values of individual memory cells or registers while the program was running, they would not see any of the bits of the underlying data being operated on (without cracking the cryptosystem).

Our FHE compiler converts C++ programs that operate on plaintext to programs that operate on the corresponding FHE ciphertexts (since it emits high-level code that then needs to be further compiled, it could be described as a transpiler). More specifically, it converts a specific subset of valid C++ programs—more on what defines that subset later—to programs that run the same program on encrypted data via one of the supported FHE cryptosystem implementations. In this sense it’s close to a traditional compiler: parse the input, run a variety of optimization passes, and generate some output. However, as we’ll see in this article, the unique properties of FHE make the compiler more like hardware circuit toolchains.

The variety of FHE supported by the compiler today is called “gate bootstrapping.” I won’t have time to go into intense detail about the math behind it, but suffice it to say that this technique gives away performance in exchange for a simpler job of optimizing and producing a working program. What I will say is that this blend of FHE encrypts each bit of its input into a separate ciphertext, and then represents the program as a boolean (combinational) circuit—composed of gates like AND, OR, XNOR, etc. Part of the benefit of the compiler is that it manages a mapping of higher order types like integers, arrays, and structs, to lists of encrypted booleans and back again.

A few limitations result from this circuit-based approach, which will be woven throughout the rest of this tutorial. First is that all loops must be fully unrolled and have statically-known bounds. Second, constructs like pointers and dynamic memory allocation are not supported. Third, all control flow is multiplexed, meaning that all branches of all if statements are evaluated, and only then is one chosen. Finally, there are important practical considerations related to the bit-width of the types used and the expansion of cleartexts into ciphertexts that impact the performance of the resulting program.

On the other hand, combinational circuit optimization is a well-studied problem with off-the-shelf products that can be integrated (narrator: they did integrate some) into the FHE compiler to make the programs run faster.


tl;dr: check out the dockerfiles.

Google’s internal build system is called blaze, and its open source counterpart (equivalent in all except name) is called bazel. One of the first curious things you’ll notice about the compiler is that bazel is used both to build the project and to use the project (the latter I’d like to change). So you’ll need to install bazel, and an easy way to do that is to install bazelisk, which is the analogue of nvm for Node or pyenv for Python. You won’t need multiple versions of bazel, but this is just the easiest way to install the latest version. I’ll be using Bazel 4.0.0, but there are newer versions that should work just fine as well.

You’ll need a C compiler (I use gcc12) because most of the project’s dependencies are built from source (see next paragraph), and a small number of external libraries and programs to support some of the circuit optimizer plugins. For Debian-based systems, this is the full list:

apt-get update && apt-get install -y \
  gcc \
  git \
  libtinfo5 \
  python \
  python3 \
  python3-pip \
  autoconf \
  libreadline-dev \
  flex \
  bison
As mentioned above, all the other dependencies are built from source, and this will take a while the first time you build the project. So you might as well clone and get that build started while you read. The command below will build the project and all the example binaries, and then cache the intermediate build artifacts for future builds, only recompiling what has changed in the meantime. See the Bazel/Starlark section for more details on what this command is doing. Note: the one weird case is LLVM. If you use an exotic operating system (or a docker container, don’t get me started on why this is an issue) then bazel may choose to build LLVM from scratch, which will take an hour or two for the first build. It may also fail due to a missing dependency on your system, which will be extremely frustrating (this is the #1 complaint in our GitHub issues). But, if you’re on a standard OS/architecture combination (as enumerated here), it will just fetch the right LLVM dependency and install it on your system.

git clone https://github.com/google/fully-homomorphic-encryption.git
cd fully-homomorphic-encryption
bazel build ...:all

A clean build on my home machine takes about 16 minutes.

Two end-to-end examples: add and string_cap

In this section I’ll show two end-to-end examples of using the compiler as an end user. The first will be for a dirt-simple program that adds two 32-bit integers. The second will be for a program that capitalizes the first character of each word in an ASCII string. The examples are already in the repository under transpiler/examples by the names simple_sum and string_cap.

Both of these programs will have the form of compiling a single function that is the entry point for the FHE part of the program, and providing a library and API to integrate it with a larger program.

First, simple_sum. Add a header and source file like you would for any standard C++ program, but with one extra line to tell the compiler which function is the entry point that should be compiled (along with any functions called within it).

// add.h
int add(int a, int b);

// add.cc
#include "add.h"

#pragma hls_top
int add(int a, int b) {
  return a + b;
}

The line #pragma hls_top tells the compiler which function is the entry point. Incidentally, hls stands for “high level synthesis,” and the pragma itself comes from the XLS project, which we use as our parser and initial circuit builder. Here ‘top’ just means top level function.

Then, inside a file in the same directory called BUILD (see the Bazel/Starlark section next for an overview of the build system), create a build target that invokes the FHE compiler. In our case we’ll use the OpenFHE backend.

# loads the FHE compiler as an extension to Bazel.
load("//transpiler:fhe.bzl", "fhe_cc_library")

fhe_cc_library(
  name = "add_fhe_lib",
  src = "add.cc",
  hdrs = ["add.h"],
  encryption = "openfhe",  # backend cryptosystem library
  interpreter = True,      # use dynamic thread scheduling
  optimizer = "yosys",     # boolean circuit optimizer
)
The full options for this build rule (i.e., the documentation of the compiler’s main entry point) can be found in the docstring of the bazel macro. I picked the parameters that have what I think of as the best tradeoff between stability and performance.

If you run bazel build add_fhe_lib, then you will see it build but nothing else (see the “intermediate files” section for more on what’s happening behind the scenes). But if you typed something wrong in the build file it would err at this point. It generates a header and cc file that contains the same API as add, but with different types for the arguments and extra arguments needed by the FHE library backend.

Next we need a main routine that uses the library. Since we’re using OpenFHE as our backend, it requires some configuration and the initial encryption of its inputs. The full code, with some slight changes for the blog, looks like this

#include <stdio.h>
#include <iostream>
#include <ostream>

#include "absl/strings/numbers.h"
#include "transpiler/codelab/add/add_fhe_lib.h"
#include "transpiler/data/openfhe_data.h"

constexpr auto kSecurityLevel = lbcrypto::MEDIUM;

int main(int argc, char** argv) {
  if (argc < 3) {
    fprintf(stderr, "Usage: add_main [int] [int]\n\n");
    return 1;
  }

  int x, y;
  if(!absl::SimpleAtoi(argv[1], &x)) {
    std::cout << "Bad int " << argv[1] << std::endl;
    return 1;
  }
  if(!absl::SimpleAtoi(argv[2], &y)) {
    std::cout << "Bad int " << argv[2] << std::endl;
    return 1;
  }
  std::cout << "Computing " << x << " + " << y << std::endl;

  // Set up backend context and encryption keys.
  auto context = lbcrypto::BinFHEContext();
  auto sk = context.KeyGen();

  OpenFhe<int> ciphertext_x = OpenFhe<int>::Encrypt(x, context, sk);
  OpenFhe<int> ciphertext_y = OpenFhe<int>::Encrypt(y, context, sk);
  OpenFhe<int> result(context);
  auto status = add(result, ciphertext_x, ciphertext_y, context);
  if(!status.ok()) {
    std::cout << "FHE computation failed: " << status << std::endl;
    return 1;
  }

  std::cout << "Result: " << result.Decrypt(sk) << "\n";
  return 0;
}

The parts that are not obvious boilerplate include:

Configuring the security level of the OpenFHE library (which is called BinFHE to signal it’s doing binary circuit FHE).

constexpr auto kSecurityLevel = lbcrypto::MEDIUM;

Setting up the initial OpenFHE secret key

 auto context = lbcrypto::BinFHEContext();
 auto sk = context.KeyGen();

Encrypting the inputs. This uses an API provided by the compiler (though because the project was a research prototype, I think the original authors never got around to unifying the “set up the secret key” part behind an API) and included via #include "transpiler/data/openfhe_data.h".

 OpenFhe<int> ciphertext_x = OpenFhe<int>::Encrypt(x, context, sk);
 OpenFhe<int> ciphertext_y = OpenFhe<int>::Encrypt(y, context, sk);

Then calling the FHE-enabled add function, and decrypting the results.

Then create another BUILD rule for the binary:

    name = "add_openfhe_fhe_demo",
    srcs = [
    deps = [

Running it with bazel:

$ bazel run add_openfhe_fhe_demo -- 5 7
Computing 5 + 7
Result: 12

Timing this on my system, it takes a little less than 7 seconds.

On to a more complicated example: string_cap, which will showcase loops and arrays. This was slightly simplified from the GitHub example. First the header and source files:

// string_cap.h
#define MAX_LENGTH 32
void CapitalizeString(char my_string[MAX_LENGTH]);

// string_cap.cc
#include "string_cap.h"

#pragma hls_top
void CapitalizeString(char my_string[MAX_LENGTH]) {
  bool last_was_space = true;
#pragma hls_unroll yes
  for (int i = 0; i < MAX_LENGTH; i++) {
    char c = my_string[i];
    if (last_was_space && c >= 'a' && c <= 'z') {
      my_string[i] = c - ('a' - 'A');
    }
    last_was_space = (c == ' ');
  }
}

Now there’s a bit to discuss. First, the string has a static length known at compile time. This is required because the FHE program is a boolean circuit. It defines wires for each of the inputs, and it must know how many wires to define. In this case it will be a circuit with 32 * 8 wires, one for each bit of each character in the array.

The second new thing is the #pragma hls_unroll yes, which, like hls_top, tells the XLS compiler to fully unroll that loop. Because the FHE program is a static circuit, it cannot have any loops. XLS unrolls our loops for us, and incidentally, I learned recently that it uses the Z3 solver to first prove the loops can be unrolled (which can lead to some slow compile times for complex programs). I’m not aware of other compilers that do this proving part. It looks like LLVM’s loop unroller just slingshots its CPU cycles into the sun if it’s asked to fully unroll an infinite loop.

The main routine is similar as before:

#include <array>
#include <cstdio>
#include <iostream>
#include <string>

#include "openfhe/binfhe/binfhecontext.h"
#include "transpiler/data/openfhe_data.h"
#include "transpiler/examples/string_cap/string_cap.h"
#include "transpiler/examples/string_cap/string_cap_openfhe_yosys_interpreted.h"

int main(int argc, char** argv) {
  if (argc < 2) {
    fprintf(stderr, "Usage: string_cap_openfhe_testbench string_input\n\n");
    return 1;
  }

  std::string input = argv[1];
  input.resize(MAX_LENGTH, '\0');
  std::string plaintext(input);

  auto cc = lbcrypto::BinFHEContext();
  auto sk = cc.KeyGen();

  auto ciphertext = OpenFheArray<char>::Encrypt(plaintext, cc, sk);
  auto status = CapitalizeString(ciphertext, cc);
  if (!status.ok()) {
    std::cout << "FHE computation failed " << status << std::endl;
    return 1;
  }
  std::cout << "Decrypted result: " << ciphertext.Decrypt(sk) << std::endl;
  return 0;
}

The key differences are:

  • We resize the input to be exactly MAX_LENGTH, padding with null bytes.
  • We use OpenFheArray instead of OpenFhe to encode an array of characters.

And now omitting the binary’s build rule and running it, we get

$ bazel run string_cap_openfhe_yosys_interpreted_testbench -- 'hello there'
Decrypted result: Hello There

Interestingly, this also takes about 6 seconds to run on my machine (same as the “add 32-bit integers” program). It would be the same runtime for a longer string, up to 32 characters, since, of course, the program processes all MAX_LENGTH characters without knowing if they are null bytes.

An overview of Bazel and Starlark

The FHE compiler originated within Google in a curious way. It was created by dozens of volunteer contributors (20%-ers, as they say), many of whom worked on the XLS hardware synthesis toolchain, which is a core component of the compiler. Because of these constraints, and also because it was happening entirely in Google, there wasn’t much bandwidth available to make the compiler independent of Google’s internal build tooling.

This brings us to Bazel and Starlark, which is the user-facing façade of the compiler today. Bazel is the open source analogue of Google’s internal build system (“Blaze” is the internal tool), and Starlark is its Python-inspired scripting language. There are lots of opinions about Bazel that I won’t repeat here. Instead I will give a minimal overview of how it works with regards to the FHE compiler.

First some terminology. To work with Bazel you do the following.

  • Define a WORKSPACE file which defines all your project’s external dependencies, how to fetch their source code, and what bazel commands should be used to build them. This can be thought of as a top-level CMakeLists, except that it doesn’t contain any instructions for building the project beyond declaring the root of the project’s directory tree and the project’s name.
  • Define a set of BUILD files in each subdirectory, declaring the build targets that can be built from the source files in that directory (but not its subdirectories). This is analogous to CMakeLists files in subdirectories. Each build target can declare dependence on other build targets, and bazel build ensures the dependencies are built first, and caches the build results across a session. Many projects have a BUILD file in the project root to expose the project’s public libraries and APIs.
  • Use the built-in bazel rules like cc_library and cc_binary and cc_test to group files into libraries that can be built with bazel build, executable binaries that can also be run with bazel run, and tests that can also be run with bazel test. Most bazel rules boil down to calling some executable program like gcc or javac with specific arguments, while also keeping track of the accumulated dependency set of build artifacts in a “hermetic” location on the filesystem.
  • Write any additional bazel macros that chain together built-in bazel commands, e.g., for defining logical groupings of build commands that need to happen in a particular sequence. Macros look like Python functions that call individual bazel rules and possibly pass data between them. They’re written in .bzl files which are interpreted directly by bazel.

Generally, bazel builds targets in two phases. First—the analysis phase—it loads all the BUILD files and imported .bzl files, and scans for all the rules that were called. In particular, it runs the macros, because it needs to know what rules are called by the macros (and rules can be guarded by control flow, or their arguments can be generated dynamically, etc.). But it doesn’t run the build rules themselves. In doing this, it can build a complete graph of dependencies, and report errors about typos, missing dependencies, cycles, etc. Once the analysis phase is complete, it runs the underlying rules in dependency order, and caches the results. Bazel will only run a rule again if something changes with the files it depends on or its underlying dependencies.

The FHE compiler is written in Starlark, in the sense that the main entrypoint for the compiler is the Bazel macro fhe_cc_library. This macro chains together a bunch of rules that call the parser, circuit optimizer, and codegen steps, each one being its own Bazel rule. Each of these rules in turn declares and writes files that we can inspect—see the next section.

Here’s what fhe_cc_library looks like (a subset of the control flow for brevity)

def fhe_cc_library(name, src, hdrs, copts = [], num_opt_passes = 1,
        encryption = "openfhe", optimizer = "xls", interpreter = False, library_name = None,
        **kwargs):
    """A rule for building FHE-based cc_libraries. [docstring omitted]"""
    transpiled_xlscc_files = "{}.cc_to_xls_ir".format(name)
    library_name = library_name or name
    cc_to_xls_ir(
        name = transpiled_xlscc_files,
        library_name = library_name,
        src = src,
        hdrs = hdrs,
        defines = kwargs.get("defines", None),
    )

    # below, adding a leading colon to the `src` argument points the source files attribute
    # to the files generated by a previously declared rule, with the name being the unique
    # identifier.
    transpiled_structs_headers = "{}.xls_cc_transpiled_structs".format(name)
    xlscc_transpiled_structs(
        name = transpiled_structs_headers,
        src = ":" + transpiled_xlscc_files,
        encryption = encryption,
    )

    if optimizer == "yosys":  # other branch omitted for brevity
        verilog = "{}.verilog".format(name)
        xls_ir_to_verilog(name = verilog, src = ":" + transpiled_xlscc_files)
        netlist = "{}.netlist".format(name)
        verilog_to_netlist(name = netlist, src = ":" + verilog, encryption = encryption)
        cc_fhe_netlist_library(
            name = name,
            src = ":" + netlist,
            encryption = encryption,
            interpreter = interpreter,
            transpiled_structs = ":" + transpiled_structs_headers,
            copts = copts,
        )

The rules invoked by the macro include:

  • cc_to_xls_ir, which calls the parser xlscc and outputs an intermediate representation of the program as a high-level circuit. This step does the loop unrolling and other smarts related to converting C++ to a circuit.
  • xlscc_transpiled_structs, which calls a binary that handles structs (this part is complicated and will not be covered in this article).
  • xls_ir_to_verilog, which converts the XLS IR to verilog so that it can be optimized using Yosys/ABC, a popular circuit design and optimization program.
  • verilog_to_netlist, which invokes Yosys to both optimize the circuit and convert it to the lowest-level IR, which is called a netlist.
  • cc_fhe_netlist_library, which calls the codegen step to generate C++ code from the netlist in the previous step.

All of this results in a C++ library (generated by the last step) that can be linked against an existing program and whose generated source we can inspect. Now let’s see what each generated file looks like.

The intermediate files generated by the compiler

Earlier I mentioned that bazel puts the intermediate files generated by each build rule into a “hermetic” location on the filesystem. That location is sym-linked from the workspace root by a link called bazel-bin.

$ ls -al . | grep bazel-bin

Within bazel-bin there’s a mirror of the project’s source tree, and in the directory for a build rule you can find all the generated files. For our 32-bit adder here’s what it looks like:

$ ls
_objs                                   add_test
add_fhe_lib.cc                          add_test-2.params
add_fhe_lib.entry                       add_test.runfiles
add_fhe_lib.generic.types.h             add_test.runfiles_manifest
add_fhe_lib.h                           libadd.a
add_fhe_lib.ir                          libadd.a-2.params
add_fhe_lib.netlist.v                   libadd.pic.a
add_fhe_lib.netlist.v.dot               libadd.pic.a-2.params
add_fhe_lib.opt.ir                      libadd.so
add_fhe_lib.types.h                     libadd.so-2.params
add_fhe_lib.v                           libadd_fhe_lib.a
add_fhe_lib.ys                          libadd_fhe_lib.a-2.params
add_fhe_lib_meta.proto                  libadd_fhe_lib.pic.a
add_openfhe_fhe_demo                    libadd_fhe_lib.pic.a-2.params
add_openfhe_fhe_demo-2.params           libadd_fhe_lib.so
add_openfhe_fhe_demo.runfiles           libadd_fhe_lib.so-2.params

You can see the output .h and .cc files and their compiled .so files (the output build artifacts), but more importantly for us are the internal generated files. This is where we get to actually see the circuits generated.

The first one worth inspecting is add_fhe_lib.opt.ir, which is the output of the xlscc compiler plus an XLS-internal optimization step. This is the main part of how the compiler uses the XLS project: to convert an input program into a circuit. The file looks like:

package my_package

file_number 1 "./transpiler/codelab/add/add.cc"

top fn add(x: bits[32], y: bits[32]) -> bits[32] {
  ret add.3: bits[32] = add(x, y, id=3, pos=[(1,18,25)])
}

As you can see, it’s an XLS-defined internal representation (IR) of the main routine with some extra source code metadata. Because XLS-IR natively supports additions, the result is trivial. One interesting thing to note is that numbers are represented as bit arrays. In short, XLS-IR’s value type system supports only bits, arrays, and tuples, with tuples being the mechanism for supporting structures.

Next, the XLS-IR is converted to Verilog in add_fhe_lib.v, resulting in the (similarly trivial)

module add(
  input wire [31:0] x,
  input wire [31:0] y,
  output wire [31:0] out
);
  wire [31:0] add_6;
  assign add_6 = x + y;
  assign out = add_6;
endmodule

The next step is to run this verilog through Yosys, which is a mature circuit synthesis suite, and for our purposes encapsulates two tasks:

  • Convert higher-level operations to a specified set of boolean gates (that operate on individual bits)
  • Optimize the resulting circuit to be as small as possible

XLS can also do this, and if you want to see that you can change the build rule optimizer attribute from yosys to xls. But we’ve found that Yosys routinely produces 2-3x smaller circuits. The script that we give to yosys can be found in fhe_yosys.bzl, which encapsulates the bazel macros and rules related to invoking Yosys. The output for our adder program is:

module add(x, y, out);
  wire _000_;
  wire _001_;
  wire _002_;
  wire _131_;
  wire _132_;
  output [31:0] out;
  wire [31:0] out;
  input [31:0] x;
  wire [31:0] x;
  input [31:0] y;
  wire [31:0] y;
  nand2 _133_ (.A(x[12]), .B(y[12]), .Y(_130_));
  xor2 _134_ ( .A(x[12]), .B(y[12]), .Y(_131_));
  nand2 _135_ ( .A(x[11]), .B(y[11]), .Y(_132_));
  or2 _136_ ( .A(x[11]), .B(y[11]), .Y(_000_));
  nand2 _137_ ( .A(x[10]), .B(y[10]), .Y(_001_));
  xor2 _138_ ( .A(x[10]), .B(y[10]), .Y(_002_));
  nand2 _139_ ( .A(x[9]), .B(y[9]), .Y(_003_));
  or2 _140_ ( .A(x[9]), .B(y[9]), .Y(_004_));
  nand2 _141_ ( .A(x[8]), .B(y[8]), .Y(_005_));
  xor2 _142_ ( .A(x[8]), .B(y[8]), .Y(_006_));
  nand2 _143_ ( .A(x[7]), .B(y[7]), .Y(_007_));
  or2 _144_ ( .A(x[7]), .B(y[7]), .Y(_008_));
  xor2 _291_ ( .A(_006_), .B(_035_), .Y(out[8]));
  xnor2 _292_ ( .A(x[9]), .B(y[9]), .Y(_128_));
  xnor2 _293_ ( .A(_037_), .B(_128_), .Y(out[9]));
  xor2 _294_ ( .A(_002_), .B(_039_), .Y(out[10]));
  xnor2 _295_ ( .A(x[11]), .B(y[11]), .Y(_129_));
  xnor2 _296_ ( .A(_041_), .B(_129_), .Y(out[11]));
  xor2 _297_ ( .A(_131_), .B(_043_), .Y(out[12]));
  // ... (remaining gates elided)
endmodule

This produces a circuit with a total of 165 gates.

The codegen step then produces an add_fhe_lib.cc file which loads this circuit into an interpreter which knows to map the operation and2 to the chosen backend cryptosystem library call (see the source for the OpenFHE backend), and uses thread-pool scheduling on CPU to speed up the evaluation of the circuit.

For the string_cap circuit, the opt.ir shows off a bit more of XLS’s IR, including operations for sign extension, array indexing & slicing, and multiplexing (sel) branches. The resulting netlist after optimization is a 684-gate circuit (though many of those are “inverter” or “buffer” gates, which are effectively free for FHE).

The compiler also outputs a .dot file which can be rendered to an SVG (warning, the SVG is ~2.3 MiB). If you browse this circuit, you’ll see it is rather shallow and wide, and this allows the thread-pool scheduler to take advantage of the parallelism in the circuit to make it run fast. Meanwhile, the 32-bit adder, though it has roughly 25% the total number of gates, is a much deeper circuit and hence has less parallelism.

Supported C++ input programs and encryption overhead

This has so far been a tour of the compiler, but if you want to get started using the compiler to write programs, you’ll need to keep a few things in mind.

First, the subset of C++ supported by the compiler is rather small. As mentioned earlier, all data needs to have static sizes. This means, e.g., you can’t write a program that processes arbitrary images. Instead, you have to pick an upper bound on the image size, zero-pad the image appropriately before encrypting it, and then write the program to operate on that image size. In the same vein, the integer types you choose have nontrivial implications on performance. To see this, replace the int type in the 32-bit adder with a char and inspect the resulting circuit.

Similarly, loops need static bounds on their iteration count. Or, more precisely, xlscc needs to be able to fully unwrap every loop—which permits some forms of while loops and recursion that provably terminate. This can cause some problems if the input code has loops with complex exit criteria (i.e., breaks guarded by if/else). It also requires you to think hard about how you write your loops, though future work will hopefully let the compiler do that thinking for you.

Finally, encrypting each bit of a plaintext message comes with a major tax on space usage. Each encryption of a single bit corresponds to a list of roughly 700 32-bit integers. If you want to encrypt a 100×100 pixel greyscale image, each pixel of which is an 8-bit integer (0-255), it will cost you 218 MiB to store all the pixels in memory. It’s roughly a 20,000x overhead. For comparison, the music video for Rick Astley’s “Never Gonna Give You Up” at 360p is about 9 MiB (pretty small for a 3 minute video!), but encrypted in FHE would be 188 GiB, which (generously) corresponds to 20 feature-length films at 1080p. Some other FHE schemes have smaller ciphertext sizes, but at the cost of even larger in-memory requirements to run the computations. So if you want to run programs to operate on video—you can do it, but you will need to distribute the work appropriately, and find useful ways to reduce the data size as much as possible before encrypting it (such as working in lower resolution, greyscale, and a lower frame rate), which will also result in overall faster programs.

Until next time!

[Personal note]: Now that I’m more or less ramped up on the FHE domain, I’m curious to know what aspects of FHE my readers are interested in. Mathematical foundations? More practical demonstrations? Library tutorials? Circuit optimization? Please comment and tell me about what you’re interested in.

Estimating the Security of Ring Learning with Errors (RLWE)

This article was written by my colleague, Cathie Yun. Cathie is an applied cryptographer and security engineer, currently working with me to make fully homomorphic encryption a reality at Google. She’s also done a lot of cool stuff with zero knowledge proofs.

In previous articles, we’ve discussed techniques used in Fully Homomorphic Encryption (FHE) schemes. The basis for many FHE schemes, as well as other privacy-preserving protocols, is the Learning With Errors (LWE) problem. In this article, we’ll talk about how to estimate the security of lattice-based schemes that rely on the hardness of LWE, as well as its widely used variant, Ring LWE (RLWE).

A previous article on modulus switching introduced LWE encryption, but as a refresher:

Reminder of LWE

A literal repetition from the modulus switching article. The LWE encryption scheme I’ll use has the following parameters:

  • A plaintext space $\mathbb{Z}/q\mathbb{Z}$, where $q \geq 2$ is a positive integer. This is the space that the underlying message comes from.
  • An LWE dimension $n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $ D$ with a mean of zero and a fixed standard deviation.

An LWE secret key is defined as a vector in $\{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $a = (a_1, \dots, a_n)$, sampled uniformly over $(\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $b = \langle a, s \rangle + m + e$, where $m$ is the message, $e$ is drawn from $D$, and all arithmetic is done modulo $q$. Note that $e$ must be small for the encryption to be valid.

Learning With Errors (LWE) security

Choosing appropriate LWE parameters is a nontrivial challenge when designing and implementing LWE based schemes, because there are conflicting requirements of security, correctness, and performance. Some of the parameters that can be manipulated are the LWE dimension $n$, error distribution $D$ (referred to in the next few sections as $X_e$), secret distribution $X_s$, and plaintext modulus $q$.

Lattice Estimator

Here is where the Lattice Estimator tool comes to our assistance! The lattice estimator is a Sage module written by a group of lattice cryptography researchers which estimates the concrete security of Learning with Errors (LWE) instances.

For a given set of LWE parameters, the Lattice Estimator calculates the cost of all known efficient lattice attacks – for example, the Primal, Dual, and Coded-BKW attacks. It returns the estimated number of “rops” or “ring operations” required to carry out each attack; the attack that is the most efficient is the one that determines the security parameter. The bits of security for the parameter set can be calculated as $\log_2(\text{rops})$ for the most efficient attack.

Running the Lattice Estimator

For example, let’s estimate the security of the security parameters originally published for the popular TFHE scheme:

n = 630
q = 2^32
Xs = UniformMod(2)
Xe = DiscreteGaussian(stddev=2^17)

After installing the Lattice Estimator and sage, we run the following commands in sage:

> from estimator import *
> schemes.TFHE630
LWEParameters(n=630, q=4294967296, Xs=D(σ=0.50, μ=-0.50), Xe=D(σ=131072.00), m=+Infinity, tag='TFHE630')
> _ = LWE.estimate(schemes.TFHE630)
bkw                  :: rop: ≈2^153.1, m: ≈2^139.4, mem: ≈2^132.6, b: 4, t1: 0, t2: 24, ℓ: 3, #cod: 552, #top: 0, #test: 78, tag: coded-bkw
usvp                 :: rop: ≈2^124.5, red: ≈2^124.5, δ: 1.004497, β: 335, d: 1123, tag: usvp
bdd                  :: rop: ≈2^131.0, red: ≈2^115.1, svp: ≈2^131.0, β: 301, η: 393, d: 1095, tag: bdd
bdd_hybrid           :: rop: ≈2^185.3, red: ≈2^115.9, svp: ≈2^185.3, β: 301, η: 588, ζ: 0, |S|: 1, d: 1704, prob: 1, ↻: 1, tag: hybrid
bdd_mitm_hybrid      :: rop: ≈2^265.5, red: ≈2^264.5, svp: ≈2^264.5, β: 301, η: 2, ζ: 215, |S|: ≈2^189.2, d: 1489, prob: ≈2^-146.6, ↻: ≈2^148.8, tag: hybrid
dual                 :: rop: ≈2^128.7, mem: ≈2^72.0, m: 551, β: 346, d: 1181, ↻: 1, tag: dual
dual_hybrid          :: rop: ≈2^119.8, mem: ≈2^115.5, m: 516, β: 314, d: 1096, ↻: 1, ζ: 50, tag: dual_hybrid

In this example, the most efficient attack is the dual_hybrid attack. It uses 2^119.8 ring operations, and so these parameters provide 119.8 bits of security. The reader may notice that the TFHE website claims those parameters give 128 bits of security. This discrepancy is due to the fact that they used an older library (the LWE estimator, which is no longer maintained), which doesn’t take into account the most up-to-date lattice attacks.

For further reading, Benjamin Curtis wrote an article about parameter selection for the CONCRETE implementation of the TFHE scheme. Benjamin Curtis, Martin Albrecht, and other researchers also used the Lattice Estimator to estimate all the LWE and NTRU schemes.

Ring Learning with Errors (RLWE) security

It is often desirable to use Ring LWE instead of LWE, for greater efficiency and smaller key sizes (as Chris Peikert illustrates via meme). We’d like to estimate the security of a Ring LWE scheme, but it wasn’t immediately obvious to us how to do this, since the Lattice Estimator only operates over LWE instances. In order to use the Lattice Estimator for this security estimate, we first needed to do a reduction from the RLWE instance to an LWE instance.

Attempted RLWE to LWE reduction

Given an RLWE instance with RLWE dimension $k$ and polynomial modulus degree $N$, we can create a relation that looks like an LWE instance of LWE dimension $N \cdot k$ with the same security, as long as $N$ is a power of 2 and there are no known attacks that target the ring structure of RLWE that are more efficient than the best LWE attacks. Note: $N$ must be a power of 2 so that $x^N+1$ is a cyclotomic polynomial.

An RLWE encryption has the following form: $ (a_0(x), a_1(x), \dots, a_{k-1}(x), b(x)) $

  •   Public polynomials: $ a_0(x), a_1(x), \dots, a_{k-1}(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} \left( (\mathbb{Z}/q\mathbb{Z})[x] / (x^N + 1) \right)^k$
  •   Secret (binary) polynomials: $ s_0(x), s_1(x), \dots, s_{k-1}(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} \left( \mathbb{B}[x] / (x^N + 1) \right)^k$
  •   Error: $ e(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} \chi_e$
  •   RLWE instance: $ b(x) = \sum_{i=0}^{k-1} a_i(x) \cdot s_i(x) + e(x) \in (\mathbb{Z}/q\mathbb{Z})[x] / (x^N + 1)$

We would like to express this in the form of an LWE encryption. We can start with the simple case, where $k=1$. Therefore, we will only be working with the index-zero polynomials, $a_0(x)$ and $s_0(x)$. (For simplicity, in the next example you can ignore the zero subscript and think of them as $a(x)$ and $s(x)$.)

Naive reduction for $k=1$ (wrong!)

Naively, if we simply defined the LWE $A$ matrix to be a concatenation of the coefficients of the RLWE polynomial $a(x)$, we get:

$$ A_{\text{LWE}} = ( a_{0, 0}, a_{0, 1}, \dots a_{0, N-1} ) $$

We can do the same for the LWE $s$ vector:

$$ s_{\text{LWE}} = ( s_{0, 0}, s_{0, 1}, \dots s_{0, N-1} ) $$

But this doesn’t give us the value of $b_{LWE}$ for the LWE encryption that we want. In particular, the first entry of $b_{LWE}$, which we can call $b_{\text{LWE}, 0}$, is simply a product of the first entries of $a_0(x)$ and $s_0(x)$:

$$ b_{\text{LWE}, 0} = a_{0, 0} \cdot s_{0, 0} + e_0 $$

However, we want $b_{\text{LWE}, 0}$ to be a sum of the products of all the coefficients of $a_0(x)$ and $s_0(x)$ that give us a zero-degree coefficient mod $x^N + 1$. This modulus is important because it causes the product of high-degree monomials to “wrap around” to smaller degree monomials because of the negacyclic property, such that $x^N \equiv -1 \mod x^N + 1$. So the constant term $b_{\text{LWE}, 0}$ should include all of the following terms:

$$ \begin{aligned}
b_{\text{LWE}, 0} = & a_{0, 0} \cdot s_{0, 0} \\
 - & a_{0, 1} \cdot s_{0, N-1} \\
 - & a_{0, 2} \cdot s_{0, N-2} \\
 - & \dots \\
 - & a_{0, N-1} \cdot s_{0, 1} \\
 + & e_0
\end{aligned} $$

Improved reduction for $k=1$

We can achieve the desired value of $b_{\text{LWE}}$ by more strategically forming a matrix $A_{\text{LWE}}$, to reflect the negacyclic property of our polynomials in the RLWE space. We can keep the naive construction for $s_\text{LWE}$.

$$ A_{\text{LWE}} =
\begin{pmatrix}
a_{0, 0}   & -a_{0, N-1} & -a_{0, N-2} & \dots & -a_{0, 1}\\
a_{0, 1}   & a_{0, 0}    & -a_{0, N-1} & \dots & -a_{0, 2}\\
\vdots     & \ddots      &             &       & \vdots   \\
a_{0, N-1} & \dots       &             &       & a_{0, 0}
\end{pmatrix} $$

This definition of $A_\text{LWE}$ gives us the desired value for $b_\text{LWE}$, when $b_{\text{LWE}}$ is interpreted as the coefficients of a polynomial. As an example, we can write out the first entry of $b_\text{LWE}$:

$$ \begin{aligned}
b_{\text{LWE}, 0} = & \sum_{i=0}^{N-1} A_{\text{LWE}, 0, i} \cdot s_{0, i} + e_0 \\
b_{\text{LWE}, 0} = & a_{0, 0} \cdot s_{0, 0} \\
 - & a_{0, 1} \cdot s_{0, N-1} \\
 - & a_{0, 2} \cdot s_{0, N-2} \\
 - & \dots \\
 - & a_{0, N-1} \cdot s_{0, 1} \\
 + & e_0
\end{aligned} $$
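A small numpy sketch may help make the construction concrete (the helper names are mine, purely for illustration). It builds the matrix above from the coefficients of $a(x)$ and checks that multiplying it by the coefficient vector of $s(x)$ reproduces the negacyclic polynomial product.

import numpy as np

def rlwe_to_lwe_matrix(a):
    """The N x N matrix A_LWE built from the coefficients of a(x), so that
    A_LWE @ s equals the coefficient vector of a(x) * s(x) mod (x^N + 1)."""
    N = len(a)
    A = np.zeros((N, N), dtype=np.int64)
    for row in range(N):
        for col in range(N):
            # Row `row` collects the products a_j * s_col with j + col = row,
            # with a sign flip when the degree wraps around (x^N = -1).
            A[row, col] = a[(row - col) % N] * (1 if col <= row else -1)
    return A

def negacyclic_mul(f, g):
    """Reference product mod (x^N + 1): full convolution, then fold x^N = -1."""
    N = len(f)
    full = np.convolve(f, g)  # length 2N - 1
    folded = np.zeros(N, dtype=full.dtype)
    for k, c in enumerate(full):
        folded[k % N] += c if (k // N) % 2 == 0 else -c
    return folded

N = 8
a = np.random.randint(0, 100, size=N)
s = np.random.randint(0, 2, size=N)
assert np.array_equal(rlwe_to_lwe_matrix(a) @ s, negacyclic_mul(a, s))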

Generalizing for all $k$

In the generalized $k$ case, we have the RLWE equation:

$$ b(x) = a_0(x) \cdot s_0(x) + a_1(x) \cdot s_1(x) + \dots + a_{k-1}(x) \cdot s_{k-1}(x) + e(x) $$

We can construct the LWE elements as follows:

$$A_{\text{LWE}} =
\left ( \begin{array}{c|c|c|c}
A_{0, \text{LWE}} & A_{1, \text{LWE}} & \dots & A_{k-1, \text{LWE}}
\end{array} \right ) $$

where each sub-matrix is the construction from the previous section:

$$ A_{i, \text{LWE}} =
\begin{pmatrix}
a_{i, 0}   & -a_{i, N-1} & -a_{i, N-2} & \dots & -a_{i, 1}\\
a_{i, 1}   & a_{i, 0}    & -a_{i, N-1} & \dots & -a_{i, 2}\\
\vdots     & \ddots      &             &       & \vdots   \\
a_{i, N-1} & \dots       &             &       & a_{i, 0}
\end{pmatrix} $$

And the secret keys are stacked similarly:

$$ s_{\text{LWE}} = ( s_{0, 0}, s_{0, 1}, \dots s_{0, N-1} \mid s_{1, 0}, s_{1, 1}, \dots s_{1, N-1} \mid \dots ) $$

This is how we can reduce an RLWE instance with RLWE dimension $k$ and polynomial modulus degree $N$, to a relation that looks like an LWE instance of LWE dimension $N * k$.
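In code, this means the reduced instance can be handed straight to the Lattice Estimator by plugging $N \cdot k$ in as the LWE dimension. The following sketch assumes the estimator API used earlier in this article (LWE.Parameters, ND, and LWE.estimate); the parameter values are purely illustrative.

from estimator import *

k = 1        # RLWE dimension (number of polynomials)
N = 1024     # polynomial modulus degree (a power of 2)

rlwe_as_lwe = LWE.Parameters(
    n=k * N,                               # the reduced LWE dimension
    q=2**32,
    Xs=ND.UniformMod(2),                   # binary secret coefficients
    Xe=ND.DiscreteGaussian(stddev=2**17),  # error distribution
    tag="my_rlwe_params",
)
_ = LWE.estimate(rlwe_as_lwe)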

Caveats and open research

This reduction does not result in a correctly formed LWE instance, since an LWE instance would have a matrix $A$ that is randomly sampled, whereas the reduction results in a matrix $A$ with a negacyclic block structure, due to the negacyclic property of the RLWE instance. This is why I’ve been emphasizing that the reduction produces an instance that looks like LWE. All currently known attacks on RLWE do not take advantage of the structure, but rather directly attack this transformed LWE instance. Whether the additional ring structure can be exploited in the design of more efficient attacks remains an open question in the lattice cryptography research community.

In her PhD thesis, Rachel Player mentions the RLWE to LWE security reduction:

In order to try to pick parameters in Ring-LWE-based schemes (FHE or otherwise) that we hope are sufficiently secure, we can choose parameters such that the underlying Ring-LWE instance should be hard to solve according to known attacks. Each Ring-LWE sample can be used to extract $n$ LWE samples. To the best of our knowledge, the most powerful attacks against $d$-sample Ring-LWE all work by instead attacking the $nd$-sample LWE problem. When estimating the security of a particular set of Ring-LWE parameters we therefore estimate the security of the induced set of LWE parameters.

This indicates that we can do this reduction for certain RLWE instances. However, we must be careful to ensure that the polynomial modulus degree $N$ is a power of two, because otherwise the error distribution “breaks”, as my colleague Baiyu Li explained to me in conversation:

The RLWE problem is typically defined using the ring of integers of the cyclotomic field $\mathbb{Q}[X]/(f(X))$, where $f(X)$ is a cyclotomic polynomial of degree $k=\phi(N)$ (where $\phi$ is Euler’s totient function), and the error is a spherical Gaussian over the image of the canonical embedding into the complex numbers $\mathbb{C}^k$ (basically the images of primitive roots of unity under $f$). In many cases we set $N$ to be a power of 2, thus $f(X)=X^{N/2}+1$, since the canonical embedding for such $N$ has a nice property that the preimage of the spherical Gaussian error is also a spherical Gaussian over the coefficients of polynomials in $\mathbb{Q}[X]/(f(X))$. So in this case we can sample $k=N/2$ independent Gaussian numbers and use them as the coefficients of the error polynomial $e(x)$. For $N$ not a power of 2, $f(X)$ may have some low degree terms, and in order to get the spherical Gaussian with the same variance $s^2$ in the canonical embedding, we probably need to use a larger variance when sampling the error polynomial coefficients.

The RLWE we frequently use in practice is actually a specialized version called “polynomial LWE”, and instantiated with $N$ = power of 2 and so $f(X)=X^{N/2}+1$. For other parameters the two are not exactly the same. This paper has some explanations: https://eprint.iacr.org/2018/170.pdf

The error distribution “breaks” if $N$ is not a power of 2 due to the fact that the precise form of RLWE is not defined on integer polynomial rings $R = \mathbb{Z}[X]/(f(X))$, but is defined on its dual (or the dual in the underlying number field, which is a fractional ideal of $\mathbb{Q}[X]/(f(x))$), and the noise distribution is on the Minkowski embedding of this dual ring. For non-power of 2 $N$, the product mod $f$ of two small polynomials in $\mathbb{Q}[X]/(f(x))$ may be large, where small/large means their L2 norm on the coefficient vector. This means that in order to sample the required noise distribution, you may need a skewed coefficient distribution. Only when $N$ is a power of 2, the dual of $R$ is a scaling of $R$, and distance in the embedding of $R^{\text{dual}}$ is preserved in $R$, and so we can just sample iid gaussian coefficient to get the required noise.

Because working with a power-of-two RLWE polynomial modulus gives “nice” error behavior, this parameter choice is often recommended and chosen for concrete instantiations of RLWE. For example, the Homomorphic Encryption Standard recommends and only analyzes the security of parameters for power-of-two cyclotomic fields for use in homomorphic encryption (though future versions of the standard aim to extend the security analysis to generic cyclotomic rings):

We stress that when the error is chosen from sufficiently wide and “well spread” distributions that match the ring at hand, we do not have meaningful attacks on RLWE that are better than LWE attacks, regardless of the ring. For power-of-two cyclotomics, it is sufficient to sample the noise in the polynomial basis, namely choosing the coefficients of the error polynomial $e \in \mathbb{Z}[x] / \phi_k(x)$ independently at random from a very “narrow” distribution.

Existing works analyzing and targeting the ring structure of RLWE include:

It would of course be great to have a definitive answer on whether we can be confident using this RLWE to LWE reduction to estimate the security of RLWE based schemes. In the meantime, we have seen many Fully Homomorphic Encryption (FHE) schemes using this RLWE to LWE reduction, and we hope that this article helps explain how that reduction works and the existing open questions around this approach.

Negacyclic Polynomial Multiplication

In this article I’ll cover three techniques to compute special types of polynomial products that show up in lattice cryptography and fully homomorphic encryption. Namely, the negacyclic polynomial product, which is the product of two polynomials in the quotient ring $\mathbb{Z}[x] / (x^N + 1)$. As a precursor to the negacyclic product, we’ll cover the simpler cyclic product.

All of the Python code written for this article is on GitHub.

The DFT and Cyclic Polynomial Multiplication

A recent program gallery piece showed how single-variable polynomial multiplication could be implemented using the Discrete Fourier Transform (DFT). This boils down to two observations:

  1. The product of two polynomials $f, g$ can be computed via the convolution of the coefficients of $f$ and $g$.
  2. The Convolution Theorem, which says that the Fourier transform of a convolution of two signals $f, g$ is the point-wise product of the Fourier transforms of the two signals. (The same holds for the DFT)

This provides a much faster polynomial product operation than one could implement using the naïve polynomial multiplication algorithm (though see the last section for an implementation anyway). The DFT can be used to speed up large integer multiplication as well.

A caveat with normal polynomial multiplication is that one needs to pad the input coefficient lists with enough zeros so that the convolution doesn’t “wrap around.” That padding results in the output having length at least as large as the sum of the degrees of $f$ and $g$ (see the program gallery piece for more details).

If you don’t pad the polynomials, instead you get what’s called a cyclic polynomial product. More concretely, if the two input polynomials $f, g$ are represented by coefficient lists $(f_0, f_1, \dots, f_{N-1}), (g_0, g_1, \dots, g_{N-1})$ of length $N$ (implying the inputs are degree at most $N-1$, i.e., the lists may end in a tail of zeros), then the Fourier Transform technique computes

\[ f(x) \cdot g(x) \mod (x^N - 1) \]

This modulus is in the sense of a quotient ring $\mathbb{Z}[x] / (x^N - 1)$, where $(x^N - 1)$ denotes the ring ideal generated by $x^N-1$, i.e., all polynomials that are evenly divisible by $x^N - 1$. A particularly important interpretation of this quotient ring is achieved by interpreting the ideal generator $x^N - 1$ as an equation $x^N - 1 = 0$, also known as $x^N = 1$. To get the canonical ring element corresponding to any polynomial $h(x) \in \mathbb{Z}[x]$, you “set” $x^N = 1$ and reduce the polynomial until there are no more terms with degree bigger than $N-1$. For example, if $N=5$ then $x^{10} + x^6 - x^4 + x + 2 = -x^4 + 2x + 3$ (the $x^{10}$ becomes 1, and $x^6 = x$).
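That reduction is easy to do mechanically; here is a tiny illustrative helper (the name is mine, not from the accompanying repository) that folds each coefficient onto its index mod $N$.

def reduce_mod_cyclic(coeffs, N):
    """Reduce a polynomial (degree-increasing coefficient list) mod (x^N - 1)
    by "setting" x^N = 1, i.e., folding coefficient k onto index k mod N."""
    out = [0] * N
    for k, c in enumerate(coeffs):
        out[k % N] += c
    return out

# x^10 + x^6 - x^4 + x + 2 with N = 5 reduces to -x^4 + 2x + 3
assert reduce_mod_cyclic([2, 1, 0, 0, -1, 0, 1, 0, 0, 0, 1], 5) == [3, 2, 0, 0, -1]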

To prove the DFT product computes a product in this particular ring, note how the convolution theorem produces the following formula, where $\textup{fprod}(f, g)$ denotes the process of taking the Fourier transform of the two coefficient lists, multiplying them entrywise, and taking a (properly normalized) inverse FFT, and $\textup{fprod}(f, g)(j)$ is the $j$-th coefficient of the output polynomial:

\[ \textup{fprod}(f, g)(j) = \sum_{k=0}^{N-1} f_k g_{j-k \textup{ mod } N} \]

In words, the output polynomial coefficient $j$ equals the sum of all products of pairs of coefficients whose indices sum to $j$ when considered “wrapping around” $N$. Fixing $j=1$ as an example, $\textup{fprod}(f, g)(1) = f_0 g_1 + f_1g_0 + f_2 g_{N-1} + f_3 g_{N-2} + \dots$. This demonstrates the “set $x^N = 1$” interpretation above: the term $f_2 g_{N-1}$ corresponds to the product $f_2x^2 \cdot g_{N-1}x^{N-1}$, which contributes to the $x^1$ term of the polynomial product if and only if $x^{2 + N-1} = x$, if and only if $x^N = 1$.

To achieve this in code, we simply use the version of the code from the program gallery piece, but fix the size of the arrays given to numpy.fft.fft in advance. We will also, for simplicity, assume the $N$ one wishes to use is a power of 2. The resulting code is significantly simpler than the original program gallery code (we omit zero-padding to length $N$ for brevity).

import numpy
from numpy.fft import fft, ifft

def cyclic_polymul(p1, p2, N):
    """Multiply two integer polynomials modulo (x^N - 1).

    p1 and p2 are arrays of coefficients in degree-increasing order.
    assert len(p1) == len(p2) == N
    product = fft(p1) * fft(p2)
    inverted = ifft(product)
    return numpy.round(numpy.real(inverted)).astype(numpy.int32)

As a side note, nothing stops this from working with polynomials that have real or complex coefficients, but as long as we stick to small-magnitude integer coefficients and round at the end, I don’t have to worry about precision issues (hat tip to Brad Lucier for suggesting an excellent paper by Colin Percival, “Rapid multiplication modulo the sum and difference of highly composite numbers,” which covers these precision issues in detail).
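As a quick sanity check, the FFT version should agree with the naive double loop on random integer inputs (cyclic_polymul_naive being the hypothetical helper sketched above):

N = 8
rng = numpy.random.default_rng(seed=0)
f = rng.integers(-100, 100, size=N)
g = rng.integers(-100, 100, size=N)
assert (cyclic_polymul(f, g, N) == cyclic_polymul_naive(f, g)).all()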

Negacyclic polynomials, DFT with duplication

Now the kind of polynomial quotient ring that shows up in cryptography is critically not $\mathbb{Z}[x]/(x^N-1)$, because that ring has enough easy-to-reason-about structure that it can’t hide secrets. Instead, cryptographers use the ring $\mathbb{Z}[x]/(x^N+1)$ (the minus becomes a plus), which is believed to be more secure for cryptography—although I don’t have a great intuitive grasp on why.

The interpretation is similar here as before, except we “set” $x^N = -1$ instead of $x^N = 1$ in our reductions. Repeating the above example, if $N=5$ then $x^{10} + x^6 – x^4 + x + 2 = -x^4 + 3$ (the $x^{10}$ becomes $(-1)^2 = 1$, and $x^6 = -x$). It’s called negacyclic because as a term $x^k$ passes $k \geq N$, it cycles back to $x^0 = 1$, but with a sign flip.
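To have a ground truth for the FFT-based methods in the rest of the article, here is a hypothetical schoolbook negacyclic product (my own sketch, not from the article’s repository): multiply term by term, and flip the sign whenever a degree wraps past $N$.

import numpy

def negacyclic_polymul_naive(p1, p2):
    """Schoolbook product modulo (x^N + 1): wrap around with a sign flip."""
    N = len(p1)
    output = numpy.zeros(N, dtype=numpy.int64)
    for j in range(N):
        for k in range(N):
            if j + k < N:
                output[j + k] += p1[j] * p2[k]
            else:
                output[j + k - N] -= p1[j] * p2[k]
    return output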

Negacyclic polynomial multiplication can’t use the DFT directly without some special hacks. The first and simplest hack is to double the input lists with a negation. That is, starting from $f(x) \in \mathbb{Z}[x]/(x^N+1)$, we can define $f^*(x) = f(x) - x^Nf(x)$ in a different ring $\mathbb{Z}[x]/(x^{2N} - 1)$ (and similarly $g^*$ from $g$).

Before seeing how this causes the DFT to (almost) compute a negacyclic polynomial product, some math wizardry. The ring $\mathbb{Z}[x]/(x^{2N} – 1)$ is special because it contains our negacyclic ring as a subring. Indeed, because the polynomial $x^{2N} – 1$ factors as $(x^N-1)(x^N+1)$, and because these two factors are coprime in $\mathbb{Z}[x]/(x^{2N} – 1)$, the Chinese remainder theorem (aka Sun-tzu’s theorem) generalizes to polynomial rings and says that any polynomial in $\mathbb{Z}[x]/(x^{2N} – 1)$ is uniquely determined by its remainders when divided by $(x^N-1)$ and $(x^N+1)$. Another way to say it is that the ring $\mathbb{Z}[x]/(x^{2N} – 1)$ factors as a direct product of the two rings $\mathbb{Z}[x]/(x^{N} – 1)$ and $\mathbb{Z}[x]/(x^{N} + 1)$.

Now mapping a polynomial $f(x)$ from the bigger ring $(x^{2N} - 1)$ down to the smaller ring $(x^{N}+1)$ involves taking the remainder of $f(x)$ when dividing by $x^{N}+1$ (“setting” $x^N = -1$ and reducing). Going the other way, there are many possible mappings one could choose, depending on the goal. In this case we intentionally choose a map that is not a true preimage, because computing an exact preimage in general requires solving a system of congruences in the larger polynomial ring. Instead we choose $f(x) \mapsto f^*(x) = f(x) - x^Nf(x) = -f(x)(x^N - 1)$, which maps back down to $2f(x)$ in $\mathbb{Z}[x]/(x^{N} + 1)$. This mapping has a particularly nice structure: you build it by writing down the polynomial’s coefficients twice and flipping the sign of the second copy. It’s easy to see that the product $f^*(x) g^*(x)$ maps down to $4f(x)g(x)$.
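Here’s a tiny sketch (my own, with made-up names) of the map up to the big ring and the reduction back down, which lands on $2f$ as claimed:

import numpy

def map_up(f):
    """f(x) -> f(x) - x^N f(x): repeat the coefficients and negate the second copy."""
    return numpy.concatenate([f, -f])

def reduce_mod_negacyclic(coeffs, N):
    """Reduce a length-2N coefficient list modulo (x^N + 1) by setting x^N = -1."""
    coeffs = numpy.asarray(coeffs)
    return coeffs[:N] - coeffs[N:]

f = numpy.array([1, 2, 3, 4])
print(reduce_mod_negacyclic(map_up(f), 4))  # [2 4 6 8], i.e., 2 * f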

So if we properly account for these extra constant factors, our strategy for negacyclic polynomial multiplication is: map $f$ and $g$ up to the larger ring as described, and compute their cyclic product (modulo $x^{2N} - 1$) using the FFT. The result is a degree $2N-1$ polynomial, which can be reduced to the desired degree $N-1$ negacyclic product with one more modular reduction step, i.e., setting $x^N = -1$, which materializes as taking the second half of the coefficients, flipping their signs, and adding them to the corresponding coefficients in the first half (and finally dividing by 4 to cancel the constant factor).

The code for this is:

def negacyclic_polymul_preimage_and_map_back(p1, p2):
    """Multiply modulo (x^N + 1) by mapping to Z[x]/(x^{2N} - 1) and reducing back."""
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(numpy.real(inverted)).astype(p1.dtype)
    return (rounded[: p1.shape[0]] - rounded[p1.shape[0] :]) // 4

However, this chosen mapping hides another clever trick. The product of the two preimages has enough structure that we can “read” the result off without doing the full “set $x^N = -1$” reduction step. Mapping $f$ and $g$ up to $f^*, g^*$ and taking their product modulo $(x^{2N} – 1)$ gives

\[ \begin{aligned} f^*g^* &= \left(-f(x^N-1)\right) \cdot \left(-g(x^N - 1)\right) \\ &= fg (x^N-1)^2 \\ &= fg(x^{2N} - 2x^N + 1) \\ &= fg(2 - 2x^N) \\ &= 2(fg - x^Nfg) \end{aligned} \]

This has the same syntactical format as the original mapping $f \mapsto f – x^Nf$, with an extra factor of 2, and so its coefficients also have the form “repeat the coefficients and flip the sign of the second half” (times two). We can then do the “inverse mapping” by reading only the first half of the coefficients and dividing by 2.

def negacyclic_polymul_use_special_preimage(p1, p2):
    """Multiply modulo (x^N + 1), reading the answer off the first half of f* g*."""
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(0.5 * numpy.real(inverted)).astype(p1.dtype)
    return rounded[: p1.shape[0]]
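Both variants should agree with each other, and with the schoolbook product, on integer inputs. A quick check, again assuming the hypothetical negacyclic_polymul_naive helper from earlier:

N = 16
rng = numpy.random.default_rng(seed=1)
f = rng.integers(-50, 50, size=N)
g = rng.integers(-50, 50, size=N)
expected = negacyclic_polymul_naive(f, g)
assert (negacyclic_polymul_preimage_and_map_back(f, g) == expected).all()
assert (negacyclic_polymul_use_special_preimage(f, g) == expected).all()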

Our chosen mapping $f \mapsto f-x^Nf$ is not particularly special, except that it uses a small number of pre- and post-processing operations. For example, if you instead used the mapping $f \mapsto 2f + x^Nf$ (which maps back down to $f$ exactly), then the FFT product would result in $5fg + 4x^Nfg$ in the larger ring. With that mapping, the first half of the coefficients is no longer a plain scalar multiple of the negacyclic product, so you can’t just read it off and rescale; you’d have to do the full “subtract the second half” reduction. It seems that “double and negate” followed by “halve and take the first half” is the least amount of pre/post processing possible.

Negacyclic polynomials with a “twist”

The previous section identified a nice mapping (or embedding) of the input polynomials into a larger ring. But studying that mapping reveals some symmetric structure: the coefficients of $f$ and $g$ (and of their product) are each repeated twice, up to a sign flip and a scaling factor. It also involves taking an FFT of two $2N$-dimensional vectors when we started with two $N$-dimensional vectors.

This sort of situation should make you think that we can do this more efficiently, either by using a smaller size FFT or by packing some data into the complex part of the input, and indeed we can do both.

[Aside: it’s well known that if all the entries of an FFT input are real, then the result also has symmetry that can be exploited for efficiency, by reframing the problem as a size-N/2 FFT in some cases, and by just removing half the FFT algorithm’s steps in other cases; see Wikipedia for more.]

This technique was explained in Fast multiplication and its applications (pdf link) by Daniel Bernstein, a prominent cryptographer who specializes in cryptography performance, whose work appears in widely used standards like TLS and OpenSSH, and who designed a commonly used elliptic curve for cryptography.

[Aside: Bernstein cites this technique as using something called the “Tangent FFT (pdf link).” This is a drop-in FFT replacement he invented that is faster than the previous best (the split-radix FFT), and Bernstein uses it mainly to give a precise expression for the number of operations required to do the multiplication end to end. We will continue to use the numpy FFT implementation, since in this article I’m just focusing on how to express negacyclic multiplication in terms of the FFT. Also worth noting: both the Tangent FFT and “Fast multiplication” papers frame their techniques—including FFT algorithm implementations!—in terms of polynomial ring factorizations and mappings. Be still, my beating cardioid.]

In terms of polynomial mappings, we start from the ring $\mathbb{R}[x] / (x^N + 1)$, where $N$ is a power of 2. We then pick a reversible mapping from $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - 1)$ (note the field change from real to complex), apply the FFT to the image of the mapping, and reverse the mapping appropriately at the end.

One such mapping takes two steps, first mapping $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - i)$ and then from $\mathbb{C}[x]/(x^{N/2} - i) \to \mathbb{C}[x]/(x^{N/2} - 1)$. The first mapping is as easy as in the last section, because $(x^N + 1) = (x^{N/2} + i) (x^{N/2} - i)$, and so we can just set $x^{N/2} = i$ and reduce the polynomial. This has the effect of making the second half of the polynomial’s coefficients become the complex parts of the first half of the coefficients.

The second mapping is more nuanced, because we’re not just reducing via factorization. And we can’t just map $i \mapsto 1$ generically, because that would collapse complex numbers down to real values. Instead, we observe that (momentarily using an arbitrary degree $k$ instead of $N/2$), for any polynomial $f \in \mathbb{C}[x]$, the remainder of $f$ modulo $x^k - i$ determines the remainder of $f(\omega_{4k} x)$ modulo $x^k - 1$, via the change of variables $x \mapsto \omega_{4k} x$, where $\omega_{4k} = e^{\frac{2 \pi i}{4k}}$ is a primitive $4k$-th root of unity. Spelling this out in more detail: if $f(x) \in \mathbb{C}[x]$ can be written as $f(x) = g(x) + h(x)(x^k - i)$, where $g(x)$ is the remainder and $h(x)$ is some polynomial, then

\[ \begin{aligned} f(\omega_{4k}x) &= g(\omega_{4k}x) + h(\omega_{4k}x)((\omega_{4k}x)^{k} - i) \\ &= g(\omega_{4k}x) + h(\omega_{4k}x)(e^{\frac{\pi i}{2}} x^k - i) \\ &= g(\omega_{4k}x) + i h(\omega_{4k}x)(x^k - 1) \\ &= g(\omega_{4k}x) \mod (x^k - 1) \end{aligned} \]

Translating this back to $k=N/2$, the mapping from $\mathbb{C}[x]/(x^{N/2} – i) \to \mathbb{C}[x]/(x^{N/2} – 1)$ is $f(x) \mapsto f(\omega_{2N}x)$. And if $f = f_0 + f_1x + \dots + f_{N/2 – 1}x^{N/2 – 1}$, then the mapping involves multiplying each coefficient $f_k$ by $\omega_{2N}^k$.

When you view polynomials as if they were a simple vector of their coefficients, then this operation $f(x) \mapsto f(\omega_{k}x)$ looks like $(a_0, a_1, \dots, a_n) \mapsto (a_0, \omega_{k} a_1, \dots, \omega_k^n a_n)$. Bernstein calls the operation a twist of $\mathbb{C}^n$, which I mused about in this Mathstodon thread.
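In code, the twist is just an elementwise scaling of the coefficient vector by powers of the root, and the inverse twist scales by the inverse powers. A minimal sketch with made-up names (twist and untwist are mine):

import numpy

def twist(coeffs, root):
    """f(x) -> f(root * x): scale coefficient k by root^k."""
    return coeffs * root ** numpy.arange(len(coeffs))

def untwist(coeffs, root):
    """The inverse change of variables, f(x) -> f(x / root)."""
    return coeffs * root ** -numpy.arange(len(coeffs))

k = 4  # number of coefficients, playing the role of N/2 above
omega = numpy.exp(2j * numpy.pi / (4 * k))  # omega_{4k}, as in the change of variables
f = numpy.array([1.0, 2.0, 3.0, 4.0])
assert numpy.allclose(untwist(twist(f, omega), omega), f)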

What’s most important here is that each of these transformations is invertible. The first is invertible because the top half of the coefficients is stored, unmodified, in the complex parts of the bottom half; the second because the mapping $f(x) \mapsto f(\omega_{2N}^{-1}x)$ undoes the twist. Together, this makes the preprocessing and postprocessing exact inverses of each other. The code is then

# primitive_nth_root is used by the code below but not shown in the post; the
# standard choice exp(2*pi*i / n), a primitive n-th root of unity, is what the
# derivation above requires.
def primitive_nth_root(n):
    return numpy.exp(2j * numpy.pi / n)


def negacyclic_polymul_complex_twist(p1, p2):
    n = p2.shape[0]
    primitive_root = primitive_nth_root(2 * n)
    root_powers = primitive_root ** numpy.arange(n // 2)

    # Pack the second half of each input into the imaginary parts of the first
    # half (setting x^{n/2} = i), then twist by powers of the root.
    p1_preprocessed = (p1[: n // 2] + 1j * p1[n // 2 :]) * root_powers
    p2_preprocessed = (p2[: n // 2] + 1j * p2[n // 2 :]) * root_powers

    p1_ft = fft(p1_preprocessed)
    p2_ft = fft(p2_preprocessed)
    prod = p1_ft * p2_ft
    ifft_prod = ifft(prod)
    # Undo the twist, then unpack the real and imaginary parts.
    ifft_rotated = ifft_prod * primitive_root ** numpy.arange(0, -n // 2, -1)

    return numpy.round(
        numpy.concatenate([numpy.real(ifft_rotated), numpy.imag(ifft_rotated)])
    ).astype(p1.dtype)
And so, at the cost of a bit more pre- and postprocessing, we can negacyclically multiply two degree $N-1$ polynomials using an FFT of length $N/2$. In theory, no information is wasted and this is optimal.
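Since this section and the previous one compute the same negacyclic product, the two implementations can be checked against each other directly (a usage sketch of mine, not from the post):

N = 16
rng = numpy.random.default_rng(seed=2)
f = rng.integers(-50, 50, size=N)
g = rng.integers(-50, 50, size=N)
assert (
    negacyclic_polymul_complex_twist(f, g)
    == negacyclic_polymul_use_special_preimage(f, g)
).all()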

And finally, a simple matrix multiplication

The last technique I wanted to share is not based on the FFT, but it’s another method for doing negacyclic polynomial multiplication that has come in handy in situations where I am unable to use FFTs. I call it the Toeplitz method, because one of the polynomials is converted to a Toeplitz matrix. Sometimes I hear it referred to as a circulant matrix technique, but due to the negacyclic sign flip, I don’t think it’s a fully accurate term.

The idea is to put the coefficients of one polynomial $f(x) = f_0 + f_1x + \dots + f_{N-1}x^{N-1}$ into a matrix as follows:

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \]

The polynomial coefficients are written down in the first column unchanged, then in each subsequent column, the coefficients are cyclically shifted down one, and the term that wraps around the top has its sign flipped. When the second polynomial is treated as a vector of its coefficients, say, $g(x) = g_0 + g_1x + \dots + g_{N-1}x^{N-1}$, then the matrix-vector product computes their negacyclic product (as a vector of coefficients):

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_{N-1} \end{pmatrix} \]

This works because each row $j$ corresponds to one output term $x^j$, and the cyclic shift for that row accounts for the degree-wrapping, with the sign flip accounting for the negacyclic part. (If there were no sign attached, this method could be used to compute a cyclic polynomial product).

The Python code for this is

def cylic_matrix(c: numpy.ndarray) -> numpy.ndarray:
    """Generates a cyclic matrix whose first column is the input, and each
    subsequent column is the previous column cyclically shifted down by one.

    For input [1, 2, 3], generates the following matrix:

        [[1 3 2]
         [2 1 3]
         [3 2 1]]
    """
    c = numpy.asarray(c).ravel()
    a, b = numpy.ogrid[0 : len(c), 0 : -len(c) : -1]
    indx = a + b
    return c[indx]

def negacyclic_polymul_toeplitz(p1, p2):
    n = len(p1)

    # Generate a sign matrix with 1s on and below the diagonal and -1s above.
    lower_tri = numpy.tril(numpy.ones((n, n), dtype=int), 0)
    upper_tri = numpy.triu(numpy.ones((n, n), dtype=int), 1) * -1
    sign_matrix = lower_tri + upper_tri

    cyclic_matrix = cylic_matrix(p1)
    toeplitz_p1 = sign_matrix * cyclic_matrix
    return numpy.matmul(toeplitz_p1, p2)

Obviously on most hardware this would be less efficient than an FFT-based method (and there is some relationship between circulant matrices and Fourier Transforms, see Wikipedia). But in some cases—when the polynomials are small, or one of the two polynomials is static, or a particular hardware choice doesn’t handle FFTs with high-precision floats very well, or you want to take advantage of natural parallelism in the matrix-vector product—this method can be useful. It’s also simpler to reason about.
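This method should agree with all the others; a final check against the hypothetical negacyclic_polymul_naive sketch from earlier:

N = 8
rng = numpy.random.default_rng(seed=3)
f = rng.integers(-50, 50, size=N)
g = rng.integers(-50, 50, size=N)
assert (negacyclic_polymul_toeplitz(f, g) == negacyclic_polymul_naive(f, g)).all()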

Until next time!