Sample Extraction from RLWE to LWE

In this article I’ll derive a trick used in FHE called sample extraction. In brief, it allows one to partially convert a ciphertext in the Ring Learning With Errors (RLWE) scheme to the Learning With Errors (LWE) scheme.

Here are some other articles I’ve written about other FHE building blocks, though they are not prerequisites for this article.

LWE and RLWE

The first two articles in the list above define the Learning With Errors problem (LWE). I will repeat the definition here:

LWE: The LWE encryption scheme has the following parameters:

  • A plaintext space $ \mathbb{Z}/q\mathbb{Z}$, where $ q \geq 2$ is a positive integer. This is the space that the underlying message $m$ comes from.
  • An LWE dimension $ n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $ D$ with a mean of zero and a fixed standard deviation.

An LWE secret key is defined as a vector $s \in \{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $ a = (a_1, \dots, a_n)$, sampled uniformly over $ (\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $ b = \langle a, s \rangle + m + e$, where $m$ is the message, $e$ is drawn from $D$ and all arithmetic is done modulo $q$. Note: the message $m$ is usually represented by placing an even smaller message (say, a 4-bit message) in the highest-order bits of a 32-bit unsigned integer. Decryption then corresponds to computing $b - \langle a, s \rangle = m + e$ and rounding the result to recover $m$ while discarding $e$.
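
To make the scheme concrete, here is a toy Python sketch of LWE encryption and decryption with the message packed into the top 4 bits of a 32-bit integer. The parameter values and helper names are mine and purely illustrative; a real implementation needs a proper discrete Gaussian sampler and constant-time arithmetic.

import numpy as np

q = 2**32          # ciphertext modulus
n = 630            # LWE dimension (an illustrative choice)
msg_bits = 4       # the message lives in the top 4 bits of a 32-bit integer
rng = np.random.default_rng()

def keygen():
    return rng.integers(0, 2, size=n)              # binary secret key

def encrypt(m, s, stddev=2**17):
    a = rng.integers(0, q, size=n)
    e = int(round(rng.normal(0, stddev)))          # toy stand-in for a discrete Gaussian
    dot = sum(int(ai) * int(si) for ai, si in zip(a, s))
    b = (dot + (m << (32 - msg_bits)) + e) % q
    return a, b

def decrypt(ciphertext, s):
    a, b = ciphertext
    dot = sum(int(ai) * int(si) for ai, si in zip(a, s))
    noisy = (b - dot) % q                          # = (m in the top bits) + error
    # rounding to the nearest multiple of 2^(32 - msg_bits) discards the error
    return ((noisy + (1 << (32 - msg_bits - 1))) >> (32 - msg_bits)) % (1 << msg_bits)

s = keygen()
assert decrypt(encrypt(9, s), s) == 9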

Without the error term, an attacker could determine the secret key from a polynomial-sized collection of LWE ciphertexts with something like Gaussian elimination. The set of samples looks like a linear (or affine) system, where the secret key entries are the unknown variables. With an error term, the problem of solving the system is believed to be hard, and only exponential time/space algorithms are known.

RLWE: The Ring Learning With Errors (RLWE) problem is the natural analogue of LWE, where all scalars involved are replaced with polynomials over a (carefully) chosen ring.

Formally, the RLWE encryption scheme has the following parameters:

  • A ring $R = \mathbb{Z}/q\mathbb{Z}$, where $ q \geq 2$ is a positive integer. This is the space of coefficients of all polynomials in the scheme. I usually think of $q$ as $2^{32}$, i.e., unsigned 32-bit integers.
  • A plaintext space $R[x] / (x^N + 1)$, where $N$ is a power of 2. This is the space that the underlying message $m(x)$ comes from, and it is encoded as a list of $N$ integers forming the coefficients of the polynomial.
  • An RLWE dimension $n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $D$ with a mean of zero and a fixed standard deviation.

An RLWE secret key $s$ is defined as a list of $n$ polynomials with binary coefficients in $\mathbb{B}[x] / (x^N+1)$, where $\mathbb{B} = \{0, 1\}$. The coefficients are uniformly sampled, like in LWE. An RLWE ciphertext is defined as a vector of $n$ polynomials $a = (a_1(x), \dots, a_n(x))$, sampled uniformly over $(R[x] / (x^N+1))^n$, and a polynomial $b(x) = \langle a, s \rangle + m(x) + e(x)$, where $m(x)$ is the message (with a similar “store it in the top bits” trick as LWE), $e(x)$ is a polynomial with coefficients drawn from $D$, and all the products in the inner product are computed in $R[x] / (x^N+1)$. Decryption in RLWE involves computing $b(x) - \langle a, s \rangle$ and rounding appropriately to recover $m(x)$. Just like with LWE, the message is “hidden” by the noise added to an equation corresponding to the polynomial products (i.e., without the noise and with enough sample encryptions of the same message/secret key, you can solve the system and recover the message). For more notes on how polynomial multiplication ends up being trickier in this ring, see my negacyclic polynomial multiplication article.

The most common version of RLWE you will see in the literature sets the vector dimension $n=1$, and so the secret key $s$ is a single polynomial, the ciphertext is a single polynomial, and RLWE can be viewed as directly replacing the vector dot product in LWE with a polynomial product. However, making $n$ larger is believed to provide more security, and it can be traded off against making the polynomial degree smaller, which can be useful when tweaking parameters for performance (keeping the security level constant).
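
To make the $n=1$ case concrete, here is a minimal (and deliberately slow) sketch of forming an RLWE ciphertext with a schoolbook negacyclic product. The helper names and the tiny parameters are mine; the fast FFT-based products are the subject of the negacyclic multiplication article later in this series.

import numpy as np

def negacyclic_mul(f, g, N, q):
    # schoolbook product of f(x) and g(x) in (Z/qZ)[x] / (x^N + 1)
    out = [0] * N
    for i in range(N):
        for j in range(N):
            sign = -1 if i + j >= N else 1
            out[(i + j) % N] = (out[(i + j) % N] + sign * int(f[i]) * int(g[j])) % q
    return out

def rlwe_encrypt(m, s, N=16, q=2**32, stddev=2**17):
    # m: list of N small messages (one per coefficient); s: binary coefficients of s(x)
    rng = np.random.default_rng()
    a = [int(x) for x in rng.integers(0, q, size=N)]
    e = [int(round(x)) for x in rng.normal(0, stddev, size=N)]
    a_s = negacyclic_mul(a, s, N, q)
    b = [(a_s[k] + ((m[k] << 28) % q) + e[k]) % q for k in range(N)]  # messages in the top 4 bits
    return a, b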

Sample Extraction

Sample extraction is the trick of taking an RLWE encryption of $m(x) = m_0 + m_1 x + \dots + m_{N-1}x^{N-1}$, and outputting an LWE encryption of $m_0$. In our case, the degree $N$ and the dimension $n_{\textup{RLWE}}$ of the input RLWE ciphertext scheme are fixed, but we may pick the dimension $n_{\textup{LWE}}$ of the LWE scheme as we choose to make this trick work.

This is one of those times in math when it is best to “just work it out with a pencil.” It turns out there are no serious obstacles to our goal. We start with polynomials $a = (a_1(x), \dots, a_n(x))$ and $b(x) = \langle a, s \rangle + m(x) + e(x)$, and we want to produce a vector of scalars $(x_1, \dots, x_D)$ of some dimension $D$, a corresponding secret key $s_{\textup{LWE}}$, and a scalar $b = \langle (x_1, \dots, x_D), s_{\textup{LWE}} \rangle + m_0 + e'$, where $e'$ may be different from the input error $e(x)$, but is hopefully not too much larger.

As with many of the articles in this series, we employ the so-called “phase function” to help with the analysis, which is just the partial decryption of an RLWE ciphertext without the rounding step: $\varphi(x) = b(x) - \langle a, s \rangle = m(x) + e(x)$. The idea is as follows: inspect the structure of the constant term of $\varphi(x)$, oh look, it’s an LWE encryption.

So let’s expand the constant term of $b(x) - \langle a, s \rangle$. Given a polynomial expression, I will use the notation $(-)[0]$ to denote the constant coefficient, and $(-)[k]$ for the $k$-th coefficient.

$$ \begin{aligned}(b(x) - \langle a, s \rangle)[0] &= b[0] - \left ( (a_1s_1)[0] + \dots + (a_n s_n)[0] \right ) \end{aligned}$$

Each entry in the dot product is a negacyclic polynomial product, so its constant term requires summing all the pairs of coefficients of $a_i$ and $s_i$ whose degrees sum to zero mod $N$, and flipping signs when there’s wraparound. In particular, a single product above for $a_i s_i$ has the form:

$$(a_is_i) [0] = s_i[0]a_i[0] - s_i[1]a_i[N-1] - s_i[2]a_i[N-2] - \dots - s_i[N-1]a_i[1]$$

Notice that I wrote the coefficients of $s_i$ in increasing order. This was on purpose, because if we re-write this expression $(a_is_i)[0]$ as a dot product, we get

$$(a_is_i)[0] = \left \langle (s_i[0], s_i[1], \dots, s_i[N-1]), (a_i[0], -a_i[N-1], \dots, -a_i[1])\right \rangle$$

In particular, the $a_i[k]$ are public, so we can sign-flip and reorder them easily in our conversion trick. But $s_i$ is unknown at the time the sample extraction needs to occur, so it helps if we can leave the secret key untouched. And indeed, when we apply the above expansion to all of the terms in the computation of $\varphi(x)[0]$, we end up manipulating the $a_i$’s a lot, but merely “flattening” the coefficients of $s = (s_1(x), \dots, s_n(x))$ into a single long vector.

So combining all of the above products, we see that $(b(x) - \langle a, s \rangle)[0]$ is already an LWE encryption with $(x, y) = ((x_1, \dots, x_D), b[0])$, and $x$ being the very long ($D = nN$) vector

$$\begin{aligned} x = (& a_1[0], -a_1[N-1], \dots, -a_1[1], \\ &a_2[0], -a_2[N-1], \dots, -a_2[1], \\ &\dots , \\ &a_n[0], -a_n[N-1], \dots, -a_n[1] ) \end{aligned}$$

And the corresponding secret key is

$$\begin{aligned} s_{\textup{LWE}} = (& s_1[0], s_1[1], \dots, s_1[N-1], \\ &s_2[0], s_2[1], \dots, s_2[N-1], \\ &\dots , \\ &s_n[0], s_n[1], \dots, s_n[N-1] ) \end{aligned}$$

And the error in this ciphertext is exactly the constant coefficient of the error polynomial $e(x)$ from the RLWE encryption, which is independent of the error of all the other coefficients.
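
In code, the whole trick is just a reshuffle of coefficients. Here is a minimal sketch (the function names are mine), where each $a_i$ and $b$ is given as a list of its coefficients:

def sample_extract(a_polys, b_coeffs, q=2**32):
    # Build the LWE ciphertext encrypting m(x)'s constant term.
    lwe_a = []
    for a in a_polys:
        N = len(a)
        lwe_a.append(int(a[0]) % q)
        # the remaining coefficients of a_i, reversed and negated
        lwe_a.extend((-int(a[k])) % q for k in range(N - 1, 0, -1))
    return lwe_a, int(b_coeffs[0]) % q

def flatten_key(s_polys):
    # The matching LWE secret key: the RLWE key's coefficients concatenated in order.
    return [int(c) for s in s_polys for c in s]

Decrypting the output against flatten_key(s) computes exactly the constant coefficient of $b(x) - \langle a, s \rangle$, as derived above.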

Commentary

This trick is a best case scenario. Unlike with key switching, we don’t need to encrypt the output LWE secret key to perform the conversion. And unlike modulus switching, there is no impact on the error growth in the conversion from RLWE to LWE. So in a sense, this trick is “perfect,” though it loses information about the other coefficients of $m(x)$ in the process. As it happens, the CGGI FHE scheme that these articles are building toward only uses the constant coefficient.

The only twist to think about is that the output LWE ciphertext is dependent on the RLWE scheme parameters. What if you wanted to get a smaller-dimensional LWE ciphertext as output? This is a realistic concern, as in the CGGI FHE scheme one starts from an LWE ciphertext of one dimension, goes to RLWE of another (larger) dimension, and needs to get back to LWE of the original dimension by the end.

To do this, you have two options: one is to pick the RLWE ciphertext parameters $n, N$ so that their product is the value you need. A second is to allow the RLWE parameters to be whatever you need for performance/security, and then employ a key switching operation after the sample extraction to get back to the LWE parameters you need.

It is worth mentioning (though I am far from fully understanding the methods) that there are other ways to convert between LWE and RLWE. One can go from LWE to RLWE, or from a collection of LWEs to RLWE. Some methods can be found in this paper and its references.

Until next time!

Estimating the Security of Ring Learning with Errors (RLWE)

This article was written by my colleague, Cathie Yun. Cathie is an applied cryptographer and security engineer, currently working with me to make fully homomorphic encryption a reality at Google. She’s also done a lot of cool stuff with zero knowledge proofs.


In previous articles, we’ve discussed techniques used in Fully Homomorphic Encryption (FHE) schemes. The basis for many FHE schemes, as well as other privacy-preserving protocols, is the Learning With Errors (LWE) problem. In this article, we’ll talk about how to estimate the security of lattice-based schemes that rely on the hardness of LWE, as well as its widely used variant, Ring LWE (RLWE).

A previous article on modulus switching introduced LWE encryption, but as a refresher:

Reminder of LWE

A literal repetition from the modulus switching article. The LWE encryption scheme I’ll use has the following parameters:

  • A plaintext space $\mathbb{Z}/q\mathbb{Z}$, where $q \geq 2$ is a positive integer. This is the space that the underlying message comes from.
  • An LWE dimension $n \in \mathbb{N}$.
  • A discrete Gaussian error distribution $ D$ with a mean of zero and a fixed standard deviation.

An LWE secret key is defined as a vector in $\{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $a = (a_1, \dots, a_n)$, sampled uniformly over $(\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $b = \langle a, s \rangle + m + e$, where $e$ is drawn from $D$ and all arithmetic is done modulo $q$. Note that $e$ must be small for the encryption to be valid.

Learning With Errors (LWE) security

Choosing appropriate LWE parameters is a nontrivial challenge when designing and implementing LWE based schemes, because there are conflicting requirements of security, correctness, and performance. Some of the parameters that can be manipulated are the LWE dimension $n$, error distribution $D$ (referred to in the next few sections as $X_e$), secret distribution $X_s$, and plaintext modulus $q$.

Lattice Estimator

Here is where the Lattice Estimator tool comes to our assistance! The lattice estimator is a Sage module written by a group of lattice cryptography researchers which estimates the concrete security of Learning with Errors (LWE) instances.

For a given set of LWE parameters, the Lattice Estimator calculates the cost of all known efficient lattice attacks – for example, the Primal, Dual, and Coded-BKW attacks. It returns the estimated number of “rops” or “ring operations” required to carry out each attack; the attack that is the most efficient is the one that determines the security parameter. The bits of security for the parameter set can be calculated as $\log_2(\text{rops})$ for the most efficient attack.

Running the Lattice Estimator

For example, let’s estimate the security of the security parameters originally published for the popular TFHE scheme:

n = 630
q = 2^32
Xs = UniformMod(2)
Xe = DiscreteGaussian(stddev=2^17)

After installing the Lattice Estimator and Sage, we run the following commands in Sage:

> from estimator import *
> schemes.TFHE630
LWEParameters(n=630, q=4294967296, Xs=D(σ=0.50, μ=-0.50), Xe=D(σ=131072.00), m=+Infinity, tag='TFHE630')
> _ = LWE.estimate(schemes.TFHE630)
bkw                  :: rop: ≈2^153.1, m: ≈2^139.4, mem: ≈2^132.6, b: 4, t1: 0, t2: 24, ℓ: 3, #cod: 552, #top: 0, #test: 78, tag: coded-bkw
usvp                 :: rop: ≈2^124.5, red: ≈2^124.5, δ: 1.004497, β: 335, d: 1123, tag: usvp
bdd                  :: rop: ≈2^131.0, red: ≈2^115.1, svp: ≈2^131.0, β: 301, η: 393, d: 1095, tag: bdd
bdd_hybrid           :: rop: ≈2^185.3, red: ≈2^115.9, svp: ≈2^185.3, β: 301, η: 588, ζ: 0, |S|: 1, d: 1704, prob: 1, ↻: 1, tag: hybrid
bdd_mitm_hybrid      :: rop: ≈2^265.5, red: ≈2^264.5, svp: ≈2^264.5, β: 301, η: 2, ζ: 215, |S|: ≈2^189.2, d: 1489, prob: ≈2^-146.6, ↻: ≈2^148.8, tag: hybrid
dual                 :: rop: ≈2^128.7, mem: ≈2^72.0, m: 551, β: 346, d: 1181, ↻: 1, tag: dual
dual_hybrid          :: rop: ≈2^119.8, mem: ≈2^115.5, m: 516, β: 314, d: 1096, ↻: 1, ζ: 50, tag: dual_hybrid

In this example, the most efficient attack is the dual_hybrid attack. It uses 2^119.8 ring operations, and so these parameters provide 119.8 bits of security. The reader may notice that the TFHE website claims those parameters give 128 bits of security. This discrepancy is due to the fact that they used an older library (the LWE estimator, which is no longer maintained), which doesn’t take into account the most up-to-date lattice attacks.
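
If you want to estimate a parameter set that isn't predefined in schemes, you can also build the parameters by hand. Here is a sketch, assuming the LWE.Parameters constructor and the ND noise-distribution helpers exported by "from estimator import *" (check the estimator's README for the exact current names):

from estimator import *

# the same TFHE parameters as above, constructed manually
params = LWE.Parameters(
    n=630,
    q=2**32,
    Xs=ND.UniformMod(2),                   # binary secret key
    Xe=ND.DiscreteGaussian(stddev=2**17),
)
_ = LWE.estimate(params)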

For further reading, Benjamin Curtis wrote an article about parameter selection for the CONCRETE implementation of the TFHE scheme. Benjamin Curtis, Martin Albrecht, and other researchers also used the Lattice Estimator to estimate all the LWE and NTRU schemes.

Ring Learning with Errors (RLWE) security

It is often desirable to use Ring LWE instead of LWE, for greater efficiency and smaller key sizes (as Chris Peikert illustrates via meme). We’d like to estimate the security of a Ring LWE scheme, but it wasn’t immediately obvious to us how to do this, since the Lattice Estimator only operates over LWE instances. In order to use the Lattice Estimator for this security estimate, we first needed to do a reduction from the RLWE instance to an LWE instance.

Attempted RLWE to LWE reduction

Given an RLWE instance with RLWE dimension $k$ and polynomial modulus degree $N$, we can create a relation that looks like an LWE instance of LWE dimension $N * k$ with the same security, as long as $N$ is a power of 2 and there are no known attacks that target the ring structure of RLWE that are more efficient than the best LWE attacks. Note: $N$ must be a power of 2 so that $x^N+1$ is a cyclotomic polynomial.

An RLWE encryption has the following form: $ (a_0(x), a_1(x), \dots, a_{k-1}(x), b(x)) $

  •   Public polynomials: $ a_0(x), a_1(x), \dots a_{k-1}(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} \left( (\mathbb{Z}/q\mathbb{Z})[x] / (x^N + 1) \right)^k$
  •   Secret (binary) polynomials: $ s_0(x), s_1(x), \dots s_{k-1}(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} (\mathbb{B}_N[x])^k$
  •   Error: $ e(x) \overset{{\scriptscriptstyle\$}}{\leftarrow} \chi_e$
  •   RLWE instance: $ b(x) = \sum_{i=0}^{k-1} a_i(x) \cdot s_i(x) + e(x) \in (\mathbb{Z}/q\mathbb{Z})[x] / (x^N + 1)$

We would like to express this in the form of an LWE encryption. We can start with the simple case, where $ k=1 $. Therefore, we will only be working with the zero-entry polynomials, $a_0(x)$ and $s_0(x)$. (For simplicity, in the next example you can ignore the zero-subscript and think of them as $a(x)$ and $s(x)$).

Naive reduction for $k=1$ (wrong!)

Naively, if we simply defined the LWE $A$ matrix to be a concatenation of the coefficients of the RLWE polynomial $a(x)$, we get:

$$ A_{\text{LWE}} = ( a_{0, 0}, a_{0, 1}, \dots a_{0, N-1} ) $$

We can do the same for the LWE $s$ vector:

$$ s_{\text{LWE}} = ( s_{0, 0}, s_{0, 1}, \dots s_{0, N-1} ) $$

But this doesn’t give us the value of $b_{\text{LWE}}$ for the LWE encryption that we want. In particular, the first entry of $b_{\text{LWE}}$, which we can call $b_{\text{LWE}, 0}$, is simply a product of the first entries of $a_0(x)$ and $s_0(x)$:

$$ b_{\text{LWE}, 0} = a_{0, 0} \cdot s_{0, 0} + e_0 $$

However, we want $b_{\text{LWE}, 0}$ to be a sum of the products of all the coefficients of $a_0(x)$ and $s_0(x)$ that give us a zero-degree coefficient mod $x^N + 1$. This modulus is important because it causes the product of high-degree monomials to “wrap around” to smaller degree monomials because of the negacyclic property, such that $x^N \equiv -1 \mod x^N + 1$. So the constant term $b_{\text{LWE}, 0}$ should include all of the following terms:

$$\begin{aligned}
b_{\text{LWE}, 0} = & a_{0, 0} \cdot s_{0, 0} \\
 - & a_{0, 1} \cdot s_{0, N-1} \\
 - & a_{0, 2} \cdot s_{0, N-2} \\
 - & \dots \\
 - & a_{0, N-1} \cdot s_{0, 1}\\
 + & e_0\\
\end{aligned}
$$

Improved reduction for $k=1$

We can achieve the desired value of $b_{\text{LWE}}$ by more strategically forming a matrix $A_{\text{LWE}}$, to reflect the negacyclic property of our polynomials in the RLWE space. We can keep the naive construction for $s_\text{LWE}$.

$$ A_{\text{LWE}} =
\begin{pmatrix}
a_{0, 0}   & -a_{0, N-1} & -a_{0, N-2} & \dots & -a_{0, 1}\\
a_{0, 1}   & a_{0, 0}    & -a_{0, N-1} & \dots & -a_{0, 2}\\
\vdots     & \ddots      &             &       & \vdots   \\
a_{0, N-1} & \dots       &             &       & a_{0, 0} \\
\end{pmatrix}
$$

This definition of $A_\text{LWE}$ gives us the desired value for $b_\text{LWE}$, when $b_{\text{LWE}}$ is interpreted as the coefficients of a polynomial. As an example, we can write out the first entry of $b_\text{LWE}$:

$$
\begin{aligned}
b_{\text{LWE}, 0} = & \sum_{i=0}^{N-1} A_{\text{LWE}, 0, i} \cdot s_{0, i} + e_0 \\
b_{\text{LWE}, 0} = & a_{0, 0} \cdot s_{0, 0} \\
 - & a_{0, 1} \cdot s_{0, N-1} \\
 - & a_{0, 2} \cdot s_{0, N-2} \\
 - & \dots \\
 - & a_{0, N-1} \cdot s_{0, 1}\\
 + & e_0 \\
\end{aligned}
$$
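
To make this concrete, here is a small sketch (my code, not from the original post) that builds $A_{\text{LWE}}$ from the coefficients of $a_0(x)$ and checks it against a direct negacyclic product:

import numpy as np

def negacyclic_matrix(a):
    # A[r][c] multiplies s_c and contributes to coefficient r of a(x) * s(x) mod (x^N + 1)
    N = len(a)
    A = np.zeros((N, N), dtype=np.int64)
    for r in range(N):
        for c in range(N):
            A[r, c] = a[r - c] if r >= c else -a[N + r - c]
    return A

N = 4
rng = np.random.default_rng(0)
a = rng.integers(0, 100, size=N)
s = rng.integers(0, 2, size=N)

# direct negacyclic product of a(x) and s(x), coefficient by coefficient
direct = np.zeros(N, dtype=np.int64)
for i in range(N):
    for j in range(N):
        sign = -1 if i + j >= N else 1
        direct[(i + j) % N] += sign * a[i] * s[j]

assert np.array_equal(negacyclic_matrix(a) @ s, direct)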

Generalizing for all $k$

In the generalized $k$ case, we have the RLWE equation:

$$ b(x) = a_0(x) \cdot s_0(x) + a_1(x) \cdot s_1(x) + \dots + a_{k-1}(x) \cdot s_{k-1}(x) + e(x) $$

We can construct the LWE elements as follows:

$$A_{\text{LWE}} =
\left ( \begin{array}{c|c|c|c}
A_{0, \text{LWE}} & A_{1, \text{LWE}} & \dots & A_{k-1, \text{LWE}} \end{array}
 \right )
$$

where each sub-matrix is the construction from the previous section:

$$ A_{i, \text{LWE}} =
\begin{pmatrix}
a_{i, 0}   & -a_{i, N-1} & -a_{i, N-2} & \dots & -a_{i, 1}\\
a_{i, 1}   & a_{i, 0}    & -a_{i, N-1} & \dots & -a_{i, 2}\\
\vdots     & \ddots      &             &       & \vdots   \\
a_{i, N-1} & \dots       &             &       & a_{i, 0} \\
\end{pmatrix}
$$

And the secret keys are stacked similarly:

$$ s_{\text{LWE}} = ( s_{0, 0}, s_{0, 1}, \dots s_{0, N-1} \mid s_{1, 0}, s_{1, 1}, \dots s_{1, N-1} \mid \dots ) $$

This is how we can reduce an RLWE instance with RLWE dimension $k$ and polynomial modulus degree $N$, to a relation that looks like an LWE instance of LWE dimension $N * k$.
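
Continuing the sketch from the $k=1$ section (reusing negacyclic_matrix, rng, and N from that snippet), the general construction is just a horizontal stack of those blocks and a concatenation of the secret key coefficients:

k = 3
a_polys = [rng.integers(0, 100, size=N) for _ in range(k)]
s_polys = [rng.integers(0, 2, size=N) for _ in range(k)]

A_lwe = np.hstack([negacyclic_matrix(a_i) for a_i in a_polys])   # shape (N, N*k)
s_lwe = np.concatenate(s_polys)                                  # length N*k

# A_lwe @ s_lwe gives the coefficients of sum_i a_i(x) * s_i(x) mod (x^N + 1);
# adding the coefficients of e(x) then gives b(x).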

Caveats and open research

This reduction does not result in a correctly formed LWE instance, since an LWE instance would have a matrix $A$ that is randomly sampled, whereas the reduction results in a matrix $A$ that has cyclic structure, due to the cyclic property of the RLWE instance. This is why I’ve been emphasizing that the reduction produces an instance that looks like LWE. All currently known attacks on RLWE do not take advantage of the structure, but rather directly attack this transformed LWE instance. Whether the additional ring structure can be exploited in the design of more efficient attacks remains an open question in the lattice cryptography research community.

In her PhD thesis, Rachel Player mentions the RLWE to LWE security reduction:

In order to try to pick parameters in Ring-LWE-based schemes (FHE or otherwise) that we hope are sufficiently secure, we can choose parameters such that the underlying Ring-LWE instance should be hard to solve according to known attacks. Each Ring-LWE sample can be used to extract $n$ LWE samples. To the best of our knowledge, the most powerful attacks against $d$-sample Ring-LWE all work by instead attacking the $nd$-sample LWE problem. When estimating the security of a particular set of Ring-LWE parameters we therefore estimate the security of the induced set of LWE parameters.

This indicates that we can do this reduction for certain RLWE instances. However, we must be careful to ensure that the polynomial modulus degree $N$ is a power of two, because otherwise the error distribution “breaks”, as my colleague Baiyu Li explained to me in conversation:

The RLWE problem is typically defined in using the ring of integers of the cyclotomic field $\mathbb{Q}[X]/(f(X))$, where $f(X)$ is a cyclotomic polynomial of degree $k=\phi(N)$ (where $\phi$ is Euler’s totient function), and the error is a spherical Gaussian over the image of the canonical embedding into the complex numbers $\mathbb{C}^k$ (basically the images of primitive roots of unity under $f$). In many cases we set $N$ to be a power of 2, thus $f(X)=X^{N/2}+1$, since the canonical embedding for such $N$ has a nice property that the preimage of the spherical Gaussian error is also a spherical Gaussian over the coefficients of polynomials in $\mathbb{Q}[X]/(f(X))$. So in this case we can sample $k=N/2$ independent Gaussian numbers and use them as the coefficients of the error polynomial $e(x)$. For $N$ not a power of 2, $f(X)$ may have some low degree terms, and in order to get the spherical Gaussian with the same variance $s^2$ in the canonical embedding, we probably need to use a larger variance when sampling the error polynomial coefficients.

The RLWE we frequently use in practice is actually a specialized version called “polynomial LWE”, and instantiated with $N$ = power of 2 and so $f(X)=X^{N/2}+1$. For other parameters the two are not exactly the same. This paper has some explanations: https://eprint.iacr.org/2018/170.pdf

The error distribution “breaks” if $N$ is not a power of 2 due to the fact that the precise form of RLWE is not defined on integer polynomial rings $R = \mathbb{Z}[X]/(f(X))$, but is defined on its dual (or the dual in the underlying number field, which is a fractional ideal of $\mathbb{Q}[X]/(f(x))$), and the noise distribution is on the Minkowski embedding of this dual ring. For non-power of 2 $N$, the product mod $f$ of two small polynomials in $\mathbb{Q}[X]/(f(x))$ may be large, where small/large means their L2 norm on the coefficient vector. This means that in order to sample the required noise distribution, you may need a skewed coefficient distribution. Only when $N$ is a power of 2, the dual of $R$ is a scaling of $R$, and distance in the embedding of $R^{\text{dual}}$ is preserved in $R$, and so we can just sample iid gaussian coefficient to get the required noise.

Because working with a power-of-two RLWE polynomial modulus gives “nice” error behavior, this parameter choice is often recommended and chosen for concrete instantiations of RLWE. For example, the Homomorphic Encryption Standard recommends and only analyzes the security of parameters for power-of-two cyclotomic fields for use in homomorphic encryption (though future versions of the standard aim to extend the security analysis to generic cyclotomic rings):

We stress that when the error is chosen from sufficiently wide and “well spread” distributions that match the ring at hand, we do not have meaningful attacks on RLWE that are better than LWE attacks, regardless of the ring. For power-of-two cyclotomics, it is sufficient to sample the noise in the polynomial basis, namely choosing the coefficients of the error polynomial $e \in \mathbb{Z}[x] / \phi_k(x)$ independently at random from a very “narrow” distribution.

There are existing works analyzing and targeting the ring structure of RLWE.

It would of course be great to have a definitive answer on whether we can be confident using this RLWE to LWE reduction to estimate the security of RLWE based schemes. In the meantime, we have seen many Fully Homomorphic Encryption (FHE) schemes using this RLWE to LWE reduction, and we hope that this article helps explain how that reduction works and the existing open questions around this approach.

Negacyclic Polynomial Multiplication

In this article I’ll cover three techniques to compute special types of polynomial products that show up in lattice cryptography and fully homomorphic encryption. Namely, the negacyclic polynomial product, which is the product of two polynomials in the quotient ring $\mathbb{Z}[x] / (x^N + 1)$. As a precursor to the negacyclic product, we’ll cover the simpler cyclic product.

All of the Python code written for this article is on GitHub.

The DFT and Cyclic Polynomial Multiplication

A recent program gallery piece showed how single-variable polynomial multiplication could be implemented using the Discrete Fourier Transform (DFT). This boils down to two observations:

  1. The product of two polynomials $f, g$ can be computed via the convolution of the coefficients of $f$ and $g$.
  2. The Convolution Theorem, which says that the Fourier transform of a convolution of two signals $f, g$ is the point-wise product of the Fourier transforms of the two signals. (The same holds for the DFT)

This provides a much faster polynomial product operation than one could implement using the naïve polynomial multiplication algorithm (though see the last section for an implementation anyway). The DFT can be used to speed up large integer multiplication as well.

A caveat with normal polynomial multiplication is that one needs to pad the input coefficient lists with enough zeros so that the convolution doesn’t “wrap around.” That padding results in the output having length at least as large as the sum of the degrees of $f$ and $g$ (see the program gallery piece for more details).

If you don’t pad the polynomials, instead you get what’s called a cyclic polynomial product. More concretely, if the two input polynomials $f, g$ are represented by coefficient lists $(f_0, f_1, \dots, f_{N-1}), (g_0, g_1, \dots, g_{N-1})$ of length $N$ (implying the inputs are degree at most $N-1$, i.e., the lists may end in a tail of zeros), then the Fourier Transform technique computes

\[ f(x) \cdot g(x) \mod (x^N - 1) \]

This modulus is in the sense of a quotient ring $\mathbb{Z}[x] / (x^N - 1)$, where $(x^N - 1)$ denotes the ring ideal generated by $x^N-1$, i.e., all polynomials that are evenly divisible by $x^N - 1$. A particularly important interpretation of this quotient ring is achieved by interpreting the ideal generator $x^N - 1$ as an equation $x^N - 1 = 0$, also known as $x^N = 1$. To get the canonical ring element corresponding to any polynomial $h(x) \in \mathbb{Z}[x]$, you “set” $x^N = 1$ and reduce the polynomial until there are no more terms with degree bigger than $N-1$. For example, if $N=5$ then $x^{10} + x^6 - x^4 + x + 2 = -x^4 + 2x + 3$ (the $x^{10}$ becomes 1, and $x^6 = x$).

To prove the DFT product computes a product in this particular ring, note how the convolution theorem produces the following formula, where $\textup{fprod}(f, g)$ denotes the process of taking the Fourier transform of the two coefficient lists, multiplying them entrywise, and taking a (properly normalized) inverse FFT, and $\textup{fprod}(f, g)(j)$ is the $j$-th coefficient of the output polynomial:

\[ \textup{fprod}(f, g)(j) = \sum_{k=0}^{N-1} f_k g_{j-k \textup{ mod } N} \]

In words, the output polynomial coefficient $j$ equals the sum of all products of pairs of coefficients whose indices sum to $j$ when considered “wrapping around” $N$. Fixing $j=1$ as an example, $\textup{fprod}(f, g)(1) = f_0 g_1 + f_1g_0 + f_2 g_{N-1} + f_3 g_{N-2} + \dots$. This demonstrates the “set $x^N = 1$” interpretation above: the term $f_2 g_{N-1}$ corresponds to the product $f_2x^2 \cdot g_{N-1}x^{N-1}$, which contributes to the $x^1$ term of the polynomial product if and only if $x^{2 + N-1} = x$, if and only if $x^N = 1$.

To achieve this in code, we simply use the version of the code from the program gallery piece, but fix the size of the arrays given to numpy.fft.fft in advance. We will also, for simplicity, assume the $N$ one wishes to use is a power of 2. The resulting code is significantly simpler than the original program gallery code (we omit zero-padding to length $N$ for brevity).

import numpy
from numpy.fft import fft, ifft

def cyclic_polymul(p1, p2, N):
    """Multiply two integer polynomials modulo (x^N - 1).

    p1 and p2 are arrays of coefficients in degree-increasing order.
    """
    assert len(p1) == len(p2) == N
    product = fft(p1) * fft(p2)
    inverted = ifft(product)
    return numpy.round(numpy.real(inverted)).astype(numpy.int32)

As a side note, there’s nothing that stops this from working with polynomials that have real or complex coefficients, but so long as we use small magnitude integer coefficients and round at the end, I don’t have to worry about precision issues (hat tip to Brad Lucier for suggesting an excellent paper by Colin Percival, “Rapid multiplication modulo the sum and difference of highly composite numbers“, which covers these precision issues in detail).

Negacyclic polynomials, DFT with duplication

Now the kind of polynomial quotient ring that shows up in cryptography is critically not $\mathbb{Z}[x]/(x^N-1)$, because that ring has enough easy-to-reason-about structure that it can’t hide secrets. Instead, cryptographers use the ring $\mathbb{Z}[x]/(x^N+1)$ (the minus becomes a plus), which is believed to be more secure for cryptography—although I don’t have a great intuitive grasp on why.

The interpretation is similar here as before, except we “set” $x^N = -1$ instead of $x^N = 1$ in our reductions. Repeating the above example, if $N=5$ then $x^{10} + x^6 - x^4 + x + 2 = -x^4 + 3$ (the $x^{10}$ becomes $(-1)^2 = 1$, and $x^6 = -x$). It’s called negacyclic because as a term $x^k$ passes $k \geq N$, it cycles back to $x^0 = 1$, but with a sign flip.

The negacyclic polynomial multiplication can’t use the DFT without some special hacks. The first and simplest hack is to double the input lists with a negation. That is, starting from $f(x) \in \mathbb{Z}[x]/(x^N+1)$, we can define $f^*(x) = f(x) - x^Nf(x)$ in a different ring $\mathbb{Z}[x]/(x^{2N} - 1)$ (and similarly for $g^*$ and $g$).

Before seeing how this causes the DFT to (almost) compute a negacyclic polynomial product, some math wizardry. The ring $\mathbb{Z}[x]/(x^{2N} - 1)$ is special because it contains our negacyclic ring as a subring. Indeed, because the polynomial $x^{2N} - 1$ factors as $(x^N-1)(x^N+1)$, and because these two factors are coprime in $\mathbb{Z}[x]/(x^{2N} - 1)$, the Chinese remainder theorem (aka Sun-tzu’s theorem) generalizes to polynomial rings and says that any polynomial in $\mathbb{Z}[x]/(x^{2N} - 1)$ is uniquely determined by its remainders when divided by $(x^N-1)$ and $(x^N+1)$. Another way to say it is that the ring $\mathbb{Z}[x]/(x^{2N} - 1)$ factors as a direct product of the two rings $\mathbb{Z}[x]/(x^{N} - 1)$ and $\mathbb{Z}[x]/(x^{N} + 1)$.

Now mapping a polynomial $f(x)$ from the bigger ring $(x^{2N} - 1)$ to the smaller ring $(x^{N}+1)$ involves taking a remainder of $f(x)$ when dividing by $x^{N}+1$ (“setting” $x^N = -1$ and reducing). There are many possible preimage mappings, depending on what your goal is. In this case, we actually intentionally choose a non-preimage mapping, because in general to compute a preimage requires solving a system of congruences in the larger polynomial ring. So instead we choose $f(x) \mapsto f^*(x) = f(x) - x^Nf(x) = -f(x)(x^N - 1)$, which maps back down to $2f(x)$ in $\mathbb{Z}[x]/(x^{N} + 1)$. This mapping has a particularly nice structure, in that you build it by repeating the polynomial’s coefficients twice and flipping the sign of the second half. It’s easy to see that the product $f^*(x) g^*(x)$ maps down to $4f(x)g(x)$.

So if we properly account for these extra constant factors floating around, our strategy to perform negacyclic polynomial multiplication is to map $f$ and $g$ up to the larger ring as described, compute their cyclic product (modulo $x^{2N} - 1$) using the FFT, and then the result should be a degree $2N-1$ polynomial which can be reduced with one more modular reduction step to the right degree $N-1$ negacyclic product, i.e., setting $x^N = -1$, which materializes as taking the second half of the coefficients, flipping their signs, and adding them to the corresponding coefficients in the first half.

The code for this is:

def negacyclic_polymul_preimage_and_map_back(p1, p2):
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(numpy.real(inverted)).astype(p1.dtype)
    return (rounded[: p1.shape[0]] - rounded[p1.shape[0] :]) // 4

However, this chosen mapping hides another clever trick. The product of the two preimages has enough structure that we can “read” the result off without doing the full “set $x^N = -1$” reduction step. Mapping $f$ and $g$ up to $f^*, g^*$ and taking their product modulo $(x^{2N} - 1)$ gives

\[ \begin{aligned} f^*g^* &= -f(x^N-1) \cdot -g(x^N - 1) \\ &= fg (x^N-1)^2 \\ &= fg(x^{2N} - 2x^N + 1) \\ &= fg(2 - 2x^N) \\ &= 2(fg - x^Nfg) \end{aligned} \]

This has the same syntactical format as the original mapping $f \mapsto f – x^Nf$, with an extra factor of 2, and so its coefficients also have the form “repeat the coefficients and flip the sign of the second half” (times two). We can then do the “inverse mapping” by reading only the first half of the coefficients and dividing by 2.

def negacyclic_polymul_use_special_preimage(p1, p2):
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(0.5 * numpy.real(inverted)).astype(p1.dtype)
    return rounded[: p1.shape[0]]

Our chosen mapping $f \mapsto f-x^Nf$ is not particularly special, except that it uses a small number of pre and post-processing operations. For example, if you instead used the mapping $f \mapsto 2f + x^Nf$ (which would map back to $f$ exactly), then the FFT product would result in $5fg + 4x^Nfg$ in the larger ring. You can still read off the coefficients as before, but you’d have to divide by 5 instead of 2 (which, the superstitious would say, is harder). It seems that “double and negate” followed by “halve and take first half” is the least amount of pre/post processing possible.

Negacyclic polynomials with a “twist”

The previous section identified a nice mapping (or embedding) of the input polynomials into a larger ring. But studying that shows some symmetric structure in the FFT output. I.e., the coefficients of $f$ and $g$ are repeated twice, with some scaling factors. It also involves taking an FFT of two $2N$-dimensional vectors when we start from two $N$-dimensional vectors.

This sort of situation should make you think that we can do this more efficiently, either by using a smaller size FFT or by packing some data into the complex part of the input, and indeed we can do both.

[Aside: it’s well known that if all the entries of an FFT input are real, then the result also has symmetry that can be exploited for efficiency by reframing the problem as a size-N/2 FFT in some cases, and just removing half the FFT algorithm’s steps in other cases, see Wikipedia for more]

This technique was explained in Fast multiplication and its applications (pdf link) by Daniel Bernstein, a prominent cryptographer who specializes in cryptography performance, whose work appears in widely used standards like TLS and OpenSSH, and who designed a commonly used elliptic curve for cryptography.

[Aside: Bernstein cites this technique as using something called the “Tangent FFT (pdf link).” This is a drop-in FFT replacement he invented that is faster than the previous best (split-radix FFT), and Bernstein uses it mainly to give a precise expression for the number of operations required to do the multiplication end to end. We will continue to use the numpy FFT implementation, since in this article I’m just focusing on how to express negacyclic multiplication in terms of the FFT. Also worth noting: both the Tangent FFT and “Fast multiplication” papers frame their techniques—including FFT algorithm implementations!—in terms of polynomial ring factorizations and mappings. Be still, my beating cardioid.]

In terms of polynomial mappings, we start from the ring $\mathbb{R}[x] / (x^N + 1)$, where $N$ is a power of 2. We then pick a reversible mapping from $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - 1)$ (note the field change from real to complex), apply the FFT to the image of the mapping, and reverse it appropriately at the end.

One such mapping takes two steps, first mapping $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - i)$ and then from $\mathbb{C}[x]/(x^{N/2} - i) \to \mathbb{C}[x]/(x^{N/2} - 1)$. The first mapping is as easy as the last section, because $(x^N + 1) = (x^{N/2} + i) (x^{N/2} - i)$, and so we can just set $x^{N/2} = i$ and reduce the polynomial. This has the effect of making the second half of the polynomial’s coefficients become the complex part of the first half of the coefficients.

The second mapping is more nuanced, because we’re not just reducing via factorization. And we can’t just map $i \mapsto 1$ generically, because that would reduce complex numbers down to real values. Instead, we observe that (momentarily using an arbitrary degree $k$ instead of $N/2$), for any polynomial $f \in \mathbb{C}[x]$, the remainder of $f \mod x^k-i$ uniquely determines the remainder of $f \mod x^k - 1$ via the change of variables $x \mapsto \omega_{4k} x$, where $\omega_{4k}$ is a $4k$-th primitive root of unity $\omega_{4k} = e^{\frac{2 \pi i}{4k}}$. Spelling this out in more detail: if $f(x) \in \mathbb{C}[x]$ has remainder $f(x) = g(x) + h(x)(x^k - i)$ for some polynomial $h(x)$, then

\[ \begin{aligned} f(\omega_{4k}x) &= g(\omega_{4k}x) + h(\omega_{4k}x)((\omega_{4k}x)^{k} - i) \\ &= g(\omega_{4k}x) + h(\omega_{4k}x)(e^{\frac{\pi i}{2}} x^k - i) \\ &= g(\omega_{4k}x) + i h(\omega_{4k}x)(x^k - 1) \\ &= g(\omega_{4k}x) \mod (x^k - 1) \end{aligned} \]

Translating this back to $k=N/2$, the mapping from $\mathbb{C}[x]/(x^{N/2} - i) \to \mathbb{C}[x]/(x^{N/2} - 1)$ is $f(x) \mapsto f(\omega_{2N}x)$. And if $f = f_0 + f_1x + \dots + f_{N/2 - 1}x^{N/2 - 1}$, then the mapping involves multiplying each coefficient $f_k$ by $\omega_{2N}^k$.

When you view polynomials as if they were a simple vector of their coefficients, then this operation $f(x) \mapsto f(\omega_{k}x)$ looks like $(a_0, a_1, \dots, a_n) \mapsto (a_0, \omega_{k} a_1, \dots, \omega_k^n a_n)$. Bernstein calls the operation a twist of $\mathbb{C}^n$, which I mused about in this Mathstodon thread.

What’s most important here is that each of these transformations is invertible. The first because the top half coefficients end up in the complex parts of the polynomial, and the second because the mapping $f(x) \mapsto f(\omega_{2N}^{-1}x)$ is an inverse. Together, this makes the preprocessing and postprocessing exact inverses of each other. The code is then
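
(The code below uses a small helper, primitive_nth_root, that isn’t defined in the snippet. Here is a minimal definition, assuming it simply returns the complex primitive $n$-th root of unity:)

def primitive_nth_root(n):
    # e^(2*pi*i / n), the twist factor used below
    return numpy.exp(2 * numpy.pi * 1j / n)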

def negacyclic_polymul_complex_twist(p1, p2):
    n = p2.shape[0]
    primitive_root = primitive_nth_root(2 * n)
    root_powers = primitive_root ** numpy.arange(n // 2)

    p1_preprocessed = (p1[: n // 2] + 1j * p1[n // 2 :]) * root_powers
    p2_preprocessed = (p2[: n // 2] + 1j * p2[n // 2 :]) * root_powers

    p1_ft = fft(p1_preprocessed)
    p2_ft = fft(p2_preprocessed)
    prod = p1_ft * p2_ft
    ifft_prod = ifft(prod)
    ifft_rotated = ifft_prod * primitive_root ** numpy.arange(0, -n // 2, -1)

    return numpy.round(
        numpy.concatenate([numpy.real(ifft_rotated), numpy.imag(ifft_rotated)])
    ).astype(p1.dtype)

And so, at the cost of a bit more pre- and postprocessing, we can negacyclically multiply two degree $N-1$ polynomials using an FFT of length $N/2$. In theory, no information is wasted and this is optimal.

And finally, a simple matrix multiplication

The last technique I wanted to share is not based on the FFT, but it’s another method for doing negacyclic polynomial multiplication that has come in handy in situations where I am unable to use FFTs. I call it the Toeplitz method, because one of the polynomials is converted to a Toeplitz matrix. Sometimes I hear it referred to as a circulant matrix technique, but due to the negacyclic sign flip, I don’t think it’s a fully accurate term.

The idea is to put the coefficients of one polynomial $f(x) = f_0 + f_1x + \dots + f_{N-1}x^{N-1}$ into a matrix as follows:

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \]

The polynomial coefficients are written down in the first column unchanged, then in each subsequent column, the coefficients are cyclically shifted down one, and the term that wraps around the top has its sign flipped. When the second polynomial is treated as a vector of its coefficients, say, $g(x) = g_0 + g_1x + \dots + g_{N-1}x^{N-1}$, then the matrix-vector product computes their negacyclic product (as a vector of coefficients):

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_{N-1} \end{pmatrix} \]

This works because each row $j$ corresponds to one output term $x^j$, and the cyclic shift for that row accounts for the degree-wrapping, with the sign flip accounting for the negacyclic part. (If there were no sign attached, this method could be used to compute a cyclic polynomial product).

The Python code for this is

def cyclic_matrix(c: numpy.ndarray) -> numpy.ndarray:
    """Generates a matrix whose columns are successive downward cyclic shifts of the input.

    For input: [1, 2, 3], generates the following matrix:

        [[1 3 2]
         [2 1 3]
         [3 2 1]]
    """
    c = numpy.asarray(c).ravel()
    a, b = numpy.ogrid[0 : len(c), 0 : -len(c) : -1]
    indx = a + b
    return c[indx]


def negacyclic_polymul_toeplitz(p1, p2):
    n = len(p1)

    # Generates a sign matrix with 1s on and below the diagonal and -1s above.
    up_tri = numpy.tril(numpy.ones((n, n), dtype=int), 0)
    low_tri = numpy.triu(numpy.ones((n, n), dtype=int), 1) * -1
    sign_matrix = up_tri + low_tri

    cyclic = cyclic_matrix(p1)
    toeplitz_p1 = sign_matrix * cyclic
    return numpy.matmul(toeplitz_p1, p2)

Obviously on most hardware this would be less efficient than an FFT-based method (and there is some relationship between circulant matrices and Fourier Transforms, see Wikipedia). But in some cases—when the polynomials are small, or one of the two polynomials is static, or a particular hardware choice doesn’t handle FFTs with high-precision floats very well, or you want to take advantage of natural parallelism in the matrix-vector product—this method can be useful. It’s also simpler to reason about.
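
As a quick sanity check (assuming the definitions above, including the primitive_nth_root helper filled in earlier), the Toeplitz method and the twist method should agree on random inputs:

rng = numpy.random.default_rng(1)
N = 8
p1 = rng.integers(-10, 10, size=N)
p2 = rng.integers(-10, 10, size=N)

assert numpy.array_equal(
    negacyclic_polymul_toeplitz(p1, p2),
    negacyclic_polymul_complex_twist(p1, p2),
)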

Until next time!

Programming with Finite Fields

Back when I was first exposed to programming language design, I decided it would be really cool if there were a language that let you define your own number types and then do all your programming within those number types. And since I get excited about math, I think of really exotic number types (Boolean rings, Gaussian integers, Octonions, oh my!). I imagined it would be a language feature, so I could do something like this:

use gaussianintegers as numbers

x = 1 + i
y = 2 - 3i
print(x*y)
z = 2 + 3.5i  # error

I’m not sure why I thought this would be so cool. Perhaps I felt like I would be teaching a computer math. Or maybe the next level of abstraction in playing god by writing programs is to play god by designing languages (and I secretly satisfy a massive god complex by dictating the actions of my computer).

But despite not writing a language of my own, programming with weird number systems still has a special place in my heart. It just so happens that we’re in the middle of a long series on elliptic curves, and in the next post we’ll actually implement elliptic curve arithmetic over a special kind of number type (the finite field). In this post, we’ll lay the groundwork by implementing number types in Python that allow us to work over any finite field. This is actually a pretty convoluted journey, and to be totally rigorous we’d need to prove a bunch of lemmas, develop a bunch of ring theory, and prove the correctness of a few algorithms involving polynomials.

Instead of taking the long and winding road, we’ll just state the important facts with links to proofs, prove the easy stuff, and focus more heavily than usual on the particular Python implementation details. As usual, all of the code used in this post is available on this blog’s Github page.

Integers Modulo Primes

The simplest kind of finite field is the set of integers modulo a prime. We’ve dealt with this number field extensively on this blog (in groups, rings, fields, with RSA, etc.), but let’s recall what it is. The modulo operator $\textup{mod}$ (in programming it’s often denoted %) is a binary operation on integers such that $ x \mod y$ is the unique non-negative remainder of $ x$ when divided by $ y$.

Definition: Let $ p$ be a prime number. The set $ \mathbb{Z}/p$ consists of the numbers $ \left \{ 0, 1, \dots, p-1 \right \}$. If you endow it with the operations of addition (mod $ p$) and multiplication (mod $ p$), it forms a field.

To say it’s a field is just to say that arithmetic more or less behaves the way we expect it to, and in particular that every nonzero element has a (unique) multiplicative inverse. Making a number type for $ \mathbb{Z}/p$ in Python is quite simple.

def IntegersModP(p):
   class IntegerModP(FieldElement):
      def __init__(self, n):
         self.n = n % p
         self.field = IntegerModP

      def __add__(self, other): return IntegerModP(self.n + other.n)
      def __sub__(self, other): return IntegerModP(self.n - other.n)
      def __mul__(self, other): return IntegerModP(self.n * other.n)
      def __truediv__(self, other): return self * other.inverse()
      def __div__(self, other): return self * other.inverse()
      def __neg__(self): return IntegerModP(-self.n)
      def __eq__(self, other): return isinstance(other, IntegerModP) and self.n == other.n
      def __abs__(self): return abs(self.n)
      def __str__(self): return str(self.n)
      def __repr__(self): return '%d (mod %d)' % (self.n, self.p)

      def __divmod__(self, divisor):
         q,r = divmod(self.n, divisor.n)
         return (IntegerModP(q), IntegerModP(r))

      def inverse(self):
         ...?

   IntegerModP.p = p
   IntegerModP.__name__ = 'Z/%d' % (p)
   return IntegerModP

We’ve done a couple of things worth noting here. First, all of the double-underscore methods are operator overloads, so they are called when one tries to, e.g., add two instances of this class together. We’ve also implemented a division algorithm via __divmod__ which computes a (quotient, remainder) pair for the appropriate division. The built-in Python function divmod does this for integers, and we can overload it for a custom type. We’ll write a more complicated division algorithm later in this post. Finally, we’re dynamically creating our class so that different primes will correspond to different types. We’ll come back to why this encapsulation is a good idea later, but it’s crucial to make our next few functions reusable and elegant.

Here’s an example of the class in use:

>>> mod7 = IntegersModP(7)
>>> mod7(3) + mod7(6)
2 (mod 7)

The last (undefined) function in the IntegersModP class, the inverse function, is our only mathematical hurdle. Luckily, we can compute inverses in a generic way, using an algorithm called the extended Euclidean algorithm. Here’s the mathematics.

Definition: An element $ d$ is called a greatest common divisor (gcd) of $ a,b$ if it divides both $ a$ and $ b$, and for every other $ z$ dividing both $ a$ and $ b$, $ z$ divides $ d$. We denote it as $ \gcd(a,b)$. [1]

Note that we called it ‘a’ greatest common divisor. In general gcd’s need not be unique, though for integers one often picks the positive gcd. We’ll actually see this cause a tiny programmatic bug later in this post, but let’s push on for now.

Theorem: For any two integers $ a,b \in \mathbb{Z}$ there exist $ x,y \in \mathbb{Z}$ such that $ ax + by = \gcd(a,b)$.

We could beat around the bush and try to prove these things in various ways, but when it comes down to it there’s one algorithm of central importance that both computes the gcd and produces the needed linear combination $ x,y$. The algorithm is called the Euclidean algorithm. Here is a simple version that just gives the gcd.

def gcd(a, b):
   if abs(a) < abs(b):
      return gcd(b, a)

   while abs(b) > 0:
      q,r = divmod(a,b)
      a,b = b,r

   return a

This works by the simple observation that $ \gcd(a, aq+r) = \gcd(a,r)$ (this is an easy exercise to prove directly). So the Euclidean algorithm just keeps applying this rule over and over again: take the remainder when dividing the bigger argument by the smaller argument until the remainder becomes zero. Then the $ \gcd(x,0) = x$ because everything divides zero.

Now the so-called ‘extended’ Euclidean algorithm just keeps track of some additional data as it goes (the partial quotients and remainders). Here’s the algorithm.

def extendedEuclideanAlgorithm(a, b):
   if abs(b) > abs(a):
      (x,y,d) = extendedEuclideanAlgorithm(b, a)
      return (y,x,d)

   if abs(b) == 0:
      return (1, 0, a)

   x1, x2, y1, y2 = 0, 1, 1, 0
   while abs(b) > 0:
      q, r = divmod(a,b)
      x = x2 - q*x1
      y = y2 - q*y1
      a, b, x2, x1, y2, y1 = b, r, x1, x, y1, y

   return (x2, y2, a)

Indeed, the reader who hasn’t seen this stuff before is encouraged to trace out a run for the numbers 4864, 3458. Their gcd is 38, and the corresponding coefficients $x, y$ are 32 and -45, respectively.
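
Running the algorithm on those inputs confirms it:

>>> extendedEuclideanAlgorithm(4864, 3458)
(32, -45, 38)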

How does this help us compute inverses? Well, if we want to find the inverse of $ a$ modulo $ p$, we know that their gcd is 1. So compute the $ x,y$ such that $ ax + py = 1$, and then reduce both sides mod $ p$. You get $ ax + 0 = 1 \mod p$, which means that $ x \mod p$ is the inverse of $ a$. So once we have the extended Euclidean algorithm our inverse function is trivial to write!

def inverse(self):
   x,y,d = extendedEuclideanAlgorithm(self.n, self.p)
   return IntegerModP(x)

And indeed it works as expected:

>>> mod23 = IntegersModP(23)
>>> mod23(7).inverse()
10 (mod 23)
>>> mod23(7).inverse() * mod23(7)
1 (mod 23)

Now one very cool thing, and something that makes some basic ring theory worth understanding, is that we can compute the gcd of any number type using the exact same code for the Euclidean algorithm, provided we implement an abs function and a division algorithm. Via a chain of relatively easy-to-prove lemmas, if your number type has enough structure (in particular, if it has a division algorithm that satisfies some properties), then greatest common divisors are well-defined, and the Euclidean algorithm gives us that special linear combination. And using the same trick above in finite fields, we can use the Euclidean algorithm to compute inverses.

But in order to make things work programmatically we need to be able to deal with the literal ints 0 and 1 in the algorithm. That is, we need to be able to silently typecast integers up to whatever number type we’re working with. This makes sense because all rings have 0 and 1, but it requires a bit of scaffolding to implement. In particular, typecasting things sensibly is really difficult if you aren’t careful. And the problems are compounded in a language like Python that blatantly ignores types whenever possible. [2]

So let’s take a quick break to implement a tiny type system with implicit typecasting.

[1] The reader familiar with our series on category theory will recognize this as the product of two integers in a category whose arrows represent divisibility. So by abstract nonsense, this proves that gcd’s are unique up to multiplication by a unit in any ring.
[2] In the process of writing the code for this post, I was sorely missing the stronger type systems of Java and Haskell. Never thought I’d say that, but it’s true.

A Tiny Generic Type System

The main driving force behind our type system will be a decorator called @typecheck. We covered decorators toward the end of our primer on dynamic programming, but in short a decorator is a Python syntax shortcut that allows some pre- or post-processing to happen to a function in a reusable way. All you need to do to apply the pre/post-processing is prefix the function definition with the name of the decorator.

Our decorator will be called typecheck, and it will decorate binary operations on our number types. In its basic form, our type checker will work as follows: if the types of the two operands are the same, then the decorator will just pass them on through to the operator. Otherwise, it will try to do some typecasting, and if that fails it will raise exceptions with reckless abandon.

def typecheck(f):
   def newF(self, other):
      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

So this is great, but there are two issues. The first is that this will only silently typecast if the thing we’re casting is on the right-hand side of the expression. In other words, the following will raise an exception complaining that you can’t add ints to Mod7 integers.

>>> x = IntegersModP(7)(1)
>>> 1 + x

What we need are the right-hand versions of all the operator overloads. They are the same as the usual operator overloads, but Python gives preference to the left-hand operator overloads. Anticipating that we will need to rewrite these silly right-hand overloads for every number type, and they’ll all be the same, we make two common base classes.

class DomainElement(object):
   def __radd__(self, other): return self + other
   def __rsub__(self, other): return -self + other
   def __rmul__(self, other): return self * other

class FieldElement(DomainElement):
   def __truediv__(self, other): return self * other.inverse()
   def __rtruediv__(self, other): return self.inverse() * other
   def __div__(self, other): return self.__truediv__(other)
   def __rdiv__(self, other): return self.__rtruediv__(other)

And we can go ahead and make our IntegersModP a subclass of FieldElement. [3]

But now we’re wading into very deep waters. In particular, we know ahead of time that our next number type will be for Polynomials (over the integers, or fractions, or $ \mathbb{Z}/p$, or whatever). And we’ll want to do silent typecasting from ints and IntegersModP to Polynomials! The astute reader will notice the discrepancy. What will happen if I try to do this?

>>> MyInteger() + MyPolynomial()

Let’s take this slowly: by our assumption both MyInteger and MyPolynomial have the __add__ and __radd__ functions defined on them, and each tries to typecast the other to the appropriate type. But which is called? According to Python’s documentation, if the left-hand side has an __add__ function, that’s called first, and the right-hand side’s __radd__ function is only sought if no __add__ function is found for the left operand.

Well that’s a problem, and we’ll deal with it in a half awkward and half elegant way. What we’ll do is endow our number types with an “operatorPrecedence” constant. And then inside our type checker function we’ll see if the right-hand operand is an object of higher precedence. If it is, we return the global constant NotImplemented, which Python takes to mean that no __add__ function was found, and it proceeds to look for __radd__. And so with this modification our typechecker is done. [4]

def typecheck(f):
   def newF(self, other):
      if (hasattr(other.__class__, 'operatorPrecedence') and
            other.__class__.operatorPrecedence > self.__class__.operatorPrecedence):
         return NotImplemented

      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

We add a default operatorPrecedence of 1 to the DomainElement base class. Now this function answers our earlier question of why we want to encapsulate the prime modulus into the IntegersModP class. If this typechecker is really going to be generic, we need to be able to typecast an int by passing the single int argument to the type constructor with no additional information! Indeed, this will be the same pattern for our polynomial class and the finite field class to follow.
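
Concretely, the only change to the base class is a single class attribute; each number type we generate will override it with a larger value:

class DomainElement(object):
   operatorPrecedence = 1

   [... right-hand operator overloads as before ...]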

Now there is still one subtle problem. If we try to generate two copies of the same number type from our number-type generator (in other words, the following code snippet), we’ll get a nasty exception.

>>> mod7 = IntegersModP(7)
>>> mod7Copy = IntegersModP(7)
>>> mod7(1) + mod7Copy(2)
... error ...

The reason for this is that in the type-checker we’re using the Python built-in ‘is’ which checks for identity, not semantic equality. To fix this, we simply need to memoize the IntegersModP function (and all the other functions we’ll use to generate number types) so that there is only ever one copy of a number type in existence at a time.
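
In case you haven’t seen one before, here’s a minimal sketch of a memoize decorator that does the job (the version in the Github repo may differ in its details): it caches the generated class by its arguments, so that IntegersModP(7) always returns the very same class object.

def memoize(f):
   cache = {}

   def memoizedFunction(*args):
      if args not in cache:
         cache[args] = f(*args)
      return cache[args]

   memoizedFunction.cache = cache
   return memoizedFunction

With that, the mod7(1) + mod7Copy(2) snippet above works, since both calls to IntegersModP(7) return the identical class and the ‘is’ check in the typechecker passes.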

So enough Python hacking: let’s get on with implementing finite fields!

[3] This also compels us to make some slight modifications to the constructor for IntegersModP, but they’re not significant enough to display here. Check out the Github repo if you want to see.
[4] This is truly a hack, and we’ve considered submitting a feature request to the Python devs. It is conceivably useful for the operator-overloading aficionado. I’d be interested to hear your thoughts in the comments as to whether this is a reasonable feature to add to Python.

Polynomial Arithmetic

Recall from our finite field primer that every finite field can be constructed as a quotient of a polynomial ring with coefficients in $ \mathbb{Z}/p$ by some prime ideal. We spelled out exactly what this means in fine detail in the primer, so check that out before reading on.

Indeed, to construct a finite field we need to find some irreducible monic polynomial $ f$ with coefficients in $ \mathbb{Z}/p$, and then the elements of our field will be remainders of arbitrary polynomials when divided by $ f$. In order to determine if they’re irreducible we’ll need to compute a gcd. So let’s build a generic polynomial type with a polynomial division algorithm, and hook it into our gcd framework.

We start off in much the same way as with the IntegersModP:

# create a polynomial with coefficients in a field; coefficients are in
# increasing order of monomial degree so that, for example, [1,2,3]
# corresponds to 1 + 2x + 3x^2
@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2
      factory = lambda L: Polynomial([field(x) for x in L])

      def __init__(self, c):
         if type(c) is Polynomial:
            self.coefficients = c.coefficients
         elif isinstance(c, field):
            self.coefficients = [c]
         elif not hasattr(c, '__iter__') and not hasattr(c, 'iter'):
            self.coefficients = [field(c)]
         else:
            self.coefficients = c

         self.coefficients = strip(self.coefficients, field(0))

      def isZero(self): return self.coefficients == []

      def __repr__(self):
         if self.isZero():
            return '0'

         return ' + '.join(['%s x^%d' % (a,i) if i > 0 else '%s'%a
                              for i,a in enumerate(self.coefficients)])

      def __abs__(self): return len(self.coefficients)
      def __len__(self): return len(self.coefficients)
      def __sub__(self, other): return self + (-other)
      def __iter__(self): return iter(self.coefficients)
      def __neg__(self): return Polynomial([-a for a in self])

      def iter(self): return self.__iter__()
      def leadingCoefficient(self): return self.coefficients[-1]
      def degree(self): return abs(self) - 1

All of this code just lays down conventions. A polynomial is a list of coefficients (in increasing order of their monomial degree), the zero polynomial is the empty list of coefficients, and the abs() of a polynomial is one plus its degree. [5] Finally, instead of closing over a prime modulus, as with IntegersModP, we’re closing over the field of coefficients. In general, polynomial coefficients don’t have to come from a field, but if they do then you’re guaranteed a sensible division algorithm, and hence a sensible Euclidean algorithm. In the formal parlance, if $ k$ is a field then $ k[x]$ is a Euclidean domain. And for our goal of defining finite fields, we will always have coefficients from $ \mathbb{Z}/p$, so there’s no problem.
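
The constructor also relies on a strip helper to drop trailing zero coefficients, so that the zero polynomial really is the empty list and degree() behaves. Here’s a minimal sketch of what it does (the version in the Github repo may differ):

# Remove trailing instances of elt (here the field's zero) from the list L,
# so that, e.g., [1, 2, 0, 0] becomes [1, 2] and [0] becomes [].
def strip(L, elt):
   if len(L) == 0:
      return L

   i = len(L) - 1
   while i >= 0 and L[i] == elt:
      i -= 1

   return L[:i+1]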

Now we can define things like addition, multiplication, and equality using our typechecker to silently cast and watch for errors.

      @typecheck
      def __eq__(self, other):
         return self.degree() == other.degree() and all([x==y for (x,y) in zip(self, other)])

      @typecheck
      def __add__(self, other):
         newCoefficients = [sum(x) for x in itertools.zip_longest(self, other, fillvalue=self.field(0))]
         return Polynomial(newCoefficients)

      @typecheck
      def __mul__(self, other):
         if self.isZero() or other.isZero():
            return Zero()

         newCoeffs = [self.field(0) for _ in range(len(self) + len(other) - 1)]

         for i,a in enumerate(self):
            for j,b in enumerate(other):
               newCoeffs[i+j] += a*b

         return Polynomial(newCoeffs)

Notice that, if the underlying field of coefficients correctly implements the operator overloads, none of this depends on the coefficients. Reusability, baby!

And we can finish off with the division algorithm for polynomials.

      @typecheck
      def __divmod__(self, divisor):
         quotient, remainder = Zero(), self
         divisorDeg = divisor.degree()
         divisorLC = divisor.leadingCoefficient()

         while remainder.degree() >= divisorDeg:
            monomialExponent = remainder.degree() - divisorDeg
            monomialZeros = [self.field(0) for _ in range(monomialExponent)]
            monomialDivisor = Polynomial(monomialZeros + [remainder.leadingCoefficient() / divisorLC])

            quotient += monomialDivisor
            remainder -= monomialDivisor * divisor

         return quotient, remainder

Indeed, we’re doing nothing here but formalizing the grade-school algorithm for doing polynomial long division [6]. And we can finish off the function for generating this class by assigning the field member variable along with a class name. And we give it a higher operator precedence than the underlying field of coefficients so that an isolated coefficient is cast up to a constant polynomial.

@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2

      [... methods defined above ...]

   def Zero():
      return Polynomial([])

   Polynomial.field = field
   Polynomial.__name__ = '(%s)[x]' % field.__name__
   return Polynomial
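
One thing we glossed over: the sample test below divides polynomials with /, and the finite field class further down reduces with %, but neither overload appears above. Presumably they live inside the Polynomial class and just pick out the quotient or remainder from __divmod__, along the lines of this sketch (the repo’s versions may differ):

      @typecheck
      def __truediv__(self, divisor):
         # Keep only the quotient; in our tests we divide by exact divisors.
         if divisor.isZero():
            raise ZeroDivisionError
         return divmod(self, divisor)[0]

      @typecheck
      def __mod__(self, divisor):
         # Keep only the remainder, e.g. for reducing mod an ideal generator.
         if divisor.isZero():
            raise ZeroDivisionError
         return divmod(self, divisor)[1]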

We provide a modest test suite in the Github repository for this post, but here’s a sample test:

>>> Mod5 = IntegersModP(5)
>>> Mod11 = IntegersModP(11)
>>> polysOverQ = polynomialsOver(Fraction).factory
>>> polysMod5 = polynomialsOver(Mod5).factory
>>> polysMod11 = polynomialsOver(Mod11).factory
>>> polysOverQ([1,7,49]) / polysOverQ([7])
1/7 + 1 x^1 + 7 x^2
>>> polysMod5([1,7,49]) / polysMod5([7])
3 + 1 x^1 + 2 x^2
>>> polysMod11([1,7,49]) / polysMod11([7])
8 + 1 x^1 + 7 x^2

And indeed, the extended Euclidean algorithm works without modification, so we know our typecasting is doing what’s expected:

>>> Mod2 = IntegersModP(2)
>>> p = polynomialsOver(Mod2).factory
>>> f = p([1,0,0,0,1,1,1,0,1,1,1]) # x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + 1
>>> g = p([1,0,1,1,0,1,1,0,0,1])   # x^9 + x^6 + x^5 + x^3 + x^2 + 1
>>> theGcd = p([1,1,0,1]) # x^3 + x + 1
>>> x = p([0,0,0,0,1]) # x^4
>>> y = p([1,1,1,1,1,1]) # x^5 + x^4 + x^3 + x^2 + x + 1
>>> (x,y,theGcd) == extendedEuclideanAlgorithm(f, g)
True

[5] The mathematical name for the abs() function that we’re using is a valuation.
[6] One day we will talk a lot more about polynomial long division on this blog. You can do a lot of cool algebraic geometry with it, and the ideas there lead you to awesome applications like robot motion planning and automated geometry theorem proving.

Generating Irreducible Polynomials

Now that we’ve gotten Polynomials out of the way, we need to be able to generate irreducible polynomials over $ \mathbb{Z}/p$ of any degree we want. It might be surprising that irreducible polynomials of any degree exist [7], but in fact we know a lot more.

Theorem: The product of all monic irreducible polynomials over $ \mathbb{Z}/p$ of degree dividing $ m$ is equal to $ x^{p^m} - x$.

This is an important theorem, but it takes a little bit more field theory than we have under our belts right now. You could summarize the proof by saying that every element of the field with $ p^m$ elements has a monic irreducible minimal polynomial whose degree divides $ m$, and then you say some things about splitting fields. You can see a more detailed proof outline here, but it assumes you’re familiar with the notion of a minimal polynomial. We will probably cover this in a future primer.
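
As a quick sanity check of the theorem, take $ p = 2$ and $ m = 2$: the monic irreducible polynomials over $ \mathbb{Z}/2$ of degree dividing 2 are $ x$, $ x + 1$, and $ x^2 + x + 1$, and indeed $ x(x+1)(x^2+x+1) = x^4 + x = x^{2^2} - x$ over $ \mathbb{Z}/2$.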

But just using the theorem we can get a really nice algorithm for determining whether a polynomial $ f(x)$ of degree $ m$ is irreducible: we just look at its gcd with all the $ x^{p^k} - x$ for $ k$ smaller than $ m$. If all the gcds are constants, then we know it’s irreducible, and if even one is a non-constant polynomial then $ f$ has to be reducible. Why’s that? A non-constant gcd $ d(x) = \gcd(f(x), x^{p^k} - x)$ for $ k < m$ is a divisor of $ f(x)$ by definition, and by the theorem all of its irreducible factors have degree dividing $ k < m$, so it gives $ f$ a proper factor. Conversely, if $ f$ has any proper factor, then it has a monic irreducible factor of some degree $ k < m$, and by the theorem that factor divides $ x^{p^k} - x$, so the gcd with $ x^{p^k} - x$ is non-constant. In fact, with a moment of thought it’s clear that we can stop at $ k = m/2$, since a factor of degree larger than $ m/2$ forces a complementary factor of degree at most $ m/2$. So the algorithm to check for irreducibility is just this simple loop:

def isIrreducible(polynomial, p):
   ZmodP = IntegersModP(p)
   poly = polynomialsOver(ZmodP).factory
   x = poly([0,1])
   powerTerm = x
   isUnit = lambda p: p.degree() == 0

   for _ in range(int(polynomial.degree() / 2)):
      powerTerm = powerTerm.powmod(p, polynomial)
      gcdOverZmodp = gcd(polynomial, powerTerm - x)
      if not isUnit(gcdOverZmodp):
         return False

   return True

We’re just computing the powers iteratively as $ x^p, (x^p)^p = x^{p^2}, \dots, x^{p^j}$ and in each step of the loop subtracting $ x$ and computing the relevant gcd. The powmod function is just there so that we can reduce each power mod the candidate polynomial after every multiplication, keeping the degrees small and efficient to work with.
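
We haven’t shown powmod either; a minimal square-and-multiply sketch of it as a Polynomial method (the repo’s version may differ) looks like this:

      # Compute self**power reduced modulo the given polynomial at every step,
      # so intermediate results never exceed the modulus's degree.
      def powmod(self, power, modulus):
         result = Polynomial([self.field(1)])
         base = self % modulus

         while power > 0:
            if power & 1:
               result = (result * base) % modulus
            base = (base * base) % modulus
            power >>= 1

         return result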

Now generating an irreducible polynomial is a little bit harder than testing for one. With a lot of hard work, however, field theorists discovered that irreducible polynomials are quite common. In fact, if you just generate the coefficients of your degree $ n$ monic polynomial at random, the chance that you’ll get something irreducible is roughly $ 1/n$. So this suggests an obvious randomized algorithm: keep guessing until you find one.

def generateIrreduciblePolynomial(modulus, degree):
   Zp = IntegersModP(modulus)
   Polynomial = polynomialsOver(Zp)

   while True:
      coefficients = [Zp(random.randint(0, modulus-1)) for _ in range(degree)]
      randomMonicPolynomial = Polynomial(coefficients + [Zp(1)])

      if isIrreducible(randomMonicPolynomial, modulus):
         return randomMonicPolynomial

Since the probability of getting an irreducible polynomial is close to $ 1/n$, we expect to require $ n$ trials before we find one. Moreover we could give a pretty tight bound on how likely it is to deviate from the expected number of trials. So now we can generate some irreducible polynomials!

>>> generateIrreduciblePolynomial(23, 3)
21 + 12 x^1 + 11 x^2 + 1 x^3

And so now we are well-equipped to generate any finite field we want! It’s just a matter of generating the polynomial and taking a modulus after every operation.

@memoize
def FiniteField(p, m, polynomialModulus=None):
   Zp = IntegersModP(p)
   if m == 1:
      return Zp

   Polynomial = polynomialsOver(Zp)
   if polynomialModulus is None:
      polynomialModulus = generateIrreduciblePolynomial(modulus=p, degree=m)

   class Fq(FieldElement):
      fieldSize = int(p ** m)
      primeSubfield = Zp
      idealGenerator = polynomialModulus
      operatorPrecedence = 3

      def __init__(self, poly):
         if type(poly) is Fq:
            self.poly = poly.poly
         elif type(poly) is int or type(poly) is Zp:
            self.poly = Polynomial([Zp(poly)])
         elif isinstance(poly, Polynomial):
            self.poly = poly % polynomialModulus
         else:
            self.poly = Polynomial([Zp(x) for x in poly]) % polynomialModulus

         self.field = Fq

      @typecheck
      def __add__(self, other): return Fq(self.poly + other.poly)
      @typecheck
      def __sub__(self, other): return Fq(self.poly - other.poly)
      @typecheck
      def __mul__(self, other): return Fq(self.poly * other.poly)
      @typecheck
      def __eq__(self, other): return isinstance(other, Fq) and self.poly == other.poly

      def __pow__(self, n): return Fq(pow(self.poly, n))
      def __neg__(self): return Fq(-self.poly)
      def __abs__(self): return abs(self.poly)
      def __repr__(self): return repr(self.poly) + ' \u2208 ' + self.__class__.__name__

      @typecheck
      def __divmod__(self, divisor):
         q,r = divmod(self.poly, divisor.poly)
         return (Fq(q), Fq(r))

      def inverse(self):
         if self == Fq(0):
            raise ZeroDivisionError

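         # Extended Euclid gives x*poly + y*generator = d for a nonzero constant d
         # (the generator is irreducible and poly is nonzero mod it), so the
         # inverse of poly is x times the inverse of that constant.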
         x,y,d = extendedEuclideanAlgorithm(self.poly, self.idealGenerator)
         return Fq(x) * Fq(d.coefficients[0].inverse())

   Fq.__name__ = 'F_{%d^%d}' % (p,m)
   return Fq

And some examples of using it:

>>> F23 = FiniteField(2,3)
>>> x = F23([1,1])
>>> x
1 + 1 x^1 ∈ F_{2^3}
>>> x*x
1 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> x**10
0 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> 1 / x
0 + 1 x^1 + 1 x^2 ∈ F_{2^3}
>>> x * (1 / x)
1 ∈ F_{2^3}
>>> k = FiniteField(23, 4)
>>> k.fieldSize
279841
>>> k.idealGenerator
6 + 8 x^1 + 10 x^2 + 10 x^3 + 1 x^4
>>> y = k([9,21,14,12])
>>> y
9 + 21 x^1 + 14 x^2 + 12 x^3 ∈ F_{23^4}
>>> y*y
13 + 19 x^1 + 7 x^2 + 14 x^3 ∈ F_{23^4}
>>> y**5 - y
15 + 22 x^1 + 15 x^2 + 5 x^3 ∈ F_{23^4}

And that’s it! Now we can do arithmetic over any finite field we want.

[7] Especially considering that other wacky things happen, like this: $ x^4 + 1$ is reducible over every finite field! For instance, over $ \mathbb{Z}/2$ it factors as $ (x+1)^4$, and over $ \mathbb{Z}/3$ as $ (x^2+x+2)(x^2+2x+2)$.

Some Words on Efficiency

There are a few things that go without stating about this program, but I’ll state them anyway.

The first is that we pay a big efficiency price for being so generic. All the typecasting we’re doing isn’t cheap, and in general cryptography needs to be efficient. For example, if I try to create a finite field of order $ 104729^{20}$, it takes about ten to fifteen seconds to complete. This might not seem so bad for a one-time initialization, but it’s clear that our representation is somewhat bloated. We would display a graph of the expected time to perform various operations in various finite fields, but this post is already way too long.

In general, the larger and more complicated the polynomial you use to define your finite field, the longer operations will take (since dividing by a complicated polynomial is more expensive than dividing by a simple polynomial). For this reason and a few others, a lot of research has gone into efficiently finding irreducible polynomials with, say, only three nonzero terms. Moreover, if we know in advance that we’ll only work over fields of characteristic two, we can make a whole lot of additional optimizations. Essentially, all of the arithmetic reduces to really fast bitwise operations, and things like exponentiation can easily be implemented in hardware. But it also seems that the expense that comes with larger field characteristics corresponds to higher security, so some researchers have tried to meet in the middle and get efficient representations of other field characteristics.
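
To make the characteristic-two remark concrete, here is a standalone sketch (not part of the code above) of multiplication in $ GF(2^8)$: an element fits in a byte, addition is XOR, and multiplication is a shift-and-XOR loop with reduction by the defining polynomial. The modulus $ x^8 + x^4 + x^3 + x + 1$ below is just one choice of irreducible degree-8 polynomial over $ \mathbb{Z}/2$ (it happens to be the one AES uses).

def gf256Multiply(a, b, modulus=0x11b):
   # 0x11b encodes x^8 + x^4 + x^3 + x + 1; a and b are bytes (0..255).
   result = 0
   while b:
      if b & 1:
         result ^= a       # add (XOR) the current shift of a
      a <<= 1              # multiply a by x
      if a & 0x100:
         a ^= modulus      # reduce mod the defining polynomial
      b >>= 1
   return result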

In any case, the real purpose of our implementation in this post is not for efficiency. We care first and foremost about understanding the mathematics, and to have a concrete object to play with and use in the future for other projects. And we have accomplished just that.

Until next time!