Negacyclic Polynomial Multiplication

In this article I’ll cover three techniques to compute special types of polynomial products that show up in lattice cryptography and fully homomorphic encryption. Namely, the negacyclic polynomial product, which is the product of two polynomials in the quotient ring $\mathbb{Z}[x] / (x^N + 1)$. As a precursor to the negacyclic product, we’ll cover the simpler cyclic product.

All of the Python code written for this article is on GitHub.

The DFT and Cyclic Polynomial Multiplication

A recent program gallery piece showed how single-variable polynomial multiplication could be implemented using the Discrete Fourier Transform (DFT). This boils down to two observations:

  1. The product of two polynomials $f, g$ can be computed via the convolution of the coefficients of $f$ and $g$.
  2. The Convolution Theorem, which says that the Fourier transform of a convolution of two signals $f, g$ is the point-wise product of the Fourier transforms of the two signals. (The same holds for the DFT.)

This provides a much faster polynomial product operation than one could implement using the naïve polynomial multiplication algorithm (though see the last section for an implementation anyway). The DFT can be used to speed up large integer multiplication as well.

A caveat with normal polynomial multiplication is that one needs to pad the input coefficient lists with enough zeros so that the convolution doesn’t “wrap around.” That padding results in the output having length at least $\deg(f) + \deg(g) + 1$ (see the program gallery piece for more details).

If you don’t pad the polynomials, instead you get what’s called a cyclic polynomial product. More concretely, if the two input polynomials $f, g$ are represented by coefficient lists $(f_0, f_1, \dots, f_{N-1}), (g_0, g_1, \dots, g_{N-1})$ of length $N$ (implying the inputs are degree at most $N-1$, i.e., the lists may end in a tail of zeros), then the Fourier Transform technique computes

\[ f(x) \cdot g(x) \mod (x^N - 1) \]

This modulus is in the sense of a quotient ring $\mathbb{Z}[x] / (x^N - 1)$, where $(x^N - 1)$ denotes the ring ideal generated by $x^N-1$, i.e., all polynomials that are evenly divisible by $x^N - 1$. A particularly important interpretation of this quotient ring is achieved by interpreting the ideal generator $x^N - 1$ as an equation $x^N - 1 = 0$, also known as $x^N = 1$. To get the canonical ring element corresponding to any polynomial $h(x) \in \mathbb{Z}[x]$, you “set” $x^N = 1$ and reduce the polynomial until there are no more terms with degree bigger than $N-1$. For example, if $N=5$ then $x^{10} + x^6 - x^4 + x + 2 = -x^4 + 2x + 3$ (the $x^{10}$ becomes 1, and $x^6 = x$).
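To make the reduction concrete, here is a quick sketch of it as code. The helper reduce_cyclic is hypothetical (for illustration only, not part of the multiplication routines below): it just folds each coefficient index $k$ onto $k \bmod N$.

def reduce_cyclic(coeffs, N):
    """Reduce a coefficient list modulo x^N - 1 by folding index k onto k mod N."""
    out = [0] * N
    for k, c in enumerate(coeffs):
        out[k % N] += c
    return out

# x^10 + x^6 - x^4 + x + 2 with N = 5 reduces to 3 + 2x - x^4
print(reduce_cyclic([2, 1, 0, 0, -1, 0, 1, 0, 0, 0, 1], 5))  # [3, 2, 0, 0, -1]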

To prove the DFT product computes a product in this particular ring, note how the convolution theorem produces the following formula, where $\textup{fprod}(f, g)$ denotes the process of taking the Fourier transform of the two coefficient lists, multiplying them entrywise, and taking a (properly normalized) inverse FFT, and $\textup{fprod}(f, g)(j)$ is the $j$-th coefficient of the output polynomial:

\[ \textup{fprod}(f, g)(j) = \sum_{k=0}^{N-1} f_k g_{j-k \textup{ mod } N} \]

In words, the output polynomial coefficient $j$ equals the sum of all products of pairs of coefficients whose indices sum to $j$ when considered “wrapping around” $N$. Fixing $j=1$ as an example, $\textup{fprod}(f, g)(1) = f_0 g_1 + f_1g_0 + f_2 g_{N-1} + f_3 g_{N-2} + \dots$. This demonstrates the “set $x^N = 1$” interpretation above: the term $f_2 g_{N-1}$ corresponds to the product $f_2x^2 \cdot g_{N-1}x^{N-1}$, which contributes to the $x^1$ term of the polynomial product if and only if $x^{2 + N-1} = x$, if and only if $x^N = 1$.
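This formula transcribes directly into code, which can serve as a reference to check the FFT-based version below against (a hypothetical helper, not from the article’s repository):

def cyclic_convolution_naive(f, g):
    """Compute sum_k f_k * g_{(j - k) mod N} for each output index j."""
    N = len(f)
    return [sum(f[k] * g[(j - k) % N] for k in range(N)) for j in range(N)]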

To achieve this in code, we simply use the version of the code from the program gallery piece, but fix the size of the arrays given to numpy.fft.fft in advance. We will also, for simplicity, assume the $N$ one wishes to use is a power of 2. The resulting code is significantly simpler than the original program gallery code (we omit zero-padding to length $N$ for brevity).

import numpy
from numpy.fft import fft, ifft

def cyclic_polymul(p1, p2, N):
    """Multiply two integer polynomials modulo (x^N - 1).

    p1 and p2 are arrays of coefficients in degree-increasing order.
    """
    assert len(p1) == len(p2) == N
    product = fft(p1) * fft(p2)
    inverted = ifft(product)
    return numpy.round(numpy.real(inverted)).astype(numpy.int32)
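For a quick example, multiplying by $1 + x^3$ modulo $x^4 - 1$ adds a cyclic shift of the input to itself:

f = numpy.array([1, 2, 3, 4])  # 1 + 2x + 3x^2 + 4x^3
g = numpy.array([1, 0, 0, 1])  # 1 + x^3
print(cyclic_polymul(f, g, 4))  # [3 5 7 5], i.e., 3 + 5x + 7x^2 + 5x^3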

As a side note, there’s nothing that stops this from working with polynomials that have real or complex coefficients, but so long as we use small magnitude integer coefficients and round at the end, I don’t have to worry about precision issues (hat tip to Brad Lucier for suggesting an excellent paper by Colin Percival, “Rapid multiplication modulo the sum and difference of highly composite numbers”, which covers these precision issues in detail).

Negacyclic polynomials, DFT with duplication

Now the kind of polynomial quotient ring that shows up in cryptography is critically not $\mathbb{Z}[x]/(x^N-1)$, because that ring has enough easy-to-reason-about structure that it can’t hide secrets. Instead, cryptographers use the ring $\mathbb{Z}[x]/(x^N+1)$ (the minus becomes a plus), which is believed to be more secure for cryptography—although I don’t have a great intuitive grasp on why.

The interpretation is similar here as before, except we “set” $x^N = -1$ instead of $x^N = 1$ in our reductions. Repeating the above example, if $N=5$ then $x^{10} + x^6 - x^4 + x + 2 = -x^4 + 3$ (the $x^{10}$ becomes $(-1)^2 = 1$, and $x^6 = -x$). It’s called negacyclic because as a term $x^k$ passes $k \geq N$, it cycles back to $x^0 = 1$, but with a sign flip.
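Adapting the earlier reduction sketch, the negacyclic version flips the sign once per wrap past degree $N$ (again a hypothetical helper for illustration):

def reduce_negacyclic(coeffs, N):
    """Reduce a coefficient list modulo x^N + 1; each wrap past N flips the sign."""
    out = [0] * N
    for k, c in enumerate(coeffs):
        out[k % N] += c * (-1) ** (k // N)
    return out

# x^10 + x^6 - x^4 + x + 2 with N = 5 reduces to 3 - x^4
print(reduce_negacyclic([2, 1, 0, 0, -1, 0, 1, 0, 0, 0, 1], 5))  # [3, 0, 0, 0, -1]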

The negacyclic polynomial multiplication can’t use the DFT without some special hacks. The first and simplest hack is to double the input lists with a negation. That is, starting from $f(x) \in \mathbb{Z}[x]/(x^N+1)$, we can define $f^*(x) = f(x) - x^Nf(x)$ in a different ring $\mathbb{Z}[x]/(x^{2N} - 1)$ (and similarly for $g^*$ and $g$).

Before seeing how this causes the DFT to (almost) compute a negacyclic polynomial product, some math wizardry. The ring $\mathbb{Z}[x]/(x^{2N} - 1)$ is special because it contains our negacyclic ring as a subring. Indeed, because the polynomial $x^{2N} - 1$ factors as $(x^N-1)(x^N+1)$, and because these two factors are coprime in $\mathbb{Z}[x]/(x^{2N} - 1)$, the Chinese remainder theorem (aka Sun-tzu’s theorem) generalizes to polynomial rings and says that any polynomial in $\mathbb{Z}[x]/(x^{2N} - 1)$ is uniquely determined by its remainders when divided by $(x^N-1)$ and $(x^N+1)$. Another way to say it is that the ring $\mathbb{Z}[x]/(x^{2N} - 1)$ factors as a direct product of the two rings $\mathbb{Z}[x]/(x^{N} - 1)$ and $\mathbb{Z}[x]/(x^{N} + 1)$.

Now mapping a polynomial $f(x)$ from the bigger ring $(x^{2N} - 1)$ to the smaller ring $(x^{N}+1)$ involves taking a remainder of $f(x)$ when dividing by $x^{N}+1$ (“setting” $x^N = -1$ and reducing). There are many possible preimage mappings, depending on what your goal is. In this case, we actually intentionally choose a non-preimage mapping, because in general computing a preimage requires solving a system of congruences in the larger polynomial ring. So instead we choose $f(x) \mapsto f^*(x) = f(x) - x^Nf(x) = -f(x)(x^N - 1)$, which maps back down to $2f(x)$ in $\mathbb{Z}[x]/(x^{N} + 1)$. This mapping has a particularly nice structure, in that you build it by repeating the polynomial’s coefficients twice and flipping the sign of the second half. It’s easy to see that the product $f^*(x) g^*(x)$ maps down to $4f(x)g(x)$.

So if we properly account for these extra constant factors floating around, our strategy to perform negacyclic polynomial multiplication is to map $f$ and $g$ up to the larger ring as described, compute their cyclic product (modulo $x^{2N} - 1$) using the FFT, and then the result should be a degree $2N-1$ polynomial which can be reduced with one more modular reduction step to the right degree $N-1$ negacyclic product, i.e., setting $x^N = -1$, which materializes as taking the second half of the coefficients, flipping their signs, and adding them to the corresponding coefficients in the first half.

The code for this is:

def negacyclic_polymul_preimage_and_map_back(p1, p2):
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(numpy.real(inverted)).astype(p1.dtype)
    return (rounded[: p1.shape[0]] - rounded[p1.shape[0] :]) // 4

However, this chosen mapping hides another clever trick. The product of the two preimages has enough structure that we can “read” the result off without doing the full “set $x^N = -1$” reduction step. Mapping $f$ and $g$ up to $f^*, g^*$ and taking their product modulo $(x^{2N} - 1)$ gives

\[ \begin{aligned} f^*g^* &= -f(x^N-1) \cdot -g(x^N - 1) \\ &= fg (x^N-1)^2 \\ &= fg(x^{2N} - 2x^N + 1) \\ &= fg(2 - 2x^N) \\ &= 2(fg - x^Nfg) \end{aligned} \]

where the second-to-last step uses $x^{2N} = 1$ in the larger ring.

This has the same syntactical format as the original mapping $f \mapsto f - x^Nf$, with an extra factor of 2, and so its coefficients also have the form “repeat the coefficients and flip the sign of the second half” (times two). We can then do the “inverse mapping” by reading only the first half of the coefficients and dividing by 2.

def negacyclic_polymul_use_special_preimage(p1, p2):
    p1_preprocessed = numpy.concatenate([p1, -p1])
    p2_preprocessed = numpy.concatenate([p2, -p2])
    product = fft(p1_preprocessed) * fft(p2_preprocessed)
    inverted = ifft(product)
    rounded = numpy.round(0.5 * numpy.real(inverted)).astype(p1.dtype)
    return rounded[: p1.shape[0]]
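As a quick sanity check, both functions agree with a negacyclic product computed by hand:

f = numpy.array([1, 2, 3, 4])
g = numpy.array([5, 6, 7, 8])
print(negacyclic_polymul_preimage_and_map_back(f, g))  # [-56 -36   2  60]
print(negacyclic_polymul_use_special_preimage(f, g))   # [-56 -36   2  60]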

Our chosen mapping $f \mapsto f-x^Nf$ is not particularly special, except that it uses a small number of pre and post-processing operations. For example, if you instead used the mapping $f \mapsto 2f + x^Nf$ (which would map back to $f$ exactly), then the FFT product would result in $5fg + 4x^Nfg$ in the larger ring. You can still read off the coefficients as before, but you’d have to divide by 5 instead of 2 (which, the superstitious would say, is harder). It seems that “double and negate” followed by “halve and take first half” is the least amount of pre/post processing possible.

Negacyclic polynomials with a “twist”

The previous section identified a nice mapping (or embedding) of the input polynomials into a larger ring. But that embedding has redundant structure: the coefficients of $f$ and $g$ are each repeated twice, up to sign and scaling. It also involves taking an FFT of two $2N$-dimensional vectors when we start from two $N$-dimensional vectors.

This sort of situation should make you think that we can do this more efficiently, either by using a smaller size FFT or by packing some data into the complex part of the input, and indeed we can do both.

[Aside: it’s well known that if all the entries of an FFT input are real, then the result also has symmetry that can be exploited for efficiency by reframing the problem as a size-N/2 FFT in some cases, and just removing half the FFT algorithm’s steps in other cases; see Wikipedia for more.]

This technique was explained in Fast multiplication and its applications (pdf link) by Daniel Bernstein, a prominent cryptographer who specializes in cryptographic performance, whose work appears in widely used standards like TLS and OpenSSH, and who designed a commonly used elliptic curve for cryptography.

[Aside: Bernstein cites this technique as using something called the “Tangent FFT (pdf link).” This is a drop-in FFT replacement he invented that is faster than the previous best (the split-radix FFT), and Bernstein uses it mainly to give a precise expression for the number of operations required to do the multiplication end to end. We will continue to use the numpy FFT implementation, since in this article I’m just focusing on how to express negacyclic multiplication in terms of the FFT. It’s also worth noting that both the Tangent FFT and “Fast multiplication” papers frame their techniques—including FFT algorithm implementations!—in terms of polynomial ring factorizations and mappings. Be still, my beating cardioid.]

In terms of polynomial mappings, we start from the ring $\mathbb{R}[x] / (x^N + 1)$, where $N$ is a power of 2. We then pick a reversible mapping from $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - 1)$ (note the field change from real to complex), apply the FFT to the image of the mapping, and reverse it appropriately at the end.

One such mapping takes two steps, first mapping $\mathbb{R}[x]/(x^N + 1) \to \mathbb{C}[x]/(x^{N/2} - i)$ and then from $\mathbb{C}[x]/(x^{N/2} - i) \to \mathbb{C}[x]/(x^{N/2} - 1)$. The first mapping is as easy as in the last section, because $(x^N + 1) = (x^{N/2} + i) (x^{N/2} - i)$, and so we can just set $x^{N/2} = i$ and reduce the polynomial. This has the effect of making the second half of the polynomial’s coefficients become the complex parts of the first half of the coefficients.

The second mapping is more nuanced, because we’re not just reducing via factorization. And we can’t just map $i \mapsto 1$ generically, because that would reduce complex numbers down to real values. Instead, we observe that (momentarily using an arbitrary degree $k$ instead of $N/2$), for any polynomial $f \in \mathbb{C}[x]$, the remainder of $f \mod x^k-i$ uniquely determines the remainder of $f \mod x^k - 1$ via the change of variables $x \mapsto \omega_{4k} x$, where $\omega_{4k} = e^{\frac{2 \pi i}{4k}}$ is a primitive $4k$-th root of unity. Spelling this out in more detail: if $f(x) \in \mathbb{C}[x]$ is written as $f(x) = g(x) + h(x)(x^k - i)$ for some polynomial $h(x)$ and remainder $g(x)$ of degree less than $k$, then

\[ \begin{aligned} f(\omega_{4k}x) &= g(\omega_{4k}x) + h(\omega_{4k}x)((\omega_{4k}x)^{k} - i) \\ &= g(\omega_{4k}x) + h(\omega_{4k}x)(e^{\frac{\pi i}{2}} x^k - i) \\ &= g(\omega_{4k}x) + i h(\omega_{4k}x)(x^k - 1) \\ &= g(\omega_{4k}x) \mod (x^k - 1) \end{aligned} \]

Translating this back to $k=N/2$, the mapping from $\mathbb{C}[x]/(x^{N/2} - i) \to \mathbb{C}[x]/(x^{N/2} - 1)$ is $f(x) \mapsto f(\omega_{2N}x)$. And if $f = f_0 + f_1x + \dots + f_{N/2 - 1}x^{N/2 - 1}$, then the mapping involves multiplying each coefficient $f_k$ by $\omega_{2N}^k$.

When you view polynomials as if they were a simple vector of their coefficients, then this operation $f(x) \mapsto f(\omega_{k}x)$ looks like $(a_0, a_1, \dots, a_n) \mapsto (a_0, \omega_{k} a_1, \dots, \omega_k^n a_n)$. Bernstein calls the operation a twist of $\mathbb{C}^n$, which I mused about in this Mathstodon thread.

What’s most important here is that each of these transformations is invertible. The first because the top half of the coefficients is stored in the complex parts of the first half, and the second because the mapping $f(x) \mapsto f(\omega_{2N}^{-1}x)$ is its inverse. Together, this makes the preprocessing and postprocessing exact inverses of each other. The code is then

def primitive_nth_root(n):
    """Returns a primitive n-th root of unity, exp(2 * pi * i / n)."""
    return numpy.exp(2j * numpy.pi / n)


def negacyclic_polymul_complex_twist(p1, p2):
    n = p2.shape[0]
    primitive_root = primitive_nth_root(2 * n)
    root_powers = primitive_root ** numpy.arange(n // 2)

    p1_preprocessed = (p1[: n // 2] + 1j * p1[n // 2 :]) * root_powers
    p2_preprocessed = (p2[: n // 2] + 1j * p2[n // 2 :]) * root_powers

    p1_ft = fft(p1_preprocessed)
    p2_ft = fft(p2_preprocessed)
    prod = p1_ft * p2_ft
    ifft_prod = ifft(prod)
    ifft_rotated = ifft_prod * primitive_root ** numpy.arange(0, -n // 2, -1)

    return numpy.round(
        numpy.concatenate([numpy.real(ifft_rotated), numpy.imag(ifft_rotated)])
    ).astype(p1.dtype)

And so, at the cost of a bit more pre- and postprocessing, we can negacyclically multiply two degree $N-1$ polynomials using an FFT of length $N/2$. In theory, no information is wasted and this is optimal.
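As a sanity check, this agrees with the doubling-based methods from the previous section:

f = numpy.array([1, 2, 3, 4])
g = numpy.array([5, 6, 7, 8])
print(negacyclic_polymul_complex_twist(f, g))  # [-56 -36   2  60]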

And finally, a simple matrix multiplication

The last technique I wanted to share is not based on the FFT, but it’s another method for doing negacyclic polynomial multiplication that has come in handy in situations where I am unable to use FFTs. I call it the Toeplitz method, because one of the polynomials is converted to a Toeplitz matrix. Sometimes I hear it referred to as a circulant matrix technique, but due to the negacyclic sign flip, I don’t think it’s a fully accurate term.

The idea is to put the coefficients of one polynomial $f(x) = f_0 + f_1x + \dots + f_{N-1}x^{N-1}$ into a matrix as follows:

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \]

The polynomial coefficients are written down in the first column unchanged, then in each subsequent column, the coefficients are cyclically shifted down one, and the term that wraps around the top has its sign flipped. When the second polynomial is treated as a vector of its coefficients, say, $g(x) = g_0 + g_1x + \dots + g_{N-1}x^{N-1}$, then the matrix-vector product computes their negacyclic product (as a vector of coefficients):

\[ \begin{pmatrix} f_0 & -f_{N-1} & \dots & -f_1 \\ f_1 & f_0 & \dots & -f_2 \\ \vdots & \vdots & \ddots & \vdots \\ f_{N-1} & f_{N-2} & \dots & f_0 \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_{N-1} \end{pmatrix} \]

This works because each row $j$ corresponds to one output term $x^j$, and the cyclic shift for that row accounts for the degree-wrapping, with the sign flip accounting for the negacyclic part. (If there were no sign attached, this method could be used to compute a cyclic polynomial product).

The Python code for this is

def cyclic_matrix(c: numpy.ndarray) -> numpy.ndarray:
    """Generates a cyclic (circulant) matrix whose first column is the input.

    Each subsequent column is the previous one cyclically shifted down.
    For input [1, 2, 3], generates the following matrix:

        [[1 3 2]
         [2 1 3]
         [3 2 1]]
    """
    c = numpy.asarray(c).ravel()
    a, b = numpy.ogrid[0 : len(c), 0 : -len(c) : -1]
    indx = a + b
    return c[indx]


def negacyclic_polymul_toeplitz(p1, p2):
    n = len(p1)

    # Generates a sign matrix with 1s on and below the diagonal and -1s above.
    lower_tri = numpy.tril(numpy.ones((n, n), dtype=int), 0)
    upper_tri = numpy.triu(numpy.ones((n, n), dtype=int), 1) * -1
    sign_matrix = lower_tri + upper_tri

    toeplitz_p1 = sign_matrix * cyclic_matrix(p1)
    return numpy.matmul(toeplitz_p1, p2)
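It agrees with the FFT-based methods on the running example:

f = numpy.array([1, 2, 3, 4])
g = numpy.array([5, 6, 7, 8])
print(negacyclic_polymul_toeplitz(f, g))  # [-56 -36   2  60]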

Obviously on most hardware this would be less efficient than an FFT-based method (and there is some relationship between circulant matrices and Fourier Transforms, see Wikipedia). But in some cases—when the polynomials are small, or one of the two polynomials is static, or a particular hardware choice doesn’t handle FFTs with high-precision floats very well, or you want to take advantage of natural parallelism in the matrix-vector product—this method can be useful. It’s also simpler to reason about.

Until next time!

Programming with Finite Fields

Back when I was first exposed to programming language design, I decided it would be really cool if there were a language that let you define your own number types and then do all your programming within those number types. And since I get excited about math, I think of really exotic number types (Boolean rings, Gaussian integers, Octonions, oh my!). I imagined it would be a language feature, so I could do something like this:

use gaussianintegers as numbers

x = 1 + i
y = 2 - 3i
print(x*y)
z = 2 + 3.5i  # error

I’m not sure why I thought this would be so cool. Perhaps I felt like I would be teaching a computer math. Or maybe the next level of abstraction in playing god by writing programs is to play god by designing languages (and I secretly satisfy a massive god complex by dictating the actions of my computer).

But despite not writing a language of my own, programming with weird number systems still has a special place in my heart. It just so happens that we’re in the middle of a long series on elliptic curves, and in the next post we’ll actually implement elliptic curve arithmetic over a special kind of number type (the finite field). In this post, we’ll lay the groundwork by implementing number types in Python that allow us to work over any finite field. This is actually a pretty convoluted journey, and to be totally rigorous we’d need to prove a bunch of lemmas, develop a bunch of ring theory, and prove the correctness of a few algorithms involving polynomials.

Instead of taking the long and winding road, we’ll just state the important facts with links to proofs, prove the easy stuff, and focus more heavily than usual on the particular Python implementation details. As usual, all of the code used in this post is available on this blog’s Github page.

Integers Modulo Primes

The simplest kind of finite field is the set of integers modulo a prime. We’ve dealt with this number system extensively on this blog (in groups, rings, fields, with RSA, etc.), but let’s recall what it is. The modulo operator $\textup{mod}$ (in programming it’s often denoted %) is a binary operation on integers such that $ x \mod y$ is the unique non-negative remainder of $ x$ when divided by $ y$.

Definition: Let $ p$ be a prime number. The set $ \mathbb{Z}/p$ consists of the numbers $ \left \{ 0, 1, \dots, p-1 \right \}$. If you endow it with the operations of addition (mod $ p$) and multiplication (mod $ p$), it forms a field.

To say it’s a field is just to say that arithmetic more or less behaves the way we expect it to, and in particular that every nonzero element has a (unique) multiplicative inverse. Making a number type for $ \mathbb{Z}/p$ in Python is quite simple.

def IntegersModP(p):
   class IntegerModP(FieldElement):
      def __init__(self, n):
         self.n = n % p
         self.field = IntegerModP

      def __add__(self, other): return IntegerModP(self.n + other.n)
      def __sub__(self, other): return IntegerModP(self.n - other.n)
      def __mul__(self, other): return IntegerModP(self.n * other.n)
      def __truediv__(self, other): return self * other.inverse()
      def __div__(self, other): return self * other.inverse()
      def __neg__(self): return IntegerModP(-self.n)
      def __eq__(self, other): return isinstance(other, IntegerModP) and self.n == other.n
      def __abs__(self): return abs(self.n)
      def __str__(self): return str(self.n)
      def __repr__(self): return '%d (mod %d)' % (self.n, self.p)

      def __divmod__(self, divisor):
         q,r = divmod(self.n, divisor.n)
         return (IntegerModP(q), IntegerModP(r))

      def inverse(self):
         ...?

   IntegerModP.p = p
   IntegerModP.__name__ = 'Z/%d' % (p)
   return IntegerModP

We’ve done a couple of things worth noting here. First, all of the double-underscore methods are operator overloads, so they are called when one tries to, e.g., add two instances of this class together. We’ve also implemented a division algorithm via __divmod__ which computes a (quotient, remainder) pair for the appropriate division. The built-in Python function divmod does this for integers, and we can overload it for a custom type. We’ll write a more complicated division algorithm later in this post. Finally, we’re dynamically creating our class so that different primes will correspond to different types. We’ll come back to why this encapsulation is a good idea later, but it’s crucial to make our next few functions reusable and elegant.

Here’s an example of the class in use:

>>> mod7 = IntegersModP(7)
>>> mod7(3) + mod7(6)
2 (mod 7)

The last (undefined) function in the IntegersModP class, the inverse function, is our only mathematical hurdle. Luckily, we can compute inverses in a generic way, using an algorithm called the extended Euclidean algorithm. Here’s the mathematics.

Definition: An element $ d$ is called a greatest common divisor (gcd) of $ a,b$ if it divides both $ a$ and $ b$, and for every other $ z$ dividing both $ a$ and $ b$, $ z$ divides $ d$. Gcds are unique up to multiplication by a unit, and we denote the gcd of $ a$ and $ b$ by $ \gcd(a,b)$. [1]

Note that we called it ‘a’ greatest common divisor. In general gcd’s need not be unique, though for integers one often picks the positive gcd. We’ll actually see this cause a tiny programmatic bug later in this post, but let’s push on for now.

Theorem: For any two integers $ a,b \in \mathbb{Z}$ there exist $ x,y \in \mathbb{Z}$ such that $ ax + by = \gcd(a,b)$.

We could beat around the bush and try to prove these things in various ways, but when it comes down to it there’s one algorithm of central importance that both computes the gcd and produces the needed linear combination $ x,y$. The algorithm is called the Euclidean algorithm. Here is a simple version that just gives the gcd.

def gcd(a, b):
   if abs(a) < abs(b):
      return gcd(b, a)

   while abs(b) > 0:
      q,r = divmod(a,b)
      a,b = b,r

   return a

This works by the simple observation that $ \gcd(a, aq+r) = \gcd(a,r)$ (this is an easy exercise to prove directly). So the Euclidean algorithm just keeps applying this rule over and over again: take the remainder when dividing the bigger argument by the smaller argument until the remainder becomes zero. Then $ \gcd(x,0) = x$ because everything divides zero.

Now the so-called ‘extended’ Euclidean algorithm just keeps track of some additional data as it goes (the partial quotients and remainders). Here’s the algorithm.

def extendedEuclideanAlgorithm(a, b):
   if abs(b) > abs(a):
      (x,y,d) = extendedEuclideanAlgorithm(b, a)
      return (y,x,d)

   if abs(b) == 0:
      return (1, 0, a)

   x1, x2, y1, y2 = 0, 1, 1, 0
   while abs(b) > 0:
      q, r = divmod(a,b)
      x = x2 - q*x1
      y = y2 - q*y1
      a, b, x2, x1, y2, y1 = b, r, x1, x, y1, y

   return (x2, y2, a)

Indeed, the reader who hasn’t seen this stuff before is encouraged to trace out a run for the numbers 4864, 3458. Their gcd is 38, and the Bézout coefficients $ x, y$ are 32 and -45, respectively.
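Or, letting the code do the work:

>>> extendedEuclideanAlgorithm(4864, 3458)
(32, -45, 38)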

How does this help us compute inverses? Well, if we want to find the inverse of $ a$ modulo $ p$, we know that their gcd is 1. So compute the $ x,y$ such that $ ax + py = 1$, and then reduce both sides mod $ p$. You get $ ax + 0 = 1 \mod p$, which means that $ x \mod p$ is the inverse of $ a$. So once we have the extended Euclidean algorithm our inverse function is trivial to write!

def inverse(self):
   x,y,d = extendedEuclideanAlgorithm(self.n, self.p)
   return IntegerModP(x)

And indeed it works as expected:

>>> mod23 = IntegersModP(23)
>>> mod23(7).inverse()
10 (mod 23)
>>> mod23(7).inverse() * mod23(7)
1 (mod 23)

Now one very cool thing, and something that makes some basic ring theory worth understanding, is that we can compute the gcd of any number type using the exact same code for the Euclidean algorithm, provided we implement an abs function and a division algorithm. Via a chain of relatively easy-to-prove lemmas, if your number type has enough structure (in particular, if it has a division algorithm that satisfies some properties), then greatest common divisors are well-defined, and the Euclidean algorithm gives us that special linear combination. And using the same trick above in finite fields, we can use the Euclidean algorithm to compute inverses.

But in order to make things work programmatically we need to be able to deal with the literal ints 0 and 1 in the algorithm. That is, we need to be able to silently typecast integers up to whatever number type we’re working with. This makes sense because all rings have 0 and 1, but it requires a bit of scaffolding to implement. In particular, typecasting things sensibly is really difficult if you aren’t careful. And the problems are compounded in a language like Python that blatantly ignores types whenever possible. [2]

So let’s take a quick break to implement a tiny type system with implicit typecasting.

[1] The reader familiar with our series on category theory will recognize this as the product of two integers in a category whose arrows represent divisibility. So by abstract nonsense, this proves that gcd’s are unique up to multiplication by a unit in any ring.
[2] In the process of writing the code for this post, I was sorely missing the stronger type systems of Java and Haskell. Never thought I’d say that, but it’s true.

A Tiny Generic Type System

The main driving force behind our type system will be a decorator called @typecheck. We covered decorators toward the end of our primer on dynamic programming, but in short a decorator is a Python syntax shortcut that allows some pre- or post-processing to happen to a function in a reusable way. All you need to do to apply the pre/post-processing is prefix the function definition with the name of the decorator.

Our decorator will be called typecheck, and it will decorate binary operations on our number types. In its basic form, our type checker will work as follows: if the types of the two operands are the same, then the decorator will just pass them on through to the operator. Otherwise, it will try to do some typecasting, and if that fails it will raise exceptions with reckless abandon.

def typecheck(f):
   def newF(self, other):
      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

So this is great, but there are two issues. The first is that this will only silently typecast if the thing we’re casting is on the right-hand side of the expression. In other words, the following will raise an exception complaining that you can’t add ints to Mod7 integers.

>>> x = IntegersModP(7)(1)
>>> 1 + x

What we need are the right-hand versions of all the operator overloads. They are the same as the usual operator overloads, but Python gives preference to the left-hand operator overloads. Anticipating that we will need to rewrite these silly right-hand overloads for every number type, and they’ll all be the same, we make two common base classes.

class DomainElement(object):
   def __radd__(self, other): return self + other
   def __rsub__(self, other): return -self + other
   def __rmul__(self, other): return self * other

class FieldElement(DomainElement):
   def __truediv__(self, other): return self * other.inverse()
   def __rtruediv__(self, other): return self.inverse() * other
   def __div__(self, other): return self.__truediv__(other)
   def __rdiv__(self, other): return self.__rtruediv__(other)

And we can go ahead and make our IntegersModP a subclass of FieldElement. [3]

But now we’re wading into very deep waters. In particular, we know ahead of time that our next number type will be for Polynomials (over the integers, or fractions, or $ \mathbb{Z}/p$, or whatever). And we’ll want to do silent typecasting from ints and IntegersModP to Polynomials! The astute reader will notice the discrepancy. What will happen if I try to do this?

>>> MyInteger() + MyPolynomial()

Let’s take this slowly: by our assumption both MyInteger and MyPolynomial have the __add__ and __radd__ functions defined on them, and each tries to typecast the other to the appropriate type. But which is called? According to Python’s documentation, if the left-hand side has an __add__ function that’s called first, and the right-hand side’s __radd__ function is only sought if no __add__ function is found for the left operand.

Well that’s a problem, and we’ll deal with it in a half awkward and half elegant way. What we’ll do is endow our number types with an “operatorPrecedence” constant. And then inside our type checker function we’ll see if the right-hand operand is an object of higher precedence. If it is, we return the global constant NotImplemented, which Python takes to mean that no __add__ function was found, and it proceeds to look for __radd__. And so with this modification our typechecker is done. [4]

def typecheck(f):
   def newF(self, other):
      if (hasattr(other.__class__, 'operatorPrecedence') and
            other.__class__.operatorPrecedence > self.__class__.operatorPrecedence):
         return NotImplemented

      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

We add a default operatorPrecedence of 1 to the DomainElement base class. Now this function answers our earlier question of why we want to encapsulate the prime modulus into the IntegersModP class. If this typechecker is really going to be generic, we need to be able to typecast an int by passing the single int argument to the type constructor with no additional information! Indeed, this will be the same pattern for our polynomial class and the finite field class to follow.

Now there is still one subtle problem. If we try to generate two copies of the same number type from our number-type generator (in other words, the following code snippet), we’ll get a nasty exception.

>>> mod7 = IntegersModP(7)
>>> mod7Copy = IntegersModP(7)
>>> mod7(1) + mod7Copy(2)
... error ...

The reason for this is that in the type-checker we’re using the Python built-in ‘is’ which checks for identity, not semantic equality. To fix this, we simply need to memoize the IntegersModP function (and all the other functions we’ll use to generate number types) so that there is only ever one copy of a number type in existence at a time.
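A minimal sketch of such a memoization decorator (the version on Github may differ, but the idea is just to cache the generated classes by their arguments):

def memoize(f):
   cache = {}

   def memoized(*args):
      if args not in cache:
         cache[args] = f(*args)
      return cache[args]

   return memoized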

So enough Python hacking: let’s get on with implementing finite fields!

[3] This also compels us to make some slight modifications to the constructor for IntegersModP, but they’re not significant enough to display here. Check out the Github repo if you want to see.
[4] This is truly a hack, and we’ve considered submitting a feature request to the Python devs. It is conceivably useful for the operator-overloading aficionado. I’d be interested to hear your thoughts in the comments as to whether this is a reasonable feature to add to Python.

Polynomial Arithmetic

Recall from our finite field primer that every finite field can be constructed as a quotient of a polynomial ring with coefficients in $ \mathbb{Z}/p$ by some prime ideal. We spelled out exactly what this means in fine detail in the primer, so check that out before reading on.

Indeed, to construct a finite field we need to find some irreducible monic polynomial $ f$ with coefficients in $ \mathbb{Z}/p$, and then the elements of our field will be remainders of arbitrary polynomials when divided by $ f$. In order to determine if they’re irreducible we’ll need to compute a gcd. So let’s build a generic polynomial type with a polynomial division algorithm, and hook it into our gcd framework.

We start off in much the same way as with the IntegersModP:

# create a polynomial with coefficients in a field; coefficients are in
# increasing order of monomial degree so that, for example, [1,2,3]
# corresponds to 1 + 2x + 3x^2
@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2
      factory = lambda L: Polynomial([field(x) for x in L])

      def __init__(self, c):
         if type(c) is Polynomial:
            self.coefficients = c.coefficients
         elif isinstance(c, field):
            self.coefficients = [c]
         elif not hasattr(c, '__iter__') and not hasattr(c, 'iter'):
            self.coefficients = [field(c)]
         else:
            self.coefficients = c

         self.coefficients = strip(self.coefficients, field(0))

      def isZero(self): return self.coefficients == []

      def __repr__(self):
         if self.isZero():
            return '0'

         return ' + '.join(['%s x^%d' % (a,i) if i > 0 else '%s'%a
                              for i,a in enumerate(self.coefficients)])

      def __abs__(self): return len(self.coefficients)
      def __len__(self): return len(self.coefficients)
      def __sub__(self, other): return self + (-other)
      def __iter__(self): return iter(self.coefficients)
      def __neg__(self): return Polynomial([-a for a in self])

      def iter(self): return self.__iter__()
      def leadingCoefficient(self): return self.coefficients[-1]
      def degree(self): return abs(self) - 1

All of this code just lays down conventions. A polynomial is a list of coefficients (in increasing order of their monomial degree), the zero polynomial is the empty list of coefficients, and the abs() of a polynomial is one plus its degree. [5] Finally, instead of closing over a prime modulus, as with IntegersModP, we’re closing over the field of coefficients. In general you don’t have to have polynomials with coefficients in a field, but if they do come from a field then you’re guaranteed to get a sensible Euclidean algorithm. In the formal parlance, if $ k$ is a field then $ k[x]$ is a Euclidean domain. And for our goal of defining finite fields, we will always have coefficients from $ \mathbb{Z}/p$, so there’s no problem.
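One note: the constructor above calls a strip helper to remove trailing zeros, so that, e.g., [1, 2, 0] and [1, 2] represent the same polynomial. Here’s a minimal sketch of it (the version on Github may differ):

def strip(L, elt):
   '''Remove all trailing instances of elt from the list L.'''
   if len(L) == 0:
      return L

   i = len(L) - 1
   while i >= 0 and L[i] == elt:
      i -= 1

   return L[:i+1]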

Now we can define things like addition, multiplication, and equality using our typechecker to silently cast and watch for errors.

      @typecheck
      def __eq__(self, other):
         return self.degree() == other.degree() and all([x==y for (x,y) in zip(self, other)])

      @typecheck
      def __add__(self, other):
         newCoefficients = [sum(x) for x in itertools.zip_longest(self, other, fillvalue=self.field(0))]
         return Polynomial(newCoefficients)

      @typecheck
      def __mul__(self, other):
         if self.isZero() or other.isZero():
            return Zero()

         newCoeffs = [self.field(0) for _ in range(len(self) + len(other) - 1)]

         for i,a in enumerate(self):
            for j,b in enumerate(other):
               newCoeffs[i+j] += a*b

         return Polynomial(newCoeffs)

Notice that, if the underlying field of coefficients correctly implements the operator overloads, none of this depends on the coefficients. Reusability, baby!

And we can finish off with the division algorithm for polynomials.

      @typecheck
      def __divmod__(self, divisor):
         quotient, remainder = Zero(), self
         divisorDeg = divisor.degree()
         divisorLC = divisor.leadingCoefficient()

         while remainder.degree() >= divisorDeg:
            monomialExponent = remainder.degree() - divisorDeg
            monomialZeros = [self.field(0) for _ in range(monomialExponent)]
            monomialDivisor = Polynomial(monomialZeros + [remainder.leadingCoefficient() / divisorLC])

            quotient += monomialDivisor
            remainder -= monomialDivisor * divisor

         return quotient, remainder

Indeed, we’re doing nothing here but formalizing the grade-school algorithm for doing polynomial long division [6]. And we can finish off the function for generating this class by assigning the field member variable along with a class name. And we give it a higher operator precedence than the underlying field of coefficients so that an isolated coefficient is cast up to a constant polynomial.

@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2

      [... methods defined above ...]

   def Zero():
      return Polynomial([])

   Polynomial.field = field
   Polynomial.__name__ = '(%s)[x]' % field.__name__
   return Polynomial

We provide a modest test suite in the Github repository for this post, but here’s a sample test:

>>> Mod5 = IntegersModP(5)
>>> Mod11 = IntegersModP(11)
>>> polysOverQ = polynomialsOver(Fraction).factory
>>> polysMod5 = polynomialsOver(Mod5).factory
>>> polysMod11 = polynomialsOver(Mod11).factory
>>> polysOverQ([1,7,49]) / polysOverQ([7])
1/7 + 1 x^1 + 7 x^2
>>> polysMod5([1,7,49]) / polysMod5([7])
3 + 1 x^1 + 2 x^2
>>> polysMod11([1,7,49]) / polysMod11([7])
8 + 1 x^1 + 7 x^2

And indeed, the extended Euclidean algorithm works without modification, so we know our typecasting is doing what’s expected:

>>> Mod2 = IntegersModP(2)
>>> p = polynomialsOver(Mod2).factory
>>> f = p([1,0,0,0,1,1,1,0,1,1,1]) # x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + 1
>>> g = p([1,0,1,1,0,1,1,0,0,1])   # x^9 + x^6 + x^5 + x^3 + x^2 + 1
>>> theGcd = p([1,1,0,1]) # x^3 + x + 1
>>> x = p([0,0,0,0,1]) # x^4
>>> y = p([1,1,1,1,1,1]) # x^5 + x^4 + x^3 + x^2 + x + 1
>>> (x,y,theGcd) == extendedEuclideanAlgorithm(f, g)
True

[5] The mathematical name for the abs() function that we’re using is a valuation.
[6] One day we will talk a lot more about polynomial long division on this blog. You can do a lot of cool algebraic geometry with it, and the ideas there lead you to awesome applications like robot motion planning and automated geometry theorem proving.

Generating Irreducible Polynomials

Now that we’ve gotten Polynomials out of the way, we need to be able to generate irreducible polynomials over $ \mathbb{Z}/p$ of any degree we want. It might be surprising that irreducible polynomials of any degree exist [7], but in fact we know a lot more.

Theorem: The product of all monic irreducible polynomials over $ \mathbb{Z}/p$ of degree dividing $ m$ is equal to $ x^{p^m} - x$.

This is an important theorem, but it takes a little bit more field theory than we have under our belts right now. You could summarize the proof by saying there is a correspondence between the elements of a field and the monic irreducible polynomials they are roots of, and then you say some things about splitting fields. You can see a more detailed proof outline here, but it assumes you’re familiar with the notion of a minimal polynomial. We will probably cover this in a future primer.

But just using the theorem we can get a really nice algorithm for determining if a polynomial $ f(x)$ of degree $ m$ is irreducible: we just look at its gcd with all the $ x^{p^k} - x$ for $ k$ smaller than $ m$. If all the gcds are constants, then we know it’s irreducible, and if even one is a non-constant polynomial then it has to be reducible. Why’s that? Because a nontrivial gcd $ d(x) = \gcd(f(x), x^{p^k} - x)$ for $ k < m$ is, by definition, a nonconstant factor of $ f(x)$ of degree less than $ m$, so $ f(x)$ is reducible. Conversely, if $ f(x)$ has any nontrivial factor, then it has a monic irreducible factor of some degree $ k < m$, and by the theorem that factor also divides $ x^{p^k} - x$, making the corresponding gcd non-constant. In fact, with a moment of thought it’s clear that we can stop at $ k = m/2$, since in any factorization at least one factor has degree at most $ m/2$. So the algorithm to check for irreducibility is just this simple loop:

def isIrreducible(polynomial, p):
   ZmodP = IntegersModP(p)
   poly = polynomialsOver(ZmodP).factory
   x = poly([0,1])
   powerTerm = x
   isUnit = lambda p: p.degree() == 0

   for _ in range(int(polynomial.degree() / 2)):
      powerTerm = powerTerm.powmod(p, polynomial)
      gcdOverZmodp = gcd(polynomial, powerTerm - x)
      if not isUnit(gcdOverZmodp):
         return False

   return True

We’re just computing the powers iteratively as $ x^p, (x^p)^p = x^{p^2}, \dots, x^{p^j}$ and in each step of the loop subtracting $ x$ and computing the relevant gcd. The powmod function is just there so that we can reduce the power mod our irreducible polynomial after each multiplication, keeping the degree of the polynomial small and efficient to work with.
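We don’t display powmod here, but a minimal sketch of what it might look like as a method on Polynomial follows (square-and-multiply, using only the operations defined above; the version on Github may differ):

      def powmod(self, power, modulus):
         # Repeated squaring, reducing modulo `modulus` after each multiplication.
         result = Polynomial([self.field(1)])
         base = divmod(self, modulus)[1]
         while power > 0:
            if power & 1:
               result = divmod(result * base, modulus)[1]
            base = divmod(base * base, modulus)[1]
            power >>= 1
         return result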

Now generating an irreducible polynomial is a little bit harder than testing for one. With a lot of hard work, however, field theorists discovered that irreducible polynomials are quite common. In fact, if you just generate the coefficients of your degree $ n$ monic polynomial at random, the chance that you’ll get something irreducible is roughly $ 1/n$. So this suggests an obvious randomized algorithm: keep guessing until you find one.

def generateIrreduciblePolynomial(modulus, degree):
   Zp = IntegersModP(modulus)
   Polynomial = polynomialsOver(Zp)

   while True:
      coefficients = [Zp(random.randint(0, modulus-1)) for _ in range(degree)]
      randomMonicPolynomial = Polynomial(coefficients + [Zp(1)])

      if isIrreducible(randomMonicPolynomial, modulus):
         return randomMonicPolynomial

Since the probability of getting an irreducible polynomial is close to $ 1/n$, we expect to require $ n$ trials before we find one. Moreover we could give a pretty tight bound on how likely it is to deviate from the expected number of trials. So now we can generate some irreducible polynomials!

>>> generateIrreduciblePolynomial(23, 3)
21 + 12 x^1 + 11 x^2 + 1 x^3

And so now we are well-equipped to generate any finite field we want! It’s just a matter of generating the polynomial and taking a modulus after every operation.

@memoize
def FiniteField(p, m, polynomialModulus=None):
   Zp = IntegersModP(p)
   if m == 1:
      return Zp

   Polynomial = polynomialsOver(Zp)
   if polynomialModulus is None:
      polynomialModulus = generateIrreduciblePolynomial(modulus=p, degree=m)

   class Fq(FieldElement):
      fieldSize = int(p ** m)
      primeSubfield = Zp
      idealGenerator = polynomialModulus
      operatorPrecedence = 3

      def __init__(self, poly):
         if type(poly) is Fq:
            self.poly = poly.poly
         elif type(poly) is int or type(poly) is Zp:
            self.poly = Polynomial([Zp(poly)])
         elif isinstance(poly, Polynomial):
            # relies on a __mod__ for Polynomial, defined in the full code on Github
            self.poly = poly % polynomialModulus
         else:
            self.poly = Polynomial([Zp(x) for x in poly]) % polynomialModulus

         self.field = Fq

      @typecheck
      def __add__(self, other): return Fq(self.poly + other.poly)
      @typecheck
      def __sub__(self, other): return Fq(self.poly - other.poly)
      @typecheck
      def __mul__(self, other): return Fq(self.poly * other.poly)
      @typecheck
      def __eq__(self, other): return isinstance(other, Fq) and self.poly == other.poly

      def __pow__(self, n): return Fq(pow(self.poly, n))  # relies on a __pow__ for Polynomial (on Github)
      def __neg__(self): return Fq(-self.poly)
      def __abs__(self): return abs(self.poly)
      def __repr__(self): return repr(self.poly) + ' \u2208 ' + self.__class__.__name__

      @typecheck
      def __divmod__(self, divisor):
         q,r = divmod(self.poly, divisor.poly)
         return (Fq(q), Fq(r))

      def inverse(self):
         if self == Fq(0):
            raise ZeroDivisionError

         x,y,d = extendedEuclideanAlgorithm(self.poly, self.idealGenerator)
         return Fq(x) * Fq(d.coefficients[0].inverse())

   Fq.__name__ = 'F_{%d^%d}' % (p,m)
   return Fq

And some examples of using it:

>>> F23 = FiniteField(2,3)
>>> x = F23([1,1])
>>> x
1 + 1 x^1 ∈ F_{2^3}
>>> x*x
1 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> x**10
0 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> 1 / x
0 + 1 x^1 + 1 x^2 ∈ F_{2^3}
>>> x * (1 / x)
1 ∈ F_{2^3}
>>> k = FiniteField(23, 4)
>>> k.fieldSize
279841
>>> k.idealGenerator
6 + 8 x^1 + 10 x^2 + 10 x^3 + 1 x^4
>>> y = k([9, 21, 14, 12])
>>> y
9 + 21 x^1 + 14 x^2 + 12 x^3 ∈ F_{23^4}
>>> y*y
13 + 19 x^1 + 7 x^2 + 14 x^3 ∈ F_{23^4}
>>> y**5 - y
15 + 22 x^1 + 15 x^2 + 5 x^3 ∈ F_{23^4}

And that’s it! Now we can do arithmetic over any finite field we want.

[7] Especially considering that other wacky things happen like this: $ x^4 +1$ is reducible over every finite field!

Some Words on Efficiency

There are a few things that go without stating about this program, but I’ll state them anyway.

The first is that we pay a big efficiency price for being so generic. All the typecasting we’re doing isn’t cheap, and in general cryptography needs to be efficient. For example, if I try to create a finite field of order $ 104729^{20}$, it takes about ten to fifteen seconds to complete. This might not seem so bad for a one-time initialization, but it’s clear that our representation is somewhat bloated. We would display a graph of the expected time to perform various operations in various finite fields, but this post is already way too long.

In general, the larger and more complicated the polynomial you use to define your finite field, the longer operations will take (since dividing by a complicated polynomial is more expensive than dividing by a simple polynomial). For this reason and a few other reasons, a lot of research has gone into efficiently finding irreducible polynomials with, say, only three nonzero terms. Moreover, if we know in advance that we’ll only work over fields of characteristic two we can make a whole lot of additional optimizations. Essentially, all of the arithmetic reduces to really fast bitwise operations, and things like exponentiation can easily be implemented in hardware. But it also seems that the expense coming with large field characteristics corresponds to higher security, so some researchers have tried to meet in the middle and get efficient representations of other field characteristics.

In any case, the real purpose of our implementation in this post is not for efficiency. We care first and foremost about understanding the mathematics, and to have a concrete object to play with and use in the future for other projects. And we have accomplished just that.

Until next time!

(Finite) Fields — A Primer

So far on this blog we’ve given some introductory notes on a few kinds of algebraic structures in mathematics (most notably groups and rings, but also monoids). Fields are the next natural step in the progression.

If the reader is comfortable with rings, then a field is extremely simple to describe: they’re just commutative rings with 0 and 1, where every nonzero element has a multiplicative inverse. We’ll give a list of all of the properties that go into this “simple” definition in a moment, but an even more simple way to describe a field is as a place where “arithmetic makes sense.” That is, you get operations for $ +,-, \cdot , /$ which satisfy the expected properties of addition, subtraction, multiplication, and division. So whatever the objects in your field are (and sometimes they are quite weird objects), they behave like usual numbers in a very concrete sense.

So here’s the official definition of a field. We call a set $ F$ a field if it is endowed with two binary operations addition ($ +$) and multiplication ($ \cdot$, or just symbol juxtaposition) that have the following properties:

  • There is an element we call 0 which is the identity for addition.
  • Addition is commutative and associative.
  • Every element $ a \in F$ has a corresponding additive inverse $ b$ (which may equal $ a$) for which $ a + b = 0$.

These three properties are just the axioms of a (commutative) group, so we continue:

  • There is an element we call 1 (distinct from 0) which is the identity for multiplication.
  • Multiplication is commutative and associative.
  • Every nonzero element $ a \in F$ has a corresponding multiplicative inverse $ b$ (which may equal $ a$) for which $ ab = 1$.
  • Addition and multiplication distribute across each other as we expect.

If we exclude the existence of multiplicative inverses, these properties make $ F$ a commutative ring, and so we have the following chain of inclusions that describes it all

$ \displaystyle \textup{Fields} \subset \textup{Commutative Rings} \subset \textup{Rings} \subset \textup{Commutative Groups} \subset \textup{Groups}$

The standard examples of fields are the real numbers $ \mathbb{R}$, the rationals $ \mathbb{Q}$, and the complex numbers $ \mathbb{C}$. But of course there are many many more. The first natural question to ask about fields is: what can they look like?

For example, can there be any finite fields? A field $ F$ which as a set has only finitely many elements?

As we saw in our studies of groups and rings, the answer is yes! The simplest example is the set of integers modulo some prime $ p$. We call them $ \mathbb{Z} / p \mathbb{Z},$ or sometimes just $ \mathbb{Z}/p$ for short, and let’s rederive what we know about them now.

As a set, $ \mathbb{Z}/p$ consists of the integers $ \left \{ 0, 1, \dots, p-1 \right \}$. The addition and multiplication operations are easy to define, they’re just usual addition and multiplication followed by a modulus. That is, we add by $ a + b \mod p$ and multiply with $ ab \mod p$. This thing is clearly a commutative ring (because the integers form a commutative ring), so to show this is a field we need to show that everything has a multiplicative inverse.

There is a nice fact that allows us to do this: an element $ a$ has an inverse if and only if the only way for it to divide zero is the trivial way $ 0a = 0$. Here’s a proof. For one direction, suppose $ a$ divides zero nontrivially, that is there is some $ c \neq 0$ with $ ac = 0$. Then if $ a$ had an inverse $ b$, then $ 0 = b(ac) = (ba)c = c$, but that’s very embarrassing for $ c$ because it claimed to be nonzero. Now suppose $ a$ only divides zero in the trivial way. Then look at all possible ways to multiply $ a$ by other nonzero elements of $ F$. No two can give you the same result because if $ ax = ay$ then (without using multiplicative inverses) $ a(x-y) = 0$, but we know that $ a$ can only divide zero in the trivial way so $ x=y$. In other words, the map “multiplication by $ a$” is injective. Because the set of nonzero elements of $ F$ is finite you have to hit everything (the map is in fact a bijection), and some $ x$ will give you $ ax = 1$.

Now let’s use this fact on $ \mathbb{Z}/p$ in the obvious way. Since $ p$ is a prime, there are no two smaller numbers $ a, b < p$ so that $ ab = p$. But in $ \mathbb{Z}/p$ the number $ p$ is equivalent to zero (mod $ p$)! So $ \mathbb{Z}/p$ has no nontrivial zero divisors, and so every element has an inverse, and so it’s a finite field with $ p$ elements.
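You can see the bijection argument in action with a quick computation: multiplication by any nonzero element permutes the nonzero residues, so 1 is always hit. For example, with $ p = 7$ and $ a = 3$:

>>> sorted((3 * x) % 7 for x in range(1, 7))
[1, 2, 3, 4, 5, 6]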

The next question is obvious: can we get finite fields of other sizes? The answer turns out to be yes, but you can’t get finite fields of any size. Let’s see why.

Characteristics and Vector Spaces

Say you have a finite field $ k$ (lower-case k is the standard letter for a field, so let’s forget about $ F$). Because the field is finite, if you take 1 and keep adding it to itself you’ll eventually run out of field elements. That is, $ n = 1 + 1 + \dots + 1 = 0$ at some point. How do I know it hits zero and doesn’t keep cycling without ever hitting zero? Well, if the sums of $ n$ ones and of $ m$ ones were equal with $ n > m$, then the sum of $ n - m$ ones would be zero, so you do hit zero.

Now we define $ \textup{char}(k)$, the characteristic of $ k$, to be the smallest $ n$ (sums of 1 with itself) for which $ n = 0$. If there is no such $ n$ (this can happen if $ k$ is infinite, but doesn’t always happen for infinite fields), then we say the characteristic is zero. It would probably make more sense to say the characteristic is infinite, but that’s just the way it is. Of course, for finite fields the characteristic is always positive. So what can we say about this number? We have seen lots of examples where it’s prime, but is it always prime? It turns out the answer is yes!

For if $ ab = n = \textup{char}(k)$ is composite, then by the minimality of $ n$ we get $ a,b \neq 0$, but $ ab = n = 0$. This can’t happen by our above observation, because being a zero divisor means you have no inverse! Contradiction, sucker.

But it might happen that there are elements of $ k$ that can’t be written as $ 1 + 1 + \dots + 1$ for any number of terms. We’ll construct examples in a minute (in fact, we’ll classify all finite fields), but we already have a lot of information about what those fields might look like. Indeed, since every field has 1 in it, we just showed that every finite field contains a smaller field (a subfield) of all the ways to add 1 to itself. Since the characteristic is prime, the subfield is a copy of $ \mathbb{Z}/p$ for $ p = \textup{char}(k)$. We call this special subfield the prime subfield of $ k$.

The relationship between the possible other elements of $ k$ and the prime subfield is very neat. Because think about it: if $ k$ is your field and $ F$ is your prime subfield, then the elements of $ k$ can interact with $ F$ just like any other field elements. But if we separate $ k$ from $ F$ (make a separate copy of $ F$), and just think of $ k$ as having addition, then the relationship with $ F$ is that of a vector space! In fact, whenever you have two fields $ k \subset k'$, the latter has the structure of a vector space over the former.

Back to finite fields, $ k$ is a vector space over its prime subfield, and now we can bring all the power and might of linear algebra to bear on it. What’s its dimension? Finite, because $ k$ is a finite set! Call the dimension $ m$; then we get a basis $ v_1, \dots, v_m$. Then the crucial part: every element of $ k$ has a unique representation in terms of the basis. So every element can be expanded in the form

$ \displaystyle f_1v_1 + \dots + f_mv_m$

where the $ f_i$ come from $ F$. But now, since these are all just field operations, every possible choice for the $ f_i$ has to give you a different field element. And how many choices are there for each $ f_i$? Exactly $ |F| = \textup{char}(k) = p$ choices. And so by counting we get that $ k$ has $ p^m$ many elements.
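The counting argument fits in a few lines of Python. A sketch of my own, identifying elements with their coefficient tuples (the basis is left abstract):

from itertools import product

p, m = 3, 2
# every element is a unique F-linear combination f_1 v_1 + ... + f_m v_m,
# so elements are in bijection with coefficient tuples (f_1, ..., f_m)
coefficient_tuples = list(product(range(p), repeat=m))
assert len(coefficient_tuples) == p**m  # 9 elements in a field of size 3^2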

This is getting exciting quickly, but we have to pace ourselves! This is a constraint on the possible size of a finite field, but can we realize it for all choices of $ p, m$? The answer is again yes, and in the next section we’ll see how.  But reader be warned: the formal way to do it requires a little bit of familiarity with ideals in rings to understand the construction. I’ll try to avoid too much technical stuff, but if you don’t know what an ideal is, you should expect to get lost (it’s okay, that’s the nature of learning new math!).

Constructing All Finite Fields

Let’s describe a construction. Take a finite field $ k$ of characteristic $ p$, and say you want to make a field of size $ |k|^m$ (so for the prime field $ k = \mathbb{Z}/p$, a field of size $ p^m$). What we need to do is construct a field extension, that is, find a bigger field containing $ k$ so that the vector space dimension of our new field over $ k$ is exactly $ m$.

What you can do is first form the ring of polynomials with coefficients in $ k$. This ring is usually denoted $ k[x]$, and it’s easy to check it’s a ring (polynomial addition and multiplication are defined in the usual way). Now if I were speaking to a mathematician I would say, “From here you take an irreducible monic polynomial $ p(x)$ of degree $ m$, and quotient your ring by the principal ideal generated by $ p$. The result is the field we want!”

In less compact terms, the idea is exactly the same as modular arithmetic on integers. Instead of doing arithmetic with integers modulo some prime (an irreducible integer), we’re doing arithmetic with polynomials modulo some irreducible polynomial $ p(x)$. Now you see the reason I used $ p$ for a polynomial, to highlight the parallel thought process. What I mean by “modulo a polynomial” is that you divide some element $ f$ in your ring by $ p$ as much as you can, until the degree of the remainder is smaller than the degree of $ p(x)$, and that remainder is your canonical representative in the quotient. The division algorithm for polynomials guarantees that we can do this no matter what $ k$ is (in the formal parlance, $ k[x]$ is called a Euclidean domain for this very reason). In still other words, the “quotient structure” tells us that two polynomials $ f, g \in k[x]$ are considered to be the same in $ k[x] / p$ if and only if $ f - g$ is divisible by $ p$. This is the same definition as for $ \mathbb{Z}/p$, with polynomials replacing numbers, and if you haven’t already you can start to imagine why people decided to study rings in general.
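Here is what reducing “modulo a polynomial” looks like in code. This is a minimal sketch of my own, not production code: coefficient lists run from the constant term upward, and the modulus is assumed monic with its leading 1 included.

def reduce_mod(f, modulus, p):
    # Reduce the coefficient list f modulo a monic polynomial over Z/p.
    # E.g. x^3 - x + 1 is written [1, -1, 0, 1], lowest degree first.
    f = [c % p for c in f]
    d = len(modulus) - 1  # degree of the modulus
    while len(f) > d:
        lead = f.pop()  # kill the highest-degree term...
        for i in range(d):  # ...by subtracting lead * x^(deg f - d) * modulus
            f[len(f) - d + i] = (f[len(f) - d + i] - lead * modulus[i]) % p
    return f + [0] * (d - len(f))  # pad to a canonical length-d representative

Each pass through the loop is one step of polynomial long division: cancel the leading term, keep the remainder, repeat until the degree drops below $ \deg p(x)$.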

Let’s do a specific example to see what’s going on. Say we’re working with $ k = \mathbb{Z}/3$ and we want to compute a field of size $ 27 = 3^3$. First we need to find a monic irreducible polynomial of degree $ 3$. For now, I just happen to know one: $ p(x) = x^3 - x + 1$. In fact, we can check it’s irreducible, because to be reducible it would have to have a linear factor and hence a root in $ \mathbb{Z}/3$. But it’s easy to see that if you compute $ p(0), p(1), p(2)$ and reduce mod 3, you never get zero.
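That check is one line of Python (mirroring the argument: a cubic with no roots has no linear factor, hence no factorization at all):

assert all((x**3 - x + 1) % 3 != 0 for x in range(3))  # no roots in Z/3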

So I’m calling this new ring

$ \displaystyle \frac{\mathbb{Z}/3[x]}{(x^3 – x + 1)}$

It happens to be a field, and we can argue it with a whole lot of ring theory. First, we know an irreducible element of this ring is also prime (because the ring is a unique factorization domain), and prime elements generate maximal ideals (because it’s a principal ideal domain), and if you quotient by a maximal ideal you get a field (true of all commutative rings with unity).

But if we want to avoid that kind of argument and just focus on this ring, we can explicitly construct inverses. Say you have a polynomial $ f(x)$, and for illustration purposes we’ll choose $ f(x) = x^4 + x^2 – 1$. Now in the quotient ring we could do polynomial long division to find remainders, but another trick is just to notice that the quotient is equivalent to the condition that $ x^3 = x – 1$. So we can reduce $ f(x)$ by applying this rule to $ x^4 = x^3 x$ to get

$ \displaystyle f(x) = x^2 + x(x-1) – 1 = 2x^2 – x – 1$
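Using the reduce_mod sketch from above, we can double-check this reduction:

# x^4 + x^2 - 1, lowest degree first, modulo (x^3 - x + 1) over Z/3
print(reduce_mod([-1, 0, 1, 0, 1], [1, -1, 0, 1], 3))  # [2, 2, 2]

The output $ 2x^2 + 2x + 2$ is the same polynomial as $ 2x^2 - x - 1$ once you remember that $ -1 = 2$ in characteristic 3.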

Now what’s the inverse of $ f(x)$? Well we need a polynomial $ g(x) = ax^2 + bx + c$ whose product with $ f$ gives us something which is equivalent to 1, after you reduce by $ x^3 – x + 1$. A few minutes of algebra later and you’ll discover that this is equivalent to the following polynomial being identically 1

$ \displaystyle (a-b+2c)x^2 + (-3a+b-c)x + (a - 2b - c) = 1$

In other words, we get a system of linear equations which we need to solve:

$ \displaystyle \begin{aligned} a & - & b & + & 2c & = 0 \\ -3a & + & b & - & c &= 0 \\ a & - & 2b & - & c &= 1 \end{aligned}$

And from here you can solve with your favorite linear algebra techniques. This is a good exercise for working in fields, because you get to abuse the prime subfield being characteristic 3 to say terrifying things like $ -1 = 2$ and $ -3a = 0$. The end result is that the inverse polynomial is $ x^2 + 2x + 2$, and if you were really determined you could write a program to compute these linear systems for any input polynomial and ensure they’re all solvable. We prefer the ring theoretic proof.
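If you’re exactly that determined but lazier, there are only 27 candidates for the inverse, so you can brute-force the search instead of solving linear systems. A sketch of mine, reusing reduce_mod from above (it doubles as a check on the algebra):

from itertools import product

def poly_mul(f, g, p):
    # schoolbook polynomial multiplication with coefficients in Z/p
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

f = [2, 2, 2]  # the reduced f(x) from above, lowest degree first
modulus = [1, -1, 0, 1]  # x^3 - x + 1
for g in product(range(3), repeat=3):
    if reduce_mod(poly_mul(f, list(g), 3), modulus, 3) == [1, 0, 0]:
        print(g)  # (2, 2, 1), i.e. x^2 + 2x + 2

It agrees with the linear algebra, and since inverses in a field are unique, it prints exactly one candidate.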

In any case, it’s clear that taking a polynomial ring like this and quotienting by a monic irreducible polynomial gives you a field. We just control the size of that field by choosing the degree of the irreducible polynomial to our satisfaction. And that’s how we get all finite fields!

One Last Word on Irreducible Polynomials

One thing we’ve avoided is the question of why monic irreducible polynomials of every degree $ m$ exist over every $ \mathbb{Z}/p$ (and, as a consequence, why we can actually construct finite fields of all possible sizes).

The answer requires a bit of group theory to prove, but it turns out that the polynomial $ x^{p^m} - x$ factors over $ \mathbb{Z}/p$ into exactly the monic irreducible polynomials whose degrees divide $ m$; in particular, every degree $ m$ monic irreducible polynomial shows up as a factor. But perhaps a better question (for computer scientists) is how do we work over a finite field in practice? One way is to work with polynomial arithmetic as we described above, but this has some downsides: it requires us to compute these irreducible monic polynomials (which doesn’t sound so hard, maybe), to do polynomial long division every time we multiply, and to compute inverses by solving a linear system.
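We can spot-check that factorization with the reduce_mod sketch from earlier. Over $ \mathbb{Z}/2$ with $ m = 2$, the only monic irreducible quadratic is $ x^2 + x + 1$, and sure enough it divides $ x^{2^2} - x$:

x4_minus_x = [0, -1, 0, 0, 1]  # x^4 - x, lowest degree first
# a zero remainder means x^2 + x + 1 divides x^4 - x over Z/2
assert reduce_mod(x4_minus_x, [1, 1, 1], 2) == [0, 0]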

But we can do better for some special finite fields, say where the characteristic is 2 (smells like binary) or we’re only looking at $ F_{p^2}$. The benefit there is that we aren’t forced to use polynomials. We can come up with some other kind of structure (say, matrices of a special form) which happens to have the same field structure and makes computing operations relatively painless. We’ll see how this is done in the future, and see it applied to cryptography when we continue with our series on elliptic curve cryptography.

Until then!

Rings — A Second Primer

Last time we defined and gave some examples of rings. Recapping, a ring is a special kind of group with an additional multiplication operation that “plays nicely” with addition. The important thing to remember is that a ring is intended to remind us of arithmetic with integers (though not too much: multiplication in a ring need not be commutative). We proved some basic properties, like zero being unique and negation being well-behaved. We gave a quick definition of an integral domain as a place where the only way to multiply two things to get zero is when one of the multiplicands was already zero, and of a Euclidean domain where we can perform nice algorithms like the one for computing the greatest common divisor. Finally, we saw a very important example of the ring of polynomials.

In this post we’ll take a small step upward from looking at low-level features of rings, and start considering how general rings relate to each other. The reader familiar with this blog will notice many similarities between this post and our post on group theory. Indeed, the definitions here will be “motivated” by an attempt to replicate the same kinds of structures we found helpful in group theory (subgroups, homomorphisms, quotients, and kernels). And because rings are also abelian groups, we will play fast and loose with a number of the proofs here by relying on facts we proved in our two-post series on group theory. The ideas assume a decidedly distinct flavor here (mostly in ideals), and in future posts we will see how this affects the computational aspects in more detail.

Homomorphisms and Sub-structures

The first question we should ask about rings is: what should mappings between rings look like? For groups they were set-functions which preserved the group operation ($ f(xy) = f(x)f(y)$ for all $ x,y$). The idea of preserving algebraic structure translates to rings, but as rings have two operations we must preserve both.

Definition: Let $ (R, +_R, \cdot_R), (S, +_S, \cdot_S)$ be rings, and let $ f: R \to S$ be a function on the underlying sets. We call $ f$ a ring homomorphism if $ f$ is a group homomorphism of the underlying abelian groups, $ f(1_R) = 1_S$, and for all $ x,y \in R$ we have $ f(x \cdot_R y) = f(x) \cdot_S f(y)$. A bijective ring homomorphism is called an isomorphism.

Indeed, we have the usual properties of ring homomorphisms that we would expect. All ring homomorphisms preserve the additive identity (being group homomorphisms) and the multiplicative identity (by definition), and they send multiplicative inverses to multiplicative inverses when they exist, since $ f(x)f(x^{-1}) = f(1) = 1$. We leave the remaining verifications to the reader.

Ring homomorphisms have the same kinds of constructions as group homomorphisms did. One of particular importance is the kernel $ \textup{ker} f$, which is the preimage of zero under $ f$, or equivalently the set $ \left \{ x : f(x) = 0 \right \}$. This is the same definition as for group homomorphisms, linear maps, etc.

The second question we want to ask about rings is: what is the appropriate notion of a sub-structure for rings? One possibility is obvious.

Definition: A subring $ S$ of a ring $ R$ is a subset $ S \subset R$ which is also a ring under the same operations as those of $ R$ (and with the same identities).

Unfortunately subrings do not have the properties we want them to have, and this is mostly because of the requirement that our rings have a multiplicative identity. For instance, we want to say that the kernel of a ring homomorphism $ f: R \to S$ is a subring of $ R$. This will obviously not be the case, because the multiplicative identity never maps to zero under a ring homomorphism! We also want an appropriate notion of a quotient ring, but if we quotient out (“declare to be zero”) a subring, then the identity will become zero, which yields all sorts of contradictions in all but the most trivial of rings. One nice thing, however, is that the image of a ring homomorphism $ R \to S$ is a subring of $ S$. It is a trivial exercise to prove.

Rather than modify our definition of a kernel or a quotient to make it work with the existing definition of a subring, we pick a different choice of an appropriate sub-structure: the ideal.

The Study of Ideals

From an elementary perspective, an ideal is easiest understood as a generalization of even numbers. Even integers have this nice property that multiplying any integer by an even integer gives you an even integer. This is a kind of closure property that mathematicians really love to think about, because they tend to lead to further interesting properties and constructions. Indeed, we can generalize it as follows:

Definition: Let $ R$ be a commutative ring, and $ I \subset R$. We call $ I$ an ideal if two conditions are satisfied:

  1. $ I$ is a subgroup of $ R$ under addition.
  2. For all $ r \in R$ and for all $ x \in I$, we have $ rx \in I$.

That is, an ideal is a subgroup closed under multiplication. The even integers as a subset of $ \mathbb{Z}$ give a nice example, as does the set $ \left \{ 0 \right \}$. In fact, every ideal contains zero by being a subgroup. A slightly more complicated example is the set of polynomials divisible by $ (x-1)$ as a subset of the polynomial ring $ \mathbb{R}[x]$.

We have to be a little careful here if our ring is not commutative. Technically this definition above is for a left-ideal, which is closed under left-multiplication. You can also define right-ideals closed under right-multiplication, and the official name for a plain old “ideal” is a two-sided ideal. The only time we will ever work with noncommutative rings (that we can envision) is with matrix rings, so excluding that case our ideals will forevermore be two-sided. Oftentimes we will use the notation $ rI$ to denote the set of all possible left multiplications $ \left \{ rx : x \in I \right \}$, and so the ideal condition is just $ rI \subset I$. This is a multiplicative kind of coset, although as we are about to see we will use both additive and multiplicative cosets in talking about rings.

Let’s look at some properties of ideals that show how neatly this definition encapsulates what we want. First,

Proposition: The kernel of a ring homomorphism is an ideal.

Proof. Let $ f: R \to S$ be a ring homomorphism, and let $ I = \textup{ker}(f)$. We already know that $ I$ is a subgroup of $ R$ under addition because kernels of group homomorphisms are subgroups. To show the ideal condition, simply take $ r \in R, x \in I$ and note that $ f(rx) = f(r)f(x) = f(r)0 = 0$, and so $ rx \in \textup{ker}(f)$. $ \square$

In fact the correspondence is one-to-one: every ideal is the kernel of a ring homomorphism and every ring homomorphism’s kernel is an ideal. This is not surprising as it was the case for groups, and the story starts here with quotients, too.

Definition: Let $ R$ be a ring and $ I$ an ideal of $ R$. The quotient group $ R/I$ forms a ring called the quotient ring, and is still denoted by $ R / I$.

To show this definition makes any sense requires some thought: what are the operations of this new ring? Are they well-defined on coset representatives?

For the first, we already know the addition operation because it is the same operation when we view $ R / I$ as a quotient group; that is, $ (x + I) + (y + I) = (x+y) + I$. The additive identity is just $ 0 + I = I$. The multiplication operation is similar: $ (x + I)(y + I) = xy + I$. And the multiplicative identity is clearly $ 1 + I$.

The fact that multiplication works as we said it does above gives more motivation for the definition of an ideal. To prove it, pick any representatives $ x + z_1 \in x + I$ and $ y + z_2 \in y + I$. Their product is

$ (xy + xz_2 + yz_1 + z_1z_2) \in xy + xI + yI + I^2$

where we denote by $ I^2 = \left \{ xy : x \in I, y \in I \right \}$. The defining condition of an ideal is precisely that the weird parts, the $ xI + yI + I^2$, land inside $ I$. And indeed, $ I$ is an additive group, so sums of things in $ I$ are in $ I$, and moreover $ I$ is closed under multiplication by arbitrary elements of $ R$. More rigorously, $ xy + xz_2 + yz_1 + z_1z_2$ is equivalent to $ xy$ under the coset equivalence relation because their difference $ xz_2 + yz_1 + z_1z_2$ is a member of $ I$.
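A concrete sanity check never hurts. In $ \mathbb{Z}$ with $ I = \left \langle 12 \right \rangle$, the product of two cosets doesn’t care which representatives you pick (a quick illustrative snippet of mine):

I, x, y = 12, 5, 7
products = {((x + z1) * (y + z2)) % I
            for z1 in range(0, 60, I) for z2 in range(0, 60, I)}
assert products == {(x * y) % I}  # every choice lands in the coset 35 + I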

Now we can realize that every ideal is the kernel of some homomorphism. Indeed, if $ I$ is an ideal of $ R$, then there is a natural map $ \pi : R \to R/I$ (called the canonical projection, see our post on universal properties for more) defined by sending an element to its coset: $ \pi(x) = x + I$. It should be obvious to the reader that the kernel of $ \pi$ is exactly $ I$ (because $ x + I = 0 + I$ if and only if $ x \in I$; this is a fact we borrow from groups). And so there is a correspondence between kernels and ideals. They are really the same concept manifested in two different ways.

Because we have quotient rings and ring homomorphisms, we actually get a number of “theorems” for free. These are the isomorphism theorems for rings, the most important one being the analogue of the First Isomorphism Theorem for groups. That is, if $ f: R \to S$ is a ring homomorphism, $ \textup{im}(f)$ is a subring of $ S$, and moreover $ R/ \textup{ker}(f) \cong \textup{im}(f)$. We invite the reader to prove it by hand (start by defining a map $ \varphi(x + I) = f(x)$ and prove it’s a ring isomorphism). There are some other isomorphism theorems, but they’re not particularly deep; rather, they’re just commonly-applied corollaries of the first isomorphism theorem. In light of this blog’s discussions on category theory, the isomorphism theorems are a trivial consequence of the fact that rings have quotients, and that $ \textup{im}(f)$ happens to be well-behaved.

Noetherian Rings and Principal Ideal Domains

One cannot overestimate the importance of ideals as a fundamental concept in ring theory. They are literally the cornerstone of everything interesting in the subject, and especially the computational aspects of the field (more on that in future posts). So to study ideals, we make some basic definitions.

Definition: Let $ A \subset R$ be any subset of the ring $ R$. The ideal generated by $ A$ is the smallest ideal of $ R$ containing $ A$, denoted $ I(A)$. It is “smallest” in the sense that all ideals containing $ A$ must also contain $ I(A)$.

Indeed, we can realize $ I(A)$ directly as the set of all possible finite linear combinations $ r_1 a_1 + \dots + r_k a_k$ where the $ r_i \in R$ and $ a_i \in A$. Such a linear combination is called an $ R$-linear combination because the “coefficients” come from $ R$. It is not hard to see that this is an ideal. It is obviously a subgroup since sums of linear combinations are also linear combinations, and negation distributes across the sum. It satisfies multiplicative closure because

$ r(r_1a_1 + \dots + r_k a_k) = (rr_1)a_1 + \dots + (rr_k)a_k$

is another $ R$-linear combination.
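In $ \mathbb{Z}$ this has a familiar face: the ideal generated by a finite set of integers consists exactly of the multiples of their greatest common divisor. A quick check of my own:

from math import gcd
from functools import reduce
from itertools import product

A = (12, 20)
g = reduce(gcd, A)  # 4
# sample the Z-linear combinations r_1 * 12 + r_2 * 20 with small coefficients
combos = {sum(r * a for r, a in zip(rs, A))
          for rs in product(range(-5, 6), repeat=len(A))}
assert all(c % g == 0 for c in combos)  # every combination is a multiple of gcd(A)
assert g in combos  # and Bezout says the gcd itself is such a combination

This is exactly the statement that every ideal of $ \mathbb{Z}$ is principal, a notion we name below.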

One convenient way to think of this ideal is as the intersection of all ideals $ I$ which contain $ A$. Since the intersection of any family of ideals is an ideal (check this!) this will always give us the smallest possible ideal containing $ A$.

One important bit of notation is that if $ I = I(A)$ is the ideal generated by a finite set $ A$, then we write $ I = \left \langle a_1, \dots, a_n \right \rangle$ where $ A = \left \{ a_1, \dots, a_n \right \}$. We say that $ I$ is finitely generated. If $ A$ happens to be a singleton, we say that $ I$ is a principal ideal. Following the notation of linear algebra, a minimal (by cardinality) generating set for an ideal is called a basis for the ideal. 

Thinking about how ideals are generated is extremely important both in the pure theory of rings and in the computational algorithms that deal with them. It’s so important, in fact, that there are entire classes of rings defined by how their ideals behave. We give the two most important ones here.

Definition: A commutative ring is called Noetherian if all of its ideals are finitely generated. An integral domain is called a principal ideal domain if all of its ideals are principal.

The concept of a Noetherian ring is a particularly juicy one, and it was made famous by the founding mother of commutative ring theory, Emmy Noether. Without going into too much detail, just as an integral domain is the most faithful abstraction of the ring of integers, a Noetherian ring is the best way to think about polynomial rings (and quotients of polynomial rings). This is highlighted by a few basic facts and some deep theorems:

Fact: If $ R$ is a Noetherian ring and $ I$ an ideal, then $ R/I$ is Noetherian.

This follows from the correspondence of ideals between $ R$ and $ R/I$. Indeed, for every ideal $ J \subset R/I$ there is an ideal $ \pi^{-1}(J)$ of $ R$ which contains $ I$. Moreover, this correspondence is a bijection. So if we want a generating set for $ J$, we can lift it up to $ R$ via $ \pi$, get a finite generating set for $ \pi^{-1}(J)$, and project the generators back down to $ R/I$. Some of the generators will be killed off (sent to $ I$ by $ \pi$), but what remains will be a valid generating set for $ J$, and still finite.

Theorem (Hilbert Basis Theorem): If $ R$ is a Noetherian ring, then the polynomial ring in one variable $ R[x]$ is Noetherian. Inductively, $ R[x_1, \dots, x_n]$ is also Noetherian.

These theorems start to lay the foundation for algebraic geometry, which connects ideals generated by a family of polynomials to the geometric solution set of those polynomials. Since a vast array of industrial problems can be reduced to solving systems of polynomial equations (take robot motion planning, for example), it would be quite convenient if we could write programs to reason about those systems of equations. Indeed, such algorithms do exist, and we make heavy use of theorems like the Hilbert Basis Theorem to ensure their correctness.

In the next post in this series, we’ll start this journey through elementary algebraic geometry by defining a variety, and establishing its relationship to ideals of polynomial rings. We’ll then work towards theorems like Hilbert’s Nullstellensatz, the computational operations we wish to perform on ideals, and the algorithms that carry out those operations. As usual, we’ll eventually see some applications and write some code.

Until then!