(Finite) Fields — A Primer

So far on this blog we’ve given some introductory notes on a few kinds of algebraic structures in mathematics (most notably groups and rings, but also monoids). Fields are the next natural step in the progression.

If the reader is comfortable with rings, then a field is extremely simple to describe: it’s just a commutative ring with $ 0 \neq 1$ in which every nonzero element has a multiplicative inverse. We’ll give a list of all of the properties that go into this “simple” definition in a moment, but an even simpler way to describe a field is as a place where “arithmetic makes sense.” That is, you get operations for $ +, -, \cdot, /$ which satisfy the expected properties of addition, subtraction, multiplication, and division. So whatever the objects in your field are (and sometimes they are quite weird objects), they behave like usual numbers in a very concrete sense.

So here’s the official definition of a field. We call a set $ F$ a field if it is endowed with two binary operations addition ($ +$) and multiplication ($ \cdot$, or just symbol juxtaposition) that have the following properties:

  • There is an element we call 0 which is the identity for addition.
  • Addition is commutative and associative.
  • Every element $ a \in F$ has a corresponding additive inverse $ b$ (which may equal $ a$) for which $ a + b = 0$.

These three properties are just the axioms of a (commutative) group, so we continue:

  • There is an element we call 1 (distinct from 0) which is the identity for multiplication.
  • Multiplication is commutative and associative.
  • Every nonzero element $ a \in F$ has a corresponding multiplicative inverse $ b$ (which may equal $ a$) for which $ ab = 1$.
  • Addition and multiplication distribute across each other as we expect.

If we exclude the existence of multiplicative inverses, these properties make $ F$ a commutative ring, and so we have the following chain of inclusions that describes it all

$ \displaystyle \textup{Fields} \subset \textup{Commutative Rings} \subset \textup{Rings} \subset \textup{Commutative Groups} \subset \textup{Groups}$

The standard examples of fields are the real numbers $ \mathbb{R}$, the rationals $ \mathbb{Q}$, and the complex numbers $ \mathbb{C}$. But of course there are many many more. The first natural question to ask about fields is: what can they look like?

For example, can there be any finite fields? A field $ F$ which as a set has only finitely many elements?

As we saw in our studies of groups and rings, the answer is yes! The simplest example is the set of integers modulo some prime $ p$. We call them $ \mathbb{Z} / p \mathbb{Z},$ or sometimes just $ \mathbb{Z}/p$ for short, and let’s rederive what we know about them now.

As a set, $ \mathbb{Z}/p$ consists of the integers $ \left \{ 0, 1, \dots, p-1 \right \}$. The addition and multiplication operations are easy to define, they’re just usual addition and multiplication followed by a modulus. That is, we add by $ a + b \mod p$ and multiply with $ ab \mod p$. This thing is clearly a commutative ring (because the integers form a commutative ring), so to show this is a field we need to show that everything has a multiplicative inverse.

There is a nice fact about finite commutative rings that allows us to do this: an element $ a$ has an inverse if and only if the only way for it to divide zero is the trivial way $ 0a = 0$. Here’s a proof. For one direction, suppose $ a$ divides zero nontrivially, that is, there is some $ c \neq 0$ with $ ac = 0$. Then if $ a$ had an inverse $ b$, we’d get $ 0 = b(ac) = (ba)c = c$, but that’s very embarrassing for $ c$ because it claimed to be nonzero. Now suppose $ a$ only divides zero in the trivial way. Then look at all possible ways to multiply $ a$ by the other nonzero elements of the ring. No two can give you the same result, because if $ ax = ay$ then (without using multiplicative inverses) $ a(x-y) = 0$, but we know that $ a$ can only divide zero in the trivial way, so $ x = y$. In other words, the map “multiplication by $ a$” is injective. Because the set of nonzero elements of the ring is finite, the map has to hit everything (it is in fact a bijection), and so some $ x$ will give you $ ax = 1$.

Now let’s use this fact on $ \mathbb{Z}/p$ in the obvious way. Since $ p$ is a prime, Euclid’s lemma tells us that if $ p$ divides a product $ ab$ then it divides $ a$ or $ b$. So if $ a$ and $ b$ are both nonzero mod $ p$, their product $ ab$ can’t be divisible by $ p$, which is to say it can’t be zero in $ \mathbb{Z}/p$. So $ \mathbb{Z}/p$ has no nontrivial zero divisors, and so every nonzero element has an inverse, and so it’s a finite field with $ p$ elements.
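
If you want to see this concretely, here’s a quick sketch in Python (my own illustration, not from the original post; the prime 13 and the brute-force search are chosen purely for demonstration). The loop mirrors the bijection argument above, and the pow call uses Fermat’s little theorem as a faster alternative:

p = 13  # any prime will do; 13 is an arbitrary choice for the demo

def inverse_brute_force(a, p):
    """Find b with a*b = 1 (mod p) by scanning all nonzero residues."""
    for b in range(1, p):
        if (a * b) % p == 1:
            return b
    raise ValueError("%d has no inverse mod %d (is p really prime?)" % (a, p))

for a in range(1, p):
    b = inverse_brute_force(a, p)
    assert (a * b) % p == 1
    assert b == pow(a, p - 2, p)  # Fermat's little theorem: a^(p-1) = 1 mod p

print("every nonzero element of Z/%d has a multiplicative inverse" % p)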

The next question is obvious: can we get finite fields of other sizes? The answer turns out to be yes, but you can’t get finite fields of any size. Let’s see why.

Characteristics and Vector Spaces

Say you have a finite field $ k$ (lower-case k is the standard letter for a field, so let’s forget about $ F$). Because the field is finite, if you take 1 and keep adding it to itself you’ll eventually run out of field elements. That is, $ n = 1 + 1 + \dots + 1 = 0$ at some point. How do I know it’s zero and doesn’t just keep cycling, never hitting zero? Well, if the sums of $ n$ ones and of $ m$ ones are ever equal with $ n > m$, then the sum of $ n - m$ ones is zero.

Now we define $ \textup{char}(k)$, the characteristic of $ k$, to be the smallest $ n$ (a sum of $ n$ copies of 1) for which $ n = 0$. If there is no such $ n$ (this can happen if $ k$ is infinite, but doesn’t always happen for infinite fields), then we say the characteristic is zero. It would probably make more sense to say the characteristic is infinite, but that’s just the way it is. Of course, for finite fields the characteristic is always positive. So what can we say about this number? We have seen lots of examples where it’s prime, but is it always prime? It turns out the answer is yes!

For suppose $ n = \textup{char}(k)$ were composite, say $ n = ab$ with $ 1 < a, b < n$. By the minimality of $ n$, the field elements $ a$ and $ b$ (the sums of $ a$ and $ b$ copies of 1) are nonzero, but their product is $ ab = n = 0$. This can’t happen by our above observation, because being a zero divisor means you have no inverse! Contradiction, sucker.

But it might happen that there are elements of $ k$ that can’t be written as $ 1 + 1 + \dots + 1$ for any number of terms. We’ll construct examples in a minute (in fact, we’ll classify all finite fields), but we already have a lot of information about what those fields might look like. Indeed, since every field has 1 in it, we just showed that every finite field contains a smaller field (a subfield) of all the ways to add 1 to itself. Since the characteristic is prime, the subfield is a copy of $ \mathbb{Z}/p$ for $ p = \textup{char}(k)$. We call this special subfield the prime subfield of $ k$.

The relationship between the possible other elements of $ k$ and the prime subfield is very neat. Because think about it: if $ k$ is your field and $ F$ is your prime subfield, then the elements of $ k$ can interact with $ F$ just like any other field elements. But if we only remember the addition on $ k$ and let the elements of $ F$ act on $ k$ by multiplication (as “scalars”), then $ k$ becomes a vector space over $ F$! In fact, whenever you have two fields $ k \subset k'$, the latter has the structure of a vector space over the former.

Back to finite fields, $ k$ is a vector space over its prime subfield, and now we can bring all the power and might of linear algebra to bear on it. What’s its dimension? Finite, because $ k$ is a finite set! Call the dimension $ m$; then we get a basis $ v_1, \dots, v_m$. Then the crucial part: every element of $ k$ has a unique representation in terms of the basis. So every element can be expanded in the form

$ \displaystyle f_1v_1 + \dots + f_mv_m$

where the $ f_i$ come from $ F$. But now, since these are all just field operations, every possible choice for the $ f_i$ has to give you a different field element. And how many choices are there for the $ f_i$? Each one has exactly $ |F| = \textup{char}(k) = p$ choices. And so by counting we get that $ k$ has $ p^m$ many elements.

This is getting exciting quickly, but we have to pace ourselves! This is a constraint on the possible size of a finite field, but can we realize it for all choices of $ p, m$? The answer is again yes, and in the next section we’ll see how.  But reader be warned: the formal way to do it requires a little bit of familiarity with ideals in rings to understand the construction. I’ll try to avoid too much technical stuff, but if you don’t know what an ideal is, you should expect to get lost (it’s okay, that’s the nature of learning new math!).

Constructing All Finite Fields

Let’s describe a construction. Take a finite field $ k$ of characteristic $ p$, and say you want to make a field of size $ p^m$. What we need to do is construct a field extension, that is, find a bigger field containing $ k$ so that the vector space dimension of our new field over $ k$ is exactly $ m$.

What you can do is first form the ring of polynomials with coefficients in $ k$. This ring is usually denoted $ k[x]$, and it’s easy to check it’s a ring (polynomial addition and multiplication are defined in the usual way). Now if I were speaking to a mathematician I would say, “From here you take an irreducible monic polynomial $ p(x)$ of degree $ m$, and quotient your ring by the principal ideal generated by $ p$. The result is the field we want!”

In less compact terms, the idea is exactly the same as modular arithmetic on integers. Instead of doing arithmetic with integers modulo some prime (an irreducible integer), we’re doing arithmetic with polynomials modulo some irreducible polynomial $ p(x)$. Now you see the reason I used $ p$ for a polynomial, to highlight the parallel thought process. What I mean by “modulo a polynomial” is that you divide some element $ f$ in your ring by $ p$ as much as you can, until the degree of the remainder is smaller than the degree of $ p(x)$, and that remainder is the representative of $ f$ in your quotient. The division algorithm guarantees that we can do this no matter what $ k$ is (in the formal parlance, $ k[x]$ is called a Euclidean domain for this very reason). In still other words, the “quotient structure” tells us that two polynomials $ f, g \in k[x]$ are considered to be the same in $ k[x] / p$ if and only if $ f - g$ is divisible by $ p$. This is exactly the same definition as for $ \mathbb{Z}/p$, with polynomials replacing numbers, and if you haven’t already you can start to imagine why people decided to study rings in general.

Let’s do a specific example to see what’s going on. Say we’re working with $ k = \mathbb{Z}/3$ and we want to compute a field of size $ 27 = 3^3$. First we need to find a monic irreducible polynomial of degree $ 3$. For now, I just happen to know one: $ p(x) = x^3 - x + 1$. In fact, we can check it’s irreducible, because to be reducible it would have to have a linear factor and hence a root in $ \mathbb{Z}/3$. But it’s easy to see that if you compute $ p(0), p(1), p(2)$ and take (mod 3) you never get zero.
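
That check is small enough to do in your head, but here’s a tiny Python sketch of it anyway (mine, not the post’s), just to make the “no roots, hence no linear factors, hence irreducible” reasoning concrete for this cubic:

p = lambda x: x**3 - x + 1
print([p(x) % 3 for x in range(3)])   # [1, 1, 1] -- never zero, so no roots mod 3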

So I’m calling this new ring

$ \displaystyle \frac{\mathbb{Z}/3[x]}{(x^3 - x + 1)}$

It happens to be a field, and we can argue it with a whole lot of ring theory. First, we know an irreducible element of $ \mathbb{Z}/3[x]$ is also prime (because that ring is a unique factorization domain), and prime elements generate maximal ideals (because it’s a principal ideal domain), and if you quotient a commutative ring by a maximal ideal you get a field.

But if we want to avoid that kind of argument and just focus on this ring, we can explicitly construct inverses. Say you have a polynomial $ f(x)$, and for illustration purposes we’ll choose $ f(x) = x^4 + x^2 - 1$. Now in the quotient ring we could do polynomial long division to find remainders, but another trick is just to notice that the quotient is equivalent to the condition that $ x^3 = x - 1$. So we can reduce $ f(x)$ by applying this rule to $ x^4 = x^3 x$ to get

$ \displaystyle f(x) = x^2 + x(x-1) - 1 = 2x^2 - x - 1$
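
This substitution rule is mechanical enough to hand to a computer. Here’s a minimal Python sketch (my own, not from the original post) that stores a polynomial as a list of coefficients, lowest degree first, and repeatedly rewrites the top power using the rule $ x^3 = x - 1$ while reducing coefficients mod 3:

def reduce_mod(f, p=3):
    """Reduce f modulo x^3 - x + 1 over Z/p by substituting x^3 -> x - 1."""
    f = [c % p for c in f]
    while len(f) > 3:
        c, d = f.pop(), len(f)          # c is the leading coefficient, of degree d
        f[d - 2] = (f[d - 2] + c) % p   # replace c*x^d with c*x^(d-3)*(x - 1)
        f[d - 3] = (f[d - 3] - c) % p
    return f

print(reduce_mod([-1, 0, 1, 0, 1]))     # x^4 + x^2 - 1  ->  [2, 2, 2]

The output $ [2, 2, 2]$ is the polynomial $ 2x^2 + 2x + 2$, which is the same thing as $ 2x^2 - x - 1$ once you remember that $ -1 = 2$ mod 3.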

Now what’s the inverse of $ f(x)$? Well we need a polynomial $ g(x) = ax^2 + bx + c$ whose product with $ f$ gives us something which is equivalent to 1, after you reduce by $ x^3 - x + 1$. A few minutes of algebra later and you’ll discover that this is equivalent to the following polynomial being identically 1

$ \displaystyle (a-b+2c)x^2 + (-3a+b-c)x + (a - 2b - c) = 1$

In other words, we get a system of linear equations which we need to solve:

$ \displaystyle \begin{aligned} a & - & b & + & 2c & = 0 \\ -3a & + & b & - & c &= 0 \\ a & - & 2b & - & c &= 1 \end{aligned}$

And from here you can solve with your favorite linear algebra techniques. This is a good exercise for working in fields, because you get to abuse the prime subfield having characteristic 3 to say terrifying things like $ -1 = 2$ and $ 6b = 0$. The end result is that the inverse polynomial is $ x^2 + 2x + 2$, and if you were really determined you could write a program to compute these linear systems for any input polynomial and ensure they’re all solvable. We prefer the ring theoretic proof.
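
If you aren’t determined enough to write the linear-algebra program but still want a sanity check, a brute-force search over all 27 residues works fine at this size. The sketch below (mine, not the post’s; same lowest-degree-first coefficient convention as before) multiplies polynomials mod 3, reduces by $ x^3 - x + 1$, and looks for the residue whose product with $ f$ is 1:

from itertools import product

P = 3

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def reduce_mod(f):
    """Reduce modulo x^3 - x + 1 over Z/3 via the substitution x^3 -> x - 1."""
    f = [c % P for c in f]
    while len(f) > 3:
        c, d = f.pop(), len(f)
        f[d - 2] = (f[d - 2] + c) % P
        f[d - 3] = (f[d - 3] - c) % P
    return (f + [0, 0, 0])[:3]           # pad so every residue has 3 coefficients

f = reduce_mod([-1, 0, 1, 0, 1])         # x^4 + x^2 - 1, reduced to 2x^2 - x - 1
for g in product(range(P), repeat=3):    # all 27 residues c0 + c1*x + c2*x^2
    if reduce_mod(polymul(f, list(g))) == [1, 0, 0]:
        print("inverse found:", list(g)) # prints [2, 2, 1], i.e. x^2 + 2x + 2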

In any case, it’s clear that taking a polynomial ring like this and quotienting by a monic irreducible polynomial gives you a field. We just control the size of that field by choosing the degree of the irreducible polynomial to our satisfaction. And that’s how we get all finite fields!

One Last Word on Irreducible Polynomials

One thing we’ve avoided is the question of why monic irreducible polynomials of every degree $ m$ exist over any $ \mathbb{Z}/p$ (and, as a consequence, why we can actually construct finite fields of all possible sizes $ p^m$).

The answer requires a bit of group theory to prove, but it turns out that the polynomial $ x^{p^m} - x$ has every monic irreducible polynomial of degree $ m$ among its factors. But perhaps a better question (for computer scientists) is how do we work over a finite field in practice? One way is to work with polynomial arithmetic as we described above, but this has some downsides: it requires us to compute these irreducible monic polynomials (which doesn’t sound so hard, maybe), to do polynomial long division every time we multiply, and to compute inverses by solving a linear system.
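
For small parameters the “compute an irreducible monic polynomial” step really isn’t hard: you can simply try every monic polynomial of degree $ m$ and rule out the ones with a nontrivial monic divisor. The Python sketch below (my own, purely illustrative; real libraries use much smarter irreducibility tests) does exactly that, again storing coefficients lowest degree first:

from itertools import product

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def polyrem(f, g, p):
    """Remainder of f divided by the monic polynomial g, over Z/p."""
    f = trim([c % p for c in f])
    g = trim([c % p for c in g])
    while len(f) >= len(g) and any(f):
        shift, lead = len(f) - len(g), f[-1]
        for i, c in enumerate(g):
            f[shift + i] = (f[shift + i] - lead * c) % p
        f = trim(f)
    return f

def is_irreducible(f, p):
    m = len(f) - 1
    for d in range(1, m // 2 + 1):               # a reducible f has a monic factor of degree <= m/2
        for tail in product(range(p), repeat=d):
            g = list(tail) + [1]                 # a candidate monic divisor of degree d
            if polyrem(f, g, p) == [0]:
                return False
    return True

def find_irreducible(p, m):
    for tail in product(range(p), repeat=m):
        f = list(tail) + [1]                     # a monic polynomial of degree m
        if is_irreducible(f, p):
            return f

print(find_irreducible(3, 3))   # prints a monic irreducible cubic over Z/3; the post used x^3 - x + 1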

But we can do better for some special finite fields, say where the characteristic is 2 (smells like binary) or we’re only looking at $ \mathbb{F}_{p^2}$. The benefit there is that we aren’t forced to use polynomials. We can come up with some other kind of structure (say, matrices of a special form) which happens to have the same field structure and makes computing operations relatively painless. We’ll see how this is done in the future, and see it applied to cryptography when we continue with our series on elliptic curve cryptography.

Until then!

Rings — A Second Primer

Last time we defined and gave some examples of rings. Recapping, a ring is an abelian group with an additional multiplication operation that “plays nicely” with addition. The important thing to remember is that a ring is intended to remind us of arithmetic with integers (though not too much: multiplication in a ring need not be commutative). We proved some basic properties, like zero being unique and negation being well-behaved. We gave a quick definition of an integral domain as a place where the only way to multiply two things to get zero is when one of the multiplicands was already zero, and of a Euclidean domain where we can perform nice algorithms like the one for computing the greatest common divisor. Finally, we saw a very important example: the ring of polynomials.

In this post we’ll take a small step upward from looking at low-level features of rings, and start considering how general rings relate to each other. The reader familiar with this blog will notice many similarities between this post and our post on group theory. Indeed, the definitions here will be “motivated” by an attempt to replicate the same kinds of structures we found helpful in group theory (subgroups, homomorphisms, quotients, and kernels). And because rings are also abelian groups, we will play fast and loose with a number of the proofs here by relying on facts we proved in our two-post series on group theory. The ideas assume a decidedly distinct flavor here (mostly in ideals), and in future posts we will see how this affects the computational aspects in more detail.

Homomorphisms and Sub-structures

The first question we should ask about rings is: what should mappings between rings look like? For groups they were set-functions which preserved the group operation ($ f(xy) = f(x)f(y)$ for all $ x,y$). The idea of preserving algebraic structure translates to rings, but as rings have two operations we must preserve both.

Definition: Let $ (R, +_R, \cdot_R), (S, +_S, \cdot_S)$ be rings, and let $ f: R \to S$ be a function on the underlying sets. We call $ f$ a ring homomorphism if $ f$ is a group homomorphism of the underlying abelian groups, $ f(1_R) = 1_S$ (recall that all of our rings have identities), and for all $ x,y \in R$ we have $ f(x \cdot_R y) = f(x) \cdot_S f(y)$. A bijective ring homomorphism is called an isomorphism.

Indeed, we have the usual properties of ring homomorphisms that we would expect. All ring homomorphisms preserve the additive and multiplicative identities, and they send units to units (so multiplicative inverses are preserved when they exist). We leave these verifications to the reader.

Ring homomorphisms have the same kinds of constructions as group homomorphisms did. One of particular importance is the kernel $ \textup{ker} f$, which is the preimage of zero under $ f$, or equivalently the set $ \left \{ x : f(x) = 0 \right \}$. This is the same definition as for group homomorphisms, linear maps, etc.

The second question we want to ask about rings is: what is the appropriate notion of a sub-structure for rings? One possibility is obvious.

Definition: A subring $ S$ of a ring $ R$ is a subset $ S \subset R$ which is also a ring under the same operations as those of $ R$ (and with the same identities).

Unfortunately subrings do not have the properties we want them to have, and this is mostly because of the requirement that our rings have a multiplicative identity. For instance, we want to say that the kernel of a ring homomorphism $ f: R \to S$ is a subring of $ R$. This will obviously not be the case, because the multiplicative identity never maps to zero under a ring homomorphism! We also want an appropriate notion of a quotient ring, but if we quotient out (“declare to be zero”) a subring, then the identity will become zero, which yields all sorts of contradictions in all but the most trivial of rings. One nice thing, however, is that the image of a ring homomorphism $ R \to S$ is a subring of $ S$. It is a trivial exercise to prove.

Rather than modify our definition of a kernel or a quotient to make it work with the existing definition of a subring, we pick a different choice of an appropriate sub-structure: the ideal.

The Study of Ideals

From an elementary perspective, an ideal is easiest understood as a generalization of even numbers. Even integers have this nice property that multiplying any integer by an even integer gives you an even integer. This is a kind of closure property that mathematicians really love to think about, because properties like it tend to lead to further interesting properties and constructions. Indeed, we can generalize it as follows:

Definition: Let $ R$ be a commutative ring, and $ I \subset R$. We call $ I$ an ideal if two conditions are satisfied:

  1. $ I$ is a subgroup of $ R$ under addition.
  2. For all $ r \in R$ and for all $ x \in I$, we have $ rx \in I$.

That is, an ideal is a subgroup closed under multiplication. The even integers as a subset of $ \mathbb{Z}$ give a nice example, as does the set $ \left \{ 0 \right \}$. In fact, every ideal contains zero by being a subgroup. A slightly more complicated example is the set of polynomials divisible by $ (x-1)$ as a subset of the polynomial ring $ \mathbb{R}[x]$.

We have to be a little careful here if our ring is not commutative. Technically this definition above is for a left-ideal, which is closed under left-multiplication. You can also define right-ideals closed under right-multiplication, and the official name for a plain old “ideal” is a two-sided ideal. The only time we will ever work with noncommutative rings (that we can envision) is with matrix rings, so excluding that case our ideals will forevermore be two-sided. Oftentimes we will use the notation $ rI$ to denote the set of all possible left multiplications $ \left \{ rx : x \in I \right \}$, and so the ideal condition is just $ rI \subset I$. This is a multiplicative kind of coset, although as we are about to see we will use both additive and multiplicative cosets in talking about rings.

Let’s look at some properties of ideals that show how neatly this definition encapsulates what we want. First,

Proposition: The kernel of a ring homomorphism is an ideal.

Proof. Let $ f: R \to S$ be a ring homomorphism, and let $ I = \textup{ker}(f)$. We already know that $ I$ is a subgroup of $ R$ under addition because kernels of group homomorphisms are subgroups. To show the ideal condition, simply take $ r \in R, x \in I$ and note that $ f(rx) = f(r)f(x) = f(r)0 = 0$, and so $ rx \in \textup{ker}(f)$. $ \square$

In fact the correspondence is one-to-one: every ideal is the kernel of a ring homomorphism and every ring homomorphism’s kernel is an ideal. This is not surprising as it was the case for groups, and the story starts here with quotients, too.

Definition: Let $ R$ be a ring and $ I$ an ideal of $ R$. The quotient group $ R/I$ forms a ring called the quotient ring, and is still denoted by $ R / I$.

To show this definition makes any sense requires some thought: what are the operations of this new ring? Are they well-defined on coset representatives?

For the first, we already know the addition operation because it is the same operation when we view $ R / I$ as a quotient group; that is, $ (x + I) + (y + I) = (x+y) + I$. The additive identity is just $ 0 + I = I$. The multiplication operation is similar: $ (x + I)(y + I) = xy + I$. And the multiplicative identity is clearly $ 1 + I$.

The fact that multiplication works as we said it does above gives more motivation for the definition of an ideal. To prove it, pick any representatives $ x + z_1 \in x + I$ and $ y + z_2 \in y + I$. Their product is

$ (xy + xz_2 + yz_1 + z_1z_2) \in xy + xI + yI + I^2$

where we denote by $ I^2 = \left \{ xy : x \in I, y \in I \right \}$. The condition for an ideal to be an ideal is precisely that the weird parts, the $ xI + yI + I^2$, all land inside $ I$. And indeed, $ I$ is an additive group, so sums of things in $ I$ are in $ I$, and moreover $ I$ is closed under multiplication by arbitrary elements of $ R$. More rigorously, $ xy + xz_2 + yz_1 + z_1z_2$ is equivalent to $ xy$ under the coset equivalence relation because their difference $ xz_2 + yz_1 + z_1z_2$ is a member of $ I$.
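
As a concrete sanity check, here’s a tiny Python sketch (not from the original post) for the most familiar case, $ R = \mathbb{Z}$ and $ I = 6\mathbb{Z}$: no matter which representatives you pick for two cosets, adding and multiplying them lands you in the same coset.

n = 6
coset = lambda x: x % n            # the canonical representative of x + nZ

x, y = 2, 5
for k1 in range(-3, 4):            # x + n*k1 is just another representative of x + nZ
    for k2 in range(-3, 4):
        x2, y2 = x + n * k1, y + n * k2
        assert coset(x2 + y2) == coset(x + y)
        assert coset(x2 * y2) == coset(x * y)

print("the operations on Z/6Z don't depend on the chosen representatives")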

Now we can realize that every ideal is the kernel of some homomorphism. Indeed, if $ I$ is an ideal of $ R$, then there is a natural map $ \pi : R \to R/I$ (called the canonical projection, see our post on universal properties for more) defined by sending an element to its coset: $ \pi(x) = x+ I$. It should be obvious to the reader that the kernel of $ \pi$ is exactly $ I$ (because $ x + i = 0 + I$ if and only if $ x \in I$; this is a fact we borrow from groups). And so there is a correspondence between kernels and ideals. They are really the same concept manifested in two different ways.

Because we have quotient rings and ring homomorphisms, we actually get a number of “theorems” for free. These are the isomorphism theorems for rings, the most important one being the analogue of the First Isomorphism Theorem for groups. That is, if $ f: R \to S$ is a ring homomorphism, $ \textup{im}(f)$ is a subring of $ S$, and moreover $ R/ \textup{ker}(f) \cong \textup{im}(f)$. We invite the reader to prove it by hand (start by defining a map $ \varphi(x + I) = f(x)$ and prove it’s a ring isomorphism). There are some other isomorphism theorems, but they’re not particularly deep; rather, they’re just commonly-applied corollaries of the first isomorphism theorem. In light of this blog’s discussions on category theory, the isomorphism theorems are a trivial consequence of the fact that rings have quotients, and that $ \textup{im}(f)$ happens to be well-behaved.

Noetherian Rings and Principal Ideal Domains

One cannot overestimate the importance of ideals as a fundamental concept in ring theory. They are literally the cornerstone of everything interesting in the subject, and especially the computational aspects of the field (more on that in future posts). So to study ideals, we make some basic definitions.

Definition: Let $ A \subset R$ be any subset of the ring $ R$. The ideal generated by $ A$ is the smallest ideal of $ R$ containing $ A$, denoted $ I(A)$. It is “smallest” in the sense that all ideals containing $ A$ must also contain $ I(A)$.

Indeed, we can realize $ I(A)$ directly as the set of all possible finite linear combinations $ r_1 a_1 + \dots + r_k a_k$ where the $ r_i \in R$ and $ a_i \in A$. Such a linear combination is called an $ R$-linear combination because the “coefficients” come from $ R$. It is not hard to see that this is an ideal. It is obviously a subgroup since sums of linear combinations are also linear combinations, and negation distributes across the sum. It satisfies multiplicative closure because

$ r(r_1a_1 + \dots + r_k a_k) = (rr_1)a_1 + \dots + (rr_k)a_k$

is another $ R$-linear combination.

One convenient way to think of this ideal is as the intersection of all ideals $ I$ which contain $ A$. Since the intersection of any family of ideals is an ideal (check this!) this will always give us the smallest possible ideal containing $ A$.

One important bit of notation is that if $ I = I(A)$ is the ideal generated by a finite set $ A$, then we write $ I = \left \langle a_1, \dots, a_n \right \rangle$ where $ A = \left \{ a_1, \dots, a_n \right \}$. We say that $ I$ is finitely generated. If $ A$ happens to be a singleton, we say that $ I$ is a principal ideal. Borrowing (and slightly abusing) the language of linear algebra, a generating set for an ideal is often called a basis for the ideal, even though there is no independence requirement.
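
In the integers this machinery is already familiar: the ideal $ \left \langle a, b \right \rangle$ of all $ \mathbb{Z}$-linear combinations of $ a$ and $ b$ is exactly the principal ideal generated by $ \gcd(a,b)$, and the extended Euclidean algorithm produces the combination that witnesses it. Here’s a short Python sketch of that fact (mine, not the post’s; the numbers 12 and 42 are arbitrary):

from math import gcd

def extended_gcd(a, b):
    """Return (g, r1, r2) with r1*a + r2*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

a, b = 12, 42
g, r1, r2 = extended_gcd(a, b)
assert g == gcd(a, b) == r1 * a + r2 * b        # here 6 == (-3)*12 + 1*42

# every Z-linear combination of a and b is a multiple of g, so <a, b> = <g>
combos = {s * a + t * b for s in range(-10, 11) for t in range(-10, 11)}
assert all(c % g == 0 for c in combos)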

Thinking about how ideals are generated is extremely important both in the pure theory of rings and in the computational algorithms that deal with them. It’s so important, in fact, that there are entire classes of rings defined by how their ideals behave. We give the two most important ones here.

Definition: A commutative ring is called Noetherian if all of its ideals are finitely generated. An integral domain is called a principal ideal domain if all of its ideals are principal.

The concept of a Noetherian ring is a particularly juicy one, and it was made famous by the founding mother of commutative ring theory, Emmy Noether. Without going into too much detail, just as an integral domain is the most faithful abstraction of the ring of integers, a Noetherian ring is the best way to think about polynomial rings (and quotients of polynomial rings). This is highlighted by a few basic facts and some deep theorems:

Fact: If $ R$ is a Noetherian ring and $ I$ an ideal, then $ R/I$ is Noetherian.

This follows from the correspondence of ideals between $ R$ and $ R/I$. Indeed, for every ideal $ J \subset R/I$ there is an ideal $ \pi^{-1}(J)$ of $ R$ which contains $ I$. Moreover, this correspondence is a bijection. So if we want a generating set for $ J$, we can lift it up to $ R$ via $ \pi$, get a finite generating set for $ \pi^{-1}(J)$, and project the generators back down to $ R/I$. Some of the generators will be killed off (sent to $ I$ by $ \pi$), but what remains will be a valid generating set for $ J$, and still finite.

Theorem (Hilbert Basis Theorem): If $ R$ is a Noetherian ring, then the polynomial ring in one variable $ R[x]$ is Noetherian. Inductively, $ R[x_1, \dots, x_n]$ is also Noetherian.

These theorems start to lay the foundation for algebraic geometry, which connects ideals generated by a family of polynomials to the geometric solution set of those polynomials. Since a vast array of industrial problems can be reduced to solving systems of polynomial equations (take robot motion planning, for example), it would be quite convenient if we could write programs to reason about those systems of equations. Indeed, such algorithms do exist, and we make heavy use of theorems like the Hilbert Basis Theorem to ensure their correctness.

In the next post in this series, we’ll start this journey through elementary algebraic geometry by defining a variety, and establishing its relationship to ideals of polynomial rings. We’ll then work towards theorems like Hilbert’s Nullstellensatz, the computational operations we wish to perform on ideals, and the algorithms that carry out those operations. As usual, we’ll eventually see some applications and write some code.

Until then!

Rings — A Primer

Previously on this blog, we’ve covered two major kinds of algebraic objects: the vector space and the group. There are at least two more fundamental algebraic objects every mathematician should know something about. The first, and the focus of this primer, is the ring. The second, which we’ve mentioned briefly in passing on this blog, is the field. There are a few others important to the pure mathematician, such as the $ R$-module (here $ R$ is a ring). These do have some nice computational properties, but in order to even begin to talk about them we need to know about rings.

A Very Special Kind of Group

Recall that an abelian group $ (G, +)$ is a set $ G$ paired with a commutative and associative binary operation $ +$, where $ G$ has a special identity element called 0 which acts as an identity for $ +$, and every element has an additive inverse. The archetypal example of an abelian group is, of course, the integers $ \mathbb{Z}$ under addition, with zero playing the role of the identity element.

The easiest way to think of a ring is as an abelian group with more structure. This structure comes in the form of a multiplication operation which is “compatible” with the addition coming from the group structure.

Definition: A ring $ (R, +, \cdot)$ is a set $ R$ which forms an abelian group under $ +$ (with additive identity 0), and has an additional associative binary operation $ \cdot$ with an element 1 serving as a (two-sided) multiplicative identity. Furthermore, $ \cdot$ distributes over $ +$ in the sense that for all $ x,y,z \in R$

$ x(y+z) = xy + xz$ and $ (y+z)x = yx + zx$

The most important thing to note is that multiplication need not be commutative, both in the general definition and in many rings one meets in practice. If multiplication is commutative, then the ring is called commutative. Some easy examples of commutative rings include rings of numbers like $ \mathbb{Z}, \mathbb{Z}/n\mathbb{Z}, \mathbb{Q}, \mathbb{R}$, which are just the abelian groups we know and love with multiplication added on.

If the reader takes anything away from this post, it should be the following:

Rings generalize arithmetic with integers.

Of course, this would imply that all rings are commutative, but this is not the case. More meaty and tempestuous examples of rings are very visibly noncommutative. One of the most important examples is the ring of matrices. In particular, denote by $ M_n(\mathbb{R})$ the set of all $ n \times n$ matrices with real-valued entries. This forms a ring under addition and multiplication of matrices, and has as a multiplicative identity the $ n \times n$ identity matrix $ I_n$.
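
If you want to see the noncommutativity in the flesh, here’s about the smallest possible demonstration in Python (not from the original post; numpy is used only for the matrix product):

import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)                           # [[1, 0], [0, 0]]
print(B @ A)                           # [[0, 0], [0, 1]]
print(np.array_equal(A @ B, B @ A))    # False: M_2(R) is not commutative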

Commutative rings are much more well-understood than noncommutative rings, and the study of the former is called commutative algebra. This is the main prerequisite for fields like algebraic geometry, which (in the simplest examples) associate commutative rings to geometric objects.

For us, all rings will have an identity, but many ring theorists will point out that one can just as easily define a ring to not have a multiplicative identity. We will call these non-unital rings, and will rarely, if ever, see them on this blog.

Another very important example of a concrete ring is the polynomial ring in $ n$ variables with coefficients in $ \mathbb{Q}$ or $ \mathbb{R}$. This ring is denoted with square brackets denoting the variables, e.g. $ \mathbb{R}[x_1, x_2, \dots , x_n]$. We rest assured that the reader is familiar with addition and multiplication of polynomials, and that this indeed forms a ring.

Kindergarten Math

Let’s start with some easy properties of rings. We will denote our generic ring by $ R$.

First, the multiplicative identity of a ring is unique. The proof is exactly the same as it was for groups, but note that identities must be two-sided for this to work. If $ 1, 1'$ are two identities, then $ 1 = 1 \cdot 1' = 1'$.

Next, we prove that $ 0a = a0 = 0$ for all $ a \in R$. Indeed, by the fact that multiplication distributes across addition, $ 0a = (0 + 0)a = 0a + 0a$, and additively canceling $ 0a$ from both sides gives $ 0a = 0$. An identical proof works for $ a0$.

In fact, pretty much any “obvious property” from elementary arithmetic is satisfied for rings. For instance, $ -(-a) = a$ and $ (-a)b = a(-b) = -(ab)$ and $ (-1)^2 = 1$ are all trivial to prove. Here is a list of these and more properties which we invite the reader to prove.

Zero Divisors, Integral Domains, and Units

One thing that is very much not automatically given in the general theory of rings is multiplicative cancellation. That is, if I have $ ac = bc$ then it is not guaranteed to be the case that $ a = b$. It is quite easy to come up with examples in modular arithmetic on integers; if $ R = \mathbb{Z}/8\mathbb{Z}$ then $ 2*6 = 6*6 = 4 \mod 8$, but $ 2 \neq 6$.

The reason for this phenomenon is that many rings have elements that lack multiplicative inverses. In $ \mathbb{Z}/8\mathbb{Z}$, for instance, $ 2$ has no multiplicative inverse (and neither does 6). Indeed, one is often interested in determining which elements are invertible in a ring and which elements are not. In a seemingly unrelated issue, one is interested in determining whether one can multiply any given element $ x \in R$ by some nonzero $ y$ to get zero. It turns out that these two classes of elements are disjoint, and closely related to our further inspection of special classes of rings.
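
Before making that precise, here’s a quick Python sketch (mine, not the post’s) that enumerates both kinds of elements in $ \mathbb{Z}/8\mathbb{Z}$. Notice that the two lists are disjoint, and in this particular ring they happen to cover everything:

n = 8
units         = [x for x in range(n) if any((x * y) % n == 1 for y in range(n))]
zero_divisors = [x for x in range(n) if any((x * y) % n == 0 for y in range(1, n))]

print(units)          # [1, 3, 5, 7]
print(zero_divisors)  # [0, 2, 4, 6]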

Definition: An element $ x$ of a ring $ R$ is said to be a left zero-divisor if there is some $ y \neq 0$ such that $ xy = 0$. Similarly, $ x$ is a right zero-divisor if there is some $ z \neq 0$ for which $ zx = 0$. If $ x$ is a left and right zero-divisor (e.g. if $ R$ is commutative), it is just called a zero-divisor.

Definition: Let $ x,y \in R$. The element $ y$ is said to be a left inverse to $ x$ if $ yx = 1$, and a right inverse if $ xy = 1$. If there is some $ z$ for which $ xz = zx = 1$, then $ z$ is called a two-sided inverse (or simply the inverse) of $ x$, and $ x$ is called a unit.

As a quick warmup, we prove that if $ x$ has a left  and a right inverse then it has a two-sided inverse. Indeed, if $ zx = 1 = xy$, then $ z = z(xy) = (zx)y = y$, so in fact the left and right inverses are the same.

The salient fact here is that having a (left- or right-) inverse allows one to do (left- or right-) cancellation, since obviously when $ ac = bc$ and $ c$ has a right inverse, we can multiply $ acc^{-1} = bcc^{-1}$ to get $ a=b$. We will usually work with two-sided inverses and zero-divisors (since we will usually work in a commutative ring). But in non-commutative rings, like rings of matrices, one-sided phenomena do run rampant, and one must distinguish between them.

The right way to relate these two concepts is as follows. If $ c$ has a right inverse, then define the right-multiplication function $ (- \cdot c) : R \to R$ which takes $ x$ and spits out $ xc$. In fact, this function is an injection. Indeed, we already proved that (because $ c$ has a right inverse) if $ xc = yc$ then $ x = y$. In particular, there is a unique preimage of $ 0$ under this map. Since $ 0c = 0$ is always true, it must be the case that the only way to multiply something on the right by $ c$ and get zero is $ 0c = 0$. That is, $ c$ is not a right zero-divisor if right-multiplication by $ c$ is injective. On the other hand, if the map is not injective, then there are some $ x \neq y$ such that $ xc = yc$, implying $ (x-y)c = 0$, and this proves that $ c$ is a right zero-divisor. We can do exactly the same argument with left-multiplication.

But there is one minor complication: what if right-multiplication is injective, but $ c$ has no inverses? It’s not hard to come up with an example: 2 as an element of the ring of integers $ \mathbb{Z}$ is a perfectly good one. It’s neither a zero-divisor nor a unit.

This basic study of zero-divisors gives us some natural definitions:

Definition: A division ring is a ring in which every nonzero element has a two-sided inverse.

If we allow that $ R$ is commutative, we get something even better (more familiar: $ \mathbb{Q}, \mathbb{R}, \mathbb{C}$ are the standard examples of fields).

Definition: A field is a nonzero commutative division ring.

The “nonzero” part here is just to avoid the case when the ring is the trivial ring (sometimes called the zero ring) with one element, i.e., the set $ \left \{ 0 \right \}$, which is a ring in which zero satisfies both the additive and multiplicative identities. The zero ring is excluded from being a field for silly reasons: elegant theorems will hold for all fields except the zero ring, and it would be messy to require every theorem to add the condition that the field in question is nonzero.

We will have much more to say about fields later on this blog, but for now let’s just state one very non-obvious and interesting result in non-commutative algebra, known as Wedderburn’s Little Theorem.

Theorem: Every finite division ring is a field.

That is, simply having finitely many elements in a division ring is enough to prove that multiplication is commutative. Pretty neat stuff. We will actually see a simpler version of this theorem in a moment.

Now, as we saw, units and zero-divisors are disjoint, but not quite opposites of each other. Since we have defined a division ring as a ring where all nonzero elements are units, it is natural to define a ring in which the only zero-divisor is zero. This is considered a natural generalization of our favorite ring $ \mathbb{Z}$, hence the name “integral.”

Definition: An integral domain is a commutative ring in which zero is the only zero-divisor.

Note the requirement that the ring is commutative. Often we will simply call it a domain, although most authors allow domains to be noncommutative.

Already we can prove a very nice theorem:

Theorem: Every finite integral domain is a field.

Proof. Integral domains are commutative by definition, and so it suffices to show that every nonzero element has an inverse. Let $ R$ be our integral domain in question, and $ x \in R$ the nonzero element whose inverse we seek. By our discussion above, right multiplication by $ x$ is an injective map $ R \to R$, and since $ R$ is finite this map must be a bijection. Hence $ x$ must have some $ y \neq 0$ so that $ yx = 1$. And so $ y$ is the inverse of $ x$.

$ \square$

We could continue traveling down this road of studying special kinds of rings and their related properties, but we won’t often use these ideas on this blog. We do think the reader should be familiar with the names of these special classes of rings, and we will state the main theorems relating them.

Definition: A nonzero element $ p \in R$ which is not a unit is called prime if whenever $ p$ divides a product $ ab$ it either divides $ a$ or $ b$ (or both). A unique factorization domain (abbreviated UFD) is an integral domain in which every nonzero non-unit can be written as a product of primes, uniquely up to reordering the factors and multiplying them by units.

Definition: A Euclidean domain is an integral domain in which the division algorithm can be performed. That is, there is a norm function $ | \cdot | : R \setminus \left \{ 0 \right \} \to \mathbb{N}$, for which every pair $ a,b$ with $ b \neq 0$ can be written as $ a = bq + r$ where either $ r = 0$ or $ |r| < |b|$.

Paolo Aluffi has a wonderful diagram showing the relations among the various special classes of integral domains. This image comes from his book, Algebra: Chapter 0, which is a must-have for the enterprising mathematics student interested in algebra.

[Diagram from Aluffi’s Algebra: Chapter 0 showing the inclusions among the special classes of integral domains]

In terms of what we have already seen, this diagram says that every field is a Euclidean domain, and in turn every Euclidean domain is a unique factorization domain. These are standard, but non-trivial theorems. We will not prove them here.

The two big areas in this diagram we haven’t yet mentioned on this blog are PIDs and Noetherian domains. The reason for that is because they both require a theory of ideals in rings (perhaps most briefly described as a generalization of the even numbers). We will begin next time with a discussion of ideals, and their important properties in studying rings, but before we finish we want to focus on one main example that will show up later on this blog.

Polynomial Rings

Let us formally define the polynomial ring.

Definition: Let $ R$ be a commutative ring. Define the ring $ R[x]$ to be the set of all polynomials in $ x$ with coefficients in $ R$, where addition and multiplication are the usual addition and multiplication of polynomials. We will often call $ R[x]$ the polynomial ring in one variable over $ R$.

We will often replace $ x$ by some other letter representing an “indeterminate” variable, such as $ t$, or $ y$, or multiple indexed variables as in the following definition.

Definition: Let $ R$ be a commutative ring. The ring $ R[x_1, x_2, \dots, x_n]$ is the set of all polynomials in the $ n$ variables with the usual addition and multiplication of polynomials.

What can we say about the polynomial ring in one variable $ R[x]$? Its additive and multiplicative identities are clear: the constant 0 and 1 polynomials, respectively. Other than that, we can’t quite get much more. There are some very bizarre features of polynomial rings with bizarre coefficient rings, such as multiplication decreasing degree.
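
A small Python illustration of that last bizarre feature (my own, not the post’s): over the coefficient ring $ \mathbb{Z}/4\mathbb{Z}$, which has zero-divisors, the square of the degree-one polynomial $ 2x + 1$ collapses down to the constant 1. Coefficients are stored lowest degree first:

n = 4

def polymul_mod(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % n
    return out

print(polymul_mod([1, 2], [1, 2]))   # [1, 0, 0]: (2x + 1)^2 = 4x^2 + 4x + 1 = 1 in (Z/4)[x]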

However, when we impose additional conditions on $ R$, the situation becomes much nicer.

Theorem: If $ R$ is a unique factorization domain, then so is $ R[x]$.

Proof. As we have yet to discuss ideals, we refer the reader to this proof, and recommend the reader return to it after our next primer.

$ \square$

On the other hand, we will most often be working with polynomial rings over a field. And here the situation is even better:

Theorem: If $ k$ is a field, then $ k[x]$ is a Euclidean domain.

Proof. The norm function here is precisely the degree of the polynomial (the highest power of a monomial in the polynomial). Then given $ f,g$, the usual algorithm for polynomial division gives a quotient and a remainder $ q, r$ so that $ f = qg + r$. In following the steps of the algorithm, one will note that all multiplication and division operations are performed in the field $ k$, and the remainder always has a smaller degree than the divisor $ g$. Indeed, one can explicitly describe the algorithm and prove its correctness, and we will do so in full generality in the future of this blog when we discuss computational algebraic geometry.

$ \square$
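
In the meantime, here’s a compact, unoptimized sketch of that division algorithm in Python for $ k = \mathbb{Z}/5$ (my own illustration; coefficients are stored lowest degree first). Note the one place where we genuinely need $ k$ to be a field, namely inverting the leading coefficient of the divisor:

p = 5

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def polydivmod(f, g):
    """Return (q, r) with f = q*g + r and deg(r) < deg(g), over Z/p."""
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    q = [0] * max(1, len(f) - len(g) + 1)
    inv_lead = pow(g[-1], p - 2, p)          # inverse of g's leading coefficient: needs a field!
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        coeff = (f[-1] * inv_lead) % p
        q[shift] = coeff
        for i, c in enumerate(g):
            f[shift + i] = (f[shift + i] - coeff * c) % p
        f = trim(f)
    return trim(q), f

# divide x^3 + 2x + 1 by 2x + 3 over Z/5
q, r = polydivmod([1, 2, 0, 1], [3, 2])
print(q, r)   # [4, 3, 3] [4], i.e. q = 3x^2 + 3x + 4 and r = 4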

For multiple variables, things are a bit murkier. For instance, it is not even the case that $ k[x,y]$ is a Euclidean domain. One of the strongest things we can say originates from this simple observation:

Lemma: $ R[x,y]$ is isomorphic to $ R[x][y]$.

We haven’t quite yet talked about isomorphisms of rings (we will next time), but the idea is clear: every polynomial in two variables $ x,y$ can be thought of as a polynomial in $ y$ where the coefficients are polynomials in $ x$ (gathered together by factoring out common factors of $ y^k$). Similarly, $ R[x_1, \dots, x_n]$ is the same thing as $ R[x_1, \dots, x_{n-1}][x_n]$ by induction. This allows us to prove that any polynomial ring is a unique factorization domain:

Theorem: If $ R$ is a UFD, so is $ R[x_1, \dots, x_n]$.

Proof. $ R[x]$ is a UFD as described above. By the lemma, $ R[x_1, \dots, x_n] = R[x_1, \dots, x_{n-1}][x_n]$ so by induction $ R[x_1, \dots, x_{n-1}]$ is a UFD implies $ R[x_1, \dots, x_n]$ is as well.

$ \square$

We’ll be very interested in exactly how to compute useful factorizations of polynomials into primes when we start our series on computational algebraic geometry. Some of the applications include robot motion planning and automated theorem proving.

Next time we’ll visit the concept of an ideal, see quotient rings, and work toward proving Hilbert’s Nullstellensatz, a fundamental result in algebraic geometry.

Until then!

Z[√2] has Infinitely Many Units

Note, while the problem below arose in ring theory (specifically, Euclidean domains), the proof itself is elementary, and so the title should not scare away any viewers. In fact, we boil the problem down to something which requires no knowledge of abstract algebra at all.

Problem: Show that the ring $ \mathbb{Z}[\sqrt{2}]$ has infinitely many units.

Solution: An element of $ \mathbb{Z}[\sqrt{2}]$ has the form $ a+b\sqrt{2}$ for integers $ a, b$, and there is a function $ N: \mathbb{Z}[\sqrt{2}] \to \mathbb{Z}$ defined by $ N(a+b\sqrt{2}) = a^2 - 2b^2$ satisfying $ N(uv) = N(u)N(v)$, and from this it’s easy to see $ u$ is a unit if and only if $ N(u) = \pm 1$.

So we have reduced this problem to proving there are infinitely many integer solutions to the equation $ a^2 - 2b^2 = \pm 1$. A quick check with some small numbers provides at least one solution: $ a=3, b=2$. An attempt at proof by contradiction seems to bear no fruit, and it’s unclear how to continue. It would be great to have some more examples of solutions, so that one could study a pattern, so let us write a short program to search for solutions (in Mathematica):

L = Flatten[Table[{i,j,i^2 - 2 j^2},{i,1,1000},{j,1,1000}], 1];
Select[L, (Abs[#[[3]]] == 1)&]

Here we construct a table of all possible differences $ i^2-2j^2$ as $ i,j$ range over the integers from 1 to 1000 (flattening the list appropriately), and then we select those whose values are $ \pm 1$. Here is the result of evaluating the above expression:

{{1, 1, -1}, {3, 2, 1}, {7, 5, -1}, {17, 12, 1},
 {41, 29, -1}, {99, 70, 1}, {239, 169, -1}, {577, 408, 1}}

i.e., the triple $ \left \{ 99, 70, 1 \right \}$ represents the solution $ 99^2 - 2(70^2) = 1$. Of course, these don’t represent all solutions, because we could take any of these solutions and stick a minus sign in front of either number as we wish, leaving their squares unchanged. In any event, the numbers appear random, and it takes a few moments before one is inspired enough to see a pattern: each pair can be written in terms of the pair before it! Take a moment to see if you can determine how.

Now we conjecture: if $ (a, b)$ is a positive solution, then the pair $ (a+2b, a+b)$ gives a new solution! Doing the needed algebra,

$ (a+2b)^2 - 2(a+b)^2 = a^2 + 4b^2 + 4ab - 2(a^2 + b^2 + 2ab) = -a^2 + 2b^2$

which is of course $ -(a^2 - 2b^2)$, the negative of our original value, and our two new numbers are strictly larger in absolute value, so we just generated a bona fide new solution! Continuing in this manner indefinitely will clearly produce an infinite sequence of distinct solutions, proving the theorem.
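
You can watch the recurrence churn out the exact table we found above with a few lines of Python (mine, not the post’s), starting from the solution $ (1, 1)$:

a, b = 1, 1
for _ in range(8):
    print((a, b, a * a - 2 * b * b))   # the third entry alternates between -1 and 1
    a, b = a + 2 * b, a + b

Each pair $ (a, b)$ in the output corresponds to the unit $ a + b\sqrt{2}$ of $ \mathbb{Z}[\sqrt{2}]$.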

Now we get a bunch of extra things for free: If there is any element $ x$ of our ring which has $ N(x) = k$, then there are infinitely many such elements. In other words, for any integer $ k$, if there is a single solution to the integer equation $ a^2 - 2b^2 = k$, then there are infinitely many. Going back to the ring $ \mathbb{Z}[\sqrt{2}]$, the natural next step is to show that this process generates all such units, after appropriately including the negative counterparts. (Or does it? We leave it to the reader to decide. Can you find a closed form for the units?)

This proof is nice for two reasons. First, it uses the heavy machinery of algebraic structures to give elegant proofs of difficult-to-prove elementary statements (this is an algebraist’s dream). Second, when at a loss, we used a computer program to give us a leg up. It would have taken quite a while even to find the solution {17, 12, 1}. But after spending two minutes writing a computer program, we got a definitive answer for all numbers up to a thousand. We were able to study the pattern, make a conjecture, and then the proof of the conjecture turned out to be trivial. The inspiration to see the pattern was the hard part.

Of course, one cannot always just “extrapolate” the solutions to difficult theorems by looking at tables of numbers. There are plenty of open problems which have been verified by computers well into the hundreds of thousands of digits, but which still remain unsolved in general. But as we have shown, computers can give one a nudge in the right direction, and if there’s nobody to provide a helpful hint (and no solution to look up online), a program may provide the most efficient (perhaps even the most elegant) route to a solution.