Rings — A Primer

Previously on this blog, we’ve covered two major kinds of algebraic objects: the vector space and the group. There are at least two more fundamental algebraic objects every mathematician should know something about. The first, and the focus of this primer, is the ring. The second, which we’ve mentioned briefly in passing on this blog, is the field. There are a few others important to the pure mathematician, such as the R-module (here R is a ring). These do have some nice computational properties, but in order to even begin to talk about them we need to know about rings.

A Very Special Kind of Group

Recall that an abelian group (G, +) is a set G paired with a commutative, associative binary operation +, where G has a special identity element called 0 which acts as an identity for +, and every element has an additive inverse. The archetypal example of an abelian group is, of course, the integers \mathbb{Z} under addition, with zero playing the role of the identity element.

The easiest way to think of a ring is as an abelian group with more structure. This structure comes in the form of a multiplication operation which is “compatible” with the addition coming from the group structure.

Definition: A ring (R, +, \cdot) is a set R which forms an abelian group under + (with additive identity 0), and has an additional associative binary operation \cdot with an element 1 serving as a (two-sided) multiplicative identity. Furthermore, \cdot distributes over + in the sense that for all x,y,z \in R

x(y+z) = xy + xz and (y+z)x = yx + zx

The most important thing to note is that multiplication need not be commutative, either in the general definition or in many rings one meets in practice. If multiplication is commutative, then the ring is called commutative. Some easy examples of commutative rings include rings of numbers like \mathbb{Z}, \mathbb{Z}/n\mathbb{Z}, \mathbb{Q}, \mathbb{R}, which are just the abelian groups we know and love with multiplication added on.

If the reader takes anything away from this post, it should be the following:

Rings generalize arithmetic with integers.

Of course, this would imply that all rings are commutative, but this is not the case. More meaty and tempestuous examples of rings are very visibly noncommutative. Among the most important examples are rings of matrices. In particular, denote by M_n(\mathbb{R}) the set of all n \times n matrices with real-valued entries. This forms a ring under addition and multiplication of matrices, and has as a multiplicative identity the n \times n identity matrix I_n.
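To see the noncommutativity concretely, here is a quick sanity check one could run in Python; the helper mat_mul and the particular 2 \times 2 matrices below are just an illustrative choice, not anything canonical.

def mat_mul(A, B):
    # multiply two square matrices given as nested lists
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]], a different matrix, so AB != BA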

Commutative rings are much better understood than noncommutative rings, and the study of the former is called commutative algebra. This is the main prerequisite for subjects like algebraic geometry, which (in the simplest examples) associate commutative rings to geometric objects.

For us, all rings will have an identity, but many ring theorists will point out that one can just as easily define a ring to not have a multiplicative identity. We will call these non-unital rings, and will rarely, if ever, see them on this blog.

Another very important example of a concrete ring is the polynomial ring in n variables with coefficients in \mathbb{Q} or \mathbb{R}. This ring is denoted with square brackets enclosing the variables, e.g. \mathbb{R}[x_1, x_2, \dots , x_n]. We trust that the reader is familiar with addition and multiplication of polynomials, and that this indeed forms a ring.

Kindergarten Math

Let’s start with some easy properties of rings. We will denote our generic ring by R.

First, the multiplicative identity of a ring is unique. The proof is exactly the same as it was for groups, but note that identities must be two-sided for this to work. If 1, 1' are two identities, then 1 = 1 \cdot 1' = 1'.

Next, we prove that 0a = a0 = 0 for all a \in R. Indeed, by the fact that multiplication distributes across addition, 0a = (0 + 0)a = 0a + 0a, and additively canceling 0a from both sides gives 0a = 0. An identical proof works for a0.

In fact, pretty much any “obvious property” from elementary arithmetic is satisfied for rings. For instance, -(-a) = a and (-a)b = a(-b) = -(ab) and (-1)^2 = 1 are all trivial to prove. Here is a list of these and more properties which we invite the reader to prove.

Zero Divisors, Integral Domains, and Units

One thing that is very much not automatic in the general theory of rings is multiplicative cancellation. That is, if we have ac = bc it is not guaranteed that a = b. It is quite easy to come up with examples in modular arithmetic on integers; if R = \mathbb{Z}/8\mathbb{Z} then 2 \cdot 6 = 6 \cdot 6 = 4 \mod 8, but 2 \neq 6.
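For the skeptical reader, this failure of cancellation takes only a couple of lines of Python to check:

# cancellation fails in Z/8Z: 2*6 and 6*6 agree mod 8, yet 2 != 6
n = 8
print((2 * 6) % n, (6 * 6) % n)  # both print 4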

The reason for this phenomenon is that many rings have elements that lack multiplicative inverses. In \mathbb{Z}/8\mathbb{Z}, for instance, 2 has no multiplicative inverse (and neither does 6). Indeed, one is often interested in determining which elements of a ring are invertible and which are not. In a seemingly unrelated question, one is interested in determining whether a given element x \in R can be multiplied by some nonzero y to get zero. It turns out that no element can satisfy both conditions at once, and both conditions are closely related to our further inspection of special classes of rings.

Definition: An element x of a ring R is said to be a left zero-divisor if there is some y \neq 0 such that xy = 0. Similarly, x is a right zero-divisor if there is some z \neq 0 for which zx = 0. If x is both a left and a right zero-divisor (e.g. if R is commutative), it is just called a zero-divisor.

Definition: Let x,y \in R. The element y is said to be a left inverse to x if yx = 1, and a right inverse if xy = 1. If there is some z for which xz = zx = 1, then z is called a two-sided inverse (or just the inverse) of x, and x is called a unit.

As a quick warmup, we prove that if x has a left and a right inverse then it has a two-sided inverse. Indeed, if zx = 1 = xy, then z = z(xy) = (zx)y = y, so in fact the left and right inverses are the same.

The salient fact here is that having a (left- or right-) inverse allows one to do (left- or right-) cancellation, since obviously when ac = bc and c has a right inverse, we can multiply acc^{-1} = bcc^{-1} to get a=b. We will usually work with two-sided inverses and zero-divisors (since we will usually work in a commutative ring). But in non-commutative rings, like rings of matrices, one-sided phenomena do run rampant, and one must distinguish between them.

The right way to relate these two concepts is as follows. If c has a right inverse, then define the right-multiplication function (- \cdot c) : R \to R which takes x and spits out xc. In fact, this function is an injection. Indeed, we already proved that (because c has a right inverse) if xc = yc then x = y. In particular, 0 has a unique preimage under this map, and since 0c = 0 always holds, the only element x with xc = 0 is x = 0. That is, c is not a right zero-divisor whenever right-multiplication by c is injective. On the other hand, if the map is not injective, then there are some x \neq y such that xc = yc, implying (x-y)c = 0, and this proves that c is a right zero-divisor. We can do exactly the same argument with left-multiplication.
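To make this discussion concrete, here is a short Python sketch (the function name classify is our own invention) that sorts the elements of \mathbb{Z}/n\mathbb{Z} into units and zero-divisors by inspecting the multiplication-by-c map directly; since this ring is commutative, one map suffices.

def classify(n):
    # label each c in Z/nZ by examining the map x -> x*c mod n
    result = {}
    for c in range(n):
        image = [(x * c) % n for x in range(n)]
        if 1 in image:
            result[c] = 'unit'
        elif any(x != 0 and (x * c) % n == 0 for x in range(n)):
            result[c] = 'zero-divisor'
        else:
            result[c] = 'neither'  # never fires in Z/nZ for n > 1, but can in infinite rings
    return result

print(classify(8))
# {0: 'zero-divisor', 1: 'unit', 2: 'zero-divisor', 3: 'unit',
#  4: 'zero-divisor', 5: 'unit', 6: 'zero-divisor', 7: 'unit'}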

But there is one minor complication: what if right-multiplication is injective, but c has no inverses? It’s not hard to come up with an example: 2 as an element of the ring of integers \mathbb{Z} is a perfectly good one. It’s neither a zero-divisor nor a unit.

This basic study of zero-divisors gives us some natural definitions:

Definition: A division ring is a ring in which every nonzero element has a two-sided inverse.

If we additionally require that R is commutative, we get something even better (and more familiar: \mathbb{Q}, \mathbb{R}, \mathbb{C} are the standard examples of fields).

Definition: A field is a nonzero commutative division ring.

The “nonzero” part here is just to avoid the case when the ring is the trivial ring (sometimes called the zero ring) with one element, i.e., the set \left \{ 0 \right \}, which is a ring in which zero serves as both the additive and the multiplicative identity. The zero ring is excluded from being a field for a silly but practical reason: elegant theorems hold for all fields except the zero ring, and it would be messy to require every theorem to add the condition that the field in question is nonzero.

We will have much more to say about fields later on this blog, but for now let’s just state one very non-obvious and interesting result in non-commutative algebra, known as Wedderburn’s Little Theorem.

Theorem: Every finite division ring is a field.

That is, simply having finitely many elements in a division ring is enough to prove that multiplication is commutative. Pretty neat stuff. We will actually see a simpler version of this theorem in a moment.

Now, as we saw, units and zero-divisors are disjoint, but not quite opposites of each other. Since we have defined a division ring as a ring in which all nonzero elements are units, it is natural to also define a ring in which the only zero-divisor is zero. This is considered a natural generalization of our favorite ring \mathbb{Z}, hence the name “integral.”

Definition: An integral domain is a commutative ring in which zero is the only zero-divisor.

Note the requirement that the ring is commutative. Often we will simply call it a domain, although most authors allow domains to be noncommutative.

Already we can prove a very nice theorem:

Theorem: Every finite integral domain is a field.

Proof. Integral domains are commutative by definition, so it suffices to show that every nonzero element has an inverse. Let R be our integral domain in question, and x \in R a nonzero element whose inverse we seek. By our discussion above, right multiplication by x is an injective map R \to R, and since R is finite this map must be a bijection. Hence there is some y \neq 0 with yx = 1, and so y is the inverse of x.

\square

We could continue traveling down this road of studying special kinds of rings and their related properties, but we won’t often use these ideas on this blog. We do think the reader should be familiar with the names of these special classes of rings, and we will state the main theorems relating them.

Definition: A nonzero element p \in R which is not a unit is called prime if whenever p divides a product ab it divides either a or b (or both). A unique factorization domain (abbreviated UFD) is an integral domain in which every nonzero non-unit element can be written as a product of primes, uniquely up to reordering the factors and multiplying them by units.

Definition: A Euclidean domain is an integral domain in which the division algorithm can be performed. That is, there is a norm function | \cdot | : R \setminus \left \{ 0 \right \} \to \mathbb{N}, for which every pair a,b with b \neq 0 can be written as a = bq + r with r = 0 or |r| < |b|.
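The integers with the absolute-value norm are the archetypal Euclidean domain, and the payoff of having a division algorithm is that the familiar Euclidean algorithm for greatest common divisors works. A minimal sketch in Python:

def gcd(a, b):
    # repeatedly replace (a, b) by (b, r), where a = bq + r and |r| < |b|
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(84, 30))  # 6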

Paolo Aluffi has a wonderful diagram showing the relations among the various special classes of integral domains. This image comes from his book, Algebra: Chapter 0, which is a must-have for the enterprising mathematics student interested in algebra.

[Diagram from Aluffi’s Algebra: Chapter 0 showing the nested special classes of integral domains: fields, Euclidean domains, PIDs, UFDs, Noetherian domains, and general integral domains.]

In terms of what we have already seen, this diagram says that every field is a Euclidean domain, and in turn every Euclidean domain is a unique factorization domain. These are standard, but non-trivial theorems. We will not prove them here.

The two big areas in this diagram we haven’t yet mentioned on this blog are PIDs and Noetherian domains. The reason is that they both require a theory of ideals in rings (perhaps most briefly described as a generalization of the even numbers). We will begin next time with a discussion of ideals and their important properties in studying rings, but before we finish we want to focus on one main example that will show up later on this blog.

Polynomial Rings

Let us formally define the polynomial ring.

Definition: Let R be a commutative ring. Define the ring R[x] to be the set of all polynomials in x with coefficients in R, where addition and multiplication are the usual addition and multiplication of polynomials. We will often call R[x] the polynomial ring in one variable over R.

We will often replace x by some other letter representing an “indeterminate” variable, such as t, or y, or multiple indexed variables as in the following definition.

Definition: Let R be a commutative ring. The ring R[x_1, x_2, \dots, x_n] is the set of all polynomials in the n variables with the usual addition and multiplication of polynomials.

What can we say about the polynomial ring in one variable R[x]? Its additive and multiplicative identities are clear: the constant 0 and 1 polynomials, respectively. Beyond that, we can’t say much in general. There are some very bizarre features of polynomial rings over poorly behaved coefficient rings, such as multiplication decreasing degree.
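For instance, here is a small Python sketch of polynomial multiplication with coefficients in \mathbb{Z}/4\mathbb{Z} (polynomials are stored as coefficient lists, lowest degree first; the function name is our own choice), exhibiting a product whose degree is smaller than the sum of the degrees of the factors:

def poly_mul_mod(f, g, n):
    # multiply two polynomials, reducing coefficients mod n
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] = (result[i + j] + a * b) % n
    return result

# (1 + 2x)(1 + 2x) = 1 + 4x + 4x^2 = 1 in Z/4Z[x]: the degree drops from 2 to 0
print(poly_mul_mod([1, 2], [1, 2], 4))  # [1, 0, 0]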

However, when we impose additional conditions on R, the situation becomes much nicer.

Theorem: If R is a unique factorization domain, then so is R[x].

Proof. As we have yet to discuss ideals, we refer the reader to this proof, and recommend the reader return to it after our next primer.

\square

On the other hand, we will most often be working with polynomial rings over a field. And here the situation is even better:

Theorem: If k is a field, then k[x] is a Euclidean domain.

Proof. The norm function here is precisely the degree of the polynomial (the largest power of x appearing with a nonzero coefficient). Then given f,g, the usual algorithm for polynomial long division gives a quotient and a remainder q, r so that f = qg + r. In following the steps of the algorithm, one will note that all multiplication and division operations are performed in the field k, and the remainder is either zero or has smaller degree than the divisor g. Indeed, one can explicitly describe the algorithm and prove its correctness, and we will do so in full generality in the future of this blog when we discuss computational algebraic geometry.

\square
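For the curious, here is a rough Python sketch of the division step over the field k = \mathbb{Q}, using exact rational arithmetic from the standard library. The representation (coefficient lists, lowest degree first) and the function name are our own choices, and this is a sketch rather than an optimized implementation.

from fractions import Fraction

def poly_divmod(f, g):
    # divide f by g, returning (q, r) with f = q*g + r and deg r < deg g (or r = 0)
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        shift = len(r) - len(g)
        coeff = r[-1] / g[-1]        # divide leading coefficients: this needs a field
        q[shift] = coeff
        for i, c in enumerate(g):
            r[i + shift] -= coeff * c
        while r and r[-1] == 0:      # drop the (now zero) leading term
            r.pop()
    return q, r

# (x^2 + 3x + 5) divided by (x + 1) gives quotient x + 2 and remainder 3
print(poly_divmod([5, 3, 1], [1, 1]))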

For multiple variables, things are a bit murkier. For instance, it is not even the case that k[x,y] is a Euclidean domain. One of the strongest things we can say originates from this simple observation:

Lemma: R[x,y] is isomorphic to R[x][y].

We haven’t quite yet talked about isomorphisms of rings (we will next time), but the idea is clear: every polynomial in the two variables x,y can be thought of as a polynomial in y whose coefficients are polynomials in x (obtained by collecting the terms with the same power of y). Similarly, R[x_1, \dots, x_n] is the same thing as R[x_1, \dots, x_{n-1}][x_n] by induction. This allows us to prove that any polynomial ring is a unique factorization domain:

Theorem: If R is a UFD, so is R[x_1, \dots, x_n].

Proof. R[x] is a UFD as described above. By the lemma, R[x_1, \dots, x_n] = R[x_1, \dots, x_{n-1}][x_n], so by induction R[x_1, \dots, x_{n-1}] is a UFD, and hence so is R[x_1, \dots, x_n].

\square
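To make the lemma feel concrete: if a polynomial in x and y is stored as a dictionary mapping exponent pairs (i, j) to the coefficient of x^i y^j, then regrouping by the power of y literally produces a polynomial in y whose coefficients are polynomials in x. A tiny Python sketch, where the dictionary representation is just one convenient choice:

def regroup_by_y(poly):
    # turn a dict {(i, j): c} into a dict {j: {i: c}}, i.e. collect by powers of y
    result = {}
    for (i, j), c in poly.items():
        result.setdefault(j, {})[i] = c
    return result

# 3x^2y + xy + 5y^2 + 7 becomes 7 + (3x^2 + x)y + 5y^2
p = {(2, 1): 3, (1, 1): 1, (0, 2): 5, (0, 0): 7}
print(regroup_by_y(p))  # {1: {2: 3, 1: 1}, 2: {0: 5}, 0: {0: 7}}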

We’ll be very interested in exactly how to compute useful factorizations of polynomials into primes when we start our series on computational algebraic geometry. Some of the applications include robot motion planning and automated theorem proving.

Next time we’ll visit the concept of an ideal, see quotient rings, and work toward proving Hilbert’s Nullstellensatz, a fundamental result in algebraic geometry.

Until then!

Groups — A Second Primer

The First Isomorphism Theorem

The meat of our last primer was a proof that quotient groups are well-defined. One important result that helps us compute groups is a very easy consequence of this well-definedness.

Recall that if G,H are groups and \varphi: G \to H is a group homomorphism, then the image of \varphi is a subgroup of H. Also the kernel of \varphi is the normal subgroup of G consisting of the elements which are mapped to the identity under \varphi. Moreover, we proved that the quotient G / \ker \varphi is a well-defined group, and that every normal subgroup N is the kernel of the quotient map G \to G/N. These ideas work together to compute groups with the following theorem. Intuitively, it tells us that the existence of a homomorphism between two groups gives us a way to relate the two groups.

Theorem: Let \varphi: G \to H be a group homomorphism. Then the quotient G/ \ker \varphi is isomorphic to the image of \varphi. That is,

G/ \ker \varphi \cong \varphi(G)

As a quick corollary before the proof, if \varphi is surjective then H \cong G / \ker \varphi.

Proof. We define an explicit map f : G/ \ker \varphi \to \varphi(G) and prove it is an isomorphism. Let g \ker \varphi be an arbitrary coset and set f(g \ker \varphi) = \varphi(g). First of all, we need to prove that this definition does not depend on the choice of a coset representative. That is, if g \ker \varphi = g' \ker \varphi, then \varphi(g) = \varphi(g'). But indeed, \varphi(g)^{-1}\varphi(g') = \varphi(g^{-1}g') = 1, since for any subgroup N we have by definition gN = g'N if and only if g^{-1}g' \in N, and here N = \ker \varphi.

It is similarly easy to verify that f is a homomorphism:

f((g \ker \varphi )(g' \ker \varphi)) = \varphi(gg') = \varphi(g)\varphi(g') = f(g \ker \varphi) f(g' \ker \varphi)

It suffices now to show that f is a bijection. It is trivially surjective, since every element of the image of \varphi has the form \varphi(g) = f(g \ker \varphi) for some g. It is injective since if f(g \ker \varphi) = 1, then \varphi(g) = 1 and hence g \in \ker \varphi, so the coset g \ker \varphi = 1 \ker \varphi is the identity element. So f is an isomorphism. \square

Let’s use this theorem to compute some interesting things.

Denote by D_{2n} the group of symmetries of the regular n-gon. That is, D_{16} is the symmetry group of the regular octagon and D_{8} is the symmetry group of the square (the 2n notation is because this group always has order 2n). We want to relate D_{16} to D_8. To do this, let’s define a homomorphism f:D_{16} \to D_8 by sending the one-eighth rotation \rho of the octagon to the one-fourth rotation of the square (which, measured as an angle, is \rho^2), and using the same reflection for both (f(\sigma) = \sigma). It is easy to check that this is a surjective homomorphism, and moreover the kernel is \left \{ 1, \rho^4 \right \}. That is, D_8 \cong D_{16}/ \left \{ 1, \rho^4 \right \}.
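For readers who like to verify such claims by machine, here is a brute-force Python sketch. We encode an element of D_{2n} as a pair (k, s): rotate by k steps, then flip if s = 1, with the usual dihedral multiplication rule \sigma \rho = \rho^{-1} \sigma. The encoding and the helper names are just one convenient choice.

from itertools import product

def dihedral(n):
    # elements of D_{2n}: n rotations, each with or without a flip
    return list(product(range(n), [0, 1]))

def compose(a, b, n):
    # (rho^{k1} sigma^{s1})(rho^{k2} sigma^{s2}) = rho^{k1 + (-1)^{s1} k2} sigma^{s1 + s2}
    (k1, s1), (k2, s2) = a, b
    return ((k1 + (-1) ** s1 * k2) % n, (s1 + s2) % 2)

def f(elem):
    # the homomorphism D_16 -> D_8: an eighth turn maps to a quarter turn
    k, s = elem
    return (k % 4, s)

D16 = dihedral(8)

# f respects the group operation...
assert all(f(compose(a, b, 8)) == compose(f(a), f(b), 4)
           for a in D16 for b in D16)

# ...and its kernel is exactly {1, rho^4}
print([g for g in D16 if f(g) == (0, 0)])  # [(0, 0), (4, 0)]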

Here is a more general example. If G, H are groups of relatively prime order, then there are no nontrivial homomorphisms G \to H. In order to see this, note that |G/ \ker \varphi| = |G| / |\ker \varphi| as a simple consequence of Lagrange’s theorem. Indeed, by the first isomorphism theorem this quantity is equal to |\varphi(G)|. So |G| = | \ker \varphi| |\varphi(G)|. That is, the order of \varphi(G) divides the order of G. But it also divides the order of H because \varphi(G) is a subgroup of H. In other words, the order of \varphi(G) is a common factor of the orders of G and H. By hypothesis, the only such number is 1, and so |\varphi(G)| = 1 and \varphi(G) is the trivial group.

We will use the first isomorphism theorem quite a bit on this blog. Because it is such a common tool, it is often used without explicitly stating the theorem.

Generators

One extremely useful way to describe a subgroup is via a set of generators. The simplest example is for a single element.

Definition: Let G be a group and x \in G. Then the subgroup generated by x, denoted \left \langle x \right \rangle, is the smallest subgroup of G containing x. More generally, if S \subset G then the subgroup generated by S is the smallest subgroup containing S.

This definition is not directly useful for computation, but a more concrete description is easy to derive. On one hand, the identity element must always be in \left \langle x \right \rangle. Since x \in \left \langle x \right \rangle and it is a subgroup, we must have that x^{-1} \in \left \langle x \right \rangle. Moreover, all powers of x must be in the subgroup, as must all powers of the inverse (equivalently, inverses of the powers). In fact that is all that is necessary. That is,

\left \langle x \right \rangle = \left \{ \dots, x^{-2}, x^{-1}, 1, x, x^2, \dots \right \}

For finite groups, only finitely many of these powers are distinct, and in fact the inverse of x is itself a positive power of x. To see this, note that if we keep taking powers of x, eventually one of them will be the identity element. Specifically, some power of x must repeat, and if x^n = x^m with n > m, then x^{n-m} = 1. Hence x^{-1} = x^{n-m-1}.
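Here is a minimal Python sketch that computes \left \langle x \right \rangle in a finite group, given its operation and identity; as an example we take x = 3 in the group of units of \mathbb{Z}/7\mathbb{Z} under multiplication, an arbitrary choice made just for illustration.

def generated_subgroup(x, op, identity):
    # collect the powers of x until we loop back around to the identity
    elements = [identity]
    current = x
    while current != identity:
        elements.append(current)
        current = op(current, x)
    return elements

print(generated_subgroup(3, lambda a, b: (a * b) % 7, 1))
# [1, 3, 2, 6, 4, 5] -- note that 3^{-1} = 5 shows up as the positive power 3^5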

Subgroups generated by more than one element are more difficult to write down. For example, if x,y \in G then x,y may have a nontrivial relationship. That is, even though all possible products involving x and y are in the subgroup, it is very difficult to determine whether two such products are the same (in fact, one very general formulation of this problem is undecidable!). Oftentimes one can find a set of generators which generates the entire group G. In this case, we say G is generated by those elements.

A familiar example is the symmetry group of the square. As it turns out, this group is generated by \rho, \sigma, where \rho is a quarter turn and \sigma is a reflection across some axis of symmetry. The relationship between the two elements is succinctly given by the equality \rho \sigma \rho \sigma = 1. To see this, try holding out your hand with your palm facing away; rotate your hand clockwise, flip it so your fingers are pointing left, rotate again so your fingers are pointing up, and then flip to get back to where you started; note that the two flips had the same axis of symmetry (the up-down axis). The other (obvious) relationships are that \rho^4 = 1 and \sigma^2 = 1. If we want to describe the group in a compact form, we write

G = \left \langle \rho, \sigma | \rho^4, \sigma^2, \rho \sigma \rho \sigma \right \rangle

This is an example of a group presentation. The left-hand side is the list of generators, and the right-hand side gives a list of relators, where each one is declared to be the identity element. In particular, the existence of a presentation with generators and relators implies that all possible relationships between the generators can be deduced from the list of relators (kind of like how all relationships between sine and cosine can be deduced from the fact that \sin^2(x) + \cos^2(x) = 1). Indeed, this is the case for the symmetry group (and all dihedral groups); there are only three distinct equations describing the behavior of rotations and reflections.

Here’s a quick definition we will often refer to in the future: a group is called cyclic if it is generated by a single element. Here are some interesting exercises for the beginning reader to puzzle over, which are considered basic facts for experienced group theorists:

  • Every subgroup of a cyclic group is cyclic.
  • There is only one infinite cyclic group: \mathbb{Z}.
  • Every finite cyclic group is isomorphic to \mathbb{Z}/n\mathbb{Z} for some n.

Finally, we will call a group finitely generated if it is generated by a finite set of elements and finitely presented if it has a presentation with finitely many generators and relators. Just to give the reader a good idea about how vast this class of groups is: many basic conjectured facts about finitely generated groups which have “only” one relator are still open problems. So trying to classify groups with two relators (or finitely many relators) is still a huge leap away from what we currently understand. As far as this author knows, this subject has been largely abandoned after a scant few results were proved.

Products and Direct Sums

Just as one does in every field of math, in order to understand groups better we want to decompose them into smaller pieces which are easier to understand. Two of the main ways to do this are via direct products and direct sums.

Definition: Let (G, \cdot_G),(H, \cdot_H) be groups. The product group G \times H is defined to have the underlying set G \times H (pairs of elements), and the operation is defined by entrywise multiplication in the appropriate group.

(g,h) \cdot (g', h') = (g \cdot_G g', h \cdot_H h')

Of course, one must verify that this operation actually defines a group according to the usual definition, but this is a simple exercise. One should note that there are two canonical subgroups of the direct product. Define by p_1 : G \times H \to G the projection onto the first coordinate (that is, (a,b) \mapsto a). This map is obviously a homomorphism, and its kernel is the subgroup of elements (1,h), h \in H. That is, we can identify H as a subgroup of G \times H. Identically, we see with p_2(a,b) = b that G is a subgroup of G \times H.
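As a quick sketch of the construction in Python (representing a group as a triple of elements, operation, and identity, a convention we adopt just for illustration):

from itertools import product

def direct_product(G, H):
    # the product group: pairs of elements, multiplied componentwise
    (G_els, G_op, G_e), (H_els, H_op, H_e) = G, H
    elements = list(product(G_els, H_els))
    op = lambda a, b: (G_op(a[0], b[0]), H_op(a[1], b[1]))
    return elements, op, (G_e, H_e)

# Z/2Z x Z/3Z with addition in each coordinate
Z2 = (range(2), lambda a, b: (a + b) % 2, 0)
Z3 = (range(3), lambda a, b: (a + b) % 3, 0)
els, op, e = direct_product(Z2, Z3)
print(op((1, 2), (1, 2)))  # (0, 1)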

Note that this allows us to make some very weird groups. For instance, by induction a single direct product allows us to define products of arbitrarily many groups. Note that reordering the terms of such a product does not change the isomorphism class of the group (e.g. \mathbb{Z} \times D_8 \cong D_8 \times \mathbb{Z}). Additionally, there is nothing that stops us from defining infinite product groups. The elements of such a group are sequences of elements from the corresponding multiplicands. For example, the group \mathbb{Z} \times \mathbb{Z} \times \dots is the group of sequences of integers, where addition is defined termwise and the identity is the sequence of all zeroes.

Now infinite products can be particularly unwieldy, but we may still want to talk about groups constructed from infinitely many pieces. Although we won’t motivate it here, one such example is the group of an elliptic curve. In order to tame the unwieldiness, we define the following construction, which takes an infinite product and allows only those elements which have finitely many non-identity terms.

Definition: Let G_{\alpha} be a family of groups. Define the direct sum of the G_{\alpha}, denoted by \bigoplus_{\alpha} G_{\alpha}, to be the subgroup of \prod_{\alpha} G_{\alpha} of elements (g_0, g_1, \dots) where all but finitely many g_i are the identity in the corresponding G_i.

[As a quick side note, this is mathematically incorrect: since the family of groups need not be countable, the g_i may not be enumerable. One can fix this by defining the elements as functions on the index set instead of sequences, but we are too lazy to do this.]

Note that we can define a direct sum of only finitely many groups, and for example this would be denoted G \oplus H, but finite sums are trivially the same as finite products. In fact, in this primer and all foreseeable work on this blog, we will stick to finite direct sums of groups, and implicitly identify them with direct products.

Finally, in terms of notation we will write G^n for a direct product of G with itself n times, and G^{\oplus n} for the direct sum of G with itself n times. As we just mentioned, these two are identical, so we will just default to the former for simplicity of reading.

One might wonder: why do we even distinguish these two constructions? The answer is somewhat deep, and we will revisit the question in our future series on category theory. For now, we can simply say that the distinction matters for infinite products and infinite sums, which we won’t discuss anyway except in passing curiosity.

The Classification of Finitely Generated Abelian Groups

Now we have finally laid enough groundwork to state the first big classification theorem of group theory. In words, it says that any finitely generated abelian group can be written as a direct sum of things isomorphic to \mathbb{Z} and \mathbb{Z}/n\mathbb{Z} for various choices of n. Moreover, the choices of n are related to each other.

In particular, the theorem is stated as follows:

Theorem: Let G be a finitely generated abelian group. Then G is isomorphic to a group of the form:

\displaystyle \mathbb{Z}^m \oplus \mathbb{Z}/p_1^{n_1}\mathbb{Z} \oplus \dots \oplus \mathbb{Z}/p_k^{n_k}\mathbb{Z}

where p_i are (not necessarily distinct) primes. Moreover, G is determined up to isomorphism by the choices of primes and exponents above.

In particular, we name these numbers as follows. The \mathbb{Z}^m part of the equation is called the free part, the exponent m is called the rank of G, and the numbers p_i^{n_i} are called the primary factors.

The proof of this theorem is beyond the scope of this blog, but any standard algebra text will have it. All we are saying here is that every finitely generated abelian group can be broken up into the part that has infinite order (the free part) and the part that has finite order (often called the torsion part), and that these parts are largely independent of each other. For finite groups this is a huge step forward: to classify all finite groups one now only needs to worry about nonabelian groups (although this is still a huge feat in its own right).
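A tiny concrete instance of the theorem: \mathbb{Z}/12\mathbb{Z} decomposes into its primary factors as \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z}. The isomorphism is x \mapsto (x \mod 4, x \mod 3), and a few lines of Python verify it by brute force (phi is just our name for the map):

def phi(x):
    # the candidate isomorphism Z/12Z -> Z/4Z + Z/3Z
    return (x % 4, x % 3)

# phi respects addition...
assert all(phi((a + b) % 12) == ((phi(a)[0] + phi(b)[0]) % 4,
                                 (phi(a)[1] + phi(b)[1]) % 3)
           for a in range(12) for b in range(12))

# ...and is a bijection onto the 12 possible pairs
assert len({phi(x) for x in range(12)}) == 12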

A quick application of this theorem is as follows:

Corollary: Let G be a finitely generated abelian group which has no nonidentity elements of finite order. Then G has no nontrivial relators except for those enforcing commutativity.

Proof. Indeed, G \cong \mathbb{Z}^m for some m, and it has a presentation \left \langle x_1, x_2, \dots, x_m | x_ix_jx_i^{-1}x_j^{-1}, i \neq j \right \rangle, which has no relators other than those enforcing commutativity. \square

Borrowing the free terminology, such groups without relators are called free abelian groups. Indeed, there are also nonabelian “free” groups, and this is the last topic we will cover in this primer.

Free Groups, and a Universal Property

Equivalently to “a group with no relators,” we can define a free group as a group which has presentation \left \langle x_{\alpha} \right \rangle for some potentially infinite family of elements x_{\alpha}. If the number of generators is finite, say there are n of them, then we call it the free group on n generators.

The interesting thing here is that all possible products of the generating elements are completely distinct. For example, the free group on two generators \left \langle a,b \right \rangle contains the elements ab, aba, abab, b^3a^{-5}b^2a, along with infinitely many others. The only way that two elements can be the same is if they can be transformed into each other by a sequence of inserting or deleting strings which are trivially the identity. For example, abb^{-1}a = a^2, but only because of our cancellations of bb^{-1} = 1, which holds in every group.
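Checking whether two words represent the same element of a free group amounts to “freely reducing” them: cancel adjacent inverse pairs until none remain, and compare. Here is a small Python sketch, writing the inverse of a generator as the corresponding capital letter (so 'A' means a^{-1}), a convention we adopt just for this example.

def reduce_word(word):
    # cancel adjacent pairs like aA or Aa using a stack
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()
        else:
            stack.append(letter)
    return ''.join(stack)

print(reduce_word('abBa'))  # 'aa', i.e. a b b^{-1} a = a^2
print(reduce_word('abAB'))  # 'abAB': the commutator does not reduce to the identity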

There is another way to get free groups, and that is by taking free products. In particular, the group \mathbb{Z} = \left \langle a \right \rangle is the free group on a single generator. Given two copies of \mathbb{Z}, we’d like to combine them in some way to get \left \langle a,b \right \rangle. More generally, given any two groups we’d like to define their free product G * H to be the group which contains all possible words using elements in G or H, which has no nontrivial relators, except for those already existing among the elements of G and H before taking a product.

Rigorously, this is very easy to do with group presentations. Take presentations G = \left \langle x_i | r_j \right \rangle and H = \left \langle y_k | s_m \right \rangle, and define the free product by the presentation

G * H = \left \langle x_i, y_k | r_j, s_m \right \rangle

For instance, the free product \mathbb{Z}/3\mathbb{Z} * \mathbb{Z}/4\mathbb{Z} has presentation \left \langle a,b | a^3, b^4 \right \rangle. One interesting fact is that even if G,H are finite, as long as one is nontrivial then the free product will be an infinite group. Another interesting fact, which we’ll explore in future posts on category theory, is that the free product of groups is “the same thing” as the disjoint union of sets. That is, these two operations play the same role in their respective categories.

This “role” is called the universal property of the free product. We will use this directly in our post on the fundamental group, and in general it just says that homomorphisms provided on the two pieces of the free product extend uniquely to a homomorphism of the product. The simpler form is the universal property of the free group, which says that the free group is the “most general possible” group which is generated by these generators.

Theorem (Universal Property of the Free Group): Let S be a set of elements, and let F(S) be the free group generated by the elements of S. Then any set-function S \to G where G is a group extends uniquely to a group homomorphism F(S) \to G.

That is, deciding where the generators should go in G carries with it all of the information needed to define a homomorphism F(S) \to G, and the homomorphism is uniquely determined in this way. To see this, simply note that once f(a), f(b) are determined, so are f(a^{-1}) = f(a)^{-1}, f(ab) = f(a)f(b), \dots

The corresponding property for free products is similar:

Theorem (Universal Property of the Free Product): Let G_{\alpha} be groups, and let f_{\alpha}: G_{\alpha} \to H be group homomorphisms. Then there is a unique group homomorphism from the free product of the G_{\alpha} to H, i.e. \ast_{\alpha} G_{\alpha} \to H.

The idea is the same as for the first universal property: the free product is the most general group containing the G_{\alpha} as subgroups, and so any group homomorphism from the product to another group is completely determined by what happens to each of the multiplicand groups.

Moreover, we can take a free product and add in extra relators in some reasonable way. This is called an amalgamated free product, and it has similar properties which we won’t bother to state here. The important part for us is that this shows up in topology and in our special ways to associate a topological space with a group.

After these two short primers, we have covered a decent chunk of group theory. Nevertheless, we have left out a lot of the important exercises and intuition that goes into this subject. In the future, we will derive group-theoretic propositions we need as we go (in the middle of other posts). On the other hand, we will continually use groups (and abelian groups, among others) as an example of a category in our exploration of category theory. Finally, when we study rings and fields we will first lay them out as abelian groups with further structural constraints. This will shorten our definitions to a manageable form.