The study of groups is often one’s first foray into advanced mathematics. In the naivete of set theory one develops tools for describing basic objects, and through a first run at analysis one develops a certain dexterity for manipulating symbols and definitions. But it is not until the study of groups that one must step back and inspect the larger picture. The main point of that picture (and indeed the main point of a group) is that algebraic structure can be found in the most unalgebraic of settings. And we mean algebraic literally: we will make sense of what it means to “multiply” things which are not even close to numbers.

It is in my mind prohibitively difficult to motivate the study of these sorts of abstract algebraic objects. Part of the problem is that in order to impress upon the reader any hint of the riches returned from investing in their study, one must already be fluent in their language (indeed, in their culture!) and recognize the commanding utility of discovering them in otherwise mysterious places.

In pure mathematics, the study of groups alone has yielded some impressive and counterintuitive results:

- There is no general formula in radicals for the roots of a polynomial of degree 5 or higher.
- Loads of interesting properties of numbers, for example that $ m!n!$ divides $ (m+n)!$.
- A classification of the symmetry types of tilings of the plane (there are only 17).
- Tools to help classify topological spaces, varieties, field extensions, and many other classes of mathematical objects.

And there are plenty of applications to the real world:

- Public-key cryptography (based on the group-theoretic treatment of elliptic curves).
- Error-detection systems on credit cards, ISBNs for books, bar codes, etc.
- Study of abstract games like the Rubik’s Cube.
- Determining properties of crystal and molecular structures.
- The formulation of modern particle physics.
- Mathematical biology, in particular determining the not-quite-icosahedral structure of a virus.

As a general rule, anything which has some kind of symmetrical structure can be “represented” by a group. We will of course make these notions clearer in this post, but before we jump into the rigorous mathematics, let us give two accessible (visual) *examples* of groups.

## The Square and the Rubik’s Cube

Imagine a square in the plane:

Specifically, we will label each of the corners of this square so that we can distinguish them from one another. One can imagine cutting this square out of the plane, doing some kind of physical manipulation, and placing it back into the same hole (so that it fills up all the same space). For instance, one can rotate the square by a quarter turn (say, counterclockwise), or reflect it across a diagonal (say, the AC diagonal) before replacing it. Neither of these two operations alters the square itself: each takes points in the square to other points in the square. In other words, it might be the case that our four labelled corners are swapped around, but if we had not labelled the corners we wouldn’t be able to tell anything had changed. More rigorously, these transformations have to preserve the distances between points, so we cannot stretch, tear, or squeeze the square in any way.

If we denote a quarter-turn rotation by $ \rho$ and a flip by $ \sigma$, we can write down a sequence of these operations like

$ \rho \sigma \rho \rho$

where we apply the operations in order from left to right. That is, the above operation is “rotate a quarter turn, then flip, then rotate twice more.” Using the same notation, here are some additional examples of these operations on the square:

In particular, we will call one of these operations a *symmetry* of the square. Now we can ask the following question, “How can we classify all of the different kinds of symmetry of the square?”

There are some obvious properties of these symmetries. First, and trivially, the operation where we “do nothing” is a valid symmetry. Second, each of these symmetries has an “opposite” symmetry. Indeed, in terms of sets these symmetries are functions, and they should be bijections. And finally, as we’ve already seen, we can *compose* any two symmetries to get another symmetry.

The salient point here is that two different compositions of symmetries can end up being the same thing. Indeed, doing the same flip twice is the same thing as doing nothing, and rotating four times is also the same thing as doing nothing.
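These coincidences are easy to check by machine as well. Here is a quick sketch in Python (the encoding of symmetries as tuples and the helper `compose` are our own illustrative choices, not part of the discussion above): a symmetry is a permutation of the corner labels $ 0, 1, 2, 3$.

```python
# Each symmetry is a tuple p, where p[i] is the position that corner i moves to.
rho = (1, 2, 3, 0)      # quarter-turn rotation
sigma = (0, 3, 2, 1)    # flip across the AC diagonal: fixes 0 and 2, swaps 1 and 3
identity = (0, 1, 2, 3)

def compose(p, q):
    """Apply p first, then q (matching the left-to-right convention above)."""
    return tuple(q[p[i]] for i in range(4))

# Doing the same flip twice is the same as doing nothing:
assert compose(sigma, sigma) == identity
# Rotating four times is also the same as doing nothing:
assert compose(compose(rho, rho), compose(rho, rho)) == identity
# And a longer word like "rho sigma rho rho" is still some symmetry of the square:
word = compose(compose(compose(rho, sigma), rho), rho)
print(word)
```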

Moreover, a symmetry of the square is completely determined by how it moves the corners around. Here’s a short sketch of a proof of this. By our requirement that distances are preserved, the corners must go to corners (they are the only points of the square at the diagonal’s distance from another point of the square), and once the images of the corners are chosen, every other point in the square is required to be a certain distance from each corner. A standard geometric argument shows that three circles with non-collinear centers can intersect in at most one point, so the image of every point is determined.

And so there is a one-to-one correspondence between symmetries of the square and the pictures of the resulting squares with labels. The important fact is that not all possible labelings can be represented in this way. For example, the reader will try in vain to find a symmetry between the two labeled squares:

We will touch more on this example later, but here’s a more complicated example.

In the same way that we can enumerate all possible symmetries of the square, we can enumerate all possible symmetries of the Rubik’s cube. But here the structure is more complicated: we can rotate any one of the six faces of the cube, and the relationships between operations are not at all obvious. The colored stickers form our labels to distinguish two configurations, but it’s not clear which (if any) stickers are superfluous. But again the same properties hold: there is a do-nothing operation, every operation is reversible, and any two operations can be composed and the result is still a valid operation.

As we will see shortly, these properties *characterize* what it means to have symmetry.

## Groups

Now that we’ve seen two examples of symmetry groups, it’s time to get abstract. The three properties of symmetries we mentioned above will form the basis for our definition:

**Definition:** A *group* $ G$ is a set with a specified binary operation (denoted here by a $ \cdot$), so that the following properties hold:

- $ G$ contains an *identity element*, denoted $ e$, for which $ e \cdot x = x$ for all $ x \in G$.
- Every $ x \in G$ has an *inverse* element $ y \in G$ for which $ x \cdot y = e$.
- $ G$ is closed under $ \cdot$. That is, for any $ x,y \in G$, $ x \cdot y$ is again in $ G$.
- The group operation is associative. That is, $ x \cdot (y \cdot z) = (x \cdot y) \cdot z$.

There are a few issues we need to get out of the way about this definition and the mess of notation associated with it, but first let’s see some trivial examples.

The singleton set $ \left \{ e \right \}$ with the binary operation $ \cdot$ defined by setting $ e \cdot e = e$ is trivially a group. In terms of symmetries there is only one operation (to do nothing), and doing nothing any number of times is the same as doing nothing.

The set of integers $ \mathbb{Z}$ forms a group under the operation of addition. It is common knowledge that zero fits the definition of the identity element, that the sum of two integers is an integer, that addition on integers is associative, and that every integer has an additive inverse. We can similarly take any of our common number systems and see that they are groups under addition: rational numbers, real numbers, complex numbers, etc. If we want to work with multiplication, it is not hard to see that $ \mathbb{R} – \left \{ 0 \right \}$ is a group, since every nonzero real number has a multiplicative inverse, and 1 is the multiplicative identity.

Here are a few basic propositions to clear up some silly ambiguities in the definition. For instance, we never say that there should only be one identity element. But indeed, if there are two identity elements $ e, e’$, then we can show they must be equal:

$ e = e \cdot e’ = e’$

The first equality holds because $ e’$ is an identity element, and the second because $ e$ is. A similar proof shows that the inverse of an element is unique. Therefore, we are justified in the following notation: we will call the identity element $ 1$, and use subscripts $ 1_G, 1_H$ to distinguish between identity elements in different groups $ G, H$. We will also conventionally replace the explicit $ \cdot$ operation with juxtaposition, and emulate the multiplication operation by saying things like $ x^n$ to mean $ x \cdot x \cdot \dots \cdot x$ multiplying $ n$ times.

Of course, if we’re talking about the integers $ \mathbb{Z}$ that won’t do, because $ \mathbb{Z}$ is only a group under addition (although multiplication is well-defined, only $ \pm 1$ have multiplicative inverses). So in the case of groups whose underlying sets are numbers, we have to be more careful. In these cases, we will write things like $ nx$ to mean $ x + x + \dots + x$ adding $ n$ times (and here $ n$ is not considered an element of $ \mathbb{Z}$ as a group, but just the number of additions), and $ -x$ for the additive inverse of $ x$. While all of this notation might sound horribly confusing, we promise that it’s easy to get used to.

Another preliminary question we might ask is: are there groups of any given set size?

**Definition:** The *order* of a group, denoted $ |G|$, is the size of $ G$ as a set. If $ G$ is an infinite set, we say it has *infinite order*. Otherwise we say it has *finite order*.

Just as a quick example, let’s take the group of integers modulo 4. That is, our set is $ \left \{ 0, 1, 2, 3 \right \}$, and the operation $ +$ is defined by adding the two operands and taking the remainder when dividing by four. It is easy to verify this is indeed a group, and it has order 4. In addition, this group has the same structure as the group of rotations of our square from earlier. In particular, if we consider the do-nothing operation to be 0, a quarter turn to be the number 1, a half turn to be 2, and a three-quarter turn to be 3, then composing two rotations corresponds to adding the two numbers, where a full 360-degree rotation is the same as doing nothing. In the formal parlance, which we’ll explain shortly, these two groups are *isomorphic*: everything about their group structure is the same, they just have different labels for their elements and operations.
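For the extra-careful reader, the group axioms for the integers modulo 4 are small enough to check exhaustively. Here is a brute-force sketch in Python (purely illustrative; `op` is our own name for the group operation):

```python
# Brute-force verification that ({0,1,2,3}, + mod 4) satisfies the group axioms.
G = [0, 1, 2, 3]

def op(a, b):
    return (a + b) % 4

assert all(op(a, b) in G for a in G for b in G)            # closure
assert all(op(0, a) == a == op(a, 0) for a in G)           # identity: 0
assert all(any(op(a, b) == 0 for b in G) for a in G)       # every element has an inverse
assert all(op(a, op(b, c)) == op(op(a, b), c)              # associativity
           for a in G for b in G for c in G)
```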

The number 4 is obviously arbitrary. We could define $ \mathbb{Z}/n\mathbb{Z}$ to be the group of integers modulo $ n$ for any $ n$. And it’s not hard to see that this group is isomorphic to the group of rotations of the regular $ n$-gon in the plane. So there are finite groups of any order we please, and we can realize them as the rotational groups of regular polygons.

These groups $ \mathbb{Z}$ and $ \mathbb{Z}/n\mathbb{Z}$ share something else in common: their operations are commutative. That is, for any two integers $ x+y = y+x$ (regardless of whether we take the modulus afterward). But group operations are not always commutative, and we have already seen an example: in the symmetry group of the square, with $ \rho$ a quarter turn and $ \sigma$ a diagonal flip, it is not true that $ \rho \sigma = \sigma \rho$.
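This is easy to verify concretely. In the sketch below (using our own encoding of the square’s symmetries as permutations of the corner labels), composing the quarter turn and the flip in the two different orders gives two different symmetries:

```python
# rho is the quarter turn, sigma the diagonal flip, as permutations of 0,1,2,3.
rho = (1, 2, 3, 0)
sigma = (0, 3, 2, 1)

def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[p[i]] for i in range(4))

# The symmetry group of the square is not abelian:
assert compose(rho, sigma) != compose(sigma, rho)
```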

**Definition:** A group $ G$ is called *abelian* (pronounced uh-BEE-lee-un) if its group operation is commutative.

Abelian groups are quite important. Part of the reason is that finite abelian groups are completely understood. This is a standard statement of classification, and it is one of the main results of group theory. We are not yet equipped to state or prove it, but in principle the idea is clear: every finite abelian group decomposes into pieces that look like $ \mathbb{Z}/n\mathbb{Z}$ for various choices of $ n$.

Before we continue we should note a few other examples of groups which come up quite often:

- $ n \times n$ matrices with real entries under the operation of entry-wise matrix addition. This group is called $ M_{n \times n}(\mathbb{R})$.
- $ n \times n$ invertible matrices with real entries under matrix multiplication. This group is often called the *general linear group*, denoted $ GL_n(\mathbb{R})$.
- The symmetry group of the regular $ n$-gon, called the *dihedral group* of order $ 2n$, denoted $ D_{2n}$.
- The *symmetric group on $ n$ letters* is the group of all permutations of a finite set of size $ n$. For example, there are twenty-four ways to permute the numbers $ 1,2,3,4$, and each of these permutations (that is, bijections $ \left \{ 1,2,3,4 \right \} \to \left \{ 1,2,3,4 \right \}$) is an element of the group. In general, the symmetric group is denoted $ S_n$ and has order $ n!$.
- The *symmetry group* of any set $ A$, denoted $ S(A)$, is the set of all bijections $ A \to A$, with the group operation being function composition. Note that if $ A$ is finite, then this is just the previously defined symmetric group on $ |A|$ letters, $ S_{|A|}$, but it is useful at times to use infinite sets.
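The symmetric group is particularly easy to play with on a computer. Here is a quick sketch (our own) using Python’s `itertools.permutations`:

```python
from itertools import permutations

# The symmetric group S_4: all bijections of {0,1,2,3}, encoded as tuples p
# with p[i] the image of i.
S4 = set(permutations(range(4)))
assert len(S4) == 24  # |S_4| = 4!

def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[p[i]] for i in range(4))

# Closure: composing any two permutations gives another permutation in S_4.
assert all(compose(p, q) in S4 for p in S4 for q in S4)
```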

## Subgroups, Group Homomorphisms, and Quotients

**Definition:** Let $ (G, \cdot)$ be a group and let $ H \subset G$ be a subset. We call $ H$ a *subgroup* of $ G$ if $ H$ is a group under the same operation as $ G$. That is, the operation on $ H$ is precisely the restriction $ \cdot |_{H \times H}$.

In particular, $ H$ must be closed under the operation of $ G$. For instance, the set of even integers $ 2\mathbb{Z}$ is a subset of the set of all integers $ \mathbb{Z}$, and the sum of any two even numbers is even, so this is a subgroup. Similarly, the subset of all integers which are multiples of $ n$ forms a subgroup of $ \mathbb{Z}$, and we denote it $ n\mathbb{Z}$. On the other hand, the subset of integers modulo $ n$ is not a subgroup of $ \mathbb{Z}$, because the operation is not just addition. As we will see in a bit, $ \mathbb{Z}/n\mathbb{Z}$ is a *quotient* of $ \mathbb{Z}$, justifying the unexplained division notation.

Another example is the subgroup of rotations within the group of symmetries of the regular $ n$-gon, and the subgroup of matrices with determinant 1 within the group of invertible matrices $ GL_n(\mathbb{R})$. The reader may easily verify that these are indeed subgroups.

We are about to give the most important example of subgroups, but first we must speak of homomorphisms.

**Definition:** Let $ G, H$ be two groups. A function $ \varphi: G \to H$ is called a *homomorphism* if $ \varphi(xy) = \varphi(x)\varphi(y)$ for all $ x,y \in G$.

Note that the operation on the left-hand side of the equality is happening with the operation in $ G$, but on the right-hand side it is happening in $ H$. In short, this condition says that the map $ \varphi$ *preserves the structure of the group*, at least in some coarse way.

We have already seen an example of a group homomorphism between the group $ \mathbb{Z}/4\mathbb{Z}$ and the subgroup of rotations of the square. There we said that $ \varphi(\rho^k) = k \mod 4$, and indeed

$ \varphi(\rho^n \rho^m) = \varphi(\rho^{n+m}) = (n+m) \mod 4 = ((n \mod 4) + (m \mod 4)) \mod 4$

So this is a group homomorphism. Here the homomorphism also happens to be a bijection. In this case we call it an *isomorphism*, but this need not always be the case. For example, if $ G$ is a group of invertible matrices under multiplication, the operation of taking a determinant is a group homomorphism to the multiplicative group of nonzero real numbers $ \textup{det}: GL_n(\mathbb{R}) \to \mathbb{R} – \left \{ 0 \right \}$. Indeed, $ \textup{det}(AB) = \textup{det}(A)\textup{det}(B)$ for all matrices $ A,B$. But many different matrices have equal determinants, so this map cannot be a bijection.
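We can watch the determinant homomorphism in action on small matrices. In the sketch below (our own; the $ 2 \times 2$ `det` and `matmul` are hand-rolled to keep the example self-contained), the determinant preserves products but collapses many matrices to the same value:

```python
def det(m):
    """Determinant of a 2x2 matrix given as a tuple of row tuples."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul(m, n):
    """Product of two 2x2 matrices."""
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

A = ((1, 2), (3, 5))
B = ((2, 1), (1, 1))
# det is a homomorphism: it sends products to products.
assert det(matmul(A, B)) == det(A) * det(B)

# But det is far from injective: distinct matrices can share a determinant.
assert det(((1, 0), (0, 1))) == det(((1, 1), (0, 1))) == 1
```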

It is easy to verify (and we will leave it to the reader) that group homomorphisms always preserve the identity and inverses. For a slightly more detailed exercise, the reader can prove that the set-theoretic image of a homomorphism is a subgroup of the target group (you may want to use the “three properties” detailed below for this).

Now our important subgroup example is simply the set of things in a group homomorphism which get sent to the identity. That is:

**Definition:** The *kernel* of a group homomorphism $ \varphi: G \to H$, denoted $ \ker \varphi$, is the preimage of the identity $ \varphi^{-1}(1_H)$.

To verify that this is a group, let’s verify the three group properties (associativity obviously holds, since the operation is inherited from $ G$).

- Identity: $ \varphi(1_G) = \varphi(1_G 1_G) = \varphi(1_G) \varphi(1_G)$, and multiplying by $ \varphi(1_G)^{-1}$ (whatever it is), gives $ 1_H = \varphi(1_G)$. So $ 1_G \in \ker \varphi$.
- Inverses: if $ \varphi(x) = 1_H$ then $ \varphi(xx^{-1}) = \varphi(x) \varphi(x^{-1}) = 1_H \varphi(x^{-1}) = \varphi(x^{-1})$. But $ \varphi(xx^{-1}) = \varphi(1_G) = 1_H$ to begin with, so $ 1_H = \varphi(x^{-1})$.
- Closed under multiplication: if $ \varphi(x) = \varphi(y) = 1_H$, then $ \varphi(xy) = \varphi(x)\varphi(y) = 1_H^2 = 1_H$, so $ xy \in \ker \varphi$ as well.

So the kernel of a homomorphism is a valid subgroup of $ G$.
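As a concrete example (our own toy case, not one discussed above), consider the homomorphism $ \varphi: \mathbb{Z}/6\mathbb{Z} \to \mathbb{Z}/3\mathbb{Z}$ sending $ x \mapsto x \mod 3$. Its kernel can be computed and checked directly:

```python
# The homomorphism phi: Z/6Z -> Z/3Z given by phi(x) = x mod 3.
G = list(range(6))

def phi(x):
    return x % 3

# phi really is a homomorphism: phi((a + b) mod 6) = (phi(a) + phi(b)) mod 3.
assert all(phi((a + b) % 6) == (phi(a) + phi(b)) % 3 for a in G for b in G)

kernel = [x for x in G if phi(x) == 0]
assert kernel == [0, 3]
# The kernel is a subgroup: it contains 0 and is closed under addition mod 6.
assert all((a + b) % 6 in kernel for a in kernel for b in kernel)
```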

Now something amazing happens with group homomorphisms. We just saw that the set of things which map to the identity form a group themselves, but there’s more to the story. As the reader knows, or can easily verify, every function $ f: G \to H$ between two sets induces an equivalence relation on $ G$, where two elements are equivalent if they get mapped to the same element in $ H$. For group homomorphisms we can say something much more powerful: the set of *equivalence classes* of a homomorphism forms a group. As we will see, the identity in this group is precisely the kernel of the homomorphism. This is a deep fact about groups, so let’s make it more rigorous.

**Definition:** Let $ G$ be a group, and let $ H$ be a subgroup. The *left coset* of $ H$ by an element $ x \in G$ is the set

$ xH = \left \{ xy: y \in H \right \}$

Similarly, the *right coset* by $ x$ is $ Hx = \left \{ yx : y \in H \right \}$.

For example, one can take the trivial coset $ 1H$, and it is easy to verify that the coset $ hH = H$ if and only if $ h \in H$.

In general nonabelian groups, the right coset and left coset of $ H$ by the same element will *not* agree. Moreover, as sets, sometimes two left cosets $ xH, yH$ will be equal even when $ x \neq y$. In fact, they are equal if and only if $ y^{-1}x \in H$. In particular, if $ xH = yH$, then there are two elements $ z, z’ \in H$ for which $ xz = yz’$, or equivalently $ y^{-1}x = z’z^{-1}$. But $ z, z’$ are both in $ H$, so this product is in $ H$ as well.

Since set equality is an equivalence relation, we have just defined an equivalence relation on $ G$ by $ x \sim y$ if $ y^{-1}x \in H$. The equivalence classes are just the cosets, and we can ask what is required for the set of equivalence classes to form a group. But before we do that, let’s verify that what we said about group homomorphisms holds. That is, if $ x \ker \varphi = y \ker \varphi$, we have $ y^{-1}x \in \ker \varphi$, so that $ 1 = \varphi(y^{-1}x) = \varphi(y)^{-1} \varphi(x)$ (recall that homomorphisms preserve inverses). In other words, $ \varphi(y) = \varphi(x)$, as we claimed.

Now here’s the cool part. We can “multiply” two cosets $ xHyH$ by calling this product $ \left \{ ab : a \in xH, b \in yH \right \}$. If we want the set of left cosets to form a group under this operation, we require the usual properties to hold:

- Identity: the obvious identity satisfies $ 1HxH = xH = xH1H$
- Inverses: for any $ xH$ we have $ xHx^{-1}H = H$
- Closure: $ xHyH = xyH$
- Associativity: $ xH(yHzH) = (xHyH)zH$

Associativity will follow from closure, since the product is just $ xyzH$ and the group operation in $ G$ is associative. Suppose that the identity part works and let’s look at closure: if $ xHyH = xyH = xyH1H$, then we can isolate the middle parts and see that $ Hy = yH$ (the argument really should be more detailed, and we leave it to the reader to flesh it out). That is, the left and right cosets by $ y$ must be equal!

Moreover, if left and right cosets are always equal, then one can easily verify that these properties all hold. So we just proved the following statement:

**Theorem:** The set of left (right) cosets of a subgroup $ H \subset G$ forms a group if and only if the left and right cosets by $ x$ agree for every $ x \in G$.

And for the final stroke: left and right cosets agree for kernels. Let $ x \ker \varphi$ be such a coset; we want to show that for any $ z \in \ker \varphi$, we have $ xz = yx$ for some $ y \in \ker \varphi$. This is equivalent to requiring that $ xzx^{-1} \in \ker \varphi$, and indeed $ \varphi(xzx^{-1}) = \varphi(x)\varphi(z)\varphi(x)^{-1} = \varphi(x)\varphi(x)^{-1} = 1$. So the cosets of a kernel always form a group.

This condition that the left and right cosets agree (or that $ xzx^{-1} \in H$ for all $ z \in H, x \in G$) is such an important condition that we give it a name:

**Definition:** If the left and right cosets of a subgroup $ H \subset G$ agree, we call it a *normal* subgroup.
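Both situations already occur in the symmetric group $ S_3$. In the sketch below (our own encoding of permutations as tuples, with `mult` as the group product), the subgroup generated by a single flip is not normal, while the subgroup of the three rotations is:

```python
from itertools import permutations

def mult(p, q):
    """The group product: the permutation 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}             # order 2: the identity and one flip
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # order 3: the rotations

def left(x, K):
    return frozenset(mult(x, k) for k in K)

def right(x, K):
    return frozenset(mult(k, x) for k in K)

# H is not normal: some left coset differs from the corresponding right coset.
assert any(left(x, H) != right(x, H) for x in S3)
# N is normal: left and right cosets agree for every x.
assert all(left(x, N) == right(x, N) for x in S3)
```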

Now our theorem is simply: the cosets of a subgroup form a group if and only if the subgroup is normal, and kernels are always normal subgroups. To put the final touches on this mathematical cake, it just so happens that every normal subgroup is the kernel of some homomorphism. To see this, we just need one more definition.

**Definition:** Let $ G$ be a group and $ N$ be a normal subgroup. The *quotient group* of $ G$ by $ N$, denoted $ G/N$, is the group of left cosets of $ N$ in $ G$.

The important part is that there is an obvious group homomorphism $ G \to G/N$: just send an element $ x$ to the coset $ xN$. We call this map the *canonical projection *of $ G$ onto $ G/N$. Indeed it is surjective (hence the name projection), and the kernel of this homomorphism is clearly $ N$ (since $ xN = N$ if and only if $ x \in N$). So every normal subgroup is the kernel of the canonical projection map onto its quotient.

As a quick aside, the avid reader of this blog may recognize the notation and terminology from our post on constructing topological spaces. Indeed, the quotients introduced here are in a sense identical to the quotients of topological spaces, just in a different “setting.” The exploration of these analogous constructions in different settings falls under the name of *category theory*. Category theory happens to be extremely interesting and to have a computational flavor; needless to say, we will be investigating it on this blog in the future.

The promised example here is $ \mathbb{Z}/n\mathbb{Z}$. As we already saw, $ n\mathbb{Z}$ is a subgroup of $ \mathbb{Z}$, and it is normal simply because $ \mathbb{Z}$ is an abelian group (every subgroup of an abelian group is normal). So the cosets (denoted $ x + n\mathbb{Z}$ in the additive style) correspond to the remainders modulo $ n$: $ x + n\mathbb{Z} = y + n\mathbb{Z}$ if and only if $ y-x$ is a multiple of $ n$. The connection is immediate.

## Lagrange’s Theorem

The language of quotients and cosets immediately gives us some powerful tools. The most immediate one is called Lagrange’s Theorem. In our discussion of cosets, we implicitly realized that the set of left cosets of $ H \subset G$ partitions $ G$ (every equivalence relation defines a partition). Moreover, it is interesting to note that all of these cosets have equal size!

To see this, note that for a fixed $ x \in G$ there is a natural bijection $ G \to G$ defined by left multiplication by $ x$. The nice thing about bijections is that they preserve the size of sets. So if we can restrict this map to a coset $ yH$ and the image is another coset $ zH$, then these two cosets have the same size.

Indeed, for any given $ y, z$, choose $ x = zy^{-1}$. Then left-multiplication by $ x$ sends $ yH$ to $ zy^{-1}yH = zH$. This restriction is well defined as a map of cosets (it sends the set $ yH$ to the set $ zH$ no matter which representatives we picked), and it is a bijection. Since $ y,z$ were arbitrary, it must be the case that all left cosets have the same size.

**Theorem:** (Lagrange) Let $ G$ be a finite group. The order of any subgroup $ H \subset G$ divides the order of $ G$.

*Proof*. Note that $ H$ is a left coset of $ H$ (by, say, the identity). Further note that since $ G$ is finite, there are finitely many cosets of $ H$. Denote by $ [G:H]$ the number of cosets of $ H$ in $ G$. Since all cosets have the same size, they are disjoint, and the union of all cosets of $ H$ is $ G$, it must be that $ |H| [G:H] = |G|$. In particular, the order of $ H$ divides the order of $ G$. $ \square$

This imposes a very large amount of structure on a group. For instance, one can now say that a group of prime order $ p$ has no subgroups other than the trivial subgroup and the whole group itself (indeed, with a little more work we can completely characterize groups of prime order). Further, any group of order $ 2^n$ cannot have subgroups of odd order (except the trivial subgroup). Many of the arguments of group theory turn into arguments by divisibility solely because of this theorem. And the proof is essentially summed up by the phrase “multiplication by a group element is bijective.” This is a gem of elegance that shows the power of viewing mathematics from the perspective of functions.

It is common to call the quantity $ [G:H]$ in the proof above the *index* of $ H$ in $ G$. As we said, this is nothing more than the number of cosets of $ H$, and it satisfies $ |G| = [G:H] |H|$.
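Lagrange’s theorem is also pleasant to confirm by brute force on a small group. The following sketch (our own) enumerates every subset of $ S_3$, keeps the subgroups, and checks that their orders divide $ |S_3| = 6$:

```python
from itertools import combinations, permutations

def mult(p, q):
    """Group product of two permutations of {0,1,2}: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
e = (0, 1, 2)

def is_subgroup(H):
    # For a finite subset, containing the identity and being closed under the
    # operation is enough: inverses come for free by finiteness.
    return e in H and all(mult(a, b) in H for a in H for b in H)

orders = sorted({len(H) for r in range(1, 7)
                 for H in map(frozenset, combinations(S3, r))
                 if is_subgroup(H)})
assert orders == [1, 2, 3, 6]                  # the subgroup orders that occur
assert all(6 % n == 0 for n in orders)         # each divides |S_3| = 6
```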

## Group Actions

The last thing we will discuss in this primer is the idea of a group action. As we mentioned, the important part about group theory is that groups represent symmetries of things. Going back to our example of a square, it’s not that a square itself is a group in a nice way, but that the *symmetries* of a square form a group. This is an important distinction, because we are saying that somehow the symmetry group of the square “performs an action” on the square.

This gives us a hint that we should define what it means for a general group to “perform actions” on a general set.

**Definition:** A group $ G$ *acts* on a set $ A$ if there is a group homomorphism $ \varphi: G \to S(A)$, where $ S(A)$ is the group of bijections $ A \to A$. We call $ \varphi$ a *group action*.

Let’s dissect this definition a little. What we’re saying is that we want to associate elements of $ G$ with operations on $ A$. If $ g \in G$ and $ a \in A$, then $ \varphi(g): A \to A$ permutes the elements of $ A$ in some way. We can even go so far as to identify $ g$ with its image in $ S(A)$, and write $ ga$ for the action of (the function) $ g$ on (the set element) $ a$.

For example, $ \mathbb{Z}/4\mathbb{Z}$ acts on the square by sending $ n \mapsto \rho^n$, where $ \rho$ is again the quarter turn rotation. Indeed, the symmetric group $ S_n$ acts on the set of numbers $ 1, \dots, n$ by permutations (this is how it was defined), and for general sets $ A$, the group $ S(A)$ acts on $ A$ (the homomorphism is the identity map). Specific examples we’ve seen on this blog include matrix groups acting on vector spaces and homeomorphisms acting on topological spaces.

There are a few important aspects of group actions that we will investigate in future posts. But at least for now we can prove a nice theorem using group actions:

**Theorem:** (Cayley) Every group is isomorphic to a subgroup of a group of symmetries. In particular, every finite group $ G$ is isomorphic to a subgroup of $ S_n$ for some $ n$.

*Proof*. Every group $ G$ acts on itself by left multiplication. That is, every element $ g \in G$ corresponds to a bijection $ G \to G$ sending $ x \mapsto gx$ (note that this map is a bijection, not a homomorphism: its inverse is left multiplication by $ g^{-1}$). The correspondence $ \varphi: G \to S(G)$ is itself a homomorphism, since $ (gh)x = g(hx)$ by associativity. Furthermore, if $ g \neq h$ then $ g \cdot 1 = g \neq h = h \cdot 1$, so distinct elements of $ G$ correspond to distinct bijections of $ G$. That is, the group action is injective as a homomorphism $ G \to S(G)$. In general every injective map is a bijection onto its image, and the image of a homomorphism is always a subgroup. So the group action provides an isomorphism $ G \to \varphi(G) \subset S(G)$, as required. $ \square$
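The proof is constructive, and we can carry it out concretely for $ \mathbb{Z}/4\mathbb{Z}$ (a sketch of our own): each element $ g$ becomes the permutation $ x \mapsto g + x \mod 4$ of the underlying set, and this assignment is an injective homomorphism into $ S_4$.

```python
# Cayley's theorem in action: embed Z/4Z into S_4 by letting each g act on
# {0,1,2,3} via left addition.
G = range(4)

def perm(g):
    """The permutation of {0,1,2,3} given by x -> (g + x) mod 4."""
    return tuple((g + x) % 4 for x in G)

def compose(p, q):
    """Apply q first, then p (function composition)."""
    return tuple(p[q[i]] for i in range(4))

# Injective: distinct group elements give distinct permutations.
assert len({perm(g) for g in G}) == 4
# A homomorphism: perm(g + h) is perm(g) composed with perm(h).
assert all(compose(perm(g), perm(h)) == perm((g + h) % 4)
           for g in G for h in G)
```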

There is a whole host of things we have yet to talk about with groups. We will leave these for next time, but just to lay them on the table, we need to discuss:

- Generators and cyclic groups
- The first isomorphism theorem (more appropriately named the homomorphism decomposition theorem)
- Direct sums and products
- Orbits, stabilizers, and other objects associated to group actions.
- Free groups, group presentations, and relations

That being said, group theory is a very large subject, and the reader may wish to pick up some books. This author learned algebra from two primary textual sources, the first being Michael Artin’s “Algebra” and the second (from a more mature perspective) being Paolo Aluffi’s “Algebra: Chapter 0”. We highly recommend the second text as a way to simultaneously learn algebra, demystify category theory, and see a wide host of applications of abstract algebra to other fields of mathematics.

Until next time!

Watch out, left multiplication by a fixed $x \in G$ is no morphism! If we call this function $f$, it’s easy to see that $f(g) f(h) = xg xh \neq xgh = f(gh)$ !

Yes, thanks for catching that mistake. And in fact we don’t even need that fact, so this was just my hastiness in writing.

Hi, I think I caught a few typos:

When showing that kernels agree on left and right cosets, you write “phi(x.z.x^-1) = phi(x) phi(z) phi(z^)-1” while I think the last element should be phi(x)^-1.

In the formulation of Lagrange’s theorem, there is “H \in G”, I think it should be “H \subset G”.

Finally, just above the section on Lagrange’s theorem, there is “So the cosets (denoted x+nZ in the additive style) correspond to the remainders modulo n: x+Z = y+Z iff y – z is multiple of n”. I might be confused about that part, but don’t you mean something more like : “x+nZ = y+nZ iff y – x is multiple of n” ?

Anyway, I really enjoyed the article, thanks for writing it !

Right on all counts. Thanks for pointing those out.

Hiya, seem to have another minor typo.

“Associativity will follow form closure”

Uh, phi(y^(-1)x) = 1 = phi(y)^(-1)phi(x)? Shouldn’t the inverse be inside the brackets?

Intuitively you should expect this formula to hold. A group homomorphism “preserves the structure of a group,” and so it should preserve inverses as well. Of course, I’ll leave it as an exercise to you: prove that for every group homomorphism $ \varphi$, it holds that $ \varphi(x^{-1}) = \varphi(x)^{-1}$.

Oh-kay.

> let’s take the group of integers modulo 4. That is, our set is { 0, 1, 2, 3}, and

> the operation + is defined by adding the two operands and taking the remainder

> when dividing by four.

I have two questions:

1. What is the “group of integers modulo 4”? I mean, I’d expect “modulo” to be the binary operation, but then you define the operation differently, so I don’t see what that would mean.

2. The binary operation for me seems to be invalid. What would we expect after doing “0 · 0”, which is “0 + 0 / 4”?

1. “Modulo” is a binary operation: a%b is the remainder when dividing a by b. “Addition modulo 4” is also a binary operation, which I call + in the context of the group: a + b is the remainder when dividing (a+b) by 4, i.e. (a+b)%4. It’s weird that I’m using + to mean two different things here, but it should be clear that when I’m adding things in the group, I’m using “+ modulo 4”, and when I’m defining “addition modulo 4” as (a+b)%4, I’m using the usual + on integers.

2. The remainder when dividing zero by anything is zero. So in the group of integers modulo 4, 0 + 0 = (0+0)%4 = 0.

Thank you very much!


Could you please explain that part:

> As the reader knows, or can easily verify, every function ƒ: G → H between

> two sets induces an equivalence relation on G, where two elements are

> equivalent if they get mapped to the same element in H.

Here’s how I see it: since the text says «every function ƒ: G → H between two sets», we can take *any* sets. Even if the sets happen to be groups, their binary operations are completely unrelated to the mapping ƒ: G → H. I.e. we could take G = {0,1,2,3} with addition modulo 4 as its binary operation, but a mapping {0,1,2,3} → {a,b,c,d} has nothing to do with the operation inside the set; it doesn’t even know whether the set is a group.

Now the question: how could a mapping between sets even be related to anything inside the first set? I can imagine an equivalence relation arising from an internal operation, but I absolutely don’t see the relation with a mapping.

I’m sorry for the long delay; I’ve been reading a pdf version of your post along with some other books on my way to and from work, as I am not really good at math.

I may not understand the question properly, but all I’m saying here is that if you give me a function ƒ: G → H, I can define ƒ⁻¹(y) to be the set of all elements in G that are mapped to y under ƒ. Using this I can partition G into the sets ƒ⁻¹(y). Partitions are the same thing as equivalence relations. The only part that “depends” on one set or the other is in how you define ƒ. But you’re right that nothing I said here depends on the group structure of G or H.
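To make the partition-by-preimages idea concrete, here is a small sketch; the map f and the sets are made up for illustration:

```python
# Partition G into preimage sets f^{-1}(y): two elements are equivalent
# iff f maps them to the same element of H. No group structure is needed.
from collections import defaultdict

G = {0, 1, 2, 3}
f = lambda x: 'a' if x % 2 == 0 else 'b'   # a hypothetical map G -> {a, b}

parts = defaultdict(set)
for x in G:
    parts[f(x)].add(x)   # x goes in the part for the value f(x)

print(dict(parts))   # {'a': {0, 2}, 'b': {1, 3}}
```

The parts are exactly the preimages ƒ⁻¹(a) = {0, 2} and ƒ⁻¹(b) = {1, 3}, and together they cover G without overlapping.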

Okay, thank you very much, now it’s much clearer. Then I have another question: how does a partition give an equivalence relation? E.g. suppose {0,1,2,3} → {a}; here there is a single part. Then how does {0,1,2,3} have, e.g., symmetry? Or do you perhaps mean that the equivalence relation here is simply the product of the part with itself?

I got it. So there’s a relation induced by the partition, which asks whether two elements are in the same part. Let’s denote it “~”. Now in the simplest example we could have, say, two sets {a,b,c} → {α}, i.e. every e ∈ {a,b,c} is mapped to the single element of the second set, so they’re all in the same part (which is the only one here, btw). Now reflexivity: a~a; symmetry: a~b ⇒ b~a; transitivity: a~b ∧ b~c ⇒ a~c.

My confusion was partly because I mistook the word «equivalent» for the word «equal».

Ah, yes. The more advanced you go in math the more fuzzy the lines get. Eventually people *will* start saying equal when they mean “equivalent under my favorite equivalence relation.” So be ready for that.

>For example, one can take the trivial coset 1H, and it is easy to verify

>that the coset hH = H if and only if h ∈ H.

Somewhat confusing phrase. Perhaps you just wanted to say «It is easy to verify that the coset hH = H if and only if h ∈ H» *(which is right since a group is closed under its operation)*? Because the identity of the supergroup always has to be in the subgroup *(because they’re under the same operation, and, as the article showed earlier, there can be only one identity)*.

> Moreover, as sets sometimes two left cosets xH, yH will be equal even when

> x ≠ y. In fact, they will be equal if and only if y⁻¹x ∈ H.

Honestly, I didn’t get it. How is “y⁻¹ · x” even defined?

G is a group, and x and y are elements so by the definition of a group y has an inverse which is also in G, and you can multiply any two elements of G to get another. So this element is defined abstractly, and if you give me any specific group I can define it in that context…

But you said “In fact, they will be equal if and only if y⁻¹x ∈ H”. How do you know that? When I try to find it by myself, the question goes even further: we only know the result of y⁻¹y, but not y⁻¹x (where x ≠ y). That’s why I’m wondering.

This is a simple claim you can prove directly from the definitions. In fact, the sentence that directly follows the one you quoted gives a sketch of one direction of the “if and only if” that proves the claim.

So start with the assumption that xH = yH and prove that y⁻¹x ∈ H follows. Then start from the assumption that y⁻¹x ∈ H and prove that xH = yH follows. This will prove the claim.

But wait, in the example both x and y were contained within a supergroup of H. Hence we can’t be sure that y⁻¹x ∈ H *(i.e. it could be contained only in the supergroup)*. Moreover, for the same reason we don’t even know that y⁻¹ ∈ H.

I just thought that perhaps you meant that a subgroup is closed even when its binary operation is applied to an external element. Then I have a counter-example. Let G be a group, and H ⊂ G a subgroup. Let x be an element such that x ∈ G and x ∉ H. Now x · 1H = x, i.e. the group operation with the identity of H and x as operands results in an element which is not in H, but in G.

Such an element, btw, has to be part of a coset.

I don’t understand the statement: “A standard geometric argument shows that three or more circles with different centers which have a simultaneous intersection point must intersect in a single point.” Is there some unstated assumption or constraint that I am missing? I am picturing 3 circles in the XY plane. Circle c1 has radius 5 and is centered at (-4, 0). Circle c2 has radius 3 and is centered at (0,0). Circle c3 has radius 5 and is centered at (4, 0). These three circles appear in my mind to intersect in two points (0, 3) and (0, -3). Am I supposed to be only considering circles which lie in the same plane AND have the same radius? I apologize if this was supposed to be obvious.

You’re quite right. The one additional requirement is that the third center is not collinear with the first two centers. Then the argument is standard. Call A,B,C the three centers and D the third point. If the circles for A,B have two intersection points, call E one of them and D the other (by assumption D has the same distance from A,B), then the only way that E can lie on the circle for C is if C is on the line passing through AB. In our case, any three corners of the square satisfy the requirements of this claim.
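For the record, the counterexample from the question above is easy to verify numerically; a quick sketch:

```python
# Check the counterexample: three circles with collinear centers
# can share two intersection points, (0, 3) and (0, -3).
circles = [((-4, 0), 5), ((0, 0), 3), ((4, 0), 5)]   # (center, radius)

def on_circle(p, center, r):
    # p lies on the circle iff its squared distance to the center is r^2
    (px, py), (cx, cy) = p, center
    return (px - cx) ** 2 + (py - cy) ** 2 == r ** 2

for p in [(0, 3), (0, -3)]:
    assert all(on_circle(p, c, r) for c, r in circles)
print("both points lie on all three circles")
```

With the extra non-collinearity requirement on the centers, such a configuration is ruled out.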

To be pedantic, the definition of a group requires defining the identity element both ways (x·e = e·x = x) as well as the inverse both ways (x·y = y·x = e). Otherwise, their uniqueness cannot be proven (the proof uses both).

> Definition: The kernel of a group homomorphism φ: G → H, denoted ker φ is

> the preimage of the identity φ⁻¹(1ₕ).

You can’t actually use φ⁻¹(1ₕ). Because φ(1ₕ) maps multiple elements to a single one, so the φ⁻¹(1ₕ) would have to choose a single element. (sorry for non-capital ₕ, unicode support for subscript capitals is sparse, in particular “H” is missing)

> Since set equality is an equivalence relation, we just defined an equivalence

> relation on G by x ~ y if y⁻¹x ∈ H. The equivalence classes are just the cosets

Perhaps you meant not “the cosets” but rather “cosets”? Because I imagine there could be a coset with x ∈ G, y ∈ H, where xy ∉ H.

> Because φ(1ₕ) maps multiple elements

Err… It maps to a single element, my bad. Anyway, φ⁻¹(1ₕ) still has to choose a single element, not multiple ones.

I’m sorry, but I don’t understand either of your points. Your first point seems to arise because you’re assuming the notation means the output is a single element, when instead it’s a set (the “preimage” is always a set). For the second, using “the cosets” versus “cosets” does not change the semantic meaning of that sentence. The equality xH = yH (i.e., x and y are equivalent) holds if and only if x and y are in the same coset. Indeed, H is a subgroup and so x·1 = x is a member of xH.

Well, for the first point: strictly speaking (say, per Wikipedia) a function only has an inverse if it’s both surjective and injective. And as the mapping from a group to the identity is not injective, you can’t use φ⁻¹(1ₕ).

For the second point: I’m sorry, I reread it again, and noticed that you actually proved what happens if xH = yH. Then, letting x = y, we can say that ∀x ∈ G, y ∈ H, xy ∈ H. I.e. there doesn’t exist a coset of a subgroup which ends up in the supergroup. That’s actually cool, though it’s a hard proof which I just “don’t see”; I probably have to find some examples to see it “in the wild”.

It is not an inverse. It is a preimage. This is not a matter of strictly speaking, you simply didn’t know what that word meant in the definition. https://en.wikipedia.org/wiki/Image_(mathematics)#Inverse_image
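A tiny sketch of the distinction, using a made-up example (the homomorphism φ: Z₄ → Z₂ below is my illustration, not from the post): the preimage is defined for any function, invertible or not, and its output is a set.

```python
# The preimage phi^{-1}(y) = {x in domain : phi(x) == y} is a set,
# defined even when phi has no inverse function.

def preimage(phi, domain, target):
    return {x for x in domain if phi(x) == target}

Z4 = range(4)
phi = lambda x: x % 2        # homomorphism Z4 -> Z2 (addition mod 4 / mod 2)

kernel = preimage(phi, Z4, 0)   # ker(phi) = preimage of the identity 0
print(kernel)                   # {0, 2} -- a set with two elements
```

No “choice” of a single element is ever made; the kernel is the whole set {0, 2}.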

> Then, let x=y, we can say that ∀x ∈ G, y ∈ H, xy ∈ H. I.e. there doesn’t exist a coset of a subgroup, which ends up in the supergroup.

I don’t think this statement makes any sense. Why is x=y? What is a “supergroup”? What do you mean by “ends up”? What is acting on the coset?

Here is a fact: if G is a group and H is a subgroup of G, and y is in G, then yH is a coset of H and yH is a subset of G. That is, every coset of a subgroup H is still a subset of G. (This is the only way I have managed to parse your comment: that in fact every coset of a subgroup “ends up in the supergroup”, so what you claimed is false.)

> This is the only way I have managed to parse your comment

Sorry, I was editing the comment and left it in a half-valid state. I am probably not math-fluent enough yet to make everything perfect :Ь

Anyway, forget what I wrote. I re-read the paragraph with the proof “xH = yH ⇔ y⁻¹x ∈ H” several more times, thought more about it, and also noticed my comments from a year ago ( D: ), and I see I again stopped at the same step: α) why “xz = yz′” is equivalent to “y⁻¹x = z′z⁻¹”, and more importantly β) why xH = yH ⇒ y⁻¹x ∈ H (i.e. since we don’t have specific examples, to me y⁻¹x ∉ H ∧ y⁻¹x ∈ G looks perfectly valid).

Do I miss much if I just skip that part? Or alternatively, is there a more elaborate proof somewhere else? The query “cosets equality” didn’t turn up anything similar; perhaps it has a more specific name?

For any element g ∈ H, you get another element xg ∈ xH. Vice versa, every element z ∈ xH can be written as z = xg for some g ∈ H. This is the definition of a coset. Moreover, the statement “xH = yH” is an equality of sets.

Now you want to show y⁻¹x ∈ H from the assumption that xH = yH. Fact 1: x ∈ xH because 1 is in H. Fact 2: since xH = yH, x is in yH. Fact 3: you can write x = yg for some g ∈ H (as in the previous paragraph). Now use the group operation to left-multiply both sides by y⁻¹, and you get y⁻¹x = y⁻¹yg = g. Since g ∈ H it follows that y⁻¹x is also in H.
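The claim can also be checked by brute force in a small concrete group. Here is a sketch using S₃ (permutations of three symbols) with H the subgroup generated by a single transposition; the choice of group is my own example, not from the post:

```python
# Concrete check of "xH = yH iff y^{-1}x in H" in the symmetric group S3,
# with H = {identity, transposition swapping 0 and 1}.
from itertools import permutations

S3 = list(permutations(range(3)))      # all 6 permutations of {0, 1, 2}

def mul(p, q):                         # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):                            # inverse permutation
    result = [0, 0, 0]
    for i, pi in enumerate(p):
        result[pi] = i
    return tuple(result)

e = (0, 1, 2)                          # identity
t = (1, 0, 2)                          # swaps 0 and 1; t*t = e, so {e, t} is a subgroup
H = {e, t}

def coset(x):                          # left coset xH as a set
    return {mul(x, h) for h in H}

for x in S3:
    for y in S3:
        assert (coset(x) == coset(y)) == (mul(inv(y), x) in H)
print("xH == yH exactly when y^-1 x is in H")
```

Running this over all 36 pairs (x, y) confirms both directions of the “if and only if” in this group.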

Wow, man, this is cool! Now I see, but I wouldn’t find this myself. It is… well… not obvious.

Fact 1 was trivial.

Fact 2 did require some thinking; it’s not hard either, but I probably don’t yet have some important associations in my brain about set equality, so when I read “x is in yH”, I had to check back to the start of the sentence.

Fact 3 took me 5 minutes or so of re-reading the text and juggling with imaginary elements.

And the last sentence was very easy, but unnatural, i.e. I could probably try doing that because I’ve seen a similar pattern in other math texts, but rather as a last resort, because it’s not something I’m used to doing.

Thank you! I’d like to be able to do that as easily as you do 🙂 I guess I probably need to read even more math for that.

This is great! I did notice one typo in the paragraph above the definition of a coset: “the set of equivalence classes of a homormohpism form a group”