# The Universal Properties of Map, Fold, and Filter

Fans of functional programming often give the reason that the functional style is simply more elegant than the imperative style. When compelled or inspired to explain (as I did in my old post, How I Learned to Love Functional Programming), they often point to the three “higher-order” functions map, fold, and filter as providing a unifying framework for writing and reasoning about programs. But how unifying are they, really? In this post we’ll characterize these functions in terms of universal properties, and we’ll see that in fact fold is the “most” universal of the three; its natural generalization gives a characterization of transformations of standard compound data types.

By being universal or having a universal property, we don’t mean that map, fold, and filter are somehow mystically connected to all programming paradigms. That might be true, but it’s not the point of this article. Rather, by saying something has a universal property we are making a precise mathematical statement that it is either an initial or final object (or the unique morphism determined by such) in some category.

That means that, as a fair warning to the reader, this post assumes some basic knowledge of category theory, but no more than what we’ve covered on this blog in the past. Of particular importance for the first half of this post is the concept of a universal property, and in the followup post we’ll need some basic knowledge of functors.

## Map, Filter, and Fold

Recalling their basic definitions, map is a function which accepts as input a list $L$ whose elements all have the same type (call it $A$), and a function $f$ which maps $A$ to another type $B$. Map then applies $f$ to every entry of $L$ to produce a new list whose entries all have type $B$.

In most languages, implementing the map function is an elementary exercise. Here is one possible definition in ML.

fun map(_, nil) = nil
  | map(f, (head :: tail)) = f(head) :: map(f, tail)


Next, filter takes as input a boolean-valued predicate $p : A \to \textup{bool}$ on a type $A$ and a list $L$ whose entries have type $A$, and produces the list of those entries of $L$ which satisfy the predicate. Again, its implementation in ML might be:

fun filter(_, nil) = nil
  | filter(p, (head :: tail)) = if p(head)
                                then head :: filter(p, tail)
                                else filter(p, tail)


Finally, fold is a function which “reduces” a list $L$ of entries with type $A$ down to a single value of type $B$. It accepts as input a function $f : A \times B \to B$, and an initial value $v \in B$, and produces a value of type $B$ by recursively applying $f$ as follows:

fun fold(_, v, nil) = v
  | fold(f, v, (head :: tail)) = f(head, fold(f, v, tail))


(If this definition is mysterious, you’re probably not ready for the rest of this post.)

One thing that’s nice about fold is that we can define map and filter in terms of it:

fun map(f, L) = fold((fn (x, xs) => f(x) :: xs), [], L)
fun filter(p, L) = fold((fn (x, xs) => if p(x) then x :: xs else xs), [], L)


We’ll see that this is no accident: fold happens to have the “most general” universality of the three functions, but as a warm-up and a reminder we first investigate the universality of map and filter.

## Free Monoids and Map

Map is the easiest function of the three to analyze, but to understand it we need to understand something about monoids. A monoid is simple to describe: it’s just a set $M$ with an associative binary operation (which we’ll denote by multiplication) and an identity element for that operation.

A monoid homomorphism between two monoids $M, N$ is a function $f: M \to N$ such that $f(xy) = f(x)f(y)$. Here the multiplication on the left hand side of the equality is the operation from $M$ and on the right it’s the one from $N$ (this is the same definition as a group homomorphism). As is to be expected, monoids with monoid homomorphisms form a category.

We encounter simple monoids every day in programming which elucidate this definition. Strings with the operation of string concatenation form a monoid, and the empty string acts as an identity because concatenating a string to the empty string has no effect. Similarly, lists with list concatenation form a monoid, where the identity is the empty list. A nice example of a monoid homomorphism is the length function. We know it’s a homomorphism because the length of a concatenation of two lists is just the sum of the lengths of the two lists.
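To make the homomorphism property concrete, here is a quick check sketched in Python for illustration (the helper name `is_monoid_hom` is our own, not a standard function):

```python
# Lists under concatenation form a monoid; len is a monoid homomorphism
# into the monoid of non-negative integers under addition.

def is_monoid_hom(h, op_m, op_n, samples):
    """Check h(x * y) == h(x) * h(y) on a finite set of sample pairs."""
    return all(h(op_m(x, y)) == op_n(h(x), h(y)) for x in samples for y in samples)

samples = [[], [1], [1, 2], [3, 4, 5]]
concat = lambda x, y: x + y   # the monoid operation on lists
add = lambda x, y: x + y      # the monoid operation on integers

assert is_monoid_hom(len, concat, add, samples)
# The identity is also preserved: len([]) == 0.
```

Of course a finite check is not a proof, but it captures exactly the equation a monoid homomorphism must satisfy.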

Integers also form a monoid, with addition as the operation and zero as the identity element. However, the list and string monoids have an extra special property that integers do not. For a number $n$ you can always find $-n$ so that $n + (-n) = 0$ is the identity element. But for lists, the only way to concatenate two lists and get the empty list is if both of the lists were empty to begin with. A monoid with this property is called free, and to fully understand it we should make a definition which won’t seem to mean anything at first.

Definition: Let $\mathbf{C}$ be a category. Given a set $A$, the free object over $A$, denoted $F(A)$, is an object of $\mathbf{C}$ which is universal with respect to set-maps $A \to B$ for any object $B$ in $\mathbf{C}$.

As usual with a definition by universal property, we should elaborate as to exactly what’s going on. Let $\mathbf{C}$ be a category whose objects are sets and let $A$ be a set, possibly not in this category. We can construct a new category, $\mathbf{C}_{A \to X}$, whose objects are set-maps

$\displaystyle f: A \to X$

and whose morphisms are commutative diagrams of the form

where $\varphi$ is a morphism in $\mathbf{C}$. In other words, we simply require that $\varphi(f_1(a)) = f_2(a)$ for every $a \in A$.

By saying that the free monoid on $A$ satisfies this universal property for the category of monoids, we really mean that it is initial in this category of set-maps and commutative diagrams. That is, there is a monoid $F(A)$ and a set-map $i_A: A \to F(A)$ so that for every monoid $Y$ and set-map $f: A \to Y$ there is a unique monoid homomorphism from $i_A$ to $f$. In other words, there is a unique monoid homomorphism $\varphi$ making the following diagram commute:

For example, if $A$ is the set of all unicode characters, then $F(A)$ is precisely the monoid of all strings on those characters. The map $i_A$ is then the map taking a character $a$ to the single-character string “$a$“. More generally, if $T$ is any type in the category of types and computable functions, then $F(T)$ is the type of homogeneous lists whose elements have type $T$.

We will now restrict our attention to lists and types, and we will denote the free (list) monoid on a type $A$ by $[A]$. The universal property of map is then easy to describe: it’s just a specific instance of this more general universal property for free monoids. Specifically, we work in the sub-category of homogeneous list monoids (we only allow objects which are $[T]$ for some type $T$). The morphism $i_A: A \to [A]$ just takes a value $a$ of type $A$ and spits out a single-item list $[a]$. We might hope that for any mapping $A \to [B]$, there is a unique monoid homomorphism $[A] \to [B]$ making the following diagram commute.

And while this is true, the diagram lies because “map” does not achieve what $\varphi$ does. The map $f$ might send all of $A$ to the empty list, and this would cause $\varphi$ to be the trivial map. But if we decomposed $f$ further to require it to send $a$ to a single $b$, such as

then things work out nicely. In particular, specifying a function $f: A \to B$ uniquely determines how the induced homomorphism operates on lists: the diagram forces the behavior on single-element lists, $[a] \mapsto [f(a)]$, and the requirement that $\varphi$ be a monoid homomorphism forces $\varphi$ to preserve order.

Indeed, we’ve learned from this analysis that the structure of the list involved is crucial in forming the universal property. Map is far from the only computable function making the diagram commute, but it is clearly the only monoid homomorphism. As we’ll see, specifying the order of operations is more generally described by fold, and we’ll be able to restate the universal property of map without relying on monoid homomorphisms.
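The two constraints that pin down map — its value on singletons and the homomorphism law — are easy to verify concretely. Here is a small Python sketch (the name `list_map` is ours):

```python
def list_map(f, L):
    """The unique monoid homomorphism [A] -> [B] extending a ↦ [f(a)]."""
    return [f(x) for x in L]

f = lambda n: n * n
xs, ys = [1, 2], [3, 4]

# Determined on singletons: [a] ↦ [f(a)]
assert list_map(f, [3]) == [f(3)]
# Homomorphism law: map respects concatenation (hence preserves order)
assert list_map(f, xs + ys) == list_map(f, xs) + list_map(f, ys)
# The identity (the empty list) is preserved
assert list_map(f, []) == []
```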

## Filter

The filter function is a small step up in complexity from map, but its universal property is almost exactly the same. Again filter can be viewed through the lens of a universal property for list monoids, because filter (with its predicate fixed) is itself a monoid homomorphism of lists. We know this because filtering two lists by the same predicate and then concatenating them is the same as concatenating them first and then filtering. Indeed, the only difference here is that the diagram now looks like this

where $p$ is our predicate, and $j_A$ is defined by $(a, T) \mapsto [a]$ and $(a,F) \mapsto []$. The composition $j_A \circ p$ gives the map $A \to [A]$ that we can extend uniquely to a monoid homomorphism $[A] \to [A]$. We won’t say any more on it, except to mention that again maintaining the order of the list is better described via fold.
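Both the definition of $j_A$ and the homomorphism property of filter can be checked in a few lines; here is a sketch in Python for illustration (the names `j` and `list_filter` are our own):

```python
def j(a, t):
    """j_A : A × bool -> [A], sending (a, True) to [a] and (a, False) to []."""
    return [a] if t else []

def list_filter(p, L):
    """The monoid homomorphism [A] -> [A] extending a ↦ j(a, p(a))."""
    return [x for x in L if p(x)]

even = lambda n: n % 2 == 0

# filter agrees with j_A ∘ p on singletons...
assert list_filter(even, [2]) == j(2, even(2)) == [2]
assert list_filter(even, [3]) == j(3, even(3)) == []
# ...and is a monoid homomorphism with respect to concatenation
xs, ys = [1, 2, 3], [4, 5]
assert list_filter(even, xs + ys) == list_filter(even, xs) + list_filter(even, ys)
```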

## Fold, and its Universal Property

Fold is different from map and filter in a very concrete way. Even though map and filter do specify that the order of the list should be preserved, it’s not an important part of their definition: filter should filter, and map should map, but no interaction occurs between different entries during the computation. In a sense, we got lucky that having everything be monoid homomorphisms resulted in preserving order without our specific intention.

Because it doesn’t necessarily produce lists and the operation can be quite weird, fold cannot exist without an explicit description of the order of operations. Let’s recall the definition of fold, and stick to the “right associative” order of operations sometimes specified by calling the function “foldr.” We will use foldr and fold interchangeably.

fun foldr(_, v, nil) = v
  | foldr(f, v, (head :: tail)) = f(head, foldr(f, v, tail))


Even more, we can’t talk about foldr in terms of monoids. Even after fixing $f$ and $v$, foldr need not produce a monoid homomorphism. So if we’re going to find a universal property of foldr, we’ll need a more general categorical picture.

One first try would be to view foldr as a morphism of sets

$\displaystyle B \times \textup{Hom}(A \times B, B) \to \textup{Hom}([A], B)$

The mapping is just $(b, f) \mapsto \textup{foldr}\ f\ b$, and this is just the code definition we gave above. One might hope that this mapping defines an isomorphism in the category of types and programs, but it’s easy to see that it does not. For example, let

$A = \left \{ 1 \right \}, B = \left \{ 1,2 \right \}$

Then the left hand side of the mapping above is a set of size 8 (there are eight ways to combine a choice of element in $B$ with a map $A \times B \to B$). But the right hand side is clearly infinite. In fact, it’s uncountably infinite, though not all possible mappings are realizable in programs of a reasonable length (in fact, very few are). So the morphism can’t possibly be surjective, and hence is not an isomorphism.
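The count of eight is easy to verify by brute force; here is a throwaway enumeration in Python:

```python
from itertools import product

A = [1]
B = [1, 2]

# A function A × B -> B is a choice of output for each of the |A|·|B| inputs.
domain = [(a, b) for a in A for b in B]           # 2 inputs
functions = list(product(B, repeat=len(domain)))  # |B|^2 = 4 functions

# Pair each choice of initial value b ∈ B with each function g : A × B -> B.
left_hand_side = [(b, g) for b in B for g in functions]
assert len(left_hand_side) == 8
# The right hand side, Hom([A], B), assigns an output to each of infinitely
# many lists, so no finite enumeration of it is possible.
```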

So what can we say about fold? The answer is a bit abstract, but it works out nicely.

Say we fix a type for our lists, $A$. We can define a category $\mathbf{C}_A$ which has as objects the following morphisms

By $1$ we mean the type with one value (null, if you wish), and $f$ is a morphism from a coproduct (i.e. there are implicit parentheses around $A \times B$). As we saw in our post on universal properties, a morphism from a coproduct is the same thing as a pair of functions which operates on each piece. Here one operates on $1$ and the other on $A \times B$. Since morphisms from $1$ are specified by the image of the sole value $f(1) = b$, we will often write such a function as $b \amalg h$, where $h: A \times B \to B$.

Now the morphisms in $\mathbf{C}_A$ are the pairs of morphisms $\varphi, \overline{\varphi}$ which make the following diagram commute:

Here we write $\overline{\varphi}$ because it is determined by $\varphi$ in a canonical way. It maps $1 \to 1$ in the only way one can, and it maps $(a,b) \mapsto (a, \varphi(b))$. So we’re really specifying $\varphi$ alone, but as we’ll see it’s necessary to include $\overline{\varphi}$ as well; it will provide the “inductive step” in the definition of fold.

Now it turns out that fold satisfies the universal property of being initial in this category. Well, not quite. We’re saying first that the following object $\mathscr{A}$ is initial,

Where cons is the list constructor $A \times [A] \to [A]$ which sends $(a_0, [a_1, \dots, a_n]) \mapsto [a_0, a_1, \dots, a_n]$. By being initial, we mean that for any object $\mathscr{B}$ given by the morphism $b \amalg g: 1 \coprod A \times B \to B$, there is a unique morphism from $\mathscr{A} \to \mathscr{B}$. The “fold” is precisely this unique morphism, which we denote by $\textup{fold}(g, b)$. We implicitly know its “barred” counterpart making the following diagram commute.

This diagram has a lot going on in it, so let’s go ahead and recap. The left column represents an object $\mathscr{A}$ we’re claiming is initial in this crazy category we’ve made. The right hand side is an arbitrary object $\mathscr{B}$, which is equivalently the data of an element $b \in B$ and a mapping $g: A \times B \to B$. This is precisely the data needed to define fold. The dashed lines represent the unique morphisms making the diagram commute, whose existence and uniqueness is the defining property of what it means for an object to be initial in this category. Finally, we’re claiming that foldr, when we fix $g$ and $b$, makes this diagram commute, and is hence the very same unique morphism we seek.

To prove all of this, we need to first show that the object $\mathscr{A}$ is initial. That is, that any two morphisms we pick from $\mathscr{A} \to \mathscr{B}$ must be equal. The first thing to notice is that the two objects $1 \coprod A \times [A]$ and $[A]$ are really the same thing. That is, $\textup{nil} \amalg \textup{cons}$ has a two-sided inverse which makes it an isomorphism in the category of types. Specifically, the inverse is the map $\textup{split}$ sending

$\textup{nil} \mapsto \textup{nil}$
$[a_1, \dots, a_n] \mapsto (a_1, [a_2, \dots, a_n])$

So if we have a morphism $\varphi, \overline{\varphi}$ from $\mathscr{A} \to \mathscr{B}$, and the diagram commutes, we can see that $\varphi = (b \amalg g) \circ \overline{\varphi} \circ \textup{split}$. We’re just going the long way around the diagram.
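As a sanity check, the isomorphism and its inverse can be sketched in Python, modeling the one-element type as `None` and the product piece as a pair (this encoding is our own choice):

```python
def cons_or_nil(x):
    """nil ∐ cons : 1 ∐ (A × [A]) -> [A]. The coproduct's two pieces are
    modeled as None (the one-element type) and a pair (head, tail)."""
    if x is None:
        return []
    head, tail = x
    return [head] + tail

def split(L):
    """The two-sided inverse: [] ↦ None, [a1, ..., an] ↦ (a1, [a2, ..., an])."""
    return None if L == [] else (L[0], L[1:])

# Two-sided inverse on a few sample values of each type:
for L in ([], [1], [1, 2, 3]):
    assert cons_or_nil(split(L)) == L
for x in (None, (1, []), (1, [2, 3])):
    assert split(cons_or_nil(x)) == x
```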

Supposing that we have two such morphisms $\varphi_1, \varphi_2$, we can prove they are equal by induction on the length of the input list. It is trivial to see that they both must send the empty list to $b$. Now suppose that for lists of length $n-1$ the two are equal. Given a list $[a_1, \dots, a_n]$ we see that

$\displaystyle \varphi_1([a_1, \dots, a_n]) = \varphi_1 \circ \textup{cons} (a_1, [a_2, \dots, a_n]) = g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n])$

where the last equality holds by the commutativity of the diagram. Continuing,

$\displaystyle g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n]) = g (a_1, \varphi_1([a_2, \dots, a_n])) = g(a_1, \varphi_2([a_2, \dots, a_n]))$

where the last equality holds by the inductive hypothesis. From here, we can reverse the equalities using $\varphi_2$ and its “barred” version to get back to $\varphi_2([a_1, \dots, a_n])$, proving the equality.

To show that fold actually makes the diagram commute is even simpler. In order to make the diagram commute we need to send the empty list to $b$, and we need to inductively send $[a_1, \dots, a_n]$ to $g(a_1, \textup{fold}(g, b)([a_2, \dots, a_n]))$, but this is the very definition of foldr!
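One can also check the commutativity directly on examples; here is a minimal Python sketch of the two conditions (our own encoding of the diagram):

```python
def foldr(g, b, L):
    """Right fold: the candidate for the unique morphism out of the initial object."""
    return b if L == [] else g(L[0], foldr(g, b, L[1:]))

g = lambda a, acc: a + acc  # any g : A × B -> B will do for the check
b = 0

# The diagram commutes: the empty list goes to b...
assert foldr(g, b, []) == b
# ...and cons(a, rest) goes to g(a, foldr(g, b, rest)).
for a, rest in [(1, []), (5, [2, 3])]:
    assert foldr(g, b, [a] + rest) == g(a, foldr(g, b, rest))
```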

So we’ve established that fold has this universal property. It’s easy now to see how map and filter fit into the picture. For mapping types $A$ to $C$ via $f$, just use the type $[C]$ in place of $B$ above, and have $g(a, L) = \textup{cons}(f(a), L)$, and have $b$ be the empty list. Filter similarly just needs a special definition for $g$.
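These two instantiations can be spelled out concretely; a Python sketch (names are ours, mirroring the ML definitions earlier in the post):

```python
def foldr(g, b, L):
    return b if L == [] else g(L[0], foldr(g, b, L[1:]))

def map_via_fold(f, L):
    # B = [C], b = [], and g(a, acc) = cons(f(a), acc)
    return foldr(lambda a, acc: [f(a)] + acc, [], L)

def filter_via_fold(p, L):
    # g keeps a in the output exactly when p(a) holds
    return foldr(lambda a, acc: [a] + acc if p(a) else acc, [], L)

assert map_via_fold(lambda n: n + 1, [1, 2, 3]) == [2, 3, 4]
assert filter_via_fold(lambda n: n % 2 == 0, [1, 2, 3, 4]) == [2, 4]
```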

A skeptical reader might ask: what does all of this give us? It’s a good question, and one that shouldn’t be taken lightly because I don’t have an immediate answer. I do believe that with some extra work we could use universal properties to give a trivial proof of the third homomorphism theorem for lists, which says that any function expressible as both a foldr and a foldl can be expressed as a list homomorphism. The proof would involve formulating a universal property for foldl, which is very similar to the one for foldr, and attaching the diagrams in a clever way to give the universal property of a monoid homomorphism for lists. Caveat emptor: this author has not written such a proof, but it seems likely that it would work.

More generally, any time we can represent the requirements of a list computation by an object like $\mathscr{B}$, we can represent the computation as a foldr. What good does this do us? Well, it might already be obvious when you can and can’t use fold. But in addition to proving when it’s possible to use fold, this new perspective generalizes very nicely to give us a characterization of arbitrary computations on compound data types. One might want to know when to perform fold-like operations on trees, or other sufficiently complicated beasts, and the universal property gives us a way to see when such a computation is possible.

That’s right, I said it: there’s more to the world than lists. Shun me if you must, but I will continue to dream of great things.

In an effort to make the egregiously long posts on this blog slightly shorter, we’ll postpone our generalization of the universal property of fold until next time. There we’ll define “initial algebras” and show how to characterize “fold-like” computations on any compound data type.

Until then!

# Universal Properties

Previously in this series we’ve seen the definition of a category and a bunch of examples, basic properties of morphisms, and a first look at how to represent categories as types in ML. In this post we’ll expand these ideas and introduce the notion of a universal property. We’ll see examples from mathematics and write some programs which simultaneously prove certain objects have universal properties and construct the morphisms involved.

## A Grand Simple Thing

One might go so far as to call universal properties the most important concept in category theory. This should initially strike the reader as odd, because at first glance universal properties are so succinctly described that they don’t seem to be very interesting. In fact, there are only two universal properties and they are that of being initial and final.

Definition: An object $A$ in a category $\mathbf{C}$ is called initial if for every object $B$ there is a unique morphism $A \to B$. An object $Z$ is called final if for every object $B$ there is a unique morphism $B \to Z$. If an object satisfies either of these properties, it is called universal. If an object satisfies both, it is called a zero object.

In both cases, the existence of a unique morphism is the same as saying the relevant Hom set is a singleton (i.e., for initial objects $A$, the Hom set $\textup{Hom}_{\mathbf{C}}(A,B)$ consists of a single element). There is one and only one morphism between the two objects. In the particular case of $\textup{Hom}(A,A)$ when $A$ is initial (or final), the definition of a category says there must be at least one morphism, the identity, and the universal property says there is no other.

There’s only one way such a simple definition could find fruitful applications, and that is by cleverly picking categories. Before we get to constructing interesting categories with useful universal objects, let’s recognize some universal objects in categories we already know.

In $\mathbf{Set}$ the single element set is final, but not initial; there is only one set-function to a single-element set. It is important to note that the single-element set is far from unique. There are infinitely many (uncountably many!) singleton sets, but as we have already seen all one-element sets are isomorphic in $\mathbf{Set}$ (they all have the same cardinality). On the other hand, the empty set is initial, since the “empty function” is the only set-mapping from the empty set to any set. Here the initial object truly is unique, and not just up to isomorphism.

It turns out universal objects are always unique up to isomorphism, when they exist. Here is the official statement.

Proposition: If $A, A'$ are both initial in $\mathbf{C}$, then $A \cong A'$. If $Z, Z'$ are both final, then $Z \cong Z'$.

Proof. Recall that a morphism $f: A \to B$ is an isomorphism if it has a two-sided inverse, a $g:B \to A$ so that $gf = 1_A$ and $fg=1_B$ are the identities. Now if $A,A'$ are two initial objects there are unique morphisms $f : A \to A'$ and $g: A' \to A$. Moreover, these compose to be morphisms $gf: A \to A$. But since $A$ is initial, the only morphism $A \to A$ is the identity. The situation for $fg : A' \to A'$ is analogous, and so these morphisms are actually inverses of each other, and $A, A'$ are isomorphic. The proof for final objects is identical. $\square$

Let’s continue with examples. In the category of groups, the trivial group $\left \{ 1 \right \}$ is both initial and final, because group homomorphisms must preserve the identity element. Hence the trivial group is a zero object. Again, “the” trivial group is not unique, but unique up to isomorphism.

In the category of types with computable (halting) functions as morphisms, the null type is final. To be honest, this depends on how we determine whether two computable functions are “equal.” In this case, we only care about the set of inputs and outputs, and for the null type all computable functions have the same output: null.

Partial order categories are examples of categories which need not have universal objects. If the partial order is constructed from subsets of a set $X$, then the initial object is the empty set (by virtue of being a subset of every set), and $X$ as a subset of itself is obviously final. But there are other partial orders, such as inequality of integers, which have no “smallest” or “largest” objects. Partial order categories with particularly nice properties (such as having initial and final objects, though the correspondence is not exact) are closely related to the concept of a domain in denotational semantics, and the language of universal properties is relevant to that discussion as well.

The place where universal properties really shine is in defining new constructions. For instance, the direct product of sets is defined by the fact that it satisfies a universal property. Such constructions abound in category theory, and they work via the ‘diagram categories’ we defined in our introductory post. Let’s investigate them now.

## Quotients

Let’s recall the classical definition from set theory of a quotient. We described special versions of quotients in the categories of groups and topological spaces, and we’ll see them all unified via the universal property of a quotient in a moment.

Definition: An equivalence relation on a set $X$ is a subset of the set product $\sim \subset X \times X$ which is reflexive, symmetric, and transitive. The quotient set $X / \sim$ is the set of equivalence classes on $X$. The canonical projection $\pi : X \to X/\sim$ is the map sending $x$ to its equivalence class under $\sim$.

The quotient set $X / \sim$ can also be described in terms of a special property: it is the “largest” set which agrees with the equivalence relation $\sim$. On one hand, it is the case that whenever $a \sim b$ in $X$ then $\pi(a) = \pi(b)$. Moreover, for any set $Y$ and any map $g: X \to Y$ which equates equivalent things ($g(a) = g(b)$ for all $a \sim b$), then there is a unique map $f : X/\sim \to Y$ such that $f \pi = g$. This word salad is best translated into a diagram.

Here we use a dashed line to assert the existence of a morphism (once we’ve proven such a morphism exists, we use a solid line instead), and the symbol $\exists !$ signifies existence ($\exists$) and uniqueness (!).

We can prove this explicitly in the category $\mathbf{Set}$. Indeed, if $g$ is any map such that $g(a) = g(b)$ for all equivalent $a,b \in X$, then we can define $f$ as follows: for any $a \in X$, let $[a]$ denote its equivalence class in $X / \sim$ and define $f([a]) = g(a)$. This map is well defined because if $a \sim b$, then $f([a]) = g(a) = g(b) = f([b])$. It is unique because if $f \pi = g = h \pi$ for some other $h: X / \sim \to Y$, then $h([x]) = g(x) = f([x])$; this is the only possible definition.
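This construction translates almost verbatim into code. Here is a Python sketch (representing equivalence classes as frozensets; all names are ours):

```python
def quotient(X, equiv):
    """Compute X/~ as a list of equivalence classes (frozensets)."""
    classes = []
    for x in X:
        for c in classes:
            if equiv(x, next(iter(c))):         # compare against a representative
                classes[classes.index(c)] = c | {x}
                break
        else:
            classes.append(frozenset([x]))
    return classes

def projection(classes):
    """pi : X -> X/~, sending x to its equivalence class."""
    return lambda x: next(c for c in classes if x in c)

def induced_map(classes, g):
    """Given g : X -> Y equating equivalent elements, the unique f with f ∘ pi = g.
    Well defined: any representative of the class gives the same value."""
    return lambda c: g(next(iter(c)))

X = [0, 1, 2, 3, 4, 5]
equiv = lambda a, b: a % 3 == b % 3   # ~ identifies numbers mod 3
g = lambda x: x % 3                   # g respects ~, so f exists

classes = quotient(X, equiv)
pi = projection(classes)
f = induced_map(classes, g)
assert all(f(pi(x)) == g(x) for x in X)  # the diagram commutes
```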

Now the “official” way to state this universal property is as follows:

The quotient set $X / \sim$ is universal with respect to the property of mapping $X$ to a set so that equivalent elements have the same image.

But as we said earlier, there are only two kinds of universal properties: initial and final. Now this $X / \sim$ looks suspiciously like an initial object ($f$ is going from $X / \sim$, after all), but what exactly is the category we’re considering? The trick to dissecting this sentence is to notice that this is not a statement about just $X / \sim$, but of the morphism $\pi$.

That is, we’re considering a diagram category. In more detail: fix an object $X$ in $\mathbf{Set}$ and an equivalence relation $\sim$ on $X$. We define a category $\mathbf{Set}_{X,\sim}$ as follows. The objects in the category are morphisms $f:X \to Y$ such that $a \sim b$ in $X$ implies $f(a) = f(b)$ in $Y$. The morphisms in the category are commutative diagrams

Here $f_1, f_2$ need to be such that they send equivalent things to equal things (or they wouldn’t be objects in the category!), and by the commutativity of the diagram $f_2 = \varphi f_1$. Indeed, the statement about quotients is that $\pi : X \to X / \sim$ is an initial object in this category. In fact, we have already proved it! But note the abuse of language in our offset statement above: it’s not really $X / \sim$ that is the universal object, but $\pi$. Moreover, the statement itself doesn’t tell us what category to inspect, nor whether we care about initial or final objects in that category. Unfortunately this abuse of language is widespread in the mathematical world, and for arguably good reason. Once one gets acquainted with these kinds of constructions, reading between the limes becomes much easier and it would be a waste of time to spell it out. After all, once we understand $X / \sim$ there is no “obvious” choice for a map $X \to X / \sim$ except for the projection $\pi$. This is how $\pi$ got its name, the canonical projection.

Two last bits of terminology: if $\mathbf{C}$ is any category whose objects are sets (and hence, where equivalence relations make sense), we say that $\mathbf{C}$ has quotients if for every object $X$ there is a morphism $\pi$ satisfying the universal property of a quotient. Another way to state the universal property is to say that all maps respecting the equivalence structure factor through the quotient, in the sense that we get a diagram like the one above.

What is the benefit of viewing $X / \sim$ by its universal property? For one, the set $X / \sim$ is unique up to isomorphism. That is, if any other pair $(Z, g)$ satisfies the same property, we automatically get an isomorphism $X / \sim \to Z$. For instance, if $\sim$ is defined via a function $f : X \to Y$ (that is, $a \sim b$ if $f(a) = f(b)$), then the pair $(\textup{im}(f), f)$ satisfies the universal property of a quotient. This means that we can “decompose” any function into three pieces:

$\displaystyle X \to X / \sim \to \textup{im}(f) \to Y$

The first map is the canonical projection, the second is the isomorphism given by the universal property of the quotient, and the last is the inclusion map into $Y$. In a sense, all three of these maps are “canonical.” This isn’t so magical for set-maps, but the same statement (and essentially the same proof) holds for groups and topological spaces, and are revered as theorems. For groups, this is called The First Isomorphism Theorem, but it’s essentially the claim that the category of groups has quotients.
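The three-step decomposition can be sketched in Python (our own encoding, again with equivalence classes as frozensets):

```python
# Decompose g : X -> Y as projection, isomorphism, inclusion,
# where a ~ b iff g(a) = g(b).
def decompose(X, g):
    classes = {}
    for x in X:
        classes.setdefault(g(x), []).append(x)
    pi = lambda x: frozenset(classes[g(x)])   # X -> X/~, the canonical projection
    iso = lambda c: g(next(iter(c)))          # X/~ -> im(g), a bijection
    include = lambda y: y                     # im(g) -> Y, the inclusion
    return pi, iso, include

X = range(6)
g = lambda x: x % 2
pi, iso, include = decompose(X, g)
# The composite of the three pieces recovers g:
assert all(include(iso(pi(x))) == g(x) for x in X)
```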

This is getting a bit abstract, so let’s see how the idea manifests itself as a program. In fact, it’s embarrassingly simple. Using our “simpler” ML definition of a category from last time, the constructive proof that quotient sets satisfy the universal property is simply a concrete version of the definition of $f$ we gave above. In code,

fun inducedMapFromQuotient(setMap(_, pi, q), setMap(x, g, y)) =
  setMap(q, (fn a => g(representative(a))), y)

That is, once we have $\pi$ and $X / \sim$ defined for our given equivalence relation, this function accepts as input any morphism $g$ and produces the uniquely defined $f$ in the diagram above. Here the “representative” function just returns an arbitrary element of the given set, which we added to the abstract datatype for sets. If the set $X$ is empty, then all functions involved will raise an “empty” exception upon being called, which is perfectly fine. We leave the functions which explicitly construct the quotient set given $X, \sim$ as an exercise to the reader.

## Products and Coproducts

Just as the concept of a quotient set or quotient group can be generalized to a universal property, so can the notion of a product. Again we take our intuition from $\mathbf{Set}$. There the product of two sets $X,Y$ is the set of ordered pairs

$\displaystyle X \times Y = \left \{ (x,y) : x \in X, y \in Y \right \}$

But as with quotients, there’s much more going on and the key is in the morphisms. Specifically, there are two obvious choices for morphisms $X \times Y \to X$ and $X \times Y \to Y$. These are the two projections onto the components, namely $\pi_1(x,y) = x$ and $\pi_2(x,y) = y$. These projections are also called “canonical projections,” and they satisfy their own universal property.

The product of sets is universal with respect to the property of having two morphisms to its factors.

Indeed, this idea is so general that it can be formulated in any category, not just categories whose objects are sets. Let $X,Y$ be two fixed objects in a category $\mathbf{C}$. Should it exist, the product $X \times Y$ is defined to be a final object in the following diagram category. This category has as objects pairs of morphisms

and as morphisms it has commutative diagrams

In words, to say products are final is to say that for any object in this category, there is a unique map $\varphi$ that factors through the product, so that $\pi_1 \varphi = f$ and $\pi_2 \varphi = g$. In a diagram, it is to claim the following commutes:

If the product $X \times Y$ exists for any pair of objects, we declare that the category $\mathbf{C}$ has products.

Indeed, many familiar product constructions exist in pure mathematics: sets, groups, topological spaces, vector spaces, and rings all have products. In fact, so does the category of ML types. Given two types 'a and 'b, we can form the (aptly named) product type 'a * 'b. The canonical projections exist because ML supports parametric polymorphism. They are

fun leftProjection(x,y) = x
fun rightProjection(x,y) = y

And to construct the unique morphism to the product,

fun inducedMapToProduct(f,g) = fn a => (f(a), g(a))

We leave the uniqueness proof to the reader as a brief exercise.
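For a quick check of the commutativity conditions $\pi_1 \varphi = f$ and $\pi_2 \varphi = g$, here is the same construction in Python (names are ours):

```python
def induced_map_to_product(f, g):
    """The unique morphism phi with pi1 ∘ phi = f and pi2 ∘ phi = g."""
    return lambda a: (f(a), g(a))

left = lambda pair: pair[0]    # canonical projection pi1
right = lambda pair: pair[1]   # canonical projection pi2

f = lambda n: n + 1
g = lambda n: str(n)
phi = induced_map_to_product(f, g)
# Both triangles of the diagram commute on sample inputs:
assert all(left(phi(a)) == f(a) and right(phi(a)) == g(a) for a in range(5))
```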

There is a similar notion called a coproduct, denoted $X \coprod Y$, in which everything is reversed: the arrows in the diagram category go to $X \coprod Y$ and the object is initial in the diagram category. Explicitly, for a fixed $X, Y$ the objects in the diagram category are again pairs of morphisms, but this time with arrows going to the central object

The morphisms are again commutative diagrams, but with the connecting morphism going away from the central object

And a coproduct is defined to be an initial object in this category. That is, a pair of morphisms $i_1, i_2$ such that there is a unique connecting morphism in the following diagram.

Coproducts are far less intuitive than products in their concrete realizations, but the universal property is no more complicated.  For the category of sets, the coproduct is a disjoint union (in which shared elements of two sets $X, Y$ are forcibly considered different), and the canonical morphisms are “inclusion” maps (hence the switch from $\pi$ to $i$ in the diagram above). Specifically, if we define the coproduct

$\displaystyle X \coprod Y = (X \times \left \{ 1 \right \}) \cup (Y \times \left \{ 2 \right \})$

as the set of “tagged” elements (the right entry in a tuple signifies which piece of the coproduct the left entry came from), then the inclusion maps $i_1(x) = (x,1)$ and $i_2(y) = (y,2)$ are the canonical morphisms.

There are similar notions of disjoint unions for topological spaces and graphs, which are their categories’ respective coproducts. However, in most categories the coproducts are dramatically different from “unions.” In group theory, it is a somewhat complicated object known as the free product. We mentioned free products in our hasty discussion of the fundamental group, but understanding why and where free groups naturally occur is quite technical. Similarly, coproducts of vector spaces (or $R$-modules) are more like a product, with the stipulation that at most finitely many of the entries of a tuple are nonzero (e.g., formal linear combinations of things from the pieces). Even without understanding these examples, the reader should begin to believe that relatively simple universal properties can yield very useful objects with potentially difficult constructions in specific categories. The ubiquity of the concepts across drastically different fields of mathematics is part of why universal properties are called “universal.”

Luckily, the category of ML types has a nice coproduct which feels like a union, but it is not supported as a "native" language feature like product types are. Specifically, given two types 'a, 'b we can define the "coproduct type"

datatype ('a, 'b) Coproduct = left of 'a | right of 'b

Let's prove this is actually a coproduct: fix two types 'a and 'b, and let $i_1, i_2$ be the functions

fun leftInclusion(x) = left(x)
fun rightInclusion(y) = right(y)

Then given any other pair of functions $f,g$ which accept as input types ‘a and ‘b, respectively, there is a unique function $\varphi$ operating on the coproduct type. We construct it as follows.

fun inducedCoproductMap(f,g) =
   let
      fun theMap(left(a)) = f(a)
        | theMap(right(b)) = g(b)
   in
      theMap
   end

The uniqueness of this construction is self-evident. This author finds it fascinating that these definitions are so deep and profound (indeed, category theory is heralded as the queen of abstraction), but their realizations are trivially obvious to the working programmer. Perhaps this is a statement about how well-behaved the category of ML types is.
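As a quick usage sketch (repeating the definitions above so the snippet stands alone), we can collapse a coproduct of an int and a bool to a common type. The choice of maps here is our own, purely for illustration.

```sml
(* The coproduct type and its induced map, as defined above. *)
datatype ('a, 'b) Coproduct = left of 'a | right of 'b

fun inducedCoproductMap(f,g) =
   let
      fun theMap(left(a)) = f(a)
        | theMap(right(b)) = g(b)
   in
      theMap
   end

(* A hypothetical pair of maps into the common type bool. *)
val phi = inducedCoproductMap((fn n => n > 0), (fn b : bool => b))
(* phi(left(7)) evaluates to true, and phi(right(false)) to false. *)
```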

## Continuing On

So far we have seen three relatively simple examples of universal properties, and explored how they are realized in some categories. We should note before closing two things. The first is that not every category has objects with these universal properties. Unfortunately poset categories don’t serve as a good counterexample for this (they have both products and coproducts; what are they?), but it may be the case that in some categories only some pairs of objects have products or coproducts, while others do not.

Second, there are many more universal properties that we haven’t covered here. For instance, the notion of a limit is a universal property, as is the notion of a “free” object. There are kernels, pull-backs, equalizers, and many other ad-hoc universal properties without names. And for every universal property there is a corresponding “dual” property that results from reversing the arrows in every diagram, as we did with coproducts. We will visit the relevant ones as they come up in our explorations.

In the next few posts we’ll encounter functors and the concept of functoriality, and start asking some poignant questions about familiar programmatic constructions.

Until then!

# A Sample of Standard ML, the TreeSort Algorithm, and Monoids

In this post we will assume the reader has a passing familiarity with some of the basic concepts of functional programming (the map, fold, and filter functions). We introduce these topics in our Racket primer, but the average reader will understand the majority of this primer without expertise in functional programming.

## Preface: ML for Category Theory

A few of my readers have been asking for more posts about functional languages and algorithms written in functional languages. While I do have a personal appreciation for the Haskell programming language (and I plan to do a separate primer for it), I have wanted to explore category theory within the context of programming for quite a while now. From what I can tell, ML is a better choice than Haskell for this.

Part of the reason is that, while many Haskell enthusiasts claim it to be a direct implementation of category theory, Haskell actually tweaks category theoretic concepts in certain ways. I rest assured that the designers of Haskell (who are by assumption infinitely better at everything than I am) have very good reasons for doing this. But rather than sort through the details of the Haskell language specification to clarify the finer details, we would learn a lot more by implementing category theory by hand in a programming language that doesn’t have such concepts already.

And so we turn to ML.

ML, which stands for MetaLanguage, apparently has historical precedents for being a first in many things. One of these is an explicit recognition of parametric polymorphism, which is the idea that an operation can have the same functionality regardless of the types of the data involved; the types can, in effect, be considered variables. Another ground-breaking aspect of ML is an explicit type inference system. Similar to Haskell, an ML program will not run unless the compiler can directly prove that the program produces the correct types in every step of the computation.

Both of these features are benefits for the student of category theory. Most of our time in category theory will be spent working with very general assumptions on the capabilities of our data involved, and parametric polymorphism will be our main tool for describing what these assumptions are and for laying out function signatures.

As a side note, I’ve noticed through my ever-growing teaching experiences that one of the main things new programming students struggle with (specifically, after mastering the syntax and semantics of basic language constructs) is keeping their types straight. This is especially prominent in a language like Python (which is what I teach), where duck-typing is so convenient that it lulls the students into a false sense of security. Sooner as opposed to later they’ll add strings to numbers with the blind confidence that Python will simply get it. Around this time in their first semester of programming, I would estimate that type errors lie at the heart of 75% of the bugs my students face and fail to resolve before asking me for help. So one benefit of programming in ML for pedagogy is that it is impossible to run a program containing a type error. The second you try to run a program with bad types, the compiler points out what the expected type is and what the given (incorrect) type was. It takes a while to get used to type variables (and appeasing the type checker when you want to play fast and loose). But once you do you’ll find the only bugs that remain in your code are conceptual ones, which are of course much more rewarding and important bugs to fix.

So enough of this preamble. Let’s learn some ML!

## Warming Up: Basic Arithmetic and Conditionals

As one would expect, ML has variables and arithmetic which work in much the same way as other languages. Each variable declaration is prefixed by the word “val,” as below

val x = 7;
val y = 2;

These statements modify the global environment (the list of which variable names are associated to which values). Semicolons are required to terminate variable declarations at the global level. We can declare multiple variables in a single line using the “and” keyword

val x = 7 and y = 2;

As a precaution, “and” is only used in ML for syntactic conjunctions of variable/function declarations, and is only necessary when the two defined variable names are mutually defined in terms of each other (this can happen naturally for recursive function definitions). We will see in a moment that the logical and operation is denoted “andalso.”
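To preview the function syntax introduced later in this post, here is a minimal sketch of a case where "and" is genuinely required: two functions defined in terms of each other. The names isEven and isOdd are our own inventions.

```sml
(* Mutually recursive definitions must be joined with "and",
   since neither can be fully declared before the other. *)
fun isEven(0) = true
  | isEven(n) = isOdd(n - 1)
and isOdd(0) = false
  | isOdd(n) = isEven(n - 1)
```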

We can also use pattern matching to bind these two variables in one line, much the same way as it might work in Python:

val (x,y) = (7,2);

We note that while ML does not require us to specify the type of a variable, the type is known and ever present under the surface. If we run the above code through the sml compiler (which after running the contents of a file opens a REPL to further evaluate commands), we see the following output

[opening vars.sml]
val x = 7 : int
val y = 2 : int

The environment is printed out to the user, and it displays that the two types are “int.”

Arithmetic is defined for integers, and the standard operations we will use are the expected +, -, *, and (not a slash, but) div. Here are some examples, and here is a list of all basic operations on ints. A few things to note: the unary negation operator is a tilde (~), and the semicolons are only used to terminate statements in the REPL, which tells the compiler we’re ready for it to evaluate our code. Semicolons can also be used to place multiple statements on a single line of code. The “it” variable is a REPL construct which saves the most recent unbound expression evaluation.

- 3 + 6;
val it = 9 : int
- 6 div 3;
val it = 2 : int
- 2 * 9;
val it = 18 : int
- 2 - 9;
val it = ~7 : int
- ~9;
val it = ~9 : int

ML also has floating point arithmetic (in ML this type is called “real”), but treats it in a very prudish manner. Specifically (and this is a taste of the type checker doing its job too well), ML does not coerce types for you. If you want to multiply a real number and an integer, you have to first convert the int to a real and then multiply. An error will occur if you do not:

- val x = 4.0;
val x = 4.0 : real
- val y = 7;
val y = 7 : int
- x * y;
stdIn:5.1-5.6 Error: operator and operand don't agree [tycon mismatch]
operator domain: real * real
operand:         real * int
in expression:
x * y
- x * Real.fromInt(y);
val it = 28.0 : real

Here is a list of all operations on reals. We don’t anticipate using reals much, but it’s good to know that ML fervently separates them.

It seems odd that we’re talking so much about statements, because often enough we will be either binding function names (and tests) to the global environment, or restricting ourselves to local variable declarations. The latter has a slightly more complicated syntax, simply surrounding your variable declarations and evaluated code in a “let … in … end” expression. This will be a much more common construction for us.

let
val x = 7
val y = 9
in
(x + 2*y) div 3
end

The “in” expression is run with the combined variables from the ambient environment (the variables declared outside of the let) and those defined inside the let. The variables defined in the let leave scope after the “in” expression is evaluated, and the entire let expression as a whole evaluates to the result of evaluating the “in” expression. Clearly, an example shows what is going on much more directly than words.

The last major basic expression forms are the boolean expressions and operations. The type for booleans in ML is called “bool,” and the two possible values are “true” and “false.” They have the usual unary and binary operations, but the names are a bit weird. Binary conjunction is called “andalso,” while binary disjunction is called “orelse.”

val a = true and b = false;
val c = (a andalso b) orelse ((not a) andalso (not b));

But aside from that, boolean expressions work largely as one would expect. There are the six standard numerical comparison functions, where testing for equality is given by a single equals sign (in most languages, comparison for equality is ==), and inequality is given by the diamond operator <>. The others are, as usual, <, <=, >, >=.

- 6 = 7;
val it = false : bool
- 6 = 6;
val it = true : bool
- 6 < 7;
val it = true : bool
- 7 <= 6;
val it = false : bool
- 6 <> 7;
val it = true : bool

ML also has the standard if statement, which has the following syntax, which is more or less the same as most languages:

- val x = if 6 < 7 then ~1 else 4;
val x = ~1 : int

ML gives the programmer more or less complete freedom with whitespace, so any of these expressions can be spread out across multiple lines if the writer desires.

val x = if 6 < 7
then
~1
else
4

This can sometimes be helpful when defining things inside of let expressions inside of function definitions (inside of other function definitions, inside of …).

So far the basics are easy, if perhaps syntactically different from most other languages we’re familiar with. So let’s move on to the true heart of ML and all functional programming languages: functions.

## Functions and cases, recursion

Now that we have basic types and conditions, we can start to define some simple functions. In the global environment, functions are defined the same way as values, using the word “fun” in the place of “val.” For instance, here is a function that adds 3 to a number.

fun add3(x) = x+3

The left hand side is the function signature, and the right hand side is the body expression. As in Racket, and distinct from most imperative languages, a function evaluates to whatever the body expression evaluates to. Calling functions has two possible syntaxes:

add3(5)
(add3 5)

In other words, if the function application is unambiguous, parentheses aren’t required. Otherwise, one can specify precedence using parentheses in either Racket (Lisp) style or in standard mathematical style.

The most significant difference between ML and most other programming languages is that ML’s functions have case-checking. That is, we can specify what action is to be taken based on the form of the argument, and these cases are completely disjoint in the function definition (no if statements are needed).

For instance, we could define an add3 function which nefariously does the wrong thing when the user inputs 7.

fun add3(7) = 2
| add3(x) = x+3

The vertical bar is read “or,” and the idea is that the possible cases for the function definition must be written in most-specific to least-specific order. For example, interchanging the orders of the add3 function cases gives the following error:

- fun add3(x) = x+3
=   | add3(7) = 2;
stdIn:13.5-14.16 Error: match redundant
          x => ...
    -->   7 => ...

Functions can call themselves recursively, and this is the main way to implement loops in ML. For instance (and this is quite an inefficient example), I could define a function to check whether a number is even as follows.

fun even(0) = true
| even(n) = not(even(n-1))

Don’t cringe too visibly; we will see recursion used in less horrifying ways in a moment.

Functions with multiple arguments are similarly easy, but there are two semantic possibilities for how to define the arguments. The first, and simplest is what we would expect from a typical language: put commas in between the arguments.

fun add(x,y) = x+y

When one calls the add function, one is forced to supply both arguments immediately. This is usually how programs are written, but it can often be convenient to supply only one argument and defer the second until later.

If this sounds like black magic, you can thank mathematicians for it. The technique is called currying, and the idea stems from the lambda calculus, in which we can model all computation using just functions (with a single argument) as objects, and function application. Numbers, arithmetic, lists, all of these things are modeled in terms of functions and function calls; the amazing thing is that everything can be done with just these two tools. If readers are interested, we could do a post or two on the lambda calculus to see exactly how these techniques work; the fun part would be that we can actually write programs to prove the theorems.

Function currying is built-in to Standard ML, and to get it requires a minor change in syntax. Here is the add function rewritten in a curried style.

fun add(x)(y) = x+y

Now we can, for instance, define the add3 function in terms of add as follows:

val add3 = add(3)

And we can curry the second argument by defining a new function which defers the first argument appropriately.

fun add6(x) = add(x)(6)

Of course, in this example addition is commutative, so which argument you defer makes no difference.

We should also note that we can define anonymous functions as values (for instance, in a let expression) using this syntax:

val f = (fn x => x+3)

The "fn x => x+3" is just like a "lambda x: x+3" expression in Python, or a "(lambda (x) (+ x 3))" in Racket. Note that one can also define functions using the "fun" syntax in a let expression, so this is truly only for use-once function arguments.
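As a quick sketch of the use-once style, an anonymous function can be handed directly to a higher-order function such as List.map from the standard basis:

```sml
(* Squaring a list with a throwaway anonymous function. *)
- List.map (fn x => x * x) [1, 2, 3];
val it = [1,4,9] : int list
```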

## Tuples, Lists, and Types

As we’ve discovered already, ML figures out the types of our expressions for us. That is, if we define the function “add” as above (in the REPL),

- fun add(x,y) = x+y;
val add = fn : int * int -> int

then ML is smart enough to know that “add” is to accept a pair of two ints (we’ll get to what the asterisk means in a moment) and return an int.

The curried version is similarly intuited:

- fun add(x)(y) = x+y;
val add = fn : int -> int -> int

The parentheses are implied here: int -> (int -> int). So this is a function which accepts as input an int, and produces as output another function which accepts an int and returns an int.

But, what if we’d like to use “add” to add real numbers? ML simply won’t allow us to; it will complain that we’re providing the wrong types. In order to make things work, we can tell ML that the arguments are reals, and it will deduce that “+” here means addition on reals and not addition on ints. This is one awkward thing about ML; while the compiler is usually able to determine the most general possible type for our functions, it has no general type for elements of a field, and instead defaults to int whenever it encounters arithmetic operations. In any case, this is how to force a function to have a type signature involving reals:

- fun add(x:real, y:real) = x+y;
val add = fn : real * real -> real

If we’re going to talk about types, we need to know all of ML’s syntax for its types. Of course there are the basics (int, real, bool). Then there are function types: int -> int is the type of a function which accepts one int and returns an int.

We’ll see two new types in this section, and the first is the tuple. In ML, tuples are heterogeneous, so we can mix types in them. For instance,

- val tup = (true, 7, 4.4);
val tup = (true,7,4.4) : bool * int * real

Here the asterisk denotes the tuple type, and one familiar with set theory can think of a tuple as an element of the product of sets, in this case

$\displaystyle \left \{ \textup{true}, \textup{false} \right \} \times \mathbb{Z} \times \mathbb{R}$

Indeed, there is a distinct type for each possible kind of tuple. A tuple of three ints (int * int * int) is a distinct type from a tuple of three booleans (bool * bool * bool). When we define a function that has multiple arguments using the comma syntax, we are really defining a function which accepts as input a single argument which is a tuple. This parallels exactly how functions on multiple arguments work in classical mathematics.

The second kind of compound type we’ll use quite often is the list. Lists are distinct from tuples in ML in that lists must be homogeneous. So a list of integers (which has type “int list”) is a different type from a list of booleans.

The operations on lists are almost identical to those in Haskell. To construct explicit lists, use square brackets with comma-delimited elements. To construct them one piece at a time, use the :: list constructor operation. For those readers who haven’t had much experience with lists in functional programming: all lists are linked lists, and the :: operation prepends a single value to the front of a given list. Here are some examples.

- val L = [1,2,3];
val L = [1,2,3] : int list

- val L = 1::(2::(3::nil));
val L = [1,2,3] : int list

The "nil" expression is the empty list, as is the empty-square-brackets expression "[]".

There is a third kind of compound type called the record, and it is analogous to a C struct, where each field is named. We will mention this in the future once we have a need for it.

The most important thing about lists and tuples in ML is that functions which operate on them don’t always have an obvious type signature. For instance, here is a function which takes in a tuple of two elements and returns the first element.

fun first(x,y) = x

What should the type of this function be? We want it to be able to work with any tuple, no matter the type. As it turns out, ML was one of the first programming languages to allow this sort of construction to work. The formal name is parametric polymorphism. It means that a function can operate on many different kinds of types (because the actual computation is the same regardless of the types involved), and the full type signature can be deduced for each function call based on the parameters for that particular call.

The other kind of polymorphism is called ad-hoc polymorphism. Essentially this means that multiple (very different) operations can have the same name. For instance, addition of integers and addition of floating point numbers require two very different sets of instructions, but they both go under the name of +.

What ML does to make its type system understand parametric polymorphism is introduce so-called type variables. A type variable is any string prefixed by a single quote, e.g. ‘a, and they can represent any type. When ML encounters a function with an ambiguous type signature, it decides what the most general possible type is (which usually involves a lot of type variables), and then uses that type.

So to complete our example, the first function has the type

'a * 'b -> 'a

As a side note, the “first” function is in a sense the “canonical” operation that has this type signature. If nothing is known about the types, then no other action can happen besides the projection. There are some more interesting things to be said about such canonical operations (for instance, could we get away with not having to define them at all?).

The analogous version for lists is as follows. Note that in order to decompose a list into its first element and the tail list, we need to use pattern matching.

fun listFirst([x]) = x
| listFirst(head::tail) = head

And this function has the type signature 'a list -> 'a. As a slightly more complicated example (where we need recursion), we can write a function to test for list membership.

fun member(x, nil) = false
  | member(x, head::tail) = if x = head
                            then true
                            else member(x, tail)

If you run this program and see some interesting warning messages, see this StackOverflow question for a clarification.

## Defining New Types

The simplest way to define a new type is to just enumerate all possibilities. For instance, here is an enumerated datatype with four possibilities.

datatype maths = algebra | analysis | logic | computation

Then we can define functions which operate on those types using pattern matching.

fun awesome(algebra) = true
| awesome(analysis) = false
| awesome(logic) = false
| awesome(computation) = true

And this function has type maths -> bool (don’t take it too seriously :-)). We can also define data types whose constructors require arguments.

datatype language = functional of string*bool
| imperative of string
| other

Here we define a language to be functional or imperative. The functional type consists of a name and a boolean representing whether it is purely functional, while the imperative type just consists of a name. We can then construct these types by treating the type constructors as if they were functions.

val haskell = functional("Haskell", true) and
java = imperative("Java") and
prolog = other;
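As a sketch of how such values are consumed, a function can pattern match on the constructors. The function languageName and the "unknown" fallback string are our own inventions, not part of any library; we repeat the datatype so the snippet stands alone.

```sml
(* Repeating the datatype from above for a self-contained sketch. *)
datatype language = functional of string*bool
                  | imperative of string
                  | other

(* Extract a printable name from a language value by matching
   on each constructor. *)
fun languageName(functional(name, _)) = name
  | languageName(imperative(name)) = name
  | languageName(other) = "unknown"

(* languageName(functional("Haskell", true)) evaluates to "Haskell" *)
```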

Perhaps more useful than this is to define types using type variables. A running example we will use for the remainder of this post is a binary tree of homogeneous elements at each node. Defining such types is easy: all one needs to do is place the appropriate type (with parentheses to clarify when the description of a type starts or ends) after the “of” keyword.

datatype 'a Tree = empty
| leaf of 'a
| node of (('a Tree) * 'a * ('a Tree))

We can create instances of an integer tree as expected:

val t2 = node(node(leaf(2), 3, leaf(4)), 6, leaf(8))

## The TreeSort Algorithm

We can define a host of useful operations on trees (of any type). For instance, below we compute the breadth of a tree (the total number of leaves), the depth of a tree (the maximal length of a path from the root to a leaf), and the ability to flatten a tree (traverse the tree in order and place all of the values into a list). These first two are a nice warm-up.

fun breadth(empty) = 0
  | breadth(leaf(_)) = 1
  | breadth(node(left, _, right)) = breadth(left) + breadth(right)

Here the underscore indicates a pattern match with a variable we won’t actually use. This is more space efficient for the compiler; it can avoid adding extra values to the current environment in a potentially deep recursion.

fun depth(empty) = 0
| depth(leaf(_)) = 1
| depth(node(left, _, right)) =
let
val (lDepth, rDepth) = (1 + depth(left), 1 + depth(right))
in
if lDepth > rDepth then lDepth else rDepth
end

This function should be self-explanatory.

fun flatten(empty) = []
| flatten(leaf(x)) = [x]
| flatten(node(left, x, right)) =
flatten(left) @ (x :: flatten(right))

Here the @ symbol is list concatenation. This is not quite the most efficient way to do this (we are going to write a forthcoming post about tail-call optimization, and there we will see why), but it is certainly the clearest. In the final recursive call, we traverse the left subtree first, flattening it into a list in order. Then we flatten the right hand side, and put the current node’s element in between the two flattened parts.

Note that if our tree is ordered, then flatten will produce a strictly increasing list of the elements in the tree. For those readers unfamiliar with ordered binary trees, for all intents and purposes this is an “int Tree” and we can compare the values at different nodes. Then an ordered binary tree is a tree which satisfies the following property for each node: all of the values in the left child’s subtree are strictly smaller than the current node’s value, and all of the values in the right child’s subtree are greater than or equal to the current node’s value.

Indeed, we can use the flatten function as part of a simple algorithm to sort lists of numbers. First we insert the numbers in the unsorted list into a tree (in a way that preserves the ordered property at each step), and then we flatten the tree into a sorted list. This algorithm is called TreeSort, and the insert function is simple as well.

fun insert(x, empty) = leaf(x)
| insert(y, leaf(x)) = if x <= y
then node(empty, x, leaf(y))
else node(leaf(y), x, empty)
| insert(y, node(left, x, right)) =
if x <= y
then node(left, x, insert(y, right))
else node(insert(y, left), x, right)

If we’re at a nonempty node, then we just recursively insert into the appropriate subtree (taking care to create new interior nodes if we’re at a leaf). Note that we do not do any additional computation to ensure that the tree is balanced (that each node has approximately as many children in its left subtree as in its right). Doing so would digress from the point of this primer, but rest assured that the problem of keeping trees balanced has long been solved.

Now the process of calling insert on every element of a list is just a simple application of fold. ML does have the standard map, foldl, foldr, and filter functions, although filter is not available at the top level (one needs to reference the List module via List.filter).

In any case, foldl is written in the currying style and building the tree is a simple application of it. As we said, the full sorting algorithm is just the result of flattening the resulting tree with our in-order traversal.

fun sort(L) = flatten(foldl(insert)(empty)(L))

So there you have it! One of the simplest (efficient) sorting algorithms I can think of in about twelve lines of code.
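As a sanity check (assuming the tree datatype, insert, flatten, and sort as defined above), a REPL session should look something like this:

```sml
- sort([6, 2, 9, 1]);
val it = [1,2,6,9] : int list
```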

## Free Monoid Homomorphisms: A More Advanced Example

Just to get a quick taste of what our series on category theory will entail, let’s write a program with a slightly more complicated type signature. The idea hinges on the fact that lists form what’s called a free monoid. In particular,

Definition: A monoid is a set $X$ equipped with an associative binary operation $\cdot: X \times X \to X$ and an identity element $1 \in X$ for which $x \cdot 1 = 1 \cdot x = x$ for all $x \in X$.

Those readers who have been following our series on group theory will recognize a monoid as a group with less structure (there is no guarantee of inverses for the $\cdot$ operation). The salient fact for this example is that the set of ML values of type $\textup{'a list}$ forms a monoid. The operation is list concatenation, and the identity element is the empty list. Call the empty list nil and the append operation @, as it is in ML.

More than that, $\textup{'a list}$ forms a free monoid, and the idea of freeness has multiple ways of realization. One sort of elementary way to understand freeness is that the only way to use the binary operation to get to the identity is to have the two summands both be the identity element. In terms of lists, the only way to concatenate two lists to get the empty list is to have both pieces be empty to begin with.

Another, more mature (more category-theoretical) way to think about freeness is to say it satisfies the following “universal” property. Call $A$ the set of values of type ‘a, and $[A]$ the set of values of type $\textup{'a list}$, and suppose that $(B, \cdot_B, 1_B)$ is the datum of an arbitrary monoid. The universal property says that if we are given a function $f: A \to B$, and we take the canonical map $g: A \to [A]$ which maps an element $a \in A$ to the single-entry list $[a] \in [A]$, then there is a unique way to extend $f$ to a monoid homomorphism $f^* : [A] \to B$ on lists, so that $f^*g = f$. We have mentioned monoid homomorphisms on this blog before in the context of string metrics, but briefly a monoid homomorphism respects the monoid structure in the sense that (for this example) $f^*(a \textup{ @ } b) = f^*(a) \cdot_B f^*(b)$ no matter what $a, b$ are.

This was quite a mouthful, but the data is often written in terms of a so-called “commutative diagram,” whose general definition we will defer to a future post. The diagram for this example looks like:

The dashed line says we are asserting the existence of $f^*$, and the symbol $\exists !$ says this function exists and is uniquely determined by $f, g$. The diagram “commutes” in the sense that traveling from $A$ to $B$ along $f$ gives you the same computational result as traveling by $g$ and then $f^*$. The reason for the word “universal” will become clear in future posts, but vaguely it’s because the set $[A]$ is a unique “starting place” in a special category.

If this talk is too mysterious, we can just go ahead and prove that $f^*$ exists by writing a program that computes the function transforming $f \mapsto f^*$. We call the function “listMonoidLift” because it “lifts” the function $f$ from just operating on $A$ to the status of a monoid homomorphism. Very regal, indeed.

Part of the beauty of this function is that a number of different list operations (list length, list sum, member tests, map, etc.), when viewed under this lens, all become special cases of this theorem! By thinking about things in terms of monoids, we write less code, and more importantly we recognize that these functions all have the same structural signature. Perhaps one can think of it like parametric polymorphism on steroids.

fun listMonoidLift(f:('a->'b), (combine:(('b * 'b) -> 'b), id:'b)) =
   let
      fun f'(nil) = id
        | f'(head::tail) = combine(f(head), f'(tail))
   in
      f'
   end

Here we specified the types of the input arguments to be completely clear what’s what. The first argument is our function $f$ as in the above diagram, and the second two arguments together form the data of our monoid (the set $B$ is implicitly the set of values of the type ‘b determined at the time of the function call). Now let’s see how the list summation and list length functions can be written in terms of the listMonoidLift function.

fun plus(x, y) = x + y
val sum = listMonoidLift((fn x => x), (plus, 0))
val length = listMonoidLift((fn x => 1), (plus, 0))

The integers with plus as the operation and zero as the identity form the monoid $B$ in both cases (and in the summation example $B$ happens to coincide with $A$), but in the summation case the function $f$ is the identity and for length it is the constant $1$ function.

As a more interesting example, see how list membership is a lift.

fun member(x) = listMonoidLift((fn y => y = x),
                               ((fn (a,b) => a orelse b), false))

Here the member function is curried; it has type ‘a -> ‘a list -> bool (though it’s a bit convoluted, since listMonoidLift is what returns the ‘a list -> bool part of the type signature). Here the $B$ monoid is the monoid of boolean values, where the operation is logical “or” and the identity is false. It is a simple exercise to check the monoid axioms, though note that unlike the list monoid, $B$ is not free: $\textup{true or true} = \textup{true}$ is a nontrivial relation.

Now the mapping $f(y)$ is the test to see if $y$ is the same object as $x$. The lift to the list monoid will compute the logical “or” of all evaluations of $f$ on the values.
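Since the post’s examples are in ML, here is a sketch of the same lift in Python for readers who want to experiment; the names (`list_monoid_lift` and so on) are our own, not from any library.

```python
def list_monoid_lift(f, combine, identity):
    """Lift f : A -> B to a monoid homomorphism [A] -> B,
    where (combine, identity) is the data of the target monoid B."""
    def lifted(lst):
        result = identity
        for x in lst:
            result = combine(result, f(x))
        return result
    return lifted

# list sum and list length as lifts into the monoid (int, +, 0)
list_sum = list_monoid_lift(lambda x: x, lambda a, b: a + b, 0)
list_length = list_monoid_lift(lambda x: 1, lambda a, b: a + b, 0)

# membership as a lift into the monoid (bool, or, False)
def member(x):
    return list_monoid_lift(lambda y: y == x, lambda a, b: a or b, False)
```

For instance, `list_sum([1, 2, 3])` gives `6` and `member(2)([1, 2, 3])` gives `True`, just as the ML versions would.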

Indeed, (although this author hates bringing up too many buzzwords where they aren’t entirely welcome) the monoid lifting operation we’ve just described is closely related to the MapReduce framework (without all the distributed computing parts). Part of the benefit of MapReduce is that the programmer need only define the Map() and Reduce() functions (the heart of the computation) and MapReduce does the rest. What this example shows is that defining the Map() function can be even simpler: one only needs define the function $f$, and Map() is computed as $f^*$ automatically. The Reduce() part is simply the definition of the target monoid $B$.

Just to drive this point home, we give the reader a special exercise: write map as a special case of listMonoidLift. The result (the map function) should have one of the two type signatures:

map : ('a -> 'b) * ('a list) -> 'b list
map : ('a -> 'b) -> 'a list -> 'b list

As a hint, the target monoid should also be a list monoid.

Part of why this author is hesitant to bring up contemporary software in discussing these ideas is because the ideas themselves are far from contemporary. Burstall and Landin’s 1969 text, Programs and Their Proofs (which this author would love to find a copy of), details this exact reduction and other versions of this idea in a more specific kind of structure called “free algebras.” So MapReduce (minus the computer programs and distributed systems) was a well-thought-out idea long before the internet or massively distributed systems were commonplace.

In any case, we’ll start the next post in this series right off with the mathematical concept of a category. We’ll start slow and detail many examples of categories that show up both in (elementary) mathematics and computing, and then move on to universal properties, functors, natural transformations, and more, implementing the ideas all along the way.

Until then!

# Groups — A Second Primer

## The First Isomorphism Theorem

The meat of our last primer was a proof that quotient groups are well-defined. One important result that helps us compute groups is a very easy consequence of that well-definedness.

Recall that if $G,H$ are groups and $\varphi: G \to H$ is a group homomorphism, then the image of $\varphi$ is a subgroup of $H$. Also the kernel of $\varphi$ is the normal subgroup of $G$ consisting of the elements which are mapped to the identity under $\varphi$. Moreover, we proved that the quotient $G / \ker \varphi$ is a well-defined group, and that every normal subgroup $N$ is the kernel of the quotient map $G \to G/N$. These ideas work together to compute groups with the following theorem. Intuitively, it tells us that the existence of a homomorphism between two groups gives us a way to relate the two groups.

Theorem: Let $\varphi: G \to H$ be a group homomorphism. Then the quotient $G/ \ker \varphi$ is isomorphic to the image of $\varphi$. That is,

$G/ \ker \varphi \cong \varphi(G)$

As a quick corollary before the proof, if $\varphi$ is surjective then $H \cong G / \ker \varphi$.

Proof. We define an explicit map $f : G/ \ker \varphi \to \varphi(G)$ and prove it is an isomorphism. Let $g \ker \varphi$ be an arbitrary coset and set $f(g \ker \varphi) = \varphi(g)$. First of all, we need to prove that this definition does not depend on the choice of a coset representative. That is, if $g \ker \varphi = g' \ker \varphi$, then $\varphi(g) = \varphi(g')$. But indeed, for any subgroup $N$ we have by definition that $gN = g'N$ if and only if $g^{-1}g' \in N$; here $N = \ker \varphi$, so $\varphi(g)^{-1}\varphi(g') = \varphi(g^{-1}g') = 1$, and hence $\varphi(g) = \varphi(g')$.

It is similarly easy to verify that $f$ is a homomorphism:

$f((g \ker \varphi )(g' \ker \varphi)) = \varphi(gg') = \varphi(g)\varphi(g') = f(g \ker \varphi) f(g' \ker \varphi)$

It suffices now to show that $f$ is a bijection. It is trivially surjective, since every element of the image $\varphi(G)$ has the form $\varphi(g) = f(g \ker \varphi)$. It is injective since if $f(g \ker \varphi) = 1$, then $\varphi(g) = 1$ and hence $g \in \ker \varphi$, so the coset $g \ker \varphi = 1 \ker \varphi$ is the identity element. So $f$ is an isomorphism. $\square$

Let’s use this theorem to compute some interesting things.

Denote by $D_{2n}$ the group of symmetries of the regular $n$-gon. That is, $D_{16}$ is the symmetry group of the regular octagon and $D_{8}$ is the symmetry group of the square (the $2n$ notation is because this group always has order $2n$). We want to relate $D_{16}$ to $D_8$. To do this, let’s define a homomorphism $f:D_{16} \to D_8$ by sending a one-eighth rotation $\rho$ of the octagon to a one-fourth rotation of the square $f(\rho) = \rho^2$, and using the same reflection for both ($f(\sigma) = \sigma$). It is easy to check that this is a surjective homomorphism, and moreover the kernel is $\left \{ 1, \rho^4 \right \}$. That is, $D_8 \cong D_{16}/ \left \{ 1, \rho^4 \right \}$.
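We can sanity-check this example computationally. The Python sketch below (our own encoding: a dihedral element $\rho^k \sigma^e$ becomes a pair $(k, e)$) verifies that the map is a homomorphism and that its kernel is $\left \{ 1, \rho^4 \right \}$.

```python
def dihedral_mul(n):
    """Multiplication in D_{2n}: (k, e) stands for rho^k sigma^e,
    using the relation sigma rho = rho^{-1} sigma."""
    def mul(x, y):
        (a, e), (b, f) = x, y
        return ((a + (-1) ** e * b) % n, (e + f) % 2)
    return mul

mul16 = dihedral_mul(8)  # D_16, symmetries of the octagon
mul8 = dihedral_mul(4)   # D_8, symmetries of the square

def phi(x):
    """rho^k sigma^e in D_16 maps to (quarter turn)^k sigma^e, i.e. (k mod 4, e)."""
    (k, e) = x
    return (k % 4, e)

D16 = [(k, e) for k in range(8) for e in range(2)]
is_hom = all(phi(mul16(x, y)) == mul8(phi(x), phi(y)) for x in D16 for y in D16)
kernel = [x for x in D16 if phi(x) == (0, 0)]  # the elements 1 and rho^4
```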

Here is a more general example. If $G, H$ are groups of relatively prime order, then there are no nontrivial homomorphisms $G \to H$. In order to see this, note that $|G/ \ker \varphi| = |G| / |\ker \varphi|$ as a simple consequence of Lagrange’s theorem. Indeed, by the first isomorphism theorem this quantity is equal to $|\varphi(G)|$. So $|G| = | \ker \varphi| |\varphi(G)|$. That is, the order of $\varphi(G)$ divides the order of $G$. But it also divides the order of $H$ because $\varphi(G)$ is a subgroup of $H$. In other words, the order of $\varphi(G)$ is a common factor of the orders of $G$ and $H$. By hypothesis, the only such number is 1, and so $|\varphi(G)| = 1$ and $\varphi(G)$ is the trivial group.
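For a concrete instance, take $G = \mathbb{Z}/3\mathbb{Z}$ and $H = \mathbb{Z}/5\mathbb{Z}$. A homomorphism is determined by the image $g$ of the generator $1$, and that image must satisfy $3g \equiv 0 \pmod 5$. A brute-force check (in Python, purely for illustration) confirms only the trivial map survives.

```python
# candidate images of the generator 1 of Z/3Z inside Z/5Z:
# g must satisfy 3*g = 0 (mod 5), since 1 + 1 + 1 = 0 in Z/3Z
valid_images = [g for g in range(5) if (3 * g) % 5 == 0]  # only g = 0 works
```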

We will use the first isomorphism theorem quite a bit on this blog. Because it is such a common tool, it is often used without explicitly stating the theorem.

## Generators

One extremely useful way to describe a subgroup is via a set of generators. The simplest example is for a single element.

Definition: Let $G$ be a group and $x \in G$. Then the subgroup generated by $x$, denoted $\left \langle x \right \rangle$, is the smallest subgroup of $G$ containing $x$. More generally, if $S \subset G$ then the subgroup generated by $S$ is the smallest subgroup containing $S$.

This definition on its own is not very useful, but a more concrete description is easy to derive. On one hand, the identity element must always be in $\left \langle x \right \rangle$. Since $x \in \left \langle x \right \rangle$ and it’s a subgroup, we must have that $x^{-1} \in \left \langle x \right \rangle$. Moreover, all powers of $x$ must be in the subgroup, as must all powers of the inverse (equivalently, inverses of the powers). In fact that is all that is necessary. That is,

$\left \langle x \right \rangle = \left \{ \dots, x^{-2}, x^{-1}, 1, x, x^2, \dots \right \}$

For finite groups, this list contains only finitely many distinct elements. And in fact, the inverse of $x$ will be a power of $x$ as well. To see this, note that if we keep taking powers of $x$, eventually one of those will be the identity element. Specifically, some power of $x$ must repeat, and if $x^n = x^m$ with $n > m$, then $x^{n-m} = 1$. Hence $x^{-1} = x^{n-m-1}$.
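This is easy to watch happen in a small example. The Python sketch below (helper names are our own) computes $\left \langle 3 \right \rangle$ in the multiplicative group of nonzero residues mod 7, and the inverse of $3$ shows up among its powers.

```python
def cyclic_subgroup(x, op, identity):
    """Generate <x> in a finite group by taking successive powers of x."""
    elements = [identity]
    power = op(identity, x)
    while power != identity:
        elements.append(power)
        power = op(power, x)
    return elements

mul7 = lambda a, b: (a * b) % 7  # nonzero residues mod 7 under multiplication
subgroup = cyclic_subgroup(3, mul7, 1)  # powers of 3: 1, 3, 2, 6, 4, 5
```

Here $3$ has order $6$, and its inverse $5 = 3^5$ appears as the predicted power.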

Subgroups generated by more than one element are more difficult to write down. For example, if $x,y \in G$ then $x,y$ may have a nontrivial relationship. That is, even though all possible products involving $x$ and $y$ are in the subgroup, it is very difficult to determine whether two such products are the same (in fact, one very general formulation of this problem is undecidable!). Oftentimes one can find a set of generators which generates the entire group $G$. In this case, we say $G$ is generated by those elements.

A familiar example is the symmetry group of the square. As it turns out this group is generated by $\rho, \sigma$, where $\rho$ is a quarter turn and $\sigma$ is a reflection across some axis of symmetry. The relationship between the two elements is succinctly given by the equality $\rho \sigma \rho \sigma = 1$. To see this, try holding out your hand with your palm facing away; rotate your hand clockwise, flip it so your fingers are pointing left, rotate again so your fingers are pointing up, and then flip to get back to where you started; note that the two flips had the same axis of symmetry (the up-down axis). The other (obvious) relationships are that $\rho^4 = 1$ and $\sigma^2 = 1$. If we want to describe the group in a compact form, we write

$G = \left \langle \rho, \sigma | \rho^4, \sigma^2, \rho \sigma \rho \sigma \right \rangle$

This is an example of a group presentation. The left hand side is the list of generators, and the right hand side gives a list of relators, where each one is declared to be the identity element. In particular, the existence of a presentation with generators and relators implies that all possible relationships between the generators can be deduced from the list of relators (kind of like how all relationships between sine and cosine can be deduced from the fact that $\sin^2(x) + \cos^2(x) = 1$). Indeed, this is the case for the symmetry group (and all dihedral groups); there are only three distinct equations describing the behavior of rotations and reflections.
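We can verify the three relators mechanically. The Python sketch below uses our own encoding of a square symmetry $\rho^k \sigma^e$ as a pair $(k, e)$; all three relators come out to the identity $(0, 0)$.

```python
def d8_mul(x, y):
    """Multiplication in D_8 (symmetries of the square); (k, e) = rho^k sigma^e."""
    (a, e), (b, f) = x, y
    return ((a + (-1) ** e * b) % 4, (e + f) % 2)

identity, rho, sigma = (0, 0), (1, 0), (0, 1)

def power(x, n):
    result = identity
    for _ in range(n):
        result = d8_mul(result, x)
    return result

# the three relators: rho^4, sigma^2, and rho sigma rho sigma
rho4 = power(rho, 4)
sigma2 = power(sigma, 2)
rsrs = d8_mul(d8_mul(d8_mul(rho, sigma), rho), sigma)
```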

Here’s a quick definition we will often refer to in the future: a group is called cyclic if it is generated by a single element. Here are some interesting exercises for the beginning reader to puzzle over, which are considered basic facts for experienced group theorists:

• Every subgroup of a cyclic group is cyclic.
• There is only one infinite cyclic group up to isomorphism: $\mathbb{Z}$.
• Every finite cyclic group is isomorphic to $\mathbb{Z}/n\mathbb{Z}$ for some $n$.

Finally, we will call a group finitely generated if it is generated by a finite set of elements and finitely presented if it has a presentation with finitely many generators and relators. Just to give the reader a good idea about how vast this class of groups is: many basic conjectured facts about finitely generated groups which have “only” one relator are still open problems. So trying to classify groups with two relators (or finitely many relators) is still a huge leap away from what we currently understand. As far as this author knows, this subject has been largely abandoned after a scant few results were proved.

## Products and Direct Sums

Just as one does in every field of math, in order to understand groups better we want to decompose them into smaller pieces which are easier to understand. Two of the main ways to do this are via direct products and direct sums.

Definition: Let $(G, \cdot_G),(H, \cdot_H)$ be groups. The product group $G \times H$ is defined to have the underlying set $G \times H$ (pairs of elements), and the operation is defined by entrywise multiplication in the appropriate group.

$(g,h) \cdot (g', h') = (g \cdot_G g', h \cdot_H h')$

Of course, one must verify that this operation actually defines a group according to the usual definition, but this is a simple exercise. One should note that there are two canonical subgroups of the direct product. Define by $p_1 : G \times H \to G$ the projection onto the first coordinate (that is, $(a,b) \mapsto a$). This map is obviously a homomorphism, and its kernel is the subgroup of elements $(1,h), h \in H$. That is, we can identify $H$ as a subgroup of $G \times H$. Similarly, the kernel of $p_2(a,b) = b$ identifies $G$ as a subgroup of $G \times H$.
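To make the definition concrete, here is a small Python sketch (the names are ours) of $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ with its first projection.

```python
def product_op(op_g, op_h):
    """Entrywise operation on the product group G x H."""
    def op(x, y):
        return (op_g(x[0], y[0]), op_h(x[1], y[1]))
    return op

# Z/2Z x Z/3Z under componentwise addition
op = product_op(lambda a, b: (a + b) % 2, lambda a, b: (a + b) % 3)

def p1(pair):
    return pair[0]  # projection onto the first factor

# the kernel of p1 is the copy of Z/3Z sitting inside the product
kernel = [(g, h) for g in range(2) for h in range(3) if p1((g, h)) == 0]
```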

Note that this allows us to make some very weird groups. For instance, by induction a single direct product allows us to define products of arbitrarily many groups. Note that reordering the terms of such a product does not change the isomorphism class of the group (e.g. $\mathbb{Z} \times D_8 \cong D_8 \times \mathbb{Z}$). Additionally, there is nothing that stops us from defining infinite product groups. The elements of such a group are sequences of elements from the corresponding multiplicands. For example, the group $\mathbb{Z} \times \mathbb{Z} \times \dots$ is the group of sequences of integers, where addition is defined termwise and the identity is the sequence of all zeroes.

Now infinite products can be particularly unwieldy, but we may still want to talk about groups constructed from infinitely many pieces. Although we have no motivation for this, one such example is the group of an elliptic curve. In order to tame the unwieldiness, we define the following construction which takes an infinite product, and allows only those elements which have finitely many non-identity terms.

Definition: Let $G_{\alpha}$ be a family of groups. Define the direct sum of the $G_{\alpha}$, denoted by $\bigoplus_{\alpha} G_{\alpha}$, to be the subgroup of $\prod_{\alpha} G_{\alpha}$ of elements $(g_0, g_1, \dots)$ where all but finitely many $g_i$ are the identity in the corresponding $G_i$.

[As a quick side note, this is mathematically incorrect: since the family of groups need not be countable, the $g_i$ may not be enumerable. One can fix this by defining the elements as functions on the index set instead of sequences, but we are too lazy to do this.]

Note that we can define a direct sum of only finitely many groups, and for example this would be denoted $G \oplus H$, but finite sums are trivially the same as finite products. In fact, in this primer and all foreseeable work on this blog, we will stick to finite direct sums of groups, and implicitly identify them with direct products.

Finally, in terms of notation we will write $G^n$ for a direct product of $G$ with itself $n$ times, and $G^{\oplus n}$ for the direct sum of $G$ with itself $n$ times. As we just mentioned, these two are identical, so we will just default to the former for simplicity of reading.

One might wonder: why do we even distinguish these two constructions? The answer is somewhat deep, and we will revisit the question in our future series on category theory. For now, we can simply say that the distinction matters in the case of infinite products and infinite sums, which we won’t discuss anyway except in passing curiosity.

## The Classification of Finitely Generated Abelian Groups

Now we have finally laid enough groundwork to state the first big classification theorem of group theory. In words, it says that any finitely generated abelian group can be written as a direct sum of things isomorphic to $\mathbb{Z}$ and $\mathbb{Z}/n\mathbb{Z}$ for various choices of $n$. Moreover, the choices of $n$ are related to each other.

In particular, the theorem is stated as follows:

Theorem: Let $G$ be a finitely generated abelian group. Then $G$ is isomorphic to a group of the form:

$\displaystyle \mathbb{Z}^m \oplus \mathbb{Z}/p_1^{n_1}\mathbb{Z} \oplus \dots \oplus \mathbb{Z}/p_k^{n_k}\mathbb{Z}$

where $p_i$ are (not necessarily distinct) primes. Moreover, $G$ is completely determined by the choices of primes and exponents above.

In particular, we name these numbers as follows. The $\mathbb{Z}^m$ part of the equation is called the free part, the exponent $m$ is called the rank of $G$, and the numbers $p_i^{n_i}$ are called the primary factors.

The proof of this theorem is beyond the scope of this blog, but any standard algebra text will have it. All we are saying here is that every finitely generated abelian group can be broken up into the part that has infinite order (the free part) and the part that has finite order (often called the torsion part), and that these parts are largely disjoint from each other. For finite groups this is a huge step forward: to classify all finite groups one now only needs to worry about nonabelian groups (although this is still a huge feat in its own right).
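As a tiny instance of the theorem, $\mathbb{Z}/12\mathbb{Z}$ has rank $0$ and primary factors $4 = 2^2$ and $3$. The Chinese remainder theorem gives the isomorphism explicitly, and we can verify it by brute force:

```python
# x -> (x mod 4, x mod 3) should be an isomorphism Z/12Z -> Z/4Z + Z/3Z
iso = {x: (x % 4, x % 3) for x in range(12)}

is_bijective = len(set(iso.values())) == 12
is_hom = all(
    iso[(x + y) % 12] == ((iso[x][0] + iso[y][0]) % 4, (iso[x][1] + iso[y][1]) % 3)
    for x in range(12)
    for y in range(12)
)
```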

A quick application of this theorem is as follows:

Corollary: Let $G$ be a finitely generated abelian group which has no elements of finite order. Then $G$ has no nontrivial relators except for those enforcing commutativity.

Proof. Indeed, $G \cong \mathbb{Z}^m$ for some $m$, and it has a presentation $\left \langle x_1, x_2, \dots, x_m | x_ix_jx_i^{-1}x_j^{-1}, i \neq j \right \rangle$, whose relators do nothing but enforce commutativity. $\square$

Borrowing the free terminology, such groups without relators are called free abelian groups. Indeed, there are also nonabelian “free” groups, and this is the last topic we will cover in this primer.

## Free Groups, and a Universal Property

Equivalently to “a group with no relators,” we can define a free group as a group which has presentation $\left \langle x_{\alpha} \right \rangle$ for some potentially infinite family of elements $x_{\alpha}$. If the number of generators is finite, say there are $n$ of them, then we call it the free group on $n$ generators.

The interesting thing here is that all possible products of the generating elements are completely distinct. For example, the free group on two generators $\left \langle a,b \right \rangle$ contains the elements $ab, aba, abab, b^3a^{-5}b^2a$, along with infinitely many others. The only way that two elements can be the same is if they can be transformed into each other by a sequence of inserting or deleting strings which are trivially the identity. For example, $abb^{-1}a = a^2$, but only because of our cancellations of $bb^{-1} = 1$, which holds in every group.
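The bookkeeping of inserting and deleting trivial strings can be made precise with reduced words. A short Python sketch (our own representation: a word is a list of (generator, sign) pairs):

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs until the word is fully reduced.

    A word is a list of (generator, sign) pairs, e.g.
    [('a', 1), ('b', 1), ('b', -1), ('a', 1)] is the word a b b^{-1} a."""
    stack = []
    for letter in word:
        if stack and stack[-1][0] == letter[0] and stack[-1][1] == -letter[1]:
            stack.pop()  # cancel x x^{-1} or x^{-1} x
        else:
            stack.append(letter)
    return stack

# a b b^{-1} a reduces to a a
reduced = reduce_word([('a', 1), ('b', 1), ('b', -1), ('a', 1)])
```

Two words represent the same element of the free group exactly when their reductions agree.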

There is another way to get free groups, and that is by taking free products. In particular, the group $\mathbb{Z} = \left \langle a \right \rangle$ is the free group on a single generator. Given two copies of $\mathbb{Z}$, we’d like to combine them in some way to get $\left \langle a,b \right \rangle$. More generally, given any two groups we’d like to define their free product $G * H$ to be the group which contains all possible words using elements in $G$ or $H$, which has no nontrivial relators, except for those already existing among the elements of $G$ and $H$ before taking a product.

Rigorously, this is very easy to do with group presentations. Given presentations $G = \left \langle x_i | r_j \right \rangle$ and $H = \left \langle y_k | s_m \right \rangle$, define the free product by the presentation

$G * H = \left \langle x_i, y_k | r_j, s_m \right \rangle$

For instance, the free product $\mathbb{Z}/3\mathbb{Z} * \mathbb{Z}/4\mathbb{Z}$ has presentation $\left \langle a,b | a^3, b^4 \right \rangle$. One interesting fact is that even if $G,H$ are finite, as long as one is nontrivial then the free product will be an infinite group. Another interesting fact, which we’ll explore in future posts on category theory, is that the free product of groups is “the same thing” as the disjoint union of sets. That is, these two operations play the same role in their respective categories.

This “role” is called the universal property of the free product. We will use this directly in our post on the fundamental group, and in general it just says that homomorphisms provided on the two pieces of the free product extend uniquely to a homomorphism of the product. The simpler form is the universal property of the free group, which says that the free group is the “most general possible” group which is generated by these generators.

Theorem (Universal Property of the Free Group): Let $S$ be a set of elements, and let $F(S)$ be the free group generated by the elements of $S$. Then any set-function $S \to G$ where $G$ is a group extends uniquely to a group homomorphism $F(S) \to G$.

That is, deciding where the generators should go in $G$ carries with it all of the information needed to define a homomorphism $F(S) \to G$, and it is uniquely determined in this way. To see this, simply note that once $f(a), f(b)$ are determined, so are $f(a^{-1}) = f(a)^{-1}, f(ab) = f(a)f(b), \dots$
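Concretely, the extension just evaluates a word letter by letter. A Python sketch (names are our own; we take $G = \mathbb{Z}$ under addition for concreteness, with words written as lists of (generator, sign) pairs):

```python
def free_group_lift(f, mul, inv, identity):
    """Extend a set map f : S -> G to a homomorphism F(S) -> G,
    where a word in F(S) is a list of (generator, sign) pairs."""
    def lifted(word):
        result = identity
        for (gen, sign) in word:
            image = f(gen) if sign == 1 else inv(f(gen))
            result = mul(result, image)
        return result
    return lifted

# G = Z under addition, with f(a) = 2 and f(b) = 3
lift = free_group_lift({'a': 2, 'b': 3}.get, lambda x, y: x + y, lambda x: -x, 0)
value = lift([('a', 1), ('b', 1), ('a', -1), ('b', 1)])  # 2 + 3 - 2 + 3
```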

Theorem (Universal Property of the Free Product): Let $G_{\alpha}$ be groups, and let $f_{\alpha}: G_{\alpha} \to H$ be group homomorphisms. Then there is a unique group homomorphism $\ast_{\alpha} G_{\alpha} \to H$ from the free product of the $G_{\alpha}$ to $H$ whose restriction to each $G_{\alpha}$ is $f_{\alpha}$.

The idea is the same as for the first universal property: the free product is the most general group containing the $G_{\alpha}$ as subgroups, and so any group homomorphism from the product to another group is completely determined by what happens to each of the multiplicand groups.