# Properties of Morphisms

This post is mainly mathematical. We left it out of our introduction to categories for brevity, but we should lay these definitions down and some examples before continuing on to universal properties and doing more computation. The reader should feel free to skip this post and return to it later when the words “isomorphism,” “monomorphism,” and “epimorphism” come up again. Perhaps the most important part of this post is the description of an isomorphism.

## Isomorphisms, Monomorphisms, and Epimorphisms

Perhaps the most important paradigm shift in category theory is the focus on morphisms as the main object of study. In particular, category theory stipulates that the only knowledge one can gain about an object is in how it relates to other objects. Indeed, this is true in nearly all fields of mathematics: in groups we consider all isomorphic groups to be the same. In topology, homeomorphic spaces are not distinguished. The list goes on. The only way to determine if two objects are “the same” is by finding a morphism with special properties. Barry Mazur gives a more colorful explanation by considering the meaning of the number 5 in his essay, “When is one thing equal to some other thing?” The point is that categories, more than existing to be a “foundation” for all mathematics as a formal system (though people are working to make such a formal system), exist primarily to “capture the essence” of mathematical discourse, as Mazur puts it. A category defines objects and morphisms, but literally all of the structure of a category lies in its morphisms. And so we study them.

The strongest kind of morphism we can consider is an isomorphism. An isomorphism is the way we say two objects in a category are “the same.” We don’t usually care whether two objects are equal, but rather whether some construction is unique up to isomorphism (more on that when we talk of universal properties). The choices made in defining morphisms in a particular category allow us to strengthen or weaken this idea of “sameness.”

Definition: A morphism $f : A \to B$ in a category $\mathbf{C}$ is an isomorphism if there exists a morphism $g: B \to A$ so that both ways to compose $f$ and $g$ give the identity morphisms on the respective objects. That is,

$gf = 1_A$ and $fg = 1_B$.
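One immediate consequence worth recording (a standard argument we add here for completeness): inverses are unique. If $g$ and $g'$ are both two-sided inverses of $f$, then

```latex
g' = g' 1_B = g'(fg) = (g'f)g = 1_A g = g,
```

so we may unambiguously write $f^{-1}$ for the inverse of an isomorphism $f$.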

The most basic (usually obvious, but sometimes very deep) question in approaching a new category is to ask what the isomorphisms are. Let us do this now.

In $\mathbf{Set}$ the morphisms are set-functions, and it is not hard to see that any two sets of equal cardinality have a bijection between them. As all bijections have two-sided inverses, two objects in $\mathbf{Set}$ are isomorphic if and only if they have the same cardinality. For example, all sets of size 10 are isomorphic. This is quite a weak notion of “sameness.” In contrast, there is a wealth of examples of groups of equal cardinality which are not isomorphic (the smallest examples have cardinality 4: $\mathbb{Z}/4\mathbb{Z}$ and $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$). On the other end of the spectrum, a poset category $\mathbf{Pos}_X$ has no isomorphisms except for the identity morphisms. The poset categories still have useful structure, but (as with objects within a category) the interesting structure is in how a poset category relates to other categories. This will become clearer later when we look at functors, but we just want to dissuade the reader from ruling out poset categories as uninteresting due to a lack of interesting morphisms.

Consider the category $\mathbf{C}$ of ML types with ML functions as morphisms. An isomorphism in this category would be a function which has a two-sided inverse. Can the reader think of such a function?
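Two candidate answers (our illustrations, with names we invent): boolean negation and tuple swapping are each their own two-sided inverse.

```sml
(* not o not is the identity on bool, so negation is an isomorphism *)
fun negate(b) = not b

(* swap o swap is the identity on 'a * 'b, another isomorphism *)
fun swap(x, y) = (y, x)
```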

Let us now move on to other, weaker properties of morphisms.

Definition: A morphism $f: A \to B$ is a monomorphism if for every object $C$ and every pair of morphisms $g,h: C \to A$ the condition $fg = fh$ implies $g = h$.

The reader should parse this notation carefully, but truly think of it in terms of the following commutative diagram:

$\displaystyle C \overset{g}{\underset{h}{\rightrightarrows}} A \xrightarrow{\ f\ } B$

Whenever this diagram commutes and $f$ is a monomorphism, then we conclude (by definition) that $g=h$. Remember that a diagram commuting just means that all ways to compose morphisms (and arrive at morphisms with matching sources and targets) result in an identical morphism. In this diagram, commuting is the equivalent of claiming that $fg = fh$, since there are only two nontrivial ways to compose.

The idea is that monomorphisms allow one to “cancel” $f$ from the left side of a composition (although, confusingly, this means the cancelled part is on the right hand side of the diagram).

The corresponding property for cancelling on the right is defined identically.

Definition: A morphism $f: A \to B$ is an epimorphism if for every object $C$ and every pair of morphisms $g,h: B \to C$ the condition $gf = hf$ implies $g = h$.

Again, the relevant diagram:

$\displaystyle A \xrightarrow{\ f\ } B \overset{g}{\underset{h}{\rightrightarrows}} C$

Whenever $f$ is an epimorphism and this diagram commutes, we can conclude $g=h$.

Now one of the simplest things one can do when considering a category is to identify the monomorphisms and epimorphisms. Let’s do this for a few important examples.

## Monos and Epis in Various Categories

In the category $\mathbf{Set}$, monomorphisms and epimorphisms correspond to injective and surjective functions, respectively. Let’s see why this is the case for monomorphisms. Recall that an injective function $f$ has the property that $f(x) = f(y)$ implies $x=y$. With this property we can show $f$ is a monomorphism because if $f(g(x)) = f(h(x))$ then the injective property gives us immediately that $g(x) = h(x)$. Conversely, if $f$ is a monomorphism and $f(x) = f(y)$, we will construct a set $C$ and two convenient functions $g, h: C \to A$ to help us show that $x=y$. In particular, pick $C$ to be the one point set $\left \{ c \right \}$, and define $g(c) = x, h(c) = y$. Then as functions $fg = fh$. Indeed, there is only one value in the domain, so saying this amounts to saying $f(x) = fg(c) = fh(c) = f(y)$, which we know is true by assumption. By the monomorphism property $g = h$, so $x = g(c) = h(c) = y$.

Now consider epimorphisms. It is clear that a surjective map is an epimorphism, but the converse is a bit trickier. We prove it by contrapositive: if $f$ is not surjective, then it is not an epimorphism. Instead of the one-point set, our $C$ must be one element bigger than $B$. In particular, define $g, h : B \to B'$, where $B'$ is $B$ with one additional point $x$ added (which we declare not to already be in $B$). Since $f$ is not surjective, there is some $b_0 \in B$ which is missed by $f$; define $g(b_0) = x$ and $g(b) = b$ otherwise, and let $h$ be the inclusion of $B$ into $B'$. Then $gf = hf$ (the point $b_0$ where $g$ and $h$ disagree is never hit by $f$), but $g \neq h$. So epimorphisms are exactly the surjective set-maps.
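For finite sets these two properties can be checked by brute force. Here is a small Standard ML sketch (the function names are ours, not part of any library) that tests injectivity and surjectivity of a function presented with explicit domain and codomain lists:

```sml
(* injective: distinct domain elements have distinct images *)
fun injective f domain =
   let
      fun distinct [] = true
        | distinct (x :: xs) =
             not (List.exists (fn y => y = x) xs) andalso distinct xs
   in
      distinct (List.map f domain)
   end

(* surjective: every codomain element is the image of something *)
fun surjective f domain codomain =
   List.all (fn b => List.exists (fn a => f a = b) domain) codomain
```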

There is one additional fact that makes the category of sets well-behaved: a morphism in $\mathbf{Set}$ is an isomorphism if and only if it is both a monomorphism and an epimorphism. Indeed, isomorphisms are set-functions with two-sided inverses (bijections) and we know from classical set theory that bijections are exactly the simultaneous injections and surjections. A warning to the reader: not all categories are like this! We will see in a moment an example of a nontrivial category in which isomorphisms are not the same thing as simultaneous monomorphisms and epimorphisms.

The category $\mathbf{Grp}$ is very similar to $\mathbf{Set}$ in regards to monomorphisms and epimorphisms. The former are simply injective group homomorphisms, while the latter are surjective group homomorphisms. And again, a morphism is an isomorphism if and only if it is both a monomorphism and an epimorphism. We invite the reader to peruse the details of the argument above and adapt it to the case of groups. In both cases, the hard decision is in choosing $C$ when necessary. For monomorphisms, the “one-point group” does not work because we are constrained to send the identity to the identity in any group homomorphism. The fortuitous reader will avert their eyes and think about which group would work, and otherwise we suggest trying $C = \mathbb{Z}$. After completing the proof, the reader will see that the trick is to find a $C$ for which only one “choice” can be made. For epimorphisms, the required $C$ is a bit more complex, but we invite the reader to attempt a proof to see the difficulties involved.
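To spell out the monomorphism direction with the suggested $C = \mathbb{Z}$ (our elaboration of the hint above): suppose $f : A \to B$ is a monomorphism of groups and $f(x) = f(y)$. A homomorphism out of $\mathbb{Z}$ is determined by where it sends $1$, so we may define

```latex
g, h : \mathbb{Z} \to A, \qquad g(1) = x, \qquad h(1) = y.
```

Then $fg$ and $fh$ agree on the generator $1$, so $fg = fh$; the monomorphism property gives $g = h$, and therefore $x = g(1) = h(1) = y$.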

Why do these categories have the same properties but they are acquired in such different ways? It turns out that although these proofs seem different in execution, they are the same in nature, and they follow from properties of the category as a whole. In particular, the “one-point object” (a singleton set for $\mathbf{Set}$ and $\mathbb{Z}$ for $\mathbf{Grp}$) is more categorically defined as the “free object on one generator.” We will discuss this more when we get to universal properties, but a “free object on $n$ generators” is roughly speaking an object $A$ for which any morphism with source $A$ must make exactly $n$ “choices” in its definition. With sets that means $n$ choices for the images of elements, for groups that means $n$ consistent choices for images of group elements. On the epimorphism side, the construction is a sort of “disjoint union object” which is correctly termed a “coproduct.” But momentarily putting aside all of this new and confusing terminology, let us see some more examples of morphisms in various categories.

Our recent primer on rings was well-timed, because the category $\mathbf{Ring}$ of rings (with identity) is an example of a not-so-well-behaved category. As with sets and groups, we do have that monomorphisms are equivalent to injective ring homomorphisms, but the argument is trickier than it was for groups. It is not obvious which “convenient” object $C$ to choose here, since maps $\mathbb{Z} \to R$ yield no choices: 1 maps to 1, 0 maps to 0, and the properties of a ring homomorphism determine everything else (in fact, the abelian group structure and the fact that units are preserved is enough). This makes $\mathbb{Z}$ into what’s called an “initial object” in $\mathbf{Ring}$; more on that when we study universal properties. In fact, we invite the reader to return to this post after we talk about the universal property of polynomial rings. It turns out that $\mathbb{Z}[x]$ is a suitable choice for $C$, and the “choice” made is where to send the indeterminate $x$.

On the other hand, things go awry when trying to apply analogous arguments to epimorphisms. While it is true that every surjective ring homomorphism is an epimorphism (it is already an epimorphism in $\mathbf{Set}$, and the argument there applies), there are ring epimorphisms which are not surjections! Consider the inclusion map of rings $i : \mathbb{Z} \to \mathbb{Q}$. The map $i$ is not surjective, but it is an epimorphism. Suppose $g, h : \mathbb{Q} \to R$ are two parallel ring morphisms, and they agree on $\mathbb{Z}$ (they will always do so, since there is only one ring homomorphism $\mathbb{Z} \to R$). Then $g,h$ must also agree on $\mathbb{Q}$, because if $p,q \in \mathbb{Z}$ with $q \neq 0$, then

$\displaystyle g(p/q) = g(p)g(q^{-1}) = g(p)g(q)^{-1} = h(p)h(q)^{-1} = h(p/q)$

Because the map above is also an injection, the category of rings is a very naturally occurring example of a category which has morphisms that are both epimorphisms and monomorphisms, but not isomorphisms.

There are instances in which monomorphisms and epimorphisms are trivial. Take, for instance any poset category. There is at most one morphism between any two objects, and so the conditions for an epimorphism and monomorphism vacuously hold. This is an extreme example of a time when simultaneous monomorphisms and epimorphisms are not the same thing as isomorphisms! The only isomorphisms in a poset category are the identity morphisms.

The inspection of epimorphisms and monomorphisms is an important step in the analysis of a category. It gives one insight into how “well-behaved” the category is, and picks out the objects which are special either for their useful properties or confounding trickery.

This reminds us of a quote of Alexander Grothendieck, one of the immortal gods of mathematics who popularized the use of categories in mainstream mathematics.

> A good category containing some bad objects is preferable to a bad category containing only good objects.

I suppose the thesis here is that having only “good” objects yields less interesting and useful structure, and that one should be willing to put up with the bad objects in order to have access to that structure.

Next time we’ll jump into a discussion of universal properties, and start constructing some programs to prove that various objects satisfy them.

Until then!

# Categories as Types

In this post we’ll get a quick look at two ways to define a category as a type in ML. The first way will be completely trivial: we’ll just write it as a tuple of functions. The second will involve the terribly-named “functor” expression in ML, which allows one to give a bit more structure on data types.

The reader unfamiliar with the ML programming language should consult our earlier primer.

## What Do We Want?

The first question to ask is “what do we want out of a category?” Recall from last time that a category is a class of objects, where each pair of objects is endowed with a set of morphisms. The morphisms were subject to a handful of conditions, which we paraphrase below:

• Composition has to make sense when it’s defined.
• Composition is associative.
• Every object has an identity morphism.

What is the computational heart of such a beast? Obviously, we can’t explicitly represent the class of objects itself. Computers are finite (or at least countable) beings, after all. And in general we can’t even represent the set of all morphisms explicitly. Think of the category $\mathbf{Set}$: if $A, B$ are infinite, then there are infinitely many morphisms (set-functions) $A \to B$.

A subtler and more important point is that everything in our representation of a category needs to be a type in ML (for indeed, how can a program run without understanding the types involved?). This nudges us in an interesting direction: if an object is a type, and a morphism is a type, then we might reasonably expect a composition function:

```sml
compose: 'arrow * 'arrow -> 'arrow
```

We might allow this function to raise an exception if the two morphisms are uncomposable. But then, of course, we need a way to determine if morphisms are composable, and this requires us to know what the source and target functions are.

```sml
source: 'arrow -> 'object
target: 'arrow -> 'object
```

We can’t readily enforce composition to be associative at this stage (or any other), because morphisms don’t have any other manipulable properties. Nor can we reasonably expect to compare morphisms for equality, as we aren’t able to do this even in $\mathbf{Set}$.

But we can enforce the final axiom: that every object has an identity morphism. This would come in the form of a function which accepts as input an object and produces a morphism:

```sml
identity: 'object -> 'arrow
```

But again, we can’t necessarily enforce that identity behaves as it’s supposed to.

Even so, this should be enough to define a category. That is, in ML, we have the datatype

```sml
datatype ('object, 'arrow)Category =
   category of ('arrow -> 'object) *
               ('arrow -> 'object) *
               ('object -> 'arrow) *
               ('arrow * 'arrow -> 'arrow)
```

where we understand the first two functions to be source and target, and the third and fourth to be identity and compose, respectively.

## Categories as Values

In order to see this type in action, we defined (and included in the source code archive for this post) a type for homogeneous sets. Since ML doesn’t support heterogeneous lists natively, we’ll have to settle for a particular subcategory of $\mathbf{Set}$. We’ll call it $\mathbf{Set}_a$, the category whose objects are finite sets whose elements have type $a$, and whose morphisms are again set-functions. Hence, we can define an object in this category as the ML type

```sml
'a Set
```

and an arrow as a datatype

```sml
datatype 'a SetMap = setMap of ('a Set) * ('a -> 'a) * ('a Set)
```

where the first object is the source and the second is the target (mirroring the notation $A \to B$).

Now we must define the functions required for the data constructor of a Category. Source and target are trivial.

```sml
fun setSource(setMap(a, f, b)) = a
fun setTarget(setMap(a, f, b)) = b
```

And identity is equally natural.

```sml
fun setIdentity(aSet) = setMap(aSet, (fn x => x), aSet)
```

Finally, compose requires us to check for set equality of the relevant source and target, and compose the bundled set-maps. A few notational comments: the “o” operator does function composition in ML, and we defined “setEq” and the “uncomposable” exception elsewhere.

```sml
fun setCompose(setMap(b', g, c), setMap(a, f, b)) =
   if setEq(b, b') then
      setMap(a, (g o f), c)
   else
      raise uncomposable
```

Note that the second map in the composition order is the first argument to the function, as is standard in mathematical notation.

And now this category of finite sets is just the instantiation

```sml
val FiniteSets = category(setSource, setTarget, setIdentity, setCompose)
```
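As a quick sanity check (our sketch; `src`, `tgt`, `idn`, and `cmp` are names we invent by pattern matching on the single data constructor), composing any arrow with the identity on its source should leave its endpoints alone:

```sml
val category(src, tgt, idn, cmp) = FiniteSets

(* the endpoints survive composition with the identity arrow *)
fun checkIdentity(m) =
   let
      val m' = cmp(m, idn(src(m)))
   in
      setEq(src(m'), src(m)) andalso setEq(tgt(m'), tgt(m))
   end
```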

Oddly enough, this definition does not depend on the choice of a type for our sets! None of the functions we defined care about what the elements of sets are, so long as they are comparable for equality (in ML, such a type is denoted with two leading apostrophes: ''a).

This example is not very interesting, but we can build on it in two interesting ways. The first is to define the poset category of sets. Recall that the poset category of a set $X$ has as objects the subsets of $X$, and there is a unique morphism $A \to B$ if and only if $A \subset B$. There are two ways to model this in ML. The first is to assume the person constructing the morphisms is giving correct data (that is, only those objects which satisfy the subset relation). This would amount to a morphism representing “nothing” except the pair $(A,B)$. On the other hand, we may also require a function which verifies that the relation holds. That is, we could describe a morphism in this (or any) poset category as a datatype

```sml
datatype 'a PosetArrow = ltArrow of 'a * ('a -> 'a -> bool) * 'a
```

We still do need to assume the user is not creating multiple arrows with different functions (or we must tacitly call them all equal regardless of the function). We can do a check that the function actually returns true by providing a function for creating arrows in this particular poset category of a set with subsets.

```sml
exception invalidArrow

fun setPosetArrow(x)(y) =
   if subset(x)(y) then ltArrow(x, subset, y)
   else raise invalidArrow
```

Then the functions defining a poset category are very similar to those of a set. The only real difference is in the identity and composition functions, and it is minor at that.

```sml
fun posetSource(ltArrow(x, f, y)) = x
fun posetTarget(ltArrow(x, f, y)) = y
fun posetIdentity(x) = ltArrow(x, (fn z => fn w => true), x)

fun posetCompose(ltArrow(b', g, c), ltArrow(a, f, b)) =
   if b = b' then
      ltArrow(a, (fn x => fn y => f(x)(b) andalso g(b)(y)), c)
   else
      raise uncomposable
```

We know that a set is always a subset of itself, so we can provide the constant true function in posetIdentity. In posetCompose, we assume things are transitive and provide the logical conjunction of the two verification functions for the pieces. Then the category as a value is

```sml
val Posets = category(posetSource, posetTarget, posetIdentity, posetCompose)
```

One will notice again that the homogeneous type of our sets is irrelevant. The poset category is parametric. To be completely clear, the type of the Poset category defined above is

```sml
('a Set, ('a Set)PosetArrow)Category
```

and so we can define a shortcut for this type.

```sml
type 'a PosetCategory = ('a Set, ('a Set)PosetArrow)Category
```

The only way we could make this more general is to pass the constructor for creating a particular kind of posetArrow (in our case, it just means fixing the choice of verification function, subset, for the more generic type constructor). We leave this as an exercise to the reader.
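One possible shape for that generalization (our sketch, not from the post’s source code): parameterize the arrow factory by the verification relation itself.

```sml
(* build a validated arrow constructor from any verification relation *)
fun makePosetArrow(rel)(x)(y) =
   if rel(x)(y) then ltArrow(x, rel, y)
   else raise invalidArrow

(* the subset-based constructor is then just a special case *)
val setPosetArrow' = makePosetArrow(subset)
```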

Our last example will return again to our abstract “diagram category.” Recall that if we fix an object $A$ in a category $\mathbf{C}$, the category $\mathbf{C}_A$ has as objects the morphisms with fixed source $A$ (i.e., arrows $f : A \to B$ for varying $B$), and an arrow from $f : A \to B$ to $g : A \to C$ is a morphism $h : B \to C$ making the triangle commute, that is, $hf = g$.

Let’s fix $A$ to be a set, and define a function to instantiate objects in this category:

```sml
fun makeArrow(fixedSet)(f)(target) = setMap(fixedSet, f, target)
```

That is, if I have a fixed set $A$, I can call the function like so

```sml
val makeObject = makeArrow(fixedSet)
```

And use this function to create all of my objects. A morphism is then just a pair of set-maps with a connecting function. Note how similar this snippet of code looks structurally to the other morphism datatypes we’ve defined. This should reinforce the point that morphisms really aren’t any different in this abstract category!

```sml
datatype 'a TriangleDiagram =
   triangle of ('a SetMap) * ('a -> 'a) * ('a SetMap)
```

And the corresponding functions and category instantiation are

```sml
fun triangleSource(triangle(x, f, y)) = x
fun triangleTarget(triangle(x, f, y)) = y
fun triangleIdentity(x) = triangle(x, (fn z => z), x)

fun triangleCompose(triangle(b', g, c), triangle(a, f, b)) =
   if setEq(setTarget(b'), setTarget(b)) then
      triangle(a, (g o f), c)
   else
      raise uncomposable
```

```sml
val SetDiagrams =
   category(triangleSource, triangleTarget, triangleIdentity, triangleCompose)
```

Notice how we cannot compare the objects in this category for equality! Indeed, two functions cannot be compared in ML, and the best we can do is compare the targets of these functions to ensure that the connecting morphisms can be composed. A malicious programmer might then be able to compose uncomposable morphisms, a devious and startling proposition.

Notice further how we can’t avoid using set-specific functions with our current definition of a category. What we could do instead is require a category to have a function which checks for composability. Then compose could be written in a more homogeneous (but not quite completely parametric) fashion. The reader is welcome to try this on their own, but we’ll have a more organized way to do this later, when we package up a category into an ML structure.

## Categories as Signatures

Let’s look at a more advanced feature of ML and see how we can apply it to representing categories in code. The idea is familiar to most Java and C++ programmers, and that is of an interface: the programmer specifies an abstract type which has specific functions attached to it, but does not implement those functions. Then some (perhaps different) programmer implements the interface by defining those functions for a concrete type.

For the reader who isn’t so interested in these features of ML, feel free to skip this section. We won’t be introducing any new examples of categories, just rephrasing what we’ve done above in a different way. It is also highly debatable whether this new way is actually any better (it’s certainly longer).

The corresponding concept in ML is called a signature (interface) and a structure (concrete instance). The only difference is that a structure is removed one step further from ML’s type system. This is in a sense necessary: a structure should be able to implement the requirements for many different signatures, and a signature should be able to have many different structures implement it. So there is no strict hierarchy of types to lean on (it’s just a many-to-many relationship).

On the other hand, not having any way to think of a structure as a type at all would be completely useless. The whole point is to be able to write functions which manipulate a structure (concrete instance) by abstractly accessing the signature’s (interface’s) functions regardless of the particular implementation details. And so ML adds in another layer of indirection: the functor (don’t confuse this with categorical functors, which we haven’t talked about yet). A functor in ML is a procedure which accepts as input a structure and produces as output another structure.

Let’s see this in a simple example. We can define a structure for “things that have magnitudes,” which as a signature would be

```sml
signature MAG_OBJ =
sig
   type object
   val mag : object -> int
end
```

This introduced a few new language forms: the “signature” keyword is like “fun” or “val”: it just declares the purpose of the statement as a whole. The “sig … end” expression is what actually creates the signature, and the contents are a list of type definitions, datatype definitions, and value/function type definitions. The reader should think of this as a named “type” for structures, analogous to a C struct.

To implement a signature in ML is to “ascribe” it. That is, we can create a structure that defines the functions laid out in a signature (and perhaps more). Here is an example of a structure for a mag object of integers:

```sml
structure IntMag : MAG_OBJ =
struct
   type object = int
   fun mag(x) = if x >= 0 then x else ~x
end
```

The colon is a type ascription, and at the compiler-level it goes through the types and functions defined in this structure and attempts to match them with the required ones from the MAG_OBJ signature. If it fails, it raises a compiler error. A more detailed description of this process can be found here. We can then use the structure as follows:

```sml
local open IntMag in
   val y = ~7
   val z = mag(y)
end
```

The open keyword binds all of the functions and types to the current environment (and so it’s naturally a bad idea to open structures in the global environment). The “local” construction is made to work specifically with open.

In any case, the important part about signatures and structures is that one can write functions which accept as input structures with a given signature and produce structures with other signatures. The name for this is a “functor,” but we will usually call it an “ML functor” to clarify that it is not a categorical functor (if you don’t know what a categorical functor is, don’t worry yet).

For example, here is a signature for a “distance object”

```sml
signature DIST_OBJ =
sig
   type object
   val dist : object * object -> int
end
```

And a functor which accepts as input a MAG_OBJ and creates a DIST_OBJ in a contrived and silly way.

```sml
functor MakeDistObj(structure MAG : MAG_OBJ) =
struct
   type object = MAG.object
   val dist = fn (x, y) =>
      let
         val v = MAG.mag(x) - MAG.mag(y)
      in
         if v > 0 then v else ~v
      end
end
```
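Applying the functor looks like this (our usage sketch; note the somewhat unusual `structure MAG = IntMag` argument syntax):

```sml
structure IntDist = MakeDistObj(structure MAG = IntMag)

(* |mag(3) - mag(~10)| = |3 - 10| = 7 *)
val d = IntDist.dist(3, ~10)
```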

Here the argument to the functor is a structure called MAG, and we need to specify the signature ascribed to it; in this case, it’s MAG_OBJ. Only things within the specified signature can be used within the struct expression that follows (if one needs a functor to accept as input something which has multiple signature ascriptions, one can create a new signature that “inherits” from both). Then the right hand side of the functor is just a new structure definition, where the innards are allowed to refer to the properties of the given MAG_OBJ.

We could in turn define a functor which accepts a DIST_OBJ as input and produces as output another structure, and compose the two functors. This is starting to sound a bit category-theoretical, but we want to remind the reader that a “functor” in category theory is quite different from a “functor” in ML (in particular, we have yet to mention the former, and the latter is just a mechanism to parameterize and modularize code).

In any case, we can do the same thing with a category. A general category will be represented by a signature, and a particular instance of a category is a structure with the implemented pieces.

```sml
signature CATEGORY =
sig
   type object
   type arrow
   exception uncomposable

   val source : arrow -> object
   val target : arrow -> object
   val identity : object -> arrow
   val composable : arrow * arrow -> bool
   val compose : arrow * arrow -> arrow
end
```
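The payoff is that code written against this signature works for any implementation. For instance, here is an ML functor (a sketch of ours, not from the post’s source code) providing a generic helper on top of an arbitrary category:

```sml
functor CategoryUtil(structure C : CATEGORY) =
struct
   (* compose three arrows h o g o f, checking composability as we go *)
   fun composeThree(h, g, f) =
      if C.composable(g, f) andalso C.composable(h, C.compose(g, f)) then
         C.compose(h, C.compose(g, f))
      else
         raise C.uncomposable
end
```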

As it turns out, the SML/NJ library has an implementation of sets already, and it uses signatures. So instead of using our rinky-dink implementation of sets from earlier in this post, we can implement an instance of the ORD_KEY signature, and then call one of the many set-creating functors available to us. We’ll use an implementation based on red-black trees for now.

First the key:

```sml
structure OrderedInt : ORD_KEY =
struct
   type ord_key = int
   val compare = Int.compare
end
```

then the functor:

```sml
functor MakeSetCategory(structure ELT : ORD_KEY) =
struct
   structure Set : ORD_SET = RedBlackSetFn(ELT)
   type elt = ELT.ord_key
   type object = Set.set
   datatype arrow = function of object * (elt -> elt) * object

   exception uncomposable

   fun source(function(a, _, _)) = a
   fun target(function(_, _, b)) = b
   fun identity(a) = function(a, fn x => x, a)

   (* two arrows compose when the target of the second argument
      matches the source of the first, as in the CATEGORY signature *)
   fun composable(function(b2, _, _), function(_, _, b1)) =
      Set.equal(b1, b2)

   fun compose(arr2 as function(b2, g, c), arr1 as function(a, f, b1)) =
      if composable(arr2, arr1) then
         function(a, (g o f), c)
      else
         raise uncomposable

   fun apply(function(a, f, b), x) = f(x)
end
```

The first few lines are the most confusing; the rest is exactly what we’ve seen from our first go-round of defining the category of sets. In particular, we call the RedBlackSetFn functor given the ELT structure, and it produces an implementation of sets which we name Set (and superfluously ascribe the type ORD_SET for clarity).

Then we define the “elt” type which is used to describe an arrow, the object type as the main type of an ORD_SET (see in this documentation that this is the only new type available in that signature), and the arrow type as we did earlier.

We can then define the category of sets with a given type as follows

```sml
structure IntSets = MakeSetCategory(structure ELT = OrderedInt)
```
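A short usage sketch (ours; `empty` and `addList` come from the ORD_SET signature):

```sml
local open IntSets in
   val a = Set.addList(Set.empty, [1, 2, 3])
   val b = Set.addList(Set.empty, [2, 3, 4])

   (* the arrow (fn x => x + 1) : a -> b, composed with the identity on a *)
   val f = function(a, fn x => x + 1, b)
   val g = compose(f, identity(a))
end
```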

The main drawback of this approach is that there’s so much upkeep! Every time we want to make a new kind of category, we need to define a new functor, and every time we want to make a new kind of category of sets, we need to implement a new kind of ORD_KEY. All of this signature and structure business can be quite confusing; it often seems like the compiler doesn’t want to cooperate when it should.

## Nothing Too Shocking So Far

To be honest, we haven’t really done anything very interesting so far. We’ve seen a single definition (of a category), looked at a number of examples to gain intuition, and given a flavor of how these things will turn into code.

Before we finish, let’s review the pros and cons of a computational representation of category-theoretical concepts.

Pros:

• We can prove results by explicit construction (more on this next time).
• Different-looking categories are structurally similar in code.
• We can faithfully represent the idea of using (objects and morphisms of) categories as parameters to construct other categories.
• Writing things in code gives a fuller understanding of how they work.

Cons:

• All computations are finite, requiring us to think much harder than a mathematician about the definition of an object.
• The type system is too weak. We can’t enforce the axioms of a category directly, or even ensure the functions act sensibly at all. As such, the programmer becomes responsible for any failures that occur from bad definitions.
• The type system is too strong. When we work concretely, we are forced to work within homogeneous categories (e.g. of 'a Set).
• We cannot ensure the ability to check equality on objects. This showed up in our example of diagram categories.
• The functions used in defining morphisms, e.g. in the category of sets, are somewhat removed from real set-functions. For example, nothing about category theory requires functions to be computable. Moreover, nothing about our implementation requires the functions to produce any outputs at all (they may loop infinitely!), and it is not possible to ensure that a given function terminates on a given input set (the halting problem).
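To make that last point concrete (our example): the following definition typechecks with a perfectly general function type, yet it never produces a value, and nothing in our category code can detect this.

```sml
(* diverge : 'a -> 'b typechecks, so nothing stops us from bundling
   it into an arrow such as setMap(someSet, diverge, someOtherSet) *)
fun diverge(x) = diverge(x)
```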

This list looks quite imbalanced, but one might argue that the cons are relatively minor compared to the pros. In particular (and this is what this author hopes to be the case), being able to explicitly construct proofs to theorems in category theory will give one a much deeper understanding both of category theory and of programming. This is the ultimate prize.

Next time we will begin our investigation of universal properties (where the real fun starts), and we’ll use the programmatic constructions we laid out in this post to give constructive proofs that various objects are universal.

Until then!

# Introducing Categories

For a list of all the posts on Category Theory, see the Main Content page.

It is time for us to formally define what a category is, and to see a wealth of examples. In our next post we’ll see how the definitions laid out here translate to programming constructs. As we’ve said in our soft motivational post on categories, the point of category theory is to organize mathematical structures across various disciplines into a unified language. As such, most of this post will be devoted to laying down the definition of a category and the associated notation. We will be as clear as possible to avoid a notational barrier for newcomers, so if anything is unclear we will clarify it in the comments.

## Definition of a Category

Let’s recall some examples of categories we’ve seen on this blog that serve to motivate the abstract definition of a category. We expect the reader to be comfortable with sets, and to absorb or glaze over the other examples as comfort dictates. The reader who is uncomfortable with sets and functions on sets should stop here. Instead, visit our primer on proof techniques, which doubles as a primer on set theory (or our terser set theory primer from two years ago).

The go-to example of a category is that of sets: sets together with functions between sets form a category. We will state exactly what this means momentarily, but first some examples of categories of “sets with structure” and “structure-preserving maps.”

Groups together with group homomorphisms form a category, as do rings and fields with their respective kinds of homomorphisms. Topological spaces together with continuous functions form a category, and metric spaces with distance-nonincreasing maps (“short” functions) form a sub-category. Vector spaces and linear maps, smooth manifolds and smooth maps, and algebraic varieties with rational maps all form categories. We could continue but the essential idea is clear: a category is some way to specify a collection of objects and “structure-preserving” mappings between those objects. There are three main features common to all of these examples:

1. Composition of structure-preserving maps produces structure-preserving maps.
2. Composition is associative.
3. There is an identity map for each object.

The main abstraction is that forgetting what the objects and mappings are and only considering how they behave allows one to deviate from the examples above in useful ways. For instance, once we see the formal definition below, it will become clear that mathematical (say, first-order logical) statements, together with proofs of implication, form a category. Even though a “proof” isn’t strictly a structure-preserving map, it still fits with the roughly stated axioms above. One can compose proofs by laying the implications out one after another, this composition is trivially associative, and there is an identity proof. Thus, proofs provide a way to “transform” true statements into true statements, preserving the structure of boolean-valued truth.

Another example is the category of ML types and computable functions in ML. Computable functions can be quite wild in their behavior, but they are nevertheless composable, associative, and equipped with an identity.

And so the definition of a category arises naturally from the desire to treat all of these examples as special cases of one concept.

Before we state the definition we should note that, for abstruse technical reasons, we cannot phrase the definition of a category as a “set” of objects and mappings between objects. This is already impossible for the category of sets, because there is no “set of all sets.” Somehow (as illustrated by Russell’s paradox) there are “too many” sets to do this. Likewise, there is no “set” of all groups, topological spaces, vector spaces, etc.

This apparent difficulty requires some sidestepping. One possibility is to define a universe of non-paradoxical sets, and define categories by way of a universe. Another is to define a class, which bypasses set theory in another way. We won’t deliberate on the differences between these methods of avoiding Russell’s paradox. The reader need only know that it can be done. For our official definition, we will use the terminology of classes.

Definition: A category $\textbf{C}$ consists of the following data:

• A class of objects, denoted $\textup{Obj}(\mathbf{C})$.
• For each pair of objects $A,B$, a set $\textup{Hom}(A,B)$ of morphisms. Sometimes these are called hom-sets.

The morphisms satisfy the following conditions:

• For all objects $A,B,C$ and morphisms $f \in \textup{Hom}(A,B), g \in \textup{Hom}(B,C)$ there is a composition operation $\circ$ and $g \circ f \in \textup{Hom}(A,C)$ is a morphism. We will henceforth drop the $\circ$ and write $gf$.
• Composition is associative.
• For all objects $A$, there is an identity morphism $1_A \in \textup{Hom}(A,A)$. For all $A,B$, and $f \in \textup{Hom}(A,B)$, we have $f 1_A = f$ and $1_B f = f$.
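As a rough preview of how this definition will turn into code (the next post uses ML; the following is purely an illustrative Python sketch, not the series’ actual implementation), a category is, at minimum, a composition rule and an identity assignment, with the axioms left as obligations on the programmer:

```python
from collections import namedtuple

# The data of a category, naively: nothing in this "type" enforces
# associativity or the identity laws.  (Illustrative sketch only.)
Category = namedtuple("Category", ["compose", "identity"])

# An instance: finite sets, with a function A -> B stored as a dict
# mapping each element of A to its image in B.
FinSet = Category(
    compose=lambda g, f: {x: g[f[x]] for x in f},  # g after f
    identity=lambda A: {x: x for x in A},
)

f = {1: 'a', 2: 'b'}            # f: {1,2} -> {'a','b'}
one_A = FinSet.identity({1, 2})  # 1_A
assert FinSet.compose(f, one_A) == f   # the identity law f 1_A = f
```

The point of the sketch is only the shape of the data: a class of objects never appears explicitly, and the hom-sets are implicit in which dicts we choose to write down.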

Some additional notation and terminology: we denote a morphism $f \in \textup{Hom}(A,B)$ in three ways. Aside from “as an element of a set,” the most general way is as a diagram,

$\displaystyle A \xrightarrow{\hspace{.5cm} f \hspace{.5cm}} B$

though we will often shorten a single morphism to the standard function notation $f: A \to B$. Given a morphism $f: A \to B$, we call $A$ the source of $f$, and $B$ the target of $f$.

We will often name our categories, and we will do so in bold. So the category of sets with set-functions is denoted $\mathbf{Set}$. When working with multiple categories, we will give subscripts on the hom-sets to avoid ambiguity, as in $\textup{Hom}_{\mathbf{Set}}(A,B)$. A morphism from an object to itself is called an endomorphism, and the set of all endomorphisms of an object $A$ is denoted $\textup{End}_{\mathbf{C}}(A)$.

Note that in the definition above we require that morphisms form a set. This is important because hom-sets will have additional algebraic structure (e.g., for certain categories $\textup{End}_{\mathbf{C}}(A)$ will form a ring).

## Examples of Categories

Let’s now formally define some of our simple examples of categories. Defining a category amounts to specifying the objects and morphisms, and verifying that the conditions in the definition hold.

Sets

Define the category $\mathbf{Set}$ whose objects are all sets, and whose morphisms are functions on sets. By now it should hopefully be clear that sets form a category, but let us go through the motions explicitly. Every set $A$ has an identity function $1_A(x) = x$, and as we already know, functions on sets compose associatively. To verify this in complete detail would amount to writing down a general function as a relation, and using the definitions from elementary set theory. We have more pressing matters, but a reader who has not seen this before should consult our set theory primer. One can also define the category of finite sets $\mathbf{FinSet}$ whose objects are finite sets and whose morphisms are again set-functions. As every object and morphism of $\mathbf{FinSet}$ is one of $\mathbf{Set}$, we call the former a subcategory of the latter.
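Concretely (in a hypothetical Python sketch, where a function on finite sets is represented as a dict; the code posts themselves use ML), the identity and associativity checks become mechanical:

```python
def compose(g, f):
    """The composite g . f of set-functions represented as dicts."""
    return {x: g[f[x]] for x in f}

def identity(A):
    """The identity function 1_A on a finite set A."""
    return {x: x for x in A}

f = {1: 'a', 2: 'b'}
g = {'a': 10, 'b': 20}
h = {10: True, 20: False}

# associativity: h . (g . f) == (h . g) . f
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
# identity laws: f . 1_A == f and 1_B . f == f
assert compose(f, identity({1, 2})) == f
assert compose(identity({'a', 'b'}), f) == f
```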

Finite categories

The most trivial possible categories are those with only finitely many objects. For instance, define the category $\mathbf{1}$ to have a single object $\ast$ and a single morphism $1_{\ast}$, which must of course be the identity. The composition $1_{\ast} 1_{\ast}$ is forced to be $1_{\ast}$, and so this is a (rather useless) category. One can also imagine a category $\mathbf{2}$ which has one non-identity morphism $A \to B$, as well as categories with any finite number of objects. Nothing interesting is going on here; we just completely specify the structure of the category object by object and morphism by morphism. Sometimes they come in handy, though.

Poset Categories

Here is an elementary example of a category that is nonetheless fundamental to modern discussions in topology and algebraic geometry. It will show up again in our work with persistent homology. Let $X$ be any set, and define the category $\mathbf{X}_\subset$ as follows. The objects of $\mathbf{X}_\subset$ are all the subsets of $X$. If $A \subset B$ are subsets of $X$, then the set $\textup{Hom}(A,B)$ is defined to be the (unique, unnamed) singleton $A \to B$. Otherwise, $\textup{Hom}(A,B)$ is the empty set. Identities exist since every set is a subset of itself. The property of being a subset is transitive, so composition of morphisms makes sense and associativity is trivial since there is at most one morphism between any two objects. If the reader doesn’t believe this, we state what composition is rigorously: define each morphism as a pair $(A,B)$ and define the composition operation as a set-function $\textup{Hom}(A,B) \times \textup{Hom}(B,C) \to \textup{Hom}(A,C)$, which sends $((A,B), (B,C)) \mapsto (A,C)$. Because this operation is so trivial, it seems more appropriate to state it with a diagram:

We say this diagram commutes if all ways to compose morphisms (travel along arrows) have equal results. That is, in the diagram above, we assert that the morphism $A \to B$ composed with the morphism $B \to C$ is exactly the one morphism $A \to C$. Usually one must prove that a diagram commutes, but in this case we are defining the composition operation so that commutativity holds. The reader can now directly verify that composition is associative:

$(a,b)((b,c)(c,d)) = (a,b)(b,d) = (a,d) = (a,c)(c,d) = ((a,b)(b,c))(c,d)$

More generally, it is not hard to see how any transitive reflexive relation on a set (including partial orders) can be used to form a category: objects are elements of the set, and morphisms are unique arrows which exist when the objects are (asymmetrically) related. The subset category above is a special case where the set in question is the power set of $X$, and the relation is $\subset$. The familiar reader should note that the most prominent example of this in higher mathematics is to have $X$ be the topology of a topological space (the set of open subsets).
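To make the subset category concrete (an illustrative Python sketch, with each morphism stored as the pair of its source and target, as in the rigorous definition above):

```python
from itertools import chain, combinations

def powerset(X):
    """All subsets of X, as frozensets: the objects of X_subset."""
    s = list(X)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def hom(A, B):
    """Hom(A, B): a singleton {(A, B)} if A is a subset of B, else empty."""
    return {(A, B)} if A <= B else set()

def compose(g, f):
    """((A,B), (B,C)) |-> (A,C), gluing endpoints."""
    assert f[1] == g[0], "source/target mismatch"
    return (f[0], g[1])

X = powerset({1, 2})                 # four objects
a, b, c = frozenset(), frozenset({1}), frozenset({1, 2})
(ab,) = hom(a, b)                    # the unique morphism a -> b
(bc,) = hom(b, c)                    # the unique morphism b -> c
assert compose(bc, ab) == (a, c)     # composition lands in hom(a, c)
assert hom(b, a) == set()            # no morphism backwards
```

Associativity is automatic here because there is at most one morphism between any two objects, exactly as argued above.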

Groups

Next, define the category $\mathbf{Grp}$ whose objects are groups and whose morphisms are group homomorphisms. Recall briefly that a group is a set $G$ endowed with a sensible (associative) binary operation denoted by multiplication, which has an identity and with respect to which every element has an inverse. For the uninitiated reader, just replace any abstract “group” by the set of all nonzero rational numbers with usual multiplication. It will suffice for this example.

A group homomorphism is a set-function $f: A \to B$ which satisfies $f(xy) = f(x)f(y)$ (here the binary operations on the left and right side of the equal sign are the operations in the two respective groups). Being set-functions, group homomorphisms are composable as functions and associatively so. Given any homomorphism $g: B \to C$, we need to show that the composite is again a homomorphism:

$\displaystyle gf(xy) = g(f(xy)) = g(f(x)f(y)) = g(f(x))g(f(y)) = gf(x) gf(y)$

Note that there are three multiplication operations floating around in this equation. Groups (as all sets) have identity maps, and the identity map is a perfectly good group homomorphism. This verifies that $\mathbf{Grp}$ is indeed a category. While we could have stated all of these equalities via commutative diagrams, the pictures are quite large and messy so we will avoid them until later.
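For a small sanity check of that computation (an illustrative Python sketch in additive notation, not from the post itself): take $f: \mathbb{Z}/6 \to \mathbb{Z}/3$ to be reduction mod 3 and $g: \mathbb{Z}/3 \to \mathbb{Z}/3$ to be doubling, and verify exhaustively that the homomorphism property survives composition.

```python
def is_hom(h, n, m):
    """Does h: Z/n -> Z/m satisfy h(x + y) = h(x) + h(y) for all x, y?"""
    return all(h((x + y) % n) == (h(x) + h(y)) % m
               for x in range(n) for y in range(n))

f = lambda x: x % 3          # reduction mod 3 (a homomorphism since 3 | 6)
g = lambda x: (2 * x) % 3    # doubling mod 3
gf = lambda x: g(f(x))       # the composite

assert is_hom(f, 6, 3)
assert is_hom(g, 3, 3)
assert is_hom(gf, 6, 3)      # the composite passes the same check
```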

A similar derivation will prove that rings form a category, as do vector spaces, topological spaces, and fields. We are unlikely to use these categories in great detail in this series, so we refrain from giving them names for now.

One special example of a category of groups is the category $\mathbf{Ab}$ of abelian groups (for which the multiplication operation is commutative). This category shows up as the prototypical example of a so-called “abelian category,” which means it has enough structure to do homology.

Graphs

In a more discrete domain, define the category $\mathbf{Graph}$ as follows. The objects in this category are triples $(V,E, \varphi)$, where $V$ is a set of vertices, $E$ is a set of edges, and $\varphi: E \to V \times V$ represents edge adjacency. We will usually suppress $\varphi$ by saying vertices $v,w$ are adjacent instead of $\varphi(e) = (v,w)$. The morphisms in this category are graph homomorphisms. That is, if $G = (V,E,\varphi)$ and $G' = (V', E',\varphi')$, then $f \in \textup{Hom}_{\mathbf{Graph}}(G,G')$ is a pair of set-functions $f_V: V \to V', f_E: E \to E'$ satisfying the following commutative diagram. Here we denote by $(f,f)$ the map which sends $(u,v) \mapsto (f(u), f(v))$.

This diagram is quite a mouthful, but in words it requires that whenever $v,w$ are adjacent in $G$, then $f(v), f(w)$ are adjacent in $G'$. Rewriting the diagram as an equation more explicitly, we are saying that if $e \in E$ is an edge with $\varphi(e) = (v,w)$, then it must be the case that $(f_V(v), f_V(w)) = \varphi'(f_E(e))$. This is how one “preserves” the structure of a graph.
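The condition is easy to check by brute force on finite graphs (an illustrative Python sketch, not from the post):

```python
# A graph is (V, E, phi) with phi: E -> V x V.  A homomorphism is a
# pair (f_V, f_E) such that (f_V(v), f_V(w)) == phi'(f_E(e)) whenever
# phi(e) == (v, w).  Here phi and phi' are dicts from edges to pairs.
def is_graph_hom(fV, fE, phi, phi_prime):
    return all(
        (fV[v], fV[w]) == phi_prime[fE[e]]
        for e, (v, w) in phi.items()
    )

# G: a path 1 -> 2 -> 3.  G': a single loop on vertex 0.
phi = {'e1': (1, 2), 'e2': (2, 3)}
phi_prime = {'loop': (0, 0)}
fV = {1: 0, 2: 0, 3: 0}             # collapse every vertex
fE = {'e1': 'loop', 'e2': 'loop'}   # send every edge to the loop

assert is_graph_hom(fV, fE, phi, phi_prime)
```

Collapsing a whole graph onto a loop preserves adjacency vacuously well, which is why it passes the check.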

To prove this is a category, we can observe that composition makes sense: given two pairs

$\displaystyle f_V:V \to V'$
$\displaystyle f_E: E \to E'$

and

$\displaystyle g_{V'}: V' \to V''$
$\displaystyle g_{E'}: E' \to E''$

we can compose each morphism individually, getting $g_{V'} f_V: V \to V''$ and $g_{E'} f_E: E \to E''$, and the following hefty-looking commutative diagram.

Let’s verify commutativity together: we already know the two squares on the left and right commute (by hypothesis, they are morphisms in this category). So we have two things left to check: that the three ways to get from $E$ to $V'' \times V''$ are all the same. This amounts to verifying the two equalities

$\displaystyle (g_{V'}, g_{V'}) (f_V, f_V) \varphi = (g_{V'}, g_{V'}) \varphi' f_E = \varphi'' g_{E'} f_E$

But indeed, going left to right in the above equation, each transition from one expression to another only swaps two morphisms within one of the commutative squares. In other words, the first equality is already enforced by the commutativity of the left-hand square, and the second by the right. We are literally only substituting what we already know to be equal!

If it feels like we didn’t actually do any work there (aside from unravelling exactly what the diagrams mean), then you’re already starting to see the benefits of category theory. It can often feel like a cycle: commutative diagrams make it easy to argue about other commutative diagrams, and one can easily get lost in the wilderness of arrows. But more often than not, devoting a relatively small amount of time up front to show one diagram commutes will make a wealth of facts and theorems follow with no extra work. This is part of the reason category theory is affectionately referred to as “abstract nonsense.”

Diagram Categories

Speaking of abstract nonsense: next up is a very abstract example, but thinking about it will reinforce the ideas we’ve put forth in this post while giving a sneak peek to our treatment of universal properties. Fix an object $A$ in an arbitrary category $\mathbf{C}$. Define the category $\mathbf{C}_A$ whose objects are morphisms with source $A$. That is, an object of $\mathbf{C}_A$ looks like

where $B$ ranges over all objects of $\mathbf{C}$. The morphisms of $\mathbf{C}_A$ are commutative diagrams of the form

where as we said earlier the stipulation asserted by the diagram is that $f' = gf$. Let’s verify that the axioms for a category hold. Suppose we have two such commutative diagrams with matching target and source, say,

Note that the arrows $g: A \to C$ must match up in both diagrams, or else composition does not make sense! Then we can combine them into a single commutative diagram:

If it is not obvious that this diagram commutes, we need only write down the argument explicitly: by the first diagram $\beta f = g$ and $\gamma g = h$, and piecing these together we have $\gamma \beta f = \gamma g = h$. Associativity of this “piecing together” follows from the associativity of the morphisms in $\mathbf{C}$. The identity morphism is a diagram whose two “legs” are both $f: A \to B$ and whose connecting morphism is just $1_B$.
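The same “piecing together” can be mimicked in a few lines (an illustrative Python sketch, with arrows of the base category modeled as (source, target) pairs in a poset-style category, where composition glues endpoints):

```python
# Objects of C_A are arrows f: A -> B.  A morphism from f to f' is an
# arrow beta with beta . f == f'.  (Illustrative sketch only.)
def base_compose(g, f):
    """Composition in the base category: glue matching endpoints."""
    assert f[1] == g[0], "source/target mismatch"
    return (f[0], g[1])

def is_coslice_morphism(f, f_prime, beta):
    """Does the triangle with legs f, f' and connecting arrow beta commute?"""
    return base_compose(beta, f) == f_prime

A, B, C, D = 'A', 'B', 'C', 'D'
f, g, h = (A, B), (A, C), (A, D)   # three objects of C_A
beta, gamma = (B, C), (C, D)       # candidate connecting arrows

assert is_coslice_morphism(f, g, beta)        # beta f == g
assert is_coslice_morphism(g, h, gamma)       # gamma g == h
# piecing the triangles together: (gamma beta) f == h
assert is_coslice_morphism(f, h, base_compose(gamma, beta))
```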

This kind of “diagram” category is immensely important, and we will revisit it and many variants of it in the future. A quick sneak peek: this category is closely related to the universal properties of polynomial rings, free objects, and limits.

As a slight generalization, you can define a category whose objects consist of pairs of morphisms with a shared source (or target), i.e.,

We challenge the reader to write down what a morphism of this category would look like, and prove the axioms for a category hold (it’s not hard, but a bit messy). We will revisit categories like this one in our post on universal properties; this particular one is intimately related to products.

## Next Time

Next time we’ll jump straight into some code, and realize the definition of a category as a type in ML. We’ll see how some of our examples of categories here can be implemented using the code, and inspect the pros and cons of the computational version of our definition.

# Categories, What’s the Point?

Perhaps primarily due to the prominence of monads in the Haskell programming language, programmers are often curious about category theory. Proponents of Haskell and other functional languages can put category-theoretic concepts on a pedestal or in a Mexican restaurant, and their benefits can seem as mysterious as they are magical. For instance, the most common use of a monad in Haskell is to simulate the mutation of immutable data. Others include suspending and backtracking computations, and even untying tangled rope.

Category theory is often mocked (or praised) as the be-all and end-all of mathematical abstraction, and as such (and for other reasons I’ve explored on this blog) many have found it difficult to digest and impossible to master. However, in truth category theory arose from a need for organizing mathematical ideas based on their shared structure. In this post, I want to give a brief overview of what purpose category theory serves within mathematics, and emphasize what direction we’ll be going with it on this blog.

We should also note (writing this after the fact), that this article is meant to be a motivation to our future series on category theory. It is very difficult to explain what category theory is without going into very specific details, but we can explain by analogy what category theory achieves for us. That is the goal of this post.

## Category Theory as a Modern Language

It would be a silly question to ask why we don’t program entirely in binary. It’s a slow, inefficient process prone to mistakes of all sorts, and so we generally avoid it (although a well-rounded programmer can readily work in binary when it is necessary). But once upon a time there was no choice. Eventually people found that certain programmatic constructions were ubiquitous, and the developers of the next generation of languages abstracted these ideas to make types, lists, loops, if statements, and functions. The cycle continued: we found that we needed to further allow programmers to define custom data types, polymorphic operations, protected data, and others. Another iteration and we have list comprehensions, Python decorators, Javascript promises, and a whole host of other programming paradigms that have turned into features [1].

As it goes in programming, so it goes in mathematics. I could digress by detailing the history of numbers as symbols, or the transition to set-based mathematics in the late 1800’s and early 1900’s. But within the last fifty years there has been a major revolution in the discourse of modern mathematics: the integration of category theory.

I have to contextualize this immediately, because I don’t want to enter a metamathematical discussion about what the proper foundation of all mathematics should be. This is a huge can of worms riddled with strong opinions and logical fallacies. I’m not saying that mathematicians have unanimously agreed that category theory is the correct basis for all logic and mathematics. I wouldn’t claim that just as I wouldn’t claim that all programmers agree that object-oriented programming is the best model for all programs, because that would be ridiculous and myopic.

I am saying, however, that enough mathematicians have agreed category theory is useful and convenient. As a result category theory is the contemporary baseline for serious discussion in many fields of pure mathematics. In short, anyone who wants to do serious mathematics in these fields must learn the language of category theory.

One goal of this blog’s category theory series is to gain fair fluency in this modern language.

## Category Theory as an Organizing Principle

As most readers will readily understand, people who study and develop programming languages think differently about language features than people who just use programming languages. For instance, a systems programmer friend of mine was pleased to discover that Python supports multiple variable assignments in a single statement (e.g. a,b = 7, False). Now a language designer might smile and think, “Well that’s just syntactic sugar.” But to a working programmer, this kind of feature is quite convenient and extensible (to one-line variable swapping, multiple return values, etc.). On the other hand, when a language designer claims to have invented a wonderful new paradigm, say, continuations, the working programmer would think, “What good is that for? It’s just an ugly way to write functions!” It’s not until someone uses the feature to do amazing things like eliminate the need for a stack or provide lightweight extensible stack inspection that the working programmer appreciates the abstraction.

Analogously, those who study category theory from a logical viewpoint see it very differently than those who use it to come up with new mathematics. Many would argue it was not until the work of Alexander Grothendieck (in the late 1950’s and 60’s) that non-category-theorists saw the true value of applying category theory to their fields. Grothendieck implemented a sweeping reform of a number of mathematical fields, his most notable stemming from the invention of étale cohomology. Now I’m not a number theorist or an algebraic geometer, but I know enough to understand category theory’s role there.

The main benefit to using category theory is as a way to organize and synthesize information. This is particularly true of the concept of a universal property. We will hear more about this in due time, but as it turns out most important mathematical structures can be phrased in terms of universal properties. Moreover, a universal property jumps right to the heart of why a construction is important. For example, one new to mathematics might wonder why polynomials are so ubiquitous and important. The answer is (vaguely) that they satisfy a universal property which makes them canonical extensions of certain computations.

I want to make this point very clear, because most newcomers to category theory are never told this. Category theory exists because it fills a need. Even if that need is a need for better organization and a refocusing of existing definitions. It was not just an attempt to build higher abstractions, but a successful adaptation of mathematics to a more complex world.

And so as category theory has spread through the mathematical world, more and more definitions are phrased in terms of various universal constructions in special categories. This is a good thing precisely because there are essentially only two universal properties: being initial and being final. As we’ll see, by stating that an object satisfies a universal property, one immediately understands how the proof will progress, and many properties like uniqueness and invariance will follow trivially. Familiar readers of this blog will remember our posts on groups (and will read future posts on rings and fields), in which we state and prove theorems about quotients and products and isomorphism theorems which are all essentially the same across the various fields. Viewing the problem abstractly in category theory allows one to prove all of these theorems simultaneously, and to study the differences between the objects via the properties of the category as a whole.

To reiterate, category theory streamlines the process of making precise technical definitions and proving their well-definition. One hopes, then, that very general theorems proved within category theory can apply to a wide breadth of practical areas. [2]

## Category Theory as a Tool to Gain More Knowledge

When someone invents a new programming tool, the primary goal is usually to allow a programmer to do something that he couldn’t do previously (or was difficult/inconvenient to do). For instance, programmers invented version control to allow for easy project collaboration and rollbacks. Before then, managing multiple versions of a file was a horrendous task.

In mathematics, we can ask the poignant question: what does category theory allow us to do that we couldn’t do before? This should be read as besides having a new way to think about mathematical structures and besides having a more efficient language for discourse. Of course, this is a highly philosophical question. Could it be that there is some (non-categorical) theorem that can’t be proved unless you resort to category-theoretical arguments? In my optimistic mind the answer must certainly be no. Moreover, it appears that most proofs that “rely” on category theory only really do so because they’re so deeply embedded in the abstraction that unraveling them to find non-category-theoretical proofs would be a tiresome and fruitless process.

In programming we can ask a related question: what insights does category theory give us about programming? Can we write programs better if we resort to organizing things in terms of categories? Would it be easier to prove correctness of our programs or to discover a good algorithm to solve a task?

I think it goes without saying that we certainly can’t do anything that we couldn’t have done before (this would violate the notion that our usual languages are Turing complete). But these other soft questions should have positive answers. While in the previous two sections I gave concrete reasons why one might want to learn category theory, here the reason is very vague. Supposedly, learning category theory makes one a better programmer by forcing one to make connections between structures and computation. Then when a new problem comes along, it becomes easy (almost natural!) to fit it into a categorical organization of the world, and the right solution just “falls into your lap.”

While I don’t necessarily espouse this line of thinking (I believe any mathematics makes you better at analyzing problems), this is essentially the argument for why functional programming is a good thing to learn.

## What We’ll Do With Categories

I don’t necessarily have any amazing applications of category theory in mind for this blog. Instead, I want to develop a fair fluency and categorical organization (the first two sections of this article) among my readers. Along the way, we will additionally implement the concepts of category theory in code. This will give us a chance to play with the ideas as we learn, and hopefully will make all of the abstract nonsense much more concrete.

So next time we’ll start with the definition of a category, and give a wealth of examples. Until then!

[1] I’m obviously oversimplifying the history of programming languages, but the spirit is true, and the same as for all technological developments: incremental improvements based on a need for convenience and removing repetitive tasks. (back)
[2] One place this is particularly convenient is actually in the theory of persistent homology. Though on this blog the plan is to avoid the theory side before we investigate the algorithm from a high-level, once we get to the theory we will see an effective use of category theory in action. (back)