Last time we worked through some basic examples of universal properties, specifically singling out quotients, products, and coproducts. There are many, many more universal properties that we will mention as we encounter them, but there is one crucial topic in category theory that we have only hinted at: functoriality.

As we’ve repeatedly stressed, the meat of category theory is in the *morphisms*. One natural question to ask is: what notion of morphism is there between categories themselves? Indeed, the most straightforward way to see category-theoretic concepts in classical mathematics is in a clever choice of functor. For example (and this example isn’t necessary for the rest of the article), one can “associate” to each topological space a group, called the homology group, in such a way that continuous functions between topological spaces translate to group homomorphisms. Moreover, this translation is *functorial* in the following sense: the group homomorphism associated to a composition is the composition of the associated group homomorphisms. If we denote the association by a subscripted asterisk, so that $f$ maps to $f_*$, then we get the following common formula.

$\displaystyle (g \circ f)_* = g_* \circ f_*$

This is *the* crucial property that maintains the structure of morphisms. Again, this should reinforce the idea that the crucial ingredient of every definition in category theory is its effect on morphisms.

## Functors: a Definition

In complete generality, a functor is a mapping between two categories which preserves the structure of morphisms. Formally,

**Definition:** Let $\mathbf{C}, \mathbf{D}$ be categories. A *functor* $\mathscr{F}: \mathbf{C} \to \mathbf{D}$ consists of two parts:

- For each object $C \in \mathbf{C}$ an associated object $\mathscr{F}(C) \in \mathbf{D}$.
- For each morphism $f \in \textup{Hom}_{\mathbf{C}}(A, B)$ a corresponding morphism $\mathscr{F}(f) \in \textup{Hom}_{\mathbf{D}}(\mathscr{F}(A), \mathscr{F}(B))$. Specifically, for each pair of objects $A, B$ we have a set-function $\textup{Hom}_{\mathbf{C}}(A, B) \to \textup{Hom}_{\mathbf{D}}(\mathscr{F}(A), \mathscr{F}(B))$.

There are two properties that a functor needs to satisfy to “preserve structure.” The first is that identity morphisms are preserved; that is, $\mathscr{F}(1_C) = 1_{\mathscr{F}(C)}$ for every object $C$. Second, composition must be preserved. That is, if $f \in \textup{Hom}_{\mathbf{C}}(A, B)$ and $g \in \textup{Hom}_{\mathbf{C}}(B, C)$, we have

$\displaystyle \mathscr{F}(g \circ f) = \mathscr{F}(g) \circ \mathscr{F}(f)$

We often denote a functor as we would a function $\mathscr{F}: \mathbf{C} \to \mathbf{D}$, and use the function application notation $\mathscr{F}(C), \mathscr{F}(f)$ as if everything were happening with sets.
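Before moving to the examples, a programmer’s sanity check (this sketch is ours, not part of the definition): in ML, the list type constructor together with `map` satisfies exactly these two laws, sending each type `'a` to `'a list` and each function `f : 'a -> 'b` to `map f : 'a list -> 'b list`.

[sourcecode]
(* A sketch (not from the definition above): lists with map form a
   functor on the category of ML types and functions. *)
fun map' f [] = []
  | map' f (x :: xs) = f x :: map' f xs

(* Identity law:    map' (fn x => x) xs  evaluates to xs.
   Composition law: map' (g o f) xs  evaluates to  map' g (map' f xs). *)
[/sourcecode]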

Let’s look at a few simple examples.

Let $\mathbf{FinSet}$ be the poset category of finite sets with subset inclusions as morphisms, and let $\mathbf{Z}$ be the category whose objects are integers where there is a unique morphism $x \to y$ if $x \leq y$. Then the size function $A \mapsto |A|$ is a functor $\mathbf{FinSet} \to \mathbf{Z}$, since $A \subseteq B$ implies $|A| \leq |B|$. Continuing with $\mathbb{Z}$, remember that $\mathbb{Z}$ forms a group under addition (also known as $(\mathbb{Z}, +)$), and that a group is a category with a single object whose morphisms are the group elements. And so by its very definition any group homomorphism $\mathbb{Z} \to \mathbb{Z}$ is a functor from $(\mathbb{Z}, +)$ to $(\mathbb{Z}, +)$. A functor from a category to itself is often called an *endofunctor*.

There are many examples of functors from the category of topological spaces to the category of groups. These include some examples we’ve seen on this blog, such as the fundamental group and homology groups.

One trivial example of a functor is called the *forgetful functor*. Let $\mathbf{C}$ be a category whose objects are sets and whose morphisms are set-maps with additional structure, for example the category of groups. Define a functor $\mathbf{C} \to \mathbf{Set}$ which acts as the identity on both sets and functions. This functor simply “forgets” the structure in $\mathbf{C}$. In the realm of programs and types (this author is thinking Java), one can imagine this as a ‘type-cast’ from String to Object. In the same vein, one could define an “identity” endofunctor $\mathbf{C} \to \mathbf{C}$ which does absolutely nothing.

One interesting way to think of a functor is as a “realization” of one category inside another. In particular, because the composition structure of $\mathbf{C}$ is preserved by a functor $\mathscr{F}: \mathbf{C} \to \mathbf{D}$, it must be the case that all commutative diagrams are “sent” to commutative diagrams. In addition, isomorphisms are sent to isomorphisms: if $f, g$ are inverses of each other, then $\mathscr{F}(f) \circ \mathscr{F}(g) = \mathscr{F}(f \circ g) = \mathscr{F}(1) = 1$, and likewise for the reverse composition. And so if we have a functor from a poset category (say, the real numbers with the usual inequality) to some category $\mathbf{C}$, then we can realize the structure of the poset sitting inside of $\mathbf{C}$ (perhaps involving only some of the objects of $\mathbf{C}$). This view comes in handy in a few places we’ll see later in our series on computational topology.

## The Hom Functor

There is a very important and nontrivial example called the “hom functor” which is motivated by the category of vector spaces. We’ll stick to the concrete example of vector spaces, but the generalization to arbitrary categories is straightforward. If the reader knows absolutely nothing about vector spaces, replace “vector space” with “object” and “linear map” with “morphism.” It won’t quite be correct, but it will get the idea across.

To each vector space $V$ one can define a *dual* vector space of functions $V \to \mathbb{R}$ (or whatever the field $\mathbb{F}$ of scalars for $V$ is). Following the lead of hom sets, the dual vector space is denoted $\textup{Hom}(V, \mathbb{F})$. Here the morphisms in the set are those from the category of vector spaces (that is, linear maps $V \to \mathbb{F}$). Indeed, this is a vector space: one can add two functions pointwise ($(f + g)(v) = f(v) + g(v)$) and scale them ($(\lambda f)(v) = \lambda f(v)$), and the properties for a vector space are trivial to check.

Now the mapping $\textup{Hom}(-, \mathbb{F})$ which takes $V$ and produces $\textup{Hom}(V, \mathbb{F})$ is a functor called the *hom functor*. But let’s inspect this one more closely. The source category is obviously the category of vector spaces, but what is the target category? The objects are clear: the hom sets $\textup{Hom}(V, \mathbb{F})$ where $V$ is a vector space. The morphisms of the category are particularly awkward. Officially, they are written as

$\displaystyle \textup{Hom}(\textup{Hom}(V, \mathbb{F}), \textup{Hom}(W, \mathbb{F}))$

so a morphism in this category takes as input a linear map $V \to \mathbb{F}$ and produces as output one $W \to \mathbb{F}$. But what are the morphisms in words we can understand? And how can we compose them? Before reading on, think about what a morphism of morphisms should look like.

Okay, ready?

The morphisms in this category can be thought of as linear maps $W \to V$. More specifically, given a morphism $\varphi \in \textup{Hom}(V, \mathbb{F})$ and a linear map $g: W \to V$, we can construct a linear map $W \to \mathbb{F}$ by composing $\varphi \circ g$.

And so if we apply the functor $\textup{Hom}(-, \mathbb{F})$ to a morphism $f: V \to W$, we get a morphism in $\textup{Hom}(\textup{Hom}(W, \mathbb{F}), \textup{Hom}(V, \mathbb{F}))$. Let’s denote the application of the hom functor using an asterisk, so that $f \mapsto f^*$.

But wait a minute! The mapping here is going in the wrong direction: we took a map in one category going from the $V$ side to the $W$ side, and after applying the functor we got a map going from the $W$ side ($\textup{Hom}(W, \mathbb{F})$) to the $V$ side ($\textup{Hom}(V, \mathbb{F})$). It seems there is no reasonable way to take a map $f: V \to W$ and get a map $\textup{Hom}(V, \mathbb{F}) \to \textup{Hom}(W, \mathbb{F})$ using just $f$, but the other way is obvious. The hom functor “goes backward” in a sense. In other words, the composition property for our “functor” makes the composite of $f: U \to V$ and $g: V \to W$ correspond to the map taking $\varphi$ to $\varphi \circ g \circ f$. On the other hand, there is no way to compose $g^* \circ f^*$, as they operate on the wrong domains! It must be the other way around:

$\displaystyle (g \circ f)^* = f^* \circ g^*$

We advise the reader to write down the commutative diagram and trace out the compositions to make sure everything works out. But this is a problem, because it makes the hom functor fail the most important requirement. In order to fix this reversal “problem,” we make the following definition:

**Definition:** A functor $\mathscr{F}: \mathbf{C} \to \mathbf{D}$ is called *covariant* if it preserves the order of morphism composition, so that $\mathscr{F}(g \circ f) = \mathscr{F}(g) \circ \mathscr{F}(f)$. If it reverses the order, so that $\mathscr{F}(g \circ f) = \mathscr{F}(f) \circ \mathscr{F}(g)$, we call it *contravariant*.

And so the hom functor on vector spaces is a contravariant functor, while all of the other functors we’ve defined in this post are covariant.
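To see contravariance concretely in code (a sketch of ours, not from the original discussion), note that the hom functor’s action on a morphism is just precomposition:

[sourcecode]
(* Given f : 'a -> 'b, the pullback f^* sends phi : 'b -> 'c to
   phi o f : 'a -> 'c. The direction reverses: f goes from 'a to 'b,
   but pullback f goes from ('b -> 'c) to ('a -> 'c). *)
fun pullback f = fn phi => phi o f

(* Contravariant composition law: pullback (g o f) phi and
   ((pullback f) o (pullback g)) phi both evaluate to phi o g o f. *)
[/sourcecode]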

There is another way to describe a contravariant functor as a covariant functor which is often used. It involves the idea of an “opposite” category. For any category $\mathbf{C}$ we can define the *opposite category* $\mathbf{C}^{\textup{op}}$ to be a category with the same objects as $\mathbf{C}$, but with all morphisms reversed. That is, we define

$\displaystyle \textup{Hom}_{\mathbf{C}^{\textup{op}}}(A, B) = \textup{Hom}_{\mathbf{C}}(B, A)$

We leave it to the reader to verify that this is indeed a category. It is also not hard to see that $(\mathbf{C}^{\textup{op}})^{\textup{op}} = \mathbf{C}$. Opposite categories give us a nice recharacterization of a contravariant functor. Indeed, because composition in opposite categories is reversed, a contravariant functor $\mathbf{C} \to \mathbf{D}$ is just a *covariant* functor on the opposite category $\mathbf{C}^{\textup{op}} \to \mathbf{D}$. Or equivalently, one $\mathbf{C} \to \mathbf{D}^{\textup{op}}$. More than anything, opposite categories are syntactic sugar: composition is only reversed artificially to make domains and codomains line up, but the actual composition is the same as in the original category.
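Using the ML encoding of a category that we recall later in this post, the opposite category is a tidy one-liner; this sketch is ours:

[sourcecode]
(* A sketch: to form the opposite category, swap source with target
   and flip the order of composition. The underlying data is unchanged. *)
fun opposite (category (src, tgt, idmor, compose)) =
   category (tgt, src, idmor, fn (f, g) => compose (g, f))
[/sourcecode]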

## Functors as Types

Before we move on to some code, let’s take a step back and look at the big picture (we’ve certainly plowed through enough details up to this point). The main thesis is that functoriality is a valuable property for an operation to have, but it’s not entirely clear *why*. Even the brightest of readers can only assume such properties are useful for mathematical analysis. It seems that the question we started this series out with, “what does category theory allow us to *do* that we couldn’t do before?” still has the answer, “nothing.” More relevantly, the question of what functoriality allows us to do is unclear. Indeed, once again the answer is “nothing.” Rather, functoriality in a computation allows one to analyze the behavior of a program. It gives the programmer a common abstraction in which to frame operations, and makes it easier to prove the correctness of one’s algorithms.

In this light, the best we can do in implementing functors in programs is to give a type definition and examples. And in this author’s opinion this series is quickly becoming boring (all of the computational examples are relatively lame), so we will skip the examples in favor of the next post which will analyze more meaty programming constructs from a categorical viewpoint.

So recall the ML type definition of a category, a tuple of operations for source, target, identity, and composition:

[sourcecode]
datatype ('object, 'arrow) Category =
   category of ('arrow -> 'object) *
               ('arrow -> 'object) *
               ('object -> 'arrow) *
               ('arrow * 'arrow -> 'arrow)
[/sourcecode]

And so a functor consists of the two categories involved (as types), the mapping on objects, and the mapping on morphisms.

[sourcecode]
datatype ('cObject, 'cArrow, 'dObject, 'dArrow) Functor =
   aFunctor of ('cObject, 'cArrow) Category *
               ('cObject -> 'dObject) *
               ('cArrow -> 'dArrow) *
               ('dObject, 'dArrow) Category
[/sourcecode]

We encourage the reader who is uncomfortable with these type definitions to experiment with them by implementing some of our simpler examples (say, the size functor from sets to integers). As far as the basic definitions go, functors are not all that interesting. They become much more interesting when additional structure is imposed on them, and in the distant future we will see a glimpse of this in the form of adjointness. We hope to get around to analyzing statements like “syntax and semantics are adjoint functors.” For the next post in this series, we will take the three beloved functions of functional programming (map, foldl(r), and filter), and see what their categorical properties are.
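As a nudge in that direction, here is one way the size functor might look with these datatypes. This is our sketch, with simplifying assumptions: a finite set is an int list, an inclusion $A \subseteq B$ is represented by the pair $(A, B)$, and composition takes its arguments in diagrammatic order (first arrow first).

[sourcecode]
(* Objects: int lists (finite sets). Arrows: pairs (a, b) standing for
   the inclusion "a is a subset of b". *)
val setCategory =
   category ((fn (a, _) => a),               (* source of an inclusion *)
             (fn (_, b) => b),               (* target of an inclusion *)
             (fn a => (a, a)),               (* identity inclusion *)
             (fn ((a, _), (_, c)) => (a, c)))  (* compose a in b, b in c *)

(* The poset category of integers: an arrow (x, y) stands for x <= y. *)
val intCategory =
   category ((fn (x, _) => x),
             (fn (_, y) => y),
             (fn x => (x, x)),
             (fn ((x, _), (_, z)) => (x, z)))

(* The size functor: |A| on objects, (a, b) to (|a|, |b|) on arrows. *)
val sizeFunctor =
   aFunctor (setCategory,
             length,
             (fn (a, b) => (length a, length b)),
             intCategory)
[/sourcecode]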

Until then!