How to Conquer Tensorphobia

A professor at Stanford once said,

If you really want to impress your friends and confound your enemies, you can invoke tensor products… People run in terror from the $ \otimes$ symbol.

He was explaining some aspects of multidimensional Fourier transforms, but this comment is only half in jest; people get confused by tensor products. It’s often for good reason. People who really understand tensors feel obligated to explain it using abstract language (specifically, universal properties). And the people who explain it in elementary terms don’t really understand tensors.

This post is an attempt to bridge the gap between the elementary and advanced understandings of tensors. We’ll start with the elementary (axiomatic) approach, just to get a good feel for the objects we’re working with and their essential properties. Then we’ll transition to the “universal” mode of thought, with the express purpose of enlightening us as to why the properties are both necessary and natural.

But above all, we intend to be sufficiently friendly so as to not make anybody run in fear. This means lots of examples and preferring words over symbols. Unfortunately, we simply can’t get by without the reader knowing the very basics of linear algebra (the content of our first two primers on linear algebra (1) (2), though the only important part of the second is the definition of an inner product).

So let’s begin.

Tensors as a Bunch of Axioms

Before we get into the thick of things I should clarify some basic terminology. Tensors are just vectors in a special vector space. We’ll see that such a vector space comes about by combining two smaller vector spaces via a tensor product. So the tensor product is an operation combining vector spaces, and tensors are the elements of the resulting vector space.

Now the use of the word product is quite suggestive, and it may lead one to think that a tensor product is similar or related to the usual direct product of vector spaces. In fact they are related (in a very precise sense), but they are far from similar. If you were pressed, however, you could start with the direct product of two vector spaces and take a mathematical machete to it until it’s so disfigured that you have to give it a new name (the tensor product).

With that image in mind let’s see how that is done. For the sake of generality we’ll talk about two arbitrary finite-dimensional vector spaces $ V, W$ of dimensions $ n, m$. Recall that the direct product $ V \times W$ is the vector space of pairs $ (v,w)$ where $ v$ comes from $ V$ and $ w$ from $ W$. Addition in this vector space is defined componentwise ($ (v_1,w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)$), and scalar multiplication scales both components: $ \lambda (v,w) = (\lambda v, \lambda w)$.

To get the tensor product space $ V \otimes W$, we make the following modifications. First, we redefine what it means to do scalar multiplication. In this brave new tensor world, scalar multiplication of the whole vector-pair is declared to be the same as scalar multiplication of any component you want. In symbols,

$ \displaystyle \lambda (v, w) = (\lambda v, w) = (v, \lambda w)$

for all choices of scalars $ \lambda$ and vectors $ v, w$. Second, we change the addition operation so that it only works if one of the two components is the same. In symbols, we declare that

$ (v, w) + (v’, w) = (v + v’, w)$

only works because $ w$ is the same in both pieces, and with the same rule applying if we switch the positions of $ v,w$ above. All other additions are simply declared to be new vectors. I.e. $ (x,y) + (z,w)$ is simply itself. It’s a valid addition — we need to be able to add stuff to be a vector space — but you just can’t combine it any further unless you can use the scalar multiplication to factor out some things so that $ y=w$ or $ x=z$. To say it still one more time, a general element of the tensor product $ V \otimes W$ is a sum of these pairs that can or can’t be combined by addition (in general things can’t always be combined).

Finally, we rename the pair $ (v,w)$ to $ v \otimes w$, to distinguish it from the old vector space $ V \times W$ that we’ve totally butchered and reanimated, and we call the tensor product space as a whole $ V \otimes W$. Those familiar with this kind of abstract algebra will recognize quotient spaces at work here, but we won’t use that language except to note that we cover quotients and free spaces elsewhere on this blog, and that’s the formality we’re ignoring.

As an example, say we’re taking the tensor product of two copies of $ \mathbb{R}$. This means that our space $ \mathbb{R} \otimes \mathbb{R}$ is comprised of vectors like $ 3 \otimes 5$, and moreover that the following operations are completely legitimate.

$ 3 \otimes 5 + 1 \otimes (-5) = 3 \otimes 5 + (-1) \otimes 5 = 2 \otimes 5$

$ 6 \otimes 1 + 3\pi \otimes \pi = 3 \otimes 2 + 3 \otimes \pi^2 = 3 \otimes (2 + \pi^2)$
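In fact, a direct consequence of these rules (though not spelled out in the examples above) is that every element of $ \mathbb{R} \otimes \mathbb{R}$ collapses all the way down to a scalar multiple of $ 1 \otimes 1$. For instance,

$ \displaystyle 3 \otimes 5 = 3(1 \otimes 5) = (3 \cdot 5)(1 \otimes 1) = 15 \otimes 1$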

Cool. This seemingly innocuous change clearly has huge implications on the structure of the space. We’ll get to specifics about how different tensors are from regular products later in this post, but for now we haven’t even proved this thing is a vector space. It might not be obvious, but if you go and do the formalities and write the thing as a quotient of a free vector space (as we mentioned we wouldn’t do) then you know that quotients of vector spaces are again vector spaces. So we get that one for free. But even without that it should be pretty obvious: we’re essentially just declaring that all the axioms of a vector space hold when we want them to. So if you were wondering whether

$ \lambda (a \otimes b + c \otimes d) = \lambda(a \otimes b) + \lambda(c \otimes d)$

the answer is yes, by force of will.

So just to recall, the axioms of a tensor space $ V \otimes W$ are

  1. The “basic” vectors are $ v \otimes w$ for $ v \in V, w \in W$, and they’re used to build up all other vectors.
  2. Addition is symbolic, unless one of the components is the same in both addends, in which case $ (v_1, w) + (v_2, w) = (v_1+ v_2, w)$ and $ (v, w_1) + (v,w_2) = (v, w_1 + w_2)$.
  3. You can freely move scalar multiples around the components of $ v \otimes w$.
  4. The rest of the vector space axioms (distributivity, additive inverses, etc) are assumed with extreme prejudice.

Naturally, one can extend this definition to $ n$-fold tensor products, like $ V_1 \otimes V_2 \otimes \dots \otimes V_d$. Here we write the vectors as sums of things like $ v_1 \otimes \dots \otimes v_d$, and we enforce that two addends can only be combined if all but one coordinate is the same, and scalar multiples move around to all coordinates equally freely.

So where does it come from?!

By now we have this definition and we can play with tensors, but any sane mathematically minded person would protest, “What the hell would cause anyone to come up with such a definition? I thought mathematics was supposed to be elegant!”

It’s an understandable position, but let me now try to convince you that tensor products are very natural. The main intrinsic motivation for the rest of this section will be this:

We have all these interesting mathematical objects, but over the years we have discovered that the maps between objects are the truly interesting things.

A fair warning: although we’ll maintain a gradual pace and informal language in what follows, by the end of this section you’ll be reading more or less mature 20th-century mathematics. It’s quite alright to stop with the elementary understanding (and skip to the last section for some cool notes about computing), but we trust that the intrepid readers will push on.

So with that understanding we turn to multilinear maps. Of course, the first substantive thing we study in linear algebra is the notion of a linear map between vector spaces. That is, a map $ f: V \to W$ that respects addition and scalar multiplication (i.e. $ f(v + v’) = f(v) + f(v’)$ and $ f(\lambda v) = \lambda f(v)$).

But it turns out that lots of maps we work with have much stronger properties worth studying. For example, if we think of matrix multiplication as an operation, call it $ m$, then $ m$ takes in two matrices and spits out their product

$ m(A,B) = AB$

Now what would be an appropriate notion of linearity for this map? Certainly it is linear in the first coordinate, because if we fix $ B$ then

$ m(A+C, B) = (A+C)B = AB + CB = m(A,B) + m(C,B)$

And for the same reason it’s linear in the second coordinate. But it is most definitely not linear in both coordinates simultaneously. In other words,

$ m(A+B, C+D) = (A+B)(C+D) = AC + AD + BC + BD \neq AC + BD = m(A,C) + m(B,D)$

In fact, there is only one function that satisfies both “linearity in its two coordinates separately” and also “linearity in both coordinates simultaneously,” and it’s the zero map! (Try to prove this as an exercise.) So the strongest kind of linearity we could reasonably impose is that $ m$ is linear in each coordinate when all else is fixed. Note that this property allows us to shift around scalar multiples, too. For example,

$ \displaystyle m(\lambda A, B) = \lambda AB = A (\lambda B) = m(A, \lambda B) = \lambda m(A,B)$

Starting to see the wispy strands of a connection to tensors? Good, but hold it in for a bit longer. This single-coordinate-wise-linear property is called bilinearity when we only have two coordinates, and multilinearity when we have more.

Here are some examples of nice multilinear maps that show up everywhere:

  • If $ V$ is an inner product space over $ \mathbb{R}$, then the inner product is bilinear.
  • The determinant of a matrix is a multilinear map if we view the columns of the matrix as vector arguments.
  • The cross product of vectors in $ \mathbb{R}^3$ is bilinear.

There are many other examples, but you should have at least passing familiarity with these notions, and it’s enough to convince us that multilinearity is worth studying abstractly.

And so what tensors do is give a sort of classification of multilinear maps. The idea is that every multilinear map $ f$ from a product vector space $ U_1 \times \dots \times U_d$ to any vector space $ Y$ can be written first as a multilinear map to the tensor space

$ \displaystyle \alpha : U_1 \times \dots \times U_d \to U_1 \otimes \dots \otimes U_d$

Followed by a linear map to $ Y$,

$ \displaystyle \hat{f} : U_1 \otimes \dots \otimes U_d \to Y$

And the important part is that $ \alpha$ doesn’t depend on the original $ f$ (but $ \hat{f}$ does). One usually draws this as a single diagram:

[Commutative diagram: $ f$ factors through the tensor product as $ f = \hat{f} \alpha$]

And to say this diagram commutes is to say that all possible ways to get from one point to another are equivalent (the compositions of the corresponding maps you follow are equal, i.e. $ f = \hat{f} \alpha$).

In fuzzy words, the tensor product is like the gatekeeper of all multilinear maps, and $ \alpha$ is the gate. Yet another way to say this is that $ \alpha$ is the most general possible multilinear map that can be constructed from $ U_1 \times \dots \times U_d$. Moreover, the tensor product itself is uniquely defined by having a “most-general” $ \alpha$ (up to isomorphism). This notion is often referred to by mathematicians as the “universal property” of the tensor product. And they might say something like “the tensor product is initial with respect to multilinear mappings from the standard product.” We discuss language like this in detail in this blog’s series on category theory, but it’s essentially a super-compact (and almost too vague) way of saying what the diagram says.

Let’s explore this definition when we specialize to a tensor of two vector spaces, and it will give us a good understanding of $ \alpha$ (which is really incredibly simple, but people like to muck it up with choices of coordinates and summations). So fix $ V, W$ as vector spaces and look at the diagram

[Commutative diagram: the same triangle specialized to $ V \times W$, $ V \otimes W$, and $ Y$]

What is $ \alpha$ in this case? Well it just sends $ (v,w) \mapsto v \otimes w$. Is this map multilinear? Well if we fix $ w$ then

$ \displaystyle \alpha(v_1 + v_2, w) = (v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w = \alpha(v_1, w) + \alpha (v_2, w)$

and

$ \displaystyle \alpha(\lambda v, w) = (\lambda v) \otimes w = \lambda (v \otimes w) = \lambda \alpha(v,w)$

And our familiarity with tensors now tells us that the other side holds too. Actually, rather than say this is a result of our “familiarity with tensors,” the truth is that this is how we know that we need to define the properties of tensors as we did. It’s all because we designed tensors to be the gatekeepers of multilinear maps!

So now let’s prove that all multilinear maps $ f : V \times W \to Y$ can be decomposed into an $ \alpha$ part and a $ \hat{f}$ part. To do this we need to know what data uniquely defines a multilinear map. For usual linear maps, all we had to do was define the effect of the map on each element of a basis (the rest was uniquely determined by the linearity property). We know what a basis of $ V \times W$ is: it’s just the union of the bases of the pieces. Say that $ V$ has a basis $ v_1, \dots, v_n$ and $ W$ has $ w_1, \dots, w_m$, then a basis for the product is just $ ((v_1, 0), \dots, (v_n,0), (0,w_1), \dots, (0,w_m))$.

But multilinear maps are more nuanced, because they have two arguments. In order to say “what they do on a basis” we really need to know how they act on all possible pairs of basis elements. For how else could we determine $ f(v_1 + v_2, w_1)$? If there are $ n$ of the $ v_i$’s and $ m$ of the $ w_i$’s, then there are $ nm$ such pairs $ f(v_i, w_j)$.

Uncoincidentally, as $ V \otimes W$ is a vector space, its basis can also be constructed in terms of the bases of $ V$ and $ W$. You simply take all possible tensors $ v_i \otimes w_j$. Since every $ v \in V, w \in W$ can be written in terms of their bases, it’s clear that any tensor $ \sum_{k} a_k \otimes b_k$ can also be written in terms of the basis tensors $ v_i \otimes w_j$ (by simply expanding each $ a_k, b_k$ in terms of their respective bases, and getting a larger sum of more basic tensors).

Just to drive this point home, if $ (e_1, e_2, e_3)$ is a basis for $ \mathbb{R}^3$, and $ (g_1, g_2)$ a basis for $ \mathbb{R}^2$, then the tensor space $ \mathbb{R}^3 \otimes \mathbb{R}^2$ has basis

$ (e_1 \otimes g_1, e_1 \otimes g_2, e_2 \otimes g_1, e_2 \otimes g_2, e_3 \otimes g_1, e_3 \otimes g_2)$

It’s a theorem that finite-dimensional vector spaces of equal dimension are isomorphic, so the length of this basis (6) tells us that $ \mathbb{R}^3 \otimes \mathbb{R}^2 \cong \mathbb{R}^6$.
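Explicitly, unrolling the expansion described a moment ago: if $ v = a_1 e_1 + a_2 e_2 + a_3 e_3$ and $ w = b_1 g_1 + b_2 g_2$, then the addition and scalar rules give

$ \displaystyle v \otimes w = \left ( \sum_i a_i e_i \right ) \otimes \left ( \sum_j b_j g_j \right ) = \sum_{i,j} a_i b_j (e_i \otimes g_j)$

so the six numbers $ a_i b_j$ are precisely the coordinates of $ v \otimes w$ in this basis.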

So fine, back to decomposing $ f$. All we have left to do is use the data given by $ f$ (the effect on pairs of basis elements) to define $ \hat{f} : V \otimes W \to Y$. The definition is rather straightforward, as we have already made the suggestive move of showing that the basis for the tensor space ($ v_i \otimes w_j$) and the definition of $ f(v_i, w_j)$ are essentially the same.

That is, just take $ \hat{f}(v_i \otimes w_j) = f(v_i, w_j)$. Note that this is just defined on the basis elements, and so we extend to all other vectors in the tensor space by imposing linearity (defining $ \hat{f}$ to split across sums of tensors as needed). Is this well defined? Well, multilinearity of $ f$ forces it to be so. For if we had two equal tensors, say, $ \lambda v \otimes w = v \otimes \lambda w$, then we know that $ f$ has to respect their equality, because $ f(\lambda v_i, w_j) = f(v_i, \lambda w_j)$, so $ \hat{f}$ will take the same value on equal tensors regardless of which representative we pick (where we decide to put the $ \lambda$). The same idea works for sums, so everything checks out, and $ f(v,w) = \hat{f}(\alpha(v,w))$ for all $ v,w$, as desired. Moreover, we didn’t make any choices in constructing $ \hat{f}$. If you retrace our steps in the argument, you’ll see that everything was essentially decided for us once we fixed a choice of a basis (by our wise decisions in defining $ V \otimes W$). Since the construction would be isomorphic if we changed the basis, our choice of $ \hat{f}$ is unique.
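As a concrete instance of this recipe (my example, not one worked in the post): take $ f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ to be the standard inner product, which we noted earlier is bilinear. The induced linear map $ \hat{f} : \mathbb{R}^n \otimes \mathbb{R}^n \to \mathbb{R}$ is determined on the basis by $ \hat{f}(e_i \otimes e_j) = f(e_i, e_j)$, which is $ 1$ when $ i = j$ and $ 0$ otherwise, and is then extended linearly to sums of basis tensors.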

There is a lot more to say about tensors, and indeed there are some other useful ways to think about tensors that we’ve completely ignored. But this discussion should make it clear why we define tensors the way we do. Hopefully it eliminates most of the mystery in tensors, although there is still a lot of mystery in trying to compute stuff using tensors. So we’ll wrap up this post with a short discussion about that.

Computability and Stuff

It should be clear by now that plain product spaces $ V \times W$ and tensor product spaces $ V \otimes W$ are extremely different. In fact, they’re only related in that their underlying sets of vectors are built from pairs of vectors in $ V$ and $ W$. Avid readers of this blog will also know that operations involving matrices (like row reduction, eigenvalue computations, etc.) are generally efficient, or at least they run in polynomial time so they’re not crazy impractically slow for modest inputs.

On the other hand, it turns out that almost every question you might want to ask about tensors is difficult to answer computationally. As with the definition of the tensor product, this is no mere coincidence. There is something deep going on with tensors, and it has serious implications regarding quantum computing. More on that in a future post, but for now let’s just focus on one hard problem to answer for tensors.

As you know, the most general way to write an element of a tensor space $ U_1 \otimes \dots \otimes U_d$ is as a sum of the basic-looking tensors.

$ \displaystyle \sum_k a_{1,k} \otimes a_{2,k} \otimes \dots \otimes a_{d,k}$

where the $ a_{i,k}$ are linear combinations of basis vectors in the $ U_i$. But as we saw with our examples over $ \mathbb{R}$, there can be lots of different ways to write a tensor. If you’re lucky, you can write the entire tensor as a one-term sum, that is, as a single tensor $ a_1 \otimes \dots \otimes a_d$. If you can do this we call the tensor a pure tensor, or a rank 1 tensor. We then have the following natural definition and problem:

Definition: The rank of a tensor $ x \in U_1 \otimes \dots \otimes U_d$ is the minimum number of terms in any representation of $ x$ as a sum of pure tensors. The one exception is the zero element, which has rank zero by convention.
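For example (an illustration of the definition, not from the original post): in $ \mathbb{R}^2 \otimes \mathbb{R}^2$ with basis $ e_1, e_2$ on each factor, the four-term sum

$ \displaystyle e_1 \otimes e_1 + e_1 \otimes e_2 + e_2 \otimes e_1 + e_2 \otimes e_2 = (e_1 + e_2) \otimes (e_1 + e_2)$

has rank 1, while the innocuous-looking $ e_1 \otimes e_1 + e_2 \otimes e_2$ turns out to have rank 2. The number of terms you happen to write down can badly overestimate the rank.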

Problem: Given a tensor $ x \in k^{n_1} \otimes k^{n_2} \otimes k^{n_3}$ where $ k$ is a field, compute its rank.

Of course this isn’t possible in standard computing models unless you can represent the elements of the field (and hence the elements of the vector space in question) in a computer program. So we restrict $ k$ to be either the rational numbers $ \mathbb{Q}$ or a finite field $ \mathbb{F}_{q}$.

Even though the problem is simple to state, it was proved in 1990 (a result of Johan Håstad) that tensor rank is hard to compute. Specifically, the theorem is that

Theorem: Computing tensor rank is NP-hard when $ k = \mathbb{Q}$ and NP-complete when $ k$ is a finite field.

The details are given in Håstad’s paper, but the important work that followed essentially showed that most problems involving tensors are hard to compute (many of them by reduction from computing rank). This is unfortunate, but also displays the power of tensors. In fact, tensors are so powerful that many believe understanding them better will lead to insight in some very important problems, like finding faster matrix multiplication algorithms or proving circuit lower bounds (which is closely related to P vs NP). Finding low-rank tensor approximations is also a key technique in a lot of recent machine learning and data mining algorithms.

With this in mind, the enterprising reader will probably agree that understanding tensors is both valuable and useful. In the future of this blog we’ll hope to see some of these techniques, but at the very least we’ll see the return of tensors when we delve into quantum computing.

Until next time!

The Universal Properties of Map, Fold, and Filter

A lot of people who like functional programming often give the reason that the functional style is simply more elegant than the imperative style. When compelled or inspired to explain (as I did in my old post, How I Learned to Love Functional Programming), they often point to the three “higher-order” functions map, fold, and filter, as providing a unifying framework for writing and reasoning about programs. But how unifying are they, really? In this post we’ll give characterizations of these functions in terms of universal properties, and we’ll see that in fact fold is the “most” universal of the three, and its natural generalization gives a characterization of transformations of standard compound data types.

By being universal or having a universal property, we don’t mean that map, fold, and filter are somehow mystically connected to all programming paradigms. That might be true, but it’s not the point of this article. Rather, by saying something has a universal property we are making a precise mathematical statement that it is either an initial or final object (or the unique morphism determined by such) in some category.

That means that, as a fair warning to the reader, this post assumes some basic knowledge of category theory, but no more than what we’ve covered on this blog in the past. Of particular importance for the first half of this post is the concept of a universal property, and in the followup post we’ll need some basic knowledge of functors.

Map, Filter, and Fold

Recalling their basic definitions, map is a function which accepts as input a list $ L$ whose elements all have the same type (call it $ A$), and a function $ f$ which maps $ A$ to another type $ B$. Map then applies $ f$ to every entry of $ L$ to produce a new list whose entries all have type $ B$.

In most languages, implementing the map function is an elementary exercise. Here is one possible definition in ML.

fun map(_, nil) = nil
  | map(f, (head::tail)) = f(head) :: map(f, tail)

Next, filter takes as input a boolean-valued predicate $ p : A \to \textup{bool}$ on types $ A$ and a list $ L$ whose entries have type $ A$, and produces a list of those entries of $ L$ which satisfy the predicate. Again, its implementation in ML might be:

fun filter(_, nil) = nil
  | filter(p, (head::tail)) = if p(head)
                                 then (head :: filter(p, tail))
                                 else filter(p, tail)

Finally, fold is a function which “reduces” a list $ L$ of entries with type $ A$ down to a single value of type $ B$. It accepts as input a function $ f : A \times B \to B$, and an initial value $ v \in B$, and produces a value of type $ B$ by recursively applying $ f$ as follows:

fun fold(_, v, nil) = v
  | fold(f, v, (head::tail)) = f(head, fold(f, v, tail))

(If this definition is mysterious, you’re probably not ready for the rest of this post.)
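As a quick usage check (my example, assuming the definition of fold just given):

(* computes 1 + (2 + (3 + 0)) = 6, working from the right end of the list *)
val six = fold (op +, 0, [1, 2, 3])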

One thing that’s nice about fold is that we can define map and filter in terms of it:

fun map(f, L) = fold((fn (x, xs) => f(x) :: xs), [], L)
fun filter(p, L) = fold((fn (x, xs) => if p(x) then x :: xs else xs), [], L)
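A quick check that these behave as expected (again assuming the definitions above; both expressions evaluate to true):

map ((fn x => 2 * x), [1, 2, 3]) = [2, 4, 6]
filter ((fn x => x > 1), [1, 2, 3]) = [2, 3]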

We’ll see that this is no accident: fold happens to have the “most general” universality of the three functions, but as a warm-up and a reminder we first investigate the universality of map and filter.

Free Monoids and Map

Map is the easiest function of the three to analyze, but to understand it we need to understand something about monoids. A monoid is simple to describe: it’s just a set $ M$ with an associative binary operation (denoted by multiplication) and an identity element for that operation.

A monoid homomorphism between two monoids $ M, N$ is a function $ f: M \to N$ such that $ f(xy) = f(x)f(y)$ and $ f$ sends the identity of $ M$ to the identity of $ N$. Here the multiplication on the left hand side of the equality is the operation from $ M$ and on the right it’s the one from $ N$ (for groups the identity condition comes for free, so this mirrors the definition of a group homomorphism). As is to be expected, monoids with monoid homomorphisms form a category.

We encounter simple monoids every day in programming which elucidate this definition. Strings with the operation of string concatenation form a monoid, and the empty string acts as an identity because concatenating a string to the empty string has no effect. Similarly, lists with list concatenation form a monoid, where the identity is the empty list. A nice example of a monoid homomorphism is the length function. We know it’s a homomorphism because the length of a concatenation of two lists is just the sum of the lengths of the two lists.
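Here is a minimal ML rendering of that example (the name len is chosen only to avoid colliding with the built-in length):

(* a monoid homomorphism from the list monoid to the monoid (int, +, 0):
   len (xs @ ys) = len xs + len ys, and the empty list maps to 0 *)
fun len nil = 0
  | len (_::tail) = 1 + len tail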

Integers also form a monoid, with addition as the operation and zero as the identity element. However, the list and string monoids have an extra special property that integers do not. For a number $ n$ you can always find $ -n$ so that $ n + (-n) = 0$ is the identity element. But for lists, the only way to concatenate two lists and get the empty list is if both of the lists were empty to begin with. A monoid with this property is called free, and to fully understand it we should make a definition which won’t seem to mean anything at first.

Definition: Let $ \mathbf{C}$ be a category. Given a set $ A$, the free object over $ A$, denoted $ F(A)$, is an object of $ \mathbf{C}$ which is universal with respect to set-maps $ A \to B$ for any object $ B$ in $ \mathbf{C}$.

As usual with a definition by universal property, we should elaborate as to exactly what’s going on. Let $ \mathbf{C}$ be a category whose objects are sets and let $ A$ be a set, possibly not in this category. We can construct a new category, $ \mathbf{C}_{A \to X}$, whose objects are set-maps

$ \displaystyle f: A \to X$

and whose morphisms are commutative diagrams of the form

[Commutative diagram: a triangle with $ f_1: A \to X_1$, $ f_2: A \to X_2$, and $ \varphi: X_1 \to X_2$]

where $ \varphi$ is a morphism in $ \mathbf{C}$. In other words, we simply require that $ \varphi(f_1(a)) = f_2(a)$ for every $ a \in A$.

By saying that the free monoid on $ A$ satisfies this universal property for the category of monoids, we really mean that it is initial in this category of set-maps and commutative diagrams. That is, there is a monoid $ F(A)$ and a set-map $ i_A: A \to F(A)$ so that for every monoid $ Y$ and set-map $ f: A \to Y$ there is a unique monoid homomorphism from $ i_A$ to $ f$. In other words, there is a unique monoid homomorphism $ \varphi$ making the following diagram commute:

[Commutative diagram: the universal property of the free object, with $ i_A: A \to F(A)$, $ f: A \to Y$, and the unique $ \varphi: F(A) \to Y$]

For example, if $ A$ is the set of all unicode characters, then $ F(A)$ is precisely the monoid of all strings on those characters. The map $ i_A$ is then the map taking a character $ a$ to the single-character string “$ a$”. More generally, if $ T$ is any type in the category of types and computable functions, then $ F(T)$ is the type of homogeneous lists whose elements have type $ T$.

We will now restrict our attention to lists and types, and we will denote the free (list) monoid on a type $ A$ as $ [A]$. The universal property of map is then easy to describe, it’s just a specific instance of this more general universal property for the free monoids. Specifically, we work in the sub-category of homogeneous list monoids (we only allow objects which are $ [T]$ for some type $ T$). The morphism $ i_A: A \to [A]$ just takes a value $ a$ of type $ A$ and spits out a single-item list $ [a]$. We might hope that for any mapping $ A \to [B]$, there is a unique monoid homomorphism $ [A] \to [B]$ making the following diagram commute.

[Commutative diagram: $ i_A: A \to [A]$, $ f: A \to [B]$, and the dashed $ \varphi: [A] \to [B]$]

And while this is true, the diagram lies because “map” does not achieve what $ \varphi$ does. The map $ f$ might send all of $ A$ to the empty list, and this would cause $ \varphi$ to be the trivial map. But if we decomposed $ f$ further to require it to send $ a$ to a single $ b$, such as

[Commutative diagram: the same triangle, but with $ f: A \to B$ followed by $ i_B: B \to [B]$]

then things work out nicely. In particular, specifying a function $ f: A \to B$ uniquely defines how a list homomorphism operates on lists. Specifically, the diagram forces the behavior for single-element lists: $ [a] \mapsto [f(a)]$. And the property of $ \varphi$ being a monoid homomorphism forces $ \varphi$ to preserve order.

Indeed, we’ve learned from this analysis that the structure of the list involved is crucial in forming the universal property. Map is far from the only computable function making the diagram commute, but it is clearly the only monoid homomorphism. As we’ll see, specifying the order of operations is more generally described by fold, and we’ll be able to restate the universal property of map without relying on monoid homomorphisms.

Filter

The filter function is a small step up in complexity from map, but its universal property is almost exactly the same. Again filter can be viewed through the lens of a universal property for list monoids, because filter, once we fix a predicate, is a monoid homomorphism of list monoids. We know this because filtering two lists by the same predicate and concatenating them is the same as concatenating them first and then filtering them. Indeed, the only difference here is that the diagram now looks like this

[Commutative diagram: the filter version, with the predicate $ p$, the map $ j_A$, and the induced homomorphism $ [A] \to [A]$]

where $ p$ is our predicate, and $ j_A$ is defined by $ (a, T) \mapsto [a]$ and $ (a,F) \mapsto []$. The composition $ j_A \circ p$ gives the map $ A \to [A]$ that we can extend uniquely to a monoid homomorphism $ [A] \to [A]$. We won’t say any more on it, except to mention that again maintaining the order of the list is better described via fold.

Fold, and its Universal Property

Fold is different from map and filter in a very concrete way. Even though map and filter do specify that the order of the list should be preserved, it’s not an important part of their definition: filter should filter, and map should map, but no interaction occurs between different entries during the computation. In a sense, we got lucky that having everything be monoid homomorphisms resulted in preserving order without our specific intention.

Because it doesn’t necessarily produce lists and the operation can be quite weird, fold cannot exist without an explicit description of the order of operations. Let’s recall the definition of fold, and stick to the “right associative” order of operations sometimes specified by calling the function “foldr.” We will use foldr and fold interchangeably.

fun foldr(_, v, nil) = v
  | foldr(f, v, (head::tail)) = f(head, foldr(f, v, tail))

Even more, we can’t talk about foldr in terms of monoids. Even after fixing $ f$ and $ v$, foldr need not produce a monoid homomorphism. So if we’re going to find a universal property of foldr, we’ll need a more general categorical picture.

One first try would be to view foldr as a morphism of sets

$ \displaystyle B \times \textup{Hom}(A \times B, B) \to \textup{Hom}([A], B)$

The mapping is just $ (b, f) \mapsto \textup{foldr } f \textup{ } b$, and this is just the code definition we gave above. One might hope that this mapping defines an isomorphism in the category of types and programs, but it’s easy to see that it does not. For example, let

$ A = \left \{ 1 \right \}, B = \left \{ 1,2 \right \}$

Then the left hand side of the mapping above is a set of size 8 (there are eight ways to combine a choice of element in $ B$ with a map $ A \times B \to B$). But the right hand side is clearly infinite. In fact, it’s uncountably infinite, though not all possible mappings are realizable in programs of a reasonable length (in fact, very few are). So the morphism can’t possibly be surjective and hence is not an isomorphism.

So what can we say about fold? The answer is a bit abstract, but it works out nicely.

Say we fix a type for our lists, $ A$. We can define a category $ \mathbf{C}_A$ which has as objects the following morphisms

[Diagram: an object of $ \mathbf{C}_A$, namely a morphism $ f: 1 \coprod (A \times B) \to B$]

By $ 1$ we mean the type with one value (null, if you wish), and $ f$ is a morphism from a coproduct (i.e. there are implicit parentheses around $ A \times B$). As we saw in our post on universal properties, a morphism from a coproduct is the same thing as a pair of functions which operates on each piece. Here one operates on $ 1$ and the other on $ A \times B$. Since morphisms from $ 1$ are specified by the image of the sole value $ f(1) = b$, we will often write such a function as $ b \amalg h$, where $ h: A \times B \to B$.
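To make the coproduct talk concrete, here is one possible ML modeling (my own choice of representation, not one the post commits to): use an option type for $ 1 \coprod (A \times B)$, so that a morphism $ b \amalg h$ becomes a function handling the two cases.

(* NONE plays the role of the single value of 1, and SOME (a, acc) is an
   element of A x B; copair (b, h) is the morphism written b amalg h above *)
fun copair (b, h) NONE = b
  | copair (b, h) (SOME (a, acc)) = h (a, acc)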

Now the morphisms in $ \mathbf{C}_A$ are the pairs of morphisms $ \varphi, \overline{\varphi}$ which make the following diagram commute:

[Commutative diagram: a morphism of $ \mathbf{C}_A$, given by the pair $ \varphi, \overline{\varphi}$]

Here we write $ \overline{\varphi}$ because it is determined by $ \varphi$ in a canonical way. It maps $ 1 \to 1$ in the only way one can, and it maps $ (a,b) \mapsto (a, \varphi(b))$. So we’re really specifying $ \varphi$ alone, but as we’ll see it’s necessary to include $ \overline{\varphi}$ as well; it will provide the “inductive step” in the definition of fold.

Now it turns out that fold satisfies the universal property of being initial in this category. Well, not quite. We’re saying first that the following object $ \mathscr{A}$ is initial,

[Diagram: the claimed initial object $ \mathscr{A}$, given by $ \textup{nil} \amalg \textup{cons}: 1 \coprod (A \times [A]) \to [A]$]

Where cons is the list constructor $ A \times [A] \to [A]$ which sends $ (a_0, [a_1, \dots, a_n]) \mapsto [a_0, a_1, \dots, a_n]$. By being initial, we mean that for any object $ \mathscr{B}$ given by the morphism $ b \amalg g: 1 \coprod A \times B \to B$, there is a unique morphism from $ \mathscr{A} \to \mathscr{B}$. The “fold” is precisely this unique morphism, which we denote by $ (\textup{fold g, b})$. We implicitly know its “barred” counterpart making the following diagram commute.

[Commutative diagram: the universal property of fold, with $ \mathscr{A}$ on the left, an arbitrary $ \mathscr{B}$ on the right, and dashed arrows for the unique morphisms between them]

This diagram has a lot going on in it, so let’s go ahead and recap. The left column represents an object $ \mathscr{A}$ we’re claiming is initial in this crazy category we’ve made. The right hand side is an arbitrary object $ \mathscr{B}$, which is equivalently the data of an element $ b \in B$ and a mapping $ g: A \times B \to B$. This is precisely the data needed to define fold. The dashed lines represent the unique morphisms making the diagram commute, whose existence and uniqueness is the defining property of what it means for an object to be initial in this category. Finally, we’re claiming that foldr, when we fix $ g$ and $ b$, makes this diagram commute, and is hence the very same unique morphism we seek.

To prove all of this, we need to first show that the object $ \mathscr{A}$ is initial. That is, that any two morphisms we pick from $ \mathscr{A} \to \mathscr{B}$ must be equal. The first thing to notice is that the two objects $ 1 \coprod A \times [A]$ and $ [A]$ are really the same thing. That is, $ \textup{nil} \amalg \textup{cons}$ has a two-sided inverse which makes it an isomorphism in the category of types. Specifically, the inverse is the map $ \textup{split}$ sending

$ \textup{nil} \mapsto \textup{nil}$
$ [a_1, \dots, a_n] \mapsto (a_1, [a_2, \dots, a_n])$
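Continuing the option-based modeling from earlier (again, my own sketch rather than anything in the post), split is a one-liner:

(* split undoes the nil/cons map: the empty list goes to the sole value of 1
   (here NONE), and a nonempty list splits into its head and tail *)
fun split nil = NONE
  | split (head::tail) = SOME (head, tail)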

So if we have a morphism $ \varphi, \overline{\varphi}$ from $ \mathscr{A} \to \mathscr{B}$, and the diagram commutes, we can see that $ \varphi = (b \amalg g) \circ \overline{\varphi} \circ \textup{split}$. We’re just going the long way around the diagram.

Supposing that we have two such morphisms $ \varphi_1, \varphi_2$, we can prove they are equal by induction on the length of the input list. It is trivial to see that they both must send the empty list to $ b$. Now suppose that for lists of length $ n-1$ the two are equal. Given a list $ [a_1, \dots, a_n]$ we see that

$ \displaystyle \varphi_1([a_1, \dots, a_n]) = \varphi_1 \circ \textup{cons} (a_1, [a_2, \dots, a_n]) = g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n])$

where the last equality holds by the commutativity of the diagram. Continuing,

$ \displaystyle g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n]) = g (a_1, \varphi_1([a_2, \dots, a_n])) = g(a_1, \varphi_2([a_2, \dots, a_n]))$

where the last equality holds by the inductive hypothesis. From here, we can reverse the equalities using $ \varphi_2$ and its “barred” version to get back to $ \varphi_2([a_1, \dots, a_n])$, proving the equality.

To show that fold actually makes the diagram commute is even simpler. In order to make the diagram commute we need to send the empty list to $ b$, and we need to inductively send $ [a_1, \dots, a_n]$ to $ g(a_1, (\textup{fold g b})([a_2, \dots, a_n]))$, but this is the very definition of foldr!

So we’ve established that fold has this universal property. It’s easy now to see how map and filter fit into the picture. For mapping types $ A$ to $ C$ via $ f$, just use the type $ [C]$ in place of $ B$ above, and have $ g(a, L) = \textup{cons}(f(a), L)$, and have $ b$ be the empty list. Filter similarly just needs a special definition for $ g$.

A skeptical reader might ask: what does all of this give us? It’s a good question, and one that shouldn’t be taken lightly because I don’t have an immediate answer. I do believe that with some extra work we could use universal properties to give a trivial proof of the third homomorphism theorem for lists, which says that any function expressible as both a foldr and a foldl can be expressed as a list homomorphism. The proof would involve formulating a universal property for foldl, which is very similar to the one for foldr, and attaching the diagrams in a clever way to give the universal property of a monoid homomorphism for lists. Caveat emptor: this author has not written such a proof, but it seems likely that it would work.

More generally, any time we can represent the requirements of a list computation by an object like $ \mathscr{B}$, we can represent the computation as a foldr. What good does this do us? Well, it might already be obvious when you can and can’t use fold. But in addition to proving when it’s possible to use fold, this new perspective generalizes very nicely to give us a characterization of arbitrary computations on compound data types. One might want to know when to perform fold-like operations on trees, or other sufficiently complicated beasts, and the universal property gives us a way to see when such a computation is possible.

That’s right, I said it: there’s more to the world than lists. Shun me if you must, but I will continue to dream of great things.

In an effort to make the egregiously long posts on this blog slightly shorter, we’ll postpone our generalization of the universal property of fold until next time. There we’ll define “initial algebras” and show how to characterize “fold-like” computations on any compound data type.

Until then!

Functoriality

Last time we worked through some basic examples of universal properties, specifically singling out quotients, products, and coproducts. There are many many more universal properties that we will mention as we encounter them, but there is one crucial topic in category theory that we have only hinted at: functoriality.

As we’ve repeatedly stressed, the meat of category theory is in the morphisms. One natural question one might ask is, what notion of morphism is there between categories themselves? Indeed, the most straightforward way to see category theoretic concepts in classical mathematics is in a clever choice of functor. For example (and this example isn’t necessary for the rest of the article) one can “associate” to each topological space a group, called the homology group, in such a way that continuous functions on topological spaces translate to group homomorphisms. Moreover, this translation is functorial in the following sense: the group homomorphism associated to a composition is the composition of the associated group homomorphisms. If we denote the association by a subscripted asterisk, then we get the following common formula.

$ \displaystyle (fg)_* = f_* g_*$

This is the crucial property that maintains the structure of morphisms. Again, this should reinforce the idea that the crucial ingredient of every definition in category theory is its effect on morphisms.

Functors: a Definition

In complete generality, a functor is a mapping between two categories which preserves the structure of morphisms. Formally,

Definition: Let $ \mathbf{C}, \mathbf{D}$ be categories. A functor $ \mathscr{F}$ consists of two parts:

  • For each object $ C \in \mathbf{C}$ an associated object $ \mathscr{F}(C) \in \mathbf{D}$.
  • For each morphism $ f \in \textup{Hom}_{\mathbf{C}}(A,B)$ a corresponding morphism $ \mathscr{F}(f) \in \textup{Hom}_{\mathbf{D}}(\mathscr{F}(A), \mathscr{F}(B))$. Specifically, for each $ A,B$ we have a set-function $ \textup{Hom}_{\mathbf{C}}(A,B) \to \textup{Hom}_{\mathbf{C}}(\mathscr{F}(A), \mathscr{F}(B))$.

There are two properties that a functor needs to satisfy to “preserve structure.” The first is that the identity morphisms are preserved for every object; that is, $ \mathscr{F}(1_A) = 1_{\mathscr{F}(A)}$ for every object $ A$. Second, composition must be preserved. That is, if $ f \in \textup{Hom}_{\mathbf{C}}(A,B)$ and $ g \in \textup{Hom}_{\mathbf{C}}(B,C)$, we have

$ \displaystyle \mathscr{F}(gf) = \mathscr{F}(g) \mathscr{F}(f)$

We often denote a functor as we would a function $ \mathscr{F}: \mathbf{C} \to \mathbf{D}$, and use the function application notation as if everything were with sets.

Let’s look at a few simple examples.

Let $ \mathbf{FiniteSet}$ be the poset category of finite sets with inclusions as morphisms, and let $ \mathbf{Int}$ be the category whose objects are integers and where there is a unique morphism $ x \to y$ whenever $ x \leq y$. Then the size function is a functor $ \mathbf{FiniteSet} \to \mathbf{Int}$. Continuing with $ \mathbf{Int}$, remember that the integers form a group under addition (also known as $ \mathbb{Z}$). Any group homomorphism $ \mathbb{Z} \to \mathbb{Z}$ that also preserves order (that is, multiplication by a fixed nonnegative integer) is a functor from $ \mathbf{Int} \to \mathbf{Int}$. A functor from a category to itself is often called an endofunctor.

There are many examples of functors from the category $ \mathbf{Top}$ of topological spaces to the category $ \mathbf{Grp}$ of groups. These include some examples we’ve seen on this blog, such as the fundamental group and homology groups.

One trivial example of a functor is called the forgetful functor. Let $ \mathbf{C}$ be a category whose objects are sets and whose morphisms are set-maps with additional structure, for example the category of groups. Define a functor $ \mathbf{C} \to \mathbf{Set}$ which acts as the identity on both sets and functions. This functor simply “forgets” the structure in $ \mathbf{C}$. In the realm of programs and types (this author is thinking Java), one can imagine this as a ‘type-cast’ from String to Object.  In the same vein, one could define an “identity” endofunctor $ \mathbf{C} \to \mathbf{C}$ which does absolutely nothing.

One interesting way to think of a functor is as a “realization” of one category inside another. In particular, because the composition structure of $ \mathbf{C}$ is preserved by a functor $ \mathbf{C} \to \mathbf{D}$, it must be the case that all commutative diagrams are “sent” to commutative diagrams. In addition, isomorphisms are sent to isomorphisms because if $ f, g$ are inverses of each other, $ 1_{\mathscr{F}(A)} = \mathscr{F}(1_A) = \mathscr{F}(gf) = \mathscr{F}(g)\mathscr{F}(f)$ and likewise for the reverse composition. And so if we have a functor $ \mathscr{F}$ from a poset category (say, the real numbers with the usual inequality) to some category $ \mathbf{C}$, then we can realize the structure of the poset sitting inside of $ \mathbf{C}$ (perhaps involving only some of the objects of $ \mathbf{C}$). This view comes in handy in a few places we’ll see later in our series on computational topology.

The Hom Functor

There is a very important and nontrivial example called the “hom functor” which is motivated by the category of vector spaces. We’ll stick to the concrete example of vector spaces, but the generalization to arbitrary categories is straightforward. If the reader knows absolutely nothing about vector spaces, replace “vector space” with “object” and “linear map” with “morphism.” It won’t quite be correct, but it will get the idea across.

To each vector space $ V$ one can define a dual vector space of functions $ V \to \mathbb{C}$ (or whatever the field of scalars for $ V$ is). Following the lead of hom sets, the dual vector space is denoted $ \textup{Hom}(V,\mathbb{C})$. Here the morphisms in the set are those from the category of vector spaces (that is, linear maps $ V \to \mathbb{C}$). Indeed, this is a vector space: one can add two functions pointwise ($ (f+g)(v) = f(v) + g(v)$) and scale them ($ (\lambda f)(v) = \lambda f(v)$), and the properties of a vector space are trivial to check.

Now the mapping $ \textup{Hom}(-, \mathbb{C})$ which takes $ V$ and produces $ \textup{Hom}(V, \mathbb{C})$ is a functor called the hom functor. But let’s inspect this one more closely. The source category is obviously the category of vector spaces, but what is the target category? The objects are clear: the hom sets $ \textup{Hom}(V,\mathbb{C})$ where $ V$ is a vector space. The morphisms of the category are particularly awkward. Officially, they are written as

$ \displaystyle \textup{Hom}(\textup{Hom}(V, \mathbb{C}), \textup{Hom}(W, \mathbb{C}))$

so a morphism in this category takes as input a linear map $ V \to \mathbb{C}$ and produces as output one $ W \to \mathbb{C}$. But what are the morphisms in words we can understand? And how can we compose them? Before reading on, think about what a morphism of morphisms should look like.

Okay, ready?

The morphisms in this category can be thought of as linear maps $ W \to V$. More specifically, given a morphism $ \varphi: V \to \mathbb{C}$ and a linear map $ g: W \to V$, we can construct a linear map $ W \to \mathbb{C}$ by composing $ \varphi g$.

And so if we apply the $ \textup{Hom}$ functor to a morphism $ f: W \to V$, we get a morphism in $ \textup{Hom}(\textup{Hom}(V, \mathbb{C}), \textup{Hom}(W, \mathbb{C}))$. Let’s denote the application of the hom functor using an asterisk so that $ f \mapsto f_*$.

But wait a minute! The mapping here is going in the wrong direction: we took a map in one category going from the $ W$ side to the $ V$ side, and after applying the functor we got a map going from the $ V$ side ($ \textup{Hom}(V, \mathbb{C})$) to the $ W$ side ($ \textup{Hom}(W, \mathbb{C})$). It seems there is no reasonable way to take a map $ W \to \mathbb{C}$ and get a map $ V \to \mathbb{C}$ using just $ f$, but the other way is obvious. The hom functor “goes backward” in a sense. In other words, if $ g: V \to U$ is another linear map, the composition property for our “functor” makes the composite $ (g f)_*$ the map taking $ \varphi$ to $ \varphi g f$. On the other hand, there is no way to compose $ g_* f_*$, as they operate on the wrong domains! It must be the other way around:

$ \displaystyle (gf)_* = f_* g_*$
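For the symbol-minded, unwinding the definitions (with $ f: W \to V$ and $ g: V \to U$ as above) shows that both sides send a functional $ \varphi: U \to \mathbb{C}$ to the same place:

$ \displaystyle (gf)_*(\varphi) = \varphi \circ g \circ f = f_*(\varphi \circ g) = f_*(g_*(\varphi))$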

We advise the reader to write down the commutative diagram and trace out the compositions to make sure everything works out. But this is a problem, because it makes the hom functor fail the most important requirement. In order to fix this reversal “problem,” we make the following definition:

Definition: A functor $ \mathscr{F} : \mathbf{C} \to \mathbf{D}$ is called covariant if it preserves the order of morphism composition, so that $ \mathscr{F}(gf) = \mathscr{F}(g) \mathscr{F}(f)$. If it reverses the order, we call it contravariant.

And so the hom functor on vector spaces is a contravariant functor, while all of the other functors we’ve defined in this post are covariant.

There is another way to describe a contravariant functor as a covariant functor which is often used. It involves the idea of an “opposite” category. For any category $ \mathbf{C}$ we can define the opposite category $ \mathbf{C}^{\textup{op}}$ to be a category with the same objects as $ \mathbf{C}$, but with all morphisms reversed. That is, we define

$ \displaystyle \textup{Hom}_{\mathbf{C}^{\textup{op}}}(A,B) = \textup{Hom}_{\mathbf{C}}(B,A)$

We leave it to the reader to verify that this is indeed a category. It is also not hard to see that $ (\mathbf{C}^{\textup{op}})^{\textup{op}} = \mathbf{C}$. Opposite categories give us a nice recharacterization of a contravariant functor. Indeed, because composition in opposite categories is reversed, a contravariant functor $ \mathbf{C} \to \mathbf{D}$ is just a covariant functor on the opposite category $ \mathbf{C}^{\textup{op}} \to \mathbf{D}$. Or equivalently, one $ \mathbf{C} \to \mathbf{D}^{\textup{op}}$. More than anything, opposite categories are syntactical sugar. Composition is only reversed artificially to make domains and codomains line up, but the actual composition is the same as in the original category.

Functors as Types

Before we move on to some code, let’s take a step back and look at the big picture (we’ve certainly plowed through enough details up to this point). The main thesis is that functoriality is a valuable property for an operation to have, but it’s not entirely clear why. Even the brightest of readers can only assume such properties are useful for mathematical analysis. It seems that the question we started this series out with, “what does category theory allow us to do that we couldn’t do before?” still has the answer, “nothing.” More relevantly, the question of what functoriality allows us to do is unclear. Indeed, once again the answer is “nothing.” Rather, functoriality in a computation allows one to analyze the behavior of a program. It gives the programmer a common abstraction in which to frame operations, and ease in proving the correctness of one’s algorithms.

In this light, the best we can do in implementing functors in programs is to give a type definition and examples. And in this author’s opinion this series is quickly becoming boring (all of the computational examples are relatively lame), so we will skip the examples in favor of the next post which will analyze more meaty programming constructs from a categorical viewpoint.

So recall the ML type definition of a category, a tuple of operations for source, target, identity, and composition:

datatype ('object, 'arrow)Category =
    category of ('arrow -> 'object) *
                ('arrow -> 'object) *
                ('object -> 'arrow) *
                ('arrow * 'arrow -> 'arrow)

And so a functor consists of the two categories involved (as types), and the mapping on objects, and the mapping on morphisms.

datatype ('cObject, 'cArrow, 'dObject, 'dArrow)Functor =
   aFunctor of ('cObject, 'cArrow)Category *
               ('cObject -> 'dObject) *
               ('cArrow -> 'dArrow) *
               ('dObject, 'dArrow)Category
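As a small worked instance of these definitions (my own example, which assumes the composition field takes its two arrows in “diagram order,” a convention the earlier post left open), here is the poset category $ \mathbf{Int}$ from before, with an arrow $ x \to y$ represented as the pair (x, y), together with the doubling endofunctor on it:

val intCategory : (int, int * int) Category =
  category ((fn (x, _) => x),                  (* source of an arrow *)
            (fn (_, y) => y),                  (* target of an arrow *)
            (fn x => (x, x)),                  (* identity arrow on x *)
            (fn ((x, _), (_, z)) => (x, z)))   (* compose x<=y with y<=z *)

(* the doubling endofunctor on Int: x <= y implies 2x <= 2y *)
val doubling : (int, int * int, int, int * int) Functor =
  aFunctor (intCategory,
            (fn x => 2 * x),
            (fn (x, y) => (2 * x, 2 * y)),
            intCategory)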

We encourage the reader who is uncomfortable with these type definitions to experiment with them by implementing some of our simpler examples (say, the size functor from sets to integers). Insofar as the basic definitions go, functors are not all that interesting. They become much more interesting when additional structure is imposed on them, and in the distant future we will see a glimpse of this in the form of adjointness. We hope to get around to analyzing statements like “syntax and semantics are adjoint functors.” For the next post in this series, we will take the three beloved functions of functional programming (map, foldl(r), and filter), and see what their categorical properties are.

Until then!

Universal Properties

Previously in this series we’ve seen the definition of a category and a bunch of examples, basic properties of morphisms, and a first look at how to represent categories as types in ML. In this post we’ll expand these ideas and introduce the notion of a universal property. We’ll see examples from mathematics and write some programs which simultaneously prove certain objects have universal properties and construct the morphisms involved.

A Grand Simple Thing

One might go so far as to call universal properties the most important concept in category theory. This should initially strike the reader as odd, because at first glance universal properties are so succinctly described that they don’t seem to be very interesting. In fact, there are only two universal properties and they are that of being initial and final.

Definition: An object $ A$ in a category $ \mathbf{C}$ is called initial if for every object $ B$ there is a unique morphism $ A \to B$. An object $ Z$ is called final if for every object $ B$ there is a unique morphism $ B \to Z$. If an object satisfies either of these properties, it is called universal. If an object satisfies both, it is called a zero object.

In both cases, the existence of a unique morphism is the same as saying the relevant Hom set is a singleton (i.e., for initial objects $ A$, the Hom set $ \textup{Hom}_{\mathbf{C}}(A,B)$ consists of a single element). There is one and only one morphism between the two objects. In the particular case of $ \textup{Hom}(A,A)$ when $ A$ is initial (or final), the definition of a category says there must be at least one morphism, the identity, and the universal property says there is no other.

There’s only one way such a simple definition could find fruitful applications, and that is by cleverly picking categories. Before we get to constructing interesting categories with useful universal objects, let’s recognize some universal objects in categories we already know.

In $ \mathbf{Set}$ the single element set is final, but not initial; there is only one set-function to a single-element set. It is important to note that the single-element set is far from unique. There are infinitely many (uncountably many!) singleton sets, but as we have already seen all one-element sets are isomorphic in $ \mathbf{Set}$ (they all have the same cardinality). On the other hand, the empty set is initial, since the “empty function” is the only set-mapping from the empty set to any set. Here the initial object truly is unique, and not just up to isomorphism.

It turns out universal objects are always unique up to isomorphism, when they exist. Here is the official statement.

Proposition: If $ A, A’$ are both initial in $ \mathbf{C}$, then $ A \cong A’$. Likewise, if $ Z, Z’$ are both final, then $ Z \cong Z’$.

Proof. Recall that a morphism $ f: A \to B$ is an isomorphism if it has a two-sided inverse, a $ g:B \to A$ so that $ gf = 1_A$ and $ fg=1_B$ are the identities. Now if $ A,A’$ are two initial objects there are unique morphisms $ f : A \to A’$ and $ g: A’ \to A$. Moreover, these compose to give a morphism $ gf: A \to A$. But since $ A$ is initial, the only morphism $ A \to A$ is the identity. The situation for $ fg : A’ \to A’$ is analogous, and so these morphisms are actually inverses of each other, and $ A, A’$ are isomorphic. The proof for final objects is identical. $ \square$

Let’s continue with examples. In the category of groups, the trivial group $ \left \{ 1 \right \}$ is both initial and final, because group homomorphisms must preserve the identity element. Hence the trivial group is a zero object. Again, “the” trivial group is not unique, but unique up to isomorphism.

In the category of types with computable (halting) functions as morphisms, the null type is final. To be honest, this depends on how we determine whether two computable functions are “equal.” In this case, we only care about the set of inputs and outputs, and for the null type all computable functions have the same output: null.

Partial order categories are examples of categories which need not have universal objects. If the partial order is constructed from subsets of a set $ X$, then the initial object is the empty set (by virtue of being a subset of every set), and $ X$ as a subset of itself is obviously final. But there are other partial orders, such as inequality of integers, which have no “smallest” or “largest” objects. Partial order categories which have particularly nice properties (such as initial and final objects, but not quite exactly) are closely related to the concept of a domain in denotational semantics, and the language of universal properties is relevant to that discussion as well.

The place where universal properties really shine is in defining new constructions. For instance, the direct product of sets is defined by the fact that it satisfies a universal property. Such constructions abound in category theory, and they work via the ‘diagram categories’ we defined in our introductory post. Let’s investigate them now.

Quotients

Let’s recall the classical definition from set theory of a quotient. We described special versions of quotients in the categories of groups and topological spaces, and we’ll see them all unified via the universal property of a quotient in a moment.

Definition: An equivalence relation on a set $ X$ is a subset of the set product $ \sim \subset X \times X$ which is reflexive, symmetric, and transitive. The quotient set $ X / \sim$ is the set of equivalence classes on $ X$. The canonical projection $ \pi : X \to X/\sim$ is the map sending $ x$ to its equivalence class under $ \sim$.

The quotient set $ X / \sim$ can also be described in terms of a special property: it is the “largest” set which agrees with the equivalence relation $ \sim$. On one hand, whenever $ a \sim b$ in $ X$ we have $ \pi(a) = \pi(b)$. Moreover, for any set $ Y$ and any map $ g: X \to Y$ which equates equivalent things ($ g(a) = g(b)$ for all $ a \sim b$), there is a unique map $ f : X/\sim \to Y$ such that $ f \pi = g$. This word salad is best translated into a diagram.

[Diagram: the universal property of the quotient.]

Here we use a dashed line to assert the existence of a morphism (once we’ve proven such a morphism exists, we use a solid line instead), and the symbol $ \exists !$ signifies existence ($ \exists$) and uniqueness (!).

We can prove this explicitly in the category $ \mathbf{Set}$. Indeed, if $ g$ is any map such that $ g(a) = g(b)$ for all equivalent $ a,b \in X$, then we can define $ f$ as follows: for any $ a \in X$ whose equivalence class in $ X / \sim$ is denoted by $ [a]$, define $ f([a]) = g(a)$. This map is well defined because if $ a \sim b$, then $ f([a]) = g(a) = g(b) = f([b])$. It is unique because if $ f \pi = g = h \pi$ for some other $ h: X / \sim \to Y$, then $ h([x]) = g(x) = f([x])$; this is the only possible definition.

Now the “official” way to state this universal property is as follows:

The quotient set $ X / \sim$ is universal with respect to the property of mapping $ X$ to a set so that equivalent elements have the same image.

But as we said earlier, there are only two kinds of universal properties: initial and final. Now this $ X / \sim$ looks suspiciously like an initial object ($ f$ is going from $ X / \sim$, after all), but what exactly is the category we’re considering? The trick to dissecting this sentence is to notice that this is not a statement about just $ X / \sim$, but of the morphism $ \pi$.

That is, we’re considering a diagram category. In more detail: fix an object $ X$ in $ \mathbf{Set}$ and an equivalence relation $ \sim$ on $ X$. We define a category $ \mathbf{Set}_{X,\sim}$ as follows. The objects in the category are morphisms $ f:X \to Y$ such that $ a \sim b$ in $ X$ implies $ f(a) = f(b)$ in $ Y$. The morphisms in the category are commutative diagrams

[Diagram: a morphism in this category, a commutative triangle relating $ f_1$, $ f_2$, and the connecting morphism $ \varphi$.]

Here $ f_1, f_2$ need to be such that they send equivalent things to equal things (or they wouldn’t be objects in the category!), and by the commutativity of the diagram $ f_2 = \varphi f_1$. Indeed, the statement about quotients is that $ \pi : X \to X / \sim$ is an initial object in this category. In fact, we have already proved it! But note the abuse of language in our offset statement above: it’s not really $ X / \sim$ that is the universal object, but $ \pi$. Moreover, the statement itself doesn’t tell us what category to inspect, nor whether we care about initial or final objects in that category. Unfortunately this abuse of language is widespread in the mathematical world, and for arguably good reason. Once one gets acquainted with these kinds of constructions, reading between the lines becomes much easier and it would be a waste of time to spell it out. After all, once we understand $ X / \sim$ there is no “obvious” choice for a map $ X \to X / \sim$ except for the projection $ \pi$. This is how $ \pi$ got its name, the canonical projection.

Two last bits of terminology: if $ \mathbf{C}$ is any category whose objects are sets (and hence, where equivalence relations make sense), we say that $ \mathbf{C}$ has quotients if for every object $ X$ there is a morphism $ \pi$ satisfying the universal property of a quotient. Another way to state the universal property is to say that all maps respecting the equivalence structure factor through the quotient, in the sense that we get a diagram like the one above.

What is the benefit of viewing $ X / \sim$ by its universal property? For one, the set $ X / \sim$ is unique up to isomorphism. That is, if any other pair $ (Z, g)$ satisfies the same property, we automatically get an isomorphism $ X / \sim \to Z$. For instance, if $ \sim$ is defined via a function $ f : X \to Y$ (that is, $ a \sim b$ if $ f(a) = f(b)$), then the pair $ (\textup{im}(f), f)$ satisfies the universal property of a quotient. This means that we can “decompose” any function into three pieces:

$ \displaystyle X \to X / \sim \to \textup{im}(f) \to Y$

The first map is the canonical projection, the second is the isomorphism given by the universal property of the quotient, and the last is the inclusion map into $ Y$. In a sense, all three of these maps are “canonical.” This isn’t so magical for set-maps, but the same statement (and essentially the same proof) holds for groups and topological spaces, and the results are revered as theorems. For groups, this is called The First Isomorphism Theorem, but it’s essentially the claim that the category of groups has quotients.
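As a sanity check, take the set-map $ f: \left \{ 1,2,3 \right \} \to \left \{ a,b,c \right \}$ with $ f(1) = f(2) = a$ and $ f(3) = b$. The induced relation identifies $ 1 \sim 2$, and the decomposition reads

$ \displaystyle \left \{ 1,2,3 \right \} \to \left \{ \left \{ 1,2 \right \}, \left \{ 3 \right \} \right \} \to \left \{ a,b \right \} \to \left \{ a,b,c \right \}$

where the middle isomorphism sends $ \left \{ 1,2 \right \} \mapsto a$ and $ \left \{ 3 \right \} \mapsto b$, and the last map is the inclusion.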

This is getting a bit abstract, so let’s see how the idea manifests itself as a program. In fact, it’s embarrassingly simple. Using our “simpler” ML definition of a category from last time, the constructive proof that quotient sets satisfy the universal property is simply a concrete version of the definition of $ f$ we gave above. In code,

(* pi : X -> X/~ and g : X -> Y share the domain X; SML forbids repeated
   pattern variables, so the second domain is matched with a wildcard. *)
fun inducedMapFromQuotient(setMap(x, pi, q), setMap(_, g, y)) =
   setMap(q, (fn a => g(representative(a))), y)

That is, once we have $ \pi$ and $ X / \sim$ defined for our given equivalence relation, this function accepts as input any morphism $ g$ and produces the uniquely defined $ f$ in the diagram above. Here the “representative” function just returns an arbitrary element of the given set, which we added to the abstract datatype for sets. If the set $ X$ is empty, then all functions involved will raise an “empty” exception upon being called, which is perfectly fine. We leave the functions which explicitly construct the quotient set given $ X, \sim$ as an exercise to the reader.
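For readers without the previous post handy, here is a minimal hypothetical sketch of the pieces this function relies on. The representation below (finite sets as lists, equivalence classes as lists of their members) is just one possible choice, and not necessarily the one used in that post.

(* Hypothetical sketch: a morphism of finite sets is a domain, a function,
   and a codomain, with finite sets represented as ML lists. *)
datatype ('a, 'b) morphism = setMap of 'a list * ('a -> 'b) * 'b list

(* Pick an arbitrary element of a (nonempty) equivalence class; hd raises
   the Empty exception on an empty list, matching the behavior described above. *)
fun representative(cls) = hd(cls)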

Products and Coproducts

Just as the concept of a quotient set or quotient group can be generalized to a universal property, so can the notion of a product. Again we take our intuition from $ \mathbf{Set}$. There the product of two sets $ X,Y$ is the set of ordered pairs

$ \displaystyle X \times Y = \left \{ (x,y) : x \in X, y \in Y \right \}$

But as with quotients, there’s much more going on and the key is in the morphisms. Specifically, there are two obvious choices for morphisms $ X \times Y \to X$ and $ X \times Y \to Y$. These are the two projections onto the components, namely $ \pi_1(x,y) = x$ and $ \pi_2(x,y) = y$. These projections are also called “canonical projections,” and they satisfy their own universal property.

The product of sets is universal with respect to the property of having two morphisms to its factors.

Indeed, this idea is so general that it can be formulated in any category, not just categories whose objects are sets. Let $ X,Y$ be two fixed objects in a category $ \mathbf{C}$. Should it exist, the product $ X \times Y$ is defined to be a final object in the following diagram category. This category has as objects pairs of morphisms

[Diagram: an object of this category, a pair of morphisms from a common object to $ X$ and $ Y$.]

and as morphisms it has commutative diagrams

[Diagram: a morphism in the product diagram category.]

In words, to say products are final is to say that for any object in this category, there is a unique map $ \varphi$ that factors through the product, so that $ \pi_1 \varphi = f$ and $ \pi_2 \varphi = g$. In a diagram, it is to claim the following commutes:

[Diagram: the universal property of the product.]

If the product $ X \times Y$ exists for any pair of objects, we declare that the category $ \mathbf{C}$ has products.

Indeed, many familiar product constructions exist in pure mathematics: sets, groups, topological spaces, vector spaces, and rings all have products. In fact, so does the category of ML types. Given two types 'a and 'b, we can form the (aptly named) product type 'a * 'b. The canonical projections exist because ML supports parametric polymorphism. They are

fun leftProjection(x,y) = x
fun rightProjection(x,y) = y

And to construct the unique morphism to the product,

fun inducedMapToProduct(f,g) = fn a => (f(a), g(a))

We leave the uniqueness proof to the reader as a brief exercise.
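For concreteness, a quick made-up usage of the induced map:

(* A toy example: the induced map int -> int * int built from two maps. *)
val toPair = inducedMapToProduct(fn n => n + 1, fn n => n * 2)
val example = toPair(3)   (* evaluates to (4, 6) *)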

There is a similar notion called a coproduct, denoted $ X \coprod Y$, in which everything is reversed: the arrows in the diagram category go to $ X \coprod Y$ and the object is initial in the diagram category. Explicitly, for a fixed $ X, Y$ the objects in the diagram category are again pairs of morphisms, but this time with arrows going to the central object

[Diagram: an object of this category, a pair of morphisms from $ X$ and $ Y$ into a common object.]

The morphisms are again commutative diagrams, but with the connecting morphism going away from the central object

[Diagram: a morphism in the coproduct diagram category.]

And a coproduct is defined to be an initial object in this category. That is, a pair of morphisms $ i_1, i_2$ such that there is a unique connecting morphism in the following diagram.

[Diagram: the universal property of the coproduct.]

Coproducts are far less intuitive than products in their concrete realizations, but the universal property is no more complicated.  For the category of sets, the coproduct is a disjoint union (in which shared elements of two sets $ X, Y$ are forcibly considered different), and the canonical morphisms are “inclusion” maps (hence the switch from $ \pi$ to $ i$ in the diagram above). Specifically, if we define the coproduct

$ \displaystyle X \coprod Y = (X \times \left \{ 1 \right \}) \cup (Y \times \left \{ 2 \right \})$

as the set of “tagged” elements (the right entry in a tuple signifies which piece of the coproduct the left entry came from), then the inclusion maps $ i_1(x) = (x,1)$ and $ i_2(y) = (y,2)$ are the canonical morphisms.
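For example, if $ X = \left \{ 1,2 \right \}$ and $ Y = \left \{ 2,3 \right \}$, the shared element $ 2$ shows up twice in the coproduct, once with each tag:

$ \displaystyle X \coprod Y = \left \{ (1,1), (2,1), (2,2), (3,2) \right \}$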

There are similar notions of disjoint unions for topological spaces and graphs, which are their categories’ respective coproducts. However, in most categories the coproducts are dramatically different from “unions.” In group theory, it is a somewhat complicated object known as the free product. We mentioned free products in our hasty discussion of the fundamental group, but understanding why and where free groups naturally occur is quite technical. Similarly, coproducts of vector spaces (or $ R$-modules) are more like a product, with the stipulation that at most finitely many of the entries of a tuple are nonzero (e.g., formal linear combinations of things from the pieces). Even without understanding these examples, the reader should begin to believe that relatively simple universal properties can yield very useful objects with potentially difficult constructions in specific categories. The ubiquity of the concepts across drastically different fields of mathematics is part of why universal properties are called “universal.”

Luckily, the category of ML types has a nice coproduct which feels like a union, but it is not supported as a “native” language feature like product types are. Specifically, given two types 'a, 'b we can define the “coproduct type”

datatype ('a, 'b)Coproduct = left of 'a | right of 'b

Let’s prove this is actually a coproduct: fix two types 'a and 'b, and let $ i_1, i_2$ be the functions

fun leftInclusion(x) = left(x)
fun rightInclusion(y) = right(y)

Then given any other pair of functions $ f,g$ which accept as input types 'a and 'b, respectively, there is a unique function $ \varphi$ operating on the coproduct type. We construct it as follows.

fun inducedCoproductMap(f,g) =
   let
      fun theMap(left(a)) = f(a)
        | theMap(right(b)) = g(b)
   in
      theMap
   end
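As with products, a tiny made-up usage example shows how the induced map behaves:

(* A toy example on the coproduct of int and string. *)
val describe = inducedCoproductMap(fn n => "int: " ^ Int.toString(n),
                                   fn s => "string: " ^ s)
val ex1 = describe(left(7))          (* "int: 7" *)
val ex2 = describe(right("hello"))   (* "string: hello" *)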

The uniqueness of this construction is self-evident. This author finds it fascinating that these definitions are so deep and profound (indeed, category theory is heralded as the queen of abstraction), but their realizations are trivially obvious to the working programmer. Perhaps this is a statement about how well-behaved the category of ML types is.

Continuing On

So far we have seen three relatively simple examples of universal properties, and explored how they are realized in some categories. We should note before closing two things. The first is that not every category has objects with these universal properties. Unfortunately the poset categories we saw above don’t serve as good counterexamples for this (they have both products and coproducts; what are they?), but it may be the case that in some categories only some pairs of objects have products or coproducts, while others do not.

Second, there are many more universal properties that we haven’t covered here. For instance, the notion of a limit is a universal property, as is the notion of a “free” object. There are kernels, pull-backs, equalizers, and many other ad-hoc universal properties without names. And for every universal property there is a corresponding “dual” property that results from reversing the arrows in every diagram, as we did with coproducts. We will visit the relevant ones as they come up in our explorations.

In the next few posts we’ll encounter functors and the concept of functoriality, and start asking some poignant questions about familiar programmatic constructions.

Until then!