This series on topology has been long and hard, but we’re quickly approaching the topics where we can actually write programs. For this and the next post on homology, the most important background we will need is a solid foundation in linear algebra, specifically in row-reducing matrices (and the interpretation of row-reduction as a change of basis of a linear operator).

Last time we engaged in a whirlwind tour of the fundamental group and homotopy theory. And we mean “whirlwind” as it sounds; it was all over the place in terms of organization. The most important fact that one should take away from that discussion is the idea that we can compute, algebraically, some qualitative features about a topological space related to “n-dimensional holes.” For one-dimensional things, a hole would look like a circle, and for two-dimensional things, it would look like a hollow sphere, etc. More importantly, we saw that this algebraic data, which we called the fundamental group, is a topological invariant. That is, if two topological spaces have different fundamental groups, then they are “fundamentally” different under the topological lens (they are not homeomorphic, and not even homotopy equivalent).

Unfortunately the main difficulty of homotopy theory (and part of what makes it so interesting) is that these “holes” interact with each other in elusive and convoluted ways, and the algebra reflects it almost too well. Part of the problem with the fundamental group is that it deftly eludes our domain of interest: computing them is complicated!

What we really need is a *coarser* invariant. If we can find a “stupider” invariant, it might just be simple enough to compute efficiently. Perhaps unsurprisingly, these will take the form of finitely-generated abelian groups (the most well-understood class of groups), with one for each dimension. Now we’re starting to see exactly why algebraic topology is so difficult; it has an immense list of prerequisite topics! If we’re willing to skip over some of the more nitty gritty details (and we must lest we take a *huge* diversion to discuss Tor and the exact sequences in the universal coefficient theorem), then we can also do the same calculations over a field. In other words, the algebraic objects we’ll define called “homology groups” are really *vector spaces*, and so row-reduction will be our basic computational tool to analyze them.

Once we have the basic theory down, we’ll see how we can write a program which accepts as input any topological space (represented in a particular form) and produces as output a list of the homology groups in every dimension. The dimensions of these vector spaces (their *ranks*, as finitely-generated abelian groups) are interpreted as the number of holes in the space for each dimension.

## Recall Simplicial Complexes

In our post on constructing topological spaces, we defined the standard $n$-simplex and the simplicial complex. We recall the latter definition here, and expand upon it.

**Definition:** A *simplicial complex* $X$ is a topological space realized as a union of any collection of simplices (of possibly varying dimension) which has the following two properties:

- Any face of a simplex in $X$ is also a simplex in $X$.
- The intersection of any two simplices of $X$ is also a simplex of $X$.

We can realize a simplicial complex by gluing together pieces of increasing dimension. First start by taking a collection of vertices (0-simplices) $X_0$. Then take a collection of intervals (1-simplices) $X_1$ and glue their endpoints onto the vertices in any way. Note that because we require every face of an interval to again be a simplex in our complex, we *must* glue each endpoint of an interval onto a vertex in $X_0$. Continue this process with $X_2$, a set of 2-simplices, where we must glue each edge of a 2-simplex precisely along an edge in $X_1$. We can continue this process until we reach a terminating set $X_n$. It is easy to see that the union of the $X_k$ forms a simplicial complex. Define the *dimension* of the complex to be $n$.
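If we identify each simplex with the sorted tuple of its vertices, the closure condition above is easy to check in code. This is just an illustrative sketch of the definition, not the representation we’ll ultimately use for computing homology:

```python
from itertools import combinations

def faces(simplex):
    """All nonempty proper faces of a simplex, given as a sorted vertex tuple."""
    return [f for k in range(1, len(simplex))
            for f in combinations(simplex, k)]

def is_simplicial_complex(simplices):
    """Check the closure condition: every face of every simplex is present.

    Since each simplex is identified with its vertex set, the condition that
    two simplices intersect in a single simplex is automatic in this encoding.
    """
    return all(f in simplices for s in simplices for f in faces(s))

# Two triangles glued along the edge (1, 2), with all faces included.
two_triangles = {(0,), (1,), (2,), (3,),
                 (0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
                 (0, 1, 2), (1, 2, 3)}
```

A lone 2-simplex without its edges and vertices, by contrast, fails the check.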

There are some picky restrictions on how we glue things that we should mention. For instance, we could not contract all edges of a 2-simplex and glue it all to a single vertex in $X_0$. The reason is that the result would no longer be a 2-simplex! Indeed, we’ve destroyed its original vertex set. The gluing process hence needs to preserve the original simplex’s boundary. Moreover, one property that follows from the two conditions above is that any simplex in the complex is uniquely determined by its vertices (for otherwise, the intersection of two such non-uniquely specified simplices would not be a single simplex).

We also have to remember that we’re imposing a specific ordering on the vertices of a simplex. In particular, if we label the vertices of an $n$-simplex $0, 1, \dots, n$, then this imposes an orientation on the edges, where an edge of the form $[i,j]$ is oriented from $i$ to $j$ if $i < j$, and from $j$ to $i$ otherwise. The faces, then, are “oriented” in increasing order of their three vertices. Higher-dimensional simplices are oriented in a similar way, though we rarely try to picture this (the theory of orientations is a question best posed for smooth manifolds; we won’t be going there any time soon). A 2-simplex, for example, has exactly two orientations: the vertex order $(0, 1, 2)$ or any even permutation of it gives one, and the opposite order $(0, 2, 1)$ (again up to even permutation) gives the other.

It is true, but a somewhat lengthy exercise, that the topology of a simplicial complex does not change under a consistent shuffling of the orientations across all its simplices. Nor does it change depending on how we realize a space as a simplicial complex. These kinds of results are crucial to the welfare of the theory, but they have been proved elsewhere and we won’t bother reproving them here.

As a larger example, here is a simplicial complex representing the torus. It’s quite a bit more complicated than our usual quotient of a square, but it’s based on the same idea. The left and right edges are glued together, as are the top and bottom, with appropriate orientations. The only difficulty is that we need each simplex to be uniquely determined by its vertices. While this construction does not use the smallest possible number of simplices to satisfy that condition, it is the simplest to think about.

Taking a known topological space (like the torus) and realizing it as a simplicial complex is known as *triangulating* the space. A space which can be realized as a simplicial complex is called *triangulable*.

The nicest thing about the simplex is that it has an easy-to-describe boundary. Geometrically, it’s obvious: the boundary of the line segment is the two endpoints; the boundary of the triangle is the union of all three of its edges; the tetrahedron has four triangular faces as its boundary; etc. But because we need an *algebraic* way to describe holes, we want an algebraic way to describe the boundary. In particular, we have two important criteria that any algebraic definition must satisfy to be reasonable:

- A boundary itself has no boundary.
- The property of being boundariless (at least in low dimensions) coincides with our intuitive idea of what it means to be a loop.

Of course, just as with homotopy, these holes interact in subtle ways (as we’re about to see), so we need to be totally concrete before we can proceed.

## The Chain Group and the Boundary Operator

In order to define an algebraic boundary, we have to realize simplices themselves as algebraic objects. This is not so difficult to do: just take all “formal sums” of simplices in the complex. More rigorously, let $X_k$ be the set of $k$-simplices in the simplicial complex $X$. Define the *chain group* $C_k(X)$ to be the $\mathbb{Q}$-vector space with $X_k$ for a basis. The elements of the $k$-th chain group are called *$k$-chains* on $X$. That’s right, if $\sigma, \sigma'$ are two $k$-simplices, then we just blindly define a bunch of new “chains” as all possible “sums” and scalar multiples of the simplices. For example, sums involving two elements would look like $a\sigma + b\sigma'$ for some $a, b \in \mathbb{Q}$. Indeed, we include any finite sum of such simplices, as is standard in taking the span of a set of basis vectors in linear algebra.
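Formal sums are straightforward to model in code: a chain is just a mapping from basis simplices to coefficients, and addition and scalar multiplication act coefficient-wise. A minimal sketch, using `Fraction` for exact rational coefficients:

```python
from fractions import Fraction

def add_chains(c1, c2):
    """Add two chains, each a dict mapping a simplex (vertex tuple) to a coefficient."""
    result = dict(c1)
    for simplex, coeff in c2.items():
        result[simplex] = result.get(simplex, 0) + coeff
    return {s: c for s, c in result.items() if c != 0}

def scale_chain(scalar, chain):
    """Multiply every coefficient of a chain by a scalar."""
    return {s: scalar * c for s, c in chain.items() if scalar * c != 0}

# The 1-chain 3*[0,1] - (1/2)*[1,2]
chain = add_chains(scale_chain(3, {(0, 1): Fraction(1)}),
                   scale_chain(Fraction(-1, 2), {(1, 2): Fraction(1)}))
```

Dropping zero coefficients keeps each chain in a canonical form, so two chains are equal as vectors exactly when the dicts are equal.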

Just for a quick example, take this very simple simplicial complex:

We’ve labeled all of the simplices above, and we can describe the chain groups quite easily. The zero-th chain group $C_0(X)$ is the $\mathbb{Q}$-linear span of the set of vertices $\{v_1, v_2, v_3, v_4\}$. Geometrically, we might think of “the union” of two points as being, e.g., the sum $v_1 + v_2$. And if we want to have two copies of $v_1$ and five copies of $v_3$, that might be thought of as $2v_1 + 5v_3$. Of course, there are geometrically meaningless sums like $-3v_1 + \frac{1}{2}v_2$, but it will turn out that the algebra we use to talk about *holes* will not falter because of it. It’s nice to have this geometric idea of what an algebraic expression can “mean,” but in light of this nonsense it’s not a good idea to get too wedded to the interpretations.

Likewise, $C_1(X)$ is the linear span of the set $\{e_1, e_2, e_3, e_4, e_5\}$ with coefficients in $\mathbb{Q}$. So we can talk about a “path” as a sum of simplices like $e_1 + e_4 - e_5 - e_3$. Here we use a negative coefficient to signify that we’re travelling “against” the orientation of an edge. Note that since the order of the terms is irrelevant, the same “path” is given by, e.g., $-e_5 + e_1 - e_3 + e_4$, which geometrically is ridiculous if we insist on reading the terms from left to right.

The same idea extends to higher-dimensional groups, but as usual the visualization grows difficult. For example, in $X$ above, the chain group $C_2(X)$ is the vector space spanned by $\{\sigma_1, \sigma_2\}$. But does it make sense to have a path of triangles? Perhaps, but the geometric analogies certainly become more tenuous as dimension grows. The benefit, however, is that if we come up with good algebraic definitions for the low-dimensional cases, the *algebra* is easy to generalize to higher dimensions.

So now we will define the boundary operator on chain groups, a linear map $\partial : C_k(X) \to C_{k-1}(X)$, by starting in lower dimensions and generalizing. A single vertex should always be boundariless, so $\partial(v) = 0$ for each vertex $v$. Extending linearly to the entire chain group, we see that $\partial$ is identically the zero map on zero-chains. For 1-simplices we have a more substantial definition: if a simplex has its orientation as $[v_1, v_2]$, then the boundary should be $\partial([v_1, v_2]) = v_2 - v_1$. That is, it’s the front end of the edge minus the back end. This defines the boundary operator on the basis elements, and we can again extend linearly to the entire group of 1-chains.

Why is this definition more sensible than, say, $v_1 + v_2$? Using our example above, let’s see how it operates on a “path.” If we have a sum like $e_1 + e_4 - e_5 - e_3$, then the boundary is computed linearly as

$\displaystyle \partial(e_1 + e_4 - e_5 - e_3) = \partial(e_1) + \partial(e_4) - \partial(e_5) - \partial(e_3)$

That is, the result is the endpoint of our path minus the starting point of our path. It is not hard to prove that this will work in general, since each successive edge in a path will cancel out the ending vertex of the edge before it and the starting vertex of the edge after it: the result is just one big alternating sum in which only the two endpoints survive.
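We can watch this telescoping happen concretely. Here is a small sketch of the boundary operator on 1-chains, with chains represented as dicts from oriented edges to coefficients (the vertex labels here are hypothetical, not tied to the figure):

```python
def boundary_1(chain):
    """Boundary of a 1-chain: each oriented edge [a, b] maps to b - a,
    the front end minus the back end, extended linearly."""
    result = {}
    for (a, b), coeff in chain.items():
        result[(b,)] = result.get((b,), 0) + coeff
        result[(a,)] = result.get((a,), 0) - coeff
    return {v: c for v, c in result.items() if c != 0}

# A path 0 -> 1 -> 2 -> 3: the interior vertices cancel in the alternating
# sum, leaving the endpoint minus the starting point.
path = {(0, 1): 1, (1, 2): 1, (2, 3): 1}

# A loop 0 -> 1 -> 2 -> 0 (traversing the edge [0, 2] backwards, hence the
# coefficient -1): the boundary vanishes entirely, so the loop is a cycle.
loop = {(0, 1): 1, (1, 2): 1, (0, 2): -1}
```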

Even more important is that if the “path” is a loop (the starting and ending points are the same in our naive way of writing paths), then the boundary is zero. Indeed, any time the boundary is zero one can rewrite the sum as a sum of “loops” (though one might have to trivially introduce cancelling factors). And so our condition for a chain to be a “loop,” which is just one step away from being a “hole,” is that it is in the kernel of the boundary operator. We have a special name for such chains: they are called *cycles*.

For 2-simplices, the definition is not much harder: if we have a simplex like $[v_0, v_1, v_2]$, then the boundary should be $\partial([v_0, v_1, v_2]) = [v_1, v_2] - [v_0, v_2] + [v_0, v_1]$. If one rewrites this in a different order, it becomes apparent that this is just a path traversing the boundary of the simplex with the appropriate orientations. We wrote it in this “backwards” way to lead into the general definition: the faces are ordered by which vertex does not occur in the face in question ($v_0$ omitted from the first, $v_1$ from the second, and $v_2$ from the third).

We are now ready to extend this definition to arbitrary simplices, but a nice-looking definition requires a bit more notation. Say we have a $k$-simplex which looks like $[v_0, v_1, \dots, v_k]$. Abstractly, we can write it using just the vertex indices, as $[0, 1, \dots, k]$. Moreover, we can denote the *removal* of a vertex from this list by putting a hat over the removed index. So $[0, 1, \dots, \hat{i}, \dots, k]$ represents the simplex which has all of the vertices from $0$ to $k$ excluding the vertex $i$. To represent a single-vertex simplex, we will often drop the square brackets, e.g. $3$ for $[3]$. This can make for some awkward-looking math, but is actually standard notation once the correct context has been established.

Now the boundary operator is defined on the standard $k$-simplex with orientation $[v_0, v_1, \dots, v_k]$ via the alternating sum

$\displaystyle \partial([v_0, v_1, \dots, v_k]) = \sum_{i=0}^{k} (-1)^i [v_0, \dots, \hat{v_i}, \dots, v_k]$

It is trivial (but perhaps notationally hard to parse) to see that this coincides with our low-dimensional examples above. But now that we’ve defined it for the basis elements of a chain group, we automatically get a linear operator on the entire chain group by extending linearly on chains.
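The alternating sum translates directly into code. Here is a sketch of the boundary operator on a single oriented simplex, returning the resulting chain as a dict from faces to signs:

```python
def boundary(simplex):
    """Boundary of an oriented k-simplex, given as a tuple of vertices:
    the alternating sum of its faces, where the i-th face omits vertex i."""
    return {simplex[:i] + simplex[i + 1:]: (-1) ** i
            for i in range(len(simplex))}

# The 2-simplex [0, 1, 2] has boundary [1,2] - [0,2] + [0,1], and an edge
# [0, 1] has boundary [1] - [0], matching the low-dimensional formulas above.
```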

**Definition:** The *$k$-cycles* on $X$ are those chains in the kernel of $\partial$. We will call $k$-cycles *boundariless*. The *$k$-boundaries* on $X$ are the chains in the image of $\partial$.

We should note that we are making a serious abuse of notation here, since technically $\partial$ is defined on only a single chain group. What we should do is define $\partial_k : C_k(X) \to C_{k-1}(X)$ for a fixed dimension $k$, and always put the subscript. In practice this is only done when it is crucial to be clear which dimension is being talked about, and otherwise the dimension is always inferred from the context. If we want to compose the boundary operator in one dimension with the boundary operator in another dimension (say, $\partial_{k-1}$ after $\partial_k$), it is usually written $\partial_{k-1} \partial_k$. This author personally supports the abolition of the subscripts for the boundary map, because subscripts are a nuisance in algebraic topology.

All of that notation discussion is so we can make the following observation: $\partial \partial = 0$. That is, every chain which is the boundary of a higher-dimensional chain is boundariless! This should make sense in low dimension: if we take the boundary of a 2-simplex, we get a cycle of three 1-simplices, and the boundary of this chain is zero. Indeed, we can formally prove it from the definition for general simplices (and extend linearly to achieve the result for all chains) by writing out $\partial(\partial([v_0, \dots, v_k]))$. With a keen eye, the reader will notice that the terms cancel out in pairs and we get zero. The reason is entirely in which coefficients are negative: the second time we apply the boundary operator the power on $(-1)$ shifts by one index, so each face obtained by removing two vertices appears twice with opposite signs. We will leave the full details as an exercise to the reader.
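It’s a good sanity check to verify the cancellation computationally. This sketch extends the alternating-sum boundary linearly to chains and checks that applying it twice annihilates a solid tetrahedron:

```python
def boundary(simplex):
    """Alternating sum of faces: the i-th face omits vertex i."""
    return {simplex[:i] + simplex[i + 1:]: (-1) ** i
            for i in range(len(simplex))}

def boundary_chain(chain):
    """Extend the boundary operator linearly to a chain {simplex: coefficient}."""
    result = {}
    for simplex, coeff in chain.items():
        for face, sign in boundary(simplex).items():
            result[face] = result.get(face, 0) + coeff * sign
    return {f: c for f, c in result.items() if c != 0}

# The boundary of a 3-simplex is its four oriented triangular faces, and the
# boundary of that 2-chain is the zero chain: a boundary has no boundary.
tetrahedron = {(0, 1, 2, 3): 1}
sphere = boundary_chain(tetrahedron)
```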

So this fits our two criteria: low-dimensional examples make sense, and boundariless things (cycles) represent loops.

## Recasting in Algebraic Terms, and the Homology Group

For the moment let’s give boundary operators subscripts, $\partial_k : C_k(X) \to C_{k-1}(X)$. If we recast things in algebraic terms, we can call the group of $k$-cycles $Z_k(X) = \ker \partial_k$, and this will be a subspace (and a subgroup) of $C_k(X)$ since kernels are always linear subspaces. Moreover, the set of *$k$-boundaries* $B_k(X) = \textup{im}\, \partial_{k+1}$ is a subspace (subgroup) of $Z_k(X)$. As we just saw, every boundary is itself boundariless, so $B_k(X)$ is a sub*set* of $Z_k(X)$, and since the image of a linear map is always a linear subspace of the range, we get that it is a subspace too.

All of this data is usually expressed in one big diagram: each of the chain groups is organized in order of decreasing dimension, and the boundary maps connect them:

$\displaystyle \cdots \xrightarrow{\partial_{k+1}} C_k(X) \xrightarrow{\partial_k} C_{k-1}(X) \xrightarrow{\partial_{k-1}} \cdots \xrightarrow{\partial_2} C_1(X) \xrightarrow{\partial_1} C_0(X) \xrightarrow{\partial_0} 0$

Since our example (the “simple space” of two triangles from the previous section) only has simplices in dimensions zero, one, and two, we additionally extend the sequence of groups to an infinite sequence by adding trivial groups and zero maps on both ends. The condition that $\partial_k \partial_{k+1} = 0$, which is equivalent to $\textup{im}\, \partial_{k+1} \subset \ker \partial_k$, is what makes this sequence a *chain complex*. As a side note, any sequence of abelian groups and group homomorphisms which satisfies this boundary condition is called an algebraic chain complex. This foreshadows that there are many different types of homology theory, and they are unified by these kinds of algebraic conditions.

Now, geometrically we want to say, “The holes are all those cycles (loops) which don’t arise as the boundaries of higher-dimensional things.” In algebraic terms, this would correspond to a *quotient* space (really, a quotient group, which we covered in our first primer on groups) of the k-cycles by the k-boundaries. That is, a cycle would be considered a “trivial hole” if it is a boundary, and two “different” cycles would be considered the same hole if their difference is a k-boundary. This is the spirit of homology, and formally, we define the homology group (vector space) as follows.

**Definition:** The $k$-th *homology group* of a simplicial complex $X$, denoted $H_k(X)$, is the quotient vector space $H_k(X) = Z_k(X) / B_k(X) = \ker \partial_k / \textup{im}\, \partial_{k+1}$. Two elements of a homology group which are equivalent (their difference is a boundary) are called *homologous*.

The number of $k$-dimensional holes in $X$ is thus realized as the dimension of $H_k(X)$ as a vector space.

The quotient mechanism really is doing all of the work for us here. Any time we have two cycles and we’re wondering whether they represent truly *different* holes in the space (perhaps we have a closed loop of edges, and another which is slightly longer but does not quite use the same edges), we can determine this by taking their difference and seeing if it bounds a higher-dimensional chain. If it does, then the two cycles represent the same hole, and if it doesn’t then the two cycles carry intrinsically different topological information.

For particular dimensions, there are some neat facts (which obviously require further proof) that make this definition more concrete.

- The dimension of $H_0(X)$ is the number of connected components of $X$. Therefore, computing homology generalizes the graph-theoretic methods of computing connected components.
- $H_1(X)$ is the abelianization of the fundamental group $\pi_1(X)$. Roughly speaking, $H_1(X)$ is the closest approximation of $\pi_1(X)$ by a vector space.

Now that we’ve defined the homology group, let’s end this post by computing all the homology groups for this example space:

This is a sphere (which can be triangulated as the boundary of a tetrahedron) with an extra “arm.” Note how the arm needs an extra vertex: a simplex must be uniquely determined by its vertices, so an edge directly between the two attaching vertices would coincide with the edge already on the tetrahedron. This space is a nice example because it has one-dimensional homology in dimension zero (one connected component), dimension one (the arm is like a copy of the circle), and dimension two (the hollow sphere part). Let’s verify this algebraically.

Let’s start by labelling the vertices of the tetrahedron 0, 1, 2, 3, so that the extra arm attaches at 0 and 2, and call the extra vertex on the arm 4. Ordering the vertices of each simplex in increasing order then completely determines the orientations for the entire complex.

Indeed, the chain groups are easy to write down:

$C_0(X) = \textup{span}\{0, 1, 2, 3, 4\}$

$C_1(X) = \textup{span}\{[0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]\}$

$C_2(X) = \textup{span}\{[0,1,2], [0,1,3], [0,2,3], [1,2,3]\}$

We can easily write down the images of each $\partial_k$: they’re just the span of the images of each basis element under $\partial_k$.

The zero-th homology $H_0(X)$ is the kernel of $\partial_0$ modulo the image of $\partial_1$. Using angle brackets as a shorthand for “span,” this is

$\displaystyle H_0(X) = \frac{\langle 0, 1, 2, 3, 4 \rangle}{\langle 1-0,\ 2-0,\ 3-0,\ 4-0,\ 2-1,\ 3-1,\ 3-2,\ 4-2 \rangle}$

Since $\partial_0$ is actually the zero map, $\ker \partial_0 = C_0(X)$ and all five vertices generate the kernel. The quotient construction imposes that two vertices (two elements of the homology group) are considered equivalent if their difference is a boundary. It is easy to see (indeed, just by the first four generators of the image) that all vertices are equivalent to $0$, so there is a unique generator of homology, and the vector space is isomorphic to $\mathbb{Q}$. There is exactly one connected component. Geometrically we can realize this, because two vertices are homologous if and only if there is a “path” of edges from one vertex to the other; such a 1-chain has as its boundary the difference of the two vertices.

We can compute the first homology in an analogous way: compute the kernel and image separately, and then compute the quotient.

$\displaystyle \ker \partial_1 = \langle [0,1] + [1,2] - [0,2],\ [0,1] + [1,3] - [0,3],\ [0,2] + [2,3] - [0,3],\ [1,2] + [2,3] - [1,3],\ [0,4] - [2,4] - [0,2] \rangle$

It takes a bit of combinatorial analysis to show that this is precisely the kernel of $\partial_1$, and we will have a better method for it in the next post, but indeed this is it. As the image of $\partial_2$ is precisely the span of the first four generators, the quotient is just the one-dimensional vector space spanned by $[0,4] - [2,4] - [0,2]$. Hence $H_1(X) = \mathbb{Q}$, and there is one one-dimensional hole.

Since there are no 3-simplices, the second homology group $H_2(X)$ is simply the kernel of $\partial_2$, which is not hard to see is generated by the single chain representing the “sphere” part of the space: $[0,1,2] - [0,1,3] + [0,2,3] - [1,2,3]$. The second homology group is thus again $\mathbb{Q}$, and there is one two-dimensional hole in $X$.
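We can double-check all three computations numerically. The following sketch builds the boundary matrices of the sphere-with-arm complex from its simplex lists and computes the dimensions of the homology groups via matrix ranks (a preview of the next post’s approach, but using numpy’s floating-point rank rather than honest row-reduction over $\mathbb{Q}$):

```python
import numpy as np

def boundary_matrix(k_simplices, faces):
    """Matrix of d_k: columns indexed by k-simplices, rows by (k-1)-simplices,
    with entries given by the signs in the alternating-sum boundary formula."""
    index = {f: i for i, f in enumerate(faces)}
    D = np.zeros((len(faces), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for omit in range(len(s)):
            D[index[s[:omit] + s[omit + 1:]], j] = (-1) ** omit
    return D

# The sphere-with-arm: the boundary of a tetrahedron on vertices 0, 1, 2, 3,
# plus an arm through the extra vertex 4, attached at vertices 0 and 2.
vertices = [(v,) for v in range(5)]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (2, 3), (2, 4)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

d1 = boundary_matrix(edges, vertices)
d2 = boundary_matrix(triangles, edges)

rank = np.linalg.matrix_rank
b0 = len(vertices) - rank(d1)            # d_0 = 0, so ker d_0 is all of C_0
b1 = (len(edges) - rank(d1)) - rank(d2)  # dim ker d_1 minus dim im d_2
b2 = len(triangles) - rank(d2)           # no 3-simplices, so im d_3 = 0

# (b0, b1, b2) == (1, 1, 1): one component, one circle, one hollow sphere
```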

So there we have it!

## Looking Forward

Next time, we will give a more explicit algorithm for computing homology for finite simplicial complexes, and it will essentially be a variation of row-reduction which simultaneously rewrites the matrix representations of the boundary operators $\partial_{k+1}, \partial_k$ with respect to a canonical basis. This will allow us to simply count the nonzero entries on the diagonals of the two matrices: the dimension of the quotient space, and hence the number of holes, is the dimension of $C_k(X)$ minus the number of nonzero diagonal entries of the matrix for $\partial_k$, minus the number of nonzero diagonal entries of the matrix for $\partial_{k+1}$.

Until then!
