In our last post in this series on topology, we defined the homology group. Specifically, we built up a topological space as a simplicial complex (a mess of triangles glued together), we defined an algebraic way to represent collections of simplices called chains as vectors in a vector space, we defined the boundary homomorphism $\partial_k$ as a linear map on chains, and finally defined the homology groups as the quotient vector spaces $H_k(X) = \ker \partial_k / \operatorname{im} \partial_{k+1}$.
The number of holes in $X$ was just the dimension of this quotient space.
In this post we will be quite a bit more explicit. Because the chain groups are vector spaces and the boundary mappings are linear maps, they can be represented as matrices whose dimensions depend on our simplicial complex structure. Better yet, if we have explicit representations of our chains by way of a basis, then we can use row-reduction techniques to write the matrix in a standard form.
Of course the problem arises when we want to work with two matrices simultaneously (to compute the kernel-mod-image quotient above). This is not computationally any more difficult, but it requires some theoretical fiddling. We will need to dip a bit deeper into our linear algebra toolboxes to see how it works, so the rusty reader should brush up on their linear algebra before continuing (or at least take some time to sort things out if or when confusion strikes).
Two Big Matrices
Recall our example simplicial complex from last time.
We will compute $H_1(X)$ of this complex (which we saw last time was $\mathbb{Q}$) in a more algorithmic way than we did last time.
Once again, we label the vertices 0-4 so that the extra “arm” has vertex 4 in the middle, and its two endpoints are 0 and 2. This gave us orientations on all of the simplices, and the following chain groups. Since the vertex labels (and ordering) are part of the data of a simplicial complex, we have made no choices in writing these down.
Now given our known definition of $\partial_k$ as an alternating sum from last time, we can give a complete specification of the boundary maps as matrices. For $\partial_1$, with the columns ordered by the edges $[0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]$ and the rows by the vertices $0, 1, 2, 3, 4$, this would be

$$\partial_1 = \begin{pmatrix} -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -1 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & -1 & -1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}$$

where the row labels are the basis for $C_0(X)$ and the column labels are the basis for $C_1(X)$. Similarly, with the columns ordered by the faces $[0,1,2], [0,1,3], [0,2,3], [1,2,3]$ and the rows by the edges above, $\partial_2$ is

$$\partial_2 = \begin{pmatrix} 1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
The reader is encouraged to check that these matrices are written correctly by referring to the formula for $\partial$ as given last time.
Remember the crucial property of $\partial$: that $\partial^2 = 0$. Indeed, the composition of the two boundary maps corresponds to the matrix product $\partial_1 \partial_2$, and one can verify by hand that the above two matrices multiply to the zero matrix.
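One can also let the computer do this verification. Here are the two matrices as numpy arrays; the particular row and column orderings are our choice of how to lay out the bases (vertices 0 through 4; edges $[0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]$; faces $[0,1,2], [0,1,3], [0,2,3], [1,2,3]$), and any consistent choice works:

```python
import numpy

# bd1: rows are the vertices 0..4, columns are the edges
# [0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]
bd1 = numpy.array([
    [-1, -1, -1, -1,  0,  0,  0,  0],
    [ 1,  0,  0,  0, -1, -1,  0,  0],
    [ 0,  1,  0,  0,  1,  0, -1, -1],
    [ 0,  0,  1,  0,  0,  1,  1,  0],
    [ 0,  0,  0,  1,  0,  0,  0,  1],
], dtype=float)

# bd2: rows are the edges above, columns are the faces
# [0,1,2], [0,1,3], [0,2,3], [1,2,3]
bd2 = numpy.array([
    [ 1,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1, -1,  0],
    [ 0,  0,  0,  0],
    [ 1,  0,  0,  1],
    [ 0,  1,  0, -1],
    [ 0,  0,  1,  1],
    [ 0,  0,  0,  0],
], dtype=float)

# the composition of the two boundary maps is the zero map
print(numpy.count_nonzero(bd1.dot(bd2)))  # prints 0
```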
We know from basic linear algebra how to compute the kernel of a linear map expressed as a matrix: column reduce and inspect the columns of zeros. Since the process of column reducing is really a change of basis, we can encapsulate the reduction inside a single invertible matrix $A$ which, when right-multiplied against $\partial_1$, gives us the reduced form of the latter. So write the reduced form of $\partial_1$ as $\partial_1 A$.
However, now we're using two different sets of bases for the shared vector space $C_1(X)$ involved in $\partial_1$ and $\partial_2$. In general, it will no longer be the case that $(\partial_1 A)\partial_2 = 0$. The way to alleviate this is to perform the "corresponding" change of basis in $\partial_2$. To make this idea more transparent, we return to the basics.
Changing Two Matrices Simultaneously
Recall that a matrix $M$ represents a linear map $f: V \to W$ between two vector spaces. The actual entries of $M$ depend crucially on the choice of a basis for the domain and codomain. Indeed, if $v_1, \dots, v_n$ form a basis for $V$ and $w_1, \dots, w_m$ form a basis for $W$, then the $i$-th column of the matrix representation is defined to be the coefficients of the representation of $f(v_i)$ in terms of the $w_j$. We hope to have nailed this concept down firmly in our first linear algebra primer.
Recall further that row operations correspond to changing a basis for the codomain, and column operations correspond to changing a basis for the domain. For example, swapping two columns $i, j$ in the matrix for $f$ gives a new matrix which is the representation of $f$ with respect to the (ordered) basis for $V$ which swaps the order of $v_i, v_j$. Similar things happen for all column operations (they all correspond to manipulations of the basis for $V$), while analogously row operations implicitly transform the basis for the codomain $W$. Note, though, that the connection between row operations and transformations of the basis for the codomain is slightly more complicated than it is for column operations. We will explicitly see how it works later in the post.
And so if we're working with two composable maps $A: U \to V$ and $B: V \to W$, and we change a basis for $V$ in $B$ via column reductions, then in order to be consistent, we need to change the basis for $V$ in $A$ via "complementary" row reductions. That is, if we call the change of basis matrix $Q$, then we're implicitly sticking $Q$ in between the composition $BA$ to get $(BQ)A$. This is not the same map as $BA$, but we can make it the same map by adding a $Q^{-1}$ in the right place: $(BQ)(Q^{-1}A) = B(QQ^{-1})A = BA$.
Indeed, whenever $Q$ is a change of basis matrix so is $Q^{-1}$ (trivially), and moreover the operations that $Q$ performs on the columns of $B$ correspond precisely to the operations that $Q^{-1}$ performs on the rows of $A$ (this is because elementary matrices take different forms when multiplied on the left or on the right).
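To see the bookkeeping concretely, here is a toy check with made-up matrices (not our boundary maps): an elementary column operation on the left factor, packaged as a matrix $Q$, is undone by $Q^{-1}$ acting on the rows of the right factor, so the composition is unchanged.

```python
import numpy

# two arbitrary composable maps (made-up numbers, just for illustration)
M = numpy.array([[1., 2., 0.], [0., 1., 3.]])    # plays the role of d_1
N = numpy.array([[1., 0.], [2., 1.], [0., 4.]])  # plays the role of d_2

# elementary column operation on M: add 5 * (column 0) to column 2.
# as a matrix, this is right-multiplication by Q = I + 5 * E_{0,2}
Q = numpy.eye(3)
Q[0, 2] = 5.0

Qinv = numpy.linalg.inv(Q)  # the inverse just subtracts instead of adding

# the composition is unchanged: (M Q)(Q^{-1} N) = M N
lhs = (M.dot(Q)).dot(Qinv.dot(N))
print(numpy.allclose(lhs, M.dot(N)))  # prints True
```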
Coming back to our boundary operators, we want a canonical way to view the image of $\partial_2$ as sitting inside the kernel of $\partial_1$. If we go ahead and use column reductions to transform $\partial_1$ into a form where the kernel is easy to read off (as the columns consisting entirely of zeros), then the corresponding row operations, when performed on $\partial_2$, will tell us exactly the image of $\partial_2$ inside the kernel of $\partial_1$.
This last point is true precisely because $\partial_1 \partial_2 = 0$. This fact guarantees that the irrelevant rows of the reduced version of $\partial_2$ are all zero.
Let's go ahead and see this in action on our two big matrices above. Column reducing $\partial_1$ produces an invertible change of basis matrix $A$ and the reduced form $\partial_1 A$, whose nonzero pivot columns sit to the left and whose zero columns exhibit the kernel. The inverse $A^{-1}$ is then the corresponding basis change for $\partial_2$, and the corresponding reduced form of $\partial_2$ is $A^{-1} \partial_2$.
As a side note, we got these matrices by slightly modifying the code from our original post on row reduction to output the change of basis matrix in addition to performing row reduction. It turns out one can implement column reduction as row reduction of the transpose, and the change of basis matrix you get from this process will be the transpose of the change of basis matrix you want (by $(AB)^{\mathsf{T}} = B^{\mathsf{T}} A^{\mathsf{T}}$). Though the code is particularly ad hoc, we include it with the rest of the code used in this post on this blog's Github page.
Now let's inspect the two matrices $\partial_1 A$ and $A^{-1} \partial_2$ more closely. The former has four "pivots" left over, and this corresponds to the rank of the matrix being 4. Moreover, the four basis vectors representing the columns with nonzero pivots, which we'll call $v_1, v_2, v_3, v_4$ (we don't care what their values are), span a complementary subspace to the kernel of $\partial_1$. Hence, the remaining four vectors (which we'll call $v_5, v_6, v_7, v_8$) span the kernel. In particular, this says that the kernel has dimension 4.
On the other hand, we performed the same transformation of the basis of $C_1(X)$ for $\partial_2$. Looking at the matrix that resulted, we see that the first four rows and the last row (representing $v_1, \dots, v_4$ and $v_8$) are entirely zeros, and so the image of $\partial_2$ intersects their span trivially. The remaining three rows (representing $v_5, v_6, v_7$) have nonzero pivots. This tells us exactly that the image of $\partial_2$ is spanned by $v_5, v_6, v_7$.
And now, the coup de grâce, the quotient to get homology is simply

$$H_1(X) = \langle v_5, v_6, v_7, v_8 \rangle / \langle v_5, v_6, v_7 \rangle \cong \langle v_8 \rangle \cong \mathbb{Q}$$

And the dimension of the homology group is 1, as desired.
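We can double-check this dimension count with plain rank computations, by rank-nullity: $\dim \ker \partial_1 = \dim C_1 - \operatorname{rank} \partial_1$ and $\dim \operatorname{im} \partial_2 = \operatorname{rank} \partial_2$. The matrices below are the boundary matrices of our example, with vertices ordered 0 through 4, edges ordered $[0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]$, and faces ordered $[0,1,2], [0,1,3], [0,2,3], [1,2,3]$ (any consistent ordering would do):

```python
import numpy

bd1 = numpy.array([
    [-1, -1, -1, -1,  0,  0,  0,  0],
    [ 1,  0,  0,  0, -1, -1,  0,  0],
    [ 0,  1,  0,  0,  1,  0, -1, -1],
    [ 0,  0,  1,  0,  0,  1,  1,  0],
    [ 0,  0,  0,  1,  0,  0,  0,  1],
])
bd2 = numpy.array([
    [ 1,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1, -1,  0],
    [ 0,  0,  0,  0],
    [ 1,  0,  0,  1],
    [ 0,  1,  0, -1],
    [ 0,  0,  1,  1],
    [ 0,  0,  0,  0],
])

# rank-nullity: dim ker = (number of columns) - rank
kernel_dim = bd1.shape[1] - numpy.linalg.matrix_rank(bd1)  # 8 - 4 = 4
image_dim = numpy.linalg.matrix_rank(bd2)                  # 3

print(kernel_dim - image_dim)  # prints 1
```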
The General Algorithm
It is no coincidence that things worked out as nicely as they did. The process we took of simultaneously rewriting two matrices with respect to a common basis is the bulk of the algorithm to compute homology. Since we're really only interested in the dimensions of the homology groups, we just need to count pivots. If the number of pivots arising in $\partial_k$ is $y$ and the number of pivots arising in $\partial_{k+1}$ is $z$, and the dimension of $C_k(X)$ is $n$, then the dimension is exactly

$$\dim H_k(X) = (n - y) - z$$

And it is no coincidence that the pivots lined up so nicely to allow us to count dimensions this way. It is a minor exercise to prove it formally, but the fact that the composition $\partial_k \partial_{k+1} = 0$ implies that the reduced version of $\partial_{k+1}$ will have an almost-reduced row-echelon form (the only difference being the rows of zeros interspersed above, below, and possibly between pivot rows).
As the reader may have guessed at this point, we don't actually need to compute $A$ and $A^{-1}$. Instead of this, we can perform the column/row reductions simultaneously on the two matrices. The above analysis helped us prove the algorithm works, and with that guarantee we can throw out the analytical baggage and just compute the damn thing.
Indeed, assuming the input is already processed as two matrices representing the boundary operators with respect to the standard bases of the chain groups, computing homology is only slightly more difficult than row reducing in the first place. Putting our homology where our mouth is, we’ve implemented the algorithm in Python. As usual, the entire code used in this post is available on this blog’s Github page.
The first step is writing auxiliary functions to do elementary row and column operations on matrices. For this post, we will do everything in numpy (which makes the syntax shorter than standard Python syntax, but dependent on the numpy library).
```python
import numpy

def rowSwap(A, i, j):
    temp = numpy.copy(A[i, :])
    A[i, :] = A[j, :]
    A[j, :] = temp

def colSwap(A, i, j):
    temp = numpy.copy(A[:, i])
    A[:, i] = A[:, j]
    A[:, j] = temp

def scaleCol(A, i, c):
    A[:, i] *= c * numpy.ones(A.shape[0])

def scaleRow(A, i, c):
    A[i, :] *= c * numpy.ones(A.shape[1])

def colCombine(A, addTo, scaleCol, scaleAmt):
    A[:, addTo] += scaleAmt * A[:, scaleCol]

def rowCombine(A, addTo, scaleRow, scaleAmt):
    A[addTo, :] += scaleAmt * A[scaleRow, :]
```
From here, the main meat of the algorithm is doing column reduction on one matrix, and applying the corresponding row operations on the other.
```python
def simultaneousReduce(A, B):
    # the column space of A and the row space of B share the vector space
    # C_{k+1}(X), so A's columns must match up with B's rows
    if A.shape[1] != B.shape[0]:
        raise Exception("Matrices have the wrong shape.")

    numRows, numCols = A.shape  # col reduce A

    i, j = 0, 0
    while True:
        if i >= numRows or j >= numCols:
            break

        if A[i][j] == 0:
            nonzeroCol = j
            while nonzeroCol < numCols and A[i, nonzeroCol] == 0:
                nonzeroCol += 1

            if nonzeroCol == numCols:
                i += 1  # this row yields no pivot; move on
                continue

            colSwap(A, j, nonzeroCol)
            rowSwap(B, j, nonzeroCol)

        pivot = A[i, j]
        scaleCol(A, j, 1.0 / pivot)
        scaleRow(B, j, pivot)  # the inverse of the column scaling

        for otherCol in range(0, numCols):
            if otherCol == j:
                continue
            if A[i, otherCol] != 0:
                scaleAmt = -A[i, otherCol]
                colCombine(A, otherCol, j, scaleAmt)
                rowCombine(B, j, otherCol, -scaleAmt)

        i += 1
        j += 1

    return A, B
```
This more or less parallels the standard algorithm for row-reduction (with the caveat that all the indices are swapped to do column-reduction). The only somewhat confusing line is the call to rowCombine, which explicitly realizes the corresponding row operation as the inverse of the performed column operation. Note that for row operations, the correspondence between operations on the basis and operations on the rows is not as direct as it is for columns. What’s given above is the true correspondence. Writing down lots of examples will reveal why, and we leave that as an exercise to the reader.
Then the actual algorithm to compute homology is just a matter of counting pivots. Here are two pivot-counting functions in a typical numpy fashion:
```python
def numPivotCols(A):
    z = numpy.zeros(A.shape[0])
    return [numpy.all(A[:, j] == z) for j in range(A.shape[1])].count(False)

def numPivotRows(A):
    z = numpy.zeros(A.shape[1])
    return [numpy.all(A[i, :] == z) for i in range(A.shape[0])].count(False)
```
And the final function is just:
```python
def bettiNumber(d_k, d_kplus1):
    A, B = numpy.copy(d_k), numpy.copy(d_kplus1)
    simultaneousReduce(A, B)

    dimKChains = A.shape[1]
    kernelDim = dimKChains - numPivotCols(A)
    imageDim = numPivotRows(B)

    return kernelDim - imageDim
```
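One caveat: counting nonzero rows of the reduced second matrix measures the image's dimension only when those rows are linearly independent, which is what the $\partial \circ \partial = 0$ argument above guarantees. If you want a belt-and-suspenders cross-check (at the cost of floating-point rank estimation), the same Betti number falls out of rank-nullity directly. This standalone sketch assumes the boundary matrices are given as numpy arrays:

```python
import numpy

def bettiNumberViaRank(d_k, d_kplus1):
    # dim ker d_k = dim C_k - rank d_k, and dim im d_{k+1} = rank d_{k+1}
    kernelDim = d_k.shape[1] - numpy.linalg.matrix_rank(d_k)
    imageDim = numpy.linalg.matrix_rank(d_kplus1)
    return kernelDim - imageDim

# example: a hollow triangle (edges [0,1], [0,2], [1,2], no 2-simplices)
d1 = numpy.array([[-1., -1., 0.], [1., 0., -1.], [0., 1., 1.]])
d2 = numpy.zeros((3, 1))  # no 2-simplices, so the incoming map is zero
print(bettiNumberViaRank(d1, d2))  # prints 1
```

Agreement between the two functions on a given complex makes for a decent regression test of simultaneousReduce.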
And there we have it! We’ve finally tackled the beast, and written a program to compute algebraic features of a topological space.
The reader may be curious as to why we didn't come up with a more full-bodied representation of a simplicial complex and write an algorithm which accepts a simplicial complex and computes all of its homology groups. We'll leave this direct approach as a (potentially long) exercise to the reader, because coming up in this series we are going to do one better. Instead of computing the homology groups of just one simplicial complex by repeating one algorithm many times, we're going to compute all the homology groups of a whole family of simplicial complexes in a single bound. This family of simplicial complexes will be constructed from a data set, and so, in grandiose words, we will compute the topological features of data.
If it sounds exciting, that’s because it is! We’ll be exploring a cutting-edge research field known as persistent homology, and we’ll see some of the applications of this theory to data analysis.
This series on topology has been long and hard, but we are quickly approaching the topics where we can actually write programs. For this and the next post on homology, the most important background we will need is a solid foundation in linear algebra, specifically in row-reducing matrices (and the interpretation of row-reduction as a change of basis of a linear operator).
Last time we engaged in a whirlwind tour of the fundamental group and homotopy theory. And we mean "whirlwind" as it sounds; it was all over the place in terms of organization. The most important fact that one should take away from that discussion is the idea that we can compute, algebraically, some qualitative features about a topological space related to "n-dimensional holes." For one-dimensional things, a hole would look like a circle, and for two-dimensional things, it would look like a hollow sphere, etc. More importantly, we saw that this algebraic data, which we called the fundamental group, is a topological invariant. That is, if two topological spaces have different fundamental groups, then they are "fundamentally" different under the topological lens (they are not homeomorphic, and not even homotopy equivalent).
Unfortunately the main difficulty of homotopy theory (and part of what makes it so interesting) is that these “holes” interact with each other in elusive and convoluted ways, and the algebra reflects it almost too well. Part of the problem with the fundamental group is that it deftly eludes our domain of interest: we don’t know a general method to compute the damn things!
What we really need is a coarser invariant. If we can find a "stupider" invariant, it might just be simple enough to compute. Perhaps unsurprisingly, these will take the form of finitely-generated abelian groups (the most well-understood class of groups), with one for each dimension. Now we're starting to see exactly why algebraic topology is so difficult; it has an immense list of prerequisite topics! If we're willing to skip over some of the more nitty-gritty details (and we must, lest we take a huge diversion to discuss Tor and the exact sequences in the universal coefficient theorem), then we can also do the same calculations over a field. In other words, the algebraic objects we'll define called "homology groups" are really vector spaces, and so row-reduction will be our basic computational tool to analyze them.
Once we have the basic theory down, we’ll see how we can write a program which accepts as input any topological space (represented in a particular form) and produces as output a list of the homology groups in every dimension. The dimensions of these vector spaces (their ranks, as finitely-generated abelian groups) are interpreted as the number of holes in the space for each dimension.
Recall Simplicial Complexes
In our post on constructing topological spaces, we defined the standard $n$-simplex and the simplicial complex. We recall the latter definition here, and expand upon it.
Definition: A simplicial complex $X$ is a topological space realized as a union of any collection of simplices (of possibly varying dimension) which has the following two properties:
- Any face of a simplex in $X$ is also in $X$.
- The intersection of any two simplices of $X$ is also a simplex of $X$.
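These two properties are easy to check computationally if we represent a complex as a set of simplices, each given by its sorted tuple of vertex labels. This representation is our own convention for the sketch below, not anything standard; note that in it the second condition is automatic, since a simplex is determined by its vertex set.

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check that a set of vertex-tuples is closed under taking faces.

    Each simplex is a sorted tuple of vertex labels; a face is any proper
    nonempty subset of a simplex's vertices.
    """
    simplices = set(simplices)
    for s in simplices:
        for k in range(1, len(s)):
            for face in combinations(s, k):
                if face not in simplices:
                    return False
    return True

# a hollow triangle: all faces are present, so this is a complex
triangle = {(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)}
print(is_simplicial_complex(triangle))  # True

# an edge whose endpoint vertices are missing is not a complex
print(is_simplicial_complex({(0, 1)}))  # False
```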
We can realize a simplicial complex by gluing together pieces of increasing dimension. First start by taking a collection of vertices (0-simplices) $X_0$. Then take a collection of intervals (1-simplices) $X_1$ and glue their endpoints onto the vertices in any way. Note that because we require every face of an interval to again be a simplex in our complex, we must glue each endpoint of an interval onto a vertex in $X_0$. Continue this process with $X_2$, a set of 2-simplices, gluing each boundary edge of each 2-simplex precisely along an edge in $X_1$. We can continue this process until we reach a terminating set $X_n$. It is easy to see that the union of the $X_i$ forms a simplicial complex. Define the dimension of the complex to be $n$.
There are some picky restrictions on how we glue things that we should mention. For instance, we could not contract all edges of a 2-simplex and glue it all to a single vertex in $X_0$. The reason for this is that the 2-simplex would no longer be a 2-simplex! Indeed, we've destroyed its original vertex set. The gluing process hence needs to preserve the original simplex's boundary. Moreover, one property that follows from the two conditions above is that any simplex in the complex is uniquely determined by its vertices (for otherwise, the intersection of two such non-uniquely specified simplices would not be a single simplex).
We also have to remember that we're imposing a specific ordering on the vertices of a simplex. In particular, if we label the vertices of an $n$-simplex $0, 1, \dots, n$, then this imposes an orientation on the edges, where an edge of the form $[i,j]$ is oriented from $i$ to $j$ if $i < j$, and from $j$ to $i$ otherwise. The faces, then, are "oriented" in increasing order of their three vertices. Higher-dimensional simplices are oriented in a similar way, though we rarely try to picture this (the theory of orientations is a question best postponed for smooth manifolds; we won't be going there any time soon). Here are, for example, two different ways to pick orientations of a 2-simplex:
It is true, but a somewhat lengthy exercise, that the topology of a simplicial complex does not change under a consistent shuffling of the orientations across all its simplices. Nor does it change depending on how we realize a space as a simplicial complex. These kinds of results are crucial to the welfare of the theory, but have been proved once and we won’t bother reproving them here.
As a larger example, here is a simplicial complex representing the torus. It’s quite a bit more complicated than our usual quotient of a square, but it’s based on the same idea. The left and right edges are glued together, as are the top and bottom, with appropriate orientations. The only difficulty is that we need each simplex to be uniquely determined by its vertices. While this construction does not use the smallest possible number of simplices to satisfy that condition, it is the simplest to think about.
Taking a known topological space (like the torus) and realizing it as a simplicial complex is known as triangulating the space. A space which can be realized as a simplicial complex is called triangulable.
The nicest thing about the simplex is that it has an easy-to-describe boundary. Geometrically, it's obvious: the boundary of the line segment is the two endpoints; the boundary of the triangle is the union of all three of its edges; the tetrahedron has four triangular faces as its boundary; etc. But because we need an algebraic way to describe holes, we want an algebraic way to describe the boundary. In particular, we have two important criteria that any algebraic definition must satisfy to be reasonable:
- A boundary itself has no boundary.
- The property of being boundariless (at least in low dimensions) coincides with our intuitive idea of what it means to be a loop.
Of course, just as with homotopy, these holes can interact in convoluted ways, as we're about to see, so we need to be totally concrete before we can proceed.
The Chain Group and the Boundary Operator
In order to define an algebraic boundary, we have to realize simplices themselves as algebraic objects. This is not so difficult to do: just take all "formal sums" of simplices in the complex. More rigorously, let $X_k$ be the set of $k$-simplices in the simplicial complex $X$. Define the chain group $C_k(X)$ to be the $\mathbb{Q}$-vector space with the elements of $X_k$ for a basis. The elements of the $k$-th chain group are called $k$-chains on $X$. That's right: if $\sigma, \sigma'$ are two $k$-simplices, then we just blindly define a bunch of new "chains" as all possible "sums" and scalar multiples of the simplices. For example, sums involving two elements would look like $a\sigma + b\sigma'$ for some $a, b \in \mathbb{Q}$. Indeed, we include any finite sum of such simplices, as is standard in taking the span of a set of basis vectors in linear algebra.
Just for a quick example, take this very simple simplicial complex:
We've labeled all of the simplices above, and we can describe the chain groups quite easily. The zeroth chain group $C_0(X)$ is the $\mathbb{Q}$-linear span of the set of vertices, which we'll call $v_1, v_2, \dots$. Geometrically, we might think of "the union" of two points as being, e.g., the sum $v_1 + v_2$. And if we want to have two copies of $v_1$ and five copies of $v_2$, that might be thought of as $2v_1 + 5v_2$. Of course, there are geometrically meaningless sums like $\frac{1}{2}v_1 - 6v_2$, but it will turn out that the algebra we use to talk about holes will not falter because of it. It's nice to have this geometric idea of what an algebraic expression can "mean," but in light of this nonsense it's not a good idea to get too wedded to the interpretations.
Likewise, $C_1(X)$ is the linear span of the set of edges $e_1, e_2, \dots$ with coefficients in $\mathbb{Q}$. So we can talk about a "path" as a sum of simplices like $e_1 + e_2 - e_3$. Here we use a negative coefficient to signify that we're travelling "against" the orientation of an edge. Note that since the order of the terms is irrelevant, the same "path" is given by, e.g., $-e_3 + e_1 + e_2$, which is geometrically ridiculous if we insist on reading the terms from left to right.
The same idea extends to higher-dimensional chain groups, but as usual the visualization grows difficult. For example, in the complex above, the chain group $C_2(X)$ is the vector space spanned by the set of 2-simplices. But does it make sense to have a "path" of triangles? Perhaps, but the geometric analogies certainly become more tenuous as dimension grows. The benefit, however, is that if we come up with good algebraic definitions for the low-dimensional cases, the algebra is easy to generalize to higher dimensions.
So now we will define the boundary operator on chain groups, a linear map $\partial : C_k(X) \to C_{k-1}(X)$, by starting in low dimensions and generalizing. A single vertex should always be boundariless, so $\partial v = 0$ for each vertex $v$. Extending linearly to the entire chain group, $\partial$ is identically the zero map on zero-chains. For 1-simplices we have a more substantial definition: if a simplex has its orientation as $[v_1, v_2]$, then the boundary $\partial [v_1, v_2]$ should be $v_2 - v_1$. That is, it's the front end of the edge minus the back end. This defines the boundary operator on the basis elements, and we can again extend linearly to the entire group of 1-chains.
Why is this definition more sensible than, say, $v_1 + v_2$? Using our example above, let's see how it operates on a "path." If we have a sum like $[v_0, v_1] + [v_1, v_2] + [v_2, v_3]$, then the boundary is computed as

$$\partial([v_0, v_1] + [v_1, v_2] + [v_2, v_3]) = v_1 - v_0 + v_2 - v_1 + v_3 - v_2 = v_3 - v_0$$

That is, the result was the endpoint of our path, $v_3$, minus the starting point, $v_0$. It is not hard to prove that this will work in general, since each successive edge in a path will cancel out the ending vertex of the edge before it and the starting vertex of the edge after it: the result is just one big alternating sum.
Even more important: if the "path" is a loop (the starting and ending points are the same in our naive way to write the paths), then the boundary is zero. Indeed, any time the boundary of a chain is zero, one can rewrite the chain as a sum of "loops" (though one might have to trivially introduce cancelling terms). And so our condition for a chain to be a "loop," which is just one step away from being a "hole," is that it lies in the kernel of the boundary operator. We have a special name for such chains: they are called cycles.
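We can watch the cancellation happen concretely. Representing a 1-chain as a dictionary from oriented edges to coefficients (an encoding chosen just for this sketch), the boundary of a path is its endpoint minus its starting point, and a loop has boundary zero:

```python
from collections import defaultdict

def boundary_1(chain):
    """Boundary of a 1-chain: each edge (u, v) contributes v - u."""
    result = defaultdict(int)
    for (u, v), coeff in chain.items():
        result[v] += coeff
        result[u] -= coeff
    # drop the terms whose coefficients cancelled to zero
    return {vertex: c for vertex, c in result.items() if c != 0}

# a path 0 -> 1 -> 2 -> 3: the interior vertices cancel
path = {(0, 1): 1, (1, 2): 1, (2, 3): 1}
print(boundary_1(path))  # {0: -1, 3: 1}

# a loop 0 -> 1 -> 2 -> 0 (traversing edge (0, 2) backwards): boundary zero
loop = {(0, 1): 1, (1, 2): 1, (0, 2): -1}
print(boundary_1(loop))  # {}
```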
For 2-simplices, the definition is not so much harder: if we have a simplex like $[v_0, v_1, v_2]$, then the boundary should be $[v_1, v_2] - [v_0, v_2] + [v_0, v_1]$. If one rewrites this in a different order, then it will become apparent that this is just a path traversing the boundary of the simplex with the appropriate orientations. We wrote it in this "backwards" way to lead into the general definition: the faces are ordered by which vertex does not occur in the face in question ($v_0$ omitted from the first, $v_1$ from the second, and $v_2$ from the third).
We are now ready to extend this definition to arbitrary simplices, but a nice-looking definition requires a bit more notation. Say we have a $k$-simplex which looks like $[v_0, v_1, \dots, v_k]$. Abstractly, we can write it just using the numbers, as $[0, 1, \dots, k]$. And moreover, we can denote the removal of a vertex from this list by putting a hat over the removed index. So $[0, \dots, \hat{i}, \dots, k]$ represents the simplex which has all of the vertices from $0$ to $k$ excluding the vertex $v_i$. To represent a single-vertex simplex, we will often drop the square brackets, e.g. $3$ for $[3]$. This can make for some awkward-looking math, but is actually standard notation once the correct context has been established.
Now the boundary operator is defined on the standard $k$-simplex with orientation $[0, 1, \dots, k]$ via the alternating sum

$$\partial([0, 1, \dots, k]) = \sum_{i=0}^{k} (-1)^i [0, \dots, \hat{i}, \dots, k]$$
It is trivial (but perhaps notationally hard to parse) to see that this coincides with our low-dimensional examples above. But now that we’ve defined it for the basis elements of a chain group, we automatically get a linear operator on the entire chain group by extending linearly on chains.
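The alternating sum is a one-liner to implement. Here is a sketch operating on a single simplex, given as a tuple of vertex labels (our own encoding); it returns the formal sum as a list of (face, sign) pairs:

```python
def boundary_of_simplex(simplex):
    """Alternating sum of faces: the i-th face omits vertex i and
    carries the sign (-1)^i."""
    return [
        (simplex[:i] + simplex[i + 1:], (-1) ** i)
        for i in range(len(simplex))
    ]

print(boundary_of_simplex((0, 1, 2)))
# [((1, 2), 1), ((0, 2), -1), ((0, 1), 1)]
```

Note that this reproduces the 2-simplex example above: $\partial [0,1,2] = [1,2] - [0,2] + [0,1]$.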
Definition: The $k$-cycles on $X$ are those chains in the kernel of $\partial$. We will call $k$-cycles boundariless. The $k$-boundaries are the image of $\partial$.
We should note that we are making a serious abuse of notation here, since technically $\partial$ is defined on only a single chain group. What we should do is define $\partial_k : C_k(X) \to C_{k-1}(X)$ for each fixed dimension, and always carry the subscript. In practice this is only done when it is crucial to be clear which dimension is being talked about, and otherwise the dimension is always inferred from the context. If we want to compose the boundary operator in one dimension with the boundary operator in another dimension (say, $\partial_{k-1} \circ \partial_k$), it is usually written $\partial^2$. This author personally supports the abolition of the subscripts for the boundary map, because subscripts are a nuisance in algebraic topology.
All of that notation discussion is so we can make the following observation: $\partial^2 = 0$. That is, every chain which is the boundary of a higher-dimensional chain is boundariless! This should make sense in low dimensions: if we take the boundary of a 2-simplex, we get a cycle of three 1-simplices, and the boundary of this chain is zero. Indeed, we can formally prove it from the definition for general simplices (and extend linearly to achieve the result for all chains) by writing out $\partial(\partial([0, 1, \dots, k]))$. With a keen eye, the reader will notice that the terms cancel out pairwise and we get zero. The reason is entirely in which coefficients are negative: the second time we apply the boundary operator, the power on $(-1)$ shifts by one index for the faces coming after the removed vertex. We will leave the full details as an exercise to the reader.
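We can also watch the cancellation happen mechanically, by applying the alternating-sum formula twice and collecting coefficients. Everything below is a self-contained sketch with our own chain encoding (a dict from vertex-tuples to coefficients):

```python
from collections import defaultdict

def boundary(chain):
    """Boundary of a formal sum of simplices, given as a dict mapping
    vertex-tuples to rational coefficients."""
    result = defaultdict(int)
    for simplex, coeff in chain.items():
        if len(simplex) == 1:
            continue  # vertices are boundariless
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            result[face] += coeff * (-1) ** i
    return {s: c for s, c in result.items() if c != 0}

# the boundary of the boundary of a 3-simplex: every term cancels
d = boundary({(0, 1, 2, 3): 1})
print(boundary(d))  # {}
```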
So this fits our two criteria: low-dimensional examples make sense, and boundariless things (cycles) represent loops.
Recasting in Algebraic Terms, and the Homology Group
For the moment, let's give the boundary operators subscripts $\partial_k$. If we recast things in algebraic terms, we can call the $k$-cycles $Z_k(X) = \ker \partial_k$, and this will be a subspace (and a subgroup) of $C_k(X)$, since kernels are always linear subspaces. Moreover, the set of $k$-boundaries, that is, the image $B_k(X) = \operatorname{im} \partial_{k+1}$, is a subspace (subgroup) of $C_k(X)$. As we just saw, every boundary is itself boundariless, so $B_k(X)$ is a subset of $Z_k(X)$, and since the image of a linear map is always a linear subspace of its range, it is a subspace too.
All of this data is usually expressed in one big diagram: each of the chain groups is organized in order of decreasing dimension, and the boundary maps connect them.

$$\cdots \xrightarrow{\partial_{k+1}} C_k(X) \xrightarrow{\partial_k} C_{k-1}(X) \xrightarrow{\partial_{k-1}} \cdots \xrightarrow{\partial_1} C_0(X) \xrightarrow{\partial_0} 0$$

Since our example (the "simple space" of two triangles from the previous section) only has simplices in dimensions zero, one, and two, we additionally extend the sequence of groups to an infinite sequence by adding trivial groups and zero maps, as indicated. The condition that $\partial_k \partial_{k+1} = 0$, which is equivalent to $\operatorname{im} \partial_{k+1} \subset \ker \partial_k$, is what makes this sequence a chain complex. As a side note, every sequence of abelian groups and group homomorphisms which satisfies the boundary requirement is called an algebraic chain complex. This foreshadows that there are many different types of homology theory, and they are unified by these kinds of algebraic conditions.
Now, geometrically we want to say, “The holes are all those cycles (loops) which don’t arise as the boundaries of higher-dimensional things.” In algebraic terms, this would correspond to a quotient space (really, a quotient group, which we covered in our first primer on groups) of the k-cycles by the k-boundaries. That is, a cycle would be considered a “trivial hole” if it is a boundary, and two “different” cycles would be considered the same hole if their difference is a k-boundary. This is the spirit of homology, and formally, we define the homology group (vector space) as follows.
Definition: The $k$-th homology group of a simplicial complex $X$, denoted $H_k(X)$, is the quotient vector space $Z_k(X) / B_k(X) = \ker \partial_k / \operatorname{im} \partial_{k+1}$. Two elements of a homology group which are equivalent (their difference is a boundary) are called homologous.
The number of $k$-dimensional holes in $X$ is thus realized as the dimension of $H_k(X)$ as a vector space.
The quotient mechanism really is doing all of the work for us here. Any time we have two holes and we’re wondering whether they represent truly different holes in the space (perhaps we have a closed loop of edges, and another which is slightly longer but does not quite use the same edges), we can determine this by taking their difference and seeing if it bounds a higher-dimensional chain. If it does, then the two chains are the same, and if it doesn’t then the two chains carry intrinsically different topological information.
For particular dimensions, there are some neat facts (which obviously require further proof) that make this definition more concrete.
- The dimension of $H_0(X)$ is the number of connected components of $X$. Therefore, computing homology generalizes the graph-theoretic methods of computing connected components.
- $H_1(X)$ is (with integer coefficients) the abelianization of the fundamental group of $X$. Roughly speaking, $H_1(X)$ is the closest approximation of $\pi_1(X)$ by a vector space.
Now that we’ve defined the homology group, let’s end this post by computing all the homology groups for this example space:
This is a sphere (which can be triangulated as the boundary of a tetrahedron) with an extra "arm." Note how the arm needs an extra vertex so that each simplex remains uniquely determined by its vertices. This space is a nice example because it has one-dimensional homology in dimension zero (one connected component), dimension one (the arm is like a copy of the circle), and dimension two (the hollow sphere part). Let's verify this algebraically.
Let's start by labelling the vertices of the tetrahedron 0, 1, 2, 3, so that the extra arm attaches at 0 and 2, and call the extra vertex on the arm 4. This completely determines the orientations for the entire complex, as seen below.
Indeed, the chain groups are easy to write down:

$$C_0(X) = \mathbb{Q}\{0, 1, 2, 3, 4\}$$
$$C_1(X) = \mathbb{Q}\{[0,1], [0,2], [0,3], [0,4], [1,2], [1,3], [2,3], [2,4]\}$$
$$C_2(X) = \mathbb{Q}\{[0,1,2], [0,1,3], [0,2,3], [1,2,3]\}$$

We can easily write down the images of each $\partial_k$: they're just the span of the images of each basis element under $\partial_k$.
The zeroth homology $H_0(X)$ is the kernel of $\partial_0$ modulo the image of $\partial_1$. The angle brackets are a shorthand for "span."

$$H_0(X) = \langle 0, 1, 2, 3, 4 \rangle / \langle 1 - 0, 2 - 0, 3 - 0, 4 - 0, 2 - 1, 3 - 1, 3 - 2, 4 - 2 \rangle$$

Since $\partial_0$ is actually the zero map, the kernel is all of $C_0(X)$, and all five vertices generate it. The quotient construction imposes that two vertices (two elements of the homology group) are considered equivalent if their difference is a boundary. It is easy to see (indeed, just by the first four generators of the image) that all vertices are equivalent to 0, so there is a unique generator of homology, and the vector space is isomorphic to $\mathbb{Q}$. There is exactly one connected component. Geometrically we can realize this: two vertices are homologous if and only if there is a "path" of edges from one vertex to the other, and such a chain will indeed have as its boundary the difference of the two vertices.
We can compute the first homology $H_1$ in an analogous way: compute the kernel of $\partial_1$ and the image of $\partial_2$ separately, and then compute the quotient.
It takes a bit of combinatorial analysis to show that this is precisely the kernel of $\partial_1$, and we will have a better method for it in the next post, but indeed this is it. As the image of $\partial_2$ is precisely the span of the first four basis elements, the quotient is just the one-dimensional vector space spanned by the remaining generator. Hence $H_1(X) \cong \mathbb{Q}$, and there is one one-dimensional hole.
Since there are no 3-simplices, the second homology group is simply the kernel of $\partial_2$, which is not hard to see is generated by the single chain representing the “sphere” part of the space: a signed sum of the four 2-simplices. The second homology group is thus again $\mathbb{Q}$, and there is one two-dimensional hole in $X$.
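Since the chain groups are vector spaces over $\mathbb{Q}$ and the boundary maps are linear, we can already verify all three computations with exact rank arithmetic. The code below is a sketch (the `matrix_rank` helper is a hypothetical stand-in for the real algorithm we develop next time); it builds the boundary matrices directly from the simplex lists for this complex, the boundary of a tetrahedron on vertices 0–3 plus the arm through vertex 4:

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank over Q via fraction-exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    pivots = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        piv = next((r for r in range(pivots, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[pivots], m[piv] = m[piv], m[pivots]
        for r in range(len(m)):
            if r != pivots and m[r][col] != 0:
                f = m[r][col] / m[pivots][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivots])]
        pivots += 1
    return pivots

vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4), (2, 4)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# boundary_1: rows indexed by vertices, columns by edges; d[a,b] = [b] - [a]
d1 = [[0] * len(edges) for _ in vertices]
for j, (a, b) in enumerate(edges):
    d1[a][j] -= 1
    d1[b][j] += 1

# boundary_2: rows indexed by edges, columns by triangles;
# the alternating sum d[a,b,c] = [b,c] - [a,c] + [a,b]
edge_index = {e: i for i, e in enumerate(edges)}
d2 = [[0] * len(triangles) for _ in edges]
for j, (a, b, c) in enumerate(triangles):
    for face, sign in (((b, c), 1), ((a, c), -1), ((a, b), 1)):
        d2[edge_index[face]][j] += sign

r1, r2 = matrix_rank(d1), matrix_rank(d2)
h0 = len(vertices) - r1        # dim ker d0 - rank d1, with d0 the zero map
h1 = (len(edges) - r1) - r2    # dim ker d1 - rank d2
h2 = len(triangles) - r2       # dim ker d2; there are no 3-simplices
print(h0, h1, h2)              # 1 1 1
```

The ranks alone determine the dimensions, which is exactly why the row-reduction algorithm of the next post works.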
So there we have it!
Next time, we will give a more explicit algorithm for computing homology for finite simplicial complexes, and it will essentially be a variation of row-reduction which simultaneously rewrites the matrix representations of the boundary operators with respect to a canonical basis. This will allow us to simply count entries on the diagonals of the two matrices, and the difference will be the dimension of the quotient space, and hence the number of holes.
Being part of the subject of algebraic topology, this post assumes the reader has read our previous primers on both topology and group theory. As a warning to the reader, it is more advanced than most of the math presented on this blog, and it is woefully incomplete. Nevertheless, the aim is to provide a high level picture of the field with a peek at the details.
An Intuitive and Totally Uncomputable Topological Invariant
Our eventual goal is to get comfortable with the notion of the “homology group” of a topological space. That is, to each topological space we will associate a group (really, a family of groups) in such a way that whenever two topological spaces are homeomorphic their associated groups will be isomorphic. In other words, we will be able to distinguish between two spaces by computing their associated groups (if even one of the groups is different).
In general, there may be many many ways to associate a group with an object (for instance, it could be a kind of symmetry group or a group action). But what we want to do, and what will motivate both this post and the post on homology, is figure out a reasonable way to count holes in a space. Of course, the difficult part of this is determining what it means mathematically to have a “hole” in a space.
The simplest example of this is a circle $S^1$:
That is, we think of this space (not embedded in any particular Euclidean space) as having a single one-dimensional “hole” in it. Furthermore, a sphere $S^2$ has a two-dimensional “hole” in it (the hollow interior).
So our approach to make “holes” rigorous will follow a well-trodden mathematical trail: come up with a definition which fits with our intuition for these special cases, and explore how the definition generalizes to other cases. In this post we will stick exclusively to one-dimensional holes (with a small exception when we take a peek at the chaos of higher homotopy groups at the end of this post), and the main object we use to represent them is called the fundamental group.
On the other hand, we will find that the fundamental group is too unwieldy to compute (and for deep reasons). Since we want to be able to readily compute the number of holes in twisted and tied-up spaces, we will need to scrap the fundamental group and try something else. Instead, we’ll derive various kinds of other groups associated to a topological space, which will collectively fall under the name homology groups. After that, we will be able to actually write a program which computes homology of sufficiently nice spaces (simplicial complexes). Our main use for this will actually be to compute the topological features of a data set, but let’s cross that bridge when we get to it.
For now, we will develop the idea of homotopy and the fundamental group. It will show us how to think geometrically about topology and give us our first interplay between topology and algebra.
Paths and Homotopy
For the remainder of this post (and all of our future posts in topology), a map will always mean a continuous function. The categorical way to say this, which we will use increasingly often as we approach more advanced topics, is by saying that continuous maps are “the morphisms in the category of topological spaces.”
So we define a path in a topological space $X$ to be a map $f : [0,1] \to X$ (that is, a continuous function from the unit interval to $X$). There are many kinds of paths in a space:
Paths are necessarily oriented, and we can think of the starting point of the path as $f(0)$, and the ending point as $f(1)$. And if we label the starting point $x$, say, and the ending point $y$, we will call $f$ a path from $x$ to $y$. We call $x, y$ the endpoints of $f$. We also speak of “moving along a path,” in the sense that as we increase $t$ from zero to one, we continuously move along its image in $X$. We certainly allow paths to intersect themselves, as in the blue path given above, and we do not require that paths are smooth (in fact, they need not be differentiable in any sense, as in the green path above). We will call a path simple if it is injective (that is, it doesn’t intersect itself). The black and green paths in the figure above are simple. The most trivial of paths is the constant path. We will denote it by $e_x$; it is simply defined by sending the entire interval to a single point $x$.
The language of paths already gives us a nice topological invariant. We call a space $X$ path-connected if for every two points $x, y \in X$ there exists a path from $x$ to $y$. The knowledgeable reader will recognize that this is distinct from the usual topological notion of connectedness (which is in fact weaker than path-connectedness). It is not hard to see that if two topological spaces $X, Y$ are homeomorphic, then $X$ is path connected if and only if $Y$ is path connected. As a quick warmup proof, we note that any map $f : X \to Y$ giving a homeomorphism is continuous, and the composition of a path with any map is again a path: if $\gamma$ is a path in $X$, then $f \circ \gamma$ is a path in $Y$.
By path connectivity in $X$ and the fact that $f$ is surjective, we can always find a path between any two points $a, b \in Y$: just find a path between $f^{-1}(a)$ and $f^{-1}(b)$ in $X$ and shoot it through (compose it with) $f$. The same argument goes in the reverse direction using $f^{-1}$, and this establishes the if and only if.
Back to our mission of describing holes, we want to be able to continuously transform two paths into each other. That is, in the following picture, we want to be able to say that the red and blue paths are “equivalent” because we can continuously slide one to the other, while always keeping the endpoints fixed.
On the other hand, if there is a hole in the space, as shown by the black disk below, no way to slide the red path to the blue path can be continuous; it would need to “jump” over the hole, which is a non-continuous operation. Indeed, no matter how small this hole is we can never overcome this problem; even if just a single point is missing, the nature of continuity ensures it.
In order to make this rigorous, we need to define a “path of paths,” so to speak. That is, we need a continuously parameterized family of continuous maps which for the parameter 0 gives the first path, and for 1 the second path.
Definition: Let $\alpha, \beta$ be two paths in $X$ with the same endpoints. A homotopy from $\alpha$ to $\beta$ is a family of paths $H_s$ for $s \in [0,1]$ such that the following properties hold:
- $H_0 = \alpha$ as paths in $X$
- $H_1 = \beta$ as paths in $X$
- For all $s$, the path $H_s$ has the same endpoints as $\alpha$ and $\beta$.
- $H_s(t)$ is continuous in $s$ and $t$ simultaneously
If there is a homotopy between two paths, we call the two paths homotopic, and denote the relationship $\alpha \sim \beta$.
We often think of $s$ as the time variable, describing how far along we are in the transformation from $\alpha$ to $\beta$. Here is a nice animation showing a homotopy of two paths:
But don’t be fooled by the suggestive nature of these paths. While it will often be the case that two paths which are homeomorphic as topological spaces in themselves (viewed as subspaces of $X$) will be homotopic, this is neither a sufficient nor a necessary condition for a homotopy to exist. Indeed, one can come up with a homotopy between a simple curve and a non-simple curve, but these are not homeomorphic spaces (one has a cut-point). And in our picture above we give an example of two simple curves which are not homotopic: there is a hole in $X$ that stops them from being so. So the existence of a homotopy depends almost entirely on the space that the paths are in, and on where the paths are in that space. This is good, because in the end we don’t care about the paths. We just care about what they tell us about $X$.
Often times we will not need to give explicit homotopies; they can be quite complicated to construct. But here’s one widely used example: let us take two paths $\alpha, \beta$ in the Euclidean plane from $x$ to $y$. Then we can construct the straight line homotopy between them as follows. Set $H_s(t) = (1-s)\alpha(t) + s\beta(t)$. For $s = 0$ we get the path $\alpha$; for $s = 1$ we get the path $\beta$; and for any fixed value of $s$ we get a path from $x$ to $y$ (plug in $t = 0, 1$ and verify the endpoints agree, and that the function is continuous in both variables). This example shows that all paths with the same endpoints in $\mathbb{R}^2$ are homotopic. The same is true of $\mathbb{R}^n$ in general, and we will use this fact later.
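As a quick sanity check, the straight line homotopy is easy to verify numerically. The two paths below are illustrative choices, not anything from the post; we only check the three defining properties at a handful of points:

```python
def alpha(t):          # straight segment from (0, 0) to (1, 0)
    return (t, 0.0)

def beta(t):           # a bowed path with the same endpoints
    return (t, t * (1 - t))

def H(s, t):
    """Straight line homotopy: H_s(t) = (1-s)*alpha(t) + s*beta(t)."""
    ax, ay = alpha(t)
    bx, by = beta(t)
    return ((1 - s) * ax + s * bx, (1 - s) * ay + s * by)

# H_0 is alpha, H_1 is beta, and every H_s fixes the endpoints.
assert H(0.0, 0.3) == alpha(0.3)
assert H(1.0, 0.3) == beta(0.3)
for s in (0.0, 0.25, 0.5, 1.0):
    assert H(s, 0.0) == (0.0, 0.0) and H(s, 1.0) == (1.0, 0.0)
print("straight-line homotopy checks pass")
```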
Because every path is homotopic to itself (via the constant homotopy $H_s = \alpha$ for all $s$), homotopy is symmetric (running time backwards, $H_{1-s}$ gives an inverse homotopy from $\beta$ to $\alpha$), and homotopy is transitive (given in the parenthetical below), homotopy becomes an equivalence relation on paths in $X$. So we may speak of the equivalence class $[\alpha]$ of a path as the set of all paths homotopic to it.
(Transitivity: given homotopies $H$ from $\alpha$ to $\beta$ and $G$ from $\beta$ to $\gamma$, we can construct a homotopy from $\alpha$ to $\gamma$ as follows. The main difficulty is that the variable $s$ in our homotopy must go from 0 to 1, and so we cannot directly compose $H$ and $G$. Instead, we “run $H$ twice as fast” and then “run $G$ twice as fast.” That is, we define $\Phi_s$ piecewise to be $H_{2s}$ when $s \in [0, 1/2]$ and $G_{2s-1}$ when $s \in [1/2, 1]$. The composition of continuous functions is continuous, so changing the variable doesn’t break continuity. Moreover, since at time $s = 1/2$ the homotopies coincide ($H_1 = \beta = G_0$), and elsewhere they are continuous, we preserve continuity everywhere.)
This equivalence relation is precisely what we’ll use to define the existence of a hole. Intuitively speaking, if two paths with the same endpoints are not homotopic to each other, then there must be some obstruction stopping the homotopy. Then the number of distinct equivalence classes of paths with fixed endpoints will count the number of holes (plus one). Because endpoints are a little bit messy (one has to pick endpoints for non-equivalent paths to count holes), we instead implement this idea with loops.
Definition: A path $\gamma$ is a loop if $\gamma(0) = \gamma(1)$. Since the two endpoints of a loop are the same point, we call that point the basepoint.
With loops or with paths, we need one more additional operation. The operation is called path composition, but it is essentially “doing one path after the other.” That is, let $\alpha, \beta$ be paths in $X$ such that $\alpha(1) = \beta(0)$. Then to compose $\beta$ after $\alpha$, denoted via juxtaposition $\alpha\beta$, is the path defined by $\alpha\beta(t) = \alpha(2t)$ when $t \in [0, 1/2]$ and $\alpha\beta(t) = \beta(2t - 1)$ when $t \in [1/2, 1]$. The same argument for “composing” homotopies works here: we run along $\alpha$ at twice the normal speed and then $\beta$ at twice its normal speed to get $\alpha\beta$.
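A direct translation of this definition into code makes the “twice the speed” bookkeeping concrete; the two sample paths here are hypothetical choices meeting at the point $(1, 0)$:

```python
def compose(alpha, beta):
    """Path composition: run alpha at double speed, then beta.
    Requires alpha(1) == beta(0)."""
    def gamma(t):
        return alpha(2 * t) if t <= 0.5 else beta(2 * t - 1)
    return gamma

a = lambda t: (t, 0.0)        # path from (0, 0) to (1, 0)
b = lambda t: (1.0, t)        # path from (1, 0) to (1, 1)
ab = compose(a, b)

assert ab(0.0) == (0.0, 0.0)   # starts where a starts
assert ab(0.5) == (1.0, 0.0)   # halfway point is the shared endpoint
assert ab(1.0) == (1.0, 1.0)   # ends where b ends
```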
And now we turn to the main theorem of this post.
The Fundamental Group
Instead of looking at paths, we will work with loops with a fixed basepoint. The amazing thing that happens is that the set of equivalence classes of loops (with respect to homotopy) forms a group.
Definition: Let $X$ be a topological space and fix a point $x_0 \in X$. Define $\pi_1(X, x_0)$ to be the set of equivalence classes of loops with basepoint $x_0$. This set is called the fundamental group of $X$ (although we have not yet proved it is a group), or the first homotopy group of $X$.
Theorem: The fundamental group $\pi_1(X, x_0)$ is a group with respect to the operation of path composition.
Proof. In order to prove this we must verify the group axioms. Let’s recall them here:
- The group must be closed under the group operation: clearly the composition of two loops based at $x_0$ is again a loop at $x_0$, but we need to verify the operation is well-defined. That is, it gives the same value regardless of which representative we choose from each equivalence class.
- The group operation must be associative.
- The group must have an identity element: the constant path $e_{x_0}$ will fill this role.
- Every element must have an inverse: we will show that the loop $\gamma^{-1}$ which “runs $\gamma$ in reverse” will serve this purpose.
To prove the first point, let $\alpha, \beta$ be loops, and suppose $\alpha', \beta'$ are homotopic to $\alpha, \beta$ respectively. Then it should be the case that $\alpha\beta$ is homotopic to $\alpha'\beta'$. Suppose that $H, G$ are homotopies between the relevant loops. Then we can define a new homotopy which runs the two homotopies simultaneously, and composes their paths for any fixed $s$. This is almost identical to the original definition of a path composition, since we simply need to define our new family of loops by composing the loops $H_s$ and $G_s$. That is, $\Phi_s = H_s G_s$.
So the operation is well-defined on equivalence classes.
Associativity is messier to prove, but has a similar mechanic. We just need to define a homotopy which scales the speed of each of the paths for a fixed to be in sync. We omit the details for brevity.
For the identity, let $\gamma$ be a path and $e$ denote the constant path at $x_0$. Then we must find a homotopy from $e\gamma$ to $\gamma$ and from $\gamma e$ to $\gamma$. Since the argument is symmetric we prove the former. The path $e\gamma$ sits at $x_0$ for half of the time and performs $\gamma$ at twice speed for the rest. All we need to do is “continuously move” the time at which we stop sitting and start moving from $t = 1/2$ to $t = 0$, and adjust the speed of $\gamma$ accordingly for the remaining time. We can equivalently perform this idea backwards (the algebra is simpler) to get

$H_s(t) = x_0$ for $0 \leq t \leq \frac{1-s}{2}$, and

$H_s(t) = \gamma \left( \frac{2t - (1-s)}{1+s} \right)$ for $\frac{1-s}{2} \leq t \leq 1$.

To verify this works, plugging in $s = 0$ gives precisely the definition of the composition $e\gamma$, and for $s = 1$ we recover $\gamma$. Moreover for any value of $s$, setting $t = 0$ gives $x_0$ and $t = 1$ gives $\gamma(1)$. Finally, the point at which the two pieces meet is $t = \frac{1-s}{2}$, and we only need verify that the piecewise definition agrees there. That is, the argument of $\gamma$ must be zero for that value of $t$ regardless of the choice of $s$. Indeed it is.
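To make the identity homotopy concrete, here is a numeric spot-check of the standard reparameterization $H_s(t) = x_0$ for $t \leq \frac{1-s}{2}$ and $H_s(t) = \gamma\big(\frac{2t-(1-s)}{1+s}\big)$ otherwise, as in Hatcher-style proofs. The sample path $\gamma$ is an arbitrary illustrative choice:

```python
import math

def gamma(t):                      # an illustrative sample path
    return (math.cos(t), math.sin(t))

x0 = gamma(0)

def H(s, t):
    """H_0 is the constant path at x0 followed by gamma at double speed;
    H_1 is gamma itself. The seam moves from t = 1/2 down to t = 0."""
    seam = (1 - s) / 2
    if t <= seam:
        return x0
    return gamma((2 * t - (1 - s)) / (1 + s))

eps = 1e-12
for s in (0.0, 0.3, 0.7, 1.0):
    # endpoints are fixed for every s
    assert H(s, 0.0) == x0
    assert all(abs(u - v) < 1e-9 for u, v in zip(H(s, 1.0), gamma(1.0)))
    # the two pieces agree at the seam: gamma's argument is 0 there
    seam = (1 - s) / 2
    assert all(abs(u - v) < 1e-9 for u, v in zip(H(s, seam + eps), x0))
print("identity homotopy checks pass")
```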
Finally, for inverses define $\gamma^{-1}(t) = \gamma(1-t)$ to be the “reverse path” of $\gamma$. Now we simply need to prove that $\gamma\gamma^{-1}$ is homotopic to the constant path $e_{x_0}$. That is, we need to run $\gamma$ part way, and then run $\gamma$ starting from that spot in the reverse direction. The point at which we stop to turn back is $\gamma(1-s)$, which continuously moves from $\gamma(1)$ to $\gamma(0)$ as $s$ goes from 0 to 1. We leave the formal specification of this homotopy as an exercise to the reader (hint: we need to appropriately change the speed at which $\gamma$ and $\gamma^{-1}$ run in order to make everything fit together). From here on we will identify the two notations $\gamma = [\gamma]$, thinking of paths as homotopy equivalence classes of paths.
The high-level implications of this theorem are quite important. Although we have not proved it here, the fundamental group is a topological invariant. That is, if $X$ and $Y$ are two topological spaces, $x_0 \in X$, $y_0 \in Y$, and there is a homeomorphism $f : X \to Y$ which takes $x_0$ to $y_0$, then $\pi_1(X, x_0)$ and $\pi_1(Y, y_0)$ are isomorphic as groups. That is (and this is the important bit) if $X$ and $Y$ have different fundamental groups, then we know immediately that they cannot be homeomorphic. The way to prove this is to use $f$ to construct a new induced homomorphism of groups and prove that it is an isomorphism. We give a description of this in the next section.
In fact, we can say something stronger: the fundamental group is a homotopy invariant. That is, we can generalize the idea of a path homotopy into a “homotopy of maps,” and say that two spaces $X, Y$ are homotopy equivalent if there are maps $f : X \to Y$ and $g : Y \to X$ with $gf$ homotopic to $\mathrm{id}_X$ (the identity function on $X$) and $fg$ homotopic to $\mathrm{id}_Y$. The important fact is that two spaces which are homotopy equivalent necessarily have isomorphic fundamental groups. It should also be clear that homeomorphic spaces are homotopy equivalent (the homeomorphism map is also a homotopy equivalence map), so this realizes the fundamental group as a topological invariant as well. We will not prove this here, and instead refer to Hatcher, Chapter 1.
An important example of two spaces which are homotopy equivalent but not homeomorphic is that of the Möbius band and the circle. Proving it requires some additional tools we haven’t covered on this blog, but the idea is that one can squeeze the Möbius strip down onto its center circle, and so the loops in the former correspond to loops in the latter. So the Möbius band also has fundamental group $\mathbb{Z}$.
Additionally, if the space $X$ is path-connected, then the choice of a basepoint is irrelevant. A sketch of the proof goes as follows: let $x_0, x_1$ be two choices of basepoints, and pick a path $p$ from $x_0$ to $x_1$. Then we can naturally take any loop $\gamma$ based at $x_0$ to a corresponding loop based at $x_1$ by composing $p^{-1}\gamma p$ as in the following picture (here we traverse the paths in left-to-right order, which is the opposite of how function composition usually works; this is so we can read the paths from left-to-right in the order they are traversed in the diagram).
More rigorously, this operation induces a homomorphism on the fundamental group (we define this fully in the next section), and for path-connected spaces it is an isomorphism of groups. And so we can speak of the fundamental group $\pi_1(X)$ when we work with sufficiently nice spaces (and in practice we always will).
Computing Fundamental Groups
So now we have seen that the fundamental group is in fact a group, but how can we compute it? Groups can be large and wild, and so can topological spaces. So it’s really impressive that we can compute these groups at all.
First off, the simplest possibility is that the fundamental group of $X$ is the trivial group. That is, that every loop is homotopic to the trivial loop. In this case we call $X$ simply connected. For example, Euclidean space $\mathbb{R}^n$ is simply connected. We gave the proof above showing that any two paths with the same endpoints in $\mathbb{R}^2$ are homotopic, and the proof is the same in general. Since all loops are homotopic to each other, they are all homotopic to the trivial loop, and so $\pi_1(\mathbb{R}^n)$ is the trivial group. The picture becomes more interesting when we start to inspect subspaces of Euclidean space with nontrivial holes in them.
In fact, the first nontrivial fundamental group one computes in developing this theory is that of the circle $S^1$. We again defer the proof to Hatcher because it is quite long, but the essential idea is as follows. View the circle as the unit circle in the complex plane $\mathbb{C}$, and fix the basepoint $1$. The only distinct kinds of loops are those loops which loop around the circle $n$ times clockwise, or $n$ times counterclockwise, where $n \in \mathbb{Z}$ is arbitrary. That is, we can construct a function $\mathbb{Z} \to \pi_1(S^1)$ sending $n$ to the loop $t \mapsto e^{2 \pi i n t}$ which goes $n$ times around the circle (negative $n$ makes it go in the opposite direction). This map is an isomorphism, so $\pi_1(S^1) \cong \mathbb{Z}$.
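We can watch this isomorphism in action computationally: given a sampled loop in $\mathbb{C} \setminus \{0\}$, summing the small angle increments between consecutive samples recovers the integer $n$. This is a numerical sketch of the winding number, of course, not the proof:

```python
import cmath
import math

def winding_number(loop, samples=10000):
    """Total signed angle swept by a loop in C \\ {0}, divided by 2*pi.
    Sums the small turn between consecutive sample points."""
    total = 0.0
    prev = loop(0.0)
    for k in range(1, samples + 1):
        cur = loop(k / samples)
        total += cmath.phase(cur / prev)  # angle increment in (-pi, pi]
        prev = cur
    return round(total / (2 * math.pi))

# The loop t -> e^{2*pi*i*n*t} goes n times around the circle.
for n in (-2, -1, 0, 1, 3):
    loop = lambda t, n=n: cmath.exp(2j * math.pi * n * t)
    assert winding_number(loop) == n
print("winding numbers agree")
```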
From this one computation we get quite a few gains. As we will see in a moment, many fundamental groups are free products of $\mathbb{Z}$ with added relators, and the upcoming van Kampen theorem tells us how those relators are determined. On the other hand, we have already seen on this blog that the fundamental group of the circle is powerful enough to prove the fundamental theorem of algebra! It is clear from this that $\pi_1$ is a powerful notion.
Let’s take a moment to interpret this group in terms of the number of “holes” of $S^1$. There may be some confusion because this group is infinite, but there is only one hole. Indeed, for each “hole” of a space one would expect to get a copy of $\mathbb{Z}$, since one may run a loop arbitrarily many times around that hole in either the clockwise or counterclockwise direction. Recall the classification theorem for finitely generated abelian groups from our recent primer. Since every such abelian group decomposes into a finite number of copies of $\mathbb{Z}$ and a torsion part, we should be interpreting the number of holes of $X$ as the rank of $\pi_1(X)$ (the number of copies of $\mathbb{Z}$).
This is great! But unfortunately nothing guarantees that the fundamental group is abelian, and for more complicated spaces things aren’t even so simple in the abelian case. In a moment, we will see an example of a space with a nonabelian fundamental group (in fact, we’ve already seen this space on this blog). One of its pieces will be the torsion group $\mathbb{Z}/2\mathbb{Z}$. So our interpretation would say that this space has no holes in it, but what does the extra torsion factor mean? As it turns out, the factor $\mathbb{Z}/2\mathbb{Z}$ happens to correspond to non-orientability.
In fact it is true that for every group $G$, there is a two-dimensional topological space whose fundamental group is $G$ (we haven’t defined dimension yet, but the notion of a surface is intuitive enough). So any weird torsion factor shows up in the fundamental group of some sufficiently awkward space. When one starts to investigate more and more of these contrived spaces, one ceases to care about the “intuitive” interpretations of the fundamental group. The group simply becomes a topological invariant, and the extra factors are just extra ways to tell spaces apart.
We want to state a powerful tool for computing fundamental groups whose rigorous version is called the van Kampen theorem. The intuition is as follows. Imagine we have a so-called “wedge” of two circles $S^1 \vee S^1$.
From our intuition of this space as two copies of a circle, we would expect its fundamental group to be $\mathbb{Z} \times \mathbb{Z}$. Unfortunately it is not, because we can immediately see that the fundamental group of this space is not commutative. In particular, let us label the two loops $a$ and $b$, and give them orientations:
Then the loop $ab$ (we traverse these loops in left-to-right order for ease of reading) is not homotopic to the loop $ba$. Instead, this group should be the free group $\mathbb{Z} * \mathbb{Z}$ on the generators $a, b$. Recall from our second primer on group theory that this means the two generators have no nontrivial relations between them. The reason it is free is because the intersection of the two circles (whose fundamental groups we know separately) is simply connected. If it were not, then there would be some relations. For instance, let us look at the torus $T$, and recall its formulation as a quotient of a disk.
Here the top and bottom edges are identified, as are the two sides, and so we can label the loops as follows:
Then the boundary of the original disk is just $aba^{-1}b^{-1}$. Since any loop bounding a disk is homotopic to a constant loop (the straight line homotopy works here), we see that the loop $aba^{-1}b^{-1}$ is homotopic to the trivial loop in $T$. That is, we still have two generators corresponding to the longitudinal and latitudinal circles, but traversing them in the right order is the same as doing nothing. This youtube video gives an animation showing the homotopy in action. So the fundamental group of the torus has presentation $\pi_1(T) = \langle a, b \mid aba^{-1}b^{-1} \rangle$.
This is obviously just $\mathbb{Z} \times \mathbb{Z}$, since the defining relation says precisely that the two generators commute: $aba^{-1}b^{-1} = 1$ is equivalent to $ab = ba$. That is, it is the free abelian group on two generators.
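We can model this distinction concretely with words in the generators. Writing $a^{-1}, b^{-1}$ as `A`, `B`, free reduction distinguishes $ab$ from $ba$ (so they differ in $\mathbb{Z} * \mathbb{Z}$), while taking exponent sums — the abelianization — identifies them. The helper names here are hypothetical:

```python
def reduce_word(word):
    """Freely reduce a word in generators 'a','b' (inverses 'A','B')
    by cancelling adjacent pairs x x^{-1} with a stack."""
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def abelianized(word):
    """Exponent sums of a and b -- the image of the word in Z x Z."""
    return (word.count("a") - word.count("A"),
            word.count("b") - word.count("B"))

# 'ab' and 'ba' are distinct as elements of the free group Z * Z ...
assert reduce_word("ab") != reduce_word("ba")
# ... but equal after abelianizing, i.e., after imposing ab = ba:
assert abelianized("ab") == abelianized("ba")
# the torus relator itself is trivial in the abelianization
assert abelianized("abAB") == (0, 0)
```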
Before we can prove the theorem in general we need to define an induced homomorphism. Given two spaces $X, Y$ and a map $f : X \to Y$, one gets a canonical induced map $f_* : \pi_1(X, x_0) \to \pi_1(Y, f(x_0))$. If we consider basepoints, then we also require that $f$ preserves the basepoint. The induced map is defined by setting $f_*([\gamma])$ to be the equivalence class of the loop $f \circ \gamma$. Recalling that $f \circ \gamma$ is indeed a loop whenever $f$ is continuous, it is not hard to see that this is a homomorphism of groups. It is certainly not in general an isomorphism; for instance the trivial map which sends all of $X$ to a single point will not preserve nontrivial loops. But as we will see in the van Kampen theorem, for some maps it is.
One interesting example of an induced homomorphism is that of the inclusion map. Let $Y \subset X$ be a subspace, and let $i : Y \to X$ be the inclusion map $i(y) = y$. This will often not induce an isomorphism on fundamental groups. For example, the inclusion of $S^1$ into $\mathbb{R}^2$ is not a constant map, but it induces the constant map on fundamental groups, since there is only one group homomorphism from any group to the trivial group $\pi_1(\mathbb{R}^2)$. That is, the kernel of $i_*$ is all of $\pi_1(S^1)$. Intuitively we are “filling in” the hole in $S^1$ with the ambient space from $\mathbb{R}^2$ so that the loop generating $\pi_1(S^1)$ is homotopic to the trivial loop. Thus, we are killing all of the loops of the circle.
The more important remark for the van Kampen theorem is, recalling our primer on group theory, that any collection of homomorphisms on groups $\varphi_\alpha : G_\alpha \to H$ extends uniquely to a homomorphism on the free product $\varphi : G_1 * G_2 * \cdots \to H$. The main goal of this theorem is to give us a way to build up fundamental groups in the same way we build up topological spaces. And it does so precisely by saying that this map on the free product is surjective. Using the first isomorphism theorem (again see our primer), this will allow us to compute the fundamental group of a space $X$ given subspaces $U, V \subset X$ by analyzing the kernels of the homomorphisms induced by the inclusions $U, V \to X$.
The last thing we need to set up this theorem is a diagram. If we have two subspaces $U, V \subset X$ and we look at the inclusion $U \cap V \to X$, we could define it in one of two equivalent ways: first by going through $U$ and then to $X$, or else by going through $V$ first. The diagram is as follows:
However, the induced homomorphism will depend on this choice! So we denote $i_1 : U \cap V \to U$ to include into the first piece, and $i_2 : U \cap V \to V$ to include into the second piece. The diagram here is:
Now we can state the theorem (and it is still quite a mouthful).
Theorem: (van Kampen) Let $X$ be the union of a family of path-connected open subspaces $U_\alpha$, each of which contains the chosen basepoint $x_0 \in X$. Let $i_\alpha : U_\alpha \to X$ be the inclusions, $(i_\alpha)_* : \pi_1(U_\alpha, x_0) \to \pi_1(X, x_0)$ the induced homomorphisms, and let $\Phi : *_\alpha \pi_1(U_\alpha, x_0) \to \pi_1(X, x_0)$ be the unique extension of the $(i_\alpha)_*$ to the free product.
- If all intersections of pairs $U_\alpha \cap U_\beta$ are path-connected, then $\Phi$ is a surjection.
- If in addition all triple intersections $U_\alpha \cap U_\beta \cap U_\gamma$ are path-connected, then the kernel of $\Phi$ is the smallest normal subgroup $N$ generated by the elements $(i_{\alpha\beta})_*(\gamma) (i_{\beta\alpha})_*(\gamma)^{-1}$ for $\gamma \in \pi_1(U_\alpha \cap U_\beta)$, where $i_{\alpha\beta} : U_\alpha \cap U_\beta \to U_\alpha$ denotes the inclusion.
In particular, the first isomorphism theorem gives an isomorphism $\pi_1(X) \cong \left( *_\alpha \pi_1(U_\alpha) \right) / N$. That is, we can compute the fundamental group of $X$ by knowing the fundamental groups of the pieces and a little bit of extra information.
We do not have the stamina to prove such a massive theorem on this blog. However, since we have done so much just to state it, we would be cheating the reader by omitting any examples of its usage.
Let’s again look at the torus $T$. Viewing it as in our previous primer on constructing topological spaces, it is the following quotient of a disk:
Split the disk into two subspaces as follows:
Note how these spaces overlap in an annulus whose fundamental group we’ve already seen is $\mathbb{Z}$. Moreover, the fundamental group of the inner piece $U$ is trivial (it is a disk), and the fundamental group of the outer piece $V$ is $\mathbb{Z} * \mathbb{Z}$. To see the latter, note that the torus with a disk removed is homotopy equivalent to a wedge of two circles. A simple exercise (again using the van Kampen theorem) finishes the computation of $\pi_1(S^1 \vee S^1) = \mathbb{Z} * \mathbb{Z}$. As we said, the intersection has fundamental group $\mathbb{Z}$ (it is just an annulus).
So according to the van Kampen theorem, the fundamental group of $T$ is the free group on two generators, modulo the normal subgroup found by identifying the images of the two possible induced homomorphisms $\pi_1(U \cap V) \to \pi_1(U)$ and $\pi_1(U \cap V) \to \pi_1(V)$. On one hand the image going through the disk piece $U$ is trivial because the group itself is trivial. On the other hand the image going through $V$ is easily seen to be an oriented traversal of the boundary path $aba^{-1}b^{-1}$. Indeed, this gives rise to a single relator: $aba^{-1}b^{-1}$. So this verifies the presentation we gave for the fundamental group of the torus above.
A nearly identical argument gives a nearly identical presentation for the Klein bottle. There is a slightly different presentation for the real projective plane, and it is interesting because there is a topological hole, but the group is just $\mathbb{Z}/2\mathbb{Z}$. We leave this as an exercise to the reader, using the representation as a quotient of the disk provided in our previous primer.
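For one-relator presentations like these, the abelianization (which foreshadows the first homology group) can be computed mechanically from the exponent sums of the relator: the quotient $\mathbb{Z}^n / \langle v \rangle$ is $\mathbb{Z}^{n-1} \oplus \mathbb{Z}/d$ where $d$ is the gcd of the entries of $v$ (or all of $\mathbb{Z}^n$ if $v = 0$). A sketch, taking as assumptions the standard presentations of the torus, Klein bottle, and projective plane, with inverse generators written as capital letters:

```python
from math import gcd
from functools import reduce

def one_relator_abelianization(num_gens, relator):
    """Abelianization of <g_1,...,g_n | r>: exponent-sum the relator to a
    vector v, then Z^n / <v> is Z^(n-1) plus Z/d, d = gcd of entries of v."""
    exps = [0] * num_gens
    for ch in relator:
        idx = ord(ch.lower()) - ord("a")       # 'a' -> 0, 'b' -> 1, ...
        exps[idx] += 1 if ch.islower() else -1
    d = reduce(gcd, (abs(e) for e in exps), 0)
    if d == 0:
        return ("Z^%d" % num_gens, None)       # relator dies: free abelian
    torsion = None if d == 1 else d
    return ("Z^%d" % (num_gens - 1), torsion)

print(one_relator_abelianization(2, "abAB"))  # torus: Z^2, no torsion
print(one_relator_abelianization(2, "abaB"))  # Klein bottle: Z and Z/2
print(one_relator_abelianization(1, "aa"))    # RP^2: Z/2, free rank 0
```

The $\mathbb{Z}/2\mathbb{Z}$ torsion factor appearing for exactly the two non-orientable surfaces is the phenomenon mentioned above.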
Our second example is that of the $n$-sphere $S^n$. We already know $\pi_1(S^1) = \mathbb{Z}$, but in fact $\pi_1(S^n)$ is trivial for all larger $n$. Inductively, we can construct $S^n$ from two copies of the open ball by taking one to be the northern hemisphere, one to be the southern hemisphere, and their intersection to be a thickened equator (or rather, something homotopy equivalent to $S^{n-1}$).
For the case of $S^2$, we have that each hemisphere is an open ball centered at one of the poles (though the center is irrelevant), and the intersection is an annulus which is homotopy equivalent to $S^1$. Each of the two pieces is simply connected (in fact, homeomorphic to $\mathbb{R}^2$), and so by van Kampen’s theorem the fundamental group of $S^2$ is the free product of two trivial groups (modulo the trivial subgroup), and is hence the trivial group.
This argument works in general: if we know that each of the hemispheres of $S^n$ has trivial fundamental group, and each of the possible intersections is path-connected (which is easy to see by looking at the construction via a simplicial complex), then van Kampen’s theorem guarantees that the fundamental group of $S^n$ is trivial.
So as we have seen, the fundamental group is a nice way to compute the number (or the structure) of the one-dimensional holes of a topological space. Unfortunately, even for the nicest of spaces (simplicial complexes) the problem of computing fundamental groups is in general undecidable. In fact, we get stuck at a simpler problem. The problem of determining whether the fundamental group of a finite simplicial complex is trivial is undecidable.
That is, for programmers the fundamental group is practically useless.
Higher Homotopy Groups
There is an obvious analogue for 2-, 3-, and $n$-dimensional holes. That is, we can define the $n$-th homotopy group of a space $X$, denoted $\pi_n(X)$, to be the set of homotopy equivalence classes of maps $S^n \to X$. Homotopy groups for $n \geq 2$ are called higher homotopy groups.
As one would expect, higher homotopy groups are much more difficult to reason about and even harder to compute. In fact, the only reason we bring this up in this primer is to intimidate the reader: we don’t even know a general way to compute the higher homotopy groups of the sphere.
In particular, here is a table of the known higher homotopy groups of the sphere.
The thick black line shows the boundary between where the patterns are well-known and understood (below) and the untamed wilderness (above). This table solidifies how ridiculous the higher homotopy groups can be. As such, they are unsuitable for computational purposes.
Nevertheless, the homotopy groups are relatively easy to understand in terms of intuition, because a homotopy is easily visualized. In our next primer, we will trade this ease of intuition for ease of computation. In particular, we will develop the notion of a homology group, and for simplicial complexes their computation will be about as simple as matrix reduction.
Once this is done, we will extend the idea of homology to apply to data sets (which should not themselves be considered as topological spaces), and we will be able to compute the topological features of a data set. This is our ultimate goal, but the mathematics we lay down along the way will have their own computational problems that we can explore on this blog.
Until next time!