Update: the mistakes made in the code posted here are fixed and explained in a subsequent post (one minor code bug was fixed here, and a less minor conceptual bug is fixed in the linked post).
In our last post in this series on topology, we defined the homology group. Specifically, we built up a topological space as a simplicial complex (a mess of triangles glued together), we defined an algebraic way to represent collections of simplices called chains as vectors in a vector space, we defined the boundary homomorphism $ \partial_k$ as a linear map on chains, and finally defined the homology groups as the quotient vector spaces

$ \displaystyle H_k(X) = \textup{ker}(\partial_k) / \textup{im}(\partial_{k+1})$
The number of holes in $ X$ was just the dimension of this quotient space.
In this post we will be quite a bit more explicit. Because the chain groups are vector spaces and the boundary mappings are linear maps, they can be represented as matrices whose dimensions depend on our simplicial complex structure. Better yet, if we have explicit representations of our chains by way of a basis, then we can use row-reduction techniques to write the matrix in a standard form.
Of course the problem arises when we want to work with two matrices simultaneously (to compute the kernel-mod-image quotient above). This is not computationally any more difficult, but it requires some theoretical fiddling. We will need to dip a bit deeper into our linear algebra toolboxes to see how it works, so the rusty reader should brush up on their linear algebra before continuing (or at least take some time to sort things out if or when confusion strikes).
Without further ado, let’s do an extended example and work our way toward a general algorithm. As usual, all of the code used for this post is available on this blog’s Github page.
Two Big Matrices
Recall our example simplicial complex from last time.
We will compute $ H_1$ of this simplex (which we saw last time was $ \mathbb{Q}$) in a more algorithmic way than we did last time.
Once again, we label the vertices 0-4 so that the extra “arm” has vertex 4 in the middle, and its two endpoints are 0 and 2. This gave us orientations on all of the simplices, and the following chain groups. Since the vertex labels (and ordering) are part of the data of a simplicial complex, we have made no choices in writing these down.
Now given our known definition of $ \partial_k$ as an alternating sum from last time, we can give a complete specification of the boundary map as a matrix. For $ \partial_1$, this would be

$ \displaystyle \begin{array}{r|rrrrrrrr} & [0,1] & [0,2] & [0,3] & [0,4] & [1,2] & [1,3] & [2,3] & [2,4] \\ \hline 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & -1 & -1 & 0 & 0 \\ 2 & 0 & 1 & 0 & 0 & 1 & 0 & -1 & -1 \\ 3 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}$
where the row labels are the basis for $ C_0(X)$ and the column labels are the basis for $ C_1(X)$. Similarly, $ \partial_2$ is

$ \displaystyle \begin{array}{r|rrrr} & [0,1,2] & [0,1,3] & [0,2,3] & [1,2,3] \\ \hline [0,1] & 1 & 1 & 0 & 0 \\ [0,2] & -1 & 0 & 1 & 0 \\ [0,3] & 0 & -1 & -1 & 0 \\ [0,4] & 0 & 0 & 0 & 0 \\ [1,2] & 1 & 0 & 0 & 1 \\ [1,3] & 0 & 1 & 0 & -1 \\ [2,3] & 0 & 0 & 1 & 1 \\ [2,4] & 0 & 0 & 0 & 0 \end{array}$
The reader is encouraged to check that these matrices are written correctly by referring to the formula for $ \partial$ as given last time.
Remember the crucial property of $ \partial$, that $ \partial^2 = \partial_k \partial_{k+1} = 0$. Indeed, the composition of the two boundary maps just corresponds to the matrix product of the two matrices, and one can verify by hand that the above two matrices multiply to the zero matrix.
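To check this concretely, here is a quick verification in numpy (the array names bd1 and bd2 are our own; the entries are just the two matrices above, with columns ordered as displayed):

import numpy

bd1 = numpy.array([
    [-1, -1, -1, -1,  0,  0,  0,  0],
    [ 1,  0,  0,  0, -1, -1,  0,  0],
    [ 0,  1,  0,  0,  1,  0, -1, -1],
    [ 0,  0,  1,  0,  0,  1,  1,  0],
    [ 0,  0,  0,  1,  0,  0,  0,  1],
], dtype=float)

bd2 = numpy.array([
    [ 1,  1,  0,  0],
    [-1,  0,  1,  0],
    [ 0, -1, -1,  0],
    [ 0,  0,  0,  0],
    [ 1,  0,  0,  1],
    [ 0,  1,  0, -1],
    [ 0,  0,  1,  1],
    [ 0,  0,  0,  0],
], dtype=float)

print(numpy.all(bd1.dot(bd2) == 0))  # True: the composition is the zero map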
We know from basic linear algebra how to compute the kernel of a linear map expressed as a matrix: column reduce and inspect the columns of zeros. Since the process of column reducing is really a change of basis, we can encapsulate the reduction inside a single invertible matrix $ A$, which, when multiplied on the right of $ \partial$, gives us the reduced form of the latter. So write the reduced form of $ \partial_1$ as $ \partial_1 A$.
However, now we’re using two different sets of bases for the shared vector space involved in $ \partial_1$ and $ \partial_2$. In general, it will no longer be the case that $ \partial_kA\partial_{k+1} = 0$. The way to alleviate this is to perform the “corresponding” change of basis in $ \partial_{k+1}$. To make this idea more transparent, we return to the basics.
Changing Two Matrices Simultaneously
Recall that a matrix $ M$ represents a linear map between two vector spaces $ f : V \to W$. The actual entries of $ M$ depend crucially on the choice of a basis for the domain and codomain. Indeed, if $ v_i$ form a basis for $ V$ and $ w_j$ for $ W$, then the $ k$-th column of the matrix representation $ M$ is defined to be the coefficients of the representation of $ f(v_k)$ in terms of the $ w_j$. We hope to have nailed this concept down firmly in our first linear algebra primer.
Recall further that row operations correspond to changing a basis for the codomain, and column operations correspond to changing a basis for the domain. For example, the idea of swapping columns $ i,j$ in $ M$ gives a new matrix which is the representation of $ f$ with respect to the (ordered) basis for $ V$ which swaps the order of $ v_i , v_j$. Similar things happen for all column operations (they all correspond to manipulations of the basis for $ V$), while analogously row operations implicitly transform the basis for the codomain. Note, though, that the connection between row operations and transformations of the basis for the codomain is slightly more complicated than it is for the column operations. We will explicitly see how it works later in the post.
And so if we’re working with two maps $ A: U \to V$ and $ B: V \to W$, and we change a basis for $ V$ in $ B$ via column reductions, then in order to be consistent, we need to change the basis for $ V$ in $ A$ via “complementary” row reductions. That is, if we call the change of basis matrix $ Q$, then we’re implicitly sticking $ Q$ in between the composition $ BA$ to get $ (BQ)A$. This is not the same map as $ BA$, but we can make it the same map by adding a $ Q^{-1}$ in the right place:
$ \displaystyle BA = B(QQ^{-1})A = (BQ)(Q^{-1}A)$
Indeed, whenever $ Q$ is a change of basis matrix so is $ Q^{-1}$ (trivially), and moreover the operations that $ Q$ performs on the columns of $ B$ are precisely the operations that $ Q^{-1}$ performs on the rows of $ A$ (this is because elementary row operations take different forms when multiplied on the left or right).
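Here is a quick numerical sanity check of that identity (the matrices are arbitrary, chosen only for illustration):

import numpy

A = numpy.array([[1., 2.], [3., 4.]])  # A : U -> V
B = numpy.array([[1., 0.], [1., 1.]])  # B : V -> W

# Q performs a column operation on B (add twice column 0 to column 1);
# Q^{-1} performs the inverse operation on the rows of A.
Q = numpy.array([[1., 2.], [0., 1.]])
Qinv = numpy.linalg.inv(Q)

print(numpy.allclose(B.dot(A), B.dot(Q).dot(Qinv).dot(A)))  # True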
Coming back to our boundary operators, we want a canonical way to view the image of $ \partial_{k+1}$ as sitting inside the kernel of $ \partial_k$. If we go ahead and use column reductions to transform $ \partial_k$ into a form where the kernel is easy to read off (as the columns consisting entirely of zeroes), then the corresponding row operations, when performed on $ \partial_{k+1}$, will tell us exactly the image of $ \partial_{k+1}$ inside the kernel of $ \partial_k$.
This last point is true precisely because $ \textup{im} \partial_{k+1} \subset \textup{ker} \partial_k$. This fact guarantees that the irrelevant rows of the reduced version of $ \partial_{k+1}$ are all zero.
Let’s go ahead and see this in action on our two big matrices above. For $ \partial_1$, call the column reduction matrix $ A$, so that the reduced matrix is $ \partial_1 A$ and the correspondingly transformed version of $ \partial_2$ is $ A^{-1} \partial_2$.
As a side note, we got these matrices by slightly modifying the code from our original post on row reduction to output the change of basis matrix in addition to performing row reduction. It turns out one can implement column reduction as row reduction of the transpose, and the change of basis matrix you get from this process will be the transpose of the change of basis matrix you want (by $ (AB)^\textup{T} = (B^\textup{T}A^\textup{T})$). Though the code is particularly ad-hoc, we include it with the rest of the code used in this post on this blog’s Github page.
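One can also accumulate the change of basis without any transposing: mirror every column operation performed on the boundary matrix onto an identity matrix. The function below is our own sketch of this idea (it is not the code in the repository); it returns the reduced matrix $ R$ together with an invertible $ Q$ satisfying $ R = MQ$.

import numpy

def columnReduce(M):
    # Column-reduce M, returning (R, Q) with R = M.dot(Q) and Q invertible.
    R = numpy.array(M, dtype=float)
    numRows, numCols = R.shape
    Q = numpy.identity(numCols)

    i, j = 0, 0
    while i < numRows and j < numCols:
        # find a column at or after j with a nonzero entry in row i
        pivotCol = j
        while pivotCol < numCols and R[i, pivotCol] == 0:
            pivotCol += 1
        if pivotCol == numCols:
            i += 1
            continue

        # every column operation on R is mirrored on Q
        R[:, [j, pivotCol]] = R[:, [pivotCol, j]]
        Q[:, [j, pivotCol]] = Q[:, [pivotCol, j]]

        pivot = R[i, j]
        R[:, j] /= pivot
        Q[:, j] /= pivot

        for k in range(numCols):
            if k != j and R[i, k] != 0:
                c = R[i, k]
                R[:, k] -= c * R[:, j]
                Q[:, k] -= c * Q[:, j]

        i += 1
        j += 1

    return R, Q

Applied to $ \partial_1$, the returned $ Q$ plays the role of $ A$ above, and $ A^{-1}$ is available via numpy.linalg.inv, though the algorithm later in this post avoids computing inverses altogether.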
Now let’s inspect the two matrices $ \partial_1 A$ and $ A^{-1} \partial_2$ more closely. The former has four “pivots” left over, and this corresponds to the rank of the matrix being 4. Moreover, the four basis vectors representing the columns with nonzero pivots, which we’ll call $ v_1, v_2, v_3, v_4$ (we don’t care what their values are), span a complementary subspace to the kernel of $ \partial_1$. Hence, the remaining four vectors (which we’ll call $ v_5, v_6, v_7, v_8$) span the kernel. In particular, this says that the kernel has dimension 4.
On the other hand, we performed the same transformation of the basis of $ C_1(X)$ for $ \partial_2$. Looking at the matrix that resulted, we see that the first four rows and the last row (representing $ v_1, v_2, v_3, v_4, v_8$) are entirely zeros, and so the image of $ \partial_2$ intersects their span trivially. The remaining three rows (representing $ v_5, v_6, v_7$) have nonzero pivots. This tells us exactly that the image of $ \partial_2$ is spanned by $ v_5, v_6, v_7$.
And now, the coup de grâce, the quotient to get homology is simply

$ \displaystyle H_1(X) = \textup{span} \{ v_5, v_6, v_7, v_8 \} / \textup{span} \{ v_5, v_6, v_7 \}$
And the dimension of the homology group is 1, as desired.
The General Algorithm
It is no coincidence that things worked out as nicely as they did. The process we took of simultaneously rewriting two matrices with respect to a common basis is the bulk of the algorithm to compute homology. Since we’re really only interested in the dimensions of the homology groups, we just need to count pivots. If the number of pivots arising in $ \partial_k$ is $ y$ and the number of pivots arising in $ \partial_{k+1}$ is $ z$, and the dimension of $ C_k(X)$ is $ n$, then the dimension of $ H_k(X)$ is exactly
$ \displaystyle (n-y) - z = \textup{dim}(\textup{ker} \partial_k) - \textup{dim}(\textup{im}\partial_{k+1})$
And it is no coincidence that the pivots lined up so nicely to allow us to count dimensions this way. It is a minor exercise to prove it formally, but the fact that the composition $ \partial_k \partial_{k+1} = 0$ implies that the reduced version of $ \partial_{k+1}$ will have an almost reduced row-echelon form (the only difference being the rows of zeros interspersed above, below, and possibly between pivot rows).
As the reader may have guessed at this point, we don’t actually need to compute $ A$ and $ A^{-1}$. Instead of this, we can perform the column/row reductions simultaneously on the two matrices. The above analysis helped us prove the algorithm works, and with that guarantee we can throw out the analytical baggage and just compute the damn thing.
Indeed, assuming the input is already processed as two matrices representing the boundary operators with respect to the standard bases of the chain groups, computing homology is only slightly more difficult than row reducing in the first place. Putting our homology where our mouth is, we’ve implemented the algorithm in Python. As usual, the entire code used in this post is available on this blog’s Github page.
The first step is writing auxiliary functions to do elementary row and column operations on matrices. For this post, we will do everything in numpy (which makes the syntax shorter than standard Python syntax, but dependent on the numpy library).
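The exact versions we used are in the repository; a minimal take on them (assuming the matrices are floating-point numpy arrays, with names matching the code below) might look like this:

import numpy

def rowSwap(A, i, j):
    temp = numpy.copy(A[i, :])
    A[i, :] = A[j, :]
    A[j, :] = temp

def colSwap(A, i, j):
    temp = numpy.copy(A[:, i])
    A[:, i] = A[:, j]
    A[:, j] = temp

def scaleRow(A, i, c):
    A[i, :] *= c

def scaleCol(A, j, c):
    A[:, j] *= c

def rowCombine(A, addTo, scaled, scaleAmt):
    # add scaleAmt * (row 'scaled') to row 'addTo', in place
    A[addTo, :] += scaleAmt * A[scaled, :]

def colCombine(A, addTo, scaled, scaleAmt):
    # add scaleAmt * (column 'scaled') to column 'addTo', in place
    A[:, addTo] += scaleAmt * A[:, scaled]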
From here, the main meat of the algorithm is doing column reduction on one matrix, and applying the corresponding row operations on the other.
def simultaneousReduce(A, B):
    if A.shape[1] != B.shape[0]:
        raise Exception("Matrices have the wrong shape.")

    numRows, numCols = A.shape  # col reduce A

    i, j = 0, 0
    while True:
        if i >= numRows or j >= numCols:
            break

        if A[i, j] == 0:
            # look for a pivot column to the right of j in row i
            nonzeroCol = j
            while nonzeroCol < numCols and A[i, nonzeroCol] == 0:
                nonzeroCol += 1

            if nonzeroCol == numCols:
                i += 1
                continue

            colSwap(A, j, nonzeroCol)
            rowSwap(B, j, nonzeroCol)  # mirror the swap as a row operation on B

        pivot = A[i, j]
        scaleCol(A, j, 1.0 / pivot)
        scaleRow(B, j, 1.0 / pivot)

        for otherCol in range(0, numCols):
            if otherCol == j:
                continue
            if A[i, otherCol] != 0:
                scaleAmt = -A[i, otherCol]
                colCombine(A, otherCol, j, scaleAmt)
                rowCombine(B, j, otherCol, -scaleAmt)  # the inverse row operation

        i += 1
        j += 1

    return A, B
This more or less parallels the standard algorithm for row-reduction (with the caveat that all the indices are swapped to do column-reduction). The only somewhat confusing line is the call to rowCombine, which explicitly realizes the corresponding row operation as the inverse of the performed column operation. Note that for row operations, the correspondence between operations on the basis and operations on the rows is not as direct as it is for columns. What’s given above is the true correspondence. Writing down lots of examples will reveal why, and we leave that as an exercise to the reader.
Then the actual algorithm to compute homology is just a matter of counting pivots. Here are two pivot-counting functions in a typical numpy fashion:
def numPivotCols(A):
    z = numpy.zeros(A.shape[0])
    return [numpy.all(A[:, j] == z) for j in range(A.shape[1])].count(False)

def numPivotRows(A):
    z = numpy.zeros(A.shape[1])
    return [numpy.all(A[i, :] == z) for i in range(A.shape[0])].count(False)
And the final function is just:
def bettiNumber(d_k, d_kplus1):
    # copy the inputs, as floats, so the in-place scaling in the reduction works
    A, B = numpy.copy(d_k).astype(float), numpy.copy(d_kplus1).astype(float)
    simultaneousReduce(A, B)

    dimKChains = A.shape[1]
    kernelDim = dimKChains - numPivotCols(A)
    imageDim = numPivotRows(B)

    return kernelDim - imageDim
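As a sanity check, feeding it the two boundary matrices bd1 and bd2 from the beginning of the post recovers the computation we did by hand:

print(bettiNumber(bd1, bd2))  # 1: the dimension of H_1 of our example complex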
And there we have it! We’ve finally tackled the beast, and written a program to compute algebraic features of a topological space.
The reader may be curious as to why we didn’t come up with a more full-bodied representation of a simplicial complex and write an algorithm which accepts a simplicial complex and computes all of its homology groups. We’ll leave this direct approach as a (potentially long) exercise to the reader, because coming up in this series we are going to do one better. Instead of computing the homology groups of just one simplicial complex by repeating one algorithm many times, we’re going to compute all the homology groups of a whole family of simplicial complexes in a single bound. This family of simplicial complexes will be constructed from a data set, and so, in grandiose words, we will compute the topological features of data.
If it sounds exciting, that’s because it is! We’ll be exploring a cutting-edge research field known as persistent homology, and we’ll see some of the applications of this theory to data analysis.
This series on topology has been long and hard, but we are quickly approaching the topics where we can actually write programs. For this and the next post on homology, the most important background we will need is a solid foundation in linear algebra, specifically in row-reducing matrices (and the interpretation of row-reduction as a change of basis of a linear operator).
Last time we engaged in a whirlwind tour of the fundamental group and homotopy theory. And we mean “whirlwind” as it sounds; it was all over the place in terms of organization. The most important fact that one should take away from that discussion is the idea that we can compute, algebraically, some qualitative features about a topological space related to “n-dimensional holes.” For one-dimensional things, a hole would look like a circle, and for two-dimensional things, it would look like a hollow sphere, etc. More importantly, we saw that this algebraic data, which we called the fundamental group, is a topological invariant. That is, if two topological spaces have different fundamental groups, then they are “fundamentally” different under the topological lens (they are not homeomorphic, and not even homotopy equivalent).
Unfortunately the main difficulty of homotopy theory (and part of what makes it so interesting) is that these “holes” interact with each other in elusive and convoluted ways, and the algebra reflects it almost too well. Part of the problem with the fundamental group is that it deftly eludes our domain of interest: computing it is complicated!
What we really need is a coarser invariant. If we can find a “stupider” invariant, it might just be simple enough to compute efficiently. Perhaps unsurprisingly, these will take the form of finitely-generated abelian groups (the most well-understood class of groups), with one for each dimension. Now we’re starting to see exactly why algebraic topology is so difficult; it has an immense list of prerequisite topics! If we’re willing to skip over some of the more nitty-gritty details (and we must, lest we take a huge diversion to discuss Tor and the exact sequences in the universal coefficient theorem), then we can also do the same calculations over a field. In other words, the algebraic objects we’ll define called “homology groups” are really vector spaces, and so row-reduction will be our basic computational tool to analyze them.
Once we have the basic theory down, we’ll see how we can write a program which accepts as input any topological space (represented in a particular form) and produces as output a list of the homology groups in every dimension. The dimensions of these vector spaces (their ranks, as finitely-generated abelian groups) are interpreted as the number of holes in the space for each dimension.
Recall Simplicial Complexes
In our post on constructing topological spaces, we defined the standard $ k$-simplex and the simplicial complex. We recall the latter definition here, and expand upon it.
Definition: A simplicial complex is a topological space realized as the union of a collection of simplices $ \Sigma$ (of possibly varying dimension) which has the following two properties:
Any face of a simplex in $ \Sigma$ is also in $ \Sigma$.
The intersection of any two simplices of $ \Sigma$ is also a simplex of $ \Sigma$.
We can realize a simplicial complex by gluing together pieces of increasing dimension. First start by taking a collection of vertices (0-simplices) $ X_0$. Then take a collection of intervals (1-simplices) $ X_1$ and glue their endpoints onto the vertices in any way. Note that because we require every face of an interval to again be a simplex in our complex, we must glue each endpoint of an interval onto a vertex in $ X_0$. Continue this process with $ X_2$, a set of 2-simplices: we must glue each edge of a 2-simplex precisely along an edge in $ X_1$. We can continue this process until we reach a terminating set $ X_n$. It is easy to see that the union of the $ X_i$ forms a simplicial complex. Define the dimension of the complex to be $ n$.
There are some picky restrictions on how we glue things that we should mention. For instance, we could not contract all edges of a 2-simplex $ \sigma$ and glue it all to a single vertex in $ X_0$. The reason for this is that $ \sigma$ would no longer be a 2-simplex! Indeed, we’ve destroyed its original vertex set. The gluing process hence needs to preserve the original simplex’s boundary. Moreover, one property that follows from the two conditions above is that any simplex in the complex is uniquely determined by its vertices (for otherwise, the intersection of two such non-uniquely specified simplices would not be a single simplex).
We also have to remember that we’re imposing a specific ordering on the vertices of a simplex. In particular, if we label the vertices of an $ n$-simplex $ 0, \dots, n$, then this imposes an orientation on the edges where an edge of the form $ \left \{ i,j \right \}$ has the orientation $ (i,j)$ if $ i < j$, and $ (j,i)$ otherwise. The 2-dimensional faces, then, are “oriented” in increasing order of their three vertices. Higher dimensional simplices are oriented in a similar way, though we rarely try to picture this (the theory of orientations is a question best left for smooth manifolds; we won’t be going there any time soon). Here are, for example, two different ways to pick orientations of a 2-simplex:
Two possible orientations of a 2-simplex.
It is true, but a somewhat lengthy exercise, that the topology of a simplicial complex does not change under a consistent shuffling of the orientations across all its simplices. Nor does it change depending on how we realize a space as a simplicial complex. These kinds of results are crucial to the welfare of the theory, but have been proved once and we won’t bother reproving them here.
As a larger example, here is a simplicial complex representing the torus. It’s quite a bit more complicated than our usual quotient of a square, but it’s based on the same idea. The left and right edges are glued together, as are the top and bottom, with appropriate orientations. The only difficulty is that we need each simplex to be uniquely determined by its vertices. While this construction does not use the smallest possible number of simplices to satisfy that condition, it is the simplest to think about.
A possible realization of the torus as a simplicial complex. As an exercise, the reader is invited to label the edges and fill in the orientations on the simplices to be consistent across the entire complex. Remember that the result should coincide with our classical construction via the quotient of the disk, so some of the edges on the sides will coincide with those on the opposite sides, and the orientations must line up.
Taking a known topological space (like the torus) and realizing it as a simplicial complex is known as triangulating the space. A space which can be realized as a simplicial complex is called triangulable.
The nicest thing about the simplex is that it has an easy-to-describe boundary. Geometrically, it’s obvious: the boundary of the line segment is the two endpoints; the boundary of the triangle is the union of all three of its edges; the tetrahedron has four triangular faces as its boundary; etc. But because we need an algebraic way to describe holes, we want an algebraic way to describe the boundary. In particular, we have two important criteria that any algebraic definition must satisfy to be reasonable:
A boundary itself has no boundary.
The property of being boundariless (at least in low dimensions) coincides with our intuitive idea of what it means to be a loop.
Of course, just as with homotopy, these holes interact in ways we’re about to see, so we need to be totally concrete before we can proceed.
The Chain Group and the Boundary Operator
In order to define an algebraic boundary, we have to realize simplices themselves as algebraic objects. This is not so difficult to do: just take all “formal sums” of simplices in the complex. More rigorously, let $ X_k$ be the set of $ k$-simplices in the simplicial complex $ X$. Define the chain group $ C_k(X)$ to be the $ \mathbb{Q}$-vector space with $ X_k$ for a basis. The elements of the $ k$-th chain group are called k-chains on $ X$. That’s right, if $ \sigma, \sigma'$ are two $ k$-simplices, then we just blindly define a bunch of new “chains” as all possible “sums” and scalar multiples of the simplices. For example, sums involving two elements would look like $ a\sigma + b\sigma'$ for some $ a,b \in \mathbb{Q}$. Indeed, we include any finite sum of such simplices, as is standard in taking the span of a set of basis vectors in linear algebra.
Just for a quick example, take this very simple simplicial complex:
We’ve labeled all of the simplices above, and we can describe the chain groups quite easily. The zero-th chain group $ C_0(X)$ is the $ \mathbb{Q}$-linear span of the set of vertices $ \left \{ v_1, v_2, v_3, v_4 \right \}$. Geometrically, we might think of “the union” of two points as being, e.g., the sum $ v_1 + v_3$. And if we want to have two copies of $ v_1$ and five copies of $ v_3$, that might be thought of as $ 2v_1 + 5v_3$. Of course, there are geometrically meaningless sums like $ \frac{1}{2}v_4 - v_2 - \frac{11}{6}v_1$, but it will turn out that the algebra we use to talk about holes will not falter because of it. It’s nice to have this geometric idea of what an algebraic expression can “mean,” but in light of this nonsense it’s not a good idea to get too wedded to the interpretations.
Likewise, $ C_1(X)$ is the linear span of the set $ \left \{ e_1, e_2, e_3, e_4, e_5 \right \}$ with coefficients in $ \mathbb{Q}$. So we can talk about a “path” as a sum of simplices like $ e_1 + e_4 - e_5 + e_3$. Here we use a negative coefficient to signify that we’re travelling “against” the orientation of an edge. Note that since the order of the terms is irrelevant, the same “path” is given by, e.g. $ -e_5 + e_4 + e_1 + e_3$, which geometrically is ridiculous if we insist on reading the terms from left to right.
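Concretely, once the ordered basis $ (e_1, \dots, e_5)$ is fixed, a 1-chain is nothing more than a vector of five coefficients. For instance, in numpy the “path” above would be (a trivial illustration):

import numpy

# the 1-chain e_1 + e_3 + e_4 - e_5 in the ordered basis (e_1, ..., e_5)
chain = numpy.array([1, 0, 1, 1, -1], dtype=float)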
The same idea extends to higher dimensional groups, but as usual the visualization grows difficult. For example, in $ C_2(X)$ above, the chain group is the vector space spanned by $ \left \{ \sigma_1, \sigma_2 \right \}$. But does it make sense to have a path of triangles? Perhaps, but the geometric analogies certainly become more tenuous as dimension grows. The benefit, however, is if we come up with good algebraic definitions for the low-dimensional cases, the algebra is easy to generalize to higher dimensions.
So now we will define the boundary operator on chain groups, a linear map $ \partial : C_k(X) \to C_{k-1}(X)$, by starting in lower dimensions and generalizing. A single vertex should always be boundariless, so $ \partial v = 0$ for each vertex. Extending linearly, $ \partial$ is identically the zero map on zero-chains. For 1-simplices we have a more substantial definition: if a simplex has its orientation as $ (v_1, v_2)$, then the boundary $ \partial (v_1, v_2)$ should be $ v_2 - v_1$. That is, it’s the front end of the edge minus the back end. This defines the boundary operator on the basis elements, and we can again extend linearly to the entire group of 1-chains.
Why is this definition more sensible than, say, $ v_1 + v_2$? Using our example above, let’s see how it operates on a “path.” If we have a sum like $ e_1 + e_4 - e_5 - e_3$, then by linearity the boundary is computed as

$ \displaystyle \partial(e_1 + e_4 - e_5 - e_3) = \partial e_1 + \partial e_4 - \partial e_5 - \partial e_3 = v_2 - v_1$
That is, the result was the endpoint of our path $ v_2$ minus the starting point of our path $ v_1$. It is not hard to prove that this will work in general, since each successive edge in a path will cancel out the ending vertex of the edge before it and the starting vertex of the edge after it: the result is just one big alternating sum.
Even more important is that if the “path” is a loop (the starting and ending points are the same in our naive way to write the paths), then the boundary is zero. Indeed, any time the boundary is zero then one can rewrite the sum as a sum of “loops” (though one might have to trivially introduce cancelling factors). And so our condition for a chain to be a “loop,” which is just one step away from being a “hole,” is if it is in the kernel of the boundary operator. We have a special name for such chains: they are called cycles.
For 2-simplices, the definition is not so much harder: if we have a simplex like $ (v_0, v_1, v_2)$, then the boundary should be $ (v_1,v_2) - (v_0,v_2) + (v_0,v_1)$. If one rewrites this in a different order, then it will become apparent that this is just a path traversing the boundary of the simplex with the appropriate orientations. We wrote it in this “backwards” way to lead into the general definition: the simplices are ordered by which vertex does not occur in the face in question ($ v_0$ omitted from the first, $ v_1$ from the second, and $ v_2$ from the third).
We are now ready to extend this definition to arbitrary simplices, but a nice-looking definition requires a bit more notation. Say we have a k-simplex which looks like $ (v_0, v_1, \dots, v_k)$. Abstractly, we can write it just using the numbers, as $ [0,1,\dots, k]$. And moreover, we can denote the removal of a vertex from this list by putting a hat over the removed index. So $ [0,1,\dots, \hat{i}, \dots, k]$ represents the simplex which has all of the vertices from 0 to $ k$ excluding the vertex $ v_i$. To represent a single-vertex simplex, we will often drop the square brackets, e.g. $ 3$ for $ [3]$. This can make for some awkward looking math, but is actually standard notation once the correct context has been established.
Now the boundary operator is defined on the standard $ n$-simplex with orientation $ [0,1,\dots, n]$ via the alternating sum

$ \displaystyle \partial([0,1,\dots, n]) = \sum_{i=0}^n (-1)^i [0, 1, \dots, \hat{i}, \dots, n]$
It is trivial (but perhaps notationally hard to parse) to see that this coincides with our low-dimensional examples above. But now that we’ve defined it for the basis elements of a chain group, we automatically get a linear operator on the entire chain group by extending $ \partial$ linearly on chains.
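To see how mechanically this definition turns into linear algebra, here is a small sketch (our own illustration; the function name and input format are invented for this example) that builds the matrix of $ \partial_k$ from a list of $ k$-simplices and a list of $ (k-1)$-simplices, each simplex written as a tuple of vertex labels in increasing order:

import numpy

def boundaryMatrix(kSimplices, km1Simplices):
    # rows are indexed by (k-1)-simplices, columns by k-simplices
    index = {s: i for i, s in enumerate(km1Simplices)}
    mat = numpy.zeros((len(km1Simplices), len(kSimplices)))

    for j, simplex in enumerate(kSimplices):
        for i in range(len(simplex)):
            # delete the i-th vertex to get a face, with sign (-1)^i
            face = simplex[:i] + simplex[i + 1:]
            mat[index[face], j] = (-1) ** i

    return mat

# the boundary map of a single triangle, with edge basis ((0,1), (0,2), (1,2)):
print(boundaryMatrix([(0, 1, 2)], [(0, 1), (0, 2), (1, 2)]))
# [[ 1.]
#  [-1.]
#  [ 1.]]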
Definition: The k-cycles on $ X$ are those chains in the kernel of $ \partial$. We will call k-cycles boundariless. The k-boundaries are the image of $ \partial$.
We should note that we are making a serious abuse of notation here, since technically $ \partial$ is defined on only a single chain group. What we should do is define $ \partial_k : C_k(X) \to C_{k-1}(X)$ for a fixed dimension, and always put the subscript. In practice this is only done when it is crucial to be clear which dimension is being talked about, and otherwise the dimension is always inferred from the context. If we want to compose the boundary operator in one dimension with the boundary operator in another dimension (say, $ \partial_{k-1} \partial_k$), it is usually written $ \partial^2$. This author personally supports the abolition of the subscripts for the boundary map, because subscripts are a nuisance in algebraic topology.
All of that notation discussion is so we can make the following observation: $ \partial^2 = 0$. That is, every chain which is a boundary of a higher-dimensional chain is boundariless! This should make sense in low-dimension: if we take the boundary of a 2-simplex, we get a cycle of three 1-simplices, and the boundary of this chain is zero. Indeed, we can formally prove it from the definition for general simplices (and extend linearly to achieve the result for all simplices) by writing out $ \partial^2([0,1,\dots, n])$. With a keen eye, the reader will notice that the terms cancel out and we get zero. The reason is entirely in which coefficients are negative; the second time we apply the boundary operator the power on (-1) shifts by one index. We will leave the full details as an exercise to the reader.
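For the reader who wants a nudge on that exercise: applying $ \partial$ twice to $ [0, 1, \dots, n]$, and splitting the inner sum according to whether the second deleted vertex comes before or after the first, gives

$ \displaystyle \partial^2([0, \dots, n]) = \sum_{j < i} (-1)^{i+j} [0, \dots, \hat{j}, \dots, \hat{i}, \dots, n] + \sum_{j > i} (-1)^{i+j-1} [0, \dots, \hat{i}, \dots, \hat{j}, \dots, n]$

The shift by one in the exponent of the second sum happens because deleting vertex $ i$ first moves every later vertex down one index. Each simplex with two vertices deleted appears exactly once in each sum, with opposite signs, so everything cancels.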
So this fits our two criteria: low-dimensional examples make sense, and boundariless things (cycles) represent loops.
Recasting in Algebraic Terms, and the Homology Group
For the moment let’s give boundary operators subscripts $ \partial_k : C_k(X) \to C_{k-1}(X)$. If we recast things in algebraic terms, we can call the k-cycles $ Z_k(X) = \textup{ker}(\partial_k)$, and this will be a subspace (and a subgroup) of $ C_k(X)$ since kernels are always linear subspaces. Moreover, the set $ B_k(X)$ of k-boundaries, that is, the image of $ \partial_{k+1}$, is a subspace (subgroup) of $ Z_k(X)$. As we just saw, every boundary is itself boundariless, so $ B_k(X)$ is a subset of $ Z_k(X)$, and since the image of a linear map is always a linear subspace of the range, we get that it is a subspace too.
All of this data is usually expressed in one big diagram: the chain groups are organized in order of decreasing dimension, and the boundary maps connect them.

$ \displaystyle \cdots \to 0 \to 0 \to C_2(X) \xrightarrow{\partial_2} C_1(X) \xrightarrow{\partial_1} C_0(X) \xrightarrow{\partial_0} 0$
Since our example (the “simple space” of two triangles from the previous section) only has simplices in dimensions zero, one, and two, we additionally extend the sequence of groups to an infinite sequence by adding trivial groups and zero maps, as indicated. The condition that $ \textup{im} \partial_{k+1} \subset \textup{ker} \partial_k$, which is equivalent to $ \partial^2 = 0$, is what makes this sequence a chain complex. As a side note, every sequence of abelian groups and group homomorphisms which satisfies the boundary requirement is called an algebraic chain complex. This foreshadows that there are many different types of homology theory, and they are unified by these kinds of algebraic conditions.
Now, geometrically we want to say, “The holes are all those cycles (loops) which don’t arise as the boundaries of higher-dimensional things.” In algebraic terms, this would correspond to a quotient space (really, a quotient group, which we covered in our first primer on groups) of the k-cycles by the k-boundaries. That is, a cycle would be considered a “trivial hole” if it is a boundary, and two “different” cycles would be considered the same hole if their difference is a k-boundary. This is the spirit of homology, and formally, we define the homology group (vector space) as follows.
Definition: The $ k$-th homology group of a simplicial complex $ X$, denoted $ H_k(X)$, is the quotient vector space $ Z_k(X) / B_k(X) = \textup{ker}(\partial_k) / \textup{im}(\partial_{k+1})$. Two elements of a homology group which are equivalent (their difference is a boundary) are called homologous.
The number of $ k$-dimensional holes in $ X$ is thus realized as the dimension of $ H_k(X)$ as a vector space.
The quotient mechanism really is doing all of the work for us here. Any time we have two holes and we’re wondering whether they represent truly different holes in the space (perhaps we have a closed loop of edges, and another which is slightly longer but does not quite use the same edges), we can determine this by taking their difference and seeing if it bounds a higher-dimensional chain. If it does, then the two chains are the same, and if it doesn’t then the two chains carry intrinsically different topological information.
For particular dimensions, there are some neat facts (which obviously require further proof) that make this definition more concrete.
The dimension of $ H_0(X)$ is the number of connected components of $ X$. Therefore, computing homology generalizes the graph-theoretic methods of computing connected components.
$ H_1(X)$ is the abelianization of the fundamental group of $ X$. Roughly speaking, $ H_1(X)$ is the closest approximation of $ \pi_1(X)$ by a $ \mathbb{Q}$ vector space.
Now that we’ve defined the homology group, let’s end this post by computing all the homology groups for this example space:
This is a sphere (which can be triangulated as the boundary of a tetrahedron) with an extra “arm.” Note how the edge needs an extra vertex to maintain uniqueness. This space is a nice example because it has one-dimensional homology in dimension zero (one connected component), dimension one (the arm is like a copy of the circle), and dimension two (the hollow sphere part). Let’s verify this algebraically.
Let’s start by labelling the vertices of the tetrahedron 0, 1, 2, 3, so that the extra arm attaches at 0 and 2, and call the extra vertex on the arm 4. This completely determines the orientations for the entire complex, as seen below.
Since $ \partial_0$ is actually the zero map, $ Z_0(X) = C_0(X)$ and all five vertices generate the kernel. The quotient construction imposes that two vertices (two elements of the homology group) are considered equivalent if their difference is a boundary. It is easy to see that (indeed, just by the first four generators of the image) all vertices are equivalent to 0, so there is a unique generator of homology, and the vector space is isomorphic to $ \mathbb{Q}$. There is exactly one connected component. Geometrically we can realize this, because two vertices are homologous if and only if there is a “path” of edges from one vertex to the other. This chain will indeed have as its boundary the difference of the two vertices.
We can compute the first homology $ H_1(X)$ in an analogous way: compute the kernel and image separately, and then compute the quotient. The kernel of $ \partial_1$ is spanned by the four triangle boundaries together with the loop traversing the arm:

$ \displaystyle Z_1(X) = \textup{span} \{ [0,1] + [1,2] - [0,2],\ [0,1] + [1,3] - [0,3],\ [0,2] + [2,3] - [0,3],\ [1,2] + [2,3] - [1,3],\ [0,2] + [2,4] - [0,4] \}$

It takes a bit of combinatorial analysis to show that this is precisely the kernel of $ \partial_1$ (the five generators listed satisfy one linear relation, so the kernel is four-dimensional), and we will have a better method for it in the next post, but indeed this is it. As the image of $ \partial_2$ is precisely the span of the first four generators, the quotient is just the one-dimensional vector space spanned by $ [0,2] + [2,4] - [0,4]$. Hence $ H_1(X) = \mathbb{Q}$, and there is one one-dimensional hole.
Since there are no 3-simplices, the homology group $ H_2(X)$ is simply the kernel of $ \partial_2$, which is not hard to see is just generated by the chain representing the “sphere” part of the space: $ [1,2,3] - [0,2,3] + [0,1,3] - [0,1,2]$. The second homology group is thus again $ \mathbb{Q}$ and there is one two-dimensional hole in $ X$.
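For what it’s worth, the bettiNumber function from earlier in this document agrees. Since there are no 3-simplices, $ \partial_3$ is the zero map, which we can hand to it as a zero matrix with a single dummy column (reusing the bd2 array from before):

bd3 = numpy.zeros((4, 1))  # the zero map into C_2(X); one dummy column for C_3 = 0
print(bettiNumber(bd2, bd3))  # 1: the dimension of H_2(X)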
So there we have it!
Looking Forward
Next time, we will give a more explicit algorithm for computing homology for finite simplicial complexes, and it will essentially be a variation of row-reduction which simultaneously rewrites the matrix representations of the boundary operators $ \partial_{k+1}, \partial_k$ with respect to a canonical basis. This will allow us to simply count entries on the diagonals of the two matrices, and the difference will be the dimension of the quotient space, and hence the number of holes.
Being part of the subject of algebraic topology, this post assumes the reader has read our previous primers on both topology and group theory. As a warning to the reader, it is more advanced than most of the math presented on this blog, and it is woefully incomplete. Nevertheless, the aim is to provide a high level picture of the field with a peek at the details.
An Intuitive Topological Invariant
Our eventual goal is to get comfortable with the notion of the “homology group” of a topological space. That is, to each topological space we will associate a group (really, a family of groups) in such a way that whenever two topological spaces are homeomorphic their associated groups will be isomorphic. In other words, we will be able to distinguish between two spaces by computing their associated groups (if even one of the groups is different).
In general, there may be many many ways to associate a group with an object (for instance, it could be a kind of symmetry group or a group action). But what we want to do, and what will motivate both this post and the post on homology, is figure out a reasonable way to count holes in a space. Of course, the difficult part of this is determining what it means mathematically to have a “hole” in a space.
The simplest example of this is a circle $ S^1$:
That is, we think of this space (not embedded in any particular Euclidean space) as having a single one-dimensional “hole” in it. Furthermore, a sphere $ S^2$ has a two-dimensional “hole” in it (the hollow interior).
So our approach to make “holes” rigorous will follow a commonly trodden mathematical trail: come up with a definition which fits with our intuition for these special cases, and explore how the definition generalizes to other cases. In this post we will stick exclusively to one-dimensional holes (with a small exception when we take a peek at the chaos of higher homotopy groups at the end of this post), and the main object we use to represent them is called the fundamental group.
On the other hand, we will find that the fundamental group is too unwieldy to compute (and for deep reasons). Since we want to be able to readily compute the number of holes in twisted and tied-up spaces, we will need to scrap the fundamental group and try something else. Instead, we’ll derive various kinds of other groups associated to a topological space, which will collectively fall under the name homology groups. After that, we will be able to actually write a program which computes homology of sufficiently nice spaces (simplicial complexes). Our main use for this will actually be to compute the topological features of a data set, but let’s cross that bridge when we get to it.
For now, we will develop the idea of homotopy and the fundamental group. It will show us how to think geometrically about topology and give us our first interplay between topology and algebra.
Paths and Homotopy
For the remainder of this post (and all of our future posts in topology), a map will always mean a continuous function. The categorical way to say this, which we will use increasingly often as we approach more advanced topics, is by saying that continuous maps are “the morphisms in the category of topological spaces.”
So we define a path in a topological space $ X$ to be a map $ f:[0,1] \to X$ (that is, a continuous function $ [0,1] \to X$). There are many kinds of paths in a space:
An example of three paths in the plane.
Paths are necessarily oriented, and we can think of the starting point of the path as $ f(0)$, and the ending point as $ f(1)$. And if we label the starting point, say, $ a$ and the ending point $ b$ we will call $ f$ a path from $ a$ to $ b$. We call $ a,b$ the endpoints of $ f$. We also speak of “moving along a path,” in the sense that as we increase $ t$ from zero to one, we continuously move along its image in $ X$. We certainly allow paths to intersect themselves, as in the blue path given above, and we do not require that paths are smooth (in fact, they need not be differentiable in any sense, as in the green path above). We will call a path simple if it is injective (that is, it doesn’t intersect itself). The black and green paths in the figure above are simple. The most trivial of paths is the constant path. We will denote it by $ 1_x$ and it is simply defined by sending the entire interval to a single point $ t \mapsto x$.
The language of paths already gives us a nice topological invariant. We call a space $ X$ path-connected if for every two points $ a,b \in X$ there exists a path from $ a$ to $ b$. The knowledgeable reader will recognize that this is distinct from the usual topological notion of connectedness (which is in fact weaker than path-connectedness). It is not hard to see that if two topological spaces $ X, Y$ are homeomorphic, then $ X$ is path connected if and only if $ Y$ is path connected. As a quick warmup proof, we note that any map giving a homeomorphism $ \varphi: X \to Y$ is continuous, and the composition of a path with any map is again a path:
$ \displaystyle \varphi \circ f: [0,1] \to Y$
By path connectivity in $ X$ and the fact that $ \varphi$ is surjective, we can always find a path between any two points $ a,b \in Y$: just find a path between $ \varphi^{-1}(a), \varphi^{-1}(b) \in X$ and shoot it through (compose it with) $ \varphi$. The same argument goes in the reverse direction using $ \varphi^{-1}$, and this establishes the if and only if.
Back to our mission of describing holes, we want to be able to continuously transform two paths into each other. That is, in the following picture, we want to be able to say that the red and blue paths are “equivalent” because we can continuously slide one to the other, while always keeping the endpoints fixed.
We can continuously transform the red path into the blue path; these two paths are homotopic.
On the other hand, if there is a hole in the space, as shown by the black disk below, no way to slide the red path to the blue path can be continuous; it would need to “jump” over the hole, which is a non-continuous operation. Indeed, no matter how small this hole is we can never overcome this problem; even if just a single point is missing, the nature of continuity ensures it.
The black “hole” in this plane makes it so the red path cannot be continuously transformed into the blue path; these paths are not homotopic.
In order to make this rigorous, we need to define a “path of paths,” so to speak. That is, we need a continuously parameterized family of continuous maps which for the parameter 0 gives the first path, and for 1 the second path.
Definition: Let $ f, g: [0,1] \to X$ be two paths in $ X$ with the same endpoints. A homotopy from $ f$ to $ g$ is a family of paths $ H_s: [0,1] \to X$ such that the following properties hold:
$ H_0 = f$ as paths in $ X$
$ H_1 = g$ as paths in $ X$
For all $ s$, the path $ H_s(t)$ has the same endpoints as $ f$ and $ g$.
$ H_s(t)$ is continuous in $ s$ and $ t$ simultaneously
If there is a homotopy between two paths, we call the two paths homotopic, and denote the relationship $ f \simeq g$.
We often think of $ s$ as the time variable, describing how far along we are in the transformation from $ f$ to $ g$. Here is a nice animation showing a homotopy of two paths:
A homotopy of paths, credit Wikipedia.
But don’t be fooled by the suggestive nature of these paths. While it will often be the case that two paths which are homeomorphic as topological spaces in themselves (viewed as subspaces of $ X$) will be homotopic, this is neither a sufficient nor a necessary condition for a homotopy to exist. Indeed, one can come up with a homotopy between a simple curve and a non simple curve, but these are not homeomorphic spaces (one has a cut-point). And in our picture above we give an example of two simple curves which are not homotopic: there is a hole in $ X$ that stops them from being so. So the existence of a homotopy depends almost entirely on the space that the paths are in, and on where the paths are in that space. This is good, because in the end we don’t care about the paths. We just care about what they tell us about $ X$.
Often times we will not need to give explicit homotopies; they can be quite complicated to construct. But here’s one widely used example: let us take two paths $ f, f'$ in the Euclidean plane from $ a$ to $ b$. Then we can construct the straight line homotopy between them as follows. Set $ H_s(t) = (1-s)f(t) + sf'(t)$. For $ s=0$ we get the path $ f$; for $ s=1$ we get the path $ f'$; and for any fixed value of $ s$ we get a path from $ a$ to $ b$ (plug in $ t=0,1$ and verify the endpoints agree, and that the function is continuous in both variables). This example shows that all paths with the same endpoints in $ \mathbb{R}^2$ are homotopic. The same is true of $ \mathbb{R}^n$ in general, and we will use this fact later.
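Explicitly, at the endpoints we have

$ \displaystyle H_s(0) = (1-s)f(0) + sf'(0) = (1-s)a + sa = a, \qquad H_s(1) = (1-s)b + sb = b$

so each $ H_s$ really is a path from $ a$ to $ b$, and continuity in both variables is clear since $ H$ is built from sums and products of continuous functions.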
Because every path is homotopic to itself (the constant homotopy $ H_s(t) = f(t)$ for all $ s$), homotopy is symmetric ($ H_{1-s}(t)$ gives an inverse homotopy from $ g$ to $ f$), and homotopy is transitive (given in the parenthetical below), homotopy becomes an equivalence relation on paths in $ X$. So we may speak of the equivalence class of a path $ f$ as the set of all paths homotopic to it.
(Transitivity: given homotopies $ H_s(t)$ and $ G_{s}(t)$ from $ f \to g \to h$, we can construct a homotopy from $ f \to h$ as follows. The main difficulty is that the variable in our homotopy must go from 0 to 1, and so we cannot directly compose $ H$ and $ G$. Instead, we “run $ H$ twice as fast” and then “run $ G$ twice as fast.” That is, we define $ \Phi_s(t)$ piecewise to be $ H_{2s}(t)$ when $ s \in [0,1/2]$ and $ G_{2s-1}(t)$ when $ s \in [1/2, 1]$. The composition of continuous functions is continuous, so changing the $ s$ variable doesn’t break continuity. Moreover, since at time $ s=1/2$ the homotopies $ G,H$ coincide, and elsewhere they are continuous, we preserve continuity everywhere.)
This equivalence relation is precisely what we’ll use to define the existence of a hole. Intuitively speaking, if two paths with the same endpoints are not homotopic to each other, then there must be some obstruction stopping the homotopy. Then the number of distinct equivalence classes of paths with fixed endpoints will count the number of holes (plus one). Because endpoints are a little bit messy (one has to pick $ n+1$ endpoints for $ n$ non-equivalent paths to count $ n-1$ holes), we instead implement this idea with loops.
Definition: A path $ f:[0,1] \to X$ is a loop if $ f(0)=f(1)$. Since the two endpoints of a loop coincide, we call the common point the basepoint of the loop.
With loops or with paths, we need one more additional operation. The operation is called path composition, and it is essentially “doing one path after the other.” That is, let $ f, g$ be paths in $ X$ such that $ f(1) = g(0)$. Then the composition of $ g$ after $ f$, denoted via juxtaposition $ gf$, is the path defined by $ gf(t) = f(2t)$ when $ t \in [0,1/2]$ and $ gf(t) = g(2t-1)$ when $ t \in [1/2, 1]$. The same argument for “composing” homotopies works here: we run along $ f$ at twice the normal speed and then $ g$ at twice its normal speed to get $ gf$.
And now we turn to the main theorem of this post.
The Fundamental Group
Instead of looking at paths, we will work with loops with a fixed basepoint. The amazing thing that happens is that the set of equivalence classes of loops (with respect to homotopy) forms a group.
Definition: Let $ X$ be a topological space and fix a point $ x_0 \in X$. Define by $ \pi_1(X,x_0)$ the set of equivalence classes of loops with basepoint $ x_0$. This set is called the fundamental group of $ X$ (although we have not yet proved it is a group), or the first homotopy group of $ X$.
Theorem: The fundamental group is a group with respect to the operation of path composition.
Proof. In order to prove this we must verify the group axioms. Let’s recall them here:
The group must be closed under the group operation: clearly the composition of two loops based at $ x_0$ is again a loop at $ x_0$, but we need to verify this is well-defined. That is, the operation gives the same value regardless of which representative we choose from the equivalence class.
The group operation must be associative.
The group must have an identity element: the constant path $ t \mapsto x_0$ will fill this role.
Every element must have an inverse: we will show that the loop which “runs in reverse” will serve this purpose.
To prove the first point, let $ f, g$ be loops, and suppose $ f', g'$ are homotopic to $ f,g$ respectively. Then it should be the case that $ fg$ is homotopic to $ f'g'$. Suppose that $ H^1, H^2$ are homotopies between relevant loops. Then we can define a new homotopy which runs the two homotopies simultaneously, and composes their paths for any fixed $ s$. This is almost identical to the original definition of a path composition, since we simply need to define our new family of loops $ H$ by composing the loops $ H^1_s$ and $ H^2_s$. That is,
$ H_s(t) = H^1_s(2t)$ for $ t \in [0,1/2]$ and
$ H_s(t) = H^2_s(2t-1)$ for $ t \in [1/2, 1]$
So the operation is well-defined on equivalence classes.
Associativity is messier to prove, but has a similar mechanic. We just need to define a homotopy which scales the speed of each of the paths for a fixed $ s$ to be in sync. We omit the details for brevity.
For the identity, let $ f$ be a path and $ e$ denote the constant path at $ x_0$. Then we must find a homotopy from $ ef$ to $ f$ and $ fe$ to $ f$. Since the argument is symmetric we prove the former. The path $ ef$ sits at $ x_0$ for half of the time $ t \in [0,1]$ and performs $ f$ at twice speed for the rest. All we need to do is “continuously move” the time at which we stop $ e$ and start $ f$ from $ t=1/2$ to $ t=0$, and run $ f$ faster for the remaining time. We can equivalently perform this idea backwards (the algebra is simpler) to get
$ H_s(t) = x_0$ for $ t \in [0, s/2]$, and
$ H_s(t) = f((2t-s)/(2-s))$ for $ t \in [s/2, 1]$.
To verify this works, plugging in $ s=1$ gives precisely the definition of the composition $ ef$, and for $ s=0$ we recover $ f(t)$. Moreover for any value of $ s$ setting $ t=0$ gives $ x_0$ and $ t=1$ gives $ f(1)$. Finally, the point at which the two pieces meet is $ t = s/2$, and we only need verify that the piecewise definition agrees there. That is, the argument of $ f$ must be zero for that value of $ t$ regardless of the choice of $ s$. Indeed it is.
Finally, for inverses define $ f^-(t) = f(1-t)$ to be the “reverse path” of $ f$. Now we simply need to prove that $ ff^-$ is homotopic to $ e$. That is, we need to run $ f$ part way, and then run $ f^-$ starting from that spot in the reverse direction. The point at which we stop to turn back is $ t=(1-s)$, which continuously moves from $ t=1$ to $ t=0$. We leave the formal specification of this homotopy as an exercise to the reader (hint: we need to appropriately change the speed at which $ f, f^-$ run in order to make everything fit together). From here on we will identify the two notations $ f^{-1} = f^-$ as homotopy equivalence classes of paths. $ \square$
The high-level implications of this theorem are quite important. Although we have not proved it here, the fundamental group is a topological invariant. That is, if $ X$ and $ Y$ are two topological spaces, $ x_0 \in X, y_0 \in Y$, and there is a homeomorphism $ f:X \to Y$ which takes $ x_0 \mapsto y_0$, then $ \pi_1(X,x_0) \cong \pi_1(Y,y_0)$ are isomorphic as groups. That is (and this is the important bit) if $ X$ and $ Y$ have different fundamental groups, then we know immediately that they cannot be homeomorphic. The way to prove this is to use $ f$ to construct a new induced homomorphism of groups and prove that it is an isomorphism. We give a description of this in the next section.
In fact, we can say something stronger: the fundamental group is a homotopy invariant. That is, we can generalize the idea of a path homotopy into a “homotopy of maps,” and say that two spaces $ X, Y$ are homotopy equivalent if there are maps $ f: X \to Y, g: Y \to X$ with $ fg \simeq \textup{id}_Y$ (the identity function on $ Y$) and $ gf \simeq \textup{id}_X$. The important fact is that two spaces which are homotopy equivalent necessarily have isomorphic fundamental groups. It should also be clear that homeomorphic spaces are homotopy equivalent (the homeomorphism map is also a homotopy equivalence map), so this realizes the fundamental group as a topological invariant as well. We will not prove this here, and instead refer to Hatcher, Chapter 1.
An important example of two spaces which are homotopy equivalent but not homeomorphic is that of the Möbius band and the circle. Proving it requires some additional tools we haven’t covered on this blog, but the idea is that one can squeeze the Möbius strip down onto its center circle, and so the loops in the former correspond to loops in the latter. So the Möbius band also has fundamental group $ \mathbb{Z}$.
Additionally, if the space $ X$ is path-connected, then the choice of a basepoint is irrelevant. A sketch of the proof goes as follows: let $ x_0, x_1$ be two choices of basepoints, and pick a path $ p$ from $ x_0$ to $ x_1$. Then we can naturally take any loop $ f$ based at $ x_1$ to a corresponding loop at $ x_0$ by composing $ p^-fp$ as in the following picture (here we traverse the paths in left-to-right order, which is the opposite of how function composition usually works; this is so we can read the paths from left-to-right in the order they are traversed in the diagram).
An example of a change of basepoint map.
More rigorously this operation induces a homomorphism on the fundamental group (we define this fully in the next section), and for path connected spaces this is an isomorphism of groups. And so we can speak of the fundamental group $ \pi_1(X)$ when we work with sufficiently nice spaces (and in practice we always will).
Computing Fundamental Groups
So now we have seen that the fundamental group is in fact a group, but how can we compute it? Groups can be large and wild, and so can topological spaces. So it’s really impressive that we can compute these groups at all.
First off, the simplest possibility is that the fundamental group of $ X$ is the trivial group. That is, that every loop is homotopic to the trivial loop. In this case we call $ X$ simply connected. For example, Euclidean space is simply connected. We gave the proof above showing that any two paths with the same endpoints in $ \mathbb{R}^2$ are homotopic, and the proof is the same in general. Since all loops are homotopic to each other, they are all homotopic to the trivial loop, and so $ \pi_1(\mathbb{R}^n) = 1$ is the trivial group. The picture becomes more interesting when we start to inspect subspaces of Euclidean space with nontrivial holes in them.
In fact, the first nontrivial fundamental group one computes in developing this theory is that of the circle $ \pi_1(S^1) = \mathbb{Z}$. We again defer the proof to Hatcher because it is quite long, but the essential idea is as follows. View the circle as a subset of the complex plane $ \mathbb{C}$, and fix the basepoint $ x_0 = 1$. The only distinct kinds of loops are those loops which loop around the circle $ n$ times clockwise, or $ n$ times counterclockwise, where $ n \in \mathbb{Z}$ is arbitrary. That is, we can construct a function $ f: \mathbb{Z} \to \pi_1(S^1)$ sending $ n$ to the loop $ t \mapsto e^{2 \pi i nt}$ which goes $ n$ times around the circle (negative $ n$ make it go in the negative direction). This map is an isomorphism.
From this one computation we get quite a few gains. As we will see in a moment, many fundamental groups are free products of $ \mathbb{Z}$ with added relators, and the upcoming van Kampen theorem tells us how these relators are determined. On the other hand, we have already seen on this blog that the fundamental group of the circle is powerful enough to prove the fundamental theorem of algebra! It is clear from this that $ \pi_1$ is a powerful notion.
Let’s take a moment to interpret this group in terms of the number of “holes” of $ S^1$. There may be some confusion because this group is infinite, but there is only one hole. Indeed, for each “hole” of a space one would expect to get a copy of $ \mathbb{Z}$, since one may run a loop arbitrarily many times around that hole in either clockwise or counterclockwise direction. Recall the classification theorem for finitely generated abelian groups from our recent primer. Since every such abelian group decomposes into a finite number of copies of $ \mathbb{Z}$ and a torsion part, then we should be interpreting the number of holes of $ X$ as the rank of $ \pi_1(X)$ (the number of copies of $ \mathbb{Z}$).
This is great! But unfortunately nothing guarantees that the fundamental group is abelian, and for more complicated spaces things aren’t even so simple in the abelian case. In a moment, we will see an example of a space with a nonabelian fundamental group (in fact, we’ve already seen this space on this blog). One of the groups that will show up is $ \mathbb{Z}/2\mathbb{Z}$. Our interpretation would say that such a space has no holes in it, so what does the extra torsion factor mean? As it turns out, the specific factor $ \mathbb{Z}/2\mathbb{Z}$ happens to correspond to non-orientability.
In fact it is true that for every group $ G$, there is a two-dimensional topological space whose fundamental group is $ G$ (we haven’t defined dimension yet, but the notion of a surface is intuitive enough). So any weird torsion factor shows up in the fundamental group of some sufficiently awkward space. When one starts to investigate more and more of these contrived spaces, one ceases to care about the “intuitive” interpretations of the fundamental group. The group simply becomes a topological invariant, and the extra factors are just extra ways to tell spaces apart.
We want to state a powerful tool for computing fundamental groups whose rigorous version is called the van Kampen theorem. The intuition is as follows. Imagine we have a so-called “wedge” of two circles.
A wedge of two circles: two circles which intersect at a single point.
From our intuition of this space as two copies of a circle, we would expect its fundamental group to be $ \mathbb{Z}^2$. Unfortunately it is not, because we can immediately see that the fundamental group of this space is not commutative. In particular, let us label the loops $ a$ and $ b$, and give them orientation:
Then the loop $ ab$ (we traverse these loops in left-to-right order for ease of reading) is not homotopic to the loop $ ba$. Instead, this group is the free group on the generators $ a, b$. Recall from our second primer on group theory that this means the two generators have no nontrivial relations between them. The reason it is free is that the intersection of the two circles (whose fundamental groups we know separately) is simply connected. If it were not, then there would be some relations. For instance, let us look at the torus $ X = T^1$, and recall its formulation as a quotient of a disk.
Here the top and bottom edges are identified, as are the two sides, and so we can label the loops as follows:
Then the boundary of the original disk is just $ aba^{-1}b^{-1}$. Since any loop bounding a disk is homotopic to a constant loop (the straight-line homotopy works here), we see that the loop $ aba^{-1}b^{-1} = 1$ in $ \pi_1(X)$. That is, we still have two generators $ a,b$ corresponding to the longitudinal and latitudinal circles, but traversing them in the right order is the same as doing nothing. This YouTube video gives an animation showing the homotopy in action. So the fundamental group of the torus has presentation

$ \pi_1(T^1) = \left \langle a, b \mid aba^{-1}b^{-1} \right \rangle$
This is obviously just $ \mathbb{Z}^2$, since the defining relation is precisely saying that the two generators commute: $ ab=ba$. That is, it is the free abelian group on two generators.
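Before moving on, here is a minimal Python sketch (our own illustration, using the hypothetical convention that lowercase letters are generators and uppercase letters their inverses) contrasting the two situations: in the free group $ ab$ and $ ba$ are distinct reduced words, while in the torus group the relation collapses every word to a normal form $ a^mb^n$ determined by exponent sums.

```python
def free_reduce(word):
    """Freely reduce a word in generators 'a', 'b' and inverses 'A', 'B'
    by repeatedly cancelling adjacent inverse pairs (aA, Aa, bB, Bb)."""
    out = []
    for letter in word:
        if out and out[-1] == letter.swapcase():
            out.pop()  # cancel an adjacent inverse pair
        else:
            out.append(letter)
    return ''.join(out)

def torus_normal_form(word):
    """In <a, b | aba^{-1}b^{-1}> the generators commute, so a word is
    determined by its exponent sums: the normal form is a^m b^n."""
    m = word.count('a') - word.count('A')
    n = word.count('b') - word.count('B')
    return (m, n)

print(free_reduce('ab'), free_reduce('ba'))                # ab ba (distinct)
print(torus_normal_form('ab') == torus_normal_form('ba'))  # True
```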
Before we can prove the theorem in general we need to define an induced homomorphism. Given two spaces $ X, Y$ and a map $ f: X \to Y$, one gets a canonical induced map $ f^*: \pi_1(X) \to \pi_1(Y)$. If we consider basepoints, then we also require that $ f$ preserves the basepoint. The induced map is defined by setting $ f^*([p])$ to be the equivalence class $ [fp]$ of the loop $ fp$ in $ Y$; this is well-defined because composing a homotopy of loops with $ f$ gives a homotopy of their images. Recalling that $ fp$ is indeed a loop whenever $ f$ is continuous, it is not hard to see that this is a homomorphism of groups. It is certainly not in general an isomorphism; for instance the trivial map which sends all of $ X$ to a single point will not preserve nontrivial loops. But as we will see in the van Kampen theorem, for some maps it is.
One interesting example of an induced homomorphism is that of the inclusion map. Let $ Y \subset X$ be a subspace, and let $ i: Y \hookrightarrow X$ be the inclusion map $ i(y) = y$. This will often not induce an isomorphism $ i^*$ on fundamental groups. For example, the inclusion $ S^1 \hookrightarrow \mathbb{R}^2$ is not a constant map, but since $ \pi_1(\mathbb{R}^2)$ is trivial, it induces the trivial homomorphism $ \mathbb{Z} \to 1$ on fundamental groups (there is only one group homomorphism from any group to the trivial group). That is, the kernel of $ i^*$ is all of $ \mathbb{Z}$. Intuitively, we are “filling in” the hole in $ S^1$ with the ambient space of $ \mathbb{R}^2$, so that the loop generating $ \mathbb{Z}$ is homotopic to the trivial loop. Thus, we are killing all of the loops of the circle.
The more important remark for the van Kampen theorem is, recalling our primer on group theory, that any collection of homomorphisms on groups $ \varphi_{\alpha} : G_{\alpha} \to H$ extends uniquely to a homomorphism on the free product $ \varphi: \ast_{\alpha} G_{\alpha} \to H$. The main goal of this theorem is to give us a way to build up fundamental groups in the same way we build up topological spaces, and it does so precisely by saying that this $ \varphi$ map on the free product is surjective. Using the first isomorphism theorem (again see our primer), this will allow us to compute the fundamental group of a space $ X$ from subspaces $ A_{\alpha}$ by analyzing the kernels of the homomorphisms induced by the inclusions $ i_{\alpha}: A_{\alpha} \to X$.
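To see the mechanics of this extension, here is a tiny Python sketch (again our own illustration, not part of the theorem): both factors are $ \mathbb{Z}$ written additively, the target is the multiplicative group of positive rationals (a hypothetical choice), and the extension evaluates a free-product word syllable by syllable.

```python
from fractions import Fraction

# phi_1 sends the generator of the first Z factor to 2, and phi_2 sends
# the generator of the second factor to 3, inside the multiplicative
# group of positive rationals.
phi = {1: lambda n: Fraction(2) ** n,
       2: lambda n: Fraction(3) ** n}

def extend(word):
    """Evaluate the unique extension of phi_1, phi_2 on a free-product
    word, given as a list of (factor_index, exponent) syllables."""
    result = Fraction(1)
    for factor, exponent in word:
        result *= phi[factor](exponent)
    return result

# The word a^2 b^{-1} a in Z * Z maps to 2^2 * 3^{-1} * 2^1 = 8/3.
print(extend([(1, 2), (2, -1), (1, 1)]))  # 8/3
```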
The last thing we need to set up this theorem is a diagram. If we have two subspaces $ A_{\alpha}, A_{\beta}$ and we look at the inclusion $ i: A_{\alpha} \cap A_{\beta} \to X$, we could define it in one of two equivalent ways: first by going through $ A_{\alpha}$ and then to $ X$, or else by going through $ A_{\beta}$ first. The diagram is as follows:
However, the induced homomorphism will depend on this choice! So we denote by $ i_1: A_{\alpha} \cap A_{\beta} \to A_{\alpha}$ the inclusion into the first piece and by $ i_2$ the inclusion into the second piece $ A_{\beta}$, with $ i_1^*, i_2^*$ the corresponding induced homomorphisms on fundamental groups. The diagram here is:
Now we can state the theorem (and it is still quite a mouthful).
Theorem: (van Kampen) Let $ X$ be the union of a family of path-connected open subspaces $ A_{\alpha}$, each of which contains the chosen basepoint $ x_0 \in X$. Let $ i_{\alpha}: A_{\alpha} \to X$ be the inclusions, $ i_{\alpha}^*$ the induced homomorphisms, and $ \varphi$ the unique extension of the homomorphisms $ i_{\alpha}^*$ to the free product $ \ast_{\alpha} \pi_1(A_{\alpha}, x_0)$.
If all intersections of pairs $ A_{\alpha} \cap A_{\beta}$ are path-connected, then $ \varphi$ is a surjection.
If in addition all triple intersections $ A_{\alpha} \cap A_{\beta} \cap A_{\gamma}$ are path-connected, then the kernel of $ \varphi$ is the smallest normal subgroup $ N$ generated by the elements $ i_1^*(x)i_2^*(x)^{-1}$ for $ x \in \pi_1(A_{\alpha} \cap A_{\beta})$.
In particular, the first isomorphism theorem gives an isomorphism $ \ast_{\alpha} \pi_1(A_{\alpha}) / N \cong \pi_1(X)$. That is, we can compute the fundamental group of $ X$ by knowing the fundamental groups of the pieces $ A_{\alpha}$ and a little bit of extra information.
We do not have the stamina to prove such a massive theorem on this blog. However, since we have done so much just to state it, we would be cheating the reader by omitting any examples of its usage.
Let’s again look at the torus $ T^1$. Viewing it as in our previous primer on constructing topological spaces, it is the following quotient of a disk:
Split the disk into two subspaces $ A,B$ as follows:
Note how these spaces overlap in an annulus whose fundamental group we’ve already seen is $ \mathbb{Z}$. Moreover, the fundamental group of $ A$ is trivial, and the fundamental group of $ B$ is $ \mathbb{Z} \ast \mathbb{Z} = \left \langle a,b \right \rangle$. To see the latter, note that the torus with a disk removed is homotopy equivalent to a wedge of two circles. A simple exercise (again using the van Kampen theorem) finishes the computation of $ \pi_1(B)$. As we said, the intersection $ A \cap B$ has fundamental group $ \mathbb{Z} = \left \langle c \right \rangle$ (it is just an annulus).
So according to the van Kampen theorem, the fundamental group of $ T^1$ is the free group on two generators, modulo the normal subgroup found by identifying the images of the two induced homomorphisms $ \pi_1(A \cap B) \to \pi_1(A)$ and $ \pi_1(A \cap B) \to \pi_1(B)$. On one hand, the image going through $ \pi_1(A)$ is trivial because the group itself is trivial. On the other hand, the image of the generator $ c$ going through $ \pi_1(B)$ is easily seen to be an oriented traversal of the boundary path $ aba^{-1}b^{-1}$. Indeed, this gives rise to a single relator: $ aba^{-1}b^{-1} = 1$. So this verifies the presentation we gave for the fundamental group of the torus above.
A nearly identical argument gives nearly identical presentations for the Klein bottle. There is a slightly different presentation for the real projective plane, and it is interesting because there is a topological hole, but the group is just $ \mathbb{Z}/2\mathbb{Z}$. We leave this as an exercise to the reader, using the representation as a quotient of the disk provided in our previous primer.
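Since the rank-and-torsion interpretation above comes from abelianizing these presentations, here is a hedged Python sketch using sympy (assuming smith_normal_form is available in your version) that reads off the abelianization of each one-relator surface group from the exponent sums of its relator; the relators used for the Klein bottle ($ abab^{-1}$) and the projective plane ($ a^2$) are the standard ones.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Each row lists the exponent sums of one relator in the generators.
# Abelianizing <generators | relator> gives Z^(#generators) modulo the
# row. For these one-relator examples, the entries of the Smith normal
# form are the invariant factors: each entry d contributes a Z/dZ
# summand, with d = 0 contributing a copy of Z.
presentations = {
    'torus, relator aba^-1b^-1':      Matrix([[0, 0]]),  # sums in a, b
    'Klein bottle, relator abab^-1':  Matrix([[2, 0]]),
    'projective plane, relator a^2':  Matrix([[2]]),
}

for name, relator_matrix in presentations.items():
    print(name, '->', smith_normal_form(relator_matrix, domain=ZZ))

# torus:            [0, 0] -> Z x Z     (rank 2, no torsion)
# Klein bottle:     [2, 0] -> Z/2Z x Z  (rank 1, torsion Z/2Z)
# projective plane: [2]    -> Z/2Z      (rank 0, torsion Z/2Z)
```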
Our second example is that of the n-sphere $ S^n$. We already know $ \pi_1(S^1) = \mathbb{Z}$, but in fact $ \pi_1(S^n)$ is trivial for all larger $ n$. Inductively, we can construct $ S^n$ from two copies of the open ball $ B^n$ by taking one to be the northern hemisphere, one to be the southern hemisphere, and their intersection to be the equator (or rather, something homotopy equivalent to $ S^{n-1}$).
For the case of $ S^2$, we have that each hemisphere is an open ball centered at a pole (though the center is irrelevant), and the intersection is an annulus which is homotopy equivalent to $ S^1$. Each of the two pieces is simply connected (in fact, homeomorphic to $ \mathbb{R}^2$), and so by van Kampen’s theorem the fundamental group of $ S^2$ is the free product of two trivial groups (modulo the trivial subgroup), and is hence the trivial group.
This argument works in general: for $ n \geq 2$, each of the two balls $ B^n$ has a trivial fundamental group, and each of the possible intersections $ S^{n-1}$ is path-connected (which is easy to see by looking at the construction via a simplicial complex), so van Kampen’s theorem guarantees that the fundamental group of $ S^n$ is trivial.
So as we have seen, the fundamental group is a nice way to compute the number (or the structure) of the one-dimensional holes of a topological space. Unfortunately, even for the nicest of spaces (simplicial complexes) the problem of computing fundamental groups is in general undecidable. In fact, we get stuck at an even simpler problem: determining whether the fundamental group of a finite simplicial complex is trivial is undecidable.
There is an obvious analogue for 2-, 3-, and $ n$-dimensional holes. That is, we can define the $ n$-th homotopy group of a space $ X$, denoted $ \pi_n(X)$, to be the set of homotopy equivalence classes of maps $ S^n \to X$. Homotopy groups $ \pi_n(X)$ for $ n > 1$ are called higher homotopy groups.
As one would expect, higher homotopy groups are much more difficult to understand, and even harder to compute. In fact, the only reason we bring this up in this primer is to intimidate the reader: we don’t even know a general way to compute the higher homotopy groups of the sphere.
In particular, here is a table of the known higher homotopy groups of the sphere.
The known higher homotopy groups of spheres. Credit Wikipedia.
The thick black line shows the boundary between where the patterns are well-known and understood (below) and the untamed wilderness (above). This table solidifies how ridiculous the higher homotopy groups can be. As such, they are unsuitable for computational purposes.
Nevertheless, the homotopy groups are relatively easy to understand in terms of intuition, because a homotopy is easily visualized. In our next primer, we will trade this ease of intuition for ease of computation. In particular, we will develop the notion of a homology group, and for simplicial complexes their computation will be about as simple as matrix reduction.
Once this is done, we will extend the idea of homology to apply to data sets (which should not themselves be considered as topological spaces), and we will be able to compute the topological features of a data set. This is our ultimate goal, but the mathematics we lay down along the way will have their own computational problems that we can explore on this blog.
This post assumes familiarity with some basic concepts in algebraic topology, specifically what a group is and the definition of the fundamental group of a topological space.
The fundamental theorem of algebra has quite a few proofs (enough to fill a book!). In fact, it seems a new tool in mathematics can prove its worth by being able to prove the fundamental theorem in a different way. This series of proofs of the fundamental theorem also highlights how in mathematics there are many, many ways to prove a single theorem, and in re-proving an established theorem we introduce new concepts and strategies. And perhaps most of all, we accentuate the unifying beauty of mathematics.
Problem: Let $ p(z)$ be a non-constant polynomial with coefficients in $ \mathbb{C}$. Prove $ p(z)$ has a root in $ \mathbb{C}$.
Solution: We may assume without loss of generality that $ p(z)$ is monic. So let

$ p(z) = z^n + a_{n-1}z^{n-1} + \dots + a_1z + a_0$

Suppose, to get a contradiction, that $ p$ has no roots in $ \mathbb{C}$. Then for each real number $ r \geq 0$ we may define the loop

$ f_r(s) = \frac{p(re^{2 \pi i s})/p(r)}{\left | p(re^{2 \pi i s})/p(r) \right |}$

for $ s \in [0,1]$.
Indeed, by assumption the denominators are never zero, so this function is continuous for all $ s \in [0,1]$. Further, each value $ f_r(s)$ is on the unit circle in $ \mathbb{C}$ by virtue of the scaling denominator ($ |f_r(s)| = 1$ for all $ s,r$). Finally, $ f_r(0) = (p(r)/p(r)) / |p(r)/p(r)| = 1,$ and $ f_r(1)$ yields the same value, so this is a closed path based at 1.
We note this function is continuous in both $ s$ and $ r$ (it is built from polynomials, exponentials, and quotients whose denominators never vanish), so that $ f_r(s)$ is a homotopy of loops as $ r$ varies. If $ r=0$, then $ f_0(s) = 1$ for all $ s$, and so by varying $ r$ we see that any fixed loop $ f_r(s)$ is homotopic to the constant loop.
Now fix a value of $ r$ which is larger than both $ |a_0| + \dots + |a_{n-1}|$ and $ 1$. For $ |z| = r$, we have

$ |z^n| = r^n = r \cdot r^{n-1} > (|a_0| + \dots + |a_{n-1}|)r^{n-1} \geq |a_{n-1}z^{n-1}| + \dots + |a_1z| + |a_0|$

where the last inequality uses $ r \geq 1$, so that $ r^{n-1} \geq r^k$ for all $ k \leq n-1$.
And hence $ |z^n| > |a_0 + a_1z + \dots + a_{n-1}z^{n-1}|$. It follows that the polynomial $ p_t(z) = z^n + t(a_{n-1}z^{n-1} + \dots + a_0)$ has no roots when both $ |z| = r$ and $ 0 \leq t \leq 1$. Fixing this $ r$, and replacing $ p$ with $ p_t(z)$ in the formula for $ f_r(s)$, we have a homotopy from $ f_r(s)$ (when $ t=1$, nothing is changed) to the loop which winds around the unit circle $ n$ times, where $ n$ is the degree of the polynomial. Indeed, plug in $ t=0$ to get $ f_r(s) = (r^ne^{2 \pi ins}/r^n)/|r^ne^{2 \pi ins}/r^n|$, which is the loop $ \omega_n(s) = e^{2 \pi ins}$.
In other words, we have shown that the homotopy classes of $ f_r$ and $ \omega_n$ are equal, but $ f_r$ is homotopic to the constant map. Translating this into fundamental groups, as $ \pi_1(S^1,1) = \mathbb{Z}$, we note that $ [\omega_n] = [f_r] = 0$. But if $ [\omega_n] = 0$, then it must be the case that $ n = 0$, as $ \mathbb{Z}$ is the infinite cyclic group generated by $ [\omega_1]$. Hence the degree of $ p$ must have been 0 to begin with, so $ p$ is constant, contradicting our assumption. So $ p$ must have a root in $ \mathbb{C}$. $ \square$
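As a computational epilogue (a sketch of our own, not part of the proof), we can watch this winding happen numerically: for a sample polynomial and a radius $ r$ beyond the bound above, the loop $ s \mapsto p(re^{2 \pi i s})$ winds around the origin exactly $ \deg p$ times. The cubic below is a hypothetical example.

```python
import cmath

def winding_number(loop, samples=4000):
    """The same winding-number estimator sketched earlier in this post."""
    total, prev = 0.0, loop(0.0)
    for k in range(1, samples + 1):
        z = loop(k / samples)
        total += cmath.phase(z / prev)
        prev = z
    return round(total / (2 * cmath.pi))

p = lambda z: z**3 - z + 1  # a hypothetical sample cubic
r = 10.0  # exceeds both 1 and |a_0| + |a_1| + |a_2| = 1 + 1 + 0 = 2
print(winding_number(lambda s: p(r * cmath.exp(2j * cmath.pi * s))))  # 3
```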