# Fixing Bugs in “Computing Homology”

A few awesome readers have posted comments on Computing Homology to the effect of, “Your code is not quite correct!” And they’re right! In the almost year since that post’s publication, I never bothered to test the code on more complicated simplicial complexes, or even on the basic edge cases! When I posted it, the mathematics just felt so solid to me that it had to be right (the irony is rich, I know).

As such, I’m apologizing for my lack of rigor: below I explain what went wrong, describe the fix, and give some test cases. As of the publishing of this post, the Github repository for Computing Homology has been updated with the correct code and some more examples.

The main subroutine was the simultaneousReduce function, which I’ll post below in its incorrect form:

def simultaneousReduce(A, B):
    if A.shape[1] != B.shape[0]:
        raise Exception("Matrices have the wrong shape.")

    numRows, numCols = A.shape  # col reduce A

    i, j = 0, 0
    while True:
        if i >= numRows or j >= numCols:
            break

        if A[i][j] == 0:
            nonzeroCol = j
            while nonzeroCol < numCols and A[i, nonzeroCol] == 0:
                nonzeroCol += 1

            if nonzeroCol == numCols:
                j += 1
                continue

            colSwap(A, j, nonzeroCol)
            rowSwap(B, j, nonzeroCol)

        pivot = A[i, j]
        scaleCol(A, j, 1.0 / pivot)
        scaleRow(B, j, 1.0 / pivot)

        for otherCol in range(0, numCols):
            if otherCol == j:
                continue
            if A[i, otherCol] != 0:
                scaleAmt = -A[i, otherCol]
                colCombine(A, otherCol, j, scaleAmt)
                rowCombine(B, j, otherCol, -scaleAmt)

        i += 1; j += 1

    return A, B


It’s a beast of a function, and the persnickety detail was just as beastly: this snippet should have an `i += 1` where it has the `j += 1`.

if nonzeroCol == numCols:
    j += 1
    continue


This is simply what happens when we’re looking for a nonzero entry in a row to use as a pivot for the corresponding column, but we can’t find one and have to move on to the next row. A stupid error on my part that would have been easily caught by proper test cases.

The next mistake is a mathematical misunderstanding. In short, the simultaneous column/row reduction process is not enough to get the $\partial_{k+1}$ matrix into the right form! Let’s see this with a nice example, a triangulation of the Mobius band. There are a number of triangulations we could use, many of which are seen in these slides. The one we’ll use is the following.

Its first and second boundary maps are as follows (in code, because LaTeX takes too much time to type out):

mobiusD1 = numpy.array([
    [-1,-1,-1,-1, 0, 0, 0, 0, 0, 0],
    [ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
    [ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
    [ 0, 0, 0, 1, 0, 0, 1, 0, 1, 1],
])

mobiusD2 = numpy.array([
    [ 1, 0, 0, 0, 1],
    [ 0, 0, 0, 1, 0],
    [-1, 0, 0, 0, 0],
    [ 0, 0, 0,-1,-1],
    [ 0, 1, 0, 0, 0],
    [ 1,-1, 0, 0, 0],
    [ 0, 0, 0, 0, 1],
    [ 0, 1, 1, 0, 0],
    [ 0, 0,-1, 1, 0],
    [ 0, 0, 1, 0, 0],
])


And if we were to run the above code on it, we’d get a first Betti number of zero (which is incorrect: its first homology group has rank 1). Here are the reduced matrices.

>>> A1, B1 = simultaneousReduce(mobiusD1, mobiusD2)
>>> A1
array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
>>> B1
array([[ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  1,  0,  0,  0],
       [ 1, -1,  0,  0,  0],
       [ 0,  0,  0,  0,  1],
       [ 0,  1,  1,  0,  0],
       [ 0,  0, -1,  1,  0],
       [ 0,  0,  1,  0,  0]])


The first reduced matrix looks fine; there’s nothing we can do to improve it. But the second one is not quite fully reduced! Notice that rows 5, 8 and 10 are not linearly independent. So we need to further row-reduce the nonzero part of this matrix before we can read off the true rank in the way we described last time. This isn’t so hard (we just need to reuse the old row-reduce function we’ve been using), but why is this allowed? It’s just because the corresponding column operations for those row operations are operating on columns of all zeros! So we need not worry about screwing up the work we did in column reducing the first matrix, as long as we only work with the nonzero rows of the second.

Of course, nothing is stopping us from ignoring the “corresponding” column operations, since we know we’re already done there. So we just have to finish row reducing this matrix.
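The finishRowReducing function isn’t displayed in this post (it’s in the updated Github repository), but for the curious, here is a minimal sketch of what such a function can look like: ordinary row reduction over $\mathbb{Q}$, done in place with numpy. The repository’s version may differ in its details.

```python
import numpy

def finishRowReducing(B):
    # A sketch of plain row reduction (to reduced row echelon form),
    # performed in place. Note: in-place scaling assumes pivots of 1 or -1
    # (as in the examples here) or a float matrix, since numpy truncates
    # floats stored into integer arrays.
    numRows, numCols = B.shape
    i, j = 0, 0
    while i < numRows and j < numCols:
        # find a row at or below i with a nonzero entry in column j
        nonzeroRow = i
        while nonzeroRow < numRows and B[nonzeroRow, j] == 0:
            nonzeroRow += 1
        if nonzeroRow == numRows:
            j += 1
            continue

        B[[i, nonzeroRow], :] = B[[nonzeroRow, i], :]  # swap the pivot row up
        B[i, :] = B[i, :] / B[i, j]                    # normalize the pivot to 1
        for otherRow in range(numRows):
            if otherRow != i and B[otherRow, j] != 0:
                B[otherRow, :] = B[otherRow, :] - B[otherRow, j] * B[i, :]
        i += 1
        j += 1
    return B
```

Calling this on the output of simultaneousReduce, as the updated bettiNumber function does, finishes the reduction of the second matrix.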

This changes our bettiNumber function by adding a single call to a row-reducing function, which we name finishRowReducing so it’s clear what’s happening. The resulting function is

def bettiNumber(d_k, d_kplus1):
    A, B = numpy.copy(d_k), numpy.copy(d_kplus1)
    simultaneousReduce(A, B)
    finishRowReducing(B)

    dimKChains = A.shape[1]
    kernelDim = dimKChains - numPivotCols(A)
    imageDim = numPivotRows(B)

    return kernelDim - imageDim


And running this on our Mobius band example gives:

>>> bettiNumber(mobiusD1, mobiusD2)
1
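As an independent sanity check (not part of the post’s code), we can recover the same Betti number from matrix ranks over $\mathbb{Q}$, using $b_1 = \dim C_1 - \textup{rank}\, \partial_1 - \textup{rank}\, \partial_2$:

```python
import numpy

# The Mobius band boundary matrices, re-entered fresh (the reductions
# above mutate their inputs).
mobiusD1 = numpy.array([
    [-1,-1,-1,-1, 0, 0, 0, 0, 0, 0],
    [ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
    [ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
    [ 0, 0, 0, 1, 0, 0, 1, 0, 1, 1],
])
mobiusD2 = numpy.array([
    [ 1, 0, 0, 0, 1],
    [ 0, 0, 0, 1, 0],
    [-1, 0, 0, 0, 0],
    [ 0, 0, 0,-1,-1],
    [ 0, 1, 0, 0, 0],
    [ 1,-1, 0, 0, 0],
    [ 0, 0, 0, 0, 1],
    [ 0, 1, 1, 0, 0],
    [ 0, 0,-1, 1, 0],
    [ 0, 0, 1, 0, 0],
])

# b_1 = dim C_1 - rank(d1) - rank(d2)
kernelDim = mobiusD1.shape[1] - numpy.linalg.matrix_rank(mobiusD1)
imageDim = numpy.linalg.matrix_rank(mobiusD2)
print(kernelDim - imageDim)  # 1
```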


As desired. Just to make sure things are going swimmingly under the hood, we can check what finishRowReducing does to the matrices after calling simultaneousReduce:

>>> simultaneousReduce(mobiusD1, mobiusD2)
(array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]]),
 array([[ 0,  0,  0,  0,  0],
        [ 0,  0,  0,  0,  0],
        [ 0,  0,  0,  0,  0],
        [ 0,  0,  0,  0,  0],
        [ 0,  1,  0,  0,  0],
        [ 1, -1,  0,  0,  0],
        [ 0,  0,  0,  0,  1],
        [ 0,  1,  1,  0,  0],
        [ 0,  0, -1,  1,  0],
        [ 0,  0,  1,  0,  0]]))
>>> finishRowReducing(mobiusD2)
array([[1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]])


Indeed, finishRowReducing finishes row reducing the second boundary matrix. Note that it doesn’t preserve how the rows of zeros lined up with the pivot columns of the reduced version of $\partial_1$ as it did in the previous post, but since in the end we’re only counting pivots it doesn’t matter how we switch rows. The “zeros lining up” part is just for a conceptual understanding of how the image lines up with the kernel for a valid simplicial complex.

In fixing this issue we’ve also fixed an issue another commenter mentioned: that you couldn’t blindly plug in the zero matrix for $\partial_0$ and get the zeroth homology (whose dimension counts connected components). After our fix, you can.

Of course there still might be bugs, but I have so many drafts lined up on this blog (and research papers to write, experiments to run, theorems to prove), that I’m going to put off writing a full test suite. I’ll just have to update this post with new bug fixes as they come. There’s just so much math and so little time :) But extra kudos to my amazing readers who were diligent enough to run examples and spot my error. I’m truly blessed to have you on my side.

Also note that this isn’t the most efficient way to represent the simplicial complex data, or the most efficient row reduction algorithm. If you’re going to run the code on big inputs, I suggest you take advantage of sparse matrix algorithms for doing this sort of stuff. You can represent the simplices as entries in a dictionary and do all sorts of clever optimizations to make the algorithm effectively linear time in the number of simplices.
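To give a flavor of that preprocessing, here is a hypothetical helper (the name boundaryMatrix and its input format are made up for this sketch, not from the repository) that builds a boundary matrix from lists of sorted simplices via the alternating-sum definition of $\partial$:

```python
import numpy

def boundaryMatrix(kSimplices, kMinusOneSimplices):
    # Build the matrix of the boundary map from k-chains to (k-1)-chains,
    # given the simplices as sorted lists of vertex labels. The t-th face
    # of a simplex (the simplex with vertex t deleted) gets sign (-1)^t.
    index = {tuple(s): i for i, s in enumerate(kMinusOneSimplices)}
    D = numpy.zeros((len(kMinusOneSimplices), len(kSimplices)), dtype=int)
    for j, simplex in enumerate(kSimplices):
        for t in range(len(simplex)):
            face = tuple(simplex[:t] + simplex[t + 1:])
            D[index[face], j] = (-1) ** t
    return D

# The boundary of a triangle: one 2-simplex, three edges, three vertices.
vertices = [[0], [1], [2]]
edges = [[0, 1], [0, 2], [1, 2]]
triangles = [[0, 1, 2]]
d1 = boundaryMatrix(edges, vertices)
d2 = boundaryMatrix(triangles, edges)
print((d1.dot(d2) == 0).all())  # True: the composition of boundaries is zero
```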

Until next time!

# Computing Homology

Update: the mistakes made in the code posted here are fixed and explained in a subsequent post (one minor code bug was fixed here, and a less minor conceptual bug is fixed in the linked post).

In our last post in this series on topology, we defined the homology group. Specifically, we built up a topological space as a simplicial complex (a mess of triangles glued together), we defined an algebraic way to represent collections of simplices called chains as vectors in a vector space, we defined the boundary homomorphism $\partial_k$ as a linear map on chains, and finally defined the homology groups as the quotient vector spaces

$\displaystyle H_k(X) = \frac{\textup{ker} \partial_k}{\textup{im} \partial_{k+1}}$.

The number of holes in $X$ was just the dimension of this quotient space.

In this post we will be quite a bit more explicit. Because the chain groups are vector spaces and the boundary mappings are linear maps, they can be represented as matrices whose dimensions depend on our simplicial complex structure. Better yet, if we have explicit representations of our chains by way of a basis, then we can use row-reduction techniques to write the matrix in a standard form.

Of course the problem arises when we want to work with two matrices simultaneously (to compute the kernel-mod-image quotient above). This is not computationally any more difficult, but it requires some theoretical fiddling. We will need to dip a bit deeper into our linear algebra toolboxes to see how it works, so the rusty reader should brush up on their linear algebra before continuing (or at least take some time to sort things out if or when confusion strikes).

Without further ado, let’s do an extended example and work our ways toward a general algorithm. As usual, all of the code used for this post is available on this blog’s Github page.

## Two Big Matrices

Recall our example simplicial complex from last time.

We will compute $H_1$ of this simplicial complex (which we saw last time was $\mathbb{Q}$) in a more algorithmic way than we did last time.

Once again, we label the vertices 0-4 so that the extra “arm” has vertex 4 in the middle, and its two endpoints are 0 and 2. This gave us orientations on all of the simplices, and the following chain groups. Since the vertex labels (and ordering) are part of the data of a simplicial complex, we have made no choices in writing these down.

$\displaystyle C_0(X) = \textup{span} \left \{ 0,1,2,3,4 \right \}$

$\displaystyle C_1(X) = \textup{span} \left \{ [0,1], [0,2], [0,3], [0,4], [1,2], [1,3],[2,3],[2,4] \right \}$

$\displaystyle C_2(X) = \textup{span} \left \{ [0,1,2], [0,1,3], [0,2,3], [1,2,3] \right \}$

Now given our known definitions of $\partial_k$ as an alternating sum from last time, we can give a complete specification of the boundary map as a matrix. For $\partial_1$, this would be

$\displaystyle \partial_1 = \bordermatrix{ & [0,1] & [0,2] & [0,3] & [0,4] & [1,2] & [1,3] & [2,3] & [2,4] \cr 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0\cr 1 & 1 & 0 & 0 & 0 & -1 & -1 & 0 & 0\cr 2 & 0 & 1 & 0 & 0 & 1 & 0 & -1 & -1 \cr 3 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \cr 4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 }$,

where the row labels are the basis for $C_0(X)$ and the column labels are the basis for $C_1(X)$. Similarly, $\partial_2$ is

$\displaystyle \partial_2 = \bordermatrix{ & [0,1,2] & [0,1,3] & [0,2,3] & [1,2,3] \cr [0,1] & 1 & 1 & 0 & 0\cr [0,2] & -1 & 0 & 1 & 0\cr [0,3] & 0 & -1 & -1 & 0\cr [0,4] & 0 & 0 & 0 & 0\cr [1,2] & 1 & 0 & 0 & 1\cr [1,3] & 0 & 1 & 0 & -1\cr [2,3] & 0 & 0 & 1 & 1\cr [2,4] & 0 & 0 & 0 & 0}$

The reader is encouraged to check that these matrices are written correctly by referring to the formula for $\partial$ as given last time.

Remember the crucial property of $\partial$, that $\partial^2 = \partial_k \partial_{k+1} = 0$. Indeed, the composition of the two boundary maps just corresponds to the matrix product of the two matrices, and one can verify by hand that the above two matrices multiply to the zero matrix.
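To see this concretely, one can enter the two matrices above into numpy and multiply them (a quick check, not strictly part of the exposition):

```python
import numpy

# The boundary matrices written out above, with the same row/column order.
d1 = numpy.array([
    [-1,-1,-1,-1, 0, 0, 0, 0],
    [ 1, 0, 0, 0,-1,-1, 0, 0],
    [ 0, 1, 0, 0, 1, 0,-1,-1],
    [ 0, 0, 1, 0, 0, 1, 1, 0],
    [ 0, 0, 0, 1, 0, 0, 0, 1],
])
d2 = numpy.array([
    [ 1, 1, 0, 0],
    [-1, 0, 1, 0],
    [ 0,-1,-1, 0],
    [ 0, 0, 0, 0],
    [ 1, 0, 0, 1],
    [ 0, 1, 0,-1],
    [ 0, 0, 1, 1],
    [ 0, 0, 0, 0],
])

# d1 @ d2 should be the 5x4 zero matrix.
print((d1.dot(d2) == 0).all())  # True
```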

We know from basic linear algebra how to compute the kernel of a linear map expressed as a matrix: column reduce and inspect the columns of zeros. Since the process of column reducing is really a change of basis, we can encapsulate the reduction inside a single invertible matrix $A$ which, multiplied on the right of $\partial$, gives us the reduced form of the latter. So write the reduced form of $\partial_1$ as $\partial_1 A$.

However, now we’re using two different sets of bases for the shared vector space involved in $\partial_1$ and $\partial_2$. In general, it will no longer be the case that $\partial_kA\partial_{k+1} = 0$. The way to alleviate this is to perform the “corresponding” change of basis in $\partial_{k+1}$. To make this idea more transparent, we return to the basics.

## Changing Two Matrices Simultaneously

Recall that a matrix $M$ represents a linear map between two vector spaces $f : V \to W$. The actual entries of $M$ depend crucially on the choice of a basis for the domain and codomain. Indeed, if $v_i$ form a basis for $V$ and $w_j$ for $W$, then the $k$-th column of the matrix representation $M$ is defined to be the coefficients of the representation of $f(v_k)$ in terms of the $w_j$. We hope to have nailed this concept down firmly in our first linear algebra primer.

Recall further that row operations correspond to changing a basis for the codomain, and column operations correspond to changing a basis for the domain. For example, the idea of swapping columns $i,j$ in $M$ gives a new matrix which is the representation of $f$ with respect to the (ordered) basis for $V$ which swaps the order of $v_i , v_j$. Similar things happen for all column operations (they all correspond to manipulations of the basis for $V$), while analogously row operations implicitly transform the basis for the codomain. Note, though, that the connection between row operations and transformations of the basis for the codomain are slightly more complicated than they are for the column operations. We will explicitly see how it works later in the post.

And so if we’re working with two maps $A: U \to V$ and $B: V \to W$, and we change a basis for $V$ in $B$ via column reductions, then in order to be consistent, we need to change the basis for $V$ in $A$ via “complementary” row reductions. That is, if we call the change of basis matrix $Q$, then we’re implicitly sticking $Q$ in between the composition $BA$ to get $(BQ)A$. This is not the same map as $BA$, but we can make it the same map by adding a $Q^{-1}$ in the right place:

$\displaystyle BA = B(QQ^{-1})A = (BQ)(Q^{-1}A)$

Indeed, whenever $Q$ is a change of basis matrix so is $Q^{-1}$ (trivially), and moreover the operations that $Q$ performs on the columns of $B$ are precisely the operations that $Q^{-1}$ performs on the rows of $A$ (this is because elementary row operations take different forms when multiplied on the left or right).
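Here is a tiny numpy illustration of this identity, with made-up matrices and a single elementary matrix $Q$:

```python
import numpy

# Made-up matrices for illustration: A represents U -> V and B represents
# V -> W, with the shared space V of dimension 3.
A = numpy.array([[1., 2.], [3., 4.], [5., 6.]])    # 3x2
B = numpy.array([[1., 0., 2.], [0., 1., 1.]])      # 2x3

# Q is elementary: right multiplication B @ Q adds 2 times column 0 of B
# to column 1.
Q = numpy.eye(3)
Q[0, 1] = 2.0

# Its inverse performs the complementary row operation on A: subtracting
# 2 times row 1 from row 0.
Qinv = numpy.linalg.inv(Q)

# The composition is unchanged: BA = (BQ)(Q^{-1}A).
print(numpy.allclose(B.dot(A), B.dot(Q).dot(Qinv.dot(A))))  # True
```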

Coming back to our boundary operators, we want a canonical way to view the image of $\partial_{k+1}$ as sitting inside the kernel of $\partial_k$. If we go ahead and use column reductions to transform $\partial_k$ into a form where the kernel is easy to read off (as the columns consisting entirely of zeroes), then the corresponding row operations, when performed on $\partial_{k+1}$ will tell us exactly the image of $\partial_{k+1}$ inside the kernel of $\partial_k$.

This last point is true precisely because $\textup{im} \partial_{k+1} \subset \textup{ker} \partial_k$. This fact guarantees that the irrelevant rows of the reduced version of $\partial_{k+1}$ are all zero.

Let’s go ahead and see this in action on our two big matrices above. For $\partial_1$, the column reduction matrix is

$\displaystyle A = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & -1 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 0 & -1 & -1 & 0\\ -1 & -1 & -1 & -1 & 0 & 0 & 0 & -1\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$

And the product $\partial_1 A$ is

$\displaystyle \partial_1 A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \end{pmatrix}$

Now the inverse of $A$, which is the corresponding basis change for $\partial_2$, is

$\displaystyle A^{-1} = \begin{pmatrix} -1 & -1 & -1 & -1 & -0 & -0 & -0 & -0\\ 1 & 0 & 0 & 0 & -1 & -1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & -1 & -1\\ 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$

and the corresponding reduced form of $\partial_2$ is

$\displaystyle A^{-1} \partial_2 = \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 1\\ 0 & 1 & 0 & -1\\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix}$

As a side note, we got these matrices by slightly modifying the code from our original post on row reduction to output the change of basis matrix in addition to performing row reduction. It turns out one can implement column reduction as row reduction of the transpose, and the change of basis matrix you get from this process will be the transpose of the change of basis matrix you want (by $(AB)^\textup{T} = B^\textup{T}A^\textup{T}$). Though the code is particularly ad-hoc, we include it with the rest of the code used in this post on this blog’s Github page.

Now let’s inspect the two matrices $\partial_1 A$ and $A^{-1} \partial_2$ more closely. The former has four “pivots” left over, and this corresponds to the rank of the matrix being 4. Moreover, the four basis vectors representing the columns with nonzero pivots, which we’ll call $v_1, v_2, v_3, v_4$ (we don’t care what their values are), span a complementary subspace to the kernel of $\partial_1$. Hence, the remaining four vectors (which we’ll call $v_5, v_6, v_7, v_8$) span the kernel. In particular, this says that the kernel has dimension 4.

On the other hand, we performed the same transformation of the basis of $C_1(X)$ for $\partial_2$. Looking at the matrix that resulted, we see that the first four rows and the last row (representing $v_1, v_2, v_3, v_4, v_8$) are entirely zeros, and so the image of $\partial_2$ intersects their span trivially. The remaining three rows (representing $v_5, v_6, v_7$) have nonzero pivots. This tells us exactly that the image of $\partial_2$ is spanned by $v_5, v_6, v_7$.

And now, the coup de grâce, the quotient to get homology is simply

$\displaystyle \frac{ \textup{span} \left \{ v_5, v_6, v_7, v_8 \right \}}{ \textup{span} \left \{ v_5, v_6, v_7 \right \}} = \textup{span} \left \{ v_8 \right \}$

And the dimension of the homology group is 1, as desired.

## The General Algorithm

It is no coincidence that things worked out as nicely as they did. The process we took of simultaneously rewriting two matrices with respect to a common basis is the bulk of the algorithm to compute homology. Since we’re really only interested in the dimensions of the homology groups, we just need to count pivots. If the number of pivots arising in $\partial_k$ is $y$ and the number of pivots arising in $\partial_{k+1}$ is $z$, and the dimension of $C_k(X)$ is $n$, then the dimension of $H_k(X)$ is exactly

$(n-y) - z = \textup{dim}(\textup{ker} \partial_k) - \textup{dim}(\textup{im}\partial_{k+1})$

And it is no coincidence that the pivots lined up so nicely to allow us to count dimensions this way. It is a minor exercise to prove it formally, but the fact that the composition $\partial_k \partial_{k+1} = 0$ implies that the reduced version of $\partial_{k+1}$ will have an almost reduced row-echelon form (the only difference being the rows of zeros interspersed above, below, and possibly between pivot rows).

As the reader may have guessed at this point, we don’t actually need to compute $A$ and $A^{-1}$. Instead of this, we can perform the column/row reductions simultaneously on the two matrices. The above analysis helped us prove the algorithm works, and with that guarantee we can throw out the analytical baggage and just compute the damn thing.

Indeed, assuming the input is already processed as two matrices representing the boundary operators with respect to the standard bases of the chain groups, computing homology is only slightly more difficult than row reducing in the first place. Putting our homology where our mouth is, we’ve implemented the algorithm in Python. As usual, the entire code used in this post is available on this blog’s Github page.

The first step is writing auxiliary functions to do elementary row and column operations on matrices. For this post, we will do everything in numpy (which makes the syntax shorter than standard Python syntax, but dependent on the numpy library).

import numpy

def rowSwap(A, i, j):
    temp = numpy.copy(A[i, :])
    A[i, :] = A[j, :]
    A[j, :] = temp

def colSwap(A, i, j):
    temp = numpy.copy(A[:, i])
    A[:, i] = A[:, j]
    A[:, j] = temp

def scaleCol(A, i, c):
    A[:, i] *= c * numpy.ones(A.shape[0])

def scaleRow(A, i, c):
    A[i, :] *= c * numpy.ones(A.shape[1])

def colCombine(A, addTo, scaleCol, scaleAmt):
    A[:, addTo] += scaleAmt * A[:, scaleCol]

def rowCombine(A, addTo, scaleRow, scaleAmt):
    A[addTo, :] += scaleAmt * A[scaleRow, :]


From here, the main meat of the algorithm is doing column reduction on one matrix, and applying the corresponding row operations on the other.

def simultaneousReduce(A, B):
    if A.shape[1] != B.shape[0]:
        raise Exception("Matrices have the wrong shape.")

    numRows, numCols = A.shape  # col reduce A

    i, j = 0, 0
    while True:
        if i >= numRows or j >= numCols:
            break

        if A[i][j] == 0:
            nonzeroCol = j
            while nonzeroCol < numCols and A[i, nonzeroCol] == 0:
                nonzeroCol += 1

            if nonzeroCol == numCols:
                i += 1
                continue

            colSwap(A, j, nonzeroCol)
            rowSwap(B, j, nonzeroCol)

        pivot = A[i, j]
        scaleCol(A, j, 1.0 / pivot)
        scaleRow(B, j, 1.0 / pivot)

        for otherCol in range(0, numCols):
            if otherCol == j:
                continue
            if A[i, otherCol] != 0:
                scaleAmt = -A[i, otherCol]
                colCombine(A, otherCol, j, scaleAmt)
                rowCombine(B, j, otherCol, -scaleAmt)

        i += 1; j += 1

    return A, B


This more or less parallels the standard algorithm for row-reduction (with the caveat that all the indices are swapped to do column-reduction). The only somewhat confusing line is the call to rowCombine, which explicitly realizes the corresponding row operation as the inverse of the performed column operation. Note that for row operations, the correspondence between operations on the basis and operations on the rows is not as direct as it is for columns. What’s given above is the true correspondence. Writing down lots of examples will reveal why, and we leave that as an exercise to the reader.

Then the actual algorithm to compute homology is just a matter of counting pivots. Here are two pivot-counting functions in a typical numpy fashion:

def numPivotCols(A):
    z = numpy.zeros(A.shape[0])
    return [numpy.all(A[:, j] == z) for j in range(A.shape[1])].count(False)

def numPivotRows(A):
    z = numpy.zeros(A.shape[1])
    return [numpy.all(A[i, :] == z) for i in range(A.shape[0])].count(False)
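As a quick sanity check (an illustrative example, not from the post’s repository): on a matrix that is already reduced, these simply count the nonzero columns and rows.

```python
import numpy

def numPivotCols(A):
    z = numpy.zeros(A.shape[0])
    return [numpy.all(A[:, j] == z) for j in range(A.shape[1])].count(False)

def numPivotRows(A):
    z = numpy.zeros(A.shape[1])
    return [numpy.all(A[i, :] == z) for i in range(A.shape[0])].count(False)

# A hand-reduced 3x4 matrix with two pivots.
M = numpy.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])
print(numPivotCols(M), numPivotRows(M))  # 2 2
```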


And the final function is just:

def bettiNumber(d_k, d_kplus1):
    A, B = numpy.copy(d_k), numpy.copy(d_kplus1)
    simultaneousReduce(A, B)

    dimKChains = A.shape[1]
    kernelDim = dimKChains - numPivotCols(A)
    imageDim = numPivotRows(B)

    return kernelDim - imageDim


And there we have it! We’ve finally tackled the beast, and written a program to compute algebraic features of a topological space.

The reader may be curious as to why we didn’t come up with a more full-bodied representation of a simplicial complex and write an algorithm which accepts a simplicial complex and computes all of its homology groups. We’ll leave this direct approach as a (potentially long) exercise to the reader, because coming up in this series we are going to do one better. Instead of computing the homology groups of just one simplicial complex by repeating one algorithm many times, we’re going to compute all the homology groups of a whole family of simplicial complexes in a single bound. This family of simplicial complexes will be constructed from a data set, and so, in grandiose words, we will compute the topological features of data.

If it sounds exciting, that’s because it is! We’ll be exploring a cutting-edge research field known as persistent homology, and we’ll see some of the applications of this theory to data analysis.

Until then!

# Row Reduction Over A Field

We’re quite eager to get to applications of algebraic topology to things like machine learning (in particular, persistent homology). Even though there’s a massive amount of theory behind it (and we do plan to cover some of the theory), a lot of the actual computations boil down to working with matrices. Of course, this means we’re in the land of linear algebra; for a refresher on the terminology, see our primers on linear algebra.

In addition to applications of algebraic topology, our work with matrices in this post will allow us to solve important optimization problems, including linear programming. We will investigate these along the way.

## Matrices Make the World Go Round

Fix two vector spaces $V, W$ of finite dimensions $m,n$ (resp), and fix bases for both spaces. Recall that an $n \times m$ matrix uniquely represents a linear map $V \to W$, and moreover all such linear maps arise this way. Our goal is to write these linear maps in as simple a way as possible, so that we can glean useful information from them.

The data we want from a linear map $f$ are: a basis for the kernel of $f$, a basis for the image of $f$, the dimensions of both, and the eigenvalues and eigenvectors of $f$ (which are only defined if, in addition, $V=W$). As we have already seen, just computing the latter gives us a wealth of information about things like the internet and image similarity. In future posts, we will see how the former allows us to infer the shape (topology) of a large data set.

For this article, we will assume our vector spaces lie over the field $\mathbb{R}$, but later we will relax this assumption (indeed, they won’t even be vector spaces anymore!). Luckily for us, the particular choice of bases for $V$ and $W$ is irrelevant. The function $f$ is fixed, and the only thing which changes is the matrix representation of $f$ with respect to our choice of bases. If we pick the right bases, then the pieces of information above are quite easy to determine.

Rigorously, we say two $n \times m$ matrices $A,B$ are row equivalent if there exists an invertible matrix $M$ with $B = MA$. The reader should determine what the appropriate dimensions of this matrix are (hint: it is square). Moreover, we have the following proposition (stated without proof) which characterizes row equivalent matrices:

Proposition: Two matrices are row equivalent if and only if they represent the same linear map $V \to W$ up to a choice of basis for $W$.

The “row” part of row equivalence is rather suggestive. Indeed, it turns out that our desired form $B$ can be achieved by a manipulation of the rows of $A$, which is equivalently the left multiplication by a special matrix $M$.

## Reduced Row Echelon Form, and Elementary Matrices

Before we go on, we should describe the convenient form we want to find. The historical name for it is reduced row echelon form, and it imposes a certain form on the rows of a matrix $A$:

• its nonzero rows are above all of its zero rows,
• the leftmost nonzero entry in each nonzero row is 1,
• the leftmost nonzero entry in a row is strictly to the left of the leftmost nonzero entry in the next row, and
• the leftmost nonzero entry in each nonzero row is the only nonzero entry in its column.

The reduced row echelon form is canonical, meaning it is uniquely determined for any matrix (this is not obvious, but for brevity we refer the reader to an external proof). For example, the following matrix, representing a map $\mathbb{R}^5 \to \mathbb{R}^3$, has the given reduced row echelon form:

$\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 2 & 4 & 5 & 6 \\ 0 & 0 & 2 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 1 & 2 \end{pmatrix}$

The entries which contain a 1 and have zeros elsewhere in their column are called pivots, and this name comes from their use in the algorithm below.

Calling this a “form” of the original matrix is again suggestive: a matrix in reduced row echelon form is just a representation with respect to a different basis (in fact, just a different basis for the codomain $W$). To prove this, we will show a matrix is row equivalent to its reduced row echelon form. But before we do that, we should verify that the reduced row echelon form actually gives us the information we want.

For the rightmost matrix above, and assuming we know the correct choice of basis for $W$ is $w_1, \dots, w_k$, we can determine a basis for the image quite easily. Indeed, if the $j$th column contains a single 1 in the $i$th row, then the vector $f(v_j) = w_i$ is in the image of $f$. Moreover, if we do this for each nonzero row (and because each nonzero row has a pivot) we obtain a basis for the whole image of $f$ as a subset of the $w_1, \dots, w_k$. Indeed, they are linearly independent since they are part of a basis of $W$, and they span the image of $f$ because each basis vector $w_i$ which was not found in the above way corresponds to a row of all zeros. In other words, it is clear from the entries of the reduced row echelon form matrix that no vector $f(v)$ expands as a linear combination of the unchosen $w_i$.

To put a matrix into reduced row echelon form, we allow ourselves three elementary row operations, and give an algorithm describing in what order to perform them. The operations are:

• swap the positions of any two rows,
• multiply any row by a nonzero constant, and
• add a nonzero multiple of one row to another row.

Indeed, we should verify that these operations behave well. We can represent each one by the left multiplication by an appropriate matrix. Swapping the $i$th and $j$th rows corresponds to the identity matrix with the same rows swapped. Multiplying the $i$th row by a constant $c$ corresponds to the identity matrix with $c$ in the $i,i$ entry. And adding $c$ times row $j$ to row $i$ corresponds to the identity matrix with a $c$ in the $i,j$ entry. We call these matrices elementary matrices, and any sequence of elementary row operations corresponds to left multiplication by a product of elementary matrices. As we will see by our algorithm below, these row operations are enough to put any matrix (with entries in a field) into reduced row echelon form, hence proving that all matrices are row equivalent to some matrix in reduced row echelon form.
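To make this concrete, here is a small numpy demonstration (the matrix entries are made up for the example) that each elementary row operation is realized by left multiplication by the corresponding elementary matrix:

```python
import numpy

# A made-up 3x2 matrix; each elementary row operation on it equals left
# multiplication by a 3x3 elementary matrix.
A = numpy.array([[1., 2.], [3., 4.], [5., 6.]])

E_swap = numpy.eye(3)
E_swap[[0, 1]] = E_swap[[1, 0]]   # identity with rows 0 and 1 swapped
print(E_swap.dot(A))              # A with rows 0 and 1 swapped

E_scale = numpy.eye(3)
E_scale[1, 1] = 5.0               # 5 in the (1, 1) entry
print(E_scale.dot(A))             # A with row 1 multiplied by 5

E_add = numpy.eye(3)
E_add[0, 2] = -2.0                # -2 in the (0, 2) entry
print(E_add.dot(A))               # A with -2 times row 2 added to row 0
```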

Before we describe this algorithm, we should make one important construction which will be useful in the future. Fixing the dimension $n$ of our elementary matrices, we note three things: the identity matrix is an elementary matrix, every elementary matrix is invertible, and the inverse of an elementary matrix is again an elementary matrix. In particular, every product of elementary matrices is invertible (the inverse is the product of the inverses in reverse order), and so we can describe the group generated by all elementary matrices. We call this group the general linear group, denoted $\textup{GL}_n( \mathbb{R})$, and note that it has a very important place in the theory of Lie groups, which is (very roughly) the study of continuous symmetry.

It has its name because a matrix is in $\textup{GL}_n(\mathbb{R})$ if and only if it is invertible, which happens precisely when its columns are linearly independent, i.e., when it is row equivalent to the identity matrix. In other words, for any such matrix $A$ there exists a product of elementary matrices $E_1 \dots E_n$ such that $E_1 \dots E_n A = I_n$, and hence $A = E_n^{-1} \dots E_1^{-1}$. So we can phrase the question of whether a matrix is invertible as whether it is in $\textup{GL}_n(\mathbb{R})$, and answer it by finding its reduced row echelon form. So without further ado:

## The Row Reduction Algorithm

Now that we’ve equipped ourselves with the right tools, let’s describe the algorithm which will transform any matrix into reduced row echelon form. We will do it straight in Python, explaining the steps along the way.

def rref(matrix):
    numRows = len(matrix)
    numCols = len(matrix[0])

    i, j = 0, 0

The first part is straightforward: get the dimensions of the matrix and initialize $i,j$ to 0. $i,j$ represent the current row and column under inspection, respectively. We then start a loop:

    while True:
        if i >= numRows or j >= numCols:
            break

        if matrix[i][j] == 0:
            nonzeroRow = i
            while nonzeroRow < numRows and matrix[nonzeroRow][j] == 0:
                nonzeroRow += 1

            if nonzeroRow == numRows:
                j += 1
                continue

Here we check the base case: if our indices have exceeded the bounds of our matrix, then we are done. Next, we need to find a pivot and put it in the right place, and essentially we work by induction on the columns. Since we are working over a field, any nonzero element can be a pivot, as we may just divide the entire row by the value of the chosen entry to get a 1 in the right place. We just need to find a row with a nonzero entry in the current column, and among such rows we pick the one with the smallest index. In other words, we fix the leftmost column, try to find a pivot there by scanning downwards, and if we find none, we increment the column index and begin our search again. Once we find it, we may swap the two rows and save the pivot:

            temp = matrix[i]
            matrix[i] = matrix[nonzeroRow]
            matrix[nonzeroRow] = temp

        pivot = matrix[i][j]
        matrix[i] = [x / pivot for x in matrix[i]]

Once we have found a pivot, we simply need to eliminate the remaining entries in the column. We know this won’t affect any previously inspected columns, because by the inductive hypothesis any entries which are to the left of our pivot are zero.

        for otherRow in range(0, numRows):
            if otherRow == i:
                continue
            if matrix[otherRow][j] != 0:
                matrix[otherRow] = [y - matrix[otherRow][j] * x
                                    for (x, y) in zip(matrix[i], matrix[otherRow])]

        i += 1; j += 1

    return matrix

After zeroing out the entries in the $j$th column, we may start to look for the next pivot, and since it can’t be in the same column or row, we may restrict our search to the sub-matrix starting at the $i+1, j+1$ entry. Once the while loop has terminated, we have processed all pivots, and we are done.

We encourage the reader to work out a few examples of this on small matrices, and modify our program to print out the matrix at each step of modification to verify the result. As usual, the reader may find the entire program on this blog’s Github page.

From here, determining the information we want is just a matter of reading the entries of the matrix and presenting it in the desired way. To determine the change of basis for $W$ necessary to write the matrix as desired, one may modify the above algorithm to accept a second square matrix $Q$ whose rows contain the starting basis (usually, the identity matrix for the standard basis vectors), and apply the same elementary row operations to $Q$ as we do to $A$. The reader should try to prove that this does what it should, and we leave any further notes to a discussion in the comments.
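One possible sketch of that modification (the name rrefWithBasisChange is made up, and here $Q$ starts as the identity matrix, i.e. the standard basis, rather than being passed in): row reduce a copy of the input while mirroring every row operation on $Q$, so that $QA$ equals the reduced form.

```python
import numpy

def rrefWithBasisChange(A):
    # Row reduce a float copy R of A while applying the same row operations
    # to Q (initialized to the identity), so that Q.dot(A) == R at the end.
    R = A.astype(float)
    numRows, numCols = R.shape
    Q = numpy.eye(numRows)
    i, j = 0, 0
    while i < numRows and j < numCols:
        if R[i, j] == 0:
            nonzeroRow = i
            while nonzeroRow < numRows and R[nonzeroRow, j] == 0:
                nonzeroRow += 1
            if nonzeroRow == numRows:
                j += 1
                continue
            R[[i, nonzeroRow]] = R[[nonzeroRow, i]]  # swap rows in both
            Q[[i, nonzeroRow]] = Q[[nonzeroRow, i]]
        pivot = R[i, j]
        R[i] /= pivot                                # scale rows in both
        Q[i] /= pivot
        for otherRow in range(numRows):
            if otherRow != i and R[otherRow, j] != 0:
                c = R[otherRow, j]                   # eliminate in both
                R[otherRow] -= c * R[i]
                Q[otherRow] -= c * Q[i]
        i += 1
        j += 1
    return R, Q

A = numpy.array([[1, 2, 3, 4, 5], [1, 2, 4, 5, 6], [0, 0, 2, 1, 0]])
R, Q = rrefWithBasisChange(A)
print(numpy.allclose(Q.dot(A), R))  # True
```

Running it on the example matrix from earlier reproduces the reduced row echelon form displayed above, along with the matrix $Q$ realizing the change of basis.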

Next time, we’ll relax the assumption that we’re working over a field. This will involve some discussion of rings and $R$-modules, but in the end we will work over very familiar number rings, like the integers $\mathbb{Z}$ and the integers modulo $n$, and our work will be similar to the linear algebra we all know and love.

Until then!