The Two-Dimensional Fourier Transform and Digital Watermarking

We’ve studied the Fourier transform quite a bit on this blog: with four primers and the Fast Fourier Transform algorithm under our belt, it’s about time we opened up our eyes to higher dimensions.

Indeed, in the decades since Cooley & Tukey’s landmark paper, the most interesting applications of the discrete Fourier transform have occurred in dimensions greater than 1. But for all our work we haven’t yet discussed what it means to take an “n-dimensional” Fourier transform. Our past toiling and troubling will pay off, though, because the higher Fourier transform and its 1-dimensional cousin are quite similar. Indeed, the shortest way to describe the n-dimensional transform is as the 1-dimensional transform with inner products of vector variables replacing regular products of variables.

In this post we’ll flesh out these details. We’ll define the multivariable Fourier transform and its discrete partner, implement an algorithm to compute it (FFT-style), and then apply the transform to the problem of digitally watermarking images.

As usual, all the code, images, and examples used in this post are available on this blog’s Github page.

Sweeping Some Details Under the Rug

We spent our first and second primers on Fourier analysis describing the Fourier series in one variable, and taking a limit of the period to get the Fourier transform in one variable. By all accounts, it was a downright mess of notation and symbol manipulation that culminated in the realization that the Fourier series looks a lot like a Riemann sum. As it was in one dimension, so it is in arbitrary dimension; but to save our stamina for the applications we’re going to treat the n-dimensional transform differently. We’ll use the 1-dimensional transform as a model, and magically generalize it to operate on a vector-valued variable. Then the reader will take it on faith that we could achieve the same end as a limit of some kind of multidimensional Fourier series (and all that nonsense with Schwartz functions and tempered distributions is left to the analysts), or if not we’ll provide external notes with the full details.

So we start with a real-valued (or complex-valued) function f : \mathbb{R}^n \to \mathbb{R}, and we write the variable as x = (x_1, \dots, x_n), so that we can stick to using the notation f(x). Rather than think of the components of x as “time variables” as we did in the one-dimensional case, we’ll usually think of x as representing physical space. And so the periodic behavior of the function f represents periodicity in space. On the other hand our transformed variables will be “frequency” in space, and this will correspond to a vector variable \xi = (\xi_1, \dots, \xi_n). We’ll come back to what the heck “periodicity in space” means momentarily.

Remember that in one dimension the Fourier transform was defined by

\displaystyle \mathscr{F}f(s) = \int_{-\infty}^\infty e^{-2\pi ist}f(t) dt.

And its inverse transform was

\displaystyle \mathscr{F}^{-1}g(t) = \int_{-\infty}^\infty e^{2\pi ist}g(s) ds.

Indeed, with the vector x replacing t and \xi replacing s, we have to figure out how to make an analogous definition. The obvious thing to do is to take the place where st is multiplied and replace it with the inner product of x and \xi, which for this post I’ll write x \cdot \xi (usually I write \left \langle x, \xi \right \rangle). This gives us the n-dimensional transform

\displaystyle \mathscr{F}f(\xi) = \int_{\mathbb{R}^n} e^{-2\pi i x \cdot \xi}f(x) dx,

and its inverse

\displaystyle \mathscr{F}^{-1}g(x) = \int_{\mathbb{R}^n} e^{2\pi i x \cdot \xi}g( \xi ) d \xi.

Note that the integral is over all of \mathbb{R}^n. To give a clarifying example, if we are in two dimensions we can write everything out in coordinates: x = (x_1, x_2), \xi = (\xi_1, \xi_2), and the formula for the transform becomes

\displaystyle \mathscr{F}f(\xi_1, \xi_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-2 \pi i (x_1 \xi_1 + x_2 \xi_2)} f(x_1, x_2) dx_1 dx_2.

Now that’s a nasty integral if I’ve ever seen one. But for our purposes in this post, this will be as nasty as it gets, for we’re primarily concerned with image analysis. So representing things as vectors of arbitrary dimension is more compact, and we don’t lose anything for it.

Periodicity in Space? It’s All Mostly the Same

Because arithmetic with vectors and arithmetic with numbers are so similar, it turns out that most of the properties of the 1-dimensional Fourier transform hold in arbitrary dimension. For example, the duality of the Fourier transform and its inverse holds, because for vectors e^{-2 \pi i x \cdot (-\xi)} = e^{2 \pi i x \cdot \xi}. So just like in one dimension, we have

\mathscr{F}f(-\xi) = \mathscr{F}^{-1}f(\xi)

And again we have correspondences between algebraic operations: convolution in the spatial domain corresponds to multiplication in the frequency domain, the spectrum is symmetric about the origin, etc.

At a more geometric level, though, the Fourier transform does the same sort of thing as it did in the one-dimensional case. Again the complex exponentials form the building blocks of any function we want, and performing a Fourier transform on an n-dimensional function decomposes that function into its frequency components. So a function that is perfectly periodic corresponds to a Fourier spectrum that’s perfectly concentrated at a point.

But what the hell, the reader might ask, is ‘periodicity in space’? Since we’re talking about images anyway, the variables we care about (the coordinates of a pixel) are spatial variables. You could, if you were so inclined, have a function of multiple time variables, and to mathematicians a physical interpretation of dimension is just that, an interpretation. But as confusing as it might sound, it’s actually not so hard to understand the Fourier transform when it’s specialized to image analysis. The idea is that complex exponentials e^{\pm 2 \pi i x \cdot \xi} oscillate in the x variable for a fixed \xi (and since \mathscr{F} has \xi as its input, we do want to fix \xi). The brief mathematical analysis goes like this: if we fix \xi then the complex exponential is periodic with magnitudinal peaks along parallel lines spaced out at a distance of 1/ \left \| \xi \right \| apart. In particular any image is a sum of a bunch of these “complex exponential with a fixed \xi” images that look like stripes with varying widths and orientations (what you see here is just the real part of a particular complex exponential).

Any image can be made from a sum of a whole lot of images like this one. This corresponds to a single point in the Fourier spectrum.

Any image can be made from a sum of a whole lot of images like the ones on top. They correspond to single points in the Fourier spectrum (and their symmetries), as on bottom.

What you see on top is an image, and on bottom its Fourier spectrum. That is, each brightly colored pixel corresponds to a point [x_1, x_2] with a large magnitude for that frequency component |\mathscr{F}f[x_1, x_2]|.

It might be a bit surprising that every image can be constructed as a sum of stripey things, but so was it that any sound can be constructed as a sum of sines and cosines. It’s really just a statement about a basis of some vector space of functions. The long version of this story is laid out beautifully in pages 4 – 7 of these notes. The whole set of notes is wonderful, but this section is mathematically tidy and needs no background; the remainder of the notes outline the details about multidimensional Fourier series mentioned earlier, as well as a lot of other things. In higher dimensions the “parallel lines” idea is much the same, but with lines replaced by hyperplanes normal to the given vector.
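
To make the “stripes” concrete, here is a small numpy sketch of ours (not part of the original post) that renders the real part of a single complex exponential for a fixed, made-up frequency vector \xi. Changing \xi changes the width and orientation of the stripes.

import numpy

def stripeImage(shape=(256, 256), xi=(3, 5)):
   # real part of e^{-2 pi i (x . xi)} sampled on a grid; xi is a hypothetical frequency vector
   ys, xs = numpy.meshgrid(numpy.arange(shape[0]), numpy.arange(shape[1]), indexing='ij')
   phase = 2 * numpy.pi * (xi[0] * ys / shape[0] + xi[1] * xs / shape[1])
   return numpy.cos(phase)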

Discretizing the Transform

Recall that for a continuous function f of one variable, we spent a bit of time figuring out how to find a good discrete approximation of f, how to find a good discrete approximation of the Fourier transform \mathscr{F}f, and how to find a quick way to transition between the two. In brief: we approximated f by a vector of samples (f[0], f[1], \dots, f[N-1]), reconstructed a function from those samples (one which agreed with f only at the sampled points), and computed the Fourier transform of that, calling it the discrete Fourier transform, or DFT. We got to this definition, using square brackets to denote list indexing (or vector indexing, whatever):

Definition: Let f = (f[0], \dots, f[N-1]) be a vector in \mathbb{R}^N. Then the discrete Fourier transform of f is defined by the vector (\mathscr{F}f[0], \dots, \mathscr{F}f[N-1]), where

\displaystyle \mathscr{F}f[j] = \sum_{k=0}^{N-1} f[k]e^{-2 \pi i jk/N}

Just as with the one-dimensional case, we can do the same analysis and arrive at a discrete approximation of an n-dimensional function. Instead of a vector it would be an N \times N \times \dots \times N matrix, where there are n factors in the product, one for each variable. In two dimensions, this means the discrete approximation of a function is a matrix of samples taken at evenly-spaced intervals in both directions.

Sticking with two dimensions, the Fourier transform is then a linear operator taking matrices to matrices (which is called a tensor if you want to scare people). It has its own representation like the one above, where each entry is a double sum. In terms of image analysis, we can imagine that each term in the sum requires us to look at every pixel of the original image.

Definition: Let f = (f[s,t]) be a matrix in \mathbb{R}^{N \times M}, where s ranges over 0, \dots, N-1 and t over 0, \dots, M-1. Then the discrete Fourier transform of f is defined by the matrix (\mathscr{F}f[x_1, x_2]), where each entry is given by

\displaystyle \mathscr{F}f[x_1, x_2] = \sum_{s=0}^{N-1} \sum_{t=0}^{M-1} f[s, t] e^{-2 \pi i (s x_1 / N + t x_2 / M)}

In the one-dimensional case the inverse transform had a sign change in the exponent and an extra 1/N normalization factor. Similarly, in two dimensions the inverse transform has a normalization factor of 1/NM (1 over the total number of samples). Again we use a capital F to denote the transformed version of f. The higher dimensional transforms are analogous: you get n sums, one for each component, and the normalization factor is the inverse of the total number of samples.

\displaystyle \mathscr{F}^{-1}F[x_1, x_2] = \frac{1}{NM} \sum_{s=0}^{N-1} \sum_{t=0}^{M-1} F[s,t] e^{2 \pi i (sx_1 / N + tx_2 / M)}

Unfortunately, the world of the DFT disagrees a lot on the choice of normalization factor. It turns out that all that really matters is that the exponent is negated in the inverse, and that the product of the constant terms on both the transform and its inverse is 1/NM. So some people will normalize both the Fourier transform and its inverse by 1/ \sqrt{NM}. The reason for this is that it makes the transform and its inverse more similar-looking (it’s just that, cosmetic). The choice of normalization isn’t particularly important for us, but beware: non-canonical choices are out there, and they do affect formulas by adding multiplicative constants.
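
Before worrying about speed, it can be handy to have a direct (and extremely slow) implementation of the definition to test faster code against. Here is a sketch of ours that evaluates the double sum literally with the normalization conventions above; the inverse differs only by the sign of the exponent and the 1/NM factor.

import cmath

def naiveDFT2d(matrix):
   # evaluate the double sum in the definition directly: O((NM)^2) time, for testing only
   N, M = len(matrix), len(matrix[0])
   return [[sum(matrix[s][t] * cmath.exp(-2j * cmath.pi * (s * x1 / N + t * x2 / M))
                for s in range(N) for t in range(M))
            for x2 in range(M)]
           for x1 in range(N)]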

The Fast Fourier Transform, Revisited

Now one might expect that there is another clever algorithm to drastically reduce the runtime of the 2-dimensional DFT, akin to the fast Fourier transform algorithm (FFT). But actually there is almost no additional insight required to understand the “fast” higher dimensional Fourier transform algorithm, because all the work was done for us in the one dimensional case.

All that we do is realize that each of the inner summations is a 1-dimensional DFT. That is, if we write the inner-most sum as a function of two parameters

\displaystyle g(s, x_2) = \sum_{t=0}^{M-1} f[s,t] e^{-2 \pi i (tx_2 / M)}

then the 2-dimensional FFT is simply

\displaystyle \mathscr{F}f[x_1, x_2] = \sum_{s=0}^{N-1} g(s, x_2) e^{-2 \pi i (sx_1/N)}

But now notice that we can forget that g(s,x_2) was ever a separate, two-dimensional function. Indeed, for a fixed s the sum only involves the single index t, so g(s, \cdot) is precisely the 1-dimensional DFT of the row f[s, \cdot]! And so if we want to compute the 2-dimensional DFT using the 1-dimensional FFT algorithm, we can compute the matrix of 1-dimensional DFT entries for all choices of s, x_2 by fixing each value of s in turn and running FFT on the resulting row of values. If you followed the program from our last FFT post, then the only difficulty is in understanding how the data is shuffled around and which variables are fixed during the computation of the sub-DFTs.

To remedy the confusion, we give an example. Say we have the following 3×3 matrix whose DFT we want to compute. Remember, these values are the sampled values of a 2-variable function.

\displaystyle \begin{pmatrix} f[0,0] & f[0,1] & f[0,2] \\ f[1,0] & f[1,1] & f[1,2] \\ f[2,0] & f[2,1] & f[2,2] \end{pmatrix}

The first step in the algorithm is to fix a choice of row, s, and compute the DFT of the resulting row. So let’s fix s = 0, and then we have the resulting row

\displaystyle f_0 = (f[0,0], f[0,1], f[0,2])

Its DFT is computed (intentionally using the same notation as the inner summation above), as

\displaystyle g[0,x_2] = (\mathscr{F}f_0)[x_2] = \sum_{t=0}^{M-1} f_0[t] e^{- 2 \pi i (t x_2 / M)}

Note that f_0[t] = f[s,t] for our fixed choice of s=0. And so if we do this for all N rows (all 3 rows, in this example), we’ll have performed N FFTs of size M to get a matrix of values

\displaystyle \begin{pmatrix} g[0,0] & g[0,1] & g[0,2] \\ g[1,0] & g[1,1] & g[1,2] \\ g[2,0] & g[2,1] & g[2,2] \end{pmatrix}

Now we want to finish computing the 2-dimensional DFT, and it’s easy: each column now consists of the terms in the outermost sum above (since s is the iterating variable). So if we fix a value of x_2, say x_2 = 1, we get the resulting column

\displaystyle g_1 = (g[0, 1], g[1,1], g[2,1])

and computing a DFT on this column gives

\displaystyle \mathscr{F}f[x_1, 1] = \sum_{s=0}^{N-1} g_1[s] e^{-2 \pi i sx_1 / N}.

Expanding the definition of g as a DFT gets us back to the original formula for the 2-dimensional DFT, so we know we did it right. In the end we get a matrix of the computed DFT values for all x_1, x_2.

Let’s analyze the runtime of this algorithm: in the first round of DFTs we computed N DFTs of size M, requiring a total of O(N M \log M), since we know FFT takes time O(M \log M) for a list of length M. In the second round we did it the other way around, computing M DFTs of size N each, giving a total of

O(NM \log M + NM \log N) = O(NM (\log N + \log M)) = O(NM \log (NM))

In other words, if the size of the image is n = NM, then we are achieving an O(n \log n)-time algorithm, which was precisely the speedup that the FFT algorithm gave us for one-dimension. We also know a lower bound on this problem: we can’t do better than NM since we have to look at every pixel at least once. So we know that we’re only a logarithmic factor away from a trivial lower bound. And indeed, all other known DFT algorithms have the same runtime. Without any assumptions on the input data (or any parallelization), nobody knows of a faster algorithm.

Now let’s turn to the code. If we use our FFT algorithm from last time, the pure Python one (read: very slow), then we can implement the 2D Fourier transform in just two lines of Python code. Full disclosure: we left out some numpy stuff in this code for readability. You can view the entire source file on this blog’s Github page.

def fft2d(matrix):
   # first, a 1-dimensional FFT of each row
   fftRows = [fft(row) for row in matrix]

   # then a 1-dimensional FFT of each column, done by transposing, FFT-ing the rows, and transposing back
   return transpose([fft(row) for row in transpose(fftRows)])

And we can test it on a simple matrix with one nonzero value in it:

A = [[0,0,0,0], [0,1,0,0], [0,0,0,0], [0,0,0,0]]
for row in fft2d(A):
   print(', '.join(['%.3f + %.3fi' % (x.real, x.imag) for x in row]))

The output is (reformatted in LaTeX, obviously):

\displaystyle \begin{pmatrix} 1 & -i & -1 & i \\ -i & -1 & i & 1 \\ -1 & i & 1 & -i \\ i & 1 & -i & -1 \end{pmatrix}

The reader can verify by hand that this is correct (there’s only one nonzero term in the double sum, so it just boils down to figuring out the complex exponential e^{-2 \pi i (x_1 + x_2)/4}). We leave it as an additional exercise to the reader to implement the inverse transform, as well as to generalize this algorithm to higher dimensional DFTs.
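
For readers who want a nudge on the first exercise (this is just one possible approach, not the only one): the inverse can be computed with the forward transform by conjugating, transforming, conjugating again, and dividing by the number of samples, since \mathscr{F}^{-1}f = \overline{\mathscr{F}\overline{f}} / NM.

def ifft2d(matrix):
   # one way to get the inverse from the forward transform: conjugate, transform, conjugate, rescale
   conjugated = [[complex(x).conjugate() for x in row] for row in matrix]
   transformed = fft2d(conjugated)
   N, M = len(matrix), len(matrix[0])
   return [[x.conjugate() / (N * M) for x in row] for row in transformed]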

Some Experiments and Animations

As we did with the 1-dimensional FFT, we’re now going to switch to using an industry-strength FFT algorithm for the applications. We’ll be using the numpy library and its “fft2” function, along with scipy’s ndimage module for image manipulation. Getting all of this set up was a nightmare (thank goodness for people who guide users like me through this stuff, but even then the headache seemed unending!). As usual, all of the code and images used in the making of this post are available on this blog’s Github page.

And so we can start playing with a sample image, a still from one of my favorite television shows:

sherlock

The Fourier transform of this image (after we convert it to grayscale) can be computed in Python:

import numpy
from scipy import ndimage, misc

def fourierSpectrumExample(filename):
   A = ndimage.imread(filename, flatten=True)  # load the image and flatten it to grayscale
   unshiftedfft = numpy.fft.fft2(A)
   # take a logarithm of the magnitudes (adding one to avoid log(0)) so the plot has reasonable contrast
   spectrum = numpy.log10(numpy.absolute(unshiftedfft) + numpy.ones(A.shape))
   misc.imsave("%s-spectrum-unshifted.png" % (filename.split('.')[0]), spectrum)

With the result:

sherlock-spectrum-unshifted

The Fourier spectrum of Sherlock and Watson (and London).

A few notes: we use the ndimage library to load the image and flatten the colors to grayscale. Then, after we compute the spectrum, we shift the values up by one and take a logarithm. This is because the raw spectrum values are too massive; plotting them without modification makes the image contrast too high.

Something is odd, though, because the brightest regions are on the edges of the image, where we might expect the highest-frequency elements to be. Actually, it turns out that a raw DFT (as computed by numpy, anyhow) is “shifted.” That is, the indices are much like they were in our original FFT post, so that the “center” of the spectrum (the lowest frequency component) is actually in the corner of the image array.

The numpy folks have a special function designed to alleviate this called fftshift. Applying it before we plot the image gives the following spectrum:

sherlock-spectrum

Now that’s more like it. For more details on what’s going on with shifting and how to use the shifting functions, see this matlab thread. (As a side note, the “smudges” in this image are interesting. We wonder what property of the original image contributes to the smudges.)
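
For concreteness, here is a sketch of the same spectrum computation with the shift applied (a small variant of ours; the actual script on this blog’s Github page may differ slightly).

def shiftedFourierSpectrumExample(filename):
   A = ndimage.imread(filename, flatten=True)
   shiftedFFT = numpy.fft.fftshift(numpy.fft.fft2(A))  # move the low frequencies to the center
   spectrum = numpy.log10(numpy.absolute(shiftedFFT) + numpy.ones(A.shape))
   misc.imsave("%s-spectrum.png" % (filename.split('.')[0]), spectrum)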

Shifted or unshifted, this image represents the frequency spectrum of the image. In other words, we could take the inverse DFT of each pixel (and its symmetric partner) of this image separately, add them all together, and get back to our original image! We did just that using a different image (one of size 266 x 189, requiring a mere 25137 frequency components), to produce this video of the process:

Many thanks to James Hance for his relentlessly cheerful art (I have a reddish version of this particular masterpiece on my bedroom wall).

For the interested reader, I followed this youtube video’s recommended workflow to make the time-lapsed movie, along with some additional steps to make the videos play side by side. It took quite a while to generate and process the images, and the frames take up a lot of space. So instead of storing all the frames, the interested reader may find the script used to generate the frames on this blog’s Github page (along with all of the rest of the code used in this blog post).

Digital Watermarking

Now we turn to the main application of Fourier transforms in this post, the task of adding an invisible digital watermark to an image. Just in case the reader lives in a cave, a watermark is a security device used to protect the ownership or authenticity of a particular good. Usually they’re used on money to prevent counterfeits, but they’re often applied to high-resolution images on the web to protect copyrights. But aside from protecting existing copyrights, watermarks as they’re used today are ugly, and mostly serve to prevent people from taking the image (paid for or not) in the first place. Here’s an example from a big proponent of ugly watermarks, Shutterstock.com.

stock-photo

Now if you were in the business of copyright litigation, you’d make a lot of money by suing people who took your clients’ images without permission. So rather than prevent people from stealing in the first place, you could put an invisible watermark into all of your images and then crawl the web looking for stolen images with your watermark. It would be easy enough to automate (Google already did most of the work for you, if you just want to use Google’s search by image feature).

Now I’m more on the side of Fair Use For All, so I wouldn’t hope for a company to actually implement this and make using the internet that much scarier of a place. But the idea makes for an interesting thought experiment and blog post. The idea is simply to modify the spectrum of an image by adding in small, artificial frequency components. That is, the watermarked image will look identical to the original image to a human, but the Fourier spectrum will contain suspicious entries that we can extract if we know where to look.

Implementing the watermarking feature is quite easy, so let’s do that first. Let’s work again with James Hance’s fine artwork.

hance-up-sw-bw

Let’s call our image’s pixel matrix A and say we’re working with grayscale images for simplicity (for color, we just do the same thing to all three color channels). Then we can define a watermark matrix W by the following procedure:

  1. Pick a radius r, a length L, a watermark strength \alpha, and a secret key k.
  2. Using k as a seed to a random number generator, define a random binary vector v of length L.
  3. Pick a subset S of points on the circle of radius r centered at the image’s center, where each candidate point is chosen or rejected based on the entries of v.
  4. Let W be the matrix of all zeros (of the same dimensions as A) with 1’s in the entries of S.
  5. Compute the watermarked image as \mathscr{F}^{-1}(\mathscr{F}(A) + \alpha W). That is, compute the DFT of A, add \alpha W to it, and then compute the inverse Fourier transform of the result.

The code for this is simple enough. To create a random vector:

import random
def randomVector(seed, length):
   random.seed(seed)  # seed the generator with the secret key so the vector is reproducible
   return [random.choice([0,1]) for _ in range(length)]

To make the watermark (and flesh out all of the technical details of how it’s done):

import math

def makeWatermark(imageShape, radius, secretKey, vectorLength=50):
   watermark = numpy.zeros(imageShape)
   center = (int(imageShape[0] / 2) + 1, int(imageShape[1] / 2) + 1)

   vector = randomVector(secretKey, vectorLength)

   # lay the bits of the random vector out on a circle of the given radius around the center
   x = lambda t: center[0] + int(radius * math.cos(t * 2 * math.pi / vectorLength))
   y = lambda t: center[1] + int(radius * math.sin(t * 2 * math.pi / vectorLength))
   indices = [(x(t), y(t)) for t in range(vectorLength)]

   for i,location in enumerate(indices):
      watermark[location] = vector[i]

   return watermark

We use the usual parameterization of the circle as t \mapsto (\cos(2 \pi t / n), \sin(2 \pi t / n)) scaled to the appropriate radius. Here’s what the watermark looks like as a spectrum:

watermark-spectrum

It’s hard to see the individual pixels, so click it to enlarge.

And then applying a given watermark to an image is super simple.

from numpy.fft import fft2, fftshift, ifft2, ifftshift

def applyWatermark(imageMatrix, watermarkMatrix, alpha):
   # add the scaled watermark to the (shifted) Fourier spectrum, then transform back
   shiftedDFT = fftshift(fft2(imageMatrix))
   watermarkedDFT = shiftedDFT + alpha * watermarkMatrix
   watermarkedImage = ifft2(ifftshift(watermarkedDFT))

   return watermarkedImage
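
To tie the pieces together, here is a hypothetical end-to-end usage of the two functions above (the filename, radius, key, and \alpha are all made up for illustration; it assumes the same numpy/scipy imports as the earlier snippets).

if __name__ == "__main__":
   A = ndimage.imread("hance-up-sw-bw.png", flatten=True)  # hypothetical filename
   watermark = makeWatermark(A.shape, radius=180, secretKey=5, vectorLength=50)
   watermarked = applyWatermark(A, watermark, alpha=5000)
   # the inverse FFT is complex-valued; the imaginary parts are numerically negligible
   misc.imsave("hance-watermarked.png", numpy.real(watermarked))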

And that’s all there is to it! One might wonder how the choice of \alpha affects the intensity of the watermark, and indeed here we show a few example values of this method applied to Hance’s piece:

Click to enlarge.

Click to enlarge. The effects are most visible in the rightmost image where alpha = 1,000,000

It appears that it’s not until \alpha becomes egregiously large (over 10,000) that we visibly notice the effects. This could be in part due to the fact that this is an image of a canvas (which has lots of small textures in the background). But it’s good to keep in mind the range of acceptable values when designing a decoding mechanism.

Indeed, a decoding mechanism is conceptually much messier; it’s the art to the encoding mechanism’s science. This paper details one possible way to do it, which is essentially to scale everything up or down to 512×512 pixels and try circles of every possible radius until you find one (or don’t) which is statistically similar to your random vector. And note that since we have the secret key we can generate the exact same random vector. So what the author of that paper suggests is to extract each circle of pixels from the Fourier spectrum, treating it as a single vector with first entry at angle 0. Then you do some statistical magic (compute cross-correlation or some other similarity measure) between the extracted pixels and your secret-key-generated random vector. If they’re sufficiently similar, then you’ve found your watermark, and otherwise there’s no watermark present.

Doing this only requires a few extra lines that aren’t present in the code we’ve already presented in this article (numpy does cross-correlation for you), so we leave it as an exercise to the reader: write a program that determines if an image contains our watermark, and test the algorithm on various \alpha and with modifications of the image like rotation, scaling, cropping, and jpeg compression. Part of the benefit of Fourier-based techniques is the resilience of the spectrum to mild applications of these transformations.
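
As a starting point for that exercise, here is a rough sketch of ours of the simplest version of the decoding step (not the full scheme from the paper): it assumes we already know the radius, extracts the ring of spectrum magnitudes where the watermark was embedded, and compares it to the secret-key-generated vector with a normalized correlation. The threshold is an arbitrary placeholder, and it reuses the imports and helper functions from the snippets above.

def detectWatermark(imageMatrix, radius, secretKey, vectorLength=50, threshold=0.5):
   vector = numpy.array(randomVector(secretKey, vectorLength), dtype=float)
   spectrum = numpy.absolute(fftshift(fft2(imageMatrix)))

   # extract the ring of pixels where makeWatermark would have placed the bits
   center = (int(imageMatrix.shape[0] / 2) + 1, int(imageMatrix.shape[1] / 2) + 1)
   x = lambda t: center[0] + int(radius * math.cos(t * 2 * math.pi / vectorLength))
   y = lambda t: center[1] + int(radius * math.sin(t * 2 * math.pi / vectorLength))
   extracted = numpy.array([spectrum[x(t), y(t)] for t in range(vectorLength)])

   # normalized cross-correlation between the extracted ring and the secret vector
   extracted = (extracted - extracted.mean()) / (extracted.std() + 1e-10)
   vector = (vector - vector.mean()) / (vector.std() + 1e-10)
   return numpy.dot(extracted, vector) / vectorLength > threshold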

Next time we’ll use the Fourier transform to do other cool things to images, like designing filters and combining images in interesting ways.

Until then!


Computing Homology

Update: the mistakes made in the code posted here are fixed and explained in a subsequent post (one minor code bug was fixed here, and a less minor conceptual bug is fixed in the linked post).

In our last post in this series on topology, we defined the homology group. Specifically, we built up a topological space as a simplicial complex (a mess of triangles glued together), we defined an algebraic way to represent collections of simplices called chains as vectors in a vector space, we defined the boundary homomorphism \partial_k as a linear map on chains, and finally defined the homology groups as the quotient vector spaces

\displaystyle H_k(X) = \frac{\textup{ker} \partial_k}{\textup{im} \partial_{k+1}}.

The number of holes in X was just the dimension of this quotient space.

In this post we will be quite a bit more explicit. Because the chain groups are vector spaces and the boundary mappings are linear maps, they can be represented as matrices whose dimensions depend on our simplicial complex structure. Better yet, if we have explicit representations of our chains by way of a basis, then we can use row-reduction techniques to write the matrix in a standard form.

Of course the problem arises when we want to work with two matrices simultaneously (to compute the kernel-mod-image quotient above). This is not computationally any more difficult, but it requires some theoretical fiddling. We will need to dip a bit deeper into our linear algebra toolboxes to see how it works, so the rusty reader should brush up on their linear algebra before continuing (or at least take some time to sort things out if or when confusion strikes).

Without further ado, let’s do an extended example and work our ways toward a general algorithm. As usual, all of the code used for this post is available on this blog’s Github page.

Two Big Matrices

Recall our example simplicial complex from last time.

circle-wedge-sphere

We will compute H_1 of this simplicial complex (which we saw last time was \mathbb{Q}) in a more algorithmic way than we did last time.

Once again, we label the vertices 0-4 so that the extra “arm” has vertex 4 in the middle, and its two endpoints are 0 and 2. This gave us orientations on all of the simplices, and the following chain groups. Since the vertex labels (and ordering) are part of the data of a simplicial complex, we have made no choices in writing these down.

\displaystyle C_0(X) = \textup{span} \left \{ 0,1,2,3,4 \right \}

\displaystyle C_1(X) = \textup{span} \left \{ [0,1], [0,2], [0,3], [0,4], [1,2], [1,3],[2,3],[2,4] \right \}

\displaystyle C_2(X) = \textup{span} \left \{ [0,1,2], [0,1,3], [0,2,3], [1,2,3] \right \}

Now given our known definitions of \partial_k as an alternating sum from last time, we can give a complete specification of the boundary map as a matrix. For \partial_1, this would be

\displaystyle \partial_1 = \bordermatrix{  & [0,1] & [0,2] & [0,3] & [0,4] & [1,2] & [1,3] & [2,3] & [2,4] \cr  0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0\cr  1 & 1 & 0 & 0 & 0 & -1 & -1 & 0 & 0\cr  2 & 0 & 1 & 0 & 0 & 1 & 0 & -1 & -1 \cr  3 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \cr  4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 },

where the row labels are the basis for C_0(X) and the column labels are the basis for C_1(X). Similarly, \partial_2 is

\displaystyle \partial_2 = \bordermatrix{  & [0,1,2] & [0,1,3] & [0,2,3] & [1,2,3] \cr  [0,1] & 1 & 1 & 0 & 0\cr  [0,2] & -1 & 0 & 1 & 0\cr  [0,3] & 0 & -1 & -1 & 0\cr  [0,4] & 0 & 0 & 0 & 0\cr  [1,2] & 1 & 0 & 0 & 1\cr  [1,3] & 0 & 1 & 0 & -1\cr  [2,3] & 0 & 0 & 1 & 1\cr  [2,4] & 0 & 0 & 0 & 0}

The reader is encouraged to check that these matrices are written correctly by referring to the formula for \partial as given last time.

Remember the crucial property of \partial, that \partial^2 = \partial_k \partial_{k+1} = 0. Indeed, the composition of the two boundary maps just corresponds to the matrix product of the two matrices, and one can verify by hand that the above two matrices multiply to the zero matrix.
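
For readers who would rather let the computer do the arithmetic, here is a quick sanity check of ours in numpy; the two arrays are just the matrices displayed above.

import numpy

bd1 = numpy.array([
   [-1, -1, -1, -1,  0,  0,  0,  0],
   [ 1,  0,  0,  0, -1, -1,  0,  0],
   [ 0,  1,  0,  0,  1,  0, -1, -1],
   [ 0,  0,  1,  0,  0,  1,  1,  0],
   [ 0,  0,  0,  1,  0,  0,  0,  1]])

bd2 = numpy.array([
   [ 1,  1,  0,  0],
   [-1,  0,  1,  0],
   [ 0, -1, -1,  0],
   [ 0,  0,  0,  0],
   [ 1,  0,  0,  1],
   [ 0,  1,  0, -1],
   [ 0,  0,  1,  1],
   [ 0,  0,  0,  0]])

print(numpy.all(bd1.dot(bd2) == 0))  # True: the image of bd2 lies in the kernel of bd1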

We know from basic linear algebra how to compute the kernel of a linear map expressed as a matrix: column reduce and inspect the columns of zeros. Since the process of column reducing is really a change of basis, we can encapsulate the reduction inside a single invertible matrix A which, when \partial is multiplied by it on the right, gives us the reduced form of the latter. So write the reduced form of \partial_1 as \partial_1 A.

However, now we’re using two different sets of bases for the shared vector space involved in \partial_1 and \partial_2. In general, it will no longer be the case that \partial_kA\partial_{k+1} = 0. The way to alleviate this is to perform the “corresponding” change of basis in \partial_{k+1}. To make this idea more transparent, we return to the basics.

Changing Two Matrices Simultaneously

Recall that a matrix M represents a linear map between two vector spaces f : V \to W. The actual entries of M depend crucially on the choice of a basis for the domain and codomain. Indeed, if v_i form a basis for V and w_j for W, then the k-th column of the matrix representation M is defined to be the coefficients of the representation of f(v_k) in terms of the w_j. We hope to have nailed this concept down firmly in our first linear algebra primer.

Recall further that row operations correspond to changing a basis for the codomain, and column operations correspond to changing a basis for the domain. For example, the idea of swapping columns i,j in M gives a new matrix which is the representation of f with respect to the (ordered) basis for V which swaps the order of v_i , v_j. Similar things happen for all column operations (they all correspond to manipulations of the basis for V), while analogously row operations implicitly transform the basis for the codomain. Note, though, that the connection between row operations and transformations of the basis for the codomain are slightly more complicated than they are for the column operations. We will explicitly see how it works later in the post.

And so if we’re working with two maps A: U \to V and B: V \to W, and we change a basis for V in B via column reductions, then in order to be consistent, we need to change the basis for V in A via “complementary” row reductions. That is, if we call the change of basis matrix Q, then we’re implicitly sticking Q in between the composition BA to get (BQ)A. This is not the same map as BA, but we can make it the same map by adding a Q^{-1} in the right place:

\displaystyle BA = B(QQ^{-1})A = (BQ)(Q^{-1}A)

Indeed, whenever Q is a change of basis matrix so is Q^{-1} (trivially), and moreover the operations that Q performs on the columns of B are precisely the operations that Q^{-1} performs on the rows of A (this is because elementary row operations take different forms when multiplied on the left or right).

Coming back to our boundary operators, we want a canonical way to view the image of \partial_{k+1} as sitting inside the kernel of \partial_k. If we go ahead and use column reductions to transform \partial_k into a form where the kernel is easy to read off (as the columns consisting entirely of zeroes), then the corresponding row operations, when performed on \partial_{k+1} will tell us exactly the image of \partial_{k+1} inside the kernel of \partial_k.

This last point is true precisely because \textup{im} \partial_{k+1} \subset \textup{ker} \partial_k. This fact guarantees that the irrelevant rows of the reduced version of \partial_{k+1} are all zero.

Let’s go ahead and see this in action on our two big matrices above. For \partial_1, the column reduction matrix is

\displaystyle A =  \begin{pmatrix}  0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\  0 & 0 & 1 & 0 & -1 & 0 & 1 & 1\\  0 & 0 & 0 & 1 & 0 & -1 & -1 & 0\\  -1 & -1 & -1 & -1 & 0 & 0 & 0 & -1\\  0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\  0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\  0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\  0 & 0 & 0 & 0 & 0 & 0 & 0 & 1  \end{pmatrix}

And the product \partial_1 A is

\displaystyle \partial_1 A =  \begin{pmatrix}  1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\  0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\  0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\  0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\  -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0  \end{pmatrix}

Now the inverse of A, which is the corresponding basis change for \partial_2, is

\displaystyle A^{-1} =  \begin{pmatrix}  -1 & -1 & -1 & -1 & -0 & -0 & -0 & -0\\  1 & 0 & 0 & 0 & -1 & -1 & 0 & 0\\  0 & 1 & 0 & 0 & 1 & 0 & -1 & -1\\  0 & 0 & 1 & 0 & 0 & 1 & 1 & 0\\  0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\  0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\  0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\  0 & 0 & 0 & 0 & 0 & 0 & 0 & 1  \end{pmatrix}

and the corresponding reduced form of \partial_2 is

\displaystyle A^{-1} \partial_2 =  \begin{pmatrix}  0 & 0 & 0 & 0\\  0 & 0 & 0 & 0\\  0 & 0 & 0 & 0\\  0 & 0 & 0 & 0\\  1 & 0 & 0 & 1\\  0 & 1 & 0 & -1\\  0 & 0 & 1 & 1\\  0 & 0 & 0 & 0  \end{pmatrix}

As a side note, we got these matrices by slightly modifying the code from our original post on row reduction to output the change of basis matrix in addition to performing row reduction. It turns out one can implement column reduction as row reduction of the transpose, and the change of basis matrix you get from this process will be the transpose of the change of basis matrix you want (by (AB)^\textup{T} = (B^\textup{T}A^\textup{T})). Though the code is particularly ad-hoc, we include it with the rest of the code used in this post on this blog’s Github page.

Now let’s inspect the two matrices \partial_1 A and A^{-1} \partial_2 more closely. The former has four “pivots” left over, and this corresponds to the rank of the matrix being 4. Moreover, the four basis vectors representing the columns with nonzero pivots, which we’ll call v_1, v_2, v_3, v_4 (we don’t care what their values are), span a complementary subspace to the kernel of \partial_1. Hence, the remaining four vectors (which we’ll call v_5, v_6, v_7, v_8) span the kernel. In particular, this says that the kernel has dimension 4.

On the other hand, we performed the same transformation of the basis of C_1(X) for \partial_2. Looking at the matrix that resulted, we see that the first four rows and the last row (representing v_1, v_2, v_3, v_4, v_8) are entirely zeros, and so the image of \partial_2 intersects their span trivially, while the remaining three rows (representing v_5, v_6, v_7) have nonzero pivots. This tells us exactly that the image of \partial_2 is spanned by v_5, v_6, v_7.

And now, the coup de grâce, the quotient to get homology is simply

\displaystyle \frac{ \textup{span} \left \{ v_5, v_6, v_7, v_8 \right \}}{ \textup{span} \left \{ v_5, v_6, v_7 \right \}} = \textup{span} \left \{ v_8 \right \}

And the dimension of the homology group is 1, as desired.

The General Algorithm

It is no coincidence that things worked out as nicely as they did. The process we took of simultaneously rewriting two matrices with respect to a common basis is the bulk of the algorithm to compute homology. Since we’re really only interested in the dimensions of the homology groups, we just need to count pivots. If the number of pivots arising in \partial_k is y and the number of pivots arising in \partial_{k+1} is z, and the dimension of C_k(X) is n, then the dimension of H_k(X) is exactly

(n-y) - z = \textup{dim}(\textup{ker} \partial_k) - \textup{dim}(\textup{im}\partial_{k+1})

And it is no coincidence that the pivots lined up so nicely to allow us to count dimensions this way. It is a minor exercise to prove it formally, but the fact that the composition \partial_k \partial_{k+1} = 0 implies that the reduced version of \partial_{k+1} will have an almost reduced row-echelon form (the only difference being the rows of zeros interspersed above, below, and possibly between pivot rows).

As the reader may have guessed at this point, we don’t actually need to compute A and A^{-1}. Instead of this, we can perform the column/row reductions simultaneously on the two matrices. The above analysis helped us prove the algorithm works, and with that guarantee we can throw out the analytical baggage and just compute the damn thing.

Indeed, assuming the input is already processed as two matrices representing the boundary operators with respect to the standard bases of the chain groups, computing homology is only slightly more difficult than row reducing in the first place. Putting our homology where our mouth is, we’ve implemented the algorithm in Python. As usual, the entire code used in this post is available on this blog’s Github page.

The first step is writing auxiliary functions to do elementary row and column operations on matrices. For this post, we will do everything in numpy (which makes the syntax shorter than standard Python syntax, but dependent on the numpy library).

import numpy

def rowSwap(A, i, j):
   temp = numpy.copy(A[i, :])
   A[i, :] = A[j, :]
   A[j, :] = temp

def colSwap(A, i, j):
   temp = numpy.copy(A[:, i])
   A[:, i] = A[:, j]
   A[:, j] = temp

def scaleCol(A, i, c):
   A[:, i] *= c*numpy.ones(A.shape[0])

def scaleRow(A, i, c):
   A[i, :] *= c*numpy.ones(A.shape[1])

def colCombine(A, addTo, scaleCol, scaleAmt):
   A[:, addTo] += scaleAmt * A[:, scaleCol]

def rowCombine(A, addTo, scaleRow, scaleAmt):
   A[addTo, :] += scaleAmt * A[scaleRow, :]

From here, the main meat of the algorithm is doing column reduction on one matrix, and applying the corresponding row operations on the other.

def simultaneousReduce(A, B):
   if A.shape[1] != B.shape[0]:
      raise Exception("Matrices have the wrong shape.")

   numRows, numCols = A.shape # col reduce A

   i,j = 0,0
   while True:
      if i >= numRows or j >= numCols:
         break

      if A[i][j] == 0:
         # look for a column to the right with a nonzero entry in row i to swap into position j
         nonzeroCol = j
         while nonzeroCol < numCols and A[i,nonzeroCol] == 0:
            nonzeroCol += 1

         if nonzeroCol == numCols:
            i += 1
            continue

         colSwap(A, j, nonzeroCol)
         rowSwap(B, j, nonzeroCol)

      pivot = A[i,j]
      scaleCol(A, j, 1.0 / pivot)
      scaleRow(B, j, 1.0 / pivot)

      for otherCol in range(0, numCols):
         if otherCol == j:
            continue
         if A[i, otherCol] != 0:
            scaleAmt = -A[i, otherCol]
            colCombine(A, otherCol, j, scaleAmt)
            rowCombine(B, j, otherCol, -scaleAmt)  # the inverse of the column operation, applied to the rows of B

      i += 1; j+= 1

   return A,B

This more or less parallels the standard algorithm for row-reduction (with the caveat that all the indices are swapped to do column-reduction). The only somewhat confusing line is the call to rowCombine, which explicitly realizes the corresponding row operation as the inverse of the performed column operation. Note that for row operations, the correspondence between operations on the basis and operations on the rows is not as direct as it is for columns. What’s given above is the true correspondence. Writing down lots of examples will reveal why, and we leave that as an exercise to the reader.

Then the actual algorithm to compute homology is just a matter of counting pivots. Here are two pivot-counting functions in a typical numpy fashion:

def numPivotCols(A):
   # count the columns of A that are not identically zero
   z = numpy.zeros(A.shape[0])
   return [numpy.all(A[:, j] == z) for j in range(A.shape[1])].count(False)

def numPivotRows(A):
   # count the rows of A that are not identically zero
   z = numpy.zeros(A.shape[1])
   return [numpy.all(A[i, :] == z) for i in range(A.shape[0])].count(False)

And the final function is just:

def bettiNumber(d_k, d_kplus1):
   A, B = numpy.copy(d_k), numpy.copy(d_kplus1)
   simultaneousReduce(A, B)

   dimKChains = A.shape[1]
   kernelDim = dimKChains - numPivotCols(A)  # dim ker(d_k) = (number of k-chains) - rank(d_k)
   imageDim = numPivotRows(B)                # dim im(d_{k+1})

   return kernelDim - imageDim
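
As a hypothetical usage example, we can feed it the two boundary matrices bd1 and bd2 entered earlier in this post, converted to floats since the reduction divides by pivots. By the hand computation above we expect a first Betti number of 1 (keeping in mind the caveat in the update note at the top of this post).

print(bettiNumber(bd1.astype(float), bd2.astype(float)))  # expected: 1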

And there we have it! We’ve finally tackled the beast, and written a program to compute algebraic features of a topological space.

The reader may be curious as to why we didn’t come up with a more full-bodied representation of a simplicial complex and write an algorithm which accepts a simplicial complex and computes all of its homology groups. We’ll leave this direct approach as a (potentially long) exercise to the reader, because coming up in this series we are going to do one better. Instead of computing the homology groups of just one simplicial complex by repeating one algorithm many times, we’re going to compute all the homology groups of a whole family of simplicial complexes in a single bound. This family of simplicial complexes will be constructed from a data set, and so, in grandiose words, we will compute the topological features of data.

If it sounds exciting, that’s because it is! We’ll be exploring a cutting-edge research field known as persistent homology, and we’ll see some of the applications of this theory to data analysis.

Until then!

Double Angle Trigonometric Formulas

Problem: Derive the double angle identities

\sin(2\theta) = 2\sin(\theta)\cos(\theta)\\ \cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)

Solution: Recall from linear algebra how one rotates a point in the plane. The matrix of rotation (derived by seeing where (1,0) and (0,1) go under a rotation by \theta, and writing those coordinates in the columns) is

A = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}

Next, note that to rotate a point twice by \theta, we simply multiply the point (as a vector) by A twice. That is, multiply by A^2:

AAv = A^2v

Computing A^2 gives the following matrix:

A^2 = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & -2\sin(\theta)\cos(\theta) \\ 2\sin(\theta)\cos(\theta) & \cos^2(\theta) - \sin^2(\theta) \end{pmatrix}

But rotating twice by \theta is the same as rotating once by 2\theta, so we have the equality:

\begin{pmatrix} \cos(2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta) \end{pmatrix} = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & -2\sin(\theta)\cos(\theta) \\ 2\sin(\theta)\cos(\theta) & \cos^2(\theta) - \sin^2(\theta) \end{pmatrix}

The matrices are equal, so they must be equal entrywise, giving the identities we desire. \square
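
For the skeptical, here is a quick numerical sanity check of the matrix identity (a small addition of ours) using numpy.

import numpy

def rotation(theta):
   return numpy.array([[numpy.cos(theta), -numpy.sin(theta)],
                       [numpy.sin(theta),  numpy.cos(theta)]])

theta = 0.7  # any sample angle will do
print(numpy.allclose(rotation(theta).dot(rotation(theta)), rotation(2 * theta)))  # True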

Discussion: There are (painful, messy) ways to derive these identities by drawing triangles on the unit circle and cultishly chanting “soh-cah-toa.” The key idea in this proof is that one might study geometric transformations, and this is a truly mature viewpoint of mathematics. Specifically, over the last two hundred years the field of mathematics has changed focus from the study of mathematical “things” to the study of transformations of mathematical things. This proof is an elementary example of the power such a perspective can provide. If you want to be really high-brow, start asking about transformations of transformations of things, and transformations of those transformations, and recurse until you’re doing something original.

Inner Product Spaces – A Primer

This primer is a precursor to a post on eigenfaces, a technique for facial recognition. For a more basic introduction to linear algebra, see our first primer on the subject.

Motivations

Vector spaces alone are not enough to do a lot of the interesting things we’d like them to do. Since a vector space is a generalization of Euclidean space, it is natural for us to investigate more specific types of vector spaces which are more akin to Euclidean space. In particular, we want to include the notion of a dot product. By admitting additional structure to a vector space, we may perform more computations, and hopefully get more interesting results.

Again, since we are developing mathematics for use in computing, all vector spaces in this post are finite-dimensional, and the field under question is always \mathbb{R} or \mathbb{C}, but usually the former. It suffices to recognize that there is a vast amount of work on (and results about) infinite-dimensional spaces, but we will not need it here.

Inner Product Spaces

The standard dot product operation in Euclidean space is defined as

\left \langle (x_1, \dots , x_n), (y_1, \dots, y_n) \right \rangle = \sum \limits_{i = 1}^n x_iy_i

So we take the dot product and generalize it to an operation on an arbitrary vector space.

Definition: An inner product on a vector space V over a field F is a function \left \langle \cdot, \cdot \right \rangle : V \times V \to F which satisfies the following properties for all x,y,z \in V, c \in F:

  • \left \langle x,y \right \rangle = \overline{ \left \langle y, x\right \rangle }, where the bar denotes complex conjugate. For real fields, this is just symmetry in the arguments.
  • \left \langle cx, y \right \rangle = c \left \langle x,y \right \rangle
  • \left \langle x+y, z \right \rangle = \left \langle x,z \right \rangle + \left \langle y,z \right \rangle
  • \left \langle x,x \right \rangle \geq 0 and \left \langle x,x \right \rangle = 0 if and only if x=0.

We recommend the novice reader invent some simple and wild operations on two vectors, and confirm or deny that they are inner products.

Notice that the second and third conditions imply that if the second argument of an inner product is fixed, then the resulting map V \to F is a linear map (since it maps vectors to the underlying field, it has a special name: a linear functional). We leave it as an exercise to the reader to investigate linearity facts about the map resulting from fixing the first argument (hint: things get conjugated).

We call a vector space with an associated inner product an inner product space. Of course, the most natural example of an inner product space is any Euclidean space \mathbb{R}^n with the dot product. However, there are many other interesting inner products, including ones which involve matrix multiplication, integration, and random variables. As interesting as they may be (and though the results we develop here hold for them), we have no need for them at this point in time. We will stick entirely to inner product spaces embedded in \mathbb{R}^n, and the standard dot product will suffice.

Now from any inner product we may induce a norm on V, which is colloquially the “distance” of a vector from the origin.

Definition: A norm on V is a function \left \| \cdot \right \| : V \to \mathbb{R} which satisfies the following for all x,y \in V, c \in F:

  • \left \| x \right \| \geq 0, with equality if and only if x=0
  • \left \| cx \right \| = |c| \left \| x \right \|
  • \left \| x+y \right \| \leq \left \| x \right \| + \left \| y \right \| (the infamous triangle inequality)

If we recall the standard Euclidean norm, we see that it is just \left \| x \right \| = \sqrt{\left \langle x,x \right \rangle}. Indeed, for any inner product this definition satisfies the axioms of a norm, and so it is a natural generalization of “distance” between points in any inner product space.

In particular, those readers familiar with topology (or at least metric space analysis), will immediately recognize that a norm induces a metric on V, defined by d(x,y)= \left \| x-y \right \| , where of course x-y is the vector between x and y. Hence, every inner product space has a very rigid (metrized) topology and a fruitful geometry.

Additionally, any vector of norm 1 is called a unit vector, or a normal vector.

Now we look at vectors which have interesting relationships under inner products: specifically, when \left \langle x,y \right \rangle = 0.

Orthogonality and Orthonormal Bases

Definition: Two vectors x,y \in V are orthogonal if \left \langle x,y \right \rangle = 0. A set S of vectors is called orthogonal if the vectors in S are pairwise orthogonal.

Orthogonal vectors naturally generalize perpendicularity in Euclidean space. In particular, two vectors in \mathbb{R}^n are orthogonal if and only if the subspaces (lines) spanned by them are perpendicular.

The simplest examples of orthogonal vectors are the standard basis vectors (1,0, \dots, 0) \dots (0, \dots ,1), with respect to the usual dot product. Although, any scalar multiple of a basis vector \lambda e_i may replace e_i while still preserving the orthogonality of the set.

Orthogonality gives a new insight into the nature of inner products. Specifically, \left \langle x,y \right \rangle gives (almost) the length of the projection of x onto y. In other words, \left \langle x,y \right \rangle is the component of x that points in the direction of y (scaled by the length of y).

Now we may define projection maps to get “projection onto y” more faithfully than a plain inner product:

Definition: The projection of x onto y, denoted p_y(x), is the map V \to V defined by p_y(x) = \left \langle x, y \right \rangle \frac{y}{\left \| y \right \|^2}.

The reader may easily verify that the vectors p_y(x) and x-p_y(x) are indeed orthogonal, though the computation is awkward. This will come in handy later when we want to build orthogonal sets of vectors.

In addition, we may obviously decompose any vector v into its two orthogonal components with respect to another vector w via this projection: v = (v-p_w(v)) + p_w(v).
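
Here is a quick numerical verification of ours (with arbitrary vectors in \mathbb{R}^3 and the standard dot product) that p_y(x) and x - p_y(x) are indeed orthogonal.

import numpy

def project(x, y):
   # p_y(x) = <x, y> y / ||y||^2
   return (numpy.dot(x, y) / numpy.dot(y, y)) * y

x = numpy.array([1.0, 2.0, 3.0])
y = numpy.array([4.0, -1.0, 0.5])
p = project(x, y)
print(numpy.isclose(numpy.dot(p, x - p), 0))  # True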

In addition to orthogonality, the standard basis vectors have norm 1. We call such a basis for an inner product space an orthonormal basis (a portmanteau of orthogonal and normal). We commonly denote an orthonormal set of vectors (e_1, \dots, e_n), perhaps a ritualistic attempt to summon the power of the standard basis vectors.

Given an orthonormal basis for an inner product space V (e_1, \dots, e_n), we may decompose any vector into its basis representation rather easily:

\displaystyle v = p_{e_1}(v) + \dots + p_{e_n}(v) = \left \langle v,e_1 \right \rangle e_1 + \dots + \left \langle v,e_n \right \rangle e_n.

Since the norm of each e_i is 1, we may skip the division by the square norm of e_i in the projection maps. Now, recalling that every vector can be written uniquely as a linear combination of the basis vectors, we see that the inner products \left \langle v, e_i \right \rangle are precisely those coefficients. The recognition of this fact leads us to an important theorem, with a necessary preliminary definition:

Definition: Two inner product spaces V, W are isometric if there exists a linear isomorphism f:V \to W which preserves the inner products in the respective spaces. i.e., \left \langle x, y \right \rangle_V = \left \langle f(x), f(y) \right \rangle_W for all x,y \in V.

Whereas linear isomorphisms between vector spaces are the mathematically rigorous way of saying two vector spaces are identical, isometries additionally say that the inner products are “the same.” Hence, isometric vector spaces have equivalent metrics, topologies, and geometries, with merely a different choice of basis and representation of the vectors. Now, as we had that every finite-dimensional vector space was isomorphic to \mathbb{R}^n, we will soon see that every finite-dimensional inner product space is isometric to \mathbb{R}^n with the usual dot product.

Theorem: Any finite dimensional real inner product space V which has an orthonormal basis is isometric to \mathbb{R}^n with the usual dot product.

Proof: Define f: \mathbb{R}^n \to V by f((x_1, \dots, x_n)) = x_1e_1 + \dots + x_ne_n. Now, by computation, we see that

\left \langle f((a_1, \dots , a_n)), f((b_1, \dots , b_n)) \right \rangle
= \left \langle a_1e_1 + \dots + a_ne_n, b_1e_1 + \dots + b_ne_n \right \rangle
= \sum \limits_{i=1}^n a_i \left \langle e_i, b_1e_1 + \dots + b_ne_n \right \rangle
= \sum \limits_{i=1}^n \sum \limits_{j=1}^n a_i b_j \left \langle e_i, e_j \right \rangle
= a_1b_1 + \dots + a_nb_n = \left \langle (a_1, \dots a_n), (b_1, \dots b_n) \right \rangle \square

Now, all we must prove is that every finite-dimensional inner product space has an orthonormal basis.

Theorem (Gram-Schmidt): Every basis of a finite-dimensional inner product space may be transformed into an orthonormal basis.

Proof: We do so by construction, using the previously introduced projection maps p_y(x). Given a basis (x_1, \dots , x_n), we compute the following algorithm:

  1. \displaystyle e_1 = \frac{x_1}{\left \| x_1 \right \|}
  2. For each i = 2, \dots , n:
    1. Let z_i = \sum \limits_{j=1}^{i-1} p_{e_j}(x_i)
    2. \displaystyle e_i = \frac{x_i - z_i}{\left \| x_i - z_i \right \|}

The reader may verify computationally that this process produces orthonormal vectors, but the argument is better phrased geometrically: each z_i is the projection of some new vector onto the subspace generated by all previously computed orthonormal vectors. By subtracting x_i - z_i, we take the part of x_i that is orthogonal to all vectors in that subspace. This is guaranteed to be nonzero because of the linear independence of our original list. And so, e_i is orthogonal to every vector preceding it in the algorithm (with indices j < i). Finally, we normalize each e_i to make them unit vectors, and we are done. \square

We note that this algorithm takes O(n^3) time: there are n vectors e_i to produce, each requires up to n projections, and each projection is a constant number of O(n) vector operations.
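
Here is a direct translation of the algorithm into Python (a sketch of ours that pays no attention to numerical stability); it assumes the input is a list of linearly independent numpy arrays.

import numpy

def gramSchmidt(basis):
   orthonormal = []
   for x in basis:
      # subtract the projections onto all previously computed unit vectors
      z = sum((numpy.dot(x, e) * e for e in orthonormal), numpy.zeros(len(x)))
      v = x - z
      orthonormal.append(v / numpy.linalg.norm(v))
   return orthonormal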

This result now proves that every real finite-dimensional inner product space is isometric to \mathbb{R}^n. With this new insight, we may effectively do all our work in \mathbb{R}^n with the usual dot product, realizing that the results there hold for all relevant inner product spaces. In other words, our “simplification” at the beginning of this post, restricting our work to \mathbb{R}^n, was not at all a simplification. Proving statements about \mathbb{R}^n gives us equivalent statements about all real finite-dimensional inner product spaces. Wonderful!

Bases of Eigenvectors

There is one more important topic we wish to discuss here: the importance of bases of eigenvectors. In particular, given a linear operator A: V \to V, if one has a basis of eigenvectors for A, then A has a diagonal representation.

In particular, if A has a basis of eigenvectors (v_1, \dots , v_n), then the expansion of Av_i in terms of the basis vectors is just \lambda_i v_i, where \lambda_i is the corresponding eigenvalue. Thus, the matrix corresponding to A looks like:

\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n  \end{pmatrix}

Here we count multiplicity of eigenvalues.

The existence of a diagonal representation for a matrix has interesting computational implications. For instance, we often wish to take high powers of a matrix, such as in counting paths in a graph, or working with graphical computations. Unfortunately, each successive (naive) matrix multiplication takes O(n^3) computations, so computing A^m this way takes O(mn^3) time. However, if the matrix has a diagonal representation, we may pay the one-time cost of converting the matrix to its diagonal form, take powers of the diagonal matrix by simply taking the powers of the diagonal entries, and then convert it back. Indeed, multiplying two diagonal matrices together is just as easy as multiplying the diagonal entries together, as the reader may verify. Taking the m-th power of the diagonal entries costs only O(nm) arithmetic, so for very large m this is a substantial savings.
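
Here is a small sketch of ours of the “diagonalize, power the diagonal, change basis back” idea, using a symmetric matrix so that an orthonormal basis of eigenvectors is guaranteed to exist (as the spectral theorem below asserts).

import numpy

def matrixPower(A, m):
   eigenvalues, P = numpy.linalg.eigh(A)      # columns of P are orthonormal eigenvectors
   D_to_the_m = numpy.diag(eigenvalues ** m)  # powering a diagonal matrix is entrywise
   return P.dot(D_to_the_m).dot(P.T)          # P is orthogonal, so its inverse is its transpose

A = numpy.array([[2.0, 1.0], [1.0, 3.0]])
print(numpy.allclose(matrixPower(A, 5), numpy.linalg.matrix_power(A, 5)))  # True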

Of course, then we are left with the problem of quickly computing eigenvectors. What worries us even more is that we might not have a basis of eigenvectors (some matrices don’t have any!). We instead take a slightly different route, which serves our purposes better. Specifically, we will be using this information to compute eigenvectors of symmetric matrices (A = A^{\textup{T}}). For this, we refer to a grand theorem:

The Spectral Theorem: Every real symmetric matrix has an orthonormal basis consisting of eigenvectors.

The proof goes beyond the scope of this post (see: characteristic polynomials and self-adjoint operators), but it is very useful for us. In particular, by finding these eigenvectors we may both have a diagonal representation for our matrix, and also compute projections in a jiffy! We will see the astounding applications of this quite soon.

So even with two primers on linear algebra, we have still only scratched the surface of this wonderful subject. In the future we may continue this series of primers by discussing the linear algebra inherent in many optimization problems. Be sure to look out for it.

Until next time!