## Variations on a theme

Back in 2014 I wrote a post called How to Conquer Tensorphobia that should end up on Math ∩ Programming’s “greatest hits” album. One aspect of tensors I neglected to discuss was the connection between the modern views of tensors and the practical views of linear algebra. I feel I need to write this because every year or two I forget why it makes sense.

The basic question is:

What the hell is going on with the outer product of vectors?

The simple answer, the one that has never satisfied me, is that the outer product of $u, v$ is the matrix $uv^T$ whose $(i,j)$ entry is the product $u_i v_j$. This doesn’t satisfy me because it’s an explanation by fiat. It lacks motivation and you’re supposed to trust (or verify) things magically work out. To me this definition is like the definition of matrix multiplication: having it dictated to you before you understand why it makes sense is a cop out. Math isn’t magic, it needs to make perfect sense.
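For concreteness, here is that entrywise definition in a few lines of Python (the function name `outer` is mine, echoing its NumPy namesake, not anything from the discussion above):

```python
def outer(u, v):
    """Outer product of u and v: the matrix whose (i, j) entry is u[i] * v[j]."""
    return [[u_i * v_j for v_j in v] for u_i in u]

print(outer([1, 2], [3, 4, 5]))  # → [[3, 4, 5], [6, 8, 10]]
```

It works, but as written it is pure symbol shuffling; the rest of the post is about why this is the right definition.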

The answer I like, and the one I have to re-derive every few years because I never wrote it down, is a little bit longer.

To borrow a programming term, the basic confusion is a type error. We start with two vectors $u, v$ in some vector space $V$ (let’s say everything is finite dimensional), and we magically turn them into a matrix. Let me reiterate for dramatic effect: we start with vectors, which I have always thought of as *objects*, things you can stretch, rotate, or give as a gift to your cousin Mauricio. A matrix, on the other hand, is a *mapping*, a thing that takes vectors as input and spits out new vectors. Sure, you can play with the mappings, feed them kibble and wait for them to poop or whatever. And sure, sometimes vectors *are themselves* maps, like in the vector space of real-valued functions (where the word “matrix” is a stretch, since it’s infinite dimensional).

But to take two vectors and *poof* get a mapping of *all* vectors is a big jump. And part of the discomfort is that it feels random. To be happy we have to understand that the construction is natural, or even better *canonical*, meaning this should be the *only* way to turn two vectors into a linear map. Then the definition would make sense.

So let’s see how we can do that. Just to be clear, everything we do in this post will be for finite-dimensional vector spaces over $\mathbb{R}$, but we’ll highlight the caveats when they come up.

## Dual vector spaces

The first step is understanding how to associate a vector with a linear map in a “natural” or “canonical” way. There is one obvious candidate: if you give me a vector $v$, I can make a linear map by taking the dot product of $v$ with the input. So I can associate

$$v \mapsto \langle v, - \rangle$$

The dash is a placeholder for the input. Another way to say it is to define $\varphi_v(w) = \langle v, w \rangle$ and say the association takes $v \mapsto \varphi_v$. So this “association,” taking $v$ to the inner product, is itself a mapping from $V$ to “maps from $V$ to $\mathbb{R}$.” Note that $\varphi_v(w)$ is linear in $v$ and $w$ because that’s part of the definition of an inner product.
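If it helps to see the association $v \mapsto \varphi_v$ in code, here is a sketch in which $\varphi_v$ becomes a closure over $v$ (all names here are mine for illustration):

```python
def dot(v, w):
    """The usual inner product on R^n."""
    return sum(a * b for a, b in zip(v, w))

def phi(v):
    """The association v ↦ φ_v: returns the linear map w ↦ ⟨v, w⟩."""
    return lambda w: dot(v, w)

phi_v = phi([1, 2, 3])
print(phi_v([4, 5, 6]))  # ⟨(1,2,3), (4,5,6)⟩ → 32
```

The “type” of `phi` is exactly the point: it eats a vector and returns a function from vectors to numbers.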

To avoid saying “maps from $V$ to $\mathbb{R}$” all the time, we’ll introduce some notation.

**Definition:** Let $V$ be a vector space over a field $k$. The set of $k$-linear maps from $V \to k$ is called $\textup{Hom}(V, k)$. More generally, the set of $k$-linear maps from $V$ to another $k$-vector space $W$ is called $\textup{Hom}(V, W)$.

“Hom” stands for “homomorphism,” and in general it just means “maps with the structure I care about.” For this post $k = \mathbb{R}$, but most of what we say here will be true for any field. If you go deeply into this topic, it matters whether $k$ is algebraically closed, or has finite characteristic, but for simplicity we’ll ignore all of that. We’ll also ignore the fact that these maps are called *linear functionals* and this is where the name “functional analysis” comes from. All we really want to do is understand the definition of the outer product.

Another bit of notation for brevity:

**Definition:** Let $V$ be a $k$-vector space. The *dual vector space* for $V$, denoted $V^*$, is $\textup{Hom}(V, k)$.

So the “vector-to-inner-product” association we described above is a map $V \to V^*$. It takes in a vector $v$ and spits out $\varphi_v$.

Now here’s where things start to get canonical (interesting). First, $V^*$ is itself a vector space. This is an easy exercise, and the details are not too important for us, but I’ll say the key: if you want to add two functions, you just add their (real number) outputs. In fact we can say more:

**Theorem:** $V$ and $V^*$ are isomorphic as vector spaces, and the map $v \mapsto \varphi_v$ is the canonical isomorphism.

Confessions of a mathematician: we’re sweeping some complexity under the rug. When we upgraded our vector space to an inner product space, we fixed a specific (but arbitrary) inner product on $V$. For finite dimensional vector spaces it makes no difference, because every finite-dimensional $\mathbb{R}$-inner product space is isomorphic to $\mathbb{R}^n$ with the usual inner product. But the theorem is devastatingly false for infinite-dimensional vector spaces. There are two reasons: (1) there are many (non-canonical) choices of inner products and (2) the image of the mapping $v \mapsto \varphi_v$ for any given inner product need not span $V^*$. Luckily we’re in finite dimensions so we can ignore all that. [Edit: see Emilio’s comments for a more detailed discussion of what’s being swept under the rug, and how we’re ignoring the categorical perspective when we say “natural” and “canonical.”]

Before we make sense of the isomorphism let’s talk more about $V^*$. First off, it’s not even entirely obvious that $V^*$ is finite-dimensional. On one hand, if $v_1, \dots, v_n$ is a basis of $V$ then we can quickly prove that $\varphi_{v_1}, \dots, \varphi_{v_n}$ are linearly independent in $V^*$. Indeed, if they weren’t then there’d be some linear combination $\sum_i c_i \varphi_{v_i}$ that is the zero function, meaning that for *every* vector $w$, the following is zero

$$\sum_i c_i \langle v_i, w \rangle = 0$$

But since the inner product is linear in both arguments we get that $\left\langle \sum_i c_i v_i, w \right\rangle = 0$ for every $w$. And this can only happen when $\sum_i c_i v_i$ is the zero vector (prove this).

One consequence is that the linear map $v \mapsto \varphi_v$ is injective. So we can think of $V$ as “sitting inside” $V^*$. Now here’s a very slick way to show that the $\varphi_{v_i}$ span all of $V^*$. First we can assume our basis $v_1, \dots, v_n$ is actually an *orthonormal* basis with respect to our inner product (this is without loss of generality). Then we write any linear map $f \in V^*$ as

$$f = \sum_i f(v_i) \varphi_{v_i}$$

To show these two are actually equal, it’s enough to show they agree on a basis for $V$. That is, if you plug in $v_1$ to the function on the left- and right-hand side of the above, you’ll get the same thing. The orthonormality of the basis makes it work, since all the irrelevant inner products are zero.
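A quick numerical check of the identity $f = \sum_i f(v_i) \varphi_{v_i}$, using the standard (orthonormal) basis of $\mathbb{R}^3$ and an arbitrary linear functional; the helper names are mine:

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

# The standard basis is orthonormal with respect to the usual inner product.
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

f = lambda w: 2 * w[0] - 7 * w[1] + 3 * w[2]  # an arbitrary linear functional

def reconstructed(w):
    # The right-hand side: sum over i of f(v_i) · φ_{v_i}(w).
    return sum(f(v_i) * dot(v_i, w) for v_i in basis)

w = [4, -1, 5]
print(f(w), reconstructed(w))  # → 30 30
```

Swapping in a non-orthonormal basis breaks the agreement, which is exactly why orthonormality matters in the argument above.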

In case you missed it, that completes the proof that $V$ and $V^*$ are isomorphic. Now when I say that the isomorphism $v \mapsto \varphi_v$ is “canonical,” I mean that if you’re willing to change the basis of $V$ and $V^*$, then $v \mapsto \varphi_v$ is the square identity matrix, i.e. the only isomorphism between any two finite-dimensional vector spaces (up to a change of basis).

## Tying in tensors

At this point we have a connection between single vectors and linear maps whose codomain has dimension 1. If we want to understand the outer product, we need a connection between *pairs* of vectors and matrices, i.e. $\textup{Hom}(V, W)$. In other words, we’d like to find a canonical isomorphism between $V \times W$ and $\textup{Hom}(V, W)$. But already it’s not possible because the spaces have different dimensions. If $\dim V = n$ and $\dim W = m$ then the former has dimension $n + m$ and the latter has dimension $nm$. So any “natural” relation between these spaces has to be a way to embed $V \times W \subset \textup{Hom}(V, W)$ as a subspace via some injective map.

There are two gaping problems with this approach. First, the outer product is *not linear* as a map from $V \times W \to \textup{Hom}(V, W)$. To see this, take any $v \in V, w \in W$, and pick any scalar $\lambda \in \mathbb{R}$. Scaling the pair $(v, w)$ means scaling both components to $(\lambda v, \lambda w)$, and so the outer product of the scaled pair is $\lambda^2$ times the outer product of $(v, w)$, not $\lambda$ times it.
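The failure of linearity is easy to witness numerically; a small sketch (names mine):

```python
def outer(u, v):
    return [[a * b for b in v] for a in u]

def scale(c, v):
    return [c * x for x in v]

v, w, lam = [1, 2], [3, 4], 5

# Scaling the pair scales the outer product by λ², not λ.
scaled_pair = outer(scale(lam, v), scale(lam, w))
lam_squared = [[lam**2 * x for x in row] for row in outer(v, w)]
print(scaled_pair == lam_squared)  # → True
```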

The second problem is that the only way to make $V \times W$ a subspace of $\textup{Hom}(V, W)$ (up to a change of basis) is to map $v, w$ to the first two rows of a matrix with zeros elsewhere. This is canonical but it doesn’t have the properties that the outer product promises us. Indeed, the outer product lets us uniquely decompose a matrix as a “sum of rank 1 matrices,” but we don’t get a *unique* decomposition of a matrix as a sum of these two-row things. We also don’t even get a well-defined rank by decomposing into a sum of two-row matrices (you can get cancellation by staggering the sum). This injection is decisively useless.

It would seem like we’re stuck, until we think back to our association between $V$ and $V^*$. If we take one of our two vectors, say $v$, and pair it with $w$, we can ask how the pair $(\varphi_v, w)$ could be turned into a linear map in $\textup{Hom}(V, W)$. A few moments of guessing and one easily discovers the map

$$x \mapsto \varphi_v(x) w$$

In words, we’re scaling $w$ by the inner product of $x$ and $v$. In geometric terms, we project $x$ onto $v$ and scale $w$ by the signed length of that projection. Let’s call this map $w \varphi_v$, so that the association maps $(v, w) \mapsto w \varphi_v$. The thought process of “easily discovering” this is to think, “What can you do with a function $\varphi_v$ and an input $x$? Plug it in. Then what can you do with the resulting number $\varphi_v(x)$ and a vector $w$? Scale $w$.”
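In code, the map $x \mapsto \varphi_v(x) w$ is a one-liner, which makes the “plug in, then scale” thought process concrete (names mine):

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def outer_map(v, w):
    """The linear map x ↦ φ_v(x)·w, i.e. the action of the matrix w v^T."""
    return lambda x: [dot(v, x) * w_j for w_j in w]

T = outer_map([1, 0], [2, 5, 1])  # a linear map from R^2 to R^3
# Project (3, 7) onto (1, 0) to get 3, then scale (2, 5, 1) by 3.
print(T([3, 7]))  # → [6, 15, 3]
```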

If you look closely you’ll see we’ve just defined the outer product. This is because the outer product works by saying $wv^T$ is a matrix, which acts on a vector $x$ by doing $wv^Tx = \varphi_v(x) w$. But the important thing is that, because $V$ and $V^*$ are canonically isomorphic, this is a mapping

$$V \times W \to \textup{Hom}(V, W)$$

Now again, this mapping is not linear. In fact, it’s *bilinear*, and if there’s one thing we know about bilinear maps, it’s that tensors are their gatekeepers. If you recall our previous post on tensorphobia, this means that this bilinear map “factors through” the tensor product in a canonical way. So the true heart of this association is a map $\Phi: V \otimes W \to \textup{Hom}(V, W)$ defined by

$$\Phi(v \otimes w) = w \varphi_v$$

And now the punchline,

**Theorem:** $\Phi$ is an isomorphism of vector spaces.

*Proof.* If $v_1, \dots, v_n$ is a basis for $V$ and $w_1, \dots, w_m$ is a basis for $W$, then it’s enough to show that the maps $\Phi(v_i \otimes w_j)$ form a basis for $\textup{Hom}(V, W)$. Since we already know $\dim(\textup{Hom}(V, W)) = nm$ and there are $nm$ of the $\Phi(v_i \otimes w_j)$, all we need to do is show that the $\Phi(v_i \otimes w_j)$’s are linearly independent. For brevity let me remove the $\Phi$’s and call $\beta_{i,j} = \Phi(v_i \otimes w_j) = w_j \varphi_{v_i}$.

Suppose they are not linearly independent. Then there is some choice of scalars $c_{i,j}$ so that the linear combination below is the identically zero function

$$\sum_{i,j} c_{i,j} \beta_{i,j} = 0$$

In other words, if I plug in any $v_i$ from my (orthonormal) basis, the result is zero. So let’s plug in $v_1$:

$$\sum_{i,j} c_{i,j} \beta_{i,j}(v_1) = \sum_{i,j} c_{i,j} \varphi_{v_i}(v_1) w_j = \sum_j c_{1,j} w_j = 0$$

The orthonormality makes all of the $\varphi_{v_i}(v_1)$ zero when $i \neq 1$, so we get a linear combination of the $w_j$ being zero. Since the $w_j$ form a basis, it must be that all the $c_{1,j} = 0$. The same thing happens when you plug in $v_2$ or any other $v_k$, and so all the $c_{i,j}$ are zero, proving linear independence.

This theorem immediately implies some deep facts, such as that every matrix can be uniquely decomposed as a sum of the $\beta_{i,j}$’s. Moreover, facts like the $\beta_{i,j}$’s being rank 1 are immediate: by definition the maps scale a single vector $w_j$ by some number. So of course the image will be one-dimensional. Finding a useful basis with which to decompose a matrix is where things get truly fascinating, and we’ll see that next time when we study the singular value decomposition.
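To see the unique decomposition at work, here is a sketch that rebuilds an arbitrary $2 \times 2$ matrix from the rank-1 maps $\beta_{i,j}$ built out of the standard bases; with standard bases the coefficient of $\beta_{i,j}$ is just a matrix entry (all names mine):

```python
def outer(u, v):
    return [[a * b for b in v] for a in u]

e = [[1, 0], [0, 1]]          # standard (orthonormal) basis of R^2
A = [[3, -1], [2, 7]]         # an arbitrary matrix to decompose

# β_{i,j} as a matrix is w_j v_i^T = outer(e[j], e[i]): a single 1 in
# row j, column i. The matching coefficient is the entry A[j][i].
total = [[0, 0], [0, 0]]
for i in range(2):
    for j in range(2):
        beta = outer(e[j], e[i])
        for r in range(2):
            for c in range(2):
                total[r][c] += A[j][i] * beta[r][c]

print(total == A)  # → True
```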

In the meantime, this understanding generalizes nicely (via induction/recursion) to higher dimensional tensors. And you don’t need to talk about slices or sub-tensors or lose your sanity over $k$-tuples of indices.

Lastly, all of this duality stuff provides a “coordinate-free” way to think about the transpose of a linear map. We can think of the “transpose” operation as a linear map $\textup{Hom}(V, W) \to \textup{Hom}(W^*, V^*)$ which (even in infinite dimensions) has the following definition. If $f \in \textup{Hom}(V, W)$ then $f^T$ is a linear map taking $h \in W^*$ to $f^T(h) \in V^*$. The latter is a function in $V^*$, so we need to say what it does on inputs $v \in V$. The only definition that doesn’t introduce any type errors is $(f^T h)(v) = h(f(v))$. A more compact way to say this is that $f^T(h) = h \circ f$.
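The coordinate-free transpose $f^T(h) = h \circ f$ can be checked against the usual “flip the matrix” transpose; the names below are mine for illustration:

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def apply_mat(M, x):
    return [dot(row, x) for row in M]

M = [[1, 2], [3, 4], [5, 6]]            # a linear map f: R^2 → R^3
f = lambda x: apply_mat(M, x)

h = lambda y: dot([1, 0, -1], y)        # a functional h in (R^3)^*

f_T_h = lambda v: h(f(v))               # the coordinate-free f^T(h) = h ∘ f

# Compare with the usual transpose acting on h's coefficient vector.
MT = [list(col) for col in zip(*M)]
h_coords = apply_mat(MT, [1, 0, -1])    # coefficients of f^T(h) in (R^2)^*

v = [2, -3]
print(f_T_h(v), dot(h_coords, v))       # → 4 4
```

No coordinates were needed to define `f_T_h`; the matrix transpose only shows up when we pick bases to compare.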

Until next time!

I am going to have to re-read and think about this essay some. But I do like my early thoughts about it. Tensors have always been a struggle for me and no small part of it is that I’ve just never been able to turn shuffling the symbols of an outer product into something I have a feel for.

This reply maybe belongs in the 2014 post, or maybe in left field; my mind is somewhat reflexive about the words “dual” and “linear”. But isn’t the construction of a plane through points (or dually the intersection in a point of planes) a multilinear map? Is this unrelated to outer product?

Not about this post, but given that you don’t like the deep learning book, what about the posts on wildml.com, for example the one about LSTMs (long short-term memory)? Perhaps a post about that could be interesting.

Do you have any intuition for dual spaces and adjoint maps (a.k.a. dual maps)? Realizing the relationship between the first isomorphism theorem and rank-nullity was an epiphany, and it resulted from considering linear algebra in a coordinate-free way. I’ve been reading about annihilators, dual spaces, and adjoints recently, and it doesn’t seem to click in the same way quotient spaces and quotient maps did.

It sounds like you’re looking for a category theory approach, the best of which I’ve seen (aimed at a first year grad student in math) is the book Algebra Chapter 0 by Paolo Aluffi.

I think of the dual space as the space of all linear observations I can make on a vector space which read 0 at the origin. Given a map from V to W, I can see the points of V inside of W. Therefore, if I can make a particular observation on W like distance to the right from the origin in some units of measurement (i.e. a vector in W), I can use my map to make an observation of V. I do this by first looking at where V sits inside W via my map, and use the observable on W to output a number. This is what the adjoint of a map is, it tells you how the linear map is affecting the outcome of certain (only linear!) observations. It is a very natural construction.

This is a nice perspective, since it helps me think about symmetric tensors as polynomial functions. This is what mathematicians call Algebraic Geometry.

This is a nice perspective, but the assertion that V and V* are canonically isomorphic is plain wrong. That is only true in the presence of an inner product, and if you change the inner product then the canonical isomorphism will change (even in finite dimension). In the absence of an inner product (or at least of a natural one, which is true for very many natural vector spaces), V and V* are only isomorphic because they have the same dimension, but there is provably no natural isomorphism (in the category theoretic sense) between the two.

The unnaturalness (in the categorical sense) of the isomorphism comes from the freedom to pick a basis, not the choice of inner product. I am perfectly happy to sweep this under the rug: the definition of the isomorphism doesn’t change even if you pick a different basis; it’s only unnatural because isomorphisms between vector spaces translate to abnormal actions on the dual. That’s a meta-insight I don’t expect the intended audience for this post to care about, since they are hardly doing linear algebra in a coordinate-free way.

But it’s not even natural in the non-technical sense! In the case where V is one-dimensional, an isomorphism between V and its dual is literally a choice of units to measure length! So if I interpret what you write in the nontechnical sense, then saying that the isomorphism is “canonical” is saying that there is a canonical unit of measurement. I’m pretty sure the word canonical was created to not apply to this situation.

BUT there are interesting examples where they are canonically isomorphic. That is the case of R^n, which has a canonical basis. That’s because R^n is defined to be the n-dim vector space with a choice of basis. So, R^n is canonically isomorphic to its dual.

I should point out that if you have a basis you can obtain an inner product, by declaring it to be the unique inner product with that basis being orthonormal. But you lose information. You can see this by rotating and/or reflecting an orthonormal basis to get a different orthonormal basis (with respect to the same inner product). In other words, you can’t recover the basis. Nonetheless, given an inner product, you have such an isomorphism. This last part is what you use implicitly when applying Dirac bra-ket notation, and is foundational to how the universe behaves.

Nonetheless I really liked the post!

That’s sort of the thing – if you change a basis then the isomorphism does change, and the basis of the dual transforms under the inverse-transpose (A^T)^-1 of the matrix A that transforms the basis of the primal. The two transformations can coincide (in which case the isomorphism doesn’t change) only if A is orthogonal, i.e. if you change between two orthonormal bases.

It’s not a problem to sweep things under the rug, but I think it does matter to say exactly what is being ignored. In this case, it’s not just the topological considerations that come with a choice of inner product in infinite dimensions; rather, it’s the fact that the isomorphism V→V* is in direct correspondence with the chosen inner product: change one and it changes the other, and vice versa. Talking about canonical isomorphisms between V and V* without at least a passing mention that they’re only canonical if you never change the inner product (and that that is also getting swept under the rug) seems a bit at odds to me with the spirit of this really cool blog.

I think the vector outer product is intuitively neat in dirac notation, where it clearly projects the target vector onto the right vector of the product, and returns the left vector of the product weighted by the value of that inner product.

https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation#Outer_products

I strongly disagree. In fact, I detest that notation 🙂 But to each their own