Metric Spaces — A Primer

The Blessing of Distance

We have often mentioned the idea of a “metric” on this blog, and we briefly described a formal definition for it. Colloquially, a metric is simply the mathematical notion of a distance function, with certain well-behaved properties. Since we’re now starting to cover a few more metrics (and things which are distinctly not metrics) in the context of machine learning algorithms, we find it pertinent to lay out the definition once again, discuss some implications, and explore a few basic examples.

The most important thing to take away from this discussion is that not all spaces have a notion of distance. For a space to have a metric is a strong property with far-reaching mathematical consequences. Essentially, metrics impose a topology on a space, which the reader can think of as the contortionist’s flavor of geometry. We’ll explore this idea after a few examples.

On the other hand, from a practical standpoint one can still do interesting things without a true metric. The downside is that work relying on (the various kinds of) non-metrics doesn’t benefit as greatly from existing mathematics. This can often spiral into empirical evaluation, where justifications and quantitative guarantees are not to be found.

Metrics and Metric Spaces

Given a set $ X$, we say $ X$ is a metric space if it comes equipped with a special function $ d(x,y)$ that can compute the distance between any two points $ x,y$ of $ X$. Specifically, $ d$ must satisfy the axioms of a metric.

Definition: A function $ d: X \times X \to \mathbb{R}$ is a metric if it satisfies the following three properties for any choice of elements $ x, y, z \in X$.

  • $ d(x,y) \geq 0$ (non-negativity), and $ d(x,y) = 0$ if and only if $ x=y$.
  • $ d(x,y) = d(y,x)$ (symmetry)
  • $ d(x,y) + d(y,z) \geq d(x,z)$ (triangle inequality)

Our goal now is to convince the reader that these three axioms are sensible for every notion of distance to satisfy. The first bullet claims that the distance between any two things can never be negative (hence called “non-negativity”), and that the distance between two things can only be zero if those two things are actually the same thing. The second bullet is a matter of perspective; the distance function reads, “the distance between $ x$ and $ y$,” and this shouldn’t change based on which element comes first in the sentence. This is the “symmetry” condition.

 

If one wants to prove a function is a metric, the third bullet is often the hardest property to establish. It’s called the triangle inequality, and in words it says that the lengths of the edges of a triangle make sense if you measure them with $ d$. Thinking of $ x, y, z$ as the vertices of a triangle, such as the one pictured below, we don’t want the length of one edge to be longer than the combined lengths of the other two edges. It’s a basic fact of Euclidean geometry that such a triangle cannot be drawn.

[Figure: a triangle with vertices $ x, y, z$. Walking from one vertex to another has an obvious shortest route: the straight-line path.]

Pedantically, we notice that the third bullet above uses $ \geq$, which allows the possibility that $ d(x,y) + d(y,z) = d(x,z)$. It is not hard to see that this occurs (in Euclidean space, at least) when $ y$ lies on the line segment between $ x$ and $ z$. In this case the three points do not form a true triangle, but it’s convenient to pack the degenerate case under the same name.

Aside from analyzing the abstract properties of a metric, the best way to understand this definition is to explore lots and lots of examples.

Of Norms and Graphs and Levenshtein, of Taxicabs and Kings

The simplest metric one could construct is called the discrete metric. It is defined by $ d(x,y) = 0$ if $ x = y$ and $ d(x,y) = 1$ otherwise. The symmetry and non-negativity conditions are trivially satisfied, and the triangle inequality is easy to prove. If it failed, we would have $ d(x,y) + d(y,z) < d(x,z) \leq 1$ for some $ x,y,z$. The right-hand side is positive, so it must be that $ x \neq z$; but the left-hand side is less than 1, so both terms are zero, meaning $ x=y$ and $ y=z$. The transitivity of equality, however, implies $ x=z$, a contradiction.
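In code, the discrete metric is a one-liner. Here is a minimal Python sketch (the function name is ours, chosen for illustration):

def discrete_metric(x, y):
   # distance 0 for equal points, 1 for distinct points
   return 0 if x == y else 1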

The discrete metric is completely useless for practical purposes, because all it can do is tell one whether two things are equal or not. We don’t need a metric to do this in real life. On the other hand, mathematically this metric has a lot of uses. It serves as a conveniently pathological counterexample allowing one to gauge the plausibility of purported theorems in topology. These sorts of things usually only show up in the realm of point-set topology, which we haven’t breached yet on this blog, so we’ll leave it at that for now.

The most well known metric by far is the Euclidean metric. In $ n$ dimensions, this is just

$ \displaystyle d((x_1, \dots, x_n), (y_1, \dots, y_n)) = \sqrt{(y_1 - x_1)^2 + \dots + (y_n - x_n)^2}$

The non-negativity and symmetry of this metric follow from the fact that $ (a - b)^2 = (b - a)^2 \geq 0$ for all real numbers $ a,b$. The triangle inequality is a bit more difficult to prove. Without using a cop-out like Minkowski’s inequality, one would need to prove the Pythagorean theorem for $ \mathbb{R}^n$, which implies the Cauchy-Schwarz inequality, which in turn implies the triangle inequality. Instead, we will do this at the end of this primer for a general vector space and a general inner product. The special case of the usual Euclidean dot product (which induces the Euclidean metric as above) will follow trivially.
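For concreteness, here is a short Python sketch of the Euclidean metric on tuples of coordinates. It is our own illustration, not code we will reuse later:

import math

def euclidean_metric(x, y):
   # x and y are equal-length sequences of real coordinates
   return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))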

The next metric we will inspect is the taxicab metric, also known as the Manhattan metric for the way it mimics driving distances on a grid of streets.

The picture below illustrates this: the green line represents the usual Euclidean distance between the two black dots, while the blue, red, and yellow lines all represent the same distance via the taxicab metric. In particular, the distance is the sum of the lengths of the individual line segments, and it’s easy to see that the choice of path is irrelevant.

[Figure: the red, yellow, and blue lines are all paths of equal length from the bottom left to the top right, while the green line is the straight Euclidean path.]

To make this more rigorous mathematically, we will pick the simplest possible path (the red one) to see that the distance is simply the sum of the differences of the x- and y-coordinates in absolute value. This generalizes to the following formula for arbitrary dimension.

$ \displaystyle d((x_1, \dots, x_n), (y_1, \dots, y_n)) = |x_1 - y_1| + |x_2 - y_2| + \dots + |x_n - y_n|$

For reasons the measure-theorist is familiar with, this metric is sometimes called the $ L_1$ metric. Much like the Euclidean metric, it also arises from a vector space (albeit not in the usual way). This function is non-negative and symmetric for the same reasons the Euclidean metric is. We will again defer the proof of the triangle inequality to the end of this post.
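A matching Python sketch (again our own, for illustration) is just as short:

def taxicab_metric(x, y):
   # the sum of coordinate-wise absolute differences
   return sum(abs(a - b) for a, b in zip(x, y))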

Next, we have the maximum metric, also known as the Chebyshev metric, which measures the distance it takes a king to travel from one point on a chessboard to another.

[Image: a king’s moves on a chessboard. Source: chessguru.net]

The picture above shows this in action. In particular, the distance required for a king to move from one square to another is exactly the maximum of the horizontal and vertical distances between the two squares.

If we generalized the chessboard and king’s movement rules to an arbitrary number of dimensions, it would result in taking the maximum of $ |x_i - y_i|$ over each coordinate indexed by $ i$.

Rigorously written, the maximum metric is defined by

$ \displaystyle d(x,y) = \max_i (|x_i - y_i|)$

As usual, symmetry and non-negativity are completely obvious. The triangle inequality is not so hard here. If $ z = x + y$, then

$ \max_i |z_i| = \max_i |x_i + y_i| \leq \max_i (|x_i| + |y_i|) \leq \max_i |x_i| + \max_i |y_i|$.

This is the distance between $ z$ and 0, and the more general result follows by translating the points in question (and it is easy to see that translation preserves distance).
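As with the previous two metrics, a small Python sketch (ours, for illustration) makes the definition concrete:

def max_metric(x, y):
   # the largest coordinate-wise absolute difference
   return max(abs(a - b) for a, b in zip(x, y))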

Next, we can construct a metric on any undirected, weighted (or unweighted) graph $ G$, and we naturally call it the graph metric. The space is the set of vertices in $ G$, and the distance between two vertices is the length of the shortest path between them, as per the weighting. In the case that $ G$ is unweighted, we can equivalently count the number of edges in the shortest path or assume all edge weights are equal to 1. For this to be a genuine metric we also need $ G$ to be connected, so that a path always exists. By virtue of the graph being undirected and the weights being positive, the symmetry and non-negativity conditions are trivially satisfied. The triangle inequality is (unusually) easy as well: if $ d(x,y) + d(y,z) < d(x,z)$ for some vertex $ y$, then concatenating a shortest path from $ x$ to $ y$ with a shortest path from $ y$ to $ z$ gives a walk from $ x$ to $ z$ that is shorter than the shortest path, a contradiction. For those readers familiar with group theory, this idea extends naturally to metrics on groups, using the Cayley graph.
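To make the unweighted case concrete, here is a hedged Python sketch that computes the graph metric by breadth-first search. We assume the graph is given as a dictionary mapping each vertex to a list of its neighbors; for weighted graphs one would use Dijkstra’s algorithm instead:

from collections import deque

def graph_metric(graph, u, v):
   # graph: dict mapping each vertex to a list of adjacent vertices
   # returns the number of edges in a shortest path from u to v
   if u == v:
      return 0
   visited = {u}
   queue = deque([(u, 0)])
   while queue:
      vertex, distance = queue.popleft()
      for neighbor in graph[vertex]:
         if neighbor == v:
            return distance + 1
         if neighbor not in visited:
            visited.add(neighbor)
            queue.append((neighbor, distance + 1))
   raise ValueError("no path exists; the graph must be connected for this to be a metric")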

The last example is one we’ve explored at length on this blog, and that is the Levenshtein metric on words. Overly rigorously, it is a metric on a free monoid where we allow substitutions, insertions, and deletions. The reader is encouraged to read more about it in our post on metrics on words. An even simpler version of this is called the Hamming distance, where we only allow substitutions and the two words being compared must have the same length.

A Few Vague Words on Topology

As we have mentioned briefly before on this blog, a notion of distance allows one to consider any geometric construction that relies only on distance. The easiest examples are circles and ellipses, but one can also talk about convergent sequences, and other more analytic ideas.

But more importantly to mathematicians, metrics generate very tractable topologies. While we’re not suited to begin a full-blown discussion of topology in this primer, we will at least say that a topology is simply a definition of open sets, subject to appropriate properties. For a metric, open sets are usually defined again by circles. That is, one might define the open sets to be all the unions and finite intersections of open disks, where an open disk is a set of the form $ \left \{ y : d(x,y) < C \right \}$ for some center $ x$ and some radius $ C > 0$.

The structure induced by these open sets is very flexible. In particular, two topological spaces are said to be equivalent if there is a bijection between them that preserves open sets in both directions (that is, both it and its inverse are continuous). This allows for all sorts of uncanny stretching and bending, such as those used to turn a sphere inside-out. The formal word for such a function is a homeomorphism, and the two spaces are said to be homeomorphic. One would be right to think that without certain assumptions, topologies could be wild and crazy beyond our imagination. The important point for this post is that a topology coming from a metric space is particularly well-behaved (at least as far as topologies go), satisfying a number of helpful properties that aid its analysis.

While it might seem weird and arbitrary to talk of open sets as a “structure” of a space, it turns out to yield a very rich theory and surprising applications. We plan to explore some of these applications on this blog in the distant future.

Inner Product Spaces: a Reprise

In what follows we will give a detailed but elementary treatment of the triangle inequality for a general inner product space. One should note that there are even more general spaces that allow for metrics with the triangle inequality, but these usually involve measure theory or take the triangle inequality as an axiom. In this post, we want to see the triangle inequality occur as a result of the existence of an inner product. For a refresher on inner product spaces and the basic definitions, the reader should refer to our primer on the subject.

Before we continue, we should also note which norms and inner products induce which metrics. For the Euclidean metric it is obviously the Euclidean inner product. The taxicab metric instead comes from the $ L_1$ norm, which is not induced by any inner product; for its triangle inequality one should refer to the second page of these notes.

Let $ V$ be an inner product space, and let $ v \in V$. We define the norm of $ v$ to be $ \| v \| = \sqrt{\left \langle v,v \right \rangle}$. This coincides with the usual Euclidean norm if we use the Euclidean inner product, and the $ L_2$ norm if we use the appropriate integral inner product.

There are some trivial properties one would expect to be true of norms, such as non-negativity, $ \| v \| = 0$ if and only if $ v = 0$, and $ \| av \| = |a| \| v \|$ for scalars $ a$. We leave these as exercises to the reader.

As we noted in our primer on inner product spaces, two vectors $ v,w$ are said to be orthogonal if $ \left \langle v,w \right \rangle = 0$. From this we can prove the Pythagorean Theorem for an inner product space.

Theorem: If $ u,v$ are orthogonal vectors, then $ \| u + v \|^2 = \| u \|^2 + \| v \|^2$.

Proof. By definition, $ \| u + v \|^2 = \left \langle u+v, u+v \right \rangle$, and this expands by linearity of the inner product to

$ \displaystyle \|u \|^2 + \| v \|^2 + \left \langle u,v \right \rangle + \left \langle v,u \right \rangle$

As the two vectors are orthogonal, the last two terms are zero, giving the desired result. $ \square$

Now given a vector $ v$, we describe a useful way to decompose another vector $ u$ into two parts, where one is orthogonal to $ v$ and one is a scalar multiple of $ v$. A simple computation gives a unique result:

$ \displaystyle u = \frac{\left \langle u,v \right \rangle}{\| v \|^2}v + \left ( u - \frac{\left \langle u,v \right \rangle}{\|v \|^2}v \right )$

We call the first term the projection of $ u$ onto $ v$. The second term is then simply the remainder after subtracting off the projection. This construction helps us understand the relationship between two vectors, but it also helps us understand the relationship between the inner product and the norm, as in the following theorem.

Theorem: (The Cauchy-Schwarz Inequality). For all $ u,v \in V$,

$ | \left \langle u,v \right \rangle | \leq \|u \| \|v \|$.

Proof. If $ v=0$ then the inequality trivially holds, so suppose $ v \neq 0$. Consider the square norm of the orthogonal decomposition of $ u$ onto $ v$, where we denote the orthogonal part by $ w$.

$ \displaystyle u = \frac{\left \langle u,v \right \rangle}{\| v \|^2}v + w$

By the Pythagorean Theorem, we have

$ \displaystyle \| u \|^2 = \left \| \frac{\left \langle u,v \right \rangle}{\| v \|^2}v \right \|^2 + \| w \|^2$

Since scalars pass through the norm with an absolute value (so that $ \| cv \| = |c| \| v \|$), this is the same as

$ \displaystyle \| u \|^2 = \frac{|\left \langle u,v \right \rangle |^2}{\| v \|^2} + \|w \|^2$

Since norms are non-negative, we can omit the $ w$ part and get an inequality

$ \displaystyle \| u \|^2 \geq \frac{| \left \langle u,v \right \rangle |^2}{\| v \|^2}$.

Multiplying both sides by $ \| v \|^2$ and taking square roots gives the result. $ \square$

And now we may finally get to the triangle inequality for norms, which says that $ \| u + v \| \leq \| u \| + \| v \|$. Before we prove this, note that we can bring this back to the world of metrics by defining a metric based on the norm as $ d(u,v) = \| u-v \|$; applying the norm inequality to $ u - v$ and $ v - w$ then gives $ d(u,w) \leq d(u,v) + d(v,w)$, which is exactly the triangle inequality we expect. So concluding this primer we present the proof of the triangle inequality.

Theorem: For all $ u,v \in V, \| u+v \| \leq \| u \| + \| v \|$.

Proof. Expanding $ \| u + v \|^2$ using the properties of the inner product we get

$ \displaystyle \| u + v \|^2 = \| u \|^2 + \| v \|^2 + 2 \textup{Re}\left \langle u,v \right \rangle$

where “Re” stands for the real part of the (possibly complex-valued) inner product. As the real part is bounded by the complex absolute value, we introduce our first inequality as

$ \displaystyle \| u +v \|^2 \leq \| u \|^2 + \| v \|^2 + 2|\left \langle u,v \right \rangle|$

By the Cauchy-Schwarz inequality, the last term is bounded by the norms of $ u$ and $ v$, giving

$ \displaystyle \| u+v \|^2 \leq \| u \|^2 + \| v \|^2 + 2 \| u \| \| v \| = (\| u \| + \| v \|)^2$

And taking square roots gives the result. $ \square$
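The proof is complete, but a quick numerical sanity check of both inequalities for the Euclidean inner product can be reassuring. The following snippet is our own illustration and is not needed for the mathematics:

import math
import random

def inner(u, v):
   return sum(a * b for a, b in zip(u, v))

def norm(u):
   return math.sqrt(inner(u, u))

for _ in range(1000):
   u = [random.uniform(-10, 10) for _ in range(5)]
   v = [random.uniform(-10, 10) for _ in range(5)]
   # Cauchy-Schwarz, with a small tolerance for floating point error
   assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-9
   # the triangle inequality for the induced norm
   assert norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v) + 1e-9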

Metrics on Words

We are about to begin a series where we analyze large corpora of English words. In particular, we will use a probabilistic analysis of Google’s ngrams to solve various tasks such as spelling correction, word segmentation, on-line typing prediction, and decoding substitution ciphers. This will hopefully take us on a wonderful journey through elementary probability, dynamic programming algorithms, and optimization.

As usual, the code implemented in this post is available from this blog’s Github page, and we encourage the reader to use the code to implement our suggested exercises. But before we get there, we should investigate some properties of our domain: the set of all finite strings of letters.

Words, Words, Words.

If we consider a fixed alphabet $ \Sigma$ (any set, really, but for our purposes a finite one), we may consider the set $ \Sigma^*$ of all finite strings of elements from $ \Sigma$, also called words. For example, given $ \Sigma = \left \{ u,v,w \right \}$, we have the word $ u \cdot w \cdot w \cdot v \cdot u \in \Sigma^*$. Also, we allow the empty word $ \varepsilon$ to be a string of length zero. The formal name for the “star” operation is the Kleene star. Most of our work here will be done over the English alphabet of letters $ \Sigma = \left \{ a, b, c, \dots , z \right \}$.

As usual, we are looking for some sort of underlying structure. Here the structure is that two words can be concatenated to make a larger string. In the parlance of abstract algebra, $ \Sigma^*$ is a monoid with respect to the concatenation operation. If we denote the operation as if it were multiplication, writing $ u \cdot v = uv$, then the monoid structure means just two things. First, the $ \cdot$ operation is associative, so that any three words $ r,s,t$ satisfy $ (r \cdot s) \cdot t = r \cdot (s \cdot t)$. Second, it has an identity element (here the empty word), so that $ \varepsilon w = w \varepsilon = w$ for all words $ w$. For computer scientists, these are just natural properties of functions like C’s strcat(), but in mathematics they define the structure of the space of all words. To be completely clear, these two properties (a set with an associative binary operation and an identity element) define a monoid.

We make a few abuses of notation here. In every monoid the operation is a pretend multiplication, so in general we will call it multiplication. We will also write strings (abusively, “products”) as $ a^4b^2$, which would formally be $ a \cdot a \cdot a \cdot a \cdot b \cdot b$. We lose nothing by the abuses, but gain brevity.

The Kleene starred monoid $ \Sigma^*$ has an additional property; it is the free monoid generated by $ \Sigma$. This won’t mean anything to the reader who isn’t familiar with universal properties, but it essentially tells us that any word $ w \in \Sigma^*$ is uniquely written as a product of letters in $ \Sigma$.

Now, the structure we’ve described here is not particularly rich. In fact, free objects are the algebraic objects which are usually “completely understood.” For our purposes the language of abstract algebra is just a mature setting for our discussion, but the concepts we introduce will give an extra perspective on the topic. In other words, as we don’t plan to use any monoids more complicated than the English alphabet free monoid described above, we have no interesting (general) theorems to apply to our activities.

Before we turn to a more concrete setting, we have one more definition. A monoid homomorphism between two monoids $ M_1, M_2$ is a function $ f : M_1 \to M_2$ which respects the multiplication operations and preserves the identity element. Rigorously, all words $ u,v \in M_1$ satisfy $ f(uv) = f(u)f(v)$, where the operation on the left side of the equality (before the application of $ f$) is multiplication in $ M_1$ and the one on the right hand side is multiplication in $ M_2$; moreover, $ f$ sends the identity of $ M_1$ to the identity of $ M_2$.

One easy example of a monoid homomorphism from our English alphabet free monoid is the length homomorphism. Rigorously, the set of natural numbers $ \mathbb{N}$ is a monoid under addition, and the function $ \textup{length} : \Sigma^* \to \mathbb{N}$ which assigns to each word its length is a homomorphism of monoids. This is intuitively clear: the length of a concatenation of two words is the sum of the lengths of the pieces.

A more complex example which shows up in functional programming has to do with lists. Let $ X, Y$ be two classes of objects of some fixed types; then we may consider $ X^*$ as the set of all lists of objects in $ X$. This is again a free monoid over $ X$ with the operation of list appending and the empty list as the identity. Note that $ X$ sits inside $ X^*$ in a natural way: each element of $ X$ can be considered a list of length one. With this understanding, we may be sloppy in calling the “product” of $ x,y \in X$ the list $ xy \in X^*$ (note, $ X$ itself need not have any operations).

Now for any fixed function $ g : X \to Y$, we may form the map homomorphism $ \mu_g: X^* \to Y^*$ inductively as follows:

$ \mu_g(\varepsilon) = \varepsilon$
$ \mu_g(x_1 \dots x_n) = g(x_1) \mu_g(x_2 \dots x_n)$

This is precisely the map operation defined in our primer on functional programming. We encourage the reader to investigate how to phrase the other two functions (filter and fold) as monoid homomorphisms, or prove it cannot be done (thanks to Matt for pointing out this author’s mistake with regards to that).
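As a quick illustration in Python (our own sketch, with made-up names), both the length function and the map operation respect concatenation and preserve the identity:

def map_hom(g, xs):
   # the map homomorphism mu_g applied to a list xs
   return [g(x) for x in xs]

def square(n):
   return n * n

u, v = "hello", "world"
assert len(u + v) == len(u) + len(v)   # length is a monoid homomorphism

xs, ys = [1, 2, 3], [4, 5]
assert map_hom(square, xs + ys) == map_hom(square, xs) + map_hom(square, ys)
assert map_hom(square, []) == []       # the empty list maps to the empty list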

Metrics, and String Comparisons

Since our goal is to do things like spelling correction, we are quite interested in strings of letters which are not actually words. We want to be able to tell someone that the word “beleive” is probably a misspelling of the word “believe.” So let us fix our alphabet $ \Sigma = \left \{ a, b, c, \dots , z \right \}$ and consider the free monoid $ \Sigma^*$. As we have noted, this is the set of all words one could type with the lowercase English alphabet, so it includes all of our egregious typos. It is a simplistic model, since we ignore punctuation, capitalization, and stylistic marks that convey meaning. But it is as good a place as any to start our investigation.

To mathematically describe what it means for a misspelled word to be “almost” the intended word, we need to bring in the concept of a metric. In other words, we want to view our set $ \Sigma^*$ as a metric space in which we can measure the distance between any two words (when viewed as a metric space, we will call them points). Then the set of all valid English words $ E \subset \Sigma^*$ is a subspace. To correct a misspelled word $ w \in \Sigma^*$, we can simply use the closest point in $ E$ with respect to the metric.

Of course, the hard part is describing the right metric. But before we get there, we recall the definition of a metric, so we know what properties to aim for in constructing a metric on words.

Definition: A metric $ d : X \times X \to \mathbb{R}$ is a function on a set $ X$ which has the following three properties for all $ x,y,z \in X$

  • $ d(x,y) \geq 0$, and $ d(x,y) = 0$ if and only if $ x = y$.
  • $ d(x,y) = d(y,x)$
  • $ d(x,y) + d(y,z) \geq d(x,z)$ (the triangle inequality)

A space $ X$ equipped with a fixed metric $ d$ is said to be a metric space.

There are plenty of interesting examples of metrics, and we refer the interested reader to Wikipedia, or to any introductory topology text (or the end of a real analysis text). We will focus on the Levenshtein metric.

If we think for a minute we can come up with a list of ways that people make typing mistakes. Sometimes we omit letters (as in diferent), sometimes we add too many letters (e.g., committment), and sometimes we substitute one letter for another (missussippi could be a phonetic error, or a slip of the finger on a qwerty keyboard). Furthermore, we can traverse from one word to another by a sequence of such operations (at worst, delete all letters and then insert the right letters). So it would make sense to take the distance between two words to be the smallest number of such transformations required to turn one word into another.

More rigorously, let $ u = u_1 \dots u_k$ be the unique way to write $ u$ as a product of letters, and let $ v = v_1 \dots v_j$ be the same for $ v$. An elementary edit of $ u$ is one of the following:

  • a deletion: the transformation $ u_1 \dots u_i \dots u_k \to u_1 \dots \widehat{u_i} \dots u_k$ for some $ 1 \leq i \leq k$, where the hat denotes omission of the letter in the $ i$-th spot.
  • an insertion: the transformation $ u_1 \dots u_k \to u_1 \dots u_i x u_{i+1} \dots u_k$ for some $ 0 \leq i \leq k$ (when $ i = 0$ the new letter is inserted at the front), and some letter $ x \in \Sigma$.
  • a substitution: the transformation $ u_1 \dots u_i \dots u_k \to u_1 \dots u_{i-1}xu_{i+1} \dots u_k$ for some $ 1 \leq i \leq k$ and some letter $ x$.

Then an edit from $ u$ to $ v$ is a sequence of elementary edits which begins with $ u= u_1 \dots u_k$ and ends in $ v= v_1 \dots v_j$. The length of an edit is the number of elementary edits in the sequence. Finally, we define the edit distance between $ u$ and $ v$, denoted $ d(u,v)$, as the length of the shortest edit from $ u$ to $ v$.

To verify this is a metric, we note that all edits have non-negative length, and the only edit of length zero is the edit which does nothing, so if $ d(x,y) = 0$ it follows that $ x = y$. Second, we note that edits are inherently symmetric: given an edit from $ x$ to $ y$, we may simply reverse the sequence (insertions become deletions, deletions become insertions, and substitutions are undone by the opposite substitution) to obtain a valid edit from $ y$ to $ x$ of the same length. Clearly, the property of being the shortest edit is not altered by reversal.

Last, we must verify the triangle inequality. Let $ x,y,z$ be words; we want to show $ d(x,z) \leq d(x,y) + d(y,z)$. Take two shortest edits between $ x,y$ and $ y,z$, and note that their composition is a valid edit from $ x$ to $ z$. Following our definition, by “compose” we mean combine the two sequences of operations into one sequence in the obvious way. Since this is an edit, its length can be no smaller than the shortest edit from $ x$ to $ z$, proving the claim.

So $ d$ is in fact a metric, and historically it is called Levenshtein’s metric.

A Topological Aside

Before we get to implementing this metric, we have a few observations to make. First, we note that the shortest edit between two words is far from unique. In particular, the needed substitutions, insertions, and deletions often commute (i.e. the order of operations is irrelevant). Furthermore, instead of simply counting the number of operations required, we could assign each operation a cost, and call the total cost of an edit the sum of the costs of each elementary edit. This yields a large class of different metrics, and one could conceivably think of new operations (combinations of elementary operations) to assign lower costs. Indeed, we will do just that soon enough.

Second, and more interestingly, this metric provides quite a bit of structure on our space. It is a well known fact that every metric induces a topology. In other words, there is a topology generated by the open balls $ \left \{ x : d(x,y) < r \right \}$ for all possible radii $ r \in \mathbb{R}$ and all centers $ y$. We can also characterize the topology from another viewpoint: consider the infinite graph $ G$ where each vertex is a word in $ \Sigma^*$ and two words have a connecting edge if there exists an elementary edit between them. Then edit distance in $ \Sigma^*$ is just the length of a shortest path in $ G$, and so the spaces are isometric, and hence homeomorphic (they have identical topologies). Indeed, this is often generalized to the word metric on a group, which is beyond the scope of this post (indeed, we haven’t gotten anywhere close to group theory yet on this blog!).

For those of us unfamiliar with topology or graph theory, we can still imagine geometric notions that get to the intuitive heart of what “induced topology” means for words. For example, we can describe a circle of radius $ r$ centered at a word $ w$ quite easily: it is just the set of all words whose edit distance from $ w$ is exactly $ r$. As a concrete example, the circle of radius 1 centered at the word $ a$ is

$ \left \{ \varepsilon, b, c, \dots , z, aa, ab, ac, \dots , az, ba, ca, \dots , za \right \}$

In fact, any geometric construction that can be phrased entirely in terms of distance has an interpretation in this setting. We encourage the reader to think of more.
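To see this circle in code, one can enumerate the words reachable by a single elementary edit. The following brute-force sketch is our own illustration (the function name is made up), and it reproduces the set above:

import string

def circle_of_radius_one(word, alphabet=string.ascii_lowercase):
   # every word reachable from `word` by exactly one elementary edit
   deletions = [word[:i] + word[i+1:] for i in range(len(word))]
   insertions = [word[:i] + c + word[i:] for i in range(len(word) + 1) for c in alphabet]
   substitutions = [word[:i] + c + word[i+1:] for i in range(len(word)) for c in alphabet if c != word[i]]
   return set(deletions + insertions + substitutions) - {word}

print(sorted(circle_of_radius_one("a")))   # the empty word, b through z, and the two-letter words listed above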

Python Implementation, and a Peek at Dynamic Programming

Of course, what use are these theoretical concerns to us if we can’t use them to write a spell-checker? To actually implement the damn thing, we need a nontrivial algorithm. So now let’s turn to Python.

Our first observation is that we don’t actually care what the edits are, we just care about the number of edits. Since the edits only operate on single characters, we can define the behavior recursively. Specifically, suppose we have two words $ u = u_1 \dots u_k$ and $ v = v_1 \dots v_j$. If $ u_k = v_j$, we can leave the last characters alone and inductively work with the remaining letters. If not, we find the shortest edit between $ u_1 \dots u_{k-1}$ and $ v_1 \dots v_{j}$, as if our last operation were a deletion of $ u_k$. Similarly, we can inductively find the shortest distance between $ u_1 \dots u_k$ and $ v_1 \dots v_{j-1}$, as if our last move were an insertion of $ v_j$ at the end of $ u$. Finally, we can find the shortest distance between $ u_1 \dots u_{k-1}$ and $ v_1 \dots v_{j-1}$, as if our last move were a substitution of $ v_j$ for $ u_k$. We then take the smallest of these three options, plus one for the elementary edit performed. For the base case, if either word is empty, then the only possible edit is inserting/deleting all the letters of the other word.

Here is precisely that algorithm, written in Python:

def dist(word1, word2):
   if not word1 or not word2:
      # base case: one word is empty, so insert/delete all letters of the other
      return max(len(word1), len(word2))
   elif word1[-1] == word2[-1]:
      # the last characters agree, so no edit is needed for them
      return dist(word1[:-1], word2[:-1])
   else:
      # otherwise try a deletion, an insertion, and a substitution, each costing 1
      return 1 + min(dist(word1[:-1], word2),
                     dist(word1, word2[:-1]),
                     dist(word1[:-1], word2[:-1]))

Here the [:-1] syntax indicates a slice of the first $ n-1$ characters of an $ n$ character string. Note again that as we don’t actually care what the operations are, we can simply assume we’re doing the correct transformation, and just add 1 to our recursive calls. For a proof of correctness, we refer the reader to Wikipedia (Sorry! It’s just a ton of case-checking). We also note that recursion in Python can be extremely slow for large inputs. There is of course a method of building up a cost matrix from scratch which would perform better, but we feel this code is more legible, and leave the performance tuning as an exercise to the reader. For more information on dynamic programming, see this blog’s primer on the subject.

The cautious programmer will note the above algorithm is terribly wasteful! For instance, suppose we’re investigating the distance between $ foo$ and $ bar$. Through our recursive calls, we’ll first investigate the distance between $ fo$ and $ bar$, during which we recursively investigate $ fo$ versus $ ba$. Once that’s finished, we go ahead and investigate the other branch, $ foo$ versus $ ba$, during which we look at $ fo$ versus $ ba$ once more, even though we already computed it in the first branch. What’s worse is that we have a third branch that computes $ fo$ versus $ ba$ yet again! Doing a bit of algorithm analysis, we realize that this algorithm is $ O(3^{\min(n,m)})$, where $ m, n$ are the lengths of the two compared words. Unacceptable!

To fix this, we need to keep track of prior computations. The technical term is memoized recursion, and essentially we want to save old computations in a lookup table for later reference. In mostly-Python:

cache = {}
def memoizedFunction(args):
   if args not in cache:
      cache[args] = doTheComputation(args)
   return cache[args]

To actually implement this, it turns out we don’t need to change the above code at all. Instead, we will use a decorator to modify the function as we wish. Here’s the code, which is essentially an extra layer of indirection applied to the above pseudocode.

def memoize(f):
   cache = {}

   def memoizedFunction(*args):
      if args not in cache:
         cache[args] = f(*args)
      return cache[args]

   memoizedFunction.cache = cache
   return memoizedFunction

Here the function memoize() will accept our distance function, and return a new function which encapsulates the memo behavior. To use it, we simply use

def f(x):
   ...

equivalentButMemoizedFunction = memoize(f)

But luckily, Python provides a nice piece of syntax, the decorator, to avoid writing this boilerplate for every function we wish to memoize. Instead, we may simply write

@memoize
def f(x):
   ...

And Python will replace f with the memoized version returned by memoize(f), so that every subsequent call to f goes through the cache. Convenient! For further discussion, see our post on this technique in the program gallery.

Applying this to our Levenshtein metric, we see an impressive speedup, and a quick analysis shows the algorithm takes $ O(nm)$, where $ n, m$ are the lengths of the two words being compared. Indeed, we are comparing (at worst) all possible prefixes of the two words, and for each of the $ n$ prefixes of one word, we compute a distance to all $ m$ prefixes of the other word. The memoization prevents us from doing any computation twice.
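For example, assuming dist has been wrapped with @memoize as described, a couple of quick checks (whose values are easy to verify by hand) now run instantly:

print(dist("beleive", "believe"))   # 2: two substitutions
print(dist("kitten", "sitting"))    # 3: two substitutions and one insertion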

To this author, this approach is the most natural implementation, but there are other approaches worth investigating. In particular, Python limits the recursion depth (by default, to about a thousand frames). If we try to compare, say, two DNA sequences, this algorithm will quickly overflow. There are a number of ways to fix this, the most appropriate of which would be tail call optimization (in this author’s humble opinion). Unfortunately, we’d need to tweak the algorithm a bit to put the recursive call in tail position, Python does not support tail call optimization anyway, and manually putting things in continuation-passing style is annoying, obfuscating, and quite ugly. If we decide in the future to do DNA sequence analysis, we will return to this problem.

In the future, we plan to provide another Python primer, with a focus on dynamic algorithms. Other methods for solving this problem will arise there. Indeed, I’m teaching an introductory Python programming course next semester, so this will be a good refresher.

Transpositions, and Other Enhancements

One other significant kind of typo is a transposition. Often we type the correct letters in a word, but jumble the order of two adjacent letters. In a spell checker, we want the word $ thier$ to be closer to the word $ their$ than it is to the word $ cheer$, but with the Levenshtein metric the two pairs have equal distance (two substitutions each). We can enhance the metric by making transpositions have a cost of 1. Historically, this extended metric is called the Damerau-Levenshtein metric. Indeed, Damerau himself gave evidence that transpositions, along with the other three elementary edits, account for over 85% of human typing errors. Then again, that was back in the sixties, and typing has changed in many ways since then (not the least of which is a change in a typist’s vocabulary).

Adding transpositions to the algorithm above seems straightforward, but there are some nontrivial details to consider. For instance, we may first transpose two letters and then insert a new letter between them, as in the transformation from $ ta$ to $ act$. If we are not careful, we might prohibit such legal transformations in our algorithm. Here is an implementation, which again uses the memoization decorator.

@memoize
def dist2(word1, word2):
   if not word1 or not word2:
      # base case: one word is empty, so insert/delete all letters of the other
      return max(len(word1), len(word2))
   elif word1[-1] == word2[-1]:
      # the last characters agree, so no edit is needed for them
      return dist2(word1[:-1], word2[:-1])
   else:
      # a deletion, an insertion, or a substitution, as before
      minDist = 1 + min(dist2(word1[:-1], word2),
                        dist2(word1, word2[:-1]),
                        dist2(word1[:-1], word2[:-1]))
      # transpositions of the last two letters of either word
      if len(word1) > 1 and len(word2) > 1:
         if word1[-2] == word2[-1]:
            transposedWord1 = word1[:-2] + word1[-1] + word1[-2]
            minDist = min(minDist, dist2(transposedWord1[:-1], word2))
         if word2[-2] == word1[-1]:
            transposedWord2 = word2[:-2] + word2[-1] + word2[-2]
            minDist = min(minDist, dist2(word1, transposedWord2[:-1]))
   return minDist

Indeed, we must inspect both possible transpositions, and the symmetry of the example above shows the need for both branches. The proof that this extended metric is still a metric and the proof of algorithmic correctness are nearly identical to the plain Levenshtein metric.
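Assuming the same memoize decorator is in scope, a few checks against the examples from this section behave as we’d hope:

print(dist2("thier", "their"))   # 1: a single transposition
print(dist2("thier", "cheer"))   # 2: two substitutions
print(dist2("ta", "act"))        # 2: a transposition followed by an insertion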

So that was fun. Here are some other ideas we leave as exercises to the reader. First, if we allow ourselves to fix a keyboard layout (for many languages with Latin-based alphabets, the standard is qwerty with minor substitutions), we could factor that into our analysis of letter substitutions and incorrect insertions. For instance, the word $ ribies$ is just as close to $ rabies$ as it is to $ rubies$, but it is less likely the user meant to type the first word, since $ u$ is physically closer to $ i$ than $ a$ is. To implement this, we can modify the above algorithm to accept a look-up table of (approximate) physical distances between keys. Instead of adding 1 in the relevant branches, we can add a cost according to the look-up table. At the coarsest, we could construct a graph with vertices representing letters, edges representing physical adjacencies, and use the shortest graph path in place of physical key distance.

We also note (from our mature vantage point) that this algorithm is not restricted to strings, but can be performed on any free monoid. This includes the example we mentioned earlier of lists. So we could generalize the algorithm to operate on any piece of data which has such an identity element and binary operation, and satisfies the freedom condition. My knowledge of Python is still somewhat limited, but the method for achieving this generalization comes in many names in many languages: in Java it’s interfaces, in C++ it’s templating, in Haskell it’s a typeclass. In Python, there is a fancy thing called duck-typing, and we leave this for our next Python primer.

Next time, we’ll crack open some data files with actual English dictionaries in them, and see what we can do about solving interesting problems with them. Until then!