The Universal Properties of Map, Fold, and Filter

Fans of functional programming often say that the functional style is simply more elegant than the imperative style. When compelled or inspired to explain (as I did in my old post, How I Learned to Love Functional Programming), they often point to the three “higher-order” functions map, fold, and filter as providing a unifying framework for writing and reasoning about programs. But how unifying are they, really? In this post we’ll give characterizations of these functions in terms of universal properties, and we’ll see that in fact fold is the “most” universal of the three, and its natural generalization gives a characterization of transformations of standard compound data types.

By being universal or having a universal property, we don’t mean that map, fold, and filter are somehow mystically connected to all programming paradigms. That might be true, but it’s not the point of this article. Rather, by saying something has a universal property we are making a precise mathematical statement that it is either an initial or final object (or the unique morphism determined by such) in some category.

That means that, as a fair warning to the reader, this post assumes some basic knowledge of category theory, but no more than what we’ve covered on this blog in the past. Of particular importance for the first half of this post is the concept of a universal property, and in the followup post we’ll need some basic knowledge of functors.

Map, Filter, and Fold

Recalling their basic definitions, map is a function which accepts as input a list L whose elements all have the same type (call it A), and a function f which maps A to another type B. Map then applies f to every entry of L to produce a new list whose entries all have type B.

In most languages, implementing the map function is an elementary exercise. Here is one possible definition in ML.

fun map(_, nil) = nil
  | map(f, (head::tail)) = f(head) :: map(f, tail)

Next, filter takes as input a boolean-valued predicate p : A \to \textup{bool} on a type A and a list L whose entries have type A, and produces a list of those entries of L which satisfy the predicate. Again, its implementation in ML might be:

fun filter(_, nil) = nil
  | filter(p, (head::tail)) = if p(head)
                                 then (head :: filter(p, tail))
                                 else filter(p, tail)

Finally, fold is a function which “reduces” a list L of entries with type A down to a single value of type B. It accepts as input a function f : A \times B \to B, and an initial value v \in B, and produces a value of type B by recursively applying f as follows:

fun fold(_, v, nil) = v
  | fold(f, v, (head::tail)) = f(head, fold(f, v, tail))

(If this definition is mysterious, you’re probably not ready for the rest of this post.)

One thing that’s nice about fold is that we can define map and filter in terms of it:

fun map(f, L) = fold((fn (x, xs) => f(x)::xs), [], L)
fun filter(p, L) = fold((fn (x, xs) => if p(x) then x::xs else xs), [], L)
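As a cross-check, here is a sketch of the same three definitions in Python (a hypothetical direct transcription of the ML above; the names foldr, map_, and filter_ are ours, chosen to avoid shadowing Python's builtins):

```python
def foldr(f, v, lst):
    # foldr(f, v, [x1, ..., xn]) computes f(x1, f(x2, ... f(xn, v)))
    if not lst:
        return v
    return f(lst[0], foldr(f, v, lst[1:]))

def map_(f, lst):
    # map as a fold: cons f(x) onto the already-mapped tail
    return foldr(lambda x, xs: [f(x)] + xs, [], lst)

def filter_(p, lst):
    # filter as a fold: keep x only when the predicate holds
    return foldr(lambda x, xs: [x] + xs if p(x) else xs, [], lst)
```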

We’ll see that this is no accident: fold happens to have the “most general” universality of the three functions, but as a warm-up and a reminder we first investigate the universality of map and filter.

Free Monoids and Map

Map is the easiest function of the three to analyze, but to understand it we need to understand something about monoids. A monoid is simple to describe: it’s just a set M with an associative binary operation (denoted by multiplication) and an identity element for that operation.

A monoid homomorphism between two monoids M, N is a function f: M \to N such that f(xy) = f(x)f(y). Here the multiplication on the left hand side of the equality is the operation from M, and on the right it’s the one from N (this is the same definition as a group homomorphism). As is to be expected, monoids with monoid homomorphisms form a category.

We encounter simple monoids every day in programming which elucidate this definition. Strings with the operation of string concatenation form a monoid, and the empty string acts as an identity because concatenating a string to the empty string has no effect. Similarly, lists with list concatenation form a monoid, where the identity is the empty list. A nice example of a monoid homomorphism is the length function. We know it’s a homomorphism because the length of a concatenation of two lists is just the sum of the lengths of the two lists.

Integers also form a monoid, with addition as the operation and zero as the identity element. However, the list and string monoids have an extra special property that integers do not. For a number n you can always find -n so that n + (-n) = 0 is the identity element. But for lists, the only way to concatenate two lists and get the empty list is if both of the lists were empty to begin with. A monoid with this property is called free, and to fully understand it we should make a definition which won’t seem to mean anything at first.

Definition: Let \mathbf{C} be a category. Given a set A, the free object over A, denoted F(A), is an object of \mathbf{C} which is universal with respect to set-maps A \to B for any object B in \mathbf{C}.

As usual with a definition by universal property, we should elaborate as to exactly what’s going on. Let \mathbf{C} be a category whose objects are sets and let A be a set, possibly not in this category. We can construct a new category, \mathbf{C}_{A \to X} whose objects are set-maps

\displaystyle f: A \to X

and whose morphisms are commutative diagrams of the form

[Diagram: free-object-cat-morphism]

where \varphi is a morphism in \mathbf{C}. In other words, we simply require that \varphi(f_1(a)) = f_2(a) for every a \in A.

By saying that the free monoid on A satisfies this universal property for the category of monoids, we really mean that it is initial in this category of set-maps and commutative diagrams. That is, there is a monoid F(A) and a set-map i_A: A \to F(A) so that for every monoid Y and set-map f: A \to Y there is a unique monoid homomorphism from i_A to f. In other words, there is a unique monoid homomorphism \varphi making the following diagram commute:

[Diagram: free-object-univ-prop]

For example, if A is the set of all unicode characters, then F(A) is precisely the monoid of all strings on those characters. The map i_A is then the map taking a character a to the single-character string “a”. More generally, if T is any type in the category of types and computable functions, then F(T) is the type of homogeneous lists whose elements have type T.

We will now restrict our attention to lists and types, and we will denote the free (list) monoid on a type A as [A]. The universal property of map is then easy to describe: it’s just a specific instance of this more general universal property for free monoids. Specifically, we work in the sub-category of homogeneous list monoids (we only allow objects which are [T] for some type T). The morphism i_A: A \to [A] just takes a value a of type A and spits out a single-item list [a]. We might hope that for any mapping A \to [B], there is a unique monoid homomorphism [A] \to [B] making the following diagram commute.

[Diagram: bad-map]

And while this is true, the diagram lies because “map” does not achieve what \varphi does. The map f might send all of A to the empty list, and this would cause \varphi to be the trivial map. But if we decomposed f further to require it to send a to a single b, such as

[Diagram: still-bad-map]

then things work out nicely. In particular, specifying a function f: A \to B uniquely defines how a list homomorphism operates on lists. The diagram forces the behavior for single-element lists: [a] \mapsto [f(a)], and the property of \varphi being a monoid homomorphism forces \varphi to preserve order.

Indeed, we’ve learned from this analysis that the structure of the list involved is crucial in forming the universal property. Map is far from the only computable function making the diagram commute, but it is clearly the only monoid homomorphism. As we’ll see, specifying the order of operations is more generally described by fold, and we’ll be able to restate the universal property of map without relying on monoid homomorphisms.
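To make the homomorphism property concrete, here is a small Python check (map_list is a hypothetical stand-in for map restricted to lists):

```python
def map_list(f, lst):
    # the list-monoid homomorphism induced by f
    return [f(x) for x in lst]

square = lambda n: n * n
L1, L2 = [1, 2], [3, 4]

# map respects concatenation (the monoid operation on lists)...
assert map_list(square, L1 + L2) == map_list(square, L1) + map_list(square, L2)
# ...and its behavior is forced on single-element lists: [a] -> [f(a)]
assert map_list(square, [5]) == [square(5)]
```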

Filter

The filter function is a small step up in complexity from map, but its universal property is almost exactly the same. Again filter can be viewed through the lens of a universal property for list monoids, because filter, once we fix a predicate, is itself a monoid homomorphism. We know this because filtering two lists by the same predicate and concatenating them is the same as concatenating them first and then filtering the result. Indeed, the only difference here is that the diagram now looks like this

[Diagram: filter-bad]

where p is our predicate, and j_A is defined by (a, T) \mapsto [a] and (a,F) \mapsto []. The composition j_A \circ p gives the map A \to [A] that we can extend uniquely to a monoid homomorphism [A] \to [A]. We won’t say any more on it, except to mention that again maintaining the order of the list is better described via fold.
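Here is the analogous Python check for filter (filter_list and j_A are our hypothetical names; j_A composed with the predicate sends a to [a] when p(a) holds and to [] otherwise):

```python
def filter_list(p, lst):
    # the monoid homomorphism [A] -> [A] determined by the predicate p
    return [x for x in lst if p(x)]

def j_A(p, a):
    # the composition j_A . p on a single value: a -> [a] or a -> []
    return [a] if p(a) else []

even = lambda n: n % 2 == 0
L1, L2 = [1, 2], [3, 4]

# filtering a concatenation equals concatenating the filtered pieces
assert filter_list(even, L1 + L2) == filter_list(even, L1) + filter_list(even, L2)
# on single-element lists, filter agrees with j_A composed with p
assert filter_list(even, [3]) == j_A(even, 3)
assert filter_list(even, [4]) == j_A(even, 4)
```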

Fold, and its Universal Property

Fold is different from map and filter in a very concrete way. Even though map and filter do specify that the order of the list should be preserved, it’s not an important part of their definition: filter should filter, and map should map, but no interaction occurs between different entries during the computation. In a sense, we got lucky that having everything be monoid homomorphisms resulted in preserving order without our specific intention.

Because it doesn’t necessarily produce lists and the operation can be quite weird, fold cannot exist without an explicit description of the order of operations. Let’s recall the definition of fold, and stick to the “right associative” order of operations sometimes specified by calling the function “foldr.” We will use foldr and fold interchangeably.

fun foldr(_, v, nil) = v
  | foldr(f, v, (head::tail)) = f(head, foldr(f, v, tail))

Even more, we can’t talk about foldr in terms of monoids. Even after fixing f and v, foldr need not produce a monoid homomorphism. So if we’re going to find a universal property of foldr, we’ll need a more general categorical picture.

One first try would be to view foldr as a morphism of sets

\displaystyle B \times \textup{Hom}(A \times B, B) \to \textup{Hom}([A], B)

The mapping is just b, f \mapsto \textup{foldr } f \textup{ } b, and this is just the code definition we gave above. One might hope that this mapping defines an isomorphism in the category of types and programs, but it’s easy to see that it does not. For example, let

A = \left \{ 1 \right \}, B = \left \{ 1,2 \right \}

Then the left hand side of the mapping above is a set of size 8 (there are eight ways to combine a choice of element in B with a map A \times B \to B). But the right hand side is clearly infinite. In fact, it’s uncountably infinite, though not all possible mappings are realizable in programs of a reasonable length (in fact, very few are). So the morphism can’t possibly be surjective and hence is not an isomorphism.
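The counting on the left hand side can be verified by brute force. A sketch: a function g : A \times B \to B is just a choice of output for each input pair, so we enumerate them with itertools.

```python
from itertools import product

A = [1]
B = [1, 2]

# an element of B x Hom(A x B, B) is a value b in B together with a
# function g : A x B -> B, i.e. a choice of output for each input pair
inputs = list(product(A, B))                       # two input pairs
functions = list(product(B, repeat=len(inputs)))   # 2^2 = 4 functions
size = len(B) * len(functions)
assert size == 8
```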

So what can we say about fold? The answer is a bit abstract, but it works out nicely.

Say we fix a type for our lists, A. We can define a category \mathbf{C}_A which has as objects the following morphisms

[Diagram: fold objects]

By 1 we mean the type with one value (null, if you wish), and f is a morphism from a coproduct (i.e. there are implicit parentheses around A \times B). As we saw in our post on universal properties, a morphism from a coproduct is the same thing as a pair of functions which operates on each piece. Here one operates on 1 and the other on A \times B. Since morphisms from 1 are specified by the image of the sole value f(1) = b, we will often write such a function as b \amalg h, where h: A \times B \to B.

Now the morphisms in \mathbf{C}_A are the pairs of morphisms \varphi, \overline{\varphi} which make the following diagram commute:

[Diagram: fold morphisms]

Here we write \overline{\varphi} because it is determined by \varphi in a canonical way. It maps 1 \to 1 in the only way one can, and it maps (a,b) \mapsto (a, \varphi(b)). So we’re really specifying \varphi alone, but as we’ll see it’s necessary to include \overline{\varphi} as well; it will provide the “inductive step” in the definition of fold.

Now it turns out that fold satisfies the universal property of being initial in this category. Well, not quite. We’re saying first that the following object \mathscr{A} is initial,

[Diagram: initial object fold]

Where cons is the list constructor A \times [A] \to [A] which sends (a_0, [a_1, \dots, a_n]) \mapsto [a_0, a_1, \dots, a_n]. By being initial, we mean that for any object \mathscr{B} given by the morphism b \amalg g: 1 \coprod A \times B \to B, there is a unique morphism \mathscr{A} \to \mathscr{B}. The “fold” is precisely this unique morphism, which we denote by (\textup{fold } g \textup{ } b). We implicitly know its “barred” counterpart making the following diagram commute.

[Diagram: fold-univ-property]

This diagram has a lot going on in it, so let’s go ahead and recap. The left column represents an object \mathscr{A} we’re claiming is initial in this crazy category we’ve made. The right hand side is an arbitrary object \mathscr{B}, which is equivalently the data of an element b \in B and a mapping g: A \times B \to B. This is precisely the data needed to define fold. The dashed lines represent the unique morphisms making the diagram commute, whose existence and uniqueness is the defining property of what it means for an object to be initial in this category. Finally, we’re claiming that foldr, when we fix g and b, makes this diagram commute, and is hence the very same unique morphism we seek.

To prove all of this, we need to first show that the object \mathscr{A} is initial. That is, that any two morphisms we pick from \mathscr{A} \to \mathscr{B} must be equal. The first thing to notice is that the two objects 1 \coprod A \times [A] and [A] are really the same thing. That is, \textup{nil} \amalg \textup{cons} has a two-sided inverse which makes it an isomorphism in the category of types. Specifically, the inverse is the map \textup{split} sending

\textup{nil} \mapsto \textup{nil}
[a_1, \dots, a_n] \mapsto (a_1, [a_2, \dots, a_n])

So if we have a morphism \varphi, \overline{\varphi} from \mathscr{A} \to \mathscr{B}, and the diagram commutes, we can see that \varphi = (b \amalg g) \circ \overline{\varphi} \circ \textup{split}. We’re just going the long way around the diagram.

Supposing that we have two such morphisms \varphi_1, \varphi_2, we can prove they are equal by induction on the length of the input list. It is trivial to see that they both must send the empty list to b. Now suppose that for lists of length n-1 the two are equal. Given a list [a_1, \dots, a_n] we see that

\displaystyle \varphi_1([a_1, \dots, a_n]) = \varphi_1 \circ \textup{cons} (a_1, [a_2, \dots, a_n]) = g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n])

where the last equality holds by the commutativity of the diagram. Continuing,

\displaystyle g \circ \overline{\varphi_1} (a_1, [a_2, \dots, a_n]) = g (a_1, \varphi_1([a_2, \dots, a_n])) = g(a_1, \varphi_2([a_2, \dots, a_n]))

where the last equality holds by the inductive hypothesis. From here, we can reverse the equalities using \varphi_2 and its “barred” version to get back to \varphi_2([a_1, \dots, a_n]), proving the equality.

To show that fold actually makes the diagram commute is even simpler. In order to make the diagram commute we need to send the empty list to b, and we need to inductively send [a_1, \dots, a_n] to g(a_1, (\textup{fold g b})([a_2, \dots, a_n])), but this is the very definition of foldr!

So we’ve established that fold has this universal property. It’s easy now to see how map and filter fit into the picture. For mapping types A to C via f, just use the type [C] in place of B above, and have g(a, L) = \textup{cons}(f(a), L), and have b be the empty list. Filter similarly just needs a special definition for g.

A skeptical reader might ask: what does all of this give us? It’s a good question, and one that shouldn’t be taken lightly because I don’t have an immediate answer. I do believe that with some extra work we could use universal properties to give a trivial proof of the third homomorphism theorem for lists, which says that any function expressible as both a foldr and a foldl can be expressed as a list homomorphism. The proof would involve formulating a universal property for foldl, which is very similar to the one for foldr, and attaching the diagrams in a clever way to give the universal property of a monoid homomorphism for lists. Caveat emptor: this author has not written such a proof, but it seems likely that it would work.

More generally, any time we can represent the requirements of a list computation by an object like \mathscr{B}, we can represent the computation as a foldr. What good does this do us? Well, it might already be obvious when you can and can’t use fold. But in addition to proving when it’s possible to use fold, this new perspective generalizes very nicely to give us a characterization of arbitrary computations on compound data types. One might want to know when to perform fold-like operations on trees, or other sufficiently complicated beasts, and the universal property gives us a way to see when such a computation is possible.

That’s right, I said it: there’s more to the world than lists. Shun me if you must, but I will continue to dream of great things.

In an effort to make the egregiously long posts on this blog slightly shorter, we’ll postpone our generalization of the universal property of fold until next time. There we’ll define “initial algebras” and show how to characterize “fold-like” computations on any compound data type.

Until then!


Metrics on Words

We are about to begin a series where we analyze large corpora of English words. In particular, we will use a probabilistic analysis of Google’s ngrams to solve various tasks such as spelling correction, word segmentation, on-line typing prediction, and decoding substitution ciphers. This will hopefully take us on a wonderful journey through elementary probability, dynamic programming algorithms, and optimization.

As usual, the code implemented in this post is available from this blog’s Github page, and we encourage the reader to use the code to implement our suggested exercises. But before we get there, we should investigate some properties of our domain: the set of all finite strings of letters.

Words, Words, Words.

If we consider a fixed alphabet \Sigma (any set, really, but for our purposes a finite one), we may consider the set \Sigma^* of all finite strings of elements from \Sigma, also called words. For example, given \Sigma = \left \{ u,v,w \right \}, we have the word u \cdot w \cdot w \cdot v \cdot u \in \Sigma^*. Also, we allow the empty word \varepsilon to be a string of length zero. The formal name for the “star” operation is the Kleene star. Most of our work here will be done over the English alphabet of letters \Sigma = \left \{ a, b, c, \dots , z \right \}.

As usual, we are looking for some sort of underlying structure. Here the structure is that two words can be concatenated to make a larger string. In the parlance of abstract algebra, \Sigma^* is a monoid with respect to the concatenation operation. If we denote the operation by (pretending it is) multiplication, we write u \cdot v = uv, and the monoid structure just means two things. First, the \cdot operation is associative, so that any three words r,s,t satisfy (r \cdot s) \cdot t = r \cdot (s \cdot t). Second, it has an identity element (here the empty word), so that \varepsilon w = w \varepsilon = w for all words w. For computer scientists, these are just natural properties of functions like C’s strcat(), but in mathematics they define the structure of the space of all words. To be completely clear, these two properties (a set with an associative binary operation and an identity element) define a monoid.

We make a few abuses of notation here. In every monoid the operation is a pretend multiplication, so in general we will call it multiplication. We will also write strings (abusively, “products”) as a^4b^2, which would formally be a \cdot a \cdot a \cdot a \cdot b \cdot b. We lose nothing by the abuses, but gain brevity.

The Kleene starred monoid \Sigma^* has an additional property; it is the free monoid generated by \Sigma. This won’t mean anything to the reader who isn’t familiar with universal properties, but it essentially tells us that any word w \in \Sigma^* is uniquely written as a product of letters in \Sigma.

Now, the structure we’ve described here is not particularly rich. In fact, free objects are the algebraic objects which are usually “completely understood.” For our purposes the language of abstract algebra is just a mature setting for our discussion, but the concepts we introduce will give an extra perspective on the topic. In other words, as we don’t plan to use any monoids more complicated than the English alphabet free monoid described above, we have no interesting (general) theorems to apply to our activities.

Before we turn to a more concrete setting, we have one more definition. A monoid homomorphism between two monoids M_1, M_2 is a function f : M_1 \to M_2 which respects the multiplication operations and preserves the identity element. Rigorously, we have that all words u,v \in M_1 satisfy f(uv) = f(u)f(v), where the operation on the left side of the equality (before the application of f) is multiplication in M_1, and the one on the right hand side is multiplication in M_2.

One easy example of a monoid homomorphism from our English alphabet free monoid is the length homomorphism. Rigorously, the set of natural numbers \mathbb{N} is a monoid under addition, and the function \textup{length} : \Sigma^* \to \mathbb{N} which assigns to each word its length is a homomorphism of monoids. This is intuitively clear: the length of a concatenation of two words is the sum of the lengths of the pieces.
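In Python, the homomorphism property of length is a one-line check (a trivial sketch, using the builtin len in place of \textup{length}):

```python
u, v = "foo", "barbaz"

# the length of a concatenation is the sum of the lengths
assert len(u + v) == len(u) + len(v)
# the identity of Sigma* (the empty word) maps to the identity of (N, +)
assert len("") == 0
```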

A more complex example which shows up in functional programming has to do with lists. Let X, Y be two classes of objects of some fixed types, then we may consider X^* as the set of all lists of objects in X. This is again a free monoid over X with the operation of list appending and the empty list as the identity. Note that X sits inside X^* in a natural way: each element of X can be considered a list of length one. With this understanding, we may be sloppy in calling the “product” of x,y \in X the list xy \in X^* (note, X itself need not have any operations).

Now for any fixed operation g : X \to Y, we may form the map homomorphism \mu_g: X^* \to Y^* inductively as follows:

\mu_g(\varepsilon) = \varepsilon
\mu_g(x_1 \dots x_n) = g(x_1) \mu_g(x_2 \dots x_n)

This is precisely the map operation defined in our primer on functional programming. We encourage the reader to investigate how to phrase the other two functions (filter and fold) as monoid homomorphisms, or prove it cannot be done (thanks to Matt for pointing out this author’s mistake with regards to that).
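The inductive definition of \mu_g translates directly into code. A sketch on strings, taking g to be the function sending a letter to its uppercase version (so X is lowercase letters and Y uppercase letters):

```python
def mu(g, word):
    # the inductive definition: mu_g(eps) = eps, mu_g(x w) = g(x) mu_g(w)
    if not word:
        return ""
    return g(word[0]) + mu(g, word[1:])

u, v = "foo", "bar"
# the homomorphism property: mu_g(uv) = mu_g(u) mu_g(v)
assert mu(str.upper, u + v) == mu(str.upper, u) + mu(str.upper, v)
```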

Metrics, and String Comparisons

Since our goal is to do things like spelling correction, we are quite interested in strings of letters which are not actually words. We want to be able to tell someone that the word “beleive” is probably a misspelling of the word “believe.” So let us fix our alphabet \Sigma = \left \{ a, b, c, \dots , z \right \} and consider the free monoid \Sigma^*. As we have noted, this is the set of all words one could type with the lowercase English alphabet, so it includes all of our egregious typos. It is a simplistic model, since we ignore punctuation, capitalization, and stylistic marks that convey meaning. But it is as good a place as any to start our investigation.

To mathematically describe what it means for a misspelled word to be “almost” the intended word, we need to bring in the concept of a metric. In other words, we want to view our set \Sigma^* as a metric space in which we can measure the distance between any two words (when viewed as a metric space, we will call them points). Then the set of all valid English words E \subset \Sigma^* is a subspace. To correct a misspelled word w \in \Sigma^*, we can simply use the closest point in E with respect to the metric.

Of course, the hard part is describing the right metric. But before we get there, we must define what a metric is in general, so we know what properties to aim for in constructing a metric on words.

Definition: A metric d : X \times X \to \mathbb{R} is a function on a set X which has the following three properties for all x,y,z \in X:

  • d(x,y) \geq 0, with d(x,y) = 0 if and only if x = y
  • d(x,y) = d(y,x) (symmetry)
  • d(x,z) \leq d(x,y) + d(y,z) (the triangle inequality)

A space X equipped with a fixed metric d is said to be a metric space.

There are plenty of interesting examples of metrics, and we refer the interested reader to Wikipedia, or to any introductory topology text (or the end of a real analysis text). We will focus on the Levenshtein metric.

If we think for a minute we can come up with a list of ways that people make typing mistakes. Sometimes we omit letters (as in diferent), sometimes we add too many letters (e.g., committment), and sometimes we substitute one letter for another (missussippi could be a phonetic error, or a slip of the finger on a qwerty keyboard). Furthermore, we can traverse from one word to another by a sequence of such operations (at worst, delete all letters and then insert the right letters). So it would make sense to take the distance between two words to be the smallest number of such transformations required to turn one word into another.

More rigorously, let u = u_1 \dots u_k be the unique way to write u as a product of letters, and let v = v_1 \dots v_j be the same for v. An elementary edit of u is one of the following:

  • a deletion: the transformation u_1 \dots u_i \dots u_k \to u_1 \dots \widehat{u_i} \dots u_k for some 1 \leq i \leq k, where the hat denotes omission of the letter in the i-th spot.
  • an insertion: the transformation u_1 \dots u_k \to u_1 \dots u_i x \dots u_k for some 1 \leq i \leq k, and some letter x \in \Sigma.
  • a substitution: the transformation u_1 \dots u_i \dots u_k \to u_1 \dots u_{i-1}xu_{i+1} \dots u_k for some 1 \leq i \leq k and some letter x.

Then an edit from u to v is a sequence of elementary edits which begins with u= u_1 \dots u_k and ends in v= v_1 \dots v_j. The length of an edit is the number of elementary edits in the sequence. Finally, we define the edit distance between u and v, denoted d(u,v), as the length of the shortest edit from u to v.

To verify this is a metric, we note that all edits have non-negative length, and the only edit of length zero is the edit which does nothing, so if d(x,y) = 0 it follows that x = y. Second, we note that edits are inherently symmetric: given an edit from x to y, we may simply reverse the sequence (deletions become insertions and vice versa) to obtain a valid edit from y to x. Clearly, the property of being the shortest edit is not altered by reversal.

Last, we must verify the triangle inequality. Let x,y,z be words; we want to show d(x,z) \leq d(x,y) + d(y,z). Take two shortest edits between x,y and y,z, and note that their composition is a valid edit from x to z. Following our definition, by “compose” we mean combine the two sequences of operations into one sequence in the obvious way. Since this is an edit, its length can be no smaller than the shortest edit from x to z, proving the claim.

So d is in fact a metric, and historically it is called Levenshtein’s metric.

A Topological Aside

Before we get to implementing this metric, we have a few observations to make. First, we note that the shortest edit between two words is far from unique. In particular, the needed substitutions, insertions, and deletions often commute (i.e. the order of operations is irrelevant). Furthermore, instead of simply counting the number of operations required, we could assign each operation a cost, and call the total cost of an edit the sum of the costs of each elementary edit. This yields a large class of different metrics, and one could conceivably think of new operations (combinations of elementary operations) to assign lower costs. Indeed, we will do just that soon enough.

Second, and more interestingly, this metric provides quite a bit of structure on our space. It is a well known fact that every metric induces a topology. In other words, there is a topology generated by the open balls \left \{ x : d(x,y) < r \right \} for all possible radii r \in \mathbb{R} and all centers y. We can also characterize the topology from another viewpoint: consider the infinite graph G where each vertex is a word in \Sigma^* and two words have a connecting edge if there exists an elementary edit between them. Then edit distance in \Sigma^* is just the length of a shortest path in G, and so the spaces are isometric, and hence homeomorphic (they have identical topologies). Indeed, this is often generalized to the word metric on a group, which is beyond the scope of this post (indeed, we haven’t gotten anywhere close to group theory yet on this blog!).

For those of us unfamiliar with topology or graph theory, we can still imagine geometric notions that get to the intuitive heart of what “induced topology” means for words. For example, we can describe a circle of radius r centered at a word w quite easily: it is just the set of all words whose edit distance from w is exactly r. As a concrete example, the circle of radius 1 centered at the word a is

\left \{ \varepsilon, b, c, \dots , z, aa, ab, ac, \dots , az, ba, ca, \dots , za \right \}

In fact, any geometric construction that can be phrased entirely in terms of distance has an interpretation in this setting. We encourage the reader to think of more.
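To make the circle concrete, we can compute it by brute force with a naive distance function (a sketch; this dist anticipates the implementation given below, and is only practical for very short words):

```python
from itertools import product
from string import ascii_lowercase

def dist(w1, w2):
    # naive recursive Levenshtein distance; fine for tiny words
    if not w1 or not w2:
        return max(len(w1), len(w2))
    if w1[-1] == w2[-1]:
        return dist(w1[:-1], w2[:-1])
    return 1 + min(dist(w1[:-1], w2),       # deletion
                   dist(w1, w2[:-1]),       # insertion
                   dist(w1[:-1], w2[:-1]))  # substitution

def circle(center, radius, max_len):
    # all words of length at most max_len at exactly the given distance
    words = [""] + ["".join(t) for n in range(1, max_len + 1)
                   for t in product(ascii_lowercase, repeat=n)]
    return {w for w in words if dist(center, w) == radius}

c = circle("a", 1, 2)
# the empty word, 25 substitutions, and 51 distinct one-letter insertions
assert len(c) == 77
assert "" in c and "b" in c and "aa" in c and "za" in c
```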

Python Implementation, and a Peek at Dynamic Programming

Of course, what use are these theoretical concerns to us if we can’t use them to write a spell-checker? To actually implement the damn thing, we need a nontrivial algorithm. So now let’s turn to Python.

Our first observation is that we don’t actually care what the edits are, we just care about the number of edits. Since the edits only operate on single characters, we can define the behavior recursively. Specifically, suppose we have two words u = u_1 \dots u_k and v = v_1 \dots v_j. If u_k = v_j, we can leave the last characters the same and inductively work with the remaining letters. If not, we find the shortest edit between u_1 \dots u_{k-1} and v_1 \dots v_j, as if our last operation were a deletion of u_k. Similarly, we can inductively find the shortest distance between u_1 \dots u_k and v_1 \dots v_{j-1}, as if our last move were an insertion of v_j at the end of u. Finally, we could find the shortest distance between u_1 \dots u_{k-1} and v_1 \dots v_{j-1}, as if our last move were a substitution of u_k for v_j. For the base case, if either word is empty, then the only possible edit is inserting/deleting all the letters of the other word.

Here is precisely that algorithm, written in Python:

def dist(word1, word2):
   if not word1 or not word2:
      return max(len(word1), len(word2))
   elif word1[-1] == word2[-1]:
      return dist(word1[:-1], word2[:-1])
   else:
      return 1 + min(dist(word1[:-1], word2),
                     dist(word1, word2[:-1]),
                     dist(word1[:-1], word2[:-1]))

Here the [:-1] syntax indicates a slice of the first n-1 characters of an n character string. Note again that as we don’t actually care what the operations are, we can simply assume we’re doing the correct transformation, and just add 1 to our recursive calls. For a proof of correctness, we refer the reader to Wikipedia (Sorry! It’s just a ton of case-checking). We also note that recursion in Python can be extremely slow for large inputs. There is of course a method of building up a cost matrix from scratch which would perform better, but we feel this code is more legible, and leave the performance tuning as an exercise to the reader. For more information on dynamic programming, see this blog’s primer on the subject.

The cautious programmer will note the above algorithm is terribly wasteful! For instance, suppose we’re investigating the distance between foo and bar. Through our recursive calls, we’ll first investigate the distance between fo and bar, during which we recursively investigate fo versus ba. Once that’s finished, we go ahead and investigate the other branch, foo versus ba, during which we look at fo versus ba once more, even though we already computed it in the first branch! Worse, we have a third branch that computes fo versus ba yet again! Doing a bit of algorithm analysis, we realize that this algorithm makes on the order of 3^{\min(n,m)} recursive calls in the worst case, where m, n are the lengths of the two compared words. Unacceptable!
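To see this waste concretely, we can instrument the naive function with a call counter (the counter is our own addition, purely for illustration):

```python
# The naive algorithm from above, instrumented with a call counter
# so we can see how much repeated work it does.
callCount = 0

def countedDist(word1, word2):
    global callCount
    callCount += 1
    if not word1 or not word2:
        return max(len(word1), len(word2))
    elif word1[-1] == word2[-1]:
        return countedDist(word1[:-1], word2[:-1])
    else:
        return 1 + min(countedDist(word1[:-1], word2),
                       countedDist(word1, word2[:-1]),
                       countedDist(word1[:-1], word2[:-1]))

countedDist("foo", "bar")
print(callCount)  # 94 calls, just to compare two three-letter words
```

Since foo and bar share no letters, every branch of the recursion runs in full, and nearly a hundred calls are made for a pair of three-letter words.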

To fix this, we need to keep track of prior computations. The technical term is memoized recursion, and essentially we want to save old computations in a lookup table for later reference. In mostly-Python:

cache = {}
def memoizedFunction(args):
   if args not in cache:
      cache[args] = doTheComputation(args)
   return cache[args]

To actually implement this, it turns out we don’t need to change the above code at all. Instead, we will use a decorator to modify the function as we wish. Here’s the code, which is essentially an extra layer of indirection applied to the above pseudocode.

def memoize(f):
   cache = {}

   def memoizedFunction(*args):
      if args not in cache:
         cache[args] = f(*args)
      return cache[args]

   memoizedFunction.cache = cache
   return memoizedFunction

Here the function memoize() accepts our distance function and returns a new function which encapsulates the memoized behavior. To use it, we simply write

def f(x):
   ...

equivalentButMemoizedFunction = memoize(f)

But luckily, Python gives a nice piece of syntactic sugar, the decorator, to avoid writing this for every function we wish to memoize. Instead, we may simply write

@memoize
def f(x):
   ...

And Python will rebind the name f to the memoized version of the function, so that every subsequent call to f goes through the cache. Convenient! For further discussion, see our post on this technique in the program gallery.

Applying this to our Levenshtein metric, we see an impressive speedup, and a quick analysis shows the algorithm takes O(nm), where n, m are the lengths of the two words being compared. Indeed, we are comparing (at worst) all possible prefixes of the two words, and for each of the n prefixes of one word, we compute a distance to all m prefixes of the other word. The memoization prevents us from doing any computation twice.
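Putting the pieces together, the memoized distance function looks like this, and we can sanity-check it on the classic example pair kitten/sitting:

```python
# The memoized Levenshtein distance, assembled from the fragments above.
def memoize(f):
    cache = {}

    def memoizedFunction(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]

    memoizedFunction.cache = cache
    return memoizedFunction

@memoize
def dist(word1, word2):
    if not word1 or not word2:
        return max(len(word1), len(word2))
    elif word1[-1] == word2[-1]:
        return dist(word1[:-1], word2[:-1])
    else:
        return 1 + min(dist(word1[:-1], word2),
                       dist(word1, word2[:-1]),
                       dist(word1[:-1], word2[:-1]))

print(dist("kitten", "sitting"))  # 3: two substitutions and one insertion
```

Note that the recursive calls inside dist go through the decorated name, so they hit the cache too.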

To this author, this approach is the most natural implementation, but there are other approaches worth investigating. In particular, Python limits the recursion depth (to 1000 by default). If we try to compare, say, two DNA sequences, this algorithm will quickly overflow the stack. There are a number of ways to fix this, the most appropriate of which would be tail call optimization (in this author’s humble opinion). Unfortunately, we’d need to tweak the algorithm a bit to put the recursive call in tail position; Python does not support tail call optimization anyway; and manually putting things in continuation-passing style is annoying, obfuscating, and quite ugly. If we decide in the future to do DNA sequence analysis, we will return to this problem.
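For the curious, here is a sketch of the bottom-up, cost-matrix approach mentioned earlier. It is the standard dynamic programming formulation (often attributed to Wagner and Fischer), not code from this post, and it sidesteps the recursion limit entirely:

```python
# Bottom-up Levenshtein distance: fill in a table of distances between
# all pairs of prefixes, smallest prefixes first.
def distIterative(word1, word2):
    m, n = len(word1), len(word2)
    # table[i][j] holds the distance between the first i letters of
    # word1 and the first j letters of word2
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        table[i][0] = i          # delete all i letters
    for j in range(n + 1):
        table[0][j] = j          # insert all j letters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i - 1] == word2[j - 1]:
                table[i][j] = table[i - 1][j - 1]
            else:
                table[i][j] = 1 + min(table[i - 1][j],      # deletion
                                      table[i][j - 1],      # insertion
                                      table[i - 1][j - 1])  # substitution
    return table[m][n]
```

This uses no recursion at all, so even very long inputs pose no stack problems, and the O(nm) running time is plain to see from the two nested loops.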

In the future, we plan to provide another Python primer, with a focus on dynamic algorithms. Other methods for solving this problem will arise there. Indeed, I’m teaching an introductory Python programming course next semester, so this will be a good refresher.

Transpositions, and Other Enhancements

One other significant kind of typo is a transposition. Oftentimes we type the correct letters in a word, but jumble the order of two adjacent letters. In a spell checker, we want the word thier to be closer to the word their than it is to the word cheer, but with the Levenshtein metric the two pairs have equal distance (two substitutions each). We can enhance the metric by making transpositions have a cost of 1. Historically, this extended metric is called the Damerau-Levenshtein metric. Indeed, Damerau himself gave evidence that transpositions, along with the other three elementary edits, account for over 80% of human typing errors. Then again, that was back in the sixties, and typing has changed in many ways since then (not the least of which is a change in a typist’s vocabulary).

Adding transpositions to the algorithm above seems straightforward, but there are some nontrivial details to consider. For instance, we may first transpose two letters and then insert a new letter between them, as in the transformation from ta to act. If we are not careful, we might prohibit such legal transformations in our algorithm. Here is an implementation, which again uses the memoization decorator.

@memoize
def dist2(word1, word2):
   if not word1 or not word2:
      return max(len(word1), len(word2))
   elif word1[-1] == word2[-1]:
      return dist2(word1[:-1], word2[:-1])
   else:
      minDist = 1 + min(dist2(word1[:-1], word2),
                        dist2(word1, word2[:-1]),
                        dist2(word1[:-1], word2[:-1]))

      # transpositions: try swapping the last two letters of either word
      if len(word1) > 1 and len(word2) > 1:
         if word1[-2] == word2[-1]:
            transposedWord1 = word1[:-2] + word1[-1] + word1[-2]
            minDist = min(minDist, dist2(transposedWord1[:-1], word2))
         if word2[-2] == word1[-1]:
            transposedWord2 = word2[:-2] + word2[-1] + word2[-2]
            minDist = min(minDist, dist2(word1, transposedWord2[:-1]))

      return minDist

Indeed, we must inspect both possible transpositions, and the symmetry of the example above shows the need for both branches. The proof that this extended metric is still a metric and the proof of algorithmic correctness are nearly identical to the plain Levenshtein metric.

So that was fun. Here are some other ideas we leave as exercises to the reader. First, if we allow ourselves to fix a keyboard layout (for many languages with Latin-based alphabets, the standard is qwerty with minor substitutions), we could factor that into our analysis of letter substitutions and incorrect insertions. For instance, the word ribies is just as close to rabies as it is to rubies, but it is less likely the user meant to type the first word, since u is physically closer to i than a is. To implement this, we can modify the above algorithm to accept a look-up table of (approximate) physical distances between keys. Instead of adding 1 in the relevant branches, we can add a cost according to the look-up table. At the coarsest level, we could construct a graph with vertices representing letters, edges representing physical adjacencies, and use the shortest graph path in place of physical key distance.
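As a hint toward that first exercise, here is a minimal sketch. The keyDistance table and its values are invented for illustration; a real implementation would fill it in from measured key positions on an actual keyboard:

```python
# Keyboard-aware substitution costs. The distances below are made up
# for illustration, not measured from a real qwerty layout.
keyDistance = {('u', 'i'): 0.5, ('a', 'i'): 1.0}  # hypothetical table

def subCost(a, b):
    # approximate physical distance between two keys, defaulting to 1
    return keyDistance.get((a, b), keyDistance.get((b, a), 1))

def weightedDist(word1, word2):
    if not word1 or not word2:
        return max(len(word1), len(word2))
    elif word1[-1] == word2[-1]:
        return weightedDist(word1[:-1], word2[:-1])
    else:
        return min(1 + weightedDist(word1[:-1], word2),
                   1 + weightedDist(word1, word2[:-1]),
                   subCost(word1[-1], word2[-1])
                       + weightedDist(word1[:-1], word2[:-1]))

# ribies is now strictly closer to rubies than to rabies
print(weightedDist('ribies', 'rubies'), weightedDist('ribies', 'rabies'))
```

With these (made-up) distances, substituting u for i costs only 0.5, so the spell checker would rank rubies above rabies as a correction for ribies.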

We also note (from our mature vantage point) that this algorithm is not restricted to strings, but can be performed on any free monoid. This includes the example we mentioned earlier of lists. So we could generalize the algorithm to operate on any piece of data which has such an identity element and binary operation, and satisfies the freedom condition. My knowledge of Python is still somewhat limited, but the method for achieving this generalization goes by many names in many languages: in Java it’s interfaces, in C++ it’s templates, in Haskell it’s a typeclass. In Python, there is a fancy thing called duck-typing, and we leave this for our next Python primer.
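As a small taste of that generalization: since the recursive definition only uses emptiness checks, slicing, and equality of last elements, the very same function already runs unchanged on Python tuples (our hashable stand-in for lists):

```python
# The same recursive edit distance, applied verbatim to tuples of ints
# via Python's duck typing.
def dist(word1, word2):
    if not word1 or not word2:
        return max(len(word1), len(word2))
    elif word1[-1] == word2[-1]:
        return dist(word1[:-1], word2[:-1])
    else:
        return 1 + min(dist(word1[:-1], word2),
                       dist(word1, word2[:-1]),
                       dist(word1[:-1], word2[:-1]))

print(dist((1, 2, 3, 4), (2, 3, 4, 5)))  # 2: delete the 1, append a 5
```

Nothing in the code ever asks whether its arguments are strings; any sliceable sequence with comparable entries will do.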

Next time, we’ll crack open some data files with actual English dictionaries in them, and see what we can do about solving interesting problems with them. Until then!