Bezier Curves and Picasso

Pablo Picasso in front of The Kitchen, photo by Herbert List.

Simplicity and the Artist

Some of my favorite works of Pablo Picasso are his line drawings. He did a number of them depicting animals: an owl, a camel, a butterfly, etc. This piece, called “Dog,” is on my wall:

[Picasso’s “Dog”]

(Jump to interactive demo where we recreate “Dog” using the math in this post)

These drawings are extremely simple but somehow strike the viewer as deeply profound. They give the impression of being quite simple to design and draw. A single stroke of the hand and a scribbled signature, but what a masterpiece! It simultaneously feels like a hasty afterthought and a carefully tuned overture to a symphony of elegance. In fact, we know that Picasso’s process was deep. For example, in 1945-1946, Picasso made a series of eleven drawings (lithographs, actually) showing the progression of his rendition of a bull. The first few are more or less lifelike, but as the series progresses we see the bull boiled down to its essence, the final drawing requiring a mere ten lines. Along the way we see drawings of a bull that resemble some of Picasso’s other works (number 9 reminding me of the sculpture at Daley Center Plaza in Chicago). Read more about the series of lithographs here.

Picasso’s “The Bull.” Photo taken by Jeremy Kun at the Art Institute of Chicago in 2013.

Now I don’t pretend to be a qualified artist (I couldn’t draw a bull to save my life), but I can recognize the mathematical aspects of his paintings, and I can write a damn fine program. There is one obvious way to consider Picasso-style line drawings as a mathematical object, and it is essentially the Bezier curve. Let’s study the theory behind Bezier curves, and then write a program to draw them. The mathematics involved requires no background knowledge beyond basic algebra with polynomials, and we’ll do our best to keep the discussion low-tech. Then we’ll explore a very simple algorithm for drawing Bezier curves, implement it in Javascript, and recreate one of Picasso’s line drawings as a sequence of Bezier curves.

The Bezier Curve and Parameterizations

When asked to conjure a “curve” most people (perhaps plagued by their elementary mathematics education) will either convulse in fear or draw part of the graph of a polynomial. While these are fine and dandy curves, they only represent a small fraction of the world of curves. We are particularly interested in curves which are not part of the graphs of any functions.

Three French curves.

For instance, a French curve is a physical template used in (manual) sketching to aid the hand in drawing smooth curves. Tracing the edges of any part of these curves will usually give you something that is not the graph of a function. Clearly we need to generalize our idea of what a curve is. The problem is that many fields of mathematics define a curve to mean different things. The curves we’ll be looking at, called Bezier curves, are a special case of single-parameter polynomial plane curves. This sounds like a mouthful, but what it means is that the entire curve can be evaluated with two polynomials: one for the $ x$ values and one for the $ y$ values. Both polynomials share the same variable, which we’ll call $ t$, and $ t$ is evaluated at real numbers.

An example should make this clear. Let’s pick two simple polynomials in $ t$, say $ x(t) = t^2$ and $ y(t) = t^3$. If we want to find points on this curve, we can just choose values of $ t$ and plug them into both equations. For instance, plugging in $ t = 2$ gives the point $ (4, 8)$ on our curve. Plotting all such values gives a curve that is definitely not the graph of a function:

[The curve $ x(t) = t^2, y(t) = t^3$, which is not the graph of any function]

But it’s clear that we can write any single-variable function $ f(x)$ in this parametric form: just choose $ x(t) = t$ and $ y(t) = f(t)$. So these are really more general objects than regular old functions (although we’ll only be working with polynomials in this post).
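To make this concrete, here is a quick Javascript sketch (a helper of our own, not from any library) which samples points on a parametric curve:

// Sample the parametric curve (x(t), y(t)) at evenly spaced values of t in [0, tMax].
function samplePlaneCurve(x, y, tMax, numSamples) {
   var points = [];
   for (var i = 0; i <= numSamples; i++) {
      var t = tMax * i / numSamples;
      points.push([x(t), y(t)]);
   }
   return points;
}

// Our example curve x(t) = t^2, y(t) = t^3; the sample at t = 2 is [4, 8].
var examplePoints = samplePlaneCurve(
   function(t) { return t*t; },
   function(t) { return t*t*t; },
   2, 100);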

Quickly recapping, a single-parameter polynomial plane curve is defined as a pair of polynomials $ x(t), y(t)$ in the same variable $ t$. Sometimes, if we want to express the whole gadget in one piece, we can take the coefficients of common powers of $ t$ and write them as vectors in the $ x$ and $ y$ parts. Using the $ x(t) = t^2, y(t) = t^3$ example above, we can rewrite it as

$ \mathbf{f}(t) = (0,1) t^3 + (1,0) t^2$

Here the coefficients are points (which are the same as vectors) in the plane, and we represent the function $ f$ in boldface to emphasize that the output is a point. The linear-algebraist might recognize that pairs of polynomials form a vector space, and further combine them as $ (0, t^3) + (t^2, 0) = (t^2, t^3)$. But for us, thinking of points as coefficients of a single polynomial is actually better.

We will also restrict our attention to single-parameter polynomial plane curves for which the variable $ t$ is allowed to range from zero to one. This might seem like an awkward restriction, but in fact every finite single-parameter polynomial plane curve can be written this way (we won’t bother too much with the details of how this is done). For the purpose of brevity, we will henceforth call a “single-parameter polynomial plane curve where $ t$ ranges from zero to one” simply a “curve.”

Now there are some very nice things we can do with curves. For instance, given any two points in the plane $ P = (p_1, p_2), Q = (q_1, q_2)$ we can describe the straight line between them as a curve: $ \mathbf{L}(t) = (1-t)P + tQ$. Indeed, at $ t=0$ the value $ \mathbf{L}(t)$ is exactly $ P$, at $ t=1$ it’s exactly $ Q$, and the equation is a linear polynomial in $ t$. Moreover (without getting too much into the calculus details), the line $ \mathbf{L}$ travels at “unit speed” from $ P$ to $ Q$. In other words, we can think of $ \mathbf{L}$ as describing the motion of a particle from $ P$ to $ Q$ over time, and at time $ 1/4$ the particle is a quarter of the way there, at time $ 1/2$ it’s halfway, etc. (An example of a straight line which doesn’t have unit speed is, e.g. $ (1-t^2) P + t^2 Q$.)
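In code this is a one-line helper. Here is a sketch in Javascript (our own names, not from any library), representing points as [x, y] pairs:

// The point a fraction t of the way along the straight line from P to Q,
// i.e., L(t) = (1-t)P + tQ.
function linePoint(P, Q, t) {
   return [(1-t)*P[0] + t*Q[0], (1-t)*P[1] + t*Q[1]];
}

linePoint([0,0], [4,8], 0.25);  // [1, 2], a quarter of the way there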

More generally, let’s add a third point $ R$. We can describe a path which goes from $ P$ to $ R$, and is “guided” by $ Q$ in the middle. This idea of a “guiding” point is a bit abstract, but computationally no more difficult. Instead of travelling from one point to another at constant speed, we want to travel from one line to another at constant speed. That is, call the two curves describing the lines from $ P \to Q$ and from $ Q \to R$ $ \mathbf{L}_1, \mathbf{L}_2$, respectively. Then the curve “guided” by $ Q$ can be written as a curve

$ \displaystyle \mathbf{F}(t) = (1-t)\mathbf{L}_1(t) + t \mathbf{L}_2(t)$

Multiplying this all out gives the formula

$ \displaystyle \mathbf{F}(t) = (1-t)^2 P + 2t(1-t)Q + t^2 R$

We can interpret this again in terms of a particle moving. At the beginning of our curve the value of $ t$ is small, and so we’re sticking quite close to the line $ \mathbf{L}_1$. As time goes on, the point $ \mathbf{F}(t)$ moves along the line between the points $ \mathbf{L}_1(t)$ and $ \mathbf{L}_2(t)$, which are themselves moving. This traces out a curve which looks like this:

[A quadratic Bezier curve with three control points, traced out by the moving point]

This screenshot was taken from a wonderful demo by data visualization consultant Jason Davies. It expresses the mathematical idea quite superbly, and one can drag the three points around to see how it changes the resulting curve. One should play with it for at least five minutes.

The entire idea of a Bezier curve is a generalization of this principle: given a list $ P_0, \dots, P_n$ of points in the plane, we want to describe a curve which travels from the first point to the last, and is “guided” in between by the remaining points. A Bezier curve is a realization of such a curve (a single-parameter polynomial plane curve) which is the inductive continuation of what we described above: we travel at unit speed from a Bezier curve defined by the first $ n$ points in the list to the curve defined by the last $ n$ points. The base case is the straight-line segment (or the single point, if you wish). Formally,

Definition: Given a list of points in the plane $ P_0, \dots, P_n$ we define the degree $ n$ Bezier curve recursively as

$ \begin{aligned} \mathbf{B}_{P_0}(t) &= P_0 \\ \mathbf{B}_{P_0 P_1 \dots P_n}(t) &= (1-t)\mathbf{B}_{P_0 P_1 \dots P_{n-1}}(t) + t \mathbf{B}_{P_1P_2 \dots P_n}(t) \end{aligned}$

We call $ P_0, \dots, P_n$ the control points of $ \mathbf{B}$.
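This definition translates almost verbatim into code. The following sketch (our own helper, in plain Javascript) evaluates a Bezier curve of any degree at a single time $ t$:

// Evaluate the Bezier curve with the given control points at time t,
// following the recursive definition directly.
function bezier(controlPoints, t) {
   if (controlPoints.length == 1) {
      return controlPoints[0];
   }
   var left = bezier(controlPoints.slice(0, controlPoints.length - 1), t);
   var right = bezier(controlPoints.slice(1), t);
   return [(1-t)*left[0] + t*right[0], (1-t)*left[1] + t*right[1]];
}

This naive recursion does redundant work for curves with many control points (the two recursive calls overlap), but for the cubic curves we focus on below it is perfectly serviceable.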

While the concept of travelling at unit speed between two lower-order Bezier curves is the real heart of the matter (and allows us true computational insight), one can multiply all of this out (using the formula for binomial coefficients) and get an explicit formula. It is:

$ \displaystyle \mathbf{B}_{P_0 \dots P_n}(t) = \sum_{k=0}^n \binom{n}{k}(1-t)^{n-k}t^k P_k$

And for example, a cubic Bezier curve with control points $ P_0, P_1, P_2, P_3$ would have equation

$ \displaystyle (1-t)^3 P_0 + 3(1-t)^2t P_1 + 3(1-t)t^2 P_2 + t^3 P_3$

Higher degree Bezier curves can be quite complicated to picture geometrically. For instance, the following is a fifth-degree Bezier curve (with six control points).

A degree five Bezier curve, credit Wikipedia.

The additional line segments drawn show the recursive nature of the curve. The simplest are the green points, which travel from control point to control point. Then the blue points travel on the line segments between green points, the pink travel along the line segments between blue, the orange between pink, and finally the red point travels along the line segment between the orange points.

Without the recursive structure of the problem (just seeing the curve) it would be a wonder how one could actually compute with these things. But as we’ll see, the algorithm for drawing a Bezier curve is very natural.

Bezier Curves as Data, and de Casteljau’s Algorithm

Let’s derive and implement the algorithm for painting a Bezier curve to a screen using only the ability to draw straight lines. For simplicity, we’ll restrict our attention to degree-three (cubic) Bezier curves. Indeed, via the recursive definition every Bezier curve can be expressed in terms of lower-degree curves, and in practice cubic curves balance computational efficiency and expressiveness. All of the code we present in this post will be in Javascript, and is available on this blog’s Github page.

So then a cubic Bezier curve is represented in a program by a list of four points. For example,

var curve = [[1,2], [5,5], [4,0], [9,3]];

Most graphics libraries (including the HTML5 canvas standard) provide a drawing primitive that can output Bezier curves given a list of four points. But suppose we aren’t given such a function. Suppose that we only have the ability to draw straight lines. How would one go about drawing an approximation to a Bezier curve? If such an algorithm exists (it does, and we’re about to see it) then we could make the approximation so fine that it is visually indistinguishable from a true Bezier curve.

The key property of Bezier curves that allows us to come up with such an algorithm is the following:

Any cubic Bezier curve $ \mathbf{B}$ can be split into two cubic Bezier curves, end to end, which together trace out the same curve as $ \mathbf{B}$.

Let’s see exactly how this is done. Let $ \mathbf{B}(t)$ be a cubic Bezier curve with control points $ P_0, P_1, P_2, P_3$, and let’s say we want to split it exactly in half. Plugging $ t = 1/2$ into the formula for the curve gives

$ \displaystyle \mathbf{B}(1/2) = \frac{1}{2^3}(P_0 + 3P_1 + 3P_2 + P_3)$

Moreover, our recursive definition gave us a way to evaluate the point in terms of smaller-degree curves. But when these are evaluated at 1/2 their formulae are similarly easy to write down. The picture looks like this:

[Subdivision of a cubic Bezier curve at $ t = 1/2$]

The green points are the degree one curves, the pink points are the degree two curves, and the blue point is the cubic curve. We notice that, since each of the curves is evaluated at $ t=1/2$, each of these points can be described as the midpoint of points we already know. So $ m_0 = (P_0 + P_1) / 2, q_0 = (m_0 + m_1)/2$, etc.

In fact, the splitting of the two curves we want is precisely given by these points. That is, the “left” half of the curve is given by the curve $ \mathbf{L}(t)$ with control points $ P_0, m_0, q_0, \mathbf{B}(1/2)$, while the “right” half $ \mathbf{R}(t)$ has control points $ \mathbf{B}(1/2), q_1, m_2, P_3$.

How can we be completely sure these are the same Bezier curves? Well, they’re just polynomials. We can compare them for equality by doing a bunch of messy algebra. But note that since $ \mathbf{L}(t)$ only travels halfway along $ \mathbf{B}(t)$, checking they are the same amounts to equating $ \mathbf{L}(t)$ with $ \mathbf{B}(t/2)$, since as $ t$ ranges from zero to one, $ t/2$ ranges from zero to one half. Likewise, we can compare $ \mathbf{R}(t)$ with $ \mathbf{B}((t+1)/2)$.

The algebra is very messy, but doable. As a test of this blog’s newest tools, here’s a screencast of me doing the algebra involved in proving the two curves are identical.


Now that that’s settled, we have a nice algorithm for splitting a cubic Bezier (or any Bezier) into two pieces. In Javascript,

function subdivide(curve) {
   var firstMidpoints = midpoints(curve);
   var secondMidpoints = midpoints(firstMidpoints);
   var thirdMidpoints = midpoints(secondMidpoints);

   return [[curve[0], firstMidpoints[0], secondMidpoints[0], thirdMidpoints[0]],
          [thirdMidpoints[0], secondMidpoints[1], firstMidpoints[2], curve[3]]];
}

Here “curve” is a list of four points, as described at the beginning of this section, and the output is a list of two curves with the correct control points. The “midpoints” function used is quite simple, and we include it here for completeness:

function midpoints(pointList) {
   var midpoint = function(p, q) {
      return [(p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0];
   };

   var midpointList = new Array(pointList.length - 1);
   for (var i = 0; i < midpointList.length; i++) {
      midpointList[i] = midpoint(pointList[i], pointList[i+1]);
   }

   return midpointList;
}

It just accepts as input a list of points and computes their sequential midpoints. So a list of $ n$ points is turned into a list of $ n-1$ points. As we saw, we need to call this function $ d$ times to compute the subdivision of a degree $ d$ Bezier curve.
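As a quick sanity check, we can verify numerically that the left piece of the subdivision satisfies $ \mathbf{L}(t) = \mathbf{B}(t/2)$, reusing the recursive bezier evaluator sketched earlier:

// Check that the left half of the subdivision agrees with the original curve.
var curve = [[1,2], [5,5], [4,0], [9,3]];
var left = subdivide(curve)[0];

for (var i = 0; i <= 10; i++) {
   var t = i / 10;
   var p = bezier(left, t);
   var q = bezier(curve, t / 2);
   console.log(Math.abs(p[0] - q[0]) < 1e-9 && Math.abs(p[1] - q[1]) < 1e-9);  // true
}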

As explained earlier, we can keep subdividing our curve over and over until each of the tiny pieces are basically lines. That is, our function to draw a Bezier curve from the beginning will be as follows:

function drawCurve(curve, context) {
   if (isFlat(curve)) {
      drawSegments(curve, context);
   } else {
      var pieces = subdivide(curve);
      drawCurve(pieces[0], context);
      drawCurve(pieces[1], context);
   }
}

In words, as long as the curve isn’t “flat,” we want to subdivide and draw each piece recursively. If it is flat, then we can simply draw the three line segments of the curve and be reasonably sure that it will be a good approximation. The context variable sitting there represents the canvas to be painted to; it must be passed through to the “drawSegments” function, which simply paints a straight line to the canvas.
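We haven’t shown “drawSegments” here; a minimal sketch of it using the HTML5 canvas API (the helper in our repository may differ in details) is:

// Draw the straight segments connecting the control points of a (flat) curve.
function drawSegments(curve, context) {
   context.beginPath();
   context.moveTo(curve[0][0], curve[0][1]);
   for (var i = 1; i < curve.length; i++) {
      context.lineTo(curve[i][0], curve[i][1]);
   }
   context.stroke();
}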

Of course this raises the obvious question: how can we tell if a Bezier curve is flat? There are many ways to do so. One could compute the angles of deviation (from a straight line) at each interior control point and add them up. Or one could compute the area of the enclosed quadrilateral. However, computing angles and areas is usually not very nice: angles take a long time to compute and areas have stability issues, and the algorithms which are stable are not very simple. We want a measurement which requires only basic arithmetic and perhaps a few logical conditions to check.

It turns out there is such a measurement. It’s originally attributed to Roger Willcocks, but it’s quite simple to derive by hand.

Essentially, we want to measure the “flatness” of a cubic Bezier curve by computing the distance of the actual curve at time $ t$ from where the curve would be at time $ t$ if the curve were a straight line.

Formally, given $ \mathbf{B}(t)$ with control points $ P_0, P_1, P_2, P_3$ as usual, we can define the straight-line Bezier cubic as the colossal sum

$ \displaystyle \mathbf{S}(t) = (1-t)^3P_0 + 3(1-t)^2t \left ( \frac{2}{3}P_0 + \frac{1}{3}P_3 \right ) + 3(1-t)t^2 \left ( \frac{1}{3}P_0 + \frac{2}{3}P_3 \right ) + t^3 P_3 $

There’s nothing magical going on here. We’re simply giving the Bezier curve with control points $ P_0, \frac{2}{3}P_0 + \frac{1}{3}P_3, \frac{1}{3}P_0 + \frac{2}{3}P_3, P_3$. One should think about this as points which are a 0, 1/3, 2/3, and 1 fraction of the way from $ P_0$ to $ P_3$ on a straight line.

Then we define the function $ d(t) = \left \| \mathbf{B}(t) - \mathbf{S}(t) \right \|$ to be the distance between the two curves at the same time $ t$. The flatness value of $ \mathbf{B}$ is the maximum of $ d$ over all values of $ t$. If this flatness value is below a certain tolerance level, then we call the curve flat.

With a bit of algebra we can simplify this expression. First, the value of $ t$ for which the distance is maximized is the same as when its square is maximized, so we can omit the square root computation at the end and take that into account when choosing a flatness tolerance.

Now let’s actually write out the difference as a single polynomial. First, we can cancel the 3’s in $ \mathbf{S}(t)$ and write the polynomial as

$ \displaystyle \mathbf{S}(t) = (1-t)^3 P_0 + (1-t)^2t (2P_0 + P_3) + (1-t)t^2 (P_0 + 2P_3) + t^3 P_3$

and so $ \mathbf{B}(t) - \mathbf{S}(t)$ is (by collecting coefficients of the like terms $ (1-t)^it^j$)

$ \displaystyle (1-t)^2t (3 P_1 - 2P_0 - P_3) + (1-t)t^2 (3P_2 - P_0 - 2P_3) $

Factoring out the $ (1-t)t$ from both terms and setting $ a = 3P_1 - 2P_0 - P_3$, $ b = 3P_2 - P_0 - 2P_3$, we get

$ \displaystyle d^2(t) = \left \| (1-t)t ((1-t)a + tb) \right \|^2 = (1-t)^2t^2 \left \| (1-t)a + tb \right \|^2$

Since the maximum of a product is at most the product of the maxima, we can bound the above quantity by the product of the two maxes. The reason we want to do this is because we can easily compute the two maxes separately. It wouldn’t be hard to compute the maximum without splitting things up, but this way ends up with fewer computational steps for our final algorithm, and the visual result is equally good.

Using some elementary single-variable calculus, the maximum value of $ (1-t)^2t^2$ for $ 0 \leq t \leq 1$ turns out to be $ 1/16$. And the squared norm of a vector is just the sum of the squares of its components. If $ a = (a_x, a_y)$ and $ b = (b_x, b_y)$, then the squared norm above is exactly

$ \displaystyle ((1-t)a_x + tb_x)^2 + ((1-t)a_y + tb_y)^2$

And notice: for any real numbers $ z, w$ the quantity $ (1-t)z + tw$ is exactly the straight line from $ z$ to $ w$ we know so well. The maximum over all $ t$ between zero and one is obviously the maximum of the endpoints $ z, w$. So the max of our distance function $ d^2(t)$ is bounded by

$ \displaystyle \frac{1}{16} (\textup{max}(a_x^2, b_x^2) + \textup{max}(a_y^2, b_y^2))$

And so our condition for being flat is that this bound is smaller than some allowable tolerance. We may safely factor the 1/16 into this tolerance bound, and so this is enough to write a function.

function isFlat(curve) {
   var tol = 10; // anything below 50 is roughly good-looking

   var ax = 3.0*curve[1][0] - 2.0*curve[0][0] - curve[3][0]; ax *= ax;
   var ay = 3.0*curve[1][1] - 2.0*curve[0][1] - curve[3][1]; ay *= ay;
   var bx = 3.0*curve[2][0] - curve[0][0] - 2.0*curve[3][0]; bx *= bx;
   var by = 3.0*curve[2][1] - curve[0][1] - 2.0*curve[3][1]; by *= by;

   return (Math.max(ax, bx) + Math.max(ay, by) <= tol);
}

And there we have it. We write a simple HTML page to access a canvas element and a few extra helper functions to draw the line segments when the curve is flat enough, and present the final result in this interactive demonstration (you can perturb the control points).

The picture you see on that page (given below) is my rendition of Picasso’s “Dog” drawing as a sequence of nine Bezier curves. I think the resemblance is uncanny 🙂

Picasso's "Dog," redesigned as a sequence of eight bezier curves.

Picasso’s “Dog,” redesigned as a sequence of nine Bezier curves.

While we didn’t invent the drawing itself (and hence shouldn’t attach our signature to it), we did come up with the representation as a sequence of Bezier curves. It only seems fitting to present that as the work of art. Here we’ve distilled the representation down to a single file: the first line is the dimension of the canvas, and each subsequent line represents a cubic Bezier curve. Comments are included for readability.

“Dog,” Jeremy Kun, 2013.

Because standardizing things seems important, we define a new filetype “.bezier”, which has the following format:

int int
(int) curve 
(int) curve
...

Where the first two ints specify the size of the canvas, the first (optional) int on each line specifies the width of the stroke, and a “curve” has the form

[int,int] [int,int] ... [int,int]

If an int is omitted at the beginning of a line, this specifies a width of three pixels.
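For example, a small (made-up) .bezier file with a 500 by 500 canvas and two cubic curves, the first drawn with a stroke width of five, might look like:

500 500
5 [100,350] [150,100] [350,100] [400,350]
[100,350] [200,450] [300,450] [400,350]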

In a general .bezier file we allow a curve to have arbitrarily many control points, though the code we gave above does not draw them that generally. As an exercise, write a program which accepts as input a .bezier file and produces as output an image of the drawing. This will require an extension of the algorithm above for drawing arbitrary Bezier curves, which loops its computation of the midpoints and keeps track of which end up in the resulting subdivision. Alternatively, one could write a program which accepts as input a .bezier file with only cubic Bezier curves, and produces as output an SVG file of the drawing (SVG only supports cubic Bezier curves). So a .bezier file is a simplification (fewer features) and an extension (Bezier curves of arbitrary degree) of an SVG file.

We didn’t go as deep into the theory of Bezier curves as we could have. If the reader is itching for more (and a more calculus-based approach), see this lengthy primer. It contains practically everything one could want to know about Bezier curves, with nice interactive demos written in Processing.

Low-Complexity Art

There are some philosophical implications of what we’ve done today with Picasso’s “Dog.” Previously on this blog we’ve investigated the idea of low-complexity art, and it’s quite relevant here. The thesis is that “beautiful” art has a small description length, and more formally the “complexity” of some object (represented by text) is the length of the shortest program that outputs that object given no inputs. More on that in our primer on Kolmogorov complexity. The fact that we can describe Picasso’s line drawings with a small number of Bezier curves (and a relatively short program to output the bezier curves) is supposed to be a deep statement about the beauty of the art itself. Obviously this is very subjective, but not without its proponents.

There has been a bit of recent interest in computers generating art. For instance, this recent programming competition (in Dutch) gave the task of generating art similar to the work of Piet Mondrian. The idea is that the more elegant the algorithm, the higher it would be scored. The winner used MD5 hashes to generate Mondrian pieces, and there were many many other impressive examples (the link above has a gallery of submissions).

In our earlier post on low-complexity art, we explored the possibility of representing all images within a coordinate system involving circles with shaded interiors. But it’s obvious that such a coordinate system wouldn’t be able to represent “Dog” with very low complexity. It seems that Bezier curves are a much more natural system of coordinates. Some of the advantages include that the length of a line and slight perturbations of a curve don’t affect the resulting complexity. A cubic Bezier curve can be described by any set of four points, and more “intricate” (higher complexity) descriptions of curves require a larger number of points. Bezier curves can be scaled up arbitrarily, and this doesn’t significantly change the complexity of the curve (although scaling by many orders of magnitude will introduce a logarithmic complexity increase, this is quite small). Curves with a larger stroke width are slightly more complex than those with a smaller stroke width, and representing many small sharp bends requires more curves than long, smooth arcs.

On the downside, it’s not so easy to represent a circle as a Bezier curve. In fact, it is impossible to do so exactly. Despite the simplicity of this object (it’s even defined as a single polynomial, albeit in two variables), the best one can do is approximate it. The same goes for ellipses. There are actually ways to overcome this (the concept of rational Bezier curves which are quotients of polynomials), but they add to the inherent complexity of the drawing algorithm and the approximations using regular Bezier curves are good enough.

And so we define the complexity of a drawing to be the number of bits in its .bezier file representation. Comments are ignored in this calculation.

The real prize, and what we’ll explore next time, is to find a way to generate art automatically. That is, to do one of two things:

  1. Given some sort of “seed,” write a program that produces a pseudo-random line drawing.
  2. Given an image, produce a .bezier image which accurately depicts the image as a line drawing.

We will attempt to explore these possibilities in the follow-up to this post. Depending on how things go, this may involve some local search algorithms, genetic algorithms, or other methods.

Until then!

Addendum: want to buy a framed print of the source code for “Dog”? Head over to our page on Society6.

Seam Carving for Content-Aware Image Scaling

The Problem with Cropping

Every programmer or graphic designer with some web development experience can attest to the fact that finding good images that have an exactly specified size is a pain. Since the dimensions of the sought picture are usually inflexible, an uncomfortable compromise can come in the form of cropping a large image down to size or scaling the image to have appropriate dimensions.

Both of these solutions are undesirable. In the example below, the caterpillar looks distorted in the scaled versions (top right and bottom left), and in the cropped version (bottom right) it’s more difficult to tell that the caterpillar is on a leaf; we have lost the surrounding context.

[A 2×2 grid: the original caterpillar image, two distorted scalings, and a cropped version]

In this post we’ll look at a nice heuristic method for rescaling images called seam-carving, which pays attention to the contents of the image as it rescales. In particular, it only removes or adds pixels to the image that the viewer is least likely to notice. In all but the most extreme cases it will avoid the ugly artifacts introduced by cropping and scaling, and with a bit of additional scaffolding it becomes a very useful addition to a graphic designer’s repertoire. At first we will focus on scaling an image down, and then we will see that the same technique can be used to enlarge an image.

Before we begin, we should motivate the reader with some examples of its use.

[The caterpillar image resized to several widths via seam-carving]

It’s clear that the caterpillar is far less distorted in all versions, and even in the harshly rescaled version, parts of the green background are preserved. Although the leaf is warped a little, it is still present, and it’s not obvious that the image was manipulated.

Now that the reader’s appetite has been whetted, let’s jump into the mathematics of it. This method was pioneered by Avidan and Shamir, and the impatient reader can jump straight to their paper (which contains many more examples). In this post we hope to fill in the background and show a working implementation.

Images as Functions

One common way to view an image is as an approximation to a function of two real variables. Suppose we have an $ n \times m$-pixel image ($ n$ rows and $ m$ columns of pixels). For simplicity (during the next few paragraphs), we will also assume that the pixel values of an image are grayscale intensity values between 0 and 255. Then we can imagine the pixel values as known integer values of a function $ f: \mathbb{R}^2 \to \mathbb{R}$. That is, if we take two integers $ 0 \leq x < n$ and $ 0 \leq y < m$ then we know the value $ f(x,y)$; it’s just the intensity value at the corresponding pixel. For values outside these ranges, we can impose arbitrary values for $ f$ (we don’t care what’s happening outside the image).

Moreover, it makes sense to assume that $ f$ is a well-behaved function in between the pixels (i.e. it is differentiable). And so we can make reasonable guesses as to the true derivative of $ f$ by looking at the differences between adjacent pixels. There are many ways to get a good approximation of the derivative of an image function, but we should pause a moment to realize why this is important to nail down for the purpose of resizing images.

A good rule of thumb with images is that regions of an image which are most important to the viewer are those which contain drastic changes in intensity or color. For instance, consider this portrait of Albert Einstein.

Which parts of this image first catch the eye? The unkempt hair, the wrinkled eyes, the bushy mustache? Certainly not the misty background, or the subtle shadows on his chin.

Indeed, one could even claim that an image having a large derivative at a certain pixel corresponds to high information content there (of course this is not true of all images, but perhaps it’s reasonable to claim this for photographs). And if we want to scale an image down in size, we are interested in eliminating those regions which have the smallest information content. Of course we cannot avoid losing some information: the image after resizing is smaller than the original, and a reasonable algorithm should not add any new information. But we can minimize the damage by intelligently picking which parts to remove; our naive assumption is that a small derivative at a pixel implies a small amount of information.

Of course we can’t just remove “regions” of an image to change its proportions. We have to remove the same number of pixels in each row or column to reduce the corresponding dimension (width or height, resp.). Before we get to that, though, let’s write a program to compute the gradient. For this program and the rest of the post we will use the Processing programming language, and our demonstrations will use the Javascript cross-compiler processing.js. The nice thing about Processing is that if you know Java then you know Processing. All the basic language features are the same, and it’s just got an extra few native types and libraries to make graphics rendering and image displaying easier. As usual, all of the code used in this blog post is available on this blog’s Github page.

Let’s compute the gradient of this picture, and call the picture $ I$:

A very nice picture whose gradient we can compute. It was taken by the artist Ria Czichotzki.

Since this is a color image, we will call it a function $ I: \mathbb{R}^2 \to \mathbb{R}^3$, in the sense that the input is a plane coordinate $ (x,y)$, and the output $ I(x,y) = (r,g,b)$ is a triple of color intensity values. We will approximate the image’s gradient $ \left \langle \partial I / \partial x, \partial I / \partial y \right \rangle$ at $ (x,y)$ by inspecting values of $ I$ in a neighborhood of the point:

$ I(x-1,y), I(x+1, y), I(x,y-1), I(x,y+1)$.

For each pixel we call the value $ |I(x+1,y) - I(x-1,y)| / 2$ the partial derivative in the $ x$ direction, and $ |I(x,y+1) - I(x,y-1)| / 2$ the partial in the $ y$ direction. Note that the values $ I(x,y)$ are vectors, so the norm signs here are really computing the distance between the two values of $ I$.

There are two ways to see why this makes sense as an approximation. The first is analytic: by definition, the partial derivative $ \partial I / \partial x$ is a limit:

$ \displaystyle \lim_{h \to 0} \frac{|I(x+h,y) - I(x,y)|}{h}$

It turns out that this limit is equivalent to

$ \displaystyle \lim_{h \to 0} \frac{|I(x+h,y) - I(x-h,y)|}{2h}$

And the closer $ h$ gets to zero the better the approximation of the limit is. Since the closest we can make $ h$ is $ h=1$ (we don’t know any other values of $ I$ with nonzero $ h$), we plug in the corresponding values for neighboring pixels. The partial $ \partial I / \partial y$ is similar.

The second way to view it is geometric.

The slope of the blue secant line is not a bad approximation to the derivative at x, provided the resolution is fine enough.

The salient fact here is that a nicely-behaved curve at $ x$ will have a derivative close to the secant line between the points $ (x-1, f(x-1))$ and $ (x+1, f(x+1))$. Indeed, this idea inspires the original definition of the derivative. The slope of the secant line is just $ (f(x+1) - f(x-1)) / 2$. As we saw in our post on numerical integration, we can do much better than a linear guess (specifically, we can use any order of polynomial interpolation we wish), but for the purposes of displaying the concept of seam-carving, a linear guess will suffice.

And so with this intuitive understanding of how to approximate the gradient, the algorithm to actually do it is a straightforward loop. Here we compute the horizontal gradient (that is, the derivative $ \partial I / \partial x$).

PImage horizontalGradient(PImage img) {
   color left, right;
   int center;
   PImage newImage = createImage(img.width, img.height, RGB);

   for (int x = 0; x < img.width; x++) {
      for (int y = 0; y < img.height; y++) {
         center = x + y*img.width;

         left = x == 0 ? img.pixels[center] : img.pixels[(x-1) + y*img.width];
         right = x == img.width-1 ? img.pixels[center] : img.pixels[(x+1) + y*img.width];

         newImage.pixels[center] = color(colorDistance(left, right));
      }
   }

   return newImage;
}

The details are a bit nit-picky, but the idea is simple. If we’re inspecting a non-edge pixel, then we can use the formula directly and compute the values of the neighboring left and right pixels. Otherwise, the “left” pixel or the “right” pixel will be outside the bounds of the image, and so we replace it with the pixel we’re inspecting. Mathematically, we’d be computing the difference $ |I(x, y) - I(x+1, y)|$ and $ |I(x-1,y) - I(x, y)|$. Additionally, since we’ll later only be interested in the relative sizes of the gradient, we can ignore the factor of 1/2 in the formula we derived.

The parts of this code that are specific to Processing also deserve some attention. Specifically, we use the built-in types PImage and color, for representing images and colors, respectively. The “createImage” function creates an empty image of the specified size. And peculiarly, the pixels of a PImage are stored as a one-dimensional array. So as we’re iterating through the rows and columns, we must compute the correct location of the sought pixel in the pixel array (this is why we have a variable called “center”). Finally, as in Java, the ternary if notation is used to keep the syntax short, and those two lines simply check for the boundary conditions we stated above.

The last unexplained bit of the above code is the “colorDistance” function. As our image function $ I(x,y)$ has triples of numbers as values, we need to compute the distance between two values via the standard distance formula. We have encapsulated this in a separate function. Note that because (in this section of the blog) we are displaying the results in an image, we have to convert to an integer at the end.

int colorDistance(color c1, color c2) {
   float r = red(c1) - red(c2);
   float g = green(c1) - green(c2);
   float b = blue(c1) - blue(c2);
   return (int)sqrt(r*r + g*g + b*b);
}

Let’s see this in action on the picture we introduced earlier.

[The horizontal gradient of the picture above.]

The reader who is interested in comparing the two more closely may visit this interactive page. Note that we only compute the horizontal gradient, so certain locations in the image have a large derivative but are still dark in this image. For instance, the top of the door in the background and the wooden bars supporting the bottom of the chair are dark despite the vertical color variations.

The vertical gradient computation is entirely analogous, and is left as an exercise to the reader.

Since we want to inspect both vertical and horizontal gradients, we will call the total gradient matrix $ G$ the matrix whose entries $ g_{i,j}$ are the sums of the magnitudes of the horizontal and vertical gradients at $ i,j$:

$ \displaystyle g_{i,j} = \left | \frac{\partial I}{\partial x} (i,j) \right | + \left | \frac{\partial I}{\partial y} (i,j) \right |$

The function $ e(x,y) = g_{x,y}$ is often called an energy function for $ I$. We mention now that there are other energy functions one can consider; we will use this one for the remainder of this post.
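In Javascript (for consistency with the earlier sections of this post), and assuming the two gradient magnitudes have already been computed into 2D arrays, building the energy matrix might look like the following sketch:

// Combine precomputed horizontal and vertical gradient magnitudes into
// the energy matrix g[i][j] = |dI/dx| + |dI/dy| at pixel (i, j).
function energyMatrix(horizontal, vertical) {
   var g = [];
   for (var i = 0; i < horizontal.length; i++) {
      g.push([]);
      for (var j = 0; j < horizontal[i].length; j++) {
         g[i].push(horizontal[i][j] + vertical[i][j]);
      }
   }
   return g;
}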

Seams, and Dynamic Programming

Back to the problem of resizing, we want a way to remove only those regions of an image that have low total gradient across all of the pixels in the region removed. But of course when resizing an image we must maintain the rectangular shape, and so we have to add or remove the same number of pixels in each column or row.

For the purpose of scaling an image down in width (and the other cases are similar), we have a few options. We could find the pixel in each row with minimal total gradient and remove it. More conservatively, we could remove those columns with minimal gradient (as a sum of the total gradient of each pixel in the column). More brashly, we could just remove pixels of lowest gradient willy-nilly from the image, and slide the rows left.

If none of these ideas sound like they would work, it’s because they don’t. We encourage the unpersuaded reader to try out each possibility on a variety of images to see just how poorly they perform. But of these options, removing an entire column happens to distort the image less than the others. Indeed, the idea of a “seam” in an image is just a slight generalization of a column. Intuitively, a seam $ s_i$ is a trail of pixels traversing the image from the bottom to the top, and at each step the pixel trail can veer to the right or left by at most one pixel.

Definition: Let $ I$ be an $ n \times m$ image with nonnegative integer coordinates indexed from zero. A vertical seam in $ I$ is a list of coordinates $ s_i = (x_i, y_i)$ with the following properties:

  • $ y_0 = 0$ is at the bottom of the image.
  • $ y_{n-1} = n-1$ is at the top of the image.
  • $ y_i$ is strictly increasing.
  • $ |x_i - x_{i+1}| \leq 1$ for all $ 0 \leq i < n-1$.

These conditions simply formalize what we mean by a seam. The first and second impose that the seam traverses from the bottom of the image to the top. The third requires the seam to always “go up,” so that there is only one pixel in each row. The last requires the seam to be “connected” in the sense that it doesn’t veer too far at any given step.

Here are some examples of some vertical seams. One can easily define horizontal seams by swapping the placement of $ x, y$ in the above list of conditions.

[Examples of vertical seams in a photograph of a glacier canyon.]

So the goal is now to remove the seams of lowest total gradient. Here the total gradient of a seam is just the sum of the energy values of the pixels in the seam.

Unfortunately there are many more seams to choose from than columns (or even individual pixels). It might seem difficult at first to find the seam with the minimal total gradient. Luckily, if we’re only interested in minima, we can use dynamic programming to compute the minimal seam ending at any given pixel in linear time.

We point the reader unfamiliar with dynamic programming to our Python primer on this topic. In this case, the sub-problem we’re working with is the minimal total gradient value of all seams from the bottom of the image to a fixed pixel. Let’s call this value $ v(a,b)$. If we know $ v(a,b)$ for all pixels below, say, row $ i$, then we can compute $ v(i+1,j)$ for the entire row $ i+1$ by taking pixel $ (i+1,j)$, and adding its gradient value to the minimum of the values of possible predecessors in a seam, $ v(i,j-1), v(i,j), v(i,j+1)$ (respecting the appropriate boundary conditions).

Once we’ve computed $ v(a,b)$ for the entire matrix, we can look at the minimal value in the top row of the image $ \min_j v(n-1,j)$, and work backwards down the image to compute which seam gave us this minimum.

Let’s make this concrete and compute the function $ v$ as a two-dimensional array called “seamFitness.”

void computeVerticalSeams() {
   seamFitness = new float[img.width][img.height];
   for (int i = 0; i < img.width; i++) {
      seamFitness[i][0] = gradientMagnitude[i][0];
   }

   for (int y = 1; y < img.height; y++) {
      for (int x = 0; x < img.width; x++) {
         seamFitness[x][y] = gradientMagnitude[x][y];

         if (x == 0) {
            seamFitness[x][y] += min(seamFitness[x][y-1], seamFitness[x+1][y-1]);
         } else if (x == img.width-1) {
            seamFitness[x][y] += min(seamFitness[x][y-1], seamFitness[x-1][y-1]);
         } else {
            seamFitness[x][y] += min(seamFitness[x-1][y-1], seamFitness[x][y-1], seamFitness[x+1][y-1]);
         }
      }
   }
}

We have two global variables at work here (global is bad, I know, but it’s Processing; it’s made for prototyping). The seamFitness array, and the gradientMagnitude array. We assume at the start of this function that the gradientMagnitude array is filled with sensible values.

Here we first initialize the zero’th row of the seamFitness array to have the same values as the gradient of the image. This is simply because a seam of length 1 has only one gradient value. Note here the coordinates are a bit backwards: the first coordinate represents the choice of a column, and the second represents the choice of a row. We can think of the coordinate axes of our image function having the origin in the bottom-left, the same as we might do mathematically.

Then we iterate over the rows in the matrix, and in each column we compute the fitness based on the fitness of the previous row. That’s it 🙂
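The backtracking step described above isn’t shown in the code; here is a sketch in Javascript (with seamFitness assumed to be a 2D array indexed as [column][row], as in the Processing code) of how one might recover the minimal seam:

// Walk back down the fitness matrix to recover the minimal vertical seam,
// returning the chosen column index for each row, from bottom to top.
function minimalSeam(seamFitness, width, height) {
   var seam = new Array(height);

   // Find the column with the smallest fitness in the top row.
   var x = 0;
   for (var i = 1; i < width; i++) {
      if (seamFitness[i][height-1] < seamFitness[x][height-1]) { x = i; }
   }
   seam[height-1] = x;

   // Each predecessor is the cheapest of the (at most three) cells in the row below.
   for (var y = height - 2; y >= 0; y--) {
      var best = x;
      if (x > 0 && seamFitness[x-1][y] < seamFitness[best][y]) { best = x - 1; }
      if (x < width - 1 && seamFitness[x+1][y] < seamFitness[best][y]) { best = x + 1; }
      x = best;
      seam[y] = x;
   }

   return seam;
}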

To actually remove a seam, we need to create a new image of the right size, and shift the pixels to the right (or left) of the seam into place. The details are technically important, but tedious to describe fully. So we leave the inspection of the code as an exercise to the reader. We provide the Processing code on this blog’s Github page, and show an example of its use below. Note that the image resizes every time the user clicks within it.

Photograph by Raphael Goetter.

It’s interesting (and indeed the goal) to see how at first nothing is warped, and then the lines on the walls curve around the woman’s foot, and then finally the woman’s body is distorted before she gets smushed into a tiny box by the oppressive mouse.

As a quick side note, we attempted to provide an interactive version of this Processing program online in the same way we did for the gradient computation example. Processing is quite nice in that any Processing program (which doesn’t use any fancy Java libraries) can be cross-compiled to Javascript via the processing.js library. This is what we did for the gradient example. But in doing so for the (admittedly inefficient and memory-leaky) seam-carving program, it appeared to run an order of magnitude slower in the browser than locally. This was this author’s first time using Processing, so the reason for the drastic jump in runtime is unclear. If any readers are familiar with processing.js, a clarification would be very welcome in the comments.

Inserting Seams, Removing Objects, and Videos

In addition to removing seams to scale an image down, one can just as easily insert seams to make an image larger. To insert a seam, just double each pixel in the seam and push the rest of the pixels on the row to the right. The process is not hard, but it requires avoiding one pitfall: if we just add a single seam at a time, then the seam with minimum total energy will never change! So we’ll just add the same seam over and over again. Instead, if we want to add $ k$ seams, one should compute the minimum $ k$ seams and insert them all. If the desired resize is too large, then the programmer should pick an appropriate batch size and add seams in batches.

Another nice technique that comes from the seam-carving algorithm is to intelligently protect or destroy specific regions in the image. To do this requires a minor modification of the gradient computation, but the rest of the algorithm is identical. To protect a region, provide some way of user input specifying which pixels in the image are important, and give those pixels an artificially large gradient value (e.g., the maximum value of an integer). If the down-scaling is not too extreme, the seam computations will be guaranteed not to use any of those pixels, and inserted seams will never repeat those pixels. To remove a region, we just give the desired pixels an arbitrarily low gradient value. Then these pixels will be guaranteed to occur in the minimal seams, and will be removed from the picture.
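As a sketch (again in Javascript, with names of our own invention), this modification is a single pass over the energy matrix before the seam computation:

// Overwrite energy values according to a user-supplied mask:
// mask[i][j] == 1 protects the pixel, mask[i][j] == -1 marks it for removal.
function applyMask(energy, mask) {
   var HUGE = 1000000;
   for (var i = 0; i < energy.length; i++) {
      for (var j = 0; j < energy[i].length; j++) {
         if (mask[i][j] == 1) { energy[i][j] = HUGE; }
         else if (mask[i][j] == -1) { energy[i][j] = -HUGE; }
      }
   }
}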

The technique of seam-carving is a very nice tool, and as we just saw it can be extended to a variety of other techniques. In fact, seam-carving and its applications to object removal and image resizing are implemented in all of the recent versions of Photoshop. The techniques are used to adapt applications to environments with limited screen space, such as a mobile phone or tablet. Seam carving can even be adapted for use in videos. This involves an extension of the dynamic program to work across multiple frames, formally finding a minimal graph cut between two frames so that each piece of the cut is a seam in the corresponding frame. Of course there is a lot more detail to it (and the paper linked above uses this detail to improve the basic image-resizing algorithm), but that’s the rough idea.

We’ve done precious little on this blog with images, but we’d like to get more into graphics programming. There’s a wealth of linear algebra, computational geometry, and artificial intelligence hiding behind most of the computer games we like to play, and it would be fun to dive deeper into these topics. Of course, with every new post this author suggests ten new directions for this blog to go. It’s a curse and a blessing.

Until next time!

Complete Sequences and Magic Tricks

Numberphile posted a video today describing a neat trick based on complete sequences:

The mathematics here is pretty simple, but I noticed at the end of the video that Dr. Grime was constructing the cards by hand, when really this is a job for a computer program. I thought it would be a nice warmup exercise (and a treat to all of the Numberphile viewers) to write a program to construct the cards for any complete sequence.

For the sake of bringing some definitions into it, let’s review the idea of the card trick.

Definition: A sequence of integers $ a_n$ is complete if every natural number $ k$ can be represented as a sum of distinct numbers in the sequence.

The examples used in the video are the powers of two and the Fibonacci numbers. The latter fact is a famous theorem, Zeckendorf’s theorem. Other famous complete sequences include the prime numbers (if you add 1) and the lazy caterer’s sequence.

Now the question is, given a complete sequence, how do we generate the cards from the video? One might recall our post on dynamic programming, in which we wrote a program to compute the optimal coin change for a given amount of money. Indeed, this method works, but we have some added structure in this problem: we know that any number can only be used once. In that case, it is easy to see that there is no need for dynamic programming. Instead, we just use a greedy algorithm.

That is, we can determine which cards contain a number $ k$ (that is, which numbers in the sequence occur in the representation of $ k$) as follows. Start with the largest number in the sequence no greater than $ k$, call it $ n$, and we know that $ n$ shows up in the representation of $ k$. To find the next number in the representation, simply repeat the process with the difference $ k-n$. The next number to appear in the representation of $ k$ is hence the largest number in the sequence no greater than $ k-n$. We can repeat this process until we run out of numbers or the difference becomes zero.
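In Javascript, computing a single number’s representation greedily might look like this sketch (our own function, separate from the card-building code below):

// Greedily represent k as a sum of distinct terms of the complete sequence,
// given in increasing order, taking the largest term no bigger than the remainder.
function representation(k, numbers) {
   var terms = [];
   var remainder = k;
   for (var j = numbers.length - 1; j >= 0; j--) {
      if (numbers[j] <= remainder) {
         terms.push(numbers[j]);
         remainder -= numbers[j];
      }
   }
   return remainder == 0 ? terms : null;  // null if the greedy process fails
}

representation(100, [1, 2, 4, 8, 16, 32, 64]);  // [64, 32, 4]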

One interesting fact about this algorithm is that it will always produce the smallest representation. That is, in performing the card trick, one is guaranteed the least amount of work in remembering which cards contained the hidden number, and the least amount of work in adding those numbers up.

To implement this algorithm in code, we used Javascript. We chose Javascript so that the reader can run the program and view its source. But we extract the most important bit of the code here:

// compute each card
var i, j;
for (i = 1; i <= usersMax; i++) {
   var remainder = i;
   for (j = numbers.length-1; j >= 0; j--) {
      if (numbers[j] <= remainder) {
         cards[j].push(i);
         remainder -= numbers[j];
      }
   }
}

In words, the “numbers” array contains the complete sequence in question, the cards array is an array of arrays, where each element in the array represents one card. We then loop over all numbers (the $ i$ variable), and for each number we check the sequence in decreasing order to look for membership of $ i$ on a card.

For the sake of brevity, we omit all of the code to deal with input and output, which comprises all of the remaining code in this program.

So to use the program, for example with the prime numbers as the complete sequence, one would type “1,2,3,5,7,11,13,17” into the first text box, and whatever the maximum allowable guess is in the second box, and click the “Generate cards” button.

Enjoy!

The Cellular Automaton Method for Cave Generation

Dear reader, this post has an interactive simulation! We encourage you to play with it as you read the article below.

In our series of posts on cellular automata, we explored Conway’s classic Game of Life and discovered some interesting patterns therein. And then in our primers on computing theory, we built up a theoretical foundation for similar kinds of machines, including a discussion of Turing machines and the various computational complexity classes surrounding them. But cellular automata served us pretty exclusively as a toy. It was a basic model of computation, which we were interested in only for its theoretical universality. One wouldn’t expect too many immediately practical (and efficient) applications of something which needs a ridiculous scale to perform basic logic. In fact, it’s amazing that there are as many as there are.

In this post we’ll look at one particular application of cellular automata to procedural level generation in games.

An example of a non-randomly generated cave level from Bethesda’s The Elder Scrolls series.

The Need for More Caves

Level design in video games is a time-consuming and difficult task. It’s extremely difficult for humans to hand-craft areas that both look natural and are simultaneously fun to play in. This is particularly true of the multitude of contemporary role-playing games modeled after Dungeons and Dragons, in which players move through a series of areas defeating enemies, collecting items, and developing their character. With a high demand for such games and so many levels in each game, it would save an unfathomable amount of money to have computers generate the levels on the fly. Perhaps more importantly, a game with randomly generated levels inherently has a much higher replay value.

The idea of randomized content generation (often called procedural generation) is not particularly new. It has been around at least since the 1980’s. Back then, computers simply didn’t have enough space to store large, complex levels in memory. To circumvent this problem, video game designers simply generated the world as the player moved through it. This opened up an infinitude of possible worlds for the user to play in, and the seminal example of this is a game called Rogue, which has since inspired series such as Diablo, Dwarf Fortress, and many many others. The techniques used to design these levels have since been refined and expanded into a toolbox of techniques which have become ubiquitous in computer graphics and game development.

We’ll explore more of these techniques in the future, but for now we’ll see how a cellular automaton can be used to procedurally generate two-dimensional cave-like maps.

A Quick Review of Cellular Automata

While the interested reader can read more about cellular automata on this blog, we will give a quick refresher here.

For our purposes here, a 2-dimensional cellular automaton is a grid of cells $ G$, where each cell $ c \in G$ is in one of a fixed number of states, and has a pre-determined and fixed set of neighbors. Then $ G$ is updated by applying a fixed rule to each cell simultaneously, and the process is repeated until something interesting happens or boredom strikes the observer. The most common kind of cellular automaton, called a ‘Life-like automaton,’ has only two states, ‘dead’ and ‘alive’ (for us, 0 and 1), and the rule applied to each cell is given as conditions to be ‘born’ or ‘survive’ based on the number of adjacent live cells. This is often denoted “Bx/Sy” where x and y are lists of single digit numbers. Furthermore, the choice of neighborhood is the eight nearest cells (i.e., including the diagonally-adjacent ones). For instance, B3/S23 is the cellular automaton rule where a cell is born if it has three living neighbors, and it survives if it has either two or three living neighbors, and dies otherwise. Technically, these are called ‘Life-like automata,’ because they are modest generalizations of Conway’s original Game of Life. We give an example of a B3/S23 cellular automaton initialized by a finite grid of randomly populated cells below. Note that each of the black (live) cells in the resulting stationary objects satisfy the S23 part of the rule, but none of the neighboring white (dead) cells satisfy the B3 condition.
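A single synchronous update of a Life-like automaton is easy to sketch in Javascript (our own helper; the grid is a 2D array of 0s and 1s, and cells beyond the border are treated as dead here):

// One update of a Life-like automaton with the given birth and survival
// neighbor counts, e.g. step(grid, [3], [2,3]) for B3/S23.
function step(grid, birth, survive) {
   var next = [];
   for (var i = 0; i < grid.length; i++) {
      next.push([]);
      for (var j = 0; j < grid[i].length; j++) {
         var liveNeighbors = 0;
         for (var di = -1; di <= 1; di++) {
            for (var dj = -1; dj <= 1; dj++) {
               if ((di != 0 || dj != 0) && grid[i+di] && grid[i+di][j+dj]) {
                  liveNeighbors++;
               }
            }
         }
         var rule = grid[i][j] == 1 ? survive : birth;
         next[i].push(rule.indexOf(liveNeighbors) >= 0 ? 1 : 0);
      }
   }
   return next;
}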

A cellular automaton should really be defined for an arbitrary graph (or more generally, an arbitrary state space). There is really nothing special about a grid other than that it’s easy to visualize. Indeed, some cellular automata are designed for hexagonal grids, others are embedded on a torus, and still others are one- or three-dimensional. Of course, nothing stops automata from existing in arbitrary dimension, or from operating with arbitrary (albeit deterministic) rules, but to avoid pedantry we won’t delve into a general definition here. It would take us into a discussion of discrete dynamical systems (of which there are many, often with interesting pictures).

It All Boils Down to a Simple Rule

Now the particular cellular automaton we will use for cave generation is simply B678/S345678, applied to a random initial grid with a fixed live border. We interpret the live cells as walls, and the dead cells as open space. This rule should intuitively work: walls will stay walls even if more cells are born nearby, but isolated or near-isolated cells will often be removed. In other words, this cellular automaton should ‘smooth out’ a grid arrangement to some extent. Here is an example animation quickly sketched up in Mathematica to witness the automaton in action:

An example cave generated via the automaton rule B678/S345678. The black cells are alive, and the white cells are dead.

As usual, the code to generate this animation (which is only a slight alteration to the code used in our post on cellular automata) is available on this blog’s Github page.

This map is already pretty great! It has a number of large open caverns, and they are connected by relatively small passageways. With a bit of imagination, it looks absolutely cavelike!

We should immediately note that there is no guarantee that the resulting regions of whitespace will be connected. We got lucky with this animation, in that there are only two disconnected components, and one is quite small. But in fact one can be left with multiple large caves which have no connecting paths.

Furthermore, we should note the automaton’s rapid convergence to a stable state. Unlike Conway’s Game of Life, in practice this automaton almost always converges within 15 steps, and this author has yet to see any oscillatory patterns. Indeed, they are unlikely to exist because the survival conditions are so permissive and our initial grid has a roughly even proportion of live and dead cells. There is no overpopulation that causes cells to die off: once a cell is born it will always survive, and the only cells that die are those that begin isolated or nearly so. In a sense, B678/S345678 is designed to prune the sparse areas of the grid and fill in the dense areas by patching up holes.

We should also note that the initial proportion of cells which are alive has a strong effect on the density of the resulting picture. For the animation displayed above, we initialized 45% of the cells to be live. If we increase that by a mere 5%, we get a picture like the following.

A cave generated with the initial proportion of live cells equal to 0.5

As expected, there are many more disconnected caverns. Some game designers prefer a denser grid combined with heuristic methods to connect the caverns. Since our goal is just to explore the mathematical ideas, we will leave this as a parameter in our final program.

Javascript Implementation, and Greater Resolution

One important thing to note is that B678/S345678 doesn’t scale well to fine grid sizes. For instance, if we increase the grid size to 200×200, we get something resembling an awkward camouflage pattern.

A 200×200 grid cave generation. Click the image to enlarge it.

What we really want is a way to achieve the major features of the low-resolution image on a larger grid. Since cellular automata are inherently local manipulations, we should not expect any modification of B678/S345678 to do this for us. Instead, we will use B678/S345678 to create a low-resolution image, increase its resolution manually, and smooth it out with — you guessed it — another cellular automaton! We’ll design this automaton specifically for the purpose of smoothing out corners.

To increase the resolution, we may simply divide each cell into four pieces. The picture doesn’t change, but the total number of cells increases fourfold. There are a few ways to do this programmatically; the way we chose keeps the grid at the smallest resolution possible and simulates the higher resolutions by doing block computations. The interested programmer can view our Javascript program available on this blog’s Github page to see this directly (or view the page source of this post’s interactive simulator).
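Here is a minimal sketch of the naive alternative, where each cell literally becomes a 2×2 block of the same state (again, the function name is our own):

```
// Double the resolution: replace each cell with a 2x2 block of the
// same state, so the picture is unchanged but has four times the cells.
function increaseResolution(grid) {
  var result = [];
  grid.forEach(function (row) {
    var doubled = [];
    row.forEach(function (cell) {
      doubled.push(cell, cell); // duplicate each cell horizontally
    });
    result.push(doubled, doubled.slice()); // duplicate each row vertically
  });
  return result;
}
```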

To design a “smoothing automaton,” we should investigate more closely what we need to improve on in the above examples. In particular, once we increase the resolution, we will have a lot of undesirable convex and concave corners. Since a “corner” is simply a block satisfying certain local properties, we can single those out to be removed by an automaton. It’s easy to see that a live cell at a convex corner has exactly 3 live neighbors, so we should not allow those cells to survive. Similarly, the white cell just outside a concave corner has 5 live neighbors, so we should allow that cell to be born (see the diagram below). On the other hand, we want the major properties of our old B678/S345678 to keep applying, so we can simply add 5 to the B part and remove 3 from the S part. Lastly, for empirical reasons, we also decide to kill off cells with 4 live neighbors.
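To see where those neighbor counts come from, picture the corner of a wall region, where # is a live wall cell, . is dead open space, and * marks the cell in question:

```
Convex corner:      Concave corner:

. . . .             # # . .
. * # .             # # . .
. # # .             # # * .
. # # .             # # # #
```

The live cell marked * on the left has exactly 3 live neighbors, so removing 3 from the S part kills it; the dead cell marked * on the right has exactly 5 live neighbors, so adding 5 to the B part fills it in.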

And so our final “smoothing automaton” is simply B5678/S5678.

We present this application as an interactive Javascript program. Some basic instructions:

  • The “Apply B678/S345678” button does what you’d expect: it applies B678/S345678 to the currently displayed grid. It iterates the automaton 20 times in an animation.
  • The “Apply B5678/S5678” button applies the smoothing automaton, but it does so only once, allowing the user to control the degree of smoothing at the specific resolution level.
  • The “Increase Resolution” button splits each cell into four, and may be applied until the cell size is down to a single pixel.
  • The “Reset” button resets the entire application, creating a new random grid.

We used this program to generate a few interesting looking pictures by varying the order in which we pressed the various buttons (it sounds silly, but it’s an exploration!). First, a nice cave:

An example of a higher resolution cave created with our program. In order to achieve similar results, first apply B678/S345678, and then alternate increasing the resolution and applying B5678/S5678 1-3 times.
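In terms of the helper functions sketched earlier (the names and iteration counts are our own choices, not necessarily those of the actual program), this recipe looks roughly like:

```
// The recipe from the caption above, using our earlier sketches.
var cave = randomGrid(50, 50, 0.45);
for (var k = 0; k < 20; k++) {
  cave = fixBorder(step(cave, [6, 7, 8], [3, 4, 5, 6, 7, 8]));
}
for (var level = 0; level < 3; level++) {
  cave = increaseResolution(cave);
  for (var s = 0; s < 2; s++) { // smooth 1-3 times per resolution level
    cave = step(cave, [5, 6, 7, 8], [5, 6, 7, 8]);
  }
}
```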

We note that this is not perfect. There are some obvious geometric artifacts lingering in this map, mostly in the form of unnaturally straight diagonal lines and unnaturally flawless circles. Perhaps one might imagine the circles are the bases of stalactites or stalagmites. But on the whole, in terms of keeping the major features of the original automaton present while smoothing out corners, this author thinks B5678/S5678 has done a phenomenal job. Further, in the cellular automaton’s defense, when the same local rules are applied uniformly across the entire grid, such regularities are bound to occur. That’s just another statement of the non-chaotic nature of B5678/S5678 (in stark contrast to Conway’s Game of Life).

There are various modifications one could perform (or choose not to, depending on the type of game) to make the result more accessible for the player. For instance, one could remove all regions which fit inside a sufficiently small circle, or add connections between the disconnected components at some level of resolution. This would require some sort of connected-component labeling, which is a nontrivial task; current research goes into optimizing connected-component algorithms for large-scale grids. We plan to cover such topics on this blog in the future.
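As a sketch of the idea (though nothing close to research-grade), here is a simple breadth-first flood fill that labels the open regions of a grid; we use 4-connectivity, since diagonal gaps are not walkable passages:

```
// Label the connected components of dead (open) cells with a flood fill.
// Returns a grid of labels: 0 for walls, and 1, 2, ... for each cavern.
function labelCaverns(grid) {
  var rows = grid.length, cols = grid[0].length;
  var labels = grid.map(function (row) {
    return row.map(function () { return 0; });
  });
  var nextLabel = 1;
  for (var i = 0; i < rows; i++) {
    for (var j = 0; j < cols; j++) {
      if (grid[i][j] === 0 && labels[i][j] === 0) {
        // Breadth-first flood fill from this unlabeled open cell.
        var queue = [[i, j]];
        labels[i][j] = nextLabel;
        while (queue.length > 0) {
          var cell = queue.shift();
          [[1, 0], [-1, 0], [0, 1], [0, -1]].forEach(function (d) {
            var x = cell[0] + d[0], y = cell[1] + d[1];
            if (x >= 0 && x < rows && y >= 0 && y < cols &&
                grid[x][y] === 0 && labels[x][y] === 0) {
              labels[x][y] = nextLabel;
              queue.push([x, y]);
            }
          });
        }
        nextLabel++;
      }
    }
  }
  return labels;
}
```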

Another cool picture we created with this application might be considered a more “retro” style of cave.

Apply B678/S345678 once, and increase the resolution as much as possible before applying B5678/S5678 as many times as desired.

We encourage the reader to play around with the program to see what other sorts of creations one can make. As of the time of this writing, changing the initial proportion of live cells (50%) or changing the automaton rules cannot be done in the browser; it requires one to modify the source code. We may implement the ability to control these in the browser given popular demand, but (of course) it would be a wonderful exercise for the intermediate Javascript programmer.

Caves in Three Dimensions

It’s clear that this same method can be extended to a three-dimensional model for generating caverns in a game like Minecraft. While we haven’t personally experimented with three-dimensional cellular automata here on this blog, it’s far from a new idea. Once we reach graphics programming on this blog (think: distant future) we plan to revisit the topic and see what we can do.

Until then!