Low Complexity Art

The Art of Omission

Whether in painting, fiction, film, landscape architecture, or paper folding, art is often said to be the art of omission. Simplicity breeds elegance, and engages the reader at a deep, aesthetic level.

A prime example is the famous six-word story written by Ernest Hemingway:

For sale: baby shoes, never worn.

He called it his best work, and rightfully so. To say so much with so few words is a monumental feat that authors have been trying to recreate since Hemingway’s day. Unsurprisingly, some mathematicians (for whom the art of proof had better not omit anything!) want to apply their principles to describe elegance.

Computation and Complexity

This study of artistic elegance will be from a computational perspective, and it will be based loosely on the paper of the same name. While we include the main content of the paper in a condensed form, we will deviate in two important ways: we alter an axiom with justification, and we provide a working implementation for the reader’s use. We do not require extensive working knowledge of theoretical computation; the informed reader should simply be aware that everything here is theoretically performed on a Turing machine, though the details are unimportant.

So let us begin with the computational characterization of simplicity. Unfortunately, due to our own lack of knowledge of the subject, we will overlook the underlying details and take them for granted. [At some point in the future, we will provide a primer on Kolmogorov complexity. We just ordered a wonderful book on it, and can’t wait to dig into it!]

Here we recognize that all digital images are strings of bits, and so when we speak of the complexity of a string, we include in particular the complexity of an image.

Definition: The Kolmogorov complexity of a string is the length of the shortest program which generates it.

In order to specify “length” appropriately, we must fix some universal description language, so that all programs have the same frame of reference. Any Turing-complete programming language will do, so let us choose Python for the following examples. More specifically, there exists a universal Turing machine $ U$, for which any program on any machine may be translated (compiled) into an equivalent program for $ U$ by a program of fixed size. Hence, the measure of Kolmogorov complexity, when a fixed machine is specified (in this case Python), is objective over the class of all outputs.
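This is the content of the invariance theorem: writing $ K_U(x)$ for the length of the shortest program in language $ U$ which outputs $ x$, for any two universal languages $ U$ and $ V$ there is a constant $ c_{U,V}$ (roughly, the length of a compiler from $ V$ to $ U$) such that for every string $ x$,

$ K_U(x) \leq K_V(x) + c_{U,V}$

In other words, switching languages changes every complexity by at most a fixed additive constant.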

Here is a simple example illustrating Kolmogorov complexity: consider the string of one hundred zeros. This string is obviously not very “complex,” in the sense that one could write a very short program to generate it. In Python:

print "0" * 100

One can imagine that a compiler which optimizes for brevity would output rather short assembly code as well: a print instruction, a conditional branch to loop it, and a few constants. On the other hand, we want to call a string like

“00111010010000101101001110101000111101”

complex, because it follows no apparent pattern. Indeed, in Python the shortest program to output this string is just to print the string itself:

print "00111010010000101101001110101000111101"

And so we see that this random string of ones and zeros has a higher Kolmogorov complexity than the string of all zeros. In other words, the boring string of all zeros is “simple,” while the other is “complicated.”

Kolmogorov himself proved that there is no algorithm to compute Kolmogorov complexity (the number itself) for an arbitrary input. In other words, the problem of determining exact Kolmogorov complexity is undecidable (by reduction from the halting problem; see the Turing machines primer). So we will not try in vain to get an exact number for the Kolmogorov complexity of arbitrary strings; although it is easy to count the lengths of short examples like the ones above, in general we speak of complexity in terms of bounds and relative comparisons.
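Although the exact value is uncomputable, upper bounds are cheap: any lossless compressor, together with the fixed program that decompresses its output, gives one. Here is a small Python sketch of the idea (the choice of zlib is arbitrary, and this is only a crude stand-in for true Kolmogorov complexity):

import zlib

def complexity_upper_bound(s):
    """Length in bytes of the zlib-compressed string: an upper bound on
    Kolmogorov complexity, up to the fixed cost of the decompressor."""
    return len(zlib.compress(s.encode(), 9))

print(complexity_upper_bound("0" * 100))   # compresses to a handful of bytes
# The short "random" string barely compresses; the zlib header may even make
# the result longer than the string itself.
print(complexity_upper_bound("00111010010000101101001110101000111101"))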

Kolmogorov Meets Picasso

To apply this to art, we want to ask, “for a given picture, what is the length of the shortest program that outputs it?” This will tell us whether a picture is simple or complex. Unfortunately for us, most pictures are neither generated by programs, nor do they have obvious programmatic representations. More feasibly, we can ask, “can we come up with pictures which have low Kolmogorov complexity and are also beautiful?” This is truly a tough task.

To do so, we must first invent an encoding for pictures, and write a program to interpret the encoding. That’s the easy part. Then, the true test, we must paint a beautiful picture.

We don’t pretend to be capable of such artistry. However, there are some who have created an encoding based on circles and drawn very nice pictures with it. Here we will present those pictures as motivation, and then develop a very similar encoding method, providing the code and examples for the reader to play with.

Jürgen Schmidhuber, a long-time proponent of low-complexity art, spent a very long time (on the order of thousands of sketches) creating drawings using his circle encoding method, and here are some of his results:


Marvelous. Our creations will be much uglier. But we admit, one must start somewhere, and it might as well be where we feel most comfortable: mathematics and programming.

Magnificence Meets Method

There are many possible encodings for drawings. We will choose one which is fairly easy to implement, and based on intersecting circles. The strokes in a drawing are arcs of these circles. We call the circles used to generate drawings legal circles, while the arcs are legal arcs. Here is an axiomatic specification of how to generate legal circles:

  1. Arbitrarily define a circle $ C$ with radius 1 as legal. All other circles are generated with respect to this circle. Define a second legal circle whose center is on $ C$, and also has radius 1.
  2. Wherever two legal circles of equal radius intersect, a third circle of equal radius is centered at the point of intersection.
  3. Every legal circle of radius $ r$ has at its center another legal circle of radius $ r/2$.

A legal arc is then simply any arc of a legal circle, and a legal drawing is any list of legal arcs, where each arc has a width drawn from some fixed set of values. We generate all circles which intersect the interior of the base circle $ C$, and sort them first by radius, then by $ x$ coordinate, then by $ y$ coordinate. Given this order on the circles, we may number them from 1 to $ n$ and specify a particular circle by its index in the list. In this way, we have defined a coordinate space of arcs, with points of the form (center, thickness, arc-start, arc-end), where the arc-start and arc-end coordinates are measured in radians.

We describe the programmatic construction of these circles later. For now, here is the generated picture of all circles which intersect the unit circle up to radius $ 2^{-5}$:

The legal circles

In addition, we provide an animation showing the different layers:

And another animation displaying the list of circles sorted by index in increasing order. As an animated GIF, this file is rather large (5MB), so we link to it separately.

As we construct smaller and smaller circles, the interior of the base circle is covered up by a larger proportion of legally usable area. By using obscenely small circles, we may theoretically construct any drawing. On the other hand, what we care about is how much information is needed to do so.

Because of our nice well-ordering on circles, those circles with very small radii will have huge indices! Indeed, there are about four circles of radius $ 2^{-i-1}$ for each circle of radius $ 2^{-i}$ in any fixed area. We can then measure the complexity of a drawing by how many characters its list of legal arcs requires. Clearly, a rendition of Starry Night would have a large number of high-indexed circles, and hence a high Kolmogorov complexity. (On second thought, I wonder how hard it would be to get a rough sketch of a Starry-Night-esque picture in this circle encoding…it might not be all that complex.)
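To put a rough number on that growth (a back-of-the-envelope estimate, not part of the encoding itself): if each halving of the radius multiplies the number of circles by about four, then a circle of radius $ 2^{-i}$ has an index on the order of $ 4^i$, so merely naming it in a coordinate costs about

$ \log_2(4^i) = 2i$

bits, before we even specify the thickness or the arc endpoints.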

Note that Schmidhuber defines things slightly differently. In particular, he requires that the endpoints of a legal arc must be the intersection points of two other legal arcs, making the arc-start and arc-end coordinates integers instead of radian measures. We respectfully disagree with this axiom, and we explain why here:

Which of the two arcs is more “complex”?

Of the two arcs in the picture to the left, which would you say is more complex, the larger or the smaller? We observe that two arcs of the same circle, regardless of how long or short they are, should not be significantly different in complexity.

Schmidhuber, on the other hand, implicitly claims that arcs which begin or terminate at non-standard locations (locations which only correspond to the intersections of sufficiently small circles) should be deemed more complex. But this can be a difference as small as $ \pi/100$, and it drastically alters the complexity. We consider this specification unrealistic, at least to the extent to which human beings consider complexity in art. So we stick to radians.

Indeed, our model does alter the complexity for some radian measures, simply because finely specifying fractions requires more bits than integral values. But the change in complexity is hardly as drastic.

In addition, Schmidhuber allows for region shading between legal arcs. Since we did not find an easy way to implement this in Mathematica, we skipped it as extraneous.

Such Stuff as Programs are Made of

We implemented this circle encoding in Mathematica. The reader is encouraged to download and experiment with the full notebook, available from this blog’s Github page. We will explain the important bits here.

First, we have a function to compute all the circles whose centers lie on a given circle:

borderCircleCenters[{x_, y_}, r_] :=
  Table[{x + r Cos[i 2 Pi/6], y + r Sin[i 2 Pi/6]}, {i, 0, 5}];

We arbitrarily picked the first legal circle to be the unit circle, defined with center (0,0), while the second has center (1,0). This made generating all legal circles a relatively simple search task. In addition, we recognize that any other choice of second circle is simply a rotation of this configuration, so one may rotate the final drawing to accommodate a different initialization step.

Second, we have the brute-force search of all circles. We loop through all circles in a list, generating the six border circles appropriately, and then filtering out the ones we need, repeating until we have all the circles which intersect the interior of the unit circle. Note our inefficiencies: we search out as far as radius 2 to find small circles which do not necessarily intersect the unit circle, and we calculate the border circles of each circle many times. On the other hand, finding all circles as small as radius $ 2^{-5}$ takes about a minute on an Intel Atom processor, which is not so slow as to need excessive tuning for a prototype’s sake.

getAllCenters[r_] := Module[{centers, borderCenters, searchR,
                             ord, rt},
   (* order centers by x coordinate, then by y coordinate *)
   ord[{a_, b_}, {c_, d_}] := If[a < c, True, b < d];

   (* start from the center of the base circle *)
   centers = {{0, 0}};

   (* grow the set of centers by repeatedly adding the six border
      circles of every known center, stopping once the search frontier
      has spread past Min[2, 1 + Sqrt[r]] from the origin *)
   rt = Power[r, 1/2];
   While[Norm[centers[[-1]]] <= Min[2, 1 + rt],
    borderCenters = Map[borderCircleCenters[#, r] &, centers];
    centers = centers \[Union] Flatten[borderCenters, 1]];

   (* keep only the circles which intersect the interior of the
      unit circle, and sort them *)
   Sort[Select[centers, Norm[#] < 1 + r &], ord]
   ];

Finally, we have a function to extract from the resulting list of all centers the center and radius of a given index, and a function to convert a coordinate to its graphical representation:

(* extracts a pair {center, radius} given the
   index of the circle *)
indexToCenterRadius[layeredCenters_, index_] :=
  Module[{row, length, counter},
   row = 1;
   length = Length[layeredCenters[[row]]];
   counter = index;

   While[counter > length,
    counter -= length;
    row++;
    length = Length[layeredCenters[[row]]];
    ];

   {layeredCenters[[row, counter]], 1/2^(row - 1)}
   ];

drawArc[{index_, thickness_, arcStart_, arcEnd_}] :=
  Module[{center, radius},
   {center, radius} = indexToCenterRadius[allCenters, index];
   Graphics[{Thickness[thickness],
     Circle[center, radius, {arcStart, arcEnd}]},
     ImagePadding -> 5, PlotRange -> {{-1, 1}, {-1, 1}},
     ImageSize -> {400, 400}]
   ];

And a front-end style function, which takes a list of coordinates and draws the resulting picture:

paint[coordinates_] := Show[Map[drawArc, coordinates]];

Any omitted details (at least one global variable name) are clarified in the notebook.

Now, with our paintbrush in hand, we unveil our very first low-complexity piece of art. Behold! Surprised Mr. Moustache Witnessing a Collapsing Soufflé:

Surprised Mr. Moustache, © Jeremy Kun, 2011

Its coordinates are:

{{7, 0.005, 0, 2 Pi}, {197, 0.002, 0, 2 Pi},
{299, 0.002, 0, 2 Pi}, {783, 0.002, 0, 2 Pi},
{2140, 0.001, 0, 2 Pi}, {3592, 0.001, 0, 2 Pi},
{22, 0.004, 8 Pi/6, 10 Pi/6}, {29, 0.004, 4 Pi/3, 5 Pi/3},
 {21, 0.004, Pi/3, 2 Pi/3}, {28, 0.004, Pi/3, 2 Pi/3}}

Okay, so it’s lame, and took all of ten minutes to create (guess-and-check on the indices is quick, thanks to Mathematica’s interpreter). But it has low Kolmogorov complexity! And that’s got to count for something, right?

Even if you disagree with our obviously inspired artistic genius, the Mathematica framework for creating such drawings is free and available for anyone to play with. So please, should you have any artistic talent at all (and access to Mathematica), we would love to see your low-complexity art! If we somehow come across three days of being locked in a room with access to nothing but a computer and a picture of Starry Night, we might attempt to recreate a sketch of it for this blog. But until then, we will explore other avenues.

Happy sketching!

Addendum: Note that the outstanding problem here is how to algorithmically take a given picture (or specification of what one wants to draw), and translate it into this system of coordinates. As of now, no such algorithm is known, and hence we call the process of making a drawing art. We may attempt to find such a method in the future, but it is likely hard, and if we produced an algorithm even a quarter as good as we might hope, we would likely publish a paper first, and blog about it second.

The Wild World of Cellular Automata

So far on this blog we’ve been using mathematics to help us write interesting and useful programs. For this post (and for more in the future, I hope) we use an interesting program to drive its study as a mathematical object. For the uninformed reader, I plan to provide an additional primer on the theory of computation, but for the obvious reason it interests me more to write on their applications first. So while this post will not require too much rigorous mathematical knowledge, the next one we plan to write will.

Cellular Automata

There is a long history of mathematical models for computation. One very important one is the Turing Machine, which is the foundation of our implementations of actual computers today. On the other end of the spectrum, one of the simpler models of computation (often simply called a system) is a cellular automaton. Surprisingly enough, there are deep connections between the two. But before we get ahead of ourselves, let’s see what these automata can do.

A cellular automaton is a space of cells, where each cell has a fixed number of possible states, and a set of rules for when one state transitions to another. At each step, all cells are updated simultaneously according to the transition rules. After a pedantic, yet interesting, example, we will stick to a special kind of two-dimensional automaton ($ n \times n$ grids of cells), where the available states are 1 or 0. We will alternate freely between saying “1 and 0,” “on and off,” and “live and dead.”

Consider a 1-dimensional grid of cells which has infinite length in either direction (recalling a Turing machine’s infinite tape), where each cell can contain either a 0 or 1. For the set of rules, we say that if a cell has any immediately adjacent neighbor which is on, then in the next generation the cell is on. Otherwise, the cell is off. We may sum up this set of rules with the following picture (credit to Wolfram MathWorld):

The state transition rule for our simple cellular automaton.

The first row represents the possible pre-transition states, and the second row is the resulting state for the center cell in the next generation. Intuitively, we may think of these as bacteria reproducing in a petri dish, where there are rigorous rules on when a bacterium dies or is born. If we start with a single cell turned on, and display each successive generation as a row in a 2-dimensional grid, we get the following orderly pattern (again, credit to Wolfram MathWorld for the graphic):

The resulting pattern in our simple cellular automaton.

While this pattern is relatively boring, there are many interesting patterns resulting from other transition rules (which are just as succinct). To see a list of all such elementary cellular automata, see Wolfram MathWorld’s page on the topic. Indeed, Stephen Wolfram was the first to classify these patterns, so the link is appropriate.
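For concreteness, here is a minimal Python sketch of the rule just described (it mirrors the informal description above, not any code from the post): a cell turns on exactly when at least one of its two neighbors was on.

def step(row):
    """One generation: a cell is on exactly when at least one of its
    two immediate neighbors was on in the previous generation."""
    # Pad with dead cells so the pattern can expand one cell per side.
    padded = [0, 0] + row + [0, 0]
    return [int(padded[i - 1] or padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Start with a single live cell and print each generation as a row.
rows = [[1]]
for _ in range(7):
    rows.append(step(rows[-1]))
width = len(rows[-1])
for r in rows:
    pad = (width - len(r)) // 2
    print(" " * pad + "".join("#" if c else "." for c in r))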

Because a personification of this simulation appears to resemble competition, these cellular automata are sometimes called zero-player games. Though it borrows terminology from the field of game theory, we do not analyze any sort of strategy, but rather observe the patterns emerging from various initial configurations. There are often nice local or global equilibria; these are the treasures to discover.

As we increase the complexity of the rules, the complexity of the resulting patterns increases as well. (Although rule 30 of the elementary automata is sufficiently complex, even exhibiting true mathematical chaos, I hardly believe that anyone studies elementary automata anymore.)

So let’s increase the dimension of our grid to 2, and explore John Conway’s aptly named Game of Life.

What Life From Yonder Automaton Breaks!

For Life, our automaton has the following parameters: an infinite two-dimensional grid of cells, states that are either on or off, and some initial configuration of the cells called a seed. There are three transition rules:

  1. Any live cell with fewer than two or more than three living neighbors dies.
  2. Any dead cell with exactly three living neighbors becomes alive.
  3. In any other case, the cell remains as it was.

Originally formulated by John Conway around 1970, this game was at first just a mathematical curiosity. Before we go into too much detail in the mathematical discoveries which made this particular game famous, let’s write it and explore some of the patterns it creates.

Note: this is precisely the kind of mathematical object that delights mathematicians. One creates an ideal mathematical object in one’s own mind, gives it life (no pun intended), and soon the creation begins to speak back to its creator, exhibiting properties far surpassing its original conception. We will see this very process in the Game of Life.

The rules of Life are not particularly hard to implement. We did so in Mathematica, so that we may use its capability to easily produce animations. Here is the main workhorse of our implementation. We provide all of the code used here in a Mathematica notebook on this blog’s Github page.

(* We abbreviate 'nbhd' for neighborhood *)
getNbhd[A_, i_, j_] := A[[i - 1 ;; i + 1, j - 1 ;; j + 1]];

evaluateCell[A_, i_, j_] :=
  Module[{nbhd, cell = A[[i, j]], numNeighbors},

   (* no man's land edge strategy *)
   If[i == 1 || j == 1 || i == Length[A] || j == Length[A[[1]]],
    Return[0]];

   nbhd = getNbhd[A, i, j];
   (* this sum counts the cell itself; hence the "- 1" below for live cells *)
   numNeighbors = Apply[Plus, Flatten[nbhd]];

   If[cell == 1 && (numNeighbors - 1 < 2 || numNeighbors - 1 > 3),
    Return[0]];
   If[cell == 0 && numNeighbors == 3, Return[1]];
   Return[cell];
   ];

evaluateAll[A_] := Table[evaluateCell[A, i, j],
   {i, 1, Length[A]}, {j, 1, Length[A[[1]]]}];

This implementation creates a few significant limitations to our study of this system. First, we have a fixed array size instead of an infinite grid. This means we need some way to handle live cells reaching the edge of the system. Fortunately, at this introductory stage in our investigation we can ignore patterns which arise too close to the border of our array, recognizing that the edge strategy tampers with the evolution of the system. Hence, we adopt the no man’s land edge strategy, which simply allows no cell to be born on the border of our array. One interesting alternative is to have the edges wrap around, thus treating the square grid as the surface of a torus. For small grids, this strategy can actually tamper with our central patterns, but for a large fixed grid, it is a viable strategy.
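For the curious, here is what the wrap-around alternative might look like as a quick Python/NumPy sketch (a sketch only; our Mathematica implementation above sticks with the no man’s land strategy):

import numpy as np

def life_step_torus(grid):
    """One Game of Life step on a grid whose edges wrap around (a torus)."""
    # Sum the eight neighbors by shifting the grid in every direction;
    # np.roll wraps around, which is exactly the toroidal boundary condition.
    neighbors = sum(
        np.roll(np.roll(grid, di, axis=0), dj, axis=1)
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    # A cell is alive next step if it has three live neighbors,
    # or two live neighbors and was already alive.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.random.randint(0, 2, (20, 20))
for _ in range(200):
    grid = life_step_torus(grid)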

Second, we do not optimize our array operations to take advantage of sparse matrices. Since most cells will usually be dead, we really only need to check the neighborhoods of live cells and dead cells which have at least one live neighbor. We could keep track of the positions of live cells in a hash set, checking only those and their immediate neighbors at each step. It would not take much to modify the above code to do this, but for brevity and pedantry we exclude it, leaving the optimization as an exercise to the reader.
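For reference, here is one shape that exercise might take, sketched in Python with a set of live-cell coordinates (an illustration of the idea, not part of the Mathematica notebook):

from collections import Counter

def life_step_sparse(live):
    """One Life step, where `live` is a set of (row, col) pairs of live cells."""
    # Count live neighbors for every cell adjacent to at least one live cell.
    neighbor_counts = Counter(
        (i + di, j + dj)
        for (i, j) in live
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    # Birth on exactly three neighbors; survival on two or three.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # a 2x2 "block" still life
assert life_step_sparse(block) == block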

Finally, to actually display the results, we combine Mathematica’s ArrayPlot and NestList functions to produce a list of frames, which we then animate:

makeFrames[A_, n_] := Map[
  ArrayPlot[#, Mesh -> True]&, NestList[evaluateAll, A, n]];

animate[frames_] := ListAnimate[frames, 8, ControlPlacement -> Top];

randomLife = makeFrames[RandomInteger[1, {20, 20}], 200];
animate[randomLife]

Throwing any mathematical thoughts we might have to the wind, we just run it! Here are the results of our first try:

What a beauty. The initial chaos almost completely stabilizes after just a few iterations. We see that there exist stationary patterns, the 2×2 square in the bottom left and the space-invader in the top right. Finally, after the identity crisis in the bottom right flounders for a while, we get an oscillating pattern!

Now hold on, because we recognize that this oscillator (which we henceforth dub the flame) is resting against the no man’s land. So it might not be genuine, and only oscillate because the edge allows it to. However, we notice that one of the patterns which precedes the flame is a 3×3 live square with a dead center. Let’s try putting this square by itself to see what happens. In order to do this, we have an extra few lines of code to transform a list of local coordinates to a pattern centered in a larger grid.

patternToGrid[pts_List, n_] :=
  With[{xOff = Floor[n/2] - Floor[Max[Map[#[[2]] &, pts]]/2],
        yOff = Floor[n/2] - Floor[Max[Map[#[[1]] &, pts]]/2]},
   SparseArray[Map[# + {yOff, xOff} -> 1 &, pts], {n, n}, 0]];
square = {{1, 1}, {1, 2}, {1, 3}, {2, 1}, {2, 3},
  {3, 1}, {3, 2}, {3, 3}};

Combining these definitions with the earlier code for animation, we produce the following pattern:

While we didn’t recover our coveted flame from before, we have at least verified that natural oscillators exist. It’s not hard to see that one of the four pieces above constitutes the smallest oscillator, for any oscillator requires at least three live cells in every generation, and this has exactly three in each generation. No less populated (static or moving) pattern could possibly exist indefinitely.

Before we return to our attempt to recreate the flame, let’s personify this animation. If we think of the original square as a densely packed community, we might tend to interpret this pattern as a migration. The packed population breaks up and migrates to form four separate communities, each of which is just the right size to sustain itself indefinitely. The astute reader may ask whether this is always the case: does every pattern dissipate into a stable pattern? Indeed, this was John Conway’s original question, and we will return to it in a moment.

For now, we notice that the original square preceding the flame grew until its side hit a wall. Now we realize that the wall was essential in its oscillation. So, let us use the symmetry in the pattern to artificially create a “wall” in the form of another origin square. After a bit of tweaking to get the spacing right (three cells separating the squares), we arrive at the following unexpected animation:

We admit, with four symmetrically oscillating flames, it looks more like a jellyfish than a fire. But while we meant to produce two flames, we ended up with four! Quite marvelous. Here is another beautiful reject, which we got by placing the two squares only one cell apart. Unfortunately, it evaporates rather quickly. We call it, the fleeting butterfly.

We refrain from experimenting with other perturbations of the two-square initial configuration for the sake of completing this post by the end of the year. If the reader happens to find an interesting pattern, he shouldn’t hesitate to post a comment!

Now, before returning to the stabilization question, we consider one more phenomenon: moving patterns. Consider the following initial configuration:

A few mundane calculations show that in four generations this pattern repeats itself, but a few cells to the south-east. This glider pattern will fly indefinitely to its demise in no man’s land, as we see below.

Awesome. And clearly, we can exploit the symmetry of this object to shoot the glider in all four directions. Let’s see what happens when they collide!

Well that was dumb. It’s probably too symmetric. We leave it as an exercise to the reader to slightly modify the initial position (given in the Mathematica notebook on this blog’s Github page) and witness the hopefully ensuing chaos.

Now you may have noticed that these designs are very pretty. Indeed, before the post intermission (there’s still loads more to explore), we will quickly investigate this idea.

Automata in Design

Using automata in design might seem rather far-fetched, and certainly would be difficult to implement (if not impossible) in an environment such as Photoshop or with CSS. But, recalling our post on Randomness in Design, it is only appropriate to show a real-world example of a design based on a cellular automaton (specifically, it seems to use something similar to rule 30 of the elementary automata). The prominent example at hand is the Conus seashell.

A Conus shell.

The Conus has cells which secrete pigment according to some unknown set of rules. That the process is a cellular automaton is stated but unsupported on Wikipedia. As unfortunate as that is, we may still appreciate that the final result looks like it was generated from a cellular automaton, and we can reproduce such designs with one. If I had more immediate access to a graphics library and had a bit more experience dealing with textures, I would gladly produce something. If at some point in the future I do get such experience, I would like to return to this topic and see what I can do. For the moment, however, we just admire the apparent connection.

A Tantalizing Peek

We have yet to breach the question of stabilization. In fact, though we started talking about models for computation, we haven’t actually computed anything besides pretty pictures yet! We implore the reader to have patience, and assert presciently that the question of stabilization comes first.

On one hand, we can prove that from any initial configuration Life always stabilizes, arriving at a state where cell population growth cannot continue. Alternatively, we could discover an initial configuration which causes unbounded population growth. The immature reader will notice that this mathematical object would not be very interesting if the former were the case, and so it is likely the latter. Indeed, without unbounded growth we wouldn’t be able to compute much! Before we actually find such a pattern, we realize that unbounded growth is possible in two different ways. First, a moving pattern (like the glider) may leave cells in its wake which do not disappear. Similarly, a stationary pattern may regularly emit moving patterns. Next time, we will give the canonical examples of such patterns, and show their use in turning Life into a model for computation. Finally, we have some additional ideas to spice Life up, but we will leave those as a surprise, defaulting to exclude them if they don’t pan out.

Until next time!

Prime Design

The goal of this post is to use prime numbers to make interesting and asymmetric graphics, and to do so in the context of the web design language CSS.

Number Patterns

For the longest time numbers have fascinated mathematicians and laymen alike. Patterns in numbers are decidedly simple to recognize, and the proofs of these patterns range from trivially elegant to Fields Medal worthy. Here’s an example of a simple one that computer science geeks will love:

Theorem: $ \sum \limits_{i=0}^{n} 2^i = 2^{n+1}-1$ for all natural numbers $ n$.

If you’re a mathematician, you might be tempted to use induction, but if you’re a computer scientist, you might think of using neat representations for powers of 2…

Proof: Consider the base 2 representation of $ 2^i$, which is a 1 in the $ i$th place and zeros everywhere else. Then we may write the summation as

$ \begin{matrix} & 100 & \dots & 0 \\ & 010 & \dots & 0 \\ & 001 & \dots & 0 \\ & & \vdots & \\ + & 000 & \dots & 1 \\ = & 111 & \dots & 1 \end{matrix}$

And clearly adding one to this sum gives the next largest power of 2. $ \square$

This proof extends quite naturally to powers of any base $ k$, giving the following identity. Try to prove it yourself using base $ k$ number representations!

$ \sum \limits_{i=0}^{n} k^i = \dfrac{k^{n+1}-1}{k-1}$
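Not that the proof needs it, but a quick numeric sanity check of the identity (the case $ k = 2$ being the original theorem) is irresistible for the programmers in the audience:

for k in range(2, 10):
    for n in range(15):
        assert sum(k**i for i in range(n + 1)) == (k**(n + 1) - 1) // (k - 1)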

The only other straightforward proof of this fact would require induction on $ n$, and as a reader points out in the comments (and I repeat in the proof gallery), it’s not so bad. But it was refreshing to discover this little piece of art on my own (and it dispelled my boredom during a number theory class). Number theory is full of such treasures.

Primes

Though there are many exciting ways in which number patterns overlap, there seems to be one grand, overarching undiscovered territory that drives research and popular culture’s fascination with numbers: the primes.

The first few prime numbers are $ 2,3,5,7,11,13,17,19,23, \dots $. Many elementary attempts to characterize the prime numbers yield results suggesting just how intractable they are. Here are a few:

  • There are infinitely many primes.
  • For any natural number $ n$, there exist two primes $ p_1, p_2$ with no primes between them and $ |p_1 - p_2| \geq n$. (there are arbitrarily large gaps between primes)
  • It is conjectured that for any natural number $ n$, there exist two primes $ p_1, p_2$ larger than $ n$ with $ |p_1 - p_2| = 2$. (no matter how far out you go, there are still primes that are as close together as they can possibly be)

Certainly then, these mysterious primes must be impossible to precisely characterize with some sort of formula. Indeed, it is simple to prove that there exists no non-constant polynomial formula with rational coefficients that always yields primes*, so the problem of generating primes via some formula is very hard. Even then, much interest has gone into finding polynomials which generate many primes (the first significant such example was $ n^2 + n + 41$, due to Euler, which yields primes for $ n < 40$), and this was one of the driving forces behind algebraic number theory, the study of special number rings called integral domains.

*Aside: considering the amazing way that the closed formula for the Fibonacci numbers uses irrational numbers to arrive at integers, I cannot immediately conclude whether the same holds for polynomials with arbitrary coefficients, or elementary/smooth functions in general. This question could be closely related to the Riemann hypothesis, and I’d expect a proof either way to be difficult. If any readers are more knowledgeable about this, please feel free to drop a comment.
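Returning to Euler’s polynomial for a moment, a quick check (an illustration only) confirms the claim above and shows where it first fails: at $ n = 40$, where $ 40^2 + 40 + 41 = 41^2$.

def euler(n):
    return n * n + n + 41

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

print(all(is_prime(euler(n)) for n in range(40)))   # True: prime for n = 0, ..., 39
print(euler(40), 41 ** 2, is_prime(euler(40)))      # 1681 1681 False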

However, the work of many great mathematicians over thousands of years is certainly not in vain. Despite their seeming randomness, the pattern in primes lies in their distribution, not in their values.

Theorem: Let $ \pi(n)$ be the number of primes less than or equal to $ n$ (called the prime counting function). Then

$ \lim \limits_{n \rightarrow \infty} \dfrac{\pi(n)}{n / \log(n)} = 1$

Intuitively, this means that $ \pi(n)$ is about $ n / \log(n)$ for large $ n$, or more specifically that if one picks a random number near $ n$, the chance of it being prime is about $ 1/ \log(n)$. Much of the work on prime numbers (including equivalent statements to the Riemann hypothesis) deals with these prime counting functions and their growth rates. But stepping back, this is a fascinatingly counterintuitive result: we can say with confidence how many primes there are in any given range, but determining what they are is exponentially harder!
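To watch the theorem in action, here is a small Python sketch comparing $ \pi(n)$ to $ n / \log(n)$ using a simple sieve (a standalone illustration; the convergence is notoriously slow):

from math import log

def count_primes(n):
    """pi(n) via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sum(sieve)

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, count_primes(n), round(count_primes(n) / (n / log(n)), 3))
# The ratio creeps toward 1; it is still above 1.08 at n = 10^6.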

And what’s more, many interesting features of the prime numbers have been stumbled upon entirely by accident. Unsurprisingly, these results are among the most confounding. Take, for instance, the following construction. Draw a square spiral starting with 1 in the center, and going counter-clockwise as below:

Number Spiral

If you circle all the prime numbers you’ll notice many of them spectacularly lie on common diagonals! If you continue this process for a long time, you’ll see that the primes continue to lie on diagonals, producing a puzzling pattern of dashed cross-hatches. This Ulam Spiral was named after its discoverer, Stanislaw Ulam, and the reasons for its appearance are still unknown (though conjectured).
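For the curious, here is a small Python sketch of the construction, marking primes with a ‘#’ (the starting direction is an arbitrary choice; any orientation shows the same diagonal streaks):

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def ulam_spiral(width):
    """The numbers 1..width^2 wound counter-clockwise around the center (width odd)."""
    grid = [[0] * width for _ in range(width)]
    x = y = width // 2
    dx, dy = 1, 0                  # start by stepping east
    step_len, steps, turns = 1, 0, 0
    for n in range(1, width * width + 1):
        grid[y][x] = n
        x, y = x + dx, y + dy
        steps += 1
        if steps == step_len:      # time to turn counter-clockwise
            steps = 0
            dx, dy = dy, -dx
            turns += 1
            if turns % 2 == 0:     # the arm length grows every other turn
                step_len += 1
    return grid

for row in ulam_spiral(21):
    print("".join("#" if is_prime(n) else "." for n in row))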

All of this wonderful mathematics aside, our interest in the primes lies in their apparent lack of patterns.

Primes in Design

One very simple but useful property of primes shows up in least common multiples. The product of two numbers is well known to equal the product of their least common multiple and greatest common divisor. In symbols:

$ \textup{gcd}(p,q) \textup{lcm}(p,q) = pq$

We are particularly interested in the case when $ p$ and $ q$ are prime, because then their greatest (and only) common divisor is 1, making this equation

$ \textup{lcm}(p,q) = pq$

The least common multiple manifests itself concretely in patterns. Using the numbers six and eight, draw two rows of 0’s and 1’s with a 1 every sixth character in the first row and every eighth character in the second. You’ll quickly notice that the ones line up every twenty-fourth character, the lcm of six and eight:

000001000001000001000001000001000001000001000001
000000010000000100000001000000010000000100000001

If we instead use two numbers $ p,q$ which are merely coprime (their greatest common divisor is 1, but they are not necessarily prime; say, 9 and 16), then the 1’s in their two rows still line up only every $ pq$ characters. Now for pretty numbers like six and eight, there still appears to be a mirror symmetry in the distribution of 1’s and 0’s above. However, if the two numbers are prime, this symmetry is much harder to see. Try 5 and 7:

0000100001000010000100001000010000100001000010000100001000010000100001
0000001000000100000010000001000000100000010000001000000100000010000001

There is much less obvious symmetry, and with larger primes it becomes even harder to tell that the pattern of matches isn’t random.
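Here is a quick Python sketch of the same observation, in the format of the rows above (an illustration only):

from math import gcd

def stripes(period, length):
    # A row with a 1 in every position divisible by `period`, and 0 elsewhere.
    return "".join("1" if (i + 1) % period == 0 else "0" for i in range(length))

p, q = 5, 7
lcm = p * q // gcd(p, q)            # = 35, since gcd(5, 7) = 1
row_p, row_q = stripes(p, 2 * lcm), stripes(q, 2 * lcm)
print(row_p)
print(row_q)
# Columns where both rows have a 1: exactly the multiples of lcm(p, q).
print([i + 1 for i in range(2 * lcm) if row_p[i] == row_q[i] == "1"])   # [35, 70]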

This trivial observation allows us to create marvelous, seemingly non-repeating patterns, provided we use large enough primes. However, patterns in strings of 1’s and 0’s are not quite visually appealing enough, so we will resort to overlaying multiple backgrounds in CSS. Consider the following three images, which have widths 23, 41, and 61 pixels, respectively.

23

41

61

Each has a prime width, semi-transparent color, and a portion of the image is deleted to achieve stripes when the image is x-repeated. Applying our reasoning from the 1’s and 0’s, this pattern will only repeat once every $ \textup{lcm}(23,41,61) = 23*41*61 = 57523$ pixels! As designers, this gives us a naturally non-repeating pattern of stripes, and we can control the frequency of repetition in our choice of numbers.

Here is the CSS code to achieve the result:

html {
   background-image: url(23.png), url(41.png), url(61.png);
}

I’m using Google Chrome, so this is all the CSS that’s needed. With other browsers you may need a few additional lines like “height: 100%” or “margin: 0”, but I’m not going to worry too much about that because any browser which supports multiple background images should get the rest right. Here’s the result of applying the CSS to a blank HTML webpage:

Now I’m no graphic designer by any stretch of my imagination. So as a warning to the reader, using these three particular colors may result in an eyesore more devastating than an 80’s preteen bedroom, but it illustrates the point of the primes, that on my mere 1440×900 display, the pattern never repeats itself. So brace yourself, and click the thumbnail to see the full image.

Now, to try something at least a little more visually appealing, we do the same process with circles of various sizes on square canvases with prime length sides ranging from 157×157 pixels to 419×419. Further, I included a little bash script to generate a css file with randomized background image coordinates. Here is the CSS file I settled on:

html {
   background-image: url(443.png), url(419.png), url(359.png),
      url(347.png), url(157.png), url(193.png), url(257.png),
      url(283.png);
   background-position: 29224 10426, 25224 24938, 8631 32461,
      22271 15929, 13201 7320, 30772 13876, 11482 15854,
      31716 21968;
}

With the associated bash script generating it:

#! /bin/bash

echo &quot;html {&quot;
echo -n &quot;   background-image: url(443.png), url(419.png), &quot;
echo -n &quot;url(359.png), url(347.png), url(157.png), url(193.png), &quot;
echo -n &quot;url(257.png), url(283.png);&quot;
echo -n &quot;   background-position: &quot;

for i in {1..7}
do
	echo -n &quot;$RANDOM $RANDOM, &quot;
done

echo &quot;$RANDOM, $RANDOM;&quot;
echo &quot;}&quot; 

Prime Circles

And here is the result. Again, this is not a serious attempt at a work of art. But while you might not call it visually beautiful, nobody can deny that its simplicity and its elegant mathematical undercurrent carry their own aesthetic beauty. This method, sometimes called the cicada principle, has recently attracted a following, and the Design Festival blog has a gallery of submitted designs, a few of which stood out. These submissions are the true works of art, though upon closer inspection many of them seem to use such large image sizes that there is only one tile on a small display, which means the interesting part (the patterns) can’t be seen without a ridiculously large screen or contrived html page widths.

So there you have it. Prime numbers contribute to interesting, unique designs that in their simplest form require very little planning. Designs become organic; they grow from just a few prime seedlings to a lush, screen-filling ecosystem. Of course, for those graphical savants out there, the possibilities are endless. But for the rest of us, we can use these principles to quickly build large-scale, visually appealing designs, leaving math-phobic designers in the dust.

It would make me extremely happy if any readers who play around and come up with a cool design submit them. Just send a link to a place where your design is posted, and if I get enough submissions I can create a gallery of my own 🙂

Until next time!