Learning to Love Complex Numbers

This post is intended for people with a little bit of programming experience and no prior mathematical background.

So let’s talk about numbers.

Numbers are curious things. On one hand, they represent one of the most natural things known to humans, which is quantity. It’s so natural to humans that even newborn babies are in tune with the difference between small quantities of objects, say between 1 and 3, in that they notice a change in quantity much more vividly than a change in other features like color or shape.

But our familiarity with quantity doesn’t change the fact that numbers themselves (as an idea) are a human invention. And they’re not like most human inventions, the kinds where you have to tinker with gears or circuits to get a machine that makes your cappuccino. No, these are mathematical inventions. These inventions exist only in our minds.

Numbers didn’t always exist. A long time ago, back when the Greek philosophers were doing their philosophizing, negative numbers didn’t exist! In fact, it wasn’t until 1200 AD that the number zero was first considered in Europe. Zero, along with negative numbers and fractions and square roots and all the rest, was invented primarily to help people solve more problems than they could with the numbers they had available. That is, numbers were invented primarily as a way for people to describe their ideas in a useful way. People simply wondered “is there a number whose square gives you 2?” And after a while they just decided there was and called it \sqrt{2} because they didn’t have a better name for it.

But with these new solutions came a host of new problems. You see, although I said mathematical inventions only exist in our minds, once they’re invented they gain a life of their own. You start to notice patterns in your mathematical objects and you have to figure out why they do the things they do. And numbers are a perfectly good example of this: once I notice that I can multiply a number by itself, I can ask how often these “perfect squares” occur. That is, what’s the pattern in the numbers 1^2, 2^2, 3^2, 4^2, \dots? If you think about it for a while, you’ll find that square numbers have a very special relationship with odd numbers.
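
If you want a nudge, here’s a quick way to watch the pattern emerge in Python (a language we’ll lean on later in this post): list a few squares and look at the gaps between consecutive ones.

>>> squares = [n**2 for n in range(1, 8)]
>>> squares
[1, 4, 9, 16, 25, 36, 49]
>>> [b - a for (a, b) in zip(squares, squares[1:])]
[3, 5, 7, 9, 11, 13]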

Other times, however, the things you invent turn out to make no sense at all, and you can prove they never existed in the first place! It’s an odd state of affairs, but we’re going to approach the subject of complex numbers from this mindset. We’re going to come up with a simple idea, the idea that negative numbers can be perfect squares, and explore the world of patterns it opens up. Along the way we’ll do a little bit of programming to help explore, give some simple proofs to solidify our intuition, and by the end we’ll see how these ideas can cause wonderful patterns like this one:

[image: the Mandelbrot set]

The number i

Let’s bring the story back around to squares. One fact we all remember about numbers is that squaring a number gives you something non-negative. 7^2 = 49, (-2)^2 = 4, 0^2 = 0, and so on. But it certainly doesn’t have to be this way. What if we got sick of that stupid fact and decided to invent a new number whose square was negative? Which negative, you ask? Well it doesn’t really matter, because I can always stretch it larger or smaller so that its square is -1.

Let’s see how: if you say that your made-up number x makes x^2 = -7, then I can just use \frac{x}{\sqrt{7}} to get a number whose square is -1. If you’re going to invent a number that’s supposed to interact with our usual numbers, then you have to be allowed to add, subtract, and multiply x with regular old real numbers, and the usual properties would have to still work. So it would have to be true that (x / \sqrt{7})^2 = x^2 / \sqrt{7}^2 = -7/7 = -1.

So because it makes no difference (this is what mathematicians mean by, “without loss of generality”) we can assume that the number we’re inventing will have a square of negative one. Just to line up with history, let’s call the new number i. So there it is: i exists and i^2 = -1. And now that we are “asserting” that i plays nicely with real numbers, we get these natural rules for adding and subtracting and multiplying and dividing. For example

  • 1 + i is a new number, which we’ll just call 1+i. And if we added two of these together, (1+i) + (1+i), we can combine the real parts and the i parts to get 2 + 2i. Same goes for subtraction. In general a complex number looks like a + bi, because as we’ll see in the points below you can simplify every simple arithmetic expression down to just one “real number” part and one “real number times i” part.
  • We can multiply 3 \cdot i, and we’ll just call it 3i, and we require that multiplication distributes across addition (that the FOIL rule works). So that, for example, (2 - i)(1 + 3i) = (2 + 6i - i - 3i^2) = (2 + 3) + (6i - i) = (5 + 5i).
  • Dividing is significantly more annoying. Say we want to figure out what 1 / (1+i) is (in fact, it’s not even obvious that this should look like a regular complex number! But it does). The 1 / a notation just means we’re looking for a number which, when we multiply by the denominator a, we get back to 1. So we’re looking to find out when (a + bi)(1 + i) = 1 + 0i, where a and b are variables we’re trying to solve for. If we multiply it out we get (a-b) + (a + b)i = 1 + 0i, and since the real part and the i part have to match up, we know that a - b = 1 and a + b = 0. If we solve these two equations, we find that a = 1/2, b = -1/2 works great. If we want to figure out something like (2 + 3i) / (1 - i), we just find out what 1 / (1-i) is first, and then multiply the result by (2+3i).

So that was tedious and extremely boring, and we imagine you didn’t even read it (that’s okay, it really is boring!). All we’re doing is establishing ground rules for the game, so if you come across some arithmetic that doesn’t make sense, you can refer back to this list to see what’s going on. And once again, for the purpose of this post, we’re asserting that all these laws hold. Maybe some laws follow from others, but as long as we don’t come up with any nasty self-contradictions we’ll be fine.

And now we turn to the real questions: is i the only square root of -1? Does i itself have a square root? If it didn’t, we’d be back to where we started, with some numbers (the non-i numbers) having square roots while others don’t. And so we’d feel the need to make all the i numbers happy by making up more numbers to be their square roots, and then worrying what if these new numbers don’t have square roots and…gah!

I’ll just let you in on the secret to save us from this crisis. It turns out that i does have a square root in terms of other i numbers, but in order to find it we’ll need to understand i from a different angle, and that angle turns out to be geometry.

Geometry? How is geometry going to help me understand numbers!?

It’s a valid question and part of why complex numbers are so fascinating. And I don’t mean geometry like triangles and circles and parallel lines (though there will be much talk of angles), I mean transformations in the sense that we’ll be “stretching,” “squishing,” and “rotating” numbers. Maybe another time I can tell you why for me “geometry” means stretching and rotating; it’s a long but very fun story.

The clever insight is that you can represent complex numbers as geometric objects in the first place. To do it, you just think of a + bi as a pair of numbers (a,b), (the pair of real part and i part), and then plot that point on a plane. For us, the x-axis will be the “real” axis, and the y-axis will be the i-axis. So the number (3 - 4i) is plotted 3 units in the positive x direction and 4 units in the negative y direction. Like this:

[image: the number 3 - 4j plotted as an arrow in the plane]

The “j” instead of “i” is not a typo, but a disappointing fact about the programming language we used to make this image. We’ll talk more about why later.

We draw it as an arrow for a good reason. Stretching, squishing, rotating, and reflecting will all be applied to the arrow, keeping its tail fixed at the center of the axes. Sometimes the arrow is called a “vector,” but we won’t use that word because here it’s synonymous with “complex number.”

So let’s get started squishing stuff.

Stretching, Squishing, Rotating

Before we continue I should clear up some names. We call a number that has an i in it a complex number, and we call the part without the i the real part (like the 2 in 2-i) and the part with i the imaginary part.

Python is going to be a great asset for us in exploring complex numbers, so let’s jump right into it. It turns out that Python natively supports complex numbers, and I wrote a program for drawing complex numbers. I used it to make the plot above. The program depends on a library I hate called matplotlib, and so the point of the program is to shield you from as much pain as possible and focus on complex numbers. You can use the program by downloading it from this blog’s Github page, along with everything else I made in writing this post. All you need to know how to do is call a function, and I’ve done a bit of window dressing removal to simplify things (I really hate matplotlib).

Here’s the function header:

# plotComplexNumbers : [complex] -> None
# display a plot of the given list of complex numbers
def plotComplexNumbers(numbers):
   ...

Before we show some examples of how to use it, we have to understand how to use complex numbers in Python. It’s pretty simple, except that Python was written by people who hate math, and so they decided the complex number would be represented by j instead of i (people who hate math are sometimes called “engineers,” and they use j out of spite. Not really, though).

So in Python it’s just like any other computation. For example:

>>> (1 + 1j)*(4 - 2j) == (6+2j)
True
>>> 1 / (1+1j)
(0.5-0.5j)

And so calling the plotting function with a given list of complex numbers is as simple as importing the module and calling the function:

from plotcomplex import plot
plot.plotComplexNumbers([(-1+1j), (1+2j), (-1.5 - 0.5j), (.6 - 1.8j)])

Here’s the result:

[image: example-complex-plot]

So let’s use plots like this one to explore what “multiplication by i” does to a complex number. It might not seem exciting at first, but I promise there’s a neat punchline.

Even without plotting it’s pretty easy to tell what multiplying by i does to some numbers. It takes 1 to i, moves i to i^2 = -1, it takes -1 to -i, and -i to -i \cdot i = 1.

What’s the pattern in these? Well, if we plot all these numbers, they’re all at right angles in counter-clockwise order. So this might suggest that multiplication by i does some kind of rotation. Is that always the case? Let’s try it with some other, more complicated numbers.

[images: plots of more complex numbers, before and after multiplying by i]

It looks close, but it’s hard to tell. Some of the axes are squished and stretched, so it might be that our images don’t accurately represent the numbers (the real world can be such a pain). When visual techniques fail, we can attempt to prove it.

Clearly multiplying by i does some kind of rotation, maybe with other stuff too, and it shouldn’t be so hard to see that multiplying by i does the same thing no matter which number you use (okay, the skeptical readers will say that’s totally hard to see, but we’ll prove it super rigorously in a minute). So if we take any number and multiply it by i once, then twice, then three times, then four, and if we only get back to where we started at four multiplications, then each rotation had to be a quarter turn.

Indeed,

\displaystyle (a + bi) i^4 = (ai - b) i^3 = (-a - bi) i^2 = (-ai + b) i = a + bi

This still isn’t all that convincing, and we want to be 100% sure we’re right. What we really need is a way to arithmetically compute the angle between two complex numbers in their plotted forms. What we’ll do is find a way to measure the angle of one complex number with the x-axis, and then by subtraction we can get angles between arbitrary points. For example, in the figure below \theta = \theta_1 - \theta_2.

[image: angle-example, the angle between two complex numbers as a difference of their angles with the x-axis]

One way to do this is with trigonometry: the geometric drawing of a + bi is the hypotenuse of a right triangle with the x-axis.

[image: triangle-example, a + bi as the hypotenuse of a right triangle with the x-axis]

And so if r is the length of the arrow, then by the definition of sine and cosine, \cos(\theta) = a/r, \sin(\theta) = b/r. If we have r and \theta with r > 0, we can solve for a unique a and b, so instead of representing a complex number in terms of the pair of numbers (a,b), we can represent it with the pair of numbers (r, \theta). And the conversion between the two is just

a + bi = r \cos(\theta) + (r \sin(\theta)) i

The (r, \theta) representation is called the polar representation, while the (a,b) representation is called the rectangular representation or the Cartesian representation. Converting between polar and Cartesian coordinates fills the pages of many awful pre-calculus textbooks (despite the fact that complex numbers don’t exist in classical calculus). Luckily for us Python has built-in functions to convert between the two representations for us.

>>> import cmath
>>> cmath.polar(1 + 1j)
(1.4142135623730951, 0.7853981633974483)
>>> z = cmath.polar(1 + 1j)
>>> cmath.rect(z[0], z[1])
(1.0000000000000002+1j)

It’s a little bit inaccurate on the rounding, but it’s fine for our purposes.

So how do we compute the angle between two complex numbers? Just convert each to the polar form, and subtract the second coordinates. So if we get back to our true goal, to figure out what multiplication by i does, we can just do everything in polar form. Here’s a program that computes the angle between two complex numbers.

import cmath

def angleBetween(z, w):
   # convert both numbers to polar form and subtract the angles
   zPolar, wPolar = cmath.polar(z), cmath.polar(w)
   return wPolar[1] - zPolar[1]

print(angleBetween(1 + 1j, (1 + 1j) * 1j))
print(angleBetween(2 - 3j, (2 - 3j) * 1j))
print(angleBetween(-0.5 + 7j, (-0.5 + 7j) * 1j))

Running it gives

1.5707963267948966
1.5707963267948966
-4.71238898038469

Note that the decimal form of \pi/2 is 1.57079…, and that the negative angle is equivalent to \pi/2 if you add a full turn of 2\pi to it. So programmatically we can see that, for every input we try, multiplying by i rotates by 90 degrees.

But we still haven’t proved it works. So let’s do that now. To say what the angle is between r \cos (\theta) + ri \sin (\theta) and i \cdot [r \cos (\theta) + ri \sin(\theta)] = -r \sin (\theta) + ri \cos(\theta), we need to transform the second number into the usual polar form (where the i is on the sine part and not the cosine part). But we know, or I’m telling you now, this nice fact about sine and cosine:

\displaystyle \sin(\theta + \pi/2) = \cos(\theta)
\displaystyle \cos(\theta + \pi/2) = -\sin(\theta)

This fact is maybe awkward to write out algebraically, but it’s just saying that if you shift the whole sine curve a little bit you get the cosine curve, and if you keep shifting it you get the opposite of the sine curve (and if you kept shifting it even more you’d eventually get back to the sine curve; they’re called periodic for this reason).

So immediately we can rewrite the second number as r \cos(\theta + \pi/2) + i r \sin (\theta + \pi/2). The angle is the same as the original angle plus a right angle of \pi/2. Neat!

Applying this same idea to (a + bi) \cdot (c + di), it’s not much harder to prove that multiplying two complex numbers in general multiplies their lengths and adds their angles. So if a complex number z has its magnitude r smaller than 1, multiplying by z squishes and rotates whatever is being multiplied. And if the magnitude is greater than 1, it stretches and rotates. So we have a super simple geometric understanding of how arithmetic with complex numbers works. And as we’re about to see, all this stretching and rotating results in some really weird (and beautifully mysterious!) mathematics and programs.
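
If you’d like a quick numerical sanity check of that claim before grinding through the trigonometry, cmath makes it painless; the two numbers below are arbitrary choices of mine:

>>> import cmath
>>> z, w = 2 - 3j, -1 + 4j
>>> (rz, az), (rw, aw) = cmath.polar(z), cmath.polar(w)
>>> r, theta = cmath.polar(z * w)
>>> abs(r - rz * rw) < 1e-12, abs(theta - (az + aw)) < 1e-12
(True, True)

(If the two angles summed to more than \pi you’d have to account for wrapping around by 2\pi, but these choices avoid that.)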

But before we do that we still have one question to address, the question that started this whole geometric train of thought: does i have a square root? Indeed, I’m just looking for a number such that, when I square its length and double its angle, I get i = \cos(\pi/2) + i \sin(\pi/2). So the angle we want is \pi/4, and the length we want is r = 1, which means \sqrt{i} = \cos(\pi/4) + i \sin(\pi/4). Sweet! There is another root if you play with the signs; see if you can figure it out.
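
A quick check that this really is a square root of i, using the same polar tools:

>>> import cmath, math
>>> root = cmath.rect(1, math.pi / 4)  # length 1, angle pi/4
>>> abs(root**2 - 1j) < 1e-12
True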

In fact this is part of a much deeper and more beautiful theorem (“theorem” means “really important fact”) called the fundamental theorem of algebra. And essentially it says that the complex numbers are complete. That is, we can always find square roots, cube roots, or any roots of numbers involving i. It actually says a lot more, but it’s easier to appreciate the rest of it after you do more math than we’re going to do in this post.

On to pretty patterns!

The Fractal

So here’s a little experiment. Since every point in the plane is the end of some arrow representing a complex number, we can imagine transforming the entire complex plane by transforming each number by the same rule. The most interesting simple rule we can think of: squaring! So though it might strain your capacity for imagination, try to visualize the idea like this. Squaring a complex number is the same as squaring its length and doubling its angle. So imagine: any numbers whose arrows are longer than 1 will grow much bigger, arrows shorter than 1 will shrink, and arrows of length exactly one will stay the same length (arrows close to length 1 will grow/shrink much more slowly than those far away from 1). And complex numbers with small positive angles will increase their angle, but only a bit, while larger angles will grow faster.

Here’s an animation made by Douglas Arnold showing what happens to the set of complex numbers a + bi with 0 \leq a, b \leq 1 or -1 < a,b < 0. Again, imagine every point is the end of a different arrow for the corresponding complex number. The animation is for a single squaring, and the points move along the arc they would travel if one rotated/stretched them smoothly.

[animation: complex-squaring]

So that’s pretty, but this is by all accounts a well-behaved transformation. It’s “predictable,” because for example we can always tell which complex numbers will get bigger and bigger (in length) and which will get smaller.

What if, just for the sake of tinkering, we changed the transformation a little bit? That is, instead of sending z = a+bi to z^2 (I’ll often write this z \mapsto z^2), what if we sent

\displaystyle z \mapsto z^2 + 1

Now it’s not so obvious: which vectors will grow and which will shrink? Notice that it’s odd because adding 1 only changes the real part of the number. So a number whose length is greater than 1 can become small under this transformation. For example, i is sent to 0, so something slightly larger would also be close to zero. Indeed, 5i/4 \mapsto -9/16.

So here’s an interesting question: are there any complex numbers that will stay small even if I keep transforming like this forever? Specifically, if I call f(z) = z^2 + 1, f^2(z) = f(f(z)), and likewise f^k(z) for k repeated transformations of z, is there a number z so that for every k, the length of f^k(z) is less than 2? “Obvious” choices like z=0 don’t work, and neither do random guesses like z=i or z=1. So should we guess the answer is no?

Before we jump to conclusions let’s write a program to see what happens for more than our random guesses. The program is simple: we’ll define the “square plus one” function, and then repeatedly apply that function to a number for some long number of times (say, 250 times). If the length of the number stays under 2 after so many tries, we’ll call it “small forever,” and otherwise we’ll call it “not small forever.”

def squarePlusOne(z):
   return z*z + 1

def isSmallForever(z, f):
   k = 0

   while abs(z) < 2:
      z = f(z)
      k += 1

      if k > 250:
         return True

   return False

This isSmallForever function is generic: you can give it any function f and it will repeatedly call f on z until the result grows bigger than 2 in length. Note that the abs function is a built-in Python function for computing the length of a complex number.

Then I wrote a classify function, which you can give a window and a small increment, and it will produce a grid of zeros and ones marking the results of isSmallForever. The details of the function are not that important. I also wrote a function that turns the grid into a picture. So here’s an example of how we’d use it:

from plotcomplex.plot import gridToImage

def classifySquarePlusOne(z):
   return isSmallForever(z, squarePlusOne)

grid = classify(classifySquarePlusOne) # the other arguments are defaulted to [-2,2], [-2,2], 0.1
gridToImage(grid)

And here’s the result. Points colored black grow beyond 2, and white points stay small for the whole test.

Looks like they’ll always grow big.

So it looks like repeated squaring plus one will always make complex numbers grow big. That’s not too exciting, but we can always make it more exciting. What happens if we replace the 1 in z^2 + 1 with a different complex number? For example, if we do z^2 - 1 then will things always grow big?

You can randomly guess and see that 0 will never grow big, because 0^2 - 1 = -1 and (-1)^2 - 1 = 0. It will just oscillate forever. So with -1 some numbers will grow and some will not! Let’s use the same routine above to see which:

def squareMinusOne(z):
   return z*z - 1

def classifySquareMinusOne(z):
   return isSmallForever(z, squareMinusOne)

grid = classify(classifySquareMinusOne)
gridToImage(grid)

And the result:

[image: second-attempt]

Now that’s a more interesting picture! Let’s ramp up the resolution:

grid = classify(classifySquareMinusOne, step=0.001)
gridToImage(grid)

[image: second-attempt-zoomed]

Gorgeous. If you try this at home you’ll notice, however, that this took a hell of a long time to run. Speeding up our programs is very possible, but it’s a long story for another time. For now we can just be patient.

Indeed, this image has a ton of interesting detail! It looks almost circular in the middle, but if we zoom in we can see that it’s more like a rippling wave:

[image: second-attempt-zoomed2]

It’s pretty incredible, and a huge question is jumping out at me: what the heck is causing this pattern to occur? What secret does -1 know that +1 doesn’t that makes the resulting pattern so intricate?

But an even bigger question is this: we just discovered that some values of c make z \mapsto z^2 + c result in interesting patterns. Which values do, and which do not? Even if we just, say, fix the starting point to zero: what is the pattern in the complex numbers c that would tell me when this transformation makes zero blow up, and when it keeps zero small?

Sounds like a job for another program. This time we’ll use a nice little Python feature called a closure, in which we define a function that saves for later the information that exists when it’s created. It will let us write a function that takes in c and produces a function that transforms according to z \mapsto z^2+c.

def squarePlusC(c):
   def f(z):
      return z*z + c

   return f

And we can use the very same classification/graphing function from before to do this.

def classifySquarePlusC(c):
   return isSmallForever(0, squarePlusC(c))

grid = classify(classifySquarePlusC, xRange=(-2, 1), yRange=(-1, 1), step=0.005)
gridToImage(grid)

And the result:

[image: the Mandelbrot set]

Stunning. This wonderful pattern, which is still largely not understood today, is known as the Mandelbrot set. That is, the white points are the points in the Mandelbrot set, and the black points are not in it. The detail on the border of this thing is infinitely intricate. For example, we can change the window in our little program to zoom in on a particular region.

[image: mandelbrot-zoomed]

And if you keep zooming in you keep getting more and more detail. This was true of the specific case of z^2 - 1, but somehow the patterns in the Mandelbrot set are much more varied and interesting. And if you keep going down eventually you’ll see patterns that look like the original Mandelbrot set. We can already kind of see that happening above. The name for this idea is a fractal, and the z^2 - 1 image has it too. Fractals are a fascinating and mysterious subject studied in a field called discrete dynamical systems. Many people dedicate their entire lives to studying these things, and it’s for good reason. There’s a lot to learn and even more that’s unknown!

So this is the end of our journey for now. I’ve posted all of the code we used in the making of this post so you can continue to play, but here are some interesting ideas.

  • The Mandelbrot set (and most fractals) are usually colored. The way they’re colored is as follows. Rather than just say true or false when zero blows up beyond 2 in length, you return the number of iterations k that happened. Then you pick a color based on how big k is (a sketch of this idea follows the list). There’s a link below that lets you play with this. In fact, adding colors shows that there is even more intricate detail happening outside the Mandelbrot set that’s too faint to see in our pictures above. Such as this.
  • Some very simple questions about fractals are very hard to answer. For example, is the Mandelbrot set connected? That is, is it possible to “walk” from every point in the Mandelbrot set to every other point without leaving the set? Despite the scattering of points in the zoomed in picture above that suggest the answer is no, the answer is actually yes! This is a really difficult thing to prove, however.
  • The patterns in many fractals are often used to generate realistic-looking landscapes and to generate pseudorandomness. So fractals are not just mathematical curiosities.
  • You should definitely be experimenting with this stuff! What happens if you change the length threshold from 2 to some bigger number? What about a smaller number? What if you do powers different than 2? There’s so much to explore!
  • The big picture thing to take away from this is that it’s not the numbers themselves that are particularly interesting, it’s the transformations of the numbers that generate these patterns! The interesting questions are what kinds of things are the same under these transformations, and what things are different. This is a very general idea in mathematics, and the more math you do the more you’ll find yourself wondering about useful and bizarre transformations.
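
Here’s the promised sketch of the coloring idea from the first bullet. It’s a variant of isSmallForever that reports the iteration count instead of a boolean; the name escapeTime and the grayscale palette are my own choices, not something from the code repository:

def escapeTime(z, f, maxIterations=250):
   # like isSmallForever, but report how quickly the point escaped
   k = 0
   while abs(z) < 2:
      z = f(z)
      k += 1
      if k > maxIterations:
         return maxIterations
   return k

def grayscale(k, maxIterations=250):
   # slow escapes get light shades, fast escapes get dark ones;
   # points that never escape come out white, matching our pictures above
   shade = int(255 * k / maxIterations)
   return (shade, shade, shade)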

For the chance to keep playing with the Mandelbrot set, check out this Mandelbrot grapher that works in your browser. It lets you drag rectangles to zoom further in on regions of interest. It’s really fun.

Until next time!

Sending and Authenticating Messages with Elliptic Curves

Last time we saw the Diffie-Hellman key exchange protocol, and discussed the discrete logarithm problem and the related Diffie-Hellman problem, which form the foundation for the security of most protocols that use elliptic curves. Let’s continue our journey to investigate some more protocols.

Just as a reminder, the Python implementations of these protocols are not at all meant for practical use, but for learning purposes. We provide the code on this blog’s Github page, but for the love of security don’t actually use them.

Shamir-Massey-Omura

Recall that there are lots of ways to send encrypted messages if you and your recipient share some piece of secret information, and the Diffie-Hellman scheme allows one to securely generate a piece of shared secret information. Now we’ll shift gears and assume you don’t have a shared secret, nor any way to acquire one. The first cryptosystem in that vein is called the Shamir-Massey-Omura protocol. It’s only slightly more complicated to understand than Diffie-Hellman, and it turns out to be equivalently difficult to break.

The idea is best explained by metaphor. Alice wants to send a message to Bob, but all she has is a box and a lock for which she has the only key. She puts the message in the box and locks it with her lock, and sends it to Bob. Bob can’t open the box, but he can send it back with a second lock on it for which Bob has the only key. Upon receiving it, Alice unlocks her lock, sends the box back to Bob, and Bob can now open the box and retrieve the message.

To celebrate the return of Game of Thrones, we’ll demonstrate this protocol with an original Lannister Infographic™.

Assuming the box and locks are made of magically unbreakable Valyrian steel, nobody but Bob (also known as Jamie) will be able to read the message.

Now fast forward through the enlightenment, industrial revolution, and into the age of information. The same idea works, and it’s significantly faster over long distances. Let C be an elliptic curve over a finite field k (we’ll fix k = \mathbb{Z}/p for some prime p, though it works for general fields too). Let n be the number of points on C.

Alice’s message is going to be in the form of a point M on C. She’ll then choose her secret integer 0 < s_A < n with \gcd(s_A, n) = 1 and compute s_AM (locking the secret in the box), sending the result to Bob. Bob will likewise pick a secret integer s_B (also coprime to n), and send s_Bs_AM back to Alice.

Now the unlocking part: since s_A was chosen to be invertible modulo n, Alice can “unlock the box” by computing the inverse s_A^{-1} \mod n and computing s_BM = s_A^{-1}s_Bs_AM (this works because nM is the identity on the curve, so multiplying a point by an integer only depends on that integer modulo n). Now the “box” just has Bob’s lock on it. So Alice sends s_BM back to Bob, and Bob performs the same process to evaluate s_B^{-1}s_BM = M, thus receiving the message.
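
To make the back-and-forth concrete, here’s a minimal sketch of the whole exchange. It assumes the elliptic curve point classes from this series (so that an integer times a point works), that n is the number of points on the curve, and Python 3.8+ so that pow(s, -1, n) computes a modular inverse; the function names are mine:

import random
from math import gcd

def randomUnit(n):
   # a secret integer invertible mod n
   while True:
      s = random.randrange(2, n)
      if gcd(s, n) == 1:
         return s

def shamirMasseyOmura(M, n):
   sA, sB = randomUnit(n), randomUnit(n)        # Alice's and Bob's secrets

   lockedByAlice = sA * M                       # Alice -> Bob
   lockedByBoth = sB * lockedByAlice            # Bob -> Alice
   lockedByBob = pow(sA, -1, n) * lockedByBoth  # Alice removes her lock
   return pow(sB, -1, n) * lockedByBob          # Bob recovers M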

Like we said earlier, the security of this protocol is equivalent to the security of the Diffie-Hellman problem. In this case, if we call z = s_A^{-1} and y = s_B^{-1}, and P = s_As_BM, then it’s clear that any eavesdropper would have access to P, zP, and yP, and they would be tasked with determining zyP, which is exactly the Diffie-Hellman problem.

Now Alice’s secret message comes in the form of a point on an elliptic curve, so how might one translate part of a message (which is usually represented as an integer) into a point? This problem seems to be difficult in general, and there’s no easy answer. Here’s one method originally proposed by Neal Koblitz that uses a bit of number theory trickery.

Let C be given by the equation y^2 = x^3 + ax + b, again over \mathbb{Z}/p. Suppose 0 \leq m < p/100 is our message. Define for any 0 \leq j < 100 the candidate x-points x_j = 100m + j. Then call our candidate y^2-values s_j = x_j^3 + ax_j + b. Now for each j we can compute x_j, s_j, and so we’ll pick the first one for which s_j is a square in \mathbb{Z}/p and we’ll get a point on the curve. How can we tell if s_j is a square? One condition is that s_j^{(p-1)/2} \equiv 1 \mod p. This is a basic fact about quadratic residues modulo primes; see these notes for an introduction and this Wikipedia section for a dense summary.

Once we know it’s a square, we can compute the square root depending on whether p \equiv 1 \mod 4 or p \equiv 3 \mod 4. In the latter case, it’s just s_j^{(p+1)/4} \mod p. Unfortunately the former case is more difficult (really, the difficult part is p \equiv 1 \mod 8). You can see Section 1.5 of this textbook for more details and three algorithms, or you could just pick primes congruent to 3 mod 4.
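
Here’s a sketch of Koblitz’s encoding for the easy case p \equiv 3 \mod 4; the function name is mine, and decoding is just m = x // 100:

def koblitzEncode(m, a, b, p):
   # assumes p % 4 == 3 and 0 <= m < p/100
   for j in range(100):
      x = 100 * m + j
      s = (x**3 + a*x + b) % p
      if pow(s, (p - 1) // 2, p) == 1:   # Euler's criterion: s is a square mod p
         y = pow(s, (p + 1) // 4, p)     # a square root of s, valid since p = 3 mod 4
         return (x, y)

   return None   # each j fails roughly half the time, so reaching here is vanishingly rare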

I have struggled to find information about the history of the Shamir-Massey-Omura protocol; every author claims it’s not widely used in practice, and the only reason seems to be that this protocol doesn’t include a suitable method for authenticating the validity of a message. In other words, some “man in the middle” could be intercepting messages and tricking you into thinking he is your intended recipient. Coupling this with the difficulty of encoding a message as a point seems to be enough to make cryptographers look for other methods. Another reason could be that the system was patented in 1982 and is currently held by SafeNet, one of the US’s largest security providers. All of their products have generic names so it’s impossible to tell if they’re actually using Shamir-Massey-Omura. I’m no patent lawyer, but it could simply be that nobody else is allowed to implement the scheme.

Digital Signatures

Indeed, the discussion above raises the question: how does one authenticate a message? The standard technique is called a digital signature, and we can implement those using elliptic curve techniques as well. To debunk the naive idea, one cannot simply attach some static piece of extra information to the message. An attacker could just copy that information and replicate it to forge your signature on another, potentially malicious document. In other words, a signature should only work for the message it was used to sign. The technique we’ll implement was originally proposed by Taher Elgamal, and is called the ElGamal signature algorithm. We’re going to look at a special case of it.

So Alice wants to send a message m with some extra information that is unique to the message and that can be used to verify that it was sent by Alice. She picks an elliptic curve E over \mathbb{F}_q in such a way that the number of points on E is br, where b is a small integer and r is a large prime.

Then, as in Diffie-Hellman, she picks a base point Q that has order r and a secret integer s (which is permanent), and computes P = sQ. Alice publishes everything except s:

Public information: \mathbb{F}_q, E, b, r, Q, P

Let Alice’s message m be represented as an integer at most r (there are a few ways to get around this if your message is too long). Now to sign m Alice picks a message-specific k < r and computes what I’ll call the auxiliary point A = kQ. Let A = (x, y). Alice then computes the signature g = k^{-1}(m + sx) \mod r. The signed message is then (m, A, g), which Alice can safely send to Bob.

Before we see how Bob verifies the message, notice that the signature integer involves everything: Alice’s secret key, the message-specific secret integer k, and most importantly the message. Remember that this is crucial: we want the signature to work only for the message that it was used to sign. If the same k is used for multiple messages then the attacker can find out your secret key! (And this has happened in practice; see the end of the post.)

So Bob receives (m, A, g), and also has access to all of the public information listed above. Bob authenticates the message by computing the auxiliary point via a different route. First, he computes c = g^{-1} m \mod r and d = g^{-1}x \mod r, and then A' = cQ + dP. If the message was signed by Alice then A' = A, since we can just write out the definition of everything:

\displaystyle A' = cQ + dP = g^{-1}mQ + g^{-1}x(sQ) = g^{-1}(m + sx)Q = kQ = A

Now to analyze the security. The attacker wants to be able to take any message m' and produce a signature A', g' that will pass validation with Alice’s public information. If the attacker knew how to solve the discrete logarithm problem efficiently this would be trivial: compute s and then just sign like Alice does. Without that power there are still a few options. If the attacker can figure out the message-specific integer k, then she can compute Alice’s secret key s as follows.

Given g = k^{-1}(m + sx) \mod r, compute kg \equiv (m + sx) \mod r. Compute d = gcd(x, r), and you know that this congruence has only d possible solutions modulo r. Since s is less than r, the attacker can just try all options until they find P = sQ. So that’s bad, but in a properly implemented signature algorithm finding k is equivalently hard to solving the discrete logarithm problem, so we can assume we’re relatively safe from that.
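
In the typical case x \not\equiv 0 \mod r, so d = 1 and no search is needed at all. Here’s the recovery as a one-line sketch, assuming Python 3.8+ for the modular inverse via pow:

def recoverSecretKey(k, g, m, x, r):
   # solve g = inverse(k) * (m + s*x) mod r for s, assuming gcd(x, r) = 1
   return ((k * g - m) * pow(x, -1, r)) % r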

On the other hand one could imagine being able to conjure the pieces of the signature A', g' by some method that doesn’t involve directly finding Alice’s secret key. Indeed, this problem is less well-studied than the Diffie-Hellman problem, but most cryptographers believe it’s just as hard. For more information, this paper surveys the known attacks against this signature algorithm, including a successful attack for fields of characteristic two.

Signature Implementation

We can go ahead and implement the signature algorithm once we’ve picked a suitable elliptic curve. For the purpose of demonstration we’ll use a small curve, E: y^2 = x^3 + 3x + 181 over F = \mathbb{Z}/1061, whose number of points happens to have a suitable prime factorization (1047 = 3 \cdot 349). If you’re interested in counting the number of points on an elliptic curve, there are many theorems and efficient algorithms to do this, and if you’ve been reading this whole series then an algorithm based on the Baby-Step Giant-Step idea would be easy to implement. For the sake of brevity, we leave it as an exercise to the reader.

Note that the code we present is based on the elliptic curve and finite field code we’ve been implementing as part of this series. All of the code used in this post is available on this blog’s Github page.

The basepoint we’ll pick has to have order 349, and E has plenty of candidates. We’ll use (2, 81), and we’ll randomly generate a secret key that’s less than 349 (eight bits will do). So our setup looks like this:

if __name__ == "__main__":
   F = FiniteField(1061, 1)

   # y^2 = x^3 + 3x + 181
   curve = EllipticCurve(a=F(3), b=F(181))
   basePoint = Point(curve, F(2), F(81))
   basePointOrder = 349
   secretKey = generateSecretKey(8)
   publicKey = secretKey * basePoint

Then, to sign a message, we generate a random one-time key, construct the auxiliary point and the signature, and return:

def sign(message, basePoint, basePointOrder, secretKey):
   modR = FiniteField(basePointOrder, 1)
   oneTimeSecret = generateSecretKey(len(bin(basePointOrder)) - 3) # numbits(order) - 1

   auxiliaryPoint = oneTimeSecret * basePoint
   signature = (modR(oneTimeSecret).inverse() *
         (modR(message) + modR(secretKey) * modR(auxiliaryPoint[0])))

   return (message, auxiliaryPoint, signature)

So far so good. Note that we generate the message-specific k at random, and this implies we need a high-quality source of randomness (what’s called a cryptographically secure pseudorandom number generator). In the absence of that there are proposed deterministic methods for doing it. See this draft proposal of Thomas Pornin, and this paper of Daniel Bernstein for another.

Now to authenticate, we follow the procedure from earlier.

def authentic(signedMessage, basePoint, basePointOrder, publicKey):
   modR = FiniteField(basePointOrder, 1)
   (message, auxiliary, signature) = signedMessage

   sigInverse = modR(signature).inverse() # sig can be an int or a modR already
   c, d = sigInverse * modR(message), sigInverse * modR(auxiliary[0])

   auxiliaryChecker = int(c) * basePoint + int(d) * publicKey
   return auxiliaryChecker == auxiliary

Continuing with our example, we pick a message represented as an integer smaller than r, sign it, and validate it.

>>> message = 123
>>> signedMessage = sign(message, basePoint, basePointOrder, secretKey)
>>> signedMessage
(123, (220 (mod 1061), 234 (mod 1061)), 88 (mod 349))
>>> authentic(signedMessage, basePoint, basePointOrder, publicKey)
True

So there we have it, a nice implementation of the digital signature algorithm.

When Digital Signatures Fail

As we mentioned, it’s extremely important to avoid using the same k for two different messages. If you do, then you’ll get two signed messages (m_1, A_1, g_1), (m_2, A_2, g_2), but by definition the two g‘s have a ton of information in common! An attacker can recognize this immediately because A_1 = A_2, and figure out the secret key s as follows. First write

\displaystyle g_1 - g_2 \equiv k^{-1}(m_1 + sx) - k^{-1}(m_2 + sx) \equiv k^{-1}(m_1 - m_2) \mod r.

Now we have something of the form \text{known}_1 \equiv (k^{-1}) \text{known}_2 \mod r, and similarly to the attack described earlier we can try all possibilities until we find a number that satisfies A = kQ. Then once we have k we have already seen how to find s. Indeed, it would be a good exercise for the reader to implement this attack.
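
And since r is prime, the “try all possibilities” step usually collapses to a single modular inversion. Here’s the key step as a sketch (assuming g_1 \neq g_2 and Python 3.8+ for the modular inverse), leaving the rest of the exercise intact:

def recoverOneTimeK(m1, g1, m2, g2, r):
   # from g1 - g2 = inverse(k) * (m1 - m2) mod r, solve for k
   return ((m1 - m2) * pow((g1 - g2) % r, -1, r)) % r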

The attack we just described is not an idle threat. Indeed, the Sony corporation, producers of the popular Playstation video game console, made this mistake in signing software for the Playstation 3. A digital signature algorithm makes sense to validate software, because Sony wants to ensure that only Sony has the power to publish games. So Sony developers act as one party signing the data on a disc, and the console will only play a game with a valid signature. Note that the asymmetric setup is necessary because if the console had shared a secret with Sony (say, stored as plaintext within the hardware of the console), anyone with physical access to the machine could discover it.

Now here comes the cringeworthy part. Sony made the mistake of using the same k to sign every game! Their mistake was discovered in 2010 and made public at a cryptography conference. This video of the humorous talk includes a description of the variant Sony used and the attackers describe how the mistake should have been corrected. Without a firmware update (I believe Sony’s public key information was stored locally so that one could authenticate games without an internet connection), anyone could sign a piece of software and create games that are indistinguishable from something produced by Sony. That includes malicious content that, say, installs software that sends credit card information to the attacker.

So here we have a tidy story: a widely used cryptosystem with a scare story of what will go wrong when you misuse it. In the future of this series, we’ll look at other things you can do with elliptic curves, including factoring integers and testing for primality. We’ll also see some normal forms of elliptic curves that are used in place of the Weierstrass normal form for various reasons.

Until next time!

Stable Marriages and Designing Markets

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way, the problem is obviously not solvable in general.

Now what if instead of aiming for each individual’s maximum happiness we instead shoot for mutual contentedness? That is, what if “happiness” here means that nobody will ever have an incentive to cheat on their spouse? It turns out that for a mathematical version of this condition, we can always find a suitable set of marriages! These mathematical formalisms include some assumptions, such as that preferences never change and that no new individuals are added to the population. But it is nevertheless an impressive theorem that we can achieve stability no matter what everyone’s preferences are. In this post we’ll give the classical algorithm which constructs so-called “stable marriages,” and we’ll prove its correctness. Then we’ll see a slight generalization of the algorithm, in which the marriages are “polygamous,” and we’ll apply it to the problem of assigning students to internships.

As usual, all of the code used in this post is available for download at this blog’s Github page.

Historical Notes

The original algorithm for computing stable marriages was discovered by Lloyd Shapley and David Gale in the early 1960’s. Shapley and Alvin Roth went on to dedicate much of their careers to designing markets and applying the stable marriage problem and its generalizations to such problems. In 2012 they jointly received the Nobel prize in economics for their work on this problem. If you want to know more about what “market design” means and why it’s needed (and you have an hour to spare), consider watching the talk below by Alvin Roth at the Simons Institute’s 2013 Symposium on the Visions of the Theory of Computing. Roth spends most of his time discussing the state of one particular economy, medical students and residence positions at hospitals, which he was asked to redesign. It’s quite a fascinating tale, although some of the deeper remarks assume knowledge of the algorithm we cover in this post.

Alvin Roth went on to apply the ideas presented in the video to economic systems in Boston and New York City public schools, kidney exchanges, and others. They all had the same sort of structure: both parties have preferences and stability makes sense. So he actually imposed the protocol we’re about to describe in order to guarantee that the process terminates to a stable arrangement (and automating it saves everyone involved a lot of time, stress, and money! Watch the video above for more on that).

The Monogamous Stable Marriage Algorithm

Let’s formally set up the problem. Let X = \left \{ 1, 2, \dots, n \right \} be a set of n suitors and Y = \left \{ 1,2,\dots ,n \right \} be a set of n “suited.” Let \textup{pref}_{X \to Y}: X \to S_n be a list of preferences for the suitors. In words, \textup{pref}_{X \to Y} accepts as input a suitor, and produces as output an ordering on the suited members of Y. We denote the output set as S_n, which the group theory folks will recognize as the permutation group on 1, \dots, n. Likewise, there is a function \textup{pref}_{Y \to X}: Y \to S_n describing the preferences of each of the suited.

An example will help clarify these stuffy definitions. If X = \left \{ 1, 2, 3 \right \} and Y = \left \{ 1, 2, 3 \right \}, then to say that

\textup{pref}_{X \to Y}(2) = (3, 1, 2)

is to say that the second suitor prefers the third member of Y the most, and then the first member of Y, and then the second. The programmer might imagine that the datum of the problem consists of two dictionaries (one for X and one for Y) whose keys are integers and whose values are lists of integers which contain 1 through n in some order.
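
Concretely, the example preference above would be one entry in such a dictionary (the other two rows here are made-up data for n = 3):

prefXtoY = {
   1: [1, 2, 3],
   2: [3, 1, 2],   # suitor 2 prefers suited 3, then 1, then 2
   3: [2, 3, 1],
}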

A solution to the problem, then, is a way to match (or marry) suitors with suited. Specifically, a matching is a bijection m: X \to Y, so that x is matched with m(x). The reason we use a bijection is because the marriages are monogamous: only one suitor can be matched with one suited and vice versa. Later we’ll see this condition dropped so we can apply it to a more realistic problem of institutions (suited) which can accommodate many applicants (suitors). Because suitor and suited are awkward to say, we’ll use the familiar, antiquated, and politically incorrect terms “men and women.”

Now if we’re given a monogamous matching m, a pair x \in X, y \in Y is called unstable for m if both x,y prefer each other over their partners assigned by m. That is, (x,y) is unstable for m if y appears before m(x) in the preference list for x, \textup{pref}_{X \to Y}(x), and likewise x appears before m^{-1}(y) in \textup{pref}_{Y \to X}(y).

Another example to clarify: again let X = Y = \left \{ 1,2,3 \right \} and suppose for simplicity that our matching m pairs m(i) = i. If man 2 has the preference list (3,2,1) and woman 3 has the preference list (2,1,3), then 2 and 3 together form an unstable pair for m, because they would rather be with each other over their current partners. That is, they have a mutual incentive to cheat on their spouses. We say that the matching is unstable or admits an unstable pair if there are any unstable pairs for it, and we call the entire matching stable if it doesn’t admit any unstable pairs.

Unlike real life, mathematically unstable marriages need not feature constant arguments.

So the question at hand is: is there an algorithm which, given access to the two sets of preferences, can efficiently produce a stable matching? We can also wonder whether a stable matching is guaranteed to exist, and the answer is yes. In fact, we’ll prove this and produce an efficient algorithm in one fell swoop.

The central concept of the algorithm is called deferred acceptance. The gist is like this. The algorithm operates in rounds. During each round, each man will “propose” to a woman, and each woman will pick the best proposal available. But the women will not commit to their pick. They instead reject all other suitors, who go on to propose to their second choices in the next round. At that stage each woman (who now may have a more preferred suitor than in the first round) may replace her old pick with a new one. The process continues in this manner until each man is paired with a woman. In this way, each of the women defers accepting any proposal until the end of the round, progressively increasing the quality of her choice. Likewise, the men progressively propose less preferred matches as the rounds progress.

It’s easy to argue such a process must eventually converge. Indeed, the contrary means there’s some sort of cycle in the order of proposals, but each man proposes to only strictly less preferred women than in any previous round, and the women can only strictly increase the quality of their held pick. Mathematically, we’re using an important tool called monotonicity: some quantity can only increase or decrease as time goes on, and since the quantity is bounded, we must eventually reach a local maximum. From there, we can prove that any local maximum satisfies the property we want (here, that the matching is stable), and we win. Indeed, suppose to the contrary that we have a pair (x,y) which is unstable for the matching m produced at the end of this process. Then it must have been the case that x proposed to y in some earlier round. But y has as her final match some other suitor x' = m^{-1}(y) whom she prefers less than x. Though she may have never picked x at any point in the algorithm, she can only end up with the worse choice x' if at some point y chose a suitor that was less preferred than the suitor she already had. Since her choices are monotonic this cannot happen, so no unstable pairs can exist.

Rather than writing the algorithm in mathematical pseudocode, let’s produce the entire algorithm in Python to make the ideas completely concrete.

Python Implementation

We start off with some simple data definitions for the two parties which, in the renewed interest of generality, we refer to as Suitor and Suited.

class Suitor(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.rejections = 0 # num rejections is also the index of the next option
      self.id = id

   def preference(self):
      return self.prefList[self.rejections]

   def __repr__(self):
      return repr(self.id)

A Suitor is simple enough: he has an id representing his “index” in the set of Suitors, and a preference list prefList which in its i-th position contains the Suitor’s i-th most preferred Suited. This is identical to our mathematical representation from earlier, where a list like (2,3,1) means that the Suitor prefers the second Suited most and the first Suited least. Knowing the algorithm ahead of time, we add an additional piece of data: the number of rejections the Suitor has seen so far. This will double as the index of the Suited that the Suitor is currently proposing to. Indeed, the preference function provides a thin layer of indirection allowing us to ignore the underlying representation, so long as one updates the number of rejections appropriately.

Now for the Suited.

class Suited(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.held = None
      self.currentSuitors = set()
      self.id = id

   def __repr__(self):
      return repr(self.id)

A Suited likewise has a list of preferences and an id, but in addition she has a held attribute for the currently held Suitor, and a list currentSuitors of Suitors that are currently proposing to her. Hence we can define a reject method which accepts no inputs, and returns a list of rejected suitors, while updating the woman’s state to hold onto her most preferred suitor.

   def reject(self):
      if len(self.currentSuitors) == 0:
         return set()

      if self.held is not None:
         self.currentSuitors.add(self.held)

      self.held = min(self.currentSuitors, key=lambda suitor: self.prefList.index(suitor.id))
      rejected = self.currentSuitors - set([self.held])
      self.currentSuitors = set()

      return rejected

The call to min does all the work: finding the Suitor that appears first in her preference list. The rest is bookkeeping. Now the algorithm for finding a stable marriage, following the deferred acceptance algorithm, is simple.

# monogamousStableMarriage: [Suitor], [Suited] -> {Suitor -> Suited}
# construct a stable (monogamous) marriage between suitors and suiteds
def monogamousStableMarriage(suitors, suiteds):
   unassigned = set(suitors)

   while len(unassigned) > 0:
      for suitor in unassigned:
         suiteds[suitor.preference()].currentSuitors.add(suitor)
      unassigned = set()

      for suited in suiteds:
         unassigned |= suited.reject()

      for suitor in unassigned:
         suitor.rejections += 1

   return dict([(suited.held, suited) for suited in suiteds])

All the Suitors are unassigned to begin with. Each iteration of the loop corresponds to a round of the algorithm: the Suitors are added to the currentSuitors list of their next most preferred Suited. Then the Suiteds “simultaneously” reject some Suitors, whose rejection counts are upped by one and returned to the pool of unassigned Suitors. Once every Suited has held onto a Suitor we’re done.

Given a matching, we can define a function that verifies by brute force that the marriage is stable.

# verifyStable: [Suitor], [Suited], {Suitor -> Suited} -> bool
# check that the assignment of suitors to suited is a stable marriage
def verifyStable(suitors, suiteds, marriage):
   import itertools
   suitedToSuitor = dict((v,k) for (k,v) in marriage.items())
   precedes = lambda L, item1, item2: L.index(item1) < L.index(item2)

   def suitorPrefers(suitor, suited):
      return precedes(suitor.prefList, suited.id, marriage[suitor].id)

   def suitedPrefers(suited, suitor):
      return precedes(suited.prefList, suitor.id, suitedToSuitor[suited].id)

   for (suitor, suited) in itertools.product(suitors, suiteds):
      if suited != marriage[suitor] and suitorPrefers(suitor, suited) and suitedPrefers(suited, suitor):
         return False, (suitor.id, suited.id)

   return True

Indeed, we can test the algorithm on an instance of the problem.

>>> suitors = [Suitor(0, [3,5,4,2,1,0]), Suitor(1, [2,3,1,0,4,5]),
...            Suitor(2, [5,2,1,0,3,4]), Suitor(3, [0,1,2,3,4,5]),
...            Suitor(4, [4,5,1,2,0,3]), Suitor(5, [0,1,2,3,4,5])]
>>> suiteds = [Suited(0, [3,5,4,2,1,0]), Suited(1, [2,3,1,0,4,5]),
...            Suited(2, [5,2,1,0,3,4]), Suited(3, [0,1,2,3,4,5]),
...            Suited(4, [4,5,1,2,0,3]), Suited(5, [0,1,2,3,4,5])]
>>> marriage = monogamousStableMarriage(suitors, suiteds)
>>> marriage
{3: 0, 4: 4, 5: 1, 1: 2, 2: 5, 0: 3}
>>> verifyStable(suitors, suiteds, marriage)
True

We encourage the reader to check this by hand (this one only took two rounds). Even better, answer the question of whether the algorithm could ever require n steps to converge for 2n individuals, where you get to pick the preference list to try to make this scenario happen.

Stable Marriages with Capacity

We can extend this algorithm to work for “polygamous” marriages in which one Suited can accept multiple Suitors. In fact, the two problems are entirely the same! Just imagine duplicating a Suited with large capacity into many Suiteds with capacity of 1. This particular reduction is not very efficient, but it allows us to see that the same proof of convergence and correctness applies. We can then modify our classes and algorithm to account for it, so that (for example) instead of a Suited “holding” a single Suitor, she holds a set of Suitors. We encourage the reader to try extending our code above to the polygamous case as an exercise, and we’ve provided the solution in the code repository for this post on this blog’s Github page.
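
If you’d like a hint before peeking at the repository, the heart of the change is in the reject method: a Suited with a capacity holds a set of Suitors and keeps the capacity most preferred ones each round. A sketch, where the capacity attribute and the held-as-a-set representation are my own choices, not necessarily the repository’s:

   def reject(self):
      # held is now a set of at most self.capacity Suitors
      self.currentSuitors |= self.held
      byPreference = sorted(self.currentSuitors, key=lambda s: self.prefList.index(s.id))
      self.held = set(byPreference[:self.capacity])

      rejected = self.currentSuitors - self.held
      self.currentSuitors = set()
      return rejected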

Ways to Make it Harder

When you study algorithmic graph problems as much as I do, you start to get disheartened. It seems like every problem is NP-hard or worse. So when we get a situation like this, a nice, efficient algorithm with very real consequences and interpretations, you start to get very excited. In between our heaves of excitement, we imagine all the other versions of this problem that we could solve and Nobel prizes we could win. Unfortunately the landscape is bleaker than that, and most extensions of stable marriage problems are NP-complete.

For example, what if we allow ties? That is, one man can be equally happy with two women. This is NP-complete. However, it turns out this extension can be formulated as an integer programming problem, and standard optimization techniques can be used to approximate a solution.

What if, thinking about the problem in terms of medical students and residencies, we allow people to pick their preferences as couples? Some med students are married, after all, and prefer to be close to their spouse even if it means they have a less preferred residency. NP-hard again. See page 53 (pdf page 71) of these notes for a more detailed investigation. The problem is essentially that there is not always a stable matching, and so even determining whether there is one is NP-complete.

So there are a lot of ways to enrich the problem, and there’s an interesting line between tractable and hard in the worst case. As a (relatively difficult) exercise, try to solve the “roommates” version of the problem, where there is no male/female distinction (anyone can be matched with anyone). It turns out to have a tractable solution, and the algorithm is similar to the one outlined in this post.

Until next time!

PS. I originally wrote this post about a year ago when I was contacted by someone in industry who agreed to provide some (anonymized) data listing the preferences of companies and interns applying to work at those companies. Not having heard from them for almost a year, I figure it’s a waste to let this finished post collect dust at the risk of not having an interesting data set. But if you, dear reader, have any data you’d like to provide that fits into the framework of stable marriages, I’d love to feature your company/service on my blog (and solve the matching problem) in exchange for the data. The only caveat is that the data would have to be public, so you would have to anonymize it.

Programming with Finite Fields

Back when I was first exposed to programming language design, I decided it would be really cool if there were a language that let you define your own number types and then do all your programming within those number types. And since I get excited about math, I think of really exotic number types (Boolean rings, Gaussian integers, Octonions, oh my!). I imagined it would be a language feature, so I could do something like this:

use gaussianintegers as numbers

x = 1 + i
y = 2 - 3i
print(x*y)

z = 2 + 3.5i     # error

I’m not sure why I thought this would be so cool. Perhaps I felt like I would be teaching a computer math. Or maybe the next level of abstraction in playing god by writing programs is to play god by designing languages (and I secretly satisfy a massive god complex by dictating the actions of my computer).

But despite not writing a language of my own, programming with weird number systems still has a special place in my heart. It just so happens that we’re in the middle of a long series on elliptic curves, and in the next post we’ll actually implement elliptic curve arithmetic over a special kind of number type (the finite field). In this post, we’ll lay the groundwork by implementing number types in Python that allow us to work over any finite field. This is actually a pretty convoluted journey, and to be totally rigorous we’d need to prove a bunch of lemmas, develop a bunch of ring theory, and prove the correctness of a few algorithms involving polynomials.

Instead of taking the long and winding road, we’ll just state the important facts with links to proofs, prove the easy stuff, and focus more heavily than usual on the particular Python implementation details. As usual, all of the code used in this post is available on this blog’s Github page.

Integers Modulo Primes

The simplest kind of finite field is the set of integers modulo a prime. We’ve dealt with this field extensively on this blog (in groups, rings, fields, with RSA, etc.), but let’s recall what it is. The modulo operator \mod (in programming it’s often denoted %) is a binary operation on integers such that x \mod y is the unique non-negative remainder of x when divided by y.
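
Conveniently, Python’s % operator already behaves this way for a positive modulus, even when x is negative:

>>> 11 % 7
4
>>> -11 % 7
3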

Definition: Let p be a prime number. The set \mathbb{Z}/p consists of the numbers \left \{ 0, 1, \dots, p-1 \right \}. If you endow it with the operations of addition (mod p) and multiplication (mod p), it forms a field.

To say it’s a field is just to say that arithmetic more or less behaves the way we expect it to, and in particular that every nonzero element has a (unique) multiplicative inverse. Making a number type for \mathbb{Z}/p in Python is quite simple.

def IntegersModP(p):
   class IntegerModP(FieldElement):
      def __init__(self, n):
         self.n = n % p
         self.field = IntegerModP

      def __add__(self, other): return IntegerModP(self.n + other.n)
      def __sub__(self, other): return IntegerModP(self.n - other.n)
      def __mul__(self, other): return IntegerModP(self.n * other.n)
      def __truediv__(self, other): return self * other.inverse()
      def __div__(self, other): return self * other.inverse()
      def __neg__(self): return IntegerModP(-self.n)
      def __eq__(self, other): return isinstance(other, IntegerModP) and self.n == other.n
      def __abs__(self): return abs(self.n)
      def __str__(self): return str(self.n)
      def __repr__(self): return '%d (mod %d)' % (self.n, self.p)

      def __divmod__(self, divisor):
         q,r = divmod(self.n, divisor.n)
         return (IntegerModP(q), IntegerModP(r))

      def inverse(self):
         ...?

   IntegerModP.p = p
   IntegerModP.__name__ = 'Z/%d' % (p)
   return IntegerModP

We’ve done a couple of things worth noting here. First, all of the double-underscore methods are operator overloads, so they are called when one tries to, e.g., add two instances of this class together. We’ve also implemented a division algorithm via __divmod__, which computes a (quotient, remainder) pair for the appropriate division. The built-in Python function divmod does this for integers, and we can overload it for a custom type. We’ll write a more complicated division algorithm later in this post. Finally, we’re dynamically creating our class so that different primes will correspond to different types. We’ll come back to why this encapsulation is a good idea later, but it’s crucial to make our next few functions reusable and elegant.

Here’s an example of the class in use:

>>> mod7 = IntegersModP(7)
>>> mod7(3) + mod7(6)
2 (mod 7)

The last (undefined) function in the IntegersModP class, the inverse function, is our only mathematical hurdle. Luckily, we can compute inverses in a generic way, using an algorithm called the extended Euclidean algorithm. Here’s the mathematics.

Definition: An element d is called a greatest common divisor (gcd) of a,b if it divides both a and b, and for every other z dividing both a and b, z divides d. Gcd’s always exist for the integers, and we denote a gcd of a and b by \gcd(a,b). [1]

Note that we called it ‘a’ greatest common divisor. In general gcd’s need not be unique, though for integers one often picks the positive gcd. We’ll actually see this cause a tiny programmatic bug later in this post, but let’s push on for now.

Theorem: For any two integers a,b \in \mathbb{Z} there exist x,y \in \mathbb{Z} such that ax + by = \gcd(a,b).

We could beat around the bush and try to prove these things in various ways, but when it comes down to it there’s one algorithm of central importance that both computes the gcd and produces the needed linear combination x,y. The algorithm is called the Euclidean algorithm. Here is a simple version that just gives the gcd.

def gcd(a, b):
   if abs(a) < abs(b):
      return gcd(b, a)

   while abs(b) > 0:
      q,r = divmod(a,b)
      a,b = b,r

   return a

This works by the simple observation that \gcd(a, aq+r) = \gcd(a,r) (this is an easy exercise to prove directly). So the Euclidean algorithm just keeps applying this rule over and over again: take the remainder when dividing the bigger argument by the smaller argument, until the remainder becomes zero. Then \gcd(x, 0) = x, because everything divides zero.

Now the so-called ‘extended’ Euclidean algorithm just keeps track of some additional data as it goes (the partial quotients and remainders). Here’s the algorithm.

def extendedEuclideanAlgorithm(a, b):
   if abs(b) > abs(a):
      (x,y,d) = extendedEuclideanAlgorithm(b, a)
      return (y,x,d)

   if abs(b) == 0:
      return (1, 0, a)

   x1, x2, y1, y2 = 0, 1, 1, 0
   while abs(b) > 0:
      q, r = divmod(a,b)
      x = x2 - q*x1
      y = y2 - q*y1
      a, b, x2, x1, y2, y1 = b, r, x1, x, y1, y

   return (x2, y2, a)

Indeed, the reader who hasn’t seen this stuff before is encouraged to trace out a run for the numbers 4864, 3458. Their gcd is 38, and the coefficients x, y are 32 and -45, respectively.
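
Or, letting the code check the arithmetic:

>>> extendedEuclideanAlgorithm(4864, 3458)
(32, -45, 38)

And indeed 32 \cdot 4864 - 45 \cdot 3458 = 155648 - 155610 = 38.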

How does this help us compute inverses? Well, if we want to find the inverse of a modulo p, we know that their gcd is 1 (p is prime and 0 < a < p). So compute the x,y such that ax + py = 1, and then reduce both sides mod p. You get ax \equiv 1 \mod p, which means that x \mod p is the inverse of a. So once we have the extended Euclidean algorithm, our inverse function is trivial to write!

def inverse(self):
   x,y,d = extendedEuclideanAlgorithm(self.n, self.p)
   return IntegerModP(x)

And indeed it works as expected:

>>> mod23 = IntegersModP(23)
>>> mod23(7).inverse()
10 (mod 23)
>>> mod23(7).inverse() * mod23(7)
1 (mod 23)

Now one very cool thing, and something that makes some basic ring theory worth understanding, is that we can compute the gcd of any number type using the exact same code for the Euclidean algorithm, provided we implement an abs function and a division algorithm. Via a chain of relatively easy-to-prove lemmas, if your number type has enough structure (in particular, if it has a division algorithm that satisfies some properties), then greatest common divisors are well-defined, and the Euclidean algorithm gives us that special linear combination. And using the same trick above in finite fields, we can use the Euclidean algorithm to compute inverses.

But in order to make things work programmatically we need to be able to deal with the literal ints 0 and 1 in the algorithm. That is, we need to be able to silently typecast integers up to whatever number type we’re working with. This makes sense because all rings have 0 and 1, but it requires a bit of scaffolding to implement. In particular, typecasting things sensibly is really difficult if you aren’t careful. And the problems are compounded in a language like Python that blatantly ignores types whenever possible. [2]

So let’s take a quick break to implement a tiny type system with implicit typecasting.

[1] The reader familiar with our series on category theory will recognize this as the categorical product of a and b in a category whose objects are integers and whose arrows represent divisibility. So by abstract nonsense, this proves that gcd’s are unique up to multiplication by a unit in any ring.
[2] In the process of writing the code for this post, I was sorely missing the stronger type systems of Java and Haskell. Never thought I’d say that, but it’s true.

A Tiny Generic Type System

The main driving force behind our type system will be a decorator called @typecheck. We covered decorators toward the end of our primer on dynamic programming, but in short a decorator is a Python syntax shortcut that allows some pre- or post-processing to happen to a function in a reusable way. All you need to do to apply the pre/post-processing is prefix the function definition with the name of the decorator.

Our decorator will be called typecheck, and it will decorate binary operations on our number types. In its basic form, our type checker will work as follows: if the types of the two operands are the same, then the decorator will just pass them on through to the operator. Otherwise, it will try to do some typecasting, and if that fails it will raise exceptions with reckless abandon.

def typecheck(f):
   def newF(self, other):
      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

So this is great, but there are two issues. The first is that this will only silently typecast if the thing we’re casting is on the right-hand side of the expression. In other words, the following will raise an exception complaining that you can’t add ints to Mod7 integers.

>>> x = IntegersModP(7)(1)
>>> 1 + x

What we need are the right-hand versions of all the operator overloads. They are the same as the usual operator overloads, but Python gives preference to the left-hand operator overloads. Anticipating that we will need to rewrite these silly right-hand overloads for every number type, and they’ll all be the same, we make two common base classes.

class DomainElement(object):
   def __radd__(self, other): return self + other
   def __rsub__(self, other): return -self + other
   def __rmul__(self, other): return self * other

class FieldElement(DomainElement):
   def __truediv__(self, other): return self * other.inverse()
   def __rtruediv__(self, other): return self.inverse() * other
   def __div__(self, other): return self.__truediv__(other)
   def __rdiv__(self, other): return self.__rtruediv__(other)

And we can go ahead and make our IntegersModP a subclass of FieldElement. [3]

But now we’re wading into very deep waters. In particular, we know ahead of time that our next number type will be for Polynomials (over the integers, or fractions, or \mathbb{Z}/p, or whatever). And we’ll want to do silent typecasting from ints and IntegersModP to Polynomials! The astute reader will notice the discrepancy. What will happen if I try to do this?

>>> MyInteger() + MyPolynomial()

Let’s take this slowly: by our assumption both MyInteger and MyPolynomial have the __add__ and __radd__ functions defined on them, and each tries to typecast the other to the appropriate type. But which is called? According to Python’s documentation, if the left-hand side has an __add__ function then that’s called first, and the right-hand side’s __radd__ function is only consulted if the left operand has no __add__ or its __add__ returns NotImplemented.

Well that’s a problem, and we’ll deal with it in a half awkward and half elegant way. What we’ll do is endow our number types with an “operatorPrecedence” constant. And then inside our type checker function we’ll see if the right-hand operand is an object of higher precedence. If it is, we return the global constant NotImplemented, which Python takes to mean that no __add__ function was found, and it proceeds to look for __radd__. And so with this modification our typechecker is done. [4]

def typecheck(f):
   def newF(self, other):
      if (hasattr(other.__class__, 'operatorPrecedence') and
            other.__class__.operatorPrecedence > self.__class__.operatorPrecedence):
         return NotImplemented

      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

We add a default operatorPrecedence of 1 to the DomainElement base class. Now this function answers our earlier question of why we want to encapsulate the prime modulus into the IntegersModP class. If this typechecker is really going to be generic, we need to be able to typecast an int by passing the single int argument to the type constructor with no additional information! Indeed, this will be the same pattern for our polynomial class and the finite field class to follow.
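
Concretely, the change to the base class is a single new attribute, which subclasses will override with larger values:

class DomainElement(object):
   operatorPrecedence = 1

   def __radd__(self, other): return self + other
   def __rsub__(self, other): return -self + other
   def __rmul__(self, other): return self * other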

Now there is still one subtle problem. If we try to generate two copies of the same number type from our number-type generator (in other words, the following code snippet), we’ll get a nasty exception.

>>> mod7 = IntegersModP(7)
>>> mod7Copy = IntegersModP(7)
>>> mod7(1) + mod7Copy(2)
... fat error ...

The reason for this is that in the type-checker we’re using the Python built-in ‘is’ which checks for identity, not semantic equality. To fix this, we simply need to memoize the IntegersModP function (and all the other functions we’ll use to generate number types) so that there is only ever one copy of a number type in existence at a time.
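
We’ll use a @memoize decorator below without further comment; a minimal sketch (assuming positional, hashable arguments, which is all we need for our type constructors) could be:

def memoize(f):
   cache = {}

   def memoizedFunction(*args):
      # compute f(*args) at most once per distinct argument tuple
      if args not in cache:
         cache[args] = f(*args)
      return cache[args]

   memoizedFunction.cache = cache
   return memoizedFunction

With @memoize applied to IntegersModP, the call IntegersModP(7) always returns the very same class object, and the ‘is’ check passes.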

So enough Python hacking: let’s get on with implementing finite fields!

[3] This also compels us to make some slight modifications to the constructor for IntegersModP, but they’re not significant enough to display here. Check out the Github repo if you want to see.
[4] This is truly a hack, and we’ve considered submitting a feature request to the Python devs. It is conceivably useful for the operator-overloading aficionado. I’d be interested to hear your thoughts in the comments as to whether this is a reasonable feature to add to Python.

Polynomial Arithmetic

Recall from our finite field primer that every finite field can be constructed as a quotient of a polynomial ring with coefficients in \mathbb{Z}/p by some prime ideal. We spelled out exactly what this means in fine detail in the primer, so check that out before reading on.

Indeed, to construct a finite field we need to find some irreducible monic polynomial f with coefficients in \mathbb{Z}/p, and then the elements of our field will be remainders of arbitrary polynomials when divided by f. In order to determine if they’re irreducible we’ll need to compute a gcd. So let’s build a generic polynomial type with a polynomial division algorithm, and hook it into our gcd framework.

We start off in much the same way as with the IntegersModP:

import fractions
import itertools

# create a polynomial with coefficients in a field; coefficients are in
# increasing order of monomial degree so that, for example, [1,2,3]
# corresponds to 1 + 2x + 3x^2
@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2
      factory = lambda L: Polynomial([field(x) for x in L])

      def __init__(self, c):
         if type(c) is Polynomial:
            self.coefficients = c.coefficients
         elif isinstance(c, field):
            self.coefficients = [c]
         elif not hasattr(c, '__iter__') and not hasattr(c, 'iter'):
            self.coefficients = [field(c)]
         else:
            self.coefficients = c

         self.coefficients = strip(self.coefficients, field(0))

      def isZero(self): return self.coefficients == []

      def __repr__(self):
         if self.isZero():
            return '0'

         return ' + '.join(['%s x^%d' % (a,i) if i > 0 else '%s'%a
                              for i,a in enumerate(self.coefficients)])

      def __abs__(self): return len(self.coefficients)
      def __len__(self): return len(self.coefficients)
      def __sub__(self, other): return self + (-other)
      def __iter__(self): return iter(self.coefficients)
      def __neg__(self): return Polynomial([-a for a in self])

      def iter(self): return self.__iter__()
      def leadingCoefficient(self): return self.coefficients[-1]
      def degree(self): return abs(self) - 1

All of this code just lays down conventions. A polynomial is a list of coefficients (in increasing order of their monomial degree), the zero polynomial is the empty list of coefficients, and the abs() of a polynomial is one plus its degree. [5] Finally, instead of closing over a prime modulus, as with IntegersModP, we’re closing over the field of coefficients. In general you don’t have to have polynomials with coefficients in a field, but if they do come from a field then you’re guaranteed to get a sensible Euclidean algorithm. In the formal parlance, if k is a field then k[x] is a Euclidean domain. And for our goal of defining finite fields, we will always have coefficients from \mathbb{Z}/p, so there’s no problem.
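
One small helper used in the constructor, strip, isn’t shown above: it removes trailing zeros from the coefficient list so that every polynomial has a unique representation (and the zero polynomial is the empty list). A minimal version consistent with that usage:

def strip(L, elt):
   # remove all trailing occurrences of elt from the list L
   i = len(L) - 1
   while i >= 0 and L[i] == elt:
      i -= 1
   return L[:i+1]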

Now we can define things like addition, multiplication, and equality using our typechecker to silently cast and watch for errors.

      @typecheck
      def __eq__(self, other):
         return self.degree() == other.degree() and all([x==y for (x,y) in zip(self, other)])

      @typecheck
      def __add__(self, other):
         newCoefficients = [sum(x) for x in itertools.zip_longest(self, other, fillvalue=self.field(0))]
         return Polynomial(newCoefficients)

      @typecheck
      def __mul__(self, other):
         if self.isZero() or other.isZero():
            return Zero()

         newCoeffs = [self.field(0) for _ in range(len(self) + len(other) - 1)]

         for i,a in enumerate(self):
            for j,b in enumerate(other):
               newCoeffs[i+j] += a*b

         return Polynomial(newCoeffs)

Notice that, if the underlying field of coefficients correctly implements the operator overloads, none of this depends on the coefficients. Reusability, baby!

And we can finish off with the division algorithm for polynomials.

      @typecheck
      def __divmod__(self, divisor):
         quotient, remainder = Zero(), self
         divisorDeg = divisor.degree()
         divisorLC = divisor.leadingCoefficient()

         while remainder.degree() >= divisorDeg:
            monomialExponent = remainder.degree() - divisorDeg
            monomialZeros = [self.field(0) for _ in range(monomialExponent)]
            monomialDivisor = Polynomial(monomialZeros + [remainder.leadingCoefficient() / divisorLC])

            quotient += monomialDivisor
            remainder -= monomialDivisor * divisor

         return quotient, remainder

Indeed, we’re doing nothing here but formalizing the grade-school algorithm for doing polynomial long division [6]. And we can finish off the function for generating this class by assigning the field member variable along with a class name. And we give it a higher operator precedence than the underlying field of coefficients so that an isolated coefficient is cast up to a constant polynomial.

@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2

      [... methods defined above ...]

   def Zero():
      return Polynomial([])

   Polynomial.field = field
   Polynomial.__name__ = '(%s)[x]' % field.__name__
   return Polynomial

We provide a modest test suite in the Github repository for this post, but here’s a sample test:

>>> Mod5 = IntegersModP(5)
>>> Mod11 = IntegersModP(11)
>>> polysOverQ = polynomialsOver(Fraction).factory
>>> polysMod5 = polynomialsOver(Mod5).factory
>>> polysMod11 = polynomialsOver(Mod11).factory
>>> polysOverQ([1,7,49]) / polysOverQ([7])
1/7 + 1 x^1 + 7 x^2
>>> polysMod5([1,7,49]) / polysMod5([7])
3 + 1 x^1 + 2 x^2
>>> polysMod11([1,7,49]) / polysMod11([7])
8 + 1 x^1 + 7 x^2

And indeed, the extended Euclidean algorithm works without modification, so we know our typecasting is doing what’s expected:

>>> Mod2 = IntegersModP(2)
>>> p = polynomialsOver(Mod2).factory
>>> f = p([1,0,0,0,1,1,1,0,1,1,1]) # x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + 1
>>> g = p([1,0,1,1,0,1,1,0,0,1])   # x^9 + x^6 + x^5 + x^3 + x^2 + 1
>>> theGcd = p([1,1,0,1]) # x^3 + x + 1
>>> x = p([0,0,0,0,1]) # x^4
>>> y = p([1,1,1,1,1,1]) # x^5 + x^4 + x^3 + x^2 + x + 1
>>> (x,y,theGcd) == extendedEuclideanAlgorithm(f, g)
True

[5] The mathematical name for the abs() function that we’re using is a valuation.
[6] One day we will talk a lot more about polynomial long division on this blog. You can do a lot of cool algebraic geometry with it, and the ideas there lead you to awesome applications like robot motion planning and automated geometry theorem proving.

Generating Irreducible Polynomials

Now that we’ve gotten Polynomials out of the way, we need to be able to generate irreducible polynomials over \mathbb{Z}/p of any degree we want. It might be surprising that irreducible polynomials of any degree exist [7], but in fact we know a lot more.

Theorem: The product of all monic irreducible polynomials over \mathbb{Z}/p whose degree divides m is equal to x^{p^m} - x.

This is an important theorem, but it takes a little bit more field theory than we have under our belts right now. You could summarize the proof by saying that the roots of x^{p^m} - x are exactly the elements of the field of size p^m, and every such element has a monic irreducible minimal polynomial whose degree divides m; then you say some things about splitting fields. You can see a more detailed proof outline here, but it assumes you’re familiar with the notion of a minimal polynomial. We will probably cover this in a future primer.
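
As a quick sanity check using the classes we just built: for p = 2 and m = 2, the monic irreducible polynomials of degree dividing 2 are x, x + 1, and x^2 + x + 1, and their product should be x^{2^2} - x = x^4 + x over \mathbb{Z}/2. Indeed:

>>> Mod2 = IntegersModP(2)
>>> p = polynomialsOver(Mod2).factory
>>> x = p([0,1])
>>> x * (x + p([1])) * (x*x + x + p([1]))
0 + 1 x^1 + 0 x^2 + 0 x^3 + 1 x^4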

But just using the theorem we can get a really nice algorithm for determining whether a polynomial f(x) of degree m is irreducible: we just compute its gcd with each of the x^{p^k} - x for k smaller than m. If all the gcds are constants, then we know it’s irreducible, and if even one is a non-constant polynomial then it has to be reducible. Why’s that? A nontrivial gcd d(x) = \gcd(f(x), x^{p^k} - x) for k < m is by definition a factor of f(x), so f(x) is reducible. Conversely, if f(x) is reducible then it has a monic irreducible factor of some degree k < m, and by the theorem that factor divides x^{p^k} - x, so the corresponding gcd is non-constant. In fact, with a moment of thought it’s clear that we can stop at k = m/2, since any nontrivial factorization of f(x) must involve a factor of degree at most m/2. So the algorithm to check for irreducibility is just this simple loop:

def isIrreducible(polynomial, p):
   ZmodP = IntegersModP(p)
   poly = polynomialsOver(ZmodP).factory
   x = poly([0,1])
   powerTerm = x
   isUnit = lambda g: g.degree() == 0

   for _ in range(int(polynomial.degree() / 2)):
      powerTerm = powerTerm.powmod(p, polynomial)
      gcdOverZmodp = gcd(polynomial, powerTerm - x)
      if not isUnit(gcdOverZmodp):
         return False

   return True

We’re just computing the powers iteratively as x^p, (x^p)^p = x^{p^2}, \dots, x^{p^j} and in each step of the loop subtracting x and computing the relevant gcd. The powmod function is just there so that we can reduce the power mod our irreducible polynomial after each multiplication, keeping the degree of the polynomial small and efficient to work with.
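
The powmod method itself isn’t in the listings above (it lives in the repository); a minimal sketch of how one might implement it as a method of the Polynomial class, by repeated squaring with a reduction after every multiplication, is:

      def powmod(self, exponent, modulus):
         # compute self^exponent mod modulus, reducing after each step so
         # the degree stays below that of the modulus
         result = Polynomial([self.field(1)])
         base = divmod(self, modulus)[1]

         while exponent > 0:
            if exponent % 2 == 1:
               result = divmod(result * base, modulus)[1]
            base = divmod(base * base, modulus)[1]
            exponent //= 2

         return result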

Now generating an irreducible polynomial is a little bit harder than testing for one. With a lot of hard work, however, field theorists discovered that irreducible polynomials are quite common. In fact, if you generate the coefficients of your degree n monic polynomial at random, the chance that you’ll get something irreducible is roughly 1/n. So this suggests an obvious randomized algorithm: keep guessing until you find one.

import random

def generateIrreduciblePolynomial(modulus, degree):
   Zp = IntegersModP(modulus)
   Polynomial = polynomialsOver(Zp)

   while True:
      coefficients = [Zp(random.randint(0, modulus-1)) for _ in range(degree)]
      randomMonicPolynomial = Polynomial(coefficients + [Zp(1)])

      if isIrreducible(randomMonicPolynomial, modulus):
         return randomMonicPolynomial

Since the probability of getting an irreducible polynomial is close to 1/n, we expect to require about n trials before we find one. Moreover we could give a pretty tight bound on how likely it is to deviate from the expected number of trials. So now we can generate some irreducible polynomials!

>>> generateIrreduciblePolynomial(23, 3)
21 + 12 x^1 + 11 x^2 + 1 x^3

And so now we are well-equipped to generate any finite field we want! It’s just a matter of generating the polynomial and taking a modulus after every operation.

@memoize
def FiniteField(p, m, polynomialModulus=None):
   Zp = IntegersModP(p)
   if m == 1:
      return Zp

   Polynomial = polynomialsOver(Zp)
   if polynomialModulus is None:
      polynomialModulus = generateIrreduciblePolynomial(modulus=p, degree=m)

   class Fq(FieldElement):
      fieldSize = int(p ** m)
      primeSubfield = Zp
      idealGenerator = polynomialModulus
      operatorPrecedence = 3

      def __init__(self, poly):
         if type(poly) is Fq:
            self.poly = poly.poly
         elif type(poly) is int or type(poly) is Zp:
            self.poly = Polynomial([Zp(poly)])
         elif isinstance(poly, Polynomial):
            self.poly = poly % polynomialModulus
         else:
            self.poly = Polynomial([Zp(x) for x in poly]) % polynomialModulus

         self.field = Fq

      @typecheck
      def __add__(self, other): return Fq(self.poly + other.poly)
      @typecheck
      def __sub__(self, other): return Fq(self.poly - other.poly)
      @typecheck
      def __mul__(self, other): return Fq(self.poly * other.poly)
      @typecheck
      def __eq__(self, other): return isinstance(other, Fq) and self.poly == other.poly

      def __pow__(self, n): return Fq(pow(self.poly, n))
      def __neg__(self): return Fq(-self.poly)
      def __abs__(self): return abs(self.poly)
      def __repr__(self): return repr(self.poly) + ' \u2208 ' + self.__class__.__name__

      @typecheck
      def __divmod__(self, divisor):
         q,r = divmod(self.poly, divisor.poly)
         return (Fq(q), Fq(r))

      def inverse(self):
         if self == Fq(0):
            raise ZeroDivisionError

         x,y,d = extendedEuclideanAlgorithm(self.poly, self.idealGenerator)
         # d may be a nonzero constant other than 1 (gcd's are only unique
         # up to a unit), so normalize by the inverse of its constant term
         return Fq(x) * Fq(d.coefficients[0].inverse())

   Fq.__name__ = 'F_{%d^%d}' % (p,m)
   return Fq

And some examples of using it:

>>> F23 = FiniteField(2,3)
>>> x = F23([1,1])
>>> x
1 + 1 x^1 ∈ F_{2^3}
>>> x*x
1 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> x**10
0 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> 1 / x
0 + 1 x^1 + 1 x^2 ∈ F_{2^3}
>>> x * (1 / x)
1 ∈ F_{2^3}
>>> k = FiniteField(23, 4)
>>> k.fieldSize
279841
>>> k.idealGenerator
6 + 8 x^1 + 10 x^2 + 10 x^3 + 1 x^4
>>> y = k([9,21,14,12])
>>> y
9 + 21 x^1 + 14 x^2 + 12 x^3 ∈ F_{23^4}
>>> y*y
13 + 19 x^1 + 7 x^2 + 14 x^3 ∈ F_{23^4}
>>> y**5 - y
15 + 22 x^1 + 15 x^2 + 5 x^3 ∈ F_{23^4}

And that’s it! Now we can do arithmetic over any finite field we want.

[7] Especially considering that other wacky things happen like this: x^4 + 1 is reducible over every finite field!

Some Words on Efficiency

There are a few things that go without stating about this program, but I’ll state them anyway.

The first is that we pay a big efficiency price for being so generic. All the typecasting we’re doing isn’t cheap, and in general cryptography needs to be efficient. For example, if I try to create a finite field of order 104729^{20}, it takes about ten to fifteen seconds to complete. This might not seem so bad for a one-time initialization, but it’s clear that our representation is somewhat bloated. We would display a graph of the expected time to perform various operations in various finite fields, but this post is already way too long.

In general, the larger and more complicated the polynomial you use to define your finite field, the longer operations will take (since dividing by a complicated polynomial is more expensive than dividing by a simple polynomial). For this reason and a few other reasons, a lot of research has gone into efficiently finding irreducible polynomials with, say, only three nonzero terms. Moreover, if we know in advance that we’ll only work over fields of characteristic two we can make a whole lot of additional optimizations. Essentially, all of the arithmetic reduces to really fast bitwise operations, and things like exponentiation can easily be implemented in hardware. But it also seems that the expense that comes with large field characteristics corresponds to higher security, so some researchers have tried to meet in the middle and get efficient representations of other field characteristics.
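
To give a taste of the characteristic-two trick: pack the coefficients of an element of \mathbb{F}_{2^8} into the bits of an int, and addition becomes XOR while multiplication becomes a handful of shifts and XORs. Here’s a sketch (the function name is ours) using the modulus x^8 + x^4 + x^3 + x + 1, the irreducible polynomial used by AES, though any degree-8 irreducible over \mathbb{Z}/2 would do:

def gf2_8_multiply(a, b, modulus=0b100011011):
   # multiply two elements of F_{2^8}, each represented as an int whose
   # bits are polynomial coefficients, reducing by the modulus as we go
   product = 0
   while b > 0:
      if b & 1:
         product ^= a       # addition in characteristic 2 is XOR
      b >>= 1
      a <<= 1               # multiply a by x
      if a & 0b100000000:
         a ^= modulus       # reduce the degree-8 term
   return product

For example, gf2_8_multiply(0x53, 0xca) returns 1, recovering the famous pair of multiplicative inverses from the AES specification.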

In any case, the real purpose of our implementation in this post is not for efficiency. We care first and foremost about understanding the mathematics, and to have a concrete object to play with and use in the future for other projects. And we have accomplished just that.

Until next time!