Elliptic Curve Diffie-Hellman

So far in this series we’ve seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we’re in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography?

History

As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves was a well-known technique since antiquity, discovered by Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked how one might classify the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields.

While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), their use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller, who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn't gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis's blog. For now we'll focus on the cryptographic protocols themselves.

The Discrete Logarithm Problem

Koblitz and Miller had insights aplenty, but the central observation in all of this is the following.

Adding is easy on elliptic curves, but undoing addition seems hard.

What I mean by this is usually called the discrete logarithm problem. Here's a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and that the notation ny means y + y + \dots + y (n times).

Definition: Let G be an additive group, and let x, y be elements of G so that x = ny for some integer n. The discrete logarithm problem asks one to find n when given x and y.

I like to give super formal definitions first, so let’s do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that 4185072 = (348756) 12 using the elementary-school division algorithm (in the above notation, y=12, x=4185072, and n = 348756). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers \mathbb{Z}.

The reason we use the word "logarithm" is because if your group operation is multiplication instead of addition, you're tasked with solving the equation x = y^n for n. With real numbers you'd take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers): roughly, repeatedly square y until doing so would overshoot x, and then binary search on the exponent until you hit x exactly.
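To make that concrete, here is a small sketch (my own illustration, not code from the series) of exactly this: bracket the exponent by repeated squaring, then binary search for it.

# Sketch: recover n with y**n == x for integers y >= 2. Bracketing the exponent
# by repeated squaring and then binary searching shows why the "multiplicative
# logarithm" over the integers is easy.
def integerLog(x, y):
   hi = 1
   while y ** hi < x:      # repeated squaring brackets the exponent
      hi *= 2
   lo = hi // 2
   while lo <= hi:         # binary search on the exponent
      mid = (lo + hi) // 2
      value = y ** mid
      if value == x:
         return mid
      elif value < x:
         lo = mid + 1
      else:
         hi = mid - 1
   raise ValueError("x is not a power of y")

# e.g. integerLog(3 ** 1000, 3) == 1000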

But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It’s a computational charity! What happens when we move to settings where we don’t have a division algorithm? In mathematical lingo: we’re really interested in the case when G is just a group, and doesn’t have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don’t know how to do division efficiently. The best we can do is add y to itself over and over until we hit x, and it could easily happen that n (as a number) is exponentially larger than the number of bits in x and y.

What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then “taking multiples” fills the role of a theoretical one-way function, and as we’ll see this opens the door for secure communication.
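For reference, a bare-bones version of that double-and-add routine might look like the following sketch. It assumes a point type supporting + and unary -, and an identity element zero (the ideal point); the version in this series' repository may differ in details.

# Sketch of double-and-add: compute n*Q with O(log n) point additions by
# scanning the bits of n from least significant to most significant.
def multiple(n, Q, zero):
   if n < 0:
      return multiple(-n, -Q, zero)
   result, addend = zero, Q
   while n > 0:
      if n & 1:
         result = result + addend
      addend = addend + addend   # doubling step
      n >>= 1
   return result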

Here’s the formal statement of the discrete logarithm problem for elliptic curves.

Problem: Let E be an elliptic curve over a finite field k. Let P, Q be points on E such that P = nQ for some integer n. Let |P| denote the number of bits needed to describe the point P. We wish to find an algorithm which determines n and has runtime polynomial in |P| + |Q|. If we want to allow randomness, we can require the algorithm to find the correct n with probability at least 2/3.

So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can’t, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol?

The Diffie-Hellman Protocol and Problem

Let’s spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange.

A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption! We’ll cover this idea in a future series on the theory of cryptography (it’s called a one-time pad, and it’s not all that complicated). All we need now is motivation to get a shared secret.

Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let’s just jump right in and implement it with elliptic curves.

As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we’ve chosen a curve C and a point on that curve Q. Then we can take some secret integer n, and publish Q and nQ for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover n.

How can we use this to establish a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which is interpreted as a number in this protocol. Let's call Alice's secret key s_A and Bob's s_B, and note that they don't have to be the same. As the name "secret key" suggests, the secret keys are held secret. Moreover, we'll assume that everything else in this protocol, including all data sent between the two parties, is public.

So Alice and Bob agree ahead of time on a public elliptic curve C and a public point Q on C. We’ll sometimes call this point the base point for the protocol.

Bob can cunningly do the following trick: take his secret key s_B and send s_B Q to Alice. Equally slick, Alice computes s_A Q and sends that to Bob. Now Alice, having s_B Q, computes s_A s_B Q. And Bob, since he has s_A Q, can compute s_B s_A Q. But since addition is commutative in elliptic curve groups, we know s_A s_B Q = s_B s_A Q. The secret piece of shared information can be anything derived from this new point, for example its x-coordinate.

If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points Q, s_AQ, s_BQ. What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff they encrypt with their new shared secret s_As_BQ. So the attacker wants to find out s_As_BQ. And we'll call this the Diffie-Hellman problem.

Diffie-Hellman Problem: Suppose you fix an elliptic curve E over a finite field k, and you’re given four points Q, aQ, bQ and P for some unknown integers a, b. Determine if P = abQ in polynomial time (in the lengths of Q, aQ, bQ, P).

On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem because we could compute a, b and then quickly compute abQ and check if it's P. In other words, discrete log is at least as hard as this problem. On the other hand nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we're making this problem as easy as we reasonably can because we don't require you to be able to compute abQ. Even if some prankster gave you a candidate for abQ, all you have to do is check if it's correct. One could imagine some test that rules out all fakes but still doesn't allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log.

So this is our hardness assumption: assuming this problem has no efficient solution then no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret.

Python Implementation

The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page.

def sendDH(privateKey, generator, sendFunction):
   return sendFunction(privateKey * generator)

def receiveDH(privateKey, receiveFunction):
   return privateKey * receiveFunction()

And using our code from the previous posts in this series we can run it on a small test.

import os

def generateSecretKey(numBits):
   return int.from_bytes(os.urandom(numBits // 8), byteorder='big')

if __name__ == "__main__":
   F = FiniteField(3851, 1)
   curve = EllipticCurve(a=F(324), b=F(1287))
   basePoint = Point(curve, F(920), F(303))

   aliceSecretKey = generateSecretKey(8)
   bobSecretKey = generateSecretKey(8)

   alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x:x)
   bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x:x)

   sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey)
   sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey)
   print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2))

Python's os module allows us to access the operating system's random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer. Our simplistic (and totally insecure!) protocol uses the elliptic curve C defined by y^2 = x^3 + 324 x + 1287 over the finite field \mathbb{Z}/3851. We pick the base point Q = (920, 303), and call the relevant functions with placeholders for actual network transmission functions.

There is one issue we have to note. Say we fix our base point Q. Since an elliptic curve over a finite field can only have finitely many points (since the field only has finitely many possible pairs of numbers), it will eventually happen that nQ = 0, the ideal point. Recall that the smallest value of n for which nQ = 0 is called the order of Q. And so when we're generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick Q to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key it will be equivalent to some secret key that's less than the order of Q. So if an attacker could guess the smaller secret key he wouldn't need to know your larger key.
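For a curve as tiny as the one in this example you can even find the order of a base point by brute force. This is purely illustrative (real curves come with their group order known in advance), and it assumes the Ideal class from the earlier elliptic curve posts.

# Brute-force sketch: walk Q, 2Q, 3Q, ... until we hit the ideal point.
def naiveOrder(Q):
   n, current = 1, Q
   while not isinstance(current, Ideal):
      current = current + Q
      n += 1
   return n

# naiveOrder(basePoint) == 1964 for the example above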

The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards) is designed to avoid these problems. The base point used in the Diffie-Hellman protocol for Curve25519 generates a subgroup of gargantuan order (around 2^{252}), so 256-bit keys can easily be used. I'm brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way.

So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring.

Comments on Insecurity

Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime?

Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod p, leading some to claim that RSA is dead. But even if we will never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes.

I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/basepoint combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side-effects of the physical implementation of the algorithm.

The problem is that different operations, doubling a point and adding two different points, have very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them to be secure. Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available.
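One standard approach along these lines (a sketch of my own, not from this post) is a Montgomery-ladder-style loop: it performs exactly one point addition and one doubling for every bit of the key, so the pattern of curve operations doesn't depend on the bits. A genuinely constant-time implementation needs much more care (branch-free selection, constant-time field arithmetic), so treat this as a schematic only.

# Ladder-style scalar multiplication with a uniform operation pattern.
# Invariant: R1 == R0 + Q at the start of every iteration.
def ladderMultiple(n, Q, zero):
   R0, R1 = zero, Q
   for bit in bin(n)[2:]:          # most significant bit first
      if bit == '1':
         R0, R1 = R0 + R1, R1 + R1
      else:
         R0, R1 = R0 + R0, R0 + R1
   return R0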

But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews:

  1. Make sure the order of your basepoint has a short factorization (e.g., is 2p, 3p, or 4p for some large prime p). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman.
  2. Make sure your curve is not supersingular. If it is you can reduce the discrete logarithm problem to one in a different and much simpler group.
  3. If your curve C is defined over \mathbb{Z}/p, make sure the number of points on C is not equal to p. Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers.
  4. Don't pick a small underlying field like \mathbb{F}_{2^m} for small m. General-purpose attacks can be sped up significantly against such fields.
  5. If you use the field \mathbb{F}_{2^m}, ensure that m is prime. Many believe that if m has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that m being composite at all is dangerous, so we might as well make it prime.

This is a sublist of the list provided on page 28 of this white paper.

The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this.

Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols.

Until then!


Connecting Elliptic Curves with Finite Fields

So here we are. We’ve studied the general properties of elliptic curves, written a program for elliptic curve arithmetic over the rational numbers, and taken a long detour to get some familiarity with finite fields (the mathematical background and a program that implements arbitrary finite field arithmetic).

And now we want to get back on track and hook our elliptic curve program up with our finite field program to make everything work. And indeed, for most cases it's just that simple! For example, take the point P = (2,1) on the elliptic curve y^2 = x^3 + x + 1 with coefficients in \mathbb{Z}/5. Using purely code produced in previous posts, we can do arithmetic:

>>> F5 = FiniteField(5, 1)
>>> C = EllipticCurve(a=F5(1), b=F5(1))
>>> P = Point(C, F5(2), F5(1))
>>> P
(2 (mod 5), 1 (mod 5))
>>> 2*P
(2 (mod 5), 4 (mod 5))
>>> 3*P
Ideal

Here’s an example of the same curve y^2 = x^3 + x + 1 with coefficients over the finite field of order 25 \mathbb{F}_{5^2}.

>>> F25 = FiniteField(5,2)
>>> F25.idealGenerator
3 + 0 t^1 + 1 t^2
>>> curve = EllipticCurve(a=F25([1]), b=F25([1]))
>>> x = F25([2,1])
>>> y = F25([0,2])
>>> y*y - x*x*x - x - 1
0 ∈ F_{5^2}
>>> curve.testPoint(x,y)
True
>>> P = Point(curve, x, y)
>>> -P
(2 + 1 t^1, 0 + 3 t^1)
>>> P+P
(3 + 1 t^1, 2)
>>> 4*P
(3 + 2 t^1, 4 + 4 t^1)
>>> 9*P
Ideal

There are some subtle issues, though, in that we shouldn’t use the code we have to work over any finite field. But we’ve come very far and covered a lot of technical details, so let’s briefly remember how we got here.

Taking a Step Back

At the beginning there was only \mathbb{Q}, the field of rational numbers. We had a really nice geometric picture of elliptic curves over this field, and using that picture we developed an algorithm for (geometrically) adding points.

If we assume the equation of the elliptic curve had this nice form (the so-called Weierstrass normal form, y^2 = x^3 + ax + b), then we were able to translate the geometric algorithm into an algebraic one. This made it possible to write a program to perform the additions, and this was our first programmatic milestone. Along the way, we learned about groups and projective geometry, which I explained was the proper mathematical setting for elliptic curves. In that setting, we saw that for most fields, every elliptic curve could be modified into one in Weierstrass normal form without changing the algebraic structure of the set of solutions. Moreover, we saw that you can replace the field \mathbb{Q} with the field of your choice. The set of solutions to an elliptic curve still forms a group and the same algebraic point-adding algorithm works. It's just an interesting quirk of mathematics that one way to represent elements of finite fields is as polynomial remainders when dividing by a "prime" polynomial (analogous to modular arithmetic with integers). So we spent a while actually implementing finite fields in terms of this representation.

The reader has probably heard of this, but in practice one uses a (very large) finite field for the coefficients of their elliptic curve. Often this is \mathbb{Z}/p for some really large prime p, or the field of 2^m elements for some large integer m. But one would naturally complain: there are so many (infinitely many!) finite fields to choose from! Which one should we use, and how did they choose these?

As with most engineering problems the answer is a trade-off, in this case between efficiency and security. Arithmetic is faster in fields of characteristic 2 (and easy to implement at the hardware level!) but a lot is known about the finite field of 2^m elements. In fact, if you are sloppy in picking m you’ll get no security at all! One prominent example is the so-called Weil descent attack, which breaks security assumptions for elliptic curve cryptography when m is not prime. These attacks use some sophisticated machinery, but this is how it goes. An abstract mathematical breakthrough can immediately invalidate cryptography based on certain elliptic curves.

But before we get neck-deep in cryptography we have an even bigger problem: for some finite fields, not every elliptic curve has a Weierstrass normal form! So our program isn’t expressive enough to represent all elliptic curves we might want to. We could avoid these curves in our applications, but that would be unnecessarily limiting. With a bit more careful work, we can devise a more general algorithm (and a different normal form) that works for all fields. But let’s understand the problem first.

In general, you can have an elliptic curve of the form \sum_{i+j=3} a_{i,j}x^iy^j = 0. That is, it's just a really general degree 3 polynomial in two variables. If we assume the discriminant of this polynomial is nonzero, we'll get a smooth curve. And then getting to the Weierstrass normal form involves a bunch of changes of variables. The problem is that the algebraic manipulations you do require you to multiply and divide by 2 and 3. In a field of characteristic 2 or 3, these operations are either destructive (multiplying by zero) or totally illegal (dividing by zero), and they ruin Weierstrass's day.

So what can we do?

Well it turns out that there is a more general Weierstrass normal form, unsurprisingly called the generalized Weierstrass normal form. It looks like this

\displaystyle y^2 + a_1 xy + a_3y = x^3 + a_2x^2 + a_4x + a_6

The same geometric idea of drawing lines works for this curve as well. It’s just that now the formula is way more complicated. It involves computing a bunch of helper constants and computing far more arithmetic. My colleague Daniel Ngheim was kind enough to code up the algorithm, and here it is

    def __add__(self, Q):
        if isinstance(Q, Ideal):
            return Point(self.curve, self.x, self.y)

        a1,a2,a3,a4,a6 = (self.curve.a1, self.curve.a2, self.curve.a3, self.curve.a4, self.curve.a6)

        if self.x == Q.x:
            x = self.x
            if self.y + Q.y + a1*x + a3 == 0:
                return Ideal(self.curve)
            else:
                c = ((3*x*x + 2*a2*x + a4 - a1*self.y) / (2*self.y + a1*x + a3))
                d = (-(x*x*x) + a4*x + 2*a6 - a3*self.y) / (2*self.y + a1*x + a3)
                Sum_x = c*c + a1*c - a2 - 2*self.x
                Sum_y = -(c + a1) * Sum_x - d - a3
                return Point(self.curve, Sum_x, Sum_y)
        else:
            c =  (Q.y - self.y) / (Q.x - self.x)
            d =  (self.y*Q.x - Q.y*self.x) / (Q.x - self.x)
            Sum_x = c*c + a1*c - a2 - self.x - Q.x
            Sum_y = -(c + a1)*Sum_x - d - a3
            return Point(self.curve, Sum_x, Sum_y)

   def __neg__(self):
      return Point(self.curve, self.x, -self.y - self.curve.a1*self.x - self.curve.a3)

I trust that the devoted reader could derive this algorithm by hand, but for a more detailed derivation see the book of Silverman (it’s a graduate level text, but the point is that if you’re not really serious about implementing elliptic curve cryptography then you shouldn’t worry about this more general algorithm).

One might start to wonder: are there still other forms of elliptic curves that we could use to get around some of the difficulties of the Weierstrass normal form? The answer is yes, but we’ll defer their discussion to a future post. The brief explanation is that through a different choice of variable changes you can get to a different form of curve, and the algorithms you get from writing out the algebraic equations for adding points are slightly more efficient.

For the remainder of this series we’ll just work with one family of finite fields, those fields of the form \mathbb{Z}/p for some large p. There is one particularly famous elliptic curve over this field that is used in some of the most secure applications in existence, and this will roughly be our target. In either case, we have provided the combined elliptic curve and finite field code (and the generalized elliptic curve class) on this blog’s Github page.

So in the next post we’ll actually start talking about cryptography and how to use elliptic curves to do things like generate a shared secret key.

Until then!

Programming with Finite Fields

Back when I was first exposed to programming language design, I decided it would be really cool if there were a language that let you define your own number types and then do all your programming within those number types. And since I get excited about math, I think of really exotic number types (Boolean rings, Gaussian integers, Octonions, oh my!). I imagined it would be a language feature, so I could do something like this:

use gaussianintegers as numbers

x = 1 + i
y = 2 - 3i
print(x*y)

z = 2 + 3.5i     # error

I’m not sure why I thought this would be so cool. Perhaps I felt like I would be teaching a computer math. Or maybe the next level of abstraction in playing god by writing programs is to play god by designing languages (and I secretly satisfy a massive god complex by dictating the actions of my computer).

But despite not writing a language of my own, programming with weird number systems still has a special place in my heart. It just so happens that we’re in the middle of a long series on elliptic curves, and in the next post we’ll actually implement elliptic curve arithmetic over a special kind of number type (the finite field). In this post, we’ll lay the groundwork by implementing number types in Python that allow us to work over any finite field. This is actually a pretty convoluted journey, and to be totally rigorous we’d need to prove a bunch of lemmas, develop a bunch of ring theory, and prove the correctness of a few algorithms involving polynomials.

Instead of taking the long and winding road, we’ll just state the important facts with links to proofs, prove the easy stuff, and focus more heavily than usual on the particular Python implementation details. As usual, all of the code used in this post is available on this blog’s Github page.

Integers Modulo Primes

The simplest kind of finite field is the set of integers modulo a prime. We've dealt with this number field extensively on this blog (in groups, rings, fields, with RSA, etc.), but let's recall what it is. The modulo operator \mod (in programming it's often denoted %) is a binary operation on integers such that x \mod y is the unique nonnegative remainder of x when divided by y.

Definition: Let p be a prime number. The set \mathbb{Z}/p consists of the numbers \left \{ 0, 1, \dots, p-1 \right \}. If you endow it with the operations of addition (mod p) and multiplication (mod p), it forms a field.

To say it’s a field is just to say that arithmetic more or less behaves the way we expect it to, and in particular that every nonzero element has a (unique) multiplicative inverse. Making a number type for \mathbb{Z}/p in Python is quite simple.

def IntegersModP(p):
   class IntegerModP(FieldElement):
      def __init__(self, n):
         self.n = n % p
         self.field = IntegerModP

      def __add__(self, other): return IntegerModP(self.n + other.n)
      def __sub__(self, other): return IntegerModP(self.n - other.n)
      def __mul__(self, other): return IntegerModP(self.n * other.n)
      def __truediv__(self, other): return self * other.inverse()
      def __div__(self, other): return self * other.inverse()
      def __neg__(self): return IntegerModP(-self.n)
      def __eq__(self, other): return isinstance(other, IntegerModP) and self.n == other.n
      def __abs__(self): return abs(self.n)
      def __str__(self): return str(self.n)
      def __repr__(self): return '%d (mod %d)' % (self.n, self.p)

      def __divmod__(self, divisor):
         q,r = divmod(self.n, divisor.n)
         return (IntegerModP(q), IntegerModP(r))

      def inverse(self):
         ...?

   IntegerModP.p = p
   IntegerModP.__name__ = 'Z/%d' % (p)
   return IntegerModP

We've done a couple of things worth noting here. First, all of the double-underscore methods are operator overloads, so they are called when one tries to, e.g., add two instances of this class together. We've also implemented a division algorithm via __divmod__ which computes a (quotient, remainder) pair for the appropriate division. The built-in Python function divmod does this for integers, and we can overload it for a custom type. We'll write a more complicated division algorithm later in this post. Finally, we're dynamically creating our class so that different primes will correspond to different types. We'll come back to why this encapsulation is a good idea later, but it's crucial to make our next few functions reusable and elegant.

Here’s an example of the class in use:

>>> mod7 = IntegersModP(7)
>>> mod7(3) + mod7(6)
2 (mod 7)

The last (undefined) function in the IntegersModP class, the inverse function, is our only mathematical hurdle. Luckily, we can compute inverses in a generic way, using an algorithm called the extended Euclidean algorithm. Here’s the mathematics.

Definition: An element d is called a greatest common divisor (gcd) of a,b if it divides both a and b, and for every other z dividing both a and b, z divides d. We denote a gcd of a and b by \gcd(a,b). [1]

Note that we called it ‘a’ greatest common divisor. In general gcd’s need not be unique, though for integers one often picks the positive gcd. We’ll actually see this cause a tiny programmatic bug later in this post, but let’s push on for now.

Theorem: For any two integers a,b \in \mathbb{Z} there exist x,y \in \mathbb{Z} such that ax + by = \gcd(a,b).

We could beat around the bush and try to prove these things in various ways, but when it comes down to it there’s one algorithm of central importance that both computes the gcd and produces the needed linear combination x,y. The algorithm is called the Euclidean algorithm. Here is a simple version that just gives the gcd.

def gcd(a, b):
   if abs(a) < abs(b):
      return gcd(b, a)

   while abs(b) > 0:
      q,r = divmod(a,b)
      a,b = b,r

   return a

This works by the simple observation that \gcd(a, aq+r) = \gcd(a,r) (this is an easy exercise to prove directly). So the Euclidean algorithm just keeps applying this rule over and over again: take the remainder when dividing the bigger argument by the smaller argument until the remainder becomes zero. Then the \gcd(x,0) = x because everything divides zero.

Now the so-called ‘extended’ Euclidean algorithm just keeps track of some additional data as it goes (the partial quotients and remainders). Here’s the algorithm.

def extendedEuclideanAlgorithm(a, b):
   if abs(b) > abs(a):
      (x,y,d) = extendedEuclideanAlgorithm(b, a)
      return (y,x,d)

   if abs(b) == 0:
      return (1, 0, a)

   x1, x2, y1, y2 = 0, 1, 1, 0
   while abs(b) > 0:
      q, r = divmod(a,b)
      x = x2 - q*x1
      y = y2 - q*y1
      a, b, x2, x1, y2, y1 = b, r, x1, x, y1, y

   return (x2, y2, a)

Indeed, the reader who hasn't seen this stuff before is encouraged to trace out a run for the numbers 4864, 3458. Their gcd is 38, and the two coefficients are 32 and -45, respectively: indeed, 32 \cdot 4864 - 45 \cdot 3458 = 38.

How does this help us compute inverses? Well, if we want to find the inverse of a modulo p, we know that their gcd is 1. So compute the x,y such that ax + py = 1, and then reduce both sides mod p. You get ax + 0 = 1 \mod p, which means that x \mod p is the inverse of a. So once we have the extended Euclidean algorithm our inverse function is trivial to write!

def inverse(self):
   x,y,d = extendedEuclideanAlgorithm(self.n, self.p)
   return IntegerModP(x)

And indeed it works as expected:

>>> mod23 = IntegersModP(23)
>>> mod23(7).inverse()
10 (mod 23)
>>> mod23(7).inverse() * mod23(7)
1 (mod 23)

Now one very cool thing, and something that makes some basic ring theory worth understanding, is that we can compute the gcd of any number type using the exact same code for the Euclidean algorithm, provided we implement an abs function and a division algorithm. Via a chain of relatively easy-to-prove lemmas, if your number type has enough structure (in particular, if it has a division algorithm that satisfies some properties), then greatest common divisors are well-defined, and the Euclidean algorithm gives us that special linear combination. And using the same trick above in finite fields, we can use the Euclidean algorithm to compute inverses.

But in order to make things work programmatically we need to be able to deal with the literal ints 0 and 1 in the algorithm. That is, we need to be able to silently typecast integers up to whatever number type we’re working with. This makes sense because all rings have 0 and 1, but it requires a bit of scaffolding to implement. In particular, typecasting things sensibly is really difficult if you aren’t careful. And the problems are compounded in a language like Python that blatantly ignores types whenever possible. [2]

So let’s take a quick break to implement a tiny type system with implicit typecasting.

[1] The reader familiar with our series on category theory will recognize this as the product of two integers in a category whose arrows represent divisibility. So by abstract nonsense, this proves that gcd’s are unique up to multiplication by a unit in any ring.
[2] In the process of writing the code for this post, I was sorely missing the stronger type systems of Java and Haskell. Never thought I’d say that, but it’s true.

A Tiny Generic Type System

The main driving force behind our type system will be a decorator called @typecheck. We covered decorators toward the end of our primer on dynamic programming, but in short a decorator is a Python syntax shortcut that allows some pre- or post-processing to happen to a function in a reusable way. All you need to do to apply the pre/post-processing is prefix the function definition with the name of the decorator.

Our decorator will be called typecheck, and it will decorate binary operations on our number types. In its basic form, our type checker will work as follows: if the types of the two operands are the same, then the decorator will just pass them on through to the operator. Otherwise, it will try to do some typecasting, and if that fails it will raise exceptions with reckless abandon.

def typecheck(f):
   def newF(self, other):
      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

So this is great, but there are two issues. The first is that this will only silently typecast if the thing we’re casting is on the right-hand side of the expression. In other words, the following will raise an exception complaining that you can’t add ints to Mod7 integers.

>>> x = IntegersModP(7)(1)
>>> 1 + x

What we need are the right-hand versions of all the operator overloads. They are the same as the usual operator overloads, but Python gives preference to the left-hand operator overloads. Anticipating that we will need to rewrite these silly right-hand overloads for every number type, and they’ll all be the same, we make two common base classes.

class DomainElement(object):
   def __radd__(self, other): return self + other
   def __rsub__(self, other): return -self + other
   def __rmul__(self, other): return self * other

class FieldElement(DomainElement):
   def __truediv__(self, other): return self * other.inverse()
   def __rtruediv__(self, other): return self.inverse() * other
   def __div__(self, other): return self.__truediv__(other)
   def __rdiv__(self, other): return self.__rtruediv__(other)

And we can go ahead and make our IntegersModP a subclass of FieldElement. [3]

But now we’re wading into very deep waters. In particular, we know ahead of time that our next number type will be for Polynomials (over the integers, or fractions, or \mathbb{Z}/p, or whatever). And we’ll want to do silent typecasting from ints and IntegersModP to Polynomials! The astute reader will notice the discrepancy. What will happen if I try to do this?

>>> MyInteger() + MyPolynomial()

Let's take this slowly: by our assumption both MyInteger and MyPolynomial have the __add__ and __radd__ functions defined on them, and each tries to typecast the other to the appropriate type. But which is called? According to Python's documentation, the left-hand side's __add__ function is tried first, and the right-hand side's __radd__ function is only consulted if the left-hand __add__ doesn't exist or returns NotImplemented.

Well that’s a problem, and we’ll deal with it in a half awkward and half elegant way. What we’ll do is endow our number types with an “operatorPrecedence” constant. And then inside our type checker function we’ll see if the right-hand operand is an object of higher precedence. If it is, we return the global constant NotImplemented, which Python takes to mean that no __add__ function was found, and it proceeds to look for __radd__. And so with this modification our typechecker is done. [4]

def typecheck(f):
   def newF(self, other):
      if (hasattr(other.__class__, 'operatorPrecedence') and
            other.__class__.operatorPrecedence > self.__class__.operatorPrecedence):
         return NotImplemented

      if type(self) is not type(other):
         try:
            other = self.__class__(other)
         except TypeError:
            message = 'Not able to typecast %s of type %s to type %s in function %s'
            raise TypeError(message % (other, type(other).__name__, type(self).__name__, f.__name__))
         except Exception as e:
            message = 'Type error on arguments %r, %r for function %s. Reason: %s'
            raise TypeError(message % (self, other, f.__name__, e))

      return f(self, other)

   return newF

We add a default operatorPrecedence of 1 to the DomainElement base class. Now this function answers our earlier question of why we want to encapsulate the prime modulus into the IntegersModP class. If this typechecker is really going to be generic, we need to be able to typecast an int by passing the single int argument to the type constructor with no additional information! Indeed, this will be the same pattern for our polynomial class and the finite field class to follow.

Now there is still one subtle problem. If we try to generate two copies of the same number type from our number-type generator (in other words, the following code snippet), we’ll get a nasty exception.

>>> mod7 = IntegersModP(7)
>>> mod7Copy = IntegersModP(7)
>>> mod7(1) + mod7Copy(2)
... fat error ...

The reason for this is that in the type-checker we’re using the Python built-in ‘is’ which checks for identity, not semantic equality. To fix this, we simply need to memoize the IntegersModP function (and all the other functions we’ll use to generate number types) so that there is only ever one copy of a number type in existence at a time.
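The memoize decorator itself isn't shown in this post; something like the following minimal version does the job (the one in the Github repository may differ in details).

# Cache the generated class by its arguments, so repeated calls like
# IntegersModP(7) return the very same class object and the identity check
# in the typechecker succeeds.
def memoize(f):
   cache = {}

   def memoizedFunction(*args, **kwargs):
      key = (args, tuple(sorted(kwargs.items())))
      if key not in cache:
         cache[key] = f(*args, **kwargs)
      return cache[key]

   memoizedFunction.cache = cache
   return memoizedFunction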

So enough Python hacking: let’s get on with implementing finite fields!

[3] This also compels us to make some slight modifications to the constructor for IntegersModP, but they’re not significant enough to display here. Check out the Github repo if you want to see.
[4] This is truly a hack, and we’ve considered submitting a feature request to the Python devs. It is conceivably useful for the operator-overloading aficionado. I’d be interested to hear your thoughts in the comments as to whether this is a reasonable feature to add to Python.

Polynomial Arithmetic

Recall from our finite field primer that every finite field can be constructed as a quotient of a polynomial ring with coefficients in \mathbb{Z}/p by some prime ideal. We spelled out exactly what this means in fine detail in the primer, so check that out before reading on.

Indeed, to construct a finite field we need to find some irreducible monic polynomial f with coefficients in \mathbb{Z}/p, and then the elements of our field will be remainders of arbitrary polynomials when divided by f. In order to determine if they’re irreducible we’ll need to compute a gcd. So let’s build a generic polynomial type with a polynomial division algorithm, and hook it into our gcd framework.

We start off in much the same way as with the IntegersModP:

# create a polynomial with coefficients in a field; coefficients are in
# increasing order of monomial degree so that, for example, [1,2,3]
# corresponds to 1 + 2x + 3x^2
@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2
      factory = lambda L: Polynomial([field(x) for x in L])

      def __init__(self, c):
         if type(c) is Polynomial:
            self.coefficients = c.coefficients
         elif isinstance(c, field):
            self.coefficients = [c]
         elif not hasattr(c, '__iter__') and not hasattr(c, 'iter'):
            self.coefficients = [field(c)]
         else:
            self.coefficients = c

         self.coefficients = strip(self.coefficients, field(0))

      def isZero(self): return self.coefficients == []

      def __repr__(self):
         if self.isZero():
            return '0'

         return ' + '.join(['%s x^%d' % (a,i) if i > 0 else '%s'%a
                              for i,a in enumerate(self.coefficients)])

      def __abs__(self): return len(self.coefficients)
      def __len__(self): return len(self.coefficients)
      def __sub__(self, other): return self + (-other)
      def __iter__(self): return iter(self.coefficients)
      def __neg__(self): return Polynomial([-a for a in self])

      def iter(self): return self.__iter__()
      def leadingCoefficient(self): return self.coefficients[-1]
      def degree(self): return abs(self) - 1

All of this code just lays down conventions. A polynomial is a list of coefficients (in increasing order of their monomial degree), the zero polynomial is the empty list of coefficients, and the abs() of a polynomial is one plus its degree. [5] Finally, instead of closing over a prime modulus, as with IntegersModP, we’re closing over the field of coefficients. In general you don’t have to have polynomials with coefficients in a field, but if they do come from a field then you’re guaranteed to get a sensible Euclidean algorithm. In the formal parlance, if k is a field then k[x] is a Euclidean domain. And for our goal of defining finite fields, we will always have coefficients from \mathbb{Z}/p, so there’s no problem.
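One small helper used in the constructor, strip, isn't shown above. A plausible version (the one in the repository may differ) just removes trailing zero coefficients, so that in particular the zero polynomial becomes the empty list.

# Remove trailing zeros from a coefficient list; `zero` is the field's zero
# element, passed in so the comparison happens in the right type.
def strip(coefficients, zero):
   if len(coefficients) == 0:
      return []
   index = len(coefficients) - 1
   while index >= 0 and coefficients[index] == zero:
      index -= 1
   return coefficients[:index + 1]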

Now we can define things like addition, multiplication, and equality using our typechecker to silently cast and watch for errors.

      @typecheck
      def __eq__(self, other):
         return self.degree() == other.degree() and all([x==y for (x,y) in zip(self, other)])

      @typecheck
      def __add__(self, other):
         newCoefficients = [sum(x) for x in itertools.zip_longest(self, other, fillvalue=self.field(0))]
         return Polynomial(newCoefficients)

      @typecheck
      def __mul__(self, other):
         if self.isZero() or other.isZero():
            return Zero()

         newCoeffs = [self.field(0) for _ in range(len(self) + len(other) - 1)]

         for i,a in enumerate(self):
            for j,b in enumerate(other):
               newCoeffs[i+j] += a*b

         return Polynomial(newCoeffs)

Notice that, if the underlying field of coefficients correctly implements the operator overloads, none of this depends on the coefficients. Reusability, baby!

And we can finish off with the division algorithm for polynomials.

      @typecheck
      def __divmod__(self, divisor):
         quotient, remainder = Zero(), self
         divisorDeg = divisor.degree()
         divisorLC = divisor.leadingCoefficient()

         while remainder.degree() >= divisorDeg:
            monomialExponent = remainder.degree() - divisorDeg
            monomialZeros = [self.field(0) for _ in range(monomialExponent)]
            monomialDivisor = Polynomial(monomialZeros + [remainder.leadingCoefficient() / divisorLC])

            quotient += monomialDivisor
            remainder -= monomialDivisor * divisor

         return quotient, remainder

Indeed, we’re doing nothing here but formalizing the grade-school algorithm for doing polynomial long division [6]. And we can finish off the function for generating this class by assigning the field member variable along with a class name. And we give it a higher operator precedence than the underlying field of coefficients so that an isolated coefficient is cast up to a constant polynomial.

@memoize
def polynomialsOver(field=fractions.Fraction):

   class Polynomial(DomainElement):
      operatorPrecedence = 2

      [... methods defined above ...]

   def Zero():
      return Polynomial([])

   Polynomial.field = field
   Polynomial.__name__ = '(%s)[x]' % field.__name__
   return Polynomial

We provide a modest test suite in the Github repository for this post, but here’s a sample test:

>>> Mod5 = IntegersModP(5)
>>> Mod11 = IntegersModP(11)
>>> polysOverQ = polynomialsOver(Fraction).factory
>>> polysMod5 = polynomialsOver(Mod5).factory
>>> polysMod11 = polynomialsOver(Mod11).factory
>>> polysOverQ([1,7,49]) / polysOverQ([7])
1/7 + 1 x^1 + 7 x^2
>>> polysMod5([1,7,49]) / polysMod5([7])
3 + 1 x^1 + 2 x^2
>>> polysMod11([1,7,49]) / polysMod11([7])
8 + 1 x^1 + 7 x^2

And indeed, the extended Euclidean algorithm works without modification, so we know our typecasting is doing what’s expected:

>>> Mod2 = IntegersModP(2)
>>> p = polynomialsOver(Mod2).factory
>>> f = p([1,0,0,0,1,1,1,0,1,1,1]) # x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + 1
>>> g = p([1,0,1,1,0,1,1,0,0,1])   # x^9 + x^6 + x^5 + x^3 + x^2 + 1
>>> theGcd = p([1,1,0,1]) # x^3 + x + 1
>>> x = p([0,0,0,0,1]) # x^4
>>> y = p([1,1,1,1,1,1]) # x^5 + x^4 + x^3 + x^2 + x + 1
>>> (x,y,theGcd) == extendedEuclideanAlgorithm(f, g)
True

[5] The mathematical name for the abs() function that we’re using is a valuation.
[6] One day we will talk a lot more about polynomial long division on this blog. You can do a lot of cool algebraic geometry with it, and the ideas there lead you to awesome applications like robot motion planning and automated geometry theorem proving.

Generating Irreducible Polynomials

Now that we’ve gotten Polynomials out of the way, we need to be able to generate irreducible polynomials over \mathbb{Z}/p of any degree we want. It might be surprising that irreducible polynomials of any degree exist [7], but in fact we know a lot more.

Theorem: The product of all irreducible monic polynomials of degree dividing m is equal to x^{p^m} - x.

This is an important theorem, but it takes a little bit more field theory than we have under our belts right now. You could summarize the proof by saying there is a one-to-one correspondence between elements of a field and monic irreducible polynomials, and then you say some things about splitting fields. You can see a more detailed proof outline here, but it assumes you’re familiar with the notion of a minimal polynomial. We will probably cover this in a future primer.

But just using the theorem we can get a really nice algorithm for determining if a polynomial f(x) of degree m is irreducible: we just look at its gcd with all the x^{p^k} - x for k smaller than m. If all the gcds are constants, then we know it's irreducible, and if even one is a non-constant polynomial then f has a nontrivial factor and is reducible. Why's that? Because if you have some nontrivial gcd d(x) = \gcd(f(x), x^{p^k} - x) for k < m, then it's a factor of f(x) by definition. And since we know every irreducible monic polynomial of degree dividing k is a factor of x^{p^k} - x, if the gcd is always 1 then there are no irreducible factors of small degree, and hence no factors at all. (If there is any divisor then there will be a monic irreducible one!) So the candidate polynomial must be irreducible. In fact, with a moment of thought it's clear that we can stop at k = m/2, as any factor of large degree will necessarily require corresponding factors of small degree. So the algorithm to check for irreducibility is just this simple loop:

def isIrreducible(polynomial, p):
   ZmodP = IntegersModP(p)
   poly = polynomialsOver(ZmodP).factory
   x = poly([0,1])
   powerTerm = x
   isUnit = lambda p: p.degree() == 0

   for _ in range(int(polynomial.degree() / 2)):
      powerTerm = powerTerm.powmod(p, polynomial)
      gcdOverZmodp = gcd(polynomial, powerTerm - x)
      if not isUnit(gcdOverZmodp):
         return False

   return True

We’re just computing the powers iteratively as x^p, (x^p)^p = x^{p^2}, \dots, x^{p^j} and in each step of the loop subtracting x and computing the relevant gcd. The powmod function is just there so that we can reduce the power mod our irreducible polynomial after each multiplication, keeping the degree of the polynomial small and efficient to work with.
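The powmod method isn't shown in this post. A sketch along the lines just described, square-and-multiply on polynomials with a reduction after every multiplication, would be a method like the following (the repository's version may differ); it assumes the % operator on polynomials, which the repository defines via the division algorithm above.

      def powmod(self, exponent, modulus):
         # raise self to an integer power modulo another polynomial, reducing
         # after every multiplication to keep the degrees small
         result = Polynomial([self.field(1)])
         base = self % modulus
         while exponent > 0:
            if exponent & 1:
               result = (result * base) % modulus
            base = (base * base) % modulus
            exponent >>= 1
         return result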

Now generating an irreducible polynomial is a little bit harder than testing for one. With a lot of hard work, however, field theorists discovered that irreducible polynomials are quite common. In fact, if you just generate the coefficients of your degree n monic polynomial at random, the chance that you’ll get something irreducible is at least 1/n. So this suggests an obvious randomized algorithm: keep guessing until you find one.

def generateIrreduciblePolynomial(modulus, degree):
   Zp = IntegersModP(modulus)
   Polynomial = polynomialsOver(Zp)

   while True:
      coefficients = [Zp(random.randint(0, modulus-1)) for _ in range(degree)]
      randomMonicPolynomial = Polynomial(coefficients + [Zp(1)])

      if isIrreducible(randomMonicPolynomial, modulus):
         return randomMonicPolynomial

Since the probability of getting an irreducible polynomial is close to 1/n, we expect to require n trials before we find one. Moreover we could give a pretty tight bound on how likely it is to deviate from the expected number of trials. So now we can generate some irreducible polynomials!

>>> generateIrreduciblePolynomial(23, 3)
21 + 12 x^1 + 11 x^2 + 1 x^3

And so now we are well-equipped to generate any finite field we want! It’s just a matter of generating the polynomial and taking a modulus after every operation.

@memoize
def FiniteField(p, m, polynomialModulus=None):
   Zp = IntegersModP(p)
   if m == 1:
      return Zp

   Polynomial = polynomialsOver(Zp)
   if polynomialModulus is None:
      polynomialModulus = generateIrreduciblePolynomial(modulus=p, degree=m)

   class Fq(FieldElement):
      fieldSize = int(p ** m)
      primeSubfield = Zp
      idealGenerator = polynomialModulus
      operatorPrecedence = 3

      def __init__(self, poly):
         if type(poly) is Fq:
            self.poly = poly.poly
         elif type(poly) is int or type(poly) is Zp:
            self.poly = Polynomial([Zp(poly)])
         elif isinstance(poly, Polynomial):
            self.poly = poly % polynomialModulus
         else:
            self.poly = Polynomial([Zp(x) for x in poly]) % polynomialModulus

         self.field = Fq

      @typecheck
      def __add__(self, other): return Fq(self.poly + other.poly)
      @typecheck
      def __sub__(self, other): return Fq(self.poly - other.poly)
      @typecheck
      def __mul__(self, other): return Fq(self.poly * other.poly)
      @typecheck
      def __eq__(self, other): return isinstance(other, Fq) and self.poly == other.poly

      def __pow__(self, n): return Fq(pow(self.poly, n))
      def __neg__(self): return Fq(-self.poly)
      def __abs__(self): return abs(self.poly)
      def __repr__(self): return repr(self.poly) + ' \u2208 ' + self.__class__.__name__

      @typecheck
      def __divmod__(self, divisor):
         q,r = divmod(self.poly, divisor.poly)
         return (Fq(q), Fq(r))

      def inverse(self):
         if self == Fq(0):
            raise ZeroDivisionError

         x,y,d = extendedEuclideanAlgorithm(self.poly, self.idealGenerator)
         return Fq(x) * Fq(d.coefficients[0].inverse())

   Fq.__name__ = 'F_{%d^%d}' % (p,m)
   return Fq

And some examples of using it:

>>> F23 = FiniteField(2,3)
>>> x = F23([1,1])
>>> x
1 + 1 x^1 ∈ F_{2^3}
>>> x*x
1 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> x**10
0 + 0 x^1 + 1 x^2 ∈ F_{2^3}
>>> 1 / x
0 + 1 x^1 + 1 x^2 ∈ F_{2^3}
>>> x * (1 / x)
1 ∈ F_{2^3}
>>> k = FiniteField(23, 4)
>>> k.fieldSize
279841
>>> k.idealGenerator
6 + 8 x^1 + 10 x^2 + 10 x^3 + 1 x^4
>>> y = k([9,21,14,12])
>>> y
9 + 21 x^1 + 14 x^2 + 12 x^3 ∈ F_{23^4}
>>> y*y
13 + 19 x^1 + 7 x^2 + 14 x^3 ∈ F_{23^4}
>>> y**5 - y
15 + 22 x^1 + 15 x^2 + 5 x^3 ∈ F_{23^4}

And that’s it! Now we can do arithmetic over any finite field we want.

[7] Especially considering that other wacky things happen like this: x^4 +1 is reducible over every finite field!

Some Words on Efficiency

There are a few things that go without stating about this program, but I’ll state them anyway.

The first is that we pay a big efficiency price for being so generic. All the typecasting we’re doing isn’t cheap, and in general cryptography needs to be efficient. For example, if I try to create a finite field of order 104729^{20}, it takes about ten to fifteen seconds to complete. This might not seem so bad for a one-time initialization, but it’s clear that our representation is somewhat bloated. We would display a graph of the expected time to perform various operations in various finite fields, but this post is already way too long.

In general, the larger and more complicated the polynomial you use to define your finite field, the longer operations will take (since dividing by a complicated polynomial is more expensive than dividing by a simple polynomial). For this reason and a few other reasons, a lot of research has gone into efficiently finding irreducible polynomials with, say, only three nonzero terms. Moreover, if we know in advance that we'll only work over fields of characteristic two we can make a whole lot of additional optimizations. Essentially, all of the arithmetic reduces to really fast bitwise operations, and things like exponentiation can easily be implemented in hardware. But it also seems that the expense coming with large field characteristics corresponds to higher security, so some researchers have tried to meet in the middle and get efficient representations of other field characteristics.

In any case, the real purpose of our implementation in this post is not for efficiency. We care first and foremost about understanding the mathematics, and to have a concrete object to play with and use in the future for other projects. And we have accomplished just that.

Until next time!

Elliptic Curves as Python Objects

Last time we saw a geometric version of the algorithm to add points on elliptic curves. We went quite deep into the formal setting for it (projective space \mathbb{P}^2), and we spent a lot of time talking about the right way to define the “zero” object in our elliptic curve so that our issues with vertical lines would disappear.

With that understanding in mind we now finally turn to code, and write classes for curves and points and implement the addition algorithm. As usual, all of the code we wrote in this post is available on this blog’s Github page.

Points and Curves

Every introductory programming student has probably written the following program in some language for a class representing a point.

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

It’s the simplest possible nontrivial class: an x and y value initialized by a constructor (and in Python all member variables are public).

We want this class to represent a point on an elliptic curve, and overload the addition and negation operators so that we can do stuff like this:

p1 = Point(3,7)
p2 = Point(4,4)
p3 = p1 + p2

But as we’ve spent quite a while discussing, the addition operators depend on the features of the elliptic curve they’re on (we have to draw lines and intersect it with the curve). There are a few ways we could make this happen, but in order to make the code that uses these classes as simple as possible, we’ll have each point contain a reference to the curve they come from. So we need a curve class.

It’s pretty simple, actually, since the class is just a placeholder for the coefficients of the defining equation. We assume the equation is already in the Weierstrass normal form, but if it weren’t one could perform a whole bunch of algebra to get it in that form (and you can see how convoluted the process is in this short report or page 115 (pdf p. 21) of this book). To be safe, we’ll add a few extra checks to make sure the curve is smooth.

class EllipticCurve(object):
   def __init__(self, a, b):
      # assume we're already in the Weierstrass form
      self.a = a
      self.b = b

      self.discriminant = -16 * (4 * a*a*a + 27 * b * b)
      if not self.isSmooth():
         raise Exception("The curve %s is not smooth!" % self)

   def isSmooth(self):
      return self.discriminant != 0

   def testPoint(self, x, y):
      return y*y == x*x*x + self.a * x + self.b

   def __str__(self):
      return 'y^2 = x^3 + %Gx + %G' % (self.a, self.b)

   def __eq__(self, other):
      return (self.a, self.b) == (other.a, other.b)

And here are some examples of creating curves:

>>> EllipticCurve(a=17, b=1)
y^2 = x^3 + 17x + 1
>>> EllipticCurve(a=0, b=0)
Traceback (most recent call last):
  [...]
Exception: The curve y^2 = x^3 + 0x + 0 is not smooth!

So there we have it. Now when we construct a Point, we add the curve as the extra argument and a safety-check to make sure the point being constructed is on the given elliptic curve.

class Point(object):
   def __init__(self, curve, x, y):
      self.curve = curve # the curve containing this point
      self.x = x
      self.y = y

      if not curve.testPoint(x,y):
         raise Exception("The point %s is not on the given curve %s" % (self, curve))

Note that this last check will serve as a coarse unit test for all of our examples. If we mess up then more likely than not the “added” point won’t be on the curve at all. More precise testing is required to be bullet-proof, of course, but we leave explicit tests to the reader as an excuse to get their hands dirty with equations.

Some examples:

>>> c = EllipticCurve(a=1,b=2)
>>> Point(c, 1, 2)
(1, 2)
>>> Point(c, 1, 1)
Traceback (most recent call last):
  [...]
Exception: The point (1, 1) is not on the given curve y^2 = x^3 + 1x + 2

Before we go ahead and implement addition and the related functions, we need to decide how we want to represent the ideal point [0 : 1 : 0]. We have two options. The first is to do everything in projective coordinates and define a whole system for doing projective algebra. Considering we only have one point to worry about, this seems like overkill (but could be fun). The second option, and the one we’ll choose, is to have a special subclass of Point that represents the ideal point.

class Ideal(Point):
   def __init__(self, curve):
      self.curve = curve

   def __str__(self):
      return "Ideal"

Note the inheritance is denoted by the parenthetical (Point) in the first line. Each function we define on a Point will require a 1-2 line overriding function in this subclass, so we will only need a small amount of extra bookkeeping. For example, negation is quite easy.

class Point(object):
   ...
   def __neg__(self):
      return Point(self.curve, self.x, -self.y)

class Ideal(Point):
   ...
   def __neg__(self):
      return self

Note that Python allows one to override the prefix-minus operation by defining __neg__ on a custom object. There are similar functions for addition (__add__), subtraction, and pretty much every built-in Python operation. And of course addition is where things get more interesting. For the ideal point it’s trivial.

class Ideal(Point):
   ...
   def __add__(self, Q):
      return Q

Why does this make sense? Because (as we’ve said last time) the ideal point is the additive identity in the group structure of the curve. So by all of our analysis, P + 0 = 0 + P = P, and the code is satisfyingly short.

For distinct points we have to follow the algorithm we used last time. Remember that the trick was to form the line L(x) passing through the two points being added, substitute that line for y in the elliptic curve, and then figure out the coefficient of x^2 in the resulting polynomial. Then, using the two existing points, we could solve for the third root of the polynomial using Vieta’s formula.

In order to do that, we need to analytically solve for the coefficient of the x^2 term of the equation L(x)^2 = x^3 + ax + b. It’s tedious, but straightforward. First, write

\displaystyle L(x) = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) + y_1

The first step of expanding L(x)^2 gives us

\displaystyle L(x)^2 = y_1^2 + 2y_1 \left ( \frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) + \left [ \left (\frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) \right ]^2

And we notice that the only term containing an x^2 part is the last one. Expanding that gives us

\displaystyle \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 (x^2 - 2xx_1 + x_1^2)

And again we can discard the parts that don’t involve x^2. In other words, if we were to rewrite L(x)^2 = x^3 + ax + b as 0 = x^3 - L(x)^2 + ax + b, we’d expand all the terms and get something that looks like

\displaystyle 0 = x^3 - \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 x^2 + C_1x + C_2

where C_1, C_2 are some constants that we don’t need. Now using Vieta’s formula and calling x_3 the third root we seek, we know that

\displaystyle x_1 + x_2 + x_3 = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2

Which means that x_3 = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 - x_2 - x_1. Once we have x_3, we can get y_3 from the equation of the line y_3 = L(x_3).

Note that this only works if the two points we’re trying to add are different! The other two cases were if the points were the same or lying on a vertical line. These gotchas will manifest themselves as conditional branches of our add function.

class Point(object):
   ...
   def __add__(self, Q):
      if isinstance(Q, Ideal):
         return self

      x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

      if (x_1, y_1) == (x_2, y_2):
         # use the tangent method
         ...
      else:
         if x_1 == x_2:
            return Ideal(self.curve) # vertical line

         # Using Vieta's formula for the sum of the roots
         m = (y_2 - y_1) / (x_2 - x_1)
         x_3 = m*m - x_2 - x_1
         y_3 = m*(x_3 - x_1) + y_1

         return Point(self.curve, x_3, -y_3)

First, we check if the two points are the same, in which case we use the tangent method (which we do next). Supposing the points are different, if their x values are the same then the line is vertical and the third point is the ideal point. Otherwise, we use the formula we defined above. Note the subtle and crucial minus sign at the end! The point (x_3, y_3) is the third point of intersection, but we still have to do the reflection to get the sum of the two points.
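As a quick sanity check of the distinct-point case, take the curve y^2 = x^3 - 2x + 4 (the same curve we'll use in the examples below) with P = (3, 5) and Q = (-2, 0). The slope is m = (0 - 5)/(-2 - 3) = 1, so x_3 = 1^2 - (-2) - 3 = 0 and the line gives L(x_3) = 1 \cdot (0 - 3) + 5 = 2. The third intersection point is (0, 2), and reflecting it gives the sum P + Q = (0, -2), which is exactly what our implementation outputs below.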

Now for the case when the points P, Q are actually the same. We’ll call it P = (x_1, y_1), and we’re trying to find 2P = P+P. As per our algorithm, we compute the tangent line J(x) at P. In order to do this we need just a tiny bit of calculus. To find the slope of the tangent line we implicitly differentiate the equation y^2 = x^3 + ax + b and get

\displaystyle \frac{dy}{dx} = \frac{3x^2 + a}{2y}

The only time we’d get a vertical line is when the denominator is zero (you can verify this by taking limits if you wish), and so y=0 implies that P+P = 0 and we’re done. The fact that this can ever happen for a nonzero P should be surprising to any reader unfamiliar with groups! But without delving into a deep conversation about the different kinds of group structures out there, we’ll have to settle for such nice surprises.

In the other case y \neq 0, we plug in our x,y values into the derivative and read off the slope m as (3x_1^2 + a)/(2y_1). Then using the same point slope formula for a line, we get J(x) = m(x-x_1) + y_1, and we can use the same technique (and the same code!) from the first case to finish.
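Running the same kind of sanity check on the curve y^2 = x^3 - 2x + 4 with P = (3, 5): the tangent slope is m = (3 \cdot 3^2 - 2)/(2 \cdot 5) = 25/10 = 5/2, so x_3 = (5/2)^2 - 3 - 3 = 1/4, the tangent line gives J(x_3) = (5/2)(1/4 - 3) + 5 = -15/8, and reflecting yields 2P = (1/4, 15/8). This matches the floating point output (0.25, 1.875) in the examples below.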

There is only one minor wrinkle we need to smooth out: can we be sure Vieta’s formula works? In fact, the real problem is this: how do we know that x_1 is a double root of the resulting cubic? Well, this falls out again from that very abstract and powerful theorem of Bezout. There is a lot of technical algebraic geometry (and a very interesting but complicated notion of dimension) hiding behind the curtain here. But for our purposes it says that our tangent line intersects the elliptic curve with multiplicity 2, and this gives us a double root of the corresponding cubic.

And so in the addition function all we need to do is change the slope we’re using. This gives us a nice and short implementation

def __add__(self, Q):
      if isinstance(Q, Ideal):
         return self

      x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

      if (x_1, y_1) == (x_2, y_2):
         if y_1 == 0:
            return Ideal(self.curve)

         # slope of the tangent line
         m = (3 * x_1 * x_1 + self.curve.a) / (2 * y_1)
      else:
         if x_1 == x_2:
            return Ideal(self.curve)

         # slope of the secant line
         m = (y_2 - y_1) / (x_2 - x_1)

      x_3 = m*m - x_2 - x_1
      y_3 = m*(x_3 - x_1) + y_1

      return Point(self.curve, x_3, -y_3)

What’s interesting is how little the data of the curve comes into the picture. Nothing depends on b, and only one of the two cases depends on a. This is one reason the Weierstrass normal form is so useful, and it may bite us in the butt later in the few cases we don’t have it (for special number fields).

Here are some examples.

>>> C = EllipticCurve(a=-2,b=4)
>>> P = Point(C, 3, 5)
>>> Q = Point(C, -2, 0)
>>> P+Q
(0.0, -2.0)
>>> Q+P
(0.0, -2.0)
>>> Q+Q
Ideal
>>> P+P
(0.25, 1.875)
>>> P+P+P
Traceback (most recent call last):
  ...
Exception: The point (-1.958677685950413, 0.6348610067618328) is not on the given curve y^2 = x^3 + -2x + 4!

>>> x = -1.958677685950413
>>> y = 0.6348610067618328
>>> y*y - x*x*x + 2*x - 4
-3.9968028886505635e-15

And so we crash headfirst into our first floating point arithmetic issue. We’ll vanquish this monster more permanently later in this series (in fact, we’ll just scrap it entirely and define our own number system!), but for now here’s a quick fix:

>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> P+P+P
(Fraction(-237, 121), Fraction(845, 1331))

Now that we have addition and negation, the rest of the class is just window dressing. For example, we want to be able to use the subtraction symbol, and so we need to implement __sub__

def __sub__(self, Q):
   return self + -Q

Note that because the Ideal point is a subclass of Point, it inherits special functions like __sub__ for free; so far it has only needed to override __add__ and __neg__. Thank you, polymorphism! The last function we want is a scaling function, which efficiently adds a point to itself n times.

class Point(object):
   ...
   def __mul__(self, n):
      if not isinstance(n, int):
         raise Exception("Can't scale a point by something which isn't an int!")

      if n < 0:
         return -self * -n
      if n == 0:
         return Ideal(self.curve)

      # double-and-add: Q runs through P, 2P, 4P, ..., while R accumulates
      # the doublings corresponding to the 1 bits of n
      Q = self
      R = self if n & 1 == 1 else Ideal(self.curve)

      i = 2
      while i <= n:
         Q = Q + Q

         if n & i == i:
            R = Q + R

         i = i << 1

      return R

   def __rmul__(self, n):
      return self * n

class Ideal(Point):
    ...
    def __mul__(self, n):
        if not isinstance(n, int):
            raise Exception("Can't scale a point by something which isn't an int!")
        else:
            return self

The scaling function allows us to quickly compute nP = P + P + \dots + P (n times). Indeed, the fact that we can do this more efficiently than performing n additions is what makes elliptic curve cryptography work. We’ll take a deeper look at this in the next post, but for now let’s just say what the algorithm is doing.

Given a number written in binary n = b_kb_{k-1}\dots b_1b_0, we can write nP as

\displaystyle b_0 P + b_1 2P + b_2 4P + \dots + b_k 2^k P

The advantage of this is that we can compute each of the P, 2P, 4P, \dots, 2^kP iteratively using only k additions by multiplying by 2 (adding something to itself) k times. Since the number of bits in n is k \approx \log_2(n), we’re getting a huge improvement over n additions.

The algorithm is given above in code, but it’s a simple bit-shifting trick. Just have i be some power of two, shifted left by one at the end of every loop iteration. Then start with Q_0 being P, and replace Q_{j+1} = Q_j + Q_j, and in typical programming fashion we drop the indices and overwrite the variable binding at each step (Q = Q+Q). Finally, we have a variable R to which Q_j is added when the j-th bit of n is a 1 (and ignored when it’s 0). The rest is bookkeeping.
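For example, with n = 5 = 101_2: the lowest bit is 1, so R starts out as P; at i = 2 we double to Q = 2P but that bit of n is 0, so R is untouched; at i = 4 we double again to Q = 4P and that bit is 1, so R becomes 4P + P = 5P; then i = 8 > 5 ends the loop. That's two doublings and one extra addition, three curve operations instead of the four needed to add P to itself naively.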

Note that __mul__ only allows us to write something like P * n, but the standard notation for scaling is n * P. This is what __rmul__ allows us to do.

We could add many other helper functions, such as ones to allow us to treat points as if they were lists, check for equality of points, sort a list of points in lex order, or transform points into more standard types like tuples and lists. We have done a few of these that you can see if you visit the code repository, but we’ll leave fleshing out the class as an exercise to the reader.
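As one possible sketch of a couple of these (the repository's versions may differ in detail), equality and list-style indexing only take a few lines each, with the usual small override for the ideal point:

class Point(object):
   ...
   def __eq__(self, other):
      # two finite points are equal when they lie on the same curve
      # and have the same coordinates
      return isinstance(other, Point) and not isinstance(other, Ideal) and \
         (self.curve, self.x, self.y) == (other.curve, other.x, other.y)

   def __getitem__(self, index):
      # allows P[0], P[1], and hence tuple(P) and list(P)
      return [self.x, self.y][index]

class Ideal(Point):
   ...
   def __eq__(self, other):
      return isinstance(other, Ideal) and self.curve == other.curve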

Some examples:

>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> Q = Point(C, frac(-2), frac(0))
>>> P-Q
(Fraction(0, 1), Fraction(-2, 1))
>>> P+P+P+P+P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> 5*P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> Q - 3*P
(Fraction(240, 1), Fraction(3718, 1))
>>> -20*P
(Fraction(872171688955240345797378940145384578112856996417727644408306502486841054959621893457430066791656001, 520783120481946829397143140761792686044102902921369189488390484560995418035368116532220330470490000), Fraction(-27483290931268103431471546265260141280423344817266158619907625209686954671299076160289194864753864983185162878307166869927581148168092234359162702751, 11884621345605454720092065232176302286055268099954516777276277410691669963302621761108166472206145876157873100626715793555129780028801183525093000000))

As one can see, the precision gets very large very quickly. One thing we’ll do to avoid such large numbers (but hopefully not sacrifice security) is to work in finite fields, the simplest version of which is to compute modulo some prime.
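To preview what that looks like, here is a minimal standalone sketch (hypothetical, not the implementation we'll eventually build) of the same chord-and-tangent formulas with every operation reduced modulo a small prime. The curve y^2 = x^3 + x + 1 over the integers mod 23 is a common textbook choice, not one of the curves from this post, and the sketch ignores the ideal point and the vertical-line cases for brevity.

# A hypothetical sketch of point addition modulo a prime p; division becomes
# multiplication by a modular inverse, computed here via Fermat's little theorem.
p = 23
a, b = 1, 1   # the curve y^2 = x^3 + x + 1 over the integers mod 23

def addModP(P, Q):
   (x1, y1), (x2, y2) = P, Q
   if (x1, y1) == (x2, y2):
      m = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p)   # tangent slope
   else:
      m = (y2 - y1) * pow(x2 - x1, p - 2, p)          # secant slope
   x3 = (m * m - x1 - x2) % p
   y3 = (-(m * (x3 - x1) + y1)) % p                   # reflect to get the sum
   return (x3, y3)

# For instance, (0, 1) and (1, 7) both satisfy y^2 = x^3 + x + 1 (mod 23),
# and addModP((0, 1), (1, 7)) gives (12, 19), which is again on the curve.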

So now we have a concrete understanding of the algorithm for adding points on elliptic curves, and a working Python program to do this for rational numbers or floating point numbers (if we want to deal with precision issues). Next time we’ll continue this train of thought and upgrade our program (with very little work!) to work over other simple number fields. Then we’ll delve into the cryptographic issues, and talk about how one might encode messages on a curve and use algebraic operations to encrypt them.

Until then!