Sending and Authenticating Messages with Elliptic Curves

Last time we saw the Diffie-Hellman key exchange protocol, and discussed the discrete logarithm problem and the related Diffie-Hellman problem, which form the foundation for the security of most protocols that use elliptic curves. Let’s continue our journey to investigate some more protocols.

Just as a reminder, the Python implementations of these protocols are not at all meant for practical use, but for learning purposes. We provide the code on this blog’s Github page, but for the love of security don’t actually use them.

Shamir-Massey-Omura

Recall that there are lots of ways to send encrypted messages if you and your recipient share some piece of secret information, and the Diffie-Hellman scheme allows one to securely generate a piece of shared secret information. Now we’ll shift gears and assume you don’t have a shared secret, nor any way to acquire one. The first cryptosystem in that vein is called the Shamir-Massey-Omura protocol. It’s only slightly more complicated to understand than Diffie-Hellman, and it turns out to be equivalently difficult to break.

The idea is best explained by metaphor. Alice wants to send a message to Bob, but all she has is a box and a lock for which she has the only key. She puts the message in the box and locks it with her lock, and sends it to Bob. Bob can’t open the box, but he can send it back with a second lock on it for which Bob has the only key. Upon receiving it, Alice unlocks her lock, sends the box back to Bob, and Bob can now open the box and retrieve the message.

To celebrate the return of Game of Thrones, we’ll demonstrate this protocol with an original Lannister Infographic™.

Assuming the box and locks are made of magically unbreakable Valyrian steel, nobody but Bob (also known as Jamie) will be able to read the message.

Now fast forward through the enlightenment, industrial revolution, and into the age of information. The same idea works, and it’s significantly faster over long distances. Let C be an elliptic curve over a finite field k (we’ll fix k = \mathbb{Z}/p for some prime p, though it works for general fields too). Let n be the number of points on C.

Alice's message is going to be in the form of a point M on C. She'll then choose her secret integer 0 < s_A < n with \gcd(s_A, n) = 1 and compute s_AM (locking the secret in the box), sending the result to Bob. Bob will likewise pick a secret integer s_B coprime to n, and send s_Bs_AM back to Alice.

Now the unlocking part: since s_A is invertible modulo n (this is why we required \gcd(s_A, n) = 1), Alice can "unlock the box" by computing the inverse s_A^{-1} \mod n and computing s_BM = s_A^{-1}s_Bs_AM. This works because s_A^{-1}s_A \equiv 1 \mod n and nM = 0, so the extra multiples of M vanish. Now the "box" just has Bob's lock on it. So Alice sends s_BM back to Bob, and Bob performs the same process to evaluate s_B^{-1}s_BM = M, thus receiving the message.
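
To make the bookkeeping concrete, here's a minimal sketch of the whole exchange using the toy curve and point from the signature example later in this post (that group has 1047 points, and the chosen point has order 349). It assumes the EllipticCurve, Point, and FiniteField classes we've built in this series; the secret integers are hard-coded for illustration, and pow(x, -1, n) for the modular inverse requires Python 3.8+.

F = FiniteField(1061, 1)
curve = EllipticCurve(a=F(3), b=F(181))
M = Point(curve, F(2), F(81))   # Alice's message, already encoded as a point
n = 1047                        # the number of points on the curve (3 * 349)

sA, sB = 5, 8                   # secret integers, chosen coprime to n
lockedByAlice = sA * M                        # Alice -> Bob
lockedByBoth = sB * lockedByAlice             # Bob -> Alice
lockedByBob = pow(sA, -1, n) * lockedByBoth   # Alice removes her lock, sends to Bob
unlocked = pow(sB, -1, n) * lockedByBob       # Bob removes his lock

assert unlocked == M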

Like we said earlier, the security of this protocol is equivalent to the security of the Diffie-Hellman problem. In this case, if we call z = s_A^{-1} and y = s_B^{-1}, and P = s_As_BM, then it’s clear that any eavesdropper would have access to P, zP, and yP, and they would be tasked with determining zyP, which is exactly the Diffie-Hellman problem.

Now Alice’s secret message comes in the form of a point on an elliptic curve, so how might one translate part of a message (which is usually represented as an integer) into a point? This problem seems to be difficult in general, and there’s no easy answer. Here’s one method originally proposed by Neal Koblitz that uses a bit of number theory trickery.

Let C be given by the equation y^2 = x^3 + ax + b, again over \mathbb{Z}/p. Suppose 0 \leq m < p/100 is our message. Define for any 0 \leq j < 100 the candidate x-points x_j = 100m + j. Then call our candidate y^2-values s_j = x_j^3 + ax_j + b. Now for each j we can compute x_j, s_j, and so we’ll pick the first one for which s_j is a square in \mathbb{Z}/p and we’ll get a point on the curve. How can we tell if s_j is a square? One condition is that s_j^{(p-1)/2} \equiv 1 \mod p. This is a basic fact about quadratic residues modulo primes; see these notes for an introduction and this Wikipedia section for a dense summary.

Once we know it’s a square, we can compute the square root depending on whether p \equiv 1 \mod 4 or p \equiv 3 \mod 4. In the latter case, it’s just s_j^{(p+1)/4} \mod p. Unfortunately the former case is more difficult (really, the difficult part is p \equiv 1 \mod 8). You can see Section 1.5 of this textbook for more details and three algorithms, or you could just pick primes congruent to 3 mod 4.
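
Here's a minimal sketch of this encoding for primes p \equiv 3 \mod 4; the function name and the plain-integer interface are just for illustration and aren't part of this series' codebase.

def encodeMessageAsPoint(m, a, b, p):
   for j in range(100):
      x = 100 * m + j                         # candidate x-coordinate
      s = (x**3 + a*x + b) % p                # candidate y^2 value
      if pow(s, (p - 1) // 2, p) == 1:        # Euler's criterion: s is a nonzero square mod p
         return (x, pow(s, (p + 1) // 4, p))  # square root, valid because p = 3 mod 4
   return None   # each j fails with probability about 1/2, so this is exceedingly unlikely

Decoding is just m = x // 100, i.e., stripping off the last two digits of the x-coordinate.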

I have struggled to find information about the history of the Shamir-Massey-Omura protocol; every author claims it’s not widely used in practice, and the only reason seems to be that this protocol doesn’t include a suitable method for authenticating the validity of a message. In other words, some “man in the middle” could be intercepting messages and tricking you into thinking he is your intended recipient. Coupling this with the difficulty of encoding a message as a point seems to be enough to make cryptographers look for other methods. Another reason could be that the system was patented in 1982 and is currently held by SafeNet, one of the US’s largest security providers. All of their products have generic names so it’s impossible to tell if they’re actually using Shamir-Massey-Omura. I’m no patent lawyer, but it could simply be that nobody else is allowed to implement the scheme.

Digital Signatures

Indeed, the discussion above raises the question: how does one authenticate a message? The standard technique is called a digital signature, and we can implement those using elliptic curve techniques as well. To debunk the naive idea, one cannot simply attach some static piece of extra information to the message. An attacker could just copy that information and replicate it to forge your signature on another, potentially malicious document. In other words, a signature should only work for the message it was used to sign. The technique we’ll implement was originally proposed by Taher Elgamal, and is called the ElGamal signature algorithm. We’re going to look at a special case of it.

So Alice wants to send a message m with some extra information that is unique to the message and that can be used to verify that it was sent by Alice. She picks an elliptic curve E over \mathbb{F}_q in such a way that the number of points on E is br, where b is a small integer and r is a large prime.

Then, as in Diffie-Hellman, she picks a base point Q that has order r and a secret integer s (which is permanent), and computes P = sQ. Alice publishes everything except s:

Public information: \mathbb{F}_q, E, b, r, Q, P

Let Alice's message m be represented as an integer less than r (there are a few ways to get around this if your message is too long). Now to sign m Alice picks a message-specific integer k < r and computes what I'll call the auxiliary point A = kQ. Let A = (x, y). Alice then computes the signature g = k^{-1}(m + s x) \mod r. The signed message is then (m, A, g), which Alice can safely send to Bob.

Before we see how Bob verifies the message, notice that the signature integer involves everything: Alice’s secret key, the message-specific secret integer k, and most importantly the message. Remember that this is crucial: we want the signature to work only for the message that it was used to sign. If the same k is used for multiple messages then the attacker can find out your secret key! (And this has happened in practice; see the end of the post.)

So Bob receives (m, A, g), and also has access to all of the public information listed above. Bob authenticates the message by computing the auxiliary point via a different route. First, he computes c = g^{-1} m \mod r and d = g^{-1}x \mod r, and then A' = cQ + dP. If the message was signed by Alice then A' = A, since we can just write out the definition of everything:

\displaystyle A' = cQ + dP = g^{-1}mQ + g^{-1}xsQ = g^{-1}(m + sx)Q = k(m + sx)^{-1}(m + sx)Q = kQ = A

Now to analyze the security. The attacker wants to be able to take any message m' and produce a signature A', g' that will pass validation with Alice’s public information. If the attacker knew how to solve the discrete logarithm problem efficiently this would be trivial: compute s and then just sign like Alice does. Without that power there are still a few options. If the attacker can figure out the message-specific integer k, then she can compute Alice’s secret key s as follows.

Given g = k^{-1}(m + sx) \mod r, multiply both sides by k to get kg \equiv m + sx \mod r. Letting d = \gcd(x, r), this congruence has only d possible solutions for s modulo r. Since s is less than r, the attacker can just try all the options until finding the one with P = sQ. So that's bad, but in a properly implemented signature algorithm finding k is as hard as solving the discrete logarithm problem, so we can assume we're relatively safe from that.

On the other hand one could imagine being able to conjure the pieces of the signature A', g' by some method that doesn’t involve directly finding Alice’s secret key. Indeed, this problem is less well-studied than the Diffie-Hellman problem, but most cryptographers believe it’s just as hard. For more information, this paper surveys the known attacks against this signature algorithm, including a successful attack for fields of characteristic two.

Signature Implementation

We can go ahead and implement the signature algorithm once we've picked a suitable elliptic curve. For the purpose of demonstration we'll use a small curve, E: y^2 = x^3 + 3x + 181 over F = \mathbb{Z}/1061, whose number of points happens to have a suitable prime factorization (1047 = 3 \cdot 349). If you're interested in counting the number of points on an elliptic curve, there are many theorems and efficient algorithms to do this, and if you've been reading this whole series, an algorithm based on the Baby-Step Giant-Step idea would be easy to implement. For the sake of brevity, we leave it as an exercise to the reader.

Note that the code we present is based on the elliptic curve and finite field code we've been implementing as part of this series. All of the code used in this post is available on this blog's Github page.

The basepoint we’ll pick has to have order 349, and E has plenty of candidates. We’ll use (2, 81), and we’ll randomly generate a secret key that’s less than 349 (eight bits will do). So our setup looks like this:

if __name__ == "__main__":
   F = FiniteField(1061, 1)

   # y^2 = x^3 + 3x + 181
   curve = EllipticCurve(a=F(3), b=F(181))
   basePoint = Point(curve, F(2), F(81))
   basePointOrder = 349
   secretKey = generateSecretKey(8)
   publicKey = secretKey * basePoint

Then to sign a message we generate a random key, construct the auxiliary point and the signature, and return:

def sign(message, basePoint, basePointOrder, secretKey):
   modR = FiniteField(basePointOrder, 1)
   oneTimeSecret = generateSecretKey(len(bin(basePointOrder)) - 3) # numbits(order) - 1

   auxiliaryPoint = oneTimeSecret * basePoint
   signature = modR(oneTimeSecret).inverse() * (
         modR(message) + modR(secretKey) * modR(auxiliaryPoint[0]))

   return (message, auxiliaryPoint, signature)

So far so good. Note that we generate the message-specific k at random, and this implies we need a high-quality source of randomness (what's called a cryptographically secure pseudorandom number generator). In the absence of that, there are proposed deterministic methods for generating k. See this draft proposal of Thomas Pornin for one, and this paper of Daniel Bernstein for another.

Now to authenticate, we follow the procedure from earlier.

def authentic(signedMessage, basePoint, basePointOrder, publicKey):
   modR = FiniteField(basePointOrder, 1)
   (message, auxiliary, signature) = signedMessage

   sigInverse = modR(signature).inverse() # sig can be an int or a modR already
   c, d = sigInverse * modR(message), sigInverse * modR(auxiliary[0])

   auxiliaryChecker = int(c) * basePoint + int(d) * publicKey
   return auxiliaryChecker == auxiliary

Continuing with our example, we pick a message represented as an integer smaller than r, sign it, and validate it.

>>> message = 123
>>> signedMessage = sign(message, basePoint, basePointOrder, secretKey)
>>> signedMessage
(123, (220 (mod 1061), 234 (mod 1061)), 88 (mod 349))
>>> authentic(signedMessage, basePoint, basePointOrder, publicKey)
True

So there we have it, a nice implementation of the digital signature algorithm.

When Digital Signatures Fail

As we mentioned, it's extremely important to avoid using the same k for two different messages. If you do, then you'll get two signed messages (m_1, A_1, g_1), (m_2, A_2, g_2), but by definition the two g's have a ton of information in common! An attacker can recognize this immediately because A_1 = A_2, and figure out the secret key s as follows. First write

\displaystyle g_1 - g_2 \equiv k^{-1}(m_1 + sx) - k^{-1}(m_2 + sx) \equiv k^{-1}(m_1 - m_2) \mod r.

Now we have something of the form \text{known}_1 \equiv (k^{-1}) \text{known}_2 \mod r, and similarly to the attack described earlier we can try all possibilities until we find a number that satisfies A = kQ. Then once we have k we have already seen how to find s. Indeed, it would be a good exercise for the reader to implement this attack.
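
For the curious (and as a spoiler, so look away if you want to work out the exercise yourself), here's a hedged sketch of the attack in the notation of our implementation above. It assumes Python 3.8+ for pow(x, -1, r), converts field elements to integers the same way the authentic function does, and needs m_1 \neq m_2 so that g_1 - g_2 is invertible modulo the prime r.

def recoverSecretKey(signedMessage1, signedMessage2, r):
   (m1, A1, g1), (m2, A2, g2) = signedMessage1, signedMessage2
   assert A1 == A2   # the same one-time k was used for both signatures
   x, g1, g2 = int(A1[0]), int(g1), int(g2)

   # g1 - g2 = k^{-1}(m1 - m2) mod r, so k = (m1 - m2)(g1 - g2)^{-1} mod r
   k = (m1 - m2) * pow(g1 - g2, -1, r) % r
   # g1 = k^{-1}(m1 + s*x) mod r, so s = (k*g1 - m1) * x^{-1} mod r
   return (k * g1 - m1) * pow(x, -1, r) % r

A quick sanity check is to verify that the recovered s satisfies sQ = P against the public key.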

The attack we just described is not an idle threat. Indeed, the Sony corporation, producers of the popular Playstation video game console, made this mistake in signing software for the Playstation 3. A digital signature algorithm makes sense to validate software, because Sony wants to ensure that only Sony has the power to publish games. So Sony developers act as one party signing the data on a disc, and the console will only play a game with a valid signature. Note that the asymmetric setup is necessary because if the console had shared a secret with Sony (say, stored as plaintext within the hardware of the console), anyone with physical access to the machine could discover it.

Now here comes the cringe-worthy part. Sony made the mistake of using the same k to sign every game! Their mistake was discovered in 2010 and made public at a cryptography conference. This video of the humorous talk includes a description of the variant Sony used and the attackers describe how the mistake should have been corrected. Without a firmware update (I believe Sony's public key information was stored locally so that one could authenticate games without an internet connection), anyone could sign a piece of software and create games that are indistinguishable from something produced by Sony. That includes malicious content that, say, installs software that sends credit card information to the attacker.

So here we have a tidy story: a widely used cryptosystem with a scare story of what will go wrong when you misuse it. In the future of this series, we’ll look at other things you can do with elliptic curves, including factoring integers and testing for primality. We’ll also see some normal forms of elliptic curves that are used in place of the Weierstrass normal form for various reasons.

Until next time!

Stable Marriages and Designing Markets

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way, the problem is obviously not solvable in general.

Now what if instead of aiming for each individual’s maximum happiness we instead shoot for mutual contentedness? That is, what if “happiness” here means that nobody will ever have an incentive to cheat on their spouse? It turns out that for a mathematical version of this condition, we can always find a suitable set of marriages! These mathematical formalisms include some assumptions, such as that preferences never change and that no new individuals are added to the population. But it is nevertheless an impressive theorem that we can achieve stability no matter what everyone’s preferences are. In this post we’ll give the classical algorithm which constructs so-called “stable marriages,” and we’ll prove its correctness. Then we’ll see a slight generalization of the algorithm, in which the marriages are “polygamous,” and we’ll apply it to the problem of assigning students to internships.

As usual, all of the code used in this post is available for download at this blog’s Github page.

Historical Notes

The original algorithm for computing stable marriages was discovered by Lloyd Shapley and David Gale in the early 1960s. Shapley and Alvin Roth went on to dedicate much of their careers to designing markets and applying the stable marriage problem and its generalizations to such problems. In 2012 they jointly received the Nobel prize in economics for their work on this problem. If you want to know more about what "market design" means and why it's needed (and you have an hour to spare), consider watching the talk below by Alvin Roth at the Simons Institute's 2013 Symposium on the Visions of the Theory of Computing. Roth spends most of his time discussing the state of one particular economy, medical students and residence positions at hospitals, which he was asked to redesign. It's quite a fascinating tale, although some of the deeper remarks assume knowledge of the algorithm we cover in this post.

Alvin Roth went on to apply the ideas presented in the video to economic systems in Boston and New York City public schools, kidney exchanges, and others. They all had the same sort of structure: both parties have preferences and stability makes sense. So he actually imposed the protocol we’re about to describe in order to guarantee that the process terminates to a stable arrangement (and automating it saves everyone involved a lot of time, stress, and money! Watch the video above for more on that).

The Monogamous Stable Marriage Algorithm

Let’s formally set up the problem. Let X = \left \{ 1, 2, \dots, n \right \} be a set of n suitors and Y = \left \{ 1,2,\dots ,n \right \} be a set of n “suited.” Let \textup{pref}_{X \to Y}: X \to S_n be a list of preferences for the suitors. In words, \textup{pref}_{X \to Y} accepts as input a suitor, and produces as output an ordering on the suited members of Y. We denote the output set as S_n, which the group theory folks will recognize as the permutation group on 1, \dots, n. Likewise, there is a function \textup{pref}_{Y \to X}: Y \to S_n describing the preferences of each of the suited.

An example will help clarify these stuffy definitions. If X = \left \{ 1, 2, 3 \right \} and Y = \left \{ 1, 2, 3 \right \}, then to say that

\textup{pref}_{X \to Y}(2) = (3, 1, 2)

is to say that the second suitor prefers the third member of Y the most, and then the first member of Y, and then the second. The programmer might imagine that the datum of the problem consists of two dictionaries (one for X and one for Y) whose keys are integers and whose values are lists of integers which contain 1 through n in some order.

A solution to the problem, then, is a way to match (or marry) suitors with suited. Specifically, a matching is a bijection m: X \to Y, so that x is matched with m(x). The reason we use a bijection is because the marriages are monogamous: only one suitor can be matched with one suited and vice versa. Later we’ll see this condition dropped so we can apply it to a more realistic problem of institutions (suited) which can accommodate many applicants (suitors). Because suitor and suited are awkward to say, we’ll use the familiar, antiquated, and politically incorrect terms “men and women.”

Now if we're given a monogamous matching m, a pair x \in X, y \in Y is called unstable for m if both x,y prefer each other over their partners assigned by m. That is, (x,y) is unstable for m if y appears before m(x) in the preference list for x, \textup{pref}_{X \to Y}(x), and likewise x appears before m^{-1}(y) in \textup{pref}_{Y \to X}(y).

Another example to clarify: again let X = Y = \left \{ 1,2,3 \right \} and suppose for simplicity that our matching m pairs m(i) = i. If man 2 has the preference list (3,2,1) and woman 3 has the preference list (2,1,3), then 2 and 3 together form an unstable pair for m, because they would rather be with each other over their current partners. That is, they have a mutual incentive to cheat on their spouses. We say that the matching is unstable or admits an unstable pair if there are any unstable pairs for it, and we call the entire matching stable if it doesn’t admit any unstable pairs.

Unlike real life, mathematically unstable marriages need not feature constant arguments.

So the question at hand is: is there an algorithm which, given access to the two sets of preferences, can efficiently produce a stable matching? We can also wonder whether a stable matching is guaranteed to exist, and the answer is yes. In fact, we'll prove this and produce an efficient algorithm in one fell swoop.

The central concept of the algorithm is called deferred acceptance. The gist is like this. The algorithm operates in rounds. During each round, each man will “propose” to a woman, and each woman will pick the best proposal available. But the women will not commit to their pick. They instead reject all other suitors, who go on to propose to their second choices in the next round. At that stage each woman (who now may have a more preferred suitor than in the first round) may replace her old pick with a new one. The process continues in this manner until each man is paired with a woman. In this way, each of the women defers accepting any proposal until the end of the round, progressively increasing the quality of her choice. Likewise, the men progressively propose less preferred matches as the rounds progress.

It's easy to argue such a process must eventually converge. Indeed, the contrary would mean there's some sort of cycle in the order of proposals, but each man proposes only to strictly less preferred women than in any previous round, and the women can only strictly increase the quality of their held pick. Mathematically, we're using an important tool called monotonicity: some quantity can only increase (or only decrease) as time goes on, and since the quantity is bounded, we must eventually reach a local maximum. From there, we can prove that any local maximum satisfies the property we want (here, that the matching is stable), and we win. Indeed, supposing to the contrary that we have a pair (x,y) which is unstable for the matching m produced at the end of this process, then it must have been the case that x proposed to y in some earlier round. But y has as her final match some other suitor x' = m^{-1}(y) whom she prefers less than x. Though she may have never picked x at any point in the algorithm, she can only end up with the worse choice x' if at some point y chose a suitor that was less preferred than the suitor she already had. Since her choices are monotonic this cannot happen, so no unstable pairs can exist.

Rather than implement the algorithm in pseudocode, let's produce the entire thing in Python to make the ideas completely concrete.

Python Implementation

We start off with some simple data definitions for the two parties which, in the renewed interest of generality, we refer to as Suitor and Suited.

class Suitor(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.rejections = 0 # num rejections is also the index of the next option
      self.id = id

   def preference(self):
      return self.prefList[self.rejections]

   def __repr__(self):
      return repr(self.id)

A Suitor is simple enough: he has an id representing his “index” in the set of Suitors, and a preference list prefList which in its i-th position contains the Suitor’s i-th most preferred Suited. This is identical to our mathematical representation from earlier, where a list like (2,3,1) means that the Suitor prefers the second Suited most and the first Suited least. Knowing the algorithm ahead of time, we add an additional piece of data: the number of rejections the Suitor has seen so far. This will double as the index of the Suited that the Suitor is currently proposing to. Indeed, the preference function provides a thin layer of indirection allowing us to ignore the underlying representation, so long as one updates the number of rejections appropriately.

Now for the Suited.

class Suited(object):
   def __init__(self, id, prefList):
      self.prefList = prefList
      self.held = None
      self.currentSuitors = set()
      self.id = id

   def __repr__(self):
      return repr(self.id)

A Suited likewise has a list of preferences and an id, but in addition she has a held attribute for the currently held Suitor, and a set currentSuitors of Suitors that are currently proposing to her. Hence we can define a reject method which accepts no inputs, and returns a set of rejected suitors, while updating the woman's state to hold onto her most preferred suitor.

   def reject(self):
      if len(self.currentSuitors) == 0:
         return set()

      if self.held is not None:
         self.currentSuitors.add(self.held)

      self.held = min(self.currentSuitors, key=lambda suitor: self.prefList.index(suitor.id))
      rejected = self.currentSuitors - set([self.held])
      self.currentSuitors = set()

      return rejected

The call to min does all the work: finding the Suitor that appears first in her preference list. The rest is bookkeeping. Now the algorithm for finding a stable marriage, following the deferred acceptance algorithm, is simple.

# monogamousStableMarriage: [Suitor], [Suited] -> {Suitor -> Suited}
# construct a stable (monogamous) marriage between suitors and suiteds
def monogamousStableMarriage(suitors, suiteds):
   unassigned = set(suitors)

   while len(unassigned) > 0:
      for suitor in unassigned:
         suiteds[suitor.preference()].currentSuitors.add(suitor)
      unassigned = set()

      for suited in suiteds:
         unassigned |= suited.reject()

      for suitor in unassigned:
         suitor.rejections += 1

   return dict([(suited.held, suited) for suited in suiteds])

All the Suitors are unassigned to begin with. Each iteration of the loop corresponds to a round of the algorithm: the Suitors are added to the currentSuitors list of their next most preferred Suited. Then the Suiteds “simultaneously” reject some Suitors, whose rejection counts are upped by one and returned to the pool of unassigned Suitors. Once every Suited has held onto a Suitor we’re done.

Given a matching, we can define a function that verifies by brute force that the marriage is stable.

# verifyStable: [Suitor], [Suited], {Suitor -> Suited} -> bool
# check that the assignment of suitors to suited is a stable marriage
def verifyStable(suitors, suiteds, marriage):
   import itertools
   suitedToSuitor = dict((v,k) for (k,v) in marriage.items())
   precedes = lambda L, item1, item2: L.index(item1) < L.index(item2)

   def suitorPrefers(suitor, suited):
      return precedes(suitor.prefList, suited.id, marriage[suitor].id)

   def suitedPrefers(suited, suitor):
      return precedes(suited.prefList, suitor.id, suitedToSuitor[suited].id)

   for (suitor, suited) in itertools.product(suitors, suiteds):
      if suited != marriage[suitor] and suitorPrefers(suitor, suited) and suitedPrefers(suited, suitor):
         return False, (suitor.id, suited.id)

   return True

Indeed, we can test the algorithm on an instance of the problem.

>>> suitors = [Suitor(0, [3,5,4,2,1,0]), Suitor(1, [2,3,1,0,4,5]),
...            Suitor(2, [5,2,1,0,3,4]), Suitor(3, [0,1,2,3,4,5]),
...            Suitor(4, [4,5,1,2,0,3]), Suitor(5, [0,1,2,3,4,5])]
>>> suiteds = [Suited(0, [3,5,4,2,1,0]), Suited(1, [2,3,1,0,4,5]),
...            Suited(2, [5,2,1,0,3,4]), Suited(3, [0,1,2,3,4,5]),
...            Suited(4, [4,5,1,2,0,3]), Suited(5, [0,1,2,3,4,5])]
>>> marriage = monogamousStableMarriage(suitors, suiteds)
>>> marriage
{3: 0, 4: 4, 5: 1, 1: 2, 2: 5, 0: 3}
>>> verifyStable(suitors, suiteds, marriage)
True

We encourage the reader to check this by hand (this one only took two rounds). Even better, answer the question of whether the algorithm could ever require n steps to converge for 2n individuals, where you get to pick the preference list to try to make this scenario happen.

Stable Marriages with Capacity

We can extend this algorithm to work for “polygamous” marriages in which one Suited can accept multiple Suitors. In fact, the two problems are entirely the same! Just imagine duplicating a Suited with large capacity into many Suiteds with capacity of 1. This particular reduction is not very efficient, but it allows us to see that the same proof of convergence and correctness applies. We can then modify our classes and algorithm to account for it, so that (for example) instead of a Suited “holding” a single Suitor, she holds a set of Suitors. We encourage the reader to try extending our code above to the polygamous case as an exercise, and we’ve provided the solution in the code repository for this post on this blog’s Github page.

Ways to Make it Harder

When you study algorithmic graph problems as much as I do, you start to get disheartened. It seems like every problem is NP-hard or worse. So when we get a situation like this, a nice, efficient algorithm with very real consequences and interpretations, you start to get very excited. In between our heaves of excitement, we imagine all the other versions of this problem that we could solve and Nobel prizes we could win. Unfortunately the landscape is bleaker than that, and most extensions of stable marriage problems are NP-complete.

For example, what if we allow ties? That is, one man can be equally happy with two women. This is NP-complete. However, it turns out this extension can be formulated as an integer programming problem, and standard optimization techniques can be used to approximate a solution.

What if, thinking about the problem in terms of medical students and residencies, we allow people to pick their preferences as couples? Some med students are married, after all, and prefer to be close to their spouse even if it means they have a less preferred residency. NP-hard again. See page 53 (pdf page 71) of these notes for a more detailed investigation. The problem is essentially that there is not always a stable matching, and so even determining whether there is one is NP-complete.

So there are a lot of ways to enrich the problem, and there’s an interesting line between tractable and hard in the worst case. As a (relatively difficult) exercise, try to solve the “roommates” version of the problem, where there is no male/female distinction (anyone can be matched with anyone). It turns out to have a tractable solution, and the algorithm is similar to the one outlined in this post.

Until next time!

PS. I originally wrote this post about a year ago when I was contacted by someone in industry who agreed to provide some (anonymized) data listing the preferences of companies and interns applying to work at those companies. Not having heard from them for almost a year, I figure it’s a waste to let this finished post collect dust at the risk of not having an interesting data set. But if you, dear reader, have any data you’d like to provide that fits into the framework of stable marriages, I’d love to feature your company/service on my blog (and solve the matching problem) in exchange for the data. The only caveat is that the data would have to be public, so you would have to anonymize it.

Elliptic Curve Diffie-Hellman

So far in this series we’ve seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we’re in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography?

History

As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves was a well-known technique since antiquity, discovered by Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked about how one might go about classifying the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields.

While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), its use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn’t gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis’s blog. For now we’ll focus on the cryptographic protocols themselves.

The Discrete Logarithm Problem

Koblitz and Miller had insights aplenty, but the central observation in all of this is the following.

Adding is easy on elliptic curves, but undoing addition seems hard.

What I mean by this is usually called the discrete logarithm problem. Here's a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and that the notation ny means y + y + \dots + y (n times).

Definition: Let G be an additive group, and let x, y be elements of G so that x = ny for some integer n. The discrete logarithm problem asks one to find n when given x and y.

I like to give super formal definitions first, so let’s do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that 4185072 = (348756) 12 using the elementary-school division algorithm (in the above notation, y=12, x=4185072, and n = 348756). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers \mathbb{Z}.

The reason we use the word “logarithm” is because if your group operation is multiplication instead of addition, you’re tasked with solving the equation x = y^n for n. With real numbers you’d take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers) using the square-and-multiply algorithm. Just square y until doing so would make you bigger than x, then multiply by y until you hit x.

But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It’s a computational charity! What happens when we move to settings where we don’t have a division algorithm? In mathematical lingo: we’re really interested in the case when G is just a group, and doesn’t have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don’t know how to do division efficiently. The best we can do is add y to itself over and over until we hit x, and it could easily happen that n (as a number) is exponentially larger than the number of bits in x and y.

What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then “taking multiples” fills the role of a theoretical one-way function, and as we’ll see this opens the door for secure communication.

Here’s the formal statement of the discrete logarithm problem for elliptic curves.

Problem: Let E be an elliptic curve over a finite field k. Let P, Q be points on E such that P = nQ for some integer n. Let |P| denote the number of bits needed to describe the point P. We wish to find an algorithm which determines n and has runtime polynomial in |P| + |Q|. If we want to allow randomness, we can require the algorithm to find the correct n with probability at least 2/3.

So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can’t, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol?

The Diffie-Hellman Protocol and Problem

Let’s spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange.

A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption! We’ll cover this idea in a future series on the theory of cryptography (it’s called a one-time pad, and it’s not all that complicated). All we need now is motivation to get a shared secret.

Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let’s just jump right in and implement it with elliptic curves.

As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we’ve chosen a curve C and a point on that curve Q. Then we can take some secret integer n, and publish Q and nQ for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover n.

How can we use this to establish a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which is interpreted as a number in this protocol. Let's call Alice's secret key s_A and Bob's s_B, and note that they don't have to be the same. As the name "secret key" suggests, the secret keys are held secret. Moreover, we'll assume that everything else in this protocol, including all data sent between the two parties, is public.

So Alice and Bob agree ahead of time on a public elliptic curve C and a public point Q on C. We’ll sometimes call this point the base point for the protocol.

Bob can cunningly do the following trick: take his secret key s_B and send s_B Q to Alice. Equally slickly, Alice computes s_A Q and sends that to Bob. Now Alice, having s_B Q, computes s_A s_B Q. And Bob, since he has s_A Q, can compute s_B s_A Q. But since addition is commutative in elliptic curve groups, we know s_A s_B Q = s_B s_A Q. The secret piece of shared information can be anything derived from this new point, for example its x-coordinate.

If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points Q, s_AQ, s_BQ. What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff they encrypt with their new shared secret s_As_BQ. So the attacker wants to find out s_As_BQ. And we'll call this the Diffie-Hellman problem.

Diffie-Hellman Problem: Suppose you fix an elliptic curve E over a finite field k, and you’re given four points Q, aQ, bQ and P for some unknown integers a, b. Determine if P = abQ in polynomial time (in the lengths of Q, aQ, bQ, P).

On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem because we could compute a,b and then quickly compute abQ and check if it's P. In other words discrete log is at least as hard as this problem. On the other hand nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we're making this problem as easy as we reasonably can because we don't require you to be able to compute abQ. Even if some prankster gave you a candidate for abQ, all you have to do is check if it's correct. One could imagine some test that rules out all fakes but still doesn't allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log.

So this is our hardness assumption: assuming this problem has no efficient solution then no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret.

Python Implementation

The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page.

def sendDH(privateKey, generator, sendFunction):
   return sendFunction(privateKey * generator)

def receiveDH(privateKey, receiveFunction):
   return privateKey * receiveFunction()

And using our code from the previous posts in this series we can run it on a small test.

import os

def generateSecretKey(numBits):
   return int.from_bytes(os.urandom(numBits // 8), byteorder='big')

if __name__ == "__main__":
   F = FiniteField(3851, 1)
   curve = EllipticCurve(a=F(324), b=F(1287))
   basePoint = Point(curve, F(920), F(303))

   aliceSecretKey = generateSecretKey(8)
   bobSecretKey = generateSecretKey(8)

   alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x:x)
   bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x:x)

   sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey)
   sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey)
   print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2))

Python's os module allows us to access the operating system's random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer. Our simplistic (and totally insecure!) protocol uses the elliptic curve C defined by y^2 = x^3 + 324 x + 1287 over the finite field \mathbb{Z}/3851. We pick the base point Q = (920, 303), and call the relevant functions with placeholders for actual network transmission functions.

There is one issue we have to note. Say we fix our base point Q. Since an elliptic curve over a finite field can only have finitely many points (since the field only has finitely many possible pairs of numbers), it will eventually happen that nQ = 0 is the ideal point. Recall that the smallest value of n for which nQ = 0 is called the order of Q. And so when we’re generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick Q to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key it will be equivalent to some secret key that’s less than the order of Q. So if an attacker could guess the smaller secret key he wouldn’t need to know your larger key.
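
For toy curves like ours you can find a point's order by brute force. Here's a sketch, assuming point equality and addition with the ideal point behave as expected in this series' Point class (real curves need smarter methods):

def naiveOrder(Q):
   P, n = Q + Q, 1
   while P != Q:         # nQ = 0 exactly when (n+1)Q = Q
      P, n = P + Q, n + 1
   return n

Running naiveOrder(basePoint) on the example above agrees with the order mentioned in the next paragraph.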

The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards*) is designed to avoid these problems. The order of the base point used in the Diffie-Hellman protocol for Curve25519 has gargantuan order (like 2^{256}). So 256-bit keys can easily be used. I’m brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way.

So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring.

Comments on Insecurity

Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime?

Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod p, leading some to claim that RSA is dead. But even if we will never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes.

I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/basepoint combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side-effects of the physical implementation of the algorithm.

The problem is that different operations, doubling a point and adding two different points, have very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them to be secure. Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available.

But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews:

  1. Make sure the order of your basepoint has a short factorization (e.g., is 2p, 3p, or 4p for some large prime p). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman.
  2. Make sure your curve is not supersingular. If it is you can reduce the discrete logarithm problem to one in a different and much simpler group.
  3. If your curve C is defined over \mathbb{Z}/p, make sure the number of points on C is not equal to p. Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers.
  4. Don't pick a small underlying field like \mathbb{F}_{2^m} for small m. General-purpose attacks can be sped up significantly against such fields.
  5. If you use the field \mathbb{F}_{2^m}, ensure that m is prime. Many believe that if m has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that m being composite at all is dangerous, so we might as well make it prime.

This is a sublist of the list provided on page 28 of this white paper.

The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this.

Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols.

Until then!

Where Math ∩ Programming is Headed

This week is Spring break at UI Chicago. While I’ll be spending most of it working, it does give me some downtime to reflect. We’ve come pretty far, dear reader, in these almost three years. I learned, you learned. We all laughed. My blog has become my infinite source of entertainment and an invaluable tool for synthesizing my knowledge.

But the more I write the more ideas I have for articles, and this has been accelerating. I’m currently sitting on 55 unfinished drafts ranging from just a title and an idea to an almost-completed post. A lot of these ideas have long chains of dependencies (I can’t help myself but write on all the background math I can stomach before I do the applications). So one day I decided to draw up a little dependency graph to map out my coarse future plans. Here it is:

A map of most of my current plans for blog posts and series, and their relationships to one another. Click to enlarge.

Now all you elliptic curve fanatics can rest assured I’ll continue working that series to completion before starting on any of these big projects. This map basically gives a rough picture of things I’ve read about, studied, and been interested in over the past two years that haven’t already made it onto this blog. Some of the nodes represent passed milestones in my intellectual career, while others represent topics yet to be fully understood. Note very few specific applications are listed here (e.g., what might I use SVM to classify?), but I do have ideas for a lot of them. And note that these are very long term plans, some of which are likely never to come to fruition.

So now's your chance to speak up. What do you want to read about? What do you think is missing?