# A parlor trick for SET

Tai-Danae Bradley is one of the hosts of PBS Infinite Series, a delightful series of vignettes into fun parts of math. The video below is about the game of SET, a favorite among mathematicians. Specifically, Tai-Danae explains how SET cards lie in (using more technical jargon) a vector space over a finite field, and that valid sets correspond to lines. If you don’t immediately know how this would work, watch the video.

In this post I want to share a parlor trick for SET that I originally heard from Charlotte Chan. It uses the same ideas from the video above, which I’ll only review briefly.

In the game of SET you see a board of cards like the following, and players look for sets.

Image source: theboardgamefamily.com

A valid set is a triple of cards where, feature by feature, the characteristics on the cards are either all the same or all different. A valid set above is {one empty blue oval, two solid blue ovals, three shaded blue ovals}. The feature of “fill” is different on all the cards, but the feature of “color” is the same, etc.

In a game of SET, the cards are dealt in order from a shuffled deck, players race to claim sets, removing the set if it’s valid, and three cards are dealt to replace the removed set. Eventually the deck is exhausted and the game is over, and the winner is the player who collected the most sets.

There are a handful of mathematical tricks you can use to help you search for sets faster, but the parlor trick in this post adds a fun variant to the end of the game.

Play the game of SET normally, but when you get down to the last card in the deck, don’t reveal it. Keep searching for sets until everyone agrees no visible sets are left. Then you start the variant: the first player to guess the last un-dealt card in the deck gets a bonus set.

The math comes in when you discover that you don’t need to guess, or remember anything about the game that was just played! A clever stranger could walk into the room at the end of the game and win the bonus point.

Theorem: As long as every player claimed a valid set throughout the game, the information on the remaining board uniquely determines the last (un-dealt) card.

Before we get to the proof, some reminders. Recall that there are four features on a SET card, each of which has three options. Enumerate the options for each feature (e.g., {Squiggle, Oval, Diamond} = {0, 1, 2}).

While we will not need the geometry it induces, this enumeration identifies each card with a vector in the vector space $\mathbb{F}_3^4$, where $\mathbb{F}_3 = \mathbb{Z}/3\mathbb{Z}$ is the finite field of three elements, and the exponent means “dimension 4.” As Tai-Danae points out in the video, each valid set is an affine line in this vector space. For example, if this is the enumeration:

Source: “The Joy of Set”

Then using the enumeration, a set might be given by

$\displaystyle \{ (1, 1, 1, 1), (1, 2, 0, 1), (1, 0, 2, 1) \}$

The crucial feature for us is that the vector-sum (using the modular field arithmetic on each entry) of the cards in a valid set is the zero vector $(0, 0, 0, 0)$. This is because in each feature the three options are either all the same or all different, and both $0+0+0 = 1+1+1 = 2+2+2 = 0$ and $0+1+2 = 0$ are true mod 3.

Proof of Theorem. Consider the vector-valued invariant $S_t$ equal to the sum of the remaining cards after $t$ sets have been taken. At the beginning of the game the deck has all 81 cards, and their sum is the zero vector: in each coordinate, each of the values 0, 1, 2 appears on exactly 27 cards, and $27(0+1+2) = 0 \mod 3$. So $S_0 = (0, 0, 0, 0)$. Removing a valid set via normal play does not affect the invariant, because you’re subtracting a set of vectors whose sum is zero. So $S_t = 0$ for all $t$.

At the end of the game, the invariant still holds even if there are no valid sets left to claim. Let $x$ be the vector corresponding to the last un-dealt card, and $c_1, \dots, c_n$ be the remaining visible cards. Then $x + \sum_{i=1}^n c_i = (0,0,0,0)$, meaning $x = -\sum_{i=1}^n c_i$.

$\square$

I would provide an example, but I want to encourage everyone to play a game of SET and try it out live!

Charlotte, who originally showed me this trick, was quick enough to compute this sum in her head. So were the other math students we played SET with. It’s a bit easier than it seems since you can do the sum feature by feature. Even though I’ve known about this trick for years, I still require a piece of paper and a few minutes.

Because this is Math Intersect Programming, the reader is encouraged to implement this scheme as an exercise, and simulate a game of SET by removing randomly chosen valid sets to verify experimentally that this scheme works.
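If you want a head start on that exercise, here is a minimal sketch of such a simulation. The card representation and the helper names (`third_card`, `predict_last`) are one arbitrary choice, and claiming randomly chosen valid sets stands in for real play:

```python
import itertools
import random

def third_card(a, b):
    # the unique card completing {a, b} to a valid set: coordinates sum to 0 mod 3
    return tuple((-x - y) % 3 for x, y in zip(a, b))

def predict_last(visible):
    # the un-dealt card is the negative of the sum of the visible cards (mod 3)
    return tuple((-sum(feature)) % 3 for feature in zip(*visible))

deck = list(itertools.product(range(3), repeat=4))  # all 81 cards
random.shuffle(deck)
hidden = deck.pop()  # the last card of the deck stays face down

visible = deck  # the other 80 cards
for _ in range(20):  # players claim some valid sets
    a, b = random.sample(visible, 2)
    c = third_card(a, b)
    if c in visible:
        for card in (a, b, c):
            visible.remove(card)

print(predict_last(visible) == hidden)  # True
```

No matter how many valid sets are removed, the prediction matches the hidden card.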

Until next time!

# Load Balancing and the Power of Hashing

Here’s a bit of folklore I often hear (and retell) that’s somewhere between a joke and deep wisdom: if you’re doing a software interview that involves some algorithms problem that seems hard, your best bet is to use hash tables.

More succinctly put: Google loves hash tables.

As someone with a passion for math and theoretical CS, I find it kind of silly and reductionist. But if you actually work with terabytes of data that can’t fit on a single machine, it also makes sense.

But to understand why hash tables are so applicable, you should have at least a fuzzy understanding of the math that goes into it, which is surprisingly unrelated to the actual act of hashing. Instead it’s the guarantees that a “random enough” hash provides that makes it so useful. The basic intuition is that if you have an algorithm that works well assuming the input data is completely random, then you can probably get a good guarantee by preprocessing the input by hashing.

In this post I’ll explain the details, and show the application to an important problem that one often faces in dealing with huge amounts of data: how to allocate resources efficiently (load balancing). As usual, all of the code used in the making of this post is available on Github.

Next week, I’ll follow this post up with another application of hashing to estimating the number of distinct items in a set that’s too large to store in memory.

## Families of Hash Functions

To emphasize which specific properties of hash functions are important for a given application, we start by introducing an abstraction: a hash function is just some computable function that accepts strings as input and produces numbers between 1 and $n$ as output. We call the set of allowed inputs $U$ (for “Universe”). A family of hash functions is just a set of possible hash functions to choose from. We’ll use a scripty $\mathscr{H}$ for our family, and so every hash function $h$ in $\mathscr{H}$ is a function $h : U \to \{ 1, \dots, n \}$.

You can use a single hash function $h$ to maintain an unordered set of objects in a computer. The reason this is a problem that needs solving is that if you were to store items sequentially in a list, and you want to determine if a specific item is already in the list, you potentially have to check every item in the list (or do something fancier). In any event, without hashing you have to spend some non-negligible amount of time searching. With hashing, you can choose the location of an element $x \in U$ based on the value of its hash $h(x)$. If you pick your hash function well, then you’ll have very few collisions and can deal with them efficiently. The relevant section on Wikipedia has more about the various techniques to deal with collisions in hash tables specifically, but we want to move beyond that in this post.

But here we have an entire family of hash functions. So what’s the use of having many hash functions? You can pick a hash randomly from a “good” family of hash functions. While this doesn’t seem so magical, it has the informal property that it makes arbitrary data “random enough,” so that an algorithm which you designed to work with truly random data will also work with the hashes of arbitrary data. Moreover, even if an adversary knows $\mathscr{H}$ and knows that you’re picking a hash function at random, there’s no way for the adversary to manufacture problems by feeding bad data. With overwhelming probability the worst-case scenario will not occur. Our first example of this is in load-balancing.

You can imagine load balancing in two ways, concretely and mathematically. In the concrete version you have a public-facing server that accepts requests from users, and forwards them to a back-end server which processes them and sends a response to the user. When you have a billion users and a million servers, you want to forward the requests in such a way that no server gets too many requests, or else the users will experience delays. Moreover, you’re worried that the League of Tanzanian Hackers is trying to take down your website by sending you requests in a carefully chosen order so as to screw up your load balancing algorithm.

The mathematical version of this problem usually goes with the metaphor of balls and bins. You have some collection of $m$ balls and $n$ bins in which to put the balls, and you want to put the balls into the bins. But there’s a twist: an adversary is throwing balls at you, and you have to put them into the bins before the next ball comes, so you don’t have time to remember (or count) how many balls are in each bin already. You only have time to do a small bit of mental arithmetic, sending ball $i$ to bin $f(i)$ where $f$ is some simple function. Moreover, whatever rule you pick for distributing the balls in the bins, the adversary knows it and will throw balls at you in the worst order possible.

A young man applying his knowledge of balls and bins. That’s totally what he’s doing.

There is one obvious approach: why not just pick a uniformly random bin for each ball? The problem here is that we need the choice to be persistent. That is, if the adversary throws the same ball at us a second time, we need to put it in the same bin as the first time, and it doesn’t count toward the overall load. This is where the ball/bin metaphor breaks down. In the request/server picture, there is data specific to each user stored on the back-end server between requests (a session), and you need to make sure that data is not lost for some reasonable period of time. And if we were to save a uniform random choice after each request, we’d need to store a number for every request, which is too much. In short, we need the mapping to be persistent, but we also want it to be “like random” in effect.

So what do you do? The idea is to take a “good” family of hash functions $\mathscr{H}$, pick one $h \in \mathscr{H}$ uniformly at random for the whole game, and when you get a request/ball $x \in U$ send it to server/bin $h(x)$. Note that in this case, the adversary knows your universal family $\mathscr{H}$ ahead of time, and it knows your algorithm of committing to some single randomly chosen $h \in \mathscr{H}$, but the adversary does not know which particular $h$ you chose.

The property of a family of hash functions that makes this strategy work is called 2-universality.

Definition: A family $\mathscr{H}$ of functions from some universe $U$ to $\{ 1, \dots, n \}$ is called 2-universal if, for every two distinct $x, y \in U$, the probability over the random choice of a hash function $h$ from $\mathscr{H}$ that $h(x) = h(y)$ is at most $1/n$. In notation,

$\displaystyle \Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq \frac{1}{n}$

I’ll give an example of such a family shortly, but let’s apply this to our load balancing problem. Our load-balancing algorithm would fail if, with even some modest probability, there is some server that receives many more than its fair share ($m/n$) of the $m$ requests. If $\mathscr{H}$ is 2-universal, then we can compute an upper bound on the expected load of a given server, say server 1. Specifically, pick any element $x$ which hashes to 1 under our randomly chosen $h$. Then we can compute an upper bound on the expected number of other elements that hash to 1. In this computation we’ll only use the fact that expectation splits over sums, and the definition of 2-universal. Call $\mathbf{1}_{h(y) = 1}$ the random variable which is zero when $h(y) \neq 1$ and one when $h(y) = 1$, and call $X = \sum_{y} \mathbf{1}_{h(y) = 1}$, where the sum ranges over the $m$ hashed inputs. In words, $X$ simply represents the number of inputs that hash to 1. Then, since $h(x) = 1$ and by 2-universality,

$\displaystyle \mathbb{E}[X] = 1 + \sum_{y \neq x} \Pr[h(y) = 1] = 1 + \sum_{y \neq x} \Pr[h(y) = h(x)] \leq 1 + \frac{m-1}{n}$

So in expectation server 1 gets its fair share of requests. And clearly this doesn’t depend on the output hash being 1; it works for any server. There are two obvious questions.

1. How do we measure the risk that, despite the expectation we computed above, some server is overloaded?
2. If it seems like (1) is on track to happen, what can you do?
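Before tackling those, the expectation bound itself is easy to sanity-check with a quick simulation. Here is a sketch that uses a fully random (but persistent) mapping as a stand-in for a draw from a 2-universal family; the helper name `random_hash` is my own:

```python
import random

def random_hash(n):
    # a fully random, persistent map into n bins: a stand-in
    # for drawing a hash function from a 2-universal family
    cache = {}
    def h(x):
        if x not in cache:
            cache[x] = random.randrange(n)
        return cache[x]
    return h

m, n, trials = 1000, 50, 300
loads = []
for _ in range(trials):
    h = random_hash(n)
    target = h(0)  # the bin that input 0 lands in
    # count all of the m inputs landing in that same bin
    loads.append(sum(1 for y in range(m) if h(y) == target))

print(sum(loads) / trials)  # close to 1 + (m-1)/n = 20.98
```

The average load of the tracked bin comes out near $1 + (m-1)/n$, matching the computation above.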

For 1 we’re asking to compute, for a given deviation $t$, the probability that $X - \mathbb{E}[X] > t$. This makes more sense if we jump to multiplicative factors, since it’s usually okay for a server to bear twice or three times its usual load, but not like $\sqrt{n}$ times more than its usual load. (Industry experts, please correct me if I’m wrong! I’m far from an expert on the practical details of load balancing.)

So we want to know what is the probability that $X - \mathbb{E}[X] > t \cdot \mathbb{E}[X]$ for some small number $t$, and we want this to get small quickly as $t$ grows. This is where the Chebyshev inequality becomes useful. For those who don’t want to click the link, in our situation Chebyshev’s inequality is the statement that, for any random variable $X$

$\displaystyle \Pr[|X - \mathbb{E}[X]| > t\mathbb{E}[X]] \leq \frac{\textup{Var}[X]}{t^2 \mathbb{E}^2[X]}.$

So all we need to do is compute the variance of the load of a server. It’s a bit of a hairy calculation to write down, but rest assured it doesn’t use anything fancier than the linearity of expectation and 2-universality. Let’s dive in. We start by writing the definition of variance as an expectation, and then we split $X$ up into its parts, expand the product and group the parts.

$\displaystyle \textup{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$

The easy part is $(\mathbb{E}[X])^2$: it’s just $(1 + (m-1)/n)^2$. The hard part is $\mathbb{E}[X^2]$. Expanding the square, and using the fact that an indicator variable equals its own square,

$\displaystyle \mathbb{E}[X^2] = \mathbb{E}\left[ \left( \sum_y \mathbf{1}_{h(y) = 1} \right)^2 \right] = \mathbb{E}[X] + 2 \sum_{y < z} \Pr[h(y) = h(z) = 1]$

In order to continue (and get a reasonable bound) we need an additional property of our hash family which is not immediately spelled out by 2-universality. Specifically, we need that for every $x$ and $i$, $\Pr_{h \in \mathscr{H}}[h(x) = i] = O(\frac{1}{n})$. In other words, a randomly chosen hash function should spread each input roughly evenly across the servers.

The reason this helps is that we can split $\Pr[h(x) = h(y) = 1]$ into $\Pr[h(x) = h(y) \mid h(x) = 1] \cdot \Pr[h(x) = 1]$. Bounding the first factor using 2-universality and the second using the new property, this quantity is at most $O(1/n^2)$, and since there are $\binom{m}{2}$ total terms in the double sum above, the whole thing is at most $O(m/n + m^2 / n^2) = O(m^2 / n^2)$. Note that in our big-O analysis we’re assuming $m$ is much bigger than $n$.

Sweeping some of the details inside the big-O, this means that our variance is $O(m^2/n^2)$, and so our bound on the deviation of $X$ from its expectation by a multiplicative factor of $t$ is at most $O(1/t^2)$.

Now we computed a bound on the probability that a single server is overloaded, but if we want to extend that to the worst-case server, the typical probability technique is to take the union bound over all servers. This means we just add up all the individual bounds and ignore how they relate. So the probability that some server has a load more than a multiplicative factor of $t$ is bounded from above by $O(n/t^2)$. This is only less than one when $t = \Omega(\sqrt{n})$, so all we can say with this analysis is that (with some small constant probability) no server will have a load worse than $\sqrt{n}$ times more than the expected load.

So we have this analysis that seems not so good. If we have a million servers then the worst load on one server could potentially be a thousand times higher than the expected load. This doesn’t scale, and the problem could be in any (or all) of three places:

1. Our analysis is weak, and we should use tighter bounds because the true max load is actually much smaller.
2. Our hash families don’t have strong enough properties, and we should beef those up to get tighter bounds.
3. The whole algorithm sucks and needs to be improved.

It turns out all three are true. One heuristic solution is easy and avoids all math. Have some second server (which does not process requests) count hash collisions. When some server’s load exceeds the expected load by a factor of $t$, send a message to the load balancer to randomly pick a new hash function from $\mathscr{H}$ and, for any requests that don’t have existing sessions (this is included in the request data), use the new hash function. Once the old sessions expire, switch any new incoming requests from those IPs over to the new hash function.

But there are much better solutions out there. Unfortunately their analyses are too long for a blog post (they fill multiple research papers). Fortunately their descriptions and guarantees are easy to describe, and they’re easy to program. The basic idea goes by the name “the power of two choices,” which we explored on this blog in a completely different context of random graphs.

In more detail, the idea is that you start by picking two random hash functions $h_1, h_2 \in \mathscr{H}$, and when you get a new request, you compute both hashes, inspect the load of the two servers indexed by those hashes, and send the request to the server with the smaller load.

This has the disadvantage of requiring bidirectional talk between the load balancer and the server, rather than obliviously forwarding requests. But the advantage is an exponential decrease in the worst-case maximum load. In particular, the following theorem holds for the case where the hashes are fully random.

Theorem: Suppose one places $m$ balls into $n$ bins in order according to the following procedure: for each ball pick two uniformly random and independent integers $1 \leq i,j \leq n$, and place the ball into whichever of bins $i, j$ currently contains fewer balls. If there are ties pick the bin with the smaller index. Then with high probability the largest bin has no more than $\Theta(m/n) + O(\log \log (n))$ balls.
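The fully random version of this procedure takes only a few lines to simulate. The sketch below (the helper `max_load` is my own name, not from any library) compares one choice against two:

```python
import random

def max_load(m, n, choices):
    # throw m balls into n bins; each ball goes to the least-loaded
    # of `choices` uniformly random bins, ties broken by smaller index
    bins = [0] * n
    for _ in range(m):
        picks = [random.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda i: (bins[i], i))
        bins[best] += 1
    return max(bins)

m, n = 100000, 100
print(max_load(m, n, 1))  # one choice: noticeably above m/n = 1000
print(max_load(m, n, 2))  # two choices: barely above m/n
```

With a single random choice the max bin typically overshoots $m/n$ by a few percent, while with two choices it hugs $m/n$ almost exactly.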

This theorem appears to have been proved in a few different forms, with the best analysis being by Berenbrink et al. You can improve the constant on the $\log \log n$ by computing more than 2 hashes. How does this relate to a good family of hash functions, which is not quite fully random? Let’s explore the answer by implementing the algorithm in python.

## An example of universal hash functions, and the load balancing algorithm

In order to implement the load balancer, we need to have some good hash functions under our belt. We’ll go with the simplest example of a hash function that’s easy to prove nice properties for. Specifically each hash in our family just performs some arithmetic modulo a random prime.

Definition: Pick any prime $p > m$, and for any $1 \leq a < p$ and $0 \leq b < p$ define $h_{a,b}(x) = ((ax + b) \mod p) \mod m$. Let $\mathscr{H} = \{ h_{a,b} \mid 1 \leq a < p, 0 \leq b < p \}$. (Note that in this section $m$ plays the role of the number of bins, which was called $n$ earlier.)

This family of hash functions is 2-universal.

Theorem: For every pair of distinct $x, y \in \{0, \dots, p-1 \}$,

$\Pr_{h \in \mathscr{H}}[h(x) = h(y)] \leq 1/m$

Proof. To say that $h(x) = h(y)$ is to say that $ax+b = ay+b + i \cdot m \mod p$ for some integer $i$. I.e., the two remainders of $ax+b$ and $ay+b$ are equivalent mod $m$. The $b$‘s cancel and we can solve for $a$

$a = im (x-y)^{-1} \mod p$

Since $a \neq 0$, there are $p-1$ possible choices for $a$. Moreover, there is no point in picking $i$ bigger than $p/m$ since we’re working modulo $p$. So there are at most $(p-1)/m$ possible values for the right hand side of the above equation. Since $x - y$ is fixed ahead of time and $a$ is chosen uniformly at random, there is at most a $(p-1)/m$ out of $p-1$ chance that the equality holds, which is at most $1/m$. (To be exact you should account for taking a floor of $(p-1)/m$ when $m$ does not evenly divide $p-1$, but it only decreases the overall probability.)

$\square$

If $m$ and $p$ were equal then this would be even more trivial: it’s just the fact that there is a unique line passing through any two distinct points. While that’s obviously true from standard geometry, it is also true when you work with arithmetic modulo a prime. In fact, it works using arithmetic over any field.

Implementing these hash functions is easier than shooting fish in a barrel.

import random

def draw(p, m):
    # pick a, b uniformly at random and return the hash h_{a,b}
    a = random.randint(1, p-1)
    b = random.randint(0, p-1)
    return lambda x: ((a*x + b) % p) % m

To encapsulate the process a little bit we implemented a UniversalHashFamily class which computes a random probable prime to use as the modulus and stores $m$. The interested reader can see the Github repository for more.
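We can also check 2-universality empirically. The self-contained sketch below inlines the same $h_{a,b}$ construction and estimates the collision probability of two fixed inputs over many random draws (the parameter values are arbitrary):

```python
import random

p, m = 101, 10   # prime modulus and number of bins
x, y = 3, 7      # any two fixed, distinct inputs

trials, collisions = 20000, 0
for _ in range(trials):
    a = random.randint(1, p - 1)
    b = random.randint(0, p - 1)
    # count how often a randomly drawn h_{a,b} collides on x and y
    if ((a * x + b) % p) % m == ((a * y + b) % p) % m:
        collisions += 1

print(collisions / trials)  # at most about 1/m = 0.1
```

The empirical collision rate lands at or below $1/m$, as the theorem promises.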

If we try to run this and feed in a large range of inputs, we can see how the outputs are distributed. In this example $m$ is a hundred thousand and $n$ is a hundred (it’s not two terabytes, but give me some slack it’s a demo and I’ve only got my desktop!). So the expected bin size for any 2-universal family is just about 1,000.

>>> m = 100000
>>> n = 100
>>> H = UniversalHashFamily(numBins=n, primeBounds=[n, 2*n])
>>> results = []
>>> for simulation in range(100):
...    bins = [0] * n
...    h = H.draw()
...    for i in range(m):
...       bins[h(i)] += 1
...    results.append(max(bins))
...
>>> max(bins) # a single run
1228
>>> min(bins)
613
>>> max(results) # the max bin size over all runs
1228
>>> min(results)
1227

Indeed, the max is very close to the expected value.

But this example is misleading, because the point of this was that some adversary would try to screw us over by picking a worst-case input. If the adversary knew exactly which $h$ was chosen (which it doesn’t) then the worst case input would be the set of all inputs that have the given hash output value. Let’s see it happen live.

>>> h = H.draw()
>>> badInputs = [i for i in range(m) if h(i) == 9]
>>> len(badInputs)
1227
>>> [sum(1 for i in badInputs if h(i) == b) for b in range(n)]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1227, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

The expected size of a bin is 12, but as expected this is 100 times worse (linearly worse in $n$). But if we instead pick a random $h$ after the bad inputs are chosen, the result is much better.

>>> testInputs(n,m,badInputs) # randomly picks a hash
[19, 20, 20, 19, 18, 18, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 17, 18, 18, 19, 20, 20, 19, 18, 17, 17, 16, 16, 16, 16, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 9, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10, 10, 10, 10, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 10]

However, if you re-ran this test many times, you’d eventually get unlucky and draw the hash function for which this actually is the worst input, and get a single huge bin. Other times you can get a bad hash in which two or three bins have all the inputs.

An interesting question is, what is really the worst-case input for this algorithm? I suspect it’s characterized by some choice of hash output values, taking all inputs for the chosen outputs. If this is the case, then there’s a tradeoff between the number of inputs you pick and how egregious the worst bin is. As an exercise to the reader, empirically estimate this tradeoff and find the best worst-case input for the adversary. Also, for your choice of parameters, estimate by simulation the probability that the max bin is three times larger than the expected value.

Now that we’ve played around with the basic hashing algorithm and made a family of 2-universal hashes, let’s see the power of two choices. Recall, this algorithm picks two random hash functions and sends an input to the bin with the smallest size. This obviously generalizes to $k$ choices, although the theoretical guarantee only improves by a constant factor, so let’s implement the more generic version.

class ChoiceHashFamily(object):
    def __init__(self, hashFamily, queryBinSize, numChoices=2):
        self.queryBinSize = queryBinSize
        self.hashFamily = hashFamily
        self.numChoices = numChoices

    def draw(self):
        hashes = [self.hashFamily.draw()
                  for _ in range(self.numChoices)]

        def h(x):
            # send x to the least-loaded of the bins chosen by the hashes
            indices = [hashFn(x) for hashFn in hashes]
            counts = [self.queryBinSize(i) for i in indices]
            count, index = min([(c, i) for (c, i) in zip(counts, indices)])
            return index

        return h

And if we test this with the bad inputs (as used previously, all the inputs that hash to 9), as a typical output we get

>>> bins
[15, 16, 15, 15, 16, 14, 16, 14, 16, 15, 16, 15, 15, 15, 17, 14, 16, 14, 16, 16, 15, 16, 15, 16, 15, 15, 17, 15, 16, 15, 15, 15, 15, 16, 15, 14, 16, 14, 16, 15, 15, 15, 14, 16, 15, 15, 15, 14, 17, 14, 15, 15, 14, 16, 13, 15, 14, 15, 15, 15, 14, 15, 13, 16, 14, 16, 15, 15, 15, 16, 15, 15, 13, 16, 14, 15, 15, 16, 14, 15, 15, 15, 11, 13, 11, 12, 13, 14, 13, 11, 11, 12, 14, 14, 13, 10, 16, 12, 14, 10]

And a typical list of bin maxima is

>>> results
[16, 16, 16, 18, 17, 365, 18, 16, 16, 365, 18, 17, 17, 17, 17, 16, 16, 17, 18, 16, 17, 18, 17, 16, 17, 17, 18, 16, 18, 17, 17, 17, 17, 18, 18, 17, 17, 16, 17, 365, 17, 18, 16, 16, 18, 17, 16, 18, 365, 16, 17, 17, 16, 16, 18, 17, 17, 17, 17, 17, 18, 16, 18, 16, 16, 18, 17, 17, 365, 16, 17, 17, 17, 17, 16, 17, 16, 17, 16, 16, 17, 17, 16, 365, 18, 16, 17, 17, 17, 17, 17, 18, 17, 17, 16, 18, 18, 17, 17, 17]

Those big bumps are the times when we picked an unlucky hash function, which is scarily large, although this bad event would be proportionally less likely as you scale up. But in the good case the load is clearly more even than the previous example, and the max load only shrinks as you pick among a larger set of randomly chosen hashes (though, as mentioned above, the theoretical improvement is only a constant factor).

Coupling this with the technique of switching hash functions when you start to observe a large deviation, and you have yourself an elegant solution.

In addition to load balancing, hashing has a ton of applications. Remember, the main sign that you may want to use hashing is that you have an algorithm which works well when the input data is random. This comes up in streaming and sublinear algorithms, in data structure design and analysis, and many other places. We’ll be covering those applications in future posts on this blog.

Until then!

# Hamming’s Code

## Or how to detect and correct errors

Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.

1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
2. Are there any encoding schemes that can recover from random noise introduced during transmission?

The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.

We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary $\{ 0,1 \}$ and still grok the important stuff.

Richard Hamming, inventor of Hamming codes. [image source]

## What is a code?

The formal definition of a code is simple: a code $C$ is just a subset of $\{ 0,1 \}^n$ for some $n$. Elements of $C$ are called codewords.

This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length $k$, so that our messages are in $\{ 0,1 \}^k$. Then we’re really viewing a code $C$ as the image of some encoding function $\textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n$. We can define $C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $\textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $|C| = 2^k$, and we can call $k = \log |C|$ the message length of $C$ even if we don’t have an explicit encoding function.

Moreover, while in this post we’ll always work with $\{ 0,1 \}$, the alphabet of your encoded messages could be an arbitrary set $\Sigma$. So then a code $C$ would be a subset of tuples in $\Sigma^n$, and we would call $q = |\Sigma|$.

So we have these parameters $n, k, q$, and we need one more. This is the minimum distance of a code, which we’ll denote by $d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message $y$ (possibly with noisy bits) we look for the (unencoded) message $x$ whose encoding $\textup{Enc}(x)$ is as close to $y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.
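Both notions take only a few lines of Python to write down. Here is a sketch (brute force over all pairs of codewords, so only suitable for toy codes; the function names are my own):

```python
from itertools import combinations

def hamming_distance(x, y):
    # number of coordinates in which two tuples differ
    return sum(1 for a, b in zip(x, y) if a != b)

def minimum_distance(C):
    # minimum Hamming distance over all distinct pairs of codewords
    return min(hamming_distance(x, y) for x, y in combinations(C, 2))

# toy example: the 3-fold repetition code on one bit
C = [(0, 0, 0), (1, 1, 1)]
print(minimum_distance(C))  # 3
```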

So coding theorists turn this mess of parameters into notation.

Definition: A code $C$ is called an $(n, k, d)_q$-code if

• $C \subset \Sigma^n$ for some alphabet $\Sigma$,
• $k = \log |C|$,
• $C$ has minimum distance $d$, and
• the alphabet $\Sigma$ has size $q$.

The basic goals of coding theory are:

1. For which values of these four parameters do codes exist?
2. Fixing any three parameters, how can we optimize the other one?

In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $k$ for $d=3$, and we’ll state a characterization theorem for optimizing $k$ for a general $d$. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.

## Linear codes and the Hamming code

A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $\{ 0,1 \}^n$, that is tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $\mathbb{F}_q$ for a prime power $q$, i.e. have your vector space be $\mathbb{F}_q^n$. We’ll go back and forth between describing a binary code $q=2$ over $\{ 0,1 \}$ and a code in $\mathbb{F}_q^n$. So to say a code is linear means:

• The zero vector is a codeword.
• The sum of any two codewords is a codeword.
• Any scalar multiple of a codeword is a codeword.

Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.

Definition: A generator matrix of an $(n,k,d)_q$-code $C$ is a $k \times n$ matrix $G$ whose rows form a basis for $C$.

There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode messages $x \in \{0,1 \}^k$ by left multiplication $xG$. Intuitively, we can think of the bits of $x$ as describing the coefficients of the chosen linear combination of the rows of $G$, which uniquely describes an element of the subspace. Note that because a $k$-dimensional subspace of $\{ 0,1 \}^n$ has $2^k$ elements, we’re not abusing notation by calling $k = \log |C|$ both the message length and the dimension.
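As a concrete example, here is encoding by left multiplication $xG$, using one standard systematic generator matrix for the $[7,4]$ Hamming code (this particular matrix is a common textbook choice, not something we derive here):

```python
# a standard systematic generator matrix G = [I_4 | A] for the [7,4] Hamming code
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(x, G):
    # left multiplication xG, with all arithmetic mod 2
    n = len(G[0])
    return [sum(xi * row[j] for xi, row in zip(x, G)) % 2 for j in range(n)]

print(encode([1, 0, 1, 1], G))  # [1, 0, 1, 1, 0, 1, 0]
```

Note how the first four coordinates of the output are the message itself; the systematic form makes the injectivity of $\textup{Enc}$ obvious.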

For the second description of $C$, we’ll remind the reader that every linear subspace $C$ has a unique orthogonal complement $C^\perp$, which is the subspace of vectors that are orthogonal to vectors in $C$.

Definition: Let $H^T$ be a generator matrix for $C^\perp$. Then $H$ is called a parity check matrix.

Note $H$ has the basis for $C^\perp$ as columns. This means it has dimensions $n \times (n-k)$. Moreover, it has the property that $x \in C$ if and only if the left multiplication $xH = 0$. Having zero dot product with all columns of $H$ characterizes membership in $C$.

The benefit of having a parity check matrix is that you can do efficient error detection: just compute $yH$ on your received message $y$, and if it's nonzero there was an error! What if there were so many errors, and just the right errors, that $y$ coincided with a different codeword from the one sent? Then you're screwed. In other words, the parity check matrix is only guaranteed to detect errors if the number of errors is smaller than the minimum distance of your code.
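As a quick sketch, here is error detection as a single matrix multiplication. The parity check matrix below is made up for a toy code of block length 4; the particular entries are just for illustration:

```python
def syndrome(y, H):
    """Compute yH mod 2; the result is zero exactly when y is in the code defined by H."""
    cols = len(H[0])
    return [sum(y[i] * H[i][j] for i in range(len(H))) % 2 for j in range(cols)]

# a made-up 4 x 2 parity check matrix for a toy code of block length 4
H = [[1, 1],
     [0, 1],
     [1, 0],
     [0, 1]]

assert syndrome([1, 1, 1, 0], H) == [0, 0]  # a codeword: no error detected
assert syndrome([1, 1, 0, 0], H) == [1, 0]  # bit 2 flipped: error detected
```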

So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.

Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector $x$, denoted $wt(x)$, is the number of nonzero entries in $x$.

Proposition: The minimum distance of a linear code $C$ is the minimum Hamming weight over all nonzero vectors $x \in C$.

Proof. Consider a nonzero $x \in C$. On one hand, the zero vector is a codeword and $wt(x)$ is by definition the Hamming distance between $x$ and zero, so the minimum weight is an upper bound on the minimum distance. It's also a lower bound: if $x,y$ are two distinct codewords, then $x-y$ is a nonzero codeword and $wt(x-y)$ is the Hamming distance between $x$ and $y$, so every distance between codewords is the weight of some nonzero codeword.

$\square$

So now we can define our first code, the Hamming code. It will be an $(n, k, 3)_2$-code. The construction is quite simple. We have fixed $d=3, q=2$, and we will also fix $l = n-k$. One can think of this as fixing $n$ and maximizing $k$, but it will only work for $n$ of a special form.

We'll construct the Hamming code by describing a parity-check matrix $H$. In fact, we're going to see what conditions the minimum distance $d=3$ imposes on $H$, and find that those conditions are actually sufficient to get $d=3$. We'll start with $d \geq 2$. To ensure $d \geq 2$, you need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if $e_i$ is the vector with all zeros except a one in position $i$, then $e_i H = h_i$ is the $i$-th row of $H$. We need $e_i H \neq 0$, so this imposes the condition that no row of $H$ can be zero. It's easy to see that this is sufficient for $d \geq 2$.

Likewise for $d \geq 3$: given a vector $y = e_i + e_j$ for some positions $i \neq j$, the product $yH = h_i + h_j$ must not be zero. But because our sums are mod 2, saying that $h_i + h_j \neq 0$ is the same as saying $h_i \neq h_j$. Again this is an if and only if. So we have two conditions.

• No row of $H$ may be zero.
• All rows of $H$ must be distinct.

That is, any parity check matrix with those two properties defines a distance 3 linear code. The only question that remains is how large $n$ can be if the rows have length $n-k = l$. That's just the number of distinct nonzero binary strings of length $l$, which is $2^l - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.

Theorem: For every $l > 0$, there is a $(2^l - 1, 2^l - l - 1, 3)_2$-code called the Hamming code.

Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If $x \in C$ is sent and $e$ is an error vector with $wt(e) = 1$, say with its one bit in position $i$, then the incoming message would be $y = x + e$. Now compute $yH = xH + eH = 0 + eH = h_i$ and flip bit $i$ of $y$. That is, whichever row of $H$ you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then $h_i = i$ as a binary number. Very slick.
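Here is a short sketch of the whole pipeline for $l = 3$, i.e. the $(7, 4, 3)_2$ Hamming code: build $H$ with rows ordered as binary numbers, and correct a single flipped bit by reading the syndrome as an index. The specific codeword below was chosen by hand so that its syndrome is zero:

```python
l = 3
n = 2**l - 1  # 7

# Row i of H (0-indexed) is the number i + 1 written in binary,
# most significant bit first.
H = [[(i >> (l - 1 - b)) & 1 for b in range(l)] for i in range(1, n + 1)]

def syndrome(y):
    return [sum(y[i] * H[i][j] for i in range(n)) % 2 for j in range(l)]

def correct(y):
    """If the syndrome is nonzero, read it as a binary number p and flip
    bit p - 1; if it's zero, y is already a codeword."""
    s = syndrome(y)
    pos = int("".join(map(str, s)), 2)
    y = list(y)
    if pos != 0:
        y[pos - 1] ^= 1
    return y

# ones in positions 3, 5, 6 (1-indexed); rows 3, 5, 6 of H sum to zero mod 2
codeword = [0, 0, 1, 0, 1, 1, 0]
assert syndrome(codeword) == [0, 0, 0]

received = list(codeword)
received[4] ^= 1                  # a single transmission error
assert correct(received) == codeword
```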

Before we move on, we should note one interesting feature of linear codes.

Definition: A code is called systematic if it can be realized by an encoding function that appends some number $n-k$ “check bits” to the end of each message.

The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix $G$ of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $G$ and get a new generator matrix that looks like $[I \mid A]$ where $I$ is the identity matrix of the appropriate size and $A$ is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $A$. It’s a different encoding function on $\{ 0,1\}^k$, but it has the same image in $\{ 0,1 \}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
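The Gaussian elimination step can be sketched as follows. For simplicity this assumes the leading $k$ columns of $G$ are linearly independent (in general you may also need a column permutation, which permutes codeword positions but preserves all distances). The example matrix is made up:

```python
def systematic_form(G):
    """Row-reduce a generator matrix over GF(2) to the form [I | A]."""
    G = [row[:] for row in G]
    k = len(G)
    for col in range(k):
        # find a row at or below `col` with a 1 in this column, swap it up
        pivot = next(r for r in range(col, k) if G[r][col] == 1)
        G[col], G[pivot] = G[pivot], G[col]
        # clear this column in every other row
        for r in range(k):
            if r != col and G[r][col] == 1:
                G[r] = [(u + v) % 2 for u, v in zip(G[r], G[col])]
    return G

G = [[1, 1, 0, 1],
     [1, 0, 1, 1]]
assert systematic_form(G) == [[1, 0, 1, 1], [0, 1, 1, 0]]
```

Row operations don't change the row space, so the code itself is unchanged; only the encoding function differs.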

If you work out the parameters of the Hamming code, you'll see that it is a systematic code which adds $\Theta(\log n)$ check bits to a message, and we're able to correct a single error in this code. An obvious question is whether this is necessary. Could we get away with adding fewer check bits? The answer is no, and a simple "information theoretic" argument shows this: a single index out of $n$ requires $\log n$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don't have enough information.

## The Hamming bound and perfect codes

One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing $d$ given a fixed choice of $n$, $k$, and $q$. To get this, let $V_n(r)$ denote the volume of a ball of radius $r$ in the space $\mathbb{F}_2^n$. I.e., if you fix any string $x$ (it doesn't matter which), $V_n(r)$ is the size of the set $\{ y : d(x,y) \leq r \}$, where $d(x,y)$ is the Hamming distance.

There is a theorem called the Hamming bound, which limits how many disjoint balls of radius $r$ you can pack inside $\mathbb{F}_2^n$.

Theorem: If an $(n,k,d)_2$-code exists, then

$\displaystyle 2^k V_n \left ( \left \lfloor \frac{d-1}{2} \right \rfloor \right ) \leq 2^n$

Proof. The proof is quite simple. To say a code $C$ has distance $d$ means that for every codeword $x \in C$ there is no other codeword within Hamming distance $d - 1$ of $x$. In other words, the balls of radius $r = \lfloor (d-1)/2 \rfloor$ centered at distinct codewords are disjoint. The floor handles odd $d$: e.g., when $d=3$ you need balls of radius 1 to guarantee no overlap. Now $|C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are only $2^n$ strings in $\mathbb{F}_2^n$, establishing the desired inequality.

$\square$

Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
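While the proof is left as an exercise, a quick numeric sanity check (not a proof!) that the Hamming parameters meet the bound with equality is easy: with $n = 2^l - 1$ and $k = n - l$, the balls have radius $\lfloor (3-1)/2 \rfloor = 1$ and volume $V_n(1) = 1 + n = 2^l$, so $2^k V_n(1) = 2^n$ exactly.

```python
from math import comb

def ball_volume(n, r):
    """Number of strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

for l in range(2, 12):
    n = 2**l - 1
    k = n - l
    # the Hamming bound holds with equality for every Hamming code
    assert 2**k * ball_volume(n, 1) == 2**n
```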

The obvious follow-up question is whether there are any other perfect codes. The answer is yes, and some of them are even nonlinear. But some perfect codes are "trivial." For example, when $d=1$ you can just use the identity encoding to get the code $C = \mathbb{F}_2^n$. You can also have a code consisting of a single codeword, or a code that encodes by repeating the message multiple times (a "repetition code"). All three of these examples are called trivial, as a definition. Now there are some nontrivial and nonlinear perfect codes I won't describe here, but here is the nice characterization theorem.

Theorem [van Lint ’71, Tietavainen ‘73]: Let $C$ be a nontrivial perfect $(n,k,d)_q$-code. Then the parameters must either be those of a Hamming code, or one of the following two:

• A $(23, 12, 7)_2$-code
• An $(11, 6, 5)_3$-code

The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three linear codes.

So this theorem was a big deal in coding theory. The Hamming and Golay codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.

Next time we'll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with the Hamming bound. We'll define the Reed-Solomon codes and show they meet that bound with equality. These codes are particularly famous for being the error correcting codes used in DVDs. We'll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.

Until then!


# Sending and Authenticating Messages with Elliptic Curves

Last time we saw the Diffie-Hellman key exchange protocol, and discussed the discrete logarithm problem and the related Diffie-Hellman problem, which form the foundation for the security of most protocols that use elliptic curves. Let’s continue our journey to investigate some more protocols.

Just as a reminder, the Python implementations of these protocols are not at all meant for practical use, but for learning purposes. We provide the code on this blog’s Github page, but for the love of security don’t actually use them.

## Shamir-Massey-Omura

Recall that there are lots of ways to send encrypted messages if you and your recipient share some piece of secret information, and the Diffie-Hellman scheme allows one to securely generate a piece of shared secret information. Now we’ll shift gears and assume you don’t have a shared secret, nor any way to acquire one. The first cryptosystem in that vein is called the Shamir-Massey-Omura protocol. It’s only slightly more complicated to understand than Diffie-Hellman, and it turns out to be equivalently difficult to break.

The idea is best explained by metaphor. Alice wants to send a message to Bob, but all she has is a box and a lock for which she has the only key. She puts the message in the box and locks it with her lock, and sends it to Bob. Bob can’t open the box, but he can send it back with a second lock on it for which Bob has the only key. Upon receiving it, Alice unlocks her lock, sends the box back to Bob, and Bob can now open the box and retrieve the message.

To celebrate the return of Game of Thrones, we’ll demonstrate this protocol with an original Lannister Infographic™.

Assuming the box and locks are made of magically unbreakable Valyrian steel, nobody but Bob (also known as Jamie) will be able to read the message.

Now fast forward through the enlightenment, industrial revolution, and into the age of information. The same idea works, and it’s significantly faster over long distances. Let $C$ be an elliptic curve over a finite field $k$ (we’ll fix $k = \mathbb{Z}/p$ for some prime $p$, though it works for general fields too). Let $n$ be the number of points on $C$.

Alice's message is going to be in the form of a point $M$ on $C$. She'll then choose her secret integer $s_A$ with $\gcd(s_A, n) = 1$ and compute $s_AM$ (locking the secret in the box), sending the result to Bob. Bob will likewise pick a secret integer $s_B$ coprime to $n$, and send $s_Bs_AM$ back to Alice.

Now the unlocking part: since $\gcd(s_A, n) = 1$, Alice can "unlock the box" by computing the inverse $s_A^{-1} \bmod n$ and computing $s_BM = s_A^{-1}s_Bs_AM$. (This works because $nM = 0$, so a multiple of $M$ depends only on the multiplier mod $n$.) Now the "box" just has Bob's lock on it. So Alice sends $s_BM$ back to Bob, and Bob performs the same process to evaluate $s_B^{-1}s_BM = M$, thus receiving the message.
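To see the three-pass structure in runnable form, here is a toy sketch using the multiplicative group of $\mathbb{Z}/p$ in place of an elliptic curve (the original Massey-Omura scheme in fact used the multiplicative group of a finite field; only the group structure matters). All parameters below are tiny and made up, purely for illustration:

```python
from math import gcd

p = 8191          # a small Mersenne prime, 2^13 - 1
order = p - 1     # the order of the multiplicative group

message = 2019    # a nonzero element of Z/p

s_A, s_B = 11, 17  # secret "locks"; both must be invertible mod the group order
assert gcd(s_A, order) == 1 and gcd(s_B, order) == 1

pass1 = pow(message, s_A, p)                   # Alice locks the message
pass2 = pow(pass1, s_B, p)                     # Bob adds his lock
pass3 = pow(pass2, pow(s_A, -1, order), p)     # Alice removes her lock
unlocked = pow(pass3, pow(s_B, -1, order), p)  # Bob removes his lock

assert unlocked == message
```

The two locks commute because exponentiation does, which is exactly what lets the locks come off in a different order than they went on.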

Like we said earlier, the security of this protocol is equivalent to the security of the Diffie-Hellman problem. In this case, if we call $z = s_A^{-1}$ and $y = s_B^{-1}$, and $P = s_As_BM$, then it’s clear that any eavesdropper would have access to $P, zP$, and $yP$, and they would be tasked with determining $zyP$, which is exactly the Diffie-Hellman problem.

Now Alice’s secret message comes in the form of a point on an elliptic curve, so how might one translate part of a message (which is usually represented as an integer) into a point? This problem seems to be difficult in general, and there’s no easy answer. Here’s one method originally proposed by Neal Koblitz that uses a bit of number theory trickery.

Let $C$ be given by the equation $y^2 = x^3 + ax + b$, again over $\mathbb{Z}/p$. Suppose $0 \leq m < p/100$ is our message. Define for any $0 \leq j < 100$ the candidate $x$-points $x_j = 100m + j$. Then call our candidate $y^2$-values $s_j = x_j^3 + ax_j + b$. Now for each $j$ we can compute $x_j, s_j$, and so we’ll pick the first one for which $s_j$ is a square in $\mathbb{Z}/p$ and we’ll get a point on the curve. How can we tell if $s_j$ is a square? One condition is that $s_j^{(p-1)/2} \equiv 1 \mod p$. This is a basic fact about quadratic residues modulo primes; see these notes for an introduction and this Wikipedia section for a dense summary.

Once we know it’s a square, we can compute the square root depending on whether $p \equiv 1 \mod 4$ or $p \equiv 3 \mod 4$. In the latter case, it’s just $s_j^{(p+1)/4} \mod p$. Unfortunately the former case is more difficult (really, the difficult part is $p \equiv 1 \mod 8$). You can see Section 1.5 of this textbook for more details and three algorithms, or you could just pick primes congruent to 3 mod 4.
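Here is a sketch of Koblitz's embedding for a hypothetical curve over a small prime $p \equiv 3 \bmod 4$, so that the square root is just the easy exponentiation. The prime and curve coefficients below are made up for illustration:

```python
p = 10007        # a made-up prime with p % 4 == 3, so square roots are easy
a, b = 3, 181    # made-up coefficients for the curve y^2 = x^3 + ax + b

def encode_message(m):
    """Try x = 100m, 100m + 1, ... until x^3 + ax + b is a square mod p."""
    assert 0 <= m < p // 100
    for j in range(100):
        x = 100 * m + j
        s = (x**3 + a * x + b) % p
        if pow(s, (p - 1) // 2, p) == 1:    # Euler's criterion: s is a square
            y = pow(s, (p + 1) // 4, p)     # square root, valid since p = 3 mod 4
            return (x, y)
    return None  # each j fails with probability about 1/2, so this is very rare

def decode_point(point):
    return point[0] // 100

x, y = encode_message(42)
assert (y * y) % p == (x**3 + a * x + b) % p  # it's really a point on the curve
assert decode_point((x, y)) == 42
```

Decoding simply discards the redundancy digits, so no shared table is needed to recover the message from the point.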

I have struggled to find information about the history of the Shamir-Massey-Omura protocol; every author claims it’s not widely used in practice, and the only reason seems to be that this protocol doesn’t include a suitable method for authenticating the validity of a message. In other words, some “man in the middle” could be intercepting messages and tricking you into thinking he is your intended recipient. Coupling this with the difficulty of encoding a message as a point seems to be enough to make cryptographers look for other methods. Another reason could be that the system was patented in 1982 and is currently held by SafeNet, one of the US’s largest security providers. All of their products have generic names so it’s impossible to tell if they’re actually using Shamir-Massey-Omura. I’m no patent lawyer, but it could simply be that nobody else is allowed to implement the scheme.

## Digital Signatures

Indeed, the discussion above raises the question: how does one authenticate a message? The standard technique is called a digital signature, and we can implement those using elliptic curve techniques as well. To debunk the naive idea, one cannot simply attach some static piece of extra information to the message. An attacker could just copy that information and replicate it to forge your signature on another, potentially malicious document. In other words, a signature should only work for the message it was used to sign. The technique we’ll implement was originally proposed by Taher Elgamal, and is called the ElGamal signature algorithm. We’re going to look at a special case of it.

So Alice wants to send a message $m$ with some extra information that is unique to the message and that can be used to verify that it was sent by Alice. She picks an elliptic curve $E$ over $\mathbb{F}_q$ in such a way that the number of points on $E$ is $br$, where $b$ is a small integer and $r$ is a large prime.

Then, as in Diffie-Hellman, she picks a base point $Q$ that has order $r$ and a secret integer $s$ (which is permanent), and computes $P = sQ$. Alice publishes everything except $s$:

Public information: $\mathbb{F}_q, E, b, r, Q, P$

Let Alice's message $m$ be represented as an integer less than $r$ (there are a few ways to get around this if your message is too long). Now to sign $m$ Alice picks a message-specific $k < r$ and computes what I'll call the auxiliary point $A = kQ$. Let $A = (x, y)$. Alice then computes the signature $g = k^{-1}(m + s x) \mod r$. The signed message is then $(m, A, g)$, which Alice can safely send to Bob.

Before we see how Bob verifies the message, notice that the signature integer involves everything: Alice’s secret key, the message-specific secret integer $k$, and most importantly the message. Remember that this is crucial: we want the signature to work only for the message that it was used to sign. If the same $k$ is used for multiple messages then the attacker can find out your secret key! (And this has happened in practice; see the end of the post.)

So Bob receives $(m, A, g)$, and also has access to all of the public information listed above. Bob authenticates the message by computing the auxiliary point via a different route. First, he computes $c = g^{-1} m \mod r$ and $d = g^{-1}x \mod r$, and then $A' = cQ + dP$. If the message was signed by Alice then $A' = A$, since we can just write out the definition of everything:

$\displaystyle A' = cQ + dP = g^{-1}mQ + g^{-1}xsQ = g^{-1}(m + sx)Q = kQ = A,$

where the last step uses $g^{-1}(m + sx) \equiv k \mod r$ and the fact that $Q$ has order $r$.

Now to analyze the security. The attacker wants to be able to take any message $m'$ and produce a signature $A', g'$ that will pass validation with Alice’s public information. If the attacker knew how to solve the discrete logarithm problem efficiently this would be trivial: compute $s$ and then just sign like Alice does. Without that power there are still a few options. If the attacker can figure out the message-specific integer $k$, then she can compute Alice’s secret key $s$ as follows.

Given $g = k^{-1}(m + sx) \mod r$, compute $kg \equiv (m + sx) \mod r$. Letting $d = \gcd(x, r)$, this congruence has only $d$ possible solutions for $s$ modulo $r$. Since $s$ is less than $r$, the attacker can just try each option until they find the one with $P = sQ$. So that's bad, but in a properly implemented signature algorithm finding $k$ is equivalently hard to solving the discrete logarithm problem, so we can assume we're relatively safe from that.

On the other hand one could imagine being able to conjure the pieces of the signature $A', g'$ by some method that doesn’t involve directly finding Alice’s secret key. Indeed, this problem is less well-studied than the Diffie-Hellman problem, but most cryptographers believe it’s just as hard. For more information, this paper surveys the known attacks against this signature algorithm, including a successful attack for fields of characteristic two.

## Signature Implementation

We can go ahead and implement the signature algorithm once we've picked a suitable elliptic curve. For the purpose of demonstration we'll use a small curve, $E: y^2 = x^3 + 3x + 181$ over $F = \mathbb{Z}/1061$, whose number of points happens to have a suitable prime factorization ($1047 = 3 \cdot 349$). If you're interested in counting the number of points on an elliptic curve, there are many theorems and efficient algorithms to do this, and if you've been reading this whole series, then an algorithm based on the Baby-Step Giant-Step idea would be easy to implement. For the sake of brevity, we leave it as an exercise to the reader.

Note that the code we present is based on the elliptic curve and finite field code we've been implementing as part of this series. All of the code used in this post is available on this blog's Github page.

The basepoint we’ll pick has to have order 349, and $E$ has plenty of candidates. We’ll use $(2, 81)$, and we’ll randomly generate a secret key that’s less than $349$ (eight bits will do). So our setup looks like this:

```python
if __name__ == "__main__":
    F = FiniteField(1061, 1)

    # y^2 = x^3 + 3x + 181
    curve = EllipticCurve(a=F(3), b=F(181))
    basePoint = Point(curve, F(2), F(81))
    basePointOrder = 349
    secretKey = generateSecretKey(8)
    publicKey = secretKey * basePoint
```

Then to sign a message we generate a random one-time key, construct the auxiliary point and the signature, and return:

```python
def sign(message, basePoint, basePointOrder, secretKey):
    modR = FiniteField(basePointOrder, 1)
    oneTimeSecret = generateSecretKey(len(bin(basePointOrder)) - 3)  # numbits(order) - 1

    auxiliaryPoint = oneTimeSecret * basePoint
    signature = modR(oneTimeSecret).inverse() * (
        modR(message) + modR(secretKey) * modR(auxiliaryPoint[0]))

    return (message, auxiliaryPoint, signature)
```

So far so good. Note that we generate the message-specific $k$ at random, and this implies we need a high-quality source of randomness (what’s called a cryptographically-secure pseudorandom number generator). In absence of that there are proposed deterministic methods for doing it. See this draft proposal of Thomas Pornin, and this paper of Daniel Bernstein for another.

Now to authenticate, we follow the procedure from earlier.

```python
def authentic(signedMessage, basePoint, basePointOrder, publicKey):
    modR = FiniteField(basePointOrder, 1)
    (message, auxiliary, signature) = signedMessage

    sigInverse = modR(signature).inverse()  # sig can be an int or a modR already
    c, d = sigInverse * modR(message), sigInverse * modR(auxiliary[0])

    auxiliaryChecker = int(c) * basePoint + int(d) * publicKey
    return auxiliaryChecker == auxiliary
```

Continuing with our example, we pick a message represented as an integer smaller than $r$, sign it, and validate it.

```python
>>> message = 123
>>> signedMessage = sign(message, basePoint, basePointOrder, secretKey)
>>> signedMessage
(123, (220 (mod 1061), 234 (mod 1061)), 88 (mod 349))
>>> authentic(signedMessage, basePoint, basePointOrder, publicKey)
True
```

So there we have it, a nice implementation of the digital signature algorithm.

## When Digital Signatures Fail

As we mentioned, it’s extremely important to avoid using the same $k$ for two different messages. If you do, then you’ll get two signed messages $(m_1, A_1, g_1), (m_2, A_2, g_2)$, but by definition the two $g$‘s have a ton of information in common! An attacker can recognize this immediately because $A_1 = A_2$, and figure out the secret key $s$ as follows. First write

$\displaystyle g_1 - g_2 \equiv k^{-1}(m_1 + sx) - k^{-1}(m_2 + sx) \equiv k^{-1}(m_1 - m_2) \mod r$.

Now we have something of the form $\text{known}_1 \equiv (k^{-1}) \text{known}_2 \mod r$, and similarly to the attack described earlier we can try all possibilities until we find a number that satisfies $A = kQ$. Then once we have $k$ we have already seen how to find $s$. Indeed, it would be a good exercise for the reader to implement this attack.
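Recovering $k$ and then $s$ is pure modular arithmetic, so we can sketch the attack without any elliptic curve code at all (the check of the candidate $k$ against $A = kQ$ is skipped, and all the numbers below are made up for illustration):

```python
r = 349            # a prime group order
s = 200            # Alice's long-term secret key (what the attacker wants)
k = 77             # the one-time secret, mistakenly reused
x = 123            # x-coordinate of the shared auxiliary point A = kQ
m1, m2 = 10, 20    # two different messages signed with the same k

def inv(a):
    return pow(a % r, -1, r)

# the two signatures Alice publishes
g1 = inv(k) * (m1 + s * x) % r
g2 = inv(k) * (m2 + s * x) % r

# the attacker's side: g1 - g2 = k^{-1}(m1 - m2) mod r gives k, and then
# g1 = k^{-1}(m1 + s*x) can be solved for s
recovered_k = (m1 - m2) * inv(g1 - g2) % r
recovered_s = (recovered_k * g1 - m1) * inv(x) % r

assert recovered_k == k and recovered_s == s
```

Note that here no brute-force search is even needed: because $r$ is prime and $x \not\equiv 0$, both inverses exist and the recovery is exact.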

The attack we just described is not an idle threat. Indeed, the Sony corporation, producer of the popular PlayStation video game console, made this mistake in signing software for the PlayStation 3. A digital signature algorithm makes sense for validating software, because Sony wants to ensure that only Sony has the power to publish games. So Sony developers act as the signing party for the data on a disc, and the console will only play a game with a valid signature. Note that the asymmetric setup is necessary because if the console had shared a secret with Sony (say, stored as plaintext within the hardware of the console), anyone with physical access to the machine could discover it.

Now here comes the cringe-worthy part. Sony made the mistake of using the same $k$ to sign every game! Their mistake was discovered in 2010 and made public at a cryptography conference. This video of the humorous talk includes a description of the variant Sony used, and the attackers describe how the mistake should have been corrected. Without a firmware update (I believe Sony's public key information was stored locally so that one could authenticate games without an internet connection), anyone could sign a piece of software and create games that are indistinguishable from something produced by Sony. That includes malicious content that, say, installs software that sends credit card information to the attacker.

So here we have a tidy story: a widely used cryptosystem with a scare story of what will go wrong when you misuse it. In the future of this series, we’ll look at other things you can do with elliptic curves, including factoring integers and testing for primality. We’ll also see some normal forms of elliptic curves that are used in place of the Weierstrass normal form for various reasons.

Until next time!