# Zero Knowledge Proofs for NP

Last time, we saw a specific zero-knowledge proof for graph isomorphism. This introduced us to the concept of an interactive proof, where you have a prover and a verifier sending messages back and forth, and the prover is trying to prove a specific claim to the verifier.

A zero-knowledge proof is a special kind of interactive proof in which the prover has some secret piece of knowledge that makes it very easy to verify a disputed claim is true. The prover’s goal, then, is to convince the verifier (a polynomial-time algorithm) that the claim is true without revealing any knowledge at all about the secret.

In this post we’ll see that, using a bit of cryptography, zero-knowledge proofs capture a much wider class of problems than graph isomorphism. Basically, if you believe that cryptography exists, then every problem whose answers can be easily verified has a zero-knowledge proof (i.e., all of the class NP). Here are a bunch of examples. For each I’ll phrase the problem as a question, and then say what sort of data the prover’s secret could be.

• Given a boolean formula, is there an assignment of variables making it true? Secret: a satisfying assignment to the variables.
• Given a set of integers, is there a subset whose sum is zero? Secret: such a subset.
• Given a graph, does it have a 3-coloring? Secret: a valid 3-coloring.
• Given a boolean circuit, can it produce a specific output? Secret: a choice of inputs that produces the output.

The common link among all of these problems is that they are NP-hard (graph isomorphism isn’t known to be NP-hard). For us this means two things: (1) we think these problems are actually hard, so the verifier can’t solve them, and (2) if you show that one of them has a zero-knowledge proof, then they all have zero-knowledge proofs.

We’re going to describe and implement a zero-knowledge proof for graph 3-colorability, and in the follow-up post we’ll dive into the theoretical definitions, the nitty-gritty details of the proof that the scheme we present is zero-knowledge, and the different kinds of zero-knowledge. As usual, all of the code used in making this post is available in a repository on this blog’s Github page.

## One-way permutations

In a recent program gallery post we introduced the Blum-Blum-Shub pseudorandom generator. A pseudorandom generator is simply an algorithm that takes as input a short random string of length $s$ and produces as output a longer string, say, of length $3s$. The output string is not truly random, but rather “indistinguishable” from random in a sense we’ll make precise next time. The underlying function for this generator is the “modular squaring” function $x \mapsto x^2 \mod M$, for some cleverly chosen $M$. The modulus $M$ is chosen in such a way that this mapping is a permutation (on the quadratic residues mod $M$, as we’ll see below). So this function is more than just a pseudorandom generator: it’s a one-way permutation.

If you have a primality-checking algorithm on hand (we do), then preparing the Blum-Blum-Shub algorithm is only about 15 lines of code.

import random

def goodPrime(p):
    # probablyPrime is the primality test from a previous post
    return p % 4 == 3 and probablyPrime(p, accuracy=100)

def findGoodPrime(numBits=512):
    candidate = 1

    while not goodPrime(candidate):
        candidate = random.getrandbits(numBits)

    return candidate

def makeModulus(numBits=512):
    return findGoodPrime(numBits) * findGoodPrime(numBits)

def blum_blum_shub(modulusLength=512):
    modulus = makeModulus(numBits=modulusLength)

    def f(inputInt):
        return pow(inputInt, 2, modulus)

    return f


The interested reader should check out the program gallery post for more details about this generator. For us, having a one-way permutation is the important part (and we’re going to defer the formal definition of “one-way” until next time, just think “hard to get inputs from outputs”).

The other concept we need, which is related to a one-way permutation, is the notion of a hardcore predicate. Let $G(x)$ be a one-way permutation, and let $f$ be a function that produces a single bit $f(x) = b$ from a string. We say that $f$ is a hardcore predicate for $G$ if you can’t reliably compute $f(x)$ when given only $G(x)$.

Hardcore predicates are important because there are many one-way functions for which, when given the output, you can guess part of the input very reliably, but not the rest (e.g., if $g$ is a one-way function, $(x, y) \mapsto (x, g(y))$ is also one-way, but the $x$ part is trivially guessable). So a hardcore predicate formally measures, when given the output of a one-way function, what information derived from the input is hard to compute.

In the case of Blum-Blum-Shub, one hardcore predicate is simply the parity of the input bits.

def parity(n):
    return sum(int(x) for x in bin(n)[2:]) % 2


## Bit Commitment Schemes

A core idea that will make zero-knowledge proofs work for NP is the ability for the prover to publicly “commit” to a choice, and later reveal that choice in a way that makes it infeasible to fake their commitment. This will involve not just the commitment to a single bit of information, but also the transmission of auxiliary data that is provably infeasible to fake.

Our pair of one-way permutation $G$ and hardcore predicate $f$ comes in very handy. Let’s say I want to commit to a bit $b \in \{ 0,1 \}$. Let’s fix a security parameter that will measure how hard it is to change my commitment post-hoc, say $n = 512$. My process for committing is to draw a random string $x$ of length $n$, and send you the pair $(G(x), f(x) \oplus b)$, where $\oplus$ is the XOR operator on two bits.

The guarantee of a one-way permutation with a hardcore predicate is that if you only see $G(x)$, you can’t guess $f(x)$ with any reasonable edge over random guessing. Moreover, if you fix a bit $b$, and take an unpredictably random bit $y$, the XOR $b \oplus y$ is also unpredictably random. In other words, if $f(x)$ is hardcore, then so is $x \mapsto f(x) \oplus b$ for a fixed bit $b$. Finally, to reveal my commitment, I just send the string $x$ and let you independently compute $(G(x), f(x) \oplus b)$. Since $G$ is a permutation, that $x$ is the only $x$ that could have produced the commitment I sent you earlier.

Here’s a Python implementation of this scheme. We start with a generic base class for a commitment scheme.

class CommitmentScheme(object):
    def __init__(self, oneWayPermutation, hardcorePredicate, securityParameter):
        '''
            oneWayPermutation: int -> int
            hardcorePredicate: int -> {0, 1}
        '''
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.securityParameter = securityParameter

        # a random string of length self.securityParameter used only once per commitment
        self.secret = self.generateSecret()

    def generateSecret(self):
        raise NotImplementedError

    def commit(self, x):
        raise NotImplementedError

    def reveal(self):
        return self.secret


Note that the “reveal” step is always simply to reveal the secret. Here’s the implementation subclass. We should also note that the security string should be chosen at random anew for every bit you wish to commit to. In this post we won’t reuse CommitmentScheme objects anyway.

class BBSBitCommitmentScheme(CommitmentScheme):
    def generateSecret(self):
        # the secret is a random quadratic residue
        self.secret = self.oneWayPermutation(random.getrandbits(self.securityParameter))
        return self.secret

    def commit(self, bit):
        unguessableBit = self.hardcorePredicate(self.secret)
        return (
            self.oneWayPermutation(self.secret),
            unguessableBit ^ bit,  # python xor
        )


One important detail is that the Blum-Blum-Shub one-way permutation is only a permutation when restricted to quadratic residues. As such, we generate our secret by shooting a random string through the one-way permutation to get a random residue. In fact this produces a uniform random residue, since the Blum-Blum-Shub modulus is chosen in such a way that ensures every residue has exactly four square roots.
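This 4-to-1 structure is easy to sanity-check by brute force. The following sketch (with a toy modulus $M = 3 \cdot 7$, chosen here for illustration and far too small for real use) squares every unit mod $M$ and counts how often each residue appears:

```python
from collections import Counter
from math import gcd

M = 3 * 7                                   # toy Blum modulus: both primes are 3 mod 4
units = [x for x in range(1, M) if gcd(x, M) == 1]
squareRoots = Counter(pow(x, 2, M) for x in units)

# every residue is hit exactly four times, so squaring is 4-to-1 on the units
assert all(count == 4 for count in squareRoots.values())
print(sorted(squareRoots))  # the quadratic residues mod 21: [1, 4, 16]
```

Each residue appears exactly four times, so a uniformly random unit pushed through the squaring map lands on a uniformly random residue.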

Here’s code to check the verification is correct.

class BBSBitCommitmentVerifier(object):
    def __init__(self, oneWayPermutation, hardcorePredicate):
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate

    def verify(self, securityString, claimedCommitment):
        trueBit = self.decode(securityString, claimedCommitment)
        unguessableBit = self.hardcorePredicate(securityString)  # wasteful, whatever
        return claimedCommitment == (
            self.oneWayPermutation(securityString),
            unguessableBit ^ trueBit,  # python xor
        )

    def decode(self, securityString, claimedCommitment):
        unguessableBit = self.hardcorePredicate(securityString)
        return claimedCommitment[1] ^ unguessableBit


and an example of using it

if __name__ == "__main__":
    import random
    import blum_blum_shub

    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Bit commitment')
    verifier = BBSBitCommitmentVerifier(oneWayPerm, hardcorePred)

    for _ in range(10):
        # a fresh scheme (and hence a fresh secret) for every bit committed
        scheme = BBSBitCommitmentScheme(oneWayPerm, hardcorePred, securityParameter)
        bit = random.choice([0, 1])
        commitment = scheme.commit(bit)
        secret = scheme.reveal()
        trueBit = verifier.decode(secret, commitment)
        valid = verifier.verify(secret, commitment)

        print('{} == {}? {}; {} {}'.format(bit, trueBit, valid, secret, commitment))


Example output:

1 == 1? True; 524 (5685, 0)
1 == 1? True; 149 (22201, 1)
1 == 1? True; 476 (34511, 1)
1 == 1? True; 927 (14243, 1)
1 == 1? True; 608 (23947, 0)
0 == 0? True; 964 (7384, 1)
0 == 0? True; 373 (23890, 0)
0 == 0? True; 620 (270, 1)
1 == 1? True; 926 (12390, 0)
0 == 0? True; 708 (1895, 0)


As an exercise, write a program to verify that no other input to the Blum-Blum-Shub one-way permutation gives a valid verification. Test it on a small security parameter like $n=10$.

It’s also important to point out that the verifier needs to do some additional validation that we left out. For example, how does the verifier know that the revealed secret actually is a quadratic residue? In fact, detecting quadratic residues is believed to be hard! To get around this, we could change the commitment scheme’s reveal step to reveal the random string that was used as input to the permutation to get the residue (cf. BBSBitCommitmentScheme.generateSecret for the random string that needs to be saved/revealed). Then the verifier could generate the residue in the same way. As an exercise, upgrade the bit commitment and verifier classes to reflect this.
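To illustrate the idea, here’s a minimal standalone sketch of that upgraded reveal step (the function names and the toy modulus are mine, not the post’s classes): the committer saves the raw random string $r$, and the verifier recomputes the residue $G(r)$ itself, so it never has to decide quadratic residuosity.

```python
import random

def commit(bit, G, f, securityParameter):
    r = random.getrandbits(securityParameter)  # raw random string, saved for the reveal
    s = G(r)                                   # the secret: a guaranteed quadratic residue
    return r, (G(s), f(s) ^ bit)

def decode(r, commitment, G, f):
    s = G(r)
    return commitment[1] ^ f(s)

def verify(r, commitment, G, f):
    s = G(r)                                   # recompute; no residuosity testing needed
    return commitment == (G(s), f(s) ^ decode(r, commitment, G, f))

M = 7 * 11                                     # toy Blum modulus, far too small for real use
G = lambda x: pow(x, 2, M)
f = lambda n: bin(n).count('1') % 2            # the parity hardcore predicate

for b in (0, 1):
    r, c = commit(b, G, f, securityParameter=6)
    assert verify(r, c, G, f) and decode(r, c, G, f) == b
```

The verifier now only ever applies $G$ itself, which it can do for any claimed preimage.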

In order to get a zero-knowledge proof for 3-coloring, we need to be able to commit to one of three colors, which requires two bits. So let’s go overkill and write a generic integer commitment scheme. It’s simple enough: specify a bound on the size of the integers, and then do an independent bit commitment for every bit.

class BBSIntCommitmentScheme(CommitmentScheme):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate, securityParameter=512):
        '''
            A commitment scheme for integers of a prespecified length numBits. Applies the
            Blum-Blum-Shub bit commitment scheme to each bit independently.
        '''
        self.schemes = [BBSBitCommitmentScheme(oneWayPermutation, hardcorePredicate, securityParameter)
                        for _ in range(numBits)]
        super().__init__(oneWayPermutation, hardcorePredicate, securityParameter)

    def generateSecret(self):
        self.secret = [x.secret for x in self.schemes]
        return self.secret

    def commit(self, integer):
        # first pad bits to desired length
        integer = bin(integer)[2:].zfill(len(self.schemes))
        bits = [int(bit) for bit in integer]
        return [scheme.commit(bit) for scheme, bit in zip(self.schemes, bits)]


And the corresponding verifier

class BBSIntCommitmentVerifier(object):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate):
        self.verifiers = [BBSBitCommitmentVerifier(oneWayPermutation, hardcorePredicate)
                          for _ in range(numBits)]

    def decodeBits(self, secrets, bitCommitments):
        return [v.decode(secret, commitment) for (v, secret, commitment) in
                zip(self.verifiers, secrets, bitCommitments)]

    def verify(self, secrets, bitCommitments):
        return all(
            bitVerifier.verify(secret, commitment)
            for (bitVerifier, secret, commitment) in
            zip(self.verifiers, secrets, bitCommitments)
        )

    def decode(self, secrets, bitCommitments):
        decodedBits = self.decodeBits(secrets, bitCommitments)
        return int(''.join(str(bit) for bit in decodedBits), 2)  # interpret as binary


A sample usage:

if __name__ == "__main__":
    import random
    import blum_blum_shub

    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Int commitment')
    scheme = BBSIntCommitmentScheme(10, oneWayPerm, hardcorePred)
    verifier = BBSIntCommitmentVerifier(10, oneWayPerm, hardcorePred)
    choices = list(range(1024))
    for _ in range(10):
        theInt = random.choice(choices)
        commitments = scheme.commit(theInt)
        secrets = scheme.reveal()
        trueInt = verifier.decode(secrets, commitments)
        valid = verifier.verify(secrets, commitments)

        print('{} == {}? {}; {} {}'.format(theInt, trueInt, valid, secrets, commitments))


And a sample output:

527 == 527? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 0)]
67 == 67? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 1), (63975, 1), (5426, 0), (9124, 1), (23973, 1), (44832, 1), (33044, 0), (68501, 0)]
729 == 729? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 1), (5426, 0), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
441 == 441? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 0), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
614 == 614? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 1), (23973, 1), (44832, 0), (33044, 0), (68501, 1)]
696 == 696? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
974 == 974? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 0), (54363, 0), (63975, 1), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 1)]
184 == 184? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
136 == 136? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
632 == 632? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]


Before we move on, we should note that this integer commitment scheme “blows up” the secret by quite a bit. If you have a security parameter $s$ and an integer with $n$ bits, then the commitment uses roughly $sn$ bits. A more efficient method would be to simply use a good public-key encryption scheme, and then reveal the secret key used to encrypt the message. While we implemented such schemes previously on this blog, I thought it would be more fun to do something new.

## A zero-knowledge proof for 3-coloring

First, a high-level description of the protocol. The setup: the prover has a graph $G$ with $n$ vertices $V$ and $m$ edges $E$, and also has a secret 3-coloring of the vertices $\varphi: V \to \{ 0, 1, 2 \}$. Recall, a 3-coloring is just an assignment of colors to vertices (in this case the colors are 0,1,2) so that no two adjacent vertices have the same color.

So the prover has a coloring $\varphi$ to be kept secret, but wants to prove that $G$ is 3-colorable. The idea is for the verifier to pick a random edge $(u,v)$, and have the prover reveal the colors of $u$ and $v$. However, if we run this protocol only once, there’s nothing to stop the prover from just lying and picking two distinct colors. If we allow the verifier to run the protocol many times, and the prover actually reveals the colors from their secret coloring, then after roughly $|V|$ rounds the verifier will know the entire coloring. Each step reveals more knowledge.

We can fix this with two modifications.

1. The prover first publicly commits to the coloring using a commitment scheme. Then when the verifier asks for the colors of the two vertices of a random edge, they can rest assured that the prover fixed a coloring that does not depend on the verifier’s choice of edge.
2. The prover doesn’t reveal colors from their secret coloring, but rather from a random permutation of the secret coloring. This way, when the verifier sees colors, they’re equally likely to see any two colors, and all the verifier will know is that those two colors are different.

So the scheme is: prover commits to a random permutation of the true coloring and sends it to the verifier; the verifier asks for the true colors of a given edge; the prover provides those colors and the secrets to their commitment scheme so the verifier can check.

The key point is that the prover has now committed to a coloring before seeing which edge the verifier will pick, and if the coloring isn’t a proper 3-coloring the verifier has a reasonable chance of picking an improperly colored edge (a one-in-$|E|$ chance, which is at least $1/|V|^2$). On the other hand, if the coloring is proper, then the verifier will always query a properly colored edge, and it’s zero-knowledge because the verifier is equally likely to see any pair of distinct colors. So the verifier will always accept, but won’t know anything more than that the edge it chose is properly colored. Repeating this $|V|^2$-ish times, a cheating prover will be caught with high probability, while an honest prover reveals nothing more.
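As a quick sanity check of that soundness argument (a sketch with illustrative numbers, not part of the protocol): a cheating prover whose committed coloring has at least one improper edge survives a single round with probability at most $1 - 1/|E|$, so about $|E| \ln(1/\varepsilon)$ rounds drive the cheating probability below $\varepsilon$.

```python
import math

def roundsNeeded(numEdges, eps):
    # enough rounds that (1 - 1/numEdges)^rounds <= eps
    return math.ceil(numEdges * math.log(1 / eps))

numEdges, eps = 6, 0.001   # e.g., a 6-edge graph, 0.1% cheating probability
k = roundsNeeded(numEdges, eps)
assert (1 - 1 / numEdges) ** k <= eps
print(k)  # 42 rounds suffice
```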

Let’s implement this scheme. First the data types. As in the previous post, graphs are represented by edge lists, and a coloring is represented by a dictionary mapping a vertex to 0, 1, or 2 (the “colors”).

# a graph is a list of edges, and for simplicity we'll say
# every vertex shows up in some edge
exampleGraph = [
    (1, 2),
    (1, 4),
    (1, 3),
    (2, 5),
    (3, 6),
    (5, 6),
]

exampleColoring = {
    1: 0,
    2: 1,
    3: 2,
    4: 1,
    5: 2,
    6: 0,
}


Next, the Prover class that implements that half of the protocol. We store a list of integer commitment schemes for each vertex whose color we need to commit to, and send out those commitments.

# numVertices, randomPermutation, and the default ONE_WAY_PERMUTATION /
# HARDCORE_PREDICATE values are small helpers defined in the full program
# in this post's Github repository.
class Prover(object):
    def __init__(self, graph, coloring, oneWayPermutation=ONE_WAY_PERMUTATION, hardcorePredicate=HARDCORE_PREDICATE):
        self.graph = [tuple(sorted(e)) for e in graph]
        self.coloring = coloring
        self.vertices = list(range(1, numVertices(graph) + 1))
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.vertexToScheme = None

    def commitToColoring(self):
        self.vertexToScheme = {
            v: commitment.BBSIntCommitmentScheme(
                2, self.oneWayPermutation, self.hardcorePredicate
            ) for v in self.vertices
        }

        permutation = randomPermutation(3)
        permutedColoring = {
            v: permutation[self.coloring[v]] for v in self.vertices
        }

        return {v: s.commit(permutedColoring[v])
                for (v, s) in self.vertexToScheme.items()}

    def revealColors(self, u, v):
        u, v = min(u, v), max(u, v)
        if not (u, v) in self.graph:
            raise Exception('Must query an edge!')

        return (
            self.vertexToScheme[u].reveal(),
            self.vertexToScheme[v].reveal(),
        )


In commitToColoring we randomly permute the underlying colors, and then compose that permutation with the secret coloring, committing to each resulting color independently. In revealColors we reveal only those colors for a queried edge. Note that we don’t actually need to store the permuted coloring, because it’s implicitly stored in the commitments.

It’s crucial that we reject any query that doesn’t correspond to an edge. If we don’t reject such queries then the verifier can break the protocol! In particular, by querying non-edges you can determine which pairs of nodes have the same color in the secret coloring. You can then chain these together to partition the nodes into color classes, and so color the graph. (After seeing the Verifier class below, implement this attack as an exercise).
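Here’s a sketch of that attack (spoiling part of the exercise), using a hypothetical oracle that models a broken prover willing to answer non-edge queries. The per-query color permutation hides the actual colors, but whether the two revealed colors are equal still leaks, and equality is all we need to merge vertices into color classes:

```python
import itertools
import random

def sameColorOracle(coloring):
    # models a broken prover answering queries for ARBITRARY vertex pairs;
    # each answer uses a fresh random permutation of the three colors
    def query(u, v):
        perm = random.sample([0, 1, 2], 3)
        return perm[coloring[u]], perm[coloring[v]]
    return query

def recoverColorClasses(vertices, query):
    parent = {v: v for v in vertices}        # simple union-find forest
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in itertools.combinations(vertices, 2):
        a, b = query(u, v)
        if a == b:                           # equal revealed colors => same class
            parent[find(u)] = find(v)
    groups = {}
    for v in vertices:
        groups.setdefault(find(v), set()).add(v)
    return sorted(groups.values(), key=min)

secretColoring = {1: 0, 2: 1, 3: 2, 4: 1, 5: 2, 6: 0}   # the example coloring above
classes = recoverColorClasses(range(1, 7), sameColorOracle(secretColoring))
print(classes)  # [{1, 6}, {2, 4}, {3, 5}]
```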

Here’s the corresponding Verifier:

class Verifier(object):
    def __init__(self, graph, oneWayPermutation, hardcorePredicate):
        self.graph = [tuple(sorted(e)) for e in graph]
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.committedColoring = None
        self.verifier = commitment.BBSIntCommitmentVerifier(2, oneWayPermutation, hardcorePredicate)

    def chooseEdge(self, committedColoring):
        self.committedColoring = committedColoring
        self.chosenEdge = random.choice(self.graph)
        return self.chosenEdge

    def accepts(self, revealed):
        revealedColors = []

        for (w, bitSecrets) in zip(self.chosenEdge, revealed):
            trueColor = self.verifier.decode(bitSecrets, self.committedColoring[w])
            revealedColors.append(trueColor)
            if not self.verifier.verify(bitSecrets, self.committedColoring[w]):
                return False

        return revealedColors[0] != revealedColors[1]


As expected, in the acceptance step the verifier decodes the true color of the edge it queried, and accepts if and only if the commitment was valid and the edge is properly colored.

Here’s the whole protocol, which is syntactically very similar to the one for graph isomorphism.

def runProtocol(G, coloring, securityParameter=512):
    oneWayPermutation = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePredicate = blum_blum_shub.parity

    prover = Prover(G, coloring, oneWayPermutation, hardcorePredicate)
    verifier = Verifier(G, oneWayPermutation, hardcorePredicate)

    committedColoring = prover.commitToColoring()
    chosenEdge = verifier.chooseEdge(committedColoring)

    revealed = prover.revealColors(*chosenEdge)
    revealedColors = (
        verifier.verifier.decode(revealed[0], committedColoring[chosenEdge[0]]),
        verifier.verifier.decode(revealed[1], committedColoring[chosenEdge[1]]),
    )
    isValid = verifier.accepts(revealed)

    print("{} != {} and commitment is valid? {}".format(
        revealedColors[0], revealedColors[1], isValid
    ))

    return isValid


And an example of running it

if __name__ == "__main__":
    for _ in range(30):
        runProtocol(exampleGraph, exampleColoring, securityParameter=10)


Here’s the output

0 != 2 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
1 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 1 and commitment is valid? True
0 != 1 and commitment is valid? True
2 != 1 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
2 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 1 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 1 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
2 != 1 and commitment is valid? True
2 != 1 and commitment is valid? True
1 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
1 != 2 and commitment is valid? True
1 != 2 and commitment is valid? True
0 != 1 and commitment is valid? True


So while we haven’t proved it rigorously, we’ve seen the zero-knowledge proof for graph 3-coloring. This automatically gives us a zero-knowledge proof for all of NP, because given any NP problem you can just convert it to the equivalent 3-coloring problem and run the protocol on that. Of course, the blowup required to convert an arbitrary NP problem to 3-coloring can be polynomially large, which makes the approach unsuitable in practice. But the point is that this gives us a theoretical justification for which problems have zero-knowledge proofs in principle. Now that we’ve established that, you can go about trying to find the most efficient protocol for your favorite problem.

## Anticipatory notes

When we covered graph isomorphism last time, we said that a simulator could, without participating in the zero-knowledge protocol or knowing the secret isomorphism, produce a transcript that was drawn from the same distribution of messages as the protocol produced. That was all that it needed to be “zero-knowledge,” because anything the verifier could do with its protocol transcript, the simulator could do too.

We can do exactly the same thing for 3-coloring, exploiting the same “reverse order” trick where the simulator picks the random edge first, then chooses the color commitment post-hoc.
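For concreteness, here’s a sketch of that honest-verifier simulator, with a placeholder `commit` function standing in for the BBS commitment scheme (it takes a color to a (commitment, opening) pair; the structure, not the cryptography, is the point). The simulator picks the verifier’s edge first, puts two random distinct colors on its endpoints, commits to garbage everywhere else, and outputs the messages in protocol order:

```python
import random

def simulateTranscript(graph, commit):
    edge = random.choice(graph)                     # the verifier's edge, chosen FIRST
    u, v = edge
    colorU, colorV = random.sample([0, 1, 2], 2)    # random but distinct colors
    vertices = {w for e in graph for w in e}
    colors = {w: 0 for w in vertices}               # garbage colors, never opened
    colors[u], colors[v] = colorU, colorV
    committed = {w: commit(colors[w]) for w in vertices}
    commitments = {w: c for w, (c, _) in committed.items()}
    openings = (committed[u][1], committed[v][1])
    return commitments, edge, openings              # in protocol message order

toyCommit = lambda color: (object(), color)         # placeholder, NOT a real commitment
commitments, edge, openings = simulateTranscript([(1, 2), (2, 3), (1, 3)], toyCommit)
assert openings[0] != openings[1]                   # the opened edge always looks proper
```

Since the honest verifier only ever opens the chosen edge, the unopened garbage commitments are computationally indistinguishable from commitments to real colors.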

Unfortunately, both there and here I’m short-changing you, dear reader. The elephant in the room is that our naive simulator assumes the verifier is playing by the rules! If you want to define security, you have to define it against a verifier who breaks the protocol in an arbitrary way. For example, the simulator should be able to produce an equivalent transcript even if the verifier deterministically picks an edge, or tries to pick a non-edge, or tries to send gibberish. It takes a lot more work to prove security against an arbitrary verifier, but the basic setup is that the simulator can no longer make choices for the verifier, but rather has to invoke the verifier subroutine as a black box. (To compensate, the requirements on the simulator are relaxed quite a bit; more on that next time)

Because an implementation of such a scheme would involve a lot of validation, we’re going to defer the discussion to next time. We also need to be more specific about the different kinds of zero-knowledge: we won’t be able to achieve perfect zero-knowledge, where the simulator draws from a distribution identical to the protocol’s, but rather computational zero-knowledge, where the two distributions are merely computationally indistinguishable.

We’ll define all this rigorously next time, and discuss the known theoretical implications and limitations. Next time will be cuffs-off theory, baby!

Until then!

# The Mathematics of Secret Sharing

Here’s a simple puzzle with a neat story. A rich old woman is drafting her will and wants to distribute her expansive estate equally amongst her five children. But her children are very greedy, and the woman knows that if she leaves her will unprotected her children will resort to nefarious measures to try to get more than their fair share. In one fearful scenario, she worries that the older four children will team up to bully the youngest child entirely out of his claim! She desperately wants them to cooperate, so she decides to lock the will away, and the key is a secret integer $N$. The question is, how can she distribute this secret number to her children so that the only way they can open the safe is if they are all present and willing?

A mathematical way to say this is: how can she distribute some information to her children so that, given all of their separate pieces of information, they can reconstruct the key, but for every choice of fewer than 5 children, there is no way to reliably recover the key? This is called the secret sharing problem. More generally, say we have an integer $N$ called the secret, a number of participants $k$, and a number required for reconstruction $r$. Then a secret sharing protocol is the data of a method for distributing information and a method for reconstructing the secret. The distributing method is an algorithm $D$ that accepts as input $N, k, r$ and produces as output a list of $k$ numbers $D(N, k, r) = (x_1, x_2, \dots, x_k)$. These are the numbers distributed to the $k$ participants. Then the reconstruction method is a function $R$ which accepts as input $r$ numbers $(y_1, \dots, y_r)$ and outputs a number $M$. We want two properties to hold:

• The reconstruction function $R$ outputs $N$ when given any $r$ of the numbers output by $D$.
• One cannot reliably reconstruct $N$ with fewer than $r$ of the numbers output by $D$.
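Before the general construction, it’s worth noting that the special case $r = k$ has a nearly one-line solution, sketched here in Python for brevity (the helper names are mine): give the first $k-1$ participants independent random numbers, and give the last participant the XOR of the secret with all of them.

```python
import random
from functools import reduce
from operator import xor

def distribute(N, k, numBits=64):
    # first k-1 shares are uniformly random; the last folds in the secret
    shares = [random.getrandbits(numBits) for _ in range(k - 1)]
    shares.append(reduce(xor, shares, N))
    return shares

def reconstruct(shares):
    # XOR of all k shares cancels the randomness, leaving N
    return reduce(xor, shares)

N = 424242
assert reconstruct(distribute(N, 5)) == N
```

Any $k-1$ of the shares are jointly uniformly random, so they carry no information about $N$; Shamir’s scheme below handles the general case $r \leq k$.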

The question is: does an efficient secret sharing protocol exist for every possible choice of $r \leq k$? In fact it does, and the one we’ll describe in this post is far more secure than the word “reliable” suggests. It will be so hard as to be mathematically impossible to reconstruct the secret from fewer than the desired number of pieces. Independently discovered by Adi Shamir in 1979, the protocol we’ll see in this post is wonderfully simple, and as we describe it we’ll build up a program to implement it. This time we’ll work in the Haskell programming language, and you can download the program from this blog’s Github page. And finally, a shout out to my friend Karishma Chadha who worked together with me on this post. She knows Haskell a lot better than I do.

## Polynomial Interpolation

The key to the secret sharing protocol is a beautiful fact about polynomials. Specifically, if you give me $k+1$ points in the plane with distinct $x$ values, then there is a unique degree $k$ polynomial that passes through the points. Just as importantly (and as a byproduct of this fact), there are infinitely many degree $k+1$ polynomials that pass through the same points. For example, if I give you the points $(1,2), (2,4), (-2,2)$, the only quadratic (degree 2) polynomial that passes through all of them is $1 + \frac{1}{2}x + \frac{1}{2} x^2$.

The proof that you can always find such a polynomial is pretty painless, so let’s take it slowly and write a program as we go. Suppose you give me some list of $k+1$ points $(x_0, y_0), \dots, (x_k, y_k)$ and no two $x$ values are the same. The proof has two parts. First we have to prove existence, that some degree $k$ polynomial passes through the points, and then we have to prove that the polynomial is unique. The uniqueness part is easier, so let’s do the existence part first.

Let’s start with just one point $(x_0, y_0)$. What’s a degree zero polynomial that passes through it? Just the constant function $f(x) = y_0$. For two points $(x_0, y_0), (x_1, y_1)$ it’s similarly easy, since we all probably remember from basic geometry that there’s a unique line passing through any two points. But let’s write the line in a slightly different way:

$\displaystyle f(x) = \frac{(x-x_1)}{x_0-x_1}y_0 + \frac{(x-x_0)}{x_1-x_0} y_1$

Why write it this way? Because now it should be obvious that the polynomial passes through our two points: if I plug in $x_0$ then the second term is zero and the first term is just $y_0(x_0 - x_1) / (x_0 - x_1) = y_0$, and likewise for $x_1$.

For example, if we’re given $(1, 3), (2, 5)$ we get:

$\displaystyle f(x) = \frac{(x - 2)}{(1-2)} \cdot 3 + \frac{(x-1)}{(2-1)} \cdot 5$

Plugging in $x = 1$ cancels the second term out, leaving $f(1) = \frac{1-2}{1-2} \cdot 3 = 3$, and plugging in $x = 2$ cancels the first term, leaving $f(2) = \frac{(2-1)}{(2-1)} \cdot 5 = 5$.
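As a quick numeric cross-check (in Python for brevity, though the code later in this post is Haskell), the line above simplifies to $f(x) = 2x + 1$, and it does pass through both points:

```python
from fractions import Fraction

# the two-term Lagrange form for the points (1, 3) and (2, 5)
f = lambda x: Fraction(x - 2, 1 - 2) * 3 + Fraction(x - 1, 2 - 1) * 5

assert (f(1), f(2)) == (3, 5)
assert f(0) == 1  # consistent with f(x) = 2x + 1
```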

Now the hard step is generalizing this to three points. But the suggestive form above gives us a hint on how to continue.

$\displaystyle f(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}y_0+\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}y_1+ \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}y_2$

Notice that the numerators of the terms take on the form $y_j \prod_{i \ne j} (x-x_i)$, that is, the product $(x-x_0)(x-x_1) \cdots (x-x_n) \, y_j$ with the factor $(x - x_j)$ excluded. Thus, all terms will cancel out to 0 if we plug in $x_i$, except one term, which has the form

$\displaystyle y_i \cdot \frac{\prod_{j \neq i} (x-x_j)}{\prod_{j \neq i} (x_i - x_j)}$

Here, the fraction on the right side of the term cancels out to 1 when $x_i$ is plugged in, leaving only $y_i$, the desired result. Now that we’ve written the terms in this general product form, we can easily construct examples for any number of points. We just do a sum of terms that look like this, one for each $y$ value. Try writing this out as a summation, if you feel comfortable with notation.
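If you do write it out as a summation, the whole polynomial is

$\displaystyle f(x) = \sum_{i=0}^{n} y_i \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}$

and plugging in any $x_i$ kills every term except the $i$-th, which evaluates to $y_i$.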

Let’s go further and write an algorithm to construct the polynomial for us. Some preliminaries: we encode a polynomial as a list of coefficients in degree-increasing order, so that $1 + 3x + 5x^3$ is represented by [1,3,0,5].

type Point = (Rational, Rational)
type Polynomial = [Rational] --Polynomials are represented in ascending degree order


Then we can write some simple functions for adding and multiplying polynomials.

addPoly :: Polynomial -> Polynomial -> Polynomial
addPoly [] [] = []
addPoly [] xs = xs
addPoly xs [] = xs
addPoly (x:xs) (y:ys) = (x+y) : (addPoly xs ys)

multNShift :: Polynomial -> (Rational, Int) -> Polynomial
multNShift xs (y, shift) =
   (replicate shift 0) ++ (map ((*) y) xs)

multPoly :: Polynomial -> Polynomial -> Polynomial
multPoly [] [] = []
multPoly [] _ = []
multPoly _ [] = []
multPoly xs ys = foldr addPoly [] $ map (multNShift ys) $ zip xs [0..]


In short, multNShift multiplies a polynomial by a monomial (like $3x^2 (1 + 7x + 2x^4)$), and multPoly does the usual distribution of terms, using multNShift to do most of the hard work. Then to construct the polynomial we need one more helper function to extract all elements of a list except a specific entry:

allBut :: Integer -> [a] -> [a]
allBut i list = snd $ unzip $ filter (\ (index, _) -> i /= index) $ zip [0..] list

And now we can construct a polynomial from a list of points in the same way we did mathematically.

findPolynomial :: [Point] -> Polynomial
findPolynomial points =
   let term (i, (xi, yi)) =
          let prodTerms = map (\ (xj, _) -> [-xj / (xi - xj), 1 / (xi - xj)]) $ allBut i points
          in multPoly [yi] $ foldl multPoly [1] prodTerms
   in foldl addPoly [] $ map term $ zip [0..] points

Here the sub-function term constructs the $i$-th term of the polynomial, and the remaining expression adds up all the terms. Remember that, due to our choice of representation, the awkward 1 sitting in the formula signifies the presence of $x$. And that's it! An example of its use to construct $3x - 1$:

*Main> findPolynomial [(1,2), (2,5)]
[(-1) % 1,3 % 1]

Now the last thing we need to do is show that the polynomial we constructed in this way is unique. Here's a proof. Suppose there are two degree $n$ polynomials $f(x)$ and $g(x)$ that pass through the $n+1$ given data points $(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$. Let $h(x) = f(x) - g(x)$; we want to show that $h(x)$ is the zero polynomial. This proves that $f(x)$ is unique, because the only assumptions we made at the beginning were that $f, g$ both passed through the given points.

Now since both $f$ and $g$ are degree $n$ polynomials, $h$ is a polynomial of degree at most $n$. It is also true that $h(x_i) = f(x_i) - g(x_i) = y_i - y_i = 0$ for $0 \leq i \leq n$. Thus, we have (at least) $n+1$ roots of this degree $n$ polynomial. But this can't happen: a nonzero polynomial of degree at most $n$ has at most $n$ roots. In more detail: if a nonzero degree $\leq n$ polynomial really could have $n+1$ distinct roots, then you could factor it into at least $n+1$ linear terms, $h(x) = c(x - x_0)(x - x_1) \cdots (x - x_n)$. But since there are $n+1$ copies of $x$, $h$ would need to be a degree $n+1$ polynomial! The only way to resolve this contradiction is if $h$ is actually the zero polynomial, and thus $h(x) = f(x) - g(x) = 0$, i.e., $f(x) = g(x)$. This completes the proof.

Now that we know these polynomials exist and are unique, it makes sense to give them a name. So for a given set of $k+1$ points, call the unique degree $k$ polynomial that passes through them the interpolating polynomial for those points.
## Secret Sharing with Interpolating Polynomials

Once you think to use interpolating polynomials, the connection to secret sharing seems almost obvious. If you want to distribute a secret to $k$ people so that $r$ of them can reconstruct it, here's what you do:

1. Pick a random polynomial $p$ of degree $r-1$ so that the secret is $p(0)$.
2. Distribute the points $(1, p(1)), (2, p(2)), \dots, (k, p(k))$.

Then the reconstruction function is: take the points provided by at least $r$ participants, use them to reconstruct $p$, and output $p(0)$. That's it! Step 1 might seem hard at first, but notice that $p(0)$ is just the constant term of the polynomial, so you can pick $r-1$ random numbers for the other coefficients of $p$ and be done. In Haskell,

makePolynomial :: Rational -> Int -> StdGen -> Polynomial
makePolynomial secret r generator =
   secret : map toRational (take (r-1) $ randomRs (1, (numerator (2*secret))) generator)

share :: Rational -> Integer -> Int -> IO [Point]
share secret k r = do
   generator <- getStdGen
   let poly = makePolynomial secret r generator
       ys = map (eval poly) $ map toRational [1..k]
   return $ zip [1..] ys


In words, we initialize the Haskell standard generator (which wraps the results inside an IO monad), then we construct a polynomial by letting the first coefficient be the secret and choosing random coefficients for the rest. And findPolynomial is the reconstruction function.

Finally, just to flesh the program out a little more, we write functions that encode and decode a string as an integer.

encode :: String -> Integer
encode str = let nums = zip [0..] $ map (toInteger . ord) str
                 integers = map (\(i, n) -> shift n (i*8)) nums
             in foldl (+) 0 integers

decode :: Integer -> String
decode 0 = ""
decode num = if num < 0
             then error "Can't decode a negative number"
             else chr (fromInteger (num .&. 127)) : (decode $ shift num (-8))
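This byte-packing is easy to cross-check. Here is the same scheme sketched in Python (the Haskell decode masks with 127, which agrees with masking a full byte for ASCII text, since the high bit of each character is never set):

```python
def encode(s):
    # Character i occupies bit positions 8i..8i+7, like the Haskell encode.
    return sum(ord(c) << (8 * i) for i, c in enumerate(s))

def decode(n):
    # Peel off one byte at a time, low byte first.
    chars = []
    while n > 0:
        chars.append(chr(n & 0xFF))
        n >>= 8
    return "".join(chars)

assert encode("Hi") == 72 + 105 * 256 == 26952  # 'H' is 72, 'i' is 105
assert decode(encode("Hello world!")) == "Hello world!"
```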


And then we have a function that shows the whole process in action.

example msg k r =
   let secret = toRational $ encode msg
   in do
      points <- share secret k r
      putStrLn $ show $ numerator secret
      putStrLn $ show $ map (\(x, y) -> (numerator x, numerator y)) points
      let subset = take r points
          encodedSecret = eval (findPolynomial subset) 0
      putStrLn $ show $ numerator encodedSecret
      putStrLn $ decode $ numerator encodedSecret

And a function call:

*Main> example "Hello world!" 10 5
10334410032606748633331426632
[(1,34613972928232668944107982702),(2,142596447049264820443250256658),(3,406048862884360219576198642966),(4,916237517700482382735379150124),(5,1783927975542901326260203400662),(6,3139385067235193566437068631142),(7,5132372890379242119499357692158),(8,7932154809355236501627439048336),(9,11727493455321672728948666778334),(10,16726650726215353317537380574842)]
10334410032606748633331426632
Hello world!

## Security

The final question to really close this problem with a nice solution is: how secure is this protocol? That is, if you didn't know the secret but you had $r-1$ of the numbers, could you find a way to recover the secret, oh, say, 0.01% of the time?

Pleasingly, the answer is a solid no. This protocol has something much stronger, what's called information-theoretic security. In layman's terms, this means it cannot possibly be broken, period. That is, unless you can take advantage of some aspect of the random number generator, which we assume is secure. With that assumption the security proof is short.

Here it goes. Pick any number $M$ that isn't the secret $N$, and say you only have $r-1$ of the correct numbers $y_1, \dots, y_{r-1}$. Then there is a final number $y_r$ so that the protocol reconstructs $M$ instead of $N$. This holds no matter which of the unused $x$-values you pick, and no matter what $M$ and which $r-1$ numbers you started with. This is simply because adding in the point $(0, M)$ defines a new polynomial $q$, and you can use any other point on $q$ as your $r$-th number.

Here's what this means: a person trying to break the secret sharing protocol would have no way to tell if they did it correctly!
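We can see this in action with a small Python side-sketch (hypothetical toy shares, not part of the Haskell program): for any guessed secret $M$, an attacker holding $r-1$ shares can forge an $r$-th share consistent with $M$.

```python
from fractions import Fraction

def interpolate_at(points, x):
    # Lagrange evaluation of the unique polynomial through `points`, at x.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Hypothetical scenario: r = 3, and an attacker holds only r - 1 = 2 shares.
shares = [(1, Fraction(7)), (2, Fraction(19))]

# For ANY guessed secret M, adding the point (0, M) defines a polynomial q,
# and any other point on q is a plausible third share -- every guess is
# equally consistent with the shares the attacker holds.
for M in [0, 5, 42, 1000]:
    q_points = shares + [(0, Fraction(M))]
    forged_share = (3, interpolate_at(q_points, 3))
    assert interpolate_at(shares + [forged_share], 0) == M
```

Since every candidate secret is reachable by some forged share, the $r-1$ known shares carry no information about the real secret.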
If the secret is a message, then a bad reconstruction could produce any message. In information theory terms, knowing $r-1$ of the numbers provides no information about the actual message. In our story from the beginning of the post, no matter how much computing power one of the greedy children may have, the only algorithm they have to open the safe is to try every combination. The mother could make the combination have length in the millions of digits, or even better, the mother could encode the will as an integer and distribute that as the secret. I imagine there are some authenticity issues there, since one could claim to have reconstructed a false will, signatures and all, but there appear to be measures to account for this.

One might wonder if this is the only known secret sharing protocol, and the answer is no. Essentially, any time you have an existence and uniqueness theorem in mathematics, and the objects you're working with are efficiently constructible, then you have the potential for a secret sharing protocol. There are two more on Wikipedia. But people don't really care to find new ones anymore because the known protocols are as good as it gets.

On a broader level, the existence of efficient secret sharing protocols is an important fact used in the field of secure multiparty computation. Here the goal is for a group of individuals to compute a function depending on secret information from all of them, without revealing their secret information to anyone. A classic example of this is to compute the average of seven salaries without revealing any of the salaries. This was a puzzle featured on Car Talk, and it has a cute answer. See if you can figure it out.

Until next time!
# Sending and Authenticating Messages with Elliptic Curves

Last time we saw the Diffie-Hellman key exchange protocol, and discussed the discrete logarithm problem and the related Diffie-Hellman problem, which form the foundation for the security of most protocols that use elliptic curves. Let's continue our journey to investigate some more protocols.

Just as a reminder, the Python implementations of these protocols are not at all meant for practical use, but for learning purposes. We provide the code on this blog's Github page, but for the love of security don't actually use them.

## Shamir-Massey-Omura

Recall that there are lots of ways to send encrypted messages if you and your recipient share some piece of secret information, and the Diffie-Hellman scheme allows one to securely generate a piece of shared secret information. Now we'll shift gears and assume you don't have a shared secret, nor any way to acquire one.

The first cryptosystem in that vein is called the Shamir-Massey-Omura protocol. It's only slightly more complicated to understand than Diffie-Hellman, and it turns out to be equivalently difficult to break. The idea is best explained by metaphor. Alice wants to send a message to Bob, but all she has is a box and a lock for which she has the only key. She puts the message in the box and locks it with her lock, and sends it to Bob. Bob can't open the box, but he can send it back with a second lock on it for which Bob has the only key. Upon receiving it, Alice unlocks her lock, sends the box back to Bob, and Bob can now open the box and retrieve the message.

To celebrate the return of Game of Thrones, we'll demonstrate this protocol with an original Lannister Infographic™. Assuming the box and locks are made of magically unbreakable Valyrian steel, nobody but Bob (also known as Jamie) will be able to read the message.

Now fast forward through the enlightenment, industrial revolution, and into the age of information.
The same idea works, and it's significantly faster over long distances. Let $C$ be an elliptic curve over a finite field $k$ (we'll fix $k = \mathbb{Z}/p$ for some prime $p$, though it works for general fields too), and let $n$ be the number of points on $C$.

Alice's message is going to be in the form of a point $M$ on $C$. She'll then choose her secret integer $0 < s_A < n$ with $\gcd(s_A, n) = 1$ and compute $s_A M$ (locking the secret in the box), sending the result to Bob. Bob will likewise pick a secret integer $s_B$ coprime to $n$, and send $s_B s_A M$ back to Alice.

Now the unlocking part: since $s_A$ is invertible modulo $n$ (the order of the group of points), Alice can "unlock the box" by computing the inverse $s_A^{-1} \bmod n$ and then computing $s_B M = s_A^{-1} s_B s_A M$. Now the "box" just has Bob's lock on it. So Alice sends $s_B M$ back to Bob, and Bob performs the same process to evaluate $s_B^{-1} s_B M = M$, thus receiving the message.

Like we said earlier, the security of this protocol is equivalent to the security of the Diffie-Hellman problem. In this case, if we call $z = s_A^{-1}$ and $y = s_B^{-1}$, and $P = s_A s_B M$, then it's clear that any eavesdropper would have access to $P, zP$, and $yP$, and they would be tasked with determining $zyP = M$, which is exactly the Diffie-Hellman problem.

Now Alice's secret message comes in the form of a point on an elliptic curve, so how might one translate part of a message (which is usually represented as an integer) into a point? This problem seems to be difficult in general, and there's no easy answer. Here's one method originally proposed by Neal Koblitz that uses a bit of number theory trickery.

Let $C$ be given by the equation $y^2 = x^3 + ax + b$, again over $\mathbb{Z}/p$. Suppose $0 \leq m < p/100$ is our message. Define for any $0 \leq j < 100$ the candidate $x$-points $x_j = 100m + j$, and call our candidate $y^2$-values $s_j = x_j^3 + ax_j + b$. Now for each $j$ we can compute $x_j, s_j$, and so we'll pick the first one for which $s_j$ is a square in $\mathbb{Z}/p$ and we'll get a point on the curve.
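Koblitz's embedding is short enough to sketch directly. The parameters below are hypothetical toy values of my own choosing (a small prime $p \equiv 3 \bmod 4$ so the square root formula discussed next applies), not a curve from this post:

```python
def embed(m, a, b, p):
    # Koblitz's trick: try x_j = 100*m + j until x_j^3 + a*x_j + b is a square mod p.
    for j in range(100):
        x = 100 * m + j
        s = (x ** 3 + a * x + b) % p
        if pow(s, (p - 1) // 2, p) == 1:   # Euler's criterion: s is a nonzero square
            y = pow(s, (p + 1) // 4, p)    # square root, valid because p = 3 (mod 4)
            return (x, y)
    return None  # each j fails with probability ~1/2, so this is vanishingly rare

def recover(point):
    # Decoding just strips the last two digits of the x-coordinate.
    return point[0] // 100

# Hypothetical toy parameters: p = 1019 is prime and congruent to 3 mod 4.
p, a, b = 1019, 3, 181
P = embed(7, a, b, p)
assert P is not None and recover(P) == 7
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0  # the point is on the curve
```

Note the message bound $m < p/100$ guarantees every candidate $x_j$ stays below $p$.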
How can we tell if $s_j$ is a square? One condition is that $s_j^{(p-1)/2} \equiv 1 \mod p$. This is a basic fact about quadratic residues modulo primes; see these notes for an introduction and this Wikipedia section for a dense summary. Once we know it's a square, we can compute the square root depending on whether $p \equiv 1 \mod 4$ or $p \equiv 3 \mod 4$. In the latter case, it's just $s_j^{(p+1)/4} \mod p$. Unfortunately the former case is more difficult (really, the difficult part is $p \equiv 1 \mod 8$). You can see Section 1.5 of this textbook for more details and three algorithms, or you could just pick primes congruent to 3 mod 4.

I have struggled to find information about the history of the Shamir-Massey-Omura protocol; every author claims it's not widely used in practice, and the only reason seems to be that this protocol doesn't include a suitable method for authenticating the validity of a message. In other words, some "man in the middle" could be intercepting messages and tricking you into thinking he is your intended recipient. Coupling this with the difficulty of encoding a message as a point seems to be enough to make cryptographers look for other methods. Another reason could be that the system was patented in 1982 and is currently held by SafeNet, one of the US's largest security providers. All of their products have generic names so it's impossible to tell if they're actually using Shamir-Massey-Omura. I'm no patent lawyer, but it could simply be that nobody else is allowed to implement the scheme.

## Digital Signatures

Indeed, the discussion above raises the question: how does one authenticate a message? The standard technique is called a digital signature, and we can implement those using elliptic curve techniques as well. To debunk the naive idea first: one cannot simply attach some static piece of extra information to the message.
An attacker could just copy that information and replicate it to forge your signature on another, potentially malicious document. In other words, a signature should only work for the message it was used to sign.

The technique we'll implement was originally proposed by Taher Elgamal, and is called the ElGamal signature algorithm. We're going to look at a special case of it. So Alice wants to send a message $m$ with some extra information that is unique to the message and that can be used to verify that it was sent by Alice. She picks an elliptic curve $E$ over $\mathbb{F}_q$ in such a way that the number of points on $E$ is $br$, where $b$ is a small integer and $r$ is a large prime. Then, as in Diffie-Hellman, she picks a base point $Q$ that has order $r$ and a secret integer $s$ (which is permanent), and computes $P = sQ$. Alice publishes everything except $s$:

Public information: $\mathbb{F}_q, E, b, r, Q, P$

Let Alice's message $m$ be represented as an integer at most $r$ (there are a few ways to get around this if your message is too long). Now to sign $m$, Alice picks a message-specific $k < r$ and computes what I'll call the auxiliary point $A = kQ$. Let $A = (x, y)$. Alice then computes the signature $g = k^{-1}(m + sx) \mod r$. The signed message is then $(m, A, g)$, which Alice can safely send to Bob.

Before we see how Bob verifies the message, notice that the signature integer involves everything: Alice's secret key, the message-specific secret integer $k$, and most importantly the message. Remember that this is crucial: we want the signature to work only for the message that it was used to sign. If the same $k$ is used for multiple messages then an attacker can find out your secret key! (And this has happened in practice; see the end of the post.)

So Bob receives $(m, A, g)$, and also has access to all of the public information listed above. Bob authenticates the message by computing the auxiliary point via a different route.
First, he computes $c = g^{-1} m \mod r$ and $d = g^{-1}x \mod r$, and then $A' = cQ + dP$. If the message was signed by Alice then $A' = A$, since we can just write out the definition of everything:

$\displaystyle A' = cQ + dP = g^{-1}mQ + g^{-1}x \cdot sQ = g^{-1}(m + sx)Q = kQ = A,$

where the last step uses the fact that $g^{-1}(m + sx) \equiv k \mod r$ and $Q$ has order $r$.

Now to analyze the security. The attacker wants to be able to take any message $m'$ and produce a signature $A', g'$ that will pass validation with Alice's public information. If the attacker knew how to solve the discrete logarithm problem efficiently this would be trivial: compute $s$ and then just sign like Alice does. Without that power there are still a few options. If the attacker can figure out the message-specific integer $k$, then she can compute Alice's secret key $s$ as follows. Given $g = k^{-1}(m + sx) \mod r$, compute $kg \equiv (m + sx) \mod r$. Compute $d = \gcd(x, r)$, and you know that this congruence has only $d$ possible solutions modulo $r$. Since $s$ is less than $r$, the attacker can just try all options until they find $P = sQ$. So that's bad, but in a properly implemented signature algorithm finding $k$ is equivalently hard to solving the discrete logarithm problem, so we can assume we're relatively safe from that.

On the other hand, one could imagine being able to conjure the pieces of the signature $A', g'$ by some method that doesn't involve directly finding Alice's secret key. Indeed, this problem is less well-studied than the Diffie-Hellman problem, but most cryptographers believe it's just as hard. For more information, this paper surveys the known attacks against this signature algorithm, including a successful attack for fields of characteristic two.

## Signature Implementation

We can go ahead and implement the signature algorithm once we've picked a suitable elliptic curve. For the purpose of demonstration we'll use a small curve, $E: y^2 = x^3 + 3x + 181$ over $F = \mathbb{Z}/1061$, whose number of points happens to have a suitable prime factorization ($1047 = 3 \cdot 349$).
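Before wiring up the full curve code, the modular identity that makes verification work can be checked on its own with plain integers (the numbers below are hypothetical, chosen only so everything is invertible mod $r$):

```python
# Hypothetical small numbers: r prime, s the secret key, k the one-time secret,
# m the message, x the x-coordinate of A = kQ (its exact value doesn't matter here).
r, s, k, m, x = 349, 111, 57, 123, 220

g = (pow(k, -1, r) * (m + s * x)) % r   # the signature integer, g = k^{-1}(m + sx)
c = (pow(g, -1, r) * m) % r             # Bob's first coefficient
d = (pow(g, -1, r) * x) % r             # Bob's second coefficient

# A' = cQ + dP = (c + d*s)Q, so verification succeeds exactly when c + d*s = k (mod r).
assert (c + d * s) % r == k
```

The three-argument `pow` with exponent `-1` (Python 3.8+) computes modular inverses; the curve-based code below does the same arithmetic with `FiniteField` elements.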
If you're interested in counting the number of points on an elliptic curve, there are many theorems and efficient algorithms to do this, and if you've been reading this whole series then an algorithm based on the Baby-Step Giant-Step idea would be easy to implement. For the sake of brevity, we leave it as an exercise to the reader.

Note that the code we present is based on the elliptic curve and finite field code we've been implementing as part of this series. All of the code used in this post is available on this blog's Github page.

The base point we'll pick has to have order 349, and $E$ has plenty of candidates. We'll use $(2, 81)$, and we'll randomly generate a secret key that's less than $349$ (eight bits will do). So our setup looks like this:

if __name__ == "__main__":
    F = FiniteField(1061, 1)

    # y^2 = x^3 + 3x + 181
    curve = EllipticCurve(a=F(3), b=F(181))
    basePoint = Point(curve, F(2), F(81))
    basePointOrder = 349
    secretKey = generateSecretKey(8)
    publicKey = secretKey * basePoint

Then to sign a message we generate a random one-time key, construct the auxiliary point and the signature, and return:

def sign(message, basePoint, basePointOrder, secretKey):
    modR = FiniteField(basePointOrder, 1)
    oneTimeSecret = generateSecretKey(len(bin(basePointOrder)) - 3)  # numbits(order) - 1

    auxiliaryPoint = oneTimeSecret * basePoint
    signature = modR(oneTimeSecret).inverse() * (modR(message) + modR(secretKey) * modR(auxiliaryPoint[0]))

    return (message, auxiliaryPoint, signature)

So far so good. Note that we generate the message-specific $k$ at random, and this implies we need a high-quality source of randomness (what's called a cryptographically secure pseudorandom number generator). In the absence of that, there are proposed deterministic methods for choosing $k$. See this draft proposal of Thomas Pornin, and this paper of Daniel Bernstein for another.

Now to authenticate, we follow the procedure from earlier.
def authentic(signedMessage, basePoint, basePointOrder, publicKey):
    modR = FiniteField(basePointOrder, 1)
    (message, auxiliary, signature) = signedMessage
    sigInverse = modR(signature).inverse()  # sig can be an int or a modR already

    c, d = sigInverse * modR(message), sigInverse * modR(auxiliary[0])
    auxiliaryChecker = int(c) * basePoint + int(d) * publicKey

    return auxiliaryChecker == auxiliary

Continuing with our example, we pick a message represented as an integer smaller than $r$, sign it, and validate it.

>>> message = 123
>>> signedMessage = sign(message, basePoint, basePointOrder, secretKey)
>>> signedMessage
(123, (220 (mod 1061), 234 (mod 1061)), 88 (mod 349))
>>> authentic(signedMessage, basePoint, basePointOrder, publicKey)
True

So there we have it, a nice implementation of the digital signature algorithm.

## When Digital Signatures Fail

As we mentioned, it's extremely important to avoid using the same $k$ for two different messages. If you do, then you'll get two signed messages $(m_1, A_1, g_1), (m_2, A_2, g_2)$, but by definition the two $g$'s have a ton of information in common! An attacker can recognize this immediately because $A_1 = A_2$, and figure out the secret key $s$ as follows. First write

$\displaystyle g_1 - g_2 \equiv k^{-1}(m_1 + sx) - k^{-1}(m_2 + sx) \equiv k^{-1}(m_1 - m_2) \mod r.$

Now we have something of the form $\text{known}_1 \equiv (k^{-1}) \text{known}_2 \mod r$, and similarly to the attack described earlier we can try all possibilities until we find a number that satisfies $A = kQ$. Then once we have $k$ we have already seen how to find $s$. Indeed, it would be a good exercise for the reader to implement this attack.

The attack we just described is not an idle threat. Indeed, the Sony corporation, producers of the popular Playstation video game console, made this mistake in signing software for the Playstation 3.
A digital signature algorithm makes sense for validating software, because Sony wants to ensure that only Sony has the power to publish games. So Sony developers act as one party signing the data on a disc, and the console will only play a game with a valid signature. Note that the asymmetric setup is necessary: if the console had shared a secret with Sony (say, stored as plaintext within the hardware of the console), anyone with physical access to the machine could discover it.

Now here comes the cringe-worthy part: Sony made the mistake of using the same $k$ to sign every game! Their mistake was discovered in 2010 and made public at a cryptography conference. This video of the humorous talk includes a description of the variant Sony used and the attackers describe how the mistake should have been corrected. Without a firmware update (I believe Sony's public key information was stored locally so that one could authenticate games without an internet connection), anyone could sign a piece of software and create games that are indistinguishable from something produced by Sony. That includes malicious content that, say, installs software that sends credit card information to the attacker.

So here we have a tidy story: a widely used cryptosystem with a scare story of what will go wrong when you misuse it. In the future of this series, we'll look at other things you can do with elliptic curves, including factoring integers and testing for primality. We'll also see some normal forms of elliptic curves that are used in place of the Weierstrass normal form for various reasons.

Until next time!

# Elliptic Curve Diffie-Hellman

So far in this series we've seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we're in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography?
## History

As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves was a well-known technique since antiquity, discovered by Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked about how one might go about classifying the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields.

While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), their use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller, who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn't gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis's blog. For now we'll focus on the cryptographic protocols themselves.

## The Discrete Logarithm Problem

Koblitz and Miller had insights aplenty, but the central observation in all of this is the following: adding is easy on elliptic curves, but undoing addition seems hard. What I mean by this is usually called the discrete logarithm problem. Here's a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and that the notation $ny$ means $y + y + \dots + y$ ($n$ times).
Definition: Let $G$ be an additive group, and let $x, y$ be elements of $G$ so that $x = ny$ for some integer $n$. The discrete logarithm problem asks one to find $n$ when given $x$ and $y$.

I like to give super formal definitions first, so let's do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that $4185072 = (348756) \cdot 12$ using the elementary-school division algorithm (in the above notation, $y = 12$, $x = 4185072$, and $n = 348756$). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers $\mathbb{Z}$.

The reason we use the word "logarithm" is because if your group operation is multiplication instead of addition, you're tasked with solving the equation $x = y^n$ for $n$. With real numbers you'd take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers) using the square-and-multiply algorithm: just square $y$ until doing so would make you bigger than $x$, then multiply by $y$ until you hit $x$.

But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It's a computational charity! What happens when we move to settings where we don't have a division algorithm? In mathematical lingo: we're really interested in the case when $G$ is just a group, and doesn't have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don't know how to do division efficiently.
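To make the contrast concrete, here's a hedged sketch with toy numbers of my own choosing: over the integers one division solves the problem, while in a generic additive group the naive method must add $y$ repeatedly, taking time linear in $n$ (exponential in the bit length of $n$).

```python
def integer_dlog(x, y):
    # Over the integers, "undoing addition" is a single division: x = n*y.
    n, rem = divmod(x, y)
    return n if rem == 0 else None

def naive_group_dlog(x, y, add, zero):
    # In a bare additive group, all we can do is add y to itself until we
    # reach x -- no ordering, no division, just a linear scan over n.
    acc, n = zero, 0
    while acc != x:
        acc = add(acc, y)
        n += 1
    return n

assert integer_dlog(4185072, 12) == 348756  # the example from the text
# Toy stand-in for a structureless group: integers mod 101 under addition.
add_mod = lambda a, b: (a + b) % 101
assert naive_group_dlog(83, 5, add_mod, 0) == 57  # since 57 * 5 = 83 (mod 101)
```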
The best we can do is add $y$ to itself over and over until we hit $x$, and it could easily happen that $n$ (as a number) is exponentially larger than the number of bits in $x$ and $y$. What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then "taking multiples" fills the role of a theoretical one-way function, and as we'll see this opens the door for secure communication.

Here's the formal statement of the discrete logarithm problem for elliptic curves.

Problem: Let $E$ be an elliptic curve over a finite field $k$. Let $P, Q$ be points on $E$ such that $P = nQ$ for some integer $n$. Let $|P|$ denote the number of bits needed to describe the point $P$. We wish to find an algorithm which determines $n$ and has runtime polynomial in $|P| + |Q|$. If we want to allow randomness, we can require the algorithm to find the correct $n$ with probability at least 2/3.

So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can't, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol?

## The Diffie-Hellman Protocol and Problem

Let's spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange. A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption!
We'll cover this idea in a future series on the theory of cryptography (it's called a one-time pad, and it's not all that complicated). All we need now is motivation to get a shared secret.

Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let's just jump right in and implement it with elliptic curves.

As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we've chosen a curve $C$ and a point on that curve $Q$. Then we can take some secret integer $n$, and publish $Q$ and $nQ$ for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover $n$.

How can we use this to establish a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which is interpreted as a number in this protocol. Let's call Alice's secret key $s_A$ and Bob's $s_B$; note that they don't have to be the same. As the name "secret key" suggests, the secret keys are held secret. Moreover, we'll assume that everything else in this protocol, including all data sent between the two parties, is public.

So Alice and Bob agree ahead of time on a public elliptic curve $C$ and a public point $Q$ on $C$. We'll sometimes call this point the base point for the protocol. Bob can cunningly do the following trick: take his secret key $s_B$ and send $s_B Q$ to Alice. Equally slick, Alice computes $s_A Q$ and sends that to Bob. Now Alice, having $s_B Q$, computes $s_A s_B Q$.
And Bob, since he has $s_A Q$, can compute $s_B s_A Q$. But since addition is commutative in elliptic curve groups, we know $s_A s_B Q = s_B s_A Q$. The secret piece of shared information can be anything derived from this new point, for example its $x$-coordinate.

If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points $Q, s_A Q, s_B Q$. What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff they encrypt with their new shared secret $s_A s_B Q$. So the attacker wants to find out $s_A s_B Q$, and we’ll call this the Diffie-Hellman problem.

**Diffie-Hellman Problem:** Suppose you fix an elliptic curve $E$ over a finite field $k$, and you’re given four points $Q, aQ, bQ$ and $P$ for some unknown integers $a, b$. Determine if $P = abQ$ in polynomial time (in the lengths of $Q, aQ, bQ, P$).

On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem: compute $a, b$ and then quickly compute $abQ$ and check if it’s $P$. In other words, discrete log is at least as hard as this problem. On the other hand, nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we’re making this problem as easy as we reasonably can, because we don’t require you to be able to compute $abQ$. Even if some prankster gave you a candidate for $abQ$, all you have to do is check if it’s correct. One could imagine some test that rules out all fakes but still doesn’t allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log.

So this is our hardness assumption: assuming this problem has no efficient solution, no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret.
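The one-time pad mentioned earlier is simple enough to sketch now (my sketch, not code from the post): once Alice and Bob share enough secret random bytes, encryption and decryption are both just XOR. Our key exchange only yields a single shared point, so here `pad` is generated locally as a stand-in for bytes derived from that shared secret.

```python
import os

def xor_bytes(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet me at noon"
pad = os.urandom(len(message))  # the shared secret, used exactly once

ciphertext = xor_bytes(message, pad)
assert xor_bytes(ciphertext, pad) == message  # XOR with the pad again decrypts
```

The "used exactly once" caveat is essential: reusing a pad for two messages lets an eavesdropper XOR the two ciphertexts and cancel the pad entirely.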
## Python Implementation

The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page.

```python
def sendDH(privateKey, generator, sendFunction):
   return sendFunction(privateKey * generator)

def receiveDH(privateKey, receiveFunction):
   return privateKey * receiveFunction()
```

And using our code from the previous posts in this series we can run it on a small test.

```python
import os

def generateSecretKey(numBits):
   return int.from_bytes(os.urandom(numBits // 8), byteorder='big')

if __name__ == "__main__":
   F = FiniteField(3851, 1)
   curve = EllipticCurve(a=F(324), b=F(1287))
   basePoint = Point(curve, F(920), F(303))

   aliceSecretKey = generateSecretKey(8)
   bobSecretKey = generateSecretKey(8)

   alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x: x)
   bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x: x)

   sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey)
   sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey)
   print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2))
```

Python’s os module allows us to access the operating system’s random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer.

Our simplistic (and totally insecure!) protocol uses the elliptic curve $C$ defined by $y^2 = x^3 + 324x + 1287$ over the finite field $\mathbb{Z}/3851$. We pick the base point $Q = (920, 303)$, and call the relevant functions with placeholders for actual network transmission functions.

There is one issue we have to note. Say we fix our base point $Q$.
Since an elliptic curve over a finite field can only have finitely many points (the field only has finitely many possible pairs of coordinates), it will eventually happen that $nQ = 0$ is the ideal point. Recall that the smallest value of $n$ for which $nQ = 0$ is called the order of $Q$. And so when we’re generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick $Q$ to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key, it will be equivalent to some secret key that’s less than the order of $Q$. So if an attacker could guess the smaller secret key, he wouldn’t need to know your larger key.

The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards*) is designed to avoid these problems. The base point used in the Diffie-Hellman protocol for Curve25519 has gargantuan order (like $2^{256}$), so 256-bit keys can easily be used. I’m brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way.

So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring.

## Comments on Insecurity

Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime?
Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod $p$, leading some to claim that RSA is dead. But even if we never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes.

I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/base point combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side-channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side effects of the physical implementation of the algorithm.

The problem is that the two operations, doubling a point and adding two different points, have very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them to be secure.
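One well-known shape such a secure implementation can take (my sketch, not from the post, over a generic group given by an `add` function and identity `zero`) is a Montgomery-ladder-style scalar multiplication: every iteration performs exactly one addition and one doubling, regardless of the key bit, so the *sequence* of operations leaks nothing about the key.

```python
def ladder_multiply(Q, n, add, zero, bits):
    # Montgomery-ladder-style n*Q. Invariant: R1 == R0 + Q.
    # Each loop iteration does one add and one double no matter
    # what the key bit is; only which register is updated varies.
    # (A real hardened implementation must also make the field
    # arithmetic and memory access pattern constant-time.)
    R0, R1 = zero, Q
    for i in reversed(range(bits)):
        if (n >> i) & 1 == 0:
            R1 = add(R0, R1)  # add
            R0 = add(R0, R0)  # double
        else:
            R0 = add(R0, R1)  # add
            R1 = add(R1, R1)  # double
    return R0
```

On the toy additive group of integers mod 1000, `ladder_multiply(7, 13, lambda a, b: (a + b) % 1000, 0, 8)` gives 91, matching plain double-and-add.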
Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available.

But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews:

1. Make sure the order of your base point has a short factorization (e.g., is $2p, 3p,$ or $4p$ for some prime $p$). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman.
2. Make sure your curve is not supersingular. If it is, you can reduce the discrete logarithm problem to one in a different and much simpler group.
3. If your curve $C$ is defined over $\mathbb{Z}/p$, make sure the number of points on $C$ is not equal to $p$. Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers.
4. Don’t pick a small underlying field like $\mathbb{F}_{2^m}$ for small $m$. General-purpose attacks can be sped up significantly against such fields.
5. If you use the field $\mathbb{F}_{2^m}$, ensure that $m$ is prime. Many believe that if $m$ has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that $m$ being composite at all is dangerous, so we might as well make it prime.

This is a sublist of the list provided on page 28 of this white paper.

The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this.
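As a quick sanity check of pitfall 1 against our toy parameters, we can factor the order of the base point from the example, which was 1964. A naive trial-division factorization (a sketch that's perfectly fine at this size; real curve orders demand serious factoring machinery) shows the order has the allowed form $4p$ for the prime $p = 491$:

```python
def factorize(n):
    # Trial division: returns {prime: exponent} for n >= 2.
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# Order of the base point Q = (920, 303) from the example above.
print(factorize(1964))  # {2: 2, 491: 1}, i.e., 1964 = 4 * 491
```

So the toy curve at least dodges Pohlig-Hellman, even though it is far too small to be secure for any other reason.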

Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols.

Until then!