Zero Knowledge Proofs for NP

Last time, we saw a specific zero-knowledge proof for graph isomorphism. This introduced us to the concept of an interactive proof, where you have a prover and a verifier sending messages back and forth, and the prover is trying to prove a specific claim to the verifier.

A zero-knowledge proof is a special kind of interactive proof in which the prover has some secret piece of knowledge that makes it very easy to verify a disputed claim is true. The prover’s goal, then, is to convince the verifier (a polynomial-time algorithm) that the claim is true without revealing any knowledge at all about the secret.

In this post we’ll see that, using a bit of cryptography, zero-knowledge proofs capture a much wider class of problems than graph isomorphism. Basically, if you believe that cryptography exists, then every problem whose answers can be easily verified has a zero-knowledge proof (i.e., all of the class NP). Here are a bunch of examples. For each I’ll phrase the problem as a question, and then say what sort of data the prover’s secret could be.

  • Given a boolean formula, is there an assignment of variables making it true? Secret: a satisfying assignment to the variables.
  • Given a set of integers, is there a subset whose sum is zero? Secret: such a subset.
  • Given a graph, does it have a 3-coloring? Secret: a valid 3-coloring.
  • Given a boolean circuit, can it produce a specific output? Secret: a choice of inputs that produces the output.

The common link among all of these problems is that they are NP-hard (graph isomorphism isn’t known to be NP-hard). For us this means two things: (1) we think these problems are actually hard, so the verifier can’t solve them, and (2) if you show that one of them has a zero-knowledge proof, then they all have zero-knowledge proofs.

We’re going to describe and implement a zero-knowledge proof for graph 3-colorability, and in the follow-up post we’ll dive into the theoretical definitions, the nitty gritty details of the proof that the scheme we present is zero-knowledge, and the different kinds of zero-knowledge one can study. As usual, all of the code used in making this post is available in a repository on this blog’s Github page.

One-way permutations

In a recent program gallery post we introduced the Blum-Blum-Shub pseudorandom generator. A pseudorandom generator is simply an algorithm that takes as input a short random string of length s and produces as output a longer string, say, of length 3s. This output string should not be random, but rather “indistinguishable” from random in a sense we’ll make clear next time. The underlying function for this generator is the “modular squaring” function x \mapsto x^2 \mod M, for some cleverly chosen M. The M is chosen in such a way that makes this mapping a permutation. So this function is more than just a pseudorandom generator, it’s a one-way permutation.

If you have a primality-checking algorithm on hand (we do), then preparing the Blum-Blum-Shub algorithm is only about 15 lines of code.

import random

def goodPrime(p):
    # probablyPrime is the probabilistic primality test from this blog's repository
    return p % 4 == 3 and probablyPrime(p, accuracy=100)

def findGoodPrime(numBits=512):
    candidate = 1

    while not goodPrime(candidate):
        candidate = random.getrandbits(numBits)

    return candidate

def makeModulus(numBits=512):
    return findGoodPrime(numBits) * findGoodPrime(numBits)

def blum_blum_shub(modulusLength=512):
    modulus = makeModulus(numBits=modulusLength)

    def f(inputInt):
        return pow(inputInt, 2, modulus)

    return f

The interested reader should check out the proof gallery post for more details about this generator. For us, having a one-way permutation is the important part (and we’re going to defer the formal definition of “one-way” until next time, just think “hard to get inputs from outputs”).

The other concept we need, which is related to a one-way permutation, is the notion of a hardcore predicate. Let G(x) be a one-way permutation, and let f be a function that takes a string x and produces a single bit b = f(x). We say that f is a hardcore predicate for G if you can’t reliably compute f(x) when given only G(x).

Hardcore predicates are important because there are many one-way functions for which, when given the output, you can guess part of the input very reliably, but not the rest (e.g., if g is a one-way function, (x, y) \mapsto (x, g(y)) is also one-way, but the x part is trivially guessable). So a hardcore predicate formally measures, when given the output of a one-way function, what information derived from the input is hard to compute.

In the case of Blum-Blum-Shub, one hardcore predicate is simply the parity of the input bits.

def parity(n):
    return sum(int(x) for x in bin(n)[2:]) % 2

Bit Commitment Schemes

A core idea that makes zero-knowledge proofs work for NP is the ability for the prover to publicly “commit” to a choice, and later reveal that choice in a way that makes it infeasible to fake their commitment. This will involve not just the commitment to a single bit of information, but also the transmission of auxiliary data that is provably infeasible to fake.

Our pair of one-way permutation G and hardcore predicate f comes in very handy. Let’s say I want to commit to a bit b \in \{ 0,1 \}. Let’s fix a security parameter that will measure how hard it is to change my commitment post-hoc, say n = 512. My process for committing is to draw a random string x of length n, and send you the pair (G(x), f(x) \oplus b), where \oplus is the XOR operator on two bits.

The guarantee of a one-way permutation with a hardcore predicate is that if you only see G(x), you can’t guess f(x) with any reasonable edge over random guessing. Moreover, if you fix a bit b, and take an unpredictably random bit y, the XOR b \oplus y is also unpredictably random. In other words, if f(x) is hardcore, then so is x \mapsto f(x) \oplus b for a fixed bit b. Finally, to reveal my commitment, I just send the string x and let you independently compute (G(x), f(x) \oplus b). Since G is a permutation, that x is the only x that could have produced the commitment I sent you earlier.

Here’s a Python implementation of this scheme. We start with a generic base class for a commitment scheme.

import random

class CommitmentScheme(object):
    def __init__(self, oneWayPermutation, hardcorePredicate, securityParameter):
        '''
            oneWayPermutation: int -> int
            hardcorePredicate: int -> {0, 1}
        '''
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.securityParameter = securityParameter

        # a random string of length `self.securityParameter` used only once per commitment
        self.secret = self.generateSecret()

    def generateSecret(self):
        raise NotImplementedError

    def commit(self, x):
        raise NotImplementedError

    def reveal(self):
        return self.secret

Note that the “reveal” step is always simply to reveal the secret. We should also note that the security string should be chosen anew, at random, for every bit you wish to commit to; in this post we won’t reuse CommitmentScheme objects anyway. Here’s the implementation subclass.

class BBSBitCommitmentScheme(CommitmentScheme):
    def generateSecret(self):
        # the secret is a random quadratic residue
        self.secret = self.oneWayPermutation(random.getrandbits(self.securityParameter))
        return self.secret

    def commit(self, bit):
        unguessableBit = self.hardcorePredicate(self.secret)
        return (
            self.oneWayPermutation(self.secret),
            unguessableBit ^ bit,  # python xor
        )

One important detail is that the Blum-Blum-Shub one-way permutation is only a permutation when restricted to quadratic residues. As such, we generate our secret by shooting a random string through the one-way permutation to get a random residue. In fact this produces a uniform random residue, since the Blum-Blum-Shub modulus is chosen in such a way that ensures every residue has exactly four square roots.
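As a quick sanity check of that claim (toy code with a tiny made-up modulus, not from the post’s repository), you can verify directly that squaring permutes the quadratic residues of a small Blum modulus:

# Toy example: M = 7 * 11 is a product of two primes that are 3 mod 4.
# Squaring should map the set of quadratic residues mod M onto itself, bijectively.
M = 7 * 11
units = [x for x in range(1, M) if x % 7 != 0 and x % 11 != 0]
residues = {pow(x, 2, M) for x in units}
assert {pow(r, 2, M) for r in residues} == residues
print(len(residues))  # 15, i.e. (7-1)/2 * (11-1)/2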

Here’s the verifier’s code, which checks that a revealed secret is consistent with a claimed commitment.

class BBSBitCommitmentVerifier(object):
    def __init__(self, oneWayPermutation, hardcorePredicate):
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate

    def verify(self, securityString, claimedCommitment):
        trueBit = self.decode(securityString, claimedCommitment)
        unguessableBit = self.hardcorePredicate(securityString)  # wasteful, whatever
        return claimedCommitment == (
            self.oneWayPermutation(securityString),
            unguessableBit ^ trueBit,  # python xor
        )

    def decode(self, securityString, claimedCommitment):
        unguessableBit = self.hardcorePredicate(securityString)
        return claimedCommitment[1] ^ unguessableBit

and an example of using it

if __name__ == "__main__":
    import blum_blum_shub
    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Bit commitment')
    scheme = BBSBitCommitmentScheme(oneWayPerm, hardcorePred, securityParameter)
    verifier = BBSBitCommitmentVerifier(oneWayPerm, hardcorePred)

    for _ in range(10):
        bit = random.choice([0, 1])
        commitment = scheme.commit(bit)
        secret = scheme.reveal()
        trueBit = verifier.decode(secret, commitment)
        valid = verifier.verify(secret, commitment)

        print('{} == {}? {}; {} {}'.format(bit, trueBit, valid, secret, commitment))

Example output:

1 == 1? True; 524 (5685, 0)
1 == 1? True; 149 (22201, 1)
1 == 1? True; 476 (34511, 1)
1 == 1? True; 927 (14243, 1)
1 == 1? True; 608 (23947, 0)
0 == 0? True; 964 (7384, 1)
0 == 0? True; 373 (23890, 0)
0 == 0? True; 620 (270, 1)
1 == 1? True; 926 (12390, 0)
0 == 0? True; 708 (1895, 0)

As an exercise, write a program to verify that no other input to the Blum-Blum-Shub one-way permutation gives a valid verification. Test it on a small security parameter like n=10.

It’s also important to point out that the verifier needs to do some additional validation that we left out. For example, how does the verifier know that the revealed secret actually is a quadratic residue? In fact, detecting quadratic residues is believed to be hard! To get around this, we could change the commitment scheme reveal step to reveal the random string that was used as input to the permutation to get the residue (cf. BBSBitCommitmentScheme.generateSecret for the random string that needs to be saved/revealed). Then the verifier could generate the residue in the same way. As an exercise, upgrade the bit commitment and verifier classes to reflect this.

In order to get a zero-knowledge proof for 3-coloring, we need to be able to commit to one of three colors, which requires two bits. So let’s go overkill and write a generic integer commitment scheme. It’s simple enough: specify a bound on the size of the integers, and then do an independent bit commitment for every bit.

class BBSIntCommitmentScheme(CommitmentScheme):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate, securityParameter=512):
        '''
            A commitment scheme for integers of a prespecified length `numBits`. Applies the
            Blum-Blum-Shub bit commitment scheme to each bit independently.
        '''
        self.schemes = [BBSBitCommitmentScheme(oneWayPermutation, hardcorePredicate, securityParameter)
                        for _ in range(numBits)]
        super().__init__(oneWayPermutation, hardcorePredicate, securityParameter)

    def generateSecret(self):
        self.secret = [x.secret for x in self.schemes]
        return self.secret

    def commit(self, integer):
        # first pad bits to desired length
        integer = bin(integer)[2:].zfill(len(self.schemes))
        bits = [int(bit) for bit in integer]
        return [scheme.commit(bit) for scheme, bit in zip(self.schemes, bits)]

And the corresponding verifier

class BBSIntCommitmentVerifier(object):
    def __init__(self, numBits, oneWayPermutation, hardcorePredicate):
        self.verifiers = [BBSBitCommitmentVerifier(oneWayPermutation, hardcorePredicate)
                          for _ in range(numBits)]

    def decodeBits(self, secrets, bitCommitments):
        return [v.decode(secret, commitment) for (v, secret, commitment) in
                zip(self.verifiers, secrets, bitCommitments)]

    def verify(self, secrets, bitCommitments):
        return all(
            bitVerifier.verify(secret, commitment)
            for (bitVerifier, secret, commitment) in
            zip(self.verifiers, secrets, bitCommitments)
        )

    def decode(self, secrets, bitCommitments):
        decodedBits = self.decodeBits(secrets, bitCommitments)
        return int(''.join(str(bit) for bit in decodedBits), 2)

A sample usage:

if __name__ == "__main__":
    import blum_blum_shub
    securityParameter = 10
    oneWayPerm = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePred = blum_blum_shub.parity

    print('Int commitment')
    scheme = BBSIntCommitmentScheme(10, oneWayPerm, hardcorePred)
    verifier = BBSIntCommitmentVerifier(10, oneWayPerm, hardcorePred)
    choices = list(range(1024))
    for _ in range(10):
        theInt = random.choice(choices)
        commitments = scheme.commit(theInt)
        secrets = scheme.reveal()
        trueInt = verifier.decode(secrets, commitments)
        valid = verifier.verify(secrets, commitments)

        print('{} == {}? {}; {} {}'.format(theInt, trueInt, valid, secrets, commitments))

And a sample output:

527 == 527? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 0)]
67 == 67? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 1), (63975, 1), (5426, 0), (9124, 1), (23973, 1), (44832, 1), (33044, 0), (68501, 0)]
729 == 729? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 1), (5426, 0), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
441 == 441? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 0), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 0)]
614 == 614? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 1), (23973, 1), (44832, 0), (33044, 0), (68501, 1)]
696 == 696? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
974 == 974? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 0), (54363, 0), (63975, 1), (5426, 0), (9124, 1), (23973, 0), (44832, 0), (33044, 0), (68501, 1)]
184 == 184? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
136 == 136? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 1), (342, 1), (54363, 0), (63975, 0), (5426, 0), (9124, 1), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]
632 == 632? True; [25461, 56722, 25739, 2268, 1185, 18226, 46375, 8907, 54979, 23095] [(29616, 0), (342, 1), (54363, 1), (63975, 1), (5426, 1), (9124, 0), (23973, 0), (44832, 1), (33044, 1), (68501, 1)]

Before we move on, we should note that this integer commitment scheme “blows up” the secret by quite a bit. If you have a security parameter s and an integer with n bits, then the commitment uses roughly sn bits. A more efficient method would be to simply use a good public-key encryption scheme, and then reveal the secret key used to encrypt the message. While we implemented such schemes previously on this blog, I thought it would be more fun to do something new.

A zero-knowledge proof for 3-coloring

First, a high-level description of the protocol. The setup: the prover has a graph G with n vertices V and m edges E, and also has a secret 3-coloring of the vertices \varphi: V \to \{ 0, 1, 2 \}. Recall, a 3-coloring is just an assignment of colors to vertices (in this case the colors are 0,1,2) so that no two adjacent vertices have the same color.

So the prover has a coloring \varphi to be kept secret, but wants to prove that G is 3-colorable. The idea is for the verifier to pick a random edge (u,v), and have the prover reveal the colors of u and v. However, if we run this protocol only once, there’s nothing to stop the prover from just lying and answering with two distinct colors. And if we allow the verifier to run the protocol many times, and the prover actually reveals the colors from their secret coloring, then after enough rounds the verifier will learn the entire coloring. Each step reveals more knowledge.

We can fix this with two modifications.

  1. The prover first publicly commits to the coloring using a commitment scheme. Then when the verifier asks for the colors of the two vertices of a random edge, he can rest assured that the prover fixed a coloring that does not depend on the verifier’s choice of edge.
  2. The prover doesn’t reveal colors from their secret coloring, but rather from a random permutation of the secret coloring. This way, when the verifier sees colors, they’re equally likely to see any two colors, and all the verifier will know is that those two colors are different.

So the scheme is: prover commits to a random permutation of the true coloring and sends it to the verifier; the verifier asks for the true colors of a given edge; the prover provides those colors and the secrets to their commitment scheme so the verifier can check.

The key point is that now the prover has to commit to a coloring before seeing the verifier’s challenge, and if the committed coloring isn’t a proper 3-coloring, the verifier has a reasonable chance of picking an improperly colored edge (a one-in-|E| chance, which is at least 1/|V|^2). On the other hand, if the coloring is proper, then the verifier will always query a properly colored edge, and it’s zero-knowledge because the verifier is equally likely to see every pair of distinct colors. So the verifier will always accept, but won’t know anything more than that the edge it chose is properly colored. Repeating this many times (a large multiple of |E|, which is at most |V|^2-ish) makes the chance that a cheating prover survives every round vanishingly small.
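To get a feel for the numbers (a back-of-the-envelope sketch with a made-up edge count, not part of the protocol), here is how quickly repetition drives down a cheating prover’s chance of surviving every round:

# If the committed coloring is improper, the prover survives one round with
# probability at most 1 - 1/|E|, and independent repetitions multiply.
E = 1000  # assumed number of edges, just for illustration
for rounds in (E, 10 * E, 100 * E):
    print(rounds, (1 - 1 / E) ** rounds)
# roughly 0.37, 4.5e-05, and 3.5e-44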

Let’s implement this scheme. First the data types. As in the previous post, graphs are represented by edge lists, and a coloring is represented by a dictionary mapping a vertex to 0, 1, or 2 (the “colors”).

# a graph is a list of edges, and for simplicity we'll say
# every vertex shows up in some edge
exampleGraph = [
    (1, 2),
    (1, 4),
    (1, 3),
    (2, 5),
    (2, 5),
    (3, 6),
    (5, 6)
]

exampleColoring = {
    1: 0,
    2: 1,
    3: 2,
    4: 1,
    5: 2,
    6: 0,
}

Next, the Prover class that implements that half of the protocol. We store a list of integer commitment schemes for each vertex whose color we need to commit to, and send out those commitments.

class Prover(object):
    def __init__(self, graph, coloring, oneWayPermutation=ONE_WAY_PERMUTATION, hardcorePredicate=HARDCORE_PREDICATE):
        self.graph = [tuple(sorted(e)) for e in graph]
        self.coloring = coloring
        self.vertices = list(range(1, numVertices(graph) + 1))
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.vertexToScheme = None

    def commitToColoring(self):
        self.vertexToScheme = {
            v: commitment.BBSIntCommitmentScheme(
                2, self.oneWayPermutation, self.hardcorePredicate
            ) for v in self.vertices
        }

        permutation = randomPermutation(3)
        permutedColoring = {
            v: permutation[self.coloring[v]] for v in self.vertices
        }

        return {v: s.commit(permutedColoring[v])
                for (v, s) in self.vertexToScheme.items()}

    def revealColors(self, u, v):
        u, v = min(u, v), max(u, v)
        if not (u, v) in self.graph:
            raise Exception('Must query an edge!')

        return (
            self.vertexToScheme[u].reveal(),
            self.vertexToScheme[v].reveal(),
        )

In commitToColoring we randomly permute the underlying colors, and then compose that permutation with the secret coloring, committing to each resulting color independently. In revealColors we reveal only those colors for a queried edge. Note that we don’t actually need to store the permuted coloring, because it’s implicitly stored in the commitments.

It’s crucial that we reject any query that doesn’t correspond to an edge. If we don’t reject such queries then the verifier can break the protocol! In particular, by querying non-edges you can determine which pairs of nodes have the same color in the secret coloring. You can then chain these together to partition the nodes into color classes, and so color the graph. (After seeing the Verifier class below, implement this attack as an exercise).

Here’s the corresponding Verifier:

class Verifier(object):
    def __init__(self, graph, oneWayPermutation, hardcorePredicate):
        self.graph = [tuple(sorted(e)) for e in graph]
        self.oneWayPermutation = oneWayPermutation
        self.hardcorePredicate = hardcorePredicate
        self.committedColoring = None
        self.verifier = commitment.BBSIntCommitmentVerifier(2, oneWayPermutation, hardcorePredicate)

    def chooseEdge(self, committedColoring):
        self.committedColoring = committedColoring
        self.chosenEdge = random.choice(self.graph)
        return self.chosenEdge

    def accepts(self, revealed):
        revealedColors = []

        for (w, bitSecrets) in zip(self.chosenEdge, revealed):
            trueColor = self.verifier.decode(bitSecrets, self.committedColoring[w])
            revealedColors.append(trueColor)
            if not self.verifier.verify(bitSecrets, self.committedColoring[w]):
                return False

        return revealedColors[0] != revealedColors[1]

As expected, in the acceptance step the verifier decodes the true color of the edge it queried, and accepts if and only if the commitment was valid and the edge is properly colored.

Here’s the whole protocol, which is syntactically very similar to the one for graph isomorphism.

def runProtocol(G, coloring, securityParameter=512):
    oneWayPermutation = blum_blum_shub.blum_blum_shub(securityParameter)
    hardcorePredicate = blum_blum_shub.parity

    prover = Prover(G, coloring, oneWayPermutation, hardcorePredicate)
    verifier = Verifier(G, oneWayPermutation, hardcorePredicate)

    committedColoring = prover.commitToColoring()
    chosenEdge = verifier.chooseEdge(committedColoring)

    revealed = prover.revealColors(*chosenEdge)
    revealedColors = (
        verifier.verifier.decode(revealed[0], committedColoring[chosenEdge[0]]),
        verifier.verifier.decode(revealed[1], committedColoring[chosenEdge[1]]),
    )
    isValid = verifier.accepts(revealed)

    print("{} != {} and commitment is valid? {}".format(
        revealedColors[0], revealedColors[1], isValid
    ))

    return isValid

And an example of running it

if __name__ == "__main__":
    for _ in range(30):
        runProtocol(exampleGraph, exampleColoring, securityParameter=10)

Here’s the output

0 != 2 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
1 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 1 and commitment is valid? True
0 != 1 and commitment is valid? True
2 != 1 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 0 and commitment is valid? True
2 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 1 and commitment is valid? True
0 != 2 and commitment is valid? True
0 != 2 and commitment is valid? True
2 != 1 and commitment is valid? True
1 != 0 and commitment is valid? True
1 != 0 and commitment is valid? True
2 != 1 and commitment is valid? True
2 != 1 and commitment is valid? True
1 != 0 and commitment is valid? True
0 != 2 and commitment is valid? True
1 != 2 and commitment is valid? True
1 != 2 and commitment is valid? True
0 != 1 and commitment is valid? True

So while we haven’t proved it rigorously, we’ve seen the zero-knowledge proof for graph 3-coloring. This automatically gives us a zero-knowledge proof for all of NP, because given any NP problem you can just convert it to the equivalent 3-coloring problem and run this protocol on that. Of course, the blowup required to convert an arbitrary NP problem to 3-coloring can be polynomially large, which makes it unsuitable for practice. But the point is that this gives us a theoretical justification for which problems have zero-knowledge proofs in principle. Now that we’ve established that, you can go about trying to find the most efficient protocol for your favorite problem.

Anticipatory notes

When we covered graph isomorphism last time, we said that a simulator could, without participating in the zero-knowledge protocol or knowing the secret isomorphism, produce a transcript that was drawn from the same distribution of messages as the protocol produced. That was all that it needed to be “zero-knowledge,” because anything the verifier could do with its protocol transcript, the simulator could do too.

We can do exactly the same thing for 3-coloring, exploiting the same “reverse order” trick where the simulator picks the random edge first, then chooses the color commitment post-hoc.

Unfortunately, both there and here I’m short-changing you, dear reader. The elephant in the room is that our naive simulator assumes the verifier is playing by the rules! If you want to define security, you have to define it against a verifier who breaks the protocol in an arbitrary way. For example, the simulator should be able to produce an equivalent transcript even if the verifier deterministically picks an edge, or tries to pick a non-edge, or tries to send gibberish. It takes a lot more work to prove security against an arbitrary verifier, but the basic setup is that the simulator can no longer make choices for the verifier, but rather has to invoke the verifier subroutine as a black box. (To compensate, the requirements on the simulator are relaxed quite a bit; more on that next time)

Because an implementation of such a scheme would involve a lot of validation, we’re going to defer the discussion to next time. We also need to be more specific about the different kinds of zero-knowledge, since we won’t be able to achieve perfect zero-knowledge with the simulator drawing from an identical distribution, but rather a computationally indistinguishable distribution.

We’ll define all this rigorously next time, and discuss the known theoretical implications and limitations. Next time will be cuffs-off theory, baby!

Until then!

Zero Knowledge Proofs — A Primer

In this post we’ll get a strong taste for zero knowledge proofs by exploring the graph isomorphism problem in detail. In the next post, we’ll see how this relates to cryptography and the bigger picture. The goal of this post is to get a strong understanding of the terms “prover,” “verifier,” “simulator,” and “zero knowledge” in the context of a specific zero-knowledge proof. Then next time we’ll see how the same concepts (though not the same proof) generalize to a cryptographically interesting setting.

Graph isomorphism

Let’s start with an extended example. We are given two graphs G_1, G_2, and we’d like to know whether they’re isomorphic, meaning they’re the same graph, but “drawn” different ways.

The problem of telling if two graphs are isomorphic seems hard. The pictures above, which are all different drawings of the same graph (or are they?), should give you pause if you thought it was easy.

To add a tiny bit of formalism, a graph G is a list of edges, and each edge (u,v) is a pair of integers between 1 and the total number of vertices of the graph, say n. Using this representation, an isomorphism between G_1 and G_2 is a permutation \pi of the numbers \{1, 2, \dots, n \} with the property that (i,j) is an edge in G_1 if and only if (\pi(i), \pi(j)) is an edge of G_2. You swap around the labels on the vertices, and that’s how you get from one graph to another isomorphic one.
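A tiny concrete example (toy code, not the post’s implementation): relabeling the path on vertices 1, 2, 3 by the permutation sending 1 to 2, 2 to 3, and 3 to 1 gives back the same path with different labels.

G1 = [(1, 2), (2, 3)]          # a path on three vertices
pi = {1: 2, 2: 3, 3: 1}        # the permutation, written as a dictionary
G2 = [(pi[u], pi[v]) for (u, v) in G1]
print(G2)                      # [(2, 3), (3, 1)], the same path relabeled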

Given two arbitrary graphs as input on a large number of vertices n, nobody knows of an efficient—i.e., polynomial time in n—algorithm that can always decide whether the input graphs are isomorphic. Even if you promise me that the inputs are isomorphic, nobody knows of an algorithm that could construct an isomorphism. (If you think about it, such an algorithm could be used to solve the decision problem: run it on the two graphs and check whether whatever it outputs really is an isomorphism.)

A game

Now let’s play a game. In this game, we’re given two enormous graphs on a billion nodes. I claim they’re isomorphic, and I want to prove it to you. However, my life’s fortune is locked behind these particular graphs (somehow), and if you actually had an isomorphism between these two graphs you could use it to steal all my money. But I still want to convince you that I do, in fact, own all of this money, because we’re about to start a business and you need to know I’m not broke.

Is there a way for me to convince you beyond a reasonable doubt that these two graphs are indeed isomorphic? And moreover, could I do so without you gaining access to my secret isomorphism? It would be even better if I could guarantee you learn nothing about my isomorphism or any isomorphism, because even the slightest chance that you can steal my money is out of the question.

Zero knowledge proofs have exactly those properties, and here’s a zero knowledge proof for graph isomorphism. For the record, G_1 and G_2 are public knowledge, (common inputs to our protocol for the sake of tracking runtime), and the protocol itself is common knowledge. However, I have an isomorphism f: G_1 \to G_2 that you don’t know.

Step 1: I will start by picking one of my two graphs, say G_1, mixing up the vertices, and sending you the resulting graph. In other words, I send you a graph H which is chosen uniformly at random from all isomorphic copies of G_1. I will save the permutation \pi that I used to generate H for later use.

Step 2: You receive a graph H which you save for later, and then you randomly pick an integer t which is either 1 or 2, with equal probability on each. The number t corresponds to your challenge for me to prove H is isomorphic to G_1 or G_2. You send me back t, with the expectation that I will provide you with an isomorphism between H and G_t.

Step 3: Indeed, I faithfully provide you such an isomorphism. If you send me t=1, I’ll give you back \pi^{-1} : H \to G_1, and otherwise I’ll give you back f \circ \pi^{-1}: H \to G_2. Because composing a fixed permutation with a uniformly random permutation is again a uniformly random permutation, in either case I’m sending you a uniformly random permutation.

Step 4: You receive a permutation g, and you can use it to verify that H is isomorphic to G_t. If the permutation I sent you doesn’t work, you’ll reject my claim, and if it does, you’ll accept my claim.

Before we analyze, here’s some Python code that implements the above scheme. You can find the full, working example in a repository on this blog’s Github page.

First, a few helper functions for generating random permutations (and turning their list-of-zero-based-indices form into a function-of-positive-integers form)

import random

def randomPermutation(n):
    L = list(range(n))
    random.shuffle(L)
    return L

def makePermutationFunction(L):
    return lambda i: L[i - 1] + 1

def makeInversePermutationFunction(L):
    return lambda i: 1 + L.index(i - 1)

def applyIsomorphism(G, f):
    return [(f(i), f(j)) for (i, j) in G]
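For instance (a toy check, not part of the protocol), the zero-based list [2, 0, 1] corresponds to the permutation sending 1 to 3, 2 to 1, and 3 to 2, and the inverse helper undoes it:

L = [2, 0, 1]
f = makePermutationFunction(L)
g = makeInversePermutationFunction(L)
print([f(i) for i in [1, 2, 3]])        # [3, 1, 2]
print([g(f(i)) for i in [1, 2, 3]])     # [1, 2, 3]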

Here’s a class for the Prover, the one who knows the isomorphism and wants to prove it while keeping the isomorphism secret:

class Prover(object):
    def __init__(self, G1, G2, isomorphism):
        '''
            isomorphism is a list of integers representing
            an isomorphism from G1 to G2.
        '''
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

        self.isomorphism = isomorphism
        self.state = None

    def sendIsomorphicCopy(self):
        isomorphism = randomPermutation(self.n)
        pi = makePermutationFunction(isomorphism)

        H = applyIsomorphism(self.G1, pi)

        self.state = isomorphism
        return H

    def proveIsomorphicTo(self, graphChoice):
        randomIsomorphism = self.state
        piInverse = makeInversePermutationFunction(randomIsomorphism)

        if graphChoice == 1:
            return piInverse
        else:
            f = makePermutationFunction(self.isomorphism)
            return lambda i: f(piInverse(i))

The prover has two methods, one for each round of the protocol. The first creates an isomorphic copy of G_1, and the second receives the challenge and produces the requested isomorphism.

And here’s the corresponding class for the verifier

class Verifier(object):
    def __init__(self, G1, G2):
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

    def chooseGraph(self, H):
        choice = random.choice([1, 2])
        self.state = H, choice
        return choice

    def accepts(self, isomorphism):
        '''
            Return True if and only if the given isomorphism
            is a valid isomorphism between the randomly
            chosen graph in the first step, and the H presented
            by the Prover.
        '''
        H, choice = self.state
        graphToCheck = [self.G1, self.G2][choice - 1]
        f = isomorphism

        isValidIsomorphism = (graphToCheck == applyIsomorphism(H, f))
        return isValidIsomorphism

Then the protocol is as follows:

def runProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return v.accepts(witnessIsomorphism)

Analysis: Let’s suppose for a moment that everyone is honestly following the rules, and that G_1, G_2 are truly isomorphic. Then you’ll always accept my claim, because I can always provide you with an isomorphism. Now let’s suppose that, actually I’m lying, the two graphs aren’t isomorphic, and I’m trying to fool you into thinking they are. What’s the probability that you’ll rightfully reject my claim?

Well, regardless of what I do, I’m sending you a graph H and you get to make a random choice of t = 1, 2 that I can’t control. Since G_1 and G_2 aren’t isomorphic, H can be isomorphic to at most one of them, so at least half of the time your challenge will ask for an isomorphism I can’t produce, and you’ll reject. And unless you can actually tell which graph H is isomorphic to (an open problem, but let’s say you can’t), then a rejection probability of 1/2 is the best you can do.

Maybe the probability 1/2 is a bit unsatisfying, but remember that we can amplify this probability by repeating the protocol over and over again. So if you want to be sure I didn’t cheat and get lucky to within a probability of one-in-one-trillion, you only need to repeat the protocol 40 times. To be surer than the chance of picking a specific atom at random from all atoms in the universe, only about 400 times.

If you want to feel small, think of the number of atoms in the universe. If you want to feel big, think of its logarithm.

Here’s the code that repeats the protocol for assurance.

def convinceBeyondDoubt(G1, G2, isomorphism, errorTolerance=1e-20):
    probabilityFooled = 1

    while probabilityFooled > errorTolerance:
        result = runProtocol(G1, G2, isomorphism)
        assert result
        probabilityFooled *= 0.5
        print(probabilityFooled)

Running it, we see it succeeds

$ python graph-isomorphism.py
0.5
0.25
0.125
0.0625
0.03125
 ...
<SNIP>
 ...
1.3552527156068805e-20
6.776263578034403e-21

So it’s clear that this protocol is convincing.

But how can we be sure that there’s no leakage of knowledge in the protocol? What does “leakage” even mean? That’s where this topic is the most difficult to nail down rigorously, in part because there are at least three a priori different definitions! The idea we want to capture is that anything that you can efficiently compute after the protocol finishes (i.e., you have the content of the messages sent to you by the prover) you could have computed efficiently given only the two graphs G_1, G_2, and the claim that they are isomorphic.

Another way to say it is that you may go through the verification process and feel happy and confident that the two graphs are isomorphic. But because it’s a zero-knowledge proof, you can’t do anything with that information more than you could have done if you just took the assertion on blind faith. I’m confident there’s a joke about religion lurking here somewhere, but I’ll just trust it’s funny and move on.

In the next post we’ll expand on this “leakage” notion, but before we get there it should be clear that the graph isomorphism protocol will have the strongest possible “no-leakage” property we can come up with. Indeed, in the first round the prover sends a uniform random isomorphic copy of G_1 to the verifier, but the verifier can compute such an isomorphism already without the help of the prover. The verifier can’t necessarily find the isomorphism that the prover used in retrospect, because the verifier can’t solve graph isomorphism. Instead, the point is that the probability space of “G_1 paired with an H made by the prover” and the probability space of “G_1 paired with H as made by the verifier” are equal. No information was leaked by the prover.

For the second round, again the permutation \pi used by the prover to generate H is uniformly random. Since composing a fixed permutation with a uniform random permutation also results in a uniform random permutation, the second message sent by the prover is uniformly random, and so again the verifier could have constructed a similarly random permutation alone.
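Here’s a quick empirical illustration of that composition fact (a toy check, separate from the protocol code): composing a fixed permutation with a uniformly random one produces each possible result about equally often.

import random
from collections import Counter

fixed = [2, 0, 1]   # a fixed permutation of {0, 1, 2}, in list form
counts = Counter()
for _ in range(60000):
    sigma = list(range(3))
    random.shuffle(sigma)                       # uniformly random permutation
    counts[tuple(fixed[sigma[i]] for i in range(3))] += 1

print(counts)   # all 6 permutations show up roughly 10000 times each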

Let’s make this explicit with a small program. We have the honest protocol from before, but now I’m returning the set of messages sent by the prover, which the verifier can use for additional computation.

def messagesFromProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return [H, choice, witnessIsomorphism]

To say that the protocol is zero-knowledge (again, this is still colloquial) is to say that anything that the verifier could compute, given as input the return value of this function along with G_1, G_2 and the claim that they’re isomorphic, the verifier could also compute given only G_1, G_2 and the claim that G_1, G_2 are isomorphic.

It’s easy to prove this, and we’ll do so with a python function called simulateProtocol.

def simulateProtocol(G1, G2):
    # Construct data drawn from the same distribution as what is
    # returned by messagesFromProtocol
    choice = random.choice([1, 2])
    G = [G1, G2][choice - 1]
    n = numVertices(G)

    isomorphism = randomPermutation(n)
    pi = makePermutationFunction(isomorphism)
    H = applyIsomorphism(G, pi)

    return H, choice, pi

The claim is that the distribution of outputs to messagesFromProtocol and simulateProtocol are equal. But simulateProtocol will work regardless of whether G_1, G_2 are isomorphic. Of course, it’s not convincing to the verifier because the simulating function made the choices in the wrong order, choosing the graph index before making H. But the distribution that results is the same either way.

So if you were to use the actual Prover/Verifier protocol outputs as input to another algorithm (say, one which tries to compute an isomorphism of G_1 \to G_2), you might as well use the output of your simulator instead. You’d have no information beyond hard-coding the assumption that G_1, G_2 are isomorphic into your program. Which, as I mentioned earlier, is no help at all.

In this post we covered one detailed example of a zero-knowledge proof. Next time we’ll broaden our view and see the more general power of zero-knowledge (that it captures all of NP), and see some specific cryptographic applications. Keep in mind the preceding discussion, because we’re going to re-use the terms “prover,” “verifier,” and “simulator” to mean roughly the same things as the classes Prover, Verifier and the function simulateProtocol.

Until then!

The Mathematics of Secret Sharing

Here’s a simple puzzle with a neat story. A rich old woman is drafting her will and wants to distribute her expansive estate equally amongst her five children. But her children are very greedy, and the woman knows that if she leaves her will unprotected her children will resort to nefarious measures to try to get more than their fair share. In one fearful scenario, she worries that the older four children will team up to bully the youngest child entirely out of his claim! She desperately wants them to cooperate, so she decides to lock the will away, and the key is a secret integer N. The question is, how can she distribute this secret number to her children so that the only way they can open the safe is if they are all present and willing?

A mathematical way to say this is: how can she distribute some information to her children so that, given all of their separate pieces of information, they can reconstruct the key, but for every choice of fewer than 5 children, there is no way to reliably recover the key? This is called the secret sharing problem. More generally, say we have an integer N called the secret, a number of participants k, and a number required for reconstruction r. Then a secret sharing protocol is the data of a method for distributing information and a method for reconstructing the secret. The distributing method is an algorithm D that accepts as input N, k, r and produces as output a list of k numbers D(N, k, r) = (x_1, x_2, \dots, x_k). These are the numbers distributed to the k participants. Then the reconstruction method is a function R which accepts as input r numbers (y_1, \dots, y_r) and outputs a number M. We want two properties to hold:

  • The reconstruction function R outputs N when given any r of the numbers output by D.
  • One cannot reliably reconstruct N with fewer than r of the numbers output by D.

The question is: does an efficient secret sharing protocol exist for every possible choice of r \leq k? In fact it does, and the one we’ll describe in this post is far more secure than the word “reliable” suggests. It will be so hard as to be mathematically impossible to reconstruct the secret from fewer than the desired number of pieces. Independently discovered by Adi Shamir in 1979, the protocol we’ll see in this post is wonderfully simple, and as we describe it we’ll build up a program to implement it. This time we’ll work in the Haskell programming language, and you can download the program from this blog’s Github page. And finally, a shout out to my friend Karishma Chadha who worked together with me on this post. She knows Haskell a lot better than I do.

Polynomial Interpolation

The key to the secret sharing protocol is a beautiful fact about polynomials. Specifically, if you give me k+1 points in the plane with distinct x values, then there is a unique degree k polynomial that passes through the points. Just as importantly (and as a byproduct of this fact), there are infinitely many degree k+1 polynomials that pass through the same points. For example, if I give you the points (1,2), (2,4), (-2,2), the only quadratic (degree 2) polynomial that passes through all of them is 1 + \frac{1}{2}x + \frac{1}{2} x^2.

The proof that you can always find such a polynomial is pretty painless, so let’s take it slowly and write a program as we go. Suppose you give me some list of k+1 points (x_0, y_0), \dots, (x_k, y_k) and no two x values are the same. The proof has two parts. First we have to prove existence, that some degree k polynomial passes through the points, and then we have to prove that the polynomial is unique. The uniqueness part is easier, so let’s do the existence part first.

Let’s start with just one point (x_0, y_0). What’s a degree zero polynomial that passes through it? Just the constant function f(x) = y_0. For two points (x_0, y_0), (x_1, y_1) it’s similarly easy, since we all probably remember from basic geometry that there’s a unique line passing through any two points. But let’s write the line in a slightly different way:

\displaystyle f(x) = \frac{(x-x_1)}{x_0-x_1}y_0 + \frac{(x-x_0)}{x_1-x_0} y_1

Why write it this way? Because now it should be obvious that the polynomial passes through our two points: if I plug in x_0 then the second term is zero and the first term is just y_0(x_0 - x_1) / (x_0 - x_1) = y_0, and likewise for x_1.

For example, if we’re given (1, 3), (2, 5) we get:

\displaystyle f(x) = \frac{(x - 2)}{(1-2)} \cdot 3 + \frac{(x-1)}{(2-1)} \cdot 5

Plugging in x = 1 cancels the second term out, leaving f(1) = \frac{1-2}{1-2} \cdot 3 = 3, and plugging in x = 2 cancels the first term, leaving f(2) = \frac{(2-1)}{(2-1)} \cdot 5 = 5.

Now the hard step is generalizing this to three points. But the suggestive form above gives us a hint on how to continue.

\displaystyle f(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}y_0+\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}y_1+ \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}y_2

Notice that the numerators of the terms take on the form y_j \prod_{i \ne j} (x-x_i), that is, the product y_j (x-x_0)(x-x_1) \cdots (x-x_n) with the factor (x - x_j) left out. Thus, all terms will cancel out to 0 if we plug in x_i, except one term, which has the form

\displaystyle y_i \cdot \frac{\prod_{j \neq i} (x-x_j)}{\prod_{j \neq i} (x_i - x_j)}

Here, the fraction on the right side of the term cancels out to 1 when x_i is plugged in, leaving only y_i, the desired result. Now that we’ve written the terms in this general product form, we can easily construct examples for any number of points. We just do a sum of terms that look like this, one for each y value. Try writing this out as a summation, if you feel comfortable with notation.
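For reference, writing the sum out gives the classical Lagrange interpolation formula:

\displaystyle f(x) = \sum_{i=0}^{n} y_i \cdot \frac{\prod_{j \neq i} (x-x_j)}{\prod_{j \neq i} (x_i - x_j)}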

Let’s go further and write an algorithm to construct the polynomial for us. Some preliminaries: we encode a polynomial as a list of coefficients in degree-increasing order, so that 1 + 3x + 5x^3 is represented by [1,3,0,5].

-- imports used by the snippets below (the full program is on this blog's Github page)
import Data.Bits (shift, (.&.))
import Data.Char (chr, ord)
import Data.Ratio (numerator)
import System.Random (StdGen, getStdGen, randomRs)

type Point = (Rational, Rational)
type Polynomial = [Rational] --Polynomials are represented in ascending degree order

Then we can write some simple functions for adding and multiplying polynomials

addPoly :: Polynomial -> Polynomial -> Polynomial
addPoly [] [] = []
addPoly [] xs = xs
addPoly xs [] = xs
addPoly (x:xs) (y:ys) = (x+y) : (addPoly xs ys)

multNShift :: Polynomial -> (Rational, Int) -> Polynomial
multNShift xs (y, shift) =
    (replicate shift 0) ++ ( map ((*) y) xs)

multPoly :: Polynomial -> Polynomial -> Polynomial
multPoly [] [] = []
multPoly [] _ = []
multPoly _ [] = []
multPoly xs ys = foldr addPoly [] $ map (multNShift ys) $ zip xs [0..]

In short, multNShift multiplies a polynomial by a monomial (like 3x^2 (1 + 7x + 2x^4)), and multPoly does the usual distribution of terms, using multNShift to do most of the hard work. Then to construct the polynomial we need one more helper function to extract all elements of a list except a specific entry:

allBut :: Integer -> [a] -> [a]
allBut i list = snd $ unzip $ filter (\ (index,_) -> i /= index) $ zip [0..] list

And now we can construct a polynomial from a list of points in the same way we did mathematically.

findPolynomial :: [Point] -> Polynomial
findPolynomial points =
   let term (i, (xi,yi)) =
          let prodTerms = map (\ (xj, _) -> [-xj/(xi - xj), 1/(xi - xj)]) $ allBut i points
          in multPoly [yi] $ foldl multPoly [1] prodTerms
   in foldl addPoly [] $ map term $ zip [0..] points

Here the sub-function term constructs the i-th term of the polynomial, and the remaining expression adds up all the terms. Remember that due to our choice of representation the awkward 1 sitting in the formula signifies the presence of x. And that’s it! An example of its use to construct 3x - 1:

*Main> findPolynomial [(1,2), (2,5)]
[(-1) % 1,3 % 1]

Now the last thing we need to do is show that the polynomial we constructed in this way is unique. Here’s a proof.

Suppose there are two degree n polynomials f(x) and g(x) that pass through the n+1 given data points (x_0, y_0), (x_1, y_1), \dots , (x_n, y_n). Let h(x) = f(x) - g(x), and we want to show that h(x) is the zero polynomial. This proves that f(x) is unique because the only assumptions we made at the beginning were that f,g both passed through the given points. Now since both f and g are degree n polynomials, h is a polynomial of degree at most n. It is also true that h(x_i) = f(x_i) - g(x_i) = y_i - y_i = 0 where 0\leq i\leq n. Thus, we have (at least) n+1 roots of this degree n polynomial. But this can’t happen by the fundamental theorem of algebra! In more detail: if a nonzero degree \leq n polynomial really could have n+1 distinct roots, then you could factor it into at least n+1 linear terms like h(x) = (x - x_0)(x - x_1) \dots (x - x_n). But since there are n+1 copies of x, h would need to be a degree n+1 polynomial! The only way to resolve this contradiction is if h is actually the zero polynomial, and thus h(x) = f(x) - g(x) = 0, so f(x) = g(x).

This completes the proof. Now that we know these polynomials exist and are unique, it makes sense to give them a name. So for a given set of k+1 points, call the unique degree k polynomial that passes through them the interpolating polynomial for those points.

Secret Sharing with Interpolating Polynomials

Once you think to use interpolating polynomials, the connection to secret sharing seems almost obvious. If you want to distribute a secret to k people so that any r of them can reconstruct it, here’s what you do:

  1. Pick a random polynomial p of degree r-1 so that the secret is p(0).
  2. Distribute the points (1, p(1)), (2, p(2)), \dots, (k, p(k)).

Then the reconstruction function is: take the points provided by at least r participants, use them to reconstruct p, and output p(0). That’s it! Step 1 might seem hard at first, but you can just notice that p(0) is the constant term of the polynomial, so you can pick r-1 random numbers for the other coefficients of p and output them. In Haskell,

makePolynomial :: Rational -> Int -> StdGen -> Polynomial
makePolynomial secret r generator =
  secret : map toRational (take (r-1) $ randomRs (1, (numerator(2*secret))) generator)

-- `eval` (which evaluates a polynomial at a point) is defined in the full
-- program on this blog's Github page.
share :: Rational -> Integer -> Int -> IO [Point]
share secret k r = do
  generator <- getStdGen
  let poly = makePolynomial secret r generator
      ys = map (eval poly) $ map toRational [1..k]
  return $ zip [1..] ys

In words, we initialize the Haskell standard generator (which wraps the results inside an IO monad), then we construct a polynomial by letting the first coefficient be the secret and choosing random coefficients for the rest. And findPolynomial is the reconstruction function.
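To make the whole protocol concrete with a small made-up instance: to share the secret N = 11 among k = 4 participants so that any r = 3 can recover it, pick the random degree-2 polynomial p(x) = 11 + 2x + 5x^2 and hand out the shares (1, 18), (2, 35), (3, 62), (4, 99). Any three of these points determine p by interpolation, and hence the secret p(0) = 11; as we’ll see below, any two of them reveal nothing about the constant term.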

Finally, just to flush the program out a little more, we write a function that encodes or decodes a string as an integer.

encode :: String -> Integer
encode str = let nums = zip [0..] $ map (toInteger . ord) str
                 integers = map (\(i, n) -> shift n (i*8)) nums
             in foldl (+) 0 integers

decode :: Integer -> String
decode 0 = ""
decode num = if num < 0
             then error "Can't decode a negative number"
             else chr (fromInteger (num .&. 127)) : (decode $ shift num (-8))

And then we have a function that shows the whole process in action.

example msg k r =
   let secret = toRational $ encode msg
   in do points <- share secret k r
         putStrLn $ show $ map (\(x, y) -> (numerator x, numerator y)) points
         let subset = take r points
             encodedSecret = eval (findPolynomial subset) 0
         putStrLn $ show $ numerator encodedSecret
         putStrLn $ decode $ numerator encodedSecret

And a function call:

*Main> example "Hello world!" 10 5
10334410032606748633331426632
[(1,34613972928232668944107982702),(2,142596447049264820443250256658),(3,406048862884360219576198642966),(4,916237517700482382735379150124),(5,1783927975542901326260203400662),(6,3139385067235193566437068631142),(7,5132372890379242119499357692158),(8,7932154809355236501627439048336),(9,11727493455321672728948666778334),(10,16726650726215353317537380574842)]
10334410032606748633331426632
Hello world!

Security

The final question to really close this problem with a nice solution is, “How secure is this protocol?” That is, if you didn’t know the secret but you had r-1 numbers, could you find a way to recover the secret, oh, say, 0.01% of the time?

Pleasingly, the answer is a solid no. This protocol has something way stronger, what’s called information-theoretic security. In layman’s terms, this means it cannot possibly be broken, period, at least not without exploiting some weakness in the random number generator, which we assume is secure. With that assumption the security proof is trivial. Here it goes.

Pick a number M that isn’t the secret N. It’s any number you want. And say you only have r-1 of the correct numbers y_1, \dots, y_{r-1}. Then there is a final number y_r so that the protocol reconstructs M instead of N. This is no matter which of the unused x-values you pick, no matter what M and r-1 numbers you started with. This is simply because adding in (0, M) defines a new polynomial q, and you can use any point on q as your r-th number.
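To see this concretely with the toy polynomial from earlier, p(x) = 11 + 2x + 5x^2: the two shares (1, 18) and (2, 35) lie on p, whose constant term is the secret 11, but they also lie on q(x) = 12 + \frac{1}{2}x + \frac{11}{2}x^2, whose constant term is 12, and on a similar quadratic for every other candidate secret. Two shares alone are consistent with any secret whatsoever.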

Here’s what this means. A person trying to break the secret sharing protocol would have no way to tell if they did it correctly! If the secret is a message, then a bad reconstruction could produce any message. In information theory terms, knowing r-1 of the numbers provides no information about the actual message. In our story from the beginning of the post, no matter how much computing power one of the greedy children may have, the only algorithm they have to open the safe is to try every combination. The mother could make the combination have length in the millions of digits, or even better, the mother could encode the will as an integer and distribute that as the secret. I imagine there are some authenticity issues there, since one could claim to have reconstructed a false will, signatures and all, but there appear to be measures to account for this.

One might wonder if this is the only known secret sharing protocol, and the answer is no. Essentially, any time you have an existence and uniqueness theorem in mathematics, and the objects you’re working with are efficiently constructible, then you have the potential for a secret sharing protocol. There are two more on Wikipedia. But people don’t really care to find new ones anymore because the known protocols are as good as it gets.

On a broader level, the existence of efficient secret sharing protocols is an important fact used in the field of secure multiparty computation. Here the goal is for a group of individuals to compute a function depending on secret information from all of them, without revealing their secret information to anyone. A classic example of this is to compute the average of seven salaries without revealing any of the salaries. This was a puzzle featured on Car Talk, and it has a cute answer. See if you can figure it out.

Until next time!

Elliptic Curve Diffie-Hellman

So far in this series we’ve seen elliptic curves from many perspectives, including the elementary, algebraic, and programmatic ones. We implemented finite field arithmetic and connected it to our elliptic curve code. So we’re in a perfect position to feast on the main course: how do we use elliptic curves to actually do cryptography?

History

As the reader has heard countless times in this series, an elliptic curve is a geometric object whose points have a surprising and well-defined notion of addition. That you can add some points on some elliptic curves has been known since antiquity, going back to Diophantus. It was not until the mid 19th century that the general question of whether addition always makes sense was answered by Karl Weierstrass. In 1908 Henri Poincaré asked how one might go about classifying the structure of elliptic curves, and it was not until 1922 that Louis Mordell proved the fundamental theorem of elliptic curves, classifying their algebraic structure for most important fields.

While mathematicians have always been interested in elliptic curves (there is currently a million dollar prize out for a solution to one problem about them), their use in cryptography was not suggested until 1985. Two prominent researchers independently proposed it: Neal Koblitz at the University of Washington, and Victor Miller, who was at IBM Research at the time. Their proposal was solid from the start, but elliptic curves didn’t gain traction in practice until around 2005. More recently, the NSA was revealed to have planted vulnerable national standards for elliptic curve cryptography so they could have backdoor access. You can see a proof and implementation of the backdoor at Aris Adamantiadis’s blog. For now we’ll focus on the cryptographic protocols themselves.

The Discrete Logarithm Problem

Koblitz and Miller had insights aplenty, but the central observation in all of this is the following.

Adding is easy on elliptic curves, but undoing addition seems hard.

What I mean by this is usually called the discrete logarithm problem. Here’s a formal definition. Recall that an additive group is just a set of things that have a well-defined addition operation, and that the notation ny means y + y + \dots + y (n times).

Definition: Let G be an additive group, and let x, y be elements of G so that x = ny for some integer n. The discrete logarithm problem asks one to find n when given x and y.

I like to give super formal definitions first, so let’s do a comparison. For integers this problem is very easy. If you give me 12 and 4185072, I can take a few seconds and compute that 4185072 = (348756) 12 using the elementary-school division algorithm (in the above notation, y=12, x=4185072, and n = 348756). The division algorithm for integers is efficient, and so it gives us a nice solution to the discrete logarithm problem for the additive group of integers \mathbb{Z}.
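
In code, the integer version really is a one-liner (assuming, as in the example, that x genuinely is a multiple of y):

def integerDiscreteLog(x, y):
    # Solve x = n*y over the integers with ordinary division.
    return x // y

print(integerDiscreteLog(4185072, 12))  # 348756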

The reason we use the word “logarithm” is because if your group operation is multiplication instead of addition, you’re tasked with solving the equation x = y^n for n. With real numbers you’d take a logarithm of both sides, hence the name. Just in case you were wondering, we can also solve the multiplicative logarithm problem efficiently for rational numbers (and hence for integers) using the square-and-multiply algorithm. Just square y until doing so would make you bigger than x, then multiply by y until you hit x.
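
Here’s a sketch of an even simpler variant of that idea: just count how many times y divides x. Since n is at most the number of bits of x, this is still polynomial time. (I’m assuming y > 1 and that x really is a power of y.)

def multiplicativeDiscreteLog(x, y):
    # Solve x = y**n by counting how many times y divides x.
    n = 0
    while x > 1:
        x //= y
        n += 1
    return n

print(multiplicativeDiscreteLog(2 ** 20, 2))  # 20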

But integers are way nicer than they need to be. They are selflessly well-ordered. They give us division for free. It’s a computational charity! What happens when we move to settings where we don’t have a division algorithm? In mathematical lingo: we’re really interested in the case when G is just a group, and doesn’t have additional structure. The less structure we have, the harder it should be to solve problems like the discrete logarithm. Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don’t know how to do division efficiently. The best we can do is add y to itself over and over until we hit x, and it could easily happen that n (as a number) is exponentially larger than the number of bits in x and y.
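
For a toy curve you can literally run the brute-force approach just described. A minimal sketch, assuming the Point class from the previous posts supports + and ==:

def bruteForceLog(x, y, maxSteps):
    # Add y to itself until we hit x; the number of steps can be
    # exponential in the bit length of the inputs.
    multiple, n = y, 1
    while n <= maxSteps:
        if multiple == x:
            return n
        multiple = multiple + y
        n += 1
    return None  # gave up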

What we really want is a polynomial time algorithm for solving discrete logarithms. Since we can take multiples of a point very fast using the double-and-add algorithm from our previous post, if there is no polynomial time algorithm for the discrete logarithm problem then “taking multiples” fills the role of a theoretical one-way function, and as we’ll see this opens the door for secure communication.
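
For reference, here’s a sketch of the double-and-add trick, again assuming the Point class from the previous posts; `zero` is a placeholder for however your implementation represents the identity (the ideal point).

def doubleAndAdd(n, Q, zero):
    # Process the bits of n from least to most significant, doubling the addend as we go.
    result, addend = zero, Q
    while n > 0:
        if n & 1:
            result = result + addend
        addend = addend + addend
        n >>= 1
    return result

This takes a number of point operations proportional to the number of bits of n, which is why “taking multiples” is fast even when n is astronomically large.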

Here’s the formal statement of the discrete logarithm problem for elliptic curves.

Problem: Let E be an elliptic curve over a finite field k. Let P, Q be points on E such that P = nQ for some integer n. Let |P| denote the number of bits needed to describe the point P. We wish to find an algorithm which determines n and has runtime polynomial in |P| + |Q|. If we want to allow randomness, we can require the algorithm to find the correct n with probability at least 2/3.

So this problem seems hard. And when mathematicians and computer scientists try to solve a problem for many years and they can’t, the cryptographers get excited. They start to wonder: under the assumption that the problem has no efficient solution, can we use that as the foundation for a secure communication protocol?

The Diffie-Hellman Protocol and Problem

Let’s spend the rest of this post on the simplest example of a cryptographic protocol based on elliptic curves: the Diffie-Hellman key exchange.

A lot of cryptographic techniques are based on two individuals sharing a secret string, and using that string as the key to encrypt and decrypt their messages. In fact, if you have enough secret shared information, and you only use it once, you can have provably unbreakable encryption! We’ll cover this idea in a future series on the theory of cryptography (it’s called a one-time pad, and it’s not all that complicated). All we need now is motivation to get a shared secret.
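
As a teaser for that future series, here’s a minimal sketch of the one-time pad idea: XOR the message with an equally long, never-reused random key, and XOR again with the same key to decrypt.

import os

def xorBytes(data, key):
    # XOR two equal-length byte strings.
    return bytes(d ^ k for d, k in zip(data, key))

message = b'meet me at noon'
key = os.urandom(len(message))       # the shared secret, used only once
ciphertext = xorBytes(message, key)
assert xorBytes(ciphertext, key) == message

That’s the easy half; the hard half is agreeing on the key in the first place.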

Because what if your two individuals have never met before and they want to generate such a shared secret? Worse, what if their only method of communication is being monitored by nefarious foes? Can they possibly exchange public information and use it to construct a shared piece of secret information? Miraculously, the answer is yes, and one way to do it is with the Diffie-Hellman protocol. Rather than explain it abstractly let’s just jump right in and implement it with elliptic curves.

As hinted by the discrete logarithm problem, we only really have one tool here: taking multiples of a point. So say we’ve chosen a curve C and a point on that curve Q. Then we can take some secret integer n, and publish Q and nQ for the world to see. If the discrete logarithm problem is truly hard, then we can rest assured that nobody will be able to discover n.

How can we use this to establish a shared secret? This is where Diffie-Hellman comes in. Take our two would-be communicators, Alice and Bob. Alice and Bob each pick a binary string called a secret key, which is interpreted as a number in this protocol. Let’s call Alice’s secret key s_A and Bob’s s_B, and note that they don’t have to be the same. As the name “secret key” suggests, the secret keys are held secret. Moreover, we’ll assume that everything else in this protocol, including all data sent between the two parties, is public.

So Alice and Bob agree ahead of time on a public elliptic curve C and a public point Q on C. We’ll sometimes call this point the base point for the protocol.

Bob can cunningly do the following trick: take his secret key s_B and send s_B Q to Alice. Equally slick, Alice computes s_A Q and sends that to Bob. Now Alice, having s_B Q, computes s_A s_B Q. And Bob, since he has s_A Q, can compute s_B s_A Q. But since addition is commutative in elliptic curve groups, we know s_A s_B Q = s_B s_A Q. The secret piece of shared information can be anything derived from this new point, for example its x-coordinate.

If we want to talk about security, we have to describe what is public and what the attacker is trying to determine. In this case the public information consists of the points Q, s_AQ, s_BQ. What is the attacker trying to figure out? Well she really wants to eavesdrop on their subsequent conversation, that is, the stuff they encrypt with their new shared secret s_As_BQ. So the attacker wants to find out s_As_BQ. And we’ll call this the Diffie-Hellman problem.

Diffie-Hellman Problem: Suppose you fix an elliptic curve E over a finite field k, and you’re given four points Q, aQ, bQ and P for some unknown integers a, b. Determine if P = abQ in polynomial time (in the lengths of Q, aQ, bQ, P).

On one hand, if we had an efficient solution to the discrete logarithm problem, we could easily use that to solve the Diffie-Hellman problem because we could compute a, b and then quickly compute abQ and check if it’s P. In other words discrete log is at least as hard as this problem. On the other hand nobody knows if you can do this without solving the discrete logarithm problem. Moreover, we’re making this problem as easy as we reasonably can because we don’t require you to be able to compute abQ. Even if some prankster gave you a candidate for abQ, all you have to do is check if it’s correct. One could imagine some test that rules out all fakes but still doesn’t allow us to compute the true point, which would be one way to solve this problem without being able to solve discrete log.
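
To spell out that reduction in code: with a hypothetical discreteLog oracle in hand, deciding the Diffie-Hellman problem is immediate. (The names below are illustrative; n * Q is the scalar multiplication supported by the Point class from the previous posts.)

def decideDH(Q, aQ, bQ, P, discreteLog):
    # discreteLog(X, Q) is assumed to return the n with X = n*Q.
    a = discreteLog(aQ, Q)
    b = discreteLog(bQ, Q)
    return P == (a * b) * Q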

So this is our hardness assumption: assuming this problem has no efficient solution then no attacker, even with really lucky guesses, can feasibly determine Alice and Bob’s shared secret.

Python Implementation

The Diffie-Hellman protocol is just as easy to implement as you would expect. Here’s some Python code that does the trick. Note that all the code produced in the making of this post is available on this blog’s Github page.

def sendDH(privateKey, generator, sendFunction):
   # Compute our public value privateKey * generator and ship it off.
   return sendFunction(privateKey * generator)

def receiveDH(privateKey, receiveFunction):
   # Multiply the other party's public value by our private key.
   return privateKey * receiveFunction()

And using our code from the previous posts in this series we can run it on a small test.

import os

def generateSecretKey(numBits):
   return int.from_bytes(os.urandom(numBits // 8), byteorder='big')

if __name__ == "__main__":
   F = FiniteField(3851, 1)
   curve = EllipticCurve(a=F(324), b=F(1287))
   basePoint = Point(curve, F(920), F(303))

   aliceSecretKey = generateSecretKey(8)
   bobSecretKey = generateSecretKey(8)

   alicePublicKey = sendDH(aliceSecretKey, basePoint, lambda x:x)
   bobPublicKey = sendDH(bobSecretKey, basePoint, lambda x:x)

   sharedSecret1 = receiveDH(bobSecretKey, lambda: alicePublicKey)
   sharedSecret2 = receiveDH(aliceSecretKey, lambda: bobPublicKey)
   print('Shared secret is %s == %s' % (sharedSecret1, sharedSecret2))

Python’s os module allows us to access the operating system’s random number generator (which is supposed to be cryptographically secure) via the function urandom, which accepts as input the number of bytes you wish to generate, and produces as output a Python bytestring object that we then convert to an integer. Our simplistic (and totally insecure!) protocol uses the elliptic curve C defined by y^2 = x^3 + 324 x + 1287 over the finite field \mathbb{Z}/3851. We pick the base point Q = (920, 303), and call the relevant functions with placeholders for actual network transmission functions.

There is one issue we have to note. Say we fix our base point Q. Since an elliptic curve over a finite field can only have finitely many points (the field only has finitely many possible pairs of numbers), it will eventually happen that nQ = 0, the ideal point. Recall that the smallest value of n for which nQ = 0 is called the order of Q. And so when we’re generating secret keys, we have to pick them to be smaller than the order of the base point. Viewed from the other angle, we want to pick Q to have large order, so that we can pick large and difficult-to-guess secret keys. In fact, no matter what integer you use for the secret key it will be equivalent to some secret key that’s less than the order of Q. So if an attacker could guess the smaller secret key he wouldn’t need to know your larger key.
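
For a toy curve like ours you can find the order of Q by brute force. A sketch, assuming the Point class from the previous posts and a hypothetical isIdeal test for the identity:

def naiveOrder(Q, isIdeal):
    # Keep adding Q to itself until we hit the ideal point;
    # only feasible for tiny curves like our example.
    n, multiple = 1, Q
    while not isIdeal(multiple):
        multiple = multiple + Q
        n += 1
    return n

Running this on the base point from the example should confirm the order of 1964 mentioned below.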

The base point we picked in the example above happens to have order 1964, so an 8-bit key is well within the bounds. A real industry-strength elliptic curve (say, Curve25519 or the curves used in the NIST standards*) is designed to avoid these problems. The base point used in the Diffie-Hellman protocol for Curve25519 has gargantuan order (like 2^{256}). So 256-bit keys can easily be used. I’m brushing some important details under the rug, because the key as an actual string is derived from 256 pseudorandom bits in a highly nontrivial way.

So there we have it: a simple cryptographic protocol based on elliptic curves. While we didn’t experiment with a truly secure elliptic curve in this example, we’ll eventually extend our work to include Curve25519. But before we do that we want to explore some of the other algorithms based on elliptic curves, including random number generation and factoring.

Comments on Insecurity

Why do we use elliptic curves for this? Why not do something like RSA and do multiplication (and exponentiation) modulo some large prime?

Well, it turns out that algorithmic techniques are getting better and better at solving the discrete logarithm problem for integers mod p, leading some to claim that RSA is dead. But even if we will never find a genuinely efficient algorithm (polynomial time is good, but might not be good enough), these techniques have made it clear that the key size required to maintain high security in RSA-type protocols needs to be really big. Like 4096 bits. But for elliptic curves we can get away with 256-bit keys. The reason for this is essentially mathematical: addition on elliptic curves is not as well understood as multiplication is for integers, and the more complex structure of the group makes it seem inherently more difficult. So until some powerful general attacks are found, it seems that we can get away with higher security on elliptic curves with smaller key sizes.

I mentioned that the particular elliptic curve we chose was insecure, and this raises the natural question: what makes an elliptic curve/field/basepoint combination secure or insecure? There are a few mathematical pitfalls (including certain attacks we won’t address), but one major non-mathematical problem is called a side-channel attack. A side channel attack against a cryptographic protocol is one that gains additional information about users’ secret information by monitoring side-effects of the physical implementation of the algorithm.

The problem is that the two operations, doubling a point and adding two different points, use very different algorithms. As a result, they take different amounts of time to complete and they require differing amounts of power. Both of these can be used to reveal information about the secret keys. Despite the different algorithms for arithmetic on Weierstrass normal form curves, one can still implement them securely. Naively, one might pad the two subroutines with additional (useless) operations so that they have more similar time/power signatures, but I imagine there are better methods available.
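
One such method is the Montgomery ladder, which performs exactly one addition and one doubling per key bit regardless of the bit’s value. Here’s a minimal sketch, assuming the Point class from the previous posts and a placeholder `zero` for the identity; a careful implementation would also need to remove the data-dependent branch.

def montgomeryLadder(n, Q, zero):
    # Invariant: R1 == R0 + Q at the top of every iteration.
    R0, R1 = zero, Q
    for bit in bin(n)[2:]:          # most significant bit first
        if bit == '0':
            R0, R1 = R0 + R0, R0 + R1
        else:
            R0, R1 = R0 + R1, R1 + R1
    return R0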

But much of what makes a curve’s domain parameters mathematically secure or insecure is still unknown. There are a handful of known attacks against very specific families of parameters, and so cryptography experts simply avoid these as they are discovered. Here is a short list of pitfalls, and links to overviews:

  1. Make sure the order of your basepoint has a short factorization (e.g., is 2p, 3p, or 4p for some prime p). Otherwise you risk attacks based on the Chinese Remainder Theorem, the most prominent of which is called Pohlig-Hellman. (A toy check of this pitfall and pitfall 3 appears after this list.)
  2. Make sure your curve is not supersingular. If it is you can reduce the discrete logarithm problem to one in a different and much simpler group.
  3. If your curve C is defined over \mathbb{Z}/p, make sure the number of points on C is not equal to p. Such a curve is called prime-field anomalous, and its discrete logarithm problem can be reduced to the (additive) version on integers.
  4. Don’t pick a small underlying field like \mathbb{F}_{2^m} for small m. General-purpose attacks can be sped up significantly against such fields.
  5. If you use the field \mathbb{F}_{2^m}, ensure that m is prime. Many believe that if m has small divisors, attacks based on some very complicated algebraic geometry can be used to solve the discrete logarithm problem more efficiently than any general-purpose method. This gives evidence that m being composite at all is dangerous, so we might as well make it prime.

This is a sublist of the list provided on page 28 of this white paper.
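
As promised in item 1, here’s a toy check of two of these pitfalls for our small example curve. It assumes you already know the field size p and the basepoint’s order N; for real curves N comes from point counting (e.g., Schoof’s algorithm), not from code like this.

def largestPrimeFactor(n):
    # Trial division; fine for toy parameters only.
    d, best = 2, 1
    while d * d <= n:
        while n % d == 0:
            best, n = d, n // d
        d += 1
    return n if n > 1 else best

def checkParameters(p, N):
    q = largestPrimeFactor(N)
    return {
        'small cofactor (resists Pohlig-Hellman)': N // q <= 4,
        'not prime-field anomalous (N != p)': N != p,
    }

print(checkParameters(3851, 1964))  # 1964 = 2^2 * 491, so the cofactor is 4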

The interesting thing is that there is little about the algorithm and protocol that is vulnerable. Almost all of the vulnerabilities come from using bad curves, bad fields, or a bad basepoint. Since the known attacks work on a pretty small subset of parameters, one potentially secure technique is to just generate a random curve and a random point on that curve! But apparently all respected national agencies will refuse to call your algorithm “standards compliant” if you do this.

Next time we’ll continue implementing cryptographic protocols, including the more general public-key message sending and signing protocols.

Until then!