Problem: Determine if a number is prime, with an acceptably small error rate.
Solution: (in Python)
import random

def decompose(n):
    """Write n as 2^exponentOfTwo * n' with n' odd."""
    exponentOfTwo = 0
    while n % 2 == 0:
        n = n // 2
        exponentOfTwo += 1
    return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
    """Return True if possibleWitness proves that p is composite."""
    possibleWitness = pow(possibleWitness, remainder, p)
    if possibleWitness == 1 or possibleWitness == p - 1:
        return False
    for _ in range(exponent):
        possibleWitness = pow(possibleWitness, 2, p)
        if possibleWitness == p - 1:
            return False
    return True

def probablyPrime(p, accuracy=100):
    if p == 2 or p == 3: return True
    if p < 2: return False
    exponent, remainder = decompose(p - 1)
    for _ in range(accuracy):
        possibleWitness = random.randint(2, p - 2)
        if isWitness(possibleWitness, p, exponent, remainder):
            return False
    return True
Discussion: This algorithm is known as the Miller-Rabin primality test, and it was a very important breakthrough in the study of probabilistic algorithms.
Efficiently testing whether a number is prime is a crucial problem in cryptography, because the security of many cryptosystems depends on the use of large randomly chosen primes. Indeed, we’ve seen one on this blog already which is in widespread use: RSA. Randomized algorithms also have quite useful applications in general, because it’s often the case that a solution which is correct with high probability is good enough for practice.
But from a theoretical and historical perspective, primality testing lay at the center of a huge problem in complexity theory. In particular, it is unknown whether algorithms which have access to randomness and can output probably correct answers are more powerful than those that don’t. The use of randomness in algorithms comes in a number of formalizations, the most prominent of which is called BPP (Bounded-error Probabilistic Polynomial time). The Miller-Rabin algorithm shows that primality testing is in BPP. On the other hand, problems solvable in polynomial time without randomness form a class called P.
For a long time (from 1975 to 2002), it was unknown whether primality testing was in P or not. There are very few remaining important problems that have BPP algorithms but are not known to be in P. Polynomial identity testing is the main example, and until 2002 primality testing shared its title. Now primality has a known polynomial-time algorithm (the AKS algorithm, discovered in 2002). One might argue that (in theory) the Miller-Rabin test is now useless, but it’s still a nice example of a nontrivial BPP algorithm.
The algorithm relies on the following theorem:
Theorem: if $p$ is a prime, let $2^s$ be the maximal power of 2 dividing $p - 1$, so that $p - 1 = 2^s d$ and $d$ is odd. Then for any $1 \leq a \leq p - 1$, one of two things happens:
- $a^d \equiv 1 \pmod{p}$, or
- $a^{2^j d} \equiv -1 \pmod{p}$ for some $0 \leq j < s$.
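As a sanity check, the theorem can be verified empirically for a small prime. The prime $p = 97$ below is an arbitrary choice for illustration:

```python
# Empirically verify the theorem for p = 97: every a in [1, p-1]
# satisfies a^d = 1 (mod p) or a^(2^j d) = -1 (mod p) for some j < s.
p = 97

# Decompose p - 1 = 2^s * d with d odd.
s, d = 0, p - 1
while d % 2 == 0:
    d //= 2
    s += 1

def satisfies_theorem(a, p, s, d):
    if pow(a, d, p) == 1:
        return True
    return any(pow(a, 2**j * d, p) == p - 1 for j in range(s))

assert all(satisfies_theorem(a, p, s, d) for a in range(1, p))
print("theorem holds for every a mod", p)
```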
The algorithm then simply operates as follows: pick a nonzero $a$ at random and check whether both of the above conditions fail. Such an $a$ is called a witness for the fact that $p$ is composite. If $p$ is not a prime, then there is at least a 3/4 chance that a randomly chosen $a$ will be a witness.
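To see the witness bound in action, one can count the witnesses for a small composite number; $n = 221 = 13 \cdot 17$ below is an arbitrary choice:

```python
# Count the witnesses for the composite n = 221 = 13 * 17. By the
# theorem's guarantee, at least 3/4 of the candidates are witnesses.
n = 221

# Decompose n - 1 = 2^s * d with d odd.
s, d = 0, n - 1
while d % 2 == 0:
    d //= 2
    s += 1

def is_witness(a, n, s, d):
    # a is a witness if it fails BOTH conditions of the theorem.
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True

candidates = range(2, n - 1)
witnesses = sum(is_witness(a, n, s, d) for a in candidates)
print(f"{witnesses} of {len(candidates)} candidates are witnesses")
```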
We leave the proof of the theorem as an exercise. Start with the fact that $a^{p-1} \equiv 1 \pmod{p}$ (this is Fermat’s Little Theorem). Then use induction to take square roots (a square root of 1 mod a prime $p$ must be $\pm 1 \bmod p$), and continue until you get down to $a^d$.
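The descent in that proof sketch can be written out as follows (recall $1 \leq a \leq p - 1$, so $p \nmid a$):

```latex
\begin{align*}
a^{2^s d} = a^{p-1} &\equiv 1 \pmod{p} && \text{(Fermat's Little Theorem)} \\
\left( a^{2^{s-1} d} \right)^2 &\equiv 1 \pmod{p}
  && \Rightarrow\ a^{2^{s-1} d} \equiv \pm 1 \pmod{p} \\
&\ \ \vdots && \text{(repeat while the value is } {+1}\text{)} \\
\left( a^{d} \right)^2 &\equiv 1 \pmod{p}
  && \Rightarrow\ a^{d} \equiv \pm 1 \pmod{p}
\end{align*}
```

The key fact is that $\mathbb{Z}/p$ is a field when $p$ is prime, so $x^2 \equiv 1$ forces $x \equiv \pm 1$. If a $-1$ ever appears along the way, the second condition of the theorem holds; if the descent reaches $a^d \equiv 1$, the first does.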
The Python code above uses Python’s built-in modular exponentiation function pow to do fast modular exponents. The isWitness function first checks $a^d \bmod p$ and then all powers $a^{2^j d} \bmod p$. The probablyPrime function then simply generates random potential witnesses and checks them via the previous function. The output of the function is True if and only if all of the needed modular equivalences hold for all witnesses inspected. The choice of endpoints being 2 and $p - 2$ is because 1 and $p - 1$ can never be witnesses: $1^d \equiv 1 \pmod{p}$, and since $d$ is odd, $(p-1)^d \equiv -1 \pmod{p}$.
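For completeness, here is a condensed, self-contained variant of the solution merged into a single function (the shortcut for even inputs is an addition not present in the code above), together with a usage example:

```python
import random

def probably_prime(p, accuracy=100):
    """Miller-Rabin test: True if p is prime with overwhelming probability."""
    if p == 2 or p == 3:
        return True
    if p < 2 or p % 2 == 0:
        return False
    # Decompose p - 1 = 2^exponent * remainder with remainder odd.
    exponent, remainder = 0, p - 1
    while remainder % 2 == 0:
        remainder //= 2
        exponent += 1
    for _ in range(accuracy):
        a = random.randint(2, p - 2)
        x = pow(a, remainder, p)
        if x == 1 or x == p - 1:
            continue
        for _ in range(exponent - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False  # a is a witness: p is composite
    return True

print(probably_prime(2**61 - 1))  # True: a Mersenne prime
print(probably_prime(221))        # False: 221 = 13 * 17
```

Since primes have no witnesses at all, the test never errs on prime inputs; only a composite can slip through, with probability at most $4^{-100}$ for the default accuracy.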