Miller-Rabin Primality Test

Problem: Determine if a number is prime, with an acceptably small error rate.

Solution: (in Python)

import random

def decompose(n):
   # Write n = 2^exponentOfTwo * d with d odd, and return (exponentOfTwo, d).
   exponentOfTwo = 0

   while n % 2 == 0:
      n = n // 2  # integer division keeps n an integer
      exponentOfTwo += 1

   return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
   # Return True if possibleWitness certifies that p is composite.
   possibleWitness = pow(possibleWitness, remainder, p)

   if possibleWitness == 1 or possibleWitness == p - 1:
      return False

   # Square repeatedly, looking for -1 (that is, p - 1) along the way.
   for _ in range(exponent):
      possibleWitness = pow(possibleWitness, 2, p)

      if possibleWitness == p - 1:
         return False

   return True

def probablyPrime(p, accuracy=100):
   if p == 2 or p == 3: return True
   if p < 2: return False

   # Write p - 1 = 2^exponent * remainder with remainder odd.
   exponent, remainder = decompose(p - 1)

   for _ in range(accuracy):
      possibleWitness = random.randint(2, p - 2)
      if isWitness(possibleWitness, p, exponent, remainder):
         return False

   return True

Discussion: This algorithm is known as the Miller-Rabin primality test, and it was a very important breakthrough in the study of probabilistic algorithms.

Efficiently testing whether a number is prime is a crucial problem in cryptography, because the security of many cryptosystems depends on the use of large randomly chosen primes. Indeed, we’ve seen one such cryptosystem on this blog already which is in widespread use: RSA. Randomized algorithms are also quite useful in general, because a solution which is wrong with probability, say, $ 2^{-100}$ is often good enough in practice.

But from a theoretical and historical perspective, primality testing lay at the center of a huge problem in complexity theory. In particular, it is unknown whether algorithms which have access to randomness and can output probably correct answers are more powerful than those that don’t. The use of randomness in algorithms comes in a number of formalizations, the most prominent of which is called BPP (Bounded-error Probabilistic Polynomial time). The Miller-Rabin algorithm shows that primality testing is in BPP. On the other hand, the problems solvable in polynomial time without randomness form the class called P.

For a long time (from 1975 to 2002), it was unknown whether primality testing was in P or not. There are very few remaining important problems that have BPP algorithms but are not known to be in P; polynomial identity testing is the main example, and until 2002 primality testing shared that title. Now primality testing has a known deterministic polynomial-time algorithm (the AKS test). One might argue that (in theory) the Miller-Rabin test is now useless, but it’s still a nice example of a nontrivial BPP algorithm.

The algorithm relies on the following theorem:

Theorem: if $ p$ is a prime, let $ s$ be the largest integer such that $ 2^s$ divides $ p-1$, so that $ p-1 = 2^{s}d$ and $ d$ is odd. Then for any $ 1 \leq n \leq p-1$, one of two things happens:

  • $ n^d = 1 \mod p$ or
  • $ n^{2^j d} = -1 \mod p$ for some $ 0 \leq j < s$.
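
For a concrete instance of the theorem, take $ p = 13$, so that $ p - 1 = 12 = 2^2 \cdot 3$, i.e. $ s = 2$ and $ d = 3$. Then $ 3^3 = 27 = 1 \mod 13$, satisfying the first condition, while $ 2^3 = 8 \mod 13$ and $ 2^{2 \cdot 3} = 64 = -1 \mod 13$, satisfying the second condition with $ j = 1$.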

The algorithm then simply operates as follows: repeatedly pick a nonzero $ n$ at random and check whether both of the above conditions fail. Such an $ n$ is called a witness to the fact that $ p$ is composite, and finding one proves that $ p$ is not prime. If $ p$ is not a prime, then there is at least a 3/4 chance that a randomly chosen $ n$ will be a witness.

We leave the proof of the theorem as an exercise. Start with the fact that $ n^{p-1} = 1 \mod p$ (this is Fermat’s Little Theorem). Then repeatedly take square roots, using the fact that a square root of 1 modulo a prime must be $ \pm 1 \mod p$: either some square root along the way is $ -1$, or you continue all the way down to $ n^{d} = 1 \mod p$.
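
In symbols, the chain of square roots looks like this (just a sketch of the inductive step, not a full proof): since $ n^{p-1} = \left( n^{2^{s-1}d} \right)^2 = 1 \mod p$, it follows that $ n^{2^{s-1}d} = \pm 1 \mod p$; if the value is $ +1$, repeat the argument with $ n^{2^{s-2}d}$, and so on down to $ n^d$.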

The Python code above uses Python’s built-in three-argument pow function to do fast modular exponentiation. The isWitness function first checks whether $ n^d = 1 \mod p$ and then whether any of the powers satisfy $ n^{2^j d} = -1 \mod p$. The probablyPrime function then simply generates random potential witnesses and checks them via the previous function. The output is True if and only if none of the inspected candidates turned out to be a witness. The choice of 2 and $ p-2$ as the endpoints for the random candidates is because 1 and $ p-1$ can never be witnesses: every power of 1 is 1, and $ (p-1)^2 = 1 \mod p$.
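
To illustrate how the test might be used, here is a minimal sketch that builds on the code above (the randomProbablePrime helper is our own illustration, not part of the post’s program): it generates a random probable prime of a given bit length.

def randomProbablePrime(bits, accuracy=100):
   # Keep sampling odd numbers with the top bit set until one passes
   # the Miller-Rabin test defined above.
   while True:
      candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
      if probablyPrime(candidate, accuracy):
         return candidate

print(probablyPrime(1000003))    # True: 1,000,003 is prime
print(probablyPrime(1000001))    # False: 1,000,001 = 101 * 9901
print(randomProbablePrime(128))  # a random 128-bit probable prime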

There are Infinitely Many Primes (Erdős)

Problem: Prove there are infinitely many primes

Solution: Denote by $ \pi(n)$ the number of primes less than or equal to $ n$. We will give a lower bound on $ \pi(n)$ which increases without bound as $ n \to \infty$.

Note that every number $ n$ can be factored as the product of a square-free number $ r$ (a number not divisible by any square larger than 1) and a square $ s^2$. In particular, to find $ s^2$, recognize that 1 is a square dividing $ n$, and there are finitely many squares dividing $ n$; so there must be a largest one, and then $ r = n/s^2$. This $ r$ is indeed square-free, for if some square $ t^2 > 1$ divided $ r$, then $ (st)^2$ would be a larger square dividing $ n$. We will give a bound on the number of such products $ rs^2$ which are less than or equal to $ n$.
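
For example, $ 72 = 2 \cdot 36 = 2 \cdot 6^2$: the largest square dividing 72 is $ 36 = 6^2$, and the square-free part is $ r = 2$.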

If we want to count the number of square-free numbers less than or equal to $ n$, we can observe that each square-free number is a product of distinct primes, and so (as in our false proof that there are finitely many primes) each square-free number corresponds to a subset of primes. At worst, we can allow these primes to be as large as $ n$ (for example, if $ n$ itself is prime), so there are no more than $ 2^{\pi(n)}$ such subsets.

Similarly, there are at most $ \sqrt{n}$ square numbers less than or equal to $ n$, since if $ x > \sqrt{n}$ then $ x^2 > n$.

At worst the two factors $ r, s^2$ are completely unrelated, so the total number of products $ rs^2$ is at most the product of the number of choices for each factor, namely $ 2^{\pi(n)}\sqrt{n}$. Since every number from 1 to $ n$ has such a factorization, we get

$ 2^{\pi(n)}\sqrt{n} \geq n$

The rest is algebra: divide both sides by $ \sqrt{n}$ to get $ 2^{\pi(n)} \geq \sqrt{n}$, and take logarithms (base 2) to see that $ \pi(n) \geq \frac{1}{2} \log_2(n)$. Since $ \log_2(n)$ is unbounded as $ n$ grows, so is $ \pi(n)$. Hence, there are infinitely many primes.
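
As a quick numerical sanity check of this bound (an illustration only; the primeCount helper below is our own, a simple sieve of Eratosthenes), one can compare $ \pi(n)$ with $ \frac{1}{2} \log_2(n)$ for a few values of $ n$:

import math

def primeCount(n):
   # Count the primes up to n with a simple sieve of Eratosthenes.
   isPrime = [True] * (n + 1)
   isPrime[0] = isPrime[1] = False
   for i in range(2, int(n**0.5) + 1):
      if isPrime[i]:
         for multiple in range(i * i, n + 1, i):
            isPrime[multiple] = False
   return sum(isPrime)

for n in [10, 100, 10000, 1000000]:
   print(n, primeCount(n), 0.5 * math.log2(n))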

Discussion: This is a classic counting argument originally discovered by Paul Erdős. One of the classical ways to investigate the properties of prime numbers is to try to estimate $ \pi(n)$. In fact, much of the work of the famous number theorists of the past involved giving good approximations of $ \pi(n)$ in terms of logarithms, usually by finding good upper and lower bounds and taking limits. Erdős’s proof is entirely in this spirit, although much tighter upper and lower bounds are known. The counting here is quite generous: we include many pairs $ r, s^2$ whose product is actually larger than $ n$. But for the purposes of proving there are infinitely many primes, this bound is about as elegant as one can find.

Complete Sequences and Magic Tricks

Numberphile posted a video today describing a neat trick based on complete sequences.

The mathematics here is pretty simple, but I noticed at the end of the video that Dr. Grime was constructing the cards by hand, when really this is a job for a computer program. I thought it would be a nice warmup exercise (and a treat to all of the Numberphile viewers) to write a program to construct the cards for any complete sequence.

For the sake of bringing some definitions into it, let’s review the idea of the card trick.

Definition: A sequence of integers $ a_n$ is complete if every natural number $ k$ can be represented as a sum of distinct numbers in $ a_n$.

The examples used in the video are the powers of two and the Fibonacci numbers. That the latter sequence is complete is a famous result known as Zeckendorf’s Theorem. Other famous complete sequences include the prime numbers (if you add 1 to the sequence) and the lazy caterer’s sequence.

Now the question is, given a complete sequence, how do we generate the cards from the video? One might recall our post on dynamic programming, in which we wrote a program to compute the optimal coin change for a given amount of money. Indeed, this method works, but we have some added structure in this problem: we know that any number can only be used once. In that case, it is easy to see that there is no need for dynamic programming. Instead, we just use a greedy algorithm.

That is, we can determine which cards contain a number $ k$ (that is, which numbers in the sequence occur in the representation of $ k$) as follows. Start with the largest number in the sequence that is at most $ k$, call it $ n$; then $ n$ shows up in the representation of $ k$. To find the next number in the representation, simply repeat the process with the difference $ k-n$: the next number to appear in the representation of $ k$ is the largest number in the sequence that is at most $ k-n$. We repeat this process until the difference becomes zero or we run out of numbers.
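
To make this concrete, here is a minimal sketch of the greedy step in Python (the greedyRepresentation helper is our own, for illustration); it assumes the sequence is listed in increasing order.

def greedyRepresentation(k, sequence):
   # Greedily take the largest sequence element that still fits.
   representation = []
   remainder = k
   for number in reversed(sequence):
      if number <= remainder:
         representation.append(number)
         remainder -= number
   return representation

print(greedyRepresentation(100, [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]))  # [89, 8, 3]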

One interesting fact about this algorithm is that it will always produce the representation with the fewest terms. That is, in performing the card trick, one is guaranteed the least amount of work in remembering which cards contained the hidden number, and the least amount of work in adding those numbers up.

To implement this algorithm in code, we used Javascript. We chose Javascript so that the reader can run the program and view its source. But we extract the most important bit of the code here:

// compute each card
var i, j;
for (i = 1; i <= usersMax; i++) {
   var remainder = i;
   // check the sequence from largest to smallest, greedily subtracting
   for (j = numbers.length - 1; j >= 0; j--) {
      if (numbers[j] <= remainder) {
         cards[j].push(i);   // the number i belongs on card j
         remainder -= numbers[j];
      }
   }
}

In words, the numbers array contains the complete sequence in question, and the cards array is an array of arrays, where each inner array represents one card. We then loop over all numbers up to the maximum (the $ i$ variable), and for each one we check the sequence in decreasing order to decide which cards $ i$ belongs on.

For the sake of brevity, we omit all of the code to deal with input and output, which comprises all of the remaining code in this program.

So to use the program, for example with the prime numbers as the complete sequence, one would type “1,2,3,5,7,11,13,17” into the first text box, put whatever the maximum allowable guess is in the second box, and click the “Generate cards” button.

Enjoy!

Infinitely Many Primes (Using Topology)

Problem: Prove there are infinitely many prime numbers.

Solution: First recall that an arithmetic progression with difference $ d$ is a sequence of integers $ a_n \subset \mathbb{Z}$ so that for every pair of consecutive terms $ a_k, a_{k+1}$ the difference is $ a_{k+1} - a_k = d$.

We proceed by defining a topology on the set of integers, given by a basis $ B$ of unbounded (in both directions) arithmetic progressions. That is, an open set in this topology is an arbitrary union of arithmetic progressions from $ -\infty$ to $ \infty$. To verify this makes sense, we need only check that $ B$ is a basis. Indeed, $ B$ covers $ \mathbb{Z}$ since $ \mathbb{Z}$ is itself an arithmetic progression with difference 1. And any two arithmetic progressions $ a_n, b_n$ intersect in a set which is either empty or again an arithmetic progression: if the two progressions have equal difference then the intersection is empty or $ a_n = b_n$, and if their differences are $ d_1 \neq d_2$, then the intersection, when nonempty, is an arithmetic progression with difference $ \textup{lcm}(d_1, d_2)$. (We discussed this idea in a different context in our post on design with prime numbers.)
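
For example, the progression through 1 with difference 4 (the integers congruent to $ 1 \mod 4$) and the progression through 1 with difference 6 intersect in exactly the integers congruent to $ 1 \mod 12$, an arithmetic progression with difference $ \textup{lcm}(4,6) = 12$.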

Now we notice that the basis sets are both open (by definition) and closed. The latter follows from the fact that an arithmetic progression has only finitely many shifts. That is, if $ a_n$ is an arithmetic progression with difference $ d$, then its complement is the union of all the shifted progressions $ a_n + k$ where $ k$ ranges from 1 to $ d-1$ (and each of these sets is open, so the union is open as well).

Further we notice that no nonempty finite set can be open. Indeed, the arithmetic progressions which form our basis are all infinite, and any nonempty open set contains some basis set, so every nonempty open set in this topology must be infinite.

Now for a prime $ p$, let $ U_p$ be the arithmetic progression with difference $ p$ containing zero, and let $ U$ be the union of all $ U_p$. Supposing there were only finitely many primes, $ U$ would be a finite union of closed sets and hence closed in this topology. But the only integers which are not multiples of some prime are $ \pm 1$, so $ \mathbb{Z} - U = \{ 1, -1 \}$. Since $ U$ is closed, its complement is open, but as we mentioned above no nonempty finite sets are open. This is a contradiction, and so there must be infinitely many primes. $ \square$

Discussion: Even though the definition of a topology is basic knowledge for a mathematician, this proof shows that a topology alone can carry a lot of interesting structure. Even more so, the ability to pick a topology on a space to suit one’s needs is a powerful tool. It reminds this author of the more elementary idea of picking colorings on a chessboard. Indeed, the entire field of algebraic geometry lives in the setting of one particular kind of topology on affine or projective space. As one might expect, the open and closed sets in a custom topology must be related to the task at hand. In the case of algebraic geometry, the closed sets are the solution sets of families of polynomials in finitely many variables. Considering how important polynomials are in mathematics, one can imagine how complex and rich the resulting theories are. We plan to cover both basic topology and algebraic geometry in the future of this blog.