**Problem:** Determine if two polynomial expressions represent the same function. Specifically, if $p(x_1, \dots, x_n)$ and $q(x_1, \dots, x_n)$ are polynomials with inputs, outputs, and coefficients in a field $F$, where $|F|$ is sufficiently large, then the problem is to determine whether $p(x_1, \dots, x_n) = q(x_1, \dots, x_n)$ for every $x_1, \dots, x_n \in F$, in time polynomial in the number of bits required to write down $p$ and $q$.

**Solution:** Let $d$ be the maximum total degree of all terms in $p$ and $q$. Choose a finite set $S \subseteq F$ with $|S| > 2d$. Repeat the following process 100 times:

- Choose inputs $z_1, \dots, z_n \in S$ uniformly at random.
- Check if $p(z_1, \dots, z_n) = q(z_1, \dots, z_n)$.

If every single time the two polynomials agree, accept the claim that they are equal. If they disagree on any input, reject. You will be wrong with probability at most $2^{-100}$: by the lemma below, when $p \neq q$ each trial catches a disagreement with probability greater than $1/2$, since $|S| > 2d$.
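A minimal sketch of this test in Python, treating the polynomials as black-box functions evaluated over the integers mod a prime (the function and variable names here are illustrative, not from the original):

```python
import random

def polynomials_agree(p, q, num_vars, degree, prime, trials=100):
    """Randomized identity test.  p and q are black-box functions
    evaluating polynomials over Z/prime; we require prime > 2*degree
    so that S = {0, ..., prime - 1} satisfies |S| > 2d."""
    assert prime > 2 * degree
    for _ in range(trials):
        z = [random.randrange(prime) for _ in range(num_vars)]
        if p(*z) % prime != q(*z) % prime:
            return False  # a single disagreement certifies p != q
    return True  # wrong with probability at most 2**(-trials)

# (x + y)^2 and x^2 + 2xy + y^2 agree everywhere; x^2 + y^2 differs from both.
p = lambda x, y: (x + y) ** 2
q = lambda x, y: x * x + 2 * x * y + y * y
r = lambda x, y: x * x + y * y
```

Note that a "reject" answer is always correct, while an "accept" answer is only probably correct.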

**Discussion:** At first glance it’s unclear why this problem is hard.

If you have two representations of polynomials $p$ and $q$, say expressed in algebraic notation, why can’t you just do the algebra to convert them both into the same format, and see if they’re equal?

Unfortunately, that conversion can take exponential time. For example, suppose you have the polynomial $p(x) = (x + 1)^{2^{100}}$. Though it only takes a few bits to write down, expressing it in a “canonical form,” such as the monomial form $\sum_i a_i x^i$, would require exponentially many bits in the size of the original representation. In general, it’s unknown how to algorithmically transform polynomials into a “canonical form” (so that they can be compared) in subexponential time.
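To make the blow-up concrete, here’s a quick back-of-the-envelope computation (illustrative, not from the original) comparing the two representations of $(x + 1)^{2^k}$:

```python
# Compare the cost of two representations of p(x) = (x + 1)^(2^k):
# the factored form only needs to write the exponent n = 2^k (about k bits),
# while the monomial expansion sum_i C(n, i) x^i has n + 1 nonzero terms.
for k in (10, 20, 30):
    n = 2 ** k
    exponent_bits = n.bit_length()  # bits to write down the exponent n
    monomial_terms = n + 1          # every binomial coefficient C(n, i) is nonzero
    print(f"k={k}: {exponent_bits} bits to write vs {monomial_terms} terms to expand")
```

The description length grows linearly in $k$ while the number of monomials grows like $2^k$.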

Instead, the best we know how to do is treat the polynomials as black boxes and plug values into them.

Indeed, for single-variable polynomials it’s well known that a nonzero polynomial of degree $d$ has at most $d$ roots. A similar result is true for polynomials with many variables, and so we can apply that result to the polynomial $p - q$ to determine if $p = q$. This theorem is so important (and easy to prove) that it deserves the name of *lemma*.

**The Schwartz-Zippel lemma.** Let $p(x_1, \dots, x_n)$ be a nonzero polynomial of total degree $d$ over a field $F$. Let $S$ be a finite subset of $F$ and let $r_1, \dots, r_n$ be chosen uniformly at random from $S$. The probability that $p(r_1, \dots, r_n) = 0$ is at most $d / |S|$.
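As a sanity check (my own, not part of the original text), we can exhaustively count the zeros of a small example and compare against the bound. Take $p(x, y) = xy(x - 1)$, which is nonzero with total degree $d = 3$, and $S = \{0, 1, \dots, 10\}$:

```python
# p(x, y) = x * y * (x - 1) is a nonzero polynomial of total degree d = 3.
def p(x, y):
    return x * y * (x - 1)

S = range(11)  # |S| = 11
zeros = sum(1 for x in S for y in S if p(x, y) == 0)  # x = 0, x = 1, or y = 0
fraction = zeros / len(S) ** 2
# The lemma guarantees fraction <= d / |S| = 3 / 11.
print(zeros, fraction)
```

Here the zero set covers 31 of the 121 grid points, a fraction of about 0.256, just under the guaranteed bound of $3/11 \approx 0.273$.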

*Proof.* By induction on the number of variables $n$. For the case of $n = 1$, it’s the usual fact that a nonzero single-variable polynomial of degree $d$ can have at most $d$ roots. Now for the inductive step, assume this is true for all polynomials with $n - 1$ variables, and we will prove it for $n$ variables. Write $p$ as a polynomial in the variable $x_1$, whose coefficients are other polynomials:

$$p(x_1, \dots, x_n) = \sum_{i=0}^{d} x_1^i \, p_i(x_2, \dots, x_n)$$

Here we’ve grouped by the powers of $x_1$, so that the $p_i$ are the coefficients of each $x_1^i$. This is useful because we’ll be able to apply the inductive hypothesis to one of the $p_i$’s, which have fewer variables.
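The grouping step can be made concrete with a toy representation (of my own choosing) that stores a polynomial as a map from exponent tuples to coefficients:

```python
# Store a polynomial as {(e1, ..., en): coefficient} and group its terms
# by the power of x1, yielding the coefficient polynomials p_i(x2, ..., xn).
def group_by_x1(poly):
    groups = {}
    for exponents, coeff in poly.items():
        i, rest = exponents[0], exponents[1:]
        groups.setdefault(i, {})[rest] = coeff
    return groups

# p(x1, x2) = 3*x1^2*x2 + x1*x2^2 + 5
poly = {(2, 1): 3, (1, 2): 1, (0, 0): 5}
print(group_by_x1(poly))
# Groups: p_2 = 3*x2, p_1 = x2^2, p_0 = 5, each with one fewer variable.
```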

Indeed, we claim there must be some $p_k$ which is nonzero for $k \geq 1$. Clearly, since $p$ is not the zero polynomial, some $p_i$ must be nonzero. If the only nonzero $p_i$ is $p_0$, then we’re done, because $p = p_0$ doesn’t depend on $x_1$ at all and the inductive hypothesis applies to it directly. Otherwise, take the largest $k \geq 1$ for which $p_k$ is nonzero. It’s true that the degree of $p_k$ is at most $d - k$. This is true because the term $x_1^k \, p_k(x_2, \dots, x_n)$ has degree at most $d$.

By the inductive hypothesis, if we choose $r_2, \dots, r_n$ uniformly at random from $S$ and plug them into $p_k$, we get zero with probability at most $(d - k) / |S|$. The crucial part is that if this *polynomial* coefficient is nonzero, then the entire polynomial $p(x_1, r_2, \dots, r_n)$ is nonzero as a polynomial in $x_1$. This is true even if an unlucky choice of $r_1$ causes the resulting evaluation $p(r_1, r_2, \dots, r_n) = 0$.

To think about it a different way, imagine we’re evaluating the polynomial in phases. In the first phase, we pick the $r_2, \dots, r_n$. (We could also pick $r_1$ independently but not reveal what it is, for the sake of this story.) Then we plug in the $r_2, \dots, r_n$, and the result is a one-variable polynomial in $x_1$ whose largest coefficient is $p_k(r_2, \dots, r_n)$. The inductive hypothesis tells us that this one-variable polynomial is the zero polynomial with probability at most $(d - k) / |S|$. (It’s probably a smaller probability, since *all* the coefficients have to be zero, but we’re just considering the largest one for the sake of generality and simplicity.)

Indeed, when $p_k(r_2, \dots, r_n) \neq 0$, the resulting polynomial after we plug in $r_2, \dots, r_n$ has degree $k$, so we can apply the inductive hypothesis (here, the one-variable base case) to it as well, and the probability that it’s zero for a random choice of $r_1$ is at most $k / |S|$.
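The two-phase story above can be sketched for a toy polynomial of my own choosing, $p(x_1, x_2) = x_1^2 x_2 + x_1 (x_2 + 1)$, where $p_2 = x_2$, $p_1 = x_2 + 1$, and $p_0 = 0$:

```python
# Phase 1: fix r2, producing the coefficient list [p_0(r2), p_1(r2), p_2(r2)]
# of a one-variable polynomial in x1.
def phase_one(r2):
    return [0, r2 + 1, r2]

# Phase 2: evaluate that one-variable polynomial at r1.
def phase_two(coeffs, r1):
    return sum(c * r1 ** i for i, c in enumerate(coeffs))

# Phased evaluation matches direct evaluation of p(x1, x2) = x1^2*x2 + x1*(x2 + 1):
assert phase_two(phase_one(3), 2) == 2 ** 2 * 3 + 2 * (3 + 1)  # both equal 20
```

Phase 1 is where the inductive hypothesis on $n - 1$ variables applies; phase 2 is the one-variable root-counting fact.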

Finally, the probability that both occur can be computed using basic probability algebra. Let $A$ be the event that, for these random inputs $r_2, \dots, r_n$, the coefficient $p_k(r_2, \dots, r_n)$ is zero, and $B$ the event that $p(r_1, r_2, \dots, r_n)$ is zero for the $r_2, \dots, r_n$ and the additional random choice of $r_1$.

Then
$$\Pr[B] = \Pr[B \mid A] \Pr[A] + \Pr[B \mid \bar{A}] \Pr[\bar{A}].$$

Note the two quantities above that we don’t know are $\Pr[B \mid A]$ and $\Pr[\bar{A}]$, so we’ll bound them from above by 1. The rest of the quantities add up to exactly what we want, and so

$$\Pr[B] \leq \Pr[A] + \Pr[B \mid \bar{A}] \leq \frac{d - k}{|S|} + \frac{k}{|S|} = \frac{d}{|S|},$$

which proves the theorem.

While this theorem is almost trivial to prove (it’s elementary induction, and the obvious kind), it can be used to solve polynomial identity testing, as well as to find perfect matchings in graphs and to test numbers for primality.

But while the practical questions are largely solved (it’s hard to imagine a setting where you’d need faster primality testing than the existing randomized algorithms), the theory and philosophy of the result are much more interesting.

Indeed, checking two polynomials for equality has no known deterministic polynomial time algorithm. It’s one of a small class of problems, like integer factoring and the discrete logarithm, which are not known to be efficiently solvable in theory, but are also not known to be NP-hard, so there is still hope. The existence of this randomized algorithm increases hope (integer factorization sure doesn’t have one!). And more generally, the fact that there are so few natural problems in this class makes one wonder whether randomness is actually beneficial at all. From a polynomial-time-defined-as-efficient perspective, can every problem efficiently solvable with access to random bits also be solved without such access? In the computational complexity lingo, does P = BPP? Many experts think the answer is yes.