A parlor trick for SET

Tai-Danae Bradley is one of the hosts of PBS Infinite Series, a delightful series of vignettes into fun parts of math. The video below is about the game of SET, a favorite among mathematicians. Specifically, Tai-Danae explains how SET cards lie in (using more technical jargon) a vector space over a finite field, and how valid sets correspond to lines. If you don’t immediately see how this would work, watch the video.

In this post I want to share a parlor trick for SET that I originally heard from Charlotte Chan. It uses the same ideas from the video above, which I’ll only review briefly.

In the game of SET you see a board of cards like the following, and players look for sets.

[Image: a board of SET cards. Source: theboardgamefamily.com]

A valid set is a triple of cards where, feature by feature, the characteristics on the cards are either all the same or all different. A valid set above is {one empty blue oval, two solid blue ovals, three shaded blue ovals}. The feature of “fill” is different on all the cards, but the feature of “color” is the same, etc.

In a game of SET, the cards are dealt in order from a shuffled deck, players race to claim sets, removing the set if it’s valid, and three cards are dealt to replace the removed set. Eventually the deck is exhausted and the game is over, and the winner is the player who collected the most sets.

There are a handful of mathematical tricks you can use to help you search for sets faster, but the parlor trick in this post adds a fun variant to the end of the game.

Play the game of SET normally, but when you get down to the last card in the deck, don’t reveal it. Keep searching for sets until everyone agrees no visible sets are left. Then you start the variant: the first player to guess the last un-dealt card in the deck gets a bonus set.

The math comes in when you discover that you don’t need to guess, or remember anything about the game that was just played! A clever stranger could walk into the room at the end of the game and win the bonus point.

Theorem: As long as every set claimed during the game was a valid set, the information on the remaining board uniquely determines the last (un-dealt) card.

Before we get to the proof, some reminders. Recall that there are four features on a SET card, each of which has three options. Enumerate the options for each feature (e.g., {Squiggle, Oval, Diamond} = {0, 1, 2}).

While we will not need the geometry this induces, the enumeration identifies each card with a vector in the vector space $ \mathbb{F}_3^4$, where $ \mathbb{F}_3 = \mathbb{Z}/3\mathbb{Z}$ is the finite field of three elements, and the exponent means “dimension 4.” As Tai-Danae points out in the video, each valid set is an affine line in this vector space. For example, if this is the enumeration:

[Image: an enumeration of the features of a SET card. Source: “The Joy of Set”]

Then using the enumeration, a set might be given by

$ \displaystyle \{ (1, 1, 1, 1), (1, 2, 0, 1), (1, 0, 2, 1) \}$

The crucial feature for us is that the vector-sum (using the modular field arithmetic on each entry) of the cards in a valid set is the zero vector $ (0, 0, 0, 0)$. This is because each feature is either all the same, giving $ 0+0+0 = 0$, $ 1+1+1 = 0$, or $ 2+2+2 = 0$, or all different, giving $ 0+1+2 = 0$; all of these are true mod 3.
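You can check this for the example set above in two lines of Python (each card written as a 4-tuple of features):

    cards = [(1, 1, 1, 1), (1, 2, 0, 1), (1, 0, 2, 1)]
    print(tuple(sum(feature) % 3 for feature in zip(*cards)))  # (0, 0, 0, 0)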

Proof of Theorem. Consider the vector-valued invariant $ S_t$ equal to the sum of the remaining cards after $ t$ sets have been taken. At the beginning of the game the deck has 81 cards that can be partitioned into valid sets; for instance, the three cards that agree on the first three features and run through all three values of the fourth feature always form a valid set. Because each valid set sums to the zero vector, $ S_0 = (0, 0, 0, 0)$. Removing a valid set via normal play does not affect the invariant, because you’re subtracting a set of vectors whose sum is zero. So $ S_t = 0$ for all $ t$.

At the end of the game, the invariant still holds even if there are no valid sets left to claim. Let $ x$ be the vector corresponding to the last un-dealt card, and $ c_1, \dots, c_n$ be the remaining visible cards. Then $ x + \sum_{i=1}^n c_i = (0,0,0,0)$, meaning $ x = -\sum_{i=1}^n c_i$.

$ \square$

I would provide an example, but I want to encourage everyone to play a game of SET and try it out live!

Charlotte, who originally showed me this trick, was quick enough to compute this sum in her head. So were the other math students we played SET with. It’s a bit easier than it seems since you can do the sum feature by feature. Even though I’ve known about this trick for years, I still require a piece of paper and a few minutes.

Because this is Math Intersect Programming, the reader is encouraged to implement this scheme as an exercise, and simulate a game of SET by removing randomly chosen valid sets to verify experimentally that this scheme works.
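Here’s one possible sketch in Python (the structure and names like `third_card` are my own, not a canonical implementation):

    import itertools
    import random

    def third_card(a, b):
        # Any two cards determine the unique third card completing a valid set:
        # feature-wise, the three values must sum to 0 mod 3.
        return tuple((-x - y) % 3 for x, y in zip(a, b))

    def find_valid_set(deck):
        cards = set(deck)
        for a, b in itertools.combinations(deck, 2):
            c = third_card(a, b)
            # Since a != b, the completing card c is automatically distinct
            # from both, so a membership test suffices.
            if c in cards:
                return a, b, c
        return None

    def last_card_trick():
        deck = list(itertools.product(range(3), repeat=4))
        random.shuffle(deck)
        hidden = deck.pop()  # the last, un-dealt card

        # Remove randomly chosen valid sets until none remain.
        while True:
            random.shuffle(deck)
            found = find_valid_set(deck)
            if found is None:
                break
            for card in found:
                deck.remove(card)

        # Predict the hidden card as the negated (mod 3) sum of what's left.
        prediction = tuple(-sum(f) % 3 for f in zip(*deck))
        return prediction == hidden

    print(all(last_card_trick() for _ in range(100)))  # True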

Until next time!

A Spectral Analysis of Moore Graphs

For a fixed integer $ r > 0$ and a fixed odd integer $ g$, a Moore graph is an $ r$-regular graph of girth $ g$ that has the minimum number of vertices $ n$ among all $ r$-regular graphs of girth $ g$.

(Recall, the girth of a graph is the length of its shortest cycle, and a graph is regular if all its vertices have the same degree.)

Problem (Hoffman-Singleton): Find a useful constraint on the relationship between $ n$ and $ r$ for Moore graphs of girth $ 5$ and degree $ r$.

Note: Excluding trivial Moore graphs with girth $ g=3$ or degree $ r=2$ (complete graphs and cycles), there are only two known Moore graphs: (a) the Petersen graph and (b) the Hoffman-Singleton graph:

[Image: the Hoffman-Singleton graph]

The solution to the problem shows that there are only a few cases left to check.

Solution: It is easy to show that the minimum number of vertices of a Moore graph of girth $ 5$ and degree $ r$ is $ 1 + r + r(r-1) = r^2 + 1$. Just consider the tree:

[Image: the breadth-first tree of a degree-3 Moore graph of girth 5 (the Petersen graph)]

This is the tree for $ r = 3$, but the argument is clear for any $ r$ from the branching pattern: a root, its $ r$ neighbors, and $ r(r-1)$ vertices at distance two, for a total of $ 1 + r + r(r-1)$. The girth condition guarantees these vertices are all distinct, since a repeated vertex would create a cycle of length at most 4.

Provided $ n = r^2 + 1$, we will prove that $ r$ must be either $ 3, 7,$ or $ 57$. The technique will be to analyze the eigenvalues of a special matrix derived from the Moore graph.

Let $ A$ be the adjacency matrix of the supposed Moore graph with these properties. Let $ B = A^2 = (b_{i,j})$. Using the girth and regularity we know:

  • $ b_{i,i} = r$ since each vertex has degree $ r$.
  • $ b_{i,j} = 0$ if $ (i,j)$ is an edge of $ G$, since a walk of length 2 from $ i$ to $ j$ together with the edge $ (i,j)$ would form a cycle of length 3, which is less than the girth.
  • $ b_{i,j} = 1$ if $ (i,j)$ is not an edge, because (using the tree idea above) every two non-adjacent vertices have exactly one neighbor in common.

Let $ J_n$ be the $ n \times n$ matrix of all 1’s and $ I_n$ the identity matrix. Then

$ \displaystyle B = rI_n + J_n - I_n - A.$

We use this matrix equation to generate two equations whose solutions will restrict $ r$. Since $ A$ is a real symmetric matrix, it has an orthonormal basis of eigenvectors $ v_1, \dots, v_n$ with eigenvalues $ \lambda_1 , \dots, \lambda_n$. Moreover, by regularity we know one of these vectors is the all 1’s vector, with eigenvalue $ r$. Call this $ v_1 = (1, \dots, 1), \lambda_1 = r$. By orthogonality of $ v_1$ with the other $ v_i$, we know that $ J_nv_i = 0$. We also know that, since $ A$ is an adjacency matrix with zeros on the diagonal, the trace of $ A$ is $ \sum_i \lambda_i = 0$.

Multiply both sides of the matrix equation above by any $ v_i$ with $ i > 1$ to get

$ \displaystyle \begin{aligned}A^2v_i &= rv_i - v_i - Av_i \\ \lambda_i^2v_i &= rv_i - v_i - \lambda_i v_i \end{aligned}$

Rearranging and factoring out $ v_i$ gives $ \lambda_i^2 + \lambda_i - (r-1) = 0$. Let $ z = 4r - 3$ (the discriminant of this quadratic); then each non-$ r$ eigenvalue must be one of the two roots: $ \mu_1 = (-1 + \sqrt{z}) / 2$ or $ \mu_2 = (-1 - \sqrt{z})/2$.

Say that $ \mu_1$ occurs $ a$ times and $ \mu_2$ occurs $ b$ times, then $ n = a + b + 1$. So we have the following equations.

$ \displaystyle \begin{aligned} a + b + 1 &= n \\ r + a \mu_1 + b\mu_2 &= 0 \end{aligned}$

From these equations you can derive that $ \sqrt{z}$ must be an integer: substituting the formulas for $ \mu_1, \mu_2$ into the second equation gives $ (a - b)\sqrt{z} = a + b - 2r = r^2 - 2r$, so if $ \sqrt{z}$ were irrational we would need $ a = b$, forcing $ r^2 = 2r$ and hence $ r = 2$, the trivial pentagon. So write $ \sqrt{z} = m$ for an integer $ m$, and as a consequence $ r = (m^2 + 3) / 4$. Substituting this into $ (a - b)m = r^2 - 2r$ and clearing denominators gives

$ \displaystyle m(m^3 – 2m – 16(a-b)) = 15$

Implying that $ m$ divides $ 15$, meaning $ m \in \{ 1, 3, 5, 15\}$, and as a consequence $ r \in \{ 1, 3, 7, 57\}$. Since a graph of degree 1 has no cycles at all, we are left with $ r \in \{3, 7, 57\}$, as claimed.

$ \square$
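As a quick empirical check (my own sketch, using numpy), the Petersen graph has $ r = 3$, so $ z = 9$ and the proof predicts eigenvalues $ 3$, $ \mu_1 = 1$, and $ \mu_2 = -2$:

    import numpy as np

    # Adjacency matrix of the Petersen graph: an outer 5-cycle (0..4),
    # an inner pentagram (5..9), and spokes joining i to i + 5.
    edges  = [(i, (i + 1) % 5) for i in range(5)]          # outer cycle
    edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
    edges += [(i, i + 5) for i in range(5)]                # spokes
    A = np.zeros((10, 10), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    print(np.round(np.linalg.eigvalsh(A)))
    # [-2. -2. -2. -2.  1.  1.  1.  1.  1.  3.]
    # Eigenvalue 3 once, 1 with multiplicity a = 5, and -2 with multiplicity
    # b = 4, so n = a + b + 1 = 10 and 3 + 5*1 + 4*(-2) = 0, as required.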

Discussion: This is a strikingly clever use of spectral graph theory to answer a question about combinatorics. Spectral graph theory is precisely that: the study of what linear algebra can tell us about graphs. For a deeper dive into spectral graph theory, see the guest post I wrote on With High Probability.

If you allow for even girth, there are a few extra (infinite families of) Moore graphs, see Wikipedia for a list.

With additional techniques, one can also disprove the existence of any Moore graphs that are not among the known ones, with the exception of a possible Moore graph of girth $ 5$ and degree $ 57$ on $ n = 3250$ vertices. It is unknown whether such a graph exists, but if it does, it is known that it cannot be vertex-transitive.

You should go out and find it or prove it doesn’t exist.

Hungry for more applications of linear algebra to combinatorics and computer science? The book Thirty-Three Miniatures is a fantastically entertaining book of linear algebra gems (it’s where I found the proof in this post). The exposition is lucid, and the chapters are short enough to read on my daily train commute.

Methods of Proof — Diagonalization

A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four”: direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever-growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and many more.

So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.

Diagonalization

Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.

The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, surjections, and bijections in two earlier posts in this series, but for new readers a bijection is just a one-to-one correspondence between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $ n$ to $ 2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.

Theorem: There is no bijection from the natural numbers $ \mathbb{N}$ to the real numbers $ \mathbb{R}$.

Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection $ f: \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $ k$ and I will spit out $ f(k)$, with the property that different $ k$ give different $ f(k)$, and every real number is hit by some natural number $ k$ (this is just what it means to be a one-to-one mapping).

First let me just do some setup. I claim that all we need to do is show that there is no bijection between $ \mathbb{N}$ and the real numbers between 0 and 1. In particular, I claim there is a bijection from $ (0,1)$ to all real numbers, so if there were a bijection from $ \mathbb{N} \to (0,1)$ then we could combine the two bijections to get one from $ \mathbb{N} \to \mathbb{R}$. To build a bijection from $ (0,1) \to \mathbb{R}$, split the interval in half: map $ x \in (0, 1/2)$ to $ 1/(2x) - 1$, which sweeps out $ (0, \infty)$, and map $ x \in [1/2, 1)$ to $ 1 - 1/(2 - 2x)$, which sweeps out $ (-\infty, 0]$. Each piece is strictly monotone onto its image, and the two images partition $ \mathbb{R}$, so together they form a bijection.
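(If you prefer a one-line formula, $ x \mapsto \tan(\pi(x - 1/2))$ also works, since the tangent function maps $ (-\pi/2, \pi/2)$ bijectively onto all of $ \mathbb{R}$.)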

Okay, setup is done. We just have to show there is no bijection between $ (0,1)$ and the natural numbers.

The reason I did all that setup is so that I can use the fact that every real number in $ (0,1)$ has an infinite binary expansion whose only nonzero digits come after the point. And so I’ll write down the expansion of $ f(1)$ as a row in a table (an infinite row), and below it I’ll write down the expansion of $ f(2)$, below that $ f(3)$, and so on, with the points lined up. The table looks like this.

[Image: the table, whose rows are the binary expansions of $ f(1), f(2), f(3), \dots$]

The $ d$’s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of $ f(1)$ by $ b_{1,1}, b_{1,2}, b_{1,3}, \dots$, the digits of $ f(2)$ by $ b_{2,1}, b_{2,2}, b_{2,3}, \dots$, and so on. This makes the table look like this.

[Image: the same table, with the digit in row $ i$ and column $ j$ labeled $ b_{i,j}$]

It’s a bit harder to read, but trust me the notation is helpful.

Now by the assumption that $ f$ is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.

Here’s how I’ll come up with such a number $ N$ (this is the diagonalization part). It starts with “0.”, and its first digit after the point is $ 1-b_{1,1}$. That is, we flip the bit $ b_{1,1}$ to get the first digit of $ N$. The second digit is $ 1-b_{2,2}$, the third is $ 1-b_{3,3}$, and so on. In general, digit $ i$ is $ 1-b_{i,i}$.

Now we show that $ N$ isn’t in the table. If it were, then it would have to be $ N = f(m)$ for some $ m$, i.e. the $ m$-th row in the table. Moreover, by the way we built the table, the $ m$-th digit of $ N$ would be $ b_{m,m}$. But we defined $ N$ so that its $ m$-th digit is actually $ 1-b_{m,m}$. This is very embarrassing for $ N$ (it’s a contradiction!). So $ N$ isn’t in the table.

$ \square$
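For concreteness, here’s the diagonal flip in a few lines of Python (a toy sketch of mine, operating on a finite chunk of a table):

    def diagonal_flip(table):
        # table[i][j] is digit j of the i-th listed number; flipping the
        # diagonal digits b_{i,i} builds a row differing from row i in digit i.
        return [1 - table[i][i] for i in range(len(table))]

    table = [[0, 1, 0, 1],
             [1, 1, 1, 1],
             [0, 0, 0, 0],
             [1, 0, 1, 0]]
    print(diagonal_flip(table))  # [1, 0, 1, 1] -- not a row of the table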

It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?

The Halting Problem

The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.

The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program $ P$ and an input $ x$ to that program, will $ P$ ever stop running when given $ x$ as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!

So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.

Theorem: The halting problem cannot be solved by Turing machines.

Proof. Suppose to the contrary that $ T$ is a program that solves the halting problem. We’ll use $ T$ as a black box to come up with a new program I’ll call meta-$ T$, defined in pseudo-python as follows.

def metaT(P):
   # T(P, x) is assumed to answer True iff program P halts on input x.
   if T(P, P):
      while True:   # T says P halts on its own source; do the opposite
         pass
   else:
      return "success!"   # T says P loops on its own source; halt instead

In words, meta-$ T$ accepts as input the source code of a program $ P$, and then uses $ T$ to tell whether $ P$ halts when given its own source code as input. Based on the result, it does the opposite: if $ T$ says $ P$ halts then meta-$ T$ loops infinitely, and vice versa. It’s a little meta, right?

Now let’s do something crazy: let’s run meta-$ T$ on itself! That is, run

metaT(metaT)

So meta. The question is what is the output of this call? The meta-$ T$ program uses $ T$ to determine whether meta-$ T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$ T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $ T$’s answer! Likewise, if $ T$ says that metaT(metaT) should loop infinitely, that will cause meta-$ T$ to halt, a contradiction. So $ T$ cannot be correct, and the halting problem can’t be solved.

$ \square$

This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.

But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $ \mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
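To see the first claim concretely, here’s a small generator (an illustration of mine, nothing canonical) that lists every binary string, first by length and then lexicographically; filtering out the strings that aren’t syntactically valid programs gives the claimed enumeration of programs:

    from itertools import count, product

    def all_strings(alphabet="01"):
        # Yield every finite string: first by length, then lexicographically.
        yield ""
        for n in count(1):
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)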

The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.

For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:

010101010101010101...

Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry $ (x,P)$ a 1 if $ P(x)$ halts and a 0 otherwise.


[Image: the halting-problem table, with programs $ P_j$ as columns and inputs $ x_i$ as rows]

Here $ b_{i,j}$ is 1 if $ P_j(x_i)$ halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.

Now assume for contradiction’s sake that some program solves the halting problem, i.e. that every entry of the table is computable. We’ll construct the answers output by meta-$ T$ by flipping each bit of the diagonal of the table. The point is that meta-$ T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$ T$. Then we argue that the entry of the table at $ (\textup{meta-}T, \textup{meta-}T)$ contradicts its definition, and we’re done!

So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool to have in your proof repertoire.

Until next time!

Learning a single-variable polynomial, or the power of adaptive queries

Problem: Alice chooses a secret polynomial $ p(x)$ with nonnegative integer coefficients. Bob wants to discover this polynomial by querying Alice for the value of $ p(x)$ for some integer $ x$ of Bob’s choice. What is the minimal number of queries Bob needs to determine $ p(x)$ exactly?

Solution: Two queries. The first is $ p(1)$, and if we call $ N = p(1) + 1$, then the second query is $ p(N)$.

To someone who is familiar with polynomials, this may seem shocking, and I’ll explain why it works in a second. After all, it’s very easy to prove that if Bob gives Alice all of his queries at the same time (if the queries are not adaptive), then it’s impossible to discover what $ p(x)$ is using fewer than $ \textup{deg}(p) + 1$ queries. This is due to a fact called polynomial interpolation, which we’ve seen on this blog before in the context of secret sharing. Specifically, there is a unique single-variable polynomial of degree at most $ d$ passing through $ d+1$ points (with distinct $ x$-values). So if you knew the degree of $ p$, you could determine it easily. But Bob doesn’t know the degree of the polynomial, and there’s no way he can figure it out without adaptive queries! Indeed, if Bob gives a set of $ d$ queries, Alice could have easily picked a polynomial of degree $ d+1$. So it’s literally impossible to solve this problem without adaptive queries.

The lovely fact is that once you allow adaptiveness, the number of queries you need doesn’t even depend on the degree of the secret polynomial!

Okay let’s get to the solution. It was crucial that our polynomial had nonnegative integer coefficients, because we’re going to do a tiny bit of number theory. Let $ p(x) = a_0 + a_1 x + \dots + a_d x^d$. First, note that $ p(1)$ is exactly the sum of the coefficients $ \sum_i a_i$, and in particular $ p(1) + 1$ is larger than any single coefficient. So call this $ N$, and query $ p(N)$. This gives us a number $ y_0$ of the form

$ \displaystyle y_0 = a_0 + a_1N + a_2N^2 + \dots + a_dN^d$

And because $ N$ is so big, we can read off $ a_0$ by computing $ y_0 \bmod N$. Now set $ y_1 = (y_0 - a_0) / N$, which has the form $ a_1 + a_2N + \dots + a_dN^{d-1}$. We can reduce mod $ N$ again to get $ a_1$, and repeat until we have all the coefficients. We stop once we reach a $ y_i$ that is zero.
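In code, the whole recovery might look like this (a quick sketch; the argument `p` stands in for Alice’s oracle):

    def learn_polynomial(p):
        # Two adaptive queries suffice when p has nonnegative integer coefficients.
        N = p(1) + 1          # p(1) is the sum of coefficients, so N > every a_i
        y = p(N)              # the base-N digits of y are exactly the a_i
        coefficients = []
        while y > 0:
            coefficients.append(y % N)  # peel off the lowest coefficient
            y //= N
        return coefficients   # coefficients[i] = a_i

    print(learn_polynomial(lambda x: 2 + 3 * x + x ** 3))  # [2, 3, 0, 1]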

[Addendum 2018-02-14: implementation on github]

As a small technical note, this is a polynomial-time algorithm in the number of bits needed to write down $ p(x)$. So this demonstrates the power of adaptive queries: we go from something that is impossible with any fixed number of non-adaptive queries to something that is efficiently computable with just two adaptive queries.

The obvious follow-up question is: can you come up with an efficient algorithm if we allow the coefficients to be negative integers?