Problem: Determine if a number is prime, with an acceptably small error rate.
Solution: (in Python)
import random

def decompose(n):
    exponentOfTwo = 0
    while n % 2 == 0:
        n = n // 2
        exponentOfTwo += 1
    return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
    possibleWitness = pow(possibleWitness, remainder, p)
    if possibleWitness == 1 or possibleWitness == p - 1:
        return False
    for _ in range(exponent):
        possibleWitness = pow(possibleWitness, 2, p)
        if possibleWitness == p - 1:
            return False
    return True

def probablyPrime(p, accuracy=100):
    if p == 2 or p == 3: return True
    if p < 2: return False

    exponent, remainder = decompose(p - 1)

    for _ in range(accuracy):
        possibleWitness = random.randint(2, p - 2)
        if isWitness(possibleWitness, p, exponent, remainder):
            return False

    return True
Discussion: This algorithm is known as the Miller-Rabin primality test, and it was a very important breakthrough in the study of probabilistic algorithms.
Efficiently testing whether a number is prime is a crucial problem in cryptography, because the security of many cryptosystems depends on the use of large randomly chosen primes. Indeed, we’ve seen one on this blog already which is in widespread use: RSA. Randomized algorithms also have quite useful applications in general, because it’s often the case that a solution which is correct with overwhelming probability is good enough for practice.
But from a theoretical and historical perspective, primality testing lay at the center of a huge problem in complexity theory. In particular, it is unknown whether algorithms which have access to randomness and can output probably correct answers are more powerful than those that don’t. The use of randomness in algorithms comes in a number of formalizations, the most prominent of which is called BPP (Bounded-error Probabilistic Polynomial time). The Miller-Rabin algorithm shows that primality testing is in BPP. On the other hand, problems solvable in polynomial time without randomness make up the class called P.
For a long time (from 1975 to 2002), it was unknown whether primality testing was in P or not. There are very few remaining important problems that have BPP algorithms but are not known to be in P. Polynomial identity testing is the main example, and until 2002 primality testing shared its title. Then the AKS algorithm of Agrawal, Kayal, and Saxena gave primality testing a deterministic polynomial-time algorithm. One might argue that (in theory) the Miller-Rabin test is now useless, but it’s still a nice example of a nontrivial BPP algorithm.
The algorithm relies on the following theorem:
Theorem: if $p$ is a prime, let $2^s$ be the maximal power of 2 dividing $p-1$, so that $p - 1 = 2^s d$ and $d$ is odd. Then for any $1 \leq a \leq p - 1$, one of two things happens:

- $a^d \equiv 1 \mod p$, or
- $a^{2^j d} \equiv -1 \mod p$ for some $0 \leq j < s$.
The algorithm then simply operates as follows: pick a nonzero $a$ at random and check whether both of the above conditions fail. Such an $a$ is called a witness for the fact that $p$ is composite. If $p$ is not a prime, then there is at least a 3/4 chance that a randomly chosen $a$ will be a witness.
We leave the proof of the theorem as an exercise. Start with the fact that $a^{p-1} \equiv 1 \mod p$ (this is Fermat’s Little Theorem). Then take square roots inductively (a square root of 1 mod a prime $p$ must be $\pm 1 \mod p$), and continue until you reach $a^d$.
The Python code above uses Python’s built-in modular exponentiation function pow to compute fast modular exponents. The isWitness function first checks $a^d$ and then all powers $a^{2^j d}$. The probablyPrime function then simply generates random potential witnesses and checks them via the previous function. The output of the function is True if and only if all of the needed modular equivalences hold for all witnesses inspected. The endpoints 2 and $p - 2$ are chosen because $a = 1$ and $a = p - 1$ always satisfy the conditions of the theorem, and so can never be witnesses.
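As a sanity check on the 3/4 bound from the theorem, we can count witnesses for a small composite number directly. This is a standalone sketch re-implementing the helpers above (the function names and the test value 221 are my own choices):

```python
def decompose(n):
    # Write n = 2^s * d with d odd.
    s = 0
    while n % 2 == 0:
        n //= 2
        s += 1
    return s, n

def is_witness(a, p, s, d):
    # Return True if a certifies that p is composite.
    x = pow(a, d, p)
    if x == 1 or x == p - 1:
        return False
    for _ in range(s - 1):
        x = pow(x, 2, p)
        if x == p - 1:
            return False
    return True

p = 221  # composite: 13 * 17
s, d = decompose(p - 1)
witnesses = sum(is_witness(a, p, s, d) for a in range(2, p - 1))
# At least 3/4 of the candidates in [2, p-2] should be witnesses.
assert witnesses / (p - 3) >= 0.75
```

For 221 the witness density is in fact far higher than 3/4; only a handful of bases (the "strong liars") fail to expose it as composite.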
There has been a lot of news recently on government surveillance of its citizens. The two biggest stories that have pervaded my news feeds are the protests in Turkey, which in particular have resulted in the oppression of social media users, and the recent revelations about the US National Security Agency’s widespread “backdoor” access to industry databases at Google, Verizon, Facebook, and others. It appears that the facts are in flux, as some companies have denied their involvement in this program, but regardless of the truth the eye of the public has landed firmly on questions of privacy.
Barack Obama weighed in on the controversy as well, being quoted as saying,
You can’t have 100% security and 100% privacy, and also zero inconvenience.
I don’t know what balance the US government hopes to strike, but what I do know is that privacy and convenience are technologically possible, and we need not relinquish security to attain it.
Before I elaborate, let me get my personal beliefs out of the way. I consider the threat of terrorism low compared to the hundreds of other ways I can die. I should know, as I have personally come within an inch of losing my life (when I was seven I was hit by a bus, pronounced dead, and revived). So I take traffic safety much more seriously than terrorism, and the usual statistics will back me up in claiming one would be irrational to do otherwise. On the other hand, I also believe that I only need so much privacy. So I don’t mind making much of my personal information public, and I opt in to every one of Google’s tracking services in the hopes that my user experience can be improved. Indeed it has been, as services like Google Now will, e.g., track my favorite bands for me based on my Google Play listening and purchasing habits, and alert me when there are concerts in my area. If only it could go one step further and alert me of trending topics in theoretical computer science! I have much more use for timely knowledge of these sorts of things than I do for the privacy of my Facebook posts. Of course, ideologically I’m against violating privacy as a matter of policy, but this is a different matter. One can personally loathe a specific genre of music and still recognize its value and others’ right to enjoy it.
But putting my personal beliefs aside, I want to make it clear that there is no technological barrier to maintaining privacy and utility. This may sound shocking, but it rings true to the theoretical computer scientist. Researchers in cryptography have experienced this feeling many times, that their wildest cryptographic dreams are not only possible but feasible! Public-key encryption and digital signatures, secret sharing on a public channel, zero-knowledge verification, and many other protocols have been realized quite soon after being imagined. There are still some engineering barriers to implementing these technologies efficiently in large-scale systems, but with demand and a few years of focused work there is nothing stopping them from being used by the public. I want to use this short post to describe two of the more recent ideas that have pervaded the crypto community and provide references for further reading.
Differential Privacy and Fully Homomorphic Encryption
There are two facts which are well known in theoretical computer science that the general public is not aware of. The first is about the privacy of databases:
There is a way to mine information from a database without the ability to inspect individual entries in the database.
This is known as differential privacy. The second is no less magical:
There are secure encryption schemes which allow one to run programs on encrypted data and produce encrypted results, without the ability to decrypt the data.
This is known as fully homomorphic encryption.
The implications of these two facts should be obvious: search engines need not know our queries but can still fetch us search results and mine our information to serve ads, Facebook need not have access to our personal data but may still accurately predict new friends, grocery stores can even know what products to place side by side without knowing what any individual customer has purchased. Banks could process our transactions without knowing the amounts involved, or even the parties involved. Perhaps most importantly, governments can have access to databases (in the form of differentially private queries) and mine for the existence of threats without violating any individual user’s privacy. If they get an indication of a terrorist threat, then they can use the usual channels (court orders) to get access to specific individual data.
It’s easy to argue that these techniques will never become mainstream enough for individuals to benefit from them. Indeed, we’ve had cryptography for many years, but few average users actively encrypt their communication, for lack of convenience. And then there are questions of policy: why would any company relinquish the ability to work directly with user data? And the cost of re-architecting existing services to use these technologies would be enough to dissuade most business leaders.
But the point of all this is that these are problems of policy that could in principle be solved without waiting for governments and corporations to get their act together. With enough demand for such services and with enough technologically-minded entrepreneurs (I’m looking at you, Silicon Valley), it would be just a matter of time before the world was differentially private. Mathematics cannot be revoked or legislated away.
Fully Homomorphic Encryption
A fully homomorphic encryption scheme is a normal encryption scheme (two functions “enc” and “dec” to encrypt and decrypt) with one additional function, which we’ll call “eval.” Very roughly, eval accepts as input the text of a program $f$ and a ciphertext, and produces as output a ciphertext such that decryption commutes with evaluation:

$\textup{dec}(\textup{eval}(f, \textup{enc}(m))) = f(m)$

That is, $m$ is our message, and eval runs $f$ on the encrypted version of our message. In practice this happens by lifting two operations, multiplication and addition, from plaintexts (which are usually number-representations of letters) to ciphertexts (again usually numbers). Once this is done one can simulate the functionality of an arbitrary circuit on the encrypted data without decrypting it. Those readers who have been following our category theory series will recognize these sorts of diagrams as being functorial. [Actually, at the time of this writing we have yet to look at functors, but we will soon!] So perhaps a better term would be “functorial encryption.”
I should emphasize: a fully homomorphic encryption scheme has the ability to run any computable function on the encrypted data. There is no loss of functionality in keeping the data private from the program runner. The main use of this is to maintain privacy while deferring large computations to the cloud. We do this all the time, e.g. in a search query, but it also applies to big websites like Reddit, which operate entirely on Amazon Web Services.
Fully homomorphic encryption was first envisaged by Rivest, Adleman (two of the inventors of RSA), and Dertouzos in the late seventies, mainly because the RSA encryption scheme is close to being homomorphic (one can multiply ciphertexts, but not add them). In 2009, Craig Gentry released the first working fully-homomorphic scheme based on the mathematical theory of ideal lattices, and later that year he (with a group of other researchers) came up with a second system that is arguably as simple as RSA; it operates on integers with modular arithmetic.
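To see the “close to being homomorphic” property concretely, here is a toy textbook-RSA example showing that multiplying two ciphertexts multiplies the underlying plaintexts, while adding them does not add the plaintexts. The tiny key values are invented for illustration and are completely insecure:

```python
# Toy RSA with artificially small parameters (insecure; illustration only).
p, q = 61, 53
n = p * q                   # modulus, 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 42, 55
# Multiplying ciphertexts corresponds to multiplying plaintexts:
assert dec(enc(a) * enc(b) % n) == (a * b) % n
# But adding ciphertexts does NOT correspond to adding plaintexts:
assert dec((enc(a) + enc(b)) % n) != (a + b) % n
```

The multiplicative property holds because $(m_1^e)(m_2^e) = (m_1 m_2)^e \mod n$; a fully homomorphic scheme must support addition as well, which is exactly what RSA lacks.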
Gentry has produced a lot of research since then in homomorphic encryption, but the interested reader should probably start with his tutorial paper describing his arithmetic-based system. From there, there are existing implementations in Python (using Sage) and C++, both of which are freely available on github.
Differential Privacy

The main idea of differential privacy is that one can add noise to statistical data to protect the identities of individual records. Slightly more rigorously, a randomized algorithm $A$ is said to be $\varepsilon$-differentially private if for all pairs of datasets (inputs) $D_1, D_2$ which differ on a single record, and all possible collections $S$ of outputs of $A$, the probability that $A(D_1)$ lands in $S$ is not significantly different from the probability that $A(D_2)$ does. In particular, their quotient is at most $e^{\varepsilon}$ (this choice of using $e$ is arbitrary, but makes the analysis nicer).
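A minimal sketch of the standard Laplace mechanism for a counting query makes this concrete. The dataset and the value of epsilon below are invented for illustration; a counting query changes by at most 1 when a single record changes (sensitivity 1), so Laplace noise of scale $1/\varepsilon$ suffices:

```python
import random

def laplace_noise(scale):
    # The difference of two iid exponentials is Laplace(0, scale) distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon):
    """Answer 'how many records satisfy predicate?' with epsilon-DP.
    A count has sensitivity 1, so noise of scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Invented toy dataset: ages of individuals.
ages = [23, 35, 47, 52, 29, 61, 44, 38]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
# The noisy answer is close to the true count (4), but no single
# individual's presence can be confidently inferred from it.
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for the guarantee that any one record’s presence barely changes the output distribution.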
The motivation for differential privacy came from two notable events in which companies released “anonymized” data which was partially de-anonymized because it was too specific. The first was the million-dollar Netflix Prize contest to develop a better recommendation algorithm, and the second was the release of the Massachusetts Group Insurance Commission medical database. As such, many companies are very strict with how they handle their user data, and information sharing in the medical community is practically nonexistent.
There are many known differentially private algorithms, and they’re much stronger than one would imagine at first. One can run random forests of decision trees, network trace analysis, query-click analysis, certain forms of clustering, and a whole host of combinatorial optimization problems. For a gentle introduction to differential privacy, see Christine Task’s lecture video, a Practical Beginner’s Guide to Differential Privacy. There is also an influential survey by Cynthia Dwork of Microsoft Research. These go into much more detail about the abilities and inabilities of differential privacy than I could do here.
If there’s one thing to take away from this discussion, it’s that efficient protocols for ensuring privacy are out there waiting to be implemented in software. So while we complain and listen to others complain about governments violating our liberties (and indeed, this discussion is extremely important to have), let’s do a little mathematics, do a little computer science, and figure out how to make privacy the standard of measure in software.
Until next time!
I’ve been spending a little less time on my blog recently than I’d like to, but for good reason: I’ve been attending two weeks of research conferences, I’m getting ready for a summer internship in cybersecurity, and I’ve finally chosen an advisor.
Visions, STOC, and CCC
I’ve been taking a break from the Midwest for the last two weeks to attend some of this year’s premier computer science theory conferences. The first (which is already over) was the Simons Institute Symposium on the Visions of the Theory of Computing. It wasn’t a typical computer science theory conference, because the Simons Institute exists to bring techniques from computer science to other fields. So there were many talks in the life sciences, ranging from neurobiology (modelling the brain as a computer) to nanochemistry (using DNA to engineer computers) to physics (quantum computing). These kinds of sciences are interesting to me, but pretty far beyond my knowledge base. Other talks were on social networks (and the web), designing markets to be efficient, and computational privacy.
All of these talks shared a common theme: how can we apply the computational “lens” to everything? Interestingly, there were some very deep philosophical questions that computational insight can give answers to. For instance, Christos Papadimitriou gave a talk on evolution and sex, and raised an interesting question: what is the place of sex in evolution? Briefly, the problem with sex is as follows. Say you have a perfect human: perfect health, perfect genes, perfect intelligence, perfect everything. Because of sex, all of this human’s offspring are guaranteed to be imperfect. If the “goal” of evolution is to promote the fitness of a species, it seems that sexual reproduction works against us. Why then, is sexual reproduction the norm and asexual reproduction the exception?
Indeed, the same patterns hold in computer science. Genetic algorithms are the analogue of sexual reproduction, while “simulated annealing” (gradient descent with random mutation) is the analogue of asexual reproduction. And there we can actually experiment, and (without insulting anyone too heavily) it’s largely the case that the asexual methods outperform sexual methods.
But with computational (mathematical) models, we can actually analyze them to see what’s going on. And in fact, genetic algorithms are not optimizing a fitness function. Instead, they are simultaneously optimizing fitness and entropy in the gene pool. One might argue this allows for maximal adaptability, and there are some connections to learning-theory methods that mirror evolution (see Multiplicative Weights Update Algorithm).
This is pretty cool to think about, but more deeply there are philosophical claims about how we can apply the computational lens to other fields. The idea is that computer scientists can create and analyze generative models which reflect the empirical phenomena we observe. Indeed, this mentality is in our blood. It stems from (in the words of my advisor) a “deeply religious” view that everything is computation. And even beyond “big data” and other popular trends in the news, researchers in other fields are increasingly appreciative of the algorithmic mindset computer scientists have to offer.
The second two conferences, the Symposium on the Theory of Computing (STOC) and the Conference on Computational Complexity (CCC) are much more technical. STOC is a whirlwind of a conference, with around 15 twenty-minute talks per day (30 in total with two running in parallel at a time). And I admit that I understand very little. As I write this I’m lost during a talk on an online version of the Steiner tree problem. But even so, it gives a great glimpse into the frontier of research, and tells me what things I should be looking at in my own work.
One new notion that seems omnipresent at STOC this year is the concept of differential privacy. The idea stems from recent controversies over privacy in “anonymized” database information that was released to the public being de-anonymized in clever ways. So the question becomes: is it possible for a database to release information from its databases without revealing any information about the contents of the databases? “Information” could mean simple things like means and medians, or more complicated things like principal components. It’s a question about the mechanism for releasing data, and it’s inherently a computational problem: we want it to be (probabilistically) hard to accurately represent the data from the information provided. At a very high level, the idea is to add random noise to the data as you compute, and modify your methods to be resilient to such noise. There is a flurry of work being done on this topic, with a wealth of open problems waiting to be tackled. A related idea is that of homomorphic encryption, in which the user sends encrypted data to the compute server, the compute server computes things without decrypting the data, and then the user decrypts the result of the computation locally. The server, then, can have computational powers far beyond that of the user, but the user need not relinquish sensitive information to access that power.
Another popular topic (which I know even less about) is the idea of communication complexity and information complexity. Roughly, this field is concerned with two communicating parties with private information trying to compute a function that depends on both of their information. The communication complexity question is how can they do this while spending the least amount of resources on communication. The information complexity concern is to keep as much information private as possible, both between the two parties involved and any nefarious outsiders listening in on the conversation.
It’s amazing to me how much privacy concerns have permeated theoretical computer science. The usual picture of a mathematician is of a wiry-haired chalk-dust-covered professor with no concern for the world outside his or her own mind. But in reality, we are actively working on the problems faced by average users, to protect them against the dangers of small groups of individuals making bad decisions with their sensitive data. Anyone worried about the upcoming dystopia posed by the masses of available data is apparently not informed of the magical powers of modern cryptography. It seems like it will only be a few years before these kinds of systems become feasible for large-scale applications. Maybe I’ll be lucky enough to have a hand in that.
Starting Wednesday is the CCC, which will likely be concerned with more “pure” kinds of theoretical computer science research. I’m pretty excited for it, too (although I know by the end I will be exhausted; I’m already feeling the strain today).
MIT Lincoln Lab
This summer (actually, in a few days), I’ll be at MIT’s Lincoln Lab to work on problems in cybersecurity. I still don’t know quite what I’ll be working on (and I probably won’t be at liberty to describe it when I do find out), but vaguely the problem is in finding the right data representations for massive amounts of data so as to reason about it.
I know that the idea of the government analyzing the world’s internet activity is a scary thought, but I’m going to do my best to go into this with an open mind. At the very least, I’ll be familiar enough with the kind of work they do there to decide if it violates any of my principles. I had minor worries about this last year when I went to work at Lawrence Livermore, but by the time I got there it was quite obvious that my work would keep me far far away from weapons development and closer to problems with renewable energy.
And at least it will be fun to live on the east coast for a little while. In anticipation I’ve joined the Boston Python Users group, so it will be fun to go to some of their project nights and talk math and programming with like-minded people in the area.
Finally, I’m quite pleased to announce that I have secured an advisor for my PhD program! His name is Lev Reyzin, and he’s a learning theorist. (For the last year or so I’ve been meaning to start talking about learning theory on this blog). That means that I’m officially in the field of theoretical computer science.
I was a little bit late in choosing an advisor, mostly because I meandered around algebraic geometry and algebraic topology and logic before deciding that theoretical computer science was the right field for me.
It’s exciting to finally have an advisor because it implies a few things:
- I’ve passed all of my prelim exams.
- I can focus on one area and start to deeply understand important problems in theoretical computer science.
- My graduate life will shift from mainly course-taking to mainly research. Research is much more fun.
Lev is also a good fit for me both because I’m interested in learning theory, and because he has nontrivial experience in industry-focused research. He’s done work at Yahoo! Research and Google Research, in particular on improving their algorithms for placing news stories and ads on the front page. The formal name for this problem is the multi-armed bandit problem, and I’ll definitely be writing about it on this blog. While I’d probably be totally happy doing pure learning theory, I think it’s fun and worthwhile to look at wider applications. So I have a lot of optimism that the next few years will be fun and productive.
Until next time!
Last time we defined and gave some examples of rings. Recapping, a ring is a special kind of group with an additional multiplication operation that “plays nicely” with addition. The important thing to remember is that a ring is intended to remind us of arithmetic with integers (though not too much: multiplication in a ring need not be commutative). We proved some basic properties, like zero being unique and negation being well-behaved. We gave a quick definition of an integral domain as a place where the only way to multiply two things to get zero is when one of the multiplicands was already zero, and of a Euclidean domain where we can perform nice algorithms like the one for computing the greatest common divisor. Finally, we saw a very important example of the ring of polynomials.
In this post we’ll take a small step upward from looking at low-level features of rings, and start considering how general rings relate to each other. The reader familiar with this blog will notice many similarities between this post and our post on group theory. Indeed, the definitions here will be “motivated” by an attempt to replicate the same kinds of structures we found helpful in group theory (subgroups, homomorphisms, quotients, and kernels). And because rings are also abelian groups, we will play fast and loose with a number of the proofs here by relying on facts we proved in our two-post series on group theory. The ideas assume a decidedly distinct flavor here (mostly in ideals), and in future posts we will see how this affects the computational aspects in more detail.
Homomorphisms and Sub-structures
The first question we should ask about rings is: what should mappings between rings look like? For groups they were set-functions which preserved the group operation ($f(xy) = f(x)f(y)$ for all $x, y$). The idea of preserving algebraic structure translates to rings, but as rings have two operations we must preserve both.
Definition: Let $R, S$ be rings, and let $f: R \to S$ be a function on the underlying sets. We call $f$ a ring homomorphism if $f$ is a group homomorphism of the underlying abelian groups, $f(1_R) = 1_S$, and for all $x, y \in R$ we have $f(xy) = f(x)f(y)$. A bijective ring homomorphism is called an isomorphism.
Indeed, we have the usual properties of ring homomorphisms that we would expect. All ring homomorphisms preserve the additive and multiplicative identities, multiplicative inverses (when they exist), and things like zero-divisors. We leave these verifications to the reader.
Ring homomorphisms have the same kinds of constructions as group homomorphisms did. One of particular importance is the kernel $\ker f$, which is the preimage of zero under $f$, or equivalently the set $\{ x \in R : f(x) = 0 \}$. This is the same definition as for group homomorphisms, linear maps, etc.
The second question we want to ask about rings is: what is the appropriate notion of a sub-structure for rings? One possibility is obvious.
Definition: A subring $S$ of a ring $R$ is a subset $S \subset R$ which is also a ring under the same operations as those of $R$ (and with the same identities).
Unfortunately subrings do not have the properties we want them to have, and this is mostly because of the requirement that our rings have a multiplicative identity. For instance, we want to say that the kernel of a ring homomorphism $f: R \to S$ is a subring of $R$. This will obviously not be the case, because the multiplicative identity never maps to zero under a ring homomorphism! We also want an appropriate notion of a quotient ring, but if we quotient out (“declare to be zero”) a subring, then the identity will become zero, which yields all sorts of contradictions in all but the most trivial of rings. One nice thing, however, is that the image of a ring homomorphism $f: R \to S$ is a subring of $S$. It is a trivial exercise to prove.
Rather than modify our definition of a kernel or a quotient to make it work with the existing definition of a subring, we pick a different choice of an appropriate sub-structure: the ideal.
The Study of Ideals
From an elementary perspective, an ideal is easiest understood as a generalization of even numbers. Even integers have this nice property that multiplying any integer by an even integer gives you an even integer. This is a kind of closure property that mathematicians really love to think about, because they tend to lead to further interesting properties and constructions. Indeed, we can generalize it as follows:
Definition: Let $R$ be a commutative ring, and $I \subset R$. We call $I$ an ideal if two conditions are satisfied:

- $I$ is a subgroup of $R$ under addition.
- For all $r \in R$ and for all $x \in I$, we have $rx \in I$.
That is, an ideal is a subgroup closed under multiplication by arbitrary ring elements. The even integers as a subset of $\mathbb{Z}$ give a nice example, as does the set $n\mathbb{Z}$ of multiples of any fixed integer $n$. In fact, every ideal contains zero by virtue of being a subgroup. A slightly more complicated example is the set of polynomials divisible by $x$ as a subset of the polynomial ring $\mathbb{R}[x]$.
We have to be a little careful here if our ring is not commutative. Technically the definition above is for a left-ideal, which is closed under left-multiplication. You can also define right-ideals closed under right-multiplication, and the official name for a plain old “ideal” is a two-sided ideal. The only time we will ever work with noncommutative rings (that we can envision) is with matrix rings, so excluding that case our ideals will forevermore be two-sided. Often times we will use the notation $rI$ to denote the set of all possible left multiplications $\{ rx : x \in I \}$, and so the ideal condition is just $rI \subset I$. This is a multiplicative kind of coset, although as we are about to see we will use both additive and multiplicative cosets in talking about rings.
Let’s look at some properties of ideals that show how neatly this definition encapsulates what we want. First,
Proposition: The kernel of a ring homomorphism is an ideal.
Proof. Let $f: R \to S$ be a ring homomorphism, and let $I = \ker f$. We already know that $I$ is a subgroup of $R$ under addition, because kernels of group homomorphisms are subgroups. To show the ideal condition, simply take $r \in R$, $x \in I$ and note that $f(rx) = f(r)f(x) = f(r) \cdot 0 = 0$, and so $rx \in I$.
In fact the correspondence is one-to-one: every ideal is the kernel of a ring homomorphism and every ring homomorphism’s kernel is an ideal. This is not surprising as it was the case for groups, and the story starts here with quotients, too.
Definition: Let $R$ be a ring and $I$ an ideal of $R$. The quotient group $R/I$ (of the underlying additive group) forms a ring called the quotient ring, and is still denoted by $R/I$.
To show this definition makes any sense requires some thought: what are the operations of this new ring? Are they well-defined on coset representatives?
For the first, we already know the addition operation because it is the same operation as when we view $R/I$ as a quotient group; that is, $(a + I) + (b + I) = (a + b) + I$. The additive identity is $0 + I = I$. The multiplication operation is similar: $(a + I)(b + I) = ab + I$. And the multiplicative identity is clearly $1 + I$.
The fact that multiplication works as we said it does above gives more motivation for the definition of an ideal. To prove it, pick any representatives $a + x$ and $b + y$, where $x, y \in I$. Their product is

$(a + x)(b + y) = ab + ay + bx + xy$

where each side represents its coset in $R/I$. The condition for an ideal to be an ideal is precisely that the weird parts, $ay + bx + xy$, land in $I$. And indeed, $I$ is an additive group, so sums of things in $I$ are in $I$, and moreover $I$ is closed under multiplication by arbitrary elements of $R$. More rigorously, $ab + ay + bx + xy$ is equivalent to $ab$ under the coset equivalence relation because their difference $ay + bx + xy$ is a member of $I$.
Now we can realize that every ideal is the kernel of some homomorphism. Indeed, if $I$ is an ideal of $R$, then there is a natural map $\pi: R \to R/I$ (called the canonical projection, see our post on universal properties for more) defined by sending an element to its coset: $\pi(r) = r + I$. It should be obvious to the reader that the kernel of $\pi$ is exactly $I$ (because $r + I = 0 + I$ if and only if $r \in I$; this is a fact we borrow from groups). And so there is a correspondence between kernels and ideals. They are really the same concept manifested in two different ways.
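The prototypical example is $R = \mathbb{Z}$ with $I = n\mathbb{Z}$, where the canonical projection is reduction mod $n$. A quick numerical sanity check that multiplication is well-defined on cosets, using $n = 6$ (my own choice):

```python
n = 6  # the ideal I = nZ inside Z; cosets are residues mod n

def proj(r):
    """The canonical projection Z -> Z/nZ, sending r to its coset."""
    return r % n

# Multiplication is well-defined: shifting representatives by elements
# of the ideal (multiples of n) doesn't change the product's coset.
for a in range(-10, 10):
    for b in range(-10, 10):
        for x in (0, n, -2 * n, 5 * n):      # x, y range over the ideal
            for y in (0, n, 3 * n, -n):
                assert proj((a + x) * (b + y)) == proj(a * b)

# The kernel of the projection is exactly the ideal.
assert all(proj(k * n) == 0 for k in range(-5, 6))
```

This is exactly the coset computation from the proof above, with $ay + bx + xy$ always a multiple of 6.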
Because we have quotient rings and ring homomorphisms, we actually get a number of “theorems” for free. These are the isomorphism theorems for rings, the most important one being the analogue of the First Isomorphism Theorem for groups. That is, if $f: R \to S$ is a ring homomorphism, $\textup{im}(f)$ is a subring of $S$, and moreover $R / \ker f \cong \textup{im}(f)$. We invite the reader to prove it by hand (start by defining a map $a + \ker f \mapsto f(a)$ and prove it’s a ring isomorphism). There are some other isomorphism theorems, but they’re not particularly deep; rather, they’re just commonly-applied corollaries of the first isomorphism theorem. In light of this blog’s discussions on category theory, the isomorphism theorems are a trivial consequence of the fact that rings have quotients, and that the quotient by a kernel happens to be well-behaved.
Noetherian Rings and Principal Ideal Domains
One cannot overestimate the importance of ideals as a fundamental concept in ring theory. They are literally the cornerstone of everything interesting in the subject, and especially the computational aspects of the field (more on that in future posts). So to study ideals, we make some basic definitions.
Definition: Let $S$ be any subset of the ring $R$. The ideal generated by $S$ is the smallest ideal of $R$ containing $S$, denoted $(S)$. It is “smallest” in the sense that all ideals containing $S$ must also contain $(S)$.

Indeed, we can realize $(S)$ directly as the set of all possible finite linear combinations

$r_1 s_1 + \dots + r_n s_n$

where the $r_i \in R$ and $s_i \in S$. Such a linear combination is called an $R$-linear combination because the “coefficients” come from $R$. It is not hard to see that this is an ideal. It is obviously a subgroup since sums of linear combinations are also linear combinations, and negation distributes across the sum. It satisfies multiplicative closure because

$r(r_1 s_1 + \dots + r_n s_n) = (r r_1) s_1 + \dots + (r r_n) s_n$

is another $R$-linear combination.

One convenient way to think of this ideal is as the intersection of all ideals which contain $S$. Since the intersection of any family of ideals is an ideal (check this!), this will always give us the smallest possible ideal containing $S$.

One important bit of notation is that if $I$ is the ideal generated by a finite set $S = \{ s_1, \dots, s_n \}$, then we write $I = (s_1, \dots, s_n)$. We say that $I$ is finitely generated. If $S$ happens to be a singleton $\{ s \}$, we write $I = (s)$ and say that $I$ is a principal ideal. Following the notation of linear algebra, a minimal (by cardinality) generating set for an ideal is called a basis for the ideal.
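For a concrete example, in the ring of integers the ideal generated by two elements is the principal ideal generated by their greatest common divisor. The following Python sketch checks this on a small example; the generators and the coefficient bound are arbitrary choices for illustration.

```python
from math import gcd

# In Z, the ideal (a, b) -- all Z-linear combinations r*a + s*b -- equals
# the principal ideal (gcd(a, b)). We verify both inclusions over a small
# range of coefficients.
a, b = 12, 20
combinations = {r * a + s * b for r in range(-20, 21) for s in range(-20, 21)}
g = gcd(a, b)

# every Z-linear combination of a and b is a multiple of g...
assert all(c % g == 0 for c in combinations)
# ...and g itself is such a combination (Bezout), so (a, b) = (g)
assert g in combinations
```
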
Thinking about how ideals are generated is extremely important both in the pure theory of rings and in the computational algorithms that deal with them. It’s so important, in fact, that there are entire classes of rings defined by how their ideals behave. We give the two most important ones here.
Definition: A commutative ring $R$ is called Noetherian if all of its ideals are finitely generated. An integral domain is called a principal ideal domain (PID) if all of its ideals are principal.
The concept of a Noetherian ring is a particularly juicy one, and it was made famous by the founding mother of commutative ring theory, Emmy Noether. Without going into too much detail, just as an integral domain is the most faithful abstraction of the ring of integers, a Noetherian ring is the best way to think about polynomial rings (and quotients of polynomial rings). This is highlighted by a few basic facts and some deep theorems:
Fact: If $R$ is a Noetherian ring and $I$ an ideal of $R$, then $R/I$ is Noetherian.

This follows from the correspondence of ideals between $R$ and $R/I$. Indeed, for every ideal $J \subset R/I$ its preimage $\pi^{-1}(J)$ is an ideal of $R$ which contains $I$. Moreover, this correspondence is a bijection. So if we want a generating set for $J$, we can lift it up to $R$ via $\pi^{-1}$, get a finite generating set for $\pi^{-1}(J)$, and project the generators back down to $R/I$. Some of the generators will be killed off (sent to $0$ by $\pi$), but what remains will be a valid generating set for $J$, and still finite.
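As a sanity check on this correspondence, here is a brute-force Python sketch for the ring $\mathbb{Z}/12\mathbb{Z}$ (the modulus is an arbitrary illustrative choice). Every ideal of the quotient turns out to be the projection of an ideal $d\mathbb{Z}$ of $\mathbb{Z}$ containing $12\mathbb{Z}$, i.e., one with $d$ dividing 12; the helper `is_ideal` is our own.

```python
from itertools import combinations

# Enumerate every ideal of Z/nZ by brute force, then compare against the
# projections of the ideals dZ of Z with d | n.
n = 12
R = set(range(n))

def is_ideal(S):
    """Check the ideal axioms for a subset S of Z/nZ."""
    return (0 in S
            and all((a + b) % n in S for a in S for b in S)
            and all((r * a) % n in S for r in R for a in S))

# every subset of R, kept only if it is an ideal
ideals = {frozenset(S) for k in range(n + 1)
          for S in combinations(R, k) if is_ideal(set(S))}

# the projections pi(dZ) for each divisor d of n
divisors = [d for d in range(1, n + 1) if n % d == 0]
projected = {frozenset((d * k) % n for k in range(n)) for d in divisors}
assert ideals == projected
```
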
Theorem (Hilbert Basis Theorem): If $R$ is a Noetherian ring, then the polynomial ring in one variable $R[x]$ is Noetherian. Inductively, $R[x_1, \dots, x_n]$ is also Noetherian for any finite $n$.
These theorems start to lay the foundation for algebraic geometry, which connects ideals generated by a family of polynomials to the geometric solution set of those polynomials. Since a vast array of industrial problems can be reduced to solving systems of polynomial equations (take robot motion planning, for example), it would be quite convenient if we could write programs to reason about those systems of equations. Indeed, such algorithms do exist, and we make heavy use of theorems like the Hilbert Basis Theorem to ensure their correctness.
In the next post in this series, we’ll start this journey through elementary algebraic geometry by defining a variety, and establishing its relationship to ideals of polynomial rings. We’ll then work towards theorems like Hilbert’s Nullstellensatz, the computational operations we wish to perform on ideals, and the algorithms that carry out those operations. As usual, we’ll eventually see some applications and write some code.
Previously in this series we’ve seen the definition of a category and a bunch of examples, basic properties of morphisms, and a first look at how to represent categories as types in ML. In this post we’ll expand these ideas and introduce the notion of a universal property. We’ll see examples from mathematics and write some programs which simultaneously prove certain objects have universal properties and construct the morphisms involved.
A Grand Simple Thing
One might go so far as to call universal properties the most important concept in category theory. This should initially strike the reader as odd, because at first glance universal properties are so succinctly described that they don’t seem to be very interesting. In fact, there are only two universal properties and they are that of being initial and final.
Definition: An object $A$ in a category $\mathcal{C}$ is called initial if for every object $B$ there is a unique morphism $A \to B$. An object $Z$ is called final if for every object $B$ there is a unique morphism $B \to Z$. If an object satisfies either of these properties, it is called universal. If an object satisfies both, it is called a zero object.

In both cases, the existence of a unique morphism is the same as saying the relevant Hom set is a singleton (i.e., for initial objects $A$, the Hom set $\textup{Hom}(A, B)$ consists of a single element for every $B$). There is one and only one morphism between the two objects. In the particular case of $\textup{Hom}(A, A)$ when $A$ is initial (or final), the definition of a category says there must be at least one morphism, the identity, and the universal property says there is no other.
There’s only one way such a simple definition could find fruitful applications, and that is by cleverly picking categories. Before we get to constructing interesting categories with useful universal objects, let’s recognize some universal objects in categories we already know.
In $\mathbf{Set}$ the single-element set is final, but not initial; there is exactly one set-function from any set to a single-element set. It is important to note that the single-element set is far from unique. There are infinitely many (uncountably many!) singleton sets, but as we have already seen all one-element sets are isomorphic in $\mathbf{Set}$ (they all have the same cardinality). On the other hand, the empty set is initial, since the “empty function” is the only set-mapping from the empty set to any set. Here the initial object truly is unique, and not just up to isomorphism.
It turns out universal objects are always unique up to isomorphism, when they exist. Here is the official statement.
Proposition: If $A, A'$ are both initial in $\mathcal{C}$, then $A \cong A'$. If $B, B'$ are both final, then $B \cong B'$.

Proof. Recall that a morphism $f : A \to A'$ is an isomorphism if it has a two-sided inverse, a $g : A' \to A$ so that $gf = 1_A$ and $fg = 1_{A'}$ are the identities. Now if $A, A'$ are two initial objects there are unique morphisms $f : A \to A'$ and $g : A' \to A$. Moreover, these compose to be morphisms $gf : A \to A$. But since $A$ is initial, the only morphism $A \to A$ is the identity. The situation for $fg : A' \to A'$ is analogous, and so these morphisms are actually inverses of each other, and $A, A'$ are isomorphic. The proof for final objects is identical. $\square$
Let’s continue with examples. In the category of groups, the trivial group is both initial and final, because group homomorphisms must preserve the identity element. Hence the trivial group is a zero object. Again, “the” trivial group is not unique, but unique up to isomorphism.
In the category of types with computable (halting) functions as morphisms, the null type is final. To be honest, this depends on how we determine whether two computable functions are “equal.” In this case, we only care about the set of inputs and outputs, and for the null type all computable functions have the same output: null.
Partial order categories are examples of categories which need not have universal objects. If the partial order is constructed from the subsets of a set $X$ (ordered by inclusion), then the initial object is the empty set (by virtue of being a subset of every set), and $X$ as a subset of itself is obviously final. But there are other partial orders, such as inequality of integers, which have no “smallest” or “largest” objects. Partial order categories which have particularly nice properties (such as initial and final objects, but not quite exactly) are closely related to the concept of a domain in denotational semantics, and the language of universal properties is relevant to that discussion as well.
The place where universal properties really shine is in defining new constructions. For instance, the direct product of sets is defined by the fact that it satisfies a universal property. Such constructions abound in category theory, and they work via the ‘diagram categories’ we defined in our introductory post. Let’s investigate them now.
Let’s recall the classical definition from set theory of a quotient. We described special versions of quotients in the categories of groups and topological spaces, and we’ll see them all unified via the universal property of a quotient in a moment.
Definition: An equivalence relation $\sim$ on a set $X$ is a subset of the set product $X \times X$ which is reflexive, symmetric, and transitive. The quotient set $X/\sim$ is the set of equivalence classes on $X$. The canonical projection $\pi : X \to X/\sim$ is the map sending $x$ to its equivalence class under $\sim$.

The quotient set $X/\sim$ can also be described in terms of a special property: it is the “largest” set which agrees with the equivalence relation $\sim$. On one hand, it is the case that whenever $a \sim b$ in $X$ then $\pi(a) = \pi(b)$. Moreover, for any set $Y$ and any map $g : X \to Y$ which equates equivalent things ($g(a) = g(b)$ for all $a \sim b$), there is a unique map $f : X/\sim \to Y$ such that $g = f \circ \pi$. This word salad is best translated into a diagram.

Here we use a dashed line to assert the existence of a morphism (once we’ve proven such a morphism exists, we use a solid line instead), and the symbol $\exists !$ signifies existence ($\exists$) and uniqueness (!).

We can prove this explicitly in the category $\mathbf{Set}$. Indeed, if $g : X \to Y$ is any map such that $g(a) = g(b)$ for all equivalent $a, b \in X$, then we can define $f$ as follows: for any $a \in X$ whose equivalence class is denoted by $[a]$ in $X/\sim$, define $f([a]) = g(a)$. This map is well defined because if $[a] = [b]$, then $g(a) = g(b)$. It is unique because if $f \circ \pi = g$ for some other $f$, then $f([a]) = f(\pi(a)) = g(a)$; this is the only possible definition.
Now the “official” way to state this universal property is as follows:
The quotient set $X/\sim$ is universal with respect to the property of mapping $X$ to a set so that equivalent elements have the same image.

But as we said earlier, there are only two kinds of universal properties: initial and final. Now this looks suspiciously like an initial object (the unique map $f$ is going from $X/\sim$, after all), but what exactly is the category we’re considering? The trick to dissecting this sentence is to notice that this is not a statement about just $X/\sim$, but of the morphism $\pi$.

That is, we’re considering a diagram category. In more detail: fix an object $X$ in $\mathbf{Set}$ and an equivalence relation $\sim$ on $X$. We define a category as follows. The objects in the category are morphisms $f : X \to Y$ such that $a \sim b$ in $X$ implies $f(a) = f(b)$ in $Y$. The morphisms in the category are commutative diagrams

Here $f_1, f_2$ need to be such that they send equivalent things to equal things (or they wouldn’t be objects in the category!), and by the commutativity of the diagram $f_2 = \varphi f_1$. Indeed, the statement about quotients is that $\pi : X \to X/\sim$ is an initial object in this category. In fact, we have already proved it! But note the abuse of language in our offset statement above: it’s not really $X/\sim$ that is the universal object, but the morphism $\pi$. Moreover, the statement itself doesn’t tell us what category to inspect, nor whether we care about initial or final objects in that category. Unfortunately this abuse of language is widespread in the mathematical world, and for arguably good reason. Once one gets acquainted with these kinds of constructions, reading between the lines becomes much easier and it would be a waste of time to spell it out. After all, once we understand $X/\sim$ there is no “obvious” choice for a map $X \to X/\sim$ except for the projection. This is how $\pi$ got its name, the canonical projection.

Two last bits of terminology: if $\mathcal{C}$ is any category whose objects are sets (and hence, where equivalence relations make sense), we say that $\mathcal{C}$ has quotients if for every object $X$ and every equivalence relation $\sim$ on $X$ there is a morphism $\pi : X \to X/\sim$ satisfying the universal property of a quotient. Another way to state the universal property is to say that all maps respecting the equivalence structure factor through the quotient, in the sense that we get a diagram like the one above.

What is the benefit of viewing $X/\sim$ by its universal property? For one, the set $X/\sim$ is unique up to isomorphism. That is, if any other pair $(Z, g)$ satisfies the same property, we automatically get an isomorphism $X/\sim \to Z$. For instance, if $\sim$ is defined via a function $f : X \to Y$ (that is, $a \sim b$ if and only if $f(a) = f(b)$), then the pair $(\textup{im}(f), f)$ satisfies the universal property of a quotient. This means that we can “decompose” any function $f$ into three pieces:

$X \to X/\sim \to \textup{im}(f) \to Y$

The first map is the canonical projection, the second is the isomorphism given by the universal property of the quotient, and the last is the inclusion map into $Y$. In a sense, all three of these maps are “canonical.” This isn’t so magical for set-maps, but the same statement (and essentially the same proof) holds for groups and topological spaces, and these are revered as theorems. For groups, this is called the First Isomorphism Theorem, but it’s essentially the claim that the category of groups has quotients.
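Here is a small Python sketch of this decomposition for finite sets. The sets and the map below are arbitrary illustrative choices, and the helpers `pi` and `iso` are our own names for the canonical projection and the isomorphism from the universal property.

```python
# Factor a map f: X -> Y through the quotient X/~, where a ~ b iff f(a) = f(b).
X = {1, 2, 3, 4, 5, 6}
Y = {'a', 'b', 'c', 'd'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b', 6: 'c'}

# the quotient set: one equivalence class per value f takes
quotient = {frozenset(x for x in X if f[x] == v) for v in f.values()}

def pi(x):
    """The canonical projection X -> X/~."""
    return next(c for c in quotient if x in c)

def iso(c):
    """The isomorphism X/~ -> im(f); well defined since f is constant on c."""
    return f[next(iter(c))]

image = {iso(c) for c in quotient}  # im(f), a subset of Y

# composing projection, isomorphism, and inclusion recovers f
assert all(iso(pi(x)) == f[x] for x in X)
assert image <= Y
```
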
This is getting a bit abstract, so let’s see how the idea manifests itself as a program. In fact, it’s embarrassingly simple. Using our “simpler” ML definition of a category from last time, the constructive proof that quotient sets satisfy the universal property is simply a concrete version of the definition of $f$ we gave above. In code,
fun inducedMapFromQuotient(setMap(x, pi, q), setMap(x, g, y)) =
   setMap(q, (fn a => g(representative(a))), y)
That is, once we have $\pi$ and $X/\sim$ defined for our given equivalence relation, this function accepts as input any morphism $g : X \to Y$ and produces the uniquely defined $f$ in the diagram above. Here the “representative” function just returns an arbitrary element of the given set, which we added to the abstract datatype for sets. If the set is empty, then all functions involved will raise an “empty” exception upon being called, which is perfectly fine. We leave the functions which explicitly construct the quotient set given an equivalence relation as an exercise to the reader.
Products and Coproducts
Just as the concept of a quotient set or quotient group can be generalized to a universal property, so can the notion of a product. Again we take our intuition from $\mathbf{Set}$. There the product of two sets $X$ and $Y$ is the set of ordered pairs

$X \times Y = \{ (x, y) : x \in X, y \in Y \}$

But as with quotients, there’s much more going on and the key is in the morphisms. Specifically, there are two obvious choices for morphisms $X \times Y \to X$ and $X \times Y \to Y$. These are the two projections onto the components, namely $\pi_1(x, y) = x$ and $\pi_2(x, y) = y$. These projections are also called “canonical projections,” and they satisfy their own universal property.

The product $X \times Y$ is universal with respect to the property of having two morphisms to its factors.

Indeed, this idea is so general that it can be formulated in any category, not just categories whose objects are sets. Let $A, B$ be two fixed objects in a category $\mathcal{C}$. Should it exist, the product $A \times B$ is defined to be a final object in the following diagram category. This category has as objects pairs of morphisms $f : C \to A$, $g : C \to B$
and as morphisms it has commutative diagrams
In words, to say products are final is to say that for any object $C$ in this category, there is a unique map $\varphi : C \to A \times B$ that factors through the product, so that $f = \pi_1 \varphi$ and $g = \pi_2 \varphi$. In a diagram, it is to claim the following commutes:

If the product $A \times B$ exists for every pair of objects $A, B$, we declare that the category $\mathcal{C}$ has products.
Indeed, many familiar product constructions exist in pure mathematics: sets, groups, topological spaces, vector spaces, and rings all have products. In fact, so does the category of ML types. Given two types ‘a and ‘b, we can form the (aptly named) product type ‘a * ‘b. The canonical projections exist because ML supports parametric polymorphism. They are
fun leftProjection(x,y) = x
fun rightProjection(x,y) = y
And to construct the unique morphism to the product,
fun inducedMapToProduct(f,g) = fn a => (f(a), g(a))
We leave the uniqueness proof to the reader as a brief exercise.
There is a similar notion called a coproduct, denoted $A \sqcup B$, in which everything is reversed: the arrows in the diagram category go from the fixed objects to the central object, and the coproduct is an initial object in the diagram category. Explicitly, for fixed objects $A, B$ the objects in the diagram category are again pairs of morphisms, but this time with arrows going to the central object: $f : A \to C$, $g : B \to C$
The morphisms are again commutative diagrams, but with the connecting morphism going away from the central object
And a coproduct is defined to be an initial object in this category. That is, a pair of morphisms $\iota_1 : A \to A \sqcup B$, $\iota_2 : B \to A \sqcup B$ such that there is a unique connecting morphism in the following diagram.
Coproducts are far less intuitive than products in their concrete realizations, but the universal property is no more complicated. For the category of sets, the coproduct is a disjoint union (in which shared elements of two sets are forcibly considered different), and the canonical morphisms are “inclusion” maps (hence the switch from $\pi$ to $\iota$ in the diagram above). Specifically, if we define the coproduct

$A \sqcup B = (A \times \{ 1 \}) \cup (B \times \{ 2 \})$

as the set of “tagged” elements (the right entry in a tuple signifies which piece of the coproduct the left entry came from), then the inclusion maps $\iota_1(a) = (a, 1)$ and $\iota_2(b) = (b, 2)$ are the canonical morphisms.

There are similar notions of disjoint unions for topological spaces and graphs, which are their categories’ respective coproducts. However, in most categories the coproducts are dramatically different from “unions.” In group theory, the coproduct is a somewhat complicated object known as the free product. We mentioned free products in our hasty discussion of the fundamental group, but understanding why and where free groups naturally occur is quite technical. Similarly, coproducts of vector spaces (or $R$-modules) are more like a product, with the stipulation that at most finitely many of the entries of a tuple are nonzero (e.g., formal linear combinations of things from the pieces). Even without understanding these examples, the reader should begin to believe that relatively simple universal properties can yield very useful objects with potentially difficult constructions in specific categories. The ubiquity of the concepts across drastically different fields of mathematics is part of why universal properties are called “universal.”
Luckily, the category of ML types has a nice coproduct which feels like a union, but it is not supported as a “native” language feature like products types are. Specifically, given two types ‘a, ‘b we can define the “coproduct type”
datatype ('a, 'b)Coproduct = left of 'a | right of 'b
Let’s prove this is actually a coproduct: fix two types ‘a and ‘b, and let the inclusions $\iota_1, \iota_2$ be the functions

fun leftInclusion(x) = left(x)
fun rightInclusion(y) = right(y)

Then given any other pair of functions f, g which accept as input types ‘a and ‘b, respectively, there is a unique function operating on the coproduct type. We construct it as follows.

fun inducedCoproductMap(f, g) =
   let
      fun theMap(left(a)) = f(a)
        | theMap(right(b)) = g(b)
   in
      theMap
   end
The uniqueness of this construction is self-evident. This author finds it fascinating that these definitions are so deep and profound (indeed, category theory is heralded as the queen of abstraction), but their realizations are trivially obvious to the working programmer. Perhaps this is a statement about how well-behaved the category of ML types is.
So far we have seen three relatively simple examples of universal properties, and explored how they are realized in some categories. We should note before closing two things. The first is that not every category has objects with these universal properties. Unfortunately poset categories don’t serve as a good counterexample for this (they have both products and coproducts; what are they?), but it may be the case that in some categories only some pairs of objects have products or coproducts, while others do not.
Second, there are many more universal properties that we haven’t covered here. For instance, the notion of a limit is a universal property, as is the notion of a “free” object. There are kernels, pull-backs, equalizers, and many other ad-hoc universal properties without names. And for every universal property there is a corresponding “dual” property that results from reversing the arrows in every diagram, as we did with coproducts. We will visit the relevant ones as they come up in our explorations.
In the next few posts we’ll encounter functors and the concept of functoriality, and start asking some poignant questions about familiar programmatic constructions.
This post is mainly mathematical. We left it out of our introduction to categories for brevity, but we should lay these definitions down and some examples before continuing on to universal properties and doing more computation. The reader should feel free to skip this post and return to it later when the words “isomorphism,” “monomorphism,” and “epimorphism” come up again. Perhaps the most important part of this post is the description of an isomorphism.
Isomorphisms, Monomorphisms, and Epimorphisms
Perhaps the most important paradigm shift in category theory is the focus on morphisms as the main object of study. In particular, category theory stipulates that the only knowledge one can gain about an object is in how it relates to other objects. Indeed, this is true in nearly all fields of mathematics: in group theory we consider all isomorphic groups to be the same. In topology, homeomorphic spaces are not distinguished. The list goes on. The only way to determine if two objects are “the same” is by finding a morphism with special properties. Barry Mazur gives a more colorful explanation by considering the meaning of the number 5 in his essay, “When is one thing equal to some other thing?” The point is that categories, more than existing to be a “foundation” for all mathematics as a formal system (though people are working to make such a formal system), exist primarily to “capture the essence” of mathematical discourse, as Mazur puts it. A category defines objects and morphisms, but literally all of the structure of a category lies in its morphisms. And so we study them.
The strongest kind of morphism we can consider is an isomorphism. An isomorphism is the way we say two objects in a category are “the same.” We don’t usually care whether two objects are equal, but rather whether some construction is unique up to isomorphism (more on that when we talk of universal properties). The choices made in defining morphisms in a particular category allow us to strengthen or weaken this idea of “sameness.”
Definition: A morphism $f : A \to B$ in a category $\mathcal{C}$ is an isomorphism if there exists a morphism $g : B \to A$ so that both ways to compose $f$ and $g$ give the identity morphisms on the respective objects. That is,

$gf = 1_A$ and $fg = 1_B$
The most basic (usually obvious, but sometimes very deep) question in approaching a new category is to ask what the isomorphisms are. Let us do this now.
In $\mathbf{Set}$ the morphisms are set-functions, and it is not hard to see that any two sets of equal cardinality have a bijection between them. As all bijections have two-sided inverses, two objects in $\mathbf{Set}$ are isomorphic if and only if they have the same cardinality. For example, all sets of size 10 are isomorphic. This is quite a weak notion of “sameness.” In contrast, there is a wealth of examples of groups of equal cardinality which are not isomorphic (the smallest example has cardinality 4). On the other end of the spectrum, a poset category has no isomorphisms except for the identity morphisms. The poset categories still have useful structure, but (as with objects within a category) the interesting structure is in how a poset category relates to other categories. This will become clearer later when we look at functors, but we just want to dissuade the reader from ruling out poset categories as uninteresting due to a lack of interesting morphisms.
Consider the category of ML types with ML functions as morphisms. An isomorphism in this category would be a function which has a two-sided inverse. Can the reader think of such a function?
Let us now move on to other, weaker properties of morphisms.
Definition: A morphism $f : A \to B$ is a monomorphism if for every object $C$ and every pair of morphisms $g, h : C \to A$ the condition $fg = fh$ implies $g = h$.
The reader should parse this notation carefully, but truly think of it in terms of the following commutative diagram:
Whenever this diagram commutes and $f$ is a monomorphism, then we conclude (by definition) that $g = h$. Remember that a diagram commuting just means that all ways to compose morphisms (and arrive at morphisms with matching sources and targets) result in an identical morphism. In this diagram, commuting is the equivalent of claiming that $fg = fh$, since there are only two nontrivial ways to compose.

The idea is that monomorphisms allow one to “cancel” $f$ from the left side of a composition (although, confusingly, this means the cancelled part is on the right-hand side of the diagram).
The corresponding property for cancelling on the right is defined identically.
Definition: A morphism $f : A \to B$ is an epimorphism if for every object $C$ and every pair of morphisms $g, h : B \to C$ the condition $gf = hf$ implies $g = h$.
Again, the relevant diagram.
Whenever $f$ is an epimorphism and this diagram commutes, we can conclude $g = h$.
Now one of the simplest things one can do when considering a category is to identify the monomorphisms and epimorphisms. Let’s do this for a few important examples.
Monos and Epis in Various Categories
In the category $\mathbf{Set}$, monomorphisms and epimorphisms correspond to injective and surjective functions, respectively. Let’s see why this is the case for monomorphisms. Recall that an injective function $f$ has the property that $f(x) = f(y)$ implies $x = y$. With this property we can show $f$ is a monomorphism because if $fg = fh$, then $f(g(c)) = f(h(c))$ for all $c$, and the injective property gives us immediately that $g(c) = h(c)$, i.e. $g = h$. Conversely, if $f : A \to B$ is a monomorphism and $f(x) = f(y)$, we will construct a set $C$ and two convenient functions $g, h : C \to A$ to help us show that $x = y$. In particular, pick $C$ to be the one-point set $\{ c \}$, and define $g(c) = x$, $h(c) = y$. Then as functions $fg = fh$. Indeed, there is only one value in the domain, so saying this amounts to saying $f(x) = f(y)$, which we know is true by assumption. By the monomorphism property $g = h$, so $x = g(c) = h(c) = y$.

Now consider epimorphisms. It is clear that a surjective map is an epimorphism, but the converse is a bit trickier. We prove by contraposition. Instead of now picking the “one-point set” for our $C$, we must choose something which is one element bigger than $B$. In particular, define $g, h : B \to B'$, where $B'$ is $B$ with one additional point $*$ added (which we declare to not already be in $B$). Then if $f$ is not surjective, and there is some $b_0 \in B$ which is missed by $f$, we define $g(b_0) = *$ and $g(b) = b$ otherwise. We can also define $h$ to be the identity on $B$, viewed as a map $B \to B'$, so that $gf = hf$ but $g \neq h$. Hence a non-surjective map is not an epimorphism, and so epimorphisms are exactly the surjective set-maps.
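For small finite sets we can verify both claims by brute force. The Python sketch below (the sets and the helpers `all_maps`, `is_mono`, `is_epi` are our own illustrative constructions) checks that a map cancels against all parallel pairs out of a small probe set exactly when it is injective, and cancels on the other side exactly when it is surjective.

```python
from itertools import product

X1, Y1 = [0, 1], ['a', 'b', 'c']   # injections exist, surjections don't
X2, Y2 = [0, 1, 2], ['a', 'b']     # surjections exist, injections don't
Z, W = [0, 1], [0, 1, 2]           # probe sets for the mono and epi tests

def all_maps(dom, cod):
    """All set-functions dom -> cod, each represented as a dict."""
    return [dict(zip(dom, outs)) for outs in product(cod, repeat=len(dom))]

def is_mono(f, dom, cod):
    """f is mono if fg = fh forces g = h for all parallel pairs g, h: Z -> dom."""
    return all(g == h for g in all_maps(Z, dom) for h in all_maps(Z, dom)
               if all(f[g[z]] == f[h[z]] for z in Z))

def is_epi(f, dom, cod):
    """f is epi if gf = hf forces g = h for all parallel pairs g, h: cod -> W."""
    return all(g == h for g in all_maps(cod, W) for h in all_maps(cod, W)
               if all(g[f[x]] == h[f[x]] for x in dom))

for dom, cod in [(X1, Y1), (X2, Y2)]:
    for f in all_maps(dom, cod):
        assert is_mono(f, dom, cod) == (len(set(f.values())) == len(f))
        assert is_epi(f, dom, cod) == (set(f.values()) == set(cod))
```
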
There is one additional fact that makes the category of sets well-behaved: a morphism in is an isomorphism if and only if it is both a monomorphism and an epimorphism. Indeed, isomorphisms are set-functions with two-sided inverses (bijections) and we know from classical set theory that bijections are exactly the simultaneous injections and surjections. A warning to the reader: not all categories are like this! We will see in a moment an example of a nontrivial category in which isomorphisms are not the same thing as simultaneous monomorphisms and epimorphisms.
The category $\mathbf{Grp}$ of groups is very similar to $\mathbf{Set}$ in regards to monomorphisms and epimorphisms. The former are simply injective group homomorphisms, while the latter are surjective group homomorphisms. And again, a morphism is an isomorphism if and only if it is both a monomorphism and an epimorphism. We invite the reader to peruse the details of the argument above and adapt it to the case of groups. In both cases, the hard decision is in choosing $C$ when necessary. For monomorphisms, the “one-point group” does not work because we are constrained to send the identity to the identity in any group homomorphism. The fortuitous reader will avert their eyes and think about which group would work, and otherwise we suggest trying $C = \mathbb{Z}$. After completing the proof, the reader will see that the trick is to find a $C$ for which only one “choice” can be made. For epimorphisms, the required $C$ is a bit more complex, but we invite the reader to attempt a proof to see the difficulties involved.

Why do these categories have the same properties but they are acquired in such different ways? It turns out that although these proofs seem different in execution, they are the same in nature, and they follow from properties of the category as a whole. In particular, the “one-point object” (a singleton set for $\mathbf{Set}$ and $\mathbb{Z}$ for $\mathbf{Grp}$) is more categorically defined as the “free object on one generator.” We will discuss this more when we get to universal properties, but a “free object on $n$ generators” is roughly speaking an object $A$ for which any morphism with source $A$ must make exactly $n$ “choices” in its definition. With sets that means $n$ choices for the images of elements, for groups that means $n$ consistent choices for images of group elements. On the epimorphism side, the construction is a sort of “disjoint union object” which is correctly termed a “coproduct.” But momentarily putting aside all of this new and confusing terminology, let us see some more examples of morphisms in various categories.

Our recent primer on rings was well-timed, because the category $\mathbf{Ring}$ of rings (with identity) is an example of a not-so-well-behaved category. As with sets and groups, we do have that monomorphisms are equivalent to injective ring homomorphisms, but the argument is trickier than it was for groups. It is not obvious which “convenient” object $C$ to choose here, since maps $\mathbb{Z} \to R$ yield no choices: 1 maps to 1, 0 maps to 0, and the properties of a ring homomorphism determine everything else (in fact, the abelian group structure and the fact that units are preserved is enough). This makes $\mathbb{Z}$ into what’s called an “initial object” in $\mathbf{Ring}$; more on that when we study universal properties. In fact, we invite the reader to return to this post after we talk about the universal property of polynomial rings. It turns out that $C = \mathbb{Z}[x]$ is a suitable choice, and the “choice” made is where to send the indeterminate $x$.

On the other hand, things go awry when trying to apply analogous arguments to epimorphisms. While it is true that every surjective ring homomorphism is an epimorphism (it is already an epimorphism in $\mathbf{Set}$, and the argument there applies), there are ring epimorphisms which are not surjections! Consider the inclusion map of rings $\mathbb{Z} \to \mathbb{Q}$. The map is not surjective, but it is an epimorphism. Suppose $g, h : \mathbb{Q} \to R$ are two parallel ring morphisms, and they agree on $\mathbb{Z}$ (they will always do so, since there is only one ring homomorphism $\mathbb{Z} \to R$). Then $g, h$ must also agree on all of $\mathbb{Q}$, because if $q = a/b$ with $a, b \in \mathbb{Z}$, then

$g(a/b) = g(a)g(b)^{-1} = h(a)h(b)^{-1} = h(a/b)$
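This forcing argument can be seen in a small Python sketch. The helper `extend_to_Q` and the sample fractions below are our own illustrative constructions: a ring homomorphism out of the rationals is completely determined by its values on the integers, so two parallel morphisms agreeing on the integers agree everywhere.

```python
from fractions import Fraction

def extend_to_Q(on_integers):
    """Extend a map on Z (assumed to be a ring hom into a field) to all of Q,
    using the forced identity f(a/b) = f(a) * f(b)^(-1)."""
    def f(x):
        x = Fraction(x)
        return Fraction(on_integers(x.numerator)) / Fraction(on_integers(x.denominator))
    return f

# two parallel morphisms out of Q built from the same data on Z
# (here both restrict to the identity on the integers)
g = extend_to_Q(lambda n: n)
h = extend_to_Q(lambda n: n)

samples = [Fraction(3, 7), Fraction(-5, 2), Fraction(4)]
assert all(g(x) == h(x) for x in samples)
```
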
Because the map $\mathbb{Z} \to \mathbb{Q}$ above is also an injection (and hence a monomorphism), the category of rings is a very naturally occurring example of a category which has morphisms that are both epimorphisms and monomorphisms, but not isomorphisms.
There are instances in which monomorphisms and epimorphisms are trivial. Take, for instance any poset category. There is at most one morphism between any two objects, and so the conditions for an epimorphism and monomorphism vacuously hold. This is an extreme example of a time when simultaneous monomorphisms and epimorphisms are not the same thing as isomorphisms! The only isomorphisms in a poset category are the identity morphisms.
Morals about Good and Bad Categories
The inspection of epimorphisms and monomorphisms is an important step in the analysis of a category. It gives one insight into how “well-behaved” the category is, and picks out the objects which are special either for their useful properties or confounding trickery.
This reminds us of a quote of Alexander Grothendieck, one of the immortal gods of mathematics who popularized the use of categories in mainstream mathematics.
A good category containing some bad objects is preferable to a bad category containing only good objects.
I suppose the thesis here is that having only “good” objects yields less interesting and useful structure, and that one should be willing to put up with the bad objects in order to have access to that structure.
Next time we’ll jump into a discussion of universal properties, and start constructing some programs to prove that various objects satisfy them.
My girlfriend and I decided we want a print of my recent artistic creation from the post Bezier Curves and Picasso.
I set up an account at Society6, which does professional art printing (framed or unframed). I can’t imagine anyone beside me would be interested in including this in their art collection, but just in case it’s available for sale here. I plan to put it below my print of Picasso’s “Dog.” Donning my pretentious art-snob hat, the juxtaposition of two abstract representations of the same idea adds a profound depth to the work as a whole.
Warning: I have not yet ordered this print, so I cannot attest to the quality (perhaps it’s pixelated and I need to produce a higher quality version). I will be travelling to conferences and doing last-minute preparations for a preliminary exam over the next three weeks, so I won’t be ordering the print until June. If people are interested, leave a comment and I will send a notification via Twitter/Google+/Facebook with my thoughts once I get it.
Disclosure: I receive a $5.00 commission on any purchase. If you want a more direct way to support this blog, consider donating via Paypal.
Here’s a screenshot from Society6’s automatically generated preview of one framing option. Looks pretty cool.