I’ve been spending a little less time on my blog recently than I’d like to, but for good reason: I’ve been attending two weeks of research conferences, I’m getting ready for a summer internship in cybersecurity, and I’ve finally chosen an advisor.

Visions, STOC, and CCC

The Simons Institute at UC Berkeley

I’ve been taking a break from the Midwest for the last two weeks to attend some of this year’s major computer science theory conferences. The first (which is already over) was the Simons Institute Symposium on Visions of the Theory of Computing. It wasn’t a typical computer science theory conference, because the Simons Institute exists to bring techniques from computer science to other fields. So there were many talks in the life sciences, ranging from neurobiology (modeling the brain as a computer) to nanochemistry (using DNA to engineer computers) to physics (quantum computing). These kinds of sciences are interesting to me, but pretty far beyond my knowledge base. Other talks covered social networks (and the web), designing markets to be efficient, and computational privacy.

All of these talks shared a common theme: how can we apply the computational “lens” to everything? Interestingly, there were some very deep philosophical questions that computational insight can give answers to. For instance, Christos Papadimitriou gave a talk on evolution and sex, and raised an interesting question: what is the place of sex in evolution? Briefly, the problem with sex is as follows. Say you have a perfect human: perfect health, perfect genes, perfect intelligence, perfect everything. Because of sex, all of this human’s offspring are guaranteed to be imperfect. If the “goal” of evolution is to promote the fitness of a species, it seems that sexual reproduction works against us. Why, then, is sexual reproduction the norm and asexual reproduction the exception?

Indeed, the same patterns hold in computer science. Genetic algorithms are the analogue of sexual reproduction, while simulated annealing (local search with random mutations, occasionally accepting worse solutions) is the analogue of asexual reproduction. And there we can actually experiment, and (without insulting anyone too heavily) it’s largely the case that the asexual methods outperform the sexual ones. A toy version of the experiment is sketched below.
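To make this concrete, here is a toy version of that experiment in Python: maximizing a deliberately easy fitness function over bit strings with a small genetic algorithm versus simulated annealing. Every detail here (the fitness function, the population size, the mutation rate, the cooling schedule) is an arbitrary choice of mine for illustration, not a serious benchmark.

```python
import math
import random

N = 50                                  # genome length
fitness = lambda g: sum(g)              # "OneMax": count the 1 bits

def mutate(g, rate=1.0 / N):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in g]

def genetic_algorithm(pop_size=30, generations=200):
    # Sexual reproduction: offspring combine two parents' genes.
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        def child():
            a, b = random.choice(parents), random.choice(parents)
            cut = random.randrange(N)   # single-point crossover
            return mutate(a[:cut] + b[cut:])
        pop = [child() for _ in range(pop_size)]
    return max(map(fitness, pop))

def simulated_annealing(steps=6000, t0=2.0):
    # Asexual reproduction: a single genome, mutation only, and a cooling
    # schedule that occasionally accepts worse genomes early on.
    g = [random.randint(0, 1) for _ in range(N)]
    for t in range(steps):
        temp = t0 * (1 - t / steps) + 1e-9
        candidate = mutate(g)
        delta = fitness(candidate) - fitness(g)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            g = candidate
    return fitness(g)

print("genetic:", genetic_algorithm(), "annealing:", simulated_annealing())
```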

But with computational (mathematical) models, we can actually analyze what’s going on. And in fact, genetic algorithms are not optimizing a fitness function. Instead, they are simultaneously optimizing fitness and entropy in the gene pool. One might argue this allows for maximal adaptability, and there are some connections to learning-theory methods that mirror evolution (see the Multiplicative Weights Update Algorithm, sketched below).
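For the curious, here is a minimal sketch of multiplicative weights in the “prediction with expert advice” setting, assuming losses in [0, 1]; the learning rate and the toy loss stream are my own arbitrary choices. Roughly, the weights play the role of gene frequencies, and the multiplicative update mirrors how selection reweights the gene pool.

```python
import random

def multiplicative_weights(loss_rounds, n_experts, eta=0.1):
    """Each round: pick an expert at random in proportion to its weight,
    observe every expert's loss in [0, 1], and multiplicatively shrink
    the weights of experts that did badly."""
    weights = [1.0] * n_experts
    total_loss = 0.0
    for losses in loss_rounds:
        pick = random.choices(range(n_experts), weights=weights)[0]
        total_loss += losses[pick]
        weights = [w * (1 - eta * l) for w, l in zip(weights, losses)]
    return weights, total_loss

# Toy stream: expert 0 is consistently a bit better than experts 1 and 2.
rounds = [[random.random() * s for s in (0.5, 1.0, 1.0)] for _ in range(1000)]
final_weights, loss = multiplicative_weights(rounds, n_experts=3)
print(final_weights)  # weight should concentrate on expert 0
```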

This is pretty cool to think about, but the deeper point is a philosophical claim about how we can apply the computational lens to other fields: computer scientists can create and analyze generative models which reflect the empirical phenomena we observe. Indeed, this mentality is in our blood. It stems from (in the words of my advisor) a “deeply religious” view that everything is computation. And even beyond “big data” and other trends popular in the news, researchers in other fields are increasingly appreciative of the algorithmic mindset computer scientists have to offer.

The next two conferences, the Symposium on the Theory of Computing (STOC) and the Conference on Computational Complexity (CCC), are much more technical. STOC is a whirlwind of a conference, with around fifteen twenty-minute talks per day in each of two parallel sessions (thirty in total). And I admit that I understand very little. As I write this I’m lost during a talk on an online version of the Steiner tree problem. But even so, it gives a great glimpse into the frontier of research, and tells me what I should be looking at in my own work.

One new notion that seems omnipresent at STOC this year is differential privacy. The idea stems from recent controversies in which “anonymized” database records released to the public were de-anonymized in clever ways. So the question becomes: can one release aggregate information about a database without revealing anything about the individuals whose records it contains? “Information” could mean simple things like means and medians, or more complicated things like principal components. It’s a question about the mechanism for releasing data, and it’s inherently a computational problem: we want it to be (probabilistically) hard to reconstruct any individual’s record from the information provided. At a very high level, the idea is to add random noise as you compute, and to modify your methods to be resilient to such noise (a toy example is sketched below). There is a flurry of work being done on this topic, with a wealth of open problems waiting to be tackled.

A related idea is homomorphic encryption, in which the user sends encrypted data to a compute server, the server computes on the data without ever decrypting it, and the user decrypts the result of the computation locally (also sketched below). The server can then have computational powers far beyond those of the user, but the user need not relinquish sensitive information to access that power.

A schematic example of differential privacy at work. Image credit: Microsoft Research.
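To make the noise-adding idea concrete, here is a minimal sketch of the classic Laplace mechanism for releasing a differentially private mean. It isn’t taken from any particular paper or library; the function name, the clipping bounds, and the parameter values are my own illustrative choices.

```python
import math
import random

def private_mean(values, epsilon, lower, upper):
    """Release the mean of `values` with epsilon-differential privacy.

    Each value is clipped to [lower, upper], so changing a single record
    moves the mean by at most (upper - lower) / n. That bound (the
    sensitivity) calibrates the scale of the Laplace noise.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    scale = (upper - lower) / (n * epsilon)
    # Sample Laplace(0, scale) noise by inverting its CDF.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

ages = [23, 35, 44, 29, 61, 33, 48]
print(private_mean(ages, epsilon=0.5, lower=0, upper=100))
```

And for the homomorphic pattern, a deliberately silly toy that is nothing like real homomorphic encryption schemes: a one-time pad over the integers mod N, which happens to be additively homomorphic.

```python
N = 2**32  # all arithmetic is over the integers mod N

def encrypt(m, k):
    return (m + k) % N   # one-time pad: shift the message by a secret key

def decrypt(c, k):
    return (c - k) % N

# The user encrypts under fresh keys and ships only ciphertexts.
k1, k2 = 123456, 987654
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# The server adds ciphertexts without ever seeing 20 or 22...
c_sum = (c1 + c2) % N

# ...and the user decrypts the sum locally with the combined key.
assert decrypt(c_sum, (k1 + k2) % N) == 42
```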

Another popular topic (which I know even less about) is communication complexity and information complexity. Roughly, this field is concerned with two communicating parties, each holding private information, who want to compute a function that depends on both of their inputs. The communication complexity question is how they can do this while exchanging as few bits as possible. The information complexity question is how little of their private information the conversation must leak, both to each other and to any nefarious outsiders listening in. A toy protocol in this spirit is sketched below.
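To give the flavor in code, here is the classic EQUALITY example: Alice and Bob each hold an n-bit string and want to decide whether the strings are equal. Deterministically this requires about n bits of communication, but a random fingerprint lets Alice send only O(log n) bits at the cost of a tiny error probability. The prime and the input size below are arbitrary choices of mine.

```python
import random

def fingerprint(bits, prime, point):
    # View the bit string as a polynomial and evaluate it at `point` mod prime.
    h = 0
    for b in bits:
        h = (h * point + b) % prime
    return h

def equality_protocol(alice_bits, bob_bits, prime=(1 << 61) - 1):
    # Alice sends a random evaluation point plus her fingerprint: roughly
    # 2 * 61 bits of communication, no matter how long the inputs are.
    point = random.randrange(2, prime)
    message = (point, fingerprint(alice_bits, prime, point))
    # Bob accepts iff his own fingerprint at that point matches.
    return fingerprint(bob_bits, prime, message[0]) == message[1]

x = [random.randint(0, 1) for _ in range(100000)]
y = list(x)
print(equality_protocol(x, y))  # equal inputs: always True
y[0] ^= 1
print(equality_protocol(x, y))  # unequal: False except with tiny probability
```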

It’s amazing to me how much privacy concerns have permeated theoretical computer science. The usual picture of a mathematician is a wiry-haired, chalk-dust-covered professor with no concern for the world outside his or her own mind. But in reality, we are actively working on the problems faced by average users, to protect them against the dangers of small groups of individuals making bad decisions with their sensitive data. Anyone worried about an upcoming data-driven dystopia is apparently unaware of the near-magical powers of modern cryptography. It seems like it will only be a few years before these kinds of systems become feasible for large-scale applications. Maybe I’ll be lucky enough to have a hand in that 🙂

Starting Wednesday is the CCC, which will likely be concerned with more “pure” kinds of theoretical computer science research. I’m pretty excited for it, too (although I know by the end I will be exhausted; I’m already feeling the strain today).

MIT Lincoln Lab

This summer (actually, in a few days), I’ll be at MIT’s Lincoln Lab working on problems in cybersecurity. I still don’t know quite what I’ll be working on (and I probably won’t be at liberty to describe it when I do find out), but vaguely the problem is to find the right representations for massive amounts of data so that one can reason about them.

I know that the idea of the government analyzing the world’s internet activity is a scary thought, but I’m going to do my best to go into this with an open mind. At the very least, I’ll be familiar enough with the kind of work they do there to decide whether it violates any of my principles. I had minor worries about this last year when I went to work at Lawrence Livermore, but by the time I got there it was quite obvious that my work would keep me far, far away from weapons development and much closer to problems in renewable energy.

And at least it will be fun to live on the east coast for a little while. In anticipation I’ve joined the Boston Python Users group, so I can go to some of their project nights and talk math and programming with like-minded people in the area.

Advisor

Finally, I’m quite pleased to announce that I have secured an advisor for my PhD program! His name is Lev Reyzin, and he’s a learning theorist. (For the last year or so I’ve been meaning to start talking about learning theory on this blog.) That means I’m officially in the field of theoretical computer science.

I was a little bit late in choosing an advisor, mostly because I meandered around algebraic geometry and algebraic topology and logic before deciding that theoretical computer science was the right field for me.

It’s exciting to finally have an advisor.

Lev is a good fit for me both because I’m interested in learning theory and because he has nontrivial experience in industry-focused research. He’s done work at Yahoo! Research and Google Research, in particular on improving their algorithms for choosing which news stories and ads to place on the front page. The formal name for this problem is the multi-armed bandit problem, and I’ll definitely be writing about it on this blog (a first taste is sketched below). While I’d probably be totally happy doing pure learning theory, I think it’s fun and worthwhile to look at wider applications. So I have a lot of optimism that the next few years will be fun and productive.
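In the meantime, here is a minimal sketch of UCB1, one of the classic bandit algorithms. This is a generic textbook method, not anything specific to Lev’s work, and the “click-through rates” below are numbers I made up for illustration.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Play each arm once, then always pull the arm maximizing its
    empirical mean reward plus an exploration bonus that shrinks the
    more often the arm has been tried."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running average
    return counts, means

# Hypothetical example: three "ads" with unknown click-through rates.
rates = [0.05, 0.10, 0.02]
counts, means = ucb1(lambda a: float(random.random() < rates[a]),
                     n_arms=3, horizon=10000)
print(counts)  # most pulls should concentrate on the 10% arm
```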

Until next time!