Learning Programming — Finger-Painting and Killing Zombies

Zmob, my first (and only) original game.


Education Versus Exploration

I’m a lab TA for an introductory Python programming course this semester, and it’s been…depressing. I remember my early days of programming, when the possibilities seemed endless and adding new features to my programs was exciting and gratifying, and I brimmed with pride at every detail, and I boasted to my friends of the amazing things I did, and I felt powerful. The world was literally at my fingertips. I could give substance to any idea I cared to entertain and any facet of life I wanted to explore. I had developed an insatiable thirst for programming that has lasted to this very day.

My younger self, if programming were more noodley.

The ironic thing is that today I look back on the programs I wrote and cringe with unending embarrassment. My old code was the artistic equivalent of a finger-painting made by a kindergartener. Sure, it might look kind of like a horse, but only because the kid has no idea what he’s doing. The programs I wrote were bug-ridden, hard to read, poorly organized, and a veritable spaghetti-slop of logic.

But I can’t deny how much fun I had, and how much I learned. I will describe all of that in more detail below, but first I’d like to contrast my experience with that of the students I’m teaching this semester. Their labs involve no more than mindlessly shuffling data around and implementing dry, boring functions like Horner’s method for evaluating polynomials. And (amazingly) their projects are even less interesting. I think the biggest difference is that my students don’t have to actually solve any problems by writing programs. And so their experience boils down to following directions and pretending to be a computer in order to follow their own program’s logic.

I’m certainly not saying that following directions and simulating a computer in your head aren’t important skills for a good programmer. What I’m saying is that my students get no gratification from their work. Their results are just as dry as the problems, and the majority of the joy I see among them comes when they finish a problem and don’t have to think about it anymore (even if their solution is completely wrong).

The course has other problems with it. For instance, the professor teaches the students C paradigms instead of Python paradigms (I don’t think he ever learned the right way to do things in Python), and he confuses them with talk of stack frames and registers and all sorts of irrelevant architectural details. Remember, these students have never programmed before, and some started the course just barely competent with a computer. I didn’t know what a stack frame was until I had three years of programming under my belt (two of those years were the early, experimental years).

All of this has gotten me thinking pretty regularly about how I might teach my own course, if I ever have one. This post will be a rough outline of how my own computer science education began. I’ll distill the most important aspects of it: the things that made me want to keep programming and the things that taught me deep ideas in natural contexts.

My First Time was with Java

My high school (Campolindo High, in Moraga, CA) was blessed with a computer science course. With my early exposure to computers (at 3 years old, by my parents’ accounts), my love of video games, and my basic grasp of HTML, it seemed inevitable that I belonged in the class. In retrospect, it was perhaps the most beneficial course I ever took, followed closely by Honors/AP English, German, and public policy. Not only did it provide me with the aforementioned thirst for programming, but it planted a mathematical seed in my mind that would flourish years later like a giant bean stalk, which I’m still in the process of climbing today.

Right off the bat the course was different. The teacher’s name was Mr. Maters, and by the end of the first week we ceased to have lectures. Mr. Maters showed us barely enough to get a simple program running with input and output, and then we were left to our own devices.

There were roughly two options for getting credit. The first was to follow an outline of exercises and small projects from a book on GUI programming in Java. Most students followed these exercises for the first two months or so, and I did at least until I had made a stupid little pizza shop program that let you order pizzas.

The second option was wide open. The students could do whatever they wanted, and Mr. Maters jokingly told us, “At the end of each quarter I’ll judge your worth, and if I deem you deserve an A, you’ll get an A, but if I deem otherwise… you’ll get an F!”

Of course, Mr. Maters was the nicest guy you ever met. He would calmly sit at his computer in the front of the lab, maintaining a help queue of students with questions. He would quietly listen to a student’s question, and then shed some insight into their problem. Mr. Maters would get a better idea of a student’s progress by how frequent and how pointed their questions were, and more often than not students who were waiting in the queue would solve their own problems before getting to the front.

Most of the students in the class chose the “wide open” route, and that meant designing games. I’m not talking about good games, mind you, I’m talking about games made by high schoolers. I made the ugliest Whack-a-Mole you ever saw, a lifeless AI for a Battleship game, and a video poker game that featured Mr. Maters’s face on the back of every card (and in place of the faces of the kings, queens, and jacks). For the last one, I even collaborated with other students who made card games by jointly designing the Maters-themed deck. Collaboration would become a bigger theme the second year I took the course (yes, I took the same course twice), but before we get there, there were some other indispensable features I want to mention.

First, the lab room was set up so that Mr. Maters could remotely control any computer in the room from his desk. The program he used had been given the reverent name “Vision,” and the slackers feared its power. Vision allowed Mr. Maters to look at our code while we were asking him questions at the front, and it also helped him monitor students’ progress. Second, we were allowed a shared drive on the school’s network so that we could instantly pass files back and forth between lab computers. This had a few direct learning benefits, like sharing code examples, sprites, and sound files we used in our programs. But more importantly it gave a sense of culture to the class. We occasionally had contests where every student in the class submitted a picture of Maters’s face photoshopped into some ridiculous and funny scene (really, MS-Painted into a scene). Recall, these were the early days of internet memes, and naturally we youngsters were at the forefront of it.

Third, we were given “exploration” days every so often, on which we were freed from any obligation to work. We could play games, browse around online, or just sit and talk. More often than not we ended up playing LAN Unreal Tournament, but by the end of my second year I chose to spend those days working on my programs too; my games turned out to be more fun to work on than others were to play.

All of these instilled a sense of relaxation in the course. We weren’t being taught to the midterms or the AP exam (although I did take the AP eventually, and I scored quite well considering I didn’t study at all for it). We weren’t even being told to work on some days. The course was more of a community, and even as we teased Mr. Maters we revered him as a mentor.

As I did, most students who took a first year in the course stuck around for a second year. And that was when the amazing projects started to come.

Zmob

The second year in the computer science class was all games, all the time. We started by looking at real-time games, the kinds of side-scrolling platformers we loved to play ourselves (yeah, Super Mario Brothers and Donkey Kong Country). I tried my hand at one, but before long I was lost in figuring out how to make the collisions work. Making the levels, animating the character, and making the screen scroll were all challenging but not beyond my reach.

One of my early side-scrollers based on the Starfox series.

But once I got fed up with getting him to jump on blocks, I found a better project: Zmob (short for Zombie Mob). It was inspired by collaboration. I helped a friend nail down how to draw two circles in a special way: one was large and fixed, and the other was smaller, always touching the first, and the line between their two centers went through the position of the mouse. In other words, the smaller circle represented the “orientation” of the pair of circles, and it was always facing toward the mouse. It was a basic bit of trigonometry, but once I figured out how to do it I decided a top-down zombie shooting game would be fun to work on. So I started in on it. Here’s the opening screen of an early version (typos and errors are completely preserved):

The intro screen to Zmob 1.0

In the game you control the black circle, and the grey circle is supposed to represent the gun. Zombies (blue circles) are spawned regularly at random positions and travel at varying speeds directly toward your character. You can run around, and if you hold down Shift you can run faster than them for a time. And of course, you shoot them until they die, and the game ends when you die. The number of zombies spawned increases as you go on, and your ammunition is limited (although you can pick up more ammo after you get a certain number of kills), so you will eventually die. The goal is to get a high score.

The game plays more like reverse-shepherding than a shooter, and while it might be hard, I don’t think anyone but me would play it for more than ten minutes at a time.

The important part was that I had a lot of ideas, and I needed to figure out how to make those ideas a reality. I wanted the zombies to not be able to overlap each other. I wanted a gun that poisoned zombies and when a poisoned zombie touched a healthy zombie, the healthy one became poisoned. I wanted all sorts of things to happen, and the solutions naturally became language features of Java that I ended up using.

The poison gun. White zombies are poisoned, while blue zombies are healthy.

For instance, at first I just represented the zombies as circles. There was no information that made any two zombies different, so I could store them as a list of x,y coordinates. Once I wanted to give them a health bar, and give them variable speeds, and poison them, I was forced to design a zombie class, so that I could give each zombie an internal state (poisoned or not, fast or slow, etc.). I followed this up by making a player class, an item class, and a bullet class.

And the bullets turned out to be my favorite part. I wanted every bullet on the screen to be updated just by me calling an “update()” function. It turns out this was the equivalent of making Bullet an interface which each specialized bullet class implemented. Already I saw the need and elegance behind object-oriented programming, something that was totally lost on me when I made those stupid “Shape” interfaces they have you do in basic tutorials. I was solving a problem I actually needed to solve, and an understanding of inheritance was forever branded into my mind.
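
To illustrate the idea, here’s a minimal sketch of that design in Java. The names and details are hypothetical, not the original Zmob code, but the shape is the same: a Bullet interface with an update() method, and concrete bullet classes that implement it.

interface Bullet {
    void update();          // advance the bullet by one frame
    boolean isOffScreen();  // so the game loop knows when to discard it
}

class PistolBullet implements Bullet {
    private double x, y;            // current position
    private final double dx, dy;    // velocity per frame

    PistolBullet(double x, double y, double dx, double dy) {
        this.x = x; this.y = y; this.dx = dx; this.dy = dy;
    }

    public void update() { x += dx; y += dy; }  // plain straight-line motion
    public boolean isOffScreen() {              // hypothetical 800x600 screen
        return x < 0 || y < 0 || x > 800 || y > 600;
    }
}

The game loop then just calls update() on each Bullet it holds, never caring which concrete kind of bullet it is updating; adding a new gun means adding a new class, not touching the loop.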

And the bullet logic was a joy in itself. The first three guns I made were boring: a pistol, a machine gun, and a shotgun. Each sprayed little black circles in the expected way. I wanted to break out and make a cool gun. I called my first idea the wave beam.

The wave beam: sinusoidal bullets.

The idea behind the wave beam is that the bullets travel along a sinusoidal curve in the direction they were shot. This left me with a huge difficulty, though: how does one rotate a sine wave by an arbitrary angle? I had x and y coordinates for the bullets, but all of the convoluted formulas I randomly tried using sines and cosines and tangents ended up failing miserably. The best I could get was a sort of awkwardly-stretched sideways sine.

After about a week of trying with no success, I finally went to my statistics teacher at the time (whom I still keep in touch with) and I asked him if he knew of any sort of witchcraft mathemagic that would do what I wanted.

After a moment’s thought, he pulled out a textbook and showed me a page on rotation matrices. To my seventeen-year-old eyes, the formula was as mysterious as an ancient rune:

$ \displaystyle \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$

My particular code ended up looking like:

x += frame*Math.cos(angle) + Math.sin(frame)*Math.sin(angle);
y += frame*Math.sin(angle) + Math.sin(frame)*Math.cos(angle);

When I ran the code, it worked so perfectly I shouted out loud. After my week of struggle and botched attempts to figure this out, this solution was elegant and beautiful and miraculous. After that, I turned to calculus to make jumping look more natural in my Fox side-scroller. I experimented with other matrix operations like shearing and stretching. By the end of that year, I had a better understanding of a “change of basis” (though I didn’t know the words for it) than most of the students I took linear algebra with in college! It was just a different coordinate system for space; there were rotated coordinates, fat and stretchy coordinates, along with skinny and backward coordinates. I tried all sorts of things in search of fun gameplay.
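
For the curious, here is a small sketch of how the rotation-matrix approach plays out in code. This is my reconstruction of the idea rather than the original Zmob source: the bullet’s path in its own coordinate system is the curve $ (t, A\sin(t))$, and each frame we rotate that point by the firing angle and add the gun’s position.

class WaveBeamMath {
    // Position of a wave-beam bullet t frames after being fired from
    // (originX, originY) at the given angle (in radians).
    static double[] position(double originX, double originY,
                             double angle, double t, double amplitude) {
        double localX = t;                        // forward distance along the firing direction
        double localY = amplitude * Math.sin(t);  // sideways sinusoidal offset
        // Apply the standard 2D rotation matrix to (localX, localY).
        double x = originX + localX * Math.cos(angle) - localY * Math.sin(angle);
        double y = originY + localX * Math.sin(angle) + localY * Math.cos(angle);
        return new double[] { x, y };
    }
}

(One small wrinkle: on a screen whose y-axis points downward, this rotates clockwise rather than counterclockwise, but the wave looks the same either way.)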

And it wasn’t just mathematics that I learned ahead of my time. By the end of the year I had “finished” the game. I designed a chain gun that set off chain reactions when it hit zombies, and I gave the game a face lift with new graphics for the player and zombies. I even designed a smart tile-layout system that measured the size of the screen and displayed the background appropriately. I had gotten tired of measuring the sizes by hand, so I wrote a program to do the measuring for me. That sounds trivial, but it’s really the heart of problem solving in computer science.

Zmob, with images

The whole class “beta tested” it, i.e., spent a few days of class just playing it to have fun and find bugs. And they found lots of bugs. Overt ones (divide by zero errors making bullets go crazy) and subtler ones (if you time everything right, the zombies can’t get close enough to hurt you, and just keep bumping into each other).

One pretty important issue that came up was speed. Once I added images, I decided to use a Java library to rotate the images on every frame so they were pointing in the right direction. Now some people say Java is slow, but this part was really slow, especially when it got up to a hundred or more zombies. My solution, it just so happened, was a paradigm in computer science called caching: you pre-compute all of the rotations you’ll ever need ahead of time, and then store them somewhere. In fact, what I really did is called lazy-loading, a slightly more sophisticated technique in which each rotation is computed and stored only the first time it’s needed.
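
As a sketch of what that looks like (again with hypothetical names, not the original code), you can keep a cache of rotated copies of a sprite and only compute a given rotation the first time that angle is requested:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

class RotatedSpriteCache {
    private final BufferedImage original;
    private final Map<Integer, BufferedImage> cache = new HashMap<Integer, BufferedImage>();

    RotatedSpriteCache(BufferedImage original) { this.original = original; }

    // Lazy-loading: a rotated copy is computed the first time a given angle
    // (rounded to whole degrees, so at most 360 entries) is requested, and
    // reused from the cache on every later frame.
    BufferedImage get(double radians) {
        int degrees = ((int) Math.round(Math.toDegrees(radians)) % 360 + 360) % 360;
        BufferedImage rotated = cache.get(degrees);
        if (rotated == null) {
            rotated = rotate(Math.toRadians(degrees));
            cache.put(degrees, rotated);
        }
        return rotated;
    }

    private BufferedImage rotate(double radians) {
        AffineTransform t = AffineTransform.getRotateInstance(
                radians, original.getWidth() / 2.0, original.getHeight() / 2.0);
        return new AffineTransformOp(t, AffineTransformOp.TYPE_BILINEAR).filter(original, null);
    }
}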

And I never learned the name of this technique until I got to a third-year college course in dynamic web programming, when we discussed Hibernate, an object-relational mapping library for databases! Just like with linear algebra, my personalized problems resulted in me reinventing or rediscovering important concepts far earlier than I would have learned them otherwise. I was giving myself a deep understanding of the concepts and what sorts of problems they could solve. This is distinctly different from the sort of studying that goes on in college: students memorize the name of a concept and what it means, but only the top students get a feel for why it’s important and when to use it.

An Honest Evaluation in Retrospect

I’ll admit it, I was more dedicated to my work than the average kid. A small portion of the class was only engaged in the silly stuff. Some students didn’t have a goal in mind, or they had one but didn’t pursue it with my kind of vigor. We didn’t have access to many good examples outside of our own web browsing and the mediocre books Mr. Maters had on hand. The choice of Java perhaps meant a steep learning curve for some, but I think in the end it did more good than harm.

But on the other hand, those of us who did work well got to explore and absorb the material at our own pace. We got to struggle with problems we actually wanted to solve, leading to new insights. One of my classmates made a chat client and a networked version of Tron, while others made role-playing games, musical applications, encryption algorithms, painting programs, and much more. By the end, the breadth and depth of our collective knowledge was far beyond what anyone could expect from any high school course in any subject. I don’t say that lightly; I spent a lot of time analyzing literature and debating contemporary issues and memorizing German vocabulary and fine-tuning essays and doing biology experiments, but programming was different. It was engaging and artistic and technical and logical and visceral. Moreover, it’s a skill that makes me marketable. I could drop out of graduate school today and find a comfortable job as a software engineer in any major city and probably in any industry that makes software. That class was truly what set me on the path to where I am today.

And worst of all, it absolutely breaks my heart to hear my students say “I didn’t think programming would be like this. I’m just not cut out for it.” The best response I can muster is “Don’t judge programming by this class. It can be fun, truly.”

What They Need

It’s become woefully clear to me that to keep students interested in programming, they need a few things:

1. Instant gratification

My students spend way too much time confused about their code. They need to have some way to make a change and see the effects immediately. They need teaching tools designed by Bret Victor (skip ahead to 10:30 in the video to see what I mean, but the whole thing is worth watching). And they need to work on visual programs: drawing programs and games and music. Programs whose effects they can experience in a non-intellectual way, as opposed to checking whether they’re computing polynomial derivatives correctly.

2. Projects that are relevant, or at least fun.

Just like when I was learning, students need to be able to explore. Let them work on their own projects, and have enough knowledge as a teacher to instruct them when they get stuck (or better yet, brainstorm with them). If everyone having a customized project is out of the question, at least have them work on something relevant. The last two projects in the class I teach were regrettably based on file input/output and matrix sums. Why not let them work on a video game, a search engine (it might sound complicated, but that’s the introductory course over at Udacity), a drawing or animation program, a chat client, or a Sudoku solver, or even show them how to get data from Facebook with the Graph API? All of these things can be sufficiently abstracted so that a student at any level can handle them, and each requires the ability to use certain constructs (basic networking for a chat client, matrix manipulation for a Sudoku solver, file I/O in parts of a search engine, etc.). Despite the wealth of interesting things they could have students do, it seems to me that the teachers just don’t want to come up with interesting projects, so they have their students compute matrix sums over and over and over again.

3. The ability to read others’ code.

This is an integral part of learning. Not only should students be able to write code, but they must be able to read foreign code. They have to be able to look at examples and extract the important parts to use in their own original work. They have to be able to collaborate with their classmates, work on a shared project, and brainstorm new ideas while discussing bugs. They have to be able to criticize code as they might criticize a movie or a restaurant. Students should be very opinionated about software, and they should strive to find the right way to do things, openly lampooning pieces of code that are bloated or disorganized (okay, maybe not too harshly, but they should be mentally aware).

These three things lie at the heart of computer science and software development, and all of the other crap (the stack frames and lazy-loading and Linux shells) can wait until students are already hooked and eager to learn more. Moreover, it can wait until they have a choice to pursue the area that requires knowledge of the Linux shell or web frameworks or networking security or graphics processing. I learned all of the basics and then some without ever touching a Linux terminal or knowing what a bit was. I don’t doubt my current students could do the same.

And once students get neck deep in code (after spending a year or two writing spaghetti code programs like I did), they can start to see beauty in the elegant ways one can organize things, and the expressive power one has to write useful programs. In some sense programming is like architecture: a good program has beauty in form and function. That’s the time when they should start thinking about systems programming and networking, because that’s the time when they can weigh the new paradigms against their own. They can criticize and discuss and innovate, or at least appreciate how nice it is and apply the ideas to whatever zombie-related project they might be working on at the time.

I contend that every computer science curriculum should have multiple courses that function as blank canvases, and that one of them should come early in the pipeline (if not as part of the very first course). I think that the reason classes aren’t taught this way is the same reason that mathematics education is what it is: teaching things right is hard work! As sad as it sounds, professors (especially at a research institution) don’t have time to design elaborate projects for their students.

And as long as I’m in the business of teaching, I’ll work to change that. I’ll design courses to be fun, and help my future coworkers who fail to do so. Even in highly structured courses, I’ll give students an open-ended project.

So add that onto my wish list as a high school teacher: next to “Math Soup for the Teenage Soul,” a class called “Finger-paint Programming.” (or “Programming on a Canvas”? “How to Kill Zombies”? Other suggested titles are welcome in the comments :))

Encryption & RSA

This post assumes working knowledge of elementary number theory. Luckily for the non-mathematicians, we cover all required knowledge and notation in our number theory primer.

So Three Thousand Years of Number Theory Wasn’t Pointless

It’s often tough to come up with concrete applications of pure mathematics. In fact, before computers came along mathematics was used mostly for navigation, astronomy, and war. In the real world it almost always coincided with the physical sciences. Certainly the esoteric field of number theory didn’t help to track planets or guide ships. It was just for the amusement and artistic expression of mathematicians.

Despite number theory’s apparent uselessness, mathematicians invested a huge amount of work in it, studying the distribution of the primes and inventing ring theory in the pursuit of algebraic identities. Indeed some of the greatest open problems in mathematics today are still number-theoretic: the infamous Goldbach Conjecture, the Twin Prime Conjecture, and the Collatz Conjecture all have simple statements, but their proofs or counterexamples have eluded mathematicians for hundreds of years. Solutions to these problems, which are generally deemed beyond the grasp of an average mathematician, would certainly bring with them large prizes and international fame.

Putting aside its inherent beauty, until recently there was no use for number theory at all. But nowadays we have complex computer-simulated models, statistical analysis, graphics, computing theory, signal processing, and optimization problems. So even very complex mathematics finds its way into most of what we do on a daily basis.

And, of course, number theory also has its place: in cryptography.

The history of cryptography is long and fascinating. The interested reader will find a wealth of information through the article and subsequent links on Wikipedia. We focus on one current method whose security is mathematically sound.

The Advent of Public Keys

Until 1976 (two years before the RSA method was born), all encryption methods followed the same pattern:

  1. At an earlier date, the sender and recipient agree on a secret parameter called a key, which is used both to encrypt and decrypt the message.
  2. The message is encrypted by the sender, sent to the recipient, and then decrypted in privacy.

This way, an interceptor could not read the message without knowing both the key and the encryption method. Of course, there were various ways of attacking such ciphers, but for the most part the scheme was safe.

The problem is protecting the key. Since the two communicating parties had to agree on a key that nobody else could know, they either had to meet in person or trust an aide to communicate the key separately. Risky business for leaders of distant allied nations.

Then, in 1976, two researchers announced a breakthrough: the sender and recipient need not share the same key! Instead, everyone who wants private communication has two keys: one private, and one public. The public key is published in a directory, while the private key is kept secret, so that only the recipient need know it.

Anyone wishing to send a secure message would then encrypt the message with the recipient’s public key. The message could only be decrypted with the recipient’s private key. Even the sender couldn’t decrypt his own message!

The astute reader might question whether such an encryption method is even possible: the encryption is a deterministic computation that the recipient can undo, so in principle an attacker could undo it too. Indeed, in theory it is possible to reverse the encryption without the private key. However, as we will see, it is computationally infeasible. With the method we are about to investigate (disregarding any future mathematical or quantum breakthroughs), it would take a mind-bogglingly long time to do so. And, of course, the method works through the magic of number theory.

RSA

Rivest, Shamir, and Adleman. They look like pretty nice guys.

RSA, an acronym formed from the names of its inventors, Rivest, Shamir, and Adleman, is such a public-key encryption system. It is one of the most widely used ciphers, and it depends heavily on the computational intractability of two problems in number theory: factoring integers and taking modular roots.

But before we get there, let us develop the method. Recall Euler’s totient function, $ \varphi(n)$.

Definition: Let $ n$ be a positive integer. $ \varphi(n)$ is the number of integers between 1 and $ n$ relatively prime to $ n$.

There is a famous theorem due to Euler that states if $ a, n$ are relatively prime integers, then

$ \displaystyle a^{\varphi(n)} \equiv 1 \mod{n}$

In other words, if we raise $ a$ to that power, its remainder after dividing by $ n$ is 1. Group theorists will recognize this immediately from Lagrange’s Theorem. While it is possible to prove it with elementary tools, we will not do so here. We cover the full proof of this theorem in our number theory primer.
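
For a quick sanity check, take $ n = 15$ and $ a = 2$: the integers between 1 and 15 relatively prime to 15 are 1, 2, 4, 7, 8, 11, 13, and 14, so $ \varphi(15) = 8$, and indeed $ 2^8 = 256 = 17 \cdot 15 + 1 \equiv 1 \mod{15}$.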

In particular, we notice the natural next result that $ a^{k \varphi(n) + 1} \equiv a \mod{n}$ for any $ k$, since this is just

$ \displaystyle (a^{\varphi(n)})^k \cdot a \equiv 1^ka \equiv a \mod{n}$.

If we could break up $ k \varphi(n) + 1$ into a product of two smaller numbers, say $ e \cdot d$, then we could use exponentiation as our encryption and decryption method. While that is the entire idea of RSA in short, it requires a bit more detail:

Let $ M$ be our message, encoded as a single number less than $ n$. We call $ n$ the modulus, and for the sake of argument let us say $ M$ and $ n$ are relatively prime. Then by Euler’s theorem, $ M^{\varphi(n)} \equiv 1 \mod{n}$. In particular, let us choose a public key $ e$ (for encryption) and compute $ M^e \mod{n}$. This is the encrypted message. Note that both $ e$ and $ n$ are known to the encryptor, and hence the general public. Upon receiving such a message, the recipient may use his private key $ d = e^{-1} \mod{\varphi(n)}$ to decrypt the message. We pick $ e$ to be relatively prime to $ \varphi(n)$ to ensure that such a $ d$ exists. Then $ ed \equiv 1 \mod{\varphi(n)}$, and so by Euler’s theorem

$ \displaystyle (M^e)^d = M^{ed} = M^{k \varphi(n) + 1} \equiv M \mod{n}$

By exponentiating the encrypted text with the right private key, we recover the original message, and our secrets are safe from prying eyes.
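
In Java, which we will use for the full implementation later in this post, the whole round trip is just two calls to BigInteger’s modular exponentiation. This is only a sketch, assuming the keys $ n, e, d$ have already been generated as described next:

import java.math.BigInteger;

class RsaRoundTrip {
    // M^e mod n: anyone who knows the public pair (e, n) can do this.
    static BigInteger encrypt(BigInteger message, BigInteger e, BigInteger n) {
        return message.modPow(e, n);
    }

    // (M^e)^d = M^(ed), which is congruent to M mod n: only the holder of d can undo it.
    static BigInteger decrypt(BigInteger ciphertext, BigInteger d, BigInteger n) {
        return ciphertext.modPow(d, n);
    }
}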

Now for the messy details: where did $ n$ come from? And how can we actually compute all this junk?

First, in order to ensure $ M < n$ for a reasonably encoded message $ M$, we require that $ n$ is large. Furthermore, since we make both $ n$ and $ e$ public, we have to ensure that $ \varphi(n)$ is hard to compute, for if an attacker could determine $ \varphi(n)$ from $ n$, then $ e^{-1} \mod{\varphi(n)}$ would be trivial to compute. In addition, one could theoretically compute all the $ e$th roots of $ M^e$ modulo $ n$.

We solve these problems by exploiting their computational intractability. We find two enormous primes $ p,q$, and set $ n = pq$. First, recall that the best known way to compute $ \varphi(n)$ is by the following theorem:

Theorem: For distinct primes $ p,q$ and positive integers $ j,k$, we have $ \varphi(p^k) = p^k - p^{k-1}$ and $ \varphi(p^j q^k) = \varphi(p^j)\varphi(q^k)$.

In this way, we can compute $ \varphi(n)$ easily if we know its prime factorization. Therein lies the problem and the solution: factoring large numbers is hard. Indeed, it is an unsolved problem in computer science whether integers can be factored by a polynomial-time algorithm. Quickly finding arbitrary roots mod $ n$ is a similarly hard problem.

To impress upon the reader the difficulty of integer factorization, we visit its world record. In 2009, a team of researchers successfully factored a 768-bit (232-digit) integer, and it required a network of hundreds of computers and two years to do. The algorithms were quite sophisticated and at times fickle, failing when one node in the network went down. On the other hand, our $ p,q$ will each be 2048-bit numbers, so their product is astronomical in comparison. In fact, even 1024-bit numbers are thousands of times harder to factor than 768-bit numbers, meaning that with the hugest of networks, it would take far longer than our lifespans just to factor a “weak” RSA modulus with the methods known today. In this respect, and for the foreseeable future, RSA is watertight.

Since we constructed $ n$ as the product of two primes, we know

$ \varphi(n) = \varphi(p)\varphi(q) = (p-1)(q-1)$,

so we can compute $ \varphi(n)$ trivially. Then, if we pick any $ e < \varphi(n)$ which is relatively prime to $ \varphi(n)$ (for instance, $ e$ could be a prime that does not divide $ \varphi(n)$), then we may compute the private key $ d$ via the extended Euclidean algorithm.
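
A minimal sketch of that recipe using Java’s BigInteger (not the exact code of the program described below) looks like this:

import java.math.BigInteger;
import java.security.SecureRandom;

class RsaKeyGeneration {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(2048, rnd);
        BigInteger q = BigInteger.probablePrime(2048, rnd);

        BigInteger n = p.multiply(q);                            // the public modulus
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE)); // phi(n) = (p-1)(q-1)

        // 65537 is prime, so it is relatively prime to phi(n) unless it happens to
        // divide phi(n), in which case modInverse throws and we would pick new primes.
        BigInteger e = BigInteger.valueOf(65537);
        BigInteger d = e.modInverse(phi);                        // the private key, via the extended Euclidean algorithm

        System.out.println("modulus n has " + n.bitLength() + " bits");
        System.out.println("private key d = " + d);
    }
}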

For a clean-cut worked example of RSA key generation and encryption, see the subsection on Wikipedia. We admit that an example couldn’t be done much better than theirs, and we use the same notation here as the writers do there.

Big Random Primes

There is one remaining problem that requires our attention if we wish to implement an RSA encryption scheme. We have to generate huge primes.

To do so, we note that we don’t actually care what the primes are, only how big they are. Generating a large random odd number is easy: we can simply generate each of its 2,048 bits at random, ensuring the lowest bit is a 1. Since the number of primes below $ x$ is roughly $ x / \log(x)$, the chance that a random odd 2,048-bit number is prime is roughly $ 2 / \log(2^{2048})$, which is about $ 1 / 710$. Thus, on average we can expect to generate about 710 random numbers before we get a prime.

Now that we know we’ll probably find a prime number reasonably fast, we just have to determine which candidates are prime. The elementary sure-fire primality test is trial division: test all the primes from 2 up to the square root of $ n$ (a list we could produce with the Sieve of Eratosthenes). If none of them divide $ n$, then $ n$ is prime.

Unfortunately, this is far too slow, and it would require us to generate a list of primes that is unreasonably large (indeed, if we already had that list of primes we wouldn’t need to generate any more!). So we turn to probabilistic tests. In other words, there are algorithms which can detect, with some probability, that a candidate is composite (not prime); we repeat such a test until the probability that a composite number slips through is sufficiently close to $ 0$, and hence the candidate is almost certainly prime. Generally this error bound is $ 2^{-100}$, and the existing algorithms achieve it in polynomial time.

Unfortunately an in-depth treatment of one such primality test is beyond the scope of this post. In addition, most contemporary programming languages come equipped with such a primality test, so we put their implementations aside for a later date. To read more about probabilistic primality tests, see the list of them on Wikipedia. They are all based on special cases of Euler’s theorem and on the structure of the multiplicative group of integers modulo $ n$.
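
Putting the last two sections together, here is a sketch of the generate-and-test loop, leaning on the probabilistic test built into Java’s BigInteger rather than implementing one ourselves:

import java.math.BigInteger;
import java.security.SecureRandom;

class RandomPrimeSearch {
    // Generate random odd numbers of the given bit length until one passes the
    // probabilistic primality test with error probability at most 2^(-100).
    static BigInteger randomPrime(int bits, SecureRandom rnd) {
        int attempts = 0;
        while (true) {
            BigInteger candidate = new BigInteger(bits, rnd) // uniform in [0, 2^bits)
                    .setBit(bits - 1)                        // force the top bit: full bit length
                    .setBit(0);                              // force the bottom bit: odd
            attempts++;
            if (candidate.isProbablePrime(100)) {
                System.out.println("found a prime after " + attempts + " attempts");
                return candidate;
            }
        }
    }
}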

Implementation

In a wild and unprecedented turn of events, we did not use Mathematica to implement RSA! The reason for this is so that anyone (especially the author’s paranoid father) can run it. So we implemented it in Java. As always, the entire source code (and this time, an executable jar file) is available on this blog’s Github page.

Despite its relative verbosity, Java has a few advantages. The first of these is the author’s familiarity with its GUI (graphical user interface) libraries. The second benefit is that all of the important functions we need are part of the BigInteger class. BigInteger is a built-in Java class that allows us to work with numbers of unbounded size. Recall that in Mathematica unbounded arithmetic is built into the language, but older and more general-purpose languages like Java and C adhere to fixed-length arithmetic. Disregarding the debates over which is better, we notice that BigInteger has the functions:

static BigInteger probablePrime(int bitLength, Random rnd)
BigInteger modPow(BigInteger exponent, BigInteger m)
BigInteger modInverse(BigInteger m)

For clarity, the first function generates numbers which are not prime with probability at most $ 2^{-100}$, the second raises the number to the given exponent modulo “m”, and the third computes the multiplicative inverse modulo “m”. The “modPow” and “modInverse” functions operate on the context object, i.e., the implicit “this” argument (recall Java is object-oriented [see upcoming primer on object-oriented programming]).

Indeed, this is all we need to write our program! But there are a few more specifics:

First, we need a good random number generator to feed BigInteger’s “probablePrime” function. It turns out that Java’s built-in random number generator is just not secure enough. To remedy this, we could use the “java.security.SecureRandom” class, part of Java’s cryptography package; but for the sake of brevity, we instead import an implementation of the Mersenne Twister, a fast pseudorandom number generator which is not cryptographically secure.

Second, there are known factoring methods for $ n=pq$ if $ p \pm 1$ or $ q \pm 1$ has only small prime factors. These are due to Pollard and Williams. So we include a special method called “isDivisibleByLargePrime”, and we discard any candidate prime for which it returns false. The largest prime we test for is 65537, the 6543rd prime. The details of this function are in the source code, which again is available on this blog’s Github page. It is not very interesting.

Third, we notice that the choice of public key is arbitrary. Since everyone is allowed to know it, it shouldn’t matter what we pick. Of course, this is a bit too naive, and it has been proven that if the public key $ e$ is small (say, $ 3$), then the RSA encryption is less secure. After twenty years of attacks trying to break RSA, it has been generally accepted that public keys with moderate bit-length and small Hamming weight (few 1’s in their binary expansion) are secure. The most commonly used public key is 65537, which is the prime $ 2^{16} +1 = \textup{0x10001}$. So in our implementation, we fix the public key at 65537.

Finally, in order to make our String representations of moduli, public keys, and private keys somewhat shorter, we use alphadecimal notation (base 36) for inputting and outputting our numbers. This has the advantage that it uses all ten digits and all twenty-six letters, squeezing the most information into each character without getting funky symbols involved.
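
BigInteger handles the radix conversion directly, and as a nice check, the public key printed at the end of this post, “1ekh”, is exactly 65537 written in base 36:

import java.math.BigInteger;

class AlphadecimalDemo {
    public static void main(String[] args) {
        BigInteger e = BigInteger.valueOf(65537);
        String alpha = e.toString(36);               // "1ekh"
        BigInteger back = new BigInteger(alpha, 36); // 65537 again
        System.out.println(alpha + " -> " + back);
    }
}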

Results

Here is a snapshot of the resulting Java application:


As you can see, the encrypted messages are quite long, but very copy-pasteable. Also, generating the keys can take up to a minute, so have patience when pressing the “Generate Keys” button under the tab of the same name.

There you have it! Enjoy the applet; it’s for everyone to use, but despite all my due diligence in writing the software, I wouldn’t recommend that anyone rely on it for national security.

Feel free to leave me a comment with a super-secret RSA-encoded message! Here is my encryption modulus and my public key:

public key: 1ekh

encryption modulus:
5a1msbciqctepss5dfpp76rdb5selhybzhzerklbomyvwlohasuy81r9rbd3scvujpqn9e7m5hp52qv4fli9f4bupg23060wmaf94zq94s4j22hwyi764kk0k6w8is05tyg7029ub6ux4vb4bq9xxzw9nkfs0pfteq7wnm6hya7b2l2i1w8jh25913qye67gz8z4xrep1dcj8qpyxyi56ltukqyv366hei4nme6h9x00z16cbf8g76me1keccicsgd268u3iocvb6c5lw00j234270608f24qelu8xfjcddc7n9u0w2tf46bl1yzsjb8kb5ys9gh51l0ryetge1lwh1ontenraq35wv5f4ea57zae1ojcsxp3cxpx84mbg0duoln2vny7uixl3ti4n2flvfats4wz0h1c34cgxdyixb7ylci6t4dk8raqcbdi3yxclktvb7yv1sb61nxk1kylfp0h7wqtrogf8c039mc6bqe8b7eixb72pfz4sajw1rf170ck2vysy1z6bgyngrhyn8kpepd0btcyuyj0scdshlexlg4uolls8p6frxj8dt4ykcnddeuvf7dfz1qyqpjk7ljwr1hdgaecyqa6gsxryodrup1qpwgieot6v8c5bbizxm45qj4nez5cpe9q12m0pyeszic5dtb1yc0rm7lzzddg254uf390rk4irecfxibpv2lnk9zper3kg729w32n7u0qn8mamtmh80fo6hd0n5a50d9kzft1g3bky2t2d2f1a574ukigx1dfgqrtehnmx5nd

Until next time!