Or how to detect and correct errors
Last time we made a quick tour through the main theorems of Claude Shannon, which essentially solved the following two problems about communicating over a digital channel.
1. What is the best encoding for information when you are guaranteed that your communication channel is error free?
2. Are there any encoding schemes that can recover from random noise introduced during transmission?
The answers to these questions were purely mathematical theorems, of course. But the interesting shortcoming of Shannon’s accomplishment was that his solution for the noisy coding problem (2) was nonconstructive. The question remains: can we actually come up with efficiently computable encoding schemes? The answer is yes! Marcel Golay was the first to discover such a code in 1949 (just a year after Shannon’s landmark paper), and Golay’s construction was published on a single page! We’re not going to define Golay’s code in this post, but we will mention its interesting status in coding theory later. The next year Richard Hamming discovered another simpler and larger family of codes, and went on to do some of the major founding work in coding theory. For his efforts he won a Turing Award and played a major part in bringing about the modern digital age. So we’ll start with Hamming’s codes.
We will assume some basic linear algebra knowledge, as detailed in our first linear algebra primer. We will also use some basic facts about polynomials and finite fields, though the lazy reader can just imagine everything as binary and still grok the important stuff.

Richard Hamming, inventor of Hamming codes.
What is a code?
The formal definition of a code is simple: a code $C$ is just a subset of $\{ 0,1 \}^n$ for some $n$. Elements of $C$ are called codewords.
This is deceptively simple, but here’s the intuition. Say we know we want to send messages of length $k$, so that our messages are in $\{ 0,1 \}^k$. Then we’re really viewing a code $C$ as the image of some encoding function $\textup{Enc}: \{ 0,1 \}^k \to \{ 0,1 \}^n$. We can define $C$ by just describing what the set is, or we can define it by describing the encoding function. Either way, we will make sure that $\textup{Enc}$ is an injective function, so that no two messages get sent to the same codeword. Then $|C| = 2^k$, and we can call $k = \log_2 |C|$ the message length of $C$ even if we don’t have an explicit encoding function.
Moreover, while in this post we’ll always work with $\{ 0,1 \}$, the alphabet of your encoded messages could be an arbitrary set $\Sigma$. So then a code $C$ would be a subset of tuples in $\Sigma^n$, and we would call $q = |\Sigma|$.
So we have these parameters $n, k, q$, and we need one more. This is the minimum distance of a code, which we’ll denote by $d$. This is defined to be the minimum Hamming distance between all distinct pairs of codewords, where by Hamming distance I just mean the number of coordinates that two tuples differ in. Recalling the remarks we made last time about Shannon’s nonconstructive proof, when we decode an encoded message $y$ (possibly with noisy bits) we look for the (unencoded) message $x$ whose encoding $\textup{Enc}(x)$ is as close to $y$ as possible. This will only work in the worst case if all pairs of codewords are sufficiently far apart. Hence we track the minimum distance of a code.
So coding theorists turn this mess of parameters into notation.
Definition: A code $C$ is called an $(n, k, d)_q$-code if
- $C \subset \Sigma^n$ for some alphabet $\Sigma$,
- $k = \log_q |C|$,
- $C$ has minimum distance $d$, and
- the alphabet $\Sigma$ has size $q$.
The basic goals of coding theory are:
- For which values of these four parameters do codes exist?
- Fixing any three parameters, how can we optimize the other one?
In this post we’ll see how simple linear-algebraic constructions can give optima for one of these problems, optimizing $k$ when the distance is fixed at $d = 3$, and we’ll state a characterization theorem for optimizing $k$ for a general $d$. Next time we’ll continue with a second construction that optimizes a different bound called the Singleton bound.
Linear codes and the Hamming code
A code is called linear if it can be identified with a linear subspace of some finite-dimensional vector space. In this post all of our vector spaces will be $\{ 0,1 \}^n$, that is, tuples of bits under addition mod 2. But you can do the same constructions with any finite scalar field $\mathbb{F}_q$ for a prime power $q$, i.e. have your vector space be $\mathbb{F}_q^n$. We’ll go back and forth between describing a binary code as a subset of $\{ 0,1 \}^n$ and describing a general code in $\mathbb{F}_q^n$. So to say a code is linear means:
- The zero vector is a codeword.
- The sum of any two codewords is a codeword.
- Any scalar multiple of a codeword is a codeword.
Linear codes are the simplest kinds of codes, but already they give a rich variety of things to study. The benefit of linear codes is that you can describe them in a lot of different and useful ways besides just describing the encoding function. We’ll use two that we define here. The idea is simple: you can describe everything about a linear subspace by giving a basis for the space.
Definition: A generator matrix of an $(n, k, d)_q$-code $C$ is a $k \times n$ matrix $G$ whose rows form a basis for $C$.
There are a lot of equivalent generator matrices for a linear code (we’ll come back to this later), but the main benefit is that having a generator matrix allows one to encode a message $x \in \{ 0,1 \}^k$ by the left multiplication $xG$. Intuitively, we can think of the bits of $x$ as describing the coefficients of the chosen linear combination of the rows of $G$, which uniquely describes an element of the subspace. Note that because a $k$-dimensional subspace of $\{ 0,1 \}^n$ has $2^k$ elements, we’re not abusing notation by calling $k$ both the message length and the dimension.
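To make the left-multiplication picture concrete, here’s a small sketch in Python. The generator matrix below is just an illustrative choice of a $4 \times 7$ binary matrix with linearly independent rows, not anything canonical:

```python
import numpy as np

# An illustrative 4 x 7 generator matrix over F_2: four linearly independent
# rows, so it generates a code with message length k = 4 and block length n = 7.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def encode(x, G):
    """Encode a length-k message x as the codeword xG, with arithmetic mod 2."""
    return np.mod(x @ G, 2)

message = np.array([1, 0, 1, 1])
print(encode(message, G))  # [1 0 1 1 0 1 0], the sum (mod 2) of rows 1, 3, and 4 of G
```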
For the second description of $C$, we’ll remind the reader that every linear subspace $C \subset \{ 0,1 \}^n$ has a unique orthogonal complement $C^{\perp}$, which is the subspace of vectors that are orthogonal to vectors in $C$.
Definition: Let $H^T$ be a generator matrix for $C^{\perp}$. Then $H$ is called a parity check matrix.
Note that $H$ has the basis vectors for $C^{\perp}$ as columns. This means it has dimensions $n \times (n-k)$. Moreover, it has the property that $x \in C$ if and only if the left multiplication $xH = 0$. Having zero dot product with all columns of $H$ characterizes membership in $C$.
The benefit of having a parity check matrix is that you can do efficient error detection: just compute $yH$ on your received message $y$, and if it’s nonzero there was an error! What if there were so many errors, and just the right errors, that $y$ coincided with a different codeword than the one you started with? Then you’re screwed. In other words, the parity check matrix is only guaranteed to detect errors if you have fewer errors than the minimum distance of your code.
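Here’s a sketch of that error detection step, continuing the toy generator matrix from above. Since that $G$ has the systematic form $(I \mid A)$, one valid parity check matrix in this post’s convention (an $n \times (n-k)$ matrix with $xH = 0$ exactly for codewords) is $A$ stacked on top of an identity block:

```python
import numpy as np

A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
])
G = np.hstack([np.eye(4, dtype=int), A])  # the toy generator, in the form [I | A]
H = np.vstack([A, np.eye(3, dtype=int)])  # 7 x 3; its columns span the dual code

def has_detectable_error(y, H):
    """True if y fails some parity check, i.e. y is provably not a codeword."""
    return bool(np.any(np.mod(y @ H, 2)))

y = np.mod(np.array([1, 0, 1, 1]) @ G, 2)  # a codeword
print(has_detectable_error(y, H))          # False: every check passes
y[2] ^= 1                                  # flip a single bit in transit
print(has_detectable_error(y, H))          # True: the corruption is detected
```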
So that raises an obvious question: if you give me the generator matrix of a linear code can I compute its minimum distance? It turns out that this problem is NP-hard in general. In fact, you can show that this is equivalent to finding the smallest linearly dependent set of rows of the parity check matrix, and it is easier to see why such a problem might be hard. But if you construct your codes cleverly enough you can compute their distance properties with ease.
Before we do that, one more definition and a simple proposition about linear codes. The Hamming weight of a vector $x$, denoted $\textup{wt}(x)$, is the number of nonzero entries in $x$.
Proposition: The minimum distance of a linear code $C$ is the minimum Hamming weight $\textup{wt}(x)$ over all nonzero vectors $x \in C$.
Proof. Consider a nonzero $x \in C$. On one hand, the zero vector is a codeword and $\textup{wt}(x)$ is by definition the Hamming distance between $x$ and zero, so it is an upper bound on the minimum distance. In fact, it’s also a lower bound: if $x \neq y$ are two distinct codewords, then $x - y$ is a nonzero codeword and $\textup{wt}(x - y)$ is the Hamming distance between $x$ and $y$, so every pairwise distance is at least the minimum weight of a nonzero codeword.
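The proposition makes computing the minimum distance of a small code a brute-force one-liner: enumerate all $2^k$ codewords and take the smallest nonzero weight. A sketch (the generator here is a toy repetition-style code, nothing canonical):

```python
import itertools
import numpy as np

def min_distance(G):
    """Brute-force the minimum distance of the linear code generated by G:
    by the proposition, it's the smallest Hamming weight of a nonzero codeword."""
    k = G.shape[0]
    weights = [int(np.mod(np.array(msg) @ G, 2).sum())
               for msg in itertools.product([0, 1], repeat=k)]
    return min(w for w in weights if w > 0)

# A toy length-6 code that repeats each of its two message bits three times.
G = np.array([
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
])
print(min_distance(G))  # 3
```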
So now we can define our first code, the Hamming code. It will be a $(2^m - 1, 2^m - m - 1, 3)_2$-code. The construction is quite simple. We have fixed $d = 3$, and we will also fix $n = 2^m - 1$. One can think of this as fixing $n$ and maximizing $k$, but it will only work for $n$ of a special form.
We’ll construct the Hamming code by describing a parity-check matrix $H$. In fact, we’re going to see what conditions the minimum distance $d = 3$ imposes on $H$, and find out those conditions are actually sufficient to get $d = 3$. We’ll start with distance 2. If we want to ensure $d \geq 2$, then you need it to be the case that no nonzero vector of Hamming weight 1 is a codeword. Indeed, if $e_i$ is a vector with all zeros except a one in position $i$, then $e_iH$ is the $i$-th row of $H$, which we’ll denote $H_i$. We need $e_iH \neq 0$, so this imposes the condition that no row of $H$ can be zero. It’s easy to see that this is sufficient for $d \geq 2$.
Likewise for $d \geq 3$, given a vector $y = e_i + e_j$ for some positions $i \neq j$, then $yH = H_i + H_j$ may not be zero. But because our sums are mod 2, saying that $H_i + H_j \neq 0$ is the same as saying $H_i \neq H_j$. Again it’s an if and only if. So we have the two conditions.
- No row of $H$ may be zero.
- All rows of $H$ must be distinct.
That is, any parity check matrix $H$ with those two properties defines a distance 3 linear code. The only question that remains is how large can $n$ be if the vectors (the rows of $H$) have length $n - k = m$? That’s just the number of distinct nonzero binary strings of length $m$, which is $2^m - 1$. Picking any way to arrange these strings as the rows of a matrix (say, in lexicographic order) gives you a good parity check matrix.
Theorem: For every $m \geq 2$, there is a $(2^m - 1, 2^m - m - 1, 3)_2$-code called the Hamming code.
Since the Hamming code has distance 3, we can always detect if at most a single error occurs. Moreover, we can correct a single error using the Hamming code. If $x \in C$ and $e_i$ is an error bit in position $i$, then the incoming message would be $y = x + e_i$. Now compute $yH = xH + e_iH = 0 + H_i = H_i$ and flip bit $i$ of $y$. That is, whichever row of $H$ you get tells you the index of the error, so you can flip the corresponding bit and correct it. If you order the rows lexicographically like we said, then $H_i = i$ as a binary number. Very slick.
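Here’s a sketch of that decoding procedure for the general Hamming code. It orders the rows of $H$ so that row $i$ is the number $i$ written in binary, least significant bit first (one convenient variant of the arrangement above); the demo corrupts a codeword in one position and repairs it from the syndrome:

```python
import numpy as np

def hamming_parity_check(m):
    """The n x m parity check matrix of the (2^m - 1, 2^m - m - 1, 3)_2 Hamming
    code: row i (1-indexed) is the number i written in binary, LSB first."""
    n = 2**m - 1
    return np.array([[(i >> j) & 1 for j in range(m)] for i in range(1, n + 1)])

def correct_single_error(y, H):
    """If the syndrome yH is nonzero, it equals the row of H at the error
    position, i.e. the error's index in binary; flip that bit and return."""
    syndrome = np.mod(y @ H, 2)
    if np.any(syndrome):
        i = int(sum(int(b) << j for j, b in enumerate(syndrome)))
        y = y.copy()
        y[i - 1] ^= 1  # rows (and hence positions) are 1-indexed here
    return y

H = hamming_parity_check(3)          # the (7, 4, 3) Hamming code
x = np.zeros(7, dtype=int)           # the zero vector is always a codeword
y = x.copy()
y[4] ^= 1                            # a single error in position 5 (1-indexed)
print(correct_single_error(y, H))    # [0 0 0 0 0 0 0], the error is repaired
```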
Before we move on, we should note one interesting feature of linear codes.
Definition: A code is called systematic if it can be realized by an encoding function that appends some number $n - k$ of “check bits” to the end of each message.
The interesting feature is that all linear codes are systematic. The reason is as follows. The generator matrix $G$ of a linear code has as rows a basis for the code as a linear subspace. We can perform Gaussian elimination on $G$ and get a new generator matrix that looks like $(I \mid A)$, where $I$ is the identity matrix of the appropriate size and $A$ is some junk. The point is that encoding using this generator matrix leaves the message unchanged, and adds a bunch of bits to the end that are determined by $A$. It’s a different encoding function on $\{ 0,1 \}^k$, but it has the same image in $\{ 0,1 \}^n$, i.e. the code is unchanged. Gaussian elimination just performed a change of basis.
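Here’s a sketch of that change of basis: ordinary Gauss–Jordan elimination with arithmetic mod 2. (One caveat glossed over above: the pivots only land in the first $k$ columns when those columns are linearly independent; otherwise you’d also reorder coordinates, giving an equivalent code.) The example matrix is made up, but it has the same row space as the toy $G$ from earlier:

```python
import numpy as np

def rref_mod2(G):
    """Row-reduce G over F_2. When the pivots land in the first k columns,
    the result is the systematic form [I | A]."""
    G = G.copy() % 2
    rows, cols = G.shape
    pivot_row = 0
    for col in range(cols):
        below = np.nonzero(G[pivot_row:, col])[0]    # rows with a 1 in this column
        if len(below) == 0:
            continue
        swap = pivot_row + below[0]
        G[[pivot_row, swap]] = G[[swap, pivot_row]]  # move a pivot into place
        for r in range(rows):                        # clear the rest of the column
            if r != pivot_row and G[r, col] == 1:
                G[r] = (G[r] + G[pivot_row]) % 2
        pivot_row += 1
        if pivot_row == rows:
            break
    return G

G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [1, 1, 0, 0, 0, 1, 1],
    [0, 1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1, 0],
])
print(rref_mod2(G))  # [I | A]: each message now appears verbatim in the first 4 bits
```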
If you work out the parameters of the Hamming code, you’ll see that it is a systematic code which adds $m = \log(n+1)$ check bits to a message, and we’re able to correct a single error in this code. An obvious question is whether this is necessary: could we get away with adding fewer check bits? The answer is no, and a simple “information theoretic” argument shows this. A single index out of $n$ requires $\log(n)$ bits to describe, and being able to correct a single error is like identifying a unique index. Without logarithmically many bits, you just don’t have enough information.
The Hamming bound and perfect codes
One nice fact about Hamming codes is that they optimize a natural problem: the problem of maximizing $k$ given a fixed choice of $n$, $d$, and $q$. To get this, let $V_n(r)$ denote the volume of a ball of radius $r$ in the space $\{ 0,1 \}^n$. I.e., if you fix any string (doesn’t matter which) $x \in \{ 0,1 \}^n$, then $V_n(r)$ is the size of the set $\{ y : d(x,y) \leq r \}$, where $d(x,y)$ is the Hamming distance.
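Concretely, $V_n(r) = \sum_{i=0}^{r} \binom{n}{i}$, since a string within distance $r$ of $x$ is determined by choosing which of at most $r$ coordinates to flip. In code (using Python 3.8+’s math.comb):

```python
from math import comb

def hamming_ball_volume(n, r):
    """V_n(r): the number of strings within Hamming distance r of a fixed string."""
    return sum(comb(n, i) for i in range(r + 1))

print(hamming_ball_volume(7, 1))  # 8: the center itself plus its 7 single-bit flips
```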
There is a theorem called the Hamming bound, which describes a limit to how much you can pack disjoint balls of radius $\lfloor (d-1)/2 \rfloor$ inside $\{ 0,1 \}^n$.

Theorem: If an $(n, k, d)_2$-code exists, then

$2^k \cdot V_n \left( \left\lfloor \frac{d-1}{2} \right\rfloor \right) \leq 2^n$
Proof. The proof is quite simple. To say a code $C$ has distance $d$ means that for every string $x \in C$ there is no other string $y \in C$ within Hamming distance $d - 1$ of $x$. In other words, the balls centered around both $x$ and $y$ of radius $\lfloor (d-1)/2 \rfloor$ are disjoint. The extra difference of one is for odd $d$, e.g. when $d = 3$ you need balls of radius 1 to guarantee no overlap. Now $|C| = 2^k$, so the total number of strings covered by all these balls is the left-hand side of the expression. But there are at most $2^n$ strings in $\{ 0,1 \}^n$, which establishes the desired inequality.
Now a code is called perfect if it actually meets the Hamming bound exactly. As you probably guessed, the Hamming codes are perfect codes. It’s not hard to prove this, and I’m leaving it as an exercise to the reader.
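I won’t spoil the exercise, but here’s a quick numerical check (not a proof) that the Hamming code parameters meet the bound with equality for small $m$:

```python
from math import comb

def hamming_ball_volume(n, r):
    return sum(comb(n, i) for i in range(r + 1))

# The (2^m - 1, 2^m - m - 1, 3)_2 Hamming code packs disjoint balls of radius 1.
for m in range(2, 8):
    n = 2**m - 1
    k = n - m
    print(m, 2**k * hamming_ball_volume(n, 1) == 2**n)  # True for each m
```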
The obvious follow-up question is whether there are any other perfect codes. The answer is yes, and some of them are nonlinear. But some of them are “trivial.” For example, when $d = 1$ you can just use the identity encoding to get the code $C = \{ 0,1 \}^n$. You can also just have a code which consists of a single codeword. There are also some codes that encode by repeating the message multiple times. These are called “repetition codes,” and all three of these examples are called trivial (as a definition). Now there are some nontrivial and nonlinear perfect codes I won’t describe here, but here is the nice characterization theorem.
Theorem [van Lint ’71, Tietäväinen ’73]: Let $C$ be a nontrivial perfect $(n, k, d)_q$-code. Then the parameters must either be those of a Hamming code, or one of the following two:
- A $(23, 12, 7)_2$-code
- A $(11, 6, 5)_3$-code
The last two examples are known as the binary and ternary Golay codes, respectively, which are also linear. In other words, every possible set of parameters for a perfect code can be realized as one of these three linear codes.
So this theorem was a big deal in coding theory. The Golay and Hamming codes were both discovered within a year of each other, in 1949 and 1950, but the nonexistence of other perfect linear codes was open for twenty more years. This wrapped up a very neat package.
Next time we’ll discuss the Singleton bound, which optimizes for a different quantity and is incomparable with perfect codes. We’ll define the Reed-Solomon codes and show that they optimize this bound as well. These codes are particularly famous for being the error correcting codes used in DVDs. We’ll then discuss the algorithmic issues surrounding decoding, and more recent connections to complexity theory.
Until then!