Machine learning is broadly split into two camps, statistical learning and non-statistical learning. The latter we’ve started to get a good picture of on this blog; we approached Perceptrons, decision trees, and neural networks from a non-statistical perspective. And generally “statistical” learning is just that, a perspective. Data is phrased in terms of independent and dependent variables, and statistical techniques are leveraged against the data. In this post we’ll focus on the simplest example of this, linear regression, and in the sequel see it applied to various learning problems.

As usual, all of the code presented in this post is available on this blog’s Github page.

## The Linear Model, in Two Variables

And so given a data set we start by splitting it into independent variables and dependent variables. For this section, we’ll focus on the case of two variables, $X, Y$. Here, if we want to be completely formal, $X, Y$ are real-valued random variables on the same probability space (see our primer on probability theory to keep up with this sort of terminology, but we won’t rely on it heavily in this post), and we choose one of them, say $X$, to be the *independent variable* and the other, say $Y$, to be the *dependent variable*. All that means is that we are assuming there is a relationship between $X$ and $Y$, and that we intend to use the value of $X$ to predict the value of $Y$. Perhaps a more computer-sciencey terminology would be to call the variables *features* and have *input features* and *output features*, but we will try to stick to the statistical terminology.

As a quick example, our sample space might be the set of all people, $X$ could be age, and $Y$ could be height. Then by calling age “independent,” we’re asserting that we’re trying to use age to predict height.

One of the strongest mainstays of statistics is the linear model. That is, when there aren’t any known relationships among the observed data, the simplest possible relationship one could discover is a linear one. A change in $X$ corresponds to a proportional change in $Y$, and so one could hope there exist constants $a, b$ so that (as random variables) $Y = aX + b$. If this were the case then we could just draw many pairs of sample values of $X$ and $Y$, and try to estimate the values of $a$ and $b$.

If the data actually lies on a line, then two sample points will be enough to get a perfect prediction. Of course, nothing is exact outside of mathematics. And so if we were to use data coming from the real world, and even if we were to somehow produce some constants $a, b$, our “predictor” would almost always be off by a bit. In the diagram below, where it’s clear that the relationship between the variables is linear, only a small fraction of the data points appear to lie on the line itself.

In such scenarios it would be hopelessly foolish to wish for a perfect predictor, and so instead we wish to summarize the trends in the data using a simple description mechanism. In this case, that mechanism is a line. Now the computation required to find the “best” coefficients of the line is quite straightforward once we pick a suitable notion of what “best” means.

Now suppose that we call our (presently unknown) prediction function $\hat{f}$. We often call the function we’re producing as a result of our learning algorithm the *hypothesis*, but in this case we’ll stick to calling it a prediction function. If we’re given a data point $(x, y)$ where $x$ is a value of $X$ and $y$ of $Y$, then the *error* of our predictor on this example is $|\hat{f}(x) - y|$. Geometrically this is the vertical distance from the actual $y$ value to our prediction for the same $x$, and so we’d like to minimize this error. Indeed, we’d like to minimize the sum of all the errors of our linear predictor over all data points we see. We’ll describe this in more detail momentarily.

The word “minimize” might evoke long suppressed memories of torturous Calculus lessons, and indeed we will use elementary Calculus to find the optimal linear predictor. But one small catch is that our error function, being an absolute value, is not differentiable! To mend this we observe that minimizing the absolute value of a number is the same as minimizing the square of a number. In fact, $|x| = \sqrt{x^2}$, and the square root function and its inverse are both increasing functions; they preserve minima of sets of nonnegative numbers. So we can describe our error as $(\hat{f}(x) - y)^2$, and use calculus to our heart’s content.

To explicitly formalize the problem, given a set of data points $(x_1, y_1), \dots, (x_n, y_n)$ and a potential prediction line $\hat{f}(x) = ax + b$, we define the error of $\hat{f}$ on the examples to be

$$S(a, b) = \sum_{i=1}^n (y_i - \hat{f}(x_i))^2$$

Which can also be written as

$$S(a, b) = \sum_{i=1}^n (y_i - ax_i - b)^2$$

Note that since we’re fixing our data sample, the function $S$ is purely a function of the variables $a, b$. Now we want to minimize this quantity with respect to $a, b$, so we can take a gradient,

$$\frac{\partial S}{\partial b} = -2 \sum_{i=1}^n (y_i - ax_i - b)$$

$$\frac{\partial S}{\partial a} = -2 \sum_{i=1}^n (y_i - ax_i - b)x_i$$

and set them simultaneously equal to zero. In the first we solve for $b$:

$$b = \frac{1}{n} \sum_{i=1}^n (y_i - ax_i)$$

If we denote by $\bar{x}, \bar{y}$ the averages of the $x_i, y_i$, this is just $b = \bar{y} - a\bar{x}$. Substituting into the other equation we get

$$\sum_{i=1}^n (y_i - ax_i - \bar{y} + a\bar{x})x_i = 0$$

Which, by factoring out $a$, further simplifies to

$$\sum_{i=1}^n (y_i - \bar{y})x_i = a\sum_{i=1}^n (x_i - \bar{x})x_i$$

And so

$$a = \frac{\sum_{i=1}^n (y_i - \bar{y})x_i}{\sum_{i=1}^n (x_i - \bar{x})x_i}$$

And it’s not hard to see (by taking second partials, if you wish) that this corresponds to a minimum of the error function. This closed form gives us an immediate algorithm to compute the optimal linear estimator. In Python,

```python
avg = lambda L: 1.0 * sum(L) / len(L)

def bestLinearEstimator(points):
   xAvg, yAvg = map(avg, zip(*points))

   aNum = 0
   aDenom = 0
   for (x, y) in points:
      aNum += (y - yAvg) * x
      aDenom += (x - xAvg) * x

   a = float(aNum) / aDenom
   b = yAvg - a * xAvg
   return (a, b), lambda x: a * x + b
```

and a quick example of its use on synthetic data points:

```python
>>> import random
>>> a = 0.5
>>> b = 7.0
>>> points = [(x, a*x + b + (random.random() * 0.4 - 0.2))
...           for x in [random.random() * 10 for _ in range(100)]]
>>> bestLinearEstimator(points)[0]
(0.49649543577814137, 6.993035962110321)
```

## Many Variables and Matrix Form

If we take those two variables and tinker with them a bit, we can represent the solution to our regression problem in a different (a priori strange) way in terms of matrix multiplication.

First, we’ll transform the prediction function into matrixy style. We add in an extra variable $x_0$ which we force to be 1, and then we can write our prediction line $y = ax + b$ in a *vector form* as $\mathbf{w} = (b, a)$. What is the benefit of such an awkward maneuver? It allows us to write the *evaluation* of our prediction function as a dot product

$$\hat{f}(x_0, x) = \langle \mathbf{w}, (x_0, x) \rangle = bx_0 + ax$$

Now the notation is starting to get quite ugly, so let’s rename the coefficients of our line to $w_0, w_1$, and the coefficients of the input data to $x_0, x_1$. The output is still $y$. Here we understand implicitly that the indices line up: if $w_0$ is the constant term, then that makes $x_0$ our extra variable (often called a *bias* variable by statistically minded folks), and $x_1$ is the linear term with coefficient $w_1$. Now we can just write the prediction function as

$$\hat{f}(x_0, x_1) = w_0x_0 + w_1x_1 = \langle \mathbf{w}, \mathbf{x} \rangle$$

We still haven’t really seen the benefit of this vector notation (and we won’t see its true power until we extend this to kernel ridge regression in the next post), but we do have at least one additional notational convenience: we can add arbitrarily many input variables without changing our notation.

If we expand our horizons to think of the random variable $Y$ depending on the $n$ random variables $X_1, \dots, X_n$, then our data will come in tuples of the form $(\mathbf{x}, y) = ((x_0, x_1, \dots, x_n), y)$, where again the $x_0$ is fixed to 1. Then expanding our line to $\mathbf{w} = (w_0, w_1, \dots, w_n)$, our evaluation function is still $\hat{f}(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle$. Excellent.
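
To make the bias-variable trick concrete, here is a tiny sketch with made-up numbers: once we prepend the coordinate $x_0 = 1$ to each input, evaluating the line is a single dot product.

```python
# A minimal sketch of the bias-variable trick: with an extra coordinate
# x_0 = 1, evaluating the line y = 3 + 2*x_1 + 5*x_2 is one dot product.
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [3, 2, 5]     # (w_0, w_1, w_2), constant term first
x = [1, 4, 2]     # (x_0, x_1, x_2), with the bias variable x_0 = 1
print(dot(w, x))  # 3*1 + 2*4 + 5*2 = 21
```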

Now we can write our error function using the same style of compact notation. In this case, we will store all of our input data points as rows of a matrix $X$ and the output values as entries of a vector $y$. Forgetting the boldface notation and just understanding everything as a vector or matrix, we can write the deviation of the predictor (on all the data points) from the true values as

$$Xw - y$$

Indeed, each entry of the vector $Xw$ is a dot product of a row of $X$ (an input data point) with the coefficients of the line $w$. It’s just $\hat{f}$ applied to all the input data and stored as the entries of a vector. We still have the sign issue we did before, and so we can just take the square norm of the result and get the same effect as before:

$$S(w) = \| Xw - y \|^2$$

This is just taking a dot product of $Xw - y$ with itself. This form is awkward to differentiate because the variable $w$ is nested in the norm. Luckily, we can get the same result by viewing $Xw - y$ as an $n$-by-1 matrix, transposing it, and multiplying by $Xw - y$.

$$S(w) = (Xw - y)^{\textup{T}}(Xw - y)$$

This notation is widely used, in particular because we have nice formulas for calculus on such forms. And so we can compute a gradient of $S$ with respect to each of the variables in $w$ at the same time, and express the result as a vector. This is what taking a “partial derivative” with respect to a vector means: we just represent the system of partial derivatives with respect to each entry as a vector. In this case, and using formula 61 from page 9 and formula 120 on page 13 of The Matrix Cookbook, we get

$$\frac{\partial S}{\partial w} = 2X^{\textup{T}}Xw - 2X^{\textup{T}}y$$

Indeed, it’s quite trivial to prove the latter formula, that for any fixed vector $x$, the partial $\partial \langle w, x \rangle / \partial w = x$. If the reader feels uncomfortable with this, we suggest taking the time to unpack the notation (which we admittedly just spent so long packing) and take a classical derivative entry-by-entry.
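
For readers who want a sanity check rather than a proof, here is a quick numerical comparison of the gradient $\partial S / \partial w = 2X^{\textup{T}}Xw - 2X^{\textup{T}}y$ against central finite differences (a sketch using numpy; the random data is arbitrary):

```python
# A numerical sanity check (not a proof) of dS/dw = 2 X^T X w - 2 X^T y,
# comparing the formula to central finite differences of S.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
y = rng.standard_normal(10)
w = rng.standard_normal(3)

S = lambda w: np.dot(X.dot(w) - y, X.dot(w) - y)
analytic = 2 * X.T.dot(X).dot(w) - 2 * X.T.dot(y)

eps = 1e-6
numeric = np.array([
    (S(w + eps * e) - S(w - eps * e)) / (2 * eps)
    for e in np.eye(3)  # perturb one coordinate of w at a time
])
print(np.allclose(analytic, numeric, atol=1e-4))  # True
```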

Solving the above quantity for $w$ gives $w = (X^{\textup{T}}X)^{-1}X^{\textup{T}}y$, assuming the inverse of $X^{\textup{T}}X$ exists. Again, we’ll spare the details proving that this is a minimum of the error function, but inspecting second derivatives provides this.
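
As another sanity check, the closed form $w = (X^{\textup{T}}X)^{-1}X^{\textup{T}}y$ agrees with numpy’s built-in least-squares solver, which minimizes the same squared error (a sketch on arbitrary random data):

```python
# The closed form (X^T X)^{-1} X^T y should agree with numpy's
# least-squares solver on well-conditioned data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)

wClosed = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
wLstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(wClosed, wLstsq))  # True
```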

Now we can have a slightly more complicated program to compute the linear estimator for many input variables and one output variable. It’s “more complicated” in that much more mathematics is happening behind the code, but just admire the brevity!

```python
from numpy import array, dot, transpose
from numpy.linalg import inv

def bestLinearEstimatorMV(points):
   # input points are n+1 tuples of n inputs and 1 output
   X = array([[1] + list(p[:-1]) for p in points]) # add bias as x_0
   y = array([p[-1] for p in points])

   Xt = transpose(X)
   theInverse = inv(dot(Xt, X))
   w = dot(dot(theInverse, Xt), y)
   return w, lambda x: dot(w, x)
```

Here are some examples of its use. First we check consistency by verifying that it agrees with the test used in the two-variable case (note the reordering of the variables):

```python
>>> print(bestLinearEstimatorMV(points)[0])
[ 6.97687136  0.50284939]
```

And a more complicated example:

```python
>>> trueW = array([-3,1,2,3,4,5])
>>> bias, linearTerms = trueW[0], trueW[1:]
>>> points = [tuple(v) + (dot(linearTerms, v) + bias + noise(),)
...           for v in [numpy.random.random(5) for _ in range(100)]]
>>> print(bestLinearEstimatorMV(points)[0])
[-3.02698484  1.03984389  2.01999929  3.0046756   4.01240348  4.99515123]
```

As a quick reminder, all of the code used in this post is available on this blog’s Github page.

## Bias and Variance

There is a deeper explanation of the linear model we’ve been studying. In particular, there is a general technique in statistics called maximum likelihood estimation. And, to be as concise as possible, the linear regression formulas we’ve derived above provide the maximum likelihood estimator for a line with symmetric “Gaussian noise.” Rather than go into maximum likelihood estimation in general, we’ll just describe what it means to be a “line with Gaussian noise,” and measure the linear model’s bias and variance with respect to such a model. We saw this very briefly in the test cases for the code in the past two sections. Just a quick warning: the proofs we present in this section will use the notation and propositions of basic probability theory we’ve discussed on this blog before.

So what we’ve done so far in this post is describe a computational process that accepts as input some points and produces as output a line. We have said nothing about the quality of the line, and indeed we cannot say anything about its quality without some assumptions on how the data was generated. In usual statistical fashion, we will assume that the true data is being generated by an actual line, but with some added noise.

Specifically, let’s return to the case of two random variables $X, Y$. If we assume that $Y$ is perfectly determined by $X$ via some linear equation $Y = aX + b$, then as we already mentioned we can produce a perfect estimator using a mere two examples. On the other hand, what if every time we take a new example $(x_i, y_i)$, its corresponding $y$ value is perturbed by some random coin flip (flipped at the time the example is produced)? Then the value of $y_i$ would be $y_i = ax_i + b + \eta_i$, and we say all the $\eta_i$ are drawn independently and uniformly at random from the set $\{-1, 1\}$. In other words, with probability 1/2 we get -1, and otherwise 1, and none of the $\eta_i$ depend on each other. In fact, we just want to make the blanket assumption that the noise doesn’t depend on *anything* (not the data drawn, the method we’re using to solve the problem, what our favorite color is…). In the notation of random variables, we’d call $H$ the random variable producing the noise (in Greek, $H$ is the capital letter for $\eta$), and write $Y = aX + b + H$.

More realistically, the noise isn’t chosen uniformly from $\{-1, 1\}$, but is rather chosen to be Gaussian with mean $0$ and some variance $\sigma^2$. We’d denote this by $\eta_i \sim N(0, \sigma^2)$, and say the $\eta_i$ are drawn independently from this normal distribution. If the reader is uncomfortable with Gaussian noise (it’s certainly a nontrivial problem to generate it computationally), just stick to the noise we defined in the previous paragraph. For the purpose of this post, any symmetric noise will result in the same analysis (and the code samples above use uniform noise over an interval anyway).
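
For concreteness, both kinds of noise are easy to generate with Python’s standard library (a sketch; the choice of $\sigma = 0.5$ is arbitrary):

```python
# Two ways to generate the symmetric noise described above: the fair
# coin flip over {-1, 1}, and Gaussian noise with mean 0 and std sigma.
import random

def coinFlipNoise():
    return random.choice([-1, 1])

def gaussianNoise(sigma=0.5):
    return random.gauss(0, sigma)

random.seed(0)
samples = [gaussianNoise() for _ in range(10000)]
print(abs(sum(samples) / len(samples)) < 0.05)  # sample mean is near zero
```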

Moving back to the case of many variables, we assume our data points are given by $y = Xw + H$, where $X$ is the observed data, $w$ the true coefficients, and $H$ is Gaussian noise with mean zero and some (unknown) standard deviation $\sigma$. Then if we call $\hat{w}$ our predicted linear coefficients (randomly depending on which samples are drawn), then its expected value conditioned on the data is

$$\textup{E}(\hat{w} \mid X) = \textup{E}((X^{\textup{T}}X)^{-1}X^{\textup{T}}y \mid X)$$

Replacing $y$ by $Xw + H$,

$$\textup{E}(\hat{w} \mid X) = \textup{E}((X^{\textup{T}}X)^{-1}X^{\textup{T}}(Xw + H) \mid X)$$

Notice that the first term is a fat matrix ($X^{\textup{T}}X$) multiplied by its own inverse, so that cancels to the identity. By linearity of expectation, we can split the resulting expression up as

$$\textup{E}(\hat{w} \mid X) = w + (X^{\textup{T}}X)^{-1}X^{\textup{T}}\textup{E}(H \mid X)$$

but $w$ is constant (so its expected value is just itself) and $\textup{E}(H) = 0$ by assumption that the noise is symmetric. So then the expected value of $\hat{w}$ is just $w$. Because this is true for all choices of data $X$, the bias of our estimator is zero.
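
We can illustrate the zero-bias claim empirically: fixing $X$ and averaging the estimator over many independently drawn noise vectors recovers the true coefficients (a simulation sketch; the particular $w$, $\sigma$, and sample sizes are arbitrary):

```python
# An empirical illustration of zero bias: averaging the estimator over many
# independently drawn noise vectors (with X fixed) recovers the true w.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 3))
trueW = np.array([1.0, -2.0, 0.5])
pinv = np.linalg.inv(X.T.dot(X)).dot(X.T)  # the map y -> w-hat

estimates = []
for _ in range(2000):
    y = X.dot(trueW) + rng.normal(0, 0.5, size=30)  # fresh noise each trial
    estimates.append(pinv.dot(y))

print(np.allclose(np.mean(estimates, axis=0), trueW, atol=0.05))  # True
```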

The question of variance is a bit trickier, because the variance of the entries of $\hat{w}$ actually do depend on which samples are drawn. Briefly, to compute the covariance matrix of the $\hat{w}_i$ as variables depending on $X$, we apply the definition:

$$\textup{Var}(\hat{w} \mid X) = \textup{E}\left((\hat{w} - \textup{E}(\hat{w}))(\hat{w} - \textup{E}(\hat{w}))^{\textup{T}} \mid X\right)$$

And after some tedious expanding and rewriting, and recalling that the covariance matrix of $H$ is just the diagonal matrix $\sigma^2 I_n$, we get that

$$\textup{Var}(\hat{w} \mid X) = \sigma^2 (X^{\textup{T}}X)^{-1}$$
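
This formula can also be checked by simulation: the sample covariance of many estimates approaches $\sigma^2 (X^{\textup{T}}X)^{-1}$ (again a sketch with arbitrarily chosen parameters):

```python
# Checking the variance formula empirically: the sample covariance of the
# estimates approaches sigma^2 (X^T X)^{-1} as the number of trials grows.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 2))
trueW = np.array([2.0, -1.0])
sigma = 0.5
pinv = np.linalg.inv(X.T.dot(X)).dot(X.T)

estimates = np.array([
    pinv.dot(X.dot(trueW) + rng.normal(0, sigma, size=30))
    for _ in range(20000)
])

empirical = np.cov(estimates, rowvar=False)          # 2x2 sample covariance
theoretical = sigma**2 * np.linalg.inv(X.T.dot(X))
print(np.allclose(empirical, theoretical, atol=1e-3))  # True
```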

This means that if we get unlucky and draw some sample which makes some entries of $(X^{\textup{T}}X)^{-1}$ big, then our estimator will vary a lot from the truth. This can happen for a variety of reasons, one of which is including irrelevant input variables in the computation. Unfortunately a deeper discussion of the statistical issues that arise when trying to make hypotheses in such situations is beyond the scope of this post. However, the concept of a bias-variance tradeoff is quite relevant. As we’ll see next time, a technique called *ridge regression* sacrifices some bias in this standard linear regression model in order to dampen the variance. Moreover, a “kernel trick” allows us to make non-linear predictions, turning this simple model for linear estimation into a very powerful learning tool.

Until then!

The formatting’s messed up in the second half, makes it very difficult to read…


In the source code, yes? Just fixed that now. Thanks for the heads up.


Should dS/dA not also be negative? Also, your first and third code snippets show up on one line with tags, at least for me.

Great article though!


Yes of course. Fixed!


“Again, we’ll spare the details proving that this is a minimum of the error function, but inspecting second derivatives provides this.”

Perhaps mentioning that an orthogonal projection has taken place would provide a little intuition as to why the error function is minimized? I enjoy approaching regression geometrically, though it’s nice to understand it using straight calculus as well.


That’s a good observation. I was just trying to stay away from too much linear algebra in this post, since it’s not entirely necessary. The calculus approach is nice, too, because to move up to ridge regression you just need to add a normalization term to the error function and follow the same steps. I don’t know if there’s a nice interpretation of ridge regression as a projection, but there very well may be.


> Then by calling age “dependent,” we’re asserting that we’re trying to use age to predict height.

Did you mean to say that age is the “independent” variable in this example?

http://en.wikipedia.org/wiki/Dependent_and_independent_variables


When you say “$w = (X^{\textup{T}}X)^{-1}X^{\textup{T}}y$, assuming the inverse of $X$ exists”, you mean “assuming the inverse of $X^{\textup{T}}X$ exists”, right? If the inverse of $X$ exists, you get the much simpler expression $w = X^{-1}y$, I think.


Yes, correct! Thanks for catching that.


Reblogged this on Personal Notes and commented:

Great article! Reblogging it for future reference.

This post also cleared a confusion I had a few months back about the difference between statistical learning and non-statistical learning, or just machine learning. Love the blog!


Very useful thank you! One comment: would it not be better to simply solve the linear system X^TXw = X^Ty without taking the inverse of X^TX?


I believe that in the absence of special structure on X, taking inverses and solving linear systems are equivalent in runtime (or at least, they are not known to be different).


Hi, great post!

One thing that’s bothering me, the part where you derive the gradient of S with respect to w. You mention some formulas in the Matrix Cookbook. I don’t think the formulas you reference are the same ones in the link anymore.

Second, it’s very clear to me that $\partial(w^{\textup{T}}w)/\partial w = 2w$. What’s less clear is what happens when we introduce $X^{\textup{T}}X$ into the mix and get $\partial(w^{\textup{T}}X^{\textup{T}}Xw)/\partial w = 2X^{\textup{T}}Xw$.

How can I see this more clearly? Is it a matter of unpacking notation? Should I think of $X^{\textup{T}}X$ as a “constant” unaffected by $w$?

Love your blog!


OK, I think I got it. It seems to be dependent on the fact that $X^{\textup{T}}X$ is a symmetric matrix; the unpacking checks out given that fact.


I’ve looked at this multiple times, but I can’t seem to figure out why, at the beginning, $b$ doesn’t equal $-\frac{1}{n}\sum_{i=1}^n (y_i - ax_i)$. How did we get rid of the negative sign?


I believe I have a typo in the line

$$\frac{\partial S}{\partial b} = 2 \sum_{i=1}^n (y_i - ax_i - b)$$

Which should have a negative sign for the $\sum$ part, as follows.

$$\frac{\partial S}{\partial b} = -2 \sum_{i=1}^n (y_i - ax_i - b)$$
