Ben Recht, a computer science professor at UC Berkeley, recently wrapped up a 3-month series of blog posts on Paul Meehl’s “Philosophical Psychology.”

Recht has a table of contents for his blog series. It loosely tracks a set of lectures that Meehl gave in 1989 at the University of Minnesota. In the lectures, Meehl surveys the philosophy of science, lays out a framework for scientific debate, and critiques scientific practice. Recht summarizes his arguments, simplifies the ideas, provides examples, and offers his own commentary in light of today's computerized world.

I usually hate reading things in the “philosophy of science” or “philosophy of mathematics” categories. They tend to come off as jargon-dense, idle navel-gazing. The discussions around them boil down to weirdly aggressive superiority contests, with Bayesians and constructivists trying way too hard to take the high ground. I leave these discussions thinking, more often than not, “So what?”

While I haven’t watched Meehl’s lectures themselves (I had never heard of him before this), Recht’s series covers these topics in a much more palatable way. And the “so what” is clear: statistical studies in the social sciences are often junk. Why is that, and what can be done to improve the situation?

One of the more interesting parts of the series was how Meehl and Recht helped me realize how glued I was to the statistical hypothesis test as the central means of gleaning knowledge in science. What is the alternative, after all, to a properly performed randomized controlled trial with a large sample size? In the series, Recht explains how this gold standard is simply not enough for many kinds of questions. He ties it to the mathematical underpinnings of what hypothesis tests are doing, and relates it to our understanding of the spurious correlations all over social science.
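To make the spurious-correlation point concrete, here is a toy sketch (my own illustration, not code from the series): generate a pile of variables that are pure noise, test every adjacent pair for correlation, and watch a steady stream of them clear the conventional p < 0.05 bar anyway.

```python
import numpy as np
from scipy import stats

# Toy illustration (mine, not from Recht's series): correlate pairs of
# variables that are pure, independent noise and count how many pass the
# conventional p < 0.05 significance threshold anyway.
rng = np.random.default_rng(0)

n_subjects = 100   # observations per variable
n_variables = 200  # independent noise variables

data = rng.normal(size=(n_variables, n_subjects))

false_positives = 0
for i in range(n_variables - 1):
    r, p = stats.pearsonr(data[i], data[i + 1])
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_variables - 1} null correlations were 'significant'")
# By construction, about 5% will be: the test bounds the per-comparison
# false-positive rate; it does not certify that any single finding is real.
```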

In a later post in the series, Recht shows how, before the program of hypothesis testing became the norm, there were other ways to run experiments. He recounts biochemist F. Gowland Hopkins’s 1906 experiment demonstrating the effect of diet on rat growth. It wasn’t a randomized controlled trial, but two well-timed interventions that demonstrated a clear causal relationship. And it only took 16 rats. Having a concrete alternative to a hypothesis test was a pleasant intellectual jolt.
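For intuition about why a well-timed intervention can be so convincing, here is a toy simulation of that design’s logic (entirely made-up numbers, not Hopkins’s data): two groups, one diet supplement, swapped between the groups partway through. If the supplement is what matters, the growth trends should trade places exactly when the diets do.

```python
import numpy as np

# Toy simulation of a diet-swap design (made-up numbers, not Hopkins's data).
# Group A starts on the supplemented diet, group B on the plain diet; at the
# switch day the diets are swapped. A causal effect shows up as the growth
# rates trading places at exactly that point.
rng = np.random.default_rng(1)

days = np.arange(50)
switch_day = 25
plain_rate, supplement_rate = 0.2, 1.2  # grams gained per day

def group_weights(supplement_first: bool) -> np.ndarray:
    on_supplement = (days < switch_day) == supplement_first
    rates = np.where(on_supplement, supplement_rate, plain_rate)
    return 50 + np.cumsum(rates) + rng.normal(0, 0.5, size=days.size)

group_a = group_weights(supplement_first=True)
group_b = group_weights(supplement_first=False)

# Before the switch, A pulls ahead; after it, B grows fast while A stalls.
print("gap at day 24:", round(group_a[24] - group_b[24], 1))
print("gap at day 49:", round(group_a[49] - group_b[49], 1))
```

No p-value is needed to read the result: the reversal of the growth curves at the switch day is the causal signature itself.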

It’s an intriguing series, well worth the time. Go read it!


DOI: https://doi.org/10.59350/3vper-hra91