Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I break down a topic related to regression to the mean. Some weeks, I'll explain what it is, how it works, why you hear so much about it, and how you can harness its power for yourself. In other weeks, I'll give practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
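To make the mechanics concrete, here's a minimal sketch in Python of how that weekly split might look. The field names, group size, and data structure are all invented for illustration; this is the shape of the method described above, not the column's actual tooling.

```python
# A minimal sketch of the weekly procedure, assuming a list of player dicts
# with a stat column and a fantasy points-per-game column. Field names and
# the group size are illustrative placeholders.

def build_groups(players, metric, group_size=12):
    """Rank players by `metric` and peel off the top and bottom `group_size`."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:group_size], ranked[-group_size:]   # (Group A, Group B)

def avg(group, field):
    return sum(p[field] for p in group) / len(group)

def make_prediction(players, metric):
    group_a, group_b = build_groups(players, metric)
    # Step 1: verify that Group A has outscored Group B to this point.
    if avg(group_a, "ppg_to_date") <= avg(group_b, "ppg_to_date"):
        raise ValueError("Group A hasn't outscored Group B; no prediction this week.")
    # Step 2: the prediction itself -- Group B outscores Group A going forward.
    return group_a, group_b
```

The verification step matters because the prediction is only interesting (and testable) if Group A really has outscored Group B up to that point.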
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Ja'Marr Chase is one of the top performers in my sample, then Ja'Marr Chase goes into Group A, and may the fantasy gods show mercy on my predictions.
And then, because predictions are meaningless without accountability, I track and report my results. Here's last year's season-ending recap, which covered the outcome of every prediction made in our eight-year history, giving our top-line record (46-15, a 75% hit rate) and lessons learned along the way.
Our Year to Date
Sometimes, I use this column to explain the concept of regression to the mean. In Week 2, I discussed what it is and what this column's primary goals would be. In Week 3, I explained how we could use regression to predict changes in future performance -- who would improve, who would decline -- without knowing anything about the players themselves. In Week 7, I illustrated how small differences over large samples were more meaningful than large differences over small samples. In Week 9, I showed how merely looking at a leaderboard can give information on how useful and predictive an unfamiliar statistic might be.
Sometimes, I use this column to point out general examples of regression without making specific, testable predictions. In Week 5, I looked at more than a decade's worth of evidence showing how strongly early-season performances regressed toward preseason expectations.
Other times, I use this column to make specific predictions. In Week 4, I explained that touchdowns tend to follow yards and predicted that the players with the highest yard-to-touchdown ratios would begin outscoring the players with the lowest. In Week 6, I showed the evidence that yards per carry was predictively useless and predicted the lowest ypc backs would outrush the highest ypc backs going forward. In Week 8, I discussed how most quarterback stats were fairly stable, but interceptions were the major exception.
The Scorecard
| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
|---|---|---|---|
| Yard-to-TD Ratio | Group A averaged 25% more PPG | Group B averaged 12% more PPG | None (Win!) |
| Yards per Carry | Group A averaged 39% more rushing yards per game | Group A averaged 33% more rushing yards per game | None (Loss) |
| Interceptions Thrown | Group A threw 69% as many interceptions | Group B has thrown 58% as many interceptions | 2 |
We've known it was coming for a few weeks now, but our yards per carry prediction officially closed as a loser this week. The 6% swing from Group A to Group B was the second-smallest in this prediction's history (one time, Group A actually increased its lead). This brings the record of these predictions down to 10-3 all time.
But again, the failure of the prediction shouldn't be viewed as a vindication of yards per carry as a useful predictor. Our "low-ypc" backs averaged just 3.39 yards per carry at the time of the prediction, but they averaged 4.53 in the four weeks since. Group A led Group B by 1.98 yards per carry leading up to the prediction, but that lead fell to just 0.43 since.
So why did the prediction fail? Because random stuff happens in small samples. Chuba Hubbard got hurt, and Rico Dowdle turned into Jim Brown in his absence. Alvin Kamara and Tony Pollard saw their usage fall off a cliff. Normally, the good breaks and the bad breaks happen to both samples fairly evenly, but sometimes randomness favors Group B a bit more, and we wind up with an exceptionally large swing (in our very first attempt, production swung 76% in favor of Group B), while other times it favors Group A.
Randomness remains fairly evenly distributed in our interceptions prediction, though, with the groups performing about as expected. Over the last two weeks, the "low-interception" teams average 0.7 interceptions per game, and the "high-interception" teams... also average 0.7 interceptions per game. Four teams have thrown three interceptions over the last two weeks, and all four were "low-interception" teams; Kansas City, Washington, Indianapolis, and Dallas threw 12 interceptions in 28 games through seven weeks. They've combined to double their season total in just 8 games since. Over small samples, interceptions are largely random.
Updating Our Guiding Principles
In Week 3, I outlined our three guiding principles for predicting regression, which I referred to as "the North Star of all future analysis."
- Principle #1: Everything regresses to the mean.
- Principle #2: Not everything regresses at the same rate.
- Principle #3: Not everything has the same mean.
It occurred to me that there's a fourth, equally important principle that I've written about before, and it was remiss of me to omit it:
- Principle #4: Means move.
But as I think on it some more, this is really just a much more accurate restatement of the second principle. If you remember, "regression to the mean" is the observation that any time there's a random draw, the expected outcome should conform to the underlying probabilities regardless of what the results were on the last random draw. If I roll a six-sided die 100 times, I should expect the average of all rolls to be around 3.5; this is true even if the average of the last 100 rolls was 4.2. The die isn't "hot"; randomness is just random.
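If you'd like to see that borne out, here's a quick Python simulation of the die example. It rolls pairs of 100-roll stretches, keeps only the pairs whose first stretch came in "hot", and checks what the follow-up stretch averaged; I've used a cutoff of 3.8 rather than 4.2 simply so that hot stretches show up in a reasonable number of trials.

```python
# Simulate pairs of 100-roll stretches and check whether a "hot" first
# stretch tells us anything about the next one. (It doesn't.)
import random

random.seed(1)

def batch_avg(n=100):
    return sum(random.randint(1, 6) for _ in range(n)) / n

pairs = [(batch_avg(), batch_avg()) for _ in range(20_000)]
followups = [second for first, second in pairs if first >= 3.8]

print(f"hot stretches found: {len(followups)}")
print(f"average of the next 100 rolls after a hot stretch: "
      f"{sum(followups) / len(followups):.2f}")   # lands right around 3.5
```

No matter how hot the first 100 rolls ran, the next 100 average out to roughly 3.5, because the die has no memory of the first batch.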
Likewise, if Ja'Marr Chase's "true performance level" includes breaking 10% of all tackle attempts, then he might have a lot of very big games early in the year if he just happens to break 20% of tackle attempts... but we should still expect a broken tackle rate closer to 10% going forward. Calling it "regression" is a bit misleading because it makes it look like a directed force, like the universe is bending his short-run averages to match his underlying probabilities, but that's not the case. The universe has already forgotten his past performance, and his future performance is most likely to conform to the underlying distribution simply because... it's the underlying distribution.
(This is why many proponents prefer the term "reversion to the mean" rather than "regression to the mean". Performance isn't moving in the direction of the underlying expectation; it's returning to the underlying expectation.)
Because this is the mechanism behind regression, it's incorrect to say that "not everything regresses at the same rate". In fact, for every process in the known universe that can be modeled probabilistically, the expectation going forward should always be that it conforms to the underlying distribution. Regression affects everything at the same rate.
The only reason it doesn't look like it from our perspective is that sometimes the means that we're reverting to will move.
Chase Brown averaged 2.5 yards per carry over the first five weeks. Travis Etienne Jr. averaged 5.8. Was Brown suddenly a "2.5 ypc back" or Etienne a "5.8 ypc back"? No, yards per carry is famously prone to randomness, which means the strongest expectation is that their actual mean remained unchanged (probably somewhere around league average). In the four weeks since, Brown averaged 6.0 ypc while Etienne averaged 3.7; have their means shifted? No, again, ypc is just especially random; our expectation for the underlying distribution remains unchanged.
Sam Darnold, on the other hand, leads the league with 9.6 yards per pass attempt this year. His career average is 7.2; should we expect Darnold to average 7.2 YPA going forward? No—yards per attempt is one of the most stable statistics in football. You don't lead the league in a category without overperforming to some extent, so he's unlikely to maintain that 9.6 yard average... but he's equally unlikely to revert all the way back to 7.2. His "true performance level", his mean, has almost certainly moved.
Again, from an outside perspective, this looks a lot like "yards per rush attempt regresses much more quickly than yards per pass attempt", but in practice, both immediately revert to the underlying mean (in expectation). It's just that because the former is more dominated by randomness, any over- or under-performance is less likely to be an indicator that the underlying mean has changed.
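Here's a toy simulation of that idea, with every number invented purely for the sketch. Each simulated player has a true underlying mean; when per-game noise swamps the spread in those true means (the ypc-like case), a player's first-half average barely predicts his second-half average, and when the spread in true means is large relative to the noise (the YPA-like case), it predicts it quite well. In both cases, every player's expectation going forward is simply his own underlying mean.

```python
# Split-half correlation for a noise-dominated stat vs. a talent-dominated
# stat. All distributions and magnitudes are made up for illustration.
# (statistics.correlation requires Python 3.10+.)
import random
import statistics

random.seed(2)

def split_half_correlation(talent_spread, noise, players=500, games_per_half=8):
    halves = []
    for _ in range(players):
        true_mean = random.gauss(4.5, talent_spread)   # the player's underlying level
        half1 = statistics.mean(random.gauss(true_mean, noise) for _ in range(games_per_half))
        half2 = statistics.mean(random.gauss(true_mean, noise) for _ in range(games_per_half))
        halves.append((half1, half2))
    return statistics.correlation([a for a, _ in halves], [b for _, b in halves])

print("noise-dominated (ypc-like):  ", round(split_half_correlation(talent_spread=0.2, noise=2.0), 2))
print("talent-dominated (YPA-like): ", round(split_half_correlation(talent_spread=0.8, noise=0.8), 2))
```

The first number comes out near zero and the second quite high, even though the underlying mechanism -- expectation equals the player's own mean -- is identical in both runs.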
So let me propose a new set of guiding principles for us to keep in mind:
- Principle #1: Everything regresses to the mean.
- Principle #2: Not everything has the same mean.
- Principle #3: Means move.