Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes, I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes, I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
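The Group A / Group B method described above can be sketched in a few lines of code. This is just an illustration of the mechanics, not anything the column actually runs; the player names and numbers are invented.

```python
# Hypothetical sketch of the column's Group A / Group B method.
# All player data below is made up for illustration.

def split_groups(players, metric, top_n):
    """Rank players by `metric`, descending; return the top group (A)
    and the bottom group (B)."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:top_n], ranked[-top_n:]

players = [
    {"name": "RB1", "ypc": 5.8},  # high yards per carry
    {"name": "RB2", "ypc": 5.1},
    {"name": "RB3", "ypc": 4.2},
    {"name": "RB4", "ypc": 3.6},  # low yards per carry
]

group_a, group_b = split_groups(players, "ypc", top_n=2)
# The prediction: by regression to the mean, Group B (the low-YPC backs)
# will outrush Group A (the high-YPC backs) going forward.
```

Note that the split is entirely mechanical: once the metric is chosen, every player's group assignment follows from the ranking, with no cherry-picking.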
Most importantly, because predictions mean nothing without accountability, I report on all my results in real time and end each season with a summary. Here's a recap detailing every prediction I made last year, along with all results from this column's six-year history (my predictions have gone 36-10, a 78% success rate). And here are similar roundups from 2021, 2020, 2019, 2018, and 2017.
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I explained that touchdowns follow yards, but yards don't follow touchdowns, and predicted that high-yardage, low-touchdown receivers were going to start scoring a lot more going forward.
In Week 5, we revisited one of my favorite findings. We know that early-season overperformers and early-season underperformers tend to regress, but every year, I test the data and confirm that preseason ADP is still as predictive as early-season results even through four weeks of the season. I sliced the sample in several new ways to see if we could find some split where early-season performance was more predictive than ADP, but I failed in all instances.
| Statistic for Regression | Performance Before Prediction | Performance Since Prediction |
| --- | --- | --- |
| Yards per Carry | Group A had 42% more rushing yards per game | Group A has 11% more rushing yards per game |
| Yard-to-Touchdown Ratio | Group A had 7% more points per game | Group B has 61% more points per game |
Things have looked grim for our yards per carry prediction before (Group A still held a 10% advantage through three weeks in 2021 before a miracle final week from Group B), so I'm not throwing dirt on the prediction quite yet. But we're not in a great spot, and it's been pretty clear why: Group A is still averaging 4.97 yards per carry, while Group B is stuck at 3.93. If yards per carry is truly random, we'd expect a stretch like this eventually, but it's looking like our luck might have finally run out.
Our yard-to-touchdown ratio prediction (which gave us its first-ever loss last year) has had no such issues. Last week, I mentioned that our "low-touchdown" Group B had a party in the end zone, scoring 7 touchdowns in 11 games after scoring just 6 in 33 games before the prediction. Well, this week, they took it to another level, again scoring 7 times, but this time in just 9 games, thanks to bye weeks. Group B's touchdown average (0.78 per game) was higher than Group A's "unsustainably high" touchdown rate at the time of the prediction (0.76).
As a result, Group B has staked a commanding two-week lead. Group B has been so dominant that you could remove the three highest-performing receivers (Puka Nacua, Ja'Marr Chase, and A.J. Brown), and it would still lead Group A in fantasy points per game by 7%. Any lead that is built in two weeks can be erased in two weeks, but so far, Group B has been cruising.
The Science of Intuition
One goal of this column is to convince you that regression to the mean is real, powerful, and everywhere, and to explain what it is and how (and why) it works. Another goal is to give you lists of players who are underperforming and players who are overperforming so you can make informed decisions about what to do with them going forward.
But the most important goal is to equip you with the tools to spot regression in the wild on your own, to help you develop intuitions about what kinds of performances are sustainable and what kinds of performances are unsustainable. For starters, I'll highlight certain stats and give you my opinions on them. Yards per carry: bad. Yards per touchdown: sustainable, but only within a narrow range from about 100-200. Interception rate: bad. (Sorry, spoiler alert.)
But as the years go by, one fact of life in fantasy football is constant exposure to new statistics. If you listen to football commentary these days, you might hear about things like Air Yards, Completion Percentage over Expectation (or CPOE), or Expected Points Added (or EPA). Some of these stats didn't even exist until a few years ago. Are they good? Are they bad? There are too many statistics to cover them all. But a quick trick should help sort the wheat from the chaff.
The gold standard measure of how much a stat might regress is something called stability testing. By comparing performance in one sample to performance in another, we can determine how similar those performances are and how much of a player's performance carries over from one game to the next, from one season to the next. Something like broken tackles, it turns out, is pretty stable. The backs who break a lot of tackles in one year also tend to break a lot of tackles in the next year.
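In its simplest form, a stability test is just a correlation: measure a stat for each player in one sample (say, Year 1, or odd-numbered weeks), measure it again in a second sample (Year 2, or even weeks), and see how strongly the two line up. Here's a minimal sketch with invented numbers; a stat like broken tackles would produce a high correlation, while a noisy stat like yards per carry would produce a correlation near zero.

```python
# Minimal sketch of a stability test: correlate a stat measured in one
# sample with the same stat for the same players in a second sample.
# All numbers below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A "stable" stat (think broken tackles per game), Year 1 vs. Year 2:
stable_y1 = [3.1, 2.7, 2.0, 1.4, 0.9]
stable_y2 = [2.9, 2.5, 2.2, 1.2, 1.1]

# A "noisy" stat (think yards per carry) for the same five players:
noisy_y1 = [5.6, 4.9, 4.3, 3.8, 3.5]
noisy_y2 = [4.1, 4.6, 3.9, 5.0, 4.4]

print(pearson(stable_y1, stable_y2))  # high: performance carries over
print(pearson(noisy_y1, noisy_y2))    # weak: little carries over
```

A high correlation means this year's leaders tend to be next year's leaders too, which is exactly what it means for a stat to be "stable" and resistant to regression.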
Something like yards per carry, on the other hand, is not stable at all. I've already run down some of the studies, but you can see the results in the predictions from this column, too. Year after year, prediction after prediction, we see both high-YPC backs and low-YPC backs regress to virtually the same average. Even including the results from this year's thus-far failed prediction, across nearly 7,000 carries over 7 seasons, Group A averages 4.50 yards per carry, and Group B averages 4.52.
But running stability testing is probably going to be beyond the abilities (or the inclinations) of most fantasy football players, and ordinarily, we can't just create seven years' worth of prediction history to look back on. (Additionally, just because a statistic is stable doesn't necessarily mean it's useful. Sack rate is one of the most stable quarterback stats, but it's also useless for fantasy football purposes unless you're in the rare league that penalizes quarterbacks for sacks.)
So when you encounter a brand new stat, what can you do to tell if it's a useful stat or not? I'm a big fan of a concept that I call "the leaderboard test", that statisticians call "face validity", and that the rest of us call "the smell test". Just from looking at a list, how well does it match our intuitions of what that list should look like?