Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
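The group-splitting procedure described above can be sketched in a few lines of code. This is a hypothetical illustration, not the column's actual tooling; the player names, the `ypt` (yards per target) metric values, and the group size are all invented for the example.

```python
# A minimal sketch of the Group A / Group B methodology described above.
# All names and numbers are hypothetical, purely for illustration.

def split_groups(players, metric, top_n):
    """Rank players by the chosen metric; the top performers form
    Group A and the bottom performers form Group B."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:top_n], ranked[-top_n:]  # (Group A, Group B)

players = [
    {"name": "WR1", "ypt": 12.4}, {"name": "WR2", "ypt": 11.0},
    {"name": "WR3", "ypt": 9.1},  {"name": "WR4", "ypt": 7.8},
    {"name": "WR5", "ypt": 6.2},  {"name": "WR6", "ypt": 5.5},
]
group_a, group_b = split_groups(players, "ypt", 2)
# The standing prediction: Group B outscores Group A going forward.
```

Note that the split is purely mechanical: whoever ranks at the top lands in Group A, with no cherry-picking allowed.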
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary: a 32-7 lifetime record, an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
| Statistic for Regression | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
| --- | --- | --- | --- |
| Yards per Carry | Group A had 10% more rushing yards per game | Group B has 9% more rushing yards per game | 2 |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 7% more fantasy points per game | 3 |
At the time of the prediction, Group A averaged 6.41 ypc and Group B averaged 3.81 ypc. Since the prediction, Group A has averaged 4.52 ypc and Group B has averaged 4.38 ypc. It's the easiest prediction in the book.
Our yard-to-touchdown ratio prediction is off to a rockier start. The touchdowns have indeed regressed (Group B is averaging one per 173 yards while Group A is all the way up at one per 269 yards), but a couple of big-yardage games from Group A receivers have left that group with a lead through one week. Plenty of football left to be played, though.
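The collapse of the yards-per-carry gap is easy to verify with back-of-the-envelope arithmetic. A quick sketch, using only the figures quoted above:

```python
# Quick check of the figures above: the yards-per-carry gap between
# the groups has nearly vanished since the prediction was made.
before_a, before_b = 6.41, 3.81   # ypc at the time of the prediction
since_a, since_b = 4.52, 4.38     # ypc since the prediction

gap_before = before_a - before_b
gap_since = since_a - since_b
print(f"Gap before: {gap_before:.2f} ypc")  # 2.60 ypc
print(f"Gap since:  {gap_since:.2f} ypc")   # 0.14 ypc
```

A 2.60 ypc gap shrinking to 0.14 ypc is regression to the mean in its purest form.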
Revisiting Preseason Expectations
In October of 2013, I wondered just how many weeks it took before the early-season performance wasn't a fluke anymore. In "Revisiting Preseason Expectations", I looked back at the 2012 season and compared how well production in a player's first four games predicted production in his last 12 games. And since that number was meaningless without context, I compared how his preseason ADP predicted production in his last 12 games.
It was a fortuitous time to ask that question, as it turns out, because I discovered that after four weeks in 2012, preseason ADP still predicted performance going forward better than early-season production did.
This is the kind of surprising result that I love, but the thing about surprising results is that sometimes the reason they're surprising is really just because they're flukes. So in October of 2014, I revisited "Revisiting Preseason Expectations". This time I found that in the 2013 season, preseason ADP and week 1-4 performance held essentially identical predictive power for the rest of the season.
With two different results in two years, I decided to keep up my quest for a definitive answer about whether early-season results or preseason expectations were more predictive down the stretch. In October of 2015, I revisited my revisitation of "Revisiting Preseason Expectations". This time, I found that early-season performance held a slight predictive edge over preseason ADP.
With things still so inconclusive, in October of 2016, I decided to revisit my revisitation of the revisited "Revisiting Preseason Expectations". As in 2015, I found that early-season performance carried slightly more predictive power than ADP.
To no one's surprise, I couldn't leave well enough alone in October 2017, once more revisiting the revisited revisitation of the revisited "Revisiting Preseason Expectations". This time I once again found that preseason ADP and early-season performance were roughly equally predictive, with a slight edge to preseason ADP.
And of course, as a creature of habit, when October 2018 rolled around, I simply had to revisit my revisitation of the revisited revisited revisitation of "Revisiting Preseason Expectations". And then in October 2019 and October 2020 and October 2021 I... well, you get the idea.
And now, as you've probably guessed, it's time for an autumn tradition as sacred as turning off the lights and pretending I'm not home on October 31st. It's time for the tenth annual edition of "Revisiting Preseason Expectations"! (Or as I prefer to call it, "Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Preseason Expectations".)