Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric, and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If the metric I'm focusing on is touchdown rate, and Christian McCaffrey is one of the high outliers in touchdown rate, then Christian McCaffrey goes into Group A and may the fantasy gods show mercy on my predictions.
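For readers who like to see the mechanics spelled out, the group-building rule above can be sketched in a few lines of Python. This is only an illustration of the sorting-and-splitting idea; the player names and numbers are made up for the example.

```python
# Hypothetical players: (metric value, fantasy points so far).
# The metric here stands in for something like touchdown rate.
players = {
    "Player 1": (0.12, 90),
    "Player 2": (0.10, 84),
    "Player 3": (0.04, 70),
    "Player 4": (0.03, 66),
}

# Sort by the chosen metric; no cherry-picking beyond choosing the metric.
ranked = sorted(players, key=lambda p: players[p][0], reverse=True)
half = len(ranked) // 2
group_a, group_b = ranked[:half], ranked[half:]  # high outliers vs. low

def points(group):
    return sum(players[p][1] for p in group)

# Verify Group A has outscored Group B so far, then predict the reversal.
assert points(group_a) > points(group_b)
print("Prediction: Group B outscores Group A going forward")
```

The key design point is that group membership is purely mechanical: once the metric is chosen, the sort determines who lands in each group.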
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. Here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017. Over four seasons, I have made 30 specific predictions and 24 of them have proven correct, a hit rate of 80%.
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I talked about yard-to-touchdown ratios and why they were the most powerful regression target in football that absolutely no one talks about, then predicted that touchdowns were going to follow yards going forward (but the yards wouldn't follow back).
|Statistic for regression|Performance before prediction|Performance since prediction|Weeks remaining|
|---|---|---|---|
|Yards per Carry|Group A had 10% more rushing yards per game|Group A has 13% more rushing yards per game|2|
|Yards per Touchdown|Group A scored 9% more fantasy points per game|Group B scores 29% more fantasy points per game|3|
I always say that my yards per carry prediction is going to fail one of these days, and after a terrible week, it looks like this year's edition might be on life support. The prediction has two main theses. The first is that yards per carry is basically a random number generator, and so far that has been true. Group A saw its ypc average go from 5.40 over the first two weeks to 3.82 in Week 3 to 5.56 in Week 4. Group B saw its ypc go from 3.87 to 4.97 to 5.21. Overall, our "low-ypc" backs are outgaining our "high-ypc" backs since our prediction, 5.09 to 4.74.
The other thesis is that volume tends to be stickier across samples. Our Group A backs averaged 13.9 carries per game over the first two weeks and 14.2 carries per game over the last two weeks. So far, so good. But Group B has seen its per-game average plummet from 17.7 all the way down to 11.5, which is how Group A has widened its lead so much. If this result holds I'll conduct a postmortem on what went wrong in a couple of weeks. For now, suffice it to say that this workload reduction is a serious concern for the viability of our prediction. But we still have two weeks to go, so we'll see how things play out.
Our second prediction has hit no such snags, however. Our Group B wide receivers did see their yardage total drop a bit (mostly because 25% of the sample last week played for the Raiders, and when Derek Carr is held under 200 passing yards, it puts a serious crimp in Group B's production). But the touchdowns regressed exactly as predicted; Group A, our "high-touchdown" group, scored two touchdowns total across nine games. Group B, meanwhile, saw three different receivers score two touchdowns all by themselves.
Revisiting Preseason Expectations
In October of 2013, I wondered just how many weeks it took before early-season performance wasn't a fluke anymore. In "Revisiting Preseason Expectations", I looked back at the 2012 season and compared how well production in a player's first four games predicted production in his last 12 games. And since that number was meaningless without context, I also measured how well his preseason ADP predicted production in those same last 12 games.
It was a fortuitous time to ask that question, as it turns out, because I discovered that after four weeks of the 2012 season, preseason ADP still predicted performance going forward better than early-season production did.
This is the kind of surprising result that I love, but the thing about surprising results is that sometimes the reason they're surprising is really just because they're flukes. So in October of 2014, I revisited "Revisiting Preseason Expectations". This time I found that in the 2013 season, preseason ADP and week 1-4 performance held essentially identical predictive power for the rest of the season.
With two different results in two years, I decided to keep up my quest for a definitive answer about whether early-season results or preseason expectations were more predictive down the stretch. In October of 2015, I revisited my revisitation of "Revisiting Preseason Expectations". This time, I found that early-season performance held a slight predictive edge over preseason ADP.
With things still so inconclusive, in October of 2016, I decided to revisit my revisitation of the revisited "Revisiting Preseason Expectations". As in 2015, I found that early-season performance carried slightly more predictive power than ADP.
To no one's surprise, I couldn't leave well enough alone in October 2017, once more revisiting the revisited revisitation of the revisited "Revisiting Preseason Expectations". This time I once again found that preseason ADP and early-season performance were roughly equally predictive, with a slight edge to preseason ADP.
And of course, as a creature of habit, when October 2018 rolled around I simply had to revisit my revisitation of the revisited revisited revisitation of "Revisiting Preseason Expectations". And then in October 2019 and October 2020 I... well, you get the idea.
And now, as you've probably guessed, it's time for an autumn tradition as sacred as turning off the lights and pretending I'm not home on October 31st. It's time for the ninth annual edition of "Revisiting Preseason Expectations"! (Or as I prefer to call it, "Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Revisiting Preseason Expectations".)
If you've read the previous pieces, you have a rough idea of how this works, but here's a quick rundown of the methodology. I have compiled a list of the top 24 quarterbacks, 36 running backs, 48 wide receivers, and 24 tight ends by 2020 preseason ADP.
From that list, I have removed any player who missed more than one of his team's first four games or more than two of his team's last twelve games, so that any fluctuations represent performance rather than injury. As always, I'm counting by team games rather than by calendar week, so players with an early bye aren't skewing the comparisons.
I’ve used PPR scoring for this exercise because that was easier for me to look up with the databases I had on hand. For the remaining players, I tracked where they ranked at their position over the first four games and over the final twelve games. Finally, I’ve calculated the correlation between preseason ADP and stretch performance, as well as the correlation between early performance and stretch performance.
Correlation is a measure of how strongly one list resembles another list. The highest possible correlation is 1.000, which is what you get when two lists are identical. A correlation of 0.000 is what you get when you compare one list of numbers to a second list that has no relationship to it whatsoever. (Correlations can actually go as low as -1.000, which means the higher something ranks in one list, the lower it tends to rank in the other, but negative correlations aren't really relevant for this exercise.)
So if guys who were drafted high in preseason tend to score a lot of points from weeks 5-16, and this tendency is strong, we’ll see correlations closer to 1. If they don’t tend to score more points, or they do but the tendency is very weak, we’ll see correlations closer to zero. The numbers themselves don’t matter beyond “higher = more predictable”.
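For anyone who wants to see the arithmetic behind those numbers, here's a small pure-Python sketch of the standard (Pearson) correlation calculation. The ranking lists below are invented for the example, not taken from the actual data.

```python
# Pearson correlation between two equal-length lists of numbers.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

adp_rank = [1, 2, 3, 4, 5]       # hypothetical preseason ADP at a position
stretch_rank = [2, 1, 4, 3, 5]   # hypothetical finish over games 5-16

print(round(correlation(adp_rank, adp_rank), 3))     # identical lists -> 1.0
print(round(correlation(adp_rank, stretch_rank), 3))  # similar lists -> 0.8
```

The closer the second number gets to 1.0, the more the stretch-run finish simply mirrors the preseason draft board.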
Here's the raw data for anyone curious. If you're willing to take my word for it, I'd recommend just skipping ahead to the "Overall Correlations" section below for averages and key takeaways.
|Player|ADP|Games 1-4|Games 5-16|
|---|---|---|---|
|Patrick Mahomes II|1|4|6|

|Player|ADP|Games 1-4|Games 5-16|
|---|---|---|---|
|Ronald Jones II|28|25|17|
|QUARTERBACK|ADP|EARLY-SEASON|AVG OF BOTH|
|---|---|---|---|

|RUNNING BACK|ADP|EARLY-SEASON|AVG OF BOTH|
|---|---|---|---|

|WIDE RECEIVER|ADP|EARLY-SEASON|AVG OF BOTH|
|---|---|---|---|

|TIGHT END|ADP|EARLY-SEASON|AVG OF BOTH|
|---|---|---|---|

|OVERALL|ADP|EARLY-SEASON|AVG OF BOTH|
|---|---|---|---|
Two years ago, I noticed that early-season results had outperformed preseason ADP at tight end in all five seasons I had measured. I tentatively declared that maybe tight end was different from the other positions and performance stabilized more quickly there.
In the first year after that tentative conclusion, early-season performance had the least predictive power on record at tight end. In the second year after that tentative conclusion, preseason tight end ADP had its best year yet. Ever since declaring that early-season performance might matter more at tight end, preseason ADP has dominated. The most glaring example was Rob Gronkowski, who was TE6 before the season but TE36 over the first four weeks, looking for all the world like he was old and slow and essentially done. Then from games 5-16, he was the #4 fantasy tight end, right in line with our initial expectations.
Over the seven years of tracking tight ends, early-season performance still holds a predictive edge, but I'm less certain now that that's not just random fluctuation. If you flip a coin five times, sometimes it will come up heads five times in a row; this doesn't mean the coin is weighted toward heads. Sometimes random processes are just streaky.
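The coin-flip point is easy to check with a quick simulation. This snippet is purely illustrative, not part of the column's methodology: it counts how often a fair coin comes up heads five straight times, which math says should happen about 3% of the time ((1/2)^5 = 3.125%).

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
trials = 100_000

# Count how many 5-flip sequences come up all heads with a fair coin.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(5))
    for _ in range(trials)
)
print(f"{all_heads / trials:.3%} of 5-flip runs were all heads")
```

Roughly 1 in 32 sequences lands all heads even though the coin is perfectly fair, which is the whole point: streaks alone don't prove anything is weighted.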
Speaking of streaky, preseason ADP dominated for the first two years of our quarterback analysis (covering the 2014 and 2015 seasons), but early-season results have won in each of the five seasons since. Does this mean I think early-season performance might be more predictive at quarterback? No, I think random processes are sometimes streaky.
Over the biggest sample available and across all four positions, preseason ADP predicts stretch performance almost exactly as well as early-season performance does. The first four weeks of the season feel like they're incredibly meaningful, but the truth is they only tell us as much as we already knew over the offseason.
Of course, just comparing preseason ADP to early-season results is a false dilemma; in truth, we should be basing our expectations on a blend of the two. A strict 50/50 mix of both sources predicts rest-of-year performance substantially better than either source alone at every position.
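The 50/50 blend is as simple as it sounds: average each player's preseason ADP rank with his early-season rank and re-rank. Here's a toy Python sketch with made-up ranks.

```python
# Hypothetical receivers: (preseason ADP rank, rank over games 1-4).
players = {"WR A": (5, 25), "WR B": (20, 8), "WR C": (12, 12)}

# Blend the two sources with equal weight.
blend = {name: (adp + early) / 2 for name, (adp, early) in players.items()}

# Re-rank by the blended number to get rest-of-year expectations.
projection = sorted(blend, key=blend.get)
print(projection)  # best blended expectation first
```

Note how the blend tempers both extremes: the hot starter and the slow starter each get pulled back toward their preseason price.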
There's no testable prediction this week other than just a general reminder that player performance to date will tend to regress strongly in the direction of our initial preseason expectations.