Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I break down a topic related to regression to the mean. Some weeks, I'll explain what it is, how it works, why you hear so much about it, and how you can harness its power for yourself. In other weeks, I'll give practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
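The grouping procedure above can be sketched in a few lines of Python. The player names and numbers below are made up for illustration; the actual metric, sample, and cutoffs vary from week to week.

```python
# Sketch of the weekly grouping procedure, using made-up example data.
# Real samples are larger and the metric changes week to week; this only
# illustrates the mechanics: rank by metric, split, verify, predict.
players = [
    # (name, metric value to date, fantasy points to date)
    ("Player A", 9.1, 120), ("Player B", 8.4, 95),
    ("Player C", 7.7, 110), ("Player D", 4.2, 60),
    ("Player E", 3.9, 75),  ("Player F", 3.1, 50),
]

# Rank everyone by the chosen metric, best to worst.
ranked = sorted(players, key=lambda p: p[1], reverse=True)

# Top half becomes Group A, bottom half Group B -- no cherry-picking.
half = len(ranked) // 2
group_a, group_b = ranked[:half], ranked[half:]

# Sanity check: Group A should have outscored Group B so far...
points_a = sum(p[2] for p in group_a)
points_b = sum(p[2] for p in group_b)
assert points_a > points_b

# ...and the prediction is that Group B outscores Group A going forward.
print(f"Group A so far: {points_a}, Group B so far: {points_b}")
```

The key design point is that the split is mechanical: once the metric is chosen, group membership is determined entirely by the rankings.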
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Ja'Marr Chase is one of the top performers in my sample, then Ja'Marr Chase goes into Group A, and may the fantasy gods show mercy on my predictions.
And then, because predictions are meaningless without accountability, I track and report my results. Here's last year's season-ending recap, which covered the outcome of every prediction made in our eight-year history, giving our top-line record (46-15, a 75% hit rate) and lessons learned along the way.
Our Year to Date
Sometimes, I use this column to explain the concept of regression to the mean. In Week 2, I discussed what it is and what this column's primary goals would be. In Week 3, I explained how we could use regression to predict changes in future performance (who would improve, who would decline) without knowing anything about the players themselves. In Week 7, I illustrated how small differences over large samples were more meaningful than large differences over small samples. In Week 9, I showed how merely looking at a leaderboard can give information on how useful and predictive an unfamiliar statistic might be.
In Week 11, I explained the difference between anticipated regression and the so-called "Gambler's Fallacy", and in Week 12, I talked about retrodiction, or "predicting" the past as a means of testing your model.
Sometimes, I use this column to point out general examples of regression without making specific, testable predictions. In Week 5, I looked at more than a decade's worth of evidence showing how strongly early-season performances regressed toward preseason expectations. In Week 15, I examined evidence that the best teams were barely more than a coin flip to win any given game in the playoffs, and less than that to win the entire thing.
Other times, I use this column to make specific predictions. In Week 4, I explained that touchdowns tend to follow yards and predicted that the players with the highest yard-to-touchdown ratios would begin outscoring the players with the lowest. In Week 6, I showed the evidence that yards per carry was predictively useless and predicted the lowest ypc backs would outrush the highest ypc backs going forward. In Week 8, I discussed how most quarterback stats were fairly stable, but interceptions were the major exception.
In Week 10, we looked at how passing performances were trending down over the years and predicted this year would set new lows for 300-yard passing games. In Week 13, we discussed how most players declined slightly late in the year, but predicted that rookies would improve. In Week 14, I explained that "hot streaks" were largely just random clustering and predicted that the "hottest" players would regress to their season averages.
The Scorecard
| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
|---|---|---|---|
| Yard-to-TD Ratio | Group A averaged 25% more PPG | Group B averaged 12% more PPG | None (Win!) |
| Yards per Carry | Group A averaged 39% more rushing yards per game | Group A averages 33% more rushing yards per game | None (Loss) |
| Interceptions Thrown | Group A threw 69% as many interceptions as Group B | Group B has thrown 82% as many interceptions as Group A | None (Win!) |
| 300-Yard Games | Teams had 30 games in 9 weeks | Teams have 24 games in 7 weeks | None (Win!) |
| Rookie PPG | Group A averaged 4.94 points per game | Group A averages 5.25 points per game | None (Win!) |
| Rookie Improvement | — | 45% are beating their average | None (Loss) |
| Hot Players Regress | Players were performing at an elevated level | Players have regressed 66.7% toward their season average | None (Win!) |
Our last prediction of the season wound up being the closest we've ever tracked; we needed our "hot" players to average 13.30 or fewer points per game to win, and they averaged 13.30 exactly.
While the particulars were quite close, the generalities performed how they always do: after striking the injured players from the list, we wound up with 24 "hot" players—players who were dramatically outperforming their full-season average as we entered the fantasy playoffs. Of those 24, half scored higher than their full-season average and half scored lower. The median outcome was... regressing 98.1% of the way back to the full-season average.
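For readers wondering how a figure like "regressed 98.1% of the way back" is computed, here is a small sketch with made-up numbers (the per-player data isn't shown in this column): the percentage is simply how much of the gap between a player's "hot" scoring pace and his full-season average was closed after the prediction.

```python
def pct_regressed(hot_ppg, season_ppg, subsequent_ppg):
    """Percent of the gap between a player's 'hot' pace and his season
    average that was closed after the prediction. 100% means he fell all
    the way back to his season average; 0% means no regression at all."""
    gap = hot_ppg - season_ppg
    closed = hot_ppg - subsequent_ppg
    return 100.0 * closed / gap

# Made-up example: a player averaging 18 PPG during his hot stretch
# against a 14 PPG season average, who then scores 14.2 PPG afterward.
print(round(pct_regressed(18.0, 14.0, 14.2), 1))
```

A value above 100% means the player overshot and finished below his season average; a negative value means he got even hotter.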
Seven of the "hot" players took their performance to an even higher level. Cameron Ward was QB27 through Week 13, but QB14 over the last four weeks. Despite a Week 14 bye, Tyrone Tracy Jr. rose from RB39 to RB26 (RB20 in points per game). Chase Brown rose from RB16 to RB2. Kenneth Gainwell jumped from RB21 to RB12. Derrick Henry went from RB11 to RB3. Michael Wilson climbed from WR43 to WR3. Jauan Jennings went from WR40 to WR20 despite a bye in Week 14 (he was WR12 in points per game).
Those are the names we're going to remember in future seasons when we're making late-season moves to secure a league-winner down the stretch, but it's important to note that those players are the exception, and late-season surprises are as likely to underperform down the stretch as they are to continue overperforming.
Our Long-Term Report Card
To wrap up the season, I wanted to look back not just at this year's predictions but at every prediction since Regression Alert launched in 2017. Remember, I'm not picking individual players; I'm just identifying unstable statistics and predicting that the best and the worst players in those statistics will regress toward the mean, no matter who those best and worst players might be.
Sometimes this feels a bit scary. Predicting that stars like Jahmyr Gibbs and Puka Nacua, in the middle of league-winning seasons, are going to start falling off is an uncomfortable position. But looking back at our hit rate over time makes it a bit easier to swallow.
Top-line Record
- 2017: 6-2
- 2018: 5-1
- 2019: 7-2
- 2020: 6-1
- 2021: 8-1
- 2022: 4-3
- 2023: 5-3
- 2024: 5-2
- 2025: 5-2
- Overall: 51-17 (75%)