Five years ago I wrote a post for SiouxSports.com, When to start looking at PWR. I want to revisit that post because we now have five more seasons of data, including the first full season with last year’s PWR revisions.
It’s been noted countless times on message boards (by people presumably offended that others enjoy looking at PWR?) that PWR is only calculated once at the end of the conference tournaments. So why do we calculate “as if the season ended today” versions of PWR before that?
We look at PWR before the end of the season because we think it will provide some insight into what the final PWR might be and what our favorite teams need to do to make the tournament. When to start looking depends on what insight you're looking for.
In this article I’ll look at how stable PWR is over time and how well it predicts the final PWR. The PWR starts containing useful information about what each team needs to do for an at-large bid as early as November. Front-runners start to become more entrenched by January. But as readers of this blog know, only the top few teams going into the conference tournaments are absolute locks, and teams as low as the mid-20s still stand a chance.
Week-to-week stability of PWR
My previous article started with a look at how stable PWR is. My thinking was that if next weekend’s games have the potential to completely upend the PWR table, then this week’s PWR table may not be particularly interesting.
The above chart shows the average PWR movement (in rank positions) of teams ranked over consecutive weeks. Consistent with the last article, PWR exhibits wild swings (an average of 4+ positions week-to-week) until the December break (movements in December are lower because teams play far fewer games over the holidays). By January, movement has settled into an average of 2-3 positions per weekend for ranked teams, and down to 1-2 positions by March.
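The movement metric behind that chart is straightforward. Here's a minimal sketch of how it could be computed, assuming each week's PWR is a dict mapping team name to rank; the function name and the sample data are my own illustration, not the site's actual code.

```python
def average_movement(week_a, week_b):
    """Average absolute rank change for teams ranked in both weeks."""
    common = week_a.keys() & week_b.keys()
    if not common:
        return 0.0
    return sum(abs(week_a[t] - week_b[t]) for t in common) / len(common)

# Made-up ranks for two consecutive weeks.
last_week = {"North Dakota": 3, "Minnesota": 5, "Yale": 11}
this_week = {"North Dakota": 2, "Minnesota": 9, "Yale": 12}
print(average_movement(last_week, this_week))  # -> 2.0
```

Restricting to teams ranked in both weeks matters for the pre-2013 data, where only a subset of teams were ranked at all (see the note on the 2013 line below).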
PWR’s ability to predict the final PWR
So we know when PWR stabilizes week-to-week, but what we really want to know is how good a predictor a weekly PWR is of the one true final PWR.
Though PWR seems relatively stable by January because week-to-week movements have settled down, those movements accumulate over the remaining weeks, so January's PWR isn't a spectacular predictor of the final PWR. On January 1 (about 90 days before the final PWR), teams have been an average of 3-8 ranks off from their final rank. Even 30 days out, teams are on average only within 2-4 positions of their final rank.
Likelihood of teams finishing in the top 12 of PWR
That’s where I stopped five years ago. Let’s go a little further—the reason we care about PWR is we want to know if a team is going to make the tournament. Let’s look at how many teams in the top 12 of PWR are still in the top 12 at tournament selection time.
Historically, 50%-85% of the teams in the top 12 as of January 1 (90 days out) have finished in the top 12. At 60 days out, roughly 60%-85% hold onto a top 12 spot; by 30 days out, 75%-100% do.
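The retention figures above boil down to one set intersection per week. A sketch of that calculation, again assuming rank dicts (the function name and toy data are mine, shown here with a top 4 instead of a top 12 to keep the example short):

```python
def top_n_retention(earlier, final, n=12):
    """Fraction of the top-n teams in an earlier PWR that finished top-n."""
    early_top = {team for team, rank in earlier.items() if rank <= n}
    final_top = {team for team, rank in final.items() if rank <= n}
    return len(early_top & final_top) / n

# Toy example with n=4: three of the four early leaders survive.
january = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
final   = {"A": 1, "C": 2, "E": 3, "B": 4, "D": 5}
print(top_n_retention(january, final, n=4))  # -> 0.75
```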
Also interesting is how much a team's performance to date has sealed its fate. Let's look at how highly ranked a team must be at a given point to be a near lock for the tournament, and how lowly ranked it can be and still stand a chance.
At 90 days out (January 1), anyone can fall out of contention, though it takes a notable collapse by a previously top-performing team. In the last ten years, no team that was top 4 at 60 days out (February 1) has missed finishing in the top 12. We've seen a season where the top 12 were locked at 42 days out, but also one where only 6 of the top 12 at 28 days out managed to finish in the top 12.
On the flip side, every year in the past ten has seen a team ranked #17 or lower at 90 days out climb into the top 12. It's most common for a team around #20 at 90 days out to be the lowest-ranked team that climbs to finish in the top 12. But there has been a recent season in which a team unranked until Feb. 22 and ranked #25 until Mar. 15 made the top 12.
Effects of the new formula
It's really too early to tell from the empirical data whether the new PWR formula is more or less stable than the previous one. It's a reasonable guess that the removal of the TUC cliff and the introduction of sliding RPI bonuses would lead to less severe movements, but that isn't obvious from the available results.
I should note that the 2013 line in the first two charts isn’t directly comparable to those from earlier seasons. The 2013 PWR ranks all teams so the line represents an average of all teams, while the earlier lines only include those teams that were ranked at both times.
Some notes on statistics
Feel free to skip this section if you don't care about statistics.
I'm not a statistician and would happily take advice from one. I didn't calculate proper correlations in the past because I wasn't sure what to do with teams dropping in and out of being ranked. Giving unranked teams the average rank of the tied group, as I've seen done with Spearman, struck me as potentially exaggerating those teams' rises and declines as they fell in and out of being ranked. I suppose I could have run a standard Pearson on the teams ranked in both periods rather than just reporting the mean difference, but I didn't.
But the new formula ranks every team, so without further ado, here's the Spearman rho correlation between each week's PWR and the final PWR. Not surprisingly, even the earliest PWR rankings are significantly correlated with the final PWR (with a reasonably high degree of confidence).
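Because the new formula ranks every team, the weekly and final PWRs are complete, tie-free rankings, which makes Spearman's rho easy to compute directly from the closed form rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)). A sketch, with made-up five-team data (in practice you'd likely just call scipy.stats.spearmanr, which also handles ties and reports a p-value):

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two complete, tie-free rankings
    over the same set of teams."""
    teams = ranks_a.keys()
    n = len(teams)
    d_squared = sum((ranks_a[t] - ranks_b[t]) ** 2 for t in teams)
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Toy example: a mid-season PWR vs. the final PWR.
midseason = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
final     = {"A": 2, "B": 1, "C": 3, "D": 5, "E": 4}
print(spearman_rho(midseason, final))  # -> 0.8
```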