Category Archives: Analysis of PWR

New forecast presentation – PWR by wins

I’m pleased to announce an improvement to the way I present PWR forecasts this year. There were two guiding principles to the design of the new presentation:

  • The question people are really asking until conference tournaments begin is, “what will it take for my team to make the playoffs (or finish top 4)?”
  • Everyone is interested in something a little different—some are fans of a single team and just care about that team, some want to check up on rivals, and some want to dig through all the data to look for interesting outcomes.

My forecast posts in past years gave some insight into what it takes for a team to make the playoffs, but were limited to the teams I chose or scenarios I found interesting. To help extend that analysis to all teams, I sought a useful way to present all the data.

The table on the PWR By Wins page shows you how many wins each team needs to likely finish at each PWR ranking. If you want more detail on a specific team, you can click a team name to see the probability curves of how likely that team is to end the regular season with each PWR ranking with a given number of wins out of its remaining scheduled games.

This is the first public presentation of this, so I’m sure there will be some tweaks and improvements in coming weeks. Check it out, and let me know if there’s anything I can do to make this data more useful to you.

What does it take for each team to finish at each PWR ranking: PWR By Wins

Methodology

The page notes when the forecast was last run (assume that it includes all games that had been completed as of that time).

Each forecast is based on at least one million monte carlo simulations of the games in the described period. For each simulation, the PairWise Ranking (PWR) is calculated and the results tallied. The probabilities presented in the forecasts are the share of simulations in which a particular outcome occurred.

The outcome of each game in each simulation is determined by random draw, with the probability of victory for each team set by their relative KRACH ratings. So, if the simulation set included a contest between team A with KRACH 300 and team B with KRACH 100, team A will win the game in very close to 75% of the simulations. I don’t simulate ties or home ice advantage.
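As a concrete sketch of that random draw (in Python; the `simulate_game` helper is illustrative, not the actual simulator behind these forecasts), a single game's outcome can be drawn from the two teams' KRACH ratings like this:

```python
import random

def simulate_game(krach_a, krach_b):
    """Return 'A' if team A wins, 'B' otherwise.

    Under KRACH, team A's win probability is krach_a / (krach_a + krach_b);
    ties and home ice are ignored, matching the methodology above.
    """
    p_a = krach_a / (krach_a + krach_b)
    return "A" if random.random() < p_a else "B"

# The example from the text: KRACH 300 vs. KRACH 100 gives
# team A a 300 / (300 + 100) = 75% win probability.
trials = 100_000
wins_a = sum(simulate_game(300, 100) == "A" for _ in range(trials))
print(wins_a / trials)  # very close to 0.75
```

Running many full-season simulations is just this draw repeated for every remaining scheduled game, with the PWR recomputed from each simulated set of results.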

How many teams will each conference put in the playoffs? (2016 edition)

This time of year always brings speculation about which teams are positioned for the NCAA tournament, which sometimes leads to discussions about each conference’s performance.

Looking at how many teams each conference has in the top 16 in PWR gives an interesting benchmark of performance to date. But, that occasionally raises questions of how the 2nd half schedules might reshape that field. Because we already know the rest of the regular season schedule, it’s pretty straightforward to simulate the rest of the regular season (assuming teams will continue to perform as they have to date) to see how each conference is likely to fare at the end of the regular season.
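The tallying step is straightforward to sketch (Python; the function and data-shape names here are illustrative, not the code behind the site): after each simulated season, count how many of each conference's teams landed in the top 14, then tally those counts across simulations.

```python
from collections import Counter

def tally_conference_counts(simulated_rankings, conference_of, top_n=14):
    """Tally, per conference, how many simulations put k of its teams
    in the top `top_n` of the final PWR.

    simulated_rankings: iterable of team lists, each ordered from PWR #1 down.
    conference_of: dict mapping team name -> conference name.
    Returns {conference: Counter({k: simulation_count})}.
    """
    tallies = {conf: Counter() for conf in set(conference_of.values())}
    for ranking in simulated_rankings:
        in_top = Counter(conference_of[t] for t in ranking[:top_n])
        for conf, tally in tallies.items():
            tally[in_top.get(conf, 0)] += 1
    return tallies

# Tiny illustration with two fake simulations and top_n=2:
confs = {"a1": "X", "a2": "X", "b1": "Y"}
sims = [["a1", "b1", "a2"], ["a1", "a2", "b1"]]
print(tally_conference_counts(sims, confs, top_n=2))
```

Dividing each tally by the number of simulations gives the percentages shown in the table below.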

How many teams will each conference put in the playoffs?

Likelihood of each conference’s number of teams in the top 14 PWR at the end of the regular season
                   0     1     2     3     4     5
Atlantic Hockey   90%*  10%
Big Ten                 17%*  70%   13%
ECAC                          1%   19%*  68%   12%
Hockey East                         6%   51%   43%*
NCHC                                1%   44%   51%    5%*
WCHA              96%*   4%

Values marked with * show the number of teams each conference currently has in the top 14. The Big Ten is most likely to make a gain (not surprisingly, since it holds positions 15 and 16), while Hockey East and the NCHC are most likely to take a loss (holding positions 12 & 13, and 14, respectively).

How is each conference doing compared to its historical performance?

Looking back at 2014’s How many teams will each conference put in the playoffs?, ExileOnDaytonStreet on the USCHO forum had counted each conference’s current members’ average tournament appearances per year over the previous ten years:

Average number of members that made the tournament per year
Atlantic Hockey  1.3
Big Ten          3.0
ECAC             2.3
Hockey East      4.1
NCHC             4.3
WCHA             1.0

Compared to their members’ historical performances:

  • Atlantic Hockey is on track, likely to just get its autobid.
  • The Big Ten continues to underperform, as it has since its inception. Of course, some of that can be attributed to its historically strong members now having to play each other instead of other teams.
  • ECAC is dramatically outperforming, with 4 at-large teams the most likely outcome based on performance to date.
  • Hockey East is right on track, likely getting 4-5 teams in at-large.
  • The NCHC is on track or slightly behind, with 4 most likely but 3 much more likely than 5.
  • The WCHA, like Atlantic Hockey, is on track and likely to just get its autobid.

Keep in mind that the historical numbers are total tournament participants, whereas for forecasting purposes we just look at top 14 in PWR.

Is inter-conference play the key?

People sometimes speculate that inter-conference play is the key to a good PWR rating (though my own attempts to test that hypothesis have proven inconclusive at best).

Here is each conference’s current non-conference record (courtesy of CHN).

Inter-conference records (from CHN)
Atlantic Hockey  .250
Big Ten          .494
ECAC             .606
Hockey East      .545
NCHC             .628
WCHA             .458

The conferences likely to send the most teams to the tournament are indeed those with the best inter-conference records.

Methodology

Forecasts include the results of games played through Sunday of this week, unless otherwise noted.


A first look at the 2016 at-large bid cutlines

If you’re new here, you might want to start with Welcome to collegehockeyranked.com. While anything related to college hockey rankings is fair game for this site, in most articles I try to provide insight as to where teams are likely to end up in the PairWise Rankings (PWR) that mimic the NCAA’s men’s ice hockey tournament selection process (and, thus, which teams are likely to be selected for the tournament).

In last year’s When to start looking at PWR, I noted that the early January PWR does give us some useful information as to what each team needs to do to make the tournament at-large. Top teams can still fall out of contention (though it takes a notable collapse for the top few), and it’s pretty unusual for a team ranked much lower than 25 at this time of year to climb to an at-large bid.

To test those larger trends against this year’s schedule and results, I ran simulations for the remaining scheduled regular season games to see where each team is likely to end up. The full methodology is described at the bottom of this article.

Before we jump into the data, I do want to remind you that starting simulations now (with over 450 scheduled games remaining) makes it pretty likely that some of the 1% events will happen. So, just telling you the average outcome for each team wouldn’t be particularly useful, because it would include an assumption about the team’s future performance that will prove wrong for some teams. Instead, I tell you where a team is likely to end up conditional on how many games they win (or, how many games a team needs to win to achieve an outcome such as making the NCAA tournament at-large).
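A sketch of that conditioning step (Python; `rank_given_wins` and the data shape are illustrative): group each simulation's outcome for a team by how many games the team won, then compute rank probabilities within each group.

```python
from collections import defaultdict

def rank_given_wins(sim_results):
    """Convert per-simulation (wins, final_rank) pairs for one team into
    conditional distributions: {wins: {final_rank: probability}}."""
    by_wins = defaultdict(list)
    for wins, rank in sim_results:
        by_wins[wins].append(rank)
    return {
        wins: {r: ranks.count(r) / len(ranks) for r in sorted(set(ranks))}
        for wins, ranks in by_wins.items()
    }

# Toy example: four simulations of one team's remaining season.
outcomes = [(6, 10), (6, 12), (6, 10), (7, 8)]
print(rank_given_wins(outcomes))
```

Each team's probability curves on the PWR By Wins page are exactly these conditional distributions, one curve per win count.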

Which teams are likely to get an at-large bid?

Around this time last year, I asked, “Is anyone safe?”, and answered,

Not completely. Even #1 Harvard could slip to the bubble if it wins only 6-7 of its remaining 14 scheduled games. That’s not particularly likely.

Harvard went 5-10-1 in its next 16 games to fall to #22 in the PWR at the end of the regular season. The Crimson were still very much on the bubble until they secured a bid by winning the ECAC tournament. Though the assumption that Harvard would keep performing as it had to date (and thus win far more than 6-7 more games) proved wrong, the prediction that Harvard would be on the bubble with that few wins proved correct.

#1 Quinnipiac’s KRACH is so strong relative to its scheduled competitors that none of my simulations (which weight likely outcomes by KRACH) had them winning fewer than 6 games! However, knowing that past results aren’t a perfect predictor of future results, we can look at the positioning of the “win 6” curve and guess that they could get into trouble if they win just 2-4 of their remaining scheduled games.

[Chart: Quinnipiac finish probabilities by wins]

If you’re feeling deja vu, let me add that #2 Harvard could find itself in trouble with only 6 wins in its remaining 16 scheduled games.

[Chart: Harvard finish probabilities by wins]

Down to about #11 Penn State, teams just need to avoid a slump that approaches (or dips below) .500 to stay positioned for the at-large field.

1 Quinnipiac
2 Harvard
3 Nebraska-Omaha
4 St Cloud St
5 North Dakota
6 Providence
7 Cornell
8 Michigan
9 Yale
10 St. Lawrence
11 Penn State

[Chart: Penn State finish probabilities by wins]

From about #12 Boston University to about #19 Minnesota, teams need to win 60-80% of their remaining games.

12 Boston University
13 Notre Dame
14 Mass.-Lowell
15 Rensselaer
16 Boston College
17 Minnesota State
18 Union
19 Minnesota

[Chart: Boston University finish probabilities by wins]

[Chart: Minnesota finish probabilities by wins]

The lowest rank at this time of year from which a team usually climbs to an at-large bid is in the mid-20s. It takes a hot streak, but someone usually does it.

20 Dartmouth
21 Denver
22 Bowling Green
23 Holy Cross
24 Robert Morris
25 Minnesota Duluth
26 Western Michigan

[Charts: Dartmouth and Western Michigan finish probabilities by wins]

Is anyone out of contention?

#27 Michigan Tech to #45 Mercyhurst aren’t mathematically eliminated, but need something approaching a perfect remaining season to get an at-large bid. It’s a bit easier for teams near the top of the list (2-3 losses allowable for most) than those at the bottom (almost no losses, plus a bit of luck).

27 Michigan Tech
28 Miami
29 New Hampshire
30 Alaska Anchorage
31 Merrimack
32 Clarkson
33 Massachusetts
34 Wisconsin
35 Ferris State
36 Northern Michigan
37 Brown
38 Vermont
39 Princeton
40 Bentley
41 Bemidji State
42 Air Force
43 Ohio State
44 Connecticut
45 Mercyhurst

[Chart: Michigan Tech finish probabilities by wins]

[Chart: Mercyhurst finish probabilities by wins]

For #46 Lake Superior State and below, it looks like the only path to the NCAA tournament is through the conference tournaments. Those teams are:

46 Lake Superior State
47 Colgate
48 RIT
49 Northeastern
50 Sacred Heart
51 Alaska
52 Maine
53 Michigan State
54 Army
55 Arizona
56 Canisius
57 Colorado College
58 Alabama-Huntsville
59 Niagara
60 American International

[Chart: Lake Superior State finish probabilities by wins]

Methodology

Forecasts include the results of games played through Sunday of this week, unless otherwise noted.


A new #1 in KRACH

Unlike PWR (which mimics the tournament selection process), KRACH is just for fun. But a lot of people like it, and it’s what I use to estimate team strength when simulating game outcomes.

When writing yesterday’s post, I noticed there’s a new king of the hill in KRACH: #2-in-PWR North Dakota.

Only once this season has #1 Minnesota State been knocked out of first place in KRACH, on Dec. 29 by then second-in-PWR Harvard. The following week Harvard also took over first place in PWR. Harvard’s reign was short-lived, as Minnesota State took back the top ranking in both PWR and KRACH on January 12 and held both until this week.

[Chart: KRACH top teams over the season]

[Chart: PWR top teams over the season]

When to start looking at PWR (revisited)

Five years ago I wrote a post for SiouxSports.com, When to start looking at PWR. I want to revisit that post because we now have five more seasons of data, including the first full season with last year’s PWR revisions.

It’s been noted countless times on message boards (by people presumably offended that others enjoy looking at PWR?) that PWR is only calculated once at the end of the conference tournaments. So why do we calculate “as if the season ended today” versions of PWR before that?

We look at PWR before the end of the season because we think it’s going to provide some insight into what that final PWR might be and what our favorite teams need to do to make the tournament. When to start looking depends on what insight you’re looking for.

In this article I’ll look at how stable PWR is over time and how well it predicts the final PWR. The PWR starts containing useful information about what each team needs to do for an at-large bid as early as November. Front-runners start to become more entrenched by January. But as readers of this blog know, only the top few teams going into the conference tournaments are absolute locks, and teams as low as the mid-20s still stand a chance.

Week-to-week stability of PWR

My previous article started with a look at how stable PWR is. My thinking was that if next weekend’s games have the potential to completely upend the PWR table, then this week’s PWR table may not be particularly interesting.

[Chart: average week-to-week PWR movement]

The above chart shows the average PWR movement (in rank positions) of teams ranked in consecutive weeks. Consistent with the last article, PWR exhibits wild swings (an average of 4+ positions week-to-week) until the December break (movement in December is lower because teams play far fewer games over the holidays). By January, movement has settled into an average of 2-3 positions per weekend for ranked teams, dropping to 1-2 positions by March.
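The movement statistic behind that chart takes only a few lines to compute (a Python sketch; the dictionaries of ranks are made-up illustrations). Teams that drop in or out of the rankings between the two weeks are skipped:

```python
def average_movement(week_a, week_b):
    """Average absolute rank change between two weekly PWR tables.

    week_a and week_b map team -> rank; only teams ranked in both
    weeks are counted (teams dropping in or out are skipped).
    """
    common = week_a.keys() & week_b.keys()
    return sum(abs(week_a[t] - week_b[t]) for t in common) / len(common)

# Toy example: one team holds, one moves two spots, one swaps in.
last_week = {"Quinnipiac": 1, "Harvard": 2, "North Dakota": 3}
this_week = {"Quinnipiac": 1, "Harvard": 4, "St. Cloud St": 2}
print(average_movement(last_week, this_week))  # (0 + 2) / 2 = 1.0
```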

PWR’s ability to predict the final PWR

So we know when PWR stabilizes week-to-week, but what we really want to know is how good a predictor a weekly PWR is of the one true final PWR.

[Chart: average difference from final PWR, by days remaining]

Though PWR seems relatively stable in January because week-to-week movements have settled down, those movements add up over the weeks, so January’s PWR isn’t a spectacular predictor of the final PWR. On January 1 (about 90 days before the final PWR), teams have been an average of 3-8 ranks off from their final rank. Even 30 days out, teams are only within 2-4 positions of their final rank on average.

Likelihood of teams finishing in the top 12 of PWR

That’s where I stopped five years ago. Let’s go a little further—the reason we care about PWR is we want to know if a team is going to make the tournament. Let’s look at how many teams in the top 12 of PWR are still in the top 12 at tournament selection time.

[Chart: share of top 12 teams finishing in the top 12, by days remaining]

Between 50% and 85% of teams in the top 12 as of January 1 (90 days out) have finished in the top 12. At 60 days out, that climbs to roughly 60%-85% holding onto a top 12 spot; by 30 days out, to 75%-100%.

Also interesting is how much a team’s performance to date has sealed its fate. Let’s look at how highly ranked you must be at given times to be a pretty good lock for the tournament, and how lowly ranked you can be and still stand a chance.

[Chart: highest-ranked team to miss the top 12, by days remaining]

At 90 days out (January 1) anyone can fall out of contention, though it takes a notable collapse by a previously top-performing team. In the last ten years, we’ve never seen a team that was top 4 at 60 days out (February 1) miss finishing top 12. We’ve seen a season where the top 12 are locked at 42 days out, but also one where only 6 of the top 12 at 28 days out manage to finish top 12.

[Chart: lowest-ranked team to climb into the top 12, by days remaining]

On the flip side, every year in the past 10 has seen a team ranked #17 or lower 90 days out climb into the top 12. It’s most common for a team around #20 at 90 days out to be the lowest rank from which anyone climbs to finish top 12. But, there has been a recent season in which a team unranked until Feb. 22 and ranked #25 until Mar. 15 made it to the top 12.

Effects of the new formula

It’s really too early to tell from the empirical data whether the new PWR formula is more or less stable than the previous one. It’s a reasonable guess that the removal of the TUC cliff and introduction of sliding RPI bonuses would lead to less severe movements, but that’s not obviously the case from the available results.

I should note that the 2013 line in the first two charts isn’t directly comparable to those from earlier seasons. The 2013 PWR ranks all teams so the line represents an average of all teams, while the earlier lines only include those teams that were ranked at both times.

Some notes on statistics

Feel free to skip this section if you don’t care about statistics.

I’m not a statistician and would happily take some advice from one. I didn’t calculate proper correlations in the past because I wasn’t sure what to do with the teams dropping in and out of being ranked. Giving the unranked teams the average rank of the tied group, as I’ve seen done with Spearman, struck me as potentially exaggerating those teams’ rises and declines as they fall in and out of being ranked. I suppose I could have run a standard Pearson on the teams ranked in both periods rather than just report the mean difference, but I didn’t.

But the new formula ranks every team, so without further ado, here’s the Spearman rho correlation between each week’s PWR and the final PWR. Not surprisingly, you can see that even the earliest PWR rankings are significantly correlated with the final PWR (with a reasonably high degree of confidence).
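For reference, when every team is ranked and there are no ties, Spearman’s rho reduces to a simple closed form (a Python sketch; the example ranks are made up for illustration, not this season’s data):

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two complete, tie-free rankings:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between a team's two ranks."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up example: five teams' ranks in an early week vs. the final PWR.
early = [1, 2, 3, 4, 5]
final = [2, 1, 3, 5, 4]
print(spearman_rho(early, final))  # 1 - 6*4/(5*24) = 0.8
```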

[Chart: Spearman correlation between each week’s PWR and the final PWR]