
Introducing the PWR Calculator

I’m pleased to announce an exciting new tool, an interactive PWR calculator. The calculator lets you see how PWR would be affected if already played games had different outcomes, if future games turned out a certain way, or if unscheduled (fictional) games were played.

PWR Calculator

While changes to PWR over the years have stabilized it and removed some of its biggest quirks (the TUC cliff), it’s somewhat unavoidable in any ranking scheme that there will be outcomes that have a surprising or outsized impact. The tool makes it easy for you to play “what if” and see how PWR would differ with different game outcomes.

Like everything on CollegeHockeyRanked, it was designed from the ground up to work well on your mobile device, your tablet, or a computer. It’s also exceptionally straightforward and easy to use—providing a point and click interface to try different results, filters to help you focus only on the games and rankings you care about, and instantaneous feedback on your scenarios with no recalculate button or form submissions.

Here are a few interesting things to look for in the calculator right now:

  • If #27 Harvard had beaten #11 Minnesota in either game of its November 17-18 series (lost 2-4 and 1-2 in OT, respectively), Harvard’s PWR would currently be 19 instead of 27, an 8-spot jump! Winning both would have further catapulted the Crimson to 13.
  • #8 Nebraska-Omaha is happy to have split with #1 Notre Dame in the October 26-27 series. Had the Mavericks lost on October 26 instead of winning 6-4, they would now be #16 instead of #8.
  • Because a team’s PWR ranking is relative to other teams (it’s a comparison of each team to all other teams), results that don’t even involve a team can affect its fortunes. #11 Minnesota would instead be #8 right now if #16 Michigan had defeated #4 Clarkson.
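Since a team’s rank comes from the number of pairwise comparisons it wins, the mechanics can be shown with a toy sketch. The team names, ratings, and comparison function below are made-up stand-ins; the real PWR comparison bundles RPI, head-to-head results, and common-opponent records:

```python
def pairwise_rank(teams, beats):
    """Rank teams by pairwise comparisons won (most wins = rank 1).

    beats(a, b) -> True if team a wins the comparison against team b.
    This is a hypothetical stand-in for PWR's actual criteria.
    """
    comps_won = {t: sum(beats(t, u) for u in teams if u != t) for t in teams}
    ordered = sorted(teams, key=lambda t: -comps_won[t])
    return {t: i + 1 for i, t in enumerate(ordered)}

# Toy ratings; in real PWR a result anywhere in the league can move a
# team's rating and flip comparisons it isn't even a party to.
ratings = {"Harvard": 0.52, "Minnesota": 0.58, "Clarkson": 0.55}
ranks = pairwise_rank(list(ratings), lambda a, b: ratings[a] > ratings[b])
```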

Any questions? Did you find anything interesting yourself in the calculator?

The tournament that could have been

For better or worse, the NCAA changed the hockey tournament selection criteria this year. While the formulas were carefully crafted to produce a nearly identical outcome for last year, things could have been a little different this year if the change hadn’t been made.

The PairWise Rankings that could have been

First, the collegehockeyranked supercomputers have produced a table of where this year’s top 20 would have ranked under last year’s formula.

Team            New formula  Old formula
Minnesota            1            1
Boston College       2            3
Union                3            2
Wisconsin            4            8
Ferris State         5            9
Quinnipiac           6            5
Mass.-Lowell         7            4
Notre Dame           8           11
St Cloud St          9            7
MSU-Mankato         10           13
Providence          11            6
Colgate             12           12
Vermont             13           15
North Dakota        14           10
Michigan            15           16
Northeastern        16           19
Cornell             17           14
New Hampshire       18           18
Ohio State          19           20
Yale                20           22

As we should have expected, the tournament field doesn’t change much (after announcing the changes last Fall, the NCAA issued a statement noting that the tournament field wouldn’t have changed at all the previous year under the new formula).

Vermont is out at #15 while Cornell is in at #14. North Dakota fans would have been a little less worried and MSU-Mankato fans a little more, but both teams would still make the field.

But, the order is different enough to change the bracket quite a bit. My strength is statistics and rankings, not bracket-making, so I’ll walk you through a straight serpentine bracket then note the interesting aspects:

West (St Paul)
1. Minnesota vs 16. Robert Morris
8. Wisconsin vs 9. Ferris St

Northeast (Worcester)
3. Boston College vs 14. Cornell
6. Providence vs 11. Notre Dame

East (Bridgeport)
2. Union vs 15. Denver
7. St Cloud St vs 10. North Dakota

Midwest (Cincinnati)
4. Mass-Lowell vs 13. Mankato
5. Quinnipiac vs 12. Colgate
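The straight serpentine assignment above can be sketched in a few lines: band 1 (seeds 1-4) fills the regionals in order, band 2 (seeds 5-8) fills them in reverse, and so on, alternating each band. The unpacking order below follows the regional assignments in this post:

```python
def serpentine(n_regions=4, n_bands=4):
    """Distribute overall seeds 1..(n_regions * n_bands) across regionals
    serpentine-style: each band alternates fill direction, so the #1 overall
    seed's regional also gets the weakest seed from each later odd band."""
    regions = [[] for _ in range(n_regions)]
    for band in range(n_bands):
        for i in range(n_regions):
            r = i if band % 2 == 0 else n_regions - 1 - i
            regions[r].append(band * n_regions + i + 1)
    return regions

# Per the bracket in this post: #1 overall in West, #2 East, #3 Northeast,
# #4 Midwest. First-round games pair top vs bottom seed (1 v 16, 8 v 9).
west, east, northeast, midwest = serpentine()
```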

  • Problem #1 – Which #1 do you put in Cincinnati?
  • Problem #2 – Providence v Notre Dame is no good.
  • Problem #3 – St Cloud St vs North Dakota is no good.
  • Problem #4 – Attendance at Cincinnati would be in the double digits.

We can solve #2 and #3 by swapping North Dakota and Notre Dame.

I have no idea what the committee would do about #1 and #4.

It would be nice to get Quinnipiac in Bridgeport, but Union is already there, and swapping St Cloud St to Cincinnati may not help Cincinnati much. A three-way move could instead land Wisconsin in Cincinnati and St Cloud St in St. Paul.

Instead of just swapping UND and ND, you could also imagine a three-way move that put Colgate in Worcester, Notre Dame in Bridgeport, and UND in Cincinnati.

The matchups and outlook would certainly be different (and North Dakota fans would undoubtedly be delighted for another stop in Worcester on the B.C. revenge tour).

Like I said, I’m a rankings guy, not a bracket guy, so feel free to let me know what I did wrong.

PWR formula uncertainty resolved

The previously dueling PWR implementations (see Uncertainty around PWR calculation) seem to have been resolved for now. This weekend, USCHO changed the formula it uses to calculate PWR so its tables now match those on CHN and SiouxSports. Previously, USCHO had weighted only the win% component while the others had weighted all components (win%, opponents win%, and opponents opponents win%).
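The distinction can be made concrete with a hedged sketch. The component coefficients and the game weights below are illustrative assumptions, not the official NCAA values; the only point is where the game weighting gets applied:

```python
def weighted_pct(results):
    """Weighted win percentage.

    results: list of (outcome, weight) with outcome 1.0 = win, 0.5 = tie,
    0.0 = loss. The weight (e.g. a bonus multiplier for road wins) is an
    assumption standing in for the NCAA's actual game-weighting rules.
    """
    total = sum(w for _, w in results)
    return sum(o * w for o, w in results) / total if total else 0.0

def rpi(own, opp, opp_opp, weighted_all, coeffs=(0.25, 0.21, 0.54)):
    """RPI with illustrative component coefficients.

    weighted_all=False mimics USCHO's previous reading (game weights
    applied to the team's own win% only); weighted_all=True mimics
    CHN/SiouxSports (weights applied to all three components).
    """
    unweighted = lambda rs: weighted_pct([(o, 1.0) for o, _ in rs])
    wp = weighted_pct(own)
    if weighted_all:
        owp, oowp = weighted_pct(opp), weighted_pct(opp_opp)
    else:
        owp, oowp = unweighted(opp), unweighted(opp_opp)
    return coeffs[0] * wp + coeffs[1] * owp + coeffs[2] * oowp
```

With any weights other than 1.0 on opponents’ games, the two readings produce different numbers, which is why the published tables disagreed.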

As of right now, all three tables are identical.

I read USCHO’s change as a sign that they received confirmation that their previous method was incorrect. Either way, this is great news for college hockey fans, as it lifts the uncertainty that had been hanging over the dueling implementations.

Uncertainty around PWR calculation

Some uncertainty apparently still persists around the NCAA’s new tournament selection criteria for men’s hockey.

CollegeHockeyNews unveiled its first Pairwise Rankings for the season (CHN PWR), and their implementation is a bit different from USCHO’s (USCHO PWR).

The differences aren’t just discrepancies in the underlying game data (e.g. neutral ice vs. not), but instead seem to be modest differences in the way the game weights are applied. CHN acknowledged as much:

“But while the committee was transparent in how the weightings and Bonus were supposed to be done in general, it didn’t completely explain how the numbers were supposed to be applied against the existing RPI. There are different ways to do it.

Therefore, different sites are showing slightly different results. And we’ve been fielding constant questions as to why ours doesn’t match what’s being shown elsewhere.”

The PWR on this site has mimicked what USCHO has been publishing, though I’ll certainly keep an eye on developments.

Hopefully people in the know can help everyone converge on a common understanding of the new criteria, or there could be some surprises come tournament time for the first time in many years!