
Bracketology: Why it is too early to cut down the NET rankings

NEW YORK, NY - MARCH 8: Led by committee chairman Mark Hollis (3rd from L), the NCAA Basketball Tournament Selection Committee meets on Wednesday afternoon, March 8, 2017 in New York City. The committee is gathered in New York to begin the five-day process of selecting and seeding the field of 68 teams for the NCAA Men’s Basketball Tournament. The final bracket will be released on Sunday evening following the completion of conference tournaments. (Photo by Drew Angerer/Getty Images)

It might only be November, but bracketology minds are already whirling after the release of the first NET (NCAA Evaluation Tool) rankings of the season.

It’s only November, but people are already trying to cut down the NET – the NCAA Evaluation Tool, that is. The general feeling is that the NCAA’s new in-house ratings system for bracketology got it all wrong in its first release. The bigger issue, though, might be how we understand ratings systems in general.

In a much-publicized and widely cheered decision, the NCAA Selection Committee decided this past offseason to scrap the old RPI as its in-house metric for measuring team résumés on Selection Sunday. The replacement: the NCAA Evaluation Tool, or NET.

Here’s how the NCAA described the new metric in its announcement in August:

"The NCAA Evaluation Tool, which will be known as the NET, relies on game results, strength of schedule, game location, scoring margin, net offensive and defensive efficiency, and the quality of wins and losses."

On Nov. 26, the NCAA released the first iteration of the NET rankings, which are now updated daily on the NCAA’s website. To say the NET was poorly received would be an understatement: the response so far has been overwhelmingly negative. Perhaps understandably so. Small schools like Loyola Marymount and Belmont were suddenly top-15 teams, while Kentucky didn’t crack the top 60? Pure madness, surely!

Or is it?

The big issue cropping up among critics of the NET is that it doesn’t match the eye test – it defies what we know (or rather, what we think we know) as basketball analysts, journalists, and fans.

But herein lies the rub: the NET is not designed to be predictive.

At its core, the NET is a descriptive, results-based tool. Its main function is to help the NCAA Tournament Selection Committee identify which 36 teams are most deserving of at-large bids, based on their regular-season performance. It wasn’t designed to get everything right on the first try.

In short, the system is going to be volatile for a while, at least until schedules start to normalize once conference play begins. With so few games played, against wildly uneven nonconference slates, that’s just the nature of results-based models.

In contrast, there are plenty of predictive models out there: Ken Pomeroy, Jeff Sagarin, and ESPN all produce them. The Selection Committee has said it will use those metrics in conjunction with the NET (and other results-based systems) when it comes time to build the bracket. Clearly, both types of models have a place at the table.

Results-based systems tell you who has played the best basketball up until now. Predictive models help you understand who should play better basketball tomorrow.
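To make that distinction concrete, here’s a toy comparison with invented numbers. The results-based score below mostly rewards winning the games in front of you; the predictive score keys on scoring margin adjusted for opponent strength. Neither is how any real system actually computes its ratings:

    # Toy contrast (all numbers invented): why a results-based score and a
    # predictive score can disagree about the same undefeated team.

    games = [
        # (won, margin, opponent_strength on a 0-1 scale)
        (True, 3, 0.30),
        (True, 5, 0.25),
        (True, 2, 0.35),
        (True, 4, 0.20),
    ]

    # Results-based view: you won every game you played; weak opponents
    # only discount the credit a little.
    results_score = sum(
        (1.0 if won else 0.0) * (0.5 + opp / 2) for won, _, opp in games
    ) / len(games)

    # Predictive view: narrow margins over weak opponents suggest trouble ahead.
    predictive_score = sum(
        margin - 10 * (0.5 - opp) for _, margin, opp in games
    ) / len(games)

    print(f"results-based: {results_score:.2f}")   # strong: every game was a win
    print(f"predictive:    {predictive_score:.2f}")  # weak: thin adjusted margins

An unbeaten team with narrow wins over weak opponents grades out fine on the first measure and poorly on the second. Keep that in mind for the LMU example below.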

But that raises the question: why publish the NET rankings so early? It’s a fair question, and a much better one than asking why Team X is ranked higher than Team Y.

One possible explanation is that the NCAA wanted to start reinforcing its new brand as early as possible. The RPI had been in use for so long that it simply became part of the vernacular. For fans to understand what pundits are talking about in March, the NET needed to enter the college basketball zeitgeist posthaste.

What’s especially interesting, though, is that for all the flak the NET has taken, another results-based rankings system has churned out some similar results.

As of this writing, ESPN’s Strength of Record (SOR) metric – the results-based counterpart to its predictive BPI – has Loyola Marymount ranked #14 in the nation, just three spots below where the Lions sit in the most recent NET ratings. The predictive models, on the other hand, list the Lions at #106 (Sagarin), #124 (Pomeroy), and #159 (BPI).

What this tells us is that, while LMU is off to its best start in school history, Lions fans shouldn’t get their hopes up too high just yet. The other shoe is likely to drop once conference play starts, especially with two tilts against Gonzaga, one of the nation’s premier programs.

It also tells us that looking at both types of models paints a much clearer picture than focusing on one or the other. That’s why it’s off-base to bash results-based models for being “wrong”, especially in the early going. These models only tell half of the story.


No single metric will ever be the be-all, end-all measurement of a team’s strength. But the idea is that the results-based models and the predictive models will eventually align once the dust has settled and it’s time to select the 36 at-large teams for the NCAA Tournament.

So, instead of getting mad at an algorithm, take a deep breath, open up a few new browser tabs, and check out both types of rating models in order to see the bigger picture.

In the meantime, save the net-cutting for the Final Four.