Monday, February 13, 2017

Rating Teams and the Three Factors

To develop my own customized team ratings for seeding, I wanted to get away from the RPI.  The RPI is a fair litmus test to validate “how real” a team’s record is, but does little to evaluate “how good” a team is.  RPI ratings can be manipulated by clever scheduling, and can also be sabotaged by things outside of a team’s control.  I also did not want to just cut and paste Pomeroy or Sagarin ratings and say “Hey, look what I did…” Not cool, and honestly, not much fun. 

I felt the best way to present these figures was to take a page from the immortal Dr. Emmett Brown’s playbook when he converted the DeLorean into a time machine.  The internal display gave the operator three readings: where you are, where you were, and where you are going (in some order).  That led me to create three different rankings: Basic, Strength, and Normalized.

[Graphic: the Basic, Strength, and Normalized rankings compared side by side]

As you can see from the graphic, each ranking values the teams differently, as each takes into account different factors to rate performance.  While we know West Virginia can blow the doors off any team on any given night, that doesn’t tell us whether they can achieve consistent results, or sustain those results going forward.



The Basic Rating tells us Who You Are.  Weighted primarily off Pomeroy efficiency and Sagarin ratings, and normalized for a minimal level of performance, it gives a fair indicator of how well you score, how well you prevent scoring, and how strong your overall slate of opponents has been.  This is great in a vacuum for theorizing about which teams are over- or underrated, but does little to get us to a bracket.
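
For the tinkerers, here’s a rough Python sketch of the idea: put the two systems on a common scale, then blend.  The 60/40 weights and the sample numbers are placeholders for illustration, not the exact figures behind the ratings.

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize so Pomeroy margins and Sagarin ratings share a scale."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def basic_ratings(pomeroy, sagarin, w_pom=0.6, w_sag=0.4):
    """Weighted blend of standardized Pomeroy efficiency margins
    and Sagarin ratings; higher is better."""
    pz, sz = zscores(pomeroy), zscores(sagarin)
    return [w_pom * p + w_sag * s for p, s in zip(pz, sz)]

# Illustrative numbers only -- not real 2017 figures.
pomeroy = [28.5, 24.1, 19.7]   # adjusted efficiency margins
sagarin = [93.2, 90.8, 88.1]   # overall Sagarin ratings
print(basic_ratings(pomeroy, sagarin))
```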

The Strength Rating tells us Who/Where You Were.  In order to actually incorporate a tool the NCAA uses, I have taken the teams’ RPI numbers to determine Top 50 W/L, Top 100 W/L, and bad losses.  It answers the question: when I have stepped on the court with my peers, what results have I delivered?  Some teams, like Iowa State and Georgia Tech, benefit immensely here.  The mid-majors, due to down years in the gut of many of their conferences, defections, and just bad scheduling luck, are getting crushed in this aspect.  Few teams got many opportunities, and many of them blew the ones they did get.  And you just can’t help the fact that you may have put BC, Washington, and Texas on your Big Boy schedule and those teams are Butt.  This does give a fairly reliable rating of who is tourney caliber, and I used it for a few published brackets… but as I said before, opportunity and schedule can be manipulated here to mask what a team will do come NCAA time.
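
In code, the tallies could look something like this.  The game log, the RPI ranks, and the sub-150 “bad loss” cutoff are all assumptions for the sake of the sketch.

```python
def strength_profile(games, bad_loss_cutoff=150):
    """Tally Top 50 W/L, Top 100 W/L, and bad losses from a game log.
    games: list of (opponent_rpi_rank, won) tuples."""
    profile = {"top50": [0, 0], "top100": [0, 0], "bad_losses": 0}
    for rank, won in games:
        if rank <= 50:
            profile["top50"][0 if won else 1] += 1   # [wins, losses]
        if rank <= 100:
            profile["top100"][0 if won else 1] += 1
        if rank > bad_loss_cutoff and not won:
            profile["bad_losses"] += 1
    return profile

# A 2-1 mark against the Top 50 plus one bad loss:
print(strength_profile([(12, True), (38, True), (45, False), (180, False)]))
```

Note that Top 50 games count toward the Top 100 tally here as well; split buckets would work just as easily.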

So… the Normalized Rating attempts to tell us Where You Are Going, particularly in March.  Here, we reduce those opportunities to percentages, while giving additional weight to road/neutral win % and win % in the last 12 games.  This puts mid-majors on a level playing field with the majors.  Syracuse can buoy its record with wins at the Carrier Dome, but the NCAA Tournament isn’t at the Carrier Dome.  While Iowa State bangs around the Big 12, their quality numbers get a boost for doing it consistently and occasionally stealing one on the road.  Belmont and Vermont, lacking the quality opportunities, can be fairly rated by taking on all their challengers without a slip-up.
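
Sketched out, the normalization is just a weighted average of percentages.  The weights below are placeholders for illustration, not the actual ones behind the ratings.

```python
def normalized_rating(top100_pct, road_neutral_pct, last12_pct, overall_pct,
                      weights=(0.30, 0.30, 0.25, 0.15)):
    """Weighted average of win percentages (each 0..1), so a mid-major
    with few Top 100 chances isn't punished for lack of volume."""
    w_top, w_road, w_last, w_all = weights
    return (w_top * top100_pct + w_road * road_neutral_pct +
            w_last * last12_pct + w_all * overall_pct)

# A mid-major that wins everywhere vs. a major with a padded home slate:
print(normalized_rating(1.00, 0.90, 0.92, 0.95))  # ~0.94
print(normalized_rating(0.55, 0.60, 0.75, 0.80))  # ~0.65
```

Because every input is already a rate, a Belmont with three Top 100 chances is judged the same way as a Syracuse with twenty.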


While not perfect, these ratings produce consistent values that don’t have teams jumping all over the grid.

1 comment:

  1. So basically this shows that my Gophers (and the Big Ten in general) are worse than what other bracketologists are projecting using metrics such as RPI in the Bracket Matrix. I take the above as analyzing the actual quality of the teams (i.e., who is the best). Pretty interesting to see it play out based on who is the best and not what I would deem the most deserving.
