How should a team be ranked? – a theoretical discussion

Anyone who has read “Death to the BCS” will agree that the current BCS rankings are a sham.  The computer rankings are a bunch of nonsense, the Harris Poll is full of people who have no idea what they’re talking about, and the coaches are obviously biased, plus they have no time to look at how all 120 FBS teams (or at least the top 30 or so) have played each week.  Even without knowing all this, the results speak for themselves.  Was anyone else surprised to see Oklahoma ranked #1 the first week the BCS rankings came out?  Thank goodness Missouri beat them to end that fiasco.  That’s just one example of a plethora of ludicrosities (a word I made up myself) that have come from the current ranking system.  Any sane person would conclude that a selection committee is the best way to place teams in a playoff, or, in the case of the Beauty Contest Series, to put the top two teams into a National Championship.

Still, regardless of how you determine which teams go where, college football rankings will always exist because you have 120 teams that each play only 12-14 games.  This gives rise to the age-old academic question: how should a team be ranked?  Yes, of course the obvious answer is that the best team should be ranked at the very top.  But how do you define the “best” team?  Or, more generally, how do you determine whether one team is “better” than another?  I’ve seen Sloppy and others complain that after one team beats another, the winning team should be ranked higher than the losing team because the winning team has proven that it’s better.  But I would push back on this: if Team A beats Team B, does that automatically mean that Team A is the better team, or just that Team A played better on that given day?  Look at the 1980 U.S. Olympic hockey team (see the movie “Miracle”).  Does the fact that they beat the Soviets mean they were the better team?  Of course not.  But did they play better on that given day?  Yes.  How a team plays is obviously going to vary from one game to the next.  So do you rank based on how the teams would match up if each played at its average level?

Here’s another wrinkle: what about teams that get better or worse over the course of the season?  Because I’m a staunch BYU fan, I’ll use them as an example.  The Cougars stunk at the beginning of the season, but caught fire near the end, winning 4 of their last 5.  Their only loss in that stretch came at Utah, where BYU was clearly the dominant team.  The Utes needed a lucky grab, a lucky bounce, a blown call, and a blocked field goal to win it.  But that’s a different story altogether.  Anyway, in their bowl game, the Cougars ended up playing UTEP, a team with a similar record.  But UTEP’s story was the exact opposite: they had started the season on fire, but had then lost 5 of their last 6 games or something like that.  If BYU had played UTEP early in the season, there’s a good chance they would have lost.  But as it was, they hammered UTEP.  I would contend that, with the way BYU was playing, they probably could have beaten a lot of teams that had better records and were ranked higher than the Cougars.  Heck, at that point in time, they were probably better than some of the top 25 teams (Utah being one of them).  So should they have been ranked in the Top 25 despite their early-season losses?  To what extent should a ranking reflect previous performance as opposed to current potential?
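That last question can be made concrete with a toy rating scheme.  To be clear, this is purely an illustrative sketch of my own, not any actual BCS or poll formula: assume each game gives a team a single performance score (say, point margin, ideally adjusted for opponent strength), and weight recent games more heavily with an exponential decay factor.  The function name, the `decay` parameter, and the sample numbers are all made-up assumptions.

```python
def recency_weighted_rating(game_scores, decay=0.8):
    """Rate a team from per-game performance scores in chronological order.

    decay < 1 down-weights older games; decay = 1 gives a plain
    season-long average.  Purely illustrative, not a real BCS formula.
    """
    n = len(game_scores)
    # Most recent game gets weight 1, each earlier game is multiplied
    # by another factor of `decay`.
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(w * s for w, s in zip(weights, game_scores))
    return total / sum(weights)

# Hypothetical point-margin profiles (made-up numbers):
# a team that started poorly but finished strong (BYU-like)...
improving = [-10, -7, 3, 14, 17, 21]
# ...and one that started hot but faded (UTEP-like).
fading = [21, 17, 14, 3, -7, -10]

print(recency_weighted_rating(improving))       # rewards the late surge
print(recency_weighted_rating(fading))          # penalizes the late fade
print(recency_weighted_rating(improving, 1.0))  # plain average: the two tie
```

With `decay = 1.0` the two profiles tie, since a plain average only sees the season-long record; with `decay < 1` the late-surging team rates higher.  That single knob is exactly the "previous performance versus current potential" trade-off in the BYU/UTEP comparison.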

What do you think?