The NCAA Men's Tournament Committee assigned some seeds this year that were wrong, inexplicable, or both. What I'd like to do in this installment is analyze three of the worst Committee designations and explain why they reek.
Louisville
The first example of a Committee misstep was the assignment of Louisville as an eight seed. While the ACC was not on a par with the SEC or Big Ten, Louisville was the ACC's second-best team, sporting a gaudy 27-7 record. How did the esteemed Committee (capitalized, like Jesus) come up with an eight seed? Heading into the tournament, the venerable RPI rated Louisville 14th in the country (a four seed), while the more currently fashionable NET had them 24th, a six seed. Tagging Louisville with an eight, facing a nine in the opener, made no sense. There are only two ways to arrive at that seed.
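The rank-to-seed arithmetic above follows the usual convention: the top four teams in a rating get one seeds, the next four get two seeds, and so on. A minimal sketch of that conversion (the function name is mine, not anything the Committee publishes):

```python
import math

def seed_from_rank(rank: int) -> int:
    """Conventional mapping: ranks 1-4 -> 1 seed, ranks 5-8 -> 2 seed, etc."""
    return math.ceil(rank / 4)

# Louisville, per the numbers cited above:
print(seed_from_rank(14))  # RPI 14th -> 4 seed
print(seed_from_rank(24))  # NET 24th -> 6 seed
```

By either metric's implied seed line, an eight is well below where the math puts them.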
First, you'd have to take a really hard stance on the ACC being completely lousy and overrated, which is a tough thing to do when their few inter-conference games ended months ago. Taking this position requires intellectual arrogance and a kind of certainty that has no place in reasoned debate. The second possible way of arriving at an eight seed for Louisville is that you must be (clearing my throat -- ahem, ahem) cheating.
UConn
Next up, let's discuss UConn, the two-time defending national champions, though not a member of any Big Boy football league. UConn's seeding (an eight) is less of an intellectual conundrum than Louisville's, because the RPI has UConn barely making the tournament and the NET has them as an eight, exactly where they were placed. The problem lies in completely ignoring what UConn accomplished over the last two years. If you think what happened last March doesn't affect what happens this March, well, I have an observation for you. But let's circle back to that in a moment. For now, let me just say that top seed Florida had to rally desperately down the stretch to beat UConn 77-75.
Why do I have a bit of an issue with UConn as an eight? I suppose it comes down to what I perceive, again, as intellectual arrogance. You have a team, UConn, that has just won back-to-back NCAA titles. Obviously, their league isn't all peaches and cream, no matter what the metrics say this particular season. So why cut off the use of two previous years of data on both UConn and Big East play and rely solely on this season's pitifully limited interaction between Big East teams and the Big Boys? My intellectual issue with assigning UConn an eight and a Florida matchup (which I'm sure the Gators didn't love) is the set of assumptions it requires, and the way those assumptions get implemented. Pay attention, folks, because I have a solid statistical and logical argument to make here.
Which makes more sense: (1) basing the evaluation of a team on a single season's roughly 35-game schedule, with no tournament interaction against other conferences, or (2) basing it on three years of data, including three years of cross-conference and tournament games?
Now, you may counter: how can what happened one or two seasons ago have anything to do with this season's team? A fair question, to which I could pose a counter-question: what do the outcomes of games played 80 days ago have to do with the team as it exists today?
But hold on, I won't even bother to ask that question, although you can ponder it if you like. Instead, I'll skip to a devastating, if hypothetical, closing argument. What if two- and three-year NET and RPI averages predicted tournament outcomes better than single-season metrics do?
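To make the hypothesis concrete, here is one simple way a multi-year rating could be built: a weighted blend that leans on the current season but lets prior seasons (with their richer cross-conference and tournament data) count. The weights and the example ranks are purely illustrative assumptions of mine, not anything the Committee or the NET actually uses:

```python
def blended_rating(ranks_by_year, weights=(0.5, 0.3, 0.2)):
    """Blend this season's rank with the two prior seasons' ranks.

    ranks_by_year: [current season, last season, two seasons ago],
    lower is better. Weights are a hypothetical choice for illustration.
    """
    return sum(w * r for w, r in zip(weights, ranks_by_year))

# Made-up example in the spirit of the UConn argument: a team ranked
# around 30th this season that finished 1st the two previous seasons.
print(blended_rating([30, 1, 1]))  # blends to roughly 15.5
```

Whether such a blend actually predicts tournament outcomes better is exactly the open question posed above; the sketch only shows that testing it would be trivial.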
Ahhhh, I see some readers' synapses lighting up. What would that mean, indeed? And why has nobody explored this in depth? The questions actually answer themselves. If this hypothesis were true, well, readers can work out the practical consequences.
And that's why I included UConn in this list of very curious seedings. Not everything in life is obvious, and sometimes seeing reality means prioritizing longer-term metrics over shorter-term ones, even when shorter-term metrics are how things have always been done.
I will leave this slightly revelatory but patently obvious topic for now and wander along to my third "What the hell were they thinking?" seeding.
Gonzaga
How did Gonzaga, yes -- that Gonzaga, wind up as an eight seed scheduled to run into Houston in Round Two?
Obviously, the goal here was to eliminate one of these teams as early as possible. This seeding makes zero sense on its face. The old RPI had the Zags as the 20th-best team (a five seed). The NET ratings, again allegedly the most relied upon these days, had Gonzaga as the eighth-best team, and therefore a two seed. So how did The Committee arrive at an eight, especially with Gonzaga looking its best as the season closed?
It's sad when The Committee decides to blatantly handicap particular programs by pitting them against each other as early as possible in a matchup that makes no sense. No juggling of metrics, eyeball testing, common sense, or Einsteinian insight could have landed Gonzaga at eight; no math, no scheduling, no analysis of personnel gets you there. This was flat-out, clownish rigging.
Thankfully, I wasn't the only person to notice that Frank and Jesse had hijacked the tourney stagecoach. Ben Sherman's "NCAA Tournament Selection Committee makes huge seeding error" appeared Tuesday on MSN. That, however, wasn't an error, Mr. Sherman. That was a stick-up.
Conclusion
We'll discuss disparities between spreads and seedings another day. For now, I'd like to ask: (1) "What would happen if we discovered that three-year metric averages actually yielded more accurate seedings than single-season metrics?" and (2) "Why is highway robbery allowed in setting up the world's highest-profile basketball tournament?"
More importantly, why hasn't #1 above been examined and tested (or has it?), and why aren't Gonzaga's fans/alumni suing some Committee into oblivion?
I'd like to close this installment by emphasizing that reality is not an avatar of anyone's metrics. The metrics are the avatar, the approximation, a flawed and partial representation of reality. Metrics can't factor in what they don't know. The problem, overall, is treating metrics as some kind of reality.
Bob Dietz
March 26, 2025