PD’s Postulations: Do recruiting rankings matter?

With the Sugar Bowl debacle, the BCS title game and hopefully all the announcements of Florida transfers and early NFL draft entrants behind us, the 2012 season is fading fast in the orange and blue rear-view mirror. It’s time to hyper-focus on recruiting, something that many college football fans have been doing for the last month, and some all year.

In following the recruiting sagas, an interesting thing happened over the days surrounding the Under Armour and Army high school all-star games last week. Florida came into the week with its class of commitments ranked No. 1 in the country by ESPN. Early in the week, the Gators lost the pledge of defensive lineman Caleb Brantley, which prompted the recruiting service to drop Florida to No. 3 in the rankings while vaulting Notre Dame to No. 1. Notre Dame’s rise without adding a commitment seemed to be a gratuitous attempt to make the recruiting rankings match the BCS rankings, with Alabama at No. 2, serving as another promotion for the BCS title game being broadcast in a matter of days on that same network. However, when Caleb Brantley re-committed, Florida was not restored to the No. 1 spot. Notre Dame maintained the top spot in the next rankings update, with the service citing the commitment of Max Redfield to the Irish at the UA game as solidifying the ranking. However, Florida also received a commitment from Demarcus Robinson the day after the game. So compared to the week’s initial rankings that had Florida at No. 1, both schools added a net of one four-star player, yet the three-day waffling of one of Florida’s commitments caused a flip-flop in the rankings (Florida did move back ahead of Alabama after the dust settled).

Putting the BCS title game promotion theory aside, this shuffling of the rankings seems very arbitrary, as does the overall ranking of Notre Dame over Florida regardless of the artificial manipulation of the order this week. Notre Dame has 23 commits, of which 10 are in the ESPN 150. Florida also has 10 commits in the ESPN 150, but has 26 total players in the class so far. Florida has five players in the top 50; Notre Dame has four. Florida has nine in the top 76; Notre Dame has five. The highest-ranked Florida player in the 150 is Vernon Hargreaves III at No. 3, a player who won MVP of the UA all-star game, while no Notre Dame commitment really stood out at either all-star game. Notre Dame’s highest-ranked commit is Jaylon Smith at No. 9. Florida’s next-ranked player is Kelvin Taylor at No. 15, while the next Irish commit is down the list at No. 27 (Greg Bryant). As for star power, Florida has two players ranked by ESPN as the No. 1 player at their position in the country: Hargreaves at safety and Kelvin Taylor at running back. Notre Dame has zero committed players ranked No. 1 at their position.

But this is just one comparison between teams where the ranking seems arbitrary given the individual player rankings according to that same service. The validity of the overall system of recruiting services ranking signing classes will not hinge on a single example, but rather on accuracy across the entire national ranking over a period of many years. Florida is in the running for the No. 1 final class ranking this year, and while Gators fans are understandably as giddy as schoolgirls over being at or near the top of the team recruiting rankings whenever it happens, do the rankings really matter? Are they valid, and if so, to what degree? To be so invested in the rankings, one would hope they would be a strong predictor of future success. To try to answer this question, I conducted an analysis to gauge that validity, or lack thereof, and thus the significance of recruiting class rankings in football. The results may surprise you.

The Analysis

To measure the predictive power of the team rankings, I took an average of the team rankings from the major recruiting services each year and compared them to the accomplishments of those classes. To measure the accomplishments, I averaged the final AP poll and the USA Today/Coaches poll for the first four years of each class’s eligibility. Given the strong tendency over the last decade toward playing true freshmen, the high number of early NFL defections, and the assumption that players who are not good enough to play as true freshmen or to earn early draft entry will have, on average, much less influence on the team’s success than those who make an impact from Year 1, I chose the first four years rather than years two through five of a class’s eligibility. It is as clean a method as I can see for gauging how accurately these rankings predict success for each class.
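For the stat-minded, here is a minimal sketch in Python of how that comparison works. The functions, numbers and "two services" below are hypothetical placeholders for illustration, not the actual rankings data.

```python
# Rough sketch of the method described above; all numbers are made up for illustration.

def average_recruiting_rank(service_ranks):
    """Average a team's signing-class rank across the major recruiting services."""
    return sum(service_ranks) / len(service_ranks)

def four_year_poll_average(final_poll_finishes):
    """Average the (already AP/Coaches-averaged) final poll finishes over the
    class's first four years of eligibility."""
    return sum(final_poll_finishes) / len(final_poll_finishes)

# Hypothetical 2002 signing class:
class_rank = average_recruiting_rank([3, 5])         # two services ranked the class No. 3 and No. 5
poll_avg = four_year_poll_average([12, 8, 20, 15])   # averaged final poll finishes, 2002-2005
deviation = abs(class_rank - poll_avg)               # how far results landed from the signing-day rank
print(class_rank, poll_avg, deviation)               # 4.0 13.75 9.75
```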

Using 2002 as the first year that the two biggest ranking services began ranking classes gives us data for eight years of signing classes. I concentrated on the top-25 classes of each recruiting year, partly to keep symmetry with the final top-25 polls, but also because beyond the top 25 you can’t expect much accuracy at all: the aggregate talent level is so homogeneous across the country that no matter how good someone is at discerning differences between recruiting classes, it would all be a wash at that level.

The Findings

For signing classes 2002 through 2009, teams with a top-25 signing class finished, on average, 15.6 spots above or below their National Signing Day rank in the final season-ending polls over their first four years of eligibility. That’s a variation larger than the actual top 25 itself: based on that average accuracy, a team with a signing class ranked No. 15 could expect to finish its four years ranked as highly as No. 1 or as low as No. 31 in the country in average final poll ranking. Tracking backwards, it’s even worse: teams that finished in the top 25 in the final polls averaged a 17.1-position swing from their signing-day ranking.
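To make the prospective-versus-retrospective distinction concrete, here is a small sketch with made-up records; the numbers are placeholders, not the real dataset, and a real version would need a convention for handling unranked poll finishes.

```python
# Placeholder records of (signing-day class rank, four-year average final poll finish).
records = [(15, 4.5), (2, 30.0), (8, 8.5)]

# Prospective: start from every top-25 signing class and measure the miss.
prospective = sum(abs(r - p) for r, p in records) / len(records)

# Retrospective: start only from teams that actually finished in a final top 25
# and measure back to their signing-day class rank.
finishers = [(r, p) for r, p in records if p <= 25]
retrospective = sum(abs(r - p) for r, p in finishers) / len(finishers)

print(prospective, retrospective)   # 13.0 5.5
```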

When narrowing the analysis to the top 10, the predictive value is even more volatile relative to the size of the group, with an average variation of 14.5 spots looking prospectively and 13.6 looking retrospectively. Although those numbers are indeed smaller than the margins of error for top-25 teams, consider the scale: the total variation for the top-25 group is 31.2 spots, or 25 percent larger than the top 25 itself. For the top 10, the margin of error alone is 45 percent bigger than the top 10, while the total variation is 190 percent bigger than the top 10 itself.
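For anyone checking the arithmetic behind those percentages, the comparisons come straight from the averages quoted above:

```python
# Compare the size of the miss to the size of the ranking group.
top25_margin = 15.6                     # average prospective miss for top-25 classes
top10_margin = 14.5                     # average prospective miss for top-10 classes

print(top25_margin * 2 / 25 - 1)        # ~0.25 -> total variation 25 percent larger than the top 25
print(top10_margin / 10 - 1)            # 0.45  -> margin alone 45 percent bigger than the top 10
print(top10_margin * 2 / 10 - 1)        # 1.9   -> total variation 190 percent bigger than the top 10
```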

Those numbers are pretty damning, if not confusing, but they are not quite as bad as they appear. The impact of outliers plays a part in the disparity, as does the compound effect of successive recruiting classes. That is to say, any outlier year in which a program has a bad recruiting class may be carried and propped up by much better surrounding classes during the four-year eligibility window, while an excellent outlier class could likewise be dragged down by surrounding classes that are not very strong. But the fact that we’re looking at four years of consecutive classes should still create an average that is a strong indicator of the accuracy of the recruiting class rankings. Likewise, there is a halo effect around strong classes and strong seasons on the field that attracts multiple strong classes, which helps smooth the data and mitigate the outlier effect. On the flip side of that dynamic, when a typically strong program has a down outlier year in recruiting, it is usually due either to a coaching change that makes it difficult to corral a great class while in transition, or to a scholarship anomaly that produces a small or imbalanced class because of a cyclically small number of scholarships available or poor succession planning by the coaching staff. In the case of a coaching transition, most programs see the proverbial “Year 2 recruiting bump” that balances out the previous down class, and in the case of the scholarship anomaly, it is usually followed by a much larger than usual graduating class (since those holdover players created the small scholarship availability in the first place). Both of these phenomena serve to mitigate the outlier class, or often balance it out evenly over two years, so that the blip does not have a significant impact on the findings.

So, as stated, when we look at the overall validity of the biggest recruiting services in ranking the strength of signing classes, the inaccuracy is pretty immense. When the variation in accuracy is 25 percent larger than the entire top-25 poll rankings (and 190 percent for the top 10), that’s a pretty poor level of performance. And those are the overall averages across all of college football. When you look at individual examples, things can get really scary. Take, for instance, the program that ended the 2012 season and headed into the BCS national title game as the No. 1 ranked team in the country: the Irish. Notre Dame was ranked second in the aggregate recruiting rankings on National Signing Day in 2008. That class wound up being ranked in the final AP Top 25 and/or Coaches Poll Top 25 exactly zero times over its four years of effective eligibility from 2008 to 2011.

Perhaps worse than that, Texas A&M’s signing classes were ranked in the aggregate recruiting top 25 for five straight years — the classes of 2002 to 2006 — without EVER being ranked in a final top 25 in any of the eight years spanning those classes’ effective eligibility windows. On National Signing Day in 2003, the signing classes of Colorado, Mississippi State, North Carolina, North Carolina State, Oklahoma State, South Carolina, Stanford, Texas A&M and Washington were all ranked in the aggregate top 25 in the country, and three of them were ranked in the top 10. Of those nine programs, NONE ever finished in any of the final AP or Coaches top 25 polls within the four-year effective eligibility window. That’s over 35 percent of the ranked signing classes — and 30 percent of the top 10 — in one year that were never even good enough to be ranked in the final polls. Ever.

Deeper Cut

In Part 2 of this analysis, I will break down many of the specifics that shed light on just how bad some of the inaccuracies are. I’ll also attempt to get at why there is such wild variation in the accuracy from team to team. Finally, and most importantly to Gator fans, I will pull out the bright side of all this seeming randomness and discuss whether there is in fact some meaning in what the data tend to suggest is meaningless.

Spoiler alert: it’s good for the Gators.

Until then, remember that each day is a gift; that’s why they call it the present.

David Parker
One of the original columnists when Gator Country first premiered, David “PD” Parker has been following and writing about the Gators since the eighties. From his years of regular contributions as a member of Gator Country to his weekly columns as a partner of the popular, now-defunct niche website Gator Gurus, PD has become known in Gator Nation for his analysis, insight and humor on all things Gator.