PD’s Postulations: Recruiting Mirth and Mythology Pt 3

Last week I looked at the mythology of signing 5-star athletes as a necessity for winning championships on the field. The week prior, I explored the myth of “closing” on or about national signing day (NSD). Both analyses raised more questions than they answered. This week I hope to arrive at a more concrete idea of whether the recruiting metrics that most recruitniks perceive as the gold standard are actually meaningful. In other words, I will attempt to demonstrate whether or not NSD recruiting rankings matter.

Background

Having tracked changes in the rankings through the years, I have recorded many instances where a player’s ranking goes up when he commits to a school whose fan base buys the most recruiting-service subscriptions (Notre Dame, FSU, Georgia, et al.) and goes down when he commits to a school with fewer subscribers (specifically, Florida). I have also seen instances where, in the weeks before the national title game, the services vaulted the two teams playing in that game ahead of other teams in the class rankings, even though those teams had not gained any new commitments during that time. The cynic in me scoffs at the black-box methodology of ranking recruiting classes and at these seemingly artificial manipulations designed to increase the hype and buzz around the rankings. But the science-minded part of me needs to see the full set of data before drawing conclusions from these anecdotal – but very convincing – observations.

The validity of the overall system of recruiting services ranking signing classes does not hinge on a single example or a few anecdotal observations, but rather on the accuracy of the entire national ranking over a period of many years. At certain points in the fall, a top-5 national recruiting class appeared to be within Florida’s reach, yet the Gators were squeezed out of the top 10 on NSD, much to the loudly articulated displeasure of many Gator recruitniks. But do those rankings even matter? Are they valid, and if so, to what degree? For fans to be so invested in the rankings, one would hope they are a strong predictor of future success; if they are not, then their only utility is a little wintertime entertainment through the suspension of major disbelief. To try to answer this question, at least to some degree, I conducted an analysis to gauge that validity, or lack thereof, and thus the significance of recruiting class rankings in football. The results may surprise you.

The Analysis

To measure the predictive power of the team rankings, I took an aggregate average of the team rankings from the major recruiting services for each year and compared it to the accomplishments of those recruiting classes. To measure the accomplishments, I averaged the final AP poll and the USA Today/Coaches poll over the first four years of each class’s eligibility. I chose the first four years rather than years two through five because of the strong tendency in modern college football toward playing true freshmen, the high number of early NFL defections, and the assumption that players who are not good enough to play as true freshmen or to enter the draft early will, on average, have much less influence on the team’s success than those who make an impact from Year 1. It is as clean a method as I can see to gauge how accurately these rankings predict success for each class.

Starting with 2002, the first year the two biggest ranking services began, we have 11 years of signing classes that have played out their first four years of eligibility. I concentrated on the Top 25 classes of each recruiting year, first to keep symmetry with the final Top 25 polls, but also because below the top 25 you can’t expect much accuracy at all: the aggregate talent level is so homogeneous across the country that no matter how good someone is at discerning differences between recruiting classes, it would all be a wash at that level.
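
For anyone who wants to replicate the bookkeeping, here is a minimal sketch of the method in code. The data structures, field names and the treatment of unranked seasons are my own illustrative assumptions for the example, not the exact conventions used to produce the numbers below.

```python
# A minimal sketch of the comparison described above. The data, field names
# and the "unranked = 40" convention are illustrative assumptions, not the
# actual dataset or rules used for this column.

def four_year_poll_average(final_ap, final_coaches, years):
    """Average a team's final AP and Coaches poll positions over a class's
    first four seasons; unranked seasons are counted as position 40 here."""
    per_year = []
    for year in years:
        ap = final_ap.get(year, 40)
        coaches = final_coaches.get(year, 40)
        per_year.append((ap + coaches) / 2)
    return sum(per_year) / len(per_year)

def average_ranking_miss(classes):
    """classes: list of dicts with 'team', 'nsd_rank' (aggregate signing-day
    rank) and 'poll_avg' (four-year final poll average). Re-ranks the teams by
    actual four-year performance and returns the mean absolute gap between the
    signing-day rank and that result. A full version would rank each class
    against every FBS team, not just the other Top 25 classes."""
    by_result = sorted(classes, key=lambda c: c["poll_avg"])
    result_rank = {c["team"]: i + 1 for i, c in enumerate(by_result)}
    misses = [abs(c["nsd_rank"] - result_rank[c["team"]]) for c in classes]
    return sum(misses) / len(misses)
```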

The Big Picture

For signing classes 2002 through 2012, teams with a Top 25 signing class finished on average 15.9 spots above or below their National Signing Day rank in the season-ending polls over their first four years of eligibility. That is a total variation larger than the actual Top 25 itself: based on average accuracy, a team with a signing class ranked Number 16 could expect to finish those four years ranked anywhere from Number 1 to roughly Number 32 in the country in average final poll ranking. Tracking backwards, it’s even worse: teams that finished in the Top 25 of the final polls averaged a 16.4-position swing from their signing day ranking.

When narrowing the analysis to the Top 10, the predictive value is even more volatile, with an average error of 14.4 spots looking prospectively and 12.7 looking retrospectively. For the Top 25, the total combined plus/minus margin of error is slightly larger than the entire ranking; for the Top 10, the margin of error in just one direction (plus or minus) is larger than the entire ranking. Put another way, the total variation for the Top 25 group is 31.8 spots, or 27% larger than the Top 25 itself, while for the Top 10 the single-direction margin of error alone is 44% bigger than the Top 10 itself and the total variation is 188% bigger.
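
For clarity, here is the arithmetic behind those percentages, reproduced from the averages just quoted (a plus/minus miss doubles into total variation, which is then compared to the size of the ranking itself):

```python
# Reproducing the percentage comparisons above from the averages already
# reported (my reading of the figures; nothing new is being measured here).
top25_error = 15.9                       # average miss, Top 25 classes, tracking forward
top10_error = 14.4                       # average miss, Top 10 classes, tracking forward

top25_total_variation = 2 * top25_error  # 31.8 spots of plus/minus swing
top10_total_variation = 2 * top10_error  # 28.8 spots

print(round(top25_total_variation / 25 - 1, 2))  # 0.27 -> 27% larger than the Top 25
print(round(top10_error / 10 - 1, 2))            # 0.44 -> one-direction error 44% larger than the Top 10
print(round(top10_total_variation / 10 - 1, 2))  # 1.88 -> total variation 188% larger
```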

Those numbers are pretty damning (and possibly confusing), but they are not quite as bad as they appear. The impact of outliers plays a part in the disparity, as does the compound effect of successive recruiting classes. That is to say, an outlier year in which a program signs a bad recruiting class may be carried and propped up by much better surrounding classes during the four-year eligibility window, or an excellent outlier class could likewise be dragged down by surrounding classes that are not very strong. But the fact that we are looking at four consecutive classes at a time should still produce an average that is a strong indicator of the accuracy of the recruiting class rankings. Likewise, there is always a halo effect around strong classes or strong seasons on the field that attracts multiple strong classes in a row, which helps smooth the data and mitigate the outlier effect. On the flip side of that dynamic, when a typically strong program has an outlier down year in recruiting, it is usually due either to a coaching change that makes it difficult to corral a great class during the transition, or to a scholarship anomaly that leads to a small or imbalanced class because of a cyclically small number of available scholarships or poor succession planning by the coaching staff. In the case of a coaching transition, most programs see the proverbial “Year 2 recruiting bump” that balances out the previous down class, and in the case of the scholarship anomaly, it is usually followed by a much larger than usual graduating class (since those holdover players created the scholarship crunch in the first place). Both of these phenomena mitigate the outlier class, or often balance it out evenly over two years, so that the blip does not have a significant impact on the findings.

So, as stated, when we look at the overall validity of the biggest recruiting services in ranking the strength of signing classes, the inaccuracy is pretty immense. When the variation in accuracy is 27% larger than the entire Top 25 ranking itself (and 188% for the Top 10), that is a pretty poor level of performance. And those are the overall averages across all of college football. When you look at individual examples, things can get really scary. Take, for instance, the program that finished the 2012 regular season and headed into the BCS national title game as the Number 1-ranked team in the country: the Irish. Notre Dame was ranked second in the aggregate recruiting rankings on National Signing Day in 2008. That class wound up being ranked in the final AP Top 25 or Coaches Poll Top 25 exactly zero times over its four years of effective eligibility, from 2008 to 2011.

Perhaps worse than that, Texas A&M’s signing classes were ranked in the aggregate recruiting Top 25 for five straight years – the classes of 2002 through 2006 – without EVER being ranked in the final Top 25 in any of the eight years spanning those classes’ effective eligibility windows. On National Signing Day in 2003, the signing classes of Colorado, Mississippi State, North Carolina, North Carolina State, Oklahoma State, South Carolina, Stanford, Texas A&M and Washington were all ranked in the aggregate Top 25 in the country, and three of them were ranked in the Top 10. Of those nine programs, NONE ever finished in any of the final AP or Coaches Top 25 polls within the four-year effective eligibility window. That is over 35% of the ranked signing classes – and 30% of the Top 10 – in a single year that were never even good enough to be ranked in the final polls. Ever.

A Deeper Cut

So overall, the recruiting class rankings do a pretty terrible job of predicting how well those players will perform or achieve on the field. Now I will get deeper into the specifics of the findings to demonstrate just how flawed their predictive powers are, and try to determine why anyone should care about class rankings.

But first, I want to again clarify the scope of this analysis. It is not a study of the predictive power of star ratings on individual players. Apart from the in-depth analysis I did last week on the impact of 5-star players on a football program’s success, there have been a couple of other relatively well-circulated analyses of the accuracy of star ratings out there, and they have been very shallow in method and very hollow in findings. Overall I cannot argue with the very high-level takeaway that “Five-star good, no-star bad,” but beyond that very general conclusion, there isn’t a lot of meat on the bone. And that is not without reason, as they say: to perform a meaningful analysis of the accuracy of star ratings for individual players, the number of variables that would have to be controlled is far beyond the scope of sanity. In addition, there are so many fudge factors involved from the perspective of the recruiting services that the sizzle just isn’t worth the steak.

Ultimately, even if the individual player star rating systems were accurate, my response is, “So what?” Because my interest in recruiting is how signing classes will help the Gators win more games and more championships, or how they will help other schools who compete with Florida do the same. If I cared about stockpiling great players that get all sorts of media buzz and adulation, but almost never win championships, I would be a fan of Clemson, Georgia or FSU. However, I am a fan of the Gators, and in Title Town, we measure excellence in wins and championships.

So you can see why I find it useful to conduct this analysis: does ranking entire recruiting classes against those of all other schools have any accuracy in projecting those schools’ success in terms of wins and championships, as demonstrated to a great extent by final poll rankings? The more wins you secure, the higher you will generally be ranked; if you win your conference title, you will usually receive a bump in the polls – especially in leagues that hold conference title games. It is not perfect, but it is the best team-to-team, poll-to-poll, apples-to-apples comparison available for judging the meaning of any recruiting rankings out there. With all that said, let’s look at some more of the specifics of the findings.

Da Debil in Dem Details

Let’s first examine a few years as examples of how random the signing class rankings can be in terms of predictive ability. In 2006, USC signed the top-ranked class but wound up finishing fourth in four-year poll performance. Not terribly egregious. Pretty accurate, in fact. Florida was the king of that four-year span in terms of success on the field and was ranked second in 2006 recruiting, so neither the Number 1 team in the recruiting rankings nor the Number 1 team in the poll rankings was a big miss. However, that same year produced some huge misses. Tracking forward, Notre Dame’s ranking was off by 34 spots (that is, ranked Number 6 in the 2006 aggregate recruiting rankings, but 40th in the nation in combined four-year final poll rankings). The swing and miss on FSU’s ranking – another of 2006’s Top 10 signing classes – was even worse, over-ranking that class by 36 spots. Tracking backward (i.e., looking at the four-year final polls’ Top 25 and seeing where those recruiting classes were ranked in 2006) uncovered even bigger blunders like Ohio State (missed by 10 spots), Virginia Tech (25), TCU (46) and Boise State (50).

And those are just in the Top 10. In the rest of the Top 25, it was like a wild, wild west gun show – if the gunfighters were Don Knotts and Tim Conway (both in their current state of health), with misses like Missouri (33), Utah (38), Cincinnati (40) and West Virginia (45). The average margin of error in the 2006 recruiting rankings Top 25 was 14 spots, while tracking backwards it was 19. In the Top 10 alone it was 11 and 14, respectively.

In 2007, Florida won the recruiting title but wound up finishing fifth in the aggregate polls over the next four years, as that span contained two of Urban Meyer’s seasons that were not 13-1 finishes. Ohio State was the best-performing class of that year, after signing just the Number 13-ranked class in 2007. Overall, that Top 10 had an average miss of nearly 20 spots, with only one significant outlier (TCU, by 53). Boise State, Notre Dame and North Carolina were all recruiting-ranking misses of over 40 spots. Even adjusting for outliers, you wouldn’t want to wager any of your own money on future success based on signing day rankings.

The least volatile Top 10s in this 11-year span were the classes of 2002 and 2011 – two of only three classes in the study with a single-digit margin of error in the Top 10 looking either forward or backward. By comparison, the signing classes of 2005, 2008, 2009 and 2012 were among the worst in terms of predictive accuracy, so with the exception of 2011, the rankings have become worse over the years, not better. Even as the best representative of class-ranking accuracy in the period, the 2002 class still had a margin of error of 13 spots tracking forward and 16 spots tracking backward for the Top 25. In the Top 10, it snuck just under the double-digit ceiling at 9 and 7, respectively. Of the Top 10 signing classes in 2002, five missed their eventual final poll positions by only one spot, and six wound up finishing in the Top 10 of the aggregate final polls. That’s pretty darn solid. Had this been the norm, the findings would have looked much different. However, the very next year the margin of error nearly doubled for the Top 10 (to 16), with misses of 11, 14, 40, 41 and 44 spots. The following year was another good one for the recruiting services, which actually projected the top two teams in the polls – USC and LSU – correctly. But in 2005 the error rate ballooned again, this time shooting all the way up to 18 spots, and 21 for the Top 10. That trend of very poor accuracy continued for the final seven years of the analysis.

But a quick look at the anomalies explains why the recruiting services have any luck at all in predicting the success of signing classes, and why the casual observer often erroneously perceives the rankings to be generally on the money. Consider the 2004 recruiting rankings. USC’s class was ranked Number 1, with LSU’s class ranked Number 2. It should come as no surprise that the two teams that each won or “claimed” the national title just a couple of weeks before National Signing Day that year were LSU (official BCS champions) and USC (beauty pageant AP “champions”). In fact, when you look at the entire Top 10 of the aggregate final poll rankings for the 2003 season – released in January 2004, a matter of days before National Signing Day – the class of 2004 aggregate recruiting rankings varied a mere 4.9 positions from where those teams were ranked in the final polls of the season that had just ended. Over the 11 years of this analysis, the average margin of error for the recruiting class rankings discussed earlier was 15.9 spots; from 2002 to 2016, the average variance between the aggregate recruiting class rankings and the previous month’s final average AP and Coaches Poll rankings was just 9.7. Over the 11 years of the analysis, the margin of error for Top 10 recruiting class rankings nudged into single digits only three times (double digits 73% of the time); when looking at how closely the rankings mirrored (i.e., copied) the previous month’s final polls, the variance never reached 14, and touched double digits only 5 times out of 16 (31% of the time).
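
That comparison boils down to computing the same average gap twice against two different yardsticks. A quick sketch, with hypothetical variable names standing in for the actual data:

```python
# Sketch of the two comparisons above (hypothetical field names). The same
# Top 10 recruiting ranks are measured against two different yardsticks: the
# final polls over the NEXT four years (predictive value) and the final polls
# released a few weeks BEFORE signing day (how closely the rankings mirror
# the season that just ended).

def average_gap(recruit_ranks, yardstick_ranks):
    """Mean absolute difference between each team's recruiting rank and its
    rank on the comparison yardstick; teams missing from the yardstick are
    skipped here for simplicity."""
    gaps = [abs(rank - yardstick_ranks[team])
            for team, rank in recruit_ranks.items() if team in yardstick_ranks]
    return sum(gaps) / len(gaps)

# forward_error = average_gap(top10_recruiting_ranks, next_four_year_poll_ranks)
# mirror_gap    = average_gap(top10_recruiting_ranks, previous_final_poll_ranks)
```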

So if we are looking at predictability, it appears twice as likely that the final poll rankings will predict the recruiting class rankings released a few days later than that the recruiting class rankings will predict future final poll results. This is no doubt due in part to teams doing well in recruiting because of the momentum created by winning on the field (a trend we saw sharply underscored when the meteoric rise and meteorite plunge of the Gators’ 2016 on-field success produced a similar trend line in recruiting momentum between August and February). It is, however, no doubt also influenced by the recruiting services hedging their bets by aligning the top of the recruiting rankings with the top of the poll rankings.

The reason it appears to the casual observer that the class rankings seem to match the final polls is that they are looking at the same year’s final polls and recruiting rankings – not the final polls over the next four years that prove out the accuracy of the rankings.

To pick another glaring example of a wild miss: tracking forward, the Number 1 signing class of 2005 belonged to FSU, which didn’t finish in the final Top 25 in the country in either poll in any of the four years of that class’s effective eligibility. On the other side of the coin, Ohio State finished in the 30s or worse in every recruiting ranking in 2003, but its class finished third in final poll rankings over the next four years.

What it Means

These findings are not an indictment of the ability of these self-proclaimed recruiting gurus to judge talent, because truth be told they base their assessments on visit lists and buzz more than on all other factors combined (and it is not very clear what “all other factors” even are, though we do know for a fact that among them are such nefarious metrics as which recruiting-service-sponsored all-star game a player attends, which similarly sponsored elite camps he goes to, and even which football-magnet high schools friendly to one service or another he transfers to). Given those facts, these findings may be an indictment of the evaluation talent of the upper third of the programs in the country, or, more likely, of their evaluation acumen and their ability to develop the talent once it is on campus.

I say that because when I went team by team to compare the trend lines of recruiting rankings versus final AP/Coaches poll rankings, certain teams like Alabama and Clemson have trend lines that track very closely over the years. In both direction and general ranking, those schools’ lines gradually rose or fell together over the 14 years since 2002. Other schools like FSU have trend lines that are not remotely similar, as the Semis for a solid decade signed consistently high-ranked recruiting classes while consistently finishing unranked or low in the final polls. Then there are the many schools like Florida and Auburn, whose recruiting and final poll trend lines look like the wildly and randomly flailing arms of a giant octopus attacking an old Spanish galleon.

The outcome of the analysis should not serve as a blunt object with which to hammer the recruiting services, either. But it does demonstrate that their class rankings have essentially no predictive power or accuracy in forecasting the future of a program, and forecasting seems to be the only real reason to follow the ranking systems. That does not mean the rankings and the process of following them are useless to fans beyond the artificially created entertainment value, and it does not mean there is no significance to signing a highly rated recruit or a highly ranked class. It merely means that neither of those things means much of anything by itself as far as ensuring the success of the program over that player’s or that class’s period of eligibility.

Final (Sort Of) Conclusion

The primary takeaway here is that being ranked highly in recruiting IS significant – it means your program has a lot of very good raw material to work with over the next three to five years – but it is not a precise or even general predictor of future success in any capacity. The differentiating factors, as always, are matching the right players to the program and its systems, developing and coaching them up, and avoiding those unforeseen, largely uncontrollable impacts like injuries, transfers and coaching instability. I did not adjust for that kind of “program turbulence,” if you will, which would absolve the recruiting service rankings of some of the blame for their significant inaccuracies. I ignored it primarily because the 11-year period is long enough to smooth out those influences on a program. We are in the era of almost instant program transformation. Urban Meyer won a national title in two years. So did Gene Chizik, Bob Stoops and Jim Tressel. Les Miles, Pete Carroll and Nick Saban all did it in three years, with Saban also doing it at LSU in four. Larry Coker even did it in his first year. Gus Malzahn took Auburn from last in the division to the BCS title game in his first season. And of course Jim McElwain raised the Gator ship off the bottom of the S-E-Sea and sailed it straight to Atlanta on his maiden voyage as Florida skipper. All of those national titles were won since 2000, and in fact these coaches represent ALL of the national title-winning coaches since 2000 except Mack Brown, who took eight years to win his at Texas, and Jimbo Fisher, who took seven (including his three seasons as the de facto head coach while Bobby Bowden napped in the practice-field crow’s nest). And all the coaches on that list but Miles and Coker took over struggling programs. So 11 years is by far a long enough sample to factor out any program-transition troubles that might misrepresent a program’s recruiting success.

The Good News

The longer you follow recruiting, the more you realize the key to turning these signing classes into successful football programs over their period of eligibility goes far beyond the players who sign each February. If you plan to win a lot of games and championships, it is very important to sign great classes of athletes on a consistent basis. But the programs that turn that raw talent into success are the ones with great coaching staffs who have not only an eye for talent, but an eye for talent that fits their systems and their programs on a cultural level, and above all a high level of ability in developing that talent to play at the highest level within those systems.

You see where I am going with this.

It doesn’t matter what other programs do or where they rank this year – the success of this Gators class will be borne out by the strength of the staff’s abilities and the stability of the Florida program’s system. How much this class achieves will depend on the program, and certainly not on the strength of a Top 10 class ranking endorsement in February. And for one last exercise in predictive accuracy, I will predict that it will be great fun for Gator fans to watch this class and the rest of the program develop, perform and bring home serious hardware over the next four years. I think in future years we will look back at this class as the one that fueled the Mac Attack to greatness in the SEC and put the Gators back where they belong among the national elite. Since this class did not rank among the all-time best in school history in terms of National Signing Day regard, we will definitely have to check back in four years and see how accurate that prediction – and the recruiting rankings – turn out to be.

Until then, remember that each day is a gift; that’s why they call it the present.

David Parker
One of the original columnists when Gator Country first premiered, David “PD” Parker has been following and writing about the Gators since the eighties. From his years of regular contributions as a member of Gator Country to his weekly columns as a partner of the popular but now-defunct niche website Gator Gurus, PD has become known in Gator Nation for his analysis, insight and humor on all things Gator.

5 COMMENTS

  1. PD, very interesting! Thanks for taking the time to do this analysis. With parity & scolly limitations, coaching staff is best predictor imo. That explains the success of programs yr over yr that have great success with top 25 classes & also the underperformers with top 10 classes.

  2. recruiting service # of *s is not the be all and end all, but it is hard to argue with the fact that carlos dunlap, tim tebow, percy harvin, shariff floyd, dominick easley, and a number of other florida greats were 5 *s if my memory serves me right.
    i think there is no question…none…that on average a team of 5 * athletes will defeat a team of 3 * athletes. as tebow himself said: “hard work beats talent unless talent works hard”.
    you have to have the raw material, the potential, the athleticism present in the first place.
    and you are more likely to get that in a 5* than in a 3*.
    i don’t see how anyone can argue against that.
    # of *s should matter, but must be weighed with other factors…to get a complete picture.

• How about 5* players that didn’t live up to their hype, like Ronald Powell (#1 HS player in the country), Jeff Driskel (#1 HS QB in the country), Andre Debose, John Bentley, Gary Brown, or Darrell Lee. Or 3* players who turned out to be much better than their 3-star ranking, like Mike Pouncey, Antonio Callaway, Alex McCalister, Cam Dillard, Jarrad Davis, or Quincy Wilson.

3. I still have to wonder why we lose so many in-state 4 & 5 star recruits to so many out-of-state schools, when we used to get them on a regular basis. Just a bad reputation after the Muschamp era, poor facilities, poor recruiting effort/budget or just bad luck???? I agree that fitting the system, hard work and coaching make a difference, but I would rather start off with top talent and go from there. Money doesn’t buy happiness but it is hard to be happy without it, and talent doesn’t guarantee wins but it is hard to win without it!!!