PD’s Postulations: Recruiting Mirth and Mythology Pt. 2

Last week I looked at the mythology of “closing”, the quotation marks being needed because there is no consensus on what closing really means, or even whether the concept still exists in this era of strategically manipulated public announcements. In Part 2 of this series examining the mirth and mythology of recruiting, I will try to separate the truth from the tall tales behind the myth of 5-star talent.

How Many Stars On Thars?

This season, the biggest complaint or focus of dread among many Gator fans is the lack of commitments from prospects that are ranked 5-star recruits by the recruiting entertainment services. In this recruiting class, the Gators did not garner the pledge of a single player who was rated a 5-star at the time of his commitment. Two players saw their star ratings lifted to 5-star after committing to Florida, but none were so recognized when they pledged to the good guys.

The screams about 5-star commitments have simmered slightly since Florida was upgraded from zero to two (which is tied for eighth in the nation – maybe that was the top-10 recruiting finish everyone was looking for). However, a pair of 5-stars is a far cry from the royal flush of them to which Florida Gators fans had become accustomed prior to the last several years.

So with all the ruckus over the lack of 5-star signees at Florida (could you describe the ruckus?), and all the 5-star talent flocking to other places, the question has to be raised: does it really matter?

There have been studies comparing the proportion of 5-star high school athletes who get drafted by the NFL – and taken in the first round – with the proportion of non-5-stars. The results of those analyses can be found with a quick Google search. I won’t bother citing them here because they are irrelevant to the success of a college program. The job of a college coaching staff is to evaluate and sign the best players they can to fit their systems and help them win the most games and the most championships possible. In college football. A track to the NFL is a nice recruiting pitch, but no college coach has ever been paid or otherwise compensated for the number of kids he got drafted into the NFL.

So the only analyses that matter are those that demonstrate a correlation between signing 5-star recruits and winning college football games and championships. That is what I will look at here.

The Method, Man

With apologies to rapper Clifford Smith, the method is not overly important because there are many ways to approach this comparison. I will look at it from four different angles and let you decide which is the most telling – or whether any of them reveal anything conclusive at all. First, the lay of the land: 5-star recruits are supposed to be immediate-impact players as true freshmen, and they are assumed to be 3-year players – and treated as such in depth chart planning – unless they get to campus and either show they are not ready or get injured. Since 2002, 70% of Florida’s 5-star signees have played as true freshmen and skipped the redshirting process. The 5-star signees over that period have stayed at Florida less than 3.5 years on average (including redshirt years) and have played on the field an average of 3 years each. That’s 14 years’ worth of data for Florida, and I would wager good money the numbers are very similar at every Power 5 program that signs 5-star players. And there are rarely as many as 30 teams in America that sign any 5-star prospects in a given year – in fact, only once since 2002 have more than 28 schools signed a 5-star recruit. That conveniently lines up with the AP and coaches top-25 polls, which between them usually rank about 27 distinct teams – roughly the same number of teams each year as sign 5-star talent.

I am using 2002 as the starting point both to keep things relevant to modern recruiting and personnel management trends, and because that is the year the first of the major recruiting services went to a 5-star system.

Therefore, the impact of 5-star signees will be measured by the first 3 years they are part of a college program. As demonstrated above, statistically speaking, that is their window of contribution on the field. I will analyze the data four ways: (1) a snapshot comparison of the cumulative impact of 5-star players on winning the national title, (2) the 3-year prospective impact of 5-star players signed in a single recruiting class, (3) the retrospective impact on a single season of 3-straight years of 5-star recruits, and (4) a chronological look at the relationship between 5-star recruits and program success (a chicken-and-egg analysis). I will look at aggregate AP & coaches poll finishes and national titles as the most meaningful measures of success. The 5-star players were identified according to their final star rating by the four biggest recruiting services (247, ESPN, Rivals and Scout).
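For readers who like to see the plumbing, here is a minimal sketch in Python of the bookkeeping behind these comparisons. To be clear, the data structures and helper names are hypothetical stand-ins of my own, not the actual dataset or code behind this column. It folds the two final polls into one aggregate ranking and tallies 5-star signees per school.

```python
# Sketch only: ap_poll and coaches_poll are hypothetical {team: rank}
# dictionaries holding one season's final polls.

def aggregate_polls(ap_poll, coaches_poll):
    """Fold two final polls into one aggregate ranking.

    A team missing from one poll is treated as sitting just below that
    poll's last spot -- one simple convention among several possible.
    """
    teams = set(ap_poll) | set(coaches_poll)
    ap_floor = len(ap_poll) + 1
    coaches_floor = len(coaches_poll) + 1
    avg = {t: (ap_poll.get(t, ap_floor) + coaches_poll.get(t, coaches_floor)) / 2
           for t in teams}
    ordered = sorted(teams, key=lambda t: avg[t])
    return {t: i for i, t in enumerate(ordered, start=1)}

def five_star_counts(signees):
    """Tally 5-star signees per school from (school, stars) pairs."""
    counts = {}
    for school, stars in signees:
        if stars == 5:
            counts[school] = counts.get(school, 0) + 1
    return counts
```

The union of two 25-team polls is also where that “about 27 ranked teams” figure comes from: the two polls mostly overlap, with a couple of teams appearing in only one.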

Bird’s Eye View: National Titles

Would it surprise you to know that of all the teams that won national titles since 2002, only one of them had a single 5-star recruit on its starting roster? Well, it should, because that claim is simply absurd. Would it surprise you to know that since 2002, of the 14 national titles won in the FBS, 13 of them were won by the 7 teams with the most 5-star recruits signed since 2002? Well, it shouldn’t, because that one is absolutely true.

Since 2002, 13 of the 14 national titles were won by the 7 schools with the most 5-star recruits signed over that span (not counting the class of 2016, since they have not played yet). Here they are, in descending order of 5-star players signed, with their title seasons in parentheses: USC (’04), Florida (’06, ’08), Texas (’05), FSU (’13), Ohio State (’02, ’14), Alabama (’09, ’11, ’12, ’15) and LSU (’03, ’07). The only other team with a national title in that span was Auburn (’10), which was just the #14 team in terms of 5-star players signed.

This would seem to be a slam-dunk endorsement of the idea that if you want to win a national title, you have to sign the most 5-star players in the nation. However, there are other elements to consider. USC signed the most 5-star players in this period (78), nearly twice as many as Alabama (43), yet Alabama won four times as many national titles as USC did. Likewise, of the top 4 schools in 5-star signees, three won only a single national championship, whereas the three schools just behind them won 8 natties between them. It’s also been 11 years since #1 USC won one, and 10 years since #3 Texas took home a crystal trophy. Then there is Oklahoma, which tied LSU for the 7th-most 5-stars (38), yet has zero natties over that period compared to LSU’s two.

Then there are Georgia, Miami, Notre Dame, Michigan and Tennessee, all of which signed more 5-stars than Auburn – yet Auburn has a title and none of them does. Auburn also played in a second national title game, something none of those five teams managed. And compare the programs at Alabama (43 5-stars) and Georgia (35): the Tide average only one more 5-star signee than Georgia every TWO years…yet they have 4 national titles in the last 7 seasons while Georgia is 0-for-35 years. But that discrepancy can probably be better understood by looking at things chronologically.

And before moving on, let’s throw a little cold water on this 5-star slam dunk by looking at its natural enemy: the coaching argument. Those 13 national titles won by 7 programs were won by 7 head coaches: 4 of whom won championships at multiple stops as head coaches, and 1 of whom also won a national title as a coordinator at another school. As head coaches, Nick Saban won national titles at LSU and Alabama; Urban Meyer won them at Florida and Ohio State; Jim Tressel won them at Ohio State and Youngstown State; Pete Carroll won a national title at USC and a Super Bowl championship with Seattle; Jimbo Fisher won a national title as head coach of FSU and another as offensive coordinator at LSU. Just two of the 7 coaches – Les Miles and Mack Brown – won national titles at only one place. So these coaches win huge wherever they go. Is it the 5-star talent or the coaching?

Looking Forward

Beyond winning national championships, the theory that signing 5-star players gives a program an advantage over not signing them (and the more, the merrier) must logically be supported by data demonstrating that the teams getting the 5-star talent win more games than the teams that aren’t. One way to check this is to look at the data prospectively: take an accounting of one signing class’s 5-star hauls, and see how those teams perform over the next three years relative to the rest of the nation. So let’s take a look.
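Before the examples, here is how that prospective check can be sketched in code – again with hypothetical structures (`stars[year]` as the per-class 5-star counts and `poll_rank[year]` as the aggregate final poll from the earlier snippet), not the actual workbook behind this column:

```python
def five_star_rank(counts):
    """Rank schools by 5-star haul for one class; ties share the better
    spot (so six teams tied behind ten others all sit at #11)."""
    ordered = sorted(counts.items(), key=lambda kv: -kv[1])
    ranks, prev_count, prev_rank = {}, None, 0
    for i, (school, count) in enumerate(ordered, start=1):
        rank = prev_rank if count == prev_count else i
        ranks[school], prev_count, prev_rank = rank, count, rank
    return ranks

def prospective_finish(poll_rank, team, trigger_year, window=3):
    """Average aggregate poll finish over the seasons the trigger-year
    class is on the field: the signing year plus the two that follow."""
    finishes = [poll_rank.get(y, {}).get(team)
                for y in range(trigger_year, trigger_year + window)]
    finishes = [f for f in finishes if f is not None]
    return sum(finishes) / len(finishes) if finishes else None
```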

First let’s look at Florida, starting with the smallest target scope. And let’s start at the very beginning; a very good place to start. In 2002, the Gators inked two 5-star players, good for 11th in the nation. They were tied at that ranking with five other teams, so worst case scenario, they could be considered the 16th-best team in 5-star success that year. If there is something to this 5-star theory, that 11-16 range is about where you would expect Florida to finish on average in the national polls over the next three years. But Florida finished 37th. Not too great – though the small number of 5-star signees no doubt played a part, right? So let’s look at the following year, 2003, when Florida signed seven 5-star players, the most in the nation and two more than second-place USC. We should expect this class to fare a bit better given the big advantage in 5-star talent, but it went on to average only #25 in the polls.

Fast-forward to 2007, which was the next year that Florida signed the most 5-star players (12). Over the next three years, that class finished second place in aggregate final poll rankings. That’s a little more like it. The book-ending years around 2007 saw similar results. Florida signed the second-most 5-star kids in 2006 and averaged a third place final poll finish. In 2008, the Gators signed the 5th-most 5-stars and finished sixth. So now we’re cooking with gas.

Until 2009, when Florida signed the second-most 5-star kids in the land and finished only sixteenth. And 2010 was even worse: Florida signed the most 5-star athletes in the country and averaged just a #23 final poll ranking. Clearly it must be random, right? Or could it be that the signing classes of 2002, 2003, 2009 and 2010 were so rich with 5-star talent yet finished so poorly because they were coached by guys named Zook and Muschamp, while the classes of 2006 through 2008 were so consistently predictive of high finishes because they were coached by Urban Meyer?

I did this for all the teams and found some that are very consistently accurate in predicting high finishes (like Alabama), and some that are very consistently inaccurate (like USC and Texas). So it’s better to look at the averages. But first let’s look at some interesting numbers year to year.

Most interesting is the number of teams who find their way into the rankings – and into the top 10 – without signing a single 5-star athlete. For the 2007 signing class, the aggregate final polls had 4 teams in the top 10 that had signed zero 5-star players in 2007. The classes of 2008, 2009 and 2010 did that one better, each landing half of their top-10 teams without benefit of a 5-star signee in the trigger year. In those four years combined, over 62% of the ranked teams did not have a 5-star recruit. Overall, across the 12 years in this piece of the analysis, an average of 60% of the teams in the 3-year aggregate rankings had no 5-star players signed in the trigger year – and within the top 10 alone, the average was 28%.

In all, the year that did the best job of proving the case for 5-star players predicting success on the field by this measure was 2005 – and it wasn’t even close. The 2005 class saw 32 teams sign at least one 5-star recruit. When the final polls for the next three years were aggregated, only 44% of the final-poll teams were not on that list of 32 teams with 5-star talent. The next-closest year had 57% of its final-poll teams reach their ranking without a 5-star player in the trigger year (three years tied at 57%). Likewise, 2005 was the only year in which all ten of the final top-ten teams came from the 5-star club. On the other side of the coin – the worst years in terms of missing with 5-star talent – were 2008 and 2010, with both years producing half of their top ten and 64% of their top-25 poll teams without benefit of a 5-star player.

Another interesting aspect of this analysis is how far the misses were off target – that is, how well a 5-star ranking predicts final poll outcomes, and thus how well signing 5-star talent predicts success. For instance, in 2009, Alabama tied for the second-most 5-star players signed that year (a #2 rank) and finished #1 in the aggregate polls three years later. That is a miss of just 1 spot, which is exactly what you would expect to see if the 5-star theory held water. But it doesn’t even have to be that close to be meaningful. LSU was the #1-ranked team that year in 5-star signings and finished fifth in the polls, a difference of just 4. That’s two confirming wins for the theory in the top-5 in 2009. But what of the #2, #3 and #4 teams in the final polls for that stretch? Boise State, Oregon and TCU occupied those three spots in the aggregate polls, and they missed their 5-star-predicted finish by over 30 spots apiece (none of them signed a single 5-star player in 2009).
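Those per-team misses come from a simple calculation, sketched below with the same hypothetical structures. The one added assumption is a convention for teams that signed no 5-stars at all: score them as if ranked just past the end of that year’s 5-star list, which is how a Boise State-style finish registers as a 30-plus-spot miss.

```python
def average_miss(star_ranks, poll_ranks, top_n=None):
    """Mean |5-star signing rank - aggregate poll finish| over ranked teams.

    If top_n is given, only teams finishing in the poll top-n are scored.
    """
    no_stars = max(star_ranks.values(), default=0) + 1  # assumed convention
    misses = [abs(star_ranks.get(team, no_stars) - finish)
              for team, finish in poll_ranks.items()
              if top_n is None or finish <= top_n]
    return sum(misses) / len(misses) if misses else None
```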

When measuring how accurately 5-star signing ranks predicted final poll finishes, the consistency across the years is remarkable. The average miss across the period was 11.7 ranking spots. Not bad, but not good. And it was very consistent: all 12 yearly averages fell between 10 and 14. But the area we are most interested in is the top-10, isn’t it? The average miss in the top-10 is better, but only marginally so, at 11.3. That’s a margin of error bigger than the entire range of the top-10. In other words, if you are in the top-10 of 5-star signees, you can expect to finish in the top-10 in the polls…give or take 11 spots. So you could finish in the top-10 in 5-stars and 21st in the polls. And that’s just the average. You could in fact wind up like USC in 2013, which signed the most 5-star recruits in the country but finished #29 in the aggregate poll rankings. Or recall Florida’s #1-to-#23 plunge in 2010, mentioned earlier. At this point, looking forward is not giving me much confidence in the 5-star theory, so let’s try it the other way.

Looking Backward

Logically, this method should be a stronger indicator for the 5-star theory, and the expectation is that it will have better predictive value. That’s because judging one signing class’s 5-star haul by the next three years of poll rankings ignores the contributions of the next two signing classes that are also in the mix. In this section we will look retrospectively, comparing the aggregate 5-star ranking of 3 straight signing classes against a single season’s poll rankings – a season to which all three classes contribute.
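In code terms, the retrospective flip is just a pooling step in front of the same machinery – sketched here under the same hypothetical structures as before:

```python
def pooled_star_counts(stars, season, window=3):
    """Sum 5-star signees across the classes that feed a given season
    (for the 2015 season: the 2013, 2014 and 2015 classes)."""
    pooled = {}
    for year in range(season - window + 1, season + 1):
        for school, count in stars.get(year, {}).items():
            pooled[school] = pooled.get(school, 0) + count
    return pooled
```

The pooled counts then run through the same five_star_rank() and average_miss() helpers used in the prospective sketch.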

So here we go.

Right off the bat we see the top-10 predictive value clean up a bit. Whereas in the prospective view nearly 30% of the top-10 teams had no 5-star recruits in the trigger year, only 20% of the top-10 teams had no 5-star guys when we look retrospectively. A one-third improvement. It is even more pronounced when you look at all ranked teams: whereas the prospective method produced final rankings in which 60% of teams had no 5-star players, the retrospective method wound up with just 34%. Nearly cut in half. So that is a 33% and a 43% improvement, right? Well, not exactly. When you aggregate three signing classes, you have three times the opportunities for a team to sign a 5-star player, so you would expect more teams to have 5-star talent.

For instance, the most recent signing class with three full classes on the field through 2015 was the class of 2013. That year, 25 programs signed at least one 5-star athlete. In 2014, only 23 programs signed a 5-star, yet the pooled list of 5-star teams grew: although there were two fewer teams with 5-star talent in 2014, 8 teams from the 2013 list signed no 5-stars in 2014 while 7 new teams signed one. In 2015, the number of teams signing a 5-star player stayed that low (23), but by the same dynamics, two more new teams were added to the 5-star list. So this is not an apples-to-apples comparison. As a standalone metric, though, it does make a much better case for the 5-star theory: on average per year, only 2 of the top 10 with no 5-stars, and only 8.5 out of the entire top 25. That’s a pretty good record.
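A toy version of that churn makes the inflation obvious: pooling keeps every holdover and adds every newcomer, so the union of three shrinking classes can still be bigger than any one of them (illustrative letters, not the real team lists):

```python
class_2013 = {"A", "B", "C"}   # three teams sign a 5-star
class_2014 = {"A", "D"}        # B and C drop out, D is new
class_2015 = {"A", "E"}        # E is new

pooled = class_2013 | class_2014 | class_2015
print(len(pooled))  # 5 -- more than any single year's 3 or 2
```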

So these numbers make a pretty strong argument for the theory that if you want to finish in the top 25, it is a good idea to sign at least one 5-star player every three years. But is that really what the 5-star mythology is about? I would say it is not. I would say the mythology states that the more 5-star players you sign, the better team you will have and the better you will perform. So the ultimate measure in this comparison is the predictive value of the rankings.

As before, let’s start with Florida. From 2002 to 2004, Florida signed ten 5-star players, good enough for fifth in the country – one more 5-star and they’d have been third. The Gators wound up an average of 25th in the 2004 final polls. From 5 to 25 is not a good prediction. Move the needle just two years forward and combine the classes from 2004 to 2006: Florida signed the 3rd-most 5-star players in the nation (1 signee shy of second place) and finished #1 in both final polls in 2006. But as we discussed earlier, the coaching changes from Zook to Meyer to Muschamp taint the entire analysis for Florida…or, put another way, make a very strong argument that coaching is much more important than 5-star accumulation in terms of winning on the field. Looking at USC for this 3-year period also throws a monkey wrench into things. They not only signed the most 5-star talent from ’04 to ’06, they signed twice as many as the second-place program (22 for USC, 11 for Oklahoma). And they wound up 4th in the final polls in 2006, behind Florida (10 5-star players), Ohio State (9) and LSU (8). With that much 5-star talent on the field and a national championship-winning (and future Super Bowl-winning) head coach on the sidelines, you would expect them to have rolled to the national title that year. But they didn’t even make the title game. They had an elite head coach AND the most 5-star talent, and it couldn’t produce a sniff of the national title. That throws a monkey wrench into the theory, but what kind of monkey is anyone’s guess.

So, since individual years and teams are too volatile to support any credible conclusions, let’s focus again on the averages. Recall the average miss on ranked teams in the prospective analysis – that is, how far off-target the 5-star rankings were compared to the final aggregate poll rankings. That average miss was 11.7 ranking spots. Not great, but not bad. Well, the average miss using the retrospective method – the one that should logically produce better results – was 17 ranking spots. That’s pretty bad considering the aggregate top-25 of the AP and coaches polls never contained more than 27 total teams in any year of the twelve-year analysis.

But again, let’s focus on the money spots: the top-10. Recall that when analyzing prospectively, the average miss in top-10 rankings based on 5-star recruits was 11.3 spots. Looking retrospectively, the average miss inflated to 15.2 spots. That’s a pretty large margin of error on just 10 spots. And that’s as good as it gets, because year to year the numbers pop much larger. Take the most recent three-year aggregate of signing classes, which culminated in the final rankings for the 2015 season: top-10 teams TCU, Houston and Iowa all missed their 5-star-predicted finish by over 40 spots. That happens when you finish in the top-10 after three straight years of signing zero 5-star players, as those three programs did. In 2014, #8 Georgia Tech missed by over 40 spots and #3 TCU by nearly 50. Then again, that same year #1 Ohio State missed its predicted finish by only 4 spots, and of course this year Alabama won the national title after signing the most 5-star players over the last 3 years.

So the picture is starting to take shape. One more element to investigate.

The Chicken and the Egg

So which comes first? The stockpiling of 5-star talent or winning big on the field? Does a team start to win in the fall after a few winning signing classes in February…or do spikes in recruiting fortunes begin once a program starts to see results on the field?

For this analysis, I looked at five schools of interest to Gator fans, starting with Florida, of course. Next I added Alabama, Clemson and FSU – partly for their rivalry status and their perceived positions as kings of national signing day, and partly because all three have swung from low poll finishes to high poll finishes over the last 14 years. Lastly I added Auburn, which has spiked up and down significantly in recent years and could offer a unique perspective on this comparison. On a line graph, I plotted each school’s ranking in 5-star players signed each year against its final poll ranking for the same year. I made one set of graphs starting with the recruiting class of 2002 transitioning to the football season of 2002, and another set starting with the football season of 2001 transitioning to the signing class of 2002. Since there is no 3-year aggregation in this analysis, there are 14 years in this data set.
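For anyone who wants to go beyond eyeballing the line graphs, the lag question can be scored directly. Here is a hedged sketch – assumed {year: rank} series for one school, mirroring the two sets of graphs described above – that measures how tightly the two trend lines track at each offset:

```python
def mean_gap(star_rank, poll_rank, lag):
    """Mean |star_rank[y] - poll_rank[y + lag]| over overlapping years.

    lag = 0:  February's class against the following fall's finish.
    lag = -1: last fall's finish against this February's class.
    """
    gaps = [abs(star_rank[y] - poll_rank[y + lag])
            for y in star_rank if (y + lag) in poll_rank]
    return sum(gaps) / len(gaps) if gaps else None
```

A tighter gap at lag -1 would favor “build it and they will come”; a tighter gap at lag 0 would favor “get the ingredients, then bake the cake.”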

What I expected to see was one of two things. Either I would see a lag between the two trend lines showing an increase in poll rankings followed by an increase in 5-star player rankings (“build it and they will come”) – or I would see the opposite lag: first the 5-star signings go up, then the poll rankings (get the ingredients, then bake the cake). What I found was neither. What I found was the definition of randomness.

With Florida, the 5-star trend line stayed pretty flat. With only two 5-star hauls outside the top-10 after 2002 (2005 and 2011), the Gators hovered in the top-6 for 8 of the 14 years, with three more years at #9. But as we all remember, our final poll rankings were a complete yo-yo over those 14 years: way down, way up, way down – way up for just a year – then way down again. Once again, the most important subject of study – the Gators’ recruiting – is useless as a case study…or perhaps very useful in dispelling the 5-star mythology. You decide.

For Alabama, the 5-star theory works perfectly. That is, if you think a dynasty can be built on five 5-star players in one signing class, because that’s how many Alabama signed in 2008. Alabama had signed only a single 5-star player in the previous 6 signing classes combined. That one 5-star signee followed the only season in a 5-year stretch in which Alabama finished ranked (perhaps a tiny hint of recruiting feeding off of performance on the field, but only if just one car decided to drive to Iowa to watch all those dead guys play baseball in Field of Dreams). But after just one 5-star player in half a dozen years, the Tide pulled in 5 of them…and after finishing unranked in four of the previous five seasons, that fall they finished 6th in the country. And they never looked back on either front, finishing in the top-3 of 5-star classes in 7 of the next 8 years, and in the top 4 of the final polls in 5 of the next 7 years (with a 7th and a 10th thrown in the mix in “down” years). So either that 5-man 5-star signing class of 2008 created a dynasty all on its own – in their true freshman year – or perhaps it was the fact that a year earlier a guy named Nick Saban took over the program.

FSU provided no help for the 5-star theory, either. They finished in the top-10 in 5-star signings for 5 straight years from 2002 to 2006, yet finished higher than 15th in the final polls only once – a 10th place in 2003. Neither 5-star talent nor on-field performance seemed to be affecting the other in any way. For the next three years, both final poll rankings and 5-star signees were a rare sight in Tallahassee: 30th, 12th and 10th on the 5-star side, and unranked, 21st and unranked in the final polls. But the gradual climb in 5-star talent continued, following up with a 5th place and then a 1st place. The poll finishes, however, remained dismal at 17th and 23rd. Finally in 2012, after another #1 class of 5-star players, FSU cracked the top-10 with an 8th-place poll finish. Then the 5-star talent seemed to finally pay off, with a national title and a 5th-place finish in consecutive years. Last year, however, they fell back to 15th, despite finishing in the top-5 of the 5-star rankings in five of the last six years. For FSU, the difference wasn’t made by the chicken or the egg. The only difference between winning big and floundering was the crab legs.

Clemson at least helps the analysis by narrowing the scope of focus. Between 2002 and 2010, they stunk on the field and rarely signed a 5-star player. Then in 2011, after failing to sign a 5-star player in 6 of the previous 9 years, Clemson shot to #1 with a bullet: five 5-star signees. The following February they were back to zero, but they at least crawled up to #22 in the nation after finishing unranked in 4 of the previous 10 years. Those five 5-star players in one year did not have the immediate one-fall impact the ones at Alabama supposedly did. Clemson finished #7 in 5-star hauls in 2013 – only their second top-10 finish for 5-star talent through 2014. For all their recruiting hype, just those two classes. All told, when they reached the national title game for the 2015 season, Clemson had finished in the national top-10 for 5-star signees just twice in the past 14 years, though both times were in the past five. This February they finished second with three 5-star players. We will see whether that improves the 5-star mythology or not. I won’t even discuss Auburn, because their graph looks like a Salvador Dali crashed into a Picasso.

Jumping to a Conclusion

Harkening back to that “Jump to Conclusions” floor mat from Office Space, there are a host of options before us as far as how to interpret these data. But the only one that requires no leap of faith or serious denial of the facts is this: In terms of predicting future success, signing 5-star talent absolutely does not matter…unless it does. And the more 5-star talent you sign, the better your team will be…unless it isn’t. If you don’t sign any 5-star talent, you will not be very successful and won’t ever finish in the national top-5 or compete for a national title…unless of course you do.

I am of the frame of mind that there are three possible ways to look at this: (1) The data clearly support the 5-star theory, (2) The data clearly do not support the 5-star theory, or (3) The data are inconclusive. Only the first option validates the 5-star theory, and in my estimation, the findings fit #3: inconclusive.

So being a 5-star athlete in high school is great for the athlete, because it is a pretty good predictor that not only will they get a scholarship to play football at the college of their choice, but they will likely also be drafted by an NFL team. And while that is great for the player, the positive 5-star impact on the football team appears to be more mythology than mirth.

So, back to the drawing board. In Part 3 of this series, I will analyze what I hope will be the defining model for connecting recruiting rankings to success on the field: the ranking of the entire signing class. Not just the ones “closed” on national signing day, not just the small number of 5-star can’t-miss players, but the whole class. Hopefully it will help many of us put to rest at long last the question of whether recruiting class rankings matter.

David Parker
One of the original columnists when Gator Country first premiered, David “PD” Parker has been following and writing about the Gators since the eighties. From his years of regular contributions as a member of Gator Country to his weekly columns as a partner in the popular, now-defunct niche website Gator Gurus, PD has become known in Gator Nation for his analysis, insight and humor on all things Gator.

5 COMMENTS

  1. my head is spinning. I don’t think you can look at one side without the other.

    bottom line is, you need both. but the “chicken” is proving your system can work, which will attract better players being the “egg”

    look no further than Florida’s odyssey over the past few coaches to see that recruiting is pinned to the success of a program. but if you don’t coach up the kids and develop them as players, you get what you had in the Zook and Muschamp eras: kids that can compete, but can’t get over the hump for whatever reason. conversely you move to Meyer, who got the players he wanted, put them in places to succeed, and the rings started rolling in. (see the ’06 signing class for UF, one year after that coaching regime got started)

    i think it’s become even more about what these coaches can sell over the past several years. Saban is tried and true, which is why recruits keep going back to the well – they see the best chance to have a life after CFB. we’ve heard all the stories about how Saban packages his recruiting pitch simply by stating, “this is where you can go…”

    Mac has a pretty good resume for getting the best out of his players. his QBs have all been efficient and done well once he has the pieces in place to support his system. i think Gator nation needs to chill and realize that how last year finished was simply a function of the parts not adding up to the sum of results we hoped to see on the field. we had a kid from dental school attempting kicks, for Pete’s sake

    let these guys do their thing and watch our team become relevant again in the national landscape of CFB

  2. Now that was a report, but I might have missed something while reading, and that is size for the position. As this game evolves, I think we will start seeing 6’4″ wide receivers with speed – and what does that do for a 5’9″ DB? It was obvious with Harris at QB last year: if 10% of his throws were blocked by linemen, then 100% of that 10% would never have been caught. Unless you’re Ole Miss. To quote a phrase from my last girlfriend, “Size does matter.” Sorry, couldn’t resist.

  3. PD, have you missed your calling as a parser of big data??? Thanks for your thorough and thoughtful analysis on this. I look forward to your next installment where you evaluate the entire signing classes!

    I think there are a few more avenues to pursue, however, in terms of meaningful rankings. In my mind they would include:

    1) Geographic bias: Where are the 5-star players from? Are they being evaluated by the same individuals at these rating services consistently across geographic regions? Consider that maybe certain geographic regions consistently produce better players. Or perhaps players from certain regions are consistently over-ranked because they are studs in their region, but the relative competition is weaker? Consider that geographic bias, if it exists, by definition might play a part in certain conferences being consistently stronger on average.

    2) Conference bias: Consider how much cross-conference and out-of-conference play positively and negatively influences rankings. Hopefully this will become clearer as the playoff system matures and, ideally, expands. But absent that, if final rankings are the ultimate goal, then a strong team in a weaker conference would seem to have the advantage over a moderate team in a strong conference (or a strong division within a strong conference, i.e., the SEC West). It would seem to me that in the first instance a team could consistently dominate a relatively weak conference, play “up” against a ranked non-conference opponent and win, and end up in the top 5 or top 10 nationally w/out landing a single 5-star player for years. I think your example from 2009 suggests this.

    3) Coaching bias: You touched on this above, and I think accurately pointed out, it’s hard to account for. Some coaches are great recruiters. Some are great at evaluating talent. Some are great program managers. Some are great at Xs and Os. Some are great at instilling discipline. Some are great at spotting trends and exploiting them. Some are all of these things, and others a combination of some of these things. Some are great at knowing their weaknesses and surrounding themselves with talent to fill the gaps.

    Sorry for rambling, but these are just things that crossed my mind as I read and thought about what you wrote above. Maybe there’s a way to decrease the variance from ratings by discounting the outliers? Maybe some of the variables should be weighted? Maybe it’s meaningful to look at rankings within conferences as well as overall national rankings? Or to factor in how ranked teams perform against non-conference opponents? Maybe you can adjust for coaching by weighting the variable based on overall win/loss record, staff continuity or relative rankings of offense and defense?

    So many variables… so little time…

  4. Very good. I believe you’ve done as well as can be done in choosing which data events to take to the lab. Like what was just posted: “so much data, so many variables”. Causality at best is a very slippery slope that doesn’t necessarily mix well with common sense. I think someone posted that the last four NCs were won by teams having 50% 5’s in each recruiting class, as if it proved 5-star recruits won NCs. It doesn’t. A definite maybe? I’ll say door #3 and will look forward to part three.
    On another note, the young man above mentioned size, which could be important on both sides of the fence.