Have you seen the movie “Moneyball” or read the book by Michael Lewis? The story is about how the Oakland A’s baseball club managed to put together winning seasons despite being one of the poorest teams in professional baseball. “Moneyball” refers to the strategy for identifying players that the Oakland A’s general manager, Billy Beane, used.
What does Moneyball have to do with college admissions?
Just give me a few more minutes. In his book, Michael Lewis goes into more depth about the origins of Moneyball and Bill James, author of the original Baseball Abstract. James was interested in looking at what value individual players brought to the game in an era of ever-increasing salaries. After all, baseball seems to have statistics on just about everything, so it shouldn’t be that difficult to figure out exactly how good players are, right?
If you’ve seen the movie, you know that wasn’t, and in some cases still isn’t, the case. The amount of money baseball players were being paid was based more on perception than on actual results. The following quote from the book explains the relationship to college admissions and why you should read the book as well:
There was but one question he left unasked, and it vibrated between his lines: if gross miscalculations of a person’s value could occur on a baseball field, before a live audience of thirty thousand, and a television audience of millions more, what did that say about the measurement of performance in other lines of work? If professional baseball players could be over- or under-valued, who couldn’t? Bad as they may have been, the statistics used to evaluate baseball players were probably far more accurate than anything used to measure the value of people who didn’t play baseball for a living.*
Right about now, maybe you’re thinking, “Oh yeah, college admissions has that covered with the US News College Rankings.”
Not quite. Give US News credit for adding more outcome-oriented data to its rankings formula over the years. Yet the rankings still rely heavily on “inputs.” In baseball, the scouts were looking at batting stance and swing rather than at the number of times a player actually got on base. The college rankings do the same with test scores and faculty salaries.
Furthermore, the US News rankings still include academic reputation, which accounts for a hefty 22.5% of the National Universities rankings. Inside Higher Ed reported on a recent study that looked at what it would take for the University of Rochester to move up into the top 20 in the National Universities rankings. The study found that “even massive expenditures year after year and huge leaps in student quality and graduation would not be enough. The reputation score as judged by its peers would need to increase from 3.4 to 4.2 on a scale of 5, something that has only a .01 percent chance of happening, the paper said.”
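To make the arithmetic concrete, here is a toy sketch of why that reputation component is so hard to overcome. Only the 22.5% weight comes from the reporting above; the scores and the single “everything else” bucket are hypothetical simplifications, not the actual US News formula. The catch is headroom: schools near the top already score well on the other 77.5%, so there is little left to gain there.

```python
# Toy model: composite = 22.5% peer reputation + 77.5% everything else.
# Only the 22.5% weight is from the reported rankings formula; all other
# numbers are hypothetical, and this is NOT the real US News formula.

REPUTATION_WEIGHT = 0.225
OTHER_WEIGHT = 1.0 - REPUTATION_WEIGHT

def composite(reputation, other):
    """Blend a 0-5 reputation score with a 0-5 score for everything else."""
    return REPUTATION_WEIGHT * reputation + OTHER_WEIGHT * other

# A school stuck at a 3.4 reputation, even with *perfect* marks on
# everything else...
challenger = composite(reputation=3.4, other=5.0)

# ...still trails a rival with a 4.2 reputation and merely strong marks,
# because top schools leave almost no headroom on the other 77.5%.
incumbent = composite(reputation=4.2, other=4.8)

print(f"3.4 reputation, perfect elsewhere: {challenger:.3f}")  # 4.640
print(f"4.2 reputation, strong elsewhere:  {incumbent:.3f}")   # 4.665
```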
Basically, the rankings have simply factored in “perception” so that the results don’t stray too far from most people’s preconceived notions about which schools are the “best.”
But don’t put all the blame on US News. They’re interested in selling their rankings and probably have good reason to believe that if they took out academic reputation, the results would appear “different” enough for readers to question their validity. In other words, the results wouldn’t confirm readers’ expectations of which schools are the “best.”
And the colleges, for all of their outrage over the “beauty contest” nature of the rankings, aren’t blameless either. According to Andrew Ferguson in Crazy U, US News has asked schools participating in the National Survey of Student Engagement (NSSE) to make their results public so that the results can be included in the rankings, where the “effect might be revolutionary. Which is probably why all but a handful of college presidents have decided to keep the NSSE results secret.”
Ferguson summarizes the situation as follows:
For twenty years they have criticized the U.S. News rankings for lacking precision and authority–for obsessing about inputs when outcomes are what really matter–even as they sit on the outcomes data that might make the rankings more authoritative and precise.*
So is a Moneyball strategy even possible in college admissions? I think so; that’s why I created my spreadsheet.
I’ve also decided that I’m going to start identifying potential Moneyball colleges on my 50-50 college list. After all, just because a college accepts at least 50% of students and has at least a 50% graduation rate doesn’t mean that it’s a great value. I’m going to spend some time digging into the data, but I’ll provide updates as the project progresses; a rough sketch of the basic 50-50 filter appears below. In the meantime, you can expect a few more posts discussing possible Moneyball indicators for college admissions.
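For readers who like to tinker, here is a minimal sketch of that 50-50 filter. The file name and column names are hypothetical, assuming a simple CSV with one row per college and rates expressed as decimals; it’s a starting point, not my full spreadsheet:

```python
import csv

def fifty_fifty(path="colleges.csv"):
    """Yield colleges that accept at least 50% of applicants and
    graduate at least 50% of their students."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            acceptance = float(row["acceptance_rate"])  # e.g. 0.62
            graduation = float(row["graduation_rate"])  # e.g. 0.55
            if acceptance >= 0.50 and graduation >= 0.50:
                yield row["name"]

for college in fifty_fifty():
    print(college)
```

Of course, the filter only narrows the field; spotting the undervalued colleges within that list is the harder, data-digging step.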
*Moneyball, p. 72; Crazy U, p. 49
(I am an affiliate for Amazon, so I do get a percentage if you buy a book through my site. I have to admit, I was really just trying to get the image for the book.)
Interesting information on more of what goes into the college rankings. I think I read about the NSSE in Colleges that Make a Difference and I wondered why the results are kept secret. The answer is enlightening and frustrating!
To be fair, the NSSE was designed so that colleges would use the data for self-evaluation. They are being encouraged to survey their students to identify areas to improve. Including the information in rankings is kind of like asking for help with a problem and then getting punished for asking. I do find it interesting which schools choose to participate and, of those that do, which publish their data.