Malcolm Gladwell surfaces infrequently, but when he does, his analyses often change our perceptions of issues we think are innocuous but that turn out to be subtly powerful.

In the most recent issue of the New Yorker, Gladwell takes on the U.S. News & World Report’s college rankings, an ordered listing of universities that has evolved from a utilitarian service offered by a third-rate news weekly into the cornerstone of a rankings business that has outlasted its initial host.

The U.S. News rankings use seven weighted variables to derive a single composite score for each academic institution (a sketch of the arithmetic follows the list):

  1. Undergraduate academic reputation — 22.5%
  2. Graduation and freshman retention rates — 20%
  3. Faculty resources — 20%
  4. Student selectivity — 15%
  5. Financial resources — 10%
  6. Graduation rate performance — 7.5%
  7. Alumni giving — 5%
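
To make the arithmetic concrete, here is a minimal sketch (in Python) of how a weighted composite like this can be computed. The weights come from the list above; the variable names and the sample school’s values are hypothetical.

```python
# Minimal sketch of a weighted composite score like the one U.S. News
# computes. Weights are from the list above; the sample school's
# per-variable values (on a 0-100 scale) are hypothetical.

WEIGHTS = {
    "academic_reputation": 0.225,
    "graduation_and_retention": 0.200,
    "faculty_resources": 0.200,
    "student_selectivity": 0.150,
    "financial_resources": 0.100,
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.050,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%

def composite_score(values):
    """Collapse seven qualitatively different measures into one number."""
    return sum(WEIGHTS[name] * values[name] for name in WEIGHTS)

# A hypothetical school: strong reputation, weak alumni giving.
school = {
    "academic_reputation": 90.0,
    "graduation_and_retention": 85.0,
    "faculty_resources": 70.0,
    "student_selectivity": 80.0,
    "financial_resources": 60.0,
    "graduation_rate_performance": 75.0,
    "alumni_giving": 40.0,
}
print(round(composite_score(school), 1))  # 76.9: one number, much nuance lost
```

Nudge any weight and the ordering of schools can change, which is precisely the arbitrariness Gladwell objects to.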

The problem Gladwell describes with the U.S. News rankings (and with other rankings, like Car and Driver auto rankings) is that they try to make the incomparable comparable — they’re comprehensive and attempt to apply the same measurements to qualitatively different entities. In short, they’re heterogeneous.

U.S. News is pretty cagey in deploying its rankings, cautioning parents and incoming freshmen to look at all the available data and not to rely on the rankings, while weaving in text like “These results also serve as a validation of the U.S. News Best Colleges rankings methodology that weights undergraduate academic reputation at 22.5 percent.”

Other problems exist even deeper in the U.S. News approach. For example, one-fifth of the score is based on the category “faculty resources.” What is this, exactly? The goal is to measure “engagement,” but the rankings have to rely on proxies to estimate it. According to U.S. News, six factors determine engagement: two class-size components (one for big classes, one for small), faculty salary, faculty degree attainment, the student-faculty ratio, and the proportion of the faculty who are full-time.

But why do some of these matter? Does it matter to quality whether my philosophy professor has an ABD or a PhD or an MA? And does a more highly paid professor equate to a more engaged professor? Unless things have really changed since I went to college, there’s almost an inverse relationship there.

In fact, wealth seems to be driving the U.S. News rankings, and in a particular passage, Gladwell loses his cool:

Rankings are not benign. They enshrine very particular ideologies, and, at a time when American higher education is facing a crisis of accessibility and affordability, we have adopted a de-facto standard of college quality that is uninterested in both of those factors.

Gladwell’s article is well worth reading (Penn State will suddenly appear to be a most admirable university). It touches on rankings of all types — hospitals, law schools, business schools, and so forth. U.S. News has turned this into a real industry, but a flimsy one.

Ultimately, this is all another reminder of how odd it is that highly educated and educationally ambitious people seem to seek clarity through numbers that, when you pull back the veil, are very poor proxies of quality, predictors of value, or estimates of differentiation.

Yes, I’m looking at you, impact factor.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

15 Thoughts on "Gladwell Tackles College Rankings: The Perils of Comprehensive Heterogeneous Systems"

While I’m not fond of what the U.S. News rankings have done to higher education, they do treat quality as multi-dimensional — as opposed to the Impact Factor, which bases quality entirely on one measurement: the citation.

What is missing from the U.S. News rankings is the price of education. Are parents really getting 5 to 10 times the value by sending a child to a private college rather than a state college? My sense is that they’re not.

Also, most of the dimensions it measures stem from wealth, so there’s a real Matthew Effect in the rankings. Gladwell’s point is that the justification for what they measure is pretty flimsy. I’d argue that it’s not redeemed by having multiple flimsy dimensions.

Gladwell’s argument is that we need multi-dimensionality if we are to compare heterogeneous things, like automobiles or universities. His issue with the U.S. News editors is what they select as indicators of quality and how much weight is given to them. Change the indicators or their weights and you get different rankings.

Yes, I’m looking at you, impact factor.

Given Gladwell’s argument, I don’t think he would take issue with the Impact Factor. It is uni-dimensional (although time defines the period of observation). If a 2-year window is insufficient, use 5 or 10. If you have an issue with giving every citation equal weight, then weight them accordingly. If you have issues altogether with using citations as an indicator of quality, simply select another one (like downloads, blog posts, 5-star rankings, etc.). Others have.
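
To make the window point concrete, here is a minimal sketch (in Python) of the classic two-year Impact Factor calculation with the citation window as a parameter; the journal’s per-year figures are hypothetical.

```python
# Minimal sketch of the Impact Factor with the citation window as a
# parameter. window=2 is the classic ISI definition; the per-year
# figures below are hypothetical.

def impact_factor(cites_in_year_to, items_published, year, window=2):
    """Citations made in `year` to items published in the previous
    `window` years, divided by citable items published in those years."""
    prior = range(year - window, year)
    cites = sum(cites_in_year_to.get(y, 0) for y in prior)
    items = sum(items_published.get(y, 0) for y in prior)
    return cites / items if items else 0.0

# Citations made in 2011 to items published in each earlier year,
# and the number of citable items published in each year.
cites = {2006: 150, 2007: 180, 2008: 210, 2009: 240, 2010: 200}
items = {2006: 100, 2007: 100, 2008: 110, 2009: 120, 2010: 115}

print(round(impact_factor(cites, items, 2011), 2))            # 2-year: 1.87
print(round(impact_factor(cites, items, 2011, window=5), 2))  # 5-year: 1.8
```

Widening the window, or swapping citations for downloads, changes the number but not its one-dimensional character.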

ISI has been clear on what they measure and how they calculate these scores. Like university officials, scientists may simply put too much weight on a single indicator of quality.

Kent, why stop at impact factor? Why not go after the twin villains of GPA and class rank? When do we stop counting and start thinking?

Yes, good point. Credit scores and P/E ratios, too? We keep wanting the illusion of quantification when in fact we’re just painting on top of qualitative judgments in many cases.

Well, what do you think about the article-ranking tool on PLoS One? It is sort of like Netflix: a reader can give an article up to 5 stars for insight, reliability, and style. Hardly anybody does; maybe it is just too complex.

Are you suggesting that we adopt a ranking system where college students, graduates and others rank universities like Netflix films?

The mention of Penn State has me intrigued (I worked there for 20 years, and several of my children attended it). Since this is a TA-restricted article, I could only read the abstract, where the bias of the ranking toward “Yale-ness” and against “Penn State-ness” is noted. As a Princeton grad, I would point out another major omission, one I stress when interviewing applicants to Princeton: Princeton has always placed great emphasis on undergraduate education, and every professor, no matter how senior or exalted, teaches undergraduates. That is not true of many top research universities. This kind of nuance is lost in the U.S. News ranking.
