h-index (Hirsch) (Photo credit: Wikipedia)

It’s no secret that Google’s PageRank algorithm is basically the familiar journal citation approach blown out mathematically and practically to achieve the real-time network effect. Oh, how powerful it is! Now, Google is going a bit more old school, ratcheting its engine back to dabble in math of a different kind, this time the math of Jorge Hirsch, whose h-index is slowly becoming an alternative to the impact factor.

While the h-index was originally designed to measure an individual scientist’s impact, its attempt to condense articles and citations into a single number can be applied to any set of articles. Google is calling its creation of five-year h-indices “Scholar Metrics.” The h-index is the largest number h for which h articles in the set have each received at least h citations. In applying the h-index to journals, Google is creating a set of very interesting tensions.
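A minimal sketch of the calculation, assuming nothing more than a hypothetical list of per-article citation counts (this is not Google’s implementation):

```python
# Minimal sketch: computing an h-index for a set of articles.
# `citation_counts` is a hypothetical list of per-article citation totals;
# the h-index is the largest h such that h articles have at least h citations each.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five articles cited 10, 8, 5, 4, and 3 times -> h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```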

One of the main advantages of Google Scholar is that it is more comprehensive in its scope than Thomson Reuters’ Web of Science. At least, let’s start with that assumption — that more is better. Using Google Scholar, you get a ranking like that shown in the figure below. Nature is the top-ranked journal, followed by NEJM, Science . . . and RePEc? That stopped me. Turns out that’s Research Papers in Economics, a metadata hub for economics papers — or, as it describes itself, “a decentralized bibliographic database of working papers, journal articles, books, books chapters and software components, all maintained by volunteers.” Publishers participate by depositing metadata, and RePEc provides links. RePEc has its own rankings of the papers listed there, but getting to a paper is no easy feat.

So is more better? RePEc is a metadata hub listed among journals. It’s not indexed by Thomson Reuters for good reason — RePEc produces no new scientific information itself. Yet, it has an h-index in Scholar Metrics. Strange.

Other entries raise doubts about the value of the wider selection available in Google Scholar used to generate Scholar Metrics — arXiv.org and SSRN both publish interesting preliminary papers, but do they belong on this list? Conferences can also make the list.

Now, some may argue that adding these “gray literature” elements is a good thing, but it’s a highly selective set of the gray literature — I know newsletters, blogs, and monograph series that could be included but aren’t indexed in Google Scholar because they don’t have an academic institute or journal brand behind them. While Google Scholar is broader, it’s also idiosyncratic.

Sample Comparisons Between Journal h-Index and Impact Factors

An interesting 2011 paper in Research on Social Work Practice entitled “Evaluating Journal Quality: Is the H-Index a Better Measure Than Impact Factors?” compared the h-indices for social science journals with five-year impact factors and found a high correlation. However, faculty ratings of empirical quality correlated better with h-index values, something the authors attribute to their field’s applied research culture — that is, useful clinical research aimed at practitioners (who rarely cite) may be cited less often yet build a stronger reputation, something citations in and of themselves may not capture completely.
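For illustration only, a rank correlation of the kind the study reports could be computed along these lines; the journal figures below are invented for demonstration and are not the paper’s data:

```python
# Illustrative sketch: comparing journal h-indices with five-year impact factors
# using a rank correlation. The numbers are made up, not taken from the study.
from scipy.stats import spearmanr

h_indices = [42, 35, 28, 19, 15, 12]
impact_5yr = [5.1, 4.2, 3.0, 2.1, 1.8, 1.2]

rho, p_value = spearmanr(h_indices, impact_5yr)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```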

One blogger believes there are no downsides to the Scholar Metrics approach:

But who needs to continue to list Scholar Metrics’ virtues? It’s free and it’s Google’s. Bottom line.

It’s free! It’s Google!

OK, let’s settle down. Pitfalls exist. For instance, is a “citation” in the Google Scholar index really a citation in the traditional sense? We’re all well aware of the baggage a citation can carry — most are straightforward, but many are not, and they can range from damning to fraudulent. Which gets us back to one of the main questions — are more citations, which Google Scholar must possess, better? Or are they merely adding noise? Thomson Reuters analyzes self-citation, and punishes journals that generate an appreciable percentage of their impact factor through self-citation. Yet, looking at the arXiv.org citation data in Google Scholar, it’s clear that most of the citations are self-citations from one arXiv.org paper to another. The same seems to hold true for SSRN, although there are more citations from outside.
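To make the self-citation point concrete, here is a rough sketch — not Thomson Reuters’ or Google’s actual method — of measuring what share of a venue’s incoming citations come from the venue itself, using invented data:

```python
# Rough sketch: venue-level self-citation rate, i.e. the fraction of citations a
# venue receives that come from papers in the same venue. `citations` is a
# hypothetical list of (citing_venue, cited_venue) pairs.

def venue_self_citation_rate(citations, venue):
    received = [c for c in citations if c[1] == venue]
    if not received:
        return 0.0
    self_cites = [c for c in received if c[0] == venue]
    return len(self_cites) / len(received)

citations = [
    ("arXiv", "arXiv"),
    ("arXiv", "arXiv"),
    ("Nature", "arXiv"),
    ("arXiv", "Nature"),
]
print(venue_self_citation_rate(citations, "arXiv"))  # 2 of 3 citations received are self-citations
```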

There are other concerns with Google’s approach. For one, only journals meeting Google’s inclusion criteria can participate in Scholar Metrics. These inclusion criteria are largely technological in nature, and can change with the wave of an engineer’s hand, as happened to multiple journals last summer. Untangling a new Google edict can take months, during which time it seems a journal would be delisted from Scholar Metrics. While being delisted from Web of Science is usually a measure of some sort of malfeasance, being delisted from Scholar Metrics could be due to some misapplied headers or a misconfigured robots.txt file.

Google’s Scholar Metrics are a nice start to something that could be honed into a useful, free tool. But as they stand now, they are just a start. More human judgment must be brought to bear, either through better engineering, actual human curation, or a mixture of the two.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

18 Thoughts on “Google’s New ‘Scholar Metrics’ Have Potential, But Also Prove Problematic”

Google’s Scholar Metrics for journals is interesting and has potential, but as you point out, there are some problems here. Another thing that irks me is that you can’t (yet?) really find a ranking for journals in a specific category. At least I don’t see any way to. Sure, I can do a search for “management,” but all that does is return a list of journals that use “management” in their title. A couple of the journals we publish are high on this list. The problem is that these titles should also appear when I search for “business” journals, but they don’t. If Google really wants to offer something that competes with the IF, they’ll need to put more thought and effort into this. I hope they do. In my view, having more valuable metrics and tools available is good for everyone.

I’m not sure how far I trust Google these days. To contrast them with Thomson-Reuters and the Impact Factor, Thomson-Reuters is selling me the metric. Google is using their metric as a way to sell advertisements. Selling ads is at the root of everything Google does. As such, Thomson-Reuters’ success depends on the quality of the metric and data directly, whereas Google’s success depends on making the property attractive to advertisers.

Google has already shown a propensity toward altering their ranking systems for business purposes, both allegedly increasing rank for those buying ads and flagrantly subverting the accuracy of results provided in order to promote new ventures like Google+.

I suppose it gets back to that adage about how if you’re not paying for it, you’re not the customer. And running a business built on selling advertisements creates a different set of pressures than running a business based on selling access to accurate data.

I don’t think it’s fair to say that selling ads is at the root of everything Google does. I use Google Scholar Citations to keep track of my articles and who is referencing my articles. It’s a very valid and useful service for me as a scientist. I see no ads on the Citations page and no ads when I expand to details of the articles. I see no ads when I search on Google Scholar either. Also, I don’t see ads when I search on Google Patents. These are all very valuable services. Maybe I have an excellent ad blocker and they are just blocked…

The key phrase to add to your comment is, “for now”. This is the problem with free services. Google is a for-profit company, not a charity. There are things they do that currently don’t bring in revenue, but these can’t be expected to last forever. At some point the funding runs out, the investors get tired of waiting, the shareholders demand return on investment. This is inevitable, as giving things away and getting nothing back is a terrible business model.

So the question is whether the service will just disappear (as so many of Google’s efforts that failed to create revenue have disappeared) or how it will change when it starts needing to pay its way. Twitter is changing, putting promoted tweets from advertisers into your stream. The way Facebook handles your privacy has changed and is continuing to change. As noted, Google’s search engine now prioritizes promoting Google+ and advertisers over providing accurate answers. Google Scholar is under the same inevitable pressure.

Does anyone have any idea how often Google updates the data used for these rankings?

According to http://scholar.google.com/intl/en/scholar/metrics.html, “Scholar Metrics are based on our index as it was on April 1st, 2012. For ease of comparison, they are NOT updated as the Scholar index is updated.” The several references to April 1st made me wonder whether it was an April Fool’s joke, but it seems not to be (I’ve corresponded with them).

Using the h-index to rank journals seems absolutely crazy to me, and it gives results that simply do not line up with what I’d consider high quality. The problem is that some journals are enormously larger than others, which gives them far more chances to get highly cited papers even if their average is very low.

For example, Journal of the American Mathematical Society is among the top few journals in mathematics. Google gives it a five-year h-index of 39, the same as Computers & Mathematics with Applications, which is a much lower-ranked journal (best known for an embarrassing retraction of a crackpot paper). This is not a reasonable comparison, since JAMS published 173 papers in that time period, while C&MWA published 2956. So 22.5% of the JAMS papers got at least 39 citations already, while only 1.3% of the C&MWA papers did, a factor of more than 17 lower. If a naive student uses the h-index to decide where to submit a paper, or a dean uses it to evaluate a publication list, then they will be seriously misled regarding the quality of these journals. A typical C&MWA paper is by no means comparable to a typical JAMS paper.
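A quick back-of-the-envelope check of those figures (as quoted above, not independently verified):

```python
# What fraction of each journal's papers cleared the shared h-index threshold of 39 citations?

def share_above_threshold(h, papers_published):
    return h / papers_published

for journal, papers in [("JAMS", 173), ("C&MWA", 2956)]:
    print(f"{journal}: {share_above_threshold(39, papers):.1%} of papers have >= 39 citations")
# JAMS: 22.5%, C&MWA: 1.3% -- a ratio of roughly 17x
```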

Another example is PLoS Biology vs. PLoS ONE. PLoS Biology claims to publish papers of “exceptional significance,” while PLoS ONE publishes anything methodologically sound, but PLoS ONE has a higher h-index (100 vs. 92). However, it publishes about 20 times as many papers, so this is once again a misleading comparison for anyone who cares about the typical papers in these journals.

The impact factor is a mediocre metric, with limited utility and disturbing susceptibility to manipulation, but the h-index is just ridiculous. It bothers me that Google is in some small way endorsing it.

They could be doing something far more valuable. Condensing citation information down to a single number is sure to lose valuable information. What we care about is the whole distribution – what fraction of the papers published in a given time range have achieved a certain number of citations? It would be great if Google would supply a journal summary page giving a graphical display of this information and a UI to look into it more deeply. Unlike h-indices, that could actually be illuminating. They have a data set of citations, so it’s just a matter of building the front end.
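As a sketch of what such a summary might look like, here is one way to plot the full citation distribution for a single journal; the citation counts below are invented for illustration:

```python
# Sketch: show the whole citation distribution rather than a single number.
# `citation_counts` is a hypothetical list of per-paper citation totals for one
# journal over a time window.
import matplotlib.pyplot as plt

citation_counts = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 40, 75]

# Empirical "survival" curve: what fraction of papers reached at least c citations?
thresholds = range(0, max(citation_counts) + 1)
fractions = [sum(1 for x in citation_counts if x >= c) / len(citation_counts)
             for c in thresholds]

plt.step(thresholds, fractions, where="post")
plt.xlabel("Citations (c)")
plt.ylabel("Fraction of papers with >= c citations")
plt.title("Citation distribution for a hypothetical journal")
plt.show()
```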

Something to look at that is attempting to do what you’re asking is Total Impact (http://total-impact.org/). I’ve used it a couple of times “for fun” just to see how I was doing… I don’t know if it will ever get “critical mass,” but it’s an interesting experiment in measuring things other than formal citation.

I’m curious how they handle citations to articles in repositories like arXiv, since many preprints which are posted in arXiv are later published by journals as well. For example, according to Google Scholar, the top-cited paper in arXiv (with 5606 citations) is this:

http://arxiv.org/abs/cond-mat/0702595

But that paper was also published in Nature. So is Google only counting citations to the arXiv preprint version, or are they counting any citations to this paper, either in arXiv or in Nature? I’d assume the former, but some spot checking seems to show that they’re counting citations to the paper in Nature as well. In fact, that paper doesn’t even seem to show up in the list of 295 papers used in calculating the h-index for Nature, and with over 5000 citations, it should be at the top of the list. So does the fact that the paper was posted in arXiv mean that it’s only counted for arXiv’s h-index and not for Nature’s? That seems like it could skew the rankings quite a bit for journals which regularly have articles posted in arXiv prior to publication (e.g. Reviews of Modern Physics, which is #3 by impact factor and not in the top 100 by h-index).

In any case, it seems like including repositories opens the door to some confusing situations, which Google at least doesn’t address on the “about” page for Scholar Metrics. I’d be interested to see them post more about how they’re handling this sort of thing, since it’s pretty important for the accuracy of the rankings.

Mat, I had similar questions when I investigated Scholar Metrics. I searched in Google Scholar for a couple articles published in my company’s journals, and I found more than one instance where our article was conflated either with the prepress version in arXiv, or with another article by the same author, with a similar title, published in a different journal. It makes me wonder if every time one of those articles is cited, Scholar Metrics counts it as a citation of the version in arXiv, rather than counting it towards our journal’s h-index.

Similarly, Scholar Metrics only ranks journals that have published 100 or more articles in a five-year period. One of our journals is nowhere to be found in Scholar Metrics, even though it has published 110 articles in that time period. A search in Google Scholar for the journal’s title turns up 70-odd articles. But when I did a search for the abbreviated title of the journal in Google Scholar, I found the missing 30-something articles. So is Google Scholar Metrics smart enough to combine the citations attributed to the journal’s full title and its abbreviated title? I’m not sure.
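For what it’s worth, combining full and abbreviated titles is a solvable matching problem. Here is a purely hypothetical sketch — the abbreviation table is invented, and nothing is known about how (or whether) Google Scholar Metrics actually does this:

```python
# Hypothetical sketch of normalizing a journal's abbreviated title so records filed
# under it can be matched against the full title. The abbreviation map is invented.

ABBREVIATIONS = {
    "j.": "journal",
    "amer.": "american",
    "math.": "mathematical",
    "soc.": "society",
}

def normalize_title(title):
    words = title.lower().replace(",", "").split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

full = "Journal of the American Mathematical Society"
abbrev = "J. Amer. Math. Soc."
# "of" and "the" are dropped in the abbreviation, so compare word sets instead
print(set(normalize_title(abbrev).split()) <= set(normalize_title(full).split()))  # True
```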

Actually, cond-mat/0702595 was published in Nature Materials, rather than Nature, and Google Scholar Metrics correctly includes it in the h-index for Nature Materials. I wouldn’t be surprised if they have trouble with this issue (it’s a tricky thing to sort out automatically and reliably), but this particular case seems OK.

Wow, yeah, my mistake there. But then that still gets at one of my points above: citations to this article in Nature Materials also count towards the h-index (and therefore, the “Top Publications” ranking) of arXiv. As you say, that can be a tricky thing to maintain; hopefully they’re devoting some significant effort to their authority control to make that consistent.
