
Last week, the Public Library of Science (PLoS) announced the release of its article-level usage data.

Repeating the often-expressed claim that “research articles should primarily be judged on their individual merits, rather than on the basis of the journal in which they happen to be published,” PLoS now includes a set of metrics with each article.

In addition to the number of downloads (broken down into HTML, PDF, and XML, and plotted in a cumulative graph), citations from Scopus, PubMed, and CrossRef are presented in real time.  Conspicuously absent are citations from ISI.

But PLoS doesn’t stop there. Each article also includes user ratings, reader comments, linked blog posts, and social bookmarking services like Connotea and CiteULike.
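To make that suite of metrics concrete, here is a minimal sketch of what a single article-level record might look like. The field names, values, and DOI below are hypothetical illustrations, not PLoS’s actual data schema.

```python
# Hypothetical shape of one article-level metrics record.
# Field names and values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ArticleMetrics:
    doi: str
    downloads: dict        # e.g. {"html": 0, "pdf": 0, "xml": 0}
    citations: dict        # e.g. {"scopus": 0, "pubmed": 0, "crossref": 0}
    ratings: int = 0
    comments: int = 0
    blog_posts: int = 0
    bookmarks: dict = field(default_factory=dict)  # e.g. {"citeulike": 0, "connotea": 0}


# Example record (placeholder DOI and made-up counts):
example = ArticleMetrics(
    doi="10.1371/journal.pone.0000000",
    downloads={"html": 1200, "pdf": 340, "xml": 15},
    citations={"scopus": 4, "pubmed": 3, "crossref": 5},
)
```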

Multiple Indicators of Quality

The main problem with indicators is that they lose their function soon after they become visible. For instance, the Impact Factor (an indicator of the average citation performance of a journal’s articles) has become synonymous with quality itself.  It has ceased to function as an objective indicator of quality as authors and editors modify their behavior to maximize their citation returns. Given that the Impact Factor is a proprietary tool, and that part of its calculation (the denominator) is kept closely guarded by ISI, it is understandable that PLoS has sought out multiple citation sources, relying on three instead of one.
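For readers unfamiliar with the calculation, here is a minimal sketch of the two-year Impact Factor formula. The numbers are illustrative, and the decision about which items count as “citable” in the denominator rests with ISI and is not public.

```python
# Sketch of the two-year Journal Impact Factor calculation.
# Illustrative numbers only; the real denominator (which items count
# as "citable") is determined by ISI and is not disclosed.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """IF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items published in
    Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years


# Example: 10,000 citations in 2009 to articles published in 2007-2008,
# divided by 2,500 citable items, gives an impact factor of 4.0.
print(impact_factor(10_000, 2_500))  # 4.0
```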

Greater Reliance on Usage Data

With the maturing of standards for counting article downloads, authors may be more willing to accept usage statistics as an indicator of quality (or at least popularity).  Preliminary results from the 2009 Peer Review Survey indicate that more authors are willing to see formal peer review replaced by usage statistics (15% of respondents in 2009, compared with only 5% in 2007).

A Preemptive Move?

A cynic might read PLoS’s move to provide article-level indicators as a preemptive one, coming just before PLoS ONE, a journal with editorial goals quite different from those of PLoS’s flagship titles, receives its first impact factor.  Understanding that authors are infatuated with journal impact factors, PLoS may be positioning itself to counter a low first score for PLoS ONE by emphasizing readership, bookmarking, and blogging data over citations.

Top-ranked Articles

PLoS has released comprehensive statistics in a downloadable spreadsheet.  A cursory glance at the data reveals a sea of zeros punctuated by small islands of low numbers for most of the social network metrics.  These figures are not surprising given the low level of reader commenting on PLoS papers and the low priority scientists give to professional blogging.  That PLoS released the data at all is laudable and fits with its general ethos of “increasing transparency.”
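As a rough illustration of that “sea of zeros” observation, the following sketch assumes the spreadsheet has been saved locally as a CSV; the file name and column names are hypothetical, not PLoS’s actual headers.

```python
# Sketch of the "sea of zeros" check, assuming the PLoS spreadsheet has
# been exported to CSV. File name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("plos_article_level_metrics.csv")

social_cols = ["comments", "ratings", "blog_posts", "bookmarks"]
for col in social_cols:
    share_zero = (df[col] == 0).mean()
    print(f"{col}: {share_zero:.0%} of articles have a count of zero")
```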

If you look at which articles have received the most citations in the PLoS suite of journals, they are the biomedical research articles. A closer look at what is most heavily downloaded, commented on, and blogged about, however, reveals a very different picture: those lists are dominated by non-research articles.

This certainly gives the impression that part of the focus of PLoS is on science publishing rather than publishing science.

It is clear that PLoS sees quality as a multi-dimensional construct, and it thus presents a collection of indicators in an attempt to paint a broader, more complex picture of article performance.  Whether this approach will win out in an environment that puts a premium on simple indices remains to be seen.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

8 Thoughts on "PLoS Releases Article-level Metrics"

The danger of ‘article usage’ stats is twofold: (1) they equate popularity with quality; and (2) by providing yet another metric for quick and easy assessment of academic output, they fuel rather than confront the real problem: the inappropriate use of such metrics.

Many academics lament the fact that search teams and administrators often don’t take the trouble to read articles when assessing (potential) faculty. Instead they simply look up the impact factors of the journals in which they publish. Deposing the Impact Factor tyrant and replacing it with another metric won’t necessarily make things any fairer.

Moreover, since ‘article usage’ is arguably even less comparable across academic disciplines, it could make the situation worse. If an oceanography paper and a molecular biology paper are both published in Science, one can at least make the reasonable assumption that they are both high-quality articles – on the basis of the company they keep in that particular journal. If both are published in PLoS ONE, and the molecular biology paper is downloaded ten times more frequently than the oceanography paper, does that mean it has more academic merit – or just that it’s a molecular biology paper?

I do like the idea of article metrics, but like Richard above, I have serious reservations about their use for ranking the impact of a paper. They do provide an interesting window for authors to see what’s going on with their paper, but I feel it’s far too easy to misinterpret what they mean. To Richard’s objections above, I’d add the following:

Trusting the wisdom of the crowd is inherently flawed because there is no crowd. As the linked article points out, most online participation is done by small groups of dedicated users. Often these groups have biases and agendas. If the quality of science is judged by readers giving a paper a ranking, then what’s to stop, as one example, a group of rabid creationists from marking down good evolution papers and repeatedly giving high rankings to creationist claptrap? This would result in good science being underfunded and religion receiving lots of grants. The example cited in your blog entry here is telling as well: PLoS draws heavily on a readership that is very interested in science publishing and new models for information sharing. Those interests may not be shared by the majority of scientists who spend less time reading PLoS journals, but you wouldn’t know that from looking at these metrics.

The other issue is the ease with which such rankings are gamed. Everyone knows that online reviews are heavily infested with fake comments from authors on their own books and business owners touting their own products. If a scientist’s grant funding is going to be influenced by the rankings and download numbers of his papers, you can bet that he will make it his job to spend all available time downloading his own papers and giving them stellar reviews.

David and Richard

We completely agree that usage statistics need to be treated with caution and have been at pains to point this out in our various informational pages (as well as listing various caveats in their use) – see for example tab 2 of http://article-level-metrics.plos.org/ and also http://www.plosone.org/static/usageData.action

We also make the point in several places that usage statistics need to be treated as indicators of trends, rather than absolute measures to be applied to a single article.

It is also worth noting, of course, that article-level metrics at PLoS are far more than just usage – they include citations, blog coverage, comments, notes, ratings, social bookmarks etc – all of which can provide readers with extra information and context as related to a particular article. Of course, though, usage data are a very visible addition to this suite of metrics.

At the end of the day, in our opinion openly providing the usage data (data which could be routinely provided by any publisher) is preferable to keeping it hidden. By making these data available, we are adding one more measure into the mix and allowing the community to make its own decisions as to how to use it. And now that we have started to provide usage data, it will be possible for the community to engage in meaningful debate (and research) into its significance.

“Don’t blame the metric, blame the interpretation” seems to be ETS-like sophistry. If one creates, and promotes, a new metric such as article impacts (and, let’s be clear, this isn’t being advocated by PLoS as a type of “top tracks” ephemeral curiosity), then you need to take responsibility for the potential outcomes of its use.

Interesting too that this type of schema could reward notoriety over integrity: I would bet the next “cloned cow” paper will get lots of traffic/links etc…but may not be an example of sterling science.

[Editor’s note: ETS = “Educational Testing Service”]

Brandon: “If one creates, and promotes, a new metric such as article impacts then you need to take responsibility for the potential outcomes of its use.”

You seem to be missing the point that PLoS is not creating anything here (unlike ETS). Rather, they are just exposing factual data that traditional publishers have been unable or unwilling to share.

Of course the facts can be misused and misinterpreted, as PLoS openly acknowledges.

As persons involved in science publishing, we should not argue for the suppression of factual information. The opposite of the flawed ‘wisdom of the crowd’ is not wisdom.

What are your feelings on the cumulative chart, as opposed to a monthly comparison where you could see peaks and valleys?

With a cumulative chart there will never be a decline, which authors may enjoy, but they will also need to do visual subtraction to see the differences in monthly usage.

Shouldn’t this be treated more like an analytic package?

Is it more useful to authors to see the effect of news stories, blog posts, etc. on their article, rather than a running total plotted on a graph?
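[Editor’s note: the “visual subtraction” mentioned above is straightforward to do programmatically. A minimal sketch with made-up cumulative totals:]

```python
# Sketch of the "visual subtraction": recovering monthly usage from
# cumulative download totals. The numbers here are made up.
cumulative = [120, 310, 480, 495, 900, 960]  # running totals by month

monthly = [cumulative[0]] + [
    later - earlier for earlier, later in zip(cumulative, cumulative[1:])
]
print(monthly)  # [120, 190, 170, 15, 405, 60] -- the peaks and valleys reappear
```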
