Last week, the Public Library of Science (PLoS) announced the release of its article-level usage data.
Repeating the often-expressed claim that “research articles should primarily be judged on their individual merits, rather than on the basis of the journal in which they happen to be published,” each PLoS article now includes its own metrics.
In addition to the number of downloads (broken down into HTML, PDF, and XML, and plotted in a cumulative graph — see example), citations from Scopus, PubMed, and CrossRef are presented in real time. Conspicuously absent are citations from ISI.
Multiple Indicators of Quality
The main problem with indicators is that they lose their function soon after they become visible. For instance, the Impact Factor — an indicator of the average quality of an article as measured by citations — has become synonymous with quality itself. It has ceased to function as an objective indicator of quality as authors and editors modify their behavior to maximize their citation return. Given that the Impact Factor is a proprietary tool and that part of its calculation (the denominator) is closely guarded by ISI, it is understandable why PLoS has sought out the participation of other citation indicators, relying on three instead of one.
Greater Reliance on Usage Data
With the maturing of standards for counting article downloads, authors may be more willing to accept usage statistics as an indicator of quality (or at least popularity). Preliminary results from the 2009 Peer Review Survey indicate that more authors are willing to see formal peer review replaced by usage statistics (15% of respondents in 2009, compared to only 5% in 2007).
A Preemptive Move?
A cynic may read PLoS’s provision of article-level indicators as a preemptive move ahead of PLoS ONE — a journal with different editorial goals than its flagship journals — receiving its first Impact Factor score. Understanding that authors are infatuated with journal impact factors, PLoS may be positioning itself to counter a first low score for PLoS ONE by emphasizing readership, bookmarking, and blogging data over citations.
PLoS has released comprehensive statistics in a downloadable spreadsheet. A cursory glance at the data reveals a sea of zeros punctuated by small islands of small numbers for most of the social network metrics. These figures are not surprising given the low level of reader comments on PLoS papers and the low priority scientists give to professional blogging. That PLoS released the data at all is laudable and fits with their general ethos of “increasing transparency.”
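The “sea of zeros” impression can be checked directly against the spreadsheet once it is exported to CSV. A minimal sketch — the column names (`Comments`, `Blog Coverage`, `Bookmarks`) and sample rows below are hypothetical placeholders, not PLoS’s actual headers:

```python
import csv
import io

# Hypothetical sample mimicking a CSV export of the PLoS spreadsheet;
# the column names and values are illustrative only.
sample = """DOI,Citations,Comments,Blog Coverage,Bookmarks
10.1371/example.0001,12,0,0,1
10.1371/example.0002,3,0,0,0
10.1371/example.0003,0,0,1,0
"""

def zero_fraction(csv_text, metric_columns):
    """Return the share of articles with a zero value in each metric column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {
        col: sum(int(row[col]) == 0 for row in rows) / len(rows)
        for col in metric_columns
    }

# A column where most articles score zero supports the "sea of zeros" reading.
print(zero_fraction(sample, ["Comments", "Blog Coverage", "Bookmarks"]))
```

With the real file, a zero fraction near 1.0 for a social metric would confirm how sparse that signal is.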
A look at what has received the most citations in the PLoS suite of journals shows that they are the biomedical research articles. A closer look at what is heavily downloaded, commented on, and blogged about, however, reveals a very different picture. These lists include non-research articles such as:
- The Impact Factor Game (by the Editors of PLoS Medicine)
- Why Most Published Research Findings Are False
- Open Access: Taking Full Advantage of the Content
- Ten Simple Rules for Getting Published (by the editor-in-chief of PLoS Computational Biology)
- Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies (by Richard Smith, a member of the PLoS Board of Directors)
- Why Current Publication Practices May Distort Science
This certainly gives the impression that part of the focus of PLoS is on science publishing rather than publishing science.
It is clear that PLoS sees quality as a multi-dimensional construct, and thus presents a collection of indicators in an attempt to paint a broader, more complex picture of article performance. Whether this approach will win in an environment that puts emphasis on simple indices remains to be seen.