Editor’s Note: the following is a guest post by Simon Wakeling, Stephen Pinfield and Peter Willett. Simon is a Lecturer in the School of Information Studies at Charles Sturt University, and Stephen and Peter are Professors at the University of Sheffield’s Information School (where Simon was also based, until his move to CSU). In collaboration with colleagues at Loughborough University they recently completed a large scale AHRC funded research project investigating open-access mega-journals.
The internet is awash with commentary. Those of us who read news services, or watch YouTube videos, or indeed engage with any form of content published online, know that scrolling down will likely reveal comments left by other consumers of that same content — sometimes only a handful, but often hundreds or even thousands. This phenomenon is the subject of much debate, particularly concerning the tone of discourse and the dangers of its manipulation. One thing palpably not in question is the sheer volume of commenting online.
Ask scholarly publishers about the rates of commenting on the articles they publish, however, and a very different picture emerges. Commenting functionality, which allows readers of an online article to add a comment relating to that article, visible to future readers, has been a feature of online academic publishing since its earliest days. While comparisons with the massive volume of comments on popular news and media sharing sites are obviously imperfect, there is nonetheless a widespread perception that article commenting has failed to embed itself in academic culture.
Why is this significant? Article commenting is most relevant to two related aspects of today’s scholarly publishing environment. The first is the concept of post-publication peer review (PPPR). In her Scholarly Kitchen blog on commenting in 2017, Angela Cochran argued that “post-publication peer review = online commenting”. While this might not necessarily be true for every version of PPPR (F1000 Research’s process, for example, involves invited reviewers submitting an open review after initial publication of the manuscript), it is true that some proponents of PPPR envision a model wherein the community of readers provide ongoing evaluation of the scholarly literature through comments.
This notion of a community reviewing and evaluating articles is of particular importance to open-access mega-journals (OAMJs). Such journals operate soundness-only review policies which explicitly exclude judgements of significance, novelty and interest from the decision to publish, instead accepting any work that is deemed scientifically sound. The intention, as PeerJ’s Publisher and Co-founder Peter Binfield put it, is to “let the community decide”. He explained:
“If subjective filtering (on whatever criteria) has not happened ‘pre-publication’ … then clearly the community needs to apply new tools ‘post publication’ to try to provide these types of signals based on the reception of the article in the real world” (Binfield, 2013)
In practice, these “new tools” have primarily been post-publication metrics, particularly altmetrics, and article commenting. Indeed the PLOS ONE website explicitly states that its comment functionality is intended to “facilitate community evaluation and discourse around published articles”.
Despite the near consensus that commenting on academic articles is rare, there is surprisingly little publicly available data on commenting rates. We could find only two blog posts by Euan Adie, in 2008 and 2009, that examined commenting rates and the nature of comments left on articles published by BioMed Central (BMC) and PLOS ONE. He determined that only a small proportion of papers had received comments (18% of PLOS ONE articles, and just 2% of BMC articles).
To address this, as part of a team of academics from the Universities of Sheffield and Loughborough, we have recently published research into article commenting on PLOS journals. This work is based on a data set generously provided by PLOS, comprising all comments (and their supporting metadata) left on PLOS articles between 2003 (the date of the first PLOS article) and the end of 2016.
What did we find?
1. Commenting rates are low, and getting lower
Our analysis showed that since 2003, only 7.4% of articles published across all PLOS journals have been commented upon — and when comments left by publisher-operated accounts are excluded, this figure drops to 5.2%. We also found that commenting rates have been declining for all journals since 2010.
2. Very few articles have multiple comments
Articles that had been commented on received on average 1.9 comments, with two thirds of these articles receiving a single comment. Just 592 articles were found to have five or more comments, a figure representing 0.3% of all published articles. The articles with the most comments were mostly found to have been the center of some controversy. The (now retracted) article with the most comments (206) refers to “the Creator”, while the apparent stimuli for comments on other articles include the alleged refusal of researchers to share underlying data, results that contradict other influential papers, and apparently serious perceived flaws in research methodology and analysis. The article with the eighth-highest number of comments (46) has no reader comments whatsoever; all comments are made by the author, and correct the order of references in the article.
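To make the headline figures above concrete, here is a minimal sketch (not the authors’ actual analysis code, and using invented toy data) of how per-article commenting statistics like these could be computed from a set of comment records:

```python
# Illustrative sketch only: computes a commenting rate, the mean number of
# comments per commented article, and the share of commented articles with a
# single comment. The record format (article_id, is_publisher_account) and all
# values below are assumptions, not the PLOS data set itself.
from collections import Counter

def comment_stats(comments, total_articles):
    """comments: iterable of (article_id, is_publisher_account) records."""
    per_article = Counter(article_id for article_id, _ in comments)
    commented = len(per_article)                      # articles with >= 1 comment
    rate = commented / total_articles                 # share of articles commented on
    mean_comments = sum(per_article.values()) / commented if commented else 0.0
    single = sum(1 for n in per_article.values() if n == 1)
    single_share = single / commented if commented else 0.0
    return rate, mean_comments, single_share

# Toy data: 4 comments spread over 3 articles, out of 40 published articles.
toy_comments = [("a1", False), ("a1", False), ("a2", True), ("a3", False)]
rate, mean_c, single_share = comment_stats(toy_comments, total_articles=40)
```

Excluding publisher-operated accounts, as in the 5.2% figure above, would simply mean filtering records where the second field is true before counting.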
3. Most (but not all) comments discuss the academic content of the paper
As part of our research we developed a typology of comments, and manually coded a 10% sample of the PLOS data set (2,888 comments in total). Within this typology the 11 categories were organized into two distinct groups. Firstly, there are procedural comments: those identifying spelling or typographical errors, noting media coverage, linking to supplementary data etc. Secondly, there are academic comments: those praising or criticizing the paper, asking questions of the authors, linking to related material, or generally discussing the intellectual content of the paper. It is this second type of comment that relates to the concept of PPPR. We found that around a third of comments were procedural, and two thirds academic, with discussion of the content of the paper the most common (52% of all comments in the sample).
4. More comments address issues of scientific soundness than the significance of the work
We further analyzed comments coded as academic discussion, to determine whether they addressed any of the typical aspects of peer review (novelty, relevance to the journal, significance, and technical soundness). Two thirds of academic discussion comments left on PLOS ONE articles were found to cover the scientific soundness of the work, but just 13.6% its significance.
5. There is variation in commenting across journals
While most of the findings presented above are based on the aggregated data set, it was striking that different PLOS journals exhibited quite different commenting characteristics. PLOS Biology and PLOS Medicine in particular were found to have higher rates of commenting since 2009, and their comments were more likely than those on other journals to be academic in nature, and to discuss the significance of the findings.
What do these results mean?
In part, our research confirms what many publishers already know: academics rarely comment on articles. There has been much speculation (but little formal research) into why this should be the case. It could be, as Neylon and Xu suggest, that we are simply observing the 90-9-1 rule in action (90% of people observe, 9% make small contributions, and 1% are responsible for most of the content) — although that is more an observation of a pattern of behavior than an explanation as such. It may be that academics in general, particularly those at early stages of their careers, are uncomfortable publicly engaging in critical discussion of other people’s work. An unwillingness to comment might also stem from the long-standing culture of academia, as Cochran suggests, in which discussion of articles takes place in well-understood environments — in staff rooms (usually accompanied by coffee), in research group meetings, or at conferences.
Cochran also notes that “what’s missing from commenting sites specifically on mega-journals, database sites, and third party sites is community.” We think our results offer some support for this view, most notably in the fact that PLOS Medicine and PLOS Biology — the longest established and highest ranked of the PLOS journals, and those that might best be said to have a community of readers — both exhibit significantly more commenting activity than the much larger and topically diverse PLOS ONE.
Understandings of what constitutes peer review are also clearly very embedded in academic communities and difficult to challenge. Some of our other work on mega-journals has led us to question not so much the model of soundness-only peer review itself (which in fact involves pre-publication soundness-only peer review followed by post-publication quality indicators of novelty, significance and relevance), but rather the ways in which it can be incorporated into scholarly practice in the short term. This challenges many deep-seated presumptions about peer review, such as when it should happen (the assumption is pre-publication) and what it should include (novelty, significance, and interest as well as rigor and soundness). It can take time for new models to become accepted.
Our most significant finding, however, relates to the analysis of what comments are typically about. For commenting to properly constitute PPPR, or to communicate a community’s views of a mega-journal article, a sufficient number of comments arguably need to address the types of issues typically considered in peer review reports (novelty, significance, and interest). Our research shows that while academic discussion of the article is relatively prevalent, for PLOS ONE articles these comments are most likely to address issues of technical soundness — ironically, the one factor already addressed in peer review. So for PPPR and OAMJ models to work as originally conceived, publishers not only need to persuade more academics to comment more frequently — a serious enough challenge on its own — but also to persuade them to engage in a different type of commenting. Much of this is about academic cultures, of course, which are notoriously difficult to shift, at least with current incentives. Contributing to community dialogue in ways such as commenting, as well as engaging in peer review more generally, needs greater recognition.