A couple of weeks ago, I chaired a panel at the 2014 STM Spring Conference on Ethics and Trust in Journal Publishing: How Sound is the System? It’s an important topic, and one that I and all the panel members (John Bohannon – Science journalist, Phil Davis – consultant and fellow Scholarly Kitchen chef, Chris Graf – Wiley’s New Business Director, and Ivan Oransky – Retraction Watch co-editor) feel strongly about. At a time when more research articles are more readily available to more readers globally than ever before, it’s crucial that we are confident those papers meet the highest standards and that, on those occasions where they don’t, there is a sound system in place to revise or retract them.


So what can we do to make the publishing process more sound? This was one of the questions I posed to the panelists; their answers – a mix of the pragmatic and the aspirational – were thought-provoking and, in some cases, quite controversial!

Not surprisingly, many of the suggestions focused on peer review, including going all out for open peer review by deploying technology to facilitate that change – the assumption being that openness around the peer review process would invite greater scrutiny of it, including accelerating the identification and correction of problems. A variation on this theme was the suggestion of making peer review part of the public record, along with the paper. While this wouldn’t necessarily be fully open in the sense that the reviewers themselves would be publicly identified, it would make the whole process more transparent. Although these suggestions represent significant changes to the peer review system, and would require a cultural and behavioral shift on the part of researchers and publishers alike, they are also plausible. Indeed, some organizations are already starting to experiment with these sorts of approaches – The EMBO Journal, for example, includes the following statement in its guidelines for authors:

The EMBO Journal makes the editorial process transparent for all accepted manuscripts, by publishing as an online supplementary document (the Peer Review Process File, PRPF) all correspondence between authors and the editorial office relevant to the decision process. This will include all referee comments directed to the authors, as well as the authors’ point-by-point responses. Internal communications and informal consultations between editors, editorial advisors or referees will remain excluded from these documents. Importantly, referee anonymity will be strictly maintained. Authors have the possibility to opt out of the transparent process at any stage prior to publication.

A much more radical suggestion, and one that would require far bigger cultural and behavioral changes, was to materially change the academic reward system, including an end to the ‘fetishization’ of the peer-reviewed paper as part of that system. A couple of years ago, fellow chef David Smith characterized the research process as follows:

  1. Scholar gets funding for research
  2. Scholar does research
  3. Scholar undertakes a process whereby they attempt to maximize the value of the research they’ve done by attempting to get as many papers out as possible, whilst simultaneously getting as much tenure/funding credit as possible for the same body of work (these things tend to trend against each other and you’ll note that there are two different definitions of value wrapped up there)
  4. Scholar selects journals in which to publish the work
  5. Publisher places successful works out for greater dissemination
  6. Fortune and glory follows (or not).

It’s fair to say that nothing much has changed since then (OA may now be mainstream, but it has had no real impact on this workflow) and, I suspect, nothing much is going to change any time soon. What’s more, it’s not an issue that we, as publishers, can (or should) influence; our mission is to serve the researchers and professionals working in our communities, not to dictate how or why they get hired or promoted.

However, it is within our control to implement one of the other radical suggestions from the panel – to create and implement a publication audit process for all journals. John and Ivan are both famous for ‘outing’ publishers whose publishing process is less than watertight – and in doing so, they are providing a valuable (if sometimes unpopular!) service. But, as they both noted, their work doesn’t constitute any kind of real (i.e., consistent, continuous) safety check. That would require publisher support and participation.

Ideally, an audit of this sort would be independent of the publishers themselves, but it is hard to see who – other than us – would be willing to pay for such a process. And publishers do have a good track record of collaborating to improve scholarly communications. Think CrossRef, ORCID, Research4Life, CLOCKSS, and more. In particular, many publishers (and the societies for which they publish) are members of COPE (the Committee on Publication Ethics), which was formed in 1997 by a small group of medical journal editors to provide “advice to editors and publishers on all aspects of publication ethics and, in particular, how to handle cases of research and publication misconduct”.

At Wiley, we used the COPE toolkit to undertake our own ethics audit of our health science journals (covered by Chris at STM and at last year’s International Congress on Peer Review and Biomedical Publication), which we then used to help educate and encourage editors to improve their processes. Although this approach may not be scalable – and, in fact, the results were somewhat mixed – perhaps we could use elements of it to create a more efficient and effective audit system in future. It’s got to be better than waiting for John to sting us again, surely!?

With thanks to John Bohannon, Phil Davis, Chris Graf, and Ivan Oransky

Alice Meadows


I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

15 Thoughts on "How Can We Make the Publishing Process More Sound?"

I agree with Harvey that the number of “retractions” is minute, but so is the number of ethical issues, apart from the obvious ones documented by Nature. What is more difficult to assess is “quality” or value (including the chance that some fundamental advance may be recognized only decades later). While peer review and the journal impact factor (IF) loom large in the background of these discussions, a number of funding agencies globally are ignoring IFs in favor of other, post-publication measures to determine future funding and to channel or encourage directions for research.

A number of post-publication analysis services are looking at that spectrum, including social media.
In fact, the Nature efforts, and those of others, are post-publication analysis.

Perhaps there should be more serious consideration of post-publication measures? Objections can immediately be raised, but before shooting down a nascent idea, a more critical analysis might open up new possibilities. This is done today, for example, by qualifying “subscribers” or “members” before their input can be posted.

According to a SCOPE white paper, there are more than 23,000 scholarly journals publishing some 1.4 million articles annually, and both numbers are increasing.

It’s true that we can’t predict with 100% certainty the significance of any research results, and that hindsight is likely to be much more accurate. That said, looking back 10 years to see what really mattered may be too long a timescale to reward the best researchers adequately and keep their careers moving forward.

There is likely still some value in having an expert editor put together a panel of expert peer reviewers who read the article in depth and, using their extensive knowledge of the field, declare it (at least preliminarily) to be of a level of value high enough to warrant publication in journal X. It’s not going to be perfect, and IF may not be the best way to rank or measure where a journal sits in the hierarchy, but if I’m a postdoc and I’m up for a job, I can’t wait 10 years for my results to have been proven important.

On the other hand, if I’ve just published those results in a journal with really high standards, that at least sends a signal to hiring committees that my peers see my work as important.

Our research shows 28,000 journals and 2,000,000 articles published in 2013. And growing.

All of the post-publication metrics and tools that exist now are, as you say, post-publication. It takes a long time, even with newer initiatives such as altmetrics, to get any information of value.

We need something that is a leading indicator, available at publication – something that helps users know what went on prior to publication. See my response below for what I think is one solution.

Alice,

I was in the audience during that session, and was quite happy when Ivan mentioned what we are doing with PRE as a possible solution. I prefer the word “validation” to “audit,” as the latter has negative connotations for me. I responded to Angela’s post yesterday with similar thoughts. Obviously I’m not objective, since I’m the Founder and Managing Director of PRE, but it seems that what we are doing with services such as PRE-val and PRE-score addresses this need directly. Our primary goal is to support quality peer review and create incentives around it. As David mentions, journals with high standards around peer review currently have no way to really show that or get credit for the hard work they put in. With PRE they do. We’re just getting started, but my hope is that in the future we will have wide participation and these things we’re discussing will have become reality.

Adam

Independent audits of publishers may be too radical an idea to happen any time soon, but until (or unless?) they do, the publishing industry itself should surely accept responsibility for self-regulation. Our own audits at BioMed Central have proved invaluable in identifying areas where we can provide education and support for our editors, both as individuals and as a group, and have informed the content of our editorial training programme.
However, publication is just one part of a long process, and while open peer review – including publication of all previous versions of the manuscript – increases transparency and makes it easier to detect poor practice, perhaps the question of how we make the publication process ‘sound’ needs to be addressed in the broader context of education for scientists in their roles as researchers and reviewers, as well as editors.

A good Managing Editor will establish an audit trail that kicks in at submission and ensures (to the extent possible) that anomalies and red flags are pursued pre-acceptance. In the awful event that they are not, the cause can be quickly spotted via that audit trail. Nothing is perfect, of course. I would have liked to see more representation on your panel from the professional and dedicated corps of MEs, who regard the integrity of their journal’s content as the heart of their jobs, day in and day out. They have a privileged view of these issues.

This is conjecture, but my feeling is that science publishing is struggling (and failing) to deal with a massive garbage-in, garbage-out problem. The proportion of truly solid papers may actually be very low, and so the system we have in place (peer review) is unable to sort the good results from the bad without applying what seem like impossibly high standards.

For example, disguising positive post-hoc results as confirmation of a priori predictions is apparently routine, and since neither the data nor the analysis plan is generally available, it’s impossible to check. Selective reporting of positive results may also be very common. In some sense, journals are responsible for this problem because they are reluctant to impose very strict standards on their authors, which would in turn incentivize better research practice.

One ray of sunshine is the development of initiatives like the Center for Open Science (http://centerforopenscience.org/), which help researchers document and then publish their entire research workflow. It’s possible that journals encouraging – and ultimately requiring – the use of such tools would do a lot to improve the quality of their submissions.

Tim, is the massive “garbage” you refer to bad writing or bad research? If the latter, I am not sure it is the publishing industry’s job to change the standards. And in either case, if you raise the rejection rate, all you do is feed the APC bottom feeders, as it were.

As for documenting and publishing entire workflows instead of simple articles, that creates several problems, which I discuss below. It may just make the garbage pile that much harder to wade through. Documentation is no substitute for quality, and it chews up everyone’s valuable time.

Tim, your comments are right on target. Academics are notorious for having “results” before applying for grants in order to accrue all the benefits you mention, including articles.

It is concerning that there are so many comments here stating that the need to get articles out in a “timely” fashion is to assure or support the career advancement of researchers. This lays a moral burden that was never the intent when the idea of a “journal” emerged. It changes or distorts the entire intent and purpose of publishing.

In fact, it becomes a tacit collusion that the publishing industry is too willing to encourage, and it indirectly supports the motivations for marginal submissions and the veritable persiflage that many editorial boards must wade through, rationalizing that otherwise they might overlook “that” work which later surfaces as a critical breakthrough.

The idea of a “double-blind” review may now be problematic and its own worst enemy. With semantic enhancement and similar techniques, AI provides a powerful first cut and a filter that may allow true collegiality to re-enter the journal industry. Unfortunately, it may negatively impact publishers’ bottom line.

“…to create and implement a publication audit process for all journals” sounds like a very expensive effort. Are there any cost and burden estimates for this, especially burden? And what are the supposed benefits? Note too that “all journals” is probably unrealistic, and the bad actors are the least likely to comply.

In the design of regulatory systems, of which this is a case, the opposite approach is often far more efficient. This means developing mechanisms for identifying the worst cases. I call it the “worst first” heuristic (see the sketch below).

As for open review, consider that attention is basically a zero-sum game. Tracking through every version of a paper, plus all the comments, etc., along the way, chews up a great deal of attention. What is it that people are supposed to stop looking at in order to free up all this attention? Other papers? If so, then if open review actually worked, it might drastically reduce the number of papers that are read. Is this what we want?

Regulatory mechanisms that create large burdens on human time typically do not work. We are, after all, talking about human behavior, where time is a scarce resource. The least burdensome solution is often the best. I suggest looking in that direction.
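To make that “worst first” idea concrete, here is a minimal, purely illustrative sketch. The journal names, red-flag signals, and weights below are hypothetical assumptions, not real data or an actual scoring system; the point is simply that a triage mechanism would rank journals by warning signs and spend scarce auditing attention on the worst cases first.

# A purely hypothetical "worst first" triage sketch (Python).
# Signals and weights are invented for illustration only.

journals = [
    {"name": "Journal A", "retraction_rate": 0.001, "opaque_review": False, "sting_failures": 0},
    {"name": "Journal B", "retraction_rate": 0.010, "opaque_review": True,  "sting_failures": 2},
    {"name": "Journal C", "retraction_rate": 0.004, "opaque_review": True,  "sting_failures": 0},
]

def red_flag_score(journal):
    """Combine hypothetical warning signs into one number; higher means worse."""
    score = 1000 * journal["retraction_rate"]       # weight retractions heavily
    score += 5 if journal["opaque_review"] else 0   # no published peer review policy
    score += 3 * journal["sting_failures"]          # accepted known-flawed test papers
    return score

# Audit only the worst offenders first, rather than auditing every journal.
for journal in sorted(journals, key=red_flag_score, reverse=True)[:2]:
    print(f"Prioritize for audit: {journal['name']} (score {red_flag_score(journal):.1f})")

The specific weights do not matter; the design point is that regulators (or publishers collaborating on an audit) concentrate their limited attention where the red flags cluster, instead of spreading a heavy burden evenly across all journals.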

Thanks for all your comments. Joshua, you’re quite right that this needs to be addressed in the wider context of education for scientists, many of whom are themselves editors and peer reviewers as well as writers. This is relevant to Marjorie’s point too – while I agree there are many excellent managing editors out there (and unfortunately some bad ones too), there are also many who are doing their best but just haven’t been properly trained, as our ethics audit showed, albeit on a small scale. One thing we can be sure of – as with pretty much everything in the world of scholarly communications, there’s unlikely to be a one-size-fits-all solution. But it’s got to be good news that so many new ways of improving the process are being developed – both pre- and post-publication. I suspect we are never likely to achieve perfection – we are all only human – but we can and should keep striving for it!

Something that might be doable, but still not cheap, is a kind of Better Business Bureau that takes in and catalogs reviews and criticisms of journals. This has revolutionized consumer behavior, so why not authors’?
