Publishing keeps changing. More than a decade ago, a competitive advantage we all embraced was reducing the time to publication. Criticisms were legion — months to a decision, months to publication, all apparently due to intractable editorial habits from yesteryear. In response, editors and production teams pledged to work quickly to get papers published. Now, many journals put papers up faster than ever. In addition to shorter times from submission to acceptance, there are “in press” sections in some journals, a proxy for preprints. Some journals publish raw manuscripts as placeholders for the final version. Even top-tier journals measure themselves by their time to publication.


This speed may come at a cost in accuracy and reliability. I’ve seen first-hand how rapid publication practices, driven by competitive forces as well as the tempting capabilities of Internet publishing, can lead to an increase in corrections and errata at scientific journals. There is a price to pay for squeezing the time to publication down to its absolute minimum. The aggregate rise in corrections and retractions across the journals system may provide more evidence that haste makes waste. With questions about the quality and legitimacy of the reports published in journals arising in the mainstream media, the costs of speed to our brands and to the overall reputation of the industry may be worth reconsidering.

Moreover, is rapid publication where the competitive advantage currently resides? Or has the strategic ground shifted?

The rise of preprints, now endorsed by the NIH, has created a new pressure valve for rapid publication outside of journals. With this venue for interim publication now in place, do journals need to be so quick to publish?

“Slower may be better” is a theme resonating through the media space now. In light of the hoaxes, propaganda, and misinformation campaigns being waged by nation-states, individual players and their bots, and conspiracy theorists, readers and users are reflexively returning to slower, more considered venues. Subscriptions have seen dramatic increases at the New York Times, the Washington Post, and other trusted news outlets that take their time and break stories only when they truly have the goods. Speed is becoming increasingly associated with blather.

Moving away from “breaking news” has helped the British newspaper The Times, which reports a 200% increase in subscriptions since it abandoned the practice of competing on that front. Shifting from rolling news to a three-part publishing day, with news going up at 9 a.m., 12 p.m., and 5 p.m., has helped editors, marketers, and users alike, according to Catherine Newman, the Chief Marketing Officer:

What has been revolutionary for us and editorial is that in changing to the editions’ publishing strategy and moving away from rolling news, we now have appointments to view with our subscribers and registered users that we didn’t have previously.

The approach has also allowed Newman’s marketing team to learn what readers are interested in between the publication events, providing feedback to reporters and editors so that the next release of content reflects the priorities readers have stated. Using a call center that receives feedback from readers, the news team is able to pursue stories readers want to know more about:

. . . calls are now often played in news meetings, so journalists can hear first-hand what readers want. For example on March 13, the day Scottish national party leader Nicola Sturgeon announced plans to hold a second independent referendum to separate Scotland from the U.K., the call center messages were replayed in the newsroom.

This slower approach allows The Times to remain in sync with its audience, while providing time for reporters and editors to develop relevant stories, prioritize effort, and coordinate themselves. It also sets expectations with readers, who can use other tools to get news ephemera (Twitter, Facebook), but know The Times will be there at 9, noon, and 5 with comprehensive and well-crafted reporting. Slowness and care have become competitive differentiators.

The toll of haste is also an issue magazines are currently grappling with, as detailed in a recent article in the Columbia Journalism Review (CJR). Online competition has forced magazine journalists and editors to work faster than ever, which is delivering information online that is not as thoroughly fact-checked and edited as what follows later in print:

In our conversations with research editors at more than a dozen award-winning national and regional magazines, we found this same pattern: Print gets the full-on fact-checking process; online content gets at most a spot-check.

There are two clear problems with early errors — errors don’t foster trust, and corrections often go unnoticed, rendering them ineffective. If a correction falls in the digital forest, does it make a sound?


Some might argue that lower-quality digital-first information is merely a symptom of editors and publishers holding onto old habits, but there are reasons to believe it’s not so simple, and that perhaps it’s just the opposite. Time pressures change online, as do work habits. Realizing you have a more malleable format can make you more comfortable about rushing to publication, as corrections are just a click away. As the CJR story notes:

Practices vary, however, by magazine and by magnitude of error. Portland Monthly, which has no formal corrections policy for online stories, simply fixes errors (which are rare) as they occur and doesn’t notify readers, according to Assistant Editor Ramona DeNies, who oversees fact-checking for the city magazine.

In addition, publishers and editors with print backgrounds seem to produce more reliable material. Nona Willis Aronowitz, an editor at Fusion, says in the CJR story:

. . . there are some digital publications that are just as rigorous at fact checking as any magazine out there, and it’s probably because the people running it are print people.

A key factor may be a shared sensitivity to permanence among editors and publishers with print backgrounds. Editors, production staff, and authors who have experienced having their work cast in the paper equivalent of stone know that terrible feeling of seeing an error that has inevitably reached thousands of readers. There is no taking it back, so they work harder on prevention. The cumulative effect of such training may carry over into digital products that are themselves better for it.

Yet there are proposals to make publication an essentially endless event, with no limit to the number of revisions, updates, and changes authors or editors can make, and the version of record itself rendered malleable, on the idea that this:

. . . supports the dynamic nature of the research process itself as researchers continue to refine or extend the work, removing the emotive climate particularly associated with retractions and corrections to published work.

Is dynamism something journals can leave to the preprint world now?

Perhaps, instead, the strategic differentiator for journals isn’t unpredictable schedules, rapid publication, and error-prone publishing of scientific reports. With preprint servers supporting rapid, preliminary publication in an environment far more tolerant of amendments, corrections, speed, and unpredictability, perhaps journals should rethink shouldering the load, and courting the risks, of rapid publication. More importantly, there are indications that coordinating with your audience, taking more time to fact-check and edit, and returning to a higher level of quality may be the smart move.

Journals don’t have to perform every publishing trick anymore. Maybe it’s time to return to doing what they do best — vetting information carefully, validating claims as best they can, and ensuring novelty, quality, relevance, and importance around what they choose to publish.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

23 Thoughts on "The Tincture of Time — Should Journals Return to Slower Publishing Practices?"

I frequently review for journals, maybe because I also publish regularly and get cited a lot. When an editor tells me I have 6 weeks to deliver a review, I put a tickler on my calendar to read the manuscript a week before the deadline. Can anyone tell me what happens in those 5 weeks of delay? That’s right, nothing. Time passes. It’s even worse for editors who give me two or three months. Obviously, sending it back early won’t work if the other reviewers also procrastinate. Giving a manuscript a very careful reading seldom takes me more than two hours. So why not just give me a week to complete it? I’d say, it’s simply stupid tradition and the pompous attitude among some reviewers that their professional lives are much too busy for a monthly or semi-annual manuscript to review for a journal. Nonsense. If you cannot turn around a manuscript in a week’s time, you should not be reviewing at all. Letting manuscripts marinate is soothing but wasteful.

I’ve worked on multiple experiments to make this part of the process more efficient. What you say is generally true (the “if it weren’t for the 11th hour, nothing would happen in the world” phenomenon), but there are contingencies. First, once you leave Anecdote World or Exception Land, and generalize a policy, it becomes complicated. You can lose reviewers who travel a lot, lose reviewers during vacation periods (summer, December), or simply make them mad if you request a two-week turnaround. In a general approach, a one-week turnaround is doomed to fail, or at least make things untenable. To have a decent fetch, my experience is that 4-6 weeks latitude is about optimal. It’s not ideal, but for work based on volunteer goodwill and intellectual collaboration, journals and editors are best seen as considerate and helpful.

Thanks Kent – but a lot of the discussion here focuses on newspapers and magazines, while the conclusions are drawn for journals.
Journal publishers sped up processes in response to market needs and the feedback they received from the communities of scientists they interacted with. In the future those communities may not care about fast publication, but today they still do — not least because they want to ensure primacy, establish reputation, and meet (real or perceived) funder requirements for justification of grants. Some of these are admittedly addressed by preprints – but so far no users are asking for slower processes as a consequence – more rigour perhaps (for reproducibility etc.), but without compromising speed.
I am not certain whether this is the same in all STM fields.

You’ve hit on a key contradiction — more rigor, at the same quick speed we’ve become accustomed to. The reason to point to larger trends is that the scientific community reads news and magazines as well, and may worry when their journals aren’t as accurate as some of these better sources. Also, the uptick in retractions, corrections, and other problems with accuracy and reliability can’t be viewed in isolation. There is a larger story about the information ecosystem here, I think.

If we want more rigor, we might need to slow down. With preprints now taking the early pass in many fields, maybe the pressure to rush has lessened, and journals can reconsider their rapid publication practices. Things are changing, including user expectations.

news and magazines are “better sources” than journals, Kent?
I think Shiloh is right with regard to print journals, but your points are well-taken with regard to the advent of post-publication peer review and super-fast review then immediate posting.
In my role as EIC, I’m working on bringing turnaround times down, but at the same time recognising that the quality of our product takes more time than the quick-and-dirty online newcomers put in – I’m pretty sure most readers recognise that, and I’ve had many authors thank us for the time we invest in helping them get a better article out of it than they would with an outlet that cares less.

Really useful to think critically about our working assumptions, Kent, and in this case, thank you for questioning the assumption that faster = better. If publishers felt free to allow a more rational pace to the publishing lifecycle, I’d imagine we’d address some of the temporal gaps in discovery and access, as we wait for metadata to work its way through our supply chain, https://scholarlykitchen.sspnet.org/2017/03/16/time-warp-lag-publication-discovery/. Thank you for these insights today!

Wouldn’t a better approach be to optimize the metadata supply chain so as to narrow those gaps? Slowing the rest of the publishing process down for that purpose doesn’t seem like a particularly effective strategy if the point is to make discovery and access more efficient.

NYT and them lot still conform to the same 24-hour news cycle as the rest. The process isn’t significantly slowed down; it’s slightly reorganized within the usual time constraints of the news business and targeted at 9-to-5ers more specifically.

A viable editorial approach, apparently, but its effect on the integrity and legitimacy of the news itself is marginal at best, and mostly non-existent.

Interesting piece, food for thought. I work in journal publishing. The answer on timeframe is probably somewhere in the middle – between the speed at which journal issues are being published now and a very long process. Speed should not result in lower-quality output.

Likewise, I’m not sure the pressures on the news media and academic journals can reasonably be compared, because we’re not chasing eyeballs for ad dollars in the same way, and a research finding is rivalrous – no one else can publish the same finding from the same person – whereas anyone can report the same political or social occurrence.

The overall point about letting preprint servers do their thing and allowing journals to remain a carefully curated repository of knowledge is a good one, though. I think we’ll learn a bit about the extent to which the research world feels that fast-moving, error-prone preprints are good enough for some uses.

This is an excellent column that may or may not apply to the B2B publishing environment that I inhabit. However, perhaps there are at least two other hurdles — beyond an overly fast pace — that undermine B2B quality. I wonder if either of these applies to scholarly journals. One is contending with mammoth workloads brought about by the advent of online media. In many cases, staffs already burdened by existing print-only workloads have added internet responsibilities not offset by badly needed additional staff. The second is a rush to frequency. Monthly and twice-weekly frequency was once the way to go; now it has gotten somewhat out of hand, with inadequate attempts to sustain high quality while delivering content daily or even twice daily. And mind you, a lot of the rush is occurring at a time when more daily newspaper publishers are seriously considering the wisdom of a lesser frequency.

Annual studies I conduct involving B2B e-news delivery clearly confirm a quality shortfall. Among other symptoms, 65 percent of articles reviewed during the past six years reflect no evidence of enterprise reporting. So you could say B2B publishers contend with a triple threat: the faster pace being forced upon us, complicated even further by increased frequency and the resulting overwhelming workloads. Now that we are in this rut, there doesn’t seem to be any way we could slow down — is there???

It is a timely counterpoint – time to pause and rethink. Thank you Kent!
“How quickly, and inexpensively, can we throw it over the wall?” is neither a good nor a sustainable approach, irrespective of who pays.

My two cents on this topic and these issues:

Al-Khatib, A., Teixeira da Silva, J.A. (2017) What rights do authors have? Science and Engineering Ethics
http://link.springer.com/article/10.1007/s11948-016-9808-8
DOI: 10.1007/s11948-016-9808-8

Teixeira da Silva, J.A. (2017) It may be easier to publish than to correct or retract faulty biomedical literature. Croatian Medical Journal 58(1): 75-79.
http://www.cmj.hr/2017/58/1/28252878.htm
DOI: 10.3325/cmj.2017.58

Al-Khatib, A., Teixeira da Silva, J.A. (2017) Threats to the survival of the author-pays-journal to publish model. Publishing Research Quarterly 33(1): 64-70.
http://link.springer.com/article/10.1007/s12109-016-9486-z
DOI: 10.1007/s12109-016-9486-z

Teixeira da Silva, J.A., Dobránszki, J. (2017) Excessively long editorial decisions and excessively long publication times by journals: causes, risks, consequences, and proposed solutions. Publishing Research Quarterly 33(1): 101-108.
http://link.springer.com/article/10.1007/s12109-016-9489-9
DOI: 10.1007/s12109-016-9489-9

Teixeira da Silva, J.A., Katavić, V. (2016) Free editors and peers: squeezing the lemon dry. Ethics & Bioethics 6(3-4): 203-209.
https://www.degruyter.com/view/j/ebce.2016.6.issue-3-4/ebce-2016-0011/ebce-2016-0011.xml
DOI: 10.1515/ebce-2016-0011

Both an excellent analysis and very timely … here are the numbers of ‘corrections’ as recorded in Web of Science:
2012 (12,869) 2013 (13,465) 2014 (15,511) 2015 (18,155) 2016 (19,062)

… and the number of ‘retracted publications’ and ‘retractions’ as recorded in Web of Science:
2012 (223) 2013 (192) 2014 (203) 2015 (149) 2016 (269)

In my view the multiplication of journals is killing the system. The main problem I find as an associate editor is finding reviewers, simply because we are all getting requests from a dozen journals at the same time. The delays at the journals I work with are usually not related to the reviewers (4-6 weeks is very reasonable for a good review), but to the difficulty of finding somebody competent to do the review. Asking that person to do it in two weeks will not help.
On the other hand, with all the news about fakes and scandals and how this or that paper got through review, we should remember that the peer review system is not essential for science, nor part of the scientific method; the only thing essential for science is reproducibility!

Here’s the question though: is it the increased number of journals or the increased number of papers? If there were fewer journals, but the same number of submissions, I suspect that things would be much the same.

To echo David’s point, there are many things multiplying in the system at once — more researchers, more fields of scientific and scholarly endeavor, more papers, and more journals. Surveys of peer review show a system that is under some pressure, but apparently not much more than in prior eras. The load-balancing is generally good.

As for peer review not being essential for science, I disagree. It has been part of validating and sorting reports for centuries, and has become more widespread and useful as information flows have increased. We could use more of it in other parts of life, and learn to accept the time for review and revision because it does improve the information we rely upon.
