Publishing keeps changing. More than a decade ago, a competitive advantage we all embraced was reducing the time to publication. Criticisms were legion — months to a decision, months to publication, all apparently due to intractable editorial habits from yesteryear. In response, editors and production teams pledged to work quickly to get papers published. Now, many journals put papers up faster than ever. In addition to shorter times from submission to acceptance, there are “in press” sections in some journals, a proxy for preprints. Some journals publish raw manuscripts as placeholders for the final version. Even top-tier journals measure themselves by their time to publication.
This speed may come at the cost of accuracy and reliability. I’ve seen first-hand how rapid publication practices, driven by competitive forces as well as the tempting capabilities of Internet publishing, can lead to an increase in corrections and errata at scientific journals. There is a price to pay for squeezing the time to publication down to its absolute minimum. The aggregate rise in corrections and retractions across the journals system may provide further evidence that haste makes waste. With questions about the quality and legitimacy of the reports published in journals arising in the mainstream media, the costs of speed to our brands and to the overall reputation of the industry may be worth reconsidering.
Moreover, is rapid publication where the competitive advantage currently resides? Or has the strategic ground shifted?
The rise of preprints, now endorsed by the NIH, has created a new pressure valve for rapid publication outside of journals. With this emerging venue for interim publication coming into place, do journals need to be so quick to publish?
“Slower may be better” is a theme resonating through the media space now. In light of the hoaxes, propaganda, and misinformation campaigns being waged by nation-states, individual players and their bots, and conspiracy theorists, readers and users are reflexively returning to slower, more considered venues. Subscriptions have seen dramatic increases at the New York Times, the Washington Post, and other trusted news outlets that take their time and break stories only when they truly have the goods. Speed is becoming increasingly associated with blather.
Moving away from “breaking news” has helped the British newspaper The Times, which is reporting a 200% increase in subscriptions since abandoning the practice of competing on this frontier. Shifting from the rolling news practices associated with a “breaking news” approach to a three-part publishing day, with news going up at 9 a.m., 12 p.m., and 5 p.m. each day, has helped editors, marketers, and users alike, according to Catherine Newman, the Chief Marketing Officer:
What has been revolutionary for us and editorial is that in changing to the editions’ publishing strategy and moving away from rolling news, we now have appointments to view with our subscribers and registered users that we didn’t have previously.
The approach has also allowed Newman’s marketing team to learn what readers are interested in between the publication events, providing feedback to reporters and editors so that the next release of content reflects the priorities readers have stated. Using a call center that receives feedback from readers, the news team is able to pursue stories readers want to know more about:
. . . calls are now often played in news meetings, so journalists can hear first-hand what readers want. For example on March 13, the day Scottish national party leader Nicola Sturgeon announced plans to hold a second independent referendum to separate Scotland from the U.K., the call center messages were replayed in the newsroom.
This slower approach allows The Times to remain in sync with its audience, while providing time for reporters and editors to develop relevant stories, prioritize effort, and coordinate themselves. It also sets expectations with readers, who can use other tools to get news ephemera (Twitter, Facebook), but know The Times will be there at 9 a.m., noon, and 5 p.m. with comprehensive and well-crafted reporting. Slowness and care have become competitive differentiators.
The toll of haste is also an issue magazines are currently grappling with, as detailed in a recent article in the Columbia Journalism Review (CJR). Online competition has forced magazine journalists and editors to work faster than ever, which is delivering information online that is not as thoroughly fact-checked and edited as what follows later in print:
In our conversations with research editors at more than a dozen award-winning national and regional magazines, we found this same pattern: Print gets the full-on fact-checking process; online content gets at most a spot-check.
There are two clear problems with early errors — errors don’t foster trust, and corrections often go unnoticed, rendering them ineffective. If a correction falls in the digital forest, does it make a sound?
Journals don’t have to do everything anymore, and maybe it’s time to return to doing what they do best
Some might argue that lower-quality digital-first information is merely a symptom of editors and publishers holding onto old habits, but there are reasons to believe it’s not so simple, and that perhaps it’s just the opposite. Time pressures change with online publishing, as do work habits. Knowing you have a more malleable format can make you more comfortable rushing to publication, since corrections are just a click away. As the CJR story notes:
Practices vary, however, by magazine and by magnitude of error. Portland Monthly, which has no formal corrections policy for online stories, simply fixes errors (which are rare) as they occur and doesn’t notify readers, according to Assistant Editor Ramona DeNies, who oversees fact-checking for the city magazine.
In addition, publishers and editors with print backgrounds seem to produce more reliable material. Nona Willis Aronowitz, an editor at Fusion, says in the CJR story:
. . . there are some digital publications that are just as rigorous at fact checking as any magazine out there, and it’s probably because the people running it are print people.
A key factor may be a shared sensitivity to permanence among editors and publishers with print backgrounds. Editors, production staff, and authors who have seen their work cast in the paper equivalent of stone know the terrible feeling of spotting an error that has already reached thousands of readers. There is no taking it back, so they work harder on prevention. The cumulative effect of such training may carry over into digital products, which are better for it.
Yet we now have proposals to make publication an essentially endless event, with no limit to the number of revisions, updates, and changes authors or editors can make, and a version of record that remains malleable, the idea being that this:
. . . supports the dynamic nature of the research process itself as researchers continue to refine or extend the work, removing the emotive climate particularly associated with retractions and corrections to published work.
Is dynamism something journals can leave to the preprint world now?
Perhaps, instead, the strategic differentiator for journals isn’t unpredictable schedules, rapid publication, and error-prone publishing of scientific reports. With preprint servers supporting rapid, preliminary publication in an environment that is actually more supportive of amendments and corrections, speed, and unpredictability, perhaps journals should stop shouldering the load, and courting the risks, of rapid publication. More importantly, there are indications that coordinating with your audience, taking more time to fact-check and edit, and returning to a higher level of quality may be the smart move.
Journals don’t have to perform every publishing trick anymore. Maybe it’s time to return to doing what they do best — vetting information carefully, validating claims as best they can, and ensuring novelty, quality, relevance, and importance around what they choose to publish.