austin bikes guard cat. (Photo credit: Wikipedia)

The Ig Nobel Prizes are favorites among many scientists and science watchers. They provide a much-needed spark of insouciance to otherwise dour and serious proceedings. The Ig Nobels accomplish this by identifying studies that are, on the surface, ridiculous, but which ultimately take us on a wonderful journey toward the sublime. As the founders say, “The Ig Nobel Prizes honor research that first makes people laugh, then makes them think.”

Recent examples include studies of whether it is mentally hazardous to own a cat. On the surface, the question appears to be silly, emanating from stereotypes of eccentric “cat ladies.” However, dig deeper into the question, and you learn that it’s a serious question about a feline-borne parasite (Toxoplasma gondii, which causes toxoplasmosis). The parasite infests brains and can lead to certain behaviors (possibly schizophrenia, possibly depression), including a potential propensity to hoard cats, which helps the parasite thrive. Two teams — one from the US and one from Europe — pursued this question, and so split the Ig Nobel Prize for the work.

Listening to this discussed during an interview on StarTalk Radio with the founder of the Ig Nobels, I was struck by how much sustained discussion and reflection these simple and creative studies fostered. Even the process of choosing the Ig Nobels — a process of sorting through published research — seems to require a lot of time and thought.

Contrast this with the trend toward producing and publishing studies faster, and the increasing insistence from authors on rapid publication.

The purported benefits of rapid publication include elimination of unnecessary production cycles, delivery of information to scientists who can use it, and the ability for science as a whole to advance more quickly with faster access to new research findings.

Publication is certainly a major threshold for information to cross. But does pushing for “faster” come at a cost?

We see an increase in retractions, driven in part by fraudsters who publish many studies quickly, leading to batches of studies being retracted once a handful begin to foul. We see a mega-journal publishing 35,000 papers per year, making it nigh impossible to evaluate its quality on a daily basis, much less overall. We see readers feeling more and more overwhelmed, reviewers burning out, and editors more and more stressed. The publishing world is moving faster and faster, and the ledger doesn’t usually seem net positive.

We also see how variable the implementation of new research is by field, and how variable the reasons for faster publication can be across science. In some fields, patent concerns make primary publication very important for the owners of the intellectual property. In others, research is progressing so rapidly that faster publication is a competitive advantage for all involved. In medicine particularly, some papers are pushed ahead for competitive reasons at times, for public health reasons at others. Overall, publishers view faster publication as an advantage and a necessity. It’s just a matter, after all, of tightening up production steps and eliminating wasted time.

Yet, again and again, we find that what at first appears simple and straightforward is actually complex and challenging.

Peer review is viewed by some as a simple matter of sending a manuscript to a couple of qualified people, getting their comments back, and taking their recommendations forward. Of course, it is much more complex than it first seems. Finding qualified people is usually a major chore in and of itself. Some journals, especially mega-journals, seem to take shortcuts even here. Then there are the issues of managing peer reviewers, who are often peripatetic, with multiple email addresses depending on where they are; who miss deadlines; who forget to send attachments; who can’t manage their spam filters; and so forth. Then reviewers have to be assessed for timeliness and quality — reviewing the peer reviewers takes time and effort. And most journals don’t take peer reviewer comments without qualification — they are usually collated and interpreted through the further review of editorial staff and senior editors.

It’s more fun to think about peer review as a simple process that flows smoothly along magical silver rails, but it is far from that. Think it through, and you realize how much work, effort, and complexity running peer review for 1,000-50,000 manuscripts entails. Creating fast-track systems within this flurry of work is a major effort, and securing the right reviewers at the right time can create significant logistical challenges.

Production systems are full of dependencies, making a push here subject to an equal and opposite reaction in another section. Faster publication of selected manuscripts can slow down production for others. The net time savings may be closer to zero across the production system than is immediately apparent. Making the production system faster overall often requires major reengineering efforts (direct-to-XML processes, CMS modifications, and potentially additional staff, just to name a few). Vendor contracts may need to be renegotiated, platform provider relationships retooled, and new notification systems and services created.

The potential downside to all this is less apparent, but comes in the form of more corrections, retractions, and near-misses on both fronts.

What’s interesting to me is that the time and effort involved in dealing with these events and correcting or modifying the literature is not insignificant. The increased rate of occurrence is quite likely making scientists less willing to accept initial reports at face value — they wait for enough time to pass for a community consensus to arise, for possible corrections to emerge, and for a reasonable retractionless period to elapse. This hesitancy is a potential cost of a faster but less trustworthy system of scholarly publishing. It may explain part of the recently discussed trend to cite older papers at a higher rate, as modern publishing is viewed as more ephemeral and less reliable. The publication event may occur earlier, but community acceptance of results is slowed to some degree. In communication theory terms, the sender speaks earlier, but the listener assesses what she heard far longer than before, and waits even longer to act on the information, awaiting confirmation of one type or another.

On the journal front, the burden of corrections, retractions, and near-misses on editorial offices and editorial staff is not insignificant, and an increased rate of corrections and retractions also places more of a burden on production systems. These are the systems also being reengineered to be faster, so there is a contradiction at the heart of this race for pace — we are robbing from Thoughtful Peter to pay Hasty Paul.

Dealing with claims that might lead to a correction or retraction can be simple and straightforward (e.g., a misnumbered figure is easily corrected), but each event is usually more complex and fraught than that. Claims of bias, sloppy methodology, fraud, or plagiarism require some level of assessment, investigation, and consensus by the editorial staff. Often, it can take weeks of work to assess a claim well, especially if it has some credence. Would this time have been better spent at the front end, during review? Did the “need for speed” create a downstream time-suck? At many journals that have accelerated, the rates of retractions and corrections rise, redistributing editorial work from the pre-publication zone to the post-publication zone. Clearly, speed can come at a cost.

There may be another cost, which is in the credibility of the process as perceived by authors. In a recent blog post, Michelle Kelly-Irving, a social epidemiologist by day, writes about:

. . . the industrial numbers of paper submissions that [for-profit open access] journals receive and attempt to manage. I was shocked by this industrialisation of research – these types of journals come across as the battery-chicken farmers of academic work – with endless numbers of papers waiting to be managed in the quickest possible way so that the next one could be dealt with.

An assembly line of papers is not conducive to careful evaluation and serious reflection on the myriad issues that arise from any research project. Stopping to think is something we do less and less, but something either we or our readers (in our stead) need to do more and more, especially with publications coming out faster and conclusions often inviting logical leaps.

One of my favorite stories along these lines involves the finding that US counties that were majority Republican, sparsely populated, and mostly rural were also the counties with the highest rates of kidney cancer. This led some to speculate actively that less access to medical care, lower educational attainment, and higher consumption of alcohol and tobacco drove the high rates. However, the counties with the lowest rates of kidney cancer were also majority Republican, sparsely populated, and mostly rural. How could this be? Because such counties have low populations, two things happen — they are less likely to have any people with kidney cancer, giving them a greater predominance of 0% rates, and when there is a case of kidney cancer, their low populations mean the resulting rate is much higher than it would be in a populous county. Basically, it’s a statistical anomaly, and one that has fueled many misguided political and social trends.
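For readers who want to see the anomaly for themselves, here is a minimal simulation sketch in Python — all county sizes and the disease rate are invented for illustration. Every county shares the same true rate, yet chance alone puts small counties at both extremes of the observed rates.

```python
import numpy as np

# A minimal sketch (invented numbers): every county shares the same true
# disease rate, yet the observed extremes land in the smallest counties.
rng = np.random.default_rng(0)
TRUE_RATE = 1e-4  # identical underlying risk everywhere (an assumption)

# Hypothetical mix of county sizes, from tiny rural to large urban.
populations = rng.choice([500, 2_000, 10_000, 100_000, 1_000_000], size=3_000)
cases = rng.binomial(populations, TRUE_RATE)  # random case counts per county
rates = cases / populations

order = np.argsort(rates)
print("Populations of the 10 lowest-rate counties: ", populations[order[:10]])
print("Populations of the 10 highest-rate counties:", populations[order[-10:]])
# Small counties dominate both extremes. A county of 500 with one case
# shows a rate of 0.002 -- twenty times the true rate -- while a 0% rate
# (zero cases) occurs only in small counties. Counties of 1,000,000
# cluster tightly around the true rate of 0.0001.
```

Nothing about politics or rurality enters the model; the extremes are driven entirely by small denominators.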

Pushing for speed within the publication process may be putting a greater onus on our readers, eroding our brands, increasing skepticism/cynicism around the publication process, and diminishing the role of editors and publishers through a corrosive/erosive process. Maybe we should pause, rethink, and reassess the value of the filters we have created and how best to support, strengthen, and sustain them. Will a week longer make a huge difference? In which direction? Whose risk increases?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

10 Thoughts on "Slow and Steady — Taking the Time to Think in the Age of Rapid Publishing Cycles"

Thank you for this column. Scholarly book publishers are feeling the same “need for speed,” to quote Lightning McQueen. I will share your articulate argument for maintaining a quality workflow with my staff and series editors.

Your article quickly goes beyond the “Ig Nobel” and addresses the critical issues of academic publishing: the rate and quality of publishing, and the weak or problematic nature of the current process of peer review of articles. It is this latter — more a formality, and in many cases lacking in substance — that needs to be faced, along with its tacit acceptance out of custom and the apparent lack of an alternative measure, “impact factor” included. You have well defined the fragile underbelly of the system.

Wikipedia and some segments of the grey network have started to address the issue through post-publication review, where there are many more eyes on what is in the literature. The arrival of search engines that can scour the literature — from article titles to parsed full text — across conventionally curated journals as well as what might be termed fugitive publications reflects the changing landscape and exposes knowledge to a larger community for judgment.

As you note, the volume of publications, coupled with the difficulty — particularly in STM — of duplicating both research and complete analysis, weakens the ability to conduct deep peer review in a timely and cost-effective manner. The demands of authors and the mechanized process of journal publishing on schedule militate against the current effort. But then, change does not come without cost to all with a vested interest in the status quo, as Christensen’s work well shows. And the pub/perish industry has a deeply entangled set of individuals and enterprises who have been struggling with the changes that are at the gates.

The trade-offs are interesting to watch, as editors, reviewers, and readers are all spending more time assessing or double-checking or dealing with unexpected controversies around what has been published. The efficiency of this is questionable, especially as “publication speed” leads to an increasingly slippery literature. Efficiency is a net number — it has to encompass the entire system, not just one part. If we are net less efficient, we should reconsider whether we’ve introduced new sources of inefficiency unwittingly.

Admittedly, it’s a terrible comparison, but I couldn’t help but think of Henry Ford’s assembly line while reading this article, and the negative effect this had on the coachbuilders of the time. Fast forward 100 years, and we see that this innovation was the best thing that could have happened to that industry. And despite the initial upheaval, the auto industry has managed to retain a diverse set of brands, including some modern coachbuilders that still take a slow, handcrafted approach to the construction of automobiles.

There is certainly pressure on all publishers to speed things up, and that’s not necessarily a bad thing. But “faster” and “fast” are two different concepts, and it’s up to each publisher to decide what kind of brand they are going to be. Are you focused on serving as many individuals in your scientific community as possible? Then maybe you’ll sacrifice some substance for speed. Is the best scientific research your ultimate goal, despite the time or expense? Then slow down and get things right.

Looking again at automobiles, even the modern coachbuilders of today have embraced new manufacturing technologies to improve reliability, reduce time to market, and produce more units at a certain high level of quality. They just don’t sacrifice everything in the name of speed. So even though their products cost more, enough consumers perceive the brand as being “worth it” to keep these manufacturers in business. (I’m comparing cost to time here, which again, is not perfect, but I’m sure you get the point.) It’s up to us as publishers to convince our customers (authors, in this case) that our brand is worth the extra time, if that’s the business model we select.

Certain disciplines may limit the number of business models available to a publisher, and not every publisher will be able to make the transition to the next generation of publishing. But that’s life (and business). For all the successful car companies left in the world today, there are far more that failed. But learning from the mistakes of others is the first step toward success. A little bit of luck wouldn’t hurt either…

As you acknowledge, there are flaws in this comparison. What Ford foresaw and operationalized was the ability to make standardized parts come together in a standardized manner. The main variable that remained was human, and manufacturing robots have taken most of those out of the automobile manufacturing process now, at least for the big stuff. Cars are great these days, and they get better every year.

The main flaw is that inputs into the scholarly publishing process are far from uniform, making the process not nearly as amenable to standardized approaches. We have to some degree convinced ourselves that it can be an industrial process, but that’s where we’re seeing more mistakes creep in and an erosion of trust. Intellectual outputs require intellectual work, which is not the same as manufacturing work. Taking the time to think about what we’re publishing is something that is occurring less and less, and something that is incredibly undervalued, in my estimation — to some degree because we have assumed moving papers through can be reduced to a production process.

And we’re now full-circle, having spent a little more time thinking that one through.

I am not sanguine that this meets the concerns in the opinion piece cited in the Frontiers article. First, only the rich can afford a hand-crafted and detailed review — a manifestation of Ross Ashby’s Law of Requisite Variety. It’s like Alexander the Great having Aristotle as a personal tutor. Alternative solutions are needed to attenuate the problem. It’s like a department store having only a few clerks for many customers.

Speed can be increased by the use of artificial intelligence. Watson and its descendants are able to scan from title to text and compare — maybe even make decisions on manuscripts. There are folks looking at AI to reduce the time spent processing legal cases outside of regular courts. Submit your article knowing that it may receive a more careful review by AI.

The real issue is the increased volume of articles that are lacking in quality and are being pushed through the system for pub/perish reasons on the part of all. One wonders if AI might actually reduce the number of journals, where an AI rejection rate raises standards — even with appeal to a human — and thus the volume of persiflage is attenuated. It is clear that there are similar journals, some of which could absorb the others as volume becomes more rational.

While we’re on terrible metaphors: this quite-on-point paper likens modern scientific communication to soccer vs. (American) football. [Using those names in that manner just to avoid confusion.]

http://www.ncbi.nlm.nih.gov/pubmed/25163621

Section 3, especially, is relevant to the present discussion.

Another relevant point regarding this paper is that I have it on good authority (personal communication with the first author) that the metaphor started out as a joke that he tried to get through the reviewers, and they actually liked it and started arguing with him as to the details, so the “joke” ended up being amplified, even to the point of ending up in the title!

Incidentally, the same concept led to simulations that may be on point to recent posts herein as well:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010782

where a simple simulation identifies “a ‘sweet spot’ between the points of very limited and very strict requirements for pre-publication review.”
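That sweet-spot intuition is easy to reproduce with a toy model of my own — emphatically not the authors’ simulation, and every distribution and payoff below is invented. Reviewers see each paper’s quality only through noise, and the net value of the published literature peaks at an intermediate acceptance threshold:

```python
import numpy as np

# Toy model (invented numbers, not the cited paper's simulation):
# each submission has a true quality; reviewers observe it with noise
# and publish anything whose perceived quality clears a threshold.
rng = np.random.default_rng(1)
N = 100_000
quality = rng.normal(0.0, 1.0, N)              # true quality (>0 = "good")
perceived = quality + rng.normal(0.0, 1.0, N)  # noisy reviewer judgment

def net_value(threshold):
    """Published good papers add value; bad ones subtract it (cleanup costs)."""
    published = quality[perceived > threshold]
    return np.sum(np.where(published > 0, 1.0, -1.0))

for t in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]:
    print(f"threshold {t:+.0f}: papers {np.sum(perceived > t):6d}, "
          f"net value {net_value(t):8.0f}")
# Very lax review (-2) publishes nearly everything, good and bad alike;
# very strict review (+3) publishes almost nothing. Net value peaks at an
# intermediate threshold -- the "sweet spot."
```

The point is qualitative, not quantitative: some pre-publication filtering beats both extremes whenever review is informative but imperfect.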
