The Ig Nobel Prizes are favorites among many scientists and science watchers. They provide a much-needed spark of insouciance to otherwise dour and serious proceedings. The Ig Nobels accomplish this by identifying studies that are, on the surface, ridiculous, but which ultimately take us on a wonderful journey toward the sublime. As the founders say, “The Ig Nobel Prizes honor research that first makes people laugh, then makes them think.”
Recent examples include studies of whether it is mentally hazardous to own a cat. On the surface, the question appears to be silly, emanating from stereotypes of eccentric “cat ladies.” However, dig deeper into the question, and you learn that it’s a serious question about a feline-borne parasite (Toxoplasma gondii, which causes toxoplasmosis). The parasite infects the brain and can lead to certain behaviors (possibly schizophrenia, possibly depression), including a potential propensity to hoard cats, which helps the parasite thrive. Two teams — one from the US and one from Europe — pursued this question, so split the Ig Nobel Prize for the work.
Listening to this discussed during an interview on StarTalk Radio with the founder of the Ig Nobels, I was struck with how much sustained discussion and reflection these simple and creative studies fostered. Even the process of choosing the Ig Nobels — a process consisting of sorting through published research — seems to require a lot of time and thought.
Contrast this with the trend to produce and publish studies faster, and the increasing insistence from authors on rapid publication.
The purported benefits of rapid publication include elimination of unnecessary production cycles, delivery of information to scientists who can use it, and the ability for science as a whole to advance more quickly with faster access to new research findings.
Publication is certainly a major threshold for information to cross. But does pushing for “faster” come at a cost?
We see the increase in retractions, driven partly by fraudsters who publish many studies quickly, which leads to batches of studies being retracted once a handful prove fraudulent. We see a mega-journal publishing 35,000 papers per year, making it nigh impossible to evaluate its quality on a daily basis, much less overall. We see readers feeling more and more overwhelmed, reviewers burning out, and editors more and more stressed. The publishing world is moving faster and faster, and the ledger doesn’t seem to come out net positive.
We also see how variable the implementation of new research is by field, and how variable the reasons for faster publication can be across science. In some fields, there are patent concerns, which make primary publication very important for the owners of the intellectual property. In other fields, research is progressing so rapidly that faster publication is a competitive advantage for all involved. In medicine particularly, some papers are pushed ahead for competitive reasons at times, for public health reasons at other times. Overall, publishers view faster publication as an advantage and a necessity. It’s just a matter, after all, of tightening up production steps and eliminating wasted time.
Yet, again and again, we find that what at first appears simple and straightforward is actually complex and challenging.
Peer review is viewed by some as a simple matter of sending a manuscript to a couple of qualified people, getting their comments back, and taking their recommendations forward. Of course, it is much more complex than it seems at first. Finding qualified people is usually a major chore in and of itself. Some journals, especially mega-journals, seem to take shortcuts even here. Then there are the issues of managing peer reviewers, who often are peripatetic, with multiple email addresses depending on where they are; who miss deadlines; who forget to send the attachments; who can’t manage their spam filters; and so forth. Then reviewers have to be assessed for timeliness and quality — reviewing the peer reviewers takes time and effort. And most journals don’t take peer reviewer comments without qualification — they are usually concatenated and interpreted through the further review of editorial staff and senior editors.
It’s more fun to think about peer review as a simple process that flows smoothly along magical silver rails, but it is far from that. Think it through, and you realize how much work, effort, and complexity running peer review for 1,000-50,000 manuscripts entails. Creating fast-track systems within this flurry of work is a major effort, and securing the right reviewers at the right time can create significant logistical challenges.
Production systems are full of dependencies, making a push here subject to an equal and opposite reaction in another section. Faster publication of selected manuscripts can slow down production for others. The net time savings may be closer to zero across the production system than is immediately apparent. Making the production system faster overall often requires major reengineering efforts (direct-to-XML processes, CMS modifications, and potentially additional staff, just to name a few). Vendor contracts may need to be renegotiated, platform provider relationships retooled, and new notification systems and services created.
The potential downside to all this is less apparent, but comes in the form of more corrections, retractions, and near-misses on both fronts.
What’s interesting to me is that the time and effort spent dealing with these events and correcting or modifying the literature are not insignificant, and the increased rate of occurrence is quite likely making scientists less willing to accept initial reports at face value. They wait for enough time to pass for a community consensus to arise, for possible corrections to emerge, and for a reasonable retractionless period to pass. This hesitancy is a potential cost of a faster but less trustworthy system of scholarly publishing. It may explain part of the recently discussed trend to cite older papers at a higher rate, as modern publishing is viewed as more ephemeral and less reliable. The publication event may occur earlier, but community acceptance of results is slowed to some degree. In communication theory terms, the sender speaks earlier, but the listener assesses what she heard far longer than before, and waits even longer to act on the information, awaiting confirmation of one type or another.
On the journal front, the burden of corrections, retractions, and near-misses on editorial offices and editorial staff is not insignificant, and an increased rate of corrections and retractions also places more of a burden on production systems. These are the systems also being reengineered to be faster, so there is a contradiction at the heart of this race for pace — we are robbing from Thoughtful Peter to pay Hasty Paul.
Dealing with claims that might lead to a correction or retraction can be simple and straightforward (e.g., a misnumbered figure is easily corrected), but each event is usually more complex and fraught than that. Claims of bias, sloppy methodology, fraud, or plagiarism require some level of assessment, investigation, and consensus by the editorial staff. Often, it can take weeks of work to assess a claim well, especially if it has some credence. Would this time have been better spent at the front end, during review? Did the “need for speed” create a downstream time-suck? At many journals that have accelerated, the rates of retractions and corrections rise, redistributing editorial work from the pre-publication zone to the post-publication zone. Clearly, speed can come at a cost.
There may be another cost, which is in the credibility of the process as perceived by authors. In a recent blog post, Michelle Kelly-Irving, a social epidemiologist by day, writes about:
. . . the industrial numbers of paper submissions that [for-profit open access] journals receive and attempt to manage. I was shocked by this industrialisation of research – these types of journals come across as the battery-chicken farmers of academic work – with endless numbers of papers waiting to be managed in the quickest possible way so that the next one could be dealt with.
An assembly line of papers is not conducive to careful evaluation and serious reflection over the myriad issues that arise from any research project. Stopping to think is something we do less and less, but something either we or our readers (in our stead) need to do more and more, especially with publications coming out faster and often conclusions inviting logical leaps.
One of my favorite stories along these lines involves the finding that US counties that were majority Republican, sparsely populated, and mostly rural were also the counties with the highest rates of kidney cancer. This led some to speculate actively that less access to medical care, lower educational attainment, and higher consumption of alcohol and tobacco led to the high rates. However, the counties with the lowest rates of kidney cancer were also majority Republican, sparsely populated, and mostly rural. How could this be? Because such counties have low populations, two things happen — they are less likely to have any people with kidney cancer, giving them a greater predominance of 0% rates, and when there is a case of kidney cancer, their lower populations mean the resulting rate is much higher than it would be in a populous county. Basically, it’s a statistical anomaly, and one that has led to many misguided political and social trends.
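This small-numbers anomaly is easy to reproduce in a simulation: give every hypothetical county the same true incidence rate, and the most extreme observed rates, both the highest and the lowest, still land in the smallest counties. The county sizes, rate, and random seed below are all illustrative, not drawn from the actual kidney cancer data.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_RATE = 1e-4  # assumed uniform underlying incidence (illustrative)

# Hypothetical mix of county populations: mostly small, a few large
pops = rng.choice([500, 2_000, 50_000, 1_000_000],
                  size=3_000, p=[0.4, 0.3, 0.2, 0.1])
cases = rng.binomial(pops, TRUE_RATE)   # observed cases per county
rates = cases / pops                    # observed incidence per county

order = np.argsort(rates)
print("mean population, 50 lowest-rate counties :", pops[order[:50]].mean())
print("mean population, 50 highest-rate counties:", pops[order[-50:]].mean())
# Both extremes are dominated by small counties, even though every
# county shares the same true rate.
```

In a 500-person county, a single case yields a rate twenty times the true one, while zero cases yield a rate of exactly zero; the million-person counties settle near the true rate. Nothing about the counties differs except population size.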
Pushing for speed within the publication process may be putting a greater onus on our readers, eroding our brands, increasing skepticism/cynicism around the publication process, and diminishing the role of editors and publishers through a corrosive/erosive process. Maybe we should pause, rethink, and reassess the value of the filters we have created and how best to support, strengthen, and sustain them. Will a week longer make a huge difference? In which direction? Whose risk increases?