Editor’s Note: Today’s post is by Sneha Kulkarni. Sneha is the Managing Editor of Editage Insights at Editage, Cactus Communications. Her passion for bridging the communication gap in the research community led her to her current role of developing and designing content for researchers and authors. She writes original discussion and comment articles that provide researchers and publishers with a platform to voice their opinions. The complete list of her published content can be found here: http://www.editage.com/insights/users/sneha-kulkarni.
Is there such a thing as too much science? The volume of global scientific output has increased exponentially. In 2016, close to three million papers were published by researchers around the globe, according to a recently published report by the U.S. National Science Foundation. We are witnessing a boom in science publications that is both exciting and concerning. Is the deluge of scientific publications taking us closer to unraveling unanswered questions? Or is it adding to the noise, making the truly significant publications harder to identify?
The motives behind publishing research have changed rapidly. Beyond the personal gratification of making one’s findings known and helping the field progress, an unhealthy competitiveness has intensified the pressure to publish. In the cutthroat arena that science has become for many researchers, lengthening the list of publications on one’s CV has become a necessity for survival. Despite the criticism of academia’s “publish or perish” culture, publication output continues to be the primary measure of a researcher’s success. An applicant’s publication record is one of the most important factors funders consider in decisions to grant or renew funding. Similarly, for hiring decisions, promotions, or even salary hikes, institutions tend to favor those with lengthy publication lists. Some universities have gone as far as offering incentives for publications in high impact factor journals. Therefore, in their attempts to demonstrate worth and productivity, researchers scramble to add publications to their names. It could be said that the competitive culture of academia leaves researchers with no option but to churn out papers rapidly and consistently.
In this race to publish more, some end up cutting corners. They indulge in unethical publication practices that are hard to detect at the publication stage, such as slicing a set of findings into several papers or publishing results that haven’t been thoroughly verified. Another concerning repercussion is the rise of fragmented authorship, or the “hyper-authorship” phenomenon, wherein a single paper lists up to hundreds of co-authors. Of course, multiple-author papers are a cultural norm in certain disciplines such as physics and biomedicine, but shared authorship has also been rising in fields such as the social sciences. Discerning the contribution of each author on such papers can be arduous for grant and tenure review committees. Initiatives such as Project CRediT, which defines a taxonomy of contributor roles, can help distinguish the contributions of all authors listed on a paper and bring clarity to attribution.
Apart from this, the publishing boom has presented researchers with a very real challenge. With hundreds of papers being churned out every month, keeping up with the publications in one’s own field is becoming difficult for many researchers. “I can’t even keep up with my own relatively specialized field: the euro,” says Professor Jesper Jespersen, an economist at Roskilde University, Denmark. “I think it’s difficult to see [which papers] are really good, and [which papers] are just repetition,” he adds. Turning what could have been a single publication into a series of papers might advance a researcher’s career aspirations, but it can be disastrous for the field, as it can drown out meaningful results.
Discovering relevant literature in the expanding library of publications is becoming a real issue for researchers. Even with effective discovery tools, the sheer volume of search results can be overwhelming. And imagine going through a staggering number of these results only to discover that most of the studies listed are repetitive or weak! We are thus faced with a multifaceted problem: researchers are adding to the global publication output, but are they really contributing to science?
The research deluge only exacerbates the problem of low uptake of replication studies in scholarly publishing. With the constantly increasing publication output, it would be unrealistic to replicate every published study. And, of course, most researchers are inclined towards publishing novel and groundbreaking results rather than tinkering with a study that has already been published.
At the journal end, the swelling volume of publications can put immense pressure on editors, who need to identify quality studies. Journals are flooded with submissions, and telling genuine studies from plagiarized ones can be challenging. No wonder, perhaps, that salami-sliced publications and other studies that do little to push the existing boundaries of knowledge still end up getting published.
And that’s not all! The research deluge affects all stakeholders in the system; consider, for example, libraries. Around the world, libraries have to walk a tightrope between strained budgets and the ever-growing number of new journals launched to accommodate the burgeoning research output. Adding new titles to already expensive subscription packages is a tough proposition for university libraries. And is it really worth it for librarians to even consider adding new titles if much of the research published in them may not be making a significant contribution?
The cumulative effect of the mass production of less valuable publications is straining science heavily. From using up often-dwindling science budgets to burdening the scholarly community with too much information, the mass production of research can have a far-reaching impact. Countries across the world are increasing their R&D expenditure. However, for this investment to provide tangible returns, the misplaced focus on the volume of publications and the average output of researchers needs to be redirected toward the impact a publication is likely to have on its field. Funding bodies and institutions should encourage researchers to publish fewer but better papers, ones that adhere to good publication practices and are likely to add significantly to the current body of knowledge on the topic. To pick the best research for funding, some granting committees and institutions allow researchers to submit only a select few papers that best represent their work. Making this the norm would help ensure that only the best research (and not necessarily a long list of publications) would earn a researcher the grant, the job, or the position he or she is vying for.
Every year brings a new crop of researchers, and every day sees the uncovering of new knowledge about the world. So while more research is welcome, we need to pause and consider whether the endless stream of publications is leading us to the solutions necessary for a better future.