Manuscripts are complicated. They start with a summary, continue with a justification for the work and a comprehensive review of related literature. This introduction is followed by the methodology, arguably one of the most important parts of the paper. Once the “why” and “how” are explained, the paper lays out the results, discussion points, and conclusions.
Each section of the manuscript listed above helps tell the story of why the author bothered to do the work, how they did it, and what they learned from it. Peer reviewers and editors are tasked with casting a critical eye on these parts:
- Is the abstract clear?
- Does the author explain the rationale for undertaking this work (novelty)?
- Is the literature review complete? Does the literature review show a bias toward a specific conclusion?
- Is the methodology sound? Can it be replicated?
- Are the results reasonable given what the reviewer understands from the methods and introduction?
- Are the conclusions consistent with the stated results?
- Are the references complete, free of unnecessary puffery, and drawn from reputable works in the field?
Hold on — what’s that last one?
Reference lists. How closely are reviewers casting a critical eye on reference lists? Well, let’s take a step back and look at some opinions on citations in general.
There are some journals that limit the number of references an author can include in a paper. I never quite understood the point of this and assumed that the rationale had something to do with limiting page counts. It also seems like a useful exercise in figuring out that what you leave out is just as important as what you put in (a lesson taught to me by my freshman political science professor).
This practice of limiting citations does not always sit well with authors. Authors are often faced with requests from peer reviewers to add references to their paper. There are many reasons for this:
- Omissions of important and relevant work from the literature review.
- The author neglected to mention works that are contrary to what the author is presenting.
- The peer reviewer takes advantage of their role and requests that the author add references to the reviewer’s own works.
- The reviewer is familiar with recently published and relevant papers that the author did not include.
- The reviewer or editor suggests papers published in the journal in order to boost its Impact Factor.
What you don’t often see are reviewers asking an author to remove references. I understand from my friends in the humanities that reviewers there often heavily edit and review reference lists, but I haven’t seen evidence (anecdotal or otherwise) that this is a widespread practice in STM journals.
Arguments have been made regarding what a citation actually means. This discussion is coming to the forefront with the popularity of preprint servers. The question being asked is whether researchers should cite preprints that have not undergone peer review. I am not going to debate that here, but the next leap is whether journals should allow preprints to be cited and if so, should there be a clear indication that the paper has not been reviewed?
Is a citation a vote of approval for the cited work? Is it an acknowledgement of work contributing to the overall body of knowledge?
Did you know that there is an inflated importance of article citations? Of course you did. We talk about it all the time — in posts, in editorial board meetings, on op-ed pages, etc. We know that funding agencies use citations, that tenure committees use citations, and that authors in some countries are paid modest to huge bonuses for publishing papers in journals that have strong citation performances.
Recently, a debate started around the role that so-called “predatory journals” play in this vicious citation game. Bloomberg Businessweek published an article about pharmaceutical companies using Omics, a journal publisher sued by the Federal Trade Commission for deceiving researchers, to publish shoddy studies. After spending many column inches describing the complaints and charges against Omics, the article drops this bomb:
Bloomberg Businessweek found that researchers at major pharmaceutical companies, including AstraZeneca, Bristol-Myers Squibb, Gilead Sciences, and Merck, submit to Omics journals and participate in their conferences. Pfizer, the biggest U.S. drugmaker, has published at least 23 articles since 2011, including two since the FTC’s lawsuit.
The article goes on to posit that pharmas publish papers in these journals that won’t likely withstand the scrutiny of a quality medical journal. The pharmas benefit because the Omics journals are open access and they assume clinicians will have an easy time finding them via a Google search.
But is there something else going on? Again, from the Businessweek article:
Jeffrey Curtis, a physician and professor at the University of Alabama at Birmingham, worked on a rheumatoid arthritis study with Bristol-Myers that was published in an Omics journal within two weeks of submission. Companies “are often in more of a hurry and are willing to accept lower-tier journals,” Curtis says. “They want a citation. They want someone to be able to reference it and have it be official.”
The pharmas are counting on other researchers to use the work and cite the work. Those researchers may publish in very reputable medical journals. So back to the question of what a citation actually means. Is it a vote of approval? These citations often go unflagged because predatory publishers, notably Omics, own many journals with titles that are almost identical to those of real journals.
The implications for modern medicine are huge, but we are seeing this on a smaller scale as well.
Researchers who live or die by citation metrics — untenured faculty and soon-to-be-unemployed or underfunded postdocs — use those metrics to prove their value. We even have an index for it: the h-index.
We see evidence that the h-index is gaining importance, with many scholars including it on their CVs and in various applications. The h-index is a particularly nasty metric in this respect, as authors can game it independently — no journal cooperation necessary.
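For readers who haven’t worked with it directly, the h-index is the largest number h such that the author has h papers with at least h citations each. A minimal sketch of the calculation, using hypothetical citation counts, shows how cheaply a few strategic self-citations can move the number:

```python
def h_index(citation_counts):
    """Return the largest h such that h papers each have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([10, 8, 5, 4, 2, 1]))  # 4

# Top up the three weakest papers to 5 citations each with self-citations:
print(h_index([10, 8, 5, 5, 5, 5]))  # 5
```

Note that only the papers near the threshold need topping up, so the marginal cost of each extra h point is a handful of self-citations aimed at the author’s weakest work.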
I have recently discovered several papers where the authors are adding self-citations to a paper in review. The paper may have one or two self-citations in the original submission. A revision may come back with 15 more. And a second revision may have another 30 added. During the production process, still more may be added. The final published paper is now an h-index factory.
In these instances, I assume that the authors are counting on the editors and reviewers NOT reviewing the references after the initial review. Why would they? It’s not uncommon for a second review (if there is one) to be nothing more than a verification that requested changes were made. The editor and reviewers trust that the authors are not trying to sneak something past them.
With a recent paper I reviewed from an ASCE journal, many of the added citations came from papers the author published in Omics journals as well as in other well-known questionable journals. Even if the references were carefully scrutinized, would the reviewers recognize that Irrigation and Drainage Systems Engineering is not ASCE’s Journal of Irrigation and Drainage Engineering? Would they notice that Int. J. Hydraul. Eng. is not the same as J. Hydraul. Eng.? Our incessant need to abbreviate journal titles in the references to within an inch of their lives is certainly not helping.
The gaming of the h-index led one group of researchers to propose a new index: the s-index. The authors point out the following in the abstract:
Incorporating superfluous self-citations in one’s writings requires little effort, receives virtually no penalty, and can boost, albeit artificially, scholarly impact and visibility, which are both necessary for moving up the academic ladder.
I don’t disagree with this argument, but the creation of a new index may serve to muddy the waters even more. The lead author, Justin Flatt, had trouble defining “superfluous self-citations.” In an interview with Richard Poynder, Flatt agrees that the research community needs to form a consensus around acceptable levels of self-citation.
Phil Davis argues that creating a new index is not the best solution. He recommends that the h-index be coupled with an h-index computed without self-citations. Think of it as an h-index and an h – s index. A wide gap between the two numbers would show that a large portion of a researcher’s h-index comes from self-citations. Knowing this may tell a reader whether an author is a bona fide legend or just a legend in their own mind.
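Davis’s pairing is straightforward to compute: run the same h-index calculation twice, once over total citation counts and once with self-citations subtracted out, and report the gap. A sketch, assuming each paper’s record distinguishes total citations from self-citations (the data and function names below are hypothetical):

```python
def h_index(citation_counts):
    """Return the largest h such that h papers each have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h_and_h_minus_s(papers):
    """papers: list of (total_citations, self_citations) tuples, one per paper.
    Returns (h, h_minus_s, gap)."""
    h_all = h_index([total for total, _ in papers])
    h_external = h_index([total - self for total, self in papers])
    return h_all, h_external, h_all - h_external

# Hypothetical researcher whose weaker papers are propped up by self-citations:
papers = [(10, 0), (8, 1), (5, 3), (5, 4), (5, 5), (4, 3)]
print(h_and_h_minus_s(papers))  # (5, 2, 3)
```

Here the headline h-index of 5 collapses to 2 once self-citations are removed; the gap of 3 is exactly the signal Davis suggests readers look for.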
Now that we have established the ways that researchers can inflate their h-index and capitalize on self-citations, we should talk about some solutions.
Critical review of the reference section is warranted. I looked at reviewer instructions across multiple publishers. Elsevier lists “references” as something that should be reviewed but doesn’t say for what. PNAS and Wiley don’t mention references in their instructions. BMJ, Cell Press, and Taylor & Francis all recommend that reviewers ask whether there are any glaring omissions in the references, but none asks reviewers to evaluate the references for quality or appropriateness.
I suggest that journals and editors consider the following steps in ensuring reference lists are helpful and appropriate:
- Reviewers should be asked to look for gratuitous self-citations and ask the authors to justify the inclusion of those references in their rebuttals to reviewer comments.
- Subsequent versions of reviewed articles should be evaluated for inappropriate references being added. This could be a reviewer task, editor task, or staff task.
- References should be scanned for citations to known “predatory” journals. Once identified, an editor or reviewer can make a determination whether it’s appropriate to include. Conducting this scan would be labor intensive. It has been rumored that Cabell’s, who launched the journal blacklist earlier this year, is working on a tool for scanning reference lists for this purpose.
- Let’s agree to stop abbreviating journal titles in references. Predatory publishers are taking advantage of journal brands by launching titles with small tweaks to well-known names. Abbreviating journal titles in the references abets that confusion and also hurts a journal’s search engine optimization.
- Gratuitously citing your own work in inappropriate contexts should be considered an ethical issue and dealt with as such. COPE currently has no guidelines on this issue, but journals can take a leadership role in curbing this behavior.
- Journals that discover gratuitous and inappropriate self-citations after a paper is published should publish a correction noting that the identified references should not have been included, as they are not relevant to the paper.
- There should be zero tolerance for journal editors who insist on superfluous citations to their own journal. Likewise, reviewers should not be permitted to provide a laundry list of their own works unless those works are absolutely necessary for the paper.
If citations matter, then they matter. It seems that we, as a community of researchers and publishers, have determined that they do. Support for the Initiative for Open Citations seems to prove that point. If citations are valuable, and the metrics built on them are used to make massive decisions about funding science and who gets promoted, then we need to stop ignoring reference lists and start casting a more critical eye on what’s going on there.