Four years ago, I wrote about the importance placed on the reference lists of published works. I explained that citations are the currency of scholarly communication, and yet editorial attention to reference lists is not as thorough as those lists deserve. Since writing that post, several new tools have become available to further scrutinize these heavily produced and yet editorially ignored citations.

A few weeks ago, my good friend and colleague Bruce Rosenblum sent me a message about this 2017 Scholarly Kitchen post. He wrote that four years ago, I asked for a tool that would automatically review and flag references to so-called predatory journals, and now my wish has been granted! These kinds of messages don’t come often, so I was excited to learn more. Edifix is an automated reference editing and tagging tool from Inera. Using Cabells Predatory Reports, the Edifix tool will flag references in a paper that appear to come from problematic journals. From there, further editorial review can commence.

As I documented in the 2017 post, a single journal article inflated with self-citations can become an h-index factory. Further, many problematic journals purposefully use titles that are close to real journal titles and nearly indistinguishable once we abbreviate the heck out of them.

Another new development since that post is the Scite feature that can, among other things, flag citations to retracted papers or papers with published expressions of concern. Using PubMed, CrossRef, and the Retraction Watch database of retracted papers, the tool will flag a reference to a retracted or corrected paper, allowing for further editorial scrutiny.
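
For illustration, here is a minimal sketch of the kind of check such a tool performs, assuming a local CSV export of retraction data. The file name and the "OriginalPaperDOI" column are hypothetical stand-ins; the real services query PubMed, CrossRef, and the Retraction Watch database directly:

    import csv

    def load_retracted_dois(path):
        """Build a set of DOIs for retracted papers from a CSV export.
        The 'OriginalPaperDOI' column name is an assumption about the export."""
        with open(path, newline="", encoding="utf-8") as f:
            return {row["OriginalPaperDOI"].strip().lower()
                    for row in csv.DictReader(f)
                    if row.get("OriginalPaperDOI")}

    def flag_retracted_references(reference_dois, retracted_dois):
        """Return the manuscript reference DOIs that appear in the
        retraction list, so an editor can take a closer look."""
        return [doi for doi in reference_dois
                if doi.strip().lower() in retracted_dois]

    # Hypothetical usage:
    # retracted = load_retracted_dois("retraction_watch_export.csv")
    # print(flag_retracted_references(["10.1000/example.123"], retracted))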

Automated tools to help flag references are absolutely the direction we should be moving in, and I am pleased to see these developments. No doubt, the willingness of publishers to open and share their full reference lists via CrossRef and the I4OC initiative will shed further light on areas in need of improvement.


Manuscripts are complicated. They start with a summary, then continue with a justification for the work and a comprehensive review of related work on the topic. This introduction is followed by the methodology, arguably one of the most important parts of the paper. Once the “why” and “how” are explained, the paper lays out the results, discussion points, and conclusions.


Each section of the manuscript listed above helps to tell the story of why the author bothered to do the work, how they did it, and what they learned from it. Peer reviewers and editors are tasked with casting a critical eye on these parts:

  • Is the abstract clear?
  • Does the author explain the rationale for undertaking this work (novelty)?
  • Is the literature review complete? Does the literature review show a bias toward a specific conclusion?
  • Is the methodology sound? Can it be replicated?
  • Are the results reasonable given what the reviewer understands from the methods and introduction?
  • Are the conclusions consistent with the stated results?
  • Are the references complete and devoid of unnecessary puffery with citations to reputable works in the field?

Hold on — what’s that last one?

Reference lists. How closely are reviewers casting a critical eye on reference lists? Well, let’s take a step back and look at some opinions on citations in general.

There are some journals that limit the number of references an author can include in a paper. I never quite understood the point of this and assumed that the rationale had something to do with limiting page counts. It also seems like a useful exercise in figuring out that what you leave out is just as important as what you put in (a lesson taught to me by my freshman political science professor).

This practice of limiting citations does not always sit well with authors. Authors are often faced with requests from peer reviewers to add references to their paper. There are many reasons for this:

  • Omissions of important and relevant work from the literature review.
  • The author neglected to mention works that are contrary to what the author is presenting.
  • The peer reviewer takes advantage of their role and requests that the author add references to the reviewer’s own works.
  • The reviewer is familiar with recently published and relevant papers that the author did not include.
  • The reviewer or editor suggests papers published in the journal in order to boost its Impact Factor.

What you don’t often see are reviewers asking an author to remove references. I understand from my friends in the humanities that reviewers there do often heavily edit and review reference lists, but I haven’t seen evidence (anecdotal or otherwise) that this is a widespread practice in STM journals.

Arguments have been made about what a citation actually means. This discussion is coming to the forefront with the popularity of preprint servers. The question being asked is whether researchers should cite preprints that have not undergone peer review. I am not going to debate that here, but the next leap is whether journals should allow preprints to be cited and, if so, whether there should be a clear indication that the paper has not been reviewed.

Is a citation a vote of approval for the cited work? Is it an acknowledgement of work contributing to the overall body of knowledge?

Did you know that inflated importance is placed on article citations? Of course you did. We talk about it all the time: in posts, in editorial board meetings, on op-ed pages, and so on. We know that funding agencies use citations, that tenure committees use citations, and that authors in some countries are paid modest to huge bonuses for publishing papers in journals with strong citation performance.

Recently, a debate started around the role that so-called “predatory journals” play in this vicious citation game. Bloomberg Businessweek published an article about pharmaceutical companies using Omics, a journal publisher sued by the Federal Trade Commission for deceiving researchers, to publish shoddy studies. After spending many column inches describing the complaints and charges against Omics, the article drops this bomb:

Bloomberg Businessweek found that researchers at major pharmaceutical companies, including AstraZeneca, Bristol-Myers Squibb, Gilead Sciences, and Merck, submit to Omics journals and participate in their conferences. Pfizer, the biggest U.S. drugmaker, has published at least 23 articles since 2011, including two since the FTC’s lawsuit.

The article goes on to posit that pharmas publish papers in these journals that won’t likely withstand the scrutiny of a quality medical journal. The pharmas benefit because the Omics journals are open access and they assume clinicians will have an easy time finding them via a Google search.

But there is something else going on. Again, from the Businessweek article:

Jeffrey Curtis, a physician and professor at the University of Alabama at Birmingham, worked on a rheumatoid arthritis study with Bristol-Myers that was published in an Omics journal within two weeks of submission. Companies “are often in more of a hurry and are willing to accept lower-tier journals,” Curtis says. “They want a citation. They want someone to be able to reference it and have it be official.”

The pharmas are counting on other researchers to use the work and cite it. Those researchers may publish in very reputable medical journals. So, back to the question of what a citation actually means: is it a vote of approval? These citations often go unflagged because predatory publishers, Omics notably among them, own many journals with titles that are almost identical to those of real journals.

The implications for modern medicine are huge, but we are seeing this on a smaller scale as well.

Researchers living off citation metrics (untenured faculty and underfunded or soon-to-be-unemployed post-docs) use those metrics to prove their value. We even have an index for it: the h-index.

We see evidence that the h-index is gaining importance, with many scholars including it on their CVs and on various applications. The h-index is a particularly nasty implementation because authors can game the system independently; no journal corroboration is necessary.
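
For readers who want the mechanics: an author’s h-index is the largest number h such that h of their papers have at least h citations each. A short sketch (with invented citation counts) shows the computation and why independent gaming is so easy:

    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites < rank:
                break
            h = rank
        return h

    # Ten papers, each cited 3 times by other people: h = 3.
    papers = [3] * 10
    print(h_index(papers))  # 3

    # Add 7 self-citations to every paper and h jumps to 10,
    # with no change in how often anyone else cites the work.
    print(h_index([c + 7 for c in papers]))  # 10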

I have recently discovered several papers where the authors are adding self-citations to a paper in review. The paper may have one or two self-citations in the original submission. A revision may come back with 15 more. And a second revision may have another 30 added. During the production process, still more may be added. The final published paper is now an h-index factory.

In these instances, I assume that the authors are counting on the editors and reviewers NOT reviewing the references after the initial review. Why would they? It’s not uncommon for a second review (if there is one) to be nothing more than a verification that requested changes were made. The editor and reviewers trust that the authors are not trying to sneak something past them.


With a recent paper I reviewed for an ASCE journal, many of the added citations came from papers the author published in Omics journals as well as in other well-known questionable journals. Even if the references were carefully scrutinized, would the reviewers recognize that Irrigation and Drainage Systems Engineering is not ASCE’s Journal of Irrigation and Drainage Engineering? Would they notice that Int. J. Hydraul. Eng. is not the same as J. Hydraul. Eng.? Our incessant need to abbreviate journal titles in references to within an inch of their lives is certainly not helping.

The gaming of the h-index led one group of researchers to propose a new index, the s-index. The authors point out the following in the abstract:

Incorporating superfluous self-citations in one’s writings requires little effort, receives virtually no penalty, and can boost, albeit artificially, scholarly impact and visibility, which are both necessary for moving up the academic ladder.

I don’t disagree with this argument, but the creation of a new index may serve to muddy the waters even more. The lead author, Justin Flatt, had trouble defining “superfluous self-citations.” In an interview with Richard Poynder, Flatt agrees that the research community needs to form a consensus around acceptable levels of self-citation.

Phil Davis argues that creating a new index is not the best solution. He recommends that the h-index be coupled with an h-index minus self-citations. Think of it as an h-index and an h − s index. A wide gap between the two numbers would show that a large portion of a researcher’s h-index comes from self-citations. Knowing this may tell a reader whether an author is a bona fide legend or just a legend in their own mind.
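
Here is a minimal sketch of that coupling, assuming each citation can already be labeled as self or external; real implementations would need author disambiguation across databases, which is the hard part:

    def h_index(citation_counts):
        """Same helper as in the earlier sketch."""
        h = 0
        for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
            if cites < rank:
                break
            h = rank
        return h

    def h_index_gap(papers):
        """papers: list of (total_citations, self_citations) tuples, one per paper.
        Returns (h, h minus self-citations, gap); a wide gap suggests inflation."""
        h_all = h_index([total for total, _ in papers])
        h_ext = h_index([total - self for total, self in papers])
        return h_all, h_ext, h_all - h_ext

    # Invented example: five papers, heavily self-cited.
    print(h_index_gap([(12, 9), (11, 8), (10, 8), (9, 7), (8, 6)]))  # (5, 2, 3)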

Now that we have established the ways that researchers can inflate their h-index and capitalize on self-citations, we should talk about some solutions.

Critical review of the reference section is warranted. I looked at reviewer instructions across multiple publishers. Elsevier lists “references” as something that should be reviewed, but it doesn’t say for what. PNAS and Wiley don’t mention references in their instructions. BMJ, Cell Press, and Taylor & Francis all recommend that reviewers ask whether there are any glaring omissions among the references, but none asks reviewers to review the references for quality or appropriateness.

I suggest that journals and editors consider the following steps to ensure reference lists are helpful and appropriate:

  • Reviewers should be asked to look for gratuitous self-citations and ask the authors to justify the inclusion of those references in their rebuttals to reviewer comments.
  • Subsequent versions of reviewed articles should be evaluated for inappropriate references being added. This could be a reviewer task, editor task, or staff task.
  • References should be scanned for citations to known “predatory” journals. Once identified, an editor or reviewer can determine whether it’s appropriate to include them. Conducting this scan manually would be labor intensive. It has been rumored that Cabells, which launched its journal blacklist earlier this year, is working on a tool for scanning reference lists for this purpose (a sketch of what such a scan might look like appears after this list).
  • Let’s agree to stop abbreviating journal titles in references. Predatory publishers are taking advantage of journal brands by launching titles with small tweaks. Abbreviating journal titles in references abets that confusion, and it also does a number on a journal’s search engine optimization.
  • Gratuitously citing your own work in inappropriate contexts should be considered an ethical issue and dealt with as such. COPE currently has no guidelines on this issue, but journals can take a leadership role in curbing this behavior.
  • Journals that discover gratuitous and inappropriate self-citations after a paper is published should publish a correction noting that the identified references should not have been included because they are not relevant to the paper.
  • There should be zero tolerance for journal editors who insist on superfluous citations to their own journals. Likewise, reviewers should not be permitted to provide a laundry list of their own works unless those works are absolutely necessary for the paper.
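
As promised above, here is a rough sketch of what such a reference scan might look like, using fuzzy title matching from the Python standard library. The blacklist entries are invented examples; a production tool (like the rumored Cabells scanner) would work against a curated database:

    import difflib

    # Invented example entries; a real check would use a curated database
    # such as Cabells Predatory Reports.
    BLACKLIST = [
        "irrigation and drainage systems engineering",
        "international journal of hydraulic engineering",
    ]

    def flag_suspect_titles(cited_titles, blacklist=BLACKLIST, cutoff=0.9):
        """Flag cited journal titles that closely match a blacklist entry.
        Fuzzy matching catches near-identical lookalike titles; heavily
        abbreviated titles defeat it, one more reason to spell titles out.
        The 0.9 cutoff is an arbitrary tuning choice."""
        flagged = []
        for title in cited_titles:
            match = difflib.get_close_matches(
                title.strip().lower(), blacklist, n=1, cutoff=cutoff)
            if match:
                flagged.append((title, match[0]))
        return flagged

    refs = [
        "Journal of Irrigation and Drainage Engineering",  # legitimate ASCE title
        "Irrigation and Drainage Systems Engineering",     # lookalike
    ]
    # Only the lookalike is flagged; the legitimate title falls below the cutoff.
    for cited, entry in flag_suspect_titles(refs):
        print(f"Review needed: {cited!r} resembles blacklisted {entry!r}")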

If citations matter, then they matter. It seems that we, as a community of researchers and publishers, have determined that they do. Support for the Initiative for Open Citations seems to prove that point. If this content is valuable, and if the metrics built on it are used to make massive decisions about funding science, who gets promoted, and so on, then we need to stop ignoring reference lists and start casting a more critical eye on what’s going on in them.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

13 Thoughts on "Revisiting: Turning a Critical Eye on Reference Lists"

No mention of the most important point about references. NO REFERENCE SHOULD BE CITED UNLESS PERSONALLY READ. The scientific community should be setting a good example by avoiding the circulation and repetition of fake news.

Thanks Angela, excellent post. Quick question: are there definitive criteria and an open list of who and what are defined as predatory publishers, with transparent review processes? I believe there are a number of initiatives: Think, Check, Submit; Cabells; etc. On the flip side, I know DOAJ (Directory of Open Access Journals) does a good job of global outreach and education for its approved list of some 15,000 OA journals from around the world that meet set criteria (education can be key too). With all the technologies able to check citations and references, it would be good to know whether there’s a common, centrally used source of potential predatory journals, and who, as a community or corporation, is managing these lists, going inside the ‘black box’ so to speak.

Thank you, Adrienne. Cabells sort of explains how they go about rating journals, but that list is not open. I am not aware of any others and, in fact, there has not been broad consensus on how to define a predatory journal. The only real agreed-upon factor (best I can tell) is this: journals that lie about conducting peer review, or about who is on their board, are likely “predatory.”

Hard to think anyone is going to take on curating a public list post-Beall, unless they really love litigation and harassment.

Isn’t this a fundamental problem? We are relying on a service that doesn’t offer a full accounting of how it makes a designation (into a contested category), and transforming whatever set of indicators, or weighting of approved and discouraged behaviors, into a binary. And even if we came to completely trust Cabells’ designations for journals, and its defining characteristics of predatory vs. non-predatory journals, these tell us nothing about the quality of the cited articles.

I’m struggling to understand how automating Cabells’ list, thereby obfuscating this chain of fuzziness, will help improve the quality of science.

Yes, it does make things a lot less favorable. But given that no one is willing (or daring enough) to make a public and transparent list, you have to work with the tools available. If I were running a journal, I’d use this service to flag any questionable references to the author, but not to strictly ban them: a note stating that the reference came up in an automated check as coming from an unreliable source, asking the author to please check it and perhaps consider whether a different reference could make the same point. But I wouldn’t go so far as to make things binary.

Thanks, Adrian. The criteria used by Cabells to evaluate suspected predatory publications can be found here: https://blog.cabells.com/2019/03/20/predatoryreport-criteria-v1-1/. These criteria are refined and evolve along with the academic publishing ecosystem; a revision is currently underway, to be released in the coming months.

Each journal evaluation is conducted by a trained journal auditor and our process is built to ensure that new journals, those of a lower relative quality, or those from developing countries are not classified as predatory for these reasons. Our evaluations are nuanced and require deliberation. The resources necessary to build, maintain, and refine our tools do not make it possible for us to make them openly available at this time.

It would be immensely time consuming for editors to have authors justify single references that may be deemed inappropriate, whatever “inappropriate” means. However, there is a common pattern often associated with egregious acts of bad referencing: the offending citations are often included in bulk in the Introduction section of a paper, when the author is introducing the topic. For example:

Similar work on this topic has been done elsewhere (1–30).

This general hand-waving, block-citation pattern is also found in manuscripts when the author has been coerced by a reviewer or editor who strongly recommends that a list of references be added before the manuscript is accepted.
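
For what it’s worth, this pattern is mechanically detectable. A hypothetical sketch that flags parenthesized citation ranges covering many references (the threshold of 10 is arbitrary):

    import re

    # Matches parenthesized citation ranges like "(1-30)" or "(1–30)".
    CITATION_RANGE = re.compile(r"\((\d+)\s*[-–]\s*(\d+)\)")

    def flag_bulk_citations(text, threshold=10):
        """Return citation ranges covering at least `threshold` references."""
        hits = []
        for m in CITATION_RANGE.finditer(text):
            low, high = int(m.group(1)), int(m.group(2))
            if high - low + 1 >= threshold:
                hits.append(m.group(0))
        return hits

    print(flag_bulk_citations("Similar work has been done elsewhere (1–30)."))  # ['(1–30)']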

I think many of these proposals (using some new index instead of h, having a dedicated production task to check reference lists, etc.) are curing symptoms rather than the disease. The only way to really end these (semi-)fraudulent practices is to stop ascribing value to a researcher’s h-index. It is an almost meaningless metric, and the only reason it is used is that funding agencies and policy makers can apply it to pretend to evaluate researchers without understanding any of the research. Something like the h-index should, ideally, simply not be calculated any longer, and certainly everybody should stop paying attention to it. Once citations are no longer the be-all and end-all of a researcher’s career, there will no longer be an incentive to influence them artificially.

Thank you Angela – this is chock-full of important points to chew on and consider. I really appreciate the flag about how pharma/industry may be tapping into the predatory journals for their own benefit – very insightful!

We’ve recently been discussing limiting the number of references included in the print version (the full list would be available online). I welcome all thoughts about doing so. Pros and cons are especially welcome: we realize it would present potential production issues, but in the case of our journals it would save precious printed pages, especially on Review articles, which we all know are inherently more laden with references. Thanks! —Phaedra

Very thoughtful article with important points. As suggested, one of the first things the publishers/scholarly community should adopt is to stop abbreviating journal titles in references, to make references more transparent. Maybe it could be limited to those references that do not have a valid DOI.

We discussed self-citation at COPE, asking the questions: where is the line between what is appropriate and what is not? What is a peer reviewer’s responsibility in calling out self-citation? Should journals have specific policies about self-citation?

Following the discussion, we published guidance: https://publicationethics.org/citation-manipulation-discussion-document. Determining the appropriate level of self-citation is context dependent, and the guidance helps editors tackle citation manipulation, recommending that journals consider policies about appropriate levels of self-citation, education for editors around appropriate times and ways to request citations, and procedures for when inappropriate requests or occurrences of citation manipulation arise.
