A study published recently in the Annals of Internal Medicine sought to probe whether published randomized controlled trials (RCTs) cited relevant prior work. After all, if we all stand on the shoulders of giants, we should at least check our footing before attempting another leap. This is especially important in medical research, where patients are put at risk and where there has been documented evidence of citation invention and diversion.

The authors plumbed meta-analyses to compile lists of all the possible papers on a topic, figuring that meta-analysts take an inclusive approach to the literature (an approach that trialists might not share). They looked back 40 years through the literature to arrive at their dataset. Doing so, they found that trialists cited only 21% of the trials they possibly could have, and that some claimed to be reporting the first trial on a topic even when prior trials existed. Trials citing the lowest percentage of possible prior trials did this most often, but even in the top quintile it still occurred.

Now, there may be multiple reasons why reference lists might be on the lean side. One that immediately occurred to me was space considerations. Because this study spanned four decades of trials, much of what was published predates online supplements and the like. Print constraints, and, in the older studies, genuine difficulty surveying the entire scope of the literature (40 years back puts us at 1970), might explain some of the findings. But the authors dismiss this — without measuring it. In fact, how they dismiss it is almost suspiciously dismissive:

The possibility that journal space limitations are causing this lack of citation seems unlikely. We find it implausible that authors are being forced to limit themselves to 2 or fewer of their most critical citations by page or reference list limitations.

Since this isn’t my first rodeo, I won’t dismiss that explanation too quickly. If a researcher is writing up a trial, one that starts from Point A and ends at Point B, he or she might reasonably do some hand-waving at Point A’s best studies, then move right along, citing the studies that matter on the journey to Point B. As the authors themselves concede, trialists’ motivations for citing are not grounded in any goal of exhaustive citation.

In fact, there are hints that the reference list problem might be a style problem and not a substance issue. Here are a few quotes from the study to that effect:

The surprising constancy of the number of cited trials with increasing numbers of citable trials meant that the 2 proportional citation measures . . . must be interpreted with care.

A small improvement (P < 0.001) was seen in trials published after 2000 . . .

However, [differences in citation patterns between trialists and meta-analysts] do not account for the constant average number of citations as the number of citable trials increased.

Studies not cited in a published RCT may have been cited in a funding proposal or institutional review board application.

My interpretation isn’t nearly as nefarious as the authors’. In my mind, the expectation is that the report of a trial tips its hat to some of its predecessors. If it’s easy to find more (which it became around 2000), there will be an extra tip of the hat or two. But since research reports are almost as stylized as Kabuki theater, a lot of formal structures will be mimicked, tending to the mean. Also, authors might have gone into exhaustive detail in their funding requests or during institutional reviews, but trimmed reference lists back for publication.

To bolster the speculation that form is the culprit here, it’s worth noting that the Annals of Internal Medicine itself places a cap of “75 or fewer bibliographic references” in its instructions for authors. Other major medical journals — where most RCTs are published — also routinely trim references in order to meet page budgets.

Of course, the researchers could have asked. Surely many of the trialists being scrutinized via citation lists and databases like the Web of Science are still alive and kicking. Why not call them up and ask why they didn’t do a more extensive job of citing the literature? Were there good reasons? Were they purposely duplicitous? Were they inadvertently ignorant? Did they feel the reference list was relatively unimportant in the grand scheme of things?

Ultimately, it’s hard to know whether this study demonstrates a weakness in research, flaws in research presentation, problems with editorial policies, or limitations around information availability — or all of the above.

It’s clear we can do better, but this study does little to illuminate the path forward.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

24 Thoughts on "Don't Look Back — Do Scientists Squelch Citations to Justify Claims of Novelty?"

I can’t speak to the medical literature, but on the physical science side there is no expectation that citations should be exhaustive. They are part of a narrative (I like the Kabuki analogy).

Most citations typically occur in the first part of a paper, where the history and present state of the problem are explained. I did a quick study and found that, on average, 60% of citations appear in the first 25% of a paper. One cites the pioneers and some immediate precursors, plus perhaps a review article or two. There is no general review of the literature.

Then, as the work is presented and the findings discussed, one cites papers that support assumptions or provide methods. Around 20-30 citations is typical.

This narrative aspect is why I think citation analysis is often based on a false assumption about what citation tells us.

The most damning part of this revolves around the lack of statements that the literature was reviewed prior to claims of novelty or the initiation of risky studies. Whether the reference list is comprehensive is a separate matter, and authors could address it with a statement. However, journals don’t have policies requiring such statements, so Kabuki wins.

I thought the study was about a lack of citations in this regard. Sounds like you want to require authors to make certain certifications. What might these say? Are you talking about voluntary industry standards or government regulation? What is the benefit of this burden?

Bear in mind that you will not be able to define “literature review” for regulatory purposes, because the web of scientific knowledge is seamless. The concept of “topic” is hopelessly vague.

It’s ultimately about putting patients at risk unnecessarily — why repeat a study that’s been done? If you’d only searched the literature . . .

I think a journal policy that requires a statement that a comprehensive literature search was conducted to ensure that no comparable study had been done is reasonable and space-conscious.

This would be better done at the human subject protection stage, which occurs before the study is approved and which I thought was pretty rigorous. Why involve the journals after the fact? It makes no sense.

Why not let readers know? If the search was done, it takes little effort to write a sentence saying so.

If you mean the Kabuki analogy as a criticism, then I disagree. The basic narrative is: 1. Here’s the problem, 2. Here’s what we did, 3. Here’s what we found, and 4. Here’s what it means. As communication, this is perfect.

Probably not related to clinical trials, but isn’t there a specific concept in citation analysis used to describe how very basic, pioneering work isn’t even cited at all because it has become too basic? I’m beating my brains out, but I can’t remember the term (or how to google it). It has something to do with how the oldest journal articles are “extinguished.” There was a paper by Garfield (I think) noting, for example, how principles laid out by Newton or Einstein are never cited at all because later work builds upon their basic assumptions. The newest works then only cite the later papers. Can anyone remember what that principle is called?

Bill,
Robert K. Merton coined the phrase “obliteration by incorporation” in 1968. Garfield also wrote about citation obliteration in one of his Current Contents essays.

Merton, Robert K. (1968). Social theory and social structure. New York: The Free Press.

Garfield, Eugene (1975). The ‘Obliteration Phenomenon’ in science and the advantage of being obliterated! Current Contents, 51/52, December 22, 1975: 5–7.

Citation is used to introduce the immediate problem that was worked on, so of course it does not include the history of the field. Otherwise, every physicist would need 1,000 citations going back to 1600, 950 of which would always be the same. Of the 20-30 citations that are typical, there are usually a few in the 20-50 year pioneer range and a few in the 10-20 year specialty-development range, with the rest being recent.

I agree with your interpretation that this is more about self-editing, or even conventional editing, than nefarious intent. It does point to a bigger issue, though: journals shouldn’t restrict citation counts on research articles.

Given the status afforded to RCTs as evidence within the health field and the media, their results need to be placed in some sort of context. Cursory acknowledgement of related, but rarely contradictory, work simply isn’t acceptable.

Authors are always fearful when doing a literature search that they will find an obscure paper that has made the same claim or theoretical development.

A venerated scientist once told me that, while one is obliged to search the prior literature, it is important not to search too hard.

Comprehensiveness often comes with a price.

I agree on a visceral level (nobody wants to find out that someone has already been there and done that), but vigorously disagree with those who claim that this is a good thing. In reviewing and reading papers, I’ve seen examples time and time again where the authors make a claim of novelty where none is warranted. This isn’t savvy writing – this is sloppy writing at best, and unethical writing at worst. If a literature search comes at a price of acknowledging that someone got there first, so be it.

An “obscure paper” all depends on one’s perspective. Much of the evolutionary biology literature is obscure to those in the biomedical sciences (and vice versa), but many of the workers in both fields could use a swift kick in the backside from the writings of their obscure colleagues in other departments.

What physicist would ever reference Isaac Newton these days? I wanted to, once, just as a lark, so I tried to find a citation for the Principia by looking in the Web of Science. (I admit I wasn’t going to read the darned thing, like I “ought” to have.) But, what did I find? All the recent references for “Newton I” were about birds. Isaac has become an ornithologist! So, anyway, that’s one reason why references will never be exhaustive: textbooks.

But more importantly, except perhaps for the most expensive, life-critical research, there is no advantage to exhaustive citation. What does it gain society? Sure, you don’t want to waste money repeating experiments that have been done 100 times, but science does actually need a certain amount of replication of prior work. If some modest extra amount of replication happens by accident, no big deal.

So, here’s a more-or-less good reason not to cite a paper: Imagine that there’s a paper out there that claims to answer a research question, but you don’t quite trust it. What do you do? Yes, you *could* reference it and laboriously try to describe your suspicions. Unfortunately, a description of someone else’s errors is normally speculation, so you’re likely to be wrong in the details; what’s the point, then? Referencing it _without_ giving your suspicions would merely give the wrong impression. Consequently, the best strategy — from an overall perspective — may just be to ignore it.

And, of course, sometimes you just don’t know what to say about a paper. Or, you really don’t know if it applies, or…

In my opinion, people who think references must be exhaustive are dangerously naive.

And, as for the argument that we should cite everything to help people evaluate research? Also dangerously naive. It tries to take over the real purpose of citations — to educate your readers — and turns them into a tool for acquiring money. When that happens, the original purpose will get lost rapidly, and we will have uninformative citations that are designed exclusively to raise one’s “score”.

Wouldn’t citing a review article, one that has an extensive reference list of the basic papers on the subject, be both sufficient and an efficient use of space?

People do cite review articles frequently, but usually only one. Citations play a very specific narrative role. Explaining a field is not it. They are used to introduce and support the specific work reported in the article. There seems to be a surprising amount of confusion about this. Maybe I should publish my findings, as I seem to have discovered something, which I thought was obvious.

Well, that is what reviews are for. One cites the last massive Annual Reviews summary of the field, which is the only kind of paper that has an “exhaustive” list of references for a field anyhow.
Of course this does not let you off the hook if you deliberately ignored something very relevant, and a reviewer like me was assigned to your manuscript…

I’ve heard of conflicting theories in citation analysis regarding the citing of review articles. I think the first theory was that this is what you were supposed to do (indicating David Crotty is on the right track); the second theory was that SOME authors cite review articles but slant the reviews’ conclusions to support their own work. The strategy is to assume that later authors will be trusting and won’t backtrack to the original research papers pointing the other way. This goes way back—I don’t think even Phil Davis could find this stuff, some of which might be just imprudent banter.

As others have said, citations are there to support the work being done. If scientists started citing like lawyers we’d need a lot of pages.

None of this gets into the question of whether the reference cited actually supports the work as the authors say it does. A topic for another day.

The real problem is that citations are too brief to serve even as a summary of previous work; you have to track down the cited work itself to really understand what was cited and why.

Citations are not supposed to be a summary of previous work. They are part of the narrative of the article, mostly explaining the problem the present work addresses and supporting the approach taken. Criticizing citations for failing to be something they were never meant to be is a mistake.
