We’ve had a lot of citation analysis on this blog recently, capped by a recent Science paper suggesting that the citation window is narrowing thanks to online journals.
Is the narrowing of citations a good or bad thing? Does online availability mean that people cite less because they now can actually read the articles they are citing? Are citations even a good measure of the absorption and dissemination of the literature anymore? Have they ever been?
Why are citations sanctified?
In a recent paper, Armstrong and Wright report an interesting technique to catch people cheating on their citation homework. Using a classic market research paper with an unmistakable methodological fingerprint, they analyzed citations to the paper (or the lack thereof) to determine whether the paper was cited properly, whether the authors employed its methodologies (a sign that they had actually read the paper), or whether they overlooked the prior research altogether.
By making a methodological paper the centerpiece of the research, they could detect flaws in citation more easily. It’s a clever approach.
What the authors claim to have found is that a fair percentage of citation lists miss key papers, use citations improperly, or cite out of context.
However, the paper has significant methodological flaws — a questionable reliance on Google results, for instance, and some funny math. Too bad.
Another clever approach was used in 1983 by Robert N. Broadus (see page 3 of the 9-page PDF). Akin to how mapmakers insert bogus towns to catch other mapmakers copying them (if a fake town appears in a competitor’s map, they must have copied), Broadus inserted an erroneous citation of a 1964 article (one word in the title was wrong). Of the 148 subsequent citations, 23% used the faulty title.
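The trap-citation idea lends itself to a simple automated check. Below is a minimal sketch (not Broadus’s actual method, and the titles are hypothetical) of how one might flag reference lists that reproduce a known-faulty title verbatim, suggesting the citation was copied rather than checked against the original article:

```python
# Illustrative sketch: classify later citations of an article by whether
# they repeat a known one-word title error (a sign of copied citations).
# Both titles below are hypothetical placeholders, not the real 1964 title.

CORRECT_TITLE = "the measurement of information value"   # hypothetical
FAULTY_TITLE = "the measurement of informational value"  # hypothetical one-word error

def classify_citation(cited_title: str) -> str:
    """Return 'copied' if the title matches the faulty version,
    'checked' if it matches the correct one, else 'other'."""
    t = cited_title.strip().lower()
    if t == FAULTY_TITLE:
        return "copied"
    if t == CORRECT_TITLE:
        return "checked"
    return "other"

# Broadus found that 34 of 148 subsequent citations (23%) used the faulty title.
citations = [FAULTY_TITLE] * 34 + [CORRECT_TITLE] * 114
copied = sum(classify_citation(t) == "copied" for t in citations)
print(f"{copied}/{len(citations)} = {copied / len(citations):.0%} copied")
```

Run against a real citation index, the same exact-match test would reproduce Broadus’s tally; the hard part in practice is extracting clean titles from heterogeneous reference lists.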
As noted in a prior post on this blog, impact factor inflation has been found to be primarily caused by longer reference lists. But are these lists increasingly just academic posturing? Citations occur for many reasons. How much citation is valid? How much of it is justified? How much of it is meaningful?
Isn’t citation just a print paradigm attempt at linking?
Maybe linking (from blogs, Google, institutional directories, and other online sources) is what we should be studying instead . . .
2 Thoughts on "Citations & the Missing Link"
This is a very interesting post. I would like to add that the automation of parts of the scholarly process is likely to alter the use of citations, suggested reading lists, etc. The technology now exists to search not just on a few keywords (the typical Google search uses three terms), but by inputting queries of unlimited size, which are likely to take the form of the abstracts of articles or even the full text of the articles themselves. This will in turn yield search results that are essentially a map of a particular area of research, with the article in question (that is, the one used as an input query) sitting at the center. While such a search does not imply that an author is recommending a particular resource, it does provide a new and valuable insight: a portrait of all related works. I anticipate that this technique will erode the hold that human-generated citations currently have on measurements of scholarly importance.
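The commenter’s document-as-query idea can be sketched with a basic similarity ranking. This is an assumed implementation (standard-library only, with toy abstracts), not any particular search engine’s method: treat the query article’s text as a bag of words and rank other documents by cosine similarity of their word-count vectors.

```python
# Sketch of full-text-as-query search: rank a toy corpus of abstracts
# by cosine similarity to a query text, producing a rough "map" of
# related work centered on the query article. All data is illustrative.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_similarity(query_text, corpus):
    """Return (doc_id, score) pairs sorted from most to least similar."""
    q = Counter(query_text.lower().split())
    scores = [(doc_id, cosine(q, Counter(text.lower().split())))
              for doc_id, text in corpus.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

corpus = {  # hypothetical abstracts
    "A": "citation analysis of scholarly journals and impact factors",
    "B": "online journals narrow the citation window over time",
    "C": "deep learning for protein structure prediction",
}
query = "citation patterns in online scholarly journals"
for doc_id, score in rank_by_similarity(query, corpus):
    print(doc_id, round(score, 3))
```

A real system would use the full article text, stemming, and TF-IDF weighting rather than raw counts, but the principle is the same: no human-chosen citation is needed to place an article among its neighbors.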
It should be mentioned that using ISI indexes as a tool for discovering copied erroneous citations may be problematic, because ISI routinely corrects citations in its database. Random errors may be standardized to look like more common citation errors, giving the impression that a unique citation error was copied.
The Broadus (1983) article is a real treat. Below is the full citation.
Broadus, Robert N. “An Investigation of the Validity of Bibliographic Citations.” Journal of the American Society for Information Science 34, no. 2 (1983): 132-35.