1655 engraving of the islands Amboyna (top) and Nera (bottom). National Maritime Museum, London. It is a copy of an earlier Dutch engraving. Pulau Neira and the volcano Gunung Api belong to the Banda Islands.

Citation behaviors vary widely between and within STM and HSS; within particular disciplines, citations can also play sharply different kinds of roles. The complexity of adducing scholarly significance from citation metrics is further increased because scholars may use citations differently from one publication to the next. The Chicago Manual of Style (16th ed.) tells us in chapter 14.1 that the basic “purpose of source citations” is, as a requirement of “ethics, copyright laws, and courtesy to readers,” to “identify the source of direct quotations or paraphrase and of any facts or opinions not generally known or easily checked.” “Conventions vary according to discipline,” the CMS cautions, “the preferences of publishers and authors, and the needs of a particular work.” The CMS doesn’t have much to say about citation indices and impact.

So much recent attention to the trouble with creating and assessing the “impact” of scholarship (from the New York Times hand-wringing about the effect of big media coverage on scholarly rigor to a very recent report from the UK on the “Rising Tide of Metrics”) has me thinking about what we do when we are citing a source. In the assessment ambit, scholarly impact is assessed at least three ways: anecdote, altmetrics, and citation indices, with the last being the most influential—and surely the most institutionalized. We hear that discovery tools will make our scholarship more available, and thus it will have more impact (as measured by citations). We are told that social media promotion will help our work to have more impact (as measured by citations).

I’ve read plenty about the algorithms for citation analyses, about how and whether we use those data to assess scholars and their scholarship, and even the circular argument that scholars ought to cite one another more in order to increase their colleagues’…impact. Academia.edu’s claim that it can increase citations has made news, as have Elsevier’s plans to make Scopus more competitive with the Journal Citation Reports from Thomson Reuters.

What I don’t hear much about, though, is what a citation represents for the scholar writing a research article. When we cite a source, we could be doing any one of a number of things, and many of them have little to do with acknowledging, say, impact—with crediting another scholar’s work with having provided the intellectual foundation, the inspiration, or vital information for the argument that’s being made in the article in question.

For historians like me, as well as for other scholars, often the most important citation is to the primary research material. A lot of scholarly production goes into preserving and disseminating these materials, on site and online. This work has an enormous impact on the kind of scholarship we do, but it is sometimes not visible in our citations. When I’m writing about an eighteenth-century printed almanac, for example, I may be using the online version made available through the Evans Early American Imprint Collection of the American Antiquarian Society, the premier collection of pre-1800 printed materials available in microfiche and now digitized through the Evans-Text Creation Partnership. I don’t cite either the fiche or the online access, but rather the text itself, and its publication date.

Of course, our reading of primary sources is also influenced by the analyses and interpretations of other scholars who have examined the same or similar sources. We acknowledge them in our footnotes, but these citations rarely do justice to the complexities of scholarly exchange. In fact I know a lot more about those eighteenth-century almanacs because by chance years ago I shared fellowship time at the AAS with a scholar working on the history of the daily diary, a researcher who explained the way that early almanacs were regularly annotated as a form of record keeping. This scholar some years later wrote a terrific book on the subject, and along the way published some reflections on her research in various venues. So now I can cite her book, but in years past I cited conference papers, or an essay in an online publication — not one of which would turn up in JCR.

So, is the answer to shoo all book text and conference papers online, too, so someone can figure out how to carefully curate that material and assess the metrics with the care that JCR takes? Working backwards from the citation rather than from the scholarship in question (what work really did influence this work?), we miss a lot, maybe most, of the impact.

Blind counting of citations, too, utterly ignores the fact that titles often appear in footnotes for reasons having nothing to do with the previous work’s contribution to the current discussion. An easy example of a citation that reflects something other than impact is the citation naming a work that the author is criticizing. The citation could represent criticism in any number of registers, from devastating (the primary source is faked or misdated, the quantitative work is off) to modest (interpretive disagreement) to mild (the argument needs updating). Such critical citations do not come only in review essays; they often come within discursive footnotes.

The vast majority of citations that are not critical in any of the ways noted above still may not suggest that the cited scholarship has been central to developing the work in question. Let’s look at another common citation practice, which we might call situating the work. This is the citation that is usually embedded in a textual footnote with a large group of other citations. These citation groupings come in different registers, too. Some might come at the beginning of an article, describing previous work on the general topic (“For previous treatments of…” or “A brief review of the literature on this topic would include…”). They can come mid-argument on a particular point (“For examples of this phenomenon in other historical contexts, see…” or “Similar patterns have been found in the following cases: …”).

Here’s another complication. Journals have highly varied policies about citations. Here are three examples from my field. The Journal of the Early Republic will not review manuscripts more than 9,000 words in length, including notes. The Journal of American History requires that submitted manuscripts be not less than 10,000 or more than 14,000 words, including notes. The generous William and Mary Quarterly tells authors that their manuscripts may not exceed 10,000 words, excluding notes, and that notes may not exceed 5,000 words. I’m guessing that authors of WMQ articles include more of those “situating” citations than authors publishing elsewhere.

Another regular citation type is the exemplar. For a long time it was pretty much de rigueur when discussing the emergence of nations in the wake of democratic revolutions to reference Jürgen Habermas on the “public sphere” and Benedict Anderson on “imagined communities.” You don’t need citation indices to know that these works were incredibly influential. But the half-lives of influence and of citation, I would argue, were inversely proportional. The more citations these works collected, the less intense the engagement with the arguments they made and the more casual the references.

And there is also the purposely brief reference. Sure, we all think that every article on a subject brushing up against our own published work ought to cite us; obviously, we can discount some of this angst as an inflated assessment of one’s own impact. But sometimes the no-cite is powerful evidence of impact. Here’s how this goes. Scholar A is working on topic Z, and publishes just as Scholar B is about to publish on a similar topic (we’ll call it Z-1). Given the close circles in which research kin move, it is unlikely that Scholars A and B have not been aware of one another’s work. So Scholar B might offer this: “for treatment of a related issue, see Scholar A, citation.” This is a civil nod, not an acknowledgment of impact.

For an estimate of the proportion of each of these kinds of references, I did a back-of-the-envelope accounting of citations in a recent article by a senior scholar in the latest William and Mary Quarterly. Alison Games wrote about “Cohabitation, Surinam-Style: English Inhabitants in Dutch Suriname after 1667,” which looks at an unusual period in the era of global colonization when the Dutch conquered an English colony but the two colonial powers decided to “cohabit.” The breakdown in my admittedly hasty count shows a preponderance of primary source citations, and only a marginally larger number of direct as opposed to indirect secondary source references. In 104 footnotes (the WMQ style is to cluster citations at the end of each paragraph) Games references seventeenth-century Dutch and English sources 168 times. Fifty-eight times she directly cited a secondary source, a work of historical scholarship, that provided key information or insight for her argument about the consequences of the Dutch-English attempt to live and work side by side. But forty-five times she offered an indirect reference to a work of scholarship that offered a comparative example or another perspective on a similar issue. Because Games is working in a relatively unexplored field, I thought she would be less likely than others to have a high proportion of indirect (situating or exemplar-style) citations. A quick glance at the article that follows Games’s, on a much more heavily studied subject, suggests that other articles may indeed include a much higher proportion of those indirect references.
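
To make the arithmetic behind this tally easy to check, here is a minimal sketch in Python; the counts are the ones reported above, while the category labels and the script itself are editorial illustration, not output from any citation tool.

```python
# Minimal sketch: recompute the proportions from the hand count above.
# The raw counts come from the tally in the post; the category labels
# are editorial shorthand, not categories from any citation database.
from collections import Counter

counts = Counter({
    "primary (seventeenth-century Dutch and English sources)": 168,
    "direct secondary (key information or insight)": 58,
    "indirect secondary (situating or exemplar)": 45,
})

total = sum(counts.values())  # 271 references across 104 footnotes
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.0%} of all references)")
```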

I think citation is critical; it is the foundation on which scholarship is built. In my discipline citations serve any number of purposes, most of which involve educating the reader of a particular essay about scholarly context rather than intellectual influence. When we assume, though, that volume of citations ipso facto is equivalent to impact, we have likely misapprehended the purpose of many citations and we have surely missed which citations reflect genuine impact, that is, scholarly influence.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Discussion

37 Thoughts on "When Do Citations Reflect 'Impact'?"

Interesting analysis. Thanks. It leads me to the question: “what do we mean by impact?” For example, in your statement:

“An easy example of a citation that reflects something other than impact is the citation naming a work that the author is criticizing.”

Surely the fact that criticism was needed, of whatever flavour you then rightly nuance, suggests the article has had impact if impact is defined along the lines of ‘this caused someone to think and/or do something’. The impact can be positive or negative, but it exists.

In some circles, though, impact seems to be defined along the lines of ‘what impact does this have on society’, with perhaps a bias towards progressive impact. And your argument leads to the question: can citations (or even altmetrics) really be used as a basis to measure this?

Thanks, Martin.

Absolutely agree that the key issue is defining “impact.” For now, the quantification practices and policies seem to assume that more citations = greater (better) impact. And yet there are so many factors at work, of which the above are only a few, that this really can’t be so. I can think, and I’m sure you can too, of studies that have been endlessly cited and circulated that were later retracted, for one example.

I too did a back-of-the-envelope study of citations, but in the domain of science. I took ten research articles from Science magazine, divided the text into quarters, then counted the citations in each quarter. The results were pretty uniform across articles, with about 60% of the citations occurring in the first quarter. This is where the research background is explained, and it is largely historical in nature, typically beginning with the founding of the research area decades before, then leading up to the effort being reported in the present article. There was little evidence of direct influence, but one would need a cognitive taxonomy to determine that.

Most of the rest of the citations occurred in the last quarter, where the significance of the findings is discussed. Here too there is little in the way of direct influence. Citations in the middle quarters tended to support the use of specific numbers, methods, or assumptions in the research being reported, which might be regarded as a form of direct influence.
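
A minimal sketch of that quarter-by-quarter count might look like the following, assuming in-text citations appear as bracketed numeric markers such as “[12]” (a common but not universal convention in science journals; the sample string is a stand-in for a full article):

```python
# Sketch of the quarter-by-quarter citation count described above,
# assuming bracketed numeric in-text markers such as [3], [4,5], or [6-8].
import re

CITATION = re.compile(r"\[\d+(?:\s*[,-]\s*\d+)*\]")

def citations_per_quarter(text: str) -> list[int]:
    """Split the text into four equal character spans and count the
    citation markers that fall inside each span."""
    q = len(text) // 4
    bounds = [0, q, 2 * q, 3 * q, len(text)]
    return [len(CITATION.findall(text[bounds[i]:bounds[i + 1]]))
            for i in range(4)]

# Illustrative usage with a placeholder standing in for a full article:
sample = "Intro [1][2][3]. Methods [4]. Results. Discussion [5][6]."
print(citations_per_quarter(sample))  # [2, 1, 0, 2]; note that a marker
# straddling a split point is missed by this naive character slicing
```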

Clearly the logic of citation is complex (and worthy of study). In this context the concept of impact probably should be regarded as a technical term of art, one which simply means being cited.

My envelope looked a little different from yours, David, in that I was assessing, note by note (and likely imperfectly, but Prof. Games will surely let me know about that), whether a citation reflected a direct influence on this particular essay. One could argue, as the quantification of citation indeed assumes, that all citations reflect worth. But their value is so different even within a single piece that it calls into question, I think, the premise that in the aggregate they are somehow more meaningful.

The convention of heavy citation at the beginning of an article is very interesting. That’s not what I see in history. Although some early, context-setting notes are often a feature, I doubt they’d account for 60% of the citations in that first 25%!

The different patterns are probably due to the fact that the historian’s research is heavily dependent on documents while the scientist’s is not. In science I think many citations are only found after the work is finished and the paper is being written, whereas documents are the historian’s instruments.

Regarding note by note assessment, I have done some of that but informally. As a student of reasoning I looked at the various different roles that different citations play in the reasoning presented in the article. As I alluded to above, we could use a taxonomy of citation types. My guess is that there are ten to twenty basic types. I think some work has been done in this area. Then we could truly analyze the differences among fields, as well as article types.

If memory serves, impact is a reflection of the number of times a journal is cited, not the authors of the articles in it. Thus, if one is published in a highly cited journal, one is in essence joining an elite club of authors whose works are being cited. And their works are being cited because what they have to say is “important,” and ipso facto the journal is publishing important works.

This may well go down as my favorite blog post of the year! In this and in all industries we rely heavily on metrics that are easy to capture, and those tend to be the electronic version of a hash mark. It’s easy, and computers can count them reliably, but it reflects not just a paper-based publication model, where it was all we had; it also relies exclusively on the value to the researcher community who publish. Downloads and page views are equally easy to capture in today’s environment but miss downloads and page views from repositories. Still, it’s a place to start.

Well, it’s only July, but thanks! I heartily agree that metrics, easily captured, are only the place to start.

Reading this blog reminds us of the variety of ways citations are made in the text and in the reference lists of different disciplines. Similarly, there must be variety in the ways citations are used to determine ‘impact’.
One aspect of this which interests me lies in the debate over whether or not you should count citations in the text as opposed to those in the reference list. A major reference might be cited three times in the text but only once in the list, thus reducing its ‘impact’ relative to other citations.
For more discussion, see Hartley, J. (2012). To cite or not to cite: Author self-citations and the impact factor. Scientometrics, 92(2), 313-317.
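
A minimal sketch of the in-text versus reference-list distinction Hartley raises, assuming author-year citations; the names and sample strings here are illustrative only:

```python
# Count how often each reference-list entry is cited in the body text,
# matching both "Author (Year)" and "(Author, Year)" in-text forms.
import re
from collections import Counter

def in_text_counts(body: str, cited_keys: list[str]) -> Counter:
    """For each 'Author Year' key, count its occurrences in the body."""
    counts = Counter()
    for key in cited_keys:
        author, year = key.rsplit(" ", 1)
        pattern = re.compile(rf"{re.escape(author)},?\s*\(?\s*{year}")
        counts[key] = len(pattern.findall(body))
    return counts

body = ("As Hartley (2012) notes... a point echoed later (Hartley, 2012) "
        "and once more by Hartley (2012). Smith (2010) disagrees.")
refs = ["Hartley 2012", "Smith 2010"]

for key, n in in_text_counts(body, refs).items():
    # A reference-list count scores each entry once; the text count differs.
    print(f"{key}: {n}x in the text vs. 1x in the reference list")
```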

Thanks for this! I’m going to go locate and read that piece right now…

Scientometrics is a broad and active field, including lots of work in citation analysis. Normalizing impact factors based on the citation behavior of different subfields is itself an active research area, with some widely accepted methods. The research evaluation community is also grappling with these issues, but research has to be evaluated because competition is fierce.

Like you, Karin, I have long been suspicious of what value citation counts can have in disciplines outside the sciences, and why there seems to be a presumption that measuring “impact” in this way should count as a positive attribute. I think of a book like “The Bell Curve,” which was hugely controversial and attracted a ton of criticism; though it no doubt had a great many citations, I think few people would consider them to be positive assessments of the book’s value. Indeed, quite the contrary. For a parallel in the world of science, think of the controversy over “cold fusion.” I can’t imagine that a large number of citations to papers supporting this hypothesis would be appropriately considered a positive endorsement of the research.

Yes, I always think of the autism-vaccine study; it’s had an enormous impact, obviously. We could argue that all impact is impact (neutral), but I think in some prominent cases we can say that citations = a very different sort of impact. But what I especially wanted to get at was not only whether impact is good or bad per se, but whether impact is really what the author means to suggest by that citation.

As a journal editor for some 30 years, I have 2 comments:
(1) Those who take such metrics too seriously might not read, but they sure can count.
(2) Agonising over how the counting is done is a bit like monkeys picking nits off one another – it makes no difference whatever, as the nits will still be there.

Fair enough. But I’m an eternal optimist and/or tilter at windmills. And by training I’m inclined to ask why humans are doing what we are doing at any given point, what’s driving that behavior, and what it implies. I think that’s as important for scholarly publishing as for any other endeavor.

Gary, most science is quantitative and the science of science is no exception. Moreover, reading the papers (whatever that means) does not work in many evaluation settings. Quantitative assessment, properly done, plays an important role.

Thank you. This is interesting. I also wonder how the trends in citation growth will affect this. For example, in our geoscience journals, the average paper had roughly 35 references in the 1950s, whereas today most papers have 90 or more references (see https://geosociety.wordpress.com/2013/06/14/what-will-happen-to-references/ for an analysis of this). Does a growing pool of citations make it easier or harder to spot impact? Is it just that older literature has become much more visible now that everything is digital, or have the ways in which people cite changed over the years?

This is a relatively deep research question, Matt, one that people are working on.

But Phil says this: “the trend to cite older papers began decades before Google Scholar, Google, or even the Internet was invented.”

So the increasing half-life trend is not due to the digital revolution. It may be a social trend, and likewise for the increasing number of citations. Do we know that the latter is universal? Is there long-term longitudinal data on the average number of citations per article, by subfield? I have not seen that and would like to.

Thanks– it’s indeed a pretty deep pool of work, but I haven’t read this one! I’ll be following up with another post shortly.

Perhaps this is a good example of when reading the papers is not feasible. It is a recent email from Gene Garfield’s Sigmetrics listserv, which I recommend highly to anyone who wants to see what is going on in scientometrics. David

Scimago Lab has launched the 2015 edition of its Scimago Institutions Ranking, a comprehensive review of the performance of more than 5,100 universities, government research centers, hospitals and research-intensive companies from all over the world.

http://scimagoir.com/index.php

The new edition is more visual, with a world map customizable by country and sector and an impressive collection of data, including bibliometric info from Scopus database (2009-2015), innovation metrics extracted from patent databases and freshly updated webometric indicators.

There are 13 different indicators distributed as follows:

Research indicators: Output, Scientific Talent Pool, Excellence, Leadership, International Collaboration, Normalized Impact, Specialization

Innovation indicators: Innovative Knowledge, Technological Impact

Web visibility indicators: Website Size, Domain’s Inbound Links

SIR is intended to be used as a science evaluation resource to assess worldwide universities and research-focused institutions. The end user can choose the preferred indicators, the geographical coverage, and the temporal range, and can export the data directly to his or her computer.

An extensive methodology with bibliographic references is provided at http://scimagoir.com/methodology.php

I know that this is a little late in the day, but Karin and others may like to know of a paper due out in the journal Information Research, drawing on the trust project undertaken by CIBER Research and UTK, by a group of authors headed by Clare Thornley and entitled “The Role of Trust and Authority in the Citation Behaviour of Researchers.” It draws together information from structured interviews and will probably be published in September.

Anthony

Interesting blog on ‘impact’ and citations. While working on Bookmetrix we slowly became aware of the various kinds of scholarly influence books have; some examples can be found at bit.ly/bookmetrix15, with highly cited, downloaded, or online-mentioned books. The data also suggests that the citation half-life for books is perhaps quite a bit longer than anticipated, 150-15 years (!), but here we are only scratching the surface.

That’s interesting, thanks. Clearly metrics need to account for scholarship across many platforms and for humanists the book is key.

People are still citing Aristotle and Plato today as well as the pre-Socratics!
