A new initiative to make academic assessment less reliant on the impact factor has been launched. The initiative is the San Francisco Declaration on Research Assessment, adorably dubbed “DORA” by scientists who have, over the past two decades, become masters of the acronym. The DORA Web site currently lists about 400 individual and 90 organizational signatories to the declaration.
As Bruce Alberts, the editor-in-chief at Science, described in one of the editorials published in a coordinated fashion last week, the declaration:
. . . initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, recognizes the need to improve the ways in which the outputs of scientific research are evaluated.
The main impetus for the group stems from what they perceive to be an over-extension and misapplication of the impact factor in the assessment of scholarly achievement across disciplines. As Alberts writes:
. . . [the impact factor] has been increasingly misused . . . with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared. For this reason, I have seen curricula vitae in which a scientist annotates each of his or her publications with its journal impact factor listed to three significant decimal places (for example, 11.345). And in some nations, publication in a journal with an impact factor below 5.0 is officially of zero value. As frequently pointed out by leading scientists, this impact factor mania makes no sense.
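For context, the standard two-year impact factor is nothing more than a ratio: citations received in a given year to a journal's items from the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with invented numbers chosen only to reproduce the kind of three-decimal figure Alberts mocks:

```python
# Two-year journal impact factor, illustrated with invented numbers.
citations_in_2012 = 2269      # 2012 citations to the journal's 2010-11 items
citable_items_2010_11 = 200   # articles and reviews published in 2010-11

impact_factor = citations_in_2012 / citable_items_2010_11
print(round(impact_factor, 3))  # 11.345
```

The three decimal places are an artifact of dividing two modest integers, not a sign of precision — which is part of why annotating a CV with them makes so little sense.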
The head of the ASCB, Stefano Bertuzzi, was both aggressive and conciliatory when quoted in a Science Insider article, indicating that this is not an attack on Thomson Reuters or the impact factor itself:
I see this as an insurrection. We don’t want to be at the mercy of this anymore. We’re not attacking [Thomson Reuters] in any way. [We are attacking] the misuse of impact factors.
This all sounds laudable, but there are problems. One of the problems is a lack of novelty in the thinking — that is, these tendencies are well-known and have dogged the impact factor for years, as shown in this quote from a paper published in 2005:
The use of journal impact factors instead of actual article citation counts to evaluate individuals is a highly controversial issue. Granting and other policy agencies often wish to bypass the work involved in obtaining actual citation counts for individual articles and authors. . . . Presumably the mere acceptance of the paper for publication by a high impact journal is an implied indicator of prestige. . . . Thus, the impact factor is used to estimate the expected influence of individual papers which is rather dubious considering the known skewness observed for most journals. Today so-called “webometrics” are increasingly brought into play, though there is little evidence that this is any better than traditional citation analysis.
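The skewness the quote refers to is easy to see with invented numbers. In a typical journal, a few highly cited papers dominate the citation count, so a mean-based figure like the impact factor says little about a typical article:

```python
from statistics import mean, median

# Invented per-article citation counts for a hypothetical journal's ten papers.
# A couple of hits dominate; most papers are cited rarely or never.
citations = [120, 45, 8, 5, 3, 2, 1, 1, 0, 0]

print(mean(citations))    # 18.5 -- what an impact-factor-style average reflects
print(median(citations))  # 2.5  -- closer to a "typical" article in the journal
```

Using the journal-level mean as a stand-in for any individual paper's influence is exactly the "rather dubious" move the quote describes.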
Even so, there are indirect measures that can be applied given what the impact factor measures, as one researcher into citation metrics has noted:
Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.
The impact factor has clear utility, especially for distinguishing between journals within a field. Abandoning it for the wrong reasons would be akin to throwing the baby out with the bath water.
Yet, instead of focusing solely on researchers, funders, and academic institutions — the main bodies with control over how research assessment occurs — the draft DORA statement also draws publishers in as extrinsic elements reinforcing this purported environmental problem. The declaration also has some unsupportable elements, something Philip Campbell, the editor of Nature, noted in an interview in the Chronicle of Higher Education while explaining why Nature has not signed on to DORA:
. . . the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to.
Campbell picked up on one of the statements that caught my eye, as well — that journals should:
. . . [g]reatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor, SCImago, h-index, editorial and publication times, etc.) that provide a richer view of journal performance.
So, while this is ostensibly not an attack on Thomson Reuters, one recommendation is for journals to cease promoting the impact factor. That feels like an attack on both journals and Thomson Reuters. The editorial released in the EMBO Journal spends an inordinate amount of time complaining about the impact factor. In Molecular Biology of the Cell, the acronym-happy scientists create a mock impact factor and call it the Metric for Evaluating Scientific Scholarship (MESS), a definite slight against the impact factor. And despite the impact factor's acknowledged utility in evaluating journals, some publishers have over-reacted, with eLife saying:
If and when eLife is awarded an impact factor, we will not promote this metric.
Other vocabulary emerging includes a “declaration of independence” and “scientific insurgents,” again creating the impression of throwing off an oppressive regime of publishers wielding the impact factor.
The core issue isn’t the existence of the impact factor or its utilization by journals, but its lazy adoption by academics. Conflating other issues is equally sloppy, and eLife is certainly falling on its sword for no particular reason.
More disconcerting from a group of sophisticated academics is some confusion within various statements — for instance, in the quote above, the DORA authors list a “variety of journal-based metrics” but include the h-index, which is not a journal-based metric but a hybrid article- and researcher-based metric. In addition, while decrying the use of one journal-based metric, they ask publishers to deploy more journal-based metrics to . . . make it less likely that academics will use journal-based metrics?!? Things like this make the entire document seem hasty and substandard.
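The categorization error is clear if you look at what the h-index actually takes as input. A minimal sketch of the calculation, with invented citation counts, shows that it is computed entirely from one researcher's per-article citations — no journal-level information is involved anywhere:

```python
def h_index(citations):
    """Largest h such that h of the papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Invented citation counts for one researcher's seven papers.
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4: four papers with at least 4 citations each
```

Listing this alongside the 5-year impact factor and Eigenfactor as a "journal-based" metric is exactly the kind of imprecision that undermines the document.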
There are specific recommendations pertaining to publishers that also seem unnecessary or ill-conceived:
- Remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication. Publishers can spend a lot of time and not an insignificant amount of money ensuring that reference lists are accurate. What incentive is there for them to then make these lists freely available? Who does it help? Commercial entities wishing to make products off the backs of publishers?
- Remove or reduce the constraints on the number of references in research articles. Constraints on reference lists are intended to force researchers to point to the best sources and not throw everything in but the kitchen sink. These limits also help publishers control the costs of checking references. As with many points in the declaration, this one is actually beside the point. If the goal is to make academic institutions stop using the impact factor as a proxy for academic achievement, why would this issue matter?
- Relax unnecessary limits on the number of words, figures, and references in articles. This isn’t an enumerated recommendation, but is included in the narrative preceding the list. The idea itself is completely author-centric — shorter articles are more readable and usable, but authors often find the work of shortening or revising their work to be burdensome and difficult. There is an editorial balance to be struck, and DORA should not stick its nose into that balance. It’s a superfluous issue for the initiative, and adds to the impression that an ill-defined animus informs the declaration.
There’s a deeper problem with the DORA declaration, which is an unexpressed and untested inference in their writing about how the impact factor may be relevant to academic assessment groups. They assert repeatedly, and the editorials expand on these assertions, that tenure committees and the like connect the impact factor to each individual article, as if the article had attained the impact stated. I don’t believe this is true. I believe that the impact factor for a journal provides a modest signal of the selectivity of the journal — given all the journal titles out there, a tenure committee has to look for other ways to evaluate how impressive a publication event might be. In fact, research has found that impact factor is actually a better marker than prestige in some fields, especially the social sciences, because it is more objective. If we dispense with something useful, we need something better in its place. That is not on offer here.
In a more narrative editorial about DORA published in eLife, the wedge politics of DORA and the reliance on personal stories both become more clearly evident:
Anecdotally, we as scientists and editors hear time and again from junior and senior colleagues alike that publication in high-impact-factor journals is essential for career advancement. However, deans and heads of departments send out a different message, saying that letters of recommendation hold more sway than impact factors in promotion and tenure decisions.
Another way of saying this is that you need both publication in high-impact journals and letters of recommendation for career advancement. But that is common sense, not an inflammatory statement, and because controversy is a prerequisite, the situation is instead cast as an either-or choice — you have to drop one to have the other, a clearly false premise.
Another anecdote comes next:
. . . researchers on the sub-panels assessing the quality of research in higher education institutions in the UK as part of the Research Excellence Framework (REF) have been told: ‘No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs’. However, there is evidence that some universities are making use of journal impact factors when selecting the papers that will [be] included in their submission to the REF.
Again, the prohibition is against the sub-panel’s use of impact factors. But if a university is trying to submit a tight, powerful application, and if the impact factor provides a reliable path for this activity, the complaint isn’t clear to me at all. If I’m whittling down a list, I’m going to use some tools to whack away the underbrush, including impact factor, authorship factors, recency, and so forth. Conflating this activity with the activity of a sub-panel isn’t fair, but doing so highlights the group’s singular focus on the impact factor.
Alternatives are mentioned in many editorials, and of course various alt-metrics groups have signed onto DORA. Yet, alt-metrics still provide metrics, and any metric is going to be susceptible to manipulation and perceived misuse. In addition, the transparency demanded by DORA would likely be challenging for alt-metrics providers, who have their own “secret sauce” approach or unclear standards in many cases.
To me, the authors of the DORA declaration have not done sufficient legwork to see exactly how the impact factor is being used by tenure committees, have not given sufficient thought to their goals, have offered mixed messages about how to improve the situation, have thrown journals into an issue that could be dealt with separately, and have allowed some petty resentments to mix into what could be a more vaunted approach to improving academic assessments.
That’s too bad, because the best point of the declaration — that academics should be evaluated using a variety of inputs and mostly on their potential and fit — risks being lost in what is being interpreted broadly as an attack on the impact factor. And if academia thinks the problem is not its own practices and cultural attitudes, it could miss the point of the DORA declaration entirely.