Editor’s note: Today’s post is co-authored by Lisa Janicke Hinchliffe, TSK Chef, and Heather Parkin, an MS/LIS candidate at the iSchool at the University of Illinois Urbana-Champaign. Heather works at the Illinois Fire Service Institute Library, which supports researchers in fire science and firefighter candidates across the state. After her graduation in May, Heather intends to pursue a position in academic librarianship with a focus on electronic resources management and scholarly communication. 

Impact metrics are a ubiquitous and contentious part of scholarly publishing. Although theories and methods for objectively measuring scientific activity have existed for hundreds of years, the metrics we use today were largely developed in the second half of the twentieth century. Alex Csiszar’s history of this development demonstrates that, from the metrics’ inception, there were concerns that researchers would change their behavior to manipulate them. The possibility of misuse is not a recent phenomenon.

As of October 2025, there are over twenty-five thousand signatories of the 2012 San Francisco Declaration on Research Assessment (DORA), which calls for de-emphasizing journal-based metrics, especially the Journal Impact Factor, in assessing individual researchers’ work and career progression. These signatories include major academic publishers such as Elsevier, Springer Nature, and Sage Publications, as well as universities and funding organizations.

Today, we report on an effort to investigate the degree to which major publishers appear to “reduce emphasis on the journal impact factor as a promotional tool” and “make available a range of article-level metrics,” as recommended by DORA, or whether they otherwise encourage a diversification of scholarly assessment approaches and a reduced emphasis on numeric measurements.

Judgement of Paris, engraving by Raphael Sadeler I, 1589

Our investigation entailed systematic observation of journal- and article-level impact metrics, their placement, and their visibility as presented on twelve major publisher platforms. We selected the twelve publishers that had published the most individual journal articles in 2023, based on data from Dimensions.ai: IOP Publishing, Oxford University Press, Wolters Kluwer, American Chemical Society (ACS), Sage Publications, Frontiers, Institute of Electrical and Electronics Engineers (IEEE), Taylor & Francis, Wiley, MDPI, Springer Nature, and Elsevier.
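
For readers who want to reproduce the selection step, a minimal sketch follows; it assumes a hypothetical CSV export of per-publisher 2023 article counts (the file and column names are our invention, not an actual Dimensions export format):

```python
# Minimal sketch of the publisher-selection step, assuming a hypothetical
# CSV export of per-publisher 2023 article counts. The file name and
# column names are illustrative, not Dimensions' actual export format.
import csv

with open("dimensions_2023_articles.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))  # expects columns: publisher, article_count

# Rank publishers by 2023 article output and keep the top twelve.
rows.sort(key=lambda r: int(r["article_count"]), reverse=True)
top_twelve = [r["publisher"] for r in rows[:12]]
print(top_twelve)
```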

For each publisher platform, we ran a keyword search for the term “reliability” (data collection occurred in September and October 2025), limited the results to articles published in 2023, and sorted them in relevance order. We then selected two articles from two different journals. When possible, we selected the fifth and tenth search results for analysis; however, we adjusted as needed for platform differences.
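
As a rough illustration of that selection rule, here is a minimal sketch; the `results` structure and the function name are our own invention, since each platform exposes search results differently:

```python
# Illustrative sketch of the article-sampling rule: take the 5th and 10th
# relevance-ranked 2023 results for the query "reliability", then adjust
# until the two picks come from different journals. The `results` structure
# is hypothetical; real platform search interfaces vary.

def pick_articles(results):
    """results: relevance-sorted list of dicts with 'title' and 'journal' keys."""
    first = results[4]   # fifth search result (0-indexed)
    second = results[9]  # tenth search result
    # Adjust as needed so the two articles come from different journals.
    idx = 10
    while second["journal"] == first["journal"] and idx < len(results):
        second = results[idx]
        idx += 1
    return first, second
```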

Once two articles appearing in two different journals had been selected, we reviewed the platform for impact metrics, starting from the journal home page and/or the article web page. For each metric located, we recorded what the metric was, what the metric was called, its value, the number and type of pages displaying the metric, its location on a page, its visibility on the page, and whether an explanation of how the metric was calculated could be reached from that page.
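
A simple record along the following lines conveys the fields we captured for each located metric; the class and field names are our own, sketched for illustration:

```python
# Sketch of the observation record kept for each metric we located.
# Field names are our own; the fields mirror those listed above.
from dataclasses import dataclass, field

@dataclass
class MetricObservation:
    metric_type: str          # what the metric was (e.g., 2-year JIF)
    displayed_name: str       # what the publisher called it
    value: str                # the value shown at time of observation
    pages: list[str] = field(default_factory=list)  # pages displaying it
    page_location: str = ""   # where on the page it appeared
    visible_without_scrolling: bool = False
    explanation_linked: bool = False  # could a how-it's-calculated note be reached?
```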

What We Found: Journal-Level Metrics

Although the number and type of metrics reported across journals from the same publisher were mostly consistent, we found a wide range in the number of journal-level metrics across our sample of publishers. The average number of metrics reported by a publisher was six; the median was five.
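
To make the arithmetic concrete, the sketch below uses hypothetical per-publisher counts chosen only so that they reproduce the reported mean and median (and the tie for fewest noted below); they are not our observed counts:

```python
# Hypothetical per-publisher metric counts, chosen only to illustrate the
# arithmetic; they reproduce the reported mean (6) and median (5) but are
# NOT the actual observed counts, which we do not reproduce here.
from statistics import mean, median

counts = [13, 11, 11, 8, 6, 5, 5, 4, 4, 3, 1, 1]  # twelve publishers
print(mean(counts), median(counts))  # 6 5.0
```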

Sage Publications reported the most types of journal-level impact metrics, while Wolters Kluwer and Elsevier tied for the fewest. Three publishers each reported one or more metrics unique to their platform: Sage Publications (thirteen metrics), Oxford University Press (eleven), and MDPI (also eleven). Sage Publications reported the h5-median, the Journal Citation Indicator Ranking, and the 5-Year Journal Impact Factor Category Ranking; MDPI told users how many of a journal’s articles had been cited ten or more times; and Oxford University Press reported the Cited Half-Life of the journal.

Journal-level metrics appeared in locations that fell into one of five categories:

  • The journal home page.
  • An “about the journal” page.
  • A page specifically dedicated to reporting journal metrics.
  • A page that reported the metrics of all the journals from that publisher.
  • A recurring element, such as a banner, which appeared on most webpages concerning the journal in question.

Publishers were most likely to report at least one metric on the journal home page, and least likely to report metrics on a page that grouped together the impact metrics for all of that publisher’s journals.

The only metric reported by all twelve publishers was the 2-year Journal Impact Factor (JIF). The JIF was also the metric most consistently reported on the journal home page and the metric most likely to be visible without scrolling on one or more pages. Even on publisher platforms that reported ten or more metrics, the JIF was set apart: it was one of a handful of metrics repeated both on a page (or other location) dedicated to collating impact metrics and on the journal home page or a recurring element. In other words, the JIF was consistently made more visible through repetition and prominent placement. The 5-year JIF was reported less consistently than the 2-year JIF but was still present on a majority of publisher platforms.

What We Found: Article-Level Metrics

Drawing conclusions about which metrics were most often reported at the article level was complicated by inconsistent definitions of usage metrics and the variety of approaches taken to reporting citation data. Eleven of the twelve publishers provided some form of article-level metric. The only publisher that did not, Wolters Kluwer, was also the only publisher reviewed that is not a DORA signatory. All publishers that reported article-level metrics reported citation data.

Different publishers sourced their citation data from different places. Some used only one source, while others used up to five. CrossRef was the most used source, followed by Web of Science, Scopus, and Dimensions. Frontiers did not state on its article pages which database its citation data came from, but it did cite Dimensions as its source for citation data in its 2023 impact report.
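
Where CrossRef was the source, a displayed count can be checked against the public Crossref REST API, which reports a citation count for each registered DOI. A minimal sketch of such a lookup (the example DOI, one cited in the discussion below, is arbitrary):

```python
# Minimal sketch: look up a work's citation count from the public Crossref
# REST API. The DOI below is arbitrary; any registered DOI works.
import requests

doi = "10.1002/2017WR021125"
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
count = resp.json()["message"]["is-referenced-by-count"]
print(f"Crossref citation count for {doi}: {count}")
```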

Although publishers used similar words and phrases to describe usage metrics at the article level, our analysis revealed that two metrics with the same name from different publishers may record different types of access. For example, in its “Learn About These Metrics” popover, the American Chemical Society defines article views as “the COUNTER-compliant sum of full text article downloads… (both PDF and HTML) across all institutions and individuals.” However, Taylor & Francis defines views/article views on the “Metrics” tab associated with each article as “the cumulative total PDF, EPUB and full-text HTML views and downloads.” MDPI reports article views at the end of each article as well, but we were unable to locate any explanation of what that count records, other than the assurance that “multiple requests from the same IP address are counted as one view.”
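
To make the comparability problem concrete, the sketch below encodes the three stated (or missing) definitions as sets of access types; the breakdown is our reading of the quoted text, not an official schema from any of these publishers:

```python
# Sketch: the same label ("views") aggregates different access types on
# different platforms, per the definitions quoted above. The mapping is our
# reading of each publisher's stated definition, not an official schema.
VIEWS_DEFINITIONS = {
    "ACS": {"pdf_downloads", "html_full_text"},          # COUNTER-compliant sum
    "Taylor & Francis": {"pdf_downloads", "epub_downloads", "html_full_text"},
    "MDPI": None,  # definition not located; only IP deduplication is stated
}

def comparable(publisher_a: str, publisher_b: str) -> bool:
    """Two 'views' figures are comparable only if both definitions are
    known and count the same set of access types."""
    a, b = VIEWS_DEFINITIONS[publisher_a], VIEWS_DEFINITIONS[publisher_b]
    return a is not None and b is not None and a == b

print(comparable("ACS", "Taylor & Francis"))  # False: EPUB counted only by T&F
```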

Implications

Although eleven of the twelve publishers reviewed signed the JIF-critical DORA, the JIF maintains a prominent place in how publishers present the impact of the content they publish. We did not have access to historical data to examine if or how the prominence and placement of the JIF have changed on these platforms over time, so we cannot make any claims about whether publishers have made an effort to de-emphasize this metric since 2012. However, our observations support the claim that the JIF maintains a special status among scholarly impact metrics.

The second most reported metric among the reviewed publishers was the CiteScore. This newer metric has been praised for its transparency but criticized over the choice to calculate the citation rate across all document types published in a journal, rather than only “citable items.” The inclusion of CiteScore on ten of twelve journal platforms may indicate that the CiteScore is gaining the same prominence as its older relative, despite initial claims that it could not act as a viable alternative to the JIF.

Among our sample, being a DORA signatory correlated with the provision of some kind of article-level metric, most commonly the number of times the article has been cited. It seems publishers are paying some attention to the clause in DORA that encourages provision of “a range of article-level metrics.” An area for further study would be to investigate a larger sample of publishers to see whether this relationship holds.

Finally, even when two publishers use the same label for article-level readership metrics, users of these data should be aware that the figures may not be directly comparable. This aligns with other researchers’ observations that the apparent comparability of readership or usage statistics may be misleading: similar labels do not guarantee comparability across publisher platforms.

Conclusion

Our research shows large variations in the number, type, and placement of journal- and article-level impact metrics presented on major publisher platforms. The JIF emerged as the most common and most prominent impact metric reported, suggesting that more could be done to de-emphasize this metric in line with the goals of DORA; CiteScore was a near second in prevalence. Analysis of article-level metrics was complicated by publishers using identical or similar terms to indicate different usage patterns, such as downloads only versus downloads plus web page views. Citation data came from a wide variety and number of sources, meaning researchers should be careful when comparing citation statistics from different publisher platforms.

Lisa Janicke Hinchliffe

Lisa Janicke Hinchliffe is Professor/Coordinator for Research Professional Development in the University Library and affiliate faculty in the School of Information Sciences, European Union Center, Center for Global Studies, and Center for Social and Behavioral Science at the University of Illinois at Urbana-Champaign. lisahinchliffe.com

Heather Parkin

Heather Parkin is an MS/LIS candidate at the iSchool at the University of Illinois Urbana-Champaign. Heather works at the Illinois Fire Service Institute Library, which supports researchers in fire science and firefighter candidates across the state. After her graduation in May, Heather intends to pursue a position in academic librarianship with a focus on electronic resources management and scholarly communication. 

Discussion

3 Thoughts on "Impact Metrics on Publisher Platforms: Who Shows What Where?"

I’d love to see publishers just showing their COUNTER-compliant Unique Item Requests and Unique Item Investigations instead of ‘abstract views’ and ‘full text downloads’. The standard is there, let’s use it!

Thanks Lisa and Heather for a great analysis. Two comments. First, so-called “article level metrics” are very useful information. Citation and altmetric links improve context and efficiency for readers, and improve discovery. Ergo they improve trust, public understanding, and research discovery. Glad they are included.

On journal-level metrics: although these can be argued to have some value, the long-standing beef is that the “scholarly” community continues to use and abuse scholarship around them, which degrades trust and acts to game the system (aka commit what most would call fraud if an author did it), and it is driving decisions on publication types and referencing. No uncertainties or variability are included, and Clarivate and others report these to several insignificant digits. At one public meeting, I heard these extra digits defended as a way to rank journals at a fine scale! One study Martin Clark and I did shows this is just noise (https://doi.org/10.1002/2017WR021125). Clarivate and others could easily normalize by publication date within a year to prevent gaming by front-loading publications (they don’t, and “scholarly” journals do). They could easily consider measures of variability and show them (they don’t), which would eliminate all but one significant digit. Etc. It’s not that these metrics are just bad ones; it’s that they act to corrupt trust and weaken journals’ and publishers’ role in improving trust, because we are knowingly propagating, at worst promoting, and engaging in bad science.

I think the JIF is always going to appear prominently, as it’s still very important to many authors. However, I think the move by publishers to add more metrics (up to thirteen!) is actually a good-news story and a significant improvement over the limited information that was typically provided ten or so years ago. Obviously we can always do better, but it’s good to celebrate progress.
