Impact Metrics on Publisher Platforms: Who Shows What Where?
A review of 12 major publishers finds that they display an average of 6 journal-level impact metrics on their platforms. The Journal Impact Factor is the only metric displayed on all 12.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
Today’s guest author offers a progress report on recent efforts to build open-source technology for open access book metrics.
Data sonification is the process of translating data into sound. Here, Lutz Bornmann and Christian Leibel present the sonified results of a recent analysis of the impact of scientific team size on innovation.
This is the third and final article in a guest series reflecting on the main themes and ideas gathered and discussed at The Munin Conference at the end of 2024. Today’s focus is measuring impact.
Users (human and machine) are accessing scholarly content in new ways, challenging traditional usage analytics models. In this guest post, Tim Lloyd outlines the challenges ahead in quantifying usage.
Christos Petrou presents evidence suggesting that growth in retractions has not been universal across regions and subject areas, and that it is driven primarily by the industrial-scale activity of papermills (rather than by individual researchers) and by the growth of research output from China.
Promoting research integrity is not just identifying bad behavior: problem articles can also be detected by the absence of ‘honest’ signals of integrity.
In today’s Kitchen Essentials post, Alice Meadows interviews Tasha Mellins-Cohen, Executive Director of COUNTER Metrics (formerly Project COUNTER), which plays a critical role in enabling consistent usage metrics reporting.
Hélène Draux presents the first of a two-part effort to chart the topography of mental health scholarship. Here, established methods, including pre-existing classifications, are employed.
How can we measure the impact of research papers on influencing public policy? An interview with Euan Adie of Overton.
While higher rates of endogeny can help indexes identify journals being used for self-promotion, nepotism, or other unethical ends, endogeny itself should not be equated with such practices; it can also result from a narrow or newly emerging field of research.
Thoughts on Elsevier’s acquisition of Plum Analytics.
It is now conference season, which for me means lots and lots of editorial board meetings. The next swing comes in the fall when the fiscal year comes to a close. With 35 journals in the American Society of Civil […]
Businesses are using more data than ever to inform decision making. While the truly large Big Data may be limited to the likes of Google, Amazon, and Facebook, publishers are nonetheless managing more data than ever before. While the technical challenges may be less daunting with smaller data sets, there remain challenges in interpreting data and in using it to make informed decisions. Perhaps the most daunting challenge is in understanding the limitations of the dataset: What is being measured and, just as importantly, what is not being measured? What inferences and conclusions can be drawn and what is mere conjecture? Where are the bricks and mortar solid and where does the foundation give way beneath our feet?