Drawing Lines to Cross Them: How Publishers are Moving Beyond Established Norms
Looking at five ‘lines’ that the publishing industry has broadly agreed upon, but that now we are finding ourselves crossing.
A hackathon for the Financial Times Top 50 journals list is underway for those who want to shape how metrics are developed. An interview with Andrew Jack.
Sharing and evaluating early-stage research findings can be challenging, but that’s starting to change. Learn more in this guest post by Sami Benchekroun and Michelle Kuepper of Morressier.
The Altmetric “flower” is an icon, and the annual Top 100 list a much-anticipated event. But is the flower really a stalk?
Last week’s Transforming Research conference in Baltimore, MD, gathered a range of speakers across the academic and professional spectrum. Charlie Rapple highlights some of the new research that was shared, and draws out some of the prevalent themes.
At the Researcher to Reader conference, a volunteer project was launched to define a new suite of indicators to help researchers judge publishers, rather than the other way around.
A brief summary of the main citation indicators used today.
An interview with the team behind the new Release 5 of the COUNTER Code of Practice.
What should publishers know about researchers and their work? Alice Meadows and Karin Wulf follow up a post earlier this year about “Seven Things Every Researcher Should Know about Scholarly Publishing.”
After many long conversations among colleagues within and beyond the Scholarly Kitchen about what researchers need to know about scholarly publishing, Alice Meadows and Karin Wulf compiled a list of what they consider the most urgent issues.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Citation practices vary between and within STM and HSS, and they differ again by discipline and even within disciplines. Though citation metrics presume evidence of “impact,” a citation may in fact represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
The recent ORCID-CASRAI conference in Barcelona brought together over 150 researchers, research administrators, funders, publishers, vendors, and others working in scholarly communications to discuss research evaluation, with a particular focus on the social sciences and humanities, resulting in some interesting conversations and observations.
Technology is great, but does it deserve top billing? Leon Wieseltier’s essay in the New York Times, along with articles by other academics, raises a challenge to the information industry as a whole.
Businesses are using more data than ever to inform decision making. While truly Big Data may remain the province of the likes of Google, Amazon, and Facebook, publishers are nonetheless managing more data than ever before. And while the technical challenges may be less daunting with smaller datasets, challenges remain in interpreting data and in using it to make informed decisions. Perhaps the most daunting is understanding the limitations of a dataset: What is being measured and, just as importantly, what is not? What inferences and conclusions can be drawn, and what is mere conjecture? Where are the bricks and mortar solid, and where does the foundation give way beneath our feet?