A couple of weeks ago, a diverse group of volunteers met on a conference call to discuss a not-at-all-secret plan to develop a new kind of indicator for assessing scholarly journals. Yes, I know, dear reader, you’ve heard that before, but before you roll your eyes and check your email, bear with me for one more paragraph. I promise there’s a twist.
The genesis of the project was the Researcher to Reader conference back in February, where Tasha Mellins-Cohen, Mike Taylor, and I (arguably) organized and hosted a workshop on the subject of metrics. During the course of the workshop, it became obvious that the world doesn’t need any more frameworks for assessing the quality of research contained in an article or journal, or produced by a specific researcher or institution. There are already plenty of them, even if many struggle to gain traction against the dominance of the Impact Factor. Instead, we decided to look at indicators of the quality of the service provided by publishers.
In many ways, the role of the scholarly publisher is changing. In the past, it was the publisher’s job to select the most interesting, well-conducted research with the potential for the greatest impact, add value, package it up, and distribute it to interested parties around the world. While that’s still an important part of what publishers do, it’s no longer the whole value proposition. Publishers are also communication service providers. Their clients are researchers looking for the best platform for their research in order to enhance their academic reputations. As grant money becomes scarcer and the pressure to publish grows, the services provided to authors matter more and more.
This is all happening against a background of concerns about the quality of service provided by so-called predatory publishers. Most readers of this blog will be aware of the work of Jeffrey Beall, who introduced us to the term predatory publishing through his popular, but controversial, blog listing suspected bad actors, also known as a blacklist. Many researchers use indexing services like Web of Science, Scopus, and others as de facto whitelists. There’s also Cabell’s, based in Texas, which produces a journal index marketed as an actual whitelist, a list of good places to publish; the company also plans to launch a blacklist at the end of this month. Meanwhile, those with access to the academic networks of the global north tend to rely on personal knowledge and relationships with editors and societies. In short, we have a piecemeal and arguably ethnocentric system for identifying quality in scholarly publishing services.
More recently, a group of trade organizations and publishers has been cooperating on an information campaign, Think.Check.Submit, aimed at making authors aware of the risks and helping them make judgments. This approach of educating and empowering authors is a solid one. Project Cupcake aims to take it a step further by providing data and concrete information, where available and in an unbiased way, to help researchers reach their own informed conclusions.
The analogy we settled on was a cupcake factory (bear with me, it’ll make sense in a minute). If you’re a company outsourcing the creation of your tasty confectionery to a factory owner, you might want to know a bit about how the product is being made. You’d need to protect your company’s reputation and make sure you’re getting value for money. You might want to know how efficient the batter-making process is and whether the cakes are sitting too long and going stale. You might ask what’s going into the icing, whether it’s really strawberry-flavored premium frosting or just sugar paste with pink dye in it. You get the idea.
And so it goes with researchers’ interest in how their articles are handled. Researchers want to know that the publisher they’re sending their work to is actually conducting peer review. They might ask how many reviewers each article gets and how many rounds of review take place on average. They may care about acceptance rates or time to rejection. They may want to know how long it’s likely to take for their article to be released, and how quickly it’ll turn up in whichever indexing service matters to them and their discipline. They may be concerned about the quality of typesetting and how frequently errors are introduced during production. In more technical disciplines, the suitability of the output for machine learning is likely to be of interest. It might also be helpful to document which industry initiatives a journal participates in: having DOIs, supporting ORCID, and being a member of OASPA may all be useful indicators of quality and good practice.
While many publishers make at least some of this information available, it’s not aggregated anywhere and it’s not validated.
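To make the idea concrete, here is a minimal sketch of what a machine-readable record of such service indicators might look like. Every field name and unit, and the record structure itself, is a hypothetical illustration, not a proposed Project Cupcake schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class JournalServiceIndicators:
    """A hypothetical suite of service indicators for one journal.

    Field names and units are illustrative assumptions only,
    not a Project Cupcake specification.
    """
    journal_title: str
    # Peer-review practice
    reviewers_per_article: Optional[float] = None   # mean reviewers per submission
    review_rounds_avg: Optional[float] = None       # mean rounds of review
    acceptance_rate: Optional[float] = None         # accepted / submitted, 0 to 1
    days_to_first_decision: Optional[int] = None    # median calendar days
    # Production and dissemination
    days_to_publication: Optional[int] = None       # acceptance to release, median days
    days_to_indexing: Optional[int] = None          # release to relevant index, median days
    typesetting_error_rate: Optional[float] = None  # share of proofs with introduced errors
    # Participation in industry initiatives
    assigns_dois: bool = False
    supports_orcid: bool = False
    oaspa_member: bool = False
    # Provenance matters: publisher-declared numbers are not the
    # same thing as independently validated ones
    validated_fields: list[str] = field(default_factory=list)
```

Note the final field: a record like this is only as useful as the validation behind it, which is exactly the aggregation-and-validation gap just described.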
Project Cupcake is in its infancy and we’re not sure what the end point will be. Our initial goal will be some kind of white paper, publication, or report outlining a potential framework and analyzing the feasibility of gathering the data needed. Beyond that, time will tell. If we eventually make Project Cupcake into something tangible, it probably won’t be a single metric or badge of approval, but a suite of indicators that give researchers the information they need to make their own decisions.
On the first conference call, we started to work through both what researchers would want to know about a publisher and how that data could be gathered. We explored whether data might come through relationships with publishers, or perhaps through channels for researchers to feed back information.
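As a toy illustration of why having more than one channel helps, here is a hedged sketch of how publisher-declared figures might be checked against aggregated researcher reports. The function name, the indicators, and the 25% tolerance are all invented for this example, not anything the group has decided.

```python
def flag_discrepancies(declared: dict[str, float],
                       reported: dict[str, float],
                       tolerance: float = 0.25) -> list[str]:
    """Return the names of indicators where researcher-reported values
    diverge from publisher-declared values by more than `tolerance`
    (relative difference). A toy reconciliation rule, not a Project
    Cupcake method.
    """
    flags = []
    for name, declared_value in declared.items():
        if name not in reported or declared_value == 0:
            continue  # nothing to compare against
        relative_gap = abs(reported[name] - declared_value) / abs(declared_value)
        if relative_gap > tolerance:
            flags.append(name)
    return flags

# Example: a publisher claims a 30-day first decision, but authors
# collectively report closer to 55 days.
print(flag_discrepancies(
    {"days_to_first_decision": 30, "review_rounds_avg": 2.0},
    {"days_to_first_decision": 55, "review_rounds_avg": 2.1},
))  # -> ['days_to_first_decision']
```

A rule this crude would never survive contact with real data, but it shows the shape of the problem: two noisy sources, and a need to surface disagreements rather than hide them.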
However it turns out, for me the point is that the dynamics of how publishers interact with their stakeholders are changing; specifically, authors are shifting from being a resource to being a primary customer. I, and many others, have written about how we need to change, or rather augment, the way we evaluate researchers to meet these new realities. Maybe we should be thinking the same way about the journals they publish in.
If you’d like to join us in the cupcake tent, drop an email to: email@example.com
So far, the following brave souls are volunteering their time and substantial expertise:
- Tasha Mellins-Cohen, Highwire, Co-chair
- Mike Taylor, Digital Science, Co-chair
- Phill Jones, Digital Science, Co-chair
- Caitlin Meadows, Charlesworth Group
- David Cox, Taylor and Francis Group
- Diane Cogan, Ringgold, Inc
- Elizabeth Gadd, Loughborough University
- Jennifer Smith, St George’s University, London
- Katie Evans, University of Bath
- Nisha Doshi, Cambridge University Press
- Pippa Smart, Independent publishing consultant
- Ray Tellis, Taylor and Francis Group
- Rick Anderson, University of Utah
- Sam Bruinsma, Brill
- Stephen Curry, Imperial College, London
- Syun Tutiya, National Institution for Academic Degrees and University Evaluation, Japan
- Wilhelm Widmark, Stockholm University