A couple weeks ago, a diverse group of volunteers met on a conference call to discuss a not-at-all-secret plan to develop a new kind of indicator for the assessment of scholarly journals. Yes, I know, dear reader, you’ve heard that before, but before you roll your eyes and check your email, bear with me for one more paragraph. I promise there’s a twist.
The genesis of the project was the Researcher to Reader conference back in February, where Tasha Mellins-Cohen, Mike Taylor, and I (arguably) organized and hosted a workshop on the subject of metrics. During the course of the workshop, it became obvious that the world doesn’t need any more frameworks for assessing the quality of the research contained in an article or journal, or produced by a specific researcher or institution. There are already plenty of them, even if many struggle to gain traction due to the dominance of the Impact Factor. Instead, we decided to look at indicators of the quality of the service provided by publishers.
In many ways, the role of the scholarly publisher is changing. In the past, it was the publisher’s job to select the most interesting, well-conducted research with the potential for the greatest impact, add value, package it up, and distribute it to interested parties around the world. While that’s still an important part of what publishers do, it’s not the only part of the value proposition. In addition, publishers are communication service providers. Their clients are researchers looking for the best platform for their research in order to enhance their academic reputations. As grant money becomes scarcer and the need to publish becomes ever more pressing, the services provided to authors are becoming ever more important.
This is obviously all happening against a background of concerns about the quality of service provided by so-called predatory publishers. Most readers of this blog will be aware of the work of Jeffrey Beall in introducing us to the term predatory publishing, and of his popular, but controversial, blog that listed suspected bad actors, also known as a blacklist. Many researchers make use of indexing services like Web of Science, Scopus, and others as de facto whitelists. There’s also Cabell’s, based in Texas, which produces a journal index marketed as an actual whitelist, or list of good places to publish, and is planning to launch a blacklist at the end of this month. Meanwhile, those who have access to the academic networks of the global north tend to rely on personal knowledge and relationships with editors and societies. Essentially, we have a somewhat piecemeal and arguably ethnocentric system for identifying quality in scholarly publishing services.
More recently, a group of trade organizations and publishers have been cooperating on an information campaign, Think.Check.Submit, aimed at making authors aware of the risks and helping them make judgements. This approach of educating and empowering authors is a solid one. Project Cupcake aims to take this a step further and provide data and concrete information, where available and in an unbiased way, to help researchers reach their own informed conclusions.
The analogy we settled on was one of a cupcake factory (bear with me, it’ll make sense in a minute). If you’re a company outsourcing the creation of your tasty confectionery to a factory owner, you might want to know a bit about how the product is being made. You’d need to protect your company’s reputation and make sure you’re getting value for money. You might want to know how efficient the batter-making process is, and whether the cakes are sitting around too long and going stale. You might ask what’s going into the icing, whether it’s really strawberry-flavored premium frosting or just sugar paste with pink dye in it. You get the idea.
And so it goes with researchers’ interest in how their articles are being handled. Researchers want to know that the publisher they’re sending their work to is actually conducting peer review. They might ask how many reviewers there are per article, and how many rounds of review take place on average. They may care about acceptance rates or time to rejection. They may want to know how long it’s likely to take for their article to be released and how quickly it’ll turn up in whichever indexing service matters to them and their discipline. They may be concerned about the quality of typesetting and how frequently the publisher introduces errors during production. In more technical disciplines, the suitability of the output for machine learning is likely to be of interest. It might also be helpful to document which industry initiatives a journal participates in: having DOIs, supporting ORCID, and being a member of OASPA may all be useful indicators of quality and good practice.
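To make that concrete, here is a purely illustrative sketch of how such a suite of service indicators might be recorded for a single journal. Every field name and value below is a hypothetical example; Project Cupcake has not defined any schema.

```python
# Illustrative only: a possible record for one journal's service indicators.
# All field names and example values are invented for this sketch.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class JournalServiceIndicators:
    journal_title: str
    issn: str
    # Peer-review service
    mean_reviewers_per_article: Optional[float] = None
    mean_review_rounds: Optional[float] = None
    acceptance_rate: Optional[float] = None              # 0.0 - 1.0
    median_days_to_first_decision: Optional[int] = None
    # Production and dissemination service
    median_days_acceptance_to_publication: Optional[int] = None
    indexed_in: List[str] = field(default_factory=list)  # e.g. ["Scopus"]
    # Participation in industry initiatives
    registers_dois: bool = False
    supports_orcid: bool = False
    oaspa_member: bool = False


# Example record, with made-up numbers purely for illustration.
example = JournalServiceIndicators(
    journal_title="Journal of Illustrative Examples",
    issn="0000-0000",
    mean_reviewers_per_article=2.3,
    mean_review_rounds=1.8,
    acceptance_rate=0.42,
    median_days_to_first_decision=35,
    median_days_acceptance_to_publication=21,
    indexed_in=["Scopus"],
    registers_dois=True,
    supports_orcid=True,
    oaspa_member=False,
)
```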
While many publishers make at least some of this information available, it’s not aggregated anywhere and it’s not validated.
Project Cupcake is in its infancy and we’re not sure what the end point will be. Our initial goal will be some kind of white paper, publication, or report outlining a potential framework and analyzing the feasibility of gathering the data needed. Beyond that, time will tell. If we eventually make Project Cupcake into something tangible, it probably won’t be a single metric or badge of approval, but a suite of indicators that give researchers the information they need to make their own decisions.
In the first conference call, we started to work through both what it is that researchers would want to know about a publisher and how that data could be gathered. We explored whether data might come through relationships with publishers, or perhaps by providing channels for researchers to feed back information.
However it turns out, for me the point is that the dynamics of how publishers interact with their stakeholders are changing; specifically, authors are shifting from being a resource to being a primary customer. I, and many others, have written about how we need to change, or rather augment, the way we evaluate researchers to meet these new realities. Maybe we should also be thinking the same way about the journals they publish in.
If you’d like to join us in the cupcake tent, drop an email to: projectcupcake2017@gmail.com
So far, the following brave souls are volunteering their time and substantial expertise:
- Tasha Mellins-Cohen, Highwire, Co-chair
- Mike Taylor, Digital Science, Co-chair
- Phill Jones, Digital Science, Co-chair
- Caitlin Meadows, Charlesworth Group
- David Cox, Taylor and Francis Group
- Diane Cogan, Ringgold, Inc
- Elizabeth Gadd, Loughborough University
- Jennifer Smith, St George’s University, London
- Katie Evans, University of Bath
- Nisha Doshi, Cambridge University Press
- Pippa Smart, Independent publishing consultant
- Ray Tellis, Taylor and Francis Group
- Rick Anderson, University of Utah
- Sam Bruinsma, Brill
- Stephen Curry, Imperial College, London
- Syun Tutiya, National Institution for Academic Degrees and University Evaluation, Japan
- Wilhelm Widmark, Stockholm University
Discussion
Great initiative. Very happy to also help, if required.
Also love it. Some thoughts from my 2015 post on this topic.
http://www.thespectroscope.com/read/we-need-post-publication-peer-review-of-journals-by-lenny-teytelman-304
I have long advocated for post-publication peer review of research, but why don’t we apply post publication peer review to the journals themselves?
Two years ago, Mansi Srivastava suggested to me that we should add a service to our PubChase that would allow scientists to share review and decision times for each journal. We even built a prototype a year ago. We never released it because we were already doing too much and promoting PubChase, protocols.io, and TheSpectroscope.
The site I am envisioning now would show review times, number of rounds before acceptance/rejection, and satisfaction of the author with the manuscript handling (as in “how likely are you to submit another manuscript to this journal?” or simply “did you find the editorial/review process constructive or destructive?”). No ranking or score for journals, but full transparency and feedback – I bet not all journals would fare equally well.
——————-
(Also a lot of good discussion in the comments of the above post.)
TOTAL SERVICE INFLUENCES CHOICE OF PLATFORM
So, “Publishers are communication service providers. Their clients are researchers looking for the best platform for their research … [for which there are many motives, apart from that which you allude to].” And when seeking the best platform, researchers look at the totality of the communication services provided, which includes commenting on articles and how easy it is to access those comments. For example, in the Scholarly Kitchen one identifies oneself in some way when contributing a comment, but not when reading comments, which can be quickly scanned by busy people who do not have time to spare. Not so with the UK’s leading science journal Nature. Having been enticed to read a fascinating article, one is confronted with “Commenting is not currently available.” This usually turns out to be entirely false: if one logs in, one can read and contribute as one wishes. Unlike the author of the present article, I will not conjure up editorial motives for this; suffice it to say that it is just one factor that turns one off the idea of Nature as a suitable publication platform.
Your only goal is to publish a report, yet you wonder what authors could possibly want from publishers?
I’m afraid that you may have misunderstood me, so my apologies for being unclear.
We’re in uncharted waters here, so it’s impossible to really say what a solution will look like until after the problem has been fully scoped out and the feasibility assessed. Given the innovative nature of this work, a phased approach is necessary.
The immediate goal is to create some kind of documentable framework and plan for execution. Beyond that, we’ll have to think about a range of issues from technical feasibility, to governance, to sustainability.
Like I said, “Project Cupcake is in its infancy and we’re not sure what the end point will be.”
You’re overthinking what authors want from journals/publishers is what I mean. Best cupcakes are simple, while this cupcake seems bent on incorporating all sorts of different ingredients.
You’ll end up with a fancy novelty cupcake that no one will enjoy eating.
You don’t indicate whether this will be an open product/website or require some form of payment or subscription. I hope it will be open.
Will you have research faculty as part of this working group? Or will you take your ideas to faculty for feedback? Looks like you have a good mix of publishers and librarians already.
A great idea…and a great name for it!
Hello Robin,
That’s a very good point about openness. I’m not sure what model for sustainability we’d recommend if we even recommend one at all, but I’d like to see the information made as widely available as possible.
We already have one member of a research faculty in the group, Prof Stephen Curry of Imperial College. I can’t speak for the other group members, but I’d certainly like to add one or two more research academics if we can find people interested in working on it. As you noticed, we also have a number of librarians in the group who may be able to help us with reaching out to academics for feedback.
Dear Phil,
Could Quality Open Access Market (www.qoam.eu) be an example of the service you are thinking of?
In QOAM, quality information is based solely on academic crowdsourcing. Libraries use Base Score Cards to judge the transparency of a journal’s website with respect to the editorial board, peer review, governance, and workflow; QOAM currently has 5,000 such cards. Authors may share their experience with a journal via Valuation Score Cards, and 3,000 have already done so. Based on these two indicators, journals are SWOT-categorized: Strong, Weaker, Opportunity (for publishers), and Threat (to authors).
In addition, QOAM collects price information: the publication fee an author actually paid (the last question in the Valuation card) and the price quoted on the publisher’s website (the last question in the Base card). Finally, it reflects the discounts obtained via offsetting deals and memberships.
Thus QOAM tries to help authors make well-informed decisions about the journal they submit their article to.
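As a purely illustrative aside, a two-indicator categorization like the one QOAM describes might look something like the sketch below. The 1-5 scales, the threshold, and the assignment of labels to quadrants are all assumptions made for illustration, not QOAM’s actual scoring rules.

```python
# Hypothetical sketch: combine a site-transparency score (Base Score Card,
# from libraries) with an author-experience score (Valuation Score Card)
# into one of the four labels mentioned above. The scales, threshold, and
# label-to-quadrant mapping are guesses for illustration only.
def swot_category(base_score: float, valuation_score: float,
                  threshold: float = 3.0) -> str:
    strong_base = base_score >= threshold
    strong_valuation = valuation_score >= threshold
    if strong_base and strong_valuation:
        return "Strong"
    if strong_base and not strong_valuation:
        return "Opportunity (for publishers)"
    if not strong_base and strong_valuation:
        return "Weaker"
    return "Threat (to authors)"


print(swot_category(base_score=4.2, valuation_score=4.5))  # -> "Strong"
print(swot_category(base_score=2.1, valuation_score=1.8))  # -> "Threat (to authors)"
```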
I would be interested in joining this project if it extends to books as well, not just journals. Many of the same questions can be asked about book publishers and the quality of the service they provide, but in book publishing the role of the staff acquiring editor has no counterpart in journal publishing, and it is quite crucial to the success and reputation of a book publisher. Back in 1994 I published a 50-page article on “Listbuilding at University Presses” in an edited volume titled Editors as Gatekeepers, in which I identified nine distinctive roles that acquiring editors play that might be used as a starting point for an evaluative metric. It should be noted that individual scholarly societies have from time to time surveyed their members to rate book publishers; the American Political Science Association, for example, did this back in 2011. By the way, quality of copyediting should be part of any such list, for both journal and book publishers. I wonder why it was not included in the list of ratable services here?
Interesting initiative. We need more transparency in academic journals.
When seeking a manufacturer of cupcakes, the commissioning company would have a set of specs, and the manufacturer would present how it would meet those specs, not, as you suggest, the other way around.
What you appear to be doing is making a grid with publishers running down the page and services provided across the top, with a check mark or a numeric score by each item indicating just how well the publisher meets the goals.
The problem with these sorts of things is that personnel and goals within a company change. Additionally, you may find that an evaluation applies to only one journal, or to several, but not universally throughout the company’s portfolio.
Thus, your evaluations are being applied to a multiple set of moving targets within one company.
You have taken on a daunting task.
Good luck!
Hi Harvey.
You know, all analogies break down when you look too closely at them.
The basic idea here is that our current indicators attempt to judge the quality of a journal based on the quality or impact of the work of the authors. Since authors are a customer of publishers, that seems rather backwards. That’s a bit like judging a car’s safety based on how well people drive, rather than whether the car has seat belts and air bags. Publishers really ought to be judged by the level of service that they provide to their customers, at least on some level.
I know why it is the way it is, but I think we can and should do better.
You’re right that publishers have multiple journals, which may have different standards, which is why this will probably be a journal-level indicator rather than a publisher-level one.
You’re also correct that performance will change over time, just as impact factors and levels of readership change. That’s at least part of the point of having metrics in the first place: they change over time.
At the risk of beating a metaphor to death: Not everyone is looking for the same thing in a cupcake.
Not every author is seeking the same thing when they are choosing a journal to which to submit their article. They might be looking for very different characteristics, depending on their career goals (at that moment), the time-sensitivity of the article’s release, the quality and level of interest the authors themselves have in the research they are reporting – and on, and on…
The same authors writing another article 6 months later will have a different spectrum of values from that which drove the submission of the prior works.
Even the same article, rejected by one journal, will have a different decision profile when it is submitted to a different journal.
It seems to me that you are not really aiming at a new “metric” so much as at a kind of fingerprint or set of descriptors about a journal to match against an author’s current profile of needs in submitting THIS article at THIS time.
Hi Marie,
You’re absolutely right. Not every author will want the same types or level of service every time. That’s why I’m personally thinking that we should aim for some kind of suite of indicators, rather than a single metric or ‘badge of approval’.
My thought is that providing a range of indicators will make it possible for potential authors to know what they’re buying in advance. I’m not totally convinced that bringing transparency will be enough to bring real choice in the marketplace, but I’m sure that you can’t have meaningful choice without transparency.
I have to stress, though, that Cupcake isn’t my project; it’s a joint effort from a group of volunteers, so I can’t predict what the outcome will be.
Academicians have already started working on this. You may visit the following link for a similar attempt made by one.
https://www.researchgate.net/publication/315720114_Speed_and_Processing_Time_of_Journals_in_Management_International_Business_and_Marketing
Phill,
I like the construct for Cupcake, especially since the focus is the author. This approach recognizes the varying needs of authors and would expand and collect the appropriate metrics for that aspect of the service. So, a couple of thoughts: 1) This leaves me wondering how some internal metrics, such as speed to publication, could be audited. 2) Once all this data is collated, it could lead to ‘norms’ or best practices within, or perhaps across, disciplines.
Thanks Judy,
The question about how to audit internal metrics like time to publication is a great one and will be one of the questions we’ll have to address. We’ve already started discussing ideas. It’s possible that we may be able to cross-check through author feedback or something along those lines; we’ll have to see.
It would be great if it did lead to some level of normalisation or at least transparency. To my mind, the big thing is to give authors the information that they need to make their own, informed decisions. I’m a believer in letting the market straighten itself out, but the market can’t do that if there’s not enough information for consumers to make informed choices. It’s possible that de facto norms will emerge out of that process.
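As a purely hypothetical illustration of the cross-check idea mentioned above, a publisher-reported figure could be compared against the median of author-reported values and flagged when the two diverge. All names and thresholds below are invented for the sketch.

```python
# Hypothetical cross-check: flag a journal when a publisher-reported figure
# (e.g. median days from acceptance to publication) differs from the median
# of author-reported values by more than a chosen tolerance.
from statistics import median
from typing import List


def flag_discrepancy(publisher_reported_days: float,
                     author_reported_days: List[float],
                     tolerance: float = 0.25) -> bool:
    """Return True if the publisher figure deviates from the author-sourced
    median by more than the given fractional tolerance."""
    if not author_reported_days:
        return False  # nothing to cross-check against yet
    crowd_median = median(author_reported_days)
    return abs(publisher_reported_days - crowd_median) > tolerance * crowd_median


# Example: publisher claims 20 days, authors report considerably longer times.
print(flag_discrepancy(20, [31, 28, 45, 33, 40]))  # -> True
```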
It seems important to point out that the length of time a publisher takes to reach a decision is less under the publisher’s control than the time it takes to produce a book once a finished manuscript is delivered. Peer review involving academic experts is unpredictable because those experts take varying amounts of time to perform their reviews, even when they initially agree to deadlines. So it would not be appropriate to hold publishers solely accountable for the amount of time this part of the process takes.
Please stop treating authors as naive people who do not know where to publish. The so-called “Think. Check. Submit” is a capitalist initiative by big industrial publishers who wish to monopolize the profitable publishing industry.
All these classifications go against the scientific principles of objectivity and disinterest, favoring money and size instead: the more money you have, the more likely you are to end up on a whitelist.
To the point about what researchers want to know about the cupcakes (I mean articles), researchers as readers and authors *Do* want to know more about the peer review process as it applies to the particular article they are reading/writing. Transparency about the process is an indicator of reliability/trust.
Some publishers are using Peer Review Evaluation (PRE) to share this information. The service provides one-click access to information about how many reviewers were involved, the number of rounds of review, the type of peer review (single blind is researchers’ preferred approach), and more. These are indicators of quality and trust for the article being read. The service also provides easy access to peer review policies. When it comes to judging the quality of research that they may reference in their own articles or use to advance their own research, readers take the time to judge the quality for themselves, and they would find information about peer review useful. Survey information on this topic is available at PRE-val.org/research. Disclosure: TBI Communications conducted the survey.
Thanks Anne,
That’s a very good point that PRE is already doing some information gathering here. At the very least, that’s evidence that at least some publishers are prepared to make this sort of information available. I think we’re going to see more of this in the future.
“We explored whether data might come through relationships with publishers, or perhaps by providing channels for researchers to feed back information.” You may also want to explore partnering with a metadata repository to send an automated email to corresponding authors with a standard set of questions about their experience with the publisher, as a way of measuring the quality of the journal in which the article was published. This would automate the collection of author feedback, which could then be extrapolated into quality indicators for the journal. The automation could be triggered as soon as a new metadata record is deposited by the publisher in a repository such as Crossref or PubMed.
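As a rough sketch of that automation, the snippet below polls the Crossref REST API for newly registered works in a journal and emails a standard feedback survey to the corresponding author. Crossref metadata does not normally include author email addresses, so the contact lookup is left as a placeholder that would have to be filled by the publisher or another source; the survey URL and sender address are likewise hypothetical.

```python
# Sketch of the feedback automation described above, using the public
# Crossref REST API (api.crossref.org) as the metadata repository.
import smtplib
from email.message import EmailMessage

import requests

SURVEY_URL = "https://example.org/author-survey"   # hypothetical survey form


def new_works_for_journal(issn: str, since_date: str) -> list:
    """Return Crossref work records for a journal registered since a date."""
    resp = requests.get(
        f"https://api.crossref.org/journals/{issn}/works",
        params={"filter": f"from-created-date:{since_date}", "rows": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]


def get_corresponding_email(work: dict) -> str:
    """Placeholder: Crossref metadata rarely carries author emails, so the
    contact would have to come from the publisher or a separate service."""
    raise NotImplementedError


def send_feedback_request(email_address: str, doi: str, smtp_host: str) -> None:
    """Send the standard set of feedback questions as a survey invitation."""
    msg = EmailMessage()
    msg["Subject"] = f"How was your publishing experience? ({doi})"
    msg["From"] = "feedback@example.org"            # hypothetical sender
    msg["To"] = email_address
    msg.set_content(
        "Your article has just been registered. Please tell us about the\n"
        f"review and production service you received: {SURVEY_URL}?doi={doi}\n"
    )
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    for work in new_works_for_journal("0000-0000", "2017-05-01"):
        try:
            send_feedback_request(get_corresponding_email(work),
                                  work["DOI"], "localhost")
        except NotImplementedError:
            pass  # author contact lookup not available in this sketch
```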