In recent months, investigations and allegations have called into question what some journals consider to be a “peer” and what exactly constitutes a “review.” The scrutiny manuscripts receive varies so widely across journals that the term “peer review” may hold very little value in itself. For some journals, the phrase has been used to create entirely false pretenses about what takes place between submission and publication.
Perhaps not surprisingly, some see a market in measuring and rating the peer review process across journals. I recently interviewed Adam Etkin, Director of Publishing at the Academy of Management and frequent commenter on the Scholarly Kitchen, about his new venture, called preSCORE, which is described as a new metric that “measures the level of peer review conducted prior to the publication of scholarly material.”
Disclaimer: preSCORE was recently acquired by STRIATUS/JBJS, publisher of the Journal of Bone & Joint Surgery, whose CEO is Kent Anderson, fellow blogger and current President of the SSP. To avoid conflicts of interest, Kent has recused himself from participating in the interview and the comment section below.
Q: What problem do you hope to solve with preSCORE?
A: The short answer is that we will help to verify that a journal which claims to be conducting peer review is in fact doing what it claims. The slightly longer answer is that preSCORE aims to support journals and publishers who value ethical, rigorous peer review, as well as those who read scholarly publications.
Q: When did you come up with the idea of preSCORE?
A: While out for a jog on a Saturday morning during the fall of 2009. I do not remember the exact date or time. Over the years I’d revisit the idea, tweak things, and get feedback from people in the industry. I was very fortunate that while Director of Publishing for the Academy of Management I was able to pursue my master’s degree in publishing, and I wrote my thesis on peer review and the preSCORE concept.
Q: A solution to the problem of trust in peer review has been transparency. The EMBO Journal, for example, publishes the entire peer review process, including all referee comments, editorial decision letters, author responses, and the timelines of submissions, decisions, revisions and publications. You have decided to come up with a single numerical ranking that summarizes the quality of peer review. Can you explain the benefits of your approach?
A: First, I think efforts such as your EMBO example are great. However, in my opinion, most journals aren’t ready to go that far, at least not yet, and they may lack the resources to do so even if they wanted to. Second, I want to stress that preSCORE is more than just a metric or a “single numerical ranking that summarizes the quality of peer review.” When we started to show people our idea, they were too focused on “What’s a good number? What’s a bad number?” Our view is that ANY participation in preSCORE is good. ANYTHING we as a scholarly community can do to promote and support legitimate, ethical, rigorous peer review helps everyone. As I said earlier, at the most basic level we will let users know that an article was peer reviewed. If the user wants more details, including the metric, they can drill down for that. At the journal level we consider other factors such as COPE membership, plagiarism screening practices, transparency of retraction policies, and other best practices which we think should be followed by legitimate scholarly journals.
Q: The preSCORE algorithm is based on the h-indices (a numerical index of the productivity and citation performance of an author’s papers) of a journal’s editors and reviewers. Can you explain how the h-index measures the quality of peer review a paper receives?
A: At first, as I developed the idea, I was trying to figure out a way to indicate how many “eyeballs” looked at something prior to publication. That’s where the first algorithm came from. As I thought about it, I wanted to come up with a way to also tell what “types of eyeballs” were involved, or how “expert” the people in the peer review process were. Factoring in the h-index seemed to be a way we could try to do that.
Q: The preSCORE algorithm assigns three numerical values to editors (0.4 to the Editor in Chief (EIC), 0.3 to the Associate Editor (AE), and 0.2 to each of the reviewers). How did you come up with these weights? Does the EIC (who may spend just a few minutes reading the abstract and assigning an AE) provide twice the value of someone who spends two to five hours reviewing a paper?
A: Not the first time someone has asked me that! We think of an EIC as the Captain of a ship. They focus on the mission of the journal and help make sure that the ship (the journal) is on course; an EIC makes sure that the editorial board members, the AEs and reviewers are performing their duties properly and effectively. Ultimately a journal EIC is responsible for what gets published, so the buck stops there, so to speak. That’s why we’ve weighted them the highest in our algorithm. Also, there are generally more reviewers involved so I think that balances things out a bit.
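The exact preSCORE formula is not published in this interview, but the role weights described above suggest a weighted sum over the h-indices of everyone who touched the paper. A minimal sketch under that assumption (the function name, data structure, and h-index values are illustrative, not preSCORE’s actual implementation):

```python
# Illustrative sketch only: the real preSCORE formula is not public;
# the structure here is inferred from the weights quoted in the interview.
ROLE_WEIGHTS = {
    "eic": 0.4,       # Editor in Chief
    "ae": 0.3,        # Associate Editor
    "reviewer": 0.2,  # each reviewer contributes at this weight
}

def weighted_review_score(participants):
    """participants: list of (role, h_index) tuples for one article."""
    return sum(ROLE_WEIGHTS[role] * h for role, h in participants)

# Hypothetical article: EIC (h=30), AE (h=20), two reviewers (h=10, h=5)
score = weighted_review_score([("eic", 30), ("ae", 20),
                               ("reviewer", 10), ("reviewer", 5)])
print(score)  # 0.4*30 + 0.3*20 + 0.2*10 + 0.2*5 = 21.0
```

Note how, because each reviewer contributes 0.2, a panel of three reviewers collectively outweighs the EIC’s 0.4, which is the balancing effect Etkin describes.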
Q: There is research suggesting that the quality of review declines with age (Callaham, 2011); at the same time, the h-index of authors increases with age. A graduate student (or post-doc) may provide an excellent quality review yet have a very small h-index. At the other end, an emeritus professor may provide a very poor review and have a very high h-index. Is it possible to distinguish quality from longevity in these cases?
A: Another good question and something we’re thinking about. I’m aware of these types of studies, but I’m not certain older reviewers are necessarily poor reviewers. Having said that, it is absolutely true that a younger reviewer can do a great job. H-index seems to be the most widely accepted and available metric for what we are trying to accomplish. We are looking into whether or not the m-index, which takes into account the length of someone’s publication history, might be an alternative.
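For reference, the m-index mentioned above (Hirsch’s m quotient) normalizes the h-index by career length, which is how it corrects for the longevity effect raised in the question. A one-function sketch (the example h-index values are hypothetical):

```python
def m_index(h_index, years_since_first_publication):
    """Hirsch's m quotient: h-index divided by career length,
    making junior and senior researchers more comparable."""
    return h_index / years_since_first_publication

# A hypothetical emeritus professor (h=40 over a 40-year career) and a
# junior researcher (h=5 over 4 years) end up with comparable m values:
print(m_index(40, 40))  # 1.0
print(m_index(5, 4))    # 1.25
```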
Q: Similarly, a journal that employs professional editors, like Nature, may score much more poorly than one that relies entirely on researchers, like eLife. Care to comment?
A: I’d just repeat what I said earlier. We don’t want to focus on “scores” as much as participation. It’s amazing when an athlete wins a gold, silver, or bronze medal, but it’s also amazing for any athlete simply to participate in the Olympics.
Q: Your algorithm also includes the square root of the article version number in its denominator. Why does a paper that has gone through multiple revisions indicate poorer peer review? To my thinking, it suggests just the opposite.
A: I don’t think it indicates poorer peer review at all. A paper that is really well written by a researcher who is tops in their field might come into a journal and need only one or two revisions. Is that “bad?” Of course not. We use the square root because typically earlier rounds of review are much more rigorous than later rounds. By the final rounds of review, some reviewers may have dropped off, or they’re basically just saying, “Yeah, they’ve addressed my concerns. Accept!”
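The version adjustment described above can be sketched as dividing the score by the square root of the version number, so each additional round dampens the score less than the last. This is a hedged reconstruction from the interview, not preSCORE’s published formula, and the input values are hypothetical:

```python
import math

def version_adjusted_score(weighted_sum, version):
    """Dampen a weighted review score by the square root of the
    article version, reflecting the claim that later review rounds
    are typically less rigorous. (Illustrative form only; the actual
    preSCORE formula is not public.)"""
    return weighted_sum / math.sqrt(version)

# A hypothetical weighted review sum of 21.0 after three review rounds:
print(round(version_adjusted_score(21.0, 3), 2))  # 21.0 / sqrt(3) ≈ 12.12
```

Using the square root rather than the raw version number means the penalty grows sub-linearly: going from version 1 to 2 costs more of the score than going from version 4 to 5, matching the claim that early rounds matter most.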
Q: How does your algorithm adjust for missing data? Do you impute values?
A: The system would flag instances like those so staff could go in via an admin interface to see what’s going on and manually enter the appropriate info.
Q: What is the difference between preSCORE and preVAL?
A: While the preSCORE metric is the “heart” of the system, as I mentioned, we did not want people to be put off or confused by numbers and rankings. That’s why we came up with preVAL. PreVAL answers the first, most basic question: “Was this peer reviewed?” Just the appearance of preVAL lets the user know the answer to that question is “yes.” We want to give the user an indicator of rigor, but we also want to try to avoid that idea of conflict from a journal’s point of view. Users who want more details about the peer review process behind an article can click the preVAL link and open a preSCORE window that displays additional information such as how many rounds of review were conducted, what roles participated in the review, the preSCORE metric, and more. Some journals may elect to share the reviewer comments for each round. Some may want to publish reviewer names while others may not. Right now preVAL is an article-level, not journal-level, tool. Having said that, as we see wider adoption we may expand this to the journal level.
Q: From your FAQ page, it appears that the publisher is the customer of your services. As we’ve seen in the financial market, there is a risk when ratings agencies have a financial relationship with those they intend to evaluate. How does your service address this conflict of interest?
A: If I understand what you’re asking, I don’t see a conflict of interest. Our evaluation and services must be 100% on the up and up. Anything less will destroy our brand as well as that of the journals who participate. Our entire philosophy and mission is grounded in ethical behavior and rigor for all involved.
Q: Can you explain the business and management relationship between preSCORE and the Journal of Bone & Joint Surgery?
A: STRIATUS/JBJS, Inc., publishes the Journal of Bone & Joint Surgery, along with other journals. JBJS is committed to “Excellence Through Peer Review” so our philosophies align. They also offer educational products and as their first data product preSCORE makes sense for them. With the acquisition of preSCORE by STRIATUS/JBJS, Inc., I will have more resources at my disposal in terms of management specialties and organizational infrastructure in order to bring preSCORE services to the market and sustain and build the preSCORE products. As far as management, I’m confident that I’ll have the level of autonomy and support I’ll need to make preSCORE very effective, and will also be able to bring my team’s perspectives to the management team at STRIATUS/JBJS, Inc.
Q: What do you hope to accomplish in 2014?
A: We’ve reached an agreement with Thomson Reuters that will allow us to use the h-index and data from Web of Science, and that work will be completed in Q1. We’re also working on custom development with ScholarOne, Aries, BenchPress and more. Following that will be proof of concept. We have several publishers who will be participating. At the same time we are developing APIs and web services which will allow platform providers such as HighWire, Atypon and others to display preSCORE information. We expect to go live by Q4. Of course, in addition to continuing to develop and build our services, we will be spreading the word about preSCORE!