Rubriq co-founders Shashi Mudunuri (left) and Keith Collier (right).

It’s not every day that someone introduces something novel in scholarly publishing. Most new ventures are variations on well-established models. With the creation of Rubriq, co-founders Shashi Mudunuri and Keith Collier have broken new ground. Rubriq is an attempt to provide peer review independent from journals. While others, such as Faculty of 1000, have done this with post-publication reviews, Rubriq’s model focuses on pre-publication – indeed pre-submission (to a journal) – review.

Collier publicly unveiled Rubriq, now in beta release, at the STM Innovations Seminar in London in December (you can download his presentation here). Collier is not new to either scholarly publishing or managing peer review systems. Prior to co-founding Rubriq, he was Vice-President and General Manager of Thomson Reuter’s ScholarOne.

Unfortunately for Collier, he lives in proximity to this author and was therefore ambushed with a request for an interview over lunch. Collier’s responses were graciously provided via e-mail.

Q: What is Rubriq?

A: Rubriq is a for-benefit business with the mission of putting time back into science. We are developing a model for independent and standardized peer review. It’s important to know we aren’t a publisher and have no plans of becoming one. We provide rigorous reviews by the same qualified peers who review for journals, but with a standardized scorecard that can be used in any publishing model.

The core of our innovation is the scorecard itself as a quantitative peer review instrument. Our thinking is that a paper should be evaluated against a standard outside the lens of any single journal. Rubriq asks “How good is this article?” rather than “Should this article be published in journal X?” This is a big shift for reviewers. We believe that this shift will create a more consistent, transparent peer review process that can be used by any journal within the field.

Our system will enable fast, rigorous, verifiable, portable reviews, and will present authors with the best options for publishing their research based on the independent review.

Q: Who is Rubriq?

A: Rubriq is a division of Research Square, a privately held company created by entrepreneur Shashi Mudunuri, who also started American Journal Experts in 2004, and who collaborated with Keith Collier to co-found Rubriq.

Q: What problem did you set out to solve?

A: Rubriq can address many problems, but there are two specific issues we are focused on. The first is the individual pain felt by authors as they go through the publication process – too slow, redundant, inconsistent, biased – the list goes on.

Second is the overall amount of time spent on redundant peer review. Our calculations put time spent just on rejected reviews that are not shared with subsequent journals in the range of 11-15 million hours annually, which is staggering. We want to put some of that time back into science.
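The interview does not break down how the 11-15 million figure was reached. A back-of-envelope version of the calculation, with illustrative inputs that are assumptions rather than Rubriq's actual figures, might look like:

```python
# Back-of-envelope estimate of hours spent on peer review for rejected
# papers whose reviews are never shared with the next journal.
# All four inputs are assumed values for illustration only.
submissions_per_year = 3_000_000   # assumed journal submissions annually
rejection_rate = 0.50              # assumed share rejected after review
reviewers_per_paper = 2.5          # assumed average reviewers per submission
hours_per_review = 4               # assumed hours per completed review

wasted_hours = (submissions_per_year * rejection_rate
                * reviewers_per_paper * hours_per_review)
print(f"{wasted_hours / 1e6:.0f} million hours")
```

With these assumptions the estimate lands at the top of the quoted 11-15 million hour range; modest changes to any input move it within (or outside) that range.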

Q: How does it work?

A: There are three steps to the Rubriq Process.

Step 1: Compliance Check. We classify the paper by subject area and keywords, check for conflicts of interest and ethical statements, and run the paper through iThenticate. These combined items create the compliance check portion of the Rubriq Report.

Step 2: Peer Review. We match the paper to three reviewers who complete independent, double-blinded reviews of the manuscript by filling out the Rubriq scorecard and leaving bulleted comments for the authors. These reviewers receive ongoing feedback about their reviews, as well as compensation for each review completed.

The reviews are compiled into a single report—a Rubriq scorecard—with numerical scores for the quality of the research, the quality of the presentation, and the novelty and interest. These scores are used to calculate an overall score for the manuscript: the R-Score.

Step 3: Journal Recommendation. Based on author preferences, journal classification, and Rubriq Report scores, we create a list of journals that are matches for the paper. The journal list includes detailed information about each journal, including acceptance rates, time to first decision, and APCs, if applicable.

The author can then choose to submit directly to a journal, revise the manuscript based on the feedback, or share the report with journals in the Rubriq network to determine interest. Of course, they could also publish themselves directly to a repository.

Journals with Rubriq accounts will be able to search for new papers that match their preferences in terms of scores, topics, keywords, etc. They will also be able to set up alerts when new papers that match their needs are added to the system, and will be able to control all contact preferences with authors.

Q: What is an R-Score?

A: The R-Score combines the quality of the manuscript with its potential impact. Reviewers determine the level of novelty and interest, which sets the R-Score range, and the overall quality assessment determines how high within that range the overall R-Score will be.
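The interview does not give the actual formula, but the "novelty sets the range, quality sets the position within it" mechanic can be sketched. The band boundaries, scales, and function below are assumptions for illustration, not Rubriq's real scoring model:

```python
def r_score(novelty_level, quality,
            bands=((1.0, 4.0), (4.0, 7.0), (7.0, 10.0))):
    """Illustrative R-Score sketch (not Rubriq's actual formula).

    novelty_level -- 0, 1, or 2, from the reviewers' novelty/interest
                     rating; it selects a score band (assumed here).
    quality       -- 0.0 to 1.0, the overall quality assessment; it
                     sets the position within the selected band.
    """
    low, high = bands[novelty_level]
    return round(low + quality * (high - low), 1)

# A highly novel, high-quality paper scores near the top of the scale;
# a low-novelty paper is capped in the lowest band regardless of quality.
print(r_score(2, 0.8))  # 9.4
print(r_score(0, 0.9))  # 3.7
```

The key property this captures is that quality alone cannot lift a paper out of the band its novelty/interest rating assigns.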

Q: Are reviews single- or double-blinded?

A: Reviews are double-blinded in Rubriq. However, journal account holders will be able to see reviewer names and affiliations with author permission.

Q: What is the business model?

A: We charge a flat fee per submission and are currently operating on an author-pays model. In our discussions with institutions and funders, new models are being suggested to us that we are working to incorporate into the system. If grant funds allow for publishing services, the funder would essentially be the customer. If article processing charges (APC) can be divided between an open access (OA) publisher and Rubriq, that would remove some of the burden from the author as the fee could be applied directly to the publication.

Q: How much do authors pay?

A: Currently in our beta release, the cost to authors is $500 (US) for the scorecard. Pricing for the complete service (journal matching, iThenticate report and compliance check, and ability to share scores with journals in the network) is anticipated at $700. A significant portion of this fee goes directly to the reviewers to incentivize them to prioritize the review and return it within a few days. As a for-benefit business, our goal is to keep this cost as low as possible.

Q: Why would an author want to pay? What is the benefit to authors?

A: The idea behind Rubriq is very different from submitting to just one journal and getting a review specific to that journal. We feel if we can provide a Rubriq Report in about a week with data driven journal recommendations, we can save researchers weeks to months. In addition to feedback on the paper, they have a portable review and are presented with realistic options for publication.

Funders have indicated that they would like to see a splitting of the APCs across Rubriq and gold OA publishers. So in that scenario, where the journal offers a discount for papers submitted with a Rubriq Report, getting the report doesn’t cost authors any more than it would have had they submitted directly to the journal.

With Rubriq, authors can also see all their choices for publication before deciding where to submit or publish. They can compare journals more thoughtfully before making a decision. Since the author pays, they also retain control over their score and how it is used. They can choose to keep it confidential, and can choose when and where to share it – inside or outside the Rubriq system.

Q: Can/do institutions (universities or funding agencies) pay on behalf of authors? In other words, do you have an institutional model?

A: There has been interest from institutions in subsidizing Rubriq fees for researchers, and we can definitely support managing accounts from shared funds.

In addition, we’ve had promising discussions with Funders, and they see the system-wide economic benefit of Rubriq and how we can help researchers publish faster. We hope to work with funders and create a model that works for all parties.

We are also working on an institutional partnership model where funds earned by Rubriq reviewers from a university or lab can be pooled in a central account for others in that institution to use for their own Rubriq Reports.

Q: Can anyone access the Rubriq database and see the scores?

A: No. Authors have complete control over whether they wish to share their scores. All Rubriq Reports come with a verification code that can be used by third parties to access reports shared by the author. If an author chooses to share their reviews with journals in the Rubriq network, only journals with verified accounts can access the database of papers and scores. If an author wishes to only show their Rubriq Report to one journal, they have that control.

Q: Do publishers “subscribe” to Rubriq? How does the publisher relationship work?

A: There are no costs to publishers in our model, unless they are submitting papers. Journals can create accounts and set up search alerts for papers that are broadcast into the Rubriq system. We do want to indicate the journals that accept or require Rubriq reviews, so journals in our network will be highlighted on the Journal Recommendation Report.

Q: What publishers are you working with?

A: We have been pleasantly surprised with the positive response from the publishing community as they look for ways to improve the process, not just for their editors, but also for reviewers and authors. During this initial beta period, we are working with five publishers that range in size and model to help us test the system: a large commercial publisher, a mid-size commercial publisher, a society publisher, and two open access publishers. We can likely share their names in the coming weeks.

Q: Can publishers publish the R-Score?

A: We see the R-Score as a new article level metric that indicates the quality and interest of a paper based on three independent reviews. It is a starting point for post-publication comments and ratings. It allows an initial stratification of the literature, especially in the mega-OA journals.

Q: Can publishers publish the reviews from Rubriq along with the papers, much as eLife is now publishing artifacts from their review process?

A: It is important to know that if you are reviewing for Rubriq, you are essentially reviewing for any journal. And while we are performing double-blind reviews, editors with Rubriq accounts will have access to the identities of the reviewers on reports shared with them by authors. Permission to publish reviewer names alongside reviews will need to be up to each reviewer. We know that some journals are becoming more transparent with their peer review process in this way, so we are building in the ability for reviewers to give permission on a case-by-case basis to make the review and his or her identity public if the journal requests it.

Q: Can publishers use Rubriq in place of their own peer review processes?

A: Yes. We’ve talked with publishers that would like to use Rubriq directly for various reasons or recommend it to their authors. We don’t see Rubriq replacing peer review for most journals, but we hope that it can help provide editors with some initial insight and allow them to reduce their time to first decision. Some may use it as an initial filter by setting a threshold for a minimum score needed to submit.

We are finding that some publishers see value in suggesting Rubriq to rejected authors as a way to give the paper a second look without investing more resources, and at the same time, provide authors with useful guidance.

Michael Clarke

Michael Clarke is the Managing Partner at Clarke & Esposito, a boutique consulting firm focused on strategic issues related to professional and academic publishing and information services.


43 Thoughts on "An Interview With Keith Collier, Co-Founder of Rubriq"

My main question is this: you talk a lot about author-pays and Rubriq getting paid, but what about the peer reviewer? Do they get paid, or are they still the chumps in this situation, giving their expertise and time away for free?

Chris, I think this is addressed in the interview?

“These reviewers receive ongoing feedback about their reviews, as well as compensation for each review completed.”

It will be interesting to watch what happens with Rubriq. I got to know Keith over the years he was at S1 and like and admire him very much. He’d spoken to me about this idea when it was “germinating” and I look forward to watching their progress.

Paying experts for their reviews would indeed incentivize them to produce high-quality, timely responses. However, by moving from a voluntary market to a cash market, it is important to compensate reviewers fairly; otherwise reviewers will feel insulted and reject the offer. How much is “fair?” That also depends, but it might be between $200 and $500 based on the type and quality of the review as well as the prestige of the reviewer. This is going to drive up the costs of publishing, naturally. These costs were spread across many institutions in a voluntary market and, like APCs, they will now be concentrated on the author or funder.

Is there a problem in getting timely, high-quality reviews? For high-profile journals in the life sciences, the answer is clearly “no.” There may be a market in second- or third-tier journals, where getting reviewers to voluntarily spend time on a poor manuscript offers little (if any) incentive. Some fields that suffer from an awfully lengthy publication delay (e.g., economics) may also benefit greatly by moving the review process into a cash-based market.

I think the time-saving concept is not about one journal getting a timely set of reviews in; it’s about the cycle of doing that with one journal, being rejected, then repeating it with the next journal and the next. Here the goal is to do one set of reviews that would be used for all journals.

I’m not sure that’s really feasible though. It’s unclear to me what happens to an author who pays the $700 and has their article rejected. Is that the end of the story? What about revisions? How many times can I revise and have my paper re-reviewed for that initial $700? Or do I need to pay for each revision (and would that result in encouraging reviewers to call for revisions in order to bring in more $)?

I think there’s also something naive in the notion that the same style of peer review is appropriate for all journals. As one example, toxicology journals usually require a dose:response curve. But general biology or medical journals may not have that requirement. Am I reviewing the paper for the former or the latter? Or do I have to perform a detailed review for every single possibility out there?

The assumption that a review is objective and can thus be shopped around to different journals needs to be tested, which the interview claims Rubriq is doing. As a reviewer myself, the context of the manuscript and its intended publication outlet weigh considerably in my evaluation.

If Rubriq follows the American Journal Experts model, they will have to use graduate students to do the reviews in order to keep costs down. As several ex-AJE contractors have complained openly, the company will expect quite a lot for little pay. This is not necessarily a bad thing, but it will limit the reviewer pool to low-cost workers. Low-cost workers may indeed provide excellent, high-quality reviews, yet they lack the authority of experts in the field.

As a result, the market for peer review may be highly-segmented: At one end, there will be the prestigious subscription-access journals, which will rely on volunteer experts for reviews. At the other end will be the class of non-discriminatory OA journals that allow most manuscripts to flow through–these journals do not need expert, timely reviews. Indeed, a couple of thumbs-up (e.g. F1000 Research) would do. In the middle, however, are second-tier and specialist journals that indeed may have problems getting high-quality, timely reviews. If Rubriq succeeds, it will be here.

Phil, Just to clarify, Rubriq will not be using grad students as peer reviewers.

We only accept reviewers that have a terminal degree in their field, hold a current academic appointment (postdoctoral fellow, research or clinician scientist, or faculty-level researcher), and have review experience. They also must have at least one first author publication.

Keith, my apologies for assuming wrongly. An authoritative group of reviewers is similar to F1000, only F1000 does not pay its reviewers, nor does it require much from them. If you cannot incentivize reviewers through the prestige model (e.g. giving them public recognition within a scientific society) and are relying on financial incentives, then $100 may not be enough to get a high-quality, timely review. A physician may donate time and resources traveling to Cuba to help care for the needy but balk at a $100 check to do a few hours of work. It can be very insulting, especially if the manuscript is inappropriate, poorly written, and not interesting. A good editor will prevent these manuscripts from going out for review, because a good editor must keep a reviewer happy if s/he is going to provide some voluntary labor.

I think this is a really important point that speaks to psychology and human nature. If one sees performing peer review as an altruistic act, done for the betterment of the community (and with an assumption that others will perform this same act in return), that brings about a different set of behaviors than when something is presented as a requirement or as a paid job.

A key peer reviewer may perform many reviews for a journal in a year, often because that journal represents his/her society, or because establishing a quality standard for one’s field can help that field grow and become more established. But when you make review a requirement, and set a threshold limit that must be attained, will reviewers keep coming back for more? Think of PeerJ which requires one review from each member per year. How often will they be told, “I’ve already done my review, ask again next year.”

Similarly here it becomes a paid transaction. Reviewers may accept or decline reviews based on their bank statement rather than the other reasons listed above (“I’ve got a vacation coming up and $100 will buy a nice dinner…”).

Hi David,

Appreciate the comments. You are correct that we are focused on the cross journal challenges of peer review. We believe the Rubriq review can be done once and then used many times in different ways by different journals, depending on the needs of each journal.

Some editors will use the Rubriq review to decide whether to conduct full peer review on the paper. Others will use it to supplement their own reviewers’ feedback to make the final decision. Others may use it along with just an Editorial Board review to make a decision. Editors will know the reviewers’ identities and their affiliations.

We aren’t naive. We know our standard approach to peer review can’t be a substitution for what fully happens within each journal review. I think even in the PLoS ONE scenario, there would be an editorial check to verify our Rubriq reviews confirm the “valid science” requirement.

I think the experiments in cascading peer review have struggled for this very reason – you can’t take what one journal does and use it for another.

The idea is to create some kind of standard, portable format and do that once. I realize this is breaking new ground to develop a standard and ask reviewers to review against a standard, versus asking reviewers whether the paper should be published in one journal. It takes a while to come around to the idea and how it could work within each journal.

In discussions with some of the high-profile journals, they liked the idea of authors submitting papers that were revised based on comments from Rubriq reviewers. So the first time the journal sees the paper, it’s been revised and they have the original set of reviews.

I worry though that many authors will be hesitant to pay twice for reviews for the same paper, once with Rubriq and once with the journal after acceptance. It also does away with the time savings you’re purporting to add here, as you’ll have to go through the peer review process at least twice (if not many more times).

Phil, we see the $100 as a small payment to thank reviewers for their time and to ask them to prioritize the review. The goal is to perform the reviews in a week or two. The only challenge we’ve had so far is recruiting MDs, who see the payment as an hourly rate and are probably closer to the range you suggest.

I think you are right that the high-profile journals don’t have a challenge in getting high quality reviews done quickly. They won’t be the early adopters. The value Rubriq provides initially will likely be in the second and third-tier journals as well as journals looking to keep their time to decision as low as possible.

When we talk about putting time back into science it’s focused on reducing the journal loops that authors go through. If we can eliminate the redundant review processes across journals as much as possible, we’ll fulfill our mission.

Thanks Keith for the response. I agree that reducing redundant peer review is a valuable goal. I do question how many rounds of review a paper normally travels through before acceptance. The little research that has been done suggests that most articles are targeted efficiently, but that there is a class of authors who aim very high and are willing to have their article cascade many times before finding publication. These authors may be incentivized to do this through policies that pay them to publish in high Impact Factor journals. Submission fees may also be a solution to this problem, however.

I have trouble reconciling Calcagno et al’s estimate that 80% of papers were accepted at the journal they were first submitted to with the high rejection rate (>50%) at most mid- to high-tier journals. It’s possible that low-tier journals accept so many papers that they dominate this statistic, but there’s clearly something funny going on.

This can simply be explained by the number of submissions being unevenly spread among papers (80% of published papers going through one submission, the other 20% going through many), even if all journals have at most a 50% acceptance rate. View the raw data as a bipartite graph: vertices on the left are papers, vertices on the right are journals, edges are submissions, and we mark at most one edge per paper to denote the successful submission. It is then easy to draw such a graph with 80% of papers having only one edge, which is marked, and each journal having at most 50% of its edges marked. This is all the easier if there are papers that never get accepted (and such papers may be more common than a non-editor might think).
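This resolution can be checked with a toy dataset (the papers, journals, and outcomes below are invented purely for illustration): eight papers accepted on first submission, plus two papers that cascade through six journals each, are enough to give an 80% first-try share while no journal accepts more than half of its submissions.

```python
# Toy dataset reconciling the two statistics: 80% of published papers
# accepted at the first journal tried, yet no journal accepts more than
# 50% of its submissions. Each tuple is (paper, journal, accepted?).
subs = [(p, j, True) for p, j in
        [(1, "A"), (2, "A"), (3, "B"), (4, "B"),
         (5, "C"), (6, "C"), (7, "D"), (8, "D")]]        # first-try accepts
subs += [(9, j, False) for j in "ABCDF"] + [(9, "E", True)]   # cascades
subs += [(10, j, False) for j in "ABCDE"] + [(10, "F", True)]

# Papers accepted at the only journal they were ever submitted to:
first_try = sum(1 for p, j, acc in subs
                if acc and sum(1 for q, _, _ in subs if q == p) == 1)
published = len({p for p, _, acc in subs if acc})
print(f"first-try share: {first_try / published:.0%}")  # 80%

for journal in "ABCDEF":
    got = [acc for _, j, acc in subs if j == journal]
    assert sum(got) / len(got) <= 0.5  # every journal accepts <= 50%
```

The rejected cascades dilute each journal's acceptance rate without adding many papers to the published pool, which is exactly the commenter's point.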

There are but 8,766 hours in a year, and the claim that 11-15 million hours per year are wasted seems absurd!

So what they are doing is replacing the editor-in-chief and editorial board. It costs a publisher to have these. If a publisher uses this service, they can save all sorts of money. In fact, the cost of the EIC and Board would be a dinner at the annual meeting. The profit of a third-tier journal suddenly goes up, and in fact the journal could stay around.

Should a journal use the R-Score, what incentive is there for the EIC and Board to have any interest in the journal? They have in essence abdicated their responsibility.

Just who is this pool of reviewers who have all this time on their hands? After all, reviewers do not have this time now.

I can see the journals on Beall’s list jumping on this because, and maybe this is good, their submissions will at least be reviewed.

This reminds me of OA and will be fraught with all of its pitfalls.

Harvey, we don’t propose to replace any Editors or Editorial Boards. But we do hope to reduce the burden on reviewers and authors, maintain the authority of scholarly publishing through rigorous pre-publication peer review, and speed up the time it takes to get published.

The millions of hours spent on peer review for papers that are ultimately rejected is fairly easy to calculate. You do make a valid point – it seems absurd. It is absurd that we haven’t innovated more in the area of pre-publication peer review. It’s unclear what peer review even means, since it’s done in thousands of journal silos from end to end. There was a good SK post on this a couple of years ago.

Many want the alternative of only a post publication peer review approach and cite the current cost of peer review as a tax on the research community it can no longer afford. I’m certainly not an advocate for that approach, but I do think we can do better as a community to reduce the burden of peer review on the research community. Ok, I’m off my soap box.

So what is being proposed is a pre-review before the reviewing process begins. I just cannot see JACS or JAMA or JPHARMSCI, etc., abdicating their responsibility for a $100 review.

But, as said, perhaps second- and third-tier journals or those on the Beall list will go for it.

Having been intimately involved in one of the above journals, we acknowledged the costs of reviewing but felt our “brand” was more important than modifying it in any way except to make it better even if we had to absorb more costs. Our authorship never complained about the time or the revisions required once published.

I’m not sure how novel this idea is. Peerage of Science seems to predate them by at least a year, and PubCred was proposed back in 2010.

I also think that a major failing here is that it turns a curated process into a brute force approach. Reviewers are chosen from a pool of volunteers by a matching algorithm. As a former journal editor, the core of my job was to make sure that each accepted article had been vetted not just by experts, but by the right experts. Rubriq turns this guided process into a stochastic one, assuming expertise exists based on keyword matching or willingness to review.

To draw in authors, they still have the problem of competing against free services, namely Peerage of Science and, other than F1000 Research, every journal on earth.

What I think might be a better future for a company like this is to turn the service into a private preprint and editing service. Think of young authors, unsure of their writing skills, or authors struggling with the English language. Combine the benefits of language and grammar editing with a round of private peer review to improve the quality of the article before it gets sent to a journal for final judgement, and you might have a service that would draw paying authors, especially since it can be done confidentially, without exposing one’s weaknesses to the community at large.

According to the Peerage of Science website, the service, launched Nov 2011, has 66 manuscripts and 163 peer reviews. I don’t believe that reviewers are paid, but benefit (hopefully) by reciprocity and perhaps through a feeling of altruism. It is still a voluntary market. So is the PubCred solution, which is structured on paying into the system with reviews before you are able to get your own manuscript reviewed.

Rubriq intends to create a cash-market for their service, so it will operate very differently from these. The scientific writing and editing market is highly-successful (both for legitimate and illegitimate reasons), and if you are educated and can communicate effectively in English, $100 will go a long way in most countries. As mentioned, this service cannot pay for the authority upon which the peer-review system is based. Rubriq may provide very good, timely reviews, but they probably won’t be able to deliver an authoritative name as part of their service.

They require that the reviewer at least have a PhD, so you’re most likely talking about postdocs. Back when I was a postdoc, I probably would have been pretty happy to pick up $100 every now and again for doing a round of peer review. But I’m not sure many above that level would see that as a motivating factor.

The payment is a key incentive for us to recruit reviewers, but we believe there are other incentives beside the money. Rubriq reviewers are essentially reviewing for any journal, so they could gain professional exposure to multiple Editors in their field. Rubriq is a way to gain experience as a reviewer and we intend to focus on reviewer performance and training as part of our service. Our system also allows for more choice in what papers reviewers claim in the assignment process.

I was on campus, and on the door of a very prominent chemist was a check for $100 with a note: This is what 12 years of university, a PhD, and a tenured position is worth.

We also have elements of PubCred since reviewer payment can be banked individually or within a group/lab and used for Rubriq payment. We have talked to Peerage, and have great respect for what they’re trying to do. I believe we are trying to address the same problem, but just in different ways.

Our algorithm for matching papers to reviewers is a great starting point, and is being continually fine-tuned by our Managing Reviewers to ensure that the right matches are being made. We are basing all the factors of our matching algorithm on human decisions, not simply looking for keywords. This is an ongoing process that will likely retain its human component through all of our beta phases, and possibly beyond.

We don’t see ourselves as competing with journals for authors at all. As journals see the efficiencies they can gain and accept the Rubriq report as part of their submission process, and as Rubriq moves toward proactively matching papers with the most appropriate journals, both authors and journals will benefit.

What are the qualifications of Rubriq’s Managing Reviewers? What level of experience do they have, and how many are employed for coverage of which fields? How do they compare with the sort of senior-leading-researcher-active-in-the-field-and-well-respected requirements for serving as the editor in chief for most journals?

I’m glad you asked this question, because it speaks directly to how we are very different from “journal-specific” peer review. We are not trying to be Editors-in-Chief, who are responsible for determining the journal’s content. Our team consists of former scientists with PhDs and postdoctoral training, and their role is to find qualified reviewers whose expertise closely matches the topic of the paper.

Our goal is to create a set of reviews that are useful to the author, suggest appropriate journals based on the reviews, and provide a report that could be useful to busy Editors as they evaluate papers for their journals.

I think that’s a key differentiation. Journal editors see themselves as gatekeepers in some ways, and as I said somewhere above, the key is that the right expert has reviewed the paper, rather than just any expert. There’s no real shortcut for experience in a field over time to help make those judgment calls. You’re also limited to the pool of reviewers who have signed on with your service, rather than the entire scientific community at large. Which is why I think it’s likely that for many journals, a second journal-curated round of peer review will be a requirement.

I do think there’s great merit though, in providing a service to help authors improve their papers before they’re submitted. While there are many services offering language editing, one that also offers scientific review could provide value for many authors. At that level, just having a qualified person with fresh eyes seeing the paper could help clean up any obvious shortcomings before submission. I also think that providing a ranking of suggested journal targets for an article could be greatly appreciated, as this is an area where many, particularly those early in their careers, struggle to get a good read on both the landscape and their own work.

And I’d still love to know how Rubriq (or any of the peer review services, or F1000 Research for that matter) deals with multiple rounds of revisions.

David, sorry I missed your question about revisions in Rubriq. It’s a question we’ve debated internally several times, and I honestly think we are going to have to test different approaches and pivot if needed. Our current position is not to offer a revision round. We do expect authors will improve the paper after reviews, but whether or not they include the original Rubriq report with the submission is up to the author.

I appreciate all your thoughtful comments (and Phil’s) today!

Hi David,

I am a Managing Reviewer with Rubriq. When choosing reviewers for a particular manuscript, we are not at all limited to the reviewers who have already signed up with Rubriq. When we receive a manuscript and do not have an active reviewer with the right expertise, we actively invite reviewers who have published in similar areas of research and who are experts in those areas. We make sure that the reviewers who review the papers are well matched to them based on the specific topic, model system, and methods used in the paper. It is very important to us that we find the right reviewers for the paper. Many of our current reviewers are faculty members who review for top journals in their fields.

There is currently no automation in our method of choosing reviewers for manuscripts. Once we have built a large network of reviewers, an algorithm will help us identify reviewers within our existing pool to consider for the review. The MRs will still make a final decision regarding the expertise of the reviewers that the algorithm suggests and how well the reviewers match the manuscript. If we have reviewers within our network who are qualified to review a particular paper, we will make the paper available to them to claim the review. If we do not have a good match in our network, we will personally invite new reviewers with the right qualifications and expertise to review the manuscript.

I hope that my comments address your concerns regarding the level of peer review that we provide.

Thanks Cynthia, the additional information is very helpful. It might be worth adding some information on this editorial oversight to your FAQ list. Being able to reach outside of your membership is a big plus for Rubriq, particularly if this turns into a market that gets fragmented into many camps.

One issue that publishers are facing, particularly in light of the rhetoric around the eLife launch, is the question of “professional editors” and their qualifications versus editors who are currently working researchers. In Rubriq’s case, some of this concern is lifted, as your editors aren’t making final decisions on acceptance/rejection of a paper. But there will still likely be some concern over relatively inexperienced editors’ ability to find and sign on top peer reviewers, particularly across an enormous diversity of research fields.

Thanks Keith. It’s a difficult question – many, if not most, papers are not immediately accepted as submitted and require at least some minor revision. That’s generally part of the peer review process, and often why it takes so long. At the same time, it might be hard to run a viable business if one offered authors unlimited resubmissions of the same article until it’s exactly perfect without charging them for it, especially since you’re paying peer reviewers for their work.

Rubriq is a very interesting initiative that may revolutionize the publishing market.

The problem is that a high R-score would not compel a publisher (editor) to publish.

I would advise Rubriq to start with some “forgotten” (never published) papers from arXiv – for free! – and to see the results of their reviews. If the arXiv papers with a high R-score were to find a publisher soon, then authors would certainly be interested in trying the service. This process can be watched directly, and statistics can be easily gathered.

I see a potential problem: an author may not be willing to go through Rubriq a second time if he got a bad review the first time. There is therefore a strong incentive for Rubriq to channel papers toward reviewers who have a habit of giving good grades. And even if this incentive is unspoken, reviewers may perceive it, thus having an incentive to give good grades without spending too much time on the paper. How is this issue addressed?

Good point. These sorts of peer review systems are built around having the author review the work of the reviewers in order to rank their performance. It is likely that happy authors whose work has been glowingly praised are going to respond with better reviews than those whose work has been torn apart, even if accurately and insightfully torn apart. That creates a feedback loop reinforcing the rubber-stamping of submitted papers, with more reviews being assigned to better-ranked reviewers.

Very interesting interview and discussion. I am one of the founders of Peerage of Science, and like Keith said, the approaches are quite different in many ways. Both Rubriq and Peerage of Science will be presenting in STM Spring Conference flash session in Washington on May 1st, I think there will be interesting discussions both on and off the stage about the two approaches.

About ranking reviewer performance in “these sorts of” systems: in Peerage of Science, reviewer performance is evaluated by other reviewers, not by authors. Reviewers thus have incentive to be ruthlessly critical when the manuscript deserves a trashing, but need to be careful to justify their criticism too. I was concerned early on that reviewers would withhold criticism, but experience has shown that peer-review-of-peer-review works: out of the 67 manuscripts, 5 so far have been withdrawn by authors after consistent barrage from several reviewers.

Editors of subscribing journals have access to all reviews; authors cannot choose what to show.

I think any peer review system needs to first fulfill a core requirement: the reviews need to be efficient for filtering (=rejecting) manuscripts. Other requirements come after that.

In that respect, think of the recent arsenic life debacle. Science used traditional peer review system with (2?) senior-well-respected reviewers. What if they had instead used a system where any qualified scientist can engage as reviewer, and everyone’s arguments are evaluated by other reviewers without knowing how “senior-well-respected” each other is?

I agree that having the reviewers review one another is likely a better system than having just the author review them–they’re at least going to provide more of a neutral opinion instead of responding to criticism of their own work.

Though I do wonder what the reviews look like when two reviewers strongly disagree. If one’s rating as a reviewer ever comes to count for any sort of career credit, then one would need to be extremely careful who one reviews with, as a disagreement with an unqualified fellow reviewer could result in a negative reviewer review, thus damaging your reputation. That’s part of the problem with the voluntary nature of Peerage of Science. It’s unclear how qualified a set of reviewers you’re going to end up with.

5 out of 67 translates to a 7.5% rejection rate, though I’m not sure that’s a fair measurement, given that presumably Peerage of Science is looking at a broad swath of papers across all quality levels, whereas an individual journal is probably looking at a narrower stratum of quality. I wonder how that compares, though, with the percentages that editors reject without review solely because of poor quality.

I think any peer review system needs to first fulfill a core requirement: the reviews need to be efficient for filtering (=rejecting) manuscripts. Other requirements come after that.

I think this is an important point, but when you’re running a company, you need to serve your customers. In Rubriq’s case, the paying customers are the authors, and that’s different from Peerage where (as far as I can tell) the revenue comes from the journals looking for rigorously reviewed articles. So each will have to deal with a different set of economic pressures in order to maximize revenue.

In that respect, think of the recent arsenic life debacle. Science used traditional peer review system with (2?) senior-well-respected reviewers. What if they had instead used a system where any qualified scientist can engage as reviewer, and everyone’s arguments are evaluated by other reviewers without knowing how “senior-well-respected” each other is?

Isn’t that pretty much what happened? Science published the paper into a system where any qualified scientist could engage with the results, publish their own opinion through letters to the editor or even blogging, and best of all, could do the next set of experiments needed to disprove the paper and publish them, thus making a much stronger case than a mere commentary. That’s how science works. The initial pre-publication peer review process is only the beginning of the evaluation of the work.

Surprisingly, in cases of strong reviewer disagreement, one reviewer often openly concedes when they are asked to evaluate the arguments of the other. Scientists may be just as career-hungry as everyone else, but when it comes to scientific arguments they are extremely honest. But you are right, reviewer disagreement may become an issue once the reputation gained in the system starts to matter more personally – right now people still behave very much in the spirit of Peerage of Science.

Note the 5 manuscripts were withdrawn by authors themselves – this should not be confused with journal rejection rates. Peerage of Science does not judge, does not reject anything, it is still (as it should be) the editors of scholarly journals who judge what merits to be published. The subscribing journals have their own rejection rates, some higher, some lower.

The only kind of “rejection” (aside from administrative filtering of obviously unqualified material) in Peerage of Science is when nobody submits a review by the deadline you as an author have set. And that is as it should be; either you set too tight a deadline just before the holidays, or perhaps the work simply does not deserve to be peer reviewed.

You are correct, Peerage of Science gets its revenue from subscribing journals, and also from advertising agreements with supporting organizations (at present two universities).

Yes, the initial pre-publication peer review process is only the beginning. But it needs to be there, in its proper place at the beginning. Moreover, it should be good at what it is supposed to do.

Surely you agree that there should be a rigorous pre-publication peer-review system in journals calling themselves peer-reviewed, even though the real peer review will be given by the world in the decades post-publication? If pre-publication peer review is unimportant, we might just as well throw it away completely as some are already suggesting.

What I was trying to say is that in this particular case the problem was faulty methods – not the hypothesis or interpretation, which are scientific uncertainties settled only with post-publication analysis and more data, and not deliberate fraud, which peer review is fairly toothless to combat. This was precisely the kind of thing pre-publication peer review should be good at controlling for – a raison d’être for pre-publication peer review. My argument is that here the traditional process did not do very well in an essential task it should have fulfilled, and that the alternative pre-publication peer review model Peerage of Science offers would have served Science (both the journal and the human enterprise) better.

Our biggest incentive at Rubriq is to build credibility with both the research community and the Editors. If our grades are inflated or do not accurately reflect the quality of the paper, we have nothing of value to provide. One of our challenges, and something we have put a lot of thought into, is how to create a set of metrics for the reviewers who participate in the Rubriq system. Because our scorecard has a numeric component, we can easily identify outliers who consistently deviate from the average within the reviewer panel. We are also considering ways to provide peer and Editor feedback for reviewers.
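Rubriq has not published how its outlier detection works, but the idea of flagging reviewers whose numeric scores consistently deviate from their panels can be sketched simply. The following is a minimal, hypothetical illustration (the reviewer names, score scale, and deviation threshold are all invented for the example, not Rubriq’s actual system):

```python
from statistics import mean, stdev

def flag_outlier_reviewers(scores_by_paper, threshold=2.0):
    """Flag reviewers whose scores consistently deviate from their panels.

    scores_by_paper: a list of dicts, one per reviewed paper, mapping
    reviewer name -> numeric score. Each reviewer's "bias" is the average
    of their deviations from the mean score of every panel they sat on.
    Reviewers whose bias lies more than `threshold` standard deviations
    from the average bias are flagged as outliers.
    """
    deviations = {}
    for panel in scores_by_paper:
        panel_mean = mean(panel.values())
        for reviewer, score in panel.items():
            deviations.setdefault(reviewer, []).append(score - panel_mean)
    biases = {r: mean(ds) for r, ds in deviations.items()}
    if len(biases) < 2:
        return []
    overall = mean(biases.values())
    spread = stdev(biases.values())
    if spread == 0.0:
        return []
    return [r for r, b in biases.items()
            if abs(b - overall) > threshold * spread]
```

For instance, a reviewer who always scores a few points above every co-panelist would accumulate a large positive bias and be flagged, while reviewers who merely disagree on individual papers would average out.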

I think we have a great opportunity to apply structure to a process that is currently quite variable, which could result in higher quality reviews. We are designing the system so that once we have a large enough reviewer pool, papers are not assigned to reviewers directly, but are sent to a group of well-matched peers who can then claim the papers that are of greatest interest to them. Under this model, Rubriq’s role is to make sure that the members of an assignment group are all qualified for the paper, and to cull poor reviewers from the list based on the feedback from the community.

Regarding finding the right reviewers, I have developed an algorithm that finds all of the published researchers whose work is close to a given article or proposal. It even defines various degrees of closeness, sort of like concentric rings, so it is also a science mapping tool. Some human judgement is still required, but the potential for bias is greatly reduced. It has been tested successfully, but I am looking for other opportunities.
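The commenter’s algorithm is not described in detail, but the “concentric rings of closeness” idea can be illustrated with a generic sketch. Here, closeness is approximated by bag-of-words cosine similarity between abstracts, and the ring cutoffs are arbitrary; the actual tool is surely more sophisticated, and every name and threshold below is invented for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closeness_rings(target_text, researcher_texts, cutoffs=(0.5, 0.25, 0.1)):
    """Group researchers into concentric "rings" by similarity to a target.

    researcher_texts: dict mapping researcher name -> abstract text.
    Returns a list of rings, innermost (most similar) first; researchers
    falling below the outermost cutoff are dropped entirely.
    """
    target = Counter(target_text.lower().split())
    rings = [[] for _ in cutoffs]
    for name, text in researcher_texts.items():
        sim = cosine(target, Counter(text.lower().split()))
        for i, cut in enumerate(cutoffs):
            if sim >= cut:
                rings[i].append((name, round(sim, 3)))
                break
    return rings
```

A real system would likely use richer features (citations, co-authorship, controlled vocabularies) rather than raw word overlap, which is exactly where the science-mapping aspect the commenter mentions would come in.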
