Should members of the National Academy of Sciences be permitted to control the peer-review process for their own submissions?
If one is concerned with citation impact, the answer is both “no” and “yes.”
A recent article, “Systematic Differences in Impact across Publication Tracks at PNAS,” is based on a citation analysis of nearly 2,700 papers published in the Proceedings of the National Academy of Sciences (PNAS) between 2004 and 2005. The study appeared on December 1st in the online journal PLoS ONE and is authored by David Rand and Thomas Pfeiffer, both at Harvard University.
Articles submitted to PNAS currently follow three submission tracks:
- Direct – authors submit their own manuscripts, which are then subject to standard peer review
- Communicated – authors submit their manuscripts to an NAS member, who is responsible for overseeing the peer-review process
- Contributed – NAS members may find their own reviewers and submit their own articles
While direct submission is the norm at most scientific journals, it is relatively new for PNAS, which adopted it only in 1995. Since then, direct submissions have become the dominant route, with communicated submissions waning in popularity. To simplify and streamline the submission process, PNAS will eliminate the communicated track in July 2010. The member-mediated tracks have also drawn scrutiny, with some arguing that they have allowed unacceptable papers to be published without adequate review.
Citation statistics seem to support this notion. As a group, Contributed articles significantly underperformed Direct and Communicated articles. But when Rand and Pfeiffer looked at the distribution of citations within each submission track, they uncovered something more surprising.
The top 10% of Contributed papers significantly outperformed both Direct and Communicated papers, while the bottom 10% of Contributed papers greatly underperformed them. In other words, there was far more variability in the citation performance of papers contributed by NAS members.
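To make that tail comparison concrete, here is a minimal sketch in Python of how one might compare mean citations in the top and bottom deciles of each track. The numbers are purely synthetic and the distribution parameters are illustrative assumptions, not values from Rand and Pfeiffer's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic citation counts (not the Rand and Pfeiffer data).
# The Contributed sample is drawn with a wider spread to mimic the
# greater variability the study reports for that track.
tracks = {
    "Direct": rng.lognormal(mean=2.5, sigma=0.8, size=1000),
    "Communicated": rng.lognormal(mean=2.4, sigma=0.8, size=1000),
    "Contributed": rng.lognormal(mean=2.3, sigma=1.2, size=1000),
}

for name, cites in tracks.items():
    # Mean citations within the top and bottom deciles of each track
    top10 = cites[cites >= np.percentile(cites, 90)].mean()
    bottom10 = cites[cites <= np.percentile(cites, 10)].mean()
    print(f"{name:12s}  mean={cites.mean():6.1f}  "
          f"top-10% mean={top10:7.1f}  bottom-10% mean={bottom10:5.1f}")
```

With a wider spread in the Contributed sample, its overall mean can sit below the other tracks even while its top decile pulls ahead, which is the pattern the study describes.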
In explaining their results, Rand and Pfeiffer consider whether the Contributed track (where NAS members select their own reviewers and submit their own work) subjects these papers to a softer, more lenient form of review. That, at least, would explain the underperforming articles.
But what about the stellar articles? Rand and Pfeiffer speculate:
[It is] possible that these alternative publishing procedures may facilitate the publication of time-sensitive and groundbreaking work which is of high quality but might suffer under the standard review process.
In other words, an alternative publication track may counterbalance the overly conservative review practices of high-prestige journals. In considering the policy implications for journal publishing, Rand and Pfeiffer maintain that there are real benefits to this model:
The benefit of facilitating publication of extremely high-impact Contributed papers could be argued to outweigh the potential cost of allowing more low quality papers to also be published.
Confusing peer-review with quality of submission?
Rand and Pfeiffer’s main assumption in comparing the three groups of published papers (Direct, Communicated, and Contributed) is that they are similar in all other respects. Indeed, the researchers attempted to control for effects known to be associated with higher citation rates (paper topic classification, author-pays open access, and special-feature status). Rand and Pfeiffer acknowledge that they would have liked to include press releases as well, but they miss the biggest difference in their analysis: Contributed papers are submitted by a small and elite group of established authors who are elected members of the National Academy of Sciences.
Each year, NAS members may elect a scant 72 new members and 18 foreign associates in recognition of their outstanding past achievements in research. Once nominated, candidates are subject to a complex and rigorous process of evaluation and election. Those who become NAS members, and are therefore able to contribute their own work to PNAS, fit a very different profile from the authors who submit through the Direct track. Rand and Pfeiffer’s analysis contains no controls for author effects such as prior publication and citation performance, or even the number of authors per paper, all of which are highly predictive of future citations.
We also don’t know whether NAS members are selectively submitting only some of their work to PNAS, or whether they sometimes opt for the Direct submission route, as some members do.
Without controlling for author effects, Rand and Pfeiffer are unable to disentangle the effect of author quality from that of the peer-review process, and risk confusing one for the other.
While the paper’s analysis is methodologically sound, the lack of any serious effort to consider other possible explanations is its main weakness. It does, however, pose some important questions about member-facilitated peer review that deserve to be addressed in more detail.
Discussion
3 Thoughts on "Does Reviewing Your Peers Create Better Results Than Peer-Review?"
Dear Philip,
Thanks very much for your thoughtful posting about our paper.
I completely agree that author effects are an important issue which we were not able to include in our analysis because we did not have the data readily available.
However, all papers published through the Contributed track have NAS-member authors. Thus, if author prestige were responsible for increased citation rates, we would expect to see higher citation counts across all Contributed papers. We do not. Instead, Contributed papers are cited less on average, and only cited more in the extreme. This makes me believe that the success of the very successful Contributed papers is not driven by author prestige.
We also agree that selection bias in which papers get sent to which track is an important issue, and we discuss this in our conclusion:
“In addition to the differences in referee selection and review process, [NAS member] authors may choose to submit papers they feel are stronger or weaker through particular tracks. The direction of this effect, however, is unclear. For example, one could hypothesize that weaker papers are submitted through alternative tracks to increase the probability of acceptance, or that stronger papers are submitted through alternative tracks to increase the speed of acceptance. Or perhaps both suggestions are correct. Exploring this issue also merits future study.”
I’d love to hear your thoughts!
Thanks,
David G. Rand
Harvard University
drand@fas.harvard.edu
http://www.DavidGertlerRand.com
David,
Thanks for your thoughtful reply.
My main concern comes down to whether you are attributing citation differences to the peer-review process when they may be explained by other variables, such as author effects.
Several secondary analyses could have been conducted to rule out some of the competing hypotheses, for instance:
1. Comparing articles by NAS members that went through the Contributed versus the Direct submission track would allow you to understand the effect of self-selection.
2. Using the number of authors as a variable in your regression model would help to control for self-citation (see the sketch after this list).
3. Analyzing the top and bottom 10% of papers for other clues about causation, such as the age of the author (remember that NAS membership is for life) or the funding source.
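As a rough illustration of point 2, a citation regression with track indicators and an author-count control might look something like the sketch below. This is Python with synthetic data and hypothetical column names; the negative binomial specification is simply one reasonable choice for over-dispersed citation counts, not the model from your paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600

# Synthetic, illustrative data; the column names are hypothetical,
# not those of the Rand and Pfeiffer dataset.
df = pd.DataFrame({
    "track": rng.choice(["Direct", "Communicated", "Contributed"], size=n),
    "n_authors": rng.integers(1, 12, size=n),
})
# Citation counts that rise with author count, a crude stand-in for
# self-citation and collaboration effects.
df["citations"] = rng.poisson(np.exp(1.2 + 0.08 * df["n_authors"]))

# Negative binomial GLM: track indicators plus number of authors as a control.
model = smf.glm(
    "citations ~ C(track) + n_authors",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```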
You may be completely right that peer review is responsible for your findings, but without ruling out competing explanations, you may be reporting a spurious association.
Phil,
Thanks for discussing this interesting paper. Given the ‘light’ refereeing process for contributed papers in PNAS, it is to be expected that they have lower citation rates on average. Of course academy members produce high-quality papers, but their incentive is to send the best of them to Nature, Science, and Cell, while sending papers that are more difficult to publish to PNAS as contributed papers.
In my paper with Nicolas Maystre, where we analyzed the effect of open access in a sample of PNAS papers ( http://ideas.repec.org/p/cmi/wpaper/cemi-workingpaper-2008-007.html ; http://scholarlykitchen.sspnet.org/2008/11/18/author-pays-oa/ ), we also found that directly submitted papers received more citations (controlling for a number of factors, including the past productivity of the last author), a result we did not investigate further.
What is really interesting is the finding that the most cited papers in PNAS appear to be contributed, rather than directly submitted. I interpret this result as suggesting that academy members sometimes use the contribution track to publish very important papers quickly, possibly bypassing a conservative bias in the top three journals.
Patrick Gaule