Is a scholarly journal harmed by having a professional, full-time editor at the helm rather than being run by a working researcher? The publicity campaign for the newly proposed eLife journal has suggested as much, but very little empirical data has been offered. Is there a way to measure the performance of professional editors and compare it with that of academic editors?
As Phil Davis recently pointed out, the powers-that-be behind eLife have made a concerted effort to emphasize that their journal will be run by working researchers rather than full-time editors with scientific experience and training. The journal is meant as a direct competitor for top-tier publications like Nature, Science, and Cell, all of which employ professional editors. The rationale for this decision has been given as an issue of control:
“There’s a great deal of displeasure in the life-science community with the control of journals that don’t use scientists to make decisions,” Mr. Schekman told me. “There are many scientific journals that have scientists as editors, but they have not captured the same kinds of very interesting, groundbreaking studies that are published in Cell, Nature, and Science.” Lured by name prestige, researchers route their best work to those journals. That means they neglect other outlets where scientist-editors have more control.
The first question a statement like this raises is whether this sentiment is widespread. More questions then arise: since this is an emotional response, is there a factual basis to it, a measurable difference in the performance of the two types of editors? Are professional editors inferior? Or are they being used as a scapegoat for rejected authors, the scholarly equivalent of blaming the referee when one’s local sports team loses a match? Is the choice of editorial model an important business decision? Does it really matter, and if so, how can we prove it?
First, a conflict of interest statement: I am a former professional journal editor-in-chief. I am no longer involved in the day-to-day decision-making process regarding article acceptance. I now manage a portfolio of journals that are all edited by working academic researchers. I have seen the efficacy of both types of editors and have seen both produce high-quality journals that are at the top of their categories. I don’t feel that I have a bias either way here, though there is likely some lingering personal pride from my editor-in-chief days, a hope that I was fair and made correct assessments of submitted papers. So please do take that into account as you read on.
In order to get a sense of whether this anti-editor sentiment is indeed a common thread in science, I have spoken with a number of biology contacts and bluntly asked them for their thoughts on the subject. While this is all anecdotal, the consistent response has been that, indeed, there is a good amount of resentment aimed at non-academic editors.
The resentment is not generally directed at the well-respected editors-in-chief of the top targeted journals though. To be named editor of a journal like Nature, Science, or Cell, one must have a great deal of experience, a strong track record, and have gained the respect of the scientific community. One doesn’t walk in off the street to these types of jobs. The resentment is instead aimed squarely at the lower editorial levels, the infamous “28-year-old failed postdoc” assistant editor who seems to get all the blame when a paper is rejected.
How dare this novice, this barely-out-of-diapers washout who wasn’t good enough to make it as a “real” scientist, attempt to judge the value of the submitted paper?
I don’t think this is an unreasonable reaction, but it is something of an emotional knee-jerk response that falls prey to several flawed assumptions.
First, can we, once and for all, dispose of the outdated cliché that anyone with a PhD who leaves academia is a “failure”? Recent studies have shown that the number of tenure-track faculty positions available is tiny when compared to the number of degrees awarded (and this is particularly true in biology). When I first arrived in graduate school (many, many years ago), the prevailing attitude was that anyone who took a job in industry was a “sell-out.” Economic realities have blunted that opinion over the years, but there’s still a lingering level of ivory tower snobbery in effect.
I am regularly asked to give graduate students a talk on “alternative careers.” Is it really an “alternative” when it’s where the majority are going to end up?
For many, the academic career path has become less and less attractive over the years as administrators have realized that it’s a buyer’s market. If you’re not willing to sacrifice more and more of your life for less and less reward, there’s a line of hundreds of equally qualified candidates who would gladly have your job. This has driven many of the best minds, the most talented students, away from science. This New York Times article is somewhat misguided in its conclusions, but it does point out that many top science students see areas like finance as likely to be more rewarding than the lengthy slog of academia.
If the system is driving the best and brightest students to Wall Street, then where does the “failure” lie? When a PhD finds gainful employment as an editor in an area that offers a more attractive (for them) career path yet still lets them contribute to and be part of the scientific community, should we consider them “failures”? Are they really worse off than the increasing number of permanent postdocs stuck in midlevel positions for life?
Questions of success aside, it is easy to understand why an experienced researcher might be upset when someone with less experience is allowed to make judgment calls about the quality of their work. But is this really the case at high-quality journals?
I can’t speak for Science, Nature, and Cell (and their spinoff journals) — hopefully, their editorial staff can weigh in below in the comments — but for the professionally edited high-quality journals I’m familiar with, the less-experienced staff is not allowed to make decisions in a vacuum. Submissions are seen by multiple editors, and any decisions made by assistant editors are reviewed and vetted by more senior editors. For the author, the correspondence you receive may come from the assistant editor, but it’s unlikely that this is the only person on staff who has read your submission or contributed to the decision made.
Since this does seem to be such a common misunderstanding, we publishers are clearly not doing a good enough job of making our editorial processes transparent to authors.
All that said, it’s difficult to argue against the success seen by journals with full-time editors. There’s a clear correlation between having full-time editors and status as an absolute top-tier journal, at least in biology. The top journals, and particularly the journals targeted here, employ this strategy. But is this merely correlation or is there some advantage offered?
Measuring “quality” is a difficult thing, as the comments on Tim Vines’ recent blog posting discuss. While your measurement of performance may differ from mine, I’m going to try to separate “quality” from “performance” and suggest a study using the metrics chosen by academia itself, namely the impact factor and citation counts (your suggestions of better metrics are welcome below). We’ll need a selection of journals that employ different editorial policies, some with full-time professional editors and some with part-time academic editors.
The first thing I’d do is replicate the type of study done by The EMBO Journal: where do manuscripts that the editors reject eventually get published? Do they end up in journals with a similar impact factor to the journal that rejected them, a lower one, or a higher one? This would give us a sense of how well each type of editor is performing their job, how well they understand the level of their own journal, and how well they’re interpreting the level of the submitted work they’re choosing to reject.
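To make this concrete, here’s a minimal sketch of how such a tally might be scripted. The file name, column names, impact factor value, and tolerance are all hypothetical placeholders; a real study would pull this information from a citation database.

```python
# A minimal sketch of the EMBO-style rejection-tracking analysis.
# Assumes a hypothetical CSV with one row per rejected manuscript:
#   manuscript_id, destination_journal, destination_impact_factor
import csv
from collections import Counter

REJECTING_JOURNAL_IF = 9.2  # hypothetical impact factor of the rejecting journal

def classify_rejections(path, tolerance=0.5):
    """Bucket rejected manuscripts by whether they were eventually
    published in a journal of higher, similar, or lower impact factor."""
    buckets = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dest_if = float(row["destination_impact_factor"])
            if dest_if > REJECTING_JOURNAL_IF + tolerance:
                buckets["higher"] += 1
            elif dest_if < REJECTING_JOURNAL_IF - tolerance:
                buckets["lower"] += 1
            else:
                buckets["similar"] += 1
    return buckets

buckets = classify_rejections("rejected_manuscripts.csv")
total = sum(buckets.values()) or 1
for label in ("higher", "similar", "lower"):
    print(f"{label}: {buckets[label]} ({100 * buckets[label] / total:.1f}%)")
```

Running the same tally for each journal in the sample would make the distribution of destinations directly comparable between the professionally edited and academically edited groups.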
Next, I’d want to look beyond the level of the journal and try to focus more closely on the level of the individual papers. Let’s take a random sample of accepted papers from a journal and compare the citations they receive to a sample of rejected papers. To avoid the real stinkers that are going to massively drop the second group’s performance, I’d try to take this sample from the more borderline rejects, the ones that were eventually published in journals with a higher, equal, or slightly lower impact factor. I’d probably also want to subtract any citations coming from the authors themselves, so one is measuring only the impact on the field beyond the group that did the original work. Citations could be weighted by the ranking of the journal where they appear — a citation in a higher ranked journal being indicative of greater impact than one from a smaller, niche journal.
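The sketch below illustrates that scoring scheme. The data structures, journal rankings, and default weight are all hypothetical; a real study would draw citation records from a database such as Web of Science or Scopus.

```python
# A sketch of the accepted-vs-rejected citation comparison described above.

def weighted_citation_score(paper, journal_rank, original_authors):
    """Sum citations to a paper, excluding self-citations and weighting
    each citation by the rank of the journal it appears in."""
    score = 0.0
    for citation in paper["citations"]:
        # Skip self-citations: any citing paper sharing an author
        # with the original work.
        if set(citation["authors"]) & set(original_authors):
            continue
        # Weight by the citing journal's rank (e.g., a normalized impact
        # factor); unknown journals get a modest default weight.
        score += journal_rank.get(citation["journal"], 0.5)
    return score

def compare_groups(accepted, rejected, journal_rank):
    """Mean weighted citation score for each group of papers."""
    def mean(papers):
        scores = [weighted_citation_score(p, journal_rank, p["authors"])
                  for p in papers]
        return sum(scores) / len(scores) if scores else 0.0
    return mean(accepted), mean(rejected)

# Toy example with made-up papers and rankings:
journal_rank = {"Journal A": 1.0, "Journal B": 0.4}
accepted = [{"authors": ["Smith"], "citations": [
    {"authors": ["Jones"], "journal": "Journal A"},
    {"authors": ["Smith"], "journal": "Journal B"},  # self-citation, skipped
]}]
rejected = [{"authors": ["Lee"], "citations": [
    {"authors": ["Park"], "journal": "Journal B"},
]}]
print(compare_groups(accepted, rejected, journal_rank))  # (1.0, 0.4)
```

Excluding self-citations and weighting by journal rank are both judgment calls; the values here are only placeholders for whatever metrics reviewers of such a study would find defensible.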
Do these rejected papers outperform the accepted papers? Are the editors making the right call as far as predicting the impact of individual papers? Most importantly for the question asked by the study, how does the performance of the two editorial groups compare?
While not definitive, these sorts of studies would at least give us a basis for comparison, one that relies on data rather than on feelings (this is science after all, not religion). The studies would hopefully provide some evidence arguing for the superiority of one type of editor over the other, or as may be the case, little difference between the two. One could go a step further and refine things, asking questions about broad, general journals as opposed to niche-specific journals and whether one type of editor confers an advantage under different circumstances.
Even if one could come up with a definitive answer here, it’s unlikely to alleviate the resentment. When you’re dealing with the absolute top journals that reject 95% of the papers submitted, you’re going to end up with a lot of ruffled feathers. The “failed postdoc” makes for an easy scapegoat, but who will authors blame when they feel they are unfairly rejected by eLife?
Given the number of angry responses journals with academic editors regularly receive, I suspect that a great deal of the emotion here is rooted in the act of rejection, not in the credentials of the editor. No one likes to be told that their life’s work is not good enough. But that’s merely a personal suspicion, and without any data to back it up, there are limits to how far you should trust it.