
Is a scholarly journal harmed by having a professional, full-time editor at the helm rather than being run by a working researcher? The publicity campaign for the newly proposed eLife journal has suggested as much, but very little empirical data has been offered. Is there a way to measure the performance of professional editors and compare it with that of academic editors?

As Phil Davis recently pointed out, the powers-that-be behind eLife have made a concerted effort to emphasize that their journal will be run by working researchers rather than by full-time editors with scientific experience and training. The journal is meant as a direct competitor for top-tier publications like Nature, Science, and Cell, all of which employ professional editors. The rationale for this decision has been given as an issue of control:

“There’s a great deal of displeasure in the life-science community with the control of journals that don’t use scientists to make decisions,” Mr. Schekman told me. “There are many scientific journals that have scientists as editors, but they have not captured the same kinds of very interesting, groundbreaking studies that are published in Cell, Nature, and Science.” Lured by name prestige, researchers route their best work to those journals. That means they neglect other outlets where scientist-editors have more control.

Responses to these sorts of statements have come from professional editors including BioMed Central’s Miranda Robertson and the editors of Nature Chemical Biology (subscription required).

The first question a statement like this raises is whether this sentiment is widespread. More questions then arise: since this is an emotional response, is there a factual basis, a measurable difference in the performance of the two types of editors? Are professional editors inferior? Or are they being used as a scapegoat by rejected authors, the scholarly equivalent of blaming the referee when one’s local sports team loses a match? Is the choice of editorial model an important business decision? Does it really matter, and if so, how can we prove it?

First, a conflict of interest statement: I am a former professional journal editor-in-chief. I am no longer involved in the day-to-day decision-making process regarding article acceptance. I now manage a portfolio of journals that are all edited by working academic researchers. I have seen the efficacy of both types of editors and have seen both produce high-quality journals that are at the top of their categories. I don’t feel that I have a bias either way here, though there is likely some personal pride lingering from my editor-in-chief days, a hope that I was fair and made correct assessments of submitted papers. So please do take that into account as you read on.

In order to get a sense of whether this anti-editor sentiment is indeed a common thread in science, I have spoken with a number of biology contacts and bluntly asked them for their thoughts on the subject. While this is all anecdotal, there has been a consistent response that indeed, there is a good amount of resentment aimed at non-academic editors.

The resentment is not generally directed at the well-respected editors-in-chief of the top targeted journals though. To be named editor of a journal like Nature, Science, or Cell, one must have a great deal of experience, a strong track record, and have gained the respect of the scientific community. One doesn’t walk in off the street to these types of jobs. The resentment is instead aimed squarely at the lower editorial levels, the infamous “28-year-old failed postdoc” assistant editor who seems to get all the blame when a paper is rejected.

How dare this novice, this barely-out-of-diapers washout who wasn’t good enough to make it as a “real” scientist, attempt to judge the value of the submitted paper?

I don’t think this is an unreasonable reaction, but it is something of an emotional knee-jerk response that falls prey to several flawed assumptions.

First, can we, once and for all, dispose of the outdated cliché that anyone with a PhD who leaves academia is a “failure”? Recent studies have shown that the number of tenure-track faculty positions available is tiny compared to the number of degrees awarded (and this is particularly true in biology). When I first arrived in graduate school (many, many years ago), the prevailing attitude was that anyone who took a job in industry was a “sell-out.” Economic realities have blunted that opinion over the years, but there’s still a lingering level of ivory tower snobbery in effect.

I am regularly asked to give graduate students a talk on “alternative careers.” Is it really an “alternative” when it’s where the majority are going to end up?

For many, the academic career path has become less and less attractive over the years as administrators have realized that it’s a buyers’ market. If you’re not willing to sacrifice more and more of your life for less and less reward, there’s a line of hundreds of equally qualified candidates who would gladly have your job. This has driven many of the best minds, the most talented students away from science. This New York Times article is somewhat misguided in its conclusions, but it does point out that many top science students see areas like finance as likely to be more rewarding than the lengthy slog of academia.

If the system is driving the best and brightest students to Wall Street, then where does the “failure” lie? When a PhD finds gainful employment as an editor in an area that offers a more attractive (for them) career path yet still lets them contribute to and be part of the scientific community, should we consider them “failures”? Are they really worse off than the increasing number of permanent postdocs stuck in midlevel positions for life?

Questions of success aside, there is clear reason why an experienced researcher might be upset when someone with less experience is allowed to make judgement calls about the quality of their work. But is this really the case at high-quality journals?

I can’t speak for Science, Nature, and Cell (and their spinoff journals) — hopefully, their editorial staff can weigh in below in the comments — but for the professionally edited high-quality journals I’m familiar with, the less-experienced staff is not allowed to make decisions in a vacuum. Submissions are seen by multiple editors, and any decisions made by assistant editors are reviewed and vetted by more senior editors. For the author, the correspondence you receive may be from the assistant editor, but it’s unlikely that this is the only person on staff who has read your submission or contributed to the decision.

Since this does seem to be such a common misunderstanding, publishers are clearly not doing a good enough job making our editorial processes transparent to authors.

All that said, it’s difficult to argue against the success seen by journals with full-time editors. There’s a clear correlation between having full-time editors and status as an absolute top-tier journal, at least in biology. The top journals, and particularly the journals targeted here, employ this strategy. But is this merely correlation or is there some advantage offered?

Measuring “quality” is a difficult thing, as the comments on Tim Vines’ recent blog posting discuss. While your measurement of performance may differ from mine, I’m going to try to separate “quality” from “performance” and suggest a study using the metrics chosen by academia itself, namely the impact factor and citation counts (your suggestions of better metrics are welcome below). We’ll need a selection of journals that employ different editorial policies, some with full-time professional editors and some with part-time academic editors.

The first thing I’d do is replicate this type of study done by The EMBO Journal. Where do manuscripts that the editors reject eventually get published? Do they end up in journals with a similar impact factor to the journal that rejected them, a lower one, or a higher one? This would give us a sense of how well each type of editor is performing their job, how well they understand the level of their own journal and how well they’re interpreting the level of the submitted work they’re choosing to reject.
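For concreteness, here is a minimal sketch of what that first analysis might look like. Everything in it is invented for illustration; a real study would pull the fate of each rejected manuscript from a submission system and a citation database.

```python
from collections import Counter

# Impact factor of the hypothetical rejecting journal (invented number)
REJECTING_JOURNAL_IF = 9.2

# For each rejected manuscript: the impact factor of the journal that
# eventually published it, or None if it never resurfaced in the literature.
rejected_fates = {
    "ms-001": 4.1,
    "ms-002": 8.9,
    "ms-003": 11.5,   # ended up in a higher-impact journal
    "ms-004": None,   # apparently never published
}

def classify_fate(published_if, rejecting_if, tolerance=1.0):
    """Bucket a rejected manuscript by where it landed relative to the rejecting journal."""
    if published_if is None:
        return "unpublished"
    if published_if > rejecting_if + tolerance:
        return "higher"
    if published_if < rejecting_if - tolerance:
        return "lower"
    return "similar"

fates = Counter(
    classify_fate(pub_if, REJECTING_JOURNAL_IF) for pub_if in rejected_fates.values()
)
print(fates)  # e.g. Counter({'lower': 1, 'similar': 1, 'higher': 1, 'unpublished': 1})
```

The interesting comparison is then how those fate distributions differ between journals run by professional editors and journals run by academic editors.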

Next, I’d want to look beyond the level of the journal and try to focus in more on the level of the individual papers. Let’s take a random sample of accepted papers from a journal, and compare the citations they receive to a sample of rejected papers. To avoid the real stinkers that are going to massively drop the second group’s performance, I’d try to take this sample from the more borderline rejects, the ones that were eventually published in journals with a higher, equal or slightly lower impact factor. I’d probably also want to subtract any citations coming from the authors themselves, so one is just measuring impact on the field beyond the group that did the original work. Citations could be weighted by the ranking of the journal where they appear — a citation in a higher ranked journal being indicative of greater impact than one from a smaller, niche journal.
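As a rough sketch of what this second comparison might look like in code (everything below, from the author names to the citing journals and the rank weights, is invented; a real analysis would draw on a citation database such as Web of Science or Scopus):

```python
from statistics import median

# Hypothetical journal weights, e.g. normalized impact factors (invented numbers)
JOURNAL_RANK = {"J. Big": 1.0, "J. Mid": 0.6, "J. Niche": 0.3}

def weighted_citations(paper):
    """Count a paper's citations, dropping self-citations and weighting each
    remaining citation by the rank of the journal it appears in."""
    authors = set(paper["authors"])
    score = 0.0
    for cite in paper["cited_by"]:
        if authors & set(cite["authors"]):
            continue  # self-citation: at least one author in common
        score += JOURNAL_RANK.get(cite["journal"], 0.3)  # unknown journals get the niche weight
    return score

# Two tiny invented samples: papers the journal accepted vs. borderline rejects
# that were eventually published elsewhere.
accepted = [
    {"authors": ["Smith", "Jones"],
     "cited_by": [{"authors": ["Lee"], "journal": "J. Big"},
                  {"authors": ["Smith"], "journal": "J. Mid"},   # self-citation, ignored
                  {"authors": ["Park"], "journal": "J. Niche"}]},
]
rejected = [
    {"authors": ["Garcia"],
     "cited_by": [{"authors": ["Chen"], "journal": "J. Mid"},
                  {"authors": ["Okafor"], "journal": "J. Mid"}]},
]

print(median(weighted_citations(p) for p in accepted))   # 1.3
print(median(weighted_citations(p) for p in rejected))   # 1.2
```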

Do these rejected papers outperform the accepted papers? Are the editors making the right call as far as predicting the impact of individual papers? Most important for the question asked by the study, how does the performance of the two editorial groups compare?

While not definitive, these sorts of studies would at least give us a basis for comparison, one that relies on data rather than on feelings (this is science after all, not religion). The studies would hopefully provide some evidence arguing for the superiority of one type of editor over the other, or as may be the case, little difference between the two. One could go a step further and refine things, asking questions about broad, general journals as opposed to niche-specific journals and whether one type of editor confers an advantage under different circumstances.

Even if one could come up with a definitive answer here, it’s unlikely to alleviate the resentment. When you’re dealing with the absolute top journals that reject 95% of the papers submitted, you’re going to end up with a lot of ruffled feathers. The “failed postdoc” makes for an easy scapegoat, but who will authors blame when they feel they are unfairly rejected by eLife?

Given the number of angry responses journals with academic editors regularly receive, I suspect that a great deal of the emotion here is rooted in the act of rejection, not in the credentials of the editor. No one likes to be told that their life’s work is not good enough. But that’s merely a personal suspicion, and without any data to back it up, there are limits to how far you should trust it.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

24 Thoughts on "Can We Measure the Value of Professional Editors?"

As a 28-year-old PhD who voluntarily left academia for a so-called ‘alternative’ career in science, I loved reading this article. It hit the nail on the head many times!

If academics want someone equally qualified in the subject area reviewing every article, the likely end result is a whole lot of ego clashing and messy politics. That said, where does it leave the term ‘peer review’? I can see this from both sides. Great article!

Interesting study, but I am not sure who would pay for it, or why. Perhaps a large publisher trying to decide which way to go. The two groups (professional editors and academics) do not have separate associations, do they? The scientometrics community is otherwise occupied, mostly assessing impact; this study is a bit too applied for them, I think. I am sure Phil Davis could do it.

One thing: you seem to be talking about confining the analysis of rejects to mistakes, that is, papers that were published elsewhere in journals of roughly equal rank. That may be wrong.

The phenomenon may be more one of correlation than cause. Authors get upset when a paper is rejected, often blaming the editor. They get more upset if it is a ‘high impact journal’, and since high impact journals reject more papers, this happens more often. Of course the three highest impact journals are edited by professional editors, and so it is these editors that most often become the focus for author wrath. I’ve heard authors level withering accusations of ineptitude at academic editors on numerous occasions; they just tend to get less emotive because there is (sadly) less riding on publication in the journals they edit. As you imply, this may change if eLife succeeds.

This debate obscures the real problem, however. Most of the time, the problem arises when an editor receives conflicting reports and has to take sides. In this case, (s)he is going to upset someone: if the paper is accepted, the over-ruled referee will deem the editor incompetent; if the paper is rejected, the author will deem the referee incompetent; if an additional referee is sought or further revisions are required, some or all involved will deem the editor indecisive and incompetent, and in any case this only defers their conflicting assessments of the final judgement. A notable feature of eLife is the proposal for a more collaborative approach to resolving such issues. It will be interesting to see if they succeed with this, though as others have noted this may come at the expense of timeliness and ‘impact’.

Scientists who whine about the deficiencies of professional editors and extoll the virtues of pure peer review should look at how well pure peer review works in assessing NIH grant applications.

Strong professional editors can arbitrate conflicting reviews and petty disputes with fewer conflicts of interest than peer scientists who act as editors. They can act with more detachment, with less need to consider peer politics, and they can develop a critical perspective that working scientists have little time or opportunity to develop. They can utilize the resources of peer review without being enslaved by it.

“Do these rejected papers outperform the accepted papers? Are the editors making the right call as far as predicting the impact of individual papers? Most important for the question asked by the study, how does the performance of the two editorial groups compare?”

Studies of the fate of rejected manuscripts all report the same findings: Rejected articles end up in journals of lower impact and are cited less frequently, although there are some exceptions. In addition, these studies report some percentage of rejected manuscripts that cannot be subsequently found and the assumption is that the authors gave up trying to publish. This goes against another prevailing claim that everything in science is published.

I don’t know of any studies that compare professional editors against academic editors, although there are various studies that look at varying the peer-review process slightly within the same journal (BMJ is highly experimental in this regard).

I collected the data David is talking about for Mol Ecol and it shows exactly what Phil describes. Another measure of peer review quality is the coefficient of variation in citation rate within journals (see Neff & Olden’s Bioscience paper). If a journal has an inconsistent review process, in that the editors are not good judges of which papers meet their standards, then there will be a lot of variation in citation rate. The CoV is dimensionless and can easily be compared between journals with different impact factors, and would not be all that hard to collect for a wide range of different types of journal.
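For concreteness, the calculation is simple (the citation counts below are invented purely to illustrate the contrast):

```python
from statistics import mean, pstdev

def citation_cov(citation_counts):
    """Coefficient of variation (std/mean) of per-paper citation counts for one journal."""
    m = mean(citation_counts)
    return pstdev(citation_counts) / m if m else float("nan")

consistent_journal = [12, 15, 9, 14, 11, 13]   # editors apply a steady standard
erratic_journal = [2, 40, 1, 35, 3, 30]        # citation rates all over the place

print(round(citation_cov(consistent_journal), 2))  # ~0.16
print(round(citation_cov(erratic_journal), 2))     # ~0.91
```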

One wonders why scientists are not complaining about professional editors being employed to make decisions about books as well as journal articles. Perhaps because books in science are a sideshow and not the main act? I would suggest, however, that journals like Nature and Science function more like books, in that decisions about what gets published in them have at least as much to do with helping these journals continue to be widely popular as with pure judgments of scientific merit, i.e., with marketing criteria and not just editorial ones. If that hypothesis is correct, then there is no question that professional editors are better equipped to make these decisions than working scientists, who are ill equipped to make decisions about markets. Professional editors, needless to say, are also better equipped to make decisions about whether to publish a journal in the first place, again a decision involving criteria other than pure merit.

While there may be some correlation between quality and high impact, the one does not necessarily follow from the other. While it is true that articles of lesser quality may have less impact, it does not necessarily follow that low-quality papers equal low impact, nor that high-quality papers equal high impact. The DECREASE studies were recently discovered to be based on faulty data, yet they had an enormous impact (some of the six DECREASE studies were cited more than 200 times each). Therefore I believe a study that conflated popularity with quality would produce erroneous results. Beyond the quality-equals-impact fallacy, one must also consider the fact that journals reject papers for many reasons. The quality of the paper is not the only consideration used in selecting a paper for publication. It would be false to assume that if a paper is rejected by a leading journal, it follows that it would only be published by a lesser journal. Any study that tried to measure quality based on impact would have an inherent bias that would skew the results.

One can certainly argue for different measures of quality. I selected the ones I did because they’re the currency of the realm, the metrics selected by academia itself. I’m less interested in the comparative quality of a journal than in its internal performance: how regularly did the journal’s editor make the “right” decision? How does that performance level compare with other types of editors at other journals? How one decides that the “right” decision has been made is open for debate.

I work in medical education at a university that has both an MD and a DO medical school. Of course there’s a lot of discussion about who makes a better doctor. I suppose you can argue either way, but I would take a good DO any day over a mediocre MD, or a good MD any day over a mediocre DO. I think the same applies here.

As an author, what I absolutely loathe is when the editor is lazy and turns the review into a vote-counting process among the reviewers and, if the paper is accepted, just tells the author to fix everything criticized by the reviewers. It makes you feel like you’re in some nightmare version of an Aesop fable, trying to please a bunch of masters. It’s almost as bad as defending a dissertation.

As was noted above, editors need to consider the feedback from reviewers carefully and with an open mind, using it along with their own judgement to make a thoughtful decision, and then provide a single coherent critique of how to revise the manuscript if it is accepted. The requirements are to be knowledgeable and experienced in the field and to take the time and put in the effort to do it right. I’ve had both good and bad experiences with both kinds of editors and can’t really speak to which, in general, does a better job.

I don’t think anyone would disagree with you, David. But eLife is making an editorial statement that professional editors are categorically inferior to academic editors, with little but anecdote to support it. Richard Sever (see comment) describes how and why professional editors are easy to blame, and David Crotty is only asking that such categorical statements be backed up with some data, considering that we have inferential evidence to suggest otherwise.

In the end, this may only have to do with the performance of good and bad editors.

Thanks Phil. I was just trying to make the point that I believe the variation in quality is far greater within the two types of editors than between them. David Crotty has a valid point but good luck trying to find a valid performance measure. Also, if I am right, you would need a massive sample to detect a difference.

Isn’t that something of an answer itself? If it’s nearly impossible to measure any significant difference, does one exist?

I’m not sure how eLife will implement a collaborative approach to editorial decisions, but I think this would be a key improvement to the publishing process. As it stands, at top-tier journals it is basically 1 person making the decision. It’s hard to stay grounded and impartial once you realize you have that power.

How do you define the “right” decision, David? An editor can reject a “quality” paper (or in this case a highly cited paper) for many legitimate reasons. I often tell Chinese researchers that they need to think like editors. Editors select articles on three broad criteria: 1) is the science of good quality, 2) will this article have an impact, and 3) will this article interest “my” readers. A paper can easily pass the first two and then be rejected on the third. An editor’s first responsibility is to interest their readers. If a journal does not interest its readers, it won’t be around for long.

Chinese researchers, for example, like to submit their manuscripts to high impact journals rather than journals that might be interested in the topic matter. What is true for the Chinese is most likely true for other Asian authors, and just to be clear, East Asian authors now account for 18.5% of all articles archived on PubMed, 13% of all clinical trials archived on PubMed and 7.5% of all articles published in core journals. Article rejection rates are not just a function of the actions of editors, they are also a function of the actions of authors.

The first question that needs to be asked is how many papers are rejected merely because they do not fit the editorial profile of the publication. Once you remove those articles, then perhaps you can look at the impact factor as a surrogate for quality, though even then (see the DECREASE studies) I think impact is an inadequate measure of true research quality.

My definition of “right” matters less than the definition used by those trying to decide between the two types of editor. I’m not declaring that the Impact Factor is a perfect measure of quality, just that it is the metric used by academia for that purpose.

But I take your point–if a paper is of high quality and has a tremendous impact but has been rejected by the journal for being out of scope, then that will unfairly negatively sway the editor’s score in my study. The “right” decision has been made, but it looks wrong under my criteria. The question then is how often this really occurs (I’m assuming most of the sorts of papers you describe above are going to be both out of scope and of insufficient quality).

One could try to exclude all papers rejected for being out of scope, but that introduces a much more difficult step to the analysis, probably one that precludes a lot of the automation that would make it possible to look at large numbers of papers.

David,

I have been doing a bit of investigative work on the PubMed database. In 1997 there were 22,673 human clinical trials archived on PubMed; of these, 4,230 (18.7%) were published in the “core journals” (as defined by PubMed). In 2010 there were 38,552 human clinical trials archived on PubMed, of which 5,006 (12.9%) were published in the “core journals.” In other words, the competition to get published in the core journals is much higher today than it was in 1997. The share of clinical trial articles published in the core journals is roughly 1/3 lower today than it was in 1997. It is for this reason that I think your methodology would be biased. I conclude from this data that a very large percentage of the rejection rate in the core journals is merely the result of increased competition for space. I predict that the study you propose would show that core journal editors are getting increasingly better at selecting “quality” articles when in fact all that has really occurred is that the competition for space has become tighter. Editors today are probably no better or worse than the editors of 14 years ago.
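A quick check of that arithmetic, using the figures quoted above (small rounding differences aside):

```python
core_1997, total_1997 = 4_230, 22_673
core_2010, total_2010 = 5_006, 38_552

share_1997 = core_1997 / total_1997          # share of trials in core journals, 1997 (~0.19)
share_2010 = core_2010 / total_2010          # share of trials in core journals, 2010 (~0.13)
relative_drop = 1 - share_2010 / share_1997  # ~0.30, i.e. roughly one third lower

print(f"1997: {share_1997:.1%}, 2010: {share_2010:.1%}, relative drop: {relative_drop:.0%}")
```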

This increased competition for limited space nicely predicts, in my view, reactions such as eLife. All we are observing is power (due to increased competition for space) shifting from authors to editors. No wonder then that researchers feel powerless and embrace efforts like eLife.

A few follow-up points. Three trends have converged that have hurt professionally run journals (such as Cell, Nature, and Science):

1. The growth of the number of scientists at a time of increased job competition, which increases competitive pressures to publish at the top places. As a result, a wider variety and greater number of scientific manuscripts arrive at the editor’s desk, which creates tremendous challenges for understanding and sorting through the articles.

2. The competitive pressures for publishing great manuscripts forces editors at professionally run journals to become acquisitions editors rather than content editors. They have to woo the best scientists to publish with them instead of the competing journals. This turns the editor’s job into that of a salesman.

3. In part because of these two reasons, the incentives to become a great professional editor are not as great as they once were.

I don’t think the system of professionally run journals is broken, and the open access journals have not provided a good alternative so far. Professional journals need better incentives to cultivate great professional editors. I don’t see any proposals on the table to address this issue.

Are you casting aspersions here on all the “professional editors” who acquire books, which indeed involves selling your press to an author? That is a primary part of a book editor’s job. It has not deterred many of us from pursuing that profession.

Not at all. But the time (and pressure) involved in pursuing the necessary commercial activities is time that can’t be devoted to determining and improving content.

As commercialization has increased in life sciences journals, the time that can be devoted to doing serious editorial work has contracted while the need for good editing has increased.

You raise some interesting big picture questions, though you might have to buy me a beer if you want to get into a deeper discussion like this. In this case though, you may be overthinking things. If one is going to make a statement that group A is superior to group B, then one must define the terms for that judgement, and then perform measurements and compare the two groups. My hypothetical studies may not be the best way to do such measurements but they were selected because they are the chosen metrics of academia.

Personally I believe there are good editors and bad editors, and the quality of their work does not directly correlate with their employment status.

I believe you miss the fundamental problem of using professional editors: the standards of science are not being set by scientists. I state my case in an editorial in GENETICS (the peer-edited journal of the Genetics Society of America; GENETICS 181: 355–356). Briefly, the standards of science have historically been determined by scientists. I have seen this principle steadily erode in my field during my career as the professionally edited journals (Nature and Cell and their spawn, especially) became increasingly influential, such that the professional editors now have a large influence on who gets hired and promoted and funded. Admittedly, that’s our (the practicing scientists’) fault because we are the ones who sit on the hiring and promotions and grant review committees, but it’s degrading science.

One more point, regarding “To be named editor of a journal like Nature, Science, or Cell, one must have a great deal of experience, a strong track record, and have gained the respect of the scientific community.” I beg to differ. A PubMed search of those editors will tell a different story in many cases. Try it and see.

I think it’s something of a myopic viewpoint. Who, in your view, is allowed to “set the standards for science”? If your criterion is a PubMed search detailing the papers published by an individual, then you have immediately ruled out the vast majority of practicing scientists, as most publication records aren’t particularly stellar. As I recall, most authors appear only once in the literature, never to appear again. So you’ve effectively taken peer review out of the system, because only a small cadre of the elite who have followed one particular career path are qualified to weigh in on what’s important.

What about university and institutional administrators? They have a say in who gets hired, who gets space, equipment, how money is spent. I guess we need to do away with them as well. Certainly elected officials who determine the budgets of funding agencies can’t possibly be allowed to have a say in decisions about science. Nor the bureaucrats running those agencies. I suppose we’ll have to do away with the Wellcome Trust as well since that’s run by a variety of non-scientist executives. Ditto the trustees of HHMI.

Hmm, it strikes me that you elite scientists are not going to have much time to do any science, given all the new jobs you’re proposing to take on.

I’m not disputing the value of the working scientist as editor–all of the society-owned journals I manage have that arrangement. I just think one can’t deny the success that has been exhibited by journals with professional editors. How does one reconcile the high level of quality achieved with such incompetent unqualified editors? If you’re right, then Cell, Nature and the like should regularly be publishing terrible papers. The citation record seems to rule that out.

(comment edited to remove a bit of hearsay that in retrospect didn’t seem fair to all involved)

A follow up question: in the field where I spent nearly 15 years doing research (cellular and molecular biology), a PI quickly moves away from the bench when starting a lab. After the first few years, they do fewer and fewer experiments and soon are no longer doing any actual bench work. Should a researcher who is no longer doing experiments be considered a “practicing scientist”?

How does that PI differ from a journal editor if both spent the same amount of time doing experiments before moving away from the bench?
