Journals play a strong role in quality signaling for the papers they publish, as Phil Davis has discussed before. Quality signaling may seem self-explanatory — you’d think it has to do only with how good a paper is — yet quality is highly nuanced, as Phil discussed in a later comment on the same post:

  1. Quality is an abstract construct. It doesn’t exist except in our heads. We often know it, however, when we see it.
  2. Quality is multi-dimensional. It is made up of many different components, each of which may say something different about what we’re trying to evaluate.
  3. Quality ultimately reflects a private evaluation.
  4. Yet, our private evaluation is influenced – often to a great degree – by the evaluations of others. This is because we are often unsure of our own evaluations and seek the evaluations of others — especially those of experts. We are also social creatures that prefer consensus.
  5. Lastly, quality is often associative. What we think of the group as a whole affects what we think of its individual members. This is why journal branding is so important.

An oversimplification of the peer-review process portrays administrative staff distributing a batch of scientific papers to a set of volunteer expert reviewers, who evaluate, grade, and return the papers to be accepted or rejected based upon their combined input and a dash of final editorial judgment.

This version misses an important step that precedes external peer-review in nearly every journal — initial editorial review for quality, relevance, and scientific interest — associative quality, in many ways. Usually performed by in-house editors, this step can lead to the rejection of a high number of submissions. And while some open access and mega-journal advocates are disdainful of it, editorial review is becoming more important as the number of submissions — and the number of misguided submissions — climbs, and as editors become more attuned to the needs of authors, who usually prefer a quick decision to weeks or months of delay before rejection.

How widely is it used? It’s nearly universal. I only know of two traditional journals that don’t routinely practice it, and both of those are itching to implement it.

What percentage of papers does it remove on the front end? Recently, I used a combination of desk research techniques — looking stuff up online and asking people via email — to collect a sample of editorial rejection or “desk reject” rates. This admittedly incomplete survey (18 journals) found that between 7% and 88% of submissions are rejected without external peer review. Here’s the anonymous list of rates of rejection without external peer review:

  • 13.3%
  • 26%
  • ~50%
  • 13.5%
  • 60%
  • 75%
  • 88%
  • 58%
  • 35%
  • 24%
  • 24% (yes, two different journals had 24% rates; this is not a mistake)
  • 27%
  • 60%
  • 10%
  • 10% (yes, two different journals)
  • 7%
  • 33%
  • 48%
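
The rates above can be summarized in a few lines of Python — a quick sketch, with the values simply transcribed from the bullet list:

```python
# Desk-reject rates transcribed from the list above (percent of submissions
# rejected without external peer review).
import statistics

rates = [13.3, 26, 50, 13.5, 60, 75, 88, 58, 35, 24,
         24, 27, 60, 10, 10, 7, 33, 48]

print(f"n = {len(rates)}")                        # 18 journals
print(f"range = {min(rates)}% to {max(rates)}%")  # 7% to 88%
print(f"median = {statistics.median(rates)}%")    # 30.0%
print(f"mean = {statistics.mean(rates):.1f}%")    # 36.8%
```

The more-than-tenfold spread between the lowest and highest rates is itself the notable finding here.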

There was no clear association between the rate of editorial rejection and obvious things like field, size, impact factor, or circulation. The only little trend I saw was that medical journals seem to have generally higher initial rejection rates than journals in other fields. But overall, editorial review practices seem to have developed through a mix of editorial philosophy, pragmatism, culture, and experience. Most editorial review processes involve at least two editors agreeing a paper should be bounced out, but this varies a bit as well.

Early review and rejection helps editors cope with the crush of submissions all strong journals are dealing with, and it satisfies a need to be fair and fast with authors who have missed the mark in some way — wrong journal, poor study, or both. Also, peer reviewers are generally more stretched, so journal editors are sensitive to this volunteer group; editorial review is a way of sparing them the obvious mismatches.

Editorial review and rejection seems to be increasingly important for traditional journals. One source mentioned that some journals are hiring staff solely to provide an initial filter for obvious misfits and dreck. However, it is largely missing from open access mega-journals. For mega-journals, associative quality is less important than a broad filter. From a business standpoint, publishing more papers is better business as well, further dampening the impulse to narrow the candidate set from the start.

Each paper is also more demanding to publish, which likely factors into the trend for traditional journals to winnow down the field early — disclosures, supplementary data, complex data sets, online summaries, editorials, and large image sets all require a lot of staff and editorial time. Editorial review allows editors and staff to focus on the papers that have a chance, and even more on the papers that will ultimately be published. It reduces distractions.

We also have a robust journals ecosystem, which reassures editors that mismatches can find a home, making passing on a decent batch of papers far easier.

Brands have also ascended in importance over the past few decades, and brands bring all sorts of identifying traits along with them. Knowing these traits and living with them, editors are able to more quickly identify misfits, making editorial review both more efficient and more accurate.

There’s a flip-side to editorial review — marketing. If you don’t have a clear idea of journal identity or place, how can you market your journal? This has significant implications for authors, readers, news and social media, and ultimately impact factor and sustainability. If you can’t tell whether a paper is “right for you,” then you don’t know what your journal’s about, which by extension means you can’t tell the world what your journal’s about and/or don’t know who in the world to tell these things.

Editorial review is growing in importance for domain-specific journals and is a key to success. It underscores the importance of professional editors to brands and communities, highlights the value of associative quality, and reflects your ability to market, promote, and promulgate your journal. Its absence within the mega-journals may help explain the diffuse focus of these repository-like publications and their sporadic media coverage and brand associations. After all, quality signaling isn’t just about how good a paper is. It’s about how a community will process it, which means knowing which community you’re working with. Editorial review is a key but often forgotten component of quality, efficiency, and effectiveness.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


Discussion

34 Thoughts on "Editorial Rejection — Increasingly Important, Yet Often Overlooked or Dismissed"

Interesting post – especially the statistics for the % of editorial rejections! That’s the kind of thing which is invisible to those of us who are just on the other side (submitting papers, that is). A lot of journals will publicize overall rejection rates, but don’t tell you that breakdown.

I’d be interested in a follow-up piece on editorial involvement at the other end: once reviews are in, how much input does the editor have in balancing all the different views (reviewers’ + authors’)? Funnily enough, I’ve felt more annoyed in the past by editors ignoring my input as a reviewer than I have as an author …

Thanks for the stats which are very interesting (even if a small sample).

I would be interested to know how much rejection is based on scientific arguments versus the pursuit of a high impact factor. In my personal experience, editorial rejection has come with the argument “not of general interest” (or something like it), accompanied by an offer to transfer the manuscript to a lower-rated journal of the same publisher. My conclusion is that these editorial rejections are based on some estimation of the number of citations the paper will receive in two years (i.e., rejecting if below some threshold value for that journal). This is scientifically unacceptable. I hope this is not too widespread, but anecdotal evidence suggests otherwise…

How does one separate out rejections based on “pursuit of a higher impact factor” from rejections based on low quality? If an editor rejects your paper because it’s flawed, flat-out awful or not significant in any way, how is that not a “scientific argument”?

If citations measure scientific importance then citation potential is a scientific argument. In fact most of what is here being called quality is actually importance. Quality sounds like how well the work was done and that is not what counts. What counts is how important the results are, which is often a matter of luck, not something in the control of the researcher. Important discoveries are seldom planned.

I don’t think there is a particular association with in-house professional editors. Editorial rejection is widely practiced in journals run by academic editors. By identifying papers an editor knows will not survive peer review, (s)he saves authors time (allowing them to resubmit quickly to a more ‘appropriate’ journal) and reduces the burden on referees on whom the journal depends.

The process of course requires good editors (and various people will now probably argue all day about whether academics or professional editors are best equipped in this respect), and there is always an element of subjectivity (a former colleague once editorially rejected a paper from Science only to see it appear on the cover of Nature a few months later). Nevertheless, most academics, most of the time, would probably agree that the pros outweigh the cons.

The numbers I supplied to Kent for this post came from a journal with a working scientist editor.

It’s important to emphasize the tremendous value of speed when discussing this subject. Most, if not all, researchers would much rather have their paper rejected immediately than go through a multi-month peer review and revision process that in the end also results in rejection. Where possible, a good editor can supply a brief statement as to why the paper was rejected, and this can be very helpful for the author in finding a proper home for the publication. And the faster that home can be found, the better.

Yes – speed is key. In fact, on some journals, editors offer authors the option of disputing the decision and requesting the paper be sent to referees but explicitly point out that it will go through a time-consuming peer review process that in their judgement will almost certainly result in rejection.

This is a great post, Kent. It underscores the fact that publishing is not passive or simply the work of volunteers. I would add one other point, which in some respects may be the most important editorial decision of all: the selection of the journal’s editor. It’s somewhat true (except for desk rejections) that most publishers take a hands-off approach to the editorial work for a journal, but the really important decision is the one that precedes it: Who will be the editor in the first place? That choice can drive a journal’s direction and can have a large influence on what gets published, which in turn influences the kind of research people conduct in the first place. Publishers, in other words, are part of the research process, not an add-on. Of course, some people don’t like this, and they would like it even less if they understood how deeply publishers influence the research agenda.

Great point. The extent to which journals actually drive research would itself be a good research topic. It sounds like a positive feedback loop, which may help account for the occurrence of fads in research, or more politely, hot topics.

It’s worth noting that the editorial rejection rate among the better scholarly book publishers is much higher. I don’t think anyone has done a survey of this, but the rate is likely to be 90% and up. The role of the book acquisitions editor as list curator is a key difference between book publishing and much journal publishing. This has implications for the debates over peer review, OA, etc., which sometimes conflate articles and books.

Sad but true – this system also militates against really new work, because no-one has heard of it before and thus rejection is the reflex response. A shame, because real innovation could help the journal’s status.

Damned if you do, damned if you don’t. One commenter chides editors for immediately rejecting anything truly novel, and another chides editors for chasing flashy and exciting new papers to draw readers rather than relying solely on scientific validity.

These seem somewhat contradictory points, much like the constant demands that publishers embark on expensive technological experiments to revamp scientific communication while also lowering the prices that pay for such experimentation…

It is not a contradiction, but rather a balancing act. More technically the optimization of a multi-objective system. In all such cases each separate objective is sub-optimized, so those who over-value that objective complain. Such cases are everywhere in the human condition. It is the curse of having more than one objective, and the life blood of politics.

Why do you assume an editor would automatically reject anything novel? The breakthrough paper likely will get the most cites. As an editor, you sometimes have to take a chance or your journal will stagnate.

Alan Thomas has pointed to one way in which editorial review differs for books compared with journal articles. I would further emphasize that there are constraints operating for books that do not exist for journals. Chief among them is that reviewers are paid for reviewing books but seldom are paid for reviewing articles. This is a significant cost for scholarly publishers, and some have even operated by imposing a cap on how much any individual editor can spend in a given year on readers’ fees, thus determining in advance how many submissions can be passed on for external review. Also, questions of market that do not exist for individual journal articles exist for books: not only is fit for the publisher’s list (comparable to fit for the journal) important, but so is sales potential; many books are turned down, not on grounds of scholarly defect, but because they are not expected to sell enough copies to cover the costs of publishing them. Discussions about peer review, and other subjects, in SK would do well to bear in mind that not everything true for journal publishing is true for other types of scholarly publishing.

I would like to thank you for bringing the issue of editorial rejection prior to peer review to the fore. I am the new editor-in-chief of a broad-scope physics journal, Physica Scripta, published by the Royal Swedish Academy of Sciences. One of my first tasks upon assuming this position has been to devise a strategy to enable the editorial team, and not least myself, to cope with the flood of submissions. Our submissions are escalating (20% p.a.) and our current overall rejection rate is over 70%, and rising.

Physica Scripta could receive a disproportionately large number of crank submissions owing to the fact that the academy oversees the selection process for the Nobel Prizes. Apart from these delightful manuscripts, however, the publishing editor will reject a few papers, such as those that have been submitted to the journal in error, or after rejection by another journal without suitable modifications being made to adapt them to our requirements.

The next stage of the review process is the distribution of manuscripts to our expert editors for an assessment of the scientific quality and the novelty of the research. In the process of performing this task, I might identify further manuscripts that are obviously not going to be accepted for one reason or another as I consider the work to pass it on to the appropriate expert editor (or as I identify suitable reviewers when no obvious expert editor exists within our team).

This screening procedure saves the authors in question from having to wait any longer for an inevitable rejection. The expert editors eliminate unsuitable manuscripts in their preliminary review, and the remaining manuscripts are peer reviewed.

There is a third stage of the review process that falls between a preliminary screening procedure and a full peer review: If, during the peer review process, a reviewer with a sound publishing record and a good history of reviewing returns a manuscript with a damning report, the manuscript might be rejected on the basis of one report only. In these instances, however, the reviewers’ publishing and reviewing history will be checked carefully, because, despite what the numbers would tend to imply, the decision to reject is not one that is taken lightly.

Thus, for Physica Scripta, when manuscripts are rejected prior to a full peer review, this could be by the publishing editors, the publisher, the editor-in-chief and, finally, the expert subject editors. The procedure is important, and in our case we are exceedingly fortunate: our publishing editor has a science degree, and the journal’s publisher and I have several years of research experience and relevant (but complementary) PhDs. In addition to this, we have a strong, dedicated, and growing team of expert editors.

I consider myself to be in a privileged position for one further reason: the strategies adopted by many journals are determined by the need for profit, whereas our publisher is owned by one non-profit-making organization and the journal by another. This gives us unusual freedom to pursue scientific excellence.

Many aspects contribute to a journal’s editorial rejection procedure; however, the individuals involved shape the content of the journal and serve to build its reputation.

Suzy Lidström
The opinions expressed are my own and are not made on behalf of the Royal Swedish Academy of Sciences

My personal problem with publication is work which appears to fly in the face of all current thinking – in this case an apparent reversal of Parkinson’s symptoms and decline in more than one case with the use of food additives and supplements. No-one is willing to promote or publish an extremely promising line of work, and yet without publication there is no credibility for further work.

Andrew, have you considered reviewing the field to provide the definitive reference documents on factors known to influence the symptoms of Parkinson’s disease? You would need to write an extremely comprehensive review, but you could and should use the review to include an outline of your results as “an interesting finding in the process of being expanded upon”. I would not make your research the major topic if you are experiencing difficulty publishing your results, however, once you have managed to get one publication past the reviewers, you would then have a document to which you could refer in future submissions.

Remembering the impact of nicotine on Parkinson’s, I ran a quick google search, which led me to an article in “Expert Review in Clinical Pharmacology” entitled “Can nicotine be used medicinally in Parkinson’s disease?” Could this provide you with a solution? The role of nicotine has been discussed for many years in this context, so you might need to wait a while until you are able to bring your research right into the public eye with a striking title like this.

I am not familiar with medical journals, but it could be indicative of the absence of a suitable and respectable journal in which to expound upon controversial ideas before they attain acceptance that Cooray and Cooray published their hypothesis that ball lightning could be the result of optical hallucinations caused by epileptic seizures in an atmospheric science journal (“the open access atmospheric science journal”) rather than in a medical one.

Editors of medical journals might be interested to note that Physica Scripta deliberately set out to invite controversial (physics) articles many years ago with the intention of pushing these ideas forward for general consideration. Researchers submitting to this section of the journal take extreme care to review the existing state of knowledge as thoroughly as possible (of course, we do stipulate that they should do so) before moving on, either to present both sides of the debate, or to explain their hypothesis and clarify where the controversies lie. Many researchers have commented on how valuable they find this opportunity and said that they feel it to be a great strength of the journal.

Thank you for the insight. I had assumed that the new information was what was most important. It seems publication is the no.1 goal so maybe I’ll try that route. So frustrating when patients could be benefitting now but the system blocks the innovation.

This post is on editorial rejection, and the claim that you made and to which I replied was that your “personal problem with publication is work which appears to fly in the face of all current thinking.” This means that you need to overturn conventional thinking, or prove that your work deserves inclusion alongside mainstream ideas. I do not intend to be disrespectful in saying that the claims that you wish to make sound as though they might not be so very far from some of the wacky ideas that have been put forward in the past and which have resulted in some people failing to receive treatment for curable conditions. On this basis alone, I am not surprised to hear that your results have been rejected by (possibly overly conservative) reviewers. Nevertheless, this is the opposition that you are up against if publication is your aim. I am suggesting that you might be able to achieve publication for your results in small steps: the first being to get the idea out for consideration, the second to present scientific results that must be of extraordinarily high quality if you wish to be sufficiently convincing to publish.

Your work falls well outside my expertise, so I would not be able to make any judgement on it; however, it seems to me that if you have encountered difficulty publishing, then you need to establish that you are an extremely serious and credible scientist when presenting your results. They must be more convincing (larger studies, better statistics, stronger argumentation) than conventional findings to pass an audit. If you incorporated the results in a review, you would be mentioning them for completeness, and as something out of the ordinary. (However, “more than one case” does not give me the impression that you have the statistics required.) You would, probably and quite rightly, be able to claim that your extraordinary results would require the investment of considerable resources to test them out because, being controversial, they would need to be proved beyond all reasonable doubt. This probably transforms your problem into one of funding!

I will not be replying further because my interest is in the editorial issues under discussion.

If your work is methodologically sound, then you might consider a journal like PLoS ONE, where the criterion for acceptance is based solely on this, rather than on the perceived significance of the conclusions.

Editorial rejection is necessary, for the reasons cited in this post, and more. It saves the editors and reviewers time and energy, it helps target the submission to the most appropriate venue, it helps keep the authors out of (often extended) editorial limbo, and yes, it forces the editors to define their journal.

But it’s IMPERATIVE that practicing scientists, peers of the authors, make these decisions (see PMID: 19233837). If professional editors need to be involved (and I acknowledge that need for journals with high volume), they should be _closely_ supervised by practicing scientists who are the authors’ peers.

At my journal (GENETICS), all decisions to reject without review are made by at least two (often more) editors who are practicing scientists, the authors’ peers. That’s necessary to maintain the integrity of Science (capital “S”) (see PMID: 19233837).

Can you provide the empirical evidence that you have used to reach this conclusion? I would be very interested in seeing the studies proving a superior performance by one type of editor over the other.

Of course I can’t point to a controlled study of this (I’d appreciate it if anyone could point me to such a thing, and I’d welcome such a study), but by what metric would one judge “performance”? The IF is one way, but I think most everyone would admit that that’s a flawed metric.

I make my (admittedly subjective) case for peer-editing in PMID: 19233837, and from my point-of-reference it seems self-evident. From a practical point-of-view, my experience has been that the professional editors at e.g. Nature and Cell are inexperienced and unaccomplished (do a PubMed search on them sometime), and simply tally the votes of the reviewers (sometimes 4 or 5 of them!). I believe that the editors are the MAIN gatekeepers (the reviewers are only advisers to the editor), and as such MUST have the credentials to judge the veracity and significance of the manuscript and the respect of the authors.

There are important roles for professional editors, but I submit it’s not in deciding on which manuscripts to review or to accept or to reject.

I’m not against the use of working scientists as journal editors–all of the journals I currently manage have that arrangement. I just have a hard time accepting any such decision without any evidence to back it up. If I’m going to take a creationist to task for basing teaching decisions on feelings (“I don’t want to have descended from an ape!”) then I have to apply that same level of rigor to my own actions and decisions. There are clear examples where leading journals are run by professional full time editors (and this extends way beyond the absolute top tier journals like Nature and Cell). I want to know how those journals can be so successful and respected if, as you propose, the editors are incapable of making good decisions. Is “it just feels this way” good enough reason to determine the course of scientific research?

Really, I think we need good editors, regardless of their employment status. There are many journals run by working scientist editors that are of poor quality, just as, as you note, there are professional editors who don’t do a good job either. No one likes to be judged, and this is an emotionally charged area. But I’ve seen just as much resentment leveled at working scientist editors as I have at professional editors. Perhaps this gets somewhat magnified at the top journals, as the stakes are so high, but really, being rejected is upsetting no matter who is making the decision.

I’m not sure I’d be willing to draw such strict lines based solely on the single criterion of employment status. I know professional editors who have been running journals for decades, who read thousands of papers and attend enormous numbers of meetings every year. These experienced professionals are better-versed in the big picture of the fields they cover than most working scientists, simply because they can devote their full attention to covering the field. They don’t have to spend their time writing grants, serving on committees, teaching classes and mentoring students. Scientists very often become very focused on their area of research and build a deep but narrow level of specialized knowledge. The professional editor has the freedom to remain a generalist.

If, as you suggest, the ability to judge science should be based on a PubMed search of the editor’s publication record, then the vast majority of scientists currently working are disqualified. The majority of authors appear in the literature only once. Which means we should probably do away with our peer review system as it is being populated by incompetents. Or at least we should put the entire burden on those few whose CV’s are up to snuff.

Where is the line drawn then? If someone is a working scientist in industry, are they capable of judging the work done by someone in academia? Most PI’s in my experience move away from the bench fairly early in their careers. If you’re not conducting experiments yourself, are you to be considered a “practicing” scientist? Are you capable of judging someone else’s experiments if you haven’t picked up a pipette in 20 years?

The landscape of science continues to change and diversify. It seems somewhat parochial to try to ignore that. If only 15% of PhD students are going to head toward tenure track faculty positions, what of the other 85% of those in science? Do they have no value? Are they allowed to have a voice as well? If anything, we need to accommodate these new career paths and make them valued members of the community, otherwise science becomes an even less attractive career.

–I’m not against the use of working scientists as journal editors–all of the journals I currently manage have that arrangement. I just have a hard time accepting any such decision without any evidence to back it up.–

I’d love to have evidence on the question. Any idea of where I can get it? In the meantime, it seems to me we can have a philosophical (non-empirical) debate about it.

–If I’m going to take a creationist to task for basing teaching decisions on feelings (“I don’t want to have descended from an ape!”) then I have to apply that same level of rigor to my own actions and decisions.–

I don’t think the stakes are quite that high in this case…

–There are clear examples where leading journals are run by professional full time editors (and this extends way beyond the absolute top tier journals like Nature and Cell). I want to know how those journals can be so successful and respected if, as you propose, the editors are incapable of making good decisions.–

I admit to having a cloistered view of the scientific publishing world. My viewpoint is that of a basic scientist, and the professional (mostly scientifically inexperienced and unaccomplished) editors nearly rule that world.

–Is “it just feels this way” good enough reason to determine the course of scientific research?–

No. Let’s talk it through (which is all we can do in the absence of empirical evidence).

–Really, I think we need good editors, regardless of their employment status. There are many journals run by working scientist editors that are of poor quality, just as, as you note, there are professional editors who don’t do a good job either. No one likes to be judged, and this is an emotionally charged area. But I’ve seen just as much resentment leveled at working scientist editors as I have at professional editors. Perhaps this gets somewhat magnified at the top journals, as the stakes are so high–

The stakes being so high is exactly the problem!

“Today the coin of the realm is an article published in a few select journals with alluring covers and double-digit impact factors. Students vying for the best postdoctoral placements, postdocs contending for scarce faculty positions, and faculty seeking promotion and tenure all believe that they must publish articles in a few favored journals to reach their goal. And they are not wrong: the people choosing postdocs and making appointment and promotion decisions tend to put more weight on articles published in those journals (their claims to the contrary ring hollow). This situation reaches its apogee outside of the United States, where the impact factor (measured to four significant figures!) looms large over hiring and promotion decisions.”

–but really, being rejected is upsetting no matter who is making the decision.–

I don’t think so. Being rejected by a respected peer “who has tread the same path as the authors, who wrestles every day with the unknown, who knows from hard-won experience what it takes to tell a significant story” should be more palatable than being rejected by a 30-year-old just out of his/her postdoc with 1 or 2 peer-reviewed publications under his/her belt (really: do a PubMed search on some of them).

“Do professional editors possess Solomonic wisdom? Some, perhaps. But I have more confidence in an editor who has tread the same path as the authors, who wrestles every day with the unknown, who knows from hard-won experience what it takes to tell a significant story. Being a practicing scientist does not ensure wisdom, but experience—recent, relevant experience—breeds sound judgment. It seems obvious that the endorsement of a peer should carry more weight than the approval of an administrator, but the hegemony of the ordained journals results in the opposite. This situation seems surreal.”

–I’m not sure I’d be willing to draw such strict lines based solely on the single criterion of employment status. I know professional editors who have been running journals for decades, who read thousands of papers and attend enormous numbers of meetings every year. These experienced professionals are better-versed in the big picture of the fields they cover than most working scientists, simply because they can devote their full attention to covering the field.–

Ah, now that’s a valid point! I concede that the professional editors have the advantage that they’re able to focus ALL their energy on the journal. I admit that that’s one area where peer-editing is at a significant disadvantage.

–Scientists very often become very focused on their area of research and build a deep but narrow level of specialized knowledge. The professional editor has the freedom to remain a generalist.–

Another good point. Academics do tend to become……..well……….academic. I think this is the strongest argument for professional editors.

–If, as you suggest, the ability to judge science should be based on a PubMed search of the editor’s publication record, then the vast majority of scientists currently working are disqualified.–

But the vast majority of scientists currently working aren’t chosen as peer-editors of well-regarded journals. My journal (and, I’m sure, most other well-regarded peer-edited journals) chooses its peer-editors very carefully. In fact, we do disqualify the vast majority of scientists currently working.

–The majority of authors appear in the literature only once. Which means we should probably do away with our peer review system as it is being populated by incompetents.–

Huh? The peer-editors of my journal (and I’m sure of most other peer-edited journals) choose their reviewers carefully, based on the reviewer’s expertise and proven record of accomplishment and judgment. I mean, we don’t pick reviewers at random from lists of authors. And in fact that’s the point: peer-editors who are part of the practicing scientific community know which of their peers are able to judge their peers and which ones are able to provide insightful advice on submitted mss.

–Or at least we should put the entire burden on those few whose CV’s are up to snuff.–

Yes. And we do.

–Where is the line drawn then? If someone is a working scientist in industry, are they capable of judging the work done by someone in academia?–

If the Editor knows the person and his/her work and has confidence in his/her judgment, then of course YES! I have many colleagues who are practicing scientists in industry who have the wisdom and experience to judge the work of their peers in academia, and I’ve often used them as reviewers.

–Most PI’s in my experience move away from the bench fairly early in their careers. If you’re not conducting experiments yourself, are you to be considered a “practicing” scientist?–

If you’re responsible for advancing knowledge (as the PI is) and therefore “wrestle every day with the unknown” and “know from hard-won experience what it takes to tell a significant story”, then YES.

–Are you capable of judging someone else’s experiments if you haven’t picked up a pipette in 20 years?–

Of course, pipetting every day is not necessary for one to be able to judge a peer’s story. But one who is tasked with judging other peoples’ hard-won stories needs to have produced his/her own hard-won stories (plural), and be continuing to produce them (yes, usually by guiding surrogates, which is actually harder than doing it yourself……) to know how hard it is to tell a significant story and therefore to be qualified to judge others’ stories.

–The landscape of science continues to change and diversify. It seems somewhat parochial to try to ignore that.–

Indeed! I’m painfully aware of the situation, and I’m not ignoring it………..

–If only 15% of PhD students are going to head toward tenure track faculty positions, what of the other 85% of those in science? Do they have no value? Are they allowed to have a voice as well? If anything, we need to accommodate these new career paths and make them valued members of the community; otherwise science becomes an even less attractive career.–

………as I said in my previous post, there IS a role (a great need, even) for professional editors. It’s just not in making the decisions about which manuscripts to publish. (What I believe their role to be is the subject of a separate discussion.) But it seems to me the implication of your statement is that we should give professional editors this role to increase the number and variety of career options for PhDs. I believe the stakes—the integrity of Science (cap S)—are too high to allow us that expediency.

Mark,
It’s a useful and productive conversation to have. Anything we can do to improve the communication process is always welcome. But I think declarations that one type of editor over the other is “imperative…to maintain the integrity of science” are premature in the absence of concrete evidence.

I take your point that with experience comes wisdom. It’s less clear to me, though, that there’s a direct relationship between doing a certain amount of research and having the ability to accurately understand and judge the research of others. I know very senior researchers with impressive publication records whose judgment I wouldn’t trust at all. I know postdocs whose advice I’d stake my life on. Again, to me the decision stems from the individual and their abilities, not a blind litmus test of employment status or publication record.

If you don’t think there’s an enormous level of resentment directed at working scientist editors when they reject papers, then you may just be dealing with a more polite group of scientists than is found in other fields. The vituperative, personal attacks that my editors regularly receive are evidence to the contrary (and these are coming from the rare few brave enough to confront an editor, most just grumble quietly to their colleagues). Rejection, being told your work is not good enough, is not an easy thing to take, no matter the source of that decision.

Also, those 30-year-old editors are not making decisions in a vacuum. Most, if not all, prominent journals have large staffs of editors. Each paper is seen by more than one set of eyes, and inexperienced editors have their work overseen by more experienced colleagues. The direct communication with the author may come from that one inexperienced editor, but they’re not the only person involved in the final decision.

As a (former) biologist, I tend to take the evolutionary view on this–we’re better off with a varied ecosystem. I think each type of editor has strengths and weaknesses. It’s up to the community and the individual author to determine which system works better for their needs.

One question–one of the arguments in the working scientist/professional editor controversy is that many of those railing against professional editors are older, well-established scientists. The suggestion is that this is an attempt by those in power to further entrench that power base. If the older established scientists set themselves up as the only ones competent to judge science (and hence control all the subsequent funding and career rewards that stem from publication), then they remain in power despite a greatly shifting landscape. Others have made private suggestions to me that the big problem here is the large percentage of editors who are female, and that the gender biases in the culture of science are at play here–male scientists don’t want to be judged by females. I think there’s some validity to both of these arguments, though not in the majority of cases, and that there are legitimate arguments beyond these prejudices. Your thoughts on whether either of these things comes into play at all would be appreciated.

• I know very senior researchers with impressive publication records whose judgment I wouldn’t trust at all. I know postdocs whose advice I’d stake my life on.

Indeed! “Being a practicing scientist does not ensure wisdom, but experience—recent, relevant experience—breeds sound judgment.”

I will point out that the highest-impact journals (I’ll be frank: it’s Nature and Cell and their spawn) are edited mostly by postdocs………..I hope they’re all the ones you’d stake your life on, but somehow I doubt it.

• Again, to me the decision stems from the individual and their abilities, not a blind litmus test of employment status or publication record.

Indeed. As I said, the peer-edited journals I am familiar with choose their editors very carefully. And again “……. experience—recent, relevant experience—breeds sound judgment.”

• If you don’t think there’s an enormous level of resentment directed at working scientist editors when they reject papers, then you may just be dealing with a more polite group of scientists than is found in other fields.

Perhaps. It has been my experience that the people who submit to GENETICS are a fairly collegial group (though that’s not to say I’ve never encountered a vituperative attack…….).

• The vituperative, personal attacks that my editors regularly receive are evidence to the contrary (and these are coming from the rare few brave enough to confront an editor, most just grumble quietly to their colleagues). Rejection, being told your work is not good enough, is not an easy thing to take, no matter the source of that decision.

I suspect it’s easier for many authors to defame someone they don’t see as a peer. Of course that doesn’t justify it. And I’m sure authors grumble quietly to their colleagues about peer editors (academics love to grumble……….)

• Also, those 30-year-old editors are not making decisions in a vacuum.

At Nature they are! From their website: “Like the other Nature titles, Nature has no external editorial board [It’s breathtaking: they’re actually proud of having NO editorial input from practicing scientists!]. Instead, all editorial decisions are made by a team of full-time professional editors.”

• Each paper is seen by more than one set of eyes, and inexperienced editors have their work overseen by more experienced colleagues.

As at my peer-edited journal. And ALL the editors of my journal who collaborate on these important decisions are people who “tread the same path as the authors, who wrestle every day with the unknown, who know from hard-won experience what it takes to tell a significant story.”

• One question–one of the arguments against the working scientist/professional editor controversy is that many of those railing against professional editors are older, well-established scientists. The suggestion is that this is seen as an attempt by those in power to further entrench that power base. If the older established scientists set themselves up as the only ones competent to judge science (and hence control all the subsequent funding and career rewards that stem from publication) then they remain in power despite a greatly shifting landscape.

Ah, the conspiracy theory………Always a popular one! And why not? It just seems like it MUST be true.

In fact, it’s EXACTLY the opposite, for at least three reasons. First, whatever “power” I have as an editor is granted to me by my community. I have to earn it by making fair decisions on my colleagues’ work. That’s especially true in today’s competitive publishing environment: if the editors of GENETICS don’t do a good and fair job of choosing mss. for publication, authors will submit their best work elsewhere.

Second, it doesn’t matter to me what journals I publish in (well…with the exception of P£o$ On€). Senior editors (like me) don’t need (and probably shouldn’t get) any more “career rewards”. The career rewards should go to the next generation; if they don’t, Science (cap S) is in trouble. The profession has treated me (and many of my senior colleagues) well, and I’d like to see it treat the next generation of scientists as well. But, much more than for me and others of my generation, they have to get their papers past inexperienced, unaccomplished editors. I don’t think that’s treating them (or Science) very well.

Which brings me to my third and most important reason: young scientists today are too much at the mercy of inexperienced, unaccomplished editors. And let’s be frank again: in my realm it’s basically three journals that have most of the influence (everyone knows which ones I’m talking about). “Today the coin of the realm is an article published in a few select journals ……. Students vying for the best postdoctoral placements, postdocs contending for scarce faculty positions, and faculty seeking promotion and tenure all believe that they must publish articles in a few favored journals to reach their goal. And they are not wrong: the people choosing postdocs and making appointment and promotion decisions tend to put more weight on articles published in those journals. This situation reaches its apogee outside of the United States, where the impact factor (measured to four significant figures!) looms large over hiring and promotion decisions.”

I find it absurd that the professional editors at a few journals have such influence over who gets hired, promoted, and funded. It’s not the professional editors’ fault—they’re just trying to do their job—but rather the fault of the senior scientists who sit on hiring, promotion, and grant review committees. But that’s a separate discussion.

• Others have made private suggestions to me that the big problem here is the large percentage of editors who are female, and that the gender biases in the culture of science are at play here–male scientists don’t want to be judged by females.

I’ve never heard that one before, and, at the risk of sounding naïve, I’ll say that I’d be surprised if anyone in my scientific community voiced (or even thought) that concern. But I can imagine there might be some truth to it in other scientific (e.g., medical?) communities. I really don’t know.

OK, we’ve both stated our positions. I’ve appreciated this opportunity to present my views on this; it’s helped me clarify my position. But I think we’re at an impasse. Over-and-out for now.

This is yet one more difference between how editorial review is conducted of journal articles versus books. Very occasionally, an acquiring editor at a scholarly publishing house will have an advanced degree in the field in which he or she is acquiring, but for the most part the people who fill this role are not “experts” in the usual sense but just well-educated generalists. This is also true for the faculty who serve on the editorial boards of university presses, who function in that role more as generalists than as specialists since very few of the books accepted are ones right in their own core areas of expertise. Would anyone care to make the claim that, therefore, the books published by scholarly publishers are not maintaining the “integrity” of their disciplines? I think not.
