Journals play a strong role in quality signaling for the papers they publish, as Phil Davis has discussed before. Quality signaling may seem self-explanatory — you’d think it has to do only with how good a paper is — yet quality is highly nuanced, as Phil discussed in a later comment on the same post:
- Quality is an abstract construct. It doesn’t exist except in our heads. We often know it, however, when we see it.
- Quality is multi-dimensional. It is made up of many different components, each of which may say something different about what we’re trying to evaluate.
- Quality ultimately reflects a private evaluation.
- Yet, our private evaluation is influenced, often to a great degree, by the evaluations of others. This is because we are often unsure of our own evaluations and seek out the evaluations of others, especially those of experts. We are also social creatures who prefer consensus.
- Lastly, quality is often associative. What we think of the group as a whole affects what we think of its individual members. This is why journal branding is so important.
An oversimplification of the peer-review process portrays administrative staff distributing a batch of scientific papers to a set of volunteer expert reviewers, who evaluate, grade, and return the papers to be accepted or rejected based upon their combined input and a dash of final editorial judgment.
This version misses an important step that precedes external peer-review in nearly every journal — initial editorial review for quality, relevance, and scientific interest — associative quality, in many ways. Usually performed by in-house editors, this step can lead to the rejection of a high number of submissions. And while some open access and mega-journal advocates are disdainful of it, editorial review is becoming more important as the number of submissions — and the number of misguided submissions — climbs, and as editors become more attuned to the needs of authors, who usually prefer a quick decision to weeks or months of delay before rejection.
How widely is it used? It’s nearly universal. I only know of two traditional journals that don’t routinely practice it, and both of those are itching to implement it.
What percentage of papers does it remove on the front end? Recently, I used a combination of desk research techniques — looking stuff up online and asking people via email — to collect a sample of editorial rejection or “desk reject” rates. This admittedly incomplete survey (18 journals) found that between 7% and 83% of submissions are rejected without external peer-review. Here’s the anonymous list of rates of rejection without external peer-review:
- 24% (yes, two different journals had 24% rates; this is not a mistake)
- 10% (yes, two different journals)
There was no clear association between the rate of editorial rejection and obvious things like field, size, impact factor, or circulation. The only little trend I saw was that medical journals seem to have generally higher initial rejection rates than journals in other fields. But overall, editorial review practices seem to have developed through a mix of editorial philosophy, pragmatism, culture, and experience. Most editorial review processes involve at least two editors agreeing a paper should be bounced out, but this varies a bit as well.
Early review and rejection helps editors cope with the crush of submissions all strong journals are dealing with, and satisfies a felt obligation to be fair and fast with authors who have missed the mark in some way, whether wrong journal, poor study, or both. Peer reviewers are also generally more stretched, so journal editors are sensitive to this volunteer group; editorial review is a way of sparing them the obvious mismatches.
Editorial review and rejection seems to be increasingly important for traditional journals. One source mentioned that some journals are hiring staff solely to provide an initial filter for obvious misfits and dreck. It is largely absent, however, from open access mega-journals, where associative quality matters less than a broad filter. From a business standpoint, publishing more papers is also better business, further damping any impulse to narrow the candidate set from the start.
Each paper is also more demanding to publish: disclosures, supplementary data, complex data sets, online summaries, editorials, and large image sets all require substantial staff and editorial time. This likely also feeds the trend for traditional journals to winnow down the field early. Editorial review allows editors and staff to focus on the papers that have a chance, and even more on the papers that will ultimately be published. It reduces distractions.
We also have a robust journals ecosystem, which reassures editors that mismatched papers can find a home elsewhere and makes passing on a decent batch of papers far easier.
Brands have also ascended in importance over the past few decades, and brands bring all sorts of identifying traits along with them. Knowing these traits and living with them, editors are able to more quickly identify misfits, making editorial review both more efficient and more accurate.
There’s a flip-side to editorial review: marketing. If you don’t have a clear idea of your journal’s identity or place, how can you market it? This has significant implications for authors, readers, news and social media coverage, and ultimately impact factor and sustainability. If you can’t tell whether a paper is “right for you,” you don’t know what your journal is about, which means you can’t tell the world what it’s about, or don’t know whom in the world to tell.
Editorial review is growing in importance for domain-specific journals and is a key to their success. It underscores the importance of professional editors to brands and communities, highlights the value of associative quality, and reflects a journal’s ability to market, promote, and promulgate itself. Its absence within the mega-journals may help explain the diffuse focus of these repository-like publications and their sporadic media coverage and brand associations. After all, quality signaling isn’t just about how good a paper is. It’s about how a community will process it, which means knowing which community you’re working with. Editorial review is a key but often forgotten component of quality, efficiency, and effectiveness.