Editor’s Note: Today’s post is by Christos Petrou, founder and Chief Analyst at Scholarly Intelligence. Christos is a former analyst of the Web of Science Group at Clarivate and the Open Access portfolio at Springer Nature. A geneticist by training, he previously worked in agriculture and as a consultant for Kearney, and he holds an MBA from INSEAD.
The analysis of operational data is complex, dull, and unrewarding. It is also necessary. Three case studies of major journals and portfolios explain why.
Hindawi could have averted a catastrophe by closely monitoring its operational performance. PLOS ONE could have maintained higher standards of author service by simulating the effects of submission surges on its pipeline. And Frontiers in Oncology can keep track of the inevitable trade-off between editorial rigor and journal speed.
Hindawi: A collapse foretold
Hindawi’s collapse was one of the most consequential events in the recent history of scholarly publishing. Several of its journals were taken over by papermills that targeted its special issues program, leading to mass retractions. The aftermath was devastating: the reputation of the industry was stained, trust in research was compromised, and several of Hindawi’s journals were delisted from the Web of Science (WoS). These events contributed to the decimation of the stock value of Wiley (which had purchased Hindawi in 2021) and to the decisions to terminate the Hindawi brand and put its special issues program on hold.
Were there any signs that could have helped Wiley avoid the catastrophe? It appears so. The turnaround time gap between special issue and regular papers should have alarmed editors and publishers alike. Until 2019, special issue papers at Hindawi cleared peer review about a week faster than regular papers, well within expectations for invited content, but the gap widened markedly from 2020 onward. By 2022, special issue papers were accepted 56 days faster than regular papers (67 days versus 123 days).
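As a rough illustration of the kind of monitoring that could have surfaced this early, here is a minimal sketch of the gap calculation in Python. The dataset, the column names, the article-type labels, and the 30-day alert threshold are my own assumptions for the example, not a description of Hindawi’s or Wiley’s systems.

```python
import pandas as pd

# Hypothetical per-article export; file and column names are assumptions for illustration.
# article_type is assumed to take the values "regular" and "special_issue".
papers = pd.read_csv("articles.csv", parse_dates=["submitted", "accepted"])
papers["days_to_accept"] = (papers["accepted"] - papers["submitted"]).dt.days
papers["quarter"] = papers["accepted"].dt.to_period("Q")

# Median time to acceptance per quarter, split by article type.
medians = papers.pivot_table(
    index="quarter", columns="article_type",
    values="days_to_accept", aggfunc="median"
)
medians["gap_days"] = medians["regular"] - medians["special_issue"]

# Flag quarters where special issue papers clear peer review suspiciously faster.
print(medians[medians["gap_days"] > 30])
```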

The gap is evident in a selection of large Hindawi titles. For example, in 2022, special issue papers were accepted 73 days faster than regular papers at BioMed Research International and 90 days faster at Wireless Communications and Mobile Computing. Both titles were initially delisted from WoS, and while BioMed Research International has been re-indexed, Wireless Communications and Mobile Computing has ceased operations.

Wiley and Hindawi claimed to have been caught off guard by papermills. That might be true, but the signs were there for everyone to see as early as Q2 2020, when special issue papers were already being accepted 38 days faster than regular papers. That is two full years before concerns were raised internally and an investigation commenced.
Frontiers in Oncology: Learning from the mistakes of others
As news of Hindawi’s retractions spread across the industry in late 2022, other publishers and journals took notice. Frontiers is likely to have been one of them. Take, for example, its journal Frontiers in Oncology, one of Frontiers’ largest titles, which published more than 7,000 papers in 2022. Following the news of the retractions at Hindawi, the journal slowed down substantially. It accepted papers in 77 days in July 2022, while Wiley’s investigation was ongoing, and it slowed to 103 days a year later in July 2023. Since then, the journal has slowed even further, inching toward 120 days in late 2024.

Frontiers in Oncology operates in a subject area that has been targeted by papermills and drives a significant portion of retractions. Moreover, the journal gets most of its content from China (67% of its papers in 2022, per OpenAlex), the country that accounts for the lion’s share of retractions in oncology. I estimate that about 2% of papers published in oncology by Chinese authors end up retracted. Given the developments at Hindawi and the high-risk profile of Frontiers in Oncology, it is possible that Frontiers added more layers of editorial scrutiny, which would explain the journal’s slowdown. It is healthy for a publisher to seek efficiency yet prioritize integrity when necessary.
In addition to slowing down, Frontiers in Oncology reduced its overall paper output by 52% from 2022 to 2024 (from around 7,650 to 3,700 papers). Its Chinese content shrank by 57% and its non-Chinese content by 44%.
PLOS ONE: Trading one queue for another
In 2020, PLOS ONE accepted papers in 157 days and published them 26 days later. This is relatively slow, but within the range of expectations given the journal’s profile (subject area, size, and selectivity). Then the journal took a turn for the worse. By the end of 2021, it was accepting papers in 181 days. It slowed to more than 200 days by mid-2022 and peaked at 221 days in January 2023. It must have taken a herculean effort to accelerate back to about 180 days by the end of 2023 and maintain that performance until now.
But the journal’s woes did not end there. As the turnaround time to acceptance was brought under control, the turnaround time of production (acceptance to publication) went up. It exceeded 30 days for most of 2022, crossed 50 days by the end of 2023, and jumped to over 60 days in early 2025.

Overall, the journal has been publishing papers in about 235 days in the last three years, as opposed to 183 days for most of 2019 and 2020. A slowdown of 52 days might not be a big deal for one paper, but it translates to nearly 3,000 years of additional wait-time annually for a journal the size of PLOS ONE. That’s three millennia of papers waiting in digital queues for (likely) overworked staff to pick them up. It does not make for happy authors or a happy editorial and publishing team.
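For readers who want to sanity-check that figure, the back-of-the-envelope arithmetic looks roughly like this; the annual output number is my assumption for illustration, not a reported figure.

```python
# Rough check of the cumulative wait-time claim; ~20,000 papers a year is an assumption.
papers_per_year = 20_000
extra_days_per_paper = 235 - 183  # 52 days slower than the 2019-2020 baseline

extra_years = papers_per_year * extra_days_per_paper / 365
print(f"~{extra_years:,.0f} additional years of collective waiting per year")  # ~2,849
```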
So, what might have transpired at PLOS ONE? It is not possible to know without access to in-house data, but my reading of the numbers is that the journal eventually cracked under the pressure of increased submissions in the early phase of the pandemic. This explains the slowdown in peer review. Once the volume of submissions eased and the team brought the peer review process under control, the queue and the slowdown moved to production, which has yet to recover, like cars escaping one broken traffic light only to get stuck at the next one.
The analysis of queues at peer review and production adds nuance. The queue at peer review for PLOS ONE grew by nearly 40% from early 2020 to late 2021 (from ~6,500 to ~9,000 papers) as a result of the journal’s slowdown and the COVID-driven growth of submissions. The queue at production has grown dramatically, more than doubling to nearly 3,000 papers.
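A back-of-the-envelope way to connect these queue sizes to the turnaround times is Little’s law (average queue length ≈ arrival rate × average time in the stage). The sketch below treats the accepted-paper pipeline in isolation and assumes an annual throughput figure of my own choosing, so it is a consistency check rather than a reconstruction of PLOS data.

```python
# Little's law: average queue length ≈ arrival rate × average time in the stage.
# The throughput figure is an illustrative assumption, not a PLOS number.
papers_per_year = 18_000
arrival_rate = papers_per_year / 365           # papers entering each stage per day

peer_review_queue = arrival_rate * 181         # ~181 days to acceptance in late 2021
production_queue = arrival_rate * 60           # ~60 days in production in early 2025

print(f"Estimated peer review queue: {peer_review_queue:,.0f} papers")  # ~8,900
print(f"Estimated production queue:  {production_queue:,.0f} papers")   # ~3,000
```

Both estimates land close to the reported queue sizes, which is reassuring for the interpretation, though it is no substitute for in-house data.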

Lessons and a rhetorical question
Operational data are often treated as logistical trivia. The case of Hindawi shows that they can be existential. Turnaround time performance, queues, and the editorial team’s output should be assessed side by side regularly, to identify issues early and rectify them before it is too late.
PLOS ONE offers an example of a journal that is likely operating at capacity. Such journals can manage the workload effectively under normal circumstances, but they struggle with collective absences (it is no coincidence that PLOS ONE’s turnaround times are consistently slower in January than in December, both at peer review and in production) or unprecedented influxes of papers. Business as usual can lead to complacency; instead, journals need to be ready to deal with unexpected events such as submission surges or high staff turnover. Editorial teams do not have to be over-expanded, but contingency plans must be in place. Capacity planning beats crisis firefighting.
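To make the capacity point concrete, here is a toy backlog simulation: a team with fixed weekly capacity absorbs a short submission surge and then spends months digging itself out. Every number is an illustrative assumption, not a figure from PLOS or any other publisher.

```python
# Toy pipeline: weekly submissions versus fixed weekly handling capacity.
# All numbers are illustrative assumptions.
baseline_submissions = 380                          # papers arriving per week
weekly_capacity = 400                               # papers the team can process per week
surge_weeks = {10: 520, 11: 520, 12: 520, 13: 520}  # a four-week, ~35% submission surge

backlog = 0
for week in range(1, 53):
    arrivals = surge_weeks.get(week, baseline_submissions)
    backlog = max(0, backlog + arrivals - weekly_capacity)
    if week in (9, 13, 26, 37):
        print(f"Week {week:2d}: backlog of {backlog} papers")
```

Under these assumed numbers, a four-week surge leaves a backlog that takes roughly 24 weeks to clear, which is exactly the kind of lag that a contingency plan is meant to shorten.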
Lastly, the case of Frontiers in Oncology does not offer a lesson so much as pose a question: what is the optimal balance between operational efficiency and the safeguarding of integrity? I do not have an answer, although I would recommend erring on the side of integrity, as the journal appears to have done. However, I can confidently state that papers spend too long unattended in inboxes. Submission checks should take minutes, not days; reviewing should take hours, not weeks. Given the increased importance of author experience, it is vital for publishers to stay on top of their processes and deliver better service to their authors.
Discussion
25 Thoughts on "Guest Post — Reading the Leaves of Publishing Speed: The Cases of Hindawi, Frontiers, and PLOS"
Super interesting. I love these kinds of data- and chart-heavy contributions. Thank you.
Glad to hear that, Matt. Hopefully more to come in the next weeks!
Peer reviewers can spend months with a paper sitting in their inbox. Reviewing is simply not a top priority; it competes with teaching and research. Maybe if they received payment, they could do a better and faster job.
I fear that could break more things than it fixes. I think there are untapped pools of reviewers that could contribute to more efficient publishing. I am planning to write about this soon.
Looking forward to your input!
Perhaps a good subthread in that could be how the shifting financial tides of academia might affect the “untapped” and tapped pools of reviewers. Where does a peer review structure that has remained relatively unchanged for the last 70+ years fit into a model where the primary producers, reviewers, and consumers of the resource are asked to reprioritize their activities as the shape of the academy seems to be changing quickly and dramatically?
I would be out of my depth trying to discuss this. What I am confident about though is that (a) peer review has become slower (in a way, it is at its slowest point since at least 2011), and (b) part of the reason for the slowdown is the growth of papers and under-utilisation of reviewers from developing countries, especially China.
Can you say a little more about where this data comes from and how you got it?
The source for Frontiers and PLOS is PubMed. The source for Hindawi is an old JSON data dump that seems to have disappeared from the internet, alongside Hindawi.
MDPI?
Agree with this question. Why is MDPI missing in this commentary?
Turnaround times are not a useful measure toward understanding variance at MDPI. As Paolo Crosetto noted many years ago (https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/), MDPI has done a remarkable job of absolutely standardizing its journal turnaround times to a point where there is almost no variance between journals. Because of the nature of this system, the measurements used in this article would not show the same sorts of effects seen at the publishers examined.
It would not make too interesting a case study: year in, year out, they are at ~45 days from submission to publication.
How does this compare to what one might call “the industry standard” in non-predatory journals? And what about rejection rates? We really do need some solid measures that help us identify those publishers that are nothing more than money-making ventures and that are putting the entire ecosystem of scholarly and scientific knowledge production at risk. This is serious.
I’m not sure these are the right numbers to look at in terms of determining the legitimacy of a publisher. We want things to be reviewed thoroughly and rigorously, but also as quickly as possible so as not to impede the flow of knowledge (and in some cases, to save lives). We shouldn’t punish a publisher that finds efficiencies and does something like treating peer reviewers really well so people are more willing to agree to review and do their reviews in a timely manner. What matters here are the internal comparisons. Why were some article types getting much faster acceptances than others? That’s a different analysis than saying that a journal that takes a year to accept an article is automatically better than a journal that takes 6 months.
I agree that the legacy publishers, and society publishers, could and did get lazy and inefficient in their use of peer reviewers, but we should not use that as an alibi for overlooking the way in which questionable contemporary publishers are using peer review as a fig leaf for what is otherwise a low-quality, and in some cases fraudulent, money-making exercise. It is not just the expected speed of turnaround; it is also the fact that the publisher has a vested interest in accepting the manuscript submission almost regardless of peer review because of the APCs that come its way. This is a distinct conflict of interest. I tested this out against a couple of OA publishers. In one case the publisher expected me to provide a thorough review with a turnaround of four days (and the other reviewer cited was clearly pro forma), and in the other the publisher proceeded with publication despite some real flaws in the manuscript that I pointed out and that were not acknowledged or corrected in the final piece. Does this matter? Well, who cares if questionable publishers are making a mint and turning out low-quality stuff, if discerning readers can see it and make their own judgement? I am in an area where policy decisions are often informed by systematic and widespread reviews of the literature, where experts try to come to some conclusion about the scientific consensus with some kind of estimate of the median or overall result. That is becoming increasingly hard, as such reviews can be strongly influenced by low-quality and even fraudulent publications that threaten to flood the ecosystem and that are difficult to spot without reading each item in detail, which is logistically taxing and not usual practice. You would be aware of how the likes of Kennedy in the US can find support for almost any position by trawling the literature. This is a problem and we just haven’t taken it seriously enough.
Yes Peter, understood, and all of this has been written about here and elsewhere since the early 2000s and BioMed Central’s (and subsequently PLOS’s) launch of the APC model for open access. There is an inherent conflict of interest in the model, and as long predicted, it has had many unintended (but not unanticipated) consequences. At the time, those pointing out these problems were largely shouted down (and The Scholarly Kitchen was labeled an “enemy” of open access because we dared point out the flaws in the dogma driving the movement).
I’m not trying to belittle the massive research integrity problems the model has caused. What I’m saying is that comparing turnaround times is not an effective mechanism for fully understanding a publisher’s practices. Yes, there can be obvious red flags (a majority of papers accepted immediately or within an unlikely amount of time for review), but can you really say that a publisher that takes 8 weeks to run a peer review process is substantively different than one that takes 10 weeks? Or that the peer review process at a history journal should take the same amount of time as the peer review process at a surgery journal?
This is useful. You should be applauded for making this effort with real, time-series data on crucial aspects of scholarly publishing.
But I am not convinced about the interpretation/conclusions.
First, the role of the editor. In my decade as a senior editor (pre-OA and predatory journals) at a top social science journal, I desk-rejected 50% of submissions on the grounds that they were either out of scope or of poor quality. I never once got pushback from aggrieved authors on this score, and I felt that peer reviewers got a better deal because they were not dealing with marginal submissions.
Second, the role of the publisher. There is surely a misalignment of financial incentives here, at least where APCs are an important feature. If the publisher gets tougher on peer review and editorial desk review, that means fewer publications and less income from APCs, which are a primary driver for many of these publishers. I have stopped reviewing for a number of open access journals because it is clear in many instances that the peer review is frankly pro forma and that necessary manuscript changes are not made. After all, why would you as a publisher have a system that ended up requiring changes or even rejecting submissions, when the author might go elsewhere or feel put out at paying money and not being accepted, and your APC take would decline?
Third, the role of the peer reviewer. I continue to review submissions for journals because I get to see quality manuscripts and have the satisfaction of seeing improvements made and improved, quality manuscripts published. If all you are doing as a peer reviewer is feeding a profit-hungry machine that is processing far too many poor-quality submissions that are going to end up polluting the scientific and scholarly publishing ecosystem, why do it?
Hello, Peter! Thank you for sharing your experience and insights. I am in full agreement on the need for rigorous peer review and appreciate the added context.
I fear that we are side-stepping the key issue here, one we seem unwilling to call out: namely, that publisher references to “peer review” in many contexts are increasingly pro forma and are in fact a fig leaf for production lines that are calibrated for volume and profit maximisation at the expense of the scientific and scholarly mission. Why is this an issue? Because it is polluting the scholarly and scientific output system with low-quality and often fraudulent and questionable publications. Frankly, I think we are being far too polite. It is about time that scientific and scholarly organisations called out those publishers where “peer review” is not an effective quality control and is instead a fig leaf for otherwise entirely commercial enterprises with limited commitment to the scientific and scholarly mission.
It’s interesting that, as an industry, we do not agree on how we define basic terms like submission dates, acceptance dates, or even publication dates. Is a paper “submitted” when the author first clicks submit, or after it passes through an initial quality screen? Is it accepted when the editor first signals their intent to accept, or after all final files have been received? I have seen these terms applied in a variety of ways. And then you have some journals publishing the author’s accepted version, some adopting an ahead-of-print or pre-issue publication workflow, and so forth.
My favorite is “time to first decision” which means a wildly different number of things calculated in wildly different ways.
Although in the case of the analysis above, none of this matters as long as the individual company is internally consistent.
It is not pretty, but if you take a least-common-denominator approach and weed out the offending journals and portfolios, you can do analysis across the industry for submission, acceptance, and online-first dates. As David notes, more complex timestamps are a free-for-all.
Given the current decline in the quality of publications, discussions like the one above can be misleading. There are no good quantitative measures of journal quality, but a better indicator of a publisher’s behavior would be to analyse journals’ rejection rates after the review process (including possible resubmissions of rejected papers).
This would be a sign of good reviewer selection and of the reviewers’ hard work.
Yes, this could be a useful measure of the quality (or otherwise) of journals that are on the margins of what is clearly “predatory”. For example, Frontiers in its annual report lists citation levels for all of its journals and shows pre-peer-review rejection rates (which is a pleasant surprise to see). When I was a senior editor at a top social science journal, I desk-rejected 50% of submissions on grounds of poor quality or being out of scope, but after peer review another third were rejected. So that’s an overall acceptance rate of about 20%. It could be a lot different now that APCs mean journals are more reliant on income from accepted manuscripts than they were in the past.