Every research article submitted to a journal should come with a digital certificate validating that the authors’ institution(s) have completed a series of checks to ensure research integrity.

Journals should not hold primary responsibility for detecting, correcting, and punishing authors for inappropriate behavior.

In recent months, several “scandals” have shaken trust and confidence in journals. Thousands of Hindawi papers were retracted because they most likely came from papermills. Wiley announced that its new papermill detection service flagged 10–13% of submissions across 270 journals for further review.

Frontiers published ridiculous AI-generated figures. And a handful of journals from various publishers were caught publishing papers with obvious LLM chatbot text included.

These incidents led to mainstream press articles questioning the value of journals and of SCIENCE. I’ve yet to see one that questions the value of institutions or the funding of science.

There are consequences for journals that don’t seem to care about research integrity, no matter what their corporate mission statements claim. There are mechanisms for dealing with those journals — they lose indexing in important compendia, they lose their Impact Factor, they lose out reputationally, and submissions drop dramatically, at least for a while. They end up on restricted lists at institutions and national funders.


Journals have positioned themselves as trusted sources, offering peer review and some level of validation of scholarship.

Journals have been increasingly expected to explain their value, and for the vast majority of serious journals, rigorous peer review, along with community curation, has been the answer. However, forensic analysis of data sets and gel stains was never an expected task of traditional peer review. And yet, today, journal staff may be performing any number of checks, including plagiarism scans, figure analysis, identity checks on claimed authors and reviewers, and at least following the supplied data links to see if data was deposited where required. Now we will have to add in papermill checks — are all the named authors real people, are they at the institutions listed on the paper, do those institutions exist, are all of the authors working in the same field, have they collaborated before, are there “tortured phrases” littered throughout the paper?
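To make concrete how much machinery even partial automation of these checks implies, here is a minimal sketch in Python. It is purely illustrative: the check functions, the example “tortured phrases,” and the field-mismatch heuristic are assumptions of this sketch, not any journal’s or vendor’s actual pipeline, and every flag it raises still requires human review.

```python
# A minimal, hypothetical sketch of orchestrating a few automated integrity
# checks on one submission. All names, thresholds, and phrases here are
# illustrative assumptions; no real screening tool or vendor API is implied.
from dataclasses import dataclass, field

@dataclass
class Author:
    name: str
    institution: str
    field_of_study: str

@dataclass
class Submission:
    title: str
    text: str
    authors: list[Author] = field(default_factory=list)

def check_tortured_phrases(sub: Submission) -> list[str]:
    # Real services use curated phrase lists; this toy version knows two
    # famous examples ("artificial intelligence" and "random forest" as
    # rewritten by paraphrasing software).
    known = ["counterfeit consciousness", "irregular backwoods"]
    return [f"tortured phrase found: '{p}'" for p in known if p in sub.text.lower()]

def check_author_fields(sub: Submission) -> list[str]:
    # Papermill heuristic: co-authors drawn from wildly unrelated fields.
    fields = {a.field_of_study for a in sub.authors}
    return ["authors span several unrelated fields"] if len(fields) > 2 else []

CHECKS = [check_tortured_phrases, check_author_fields]

def screen(sub: Submission) -> list[str]:
    """Run every automated check and collect flags for human review."""
    return [flag for check in CHECKS for flag in check(sub)]

if __name__ == "__main__":
    sub = Submission(
        title="An Example Manuscript",
        text="We trained a counterfeit consciousness model on the data...",
        authors=[
            Author("A. One", "Univ X", "oncology"),
            Author("B. Two", "Univ Y", "materials science"),
            Author("C. Three", "Univ Z", "linguistics"),
        ],
    )
    for flag in screen(sub):
        print("FLAG:", flag)  # every flag still needs a person to adjudicate
```

Even a toy like this shows where the cost lands: the code can only produce flags; people have to adjudicate them.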

And AI detection tools, none of which has yet risen to the top as accurate, scalable, or integrated into any manuscript tracking system, are a new frontier awaiting exploration by journal offices.

For every score or report on every automated integrity check, a person needs to review the result and decide what to do: reject the paper outright based on the score, or go back to the author for an explanation and, if it is acceptable, work with them to fix the problem?

For any journal (or maybe suite of journals at a society), dealing with ethics issues requires significant staff time. Any one of these issues could be an honest error by an inexperienced author. In those cases, a journal may want to work with them to get the issue fixed and continue the paper down its peer review path. Other times, it’s bad behavior that needs to be addressed.

If a paper that fails the integrity checks is rejected, there is a good chance it will show up at another journal, wasting the time of yet another journal staffer.

These additional checks are coming at a time when the review of papers submitted to journals is expected to be fast and inexpensive and yet none of the processes above are either fast or inexpensive. And the number of papers submitted to journals is mostly increasing — though that is not the case in every discipline.

Conducting integrity checks on papers is also a Sisyphean task with little reward. The vast, vast majority of papers submitted to the vast majority of journals are written by ethical and responsible researchers. Despite the sensational headlines decrying almost 10,000 retractions in 2023, about 8,000 of them were from Hindawi journals. Context is everything.

If we remove the journals or publishers that are not actually conducting peer review (or do conduct peer review but then ignore the reviewer comments and accept the papers anyway), the number of papers with serious ethical issues is low. And yet, every publishing conference this year, last year, and next year will spend significant amounts of time addressing research integrity issues. An equal amount of time will be spent attending demos of new tools built to detect research integrity issues.

Despite the relatively low number of incidents, not checking every accepted paper puts a journal at risk of missing something and winding up on the front pages of Retraction Watch or STAT News. That is not where any journal wants to be, and it opens you up to a firestorm of criticism — your peer review stinks, you don’t add any value, you are littering the scientific literature with garbage, you are taking too long to retract or correct, etc.

The bottom line is that journals, with their volunteer editors and reviewers and non-subject-matter-expert staff, are not equipped to police the world’s scientific enterprise.

Some have called for journals to simply retract or publish an expression of concern if questions are raised about published papers and force the institutions to conduct an investigation.

Every time a managing editor has to send a paper to an institution for investigation, a small piece of their soul goes dark. Maybe if all your papers come from US R1 research institutions, you will at least be able to identify to whom the email should be sent. I have personally spent hours hitting translate in the web browser on institution web pages, searching for anything that might look like an integrity officer. Usually the best you can find is a dean without an email publicly listed.

But large well-funded institutions are not off the hook. Their reviews and investigations are slow and not at all transparent.

There are obvious reasons not to trust an institution with policing their own research outputs; however, the use of third-party tools would help mitigate those concerns.

A solution to the problem is for institutions to take responsibility for conducting integrity checks and providing validation to the journals. The many fine companies trying to sell publishers expensive technology solutions should instead be selling enterprise solutions to institutions.

Might there be a middle ground? The technology tools could be made available to individual authors, who would then obtain the validation and submit it with their paper. This would presumably come with a fee.

I don’t see how journals needing to employ more and more integrity checks and human review of the results is sustainable. As “cheating the system” becomes exponentially easier with the AI tools already at our fingertips, the constant public shaming of journals for not catching issues will continue to erode trust, not only in journals, but also in science.

And this is why moving the integrity review earlier in the timeline is crucial. Trust in science is low, like really low. The US is one election away from potentially losing most science funding. In corners of the universe, what is true is no longer relevant and many lies are believed as fact.

Instead of hoping that strapped journal offices and volunteers find the bad papers before they are published and instead of blaming the journal when one slips through, maybe the institutions — the employers of the researchers — have a significant role to play in ensuring the scientific record is clean from the start.

I welcome continuing discussion on where efforts to ensure research integrity are most efficiently deployed.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

95 Thoughts on "Putting Research Integrity Checks Where They Belong"

Beautifully written and very insightful! Thank you.
And I ask: what to do about a whole collection of developing countries that act as though research integrity has not even been invented… An uphill battle, I am afraid. Here in Brazil, public universities are slowly coming on board, and that is good news. For my part, I have been trying to increase the visibility of research integrity using social media and my consultancy.

Anna, this is indeed a problem and it takes a nation/national institution to care about their global reputation. China did and for years made strides to curb fraud. And I know first hand from my work with Brazilian science editors that “professionalizing” the journal editorial office was a big step in that direction. There is no shortage of educational opportunities. It’s the will to make systemic changes that is missing.

Couldn’t agree more. I sometimes feel like some kind of Don Quixote, but I keep on trying, because I truly believe in the importance of scientific integrity at all levels.

I fear the effect the use of AI tools¹ will have on the will of people to undertake the effort required to actually DO the systemic changes.
¹ Especially when every new AI tool promises and promotes how it will make your work “fast” and “easy”.

“Every time a managing editor has to send a paper to an institution for investigation, a small piece of their soul goes dark.” My Goodness, I relate to this so much!

As a journal editor, I am feeling the struggle against fraudulent manuscripts so much.
The simplest and probably most sustainable action against this problem can be taken by research institutions indeed, not by journals. It is to remove the currently widespread incentive to publish. Stop judging scientists by their paper output. Perhaps then we can return to a time when publishing was part of the scientific discourse and not the goal of research itself. Who would go to the trouble of faking a paper if there was no reward?

You propose an excellent action on this problem, with which I agree. But the question is: “if publications with positive data/results are the bread and butter of scientists seeking funding, what is the alternative?” Research is taking place in these labs every single day, but is it producing results that journals want to publish?

Angela – thank you for an interesting and timely posting.

It’s frustrating that organizations best placed (and that have the greatest power) to influence author behavior use journal publishers as a lightning rod for their own failures.

However, journal publishers also want to “have their cake and eat it”. They claim to publish “only the highest quality research” while often running away from the consequences of quality process failures.

We are approaching a time when journals will have to decide if they are just bureaucratic “administrators of peer review”, or if society should trust (and reward them) for making a more meaningful contribution to quality assurance in research workflow.

In more practical terms, here are some thoughts about what journals could do today:

Transparency/accountability – nobody likes to be told that quality comes from a “black box” that you just need to trust. Journals could be more transparent about what they do and do not check: “This manuscript was checked for plagiarism”, “This manuscript was not checked for data manipulation” etc. The simplest way to implement this would be to share the output report from an automated tool like PaperPal (COI – I’m a Strategic Advisor to Cactus) in the published manuscript.

Sampling and Process Re-engineering – checking manuscripts is indeed a “Sisyphean task” – journals will go broke if they conduct deep, manual quality checks on every manuscript! But in many quality assurance workflows this problem is addressed by “sampling”. Journals could randomly select e.g. 1/100 manuscripts for a deep manual check. Plus, journals could use heuristics to sample and trigger “heightened scrutiny” – for example where an author was subject to prior retractions (https://scholarlykitchen.sspnet.org/2024/02/13/guest-post-how-identifiers-can-help-publishers-do-a-better-job-of-curating-the-scholarly-record/).
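To illustrate, here is a small sketch of how a sampling-plus-heuristics triage policy could be encoded. Only the 1-in-100 rate and the prior-retraction trigger come from the suggestion above; the data shapes and names are invented for the example.

```python
# Sketch of a "sample plus heuristics" triage policy. The 1/100 base rate
# and the prior-retraction trigger come from the paragraph above; the
# manuscript records and field names are hypothetical.
import random

DEEP_CHECK_RATE = 1 / 100  # random deep-audit rate suggested above

def needs_deep_check(manuscript: dict, rng: random.Random) -> bool:
    """Decide whether a manuscript gets a deep manual integrity check."""
    # Heuristic trigger: heightened scrutiny for authors with prior retractions.
    if manuscript.get("author_prior_retractions", 0) > 0:
        return True
    # Otherwise sample at random; every submission has *some* chance of a
    # deep audit, which is what creates the deterrent effect.
    return rng.random() < DEEP_CHECK_RATE

rng = random.Random(42)  # seeded so the example is reproducible
queue = [
    {"id": "MS-001", "author_prior_retractions": 0},
    {"id": "MS-002", "author_prior_retractions": 2},
]
for ms in queue:
    verdict = "deep manual check" if needs_deep_check(ms, rng) else "standard checks"
    print(ms["id"], "->", verdict)
```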

Communication with Funders and Research Institutions – Maybe the submission form could be updated to ask the author to identify the integrity officer at their institution?

Richard Wynne

A reasonable approach, but all those checks cost time and money. Given the financial pressures constantly put on publishers to reduce subscription costs/APCs, how willing do you think institutions are going to be toward the significant price increases necessary to add this level of scrutiny?

I wish I definitively knew the answer to your question! Journals are in a tough spot. As self-proclaimed gatekeepers of quality it’s going to look very strange if they don’t make major, visible investments to update their outdated quality assurance infrastructure. Yes, making investments is risky, but so is clinging to a slowly failing status quo.

On the financial point, it’s interesting: as Richard pointed out, publishers want to have their cake and eat it. Universities, at least in the UK, are struggling financially: https://www.theguardian.com/commentisfree/2024/mar/29/britain-universities-freefall-saving-them-funding-international-students
I can imagine their argument to be ‘we can’t afford it; that’s what we pay $$$ to publishers for’. Publishers should be embracing challenges like this, which differentiate them from preprints and – at some point in the future – AI reviewing of manuscripts.

Our checker (SciScore) costs between 1 and 2 bucks a run for our integrity checks (given fairly large subscription volume); if that is a lot of pressure on the bottom line, I might question the financial soundness of the journal.

Sure, there are the direct financial and human resources needed to perform the checks, but that’s exactly why the sampling approach suggested by Richard makes total sense. Furthermore, given that sampling is one of the fundamental building blocks of science, it should work, I think.

I’m not sure sampling is the answer here. It can serve some purposes, perhaps give a sense of the frequency of the problem, but as a journal reader, I want every paper in the journal to have been verified as much as is possible. We know, at least from retraction frequency and how often fraud is found, that it still happens in a fairly small minority of the literature (10K retractions from 5M articles is 0.2%). But the problem is that each of that small number of fraudulent articles can cause significant reputational damage to a journal (how likely are you to send your next paper to the rat testes journal?), and more importantly, significant public harm (how many people have died because of Andrew Wakefield’s fraudulent papers?). Sampling won’t stop those from slipping through.

The suggestion of sampling checks is nonsense. Sampling applies to products produced by always following the same process. Articles are written by many different human beings.

Let me see if I can make sense of “sampling”.

Tax returns are unique and prepared by individuals, and tax authorities do not have the resources to fully examine every tax return. The solution is to “sample” and audit a small percentage of returns. This has an impact on the quality of *all* tax returns because everybody has a chance of being audited and therefore takes greater care with their returns.

So, even though sampling does not check every item, it has an impact on every item.

Scholarly manuscripts are not unique artworks being judged for their subjective merit.
Many aspects of research reporting (manuscript publication) are relatively standard. Do the references support the assertions made? Have images been manipulated? Are the results statistically significant? Etc. Some checks can be automated at relatively low cost, however other checks (e.g. forensic analysis of data manipulation) are much more expensive. That’s where sampling could be a useful technique.

There is a tendency in scholarly publishing to rule out any solution that is not 100% perfect on the unspoken assumption that the current methods are doing just fine. But is that still a valid assumption?

“Sampling” is obviously not the answer to all quality problems in research evaluation, but it is a relatively standard technique that could help improve the quality of scholarly publishers’ offering.

In my experience with systematic image screening at The Journal of Cell Biology over a dozen years, even universal screening was not a deterrent to authors. The number of manipulations that we detected (both those that affected the interpretation of the data, and those that did not) was remarkably consistent over that time, despite the fact that authors knew that their images would be screened if their manuscript was accepted for publication. Thus, I do not think that sampling will be a deterrent, and the only way to properly address this problem is with universal screening.

To me the real lever is with the funders. Require any paper, preprint, or other research output submitted in a grant progress report to come with research integrity certification from a qualified 3rd party. Threaten to cut off the money and the institutions will jump.

An important part of any certification process would be disclosure of the effectiveness of the 3rd party’s process/tool relative to an industry standard.

I’d think you’d create a set of standards/criteria that companies (and publishers) could use to apply for approval from the agency (much like the idea of what’s required to be a “qualified” repository for funded research data). Perhaps a standard output of the process, a report on what was run and the results, could become part of the paper.

Thank you, Angela, for sharing this. Beautifully presented.

In India, institutions have started implementing plagiarism detection tools, though there is a long way to go. I liked the idea of making institutions share responsibility for research integrity. I believe workshops/seminars with authors on research integrity will help increase awareness. Transparency in the publisher’s manuscript evaluation process will also educate authors on the various steps involved.

Thank you for sharing your thoughts. You’ve highlighted a significant trend: the investment in ‘integrity check’ technology is predominantly by developers of proprietary tools. This trend suggests a move towards a future where access and control of these tools are tightly held, potentially leading to a new era of dependency on specific vendors. ‘Trust’ becomes a result of a proprietary toolchain.

There’s a critical need for open-source solutions in the realm of research documentation. Such tools could inherently address many integrity concerns *further down the research lifecycle* while remaining free and accessible to the research community. This shift could significantly alter the landscape, challenging the traditional role of publishers and potentially reshaping the academic publishing industry.

In the current scenario, the focus of proprietary tool developers on addressing ‘last-mile’ integrity checks underscores a gap that the open infrastructure community must urgently address. It’s a pivotal moment for the community to advocate for and develop open-source solutions that are *in the hands of researchers* to ensure the long-term integrity and accessibility of research outcomes.

Thank you for this typically thoughtful and well-reasoned piece. The question of where the line of responsibility for ensuring basic research integrity standards should be drawn surfaced at a meeting I was at last Fall. As Angela says, publishers are currently seen as responsible for detecting and dealing with breaches of community standards at submission and during the editorial process. Whilst they definitely have an important role to play, the current situation is unbalanced. Many participants at the session from the above-mentioned Fall meeting were of the opinion that the balance point needs to be moved further upstream and earlier in the research cycle, so that funders, institutions, and research groups take a greater share of responsibility for ensuring that inexperienced authors are better supported and bad actors are weeded out earlier in the research process. The real question, of course, is how to accomplish this and what sanctions should be placed on bad actors when a case is proven. Clearly another area in which publishers, funders, and institutions need to come together.

The first institution to ask for access to the Papermill Alarm turned out, on inspection, to be a legitimate representative of the institution – a full professor. Inspection of his publications showed a long history of quite blatant image manipulation. I suspect he was running a mill. Now every time an institution contacts me I wonder if they want to deal with fraud or just make it harder to find. Trust is hard to automate.

I took a call once from an institution that had faculty embroiled in multiple retractions. After the introductions, their first question was “how do we make this go away?”. On explaining that the retractions were a fait accompli, they asked if we would republish the same papers in another journal, as they “needed the numbers”. There were no queries about the actual problems with the papers, no denial of wrongdoing (or pleas of innocence), no attempt to reassure us of action taken (we naively thought they might be seeking advice on how to train their researchers).

As much as I know we all need to work together to solve these complex problems, it is incredibly frustrating to come up against the (at best) wall of silence from universities and funders on this. So how do we motivate / incentivise them to act? What worked for journals / publishers was being publicly called out on social media and PubPeer. Could that same tactic work for institutions and funders?

Name and shame? Maybe. Look how quickly certain cancer centers responded when a slew of papers were questioned at once. Within two weeks of a blog post, 13 requests for retractions were sent to journals, along with a few dozen other corrections.

I do think this can be a collaborative effort. But it also seems that tech solutions are super close and could easily facilitate review and certification.

That is an important question that I have somewhat chosen to ignore. I envision a system where works are submitted to a service that conducts all kinds of integrity checks and sends the report to the author and someone at the institution. Fixes are made where possible. Once the paper is ready to submit, a validation code and report are submitted with the manuscript to a journal, which can see what was checked. I guess this could be a “trust but verify” approach.
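For concreteness, the artifact traveling with the manuscript might look like the sketch below. It is purely illustrative: the field names, check statuses, and the truncated-hash “validation code” are assumptions of this sketch, standing in for whatever schema and real digital signature an actual service would use.

```python
import hashlib
import json

# Hypothetical shape of a pre-submission integrity report issued by the
# checking service; a real system would use a registered schema.
report = {
    "manuscript_title": "Example Title",
    "institution": "Example University",
    "checks": {
        "plagiarism_scan": "pass",
        "image_forensics": "pass",
        "author_identity": "pass",
        "data_deposit": "fixed-after-review",  # an issue was found and corrected
    },
    "reviewed_by": "integrity-office@example.edu",
}

# A crude stand-in for a digital signature: the journal recomputes this hash
# from the report it receives and confirms nothing was altered in transit.
validation_code = hashlib.sha256(
    json.dumps(report, sort_keys=True).encode()
).hexdigest()[:16]

print("Submit with the manuscript:", validation_code)
```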

What a clear and bold idea. Thanks for writing this.
I’m partial to @Heike Riegler’s idea of the “3 good papers rule”. But assuming things continue on their current path, I am wondering how this might play with the growing use and creation of preprints.
If the researchers’ institutions take an active hand in quality control, and many outputs are preprinted, what exactly is left for journals to do other than curate?
Now, curation is not trivial, and it holds an important place in our attention economy, where content is only growing. But this seems like putting journals on a path to reduced relevance. If a preprint server has a robust advanced search interface and supports user tagging, why would anyone read a journal if there was a quality check done by the institution?

Perhaps I’m missing something.

Ah, you said “quality check” and I am talking about integrity checks. The traditional peer review process with experts and peers determining if a paper is good, important, and novel cannot be done by the institution. The conflict there is not manageable.

Is it just me, or do other people feel that the push by funders and academic institutions to include papers in preprint servers and institutional repositories is only going to make this problem worse? Are the servers and IRs going to run their own ethics checks? Or are we just going to further pollute the record of science?

“The bottom line is that journals, with their volunteer editors and reviewers and non-subject-matter-expert staff, are not equipped to police the world’s scientific enterprise.”

What a pathetic excuse. The bottom line is that publishers do not want to use part of their enormous profit and pay editors and peer-reviewers to do a better job. For example, employ a number of statistical consultants instead of paying huge dividends to shareholders.

I would suggest you are confirming Angela’s point in this piece. Universities/institutions and the Gates Foundation, Wellcome Trust, HHMI, et al., who sit gloatingly on mountains of billions of gold, should be the ones funding and supplying the integrity checks on the research to which their names are attached.

When we have questioned a paper for malfeasance, we go to the institution where the possible malfeasance occurred and say: this is on you to investigate. The potentially corrupt act took place in your house. Let us know what you find out.

Not sure I understand how on earth I am confirming Angela’s point in this piece :)

It is well known that publishers have enormous profits: “Elsevier operates at a 37% reported operating profit margin compared to Springer Nature which operates at a 23% margin.”

This means they can use part of that money to do a better quality check before they decide to publish something. It’s that simple: use part of your extreme profits and don’t hide behind the volunteers who do quality checks for you. Don’t cry; use part of that money to check what you publish. Pay people to do that for you; don’t rely on free labor.

It’s also well known that the institutions and funders have mountains of money. Shouldn’t they be funding the verification of what they are paying for?

Well, most universities are funded by public money and are nonprofit entities. Unlike publishers. Universities hardly have extra money to spare. Unlike publishers, which are making extreme profits.

Anyway, Angela, you or whoever are free to argue that universities should take their part of the responsibility regarding this issue, I have no problem with that. However, please do not hide behind the work of volunteers and please do not cry that you have no funds to do more, because you have. As I said in my first comment, it is just an excuse.

https://en.wikipedia.org/wiki/List_of_colleges_and_universities_in_the_United_States_by_endowment
The National Association of College and University Business Officers (NACUBO) maintains information on endowments at U.S. higher education institutions by fiscal year (FY). As of FY2023, the total endowment market value of U.S. institutions stood at $839.090 billion, with an average across all institutions of $1.215 billion and a median of $215.682 million.

It’s also perhaps worth considering the unintended consequences that might arise in a market where only the wealthiest and most profitable publishers can do the sorts of integrity checks needed to retain trust in the literature (no, not every publisher has the same profit margin as Elsevier). Should integrity be the privilege of the wealthy, and does that then lead to (even) further market consolidation and further entrenchment of the largest for-profit publishers?

For some reason I cannot reply to David’s comment so I will do that here.

David wrote: “It’s also perhaps worth considering the unintended consequences that might arise in a market where only the wealthiest and most profitable publishers can do the sorts of integrity checks needed to retain trust in the literature (no, not every publisher has the same profit margin as Elsevier). Should integrity be the privilege of the wealthy, and does that then lead to (even) further market consolidation and further entrenchment of the largest for-profit publishers?”

If I understand David correctly, he is basically saying that the most profitable publishers do not want to use their extra profit to improve their product because they are afraid that would put the less profitable publishers in difficulty, because these cannot invest that much money in improving their product. So by not investing part of their extra profit in quality control, these publishers are actually doing a favor to all scientists and science in general. If so, I can only laugh at this. This would be, in my opinion, by far the worst explanation of why publishers retain their huge profits.

You do not understand David correctly.

I am basically saying that if you create a tiered publication system where only the more profitable publishers can do the checks needed to make the research credible, then you will end up with only the more profitable publishers. I’d rather see a robust ecosystem filled with non-profit publishers running at small margins, not to mention university presses and library publishing done by universities. So a solution that relies on the enormous profits of commercial publishers is one that would lead to unintended consequences.

If you can afford to fund the research, or afford to do the research, or afford to publish the research, you can afford to do the integrity checks and have an obligation to do them.

The only other point I would make is that society publishers are not multi-million-dollar profit centers.

“If you can afford to fund the research, or afford to do the research, or afford to publish the research, you can afford to do the integrity checks and have an obligation to do them.”

That has nothing to do with my initial comment. You are free to argue that, and I have no problem with it. I am just saying it is pathetic to hide behind the work of volunteers and cry that publishers do not have funds to do more, because they have. Publishers should stop whining and invest part of their huge profits in order to improve their product.

David writes:
“You do not understand David correctly.

I am basically saying that if you create a tiered publication system where only the more profitable publishers can do the checks needed to make the research credible, then you will end up with only the more profitable publishers. I’d rather see a robust ecosystem filled with non-profit publishers running at small margins, not to mention university presses and library publishing done by universities. So a solution that relies on the enormous profits of commercial publishers is one that would lead to unintended consequences.”

It seems I did understand you correctly. You again state that those who can do something don’t do it because they are sympathetic towards those who have difficulties doing it. They do not want to improve their product because others cannot improve their product. This kind of altruistic and empathetic attitude would be a first in any free-market economy.

Again, no, you clearly do not understand what I’m trying to say. It has nothing to do with improving a product or wanting or not wanting to do anything, or altruism or empathy. I’m saying that you are mistaken in assuming all publishers make significant profits and have high margins, and that if your solution requires significant profits and high margins, then you will exclude from your solution many non-profit, university presses, and organizations publishing outside of the sciences. As has been the case with many other well-intentioned attempts to reform science publishing, you are proposing something that will further entrench the most profitable companies to the detriment of those that run on very low margins.

David writes:
“Again, no, you clearly do not understand what I’m trying to say. It has nothing to do with improving a product or wanting or not wanting to do anything, or altruism or empathy. I’m saying that you are mistaken in assuming all publishers make significant profits and have high margins, and that if your solution requires significant profits and high margins, then you will exclude from your solution many non-profit, university presses, and organizations publishing outside of the sciences. As has been the case with many other well-intentioned attempts to reform science publishing, you are proposing something that will further entrench the most profitable companies to the detriment of those that run on very low margins.”

I am saying that publishers should stop whining and hiding behind volunteers who do quality checks for them. If publishers care about their product, they should invest part of their huge profits and try to improve their product. It is as simple as that. Nobody and nothing is stopping them from doing that. It is pathetic to cry and whine “we have no funds to do it” when we all know they have. And please don’t worry about those publishers that run on very low margins; many scientists, including myself, are willing to review papers submitted to journals run by non-profit organizations. They will survive.

David writes:

“Thanks for explaining what ‘we all know.’ ”

Yes, we all know that “Elsevier operates at a 37% reported operating profit margin compared to Springer Nature which operates at a 23% margin,” and that they can do more unless there is a law stopping them from investing more money to improve their product…

And we all know that Elsevier and Springer Nature are the only publishers that exist. I think I’m done with this conversation.

Don’t know if this comment has already been made, but could pre-prints be the integrity check point?

Agree. As I argued in the Chronicle of Higher Education a few months ago (https://www.chronicle.com/article/how-to-stop-academic-fraudsters), universities should institute data chain-of-custody systems to certify that their researchers are not committing fraud. This would save many universities, including prestigious ones like Harvard and Duke, a lot of trouble, such as lawsuits and 1,200-page investigative reports (https://www.chronicle.com/article/heres-the-unsealed-report-showing-how-harvard-concluded-that-a-dishonesty-expert-committed-misconduct).

Hi Angela – thanks for this piece, very thought provoking.

The way I see it, journals & publishers are expected to tackle research misconduct chiefly because they’re the only stakeholder that has the opportunity: they alone have a pipeline where the vast majority of unpublished articles are gathered and evaluated before they become public.

Even if institutions and funding agencies were highly motivated to improve research integrity, they have no clue what their researchers are doing. None. Institutions and funding agencies only find out about the articles their researchers publish months or years after they appear, by which time it is far too late to address integrity issues.

The only way I can see to resolve this fundamental misalignment of opportunities and motivations would be for publishers to see this as a new line of business: establish a presubmission platform for checking research integrity issues (cf. Morressier), and charge these stakeholders to a) ensure compliance with funders’ and institutions’ policies, and b) catch research integrity issues before the article makes it into the public sphere.

Excellent. I love the bold opening statement. While I tend to agree that institutions could/should be tasked with providing some kind of Research Integrity (or Researcher Integrity) validation, I see that this too can be challenging for many institutions that lack the resources (human and financial) as well as the knowledge.
💡 Perhaps authors could be asked to agree to their paper being published on a preprint server dedicated to “Rejected submissions,” along with the reasons for the rejection. This in theory would make them think twice before submitting the product of malpractice.
What do you think?

Provocative essay and lots of insightful comments. As one who works in a government science agency with elaborate procedures for pre-submission review and approvals of manuscripts and their datasets*, I have some perspective on why mandating institutional research integrity checks across the scholarly publishing domain would be ineffective. Fundamentally, few institutions or authors would have the wherewithal and tolerance for a rigorous and consistent system, and it would become just another box-ticking exercise. Second, willful fraud, such as the subtle image manipulations that have roiled the biomedical literature, is hard even for co-authors to detect.

To me, more liberal Expressions of Concern or moderated article comments by publishers seem the most tractable improvements that could be made without increased financial burdens that would get passed on to authors. Nonjudgmental EoCs such as “Readers are advised that questions of image/data provenance etc. have been raised on this article. These comments and author responses may be viewed at….” would be something, especially if the situation isn’t clear cut. Automated linkages of PubPeer comments back to the article could be done, and some journals, e.g., PLOS, allow comments.

*6 sign-offs, including 2 peer reviews, for the manuscript, and 4 sign-offs for the accompanying datasets, including peer reviews of the metadata and audits of the data. This is just for permission to advance to ‘Start’ with the journal.

I like what Holden Thorp had to say — journals should just retract articles that they can no longer vouch for, and leave it to the institutions to investigate whether there was any unethical behavior on the part of the authors. This would result in largely unsatisfying retraction notices, but would relieve the investigative burden from journals.
https://www.science.org/doi/10.1126/science.ade3742

But even reaching the decision that you can no longer vouch for the data requires some form of investigation. There’s no way to completely pass the buck to another stakeholder for screening data or evaluating data anomalies, nor should it be passed. In my opinion, all stakeholders should be screening data and evaluating data anomalies ─ before funding (funders), before submission (institutions), and before publication (publishers). This is like the Swiss cheese model of COVID prevention that we all became familiar with. There will be holes in the checks at every stage, but, if there are more stages, there will be less falsified data in the published literature.

Formal investigations of potential unethical behavior still have to be carried out by the institutions. Having multiple stakeholders screening data does not necessarily reduce the number of these investigations (in fact, it may increase them); it just shifts them to different stages of the research process. But at least the published literature will be cleaner.

Agreed. I still like the idea of funders requiring (and paying for) some level of anti-fraud certification for any paper that is submitted with a progress report for a grant.

And yes, just weighing papers based on the validity of the contents (and not worrying if it’s an honest mistake or fraud) still takes time and effort, but it’s a lot less time and effort. No back and forth with authors, no reaching out to their institution, etc. Just, is this right (or can we stand behind what’s reported here)? If not, it’s retracted, we don’t care why/how the incorrect information got in there, it’s incorrect, end of story.

Most of the unethical issues are caused by the journals, because most of them give peer reviewers too little time to work. In the same vein, most reviewer comments are not acted upon.

You’re saying that journals are the primary source of falsified research results? Don’t the authors have something to do with it?

And why are peer reviewers given short turnaround times? Because authors demand rapid publication. Blaming everything on publishers without actually examining the problem carefully is exactly where APCs came from.

This interesting discussion makes me wonder about unintended consequences of shifting and evolving responsibilities. When industry shifts, it tends to generate outcomes that differ from original intentions. Hypothetically, if institutions and funders take on greater roles in pre-publication research integrity, what could that do to the balances of roles in the scholarly enterprise where more tends to beget more? Someone is going to benefit in unintended ways, and someone is not. Scholarly publishing growth is driven by odd incentives. I wonder how shifting balances may seemingly address some quirks while also leading to unintended consequences in who does what and who benefits the most – like we see in other areas, like OA, and benefits of scale.

Your well written piece was labelled “very provocative” by the person whose Tweet link I followed. Having read it, I think she needs to take something to reduce the reactivity. You precisely describe the problem we face as editors and the nonsense of expecting us to police integrity with our volunteer resources. But as a tenured academic I also see that the Universities can’t do this either. I like the sensible suggestions on doable checks, transparency about what we can and can’t do in the thread, but most of all the role of funders. This has to be a shared enterprise and they have the leverage others lack

Reputable news publications make a lot of effort to verify their sources and publish pieces by verified journalists with a track record in quality journalism. In return they expect readers to pay for a subscription, or if ‘read for free’, rely on other sources of revenue such as advertising.

Could journals not act in a similar way? Pushing the problem onto the author or the institution (who will not be independent) doesn’t seem like a viable solution to me.

Ultimately the publisher should be responsible for what they publish and the integrity of it.

There you go…

BMJ is looking for Freelance Statistical Reviewer

Responsibilities:

“Provide a statistical or methodological review, as appropriate, on manuscripts assigned by the BMJ Open editorial team. We expect a review for an original manuscript to take between 2-3 hours to complete, and a review of the revised manuscript to take between 30 mins-1 hour to complete. Ideally, we are looking for someone who can commit to handling between 1 to 10 reviews a week.”

Salary: £115/review or £40/hr (applicants preference)
https://jobsearch.bmj.com/jobs/job/Freelance-Statistical-Reviewer/745

Publishers should stop whining and invest in improving their final product. It’s that simple…

But that’s an entirely different type of review than is being discussed here. Statistical review is a common practice at many journals (particularly medical journals). It typically happens late in the review process, after the paper has passed the desk rejection stage, and usually after it’s been through at least one round of peer review/revision. This means it is practiced on a much smaller number of manuscripts than are submitted to the journal in total. Statistical reviewers are usually paid an honorarium and that amount is factored into the journal’s APC or subscription price.

Here we’re talking about an entirely different and additional aspect of review, research integrity checks, which need to be performed much earlier in the process and on a much higher number of manuscripts.

Publishers should stop whining and invest in improving their final product. It’s that simple…

Should librarians and researchers similarly stop whining and invest in paying for the improvements in the review of published papers? Are you suggesting here that publishers should feel free to increase their APCs and subscription prices as needed?

Angela wrote: “Journals should not hold primary responsibility for detecting, correcting, and punishing authors for inappropriate behavior.”

Detecting. I talk about detecting. Better detecting -> improvement of publishers’ final product.

Publishers can do a lot to improve detection of inappropriate behavior. For example, a statistician/methodology expert employed by a publisher can detect anomalies in the data, impossible results, unrealistic effect sizes… A lot of studies have been retracted exactly because someone checked the data or carefully went through published papers. So yes, Angela talked about such issues, and publishers can do more, and BMJ’s initiative is one small step in the right direction.

David wrote: “Are you suggesting here that publishers should feel free to increase their APCs and subscription prices as needed?”

“Feel free”? Has anyone been stopping them from doing exactly that? :)))) Aren’t they increasing their APCs all the time? Just like Nature has done recently: “The APC to publish Gold Open Access in Nature is £8890.00/$12290.00/€10290.00.”

So as I wrote many times here, the major publishers can use part of their enormous profits to invest in detection of inappropriate behavior. They don’t have to increase APCs; they have the money. Which they can use to increase the quality of their product. They can pay people to improve their final product. I mean, if they are interested in the quality of their product ;-)

And please don’t start again “what about the small publishers?” 😉

And please don’t start again “what about the small publishers?”

Understood. You’re proposing that they leave the market, and everything gets turned over to the large, highly profitable corporations. That’s not what I think would be best for the research community, but to each his own.

That is such a strange interpretation of what I am saying. They don’t have to leave the market. They can, for example, require that every study is preregistered, data is deposited somewhere online, and peer review is open and transparent, and they can allow and encourage post-publication peer review, and the scientific community will help them for free to improve the quality of their final product. As I wrote before, the scientific community is more willing to do that for publishers that do not strive for enormous profits. On the other hand, the major publishers can pay people to improve their final product.

Anyway, are you saying that Boeing should not invest in quality checks of their aircraft because Bombardier does not have the same amount of money to invest, so it is just fine that Boeing produces the same quality aircraft as Bombardier while maintaining high profits? That they should not strive to improve their aircraft because they have sympathy for Bombardier? So consumers of both manufacturers will ‘suffer’ equally because Boeing does not want to improve its product, but expects people to still respect its brand and pay more because of its past good reputation. Is that what you are saying?

They can, for example, require that every study is preregistered, data is deposited somewhere online, and peer review is open and transparent, and they can allow and encourage post-publication peer review, and the scientific community will help them for free to improve the quality of their final product. As I wrote before, the scientific community is more willing to do that for publishers that do not strive for enormous profits. On the other hand, the major publishers can pay people to improve their final product.

Isn’t that along the lines of what this blog post is calling for? That research institutions take charge of the ethical behavior of their employees and ensure the integrity of their manuscripts rather than asking publishers to do that (unpaid) work for them? Or is asking for that what you called “whining” in the first comment in this thread?

” That research institutions take charge of the ethical behavior of their employees and ensure the integrity of their manuscripts rather than asking publishers to do that (unpaid) work for them?”

“…than asking publishers to do that (unpaid) work for them?”

LOL. Publishers are the last ones who get to complain about doing something for free for someone else… I am sorry, but this goes beyond irony.

In my view, the blog unsuccessfully tries to argue that publishers do not have to do anything more to improve the quality of their product, despite the fact that many of them have enormous profit margins. In my view, the blog writer, and you, unsuccessfully defend the major publishers’ unwillingness to do more because you are apparently worried about smaller publishers that do not have high profit margins. You are defending the major publishers’ right to keep their extreme profits while trying to shift responsibility for their final product to someone else. Such argumentation simply cannot hold. I am pretty sure, or sincerely hope, that the large majority of the scientific community does not share your views.

There you go: a new paper “Are open access fees a good use of tax payers money?”

The author, Graham Kendall, estimated that the main eight publishers (Elsevier, Frontiers, PLOS, Sage, Wiley Limited, MDPI, Springer Nature, Taylor & Francis) received USD $5.420 billion in article processing charges during 2015-2023.

https://doi.org/10.1162/qss_c_00305

Can some of this money be used to improve the quality of their final product, I wonder. 😉

Wow, that is some article:
“The main message though is that there is not a reliable way to accurately estimate the revenue income for a given publisher.”
“These figures will not be totally accurate, indeed, they may be quite far from the true values…”
“The data is incomplete.”
“it is challenging to collect the data, and be fully confident that it is robust and complete.”
“it is almost certain that any data collection exercise is lacking”

I think there are more caveats and disclaimers in the paper than there are data or conclusions. It also fails to recognize that not all research is funded, or that not all funded research is taxpayer funded. So drawing a direct correlation between poorly estimated numbers and taxpayer burden is unwarranted. Even so, the “study” estimates around $600M per year is spent on APCs at the largest of OA publishers. That seems a remarkable bargain to me. Global R&D funding is estimated at $2.47 Trillion each year (https://www.statista.com/statistics/1105959/total-research-and-development-spending-worldwide-ppp-usd/), so spending 0.024% of research funding to pay for making the results public seems a pretty good use of funds to me.

And to make the point you’re trying to make here, it’s silly to just look at OA spend. DeltaThink estimates the total journals market to be around $11B annually (https://deltathink.com/news-views-total-value-of-scholarly-journals-market/). Why not look at all of publishing and not just a sub-section? Is spending 0.45% of your funding money on making research results publicly available too much? If funders want the research they’re paying for to be better vetted by journals (or anyone else), what is a reasonable percentage of their budgets to be put to that purpose? Is it less than half a percent as you suggest?

Did you just get this 0.45% in this way:
“the total journals market to be around $11B annually” / “Global R&D funding is estimated at $2.47 Trillion each year”?

You say “it is silly to just look at OA spend” and then you try to make a point by stating that “Global R&D funding is estimated at $2.47 Trillion each year.”

My point, as well of the cited article, was that publishers get a lot of money from open access fees, and that they can use some of that money to improve their final product. Nothing else, nothing more. As I have been saying here from the start.

You said it is silly just to look at OA and then you mention that “Global R&D funding is estimated at $2.47 Trillion each year” in a discussion about scientific publishing. That’s amazing. And that is silly ;-) As if multinational companies investing in the development of their products have much interest in publishing scientific papers…

At least we’re now talking about real numbers rather than make believe ones. Perhaps a better point for you to make would be how poorly journals vet the papers they publish, and that charging $1200 to publish an article that boils down to the author making up a bunch of numbers, admitting they probably aren’t right, and then drawing conclusions based on them is indeed a poor use of money, although given that the author lists no funding on the paper, it would seem exempt from being a “use of taxpayer money.”

But let’s simplify, and I’ll ask again. What percentage of a funder’s research budget is a reasonable amount to put toward ensuring that the resulting research results are indeed valid? Is there a number that you think is fair?

“What percentage of a funder’s research budget is a reasonable amount to put toward ensuring that the resulting research results are indeed valid? Is there a number that you think is fair?”

I cannot say. I only know, and have been saying from the start here, that it is ridiculous that major publishers do not want to invest part of their (extremely) high profits in improving their own final product. It is unfair to say “we cannot or we do not want to do more” while making so much profit. That’s all.

On the other hand, some propose/demand that journals pay “four hundred and fifty dollars for peer review” 😉
https://twitter.com/450Movement

If that’s too much, what is your proposal? Is there a number that you think is fair?

I don’t have a specific number either. But if there is a demand that journals do significant additional work beyond what they do right now, then is it reasonable to ask that some funding is put toward the performance of that additional work (or as is the case here, the suggestion that no extra money go to journals but instead that additional work is done by the organizations directly receiving research funding)?

I struggle though with the argument that some publishers could just eat into their profits and cover these additional expenses on behalf of the funders, largely because if they were willing to take on that loss (which is not how successful businesses generally work), it creates an unfair advantage for those profitable publishers, and the publishers that make little profit or that run at a loss can’t afford to make that choice, and will be shut out of the market, further consolidating power in the commercial entities that already dominate things too much.

Let’s go back to this blog, which argued that universities/funders should do more to improve publishers’ final product. My point is that if publishers are unhappy with the quality of their final product, they should invest more to improve it. They have the money. At least the major publishers have. If they care about the quality of their product, they should invest more. If they don’t, it is unethical and unfair to blame someone else. Of course, they can continue with their practice; however, they should not complain if their reputation suffers. Don’t invest in the quality of your final product, and what happened to MDPI’s Sustainability will happen to your journal. Invest more or suffer.

“The chairs of the Publication Forum panels have decided to downgrade Sustainability journal from level 1 to level 0 from the beginning of 2023.”
https://julkaisufoorumi.fi/en/news/sustainability-level-0-2023

Could one similarly argue that if research institutions are unhappy with the quality of the final product (i.e., research results), then they should invest more to improve it. Or if research funders are unhappy with the quality of the final product they’re funding, then they should invest more to improve it. If the funders and the universities don’t really care about this stuff and can’t be bothered to work on fixing the problem, why should publishers? Are they the only responsible party in the system?

Sure, you are free to argue that. My response was directed at what was argued in this blog. If publishers don’t want to improve their final product, their reputation and eventually their profits will suffer. It’s their choice. They will adapt or they will lose.

Okay. If universities don’t want to improve their final product, their reputation and eventually their profits will suffer. It’s their choice. They will adapt or they will lose.

The only problem is that I have never defended universities’ alleged unwillingness to invest more in the quality of their researchers and their work, as you have been defending the major publishers’ unwillingness to invest part of their huge profits in improving the quality of their final product 😉 I have never said, to paraphrase you:

“…it creates an unfair advantage for those profitable universities, and the universities that make little profit or that run at a loss can’t afford to make that choice, and will be shut out of the market…”

😉

I think all stakeholders have a role to play. But I disagree that the financial burden should fall solely on the publishers. Publishers are already investing by building the necessary tools and doing the actual work at their own expense, but the argument is that this is likely not sustainable for some (but not all) publishers. I’d like to see the parties committing the actual fraud (universities and research institutes) do their fair share, which they seem unwilling to do (https://www.insidehighered.com/news/government/science-research-policy/2024/04/02/universities-oppose-plan-bolster-federal). And if I were a funder investing all those billions/trillions in research, I’d want to be sure that I wasn’t being ripped off. But hey, let’s blame Elsevier instead.

Yes, all stakeholders have a role to play; no one is denying that. However, according to you and Angela, the major publishers should not invest more. I interpret your writing this way: they (i.e., the major publishers) should be allowed to keep their (enormous) profit margins, hide behind “their volunteer editors and reviewers,” and shift responsibility to someone else (“Putting Research Integrity Checks Where They Belong”).

The whole blog and your writing can be summarized as “we have done our part, we don’t want to invest any of our profit, it is time for someone else to improve the quality of our final product.” That is simply unacceptable to the majority of the scientific community, at least I hope so.

From the blog:

“Putting Research Integrity Checks Where They Belong”
-> they don’t belong to us (publishers); these checks are the responsibility of others, not us.

“The bottom line is that journals are not equipped with their volunteer editors and reviewers, and non-subject matter expert staff to police the world’s scientific enterprise.”
-> we will keep relying on our volunteers; there is absolutely no way we will use our profits to pay reviewers/experts to improve the quality of our final product. We will continue hiding behind our volunteers…

“I don’t see how journals needing to employ more and more integrity checks and human review of the results is sustainable.”
-> if we invest more, our profits will be in danger, and that is not a sustainable business model in our view; others should pay to improve the quality of our product while we keep our profits…

Igor, we run into this problem every time we engage in these comments. You start making up arguments, creating strawmen, and putting words in other people’s mouths. It’s easy to win an argument when you get to invent what the other person is saying.

Should the major publishers invest in research integrity? Of course, but they are already doing so. Who do you think is paying for the STM Integrity Hub and the community tools it is building (https://www.stm-assoc.org/stm-integrity-hub/)? Who do you think is paying for United2Act (https://united2act.org/about/)? Those profitable publishers are indeed investing their profits in trying to solve this problem.

And I won’t speak for Angela, but you are reading my argument incorrectly. What I’m saying is that if you set up a system where only the super profitable wealthy publishers are able to do research integrity checks, then you will penalize (and potentially eliminate) all of the small presses, all of the interesting OA startups, all library-led publishing programs, nearly all university presses (with a few notable exceptions), regional journals, journals in areas of the planet that aren’t as wealthy as the US/Europe, etc., etc.

I do think there’s a role for funders to pay for these sorts of checks, and, for example, the Gates Foundation agrees with me and is now paying significant amounts to the profitable commercial publisher Taylor & Francis to run those checks on their funded research (https://www.f1000.com/verixiv/). Unfortunately this is limited to Gates-funded researchers and one commercial publisher, so I’d like to see something more equitable achieved. Further, I’d like to see more pressure on the universities, who are the ones actually committing the fraud here, rather than putting the blame on those whose funds are being misused or those who are trying their best to catch the fraud.

I don’t think those are unreasonable arguments. We may have to just agree to disagree, but as always, please don’t presume to make up arguments on my part.

I find it ironic that you say I put words in other people’s mouths and misinterpret their writing, when you did exactly that in your first reply to my comment. To remind you, the first sentence of your first reply to my comment was this:

“I would suggest you are confirming Angela’s point in this piece.”

You say I tend to create strawmen, when you repeatedly engage in whataboutism by saying “what about those small publishers, they will not be able to compete anymore if major publishers invest more money into improving their product.” That was me paraphrasing you; now I will cite you, to avoid the accusation of putting words in your mouth. You wrote: “you are proposing something that will further entrench the most profitable companies to the detriment of those that run on very low margins” … “You’re proposing that they leave the market, and everything get turned over to the large, highly profitable corporations.” And this is how that sounds to me: “What about Embraer, how will they survive if Boeing invests more money in their products, what about Embraer, what will happen to them…?” Pure whataboutism.

And I repeat again and again: major publishers cannot hide behind an army of volunteers, make huge profits, and try to move responsibility to someone else once people start criticizing their final product. If your business model relies on the work of an army of volunteers and you see something is not going right, you cannot blame it on the volunteers or on someone else; change your business model and start paying people to improve the quality of your product! I cannot understand why I have to repeat this over and over, and your only response is whataboutism (what will happen to small publishers), without a single word acknowledging that perhaps any industry based on the work of volunteers will (or should!) eventually collapse, because the volunteers will stop donating their expertise and time, or people will stop buying its products once they realize it is unethical to make extreme profits from volunteer labor…

I do not know what to say anymore… I cannot understand how anyone can defend a business model that is built on the work of an army of volunteers. I am simply suggesting that if publishers are not happy with the quality of their final product, they should perhaps stop relying on the work of an army of volunteers and invest more money.

Finally, this is really tiring… I like to discuss and argue; that has been my passion all my life. But I have not seen a single argument from you, only repeated whataboutism. Not happy with the work of volunteers? Don’t hide behind them; employ people and replace the volunteers. Paying people for their work is kinda fair, isn’t it?

I’m not sure “whataboutism” is the right word for what you’re describing here. It’s defined as:
https://www.merriam-webster.com/dictionary/whataboutism
“the act or practice of responding to an accusation of wrongdoing by claiming that an offense committed by another is similar or worse”
My response to your argument is not an attempt to distract you by claiming that someone somewhere else is doing something worse than any particular publisher. What I am trying to do (and clearly failing) is to explain the unintended consequences of the approach you are suggesting. If a necessary activity is something that only the rich can do, then only the rich will remain in the market.

I’ve not tried to defend the profits of the commercial publishers nor their use of volunteers. I do, however, think that a diverse ecosystem of publishers and routes to publishing is much healthier for the research community than one that is composed entirely of highly profitable commercial entities. Your solution, at least in my experience, would create inequities and reduce the diversity of the ecosystem. It is a simple point, and from your comment above, I think we are not having the same argument.

Paying people for their work is kinda fair, isn’t it?

Absolutely. And given the enormous amount of work that needs to be done in order to assure research integrity, it would be great if those funding and producing research were willing to pay for the integrity checking work that they so desperately need.

“Whataboutism (also known as Whataboutery especially in the UK) is a deflection or red herring version of the classic tu quoque logical fallacy — sometimes implementing the balance fallacy as well — which is employed as a propaganda technique. It is used as a diversionary tactic to shift the focus off of an issue and avoid having to directly address it.”
https://rationalwiki.org/wiki/Whataboutism

Shifting the focus onto smaller publishers, instead of answering why major publishers should not invest part of their huge profits to improve their product, is whataboutism, in my view and according to my understanding of the term. Perhaps I am wrong, but in any case I have answered several times about the possible unintended consequences of the approach I was suggesting. For example, I wrote: “And please don’t worry about those publishers that run on very low margins, many scientists including myself are willing to review papers submitted to journals run by non-profit organizations. They will survive.” So my proposal is this:

If you want to have (huge) profits:
-> pay people to do the quality check of your final product
-> you have no excuse for low quality product
-> don’t blame others, don’t whine, don’t hide behind volunteers or small publishers

If you struggle and/or you are a non-profit:
-> you can count on the support of the scientific community
-> if volunteers make mistakes, the scientific community will be more forgiving, because we all volunteer
-> ask for help, people will donate their expertise and time, and even money to help

The world is full of large and small businesses, and smaller businesses can survive without compromising the quality of their products. There is no reason why smaller publishers cannot survive while maintaining the high quality of their products.

By the way, you keep pointing out differences between major publishers and those small ones that might struggle… so I wonder why the author of this blog made no such distinction and did not propose measures that would take these differences into account.

She wrote: “The bottom line is that journals are not equipped with their volunteer editors and reviewers, and non-subject matter expert staff to police the world’s scientific enterprise.”

Some publishers are better equipped than others, so why group them all together and hide behind volunteers? Why not write a more balanced view of the issue and propose that all involved parties (publishers, funders, universities…) do their part, instead of simply shifting the responsibility from publishers to others (“Putting Research Integrity Checks Where They Belong”)?

In response to my “Paying people for their work is kinda fair, isn’t it?” you wrote: “Absolutely. And given the enormous amount of work that needs to be done in order to assure research integrity, it would be great if those funding and producing research were willing to pay for the integrity checking work that they so desperately need.”

Can I interpret this as you being in principle supportive of the idea that peer-reviewers should be paid for their work?

“Don’t worry about them…they will survive” is not a viable business plan.

-> ask for help, people will donate their expertise and time, and even money to help

Isn’t that exactly what this post was about? Asking for help in just this manner, getting people to donate their expertise and time and even money?

I struggle with your argument — are you suggesting that the author here is wrong for not differentiating between the different types of publishers in the market, while presenting your own solution that is only viable for the wealthiest of publishers, with everyone else left to survive on wishes and hopes that maybe something magical will happen to keep them in business? Aren’t you guilty of the same level of oversimplification? Having worked in non-profit publishing for several decades, I would not advise risking one’s program, the welfare of one’s employees, and the overall health of the scholarly communications community on a plan that clearly favors the powerful incumbents and puts the smaller independents and new startups at significant risk.

Can I interpret this as you being in principle supportive of the idea that peer-reviewers should be paid for their work?

I think it’s a really good parallel for what you’re suggesting here, and I think I am going to bow out of this discussion by quoting the leader of one of the community’s most important and innovative non-profits who spoke out about the second order effects of doing this, which are very similar to the unintended consequences of what you’re proposing here:
https://scholarlykitchen.sspnet.org/2021/06/16/whats-wrong-with-paying-for-peer-review/

“One of the primary goals here is to penalize big publishers who make ‘too much’ profit. Setting aside the challenges of how that’s defined and who gets to decide how much is too much, such a system would require publishers to build an entirely new infrastructure… This would be a huge burden on any publisher but especially smaller ones with tight margins.

“The truth is that payments to reviewers would just lead publishers to raise their prices. They’d raise prices to cover the money they’re paying reviewers. And they’d raise them again to cover the cost of making those payments. A movement focused on punishing big commercial publishers by forcing them to pay reviewers would thus lead to big publishers jacking up their prices by 20 to 30 percent. Publishers will just pass the costs along to libraries and authors, drawing additional money out of library budgets and granting agencies’ publication funds, and putting a healthy percentage into big publishers’ pockets.”

And as seems to be the case here, she questions whether the real problem one wants to solve is the one being discussed or is instead anger at the profits brought in by some publishers:

“If it’s anger at the scale of selected publishers’ profit, then there’s a far simpler solution: don’t review for or publish in their journals. The most powerful action each of us has as a consumer is where we choose to spend our time and resources: if I think that certain publishers are generating inflated profits based on my labor, I can simply choose not to give them my labor.”

You wrote: “Isn’t that exactly what this post was about? Asking for help in just this manner, getting people to donate their expertise and time and even money?”

Nope. People typically ask for help when they don’t know how to do something or are unable to do it. This blog was not asking for help because publishers cannot do something or do not know how; it tried to move responsibility to others without using the assets publishers have at their disposal.

You wrote: “I struggle with your argument — are you suggesting that the author here is wrong for not differentiating between the different types of publishers in the market, while presenting your own solution that is only viable for the wealthiest of publishers, with everyone else left to survive on wishes and hopes that maybe something magical will happen to keep them in business? Aren’t you guilty of the same level of oversimplification? ”

My solution was a direct answer to your worries. Oversimplification? Perhaps. However, my solution is fairer from a business ethics perspective than the proposal in “Putting Research Integrity Checks Where They Belong.”

You wrote: “I think it’s a really good parallel for what you’re suggesting here, and I think I am going to bow out of this discussion by quoting the leader of one of the community’s most important and innovative non-profits who spoke out about the second order effects of doing this…”

What does it mean, in your view, to be a non-profit person in the field of scientific publishing?
Being the CEO of PLOS?

I don’t have time to start a discussion about the text you linked. But the attempt to defend something that cannot be defended is amazing. The argument put forward by the authors, that “paying will corrupt people,” is simply unbelievable. The authors wrote: “We want to stress that there is a lot of research showing that altruistic behavior is quickly eroded into an ugly mess by the addition of financial incentives…” and “So, instead of getting into a mess by trying to pay them money…” I am speechless.
