Every research article submitted to a journal should come with a digital certificate validating that the authors’ institution(s) has completed a series of checks to ensure research integrity.

Journals should not hold primary responsibility for detecting, correcting, and punishing authors for inappropriate behavior.

In recent months, several “scandals” have rocked trust and confidence in journals. Thousands of Hindawi papers were retracted because they most likely came from papermills. Wiley announced that their new papermill detection service flagged 10-13% of submissions across 270 journals for further review.

Frontiers published ridiculous AI-generated figures. And a handful of journals from various publishers were caught publishing papers with obvious LLM chatbot text included.

These incidents led to mainstream press articles questioning the value of journals and SCIENCE. I’ve yet to see one that questions the value of institutions or funding of science.

There are consequences for journals that don’t seem to care about research integrity, no matter what their corporate mission statements claim. There are mechanisms for dealing with those journals — they lose indexing in important compendia, they lose their Impact Factor, they lose out reputationally and submissions drop dramatically, at least for a while. They end up on restricted lists at institutions or national funders.


Journals have positioned themselves as being a trusted source with peer review and some level of validation of scholarship.

Journals have been increasingly expected to explain their value and for the vast majority of serious journals, rigorous peer review has been the answer along with community curation. However, forensic analysis of data sets and gel stains was never an expected task of traditional peer review. And yet, today, journal staff may be performing any number of checks including plagiarism scans, figure analysis, identity checks on claimed authors and reviewers, and at least following the data links supplied to see if data was deposited as may have been required. Now we will start having to add in papermill checks — are all the named authors real people, are they at the institutions listed on the paper, do those institutions exist, are all of the authors working in the same field, have they collaborated before, are there “tortured phrases” littered throughout the paper?

And AI detection, where no tool has yet risen to the top as accurate, scalable, or integrated into any manuscript tracking system, is a new frontier awaiting exploration by journal offices.

For every score or report on every automated integrity check, a person needs to review the result and decide what to do: reject the paper based on the score, or go back to the author for an explanation and, if it is acceptable, work with them to fix the problem.

For any journal (or maybe suite of journals at a society), dealing with ethics issues requires significant staff time. Any one of these issues could be an honest error by an inexperienced author. In those cases, a journal may want to work with them to get the issue fixed and continue the paper down its peer review path. Other times, it’s bad behavior that needs to be addressed.

If a paper that fails the integrity checks is rejected, there is a good chance it will show up at another journal, wasting the time of yet another journal staffer.

These additional checks are coming at a time when the review of papers submitted to journals is expected to be fast and inexpensive and yet none of the processes above are either fast or inexpensive. And the number of papers submitted to journals is mostly increasing — though that is not the case in every discipline.

Conducting integrity checks on papers is also a Sisyphean task with little reward. The vast, vast majority of papers submitted to the vast majority of journals are written by ethical and responsible researchers. Despite the sensational headlines decrying almost 10,000 retractions in 2023, about 8,000 of them were from Hindawi journals. Context is everything.

If we remove the journals or publishers that are not actually conducting peer review (or do conduct peer review but then ignore the reviewer comments and accept the papers anyway), the number of papers with serious ethical issues is low. And yet, every publishing conference this year, last year, and next year will spend significant amounts of time addressing research integrity issues. An equal amount of time will be spent attending demos of new tools built to detect research integrity issues.

Despite the relatively low number of incidents, not checking every accepted paper puts a journal at risk of missing something and winding up on the front pages of Retraction Watch or STAT News. That is not where any journal wants to be, and it opens you up to a firestorm of criticism — your peer review stinks, you don’t add any value, you are littering the scientific literature with garbage, you are taking too long to retract or correct, etc.

The bottom line is that journals, with their volunteer editors and reviewers and non-subject-matter-expert staff, are not equipped to police the world’s scientific enterprise.

Some have called for journals to simply retract or publish an expression of concern if questions are raised about published papers and force the institutions to conduct an investigation.

Every time a managing editor has to send a paper to an institution for investigation, a small piece of their soul goes dark. Maybe if all your papers come from US R1 research institutions you will at least be able to identify to whom the email should be sent. I have personally spent hours hitting translate in the web browser on institution web pages searching for anything that might look like an integrity officer. Usually the best you can find is a dean without an email publicly listed.


But large well-funded institutions are not off the hook. Their reviews and investigations are slow and not at all transparent.

There are obvious reasons not to trust an institution with policing their own research outputs; however, the use of third-party tools would help mitigate those concerns.

A solution to the problem is for institutions to take responsibility for conducting integrity checks and providing validation to the journals. The many fine companies trying to sell publishers expensive technology solutions should instead be trying to sell enterprise solutions to institutions.

Might there be a middle ground? The technology tools could be made available to individual authors, who would then have to obtain the validation and submit it with their paper. This would come with a fee.

I don’t see how journals needing to employ more and more integrity checks and human review of the results is sustainable. As “cheating the system” becomes exponentially easier with the AI tools already at our fingertips, the constant public shaming of journals for not catching issues will continue to erode trust, not only in journals but also in science.

And this is why moving integrity review earlier in the timeline is crucial. Trust in science is low, like really low. The US is one election away from potentially losing most science funding. In corners of the universe, what is true is no longer relevant and many lies are believed as fact.

Instead of hoping that strapped journal offices and volunteers find the bad papers before they are published and instead of blaming the journal when one slips through, maybe the institutions — the employers of the researchers — have a significant role to play in ensuring the scientific record is clean from the start.

I welcome continuing discussion on where efforts to ensure research integrity are most efficiently deployed.

Angela Cochran


Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

68 Thoughts on "Putting Research Integrity Checks Where They Belong"

Beautifully written and very insightful! Thank you
And I ask, what to do about a whole collection of developing countries that act as though research integrity has not even been invented…. An uphill battle, I am afraid. Here in Brazil, public universities are slowly coming on board and that is good news. For my part, I have been trying to increase the visibility of research integrity using social media and my consultancy.

Anna, this is indeed a problem and it takes a nation/national institution to care about their global reputation. China did and for years made strides to curb fraud. And I know first hand from my work with Brazilian science editors that “professionalizing” the journal editorial office was a big step in that direction. There is no shortage of educational opportunities. It’s the will to make systemic changes that is missing.

Couldn’t agree more. Though I sometimes feel like some kind of Don Quixote, I keep on trying, because I truly believe in the importance of scientific integrity at all levels.

I fear the effect the use of AI tools¹ will have on the will of people to undertake the required effort to actually DO the systemic changes.
¹ Especially when every new AI tool promises and promotes how it will make your work “fast” and “easy”.

“Every time a managing editor has to send a paper to an institution for investigation, a small piece of their soul goes dark.” My Goodness, I relate to this so much!

As a journal editor, I am feeling the struggle against fraudulent manuscripts so much.
The simplest and probably most sustainable action against this problem can be taken by research institutions indeed, not by journals. It is to remove the currently widespread incentive to publish. Stop judging scientists by their paper output. Perhaps then we can return to a time when publishing was part of the scientific discourse and not the goal of research itself. Who would go to the trouble of faking a paper if there was no reward?

You propose an excellent action for this problem, with which I agree. But the question is ‘if publications with positive data/results are the bread and butter of scientists when it comes to receiving funding, what is the alternative?’ Research is taking place in these labs every single day, but is it a result that journals want to publish?

Angela – thank you for an interesting and timely posting.

It’s frustrating that organizations best placed (and that have the greatest power) to influence author behavior use journal publishers as a lightning rod for their own failures.

However, journal publishers also want to “have their cake and eat it”. They claim to publish “only the highest quality research” while often running away from the consequences of quality process failures.

We are approaching a time when journals will have to decide if they are just bureaucratic “administrators of peer review”, or if society should trust (and reward) them for making a more meaningful contribution to quality assurance in the research workflow.

In more practical terms, here are some thoughts about what journals could do today:

Transparency/accountability – nobody likes to be told that quality comes from a “black box” that you just need to trust. Journals could be more transparent about what they do and do not check: “This manuscript was checked for plagiarism”, “This manuscript was not checked for data manipulation”, etc. The simplest way to implement this would be to share the output report from an automated tool like PaperPal (COI – I’m a Strategic Advisor to Cactus) in the published manuscript.

Sampling and Process Re-engineering – checking manuscripts is indeed a “Sisyphean task” – journals will go broke if they conduct deep, manual quality checks on every manuscript! But in many quality assurance workflows this problem is addressed by “sampling”. Journals could randomly select e.g. 1/100 manuscripts for a deep manual check. Plus, journals could use heuristics to sample and trigger “heightened scrutiny” – for example where an author was subject to prior retractions (https://scholarlykitchen.sspnet.org/2024/02/13/guest-post-how-identifiers-can-help-publishers-do-a-better-job-of-curating-the-scholarly-record/).

Communication with Funders and Research Institutions – Maybe the submission form could be updated to ask the author to identify the integrity officer at their institution?

Richard Wynne

A reasonable approach, but all those checks cost time and money. Given the financial pressures constantly put on publishers to reduce subscription costs/APCs, how willing do you think institutions are going to be toward the significant price increases necessary to add this level of scrutiny?

I wish I definitively knew the answer to your question! Journals are in a tough spot. As self-proclaimed gatekeepers of quality it’s going to look very strange if they don’t make major, visible investments to update their outdated quality assurance infrastructure. Yes, making investments is risky, but so is clinging to a slowly failing status quo.

On the financial point, it’s interesting, as Richard pointed out, that publishers want to have their cake and eat it. Universities, at least in the UK, are struggling financially: https://www.theguardian.com/commentisfree/2024/mar/29/britain-universities-freefall-saving-them-funding-international-students
I can imagine their argument to be ‘we can’t afford it, that’s what we pay $$$ to publishers for’. Publishers should be embracing challenges like this, which differentiate them from preprints and – at some point in the future – AI reviewing of manuscripts.

Our checker (SciScore) costs between 1 and 2 bucks a run for our integrity checks (given fairly large subscription volume). If that is a lot of pressure on the bottom line, I might question the financial soundness of the journal.

How much time does it take a human to read and interpret each result, communicate them to the authors and their institutions, review any revisions or respond to any follow-up questions? As I wrote recently, “time spent by human editorial staff is the most expensive part of the publishing process” (https://scholarlykitchen.sspnet.org/2024/03/20/the-latest-crisis-is-the-research-literature-overrun-with-chatgpt-and-llm-generated-articles/).

Sure, there are the direct financial and human resources needed to perform the checks, but that’s exactly why the sampling approach suggested by Richard makes total sense. Furthermore, given that sampling is one of the fundamental building blocks of science, it should work, I think.

I’m not sure sampling is the answer here. It can serve some purposes, perhaps give a sense of the frequency of the problem, but as a journal reader, I want every paper in the journal to have been verified as much as is possible. We know, at least from retraction frequency and how often fraud is found, that it still happens in a fairly small minority of the literature (10K retractions from 5M articles is 0.2%). But the problem is that each of those small number of fraudulent articles can cause significant reputational damage to a journal (how likely are you to send your next paper to the rat testes journal?), and more importantly, significant public harm (how many people have died because of Andrew Wakefield’s fraudulent papers?). Sampling won’t stop those from slipping through.

The suggestion of sampling checks is nonsense. Sampling applies to products produced by always following the same process. Articles are written by many different human beings.

Let me see if I can make sense of “sampling”.

Tax returns are unique and prepared by individuals, and tax authorities do not have the resources to fully examine every tax return. The solution is to “sample” and audit a small percentage of returns. This has an impact on the quality of *all* tax returns because everybody has a chance of being audited and therefore takes greater care with their returns.

So, even though sampling does not check every item it has an impact on every item.

Scholarly manuscripts are not unique artworks being judged for their subjective merit.
Many aspects of research reporting (manuscript publication) are relatively standard. Do the references support the assertions made? Have images been manipulated? Are the results statistically significant? Etc. Some checks can be automated at relatively low cost, however other checks (e.g. forensic analysis of data manipulation) are much more expensive. That’s where sampling could be a useful technique.

There is a tendency in scholarly publishing to rule out any solution that is not 100% perfect on the unspoken assumption that the current methods are doing just fine. But is that still a valid assumption?

“Sampling” is obviously not the answer to all quality problems in research evaluation, but it is a relatively standard technique that could help improve the quality of scholarly publishers’ offering.

In my experience with systematic image screening at The Journal of Cell Biology over a dozen years, even universal screening was not a deterrent to authors. The number of manipulations that we detected (both those that affected the interpretation of the data, and those that did not) was remarkably consistent over that time, despite the fact that authors knew that their images would be screened if their manuscript was accepted for publication. Thus, I do not think that sampling will be a deterrent, and the only way to properly address this problem is with universal screening.

To me the real lever is with the funders. Require any paper, preprint, or other research output submitted in a grant progress report to come with research integrity certification from a qualified 3rd party. Threaten to cut off the money and the institutions will jump.

An important part of any certification process would be disclosure of the effectiveness of the 3rd party’s process/tool relative to an industry standard.

I’d think you’d create a set of standards/criteria that companies (and publishers) could use to apply for approval from the agency (much like the idea of what’s required to be a “qualified” repository for funded research data). Perhaps a standard output of the process, a report on what was run and the results, could become part of the paper.

Thank you, Angela, for sharing this. Beautifully presented.

In India, institutions have started implementing plagiarism detection tools, though there is a long way to go. I liked the idea of making institutions share responsibility for research integrity. I believe workshops/seminars with authors on research integrity will help increase awareness. Transparency in the publisher’s manuscript evaluation process will also educate authors on the various steps involved.

Thank you for sharing your thoughts. You’ve highlighted a significant trend: the investment in ‘integrity check’ technology is predominantly by developers of proprietary tools. This trend suggests a move towards a future where access and control of these tools are tightly held, potentially leading to a new era of dependency on specific vendors. ‘Trust’ becomes a result of a proprietary toolchain.

There’s a critical need for open-source solutions in the realm of research documentation. Such tools could inherently address many integrity concerns *further down the research lifecycle* while remaining free and accessible to the research community. This shift could significantly alter the landscape, challenging the traditional role of publishers and potentially reshaping the academic publishing industry.

In the current scenario, the focus of proprietary tool developers on addressing ‘last-mile’ integrity checks underscores a gap that the open infrastructure community must urgently address. It’s a pivotal moment for the community to advocate for and develop open-source solutions that are *in the hands of researchers* to ensure the long-term integrity and accessibility of research outcomes.

Thank you for this typically thoughtful and well-reasoned piece. The question of where the line for responsibility for ensuring basic research integrity standards should be drawn surfaced at a meeting I was at last Fall. As Angela says, currently publishers are seen to be responsible for detecting and dealing with breaches of community standards at submission and during the editorial process. Whilst they definitely have an important role to play, the current situation is unbalanced. Many participants at the session from the above-mentioned Fall meeting were of the opinion that the balance point needs to be moved further upstream and earlier in the research cycle so that funders, institutions and research groups take a greater share of responsibility for ensuring that inexperienced authors are better supported and bad actors are weeded out earlier on in the research process. The real question of course is how to accomplish this and what sanctions should be placed on bad actors when a case is proven. Clearly another area in which publishers, funders, and institutions need to come together.

The first request from an institution for access to the Papermill Alarm turned out, on inspection, to come from a legitimate representative of the institution – a full professor. Inspection of his publications showed a long history of quite blatant image manipulation. I suspect he was running a mill. Now every time an institution contacts me I wonder if they want to deal with fraud or just make it harder to find. Trust is hard to automate.

I took a call once from an institution who had faculty embroiled in multiple retractions. After the introductions, their first question was “how do we make this go away?”. On explaining that the retractions were a fait accompli, they asked if we would republish the same papers in another journal, as they “needed the numbers”. There were no queries about the actual problems with the papers, no denial of wrongdoing (or pleas of innocence), no attempt to reassure us of action taken (we naively thought they might be seeking advice on how to train their researchers).

As much as I know we all need to work together to solve these complex problems, it is incredibly frustrating to come up against the (at best) wall of silence from universities and funders on this. So how do we motivate / incentivise them to act? What worked for journals / publishers was being publicly called out on social media and PubPeer. Could that same tactic work for institutions and funders?

Name and shame? Maybe. Look how quickly certain cancer centers responded when a slew of papers were questioned at once. Within two weeks of a blog post, 13 requests for retractions were sent to journals with a few dozen other corrections.

I do think this can be a collaborative effort. But also seems that tech solutions are super close and could easily facilitate review and certification.

That is an important question that I have somewhat chosen to ignore. I envision a system where works are submitted to a service that conducts all kinds of integrity checks and sends the report to the author and someone at the institution. Fixes are made where possible. Once the paper is ready to submit, a validation code and report are submitted with the manuscript to a journal that can see what was checked. I guess this could be a “trust but verify” approach.

What a clear and bold idea. Thanks for writing this.
I’m partial to @Heike Riegler’s idea of the “3 good papers rule”. But assuming things continue on their current path, I am wondering how this might play with the growing use and creation of preprints.
If the researchers’ institutions take an active hand in quality control, and many outputs are preprinted, what exactly is left for journals to do other than curate?
Now curation is not trivial, and it serves an important place in our attention economy where content is only growing. But this seems like putting journals on a path to reduced relevance. If a preprint server has a robust advanced search interface and supports user tagging, why would anyone read a journal if there was a quality check done by the institution?

Perhaps I’m missing something.

Ah, you said “quality check” and I am talking about integrity checks. The traditional peer review process with experts and peers determining if a paper is good, important, and novel cannot be done by the institution. The conflict there is not manageable.

Is it just me, or do other people feel that the push by funders and academic institutions to include papers in preprint servers and institutional repositories is only going to make this problem worse? Are the servers and IRs going to run their own ethics checks? Or are we just going to further pollute the record of science?

“The bottom line is that journals, with their volunteer editors and reviewers and non-subject-matter-expert staff, are not equipped to police the world’s scientific enterprise.”

What a pathetic excuse. The bottom line is that publishers do not want to use part of their enormous profit and pay editors and peer-reviewers to do a better job. For example, employ a number of statistical consultants instead of paying huge dividends to shareholders.

I would suggest you are confirming Angela’s point in this piece. Universities/Institutions and The Gates Foundation, Wellcome Trust, HHMI, et al who sit gloatingly on mountains of billions of gold should be the ones funding and supplying the integrity checks on the research on which their names are attached.

When we have questioned a paper for malfeasance, we go to the institution where the possible malfeasance occurred and say–this is on you to investigate. The potentially corrupt act took place in your house. Let us know what you find out.

Not sure I understand how on earth I am confirming Angela’s point in this piece :)

It is well known that publishers have enormous profits: “Elsevier operates at a 37% reported operating profit margin compared to Springer Nature which operates at a 23% margin.”

This means they can use part of that money to do a better quality check before they decide to publish something. It’s that simple: use part of your extreme profits and don’t hide behind the volunteers who do quality checks for you. Don’t cry; use part of that money to check what you publish. Pay people to do that for you; don’t rely on free labor.

It’s also well known that the institutions and funders have mountains of money. Shouldn’t they be funding the verification of what they are paying for?

Well, most universities are funded by public money and are nonprofit entities. Unlike publishers. Universities hardly have extra money to spare. Unlike publishers that are making extreme profits.

Anyway, Angela, you or whoever are free to argue that universities should take their part of the responsibility regarding this issue, I have no problem with that. However, please do not hide behind the work of volunteers and please do not cry that you have no funds to do more, because you have. As I said in my first comment, it is just an excuse.

https://en.wikipedia.org/wiki/List_of_colleges_and_universities_in_the_United_States_by_endowment
The National Association of College and University Business Officers (NACUBO) maintains information on endowments at U.S. higher education institutions by fiscal year (FY). As of FY2023, the total endowment market value of U.S. institutions stood at $839.090 billion, with an average across all institutions of $1.215 billion and a median of $215.682 million.

It’s also perhaps worth considering the unintended consequences that might arise in a market where only the wealthiest and most profitable publishers can do the sorts of integrity checks needed to retain trust in the literature (no, not every publisher has the same profit margin as Elsevier). Should integrity be the privilege of the wealthy, and does that then lead to (even) further market consolidation and further entrenchment of the largest for-profit publishers?

For some reason I cannot reply to David’s comment so I will do that here.

David wrote: “It’s also perhaps worth considering the unintended consequences that might arise in a market where only the wealthiest and most profitable publishers can do the sorts of integrity checks needed to retain trust in the literature (no, not every publisher has the same profit margin as Elsevier). Should integrity be the privilege of the wealthy, and does that then lead to (even) further market consolidation and further entrenchment of the largest for-profit publishers?”

If I understand David correctly, he is basically saying that the most profitable publishers do not want to use their extra profit to improve their product because they are afraid that would put those less profitable publishers in difficulties because these cannot invest that much money in improving their product. So by not investing part of their extra profit in quality control, these publishers are actually doing a favor to all scientists and science in general. If so, I can only laugh at this. This would be in my opinion by far the worst explanation about why publishers retain their huge profits.

You do not understand David correctly.

I am basically saying that if you create a tiered publication system where only the more profitable publishers can do the checks needed to make the research credible, then you will end up with only the more profitable publishers. I’d rather see a robust ecosystem filled with non-profit publishers running at small margins, not to mention university presses and library publishing done by universities. So a solution that relies on the enormous profits of commercial publishers is one that would lead to unintended consequences.

If you can afford to fund the research, or afford to do the research, or afford to publish the research, you can afford to do the integrity checks and have an obligation to do them.

“If you can afford to fund the research, or afford to do the research, or afford to publish the research, you can afford to do the integrity checks and have an obligation to do them.”

That has nothing to do with my initial comment. You are free to argue that, and I have no problem with it. I am just saying it is pathetic to hide behind the work of volunteers and cry that publishers do not have funds to do more, because they have. Publishers should stop whining and invest part of their huge profits in order to improve their product.

David writes:
“You do not understand David correctly.

I am basically saying that if you create a tiered publication system where only the more profitable publishers can do the checks needed to make the research credible, then you will end up with only the more profitable publishers. I’d rather see a robust ecosystem filled with non-profit publishers running at small margins, not to mention university presses and library publishing done by universities. So a solution that relies on the enormous profits of commercial publishers is one that would lead to unintended consequences.”

It seems I did understand you correctly. You again state that those who can do something don’t do it because they are sympathetic towards those who have difficulties doing it. They do not want to improve their product because others cannot improve their product. This kind of altruistic and empathetic attitude would be a first in any free market economy.

Again, no, you clearly do not understand what I’m trying to say. It has nothing to do with improving a product or wanting or not wanting to do anything, or altruism or empathy. I’m saying that you are mistaken in assuming all publishers make significant profits and have high margins, and that if your solution requires significant profits and high margins, then you will exclude from your solution many non-profit, university presses, and organizations publishing outside of the sciences. As has been the case with many other well-intentioned attempts to reform science publishing, you are proposing something that will further entrench the most profitable companies to the detriment of those that run on very low margins.

David writes:
“Again, no, you clearly do not understand what I’m trying to say. It has nothing to do with improving a product or wanting or not wanting to do anything, or altruism or empathy. I’m saying that you are mistaken in assuming all publishers make significant profits and have high margins, and that if your solution requires significant profits and high margins, then you will exclude from your solution many non-profit, university presses, and organizations publishing outside of the sciences. As has been the case with many other well-intentioned attempts to reform science publishing, you are proposing something that will further entrench the most profitable companies to the detriment of those that run on very low margins.”

I am saying that publishers should stop whining and hiding behind volunteers who do quality checks for them. If publishers care about their product, they should invest part of their huge profits and try to improve their product. It is as simple as that. Nobody and nothing is stopping them from doing that. It is pathetic to cry and whine “we have no funds to do it” when we all know they have. And please don’t worry about those publishers that run on very low margins; many scientists, including myself, are willing to review papers submitted to journals run by non-profit organizations. They will survive.

David writes:

“Thanks for explaining what ‘we all know.’ ”

Yes, we all know that “Elsevier operates at a 37% reported operating profit margin compared to Springer Nature which operates at a 23% margin.” and that they can do more, unless there is a law stopping them from investing more money to improve their product…

And we all know that Elsevier and Springer Nature are the only publishers that exist. I think I’m done with this conversation.

Agree. As I argued in the Chronicle of Higher Education a few months ago (https://www.chronicle.com/article/how-to-stop-academic-fraudsters), universities should institute data chain-of-custody systems to certify that their researchers are not committing fraud. This would save many universities, including prestigious ones like Harvard and Duke, a lot of trouble, such as lawsuits and 1,200-page investigative reports (https://www.chronicle.com/article/heres-the-unsealed-report-showing-how-harvard-concluded-that-a-dishonesty-expert-committed-misconduct).

Hi Angela – thanks for this piece, very thought provoking.

The way I see it, journals & publishers are expected to tackle research misconduct chiefly because they’re the only stakeholder that has the opportunity: they alone have a pipeline where the vast majority of unpublished articles are gathered and evaluated before they become public.

Even if institutions and funding agencies were highly motivated to improve research integrity, they have no clue what their researchers are doing. None. Institutions and funding agencies only find out about the articles their researchers publish months or years after they appear, by which time it is far too late to address integrity issues.

The only way I can see to resolve this fundamental misalignment of opportunities and motivations would be for publishers to see this as a new line of business: establish a presubmission platform for checking research integrity issues (c.f. Morressier), and charge these stakeholders to a) ensure compliance with funders and institutions policies, and b) catch research integrity issues before the article makes it into the public sphere.

Excellent. I love the bold opening statement. While I tend to agree that institutions could/should be tasked with providing some kind of Research Integrity (or Researcher Integrity), I see that this too can be challenging for many institutions that lack the resources (human and financial) as well as the knowledge.
💡 Perhaps authors could be requested to agree to their paper being published in a preprint server dedicated to “Rejected submissions” along with the reasons for the rejection. This in theory would make them think twice before submitting the product of malpractice.
What do you think?

Provocative essay and lots of insightful comments. As one who works in a government science agency with elaborate procedures for pre-submission review and approvals on manuscripts and the datasets*, I have some perspective on why mandating institutional research integrity checks across the scholarly publishing domain would be ineffective. Fundamentally, few institutions or authors would have the wherewithal and tolerance for a rigorous and consistent system, and it would become just another box-ticking exercise. Second, willful fraud, such as the subtle image manipulations that have roiled the biomedical literature, is hard even for co-authors to detect.

To me, more liberal Expressions of Concern or moderated article comments by publishers seem to be the most tractable improvement that could be made without increased financial burdens that would get passed on to authors. Nonjudgmental EoCs such as “Readers are advised that questions of image/data provenance etc. have been raised on this article. These comments and author responses may be viewed at….” would be something, especially if the situation isn’t clear cut. Automated linkages of PubPeer comments back to the article could be done, and some journals, e.g., PLOS, allow comments.

*6 sign-offs, including 2 peer reviews for the manuscript, and 4 sign-offs for the accompanying datasets, including peer reviews of the metadata and audits of the data. This is just for permission to advance to ‘Start’ with the journal.

I like what Holden Thorp had to say — journals should just retract articles that they can no longer vouch for, and leave it to the institutions to investigate whether there was any unethical behavior on the part of the authors. This would result in largely unsatisfying retraction notices, but would relieve the investigative burden from journals.
https://www.science.org/doi/10.1126/science.ade3742

But even reaching the decision that you can no longer vouch for the data requires some form of investigation. There’s no way to completely pass the buck to another stakeholder for screening data or evaluating data anomalies, nor should it be passed. In my opinion, all stakeholders should be screening data and evaluating data anomalies ─ before funding (funders), before submission (institutions), and before publication (publishers). This is like the Swiss cheese model of COVID prevention that we all became familiar with. There will be holes in the checks at every stage, but, if there are more stages, there will be less falsified data in the published literature.

Formal investigations of potential unethical behavior still have to be carried out by the institutions. Having multiple stakeholders screening data does not necessarily reduce the number of these investigations (in fact, it may increase them); it just shifts them to different stages of the research process. But at least the published literature will be cleaner.

Agreed. I still like the idea of funders requiring (and paying for) any paper that is submitted on a progress report for a grant to have some level of anti-fraud certification.

And yes, just weighing papers based on the validity of the contents (and not worrying if it’s an honest mistake or fraud) still takes time and effort, but it’s a lot less time and effort. No back and forth with authors, no reaching out to their institution, etc. Just, is this right (or can we stand behind what’s reported here)? If not, it’s retracted, we don’t care why/how the incorrect information got in there, it’s incorrect, end of story.

Most of the unethical issues are caused by the journals because most of them give peer reviewers only a short time to work. In the same vein, most of the reviewers’ comments are not taken on board.

And why are peer reviewers given short turnaround times? Because authors demand rapid publication. Blaming everything on publishers without actually examining the problem carefully is exactly where APCs came from.

This interesting discussion makes me wonder about unintended consequences of shifting and evolving responsibilities. When industry shifts, it tends to generate outcomes that differ from original intentions. Hypothetically, if institutions and funders take on greater roles in pre-publication research integrity, what could that do to the balances of roles in the scholarly enterprise where more tends to beget more? Someone is going to benefit in unintended ways, and someone is not. Scholarly publishing growth is driven by odd incentives. I wonder how shifting balances may seemingly address some quirks while also leading to unintended consequences in who does what and who benefits the most – like we see in other areas, like OA, and benefits of scale.

Your well-written piece was labelled “very provocative” by the person whose Tweet link I followed. Having read it, I think she needs to take something to reduce the reactivity. You precisely describe the problem we face as editors and the nonsense of expecting us to police integrity with our volunteer resources. But as a tenured academic I also see that the universities can’t do this either. I like the sensible suggestions in the thread on doable checks and transparency about what we can and can’t do, but most of all the role of funders. This has to be a shared enterprise and they have the leverage others lack.

Reputable news publications make a lot of effort to verify their sources and publish pieces by verified journalists with a track record in quality journalism. In return they expect readers to pay for a subscription, or if ‘read for free’, rely on other sources of revenue such as advertising.

Could journals not act in a similar way? Pushing the problem onto the author or the institution (who will not be independent) doesn’t seem like a viable solution to me.

Ultimately the publisher should be responsible for what they publish and the integrity of it.
