Editor’s note: Today’s post is by Michal Ordak, Assistant Professor at the Department of Pharmacotherapy and Pharmaceutical Care at the Medical University of Warsaw.

The pressure to achieve “statistical significance” has become more than a technical hurdle — it is a psychological burden that shapes research behavior, as others in The Scholarly Kitchen have previously observed. Drawing on 10 years as a statistical editor, I reflect on how this culture distorts both science and scientists, and why an underused resource may hold part of the solution.

During the last decade of my work as a statistical editor for several medical journals, I have frequently encountered authors who were worried that their results were “not significant.” They feared that such a situation would determine the fate of their article, and sometimes even their career. Requests to remove outliers, apply a different test, or “adjust” the analysis did not stem from a lack of knowledge but from fear that the absence of significance meant the absence of a future.

Over time, I noticed that this fear was growing. Instead of a discussion about methods, I received justifications such as: “I know this is not ideal, but at least it gave a significant result.” Often, when I recommended corrections in line with good practices, authors replied that “perhaps they would apply them in the future,” but not now — because it would change their “nice” results. This was no longer merely a technical issue.

[Image: a person's face showing frustration and pain over data analysis, with digital images of data overlaying the photo.]

The pursuit of significance has become a psychological burden, shaping researchers’ decisions, distorting their judgment, and undermining their sense of security in the academic world. And when the fear of losing a “nice result” begins to outweigh methodological rigor, it is not only the author who suffers, but also the quality of the published literature upon which we develop theory and knowledge. Science loses twice: the results are less reliable, and those who produce them bear additional psychological costs. I wrote about this more extensively in European Science Editing, noting that the pursuit of significance has become a psychological burden for many researchers. Another example can be found in this post in the Pensoft Blog.

SAMPL Guidelines Offer a Solution

To address these challenges, the SAMPL guidelines (Statistical Analyses and Methods in the Published Literature) offer a particularly important framework and potential solution for researchers working with statistical data. The SAMPL guidelines are practical, basic principles of statistical reporting developed for authors and reviewers. Their implementation is recommended by the World Association of Medical Editors (WAME), an organization representing more than 1,000 journals from 92 countries. And yet, in practice, these very basics are most often neglected.
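To make this concrete, here is a minimal sketch, with simulated data, of the kind of reporting SAMPL encourages for a simple two-group comparison: name the test, give the effect estimate with a confidence interval, and report the exact p-value rather than "p < 0.05" alone. The example is mine, not taken from the SAMPL document, and the numbers are illustrative.

```python
# Sketch of SAMPL-style reporting for a two-group comparison
# (simulated data; the specific values are illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(5.0, 1.2, size=40)  # simulated control measurements
group_b = rng.normal(5.4, 1.2, size=40)  # simulated treatment measurements

# Welch's t-test: does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(group_b, group_a, equal_var=False)

# Effect estimate: difference in means with a 95% confidence interval.
diff = group_b.mean() - group_a.mean()
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)

# Welch-Satterthwaite degrees of freedom for the interval.
df = (va + vb) ** 2 / (va**2 / (len(group_a) - 1) + vb**2 / (len(group_b) - 1))
t_crit = stats.t.ppf(0.975, df)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

# Report the full picture, whether or not p crosses 0.05.
print(f"Mean difference = {diff:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}); "
      f"Welch t({df:.1f}) = {t_stat:.2f}, p = {p_value:.3f}")
```

The point is not the code itself but the shape of the output: a named test, an effect size with an interval conveying uncertainty, and an exact p-value — information that remains equally reportable when the result is "not significant."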

In my most recent study, which included 150 statistical reviews for clinical journals, I examined what happens when authors apply SAMPL at the manuscript preparation stage. The results were clear: more acceptances, fewer rejections due to methodological errors, faster editorial decisions, and fewer disputes between reviewers. In other words, SAMPL saves time, stress, and effort for all participants in the process: authors, reviewers, and editors.

Here arises the paradox: if SAMPL works, why is it still so rarely applied? A survey I conducted among editors within WAME showed that only about 5% of journals actually use these recommendations. The reasons are repetitive: a lack of specialized reviewers, limited financial resources, and the belief that “it is enough to mention the guidelines in the instructions for authors.” But this is not enough.

Recommendations for Publishers

Without active implementation, SAMPL remains a dead document, and statistical errors continue to appear decade after decade in the same form. From the combination of these two observations – the psychological burden of significance and the neglect of SAMPL – important lessons emerge for publishers and editors.

  1. First, journals can reduce the pressure on “significance” if they clearly communicate to authors that methodological rigor matters more than the p-value.

Incorporating the SAMPL guidelines into journal instructions for authors is a foundational step toward improving statistical reporting. Making these guidelines visible and accessible helps authors understand expectations and encourages adherence to best practices. By including SAMPL in their instructions, journals demonstrate a commitment to raising editorial standards and supporting transparent research. Editorial boards themselves should also be familiar with these guidelines to ensure consistency in applying them during manuscript review.

  2. Second, the SAMPL guidelines help authors maintain transparency and reduce the fear that poorly described statistics will undermine their work.

Authors should be asked to declare in their cover letters that they have read and applied the SAMPL guidelines. This declaration can prevent common statistical errors from the very beginning of the submission process. Such a requirement would highlight the responsibility of authors to follow core statistical principles. Journals might even decide not to send manuscripts for peer review if the declaration is missing, thereby emphasizing the central role of statistical rigor. As noted in earlier literature, cover letters are meant to provide essential information, making them a natural place for this statement.

  3. Third, consistent application of SAMPL also relieves editors themselves, which means fewer ambiguous reviews, faster decisions, and a fairer process.

Another key recommendation is to promote awareness and expertise in the use of SAMPL guidelines by organizing online training sessions for editorial boards. These sessions could focus on recurring statistical problems, provide practical strategies to address them, and showcase examples from leading journals. Training helps editors enforce the SAMPL guidelines more effectively, ensuring manuscripts are held to the same standard. Online meetings also allow editors to discuss challenges in implementation and build consensus on how best to evaluate statistical analyses.

Implementation should also extend to broader advocacy at international scientific events. Conferences such as the World Conference on Research Integrity and the International Congress on Peer Review and Scientific Publication offer ideal platforms to promote SAMPL. Presenting the rationale and benefits of the guidelines at these meetings can stimulate discussion among editors, reviewers, and researchers worldwide. Such advocacy can increase adoption across disciplines and strengthen the global culture of high-quality statistical reporting.

Finally, journals are encouraged to go beyond a passive mention of SAMPL and move towards active engagement. This includes requiring compliance with the guidelines for all submissions, providing authors with illustrative examples of common mistakes and their corrections, and reminding reviewers to explicitly reference SAMPL when evaluating manuscripts. These steps ensure that the guidelines are not only acknowledged but also actively shape editorial practice. Collectively, these recommendations aim to embed statistical integrity into the publication process, improving the clarity, reliability, and impact of biomedical research.

Conclusion

The pressure for “statistical significance” is not a minor methodological issue. It is a psychological weight that influences researchers’ decisions, distorts their judgment, and undermines their sense of security in academia. And yet, solutions exist. The SAMPL guidelines show that statistics can be reported simply, transparently, and honestly. Implementing such principles does not require a revolution – it only requires the courage to shift priorities: from asking “is the result significant?” to “is the result reported rigorously?”

If science is to be both honest and healthy, we must accept that statistically non-significant results are just as much a part of reality as those that “crossed the magical threshold of p < 0.05.” And we must learn to report them with the same care and clarity. Only then will significance stop hurting.

Michal Ordak

Michał Ordak is an Assistant Professor at the Department of Pharmacotherapy and Pharmaceutical Care at the Medical University of Warsaw. He serves as a statistical editor for several medical journals and publishes recommendations for authors and editorial board members. His main mission is to improve the quality of statistical reporting, a message he also emphasizes in his presentations at numerous scientific events.

Discussion

1 Thought on "Guest Post — When Significance Hurts: What the SAMPL Guidelines Can Teach Us"

Dr. Ordak; Thank you for your comments on the SAMPL guidelines. I wish Doug were alive to appreciate your praise. When I became interested in improving statistical reporting, his research was the first I found in the literature. When I, as an unknown medical editor, asked for his help with the guidelines, he immediately and graciously accepted. His work on the CONSORT and other guideline initiatives was of course substantial. He is sorely missed.

Tom Lang
Author, the SAMPL Guidelines
Author, How to Report Statistics in Medicine
