Editor’s Note: Today’s post is by Natalie Simon, a communications consultant specializing in research and higher education. Natalie has also provided communications support to the 8th World Conference on Research Integrity.

In an era of large data sets, statistics is a vital part of the research process, but are researchers equipped with the skills needed to effectively use statistical analysis tools?

Certainly not, according to the latest book by Klaas Sijtsma, Never Waste a Good Crisis. In it, Sijtsma draws on his vast professional experience both as a statistician (a psychometrician, to be exact) and as a university administrator. He served as Dean of the Faculty of Social and Behavioral Sciences at Tilburg University during the so-called Stapel Affair, one of the most well-known cases of research misconduct in Europe.

Sijtsma participated in the 8th World Conference on Research Integrity in Athens in June, and I had the chance to chat with him about his book.

[Image: Drawing of small people examining numerical and statistical reports on a giant clipboard]

He wrote the book in response to a growing sense of crisis in the academic community around the integrity and reproducibility of published research. Earlier this year Nature reported a new record in retracted papers: more than 10,000 in 2023. Academic papers are retracted when fraud or plagiarism is detected, or when there are problems with the data or methodology. A retraction can be due to a perfectly innocent mistake and does not automatically imply research misconduct.

“Like everyone, I distinguish between deliberate fabrication, falsification, and plagiarism on the one hand, and so-called questionable research practices on the other,” says Sijtsma. But he offers an alternative, and perhaps more obvious, diagnosis of the cause of the current high levels of questionable research practices, sometimes called ‘sloppy science’, a term many dislike as patronizing.

The issue, he argues, is the high number of researchers who are mostly left to figure out statistics for themselves.

“Statistics is not their trade,” he says. “I spent much of my career at various faculties of social and behavioral sciences, so I know quite well what level of statistical training most researchers in these fields get, and it is not very much: usually a few introductory courses in methodology and statistics, and then perhaps one more advanced course later in their study, but that is about it.”

Statistics as an independent and complex field

The trouble with this approach is that statistics is a very complex field. Statisticians study for many years and then dedicate their careers to grappling with the issues that emerge from large data sets.

“More often than not, data are extremely complex,” explains Sijtsma, “and they grow even more complex as data sets grow larger.”

“There are usually many things wrong with the data, scores are missing, or some people did not answer survey questions, and there are strange outliers in the data. Or today we see data sets collected through wearable tech for instance, or smartphones gathering data every couple of hours a day.”

These data sets are never the tidy textbook examples researchers encountered in their statistical training. Suddenly they are faced with having to analyze very complex data using methods they have never heard of, with software that simply lets them run the analyses without a proper understanding of what they are doing.
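As a small, entirely hypothetical illustration of the trap described here, the sketch below uses made-up survey scores in which the collection software coded missing answers as 999, a convention the analyst has to know about. The software will happily compute a summary either way:

```python
import statistics

# Hypothetical survey responses on a 1-10 scale.
# The (assumed) collection tool wrote 999 wherever a respondent
# skipped the question -- easy to miss in a large export.
scores = [7, 6, 8, 5, 999, 7, 6, 999, 8, 7]

# The naive analysis treats the missing-value codes as real data.
naive_mean = statistics.mean(scores)

# The careful analysis drops the codes first.
cleaned = [s for s in scores if s != 999]
cleaned_mean = statistics.mean(cleaned)

print(f"naive mean:   {naive_mean:.2f}")    # wildly inflated by the 999s
print(f"cleaned mean: {cleaned_mean:.2f}")  # the plausible answer
```

Nothing in the software flags the first call as wrong; only the analyst’s understanding of how the data were coded catches the error, which is exactly the kind of gap Sijtsma is pointing to.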

This situation is simply asking for trouble, says Sijtsma.

He describes a common situation in which researchers may seek help from a statistician, but, underestimating the complexity of their problem, find the consultation frustrating and unhelpful, and ignore the feedback of the statistician.

“It is a common joke among the research community that you go to a statistician with a problem, and after speaking to them for an hour your problem is even bigger than it was before,” says Sijtsma. But the issue is serious. Statistical support is usually not quick and easy. Often the statistician needs time to consult with colleagues and think over the issues involved. But too often researchers don’t have the patience for that; they want a quick answer and an easy solution.

Reproducibility crisis and statistics

Professor John Ioannidis, who gave the Steneck-Meyer lecture at the 8th WCRI, offers some of the most dire warnings about research reproducibility and false, or at least unreproducible, findings in the published literature. One of his most cited papers, an essay titled “Why Most Published Research Findings Are False,” argues that certain characteristics of a study, such as its size, the effect size, or the level of flexibility in designs, definitions, or outcomes, contribute to whether or not a finding is likely to be false. Ioannidis states that “for most study designs or settings, it is more likely for a research claim to be false than true.”

Sijtsma believes that while there are certainly fraudulent researchers out there, they are unlikely to be the majority. Most researchers struggle with statistics to the best of their ability, but cannot avoid making errors that can be very serious.

“The people who work in research integrity come up with a range of different terms for this, they call it HARKing, p-hacking, selection bias, and so on,” he says. “But I believe the simplest answer is staring us in the face. We are asking people to do things [statistical analysis] they are not trained for.”
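One of the practices mentioned above, p-hacking across many outcomes, can be demonstrated with a few lines of simulation. This is an illustrative sketch, not an example from the book; it relies only on the standard fact that p-values are uniformly distributed on [0, 1] when the null hypothesis is true:

```python
import random

random.seed(42)

# Simulate a hypothetical researcher who measures 20 unrelated outcomes
# where no real effect exists, and reports a "finding" if any single
# p-value dips below the conventional 0.05 threshold.
def study_finds_something(n_outcomes=20, alpha=0.05):
    # Under the null, each p-value is a uniform draw on [0, 1].
    return any(random.random() < alpha for _ in range(n_outcomes))

n_studies = 10_000
false_findings = sum(study_finds_something() for _ in range(n_studies))
rate = false_findings / n_studies

print(f"Studies with at least one 'significant' result: {rate:.0%}")
# Theory predicts 1 - 0.95**20, about 64%, even though every effect is zero.
```

No fabrication is needed to get this failure rate; it is purely a consequence of testing many outcomes and reporting only the one that "worked."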

When asked why one does not hear pushback from researchers themselves over their lack of training in statistics, Sijtsma responds that it is difficult to know you are in a difficult situation when everyone around you is in the same boat.

“Suppose as an experiment we trained half our students not only as able psychologists but also equipped them adequately with statistics training, and gave the other half the standard training we have been giving up to now,” Sijtsma says. “It would not take long for the second group to realize they were missing something essential, because they would notice that a lot of people around them have skills they lack. But if everyone is in the same situation, you simply don’t notice.”

What to do about it

While it is probably not feasible to provide comprehensive statistical training to all budding researchers, Sijtsma does offer some potential solutions.

The first is the research integrity gold standard: complete transparency.

“We could already improve things if we live up to the old principle of science, that the scientific enterprise is a public affair. All the work we do should be made transparent and accessible to everyone.”

This means making accessible not only the scientific paper, or findings, but also the data and any software or tools used to analyze them. The first step toward research with greater credibility is making the various elements of the research available so that another researcher can reproduce the statistical results in the first place, and can also run the same study again with newly collected data. This is replication research.

“If researchers are refusing to make their research details and data available, they are simply asking the scientific community to take their word for the results, to believe their results just because they say so, and that is not science.”

Preregistration is also an important element in making science transparent. It is a process through which you register your research plans, methodology, and so on in a publicly available repository, where they are timestamped. This allows your colleagues to compare what you planned to do with the final work you published. Sijtsma acknowledges that research plans of course change along the way, and there is room for this in preregistration. But the methodology should not be changed after the researcher has seen the results, adapting it to the data in a way that falsely suggests a positive finding.

“The second proposed solution is very simple,” says Sijtsma. “If as a researcher you feel like you are in over your head with the statistical analysis, don’t just push forward, but stop and seek advice from a trained statistician.”

The essence of Sijtsma’s recommendations in Never Waste a Good Crisis is this: acknowledge that science is a public enterprise and practice complete transparency, and admit you cannot be an expert at everything, so ask a statistician for help when needed. Although many researchers embraced these guidelines long before research integrity became a hot topic, broader acceptance requires nothing less than a culture change in some quarters. This will take time and perseverance from everybody.
