Editor’s Note: Today’s post is by Danny Kingsley. Danny is a thought leader in the international scholarly communication space. She works as a consultant and is a Visiting Fellow at the Australian National Centre for the Public Awareness of Science.
There’s been a great deal of movement around Open Research recently. UNESCO adopted its Recommendation on Open Science in 2021. In April 2022, Finland published its Policy for Open Scholarship. In November, Ireland launched its National Action Plan for Open Research, and Romania released its White Paper on the Transition to Open Science 2023-2030. In December, Ukraine announced its National Open Science Action Plan – even in the middle of war. And in the US, the Office of Science and Technology Policy has just announced 2023 to be the ‘Year of Open Science’.
Much of the reasoning behind this focus on Open Research is the need to address the reproducibility crisis – Retraction Watch reported nearly 5,000 retractions in 2022. Managing the integrity of the scientific record represents a significant (and growing) cost to many players in the scholarly communication process. Recent detective work identifying a ‘Multimillion-dollar trade in paper authorship’ has highlighted some of the effort publishers must now undertake to retract publications linked to this practice.
Something is rotten in the state of research. And while retracting work that is fraudulent or incorrect is very important, it is expensive and too late. We should be nipping the problem in the bud much earlier – back with the training researchers receive at university. Achieving Open Research (and therefore increasing reproducibility) requires the provision of systematic research training that focuses specifically on research practice.
A recent preprint, “The Australian academic STEMM workplace post-COVID: a picture of disarray” paints a disturbing picture of what is happening down under. The authors surveyed over 500 early career researchers in 2022, following a similar 2019 survey. The issues the survey identified are wide-ranging, but what stands out from a reproducibility perspective is the link between poor supervision and poor research practice.
Academia works on a master/apprentice model for training. This is outdated, inefficient, and problematic. In the survey, fewer than a quarter of respondents had been part of a discussion about skills development in the past 12 months. Only 13.1% had been provided an example of appropriate research standards, and 9.1% had had no discussions at all about their career in that time.
Nearly half of the respondents said they had been impacted by poor research practices, with the ordering or inclusion/exclusion of authors the most common issues. Other issues included fabrication, falsification, selective use, or plagiarism of data. Nearly 30% of respondents thought these practices were potentially severe enough to warrant paper retraction, staff dismissal, or repayment of a grant. Concerningly, the most common source of pressure to engage in poor research practice was the respondents’ supervisors.
Questionable research practice is not restricted to Australia by any means. A recently published study of 1,336 PhD students from five European countries found that granting guest authorship to a ‘person in power’ was not uncommon, with approximately three in ten respondents indicating they had done so at least once. This practice was more common in the medical, natural, and technical sciences. The study identified that questionable authorship practices are prevalent among early career researchers, apparently reinforced “through a combination of coercive power relations and dominant norms in some research cultures, particularly in the natural, technical, and medical sciences”. And a recent study of 285 graduate students in Jordan found that only 22.5% were aware of authorship criteria and that knowledge of the different forms of scientific misconduct was lacking, as was constructive mentorship support.
Those of us working in scholarly communication are greatly impacted by these poor practices – from the need to retract papers, to the constant updating of the bibliometric record, the provision of guidelines, and the development of institutional policies and processes to manage poor practice.
Australia does not have a centralized academic integrity office despite calls for one. The Australian Academy of Science has recently established the David Vaux Research Integrity Fellowship Award to recognize individuals who have led efforts in their own or other institutions to foster and promote integrity in science. Rather than focusing only on addressing research misconduct (which of course is necessary), we need to concentrate on the source – the training of researchers. The sector needs to identify and articulate a clear framework or curriculum for training and professional development in areas of ‘research practice’ beyond disciplinary-specific skills and knowledge.
My 2008 PhD looked at the reasons for the low uptake of open access in Australia. During the in-depth interviews I conducted, I asked about participants’ experience in learning how to write academic papers, and what they were now doing to train their own students. The findings were revealing. The majority of interviewees had experienced very little training in relation to writing papers and choosing journals in which to publish. In turn, there was a wide range of positions on training their students, with many saying it was ‘easier’ if they wrote the papers themselves, and one saying, “I’m not sure if it’s my responsibility to train them to write”.
Our current system is not serving the needs of research students in relation to learning research practice. This is clearly demonstrated by the range of offerings from multiple providers that have emerged to bridge this skills gap. For example, writing papers is a ‘career-critical skill’. For over a decade, the Write a Scientific Paper (WASP) course has been offered to early career medical professionals in Malta, London, Bahrain, and now online. WASP’s ultimate objective is to “impart the faculty’s collective experience to the delegates in this crucial, early aspect of career progress.”
Beyond writing itself, a central skill in the scholarly communication process is peer review. Publishers and library staff alike will be aware of the phenomenon of peer review training sessions being over-subscribed. There have been claims that the “distinct lack of training in peer review has worsened the crisis in academic publishing”. And a recent review article discusses the need for training in peer review, suggesting a peer reviewer residency as a solution.
I subscribe to a listserv called Biomed News on Scholarly Communication, which scans biomedical papers weekly for scholarly communication issues. A recent missive included the following: Becoming an Effective Manuscript Reviewer, How to Be a Great Peer Reviewer, Peer review: improving feedback, and Serving as a reviewer. It is not unusual for the list to include this volume of papers published in the academic literature on how to *do* peer review.
Since publishers are directly affected by both the quality of peer review and the capacity of people to undertake it, it is not surprising that they are also stepping into the peer review training space. As a small example, Wiley Author Services offers Peer Review Training, PLOS has a Peer Reviewer Center, and Elsevier’s Researcher Academy covers peer review amongst other areas. Publishers training researchers in peer review makes sense given the current situation. But a PhD is supposed to train candidates to become researchers. Knowledge about peer review, copyright, research data management, open research practices, academic integrity, and the like should not be left to publishers to pick up.
It is not just researchers who need this training. Those supporting researchers and working alongside them in the Third Space also need research practice skills. One offering for this cohort comes from the Southern African Research and Innovation Management Association, which runs a Research Ethics and Integrity Online Short-Course aimed at research managers.
There are some glimmers of hope. The UK Reproducibility Network is working with Research England on a five-year Open Research Programme, which includes a Training Workstream that is building “a UKRN training syllabus and resources catalogue”. It’s a good start, but this is a sector-wide and international issue. One approach could be to invest in updating existing frameworks such as the 2011 Vitae Researcher Development Framework. Another could be to engage with the National Information Standards Organization.
The Australian preprint makes some recommendations, including extending the length of PhD training to allow time for more skill development, specifically in project management and leadership, to better fit graduates for academia or the wider scientific workforce. There is already a program in place in Australia called the Transferable Skills Toolkit, which focuses on prioritizing tasks, managing expectations, demonstrating achievement, and inspiring and managing others. These are all good skills to have, but they do not address issues of academic integrity.
Universities often refer to graduate attributes or graduate qualities. While these are admirable and important, we also need to consider ‘researcher practice attributes’ and collectively agree on the generic research practice skills needed to ensure open and reproducible research. Once a framework is broadly accepted, there is an opportunity to standardize training across multiple providers, allowing qualifications to be built through a global accreditation system.
Let’s face it: none of the players in the scholarly communication space (institutions, publishers, libraries, research offices, academies, or even the academic community itself) can solve this alone. We need to work together.
Discussion
Training is good, but there are some ideas in here that are problematic and miss the worst parts of the system.
For example, the idea of extending PhDs to allow for more training is perverse. Australian PhDs already average a duration longer than they are funded (3.5-year scholarships, 4+ year average duration), and US completion times are often 7+ years. Keeping the next generation of scholars grinding in relative poverty through more of their 20s is not a recipe for a healthy and happy workforce that can say no to bad practices.
And if there was a discussion of incentives and power, I missed it. There’s no point teaching students about the importance of doing open science or why gift authorship is bad if, when it comes to funding, we then punish them for having done so. Our system rewards publication in high impact journals rather than openness, and early career researchers need to show they are well connected to eminent researchers to improve their chances. In such a system, you can teach people all you want about doing the right thing, but when funding success rates are so poor and the criteria so perverse, you will just produce a generation of angry, unfunded open scientists going to industry and cursing the years they wasted on academia. Meanwhile, people who fake data win Canada Research Chairs.
I’m sure the author is aware of these issues and can’t solve all the world’s problems in one blog post, but I think some acknowledgment of them is important.
Thanks for your thoughts, and I agree with the point you are making. Underpinning all of the questionable research practices is the race to gather more and more papers, because that is the only ‘coin’ of value in academia. Until we change what is rewarded, it will be very difficult to overcome ‘questionable research practices’ – the polite parlance currently in use. The challenge here is the continued concentration of research infrastructure into a smaller number of companies, where institutions provide and then buy back their own data to feed into the ubiquitous university ranking systems (some of which are owned by the same companies). It’s a very, very big problem. You are correct that the blog format creates space limitations – there are at least three substantial blog posts sitting in what I have written here in this response!
My thoughts on research practice are a little different.
I support the ‘short’ PhD, because it’s a huge effort even to complete one, and it need not contain masses of extra classes [as mine did in the US over six years, and it was even longer to graduation] while on a tenuous or low income. The PhD need not be the absolute last word on a topic, either.
But on ‘research practice’ I agree that more can be done to show students how and where to write. The job of the supervisor is to do this, and certainly not to obtain a ‘gift’ authorship themselves in the process! Also, having produced my long list of ethical and reliable journals that avoid large APCs and the big commercial publishers, we start there when searching for outlets, not with ‘highest impact’ journals. Students are free to ignore this, but at least the supervisor can show there is such a thing as ‘ethical’ publishing when they close the PhD circle by publishing to a wider audience. [And they can learn to follow the DORA declaration when they themselves come to assess the work of others, later in their careers.] https://simonbatterbury.wordpress.com/2015/10/25/list-of-decent-open-access-journals/
Thanks Simon for providing that resource; it’s very useful. However, the argument I am making is that, whatever we *perceive* the job of the supervisor to be in relation to research practice training, it is clearly not happening at any kind of scale. Many supervisors would themselves not be cognizant of the current debates, techniques, and areas of scholarly communication, research integrity, and open research in which they should be supporting their students. Who is training the trainers? What are we basing that training on? My argument is that we do not have an agreed starting point for this.