A few years ago, I gave a talk about the changing nature of being a research scientist, and in particular, the number of “asks” made on a researcher’s time. Although I summarized that talk in a 2019 post (and repeat some of it below), the last week has brought it back to mind, along with a phrase I’ve used in nearly every talk I’ve given over the last 15 or so years – time is a researcher’s most precious commodity.
The high number of degrees awarded by universities every year, combined with the very low number of tenure-track faculty positions and the scarcity of funding made available, has made research careers into something of a buyer’s market. We’ve seen universities and funders continually increase the demands they make. Researchers are required to do more and more beyond their actual research.
Back in what seem like the ancient days, scientists were largely tasked with things like, well, doing research, along with teaching, some level of mentoring, and serving on committees. We slowly moved into an era where many research positions became more like freelancing: the university provides you space, but you need to raise enough money to pay the rent on that space, not to mention covering things like your own salary, equipment, and staffing costs. Because of this, much of the job of the principal investigator became grant writing.
Now on top of this, scientists are being asked to jump through an enormous number of additional bureaucratic hoops. If you’re doing a research project, it has been suggested that you need to take the time to pre-register your experiments and go through some level of peer review before you’ve even done anything. Clinical trials, for example, must be publicly registered. As you start to do your research, some feel you should be continuously making each result public, taking the time to write up each incremental piece, posting it online, having it reviewed, and being part of a discussion around it.
When you’ve completed the project, there’s a drive to have authors make early drafts of their write-ups public and to monitor and respond to any comments. Then you need to publish the actual paper, and you must take great care to publish it in an outlet that follows the very specific rules set by your funders, your university, and all of your collaborators’ funders and institutions. Then you should (and will likely soon be required to) make the data behind the paper publicly available and help others use it. If you really want to drive reproducibility, you’ll also write up and release your methodologies.
Societal impact is now deemed important, so you have to become your own publicist, promoting yourself and the work via social media. At the same time, people may be talking about your paper via post-publication peer review systems, so you need to monitor those and respond to any questions/criticisms. And of course, you likely have institutional, national, and funding agency policies around public access, so you have to figure out what those are, figure out the right version of the paper, figure out where it goes, under what conditions, and at what time. Likely you have multiple authors from multiple institutions with multiple funding sources, so you have to do this for multiple policies and deposit in multiple repositories.
That’s all on top of the previous work, creating a huge time and effort sink when what researchers really want to do is…research.
No one goes into science because they really love bureaucracy and filling out forms. Further, if we see the purpose of research as benefiting society, then every second we take a researcher away from the bench means slower progress.
When I’ve worked with research societies to try to drive progressive policies around open data and open methods, the biggest concern is always the increased burden placed on researchers. Not every open data set or openly documented method is going to be of enormous use to the field. But because we can’t know in advance which ones will provide that potentially huge value, the idea is to make sure all of that information is preserved and made available, just in case. The goal, however, has to be to reduce the burden involved as much as possible. In the case of open data and open methods, this means better training and tools to incorporate better data and methods recording practices into the research process itself. If you’ve planned your experiments well and are collecting data in an organized manner and carefully recording your methods (as you should be), then making that information publicly available is less burdensome than trying to reconstruct it into a usable form after the fact.
Entangled in Tentacles
But not all new research requirements or proposals can be as efficiently brought into existing workflows. As a case in point, yesterday’s post on Octopus, a newly proposed micro-publication platform, seems to emphasize the publication process over the research itself. This is an ongoing problem for new business ventures and policy proposals around scholarly publishing. Publishers have a tendency to see themselves as being at the center of the research world, while researchers tend to see publishing as peripheral, something that takes them away from the things they really want to spend time doing (again: research).
Instead of one instance of taking time off from research to go through the writing and publication process for one article summarizing a project, Octopus is predicated on researchers doing this eight times per project, at each stage of the workflow. And, since most research projects are a series of failures and readjustments, those eight interruptions could quickly turn into 10, then 12, then 14, and so on, as one has to go back and rewrite and publish new “formulated scientific problem” articles, “hypothesis” articles, and “methods and protocols” articles over and over again as new information becomes available. Will anyone have any time for any actual research in between writing sessions?
Then there’s the burden on peer reviewers and readers. The Octopus approach assumes that researchers want to continuously read about (and review) each incremental piece of a research project, rather than waiting for one short summary at the end. If indeed this is the case, then time spent on discovery and reading (and peer review) will need to increase eight-fold as well. “I have too much to read,” or “I’m struggling to keep up with the literature,” are two very common complaints we hear from researchers. Is the solution to increase the volume of the literature, particularly with speculative pieces about research that hasn’t yet been (and may never be) performed?
All of this activity is supposed to lead to undefined “recognition” offered by undefined parties, again an ongoing problem in scholarly communication, where we can track activity, but we can’t make anyone care about it enough to reward those tracked activities.
Peer Review Pivots
PeerJ’s announcement last week that they will no longer send all “sound” articles out for peer review, but will instead start desk-rejecting articles that fail to make a meaningful contribution to the scientific record, offers a clear example of the weight of the burdens the research community is being asked to bear. The change in editorial policy comes as a response to what PeerJ was hearing from the researchers who peer review for their journals:
We have heard from so many of our editorial board members and reviewers that they feel swamped by peer review requests and that they – and the system more widely – are close to breaking point. We most regularly hear this frustration when papers that they are reviewing do not, in their expert opinion, make a meaningful contribution to the record and are destined to be rejected; and should, in their view, have been filtered out much sooner in the process.
PeerJ began as a bold experiment toward a new open access business model (and we desperately need more of these experiments) and should be praised for their adaptability and willingness to pivot multiple times in response to the needs of their customers. This current shift in strategy provides a potential answer to the questions posed above about Octopus – no one wants to spend a lot of time reading and peer reviewing junk.
Better, Not More
As we think about publishing strategies and policies, we need to think in terms of “better”, rather than just piling on more hoops for researchers to jump through. We need to think about what end result we’re seeking, and whether it’s worth the effort required to achieve it. Having more information available about a research project isn’t necessarily better than having less, especially if it comes at a cost to progress. Any point where we can introduce efficiencies into the research process, however, is greatly welcome, which creates opportunity for publishers and research societies alike, as these organizations exist to support the research community and to provide services that make researchers’ lives easier.
If you’re putting together ideas about new ways to publish research or policies around researcher requirements, it is essential to understand the burdens your proposal creates. Can you find a way to lighten the researcher’s load, rather than expanding it? If the goal is to drive scientific progress and the betterment of society, anything you can do to shift more time toward doing actual research, rather than serving the needs of your policy, must absolutely be a priority.