A few years ago, I gave a talk about the changing nature of being a research scientist, and in particular, the number of “asks” made on a researcher’s time. Although I summarized that talk in a 2019 post (and repeat some of it below), the last week has brought it back to mind, along with a phrase I’ve used in nearly every talk I’ve given over the last 15 or so years – time is a researcher’s most precious commodity.
The high number of degrees awarded by universities every year, combined with the very low number of tenure-track faculty positions and the scarcity of available funding, has made research careers into something of a buyer’s market. We’ve seen universities and funders continually increase the demands they make. Researchers are required to do more and more beyond their actual research.
Back in what seem like the ancient days, scientists were largely tasked with things like, well, doing research, along with teaching, some level of mentoring, and serving on committees. We slowly moved into an era where many research positions became more like freelance work: the university provides you space, but you must raise enough money to pay the rent on that space, not to mention covering things like your own salary, equipment, and staffing costs. Because of this, much of the principal investigator’s job became grant writing.
Now, on top of this, scientists are being asked to jump through an enormous number of additional bureaucratic hoops. If you’re doing a research project, it has been suggested that you should take the time to pre-register your experiments and go through some level of peer review before you’ve even done anything. Clinical trials, for example, must be publicly registered. As you do your research, some feel you should continuously make each result public, taking the time to write up each incremental piece, post it online, have it reviewed, and take part in the discussion around it.
When you’ve completed the project, there’s a drive to have authors make early drafts of their write-ups public and to monitor and respond to any comments. Then you need to publish the actual paper, and you must take great care to publish it in an outlet that follows the very specific rules set by your funders, your university, and all of your collaborators’ funders and institutions. Then you should (and will likely soon be required to) make the data behind the paper publicly available and help others use it. If you really want to drive reproducibility, you’ll also write up and release your methodologies.
Societal impact is now deemed important, so you have to become your own publicist, promoting yourself and the work via social media. At the same time, people may be talking about your paper via post-publication peer review systems, so you need to monitor those and respond to any questions/criticisms. And of course, you likely have institutional, national, and funding agency policies around public access, so you have to figure out what those are, figure out the right version of the paper, figure out where it goes, under what conditions, and at what time. Likely you have multiple authors from multiple institutions with multiple funding sources, so you have to do this for multiple policies and deposit in multiple repositories.
That’s all on top of the previous work, creating a huge time and effort sink when what researchers really want to do is…research.
No one goes into science because they really love bureaucracy and filling out forms. Further, if we see the purpose of research as benefitting society, then every second we take a researcher away from the bench means slower progress.
When I’ve worked with research societies to try to drive progressive policies around open data and open methods, the biggest concern is always the increased burden placed on researchers. Not every open data set or openly documented method is going to be of enormous use to the field. But because we can’t know in advance which ones will provide that potentially huge value, the idea is to make sure all of that information is preserved and made available, just in case. The goal, however, has to be to reduce the burden involved as much as possible. In the case of open data and open methods, this means better training and tools to incorporate better data and methods recording practices into the research process itself. If you’ve planned your experiments well and are collecting data in an organized manner and carefully recording your methods (as you should be), then making that information publicly available is less burdensome than trying to reconstruct it into a usable form after the fact.
Entangled in Tentacles
But not all new research requirements or proposals can be brought into existing workflows so efficiently. As a case in point, yesterday’s post on Octopus, a newly proposed micro-publication platform, seems to emphasize the publication process over the research itself. This is an ongoing problem for new business ventures and policy proposals around scholarly publishing. Publishers have a tendency to see themselves as being at the center of the research world, whereas researchers tend to see publishing as peripheral, something that takes them away from the things they really want to spend time doing (again: research).
Instead of stepping away from research once to go through the writing and publication process for one article summarizing a project, Octopus is predicated on researchers doing this eight times per project, at each stage of the workflow. And, since most research projects are a series of failures and readjustments, those eight interruptions could quickly turn into 10, then 12, then 14, and so on, as one goes back to rewrite and publish new “formulated scientific problem” articles, “hypothesis” articles, and “methods and protocols” articles over and over again as new information becomes available. Will anyone have any time for actual research in between writing sessions?
Then there’s the burden on peer reviewers and readers. The Octopus approach assumes that researchers want to continuously read about (and review) each incremental piece of a research project, rather than waiting for one short summary at the end. If indeed this is the case, then time spent on discovery and reading (and peer review) will need to increase eight-fold as well. “I have too much to read,” or “I’m struggling to keep up with the literature,” are two very common complaints we hear from researchers. Is the solution to increase the volume of the literature, particularly with speculative pieces about research that hasn’t yet been (and may never be) performed?
All of this activity is supposed to lead to undefined “recognition” offered by undefined parties, again an ongoing problem in scholarly communication, where we can track activity, but we can’t make anyone care about it enough to reward those tracked activities.
Peer Review Pivots
PeerJ’s announcement last week that they will no longer send all “sound” articles out for peer review, but will instead start desk-rejecting articles that fail to make a meaningful contribution to the scientific record, offers a clear example of the weight of the burdens the research community is being asked to bear. The change in editorial policy comes as a response to what PeerJ was hearing from the researchers who peer review for their journals:
We have heard from so many of our editorial board members and reviewers that they feel swamped by peer review requests and that they – and the system more widely – are close to breaking point. We most regularly hear this frustration when papers that they are reviewing do not, in their expert opinion, make a meaningful contribution to the record and are destined to be rejected; and should, in their view, have been filtered out much sooner in the process.
PeerJ began as a bold experiment toward a new open access business model (and we desperately need more of these experiments) and should be praised for their adaptability and willingness to pivot multiple times in response to the needs of their customers. This current shift in strategy provides a potential answer to the questions posed above about Octopus – no one wants to spend a lot of time reading and peer reviewing junk.
Better, Not More
As we think about publishing strategies and policies, we need to think in terms of “better”, rather than just piling on more hoops for researchers to jump through. We need to think about what end result we’re seeking, and whether it’s worth the effort required to achieve it. Having more information available about a research project isn’t necessarily better than having less, especially if producing it comes at a cost to progress. Any point where we can introduce efficiencies into the research process, however, is welcome, and this creates opportunity for publishers and research societies alike, as these organizations exist to support the research community and to provide services that make researchers’ lives easier.
If you’re putting together ideas about new ways to publish research or policies around researcher requirements, it is essential to understand the burdens your proposal creates. Can you find a way to lighten the researcher’s load, rather than expanding it? If the goal is to drive scientific progress and the betterment of society, anything you can do to shift more time toward doing actual research, rather than serving the needs of your policy, must absolutely be a priority.
Discussion
Hi David, an excellent post that clearly articulates what many of us (including publishers) are thinking.
This is a very interesting post. I am a bit conflicted, in that I support a lot of the new requirements to make taxpayer-funded research more accountable and transparent, but I get your point about the burden of it cumulatively starting to become unreasonable. On the administrative side of our universities, we have admin assistants who do routine paperwork to save the time of the “expensive” employees. I wonder if our institutions should consider creating “research administrative assistants” who would be tasked with doing all of the steps that the researcher doesn’t actually have to do, like monitoring for responses/feedback. I’m not picturing a normal admin asst but maybe more of a post-doc type job who might be able to even do some of that basic prelim-report writing. This is outside my area of expertise, so I’m just asking – can you envision hiring someone who could do a lot of that paperwork, if properly trained, for multiple faculty researchers (not just a grad student working on a single project)? Or does too much of it just have to be done by the PI directly?
The solution is to grant the money and walk away. Publication in a highly ranked journal is all the transparency you need.
I think there’s a lot of support that universities could provide to their researchers. As one example, who is better trained at organizing and archiving information than librarians? Many are already putting in place programs to help researchers with managing data, which could prove enormously valuable in light of open data policies. Of course, someone has to pay for all that support…
Good points – the initiative seems heroically optimistic in several respects. But journal publishing also takes away a massive amount of time from “real” research. The funding of this project shows that there is dissatisfaction with the current research evaluation mechanism, and that there is an appetite to improve the transparency, granularity and effectiveness of research output evaluation.
Richard Wynne – Rescognito
I hope your tongue is firmly in your cheek when you equate societal impact with promoting work via social media. (Societal impact results from engagement, collaboration, co-creation, multi-channel communication etc from ideation onwards, not from a quick tweet posted as an afterthought to a completed project. Obvs).
Absolutely! I think the problem I’m trying to get across is that I keep seeing things like Altmetrics being proposed as the means of measuring “societal impact”. Real impact is much more subtle and slow to measure during the grant life cycle which I wrote about at length here:
https://scholarlykitchen.sspnet.org/2018/08/02/societal-impact-meet-new-metric-old-metric/
Instead we are directing researchers to waste time striving for high Altmetric scores, which are largely meaningless in terms of measuring real impact.
Re: the idea of ‘better research’ – I’ve often heard it said that a researcher’s colleagues and administrators are unlikely to read their papers or books at tenure or performance reviews, but they will count them and their citations.
If researchers were incentivized to produce fewer manuscripts of higher quality (more robust, transparent, and meaningful), as opposed to more manuscripts of lower quality (mediocre, opaque, piecemeal), I think our research enterprise would benefit. So all we need to do is change tenure and promotion processes within academic and research institutions…
This “problem” feels mostly self-inflicted to me. There is no question that all of us are busy, and that in general, administrative requirements (sometimes appropriately called administrative burdens) are increasing. I’m a surgeon, believe me, I get it.
Some of these administrative burdens — both clinically and in the conduct of science — are both important and good. For example, as a surgeon, I’m required to document that I was present during an operation if I want to bill for it. If someone can show I wasn’t, I’m on the hook for fraud. This is a good thing. Keeps patients safe. Likewise, as a clinician-scientist, I’m required to get permission from the Institutional Review Board before doing human-subjects or animal research. Though those processes are slow and heavy on the administrative burden side, my experience with them is that the people behind the processes are generally well intentioned even when the processes themselves are clunky. Registration of randomized trials (and, perhaps, research more generally) is a burden that in aggregate will yield (has yielded) a considerable social and scientific good.
But many of the examples you gave are entirely optional, and, in fact, have serious downsides that are not discussed frequently enough in general (and are not viewed with sufficient skepticism, in my opinion, here at SK). Some of these are:
-Preprint servers. Posting one’s work to a preprint server is optional. It’s my opinion that it’s a mostly self-serving gesture, and one that carries with it considerable harms that are insufficiently covered in general (and insufficiently covered here on SK). But that is another conversation for another day. Certainly reviewing articles on a preprint server is optional, and it shows. The large majority of preprint server papers get no comments, and most of the ‘reviews’ of preprint server content on some major biomedical preprint servers are social media posts, like tweets. These hardly count as substantive reviews. And, again, optional.
-The thin-slicing of work, à la Octopus. Optional. No one has to do this. The act of doing it is mainly self-serving. Given that some actual journals struggle for reviewers and most preprints go unreviewed, it seems implausible that many people would spend time reviewing slices of slices on platforms like Octopus. In any case, doing so is optional.
-Open data and open methods. Currently optional in most fields. Probably really important in some areas of inquiry, less so in others. Decide whether it adds value to your field, and act accordingly.
Researchers are being distracted by many of these things. They needn’t be. If one focuses on the things that are required (getting ethical approval from the IRB for research, registering a prospective trial), invests the time on the high-reward / high-value parts of the job (design good studies, run them well, report them with integrity in reputable journals, whether OA or paywalled; engage with open data initiatives when needed/helpful), and does not get distracted by the optional and low-value activities (preprint servers, Octopus et al), things may not be so bad.
The solution has been and remains in plain sight.
It’s all optional until someone gets in the ear of the leadership at a funding agency, or a university, or a policymaker and then it’s a requirement. I agree with you entirely that there’s an enormous amount of nuance here, and how some policies make much more sense and provide much more value in certain contexts. But if there’s one thing I’ve learned about policy, it’s that policymakers hate nuance, and that one simple rule for everyone is the most likely outcome. That’s why we have “one size fits all” open access rules that treat clinicians, cell biologists, and historians as functionally equivalent.
Great observation, David. One size does not fit all.
It’s also why some of the things that have great surface appeal — preprint servers are on my mind lately — deserve deeper scrutiny than they’re getting.
(Just chiming in here to say that we very much welcome guest pieces, Seth, and it seems you might have a piece on preprints at least half-way written in your head? I would love to hear more from your perspective.)
That’s awfully nice, Lisa. Thank you. I’ve written and spoken on the topic a few times. Clearly my opinion is not carrying the day. 😉
If I get the time, I’ll give it some thought.
Have a good weekend,
Seth
During my PhD we had a seminar with the founder of PubPeer, and he showed this vision of post-publication peer review and discussion on papers, and how pre-registration would solve many of the problems the papers currently have, and how it’s important to respond to comments about your paper, and so on – and I asked him just this question: when am I supposed to find time to do all this? Because right now it seems to me that I should publish one paper and spend the rest of my life curating it. He said something about contributing to the community and how this is important, so basically “just find this time”. I said “aha”, I was quite unconvinced, I finished my PhD, and I left academia, so end of story for me. I’m happy to hear that it was not only me who was concerned. Reading the comments here I see why my supervisor refused to post our paper on the preprint server, for the same reason probably.
Reading about the continually increasing requirements placed on researchers that David so cogently describes makes me glad I’m a retired researcher. But I feel bad that we’ve allowed our younger colleagues to become entangled in the tentacles of Octopus et al.
I strongly disagree with the context this point appears in: “If you really want to drive reproducibility, you’ll also write up and release your methodologies.” Writing up and releasing your methodology is not an act of bureaucracy; it is an act of research. This isn’t even exclusive to science. Anyone who wants to be taken seriously on a topic has to show their methodology in some way. If science wants to be taken seriously, communicating methodology as clearly as possible is the bare minimum, and not just a matter of reproducibility. Of course, rules regarding methodologies and their communication could be bureaucratically enforced, and one might disagree with some of their details. That doesn’t change the fact that methodology is central to science, and therefore the way we communicate methods rightfully gets a lot of attention.
On the other hand, reading a funding agency’s arbitrary rules and then having your grant thrown out after multiple rounds of review because some of those rules were not written in perfectly clear language – that’s bureaucracy, and it wastes everyone’s time. Grant lotteries could help a lot here.
I’ve been a proponent of accurate methods reporting for a long time. Here’s a 2014 piece calling for it:
https://scholarlykitchen.sspnet.org/2014/11/18/nevermind-the-data-where-are-the-protocols/
And here’s a policy proposal on improving methods reporting I coauthored for the US federal government:
https://www.dayoneproject.org/post/re-envisioning-reporting-of-scientific-methods
Many of the activities I listed are of great value, and worth doing. But we must recognize that they mean extra work for the researcher. Sometimes that additional burden is going to be worth it (open data, open methods, etc.). Sometimes it won’t. We need to be careful to determine the difference between the two, and when we are asking for additional work, we need to think about ways we can minimize it.
What is ‘research’? One could argue that many of these burdens (and others, such as teaching) are part and parcel of the overall research process and actually help it.
So, being forced to organize your thoughts in order to teach them helps one focus on the real problems and develop a deeper understanding.
Being forced to write up your work in a formal report, of high quality, produces clarity of thinking and a deeper understanding.
Being forced to use appropriate statistics (especially in medical research) means researchers have a greater understanding of *how* to research and how to properly interpret data.
Being forced to undergo peer review provides direct feedback, improvement and (often) a deeper understanding.
The same could be said for data and so on. All lead to better ‘research’. I think the trick is in what is forced and what is not, which Drs. Leopold and Crotty have alluded to.