A couple of recent posts on this blog have been aimed at increasing researchers’ understanding of publishing and publishers. As it happens, around the time they were posted, I was in the middle of a series of interviews with researchers, and they in turn were flagging challenges that they wished publishers understood better. I therefore thought it might be useful to summarize and paraphrase some of the main points raised, along with thoughts about how such issues might be addressed. This is a snapshot, a starting point for further discussion, not a comprehensive list. Further contributions from researchers are welcome in the comments.
Although I’m not a researcher myself, I’ve presented the views of those I’ve spoken to in a first-person voice because it feels more human than talking about “they” and “them.”
ASPIRATION: We want things to be more streamlined.
- Context: Publishing is important but it’s only one activity of many that we must undertake — teaching, researching, managing people and projects. Your processes are painful, your systems overly fragmented, your interfaces unnecessarily different. Why do you all do things separately and differently? Can’t you be more integrated and automated?
- Possible solutions: Yes, in attempts to differentiate their services, publishers often reinvent the wheel — the ways in which one’s system might be better than another’s aren’t always sufficiently beneficial to outweigh the disadvantages to a researcher trying to complete the same task in fifteen different ways. But publishers do collaborate on many shared services. In some cases, this collaboration takes the form of vital “plumbing” (often invisible to researchers, and admittedly sometimes solving problems that are more keenly felt by other parties, such as institutions). In other cases, through grants or early adoption, publishers support the development of cross-publisher services with more direct researcher value; in my last post, I looked at this trend of publishers investing in workflow tools. There is a sense that publishers are thinking more strategically about where being different is a good thing, and where in fact they have more to gain (economies of scale) by collaborating with others to “commoditize” — and therefore streamline — some functions. Publisher readers: what functions might in future be shared and streamlined?
ASPIRATION: We want wider recognition for our efforts.
- Context: We’re struggling for visibility in a crowded, competitive environment. Our supervisors’ and institutions’ mechanisms for evaluating us are out of date. We spend a lot of time contributing to the publishing process in ways that ultimately benefit the overall system but for which we personally don’t get recognition. Ideologically, we want to keep doing our bit; in reality, our time is so pressured that we have to weigh each non-core task. Can you help us surface and quantify these efforts, and help us influence our institutions to take them seriously?
- Possible solutions: Publishers already support — either as investors or customers — services such as Publons, which provide public recognition for researchers’ reviewing activities (and which are now working with institutions to begin the not insignificant task of trying to expand the bases of institutional evaluations). Publisher support (in terms of expertise as well as money) has also been crucial to the development of ORCID, which both directly enables researchers to get credit for their work and participates in projects such as CRediT, which is working towards a universal way of “badging” researchers’ contributions (e.g. as a reviewer, or coder, or for supervising projects or winning funding). Publishers have also supported — again, either as investors or customers — the development of altmetric services, which aim to surface and summarize discussion around publications; perhaps the badging and altmetrics concepts can combine to help institutions evaluate, recognize and reward researchers’ wider contributions.
ASPIRATION: We don’t want to feel exploited; we want to trust that you share our goals.
- Context: Putting institutional evaluation processes to one side, we publish so that our work can be found, read and applied by others. We want to feel that it is your primary raison d’être too, but sometimes feel that publishing has become an end in itself, rather than a means. As we scrape together budget for staff and equipment, it is at worst soul-destroying and at best infuriating to think of the immodest surpluses and profits made by some publishers, which we feel are disproportionate to the value that you add versus the value we add ourselves. Our expectations are changing — for example, we want to get our work out there more quickly — and while we aren’t (all) demanding that you throw the baby out with the bathwater, we do want to feel like you’re taking our evolving needs seriously, and that you’re prepared to scrutinize the fitness for purpose of each part of the publishing process, e.g. in the context of new technologies. We are losing trust in your ability to deal transparently with us or to put the needs of scholarship and research above all else.
- Possible solutions: Publishers could better articulate the ways in which they do explore and respond to researchers’ needs, but PR is a defensive response and perhaps a touch solipsistic: ideally one should show, not tell — i.e. function in a way that self-evidently reflects shared values and changing researcher needs and expectations. Taking a hypothetical approach to innovation (“suppose we were starting from scratch: what new approach would we invent to deliver what our customers want?”) or using reversal techniques (“what things could we do that our customers would hate?” — then see what the opposites are) can make it easier to envision paths around real-world obstacles. Where there’s a will, there’s a way — the question uppermost in researchers’ minds is whether publishers really have that will.
In adding your comments, note that I’m aiming to do a follow-up post pulling together any further themes that emerge. To that end, it would be helpful if you could try to echo the structure I’ve used (aspiration > context > possible solution), so that the focus is on what we might actually do to improve things, rather than just how wrong things might be.