I recently programmed and chaired a one-day conference for UKSG, with a theme of The scholarly communications ecosystem: understanding and responding to evolving expectations. I invited some researchers to consider the extent to which they find current systems for disseminating their work fit for purpose, and what improvements (if any) they might seek. Meanwhile, publishers, librarians, and providers of related technology and services talked about the efforts they are making to keep up with researchers’ evolving needs.

As we started the closing panel discussion, I asked the participating researchers to rate the current “system”. The result? Reasonable, reasonable, poor. I had assumed the frustrations would focus on things like access issues or clunky formats. And yet my notes from the event show that I had probably set the entire day up with the wrong premise. Once again, in focusing on the role of publishers, libraries, etc., we are tackling the symptoms, not the cause, of the problem. The real frustration for researchers, which came up time and again from different perspectives, is that institutional systems for evaluation (rather than publishing-related systems for dissemination) are no longer fit for purpose. I’ve summarized the most thought-provoking points from the day below.

  • Stockholm syndrome. Early career researchers love the work that they do, but feel unfairly treated by institutional structures for recognition. Some of those interviewed by Anthony Watkinson and his team for the CIBER Harbingers project categorized themselves as “slaves to a publishing-based reputation system”, but others didn’t like the use of slavery as an analogy, acknowledging that their adherence to the system is self-imposed.
  • Buridan’s ass? Which format do you choose for publishing your work — the one that will get it out fastest, or the one that is most respected when it comes to evaluating your performance? Jane Winters talked about “edited collections” (of, e.g., conference papers) as the place where the newest humanities research is most likely to be found, while noting that in terms of impact and visibility, publishing in an edited collection is like “throwing your research down a well” (the format being insufficiently mainstream to contribute to readership or impact for your work). Current evaluation of publishing formats leaves researchers with a tough choice between speed and impact. Winters also talked about the challenges of “making your first book sufficiently different to your thesis” in a world where theses are commonly published online, and where early career researchers are expected to have a “balanced publishing portfolio”, including monographs and journal articles, but also popular periodicals, blogs, newspapers and so on. She advocated for more guidance from publishers, institutions and societies to help scholars know how and where best to focus their efforts.
  • Square pegs, round holes. Several times during the day we touched on how difficult it is to get interdisciplinary research published, because it doesn’t fit the established journal verticals; Micheál Ó Fathartaigh had joined with others to set up a new journal partly to address this. Seth Cayley brought the topic to life with a closer look at some digital humanities projects (the hands-down favorite was the study/ngram showing that, for one year only — 1964 — the Beatles were indeed bigger than Jesus). The serious point behind the fun was that the cross-disciplinary nature of such projects makes universities nervous about putting them forward for evaluation within traditional frameworks: institutional conservatism meant that even award-winning research was not put forward for the UK’s most recent research evaluation exercise (REF 2014) for fear that it would not “score” well. (Ironically/encouragingly, as Jane Winters pointed out, those digital humanities projects that were put forward actually did quite well.)
  • Etiologies of open? Though not expressly on the agenda, open access (OA) and open science ran like threads through the day — one of our researcher panelists was a co-founder of the OA journal Studies in Arts and Humanities, our library speakers had collaborated to form an OA repository and press, and our publisher speaker talked about the re-usability of published research (look out for the imminent launch of the Scigraph.com linked data platform from SpringerNature). You could take from this that open is mainstream: an established rather than evolving need, not something we needed to focus on specifically. On the other hand, Sabina Michnowicz talked about researchers having to fund OA from their own pockets, and Jane Winters lamented the institutional conservatism that means humanities scholars still consider OA too “risky” a model for communicating their work, because it is conflated with a lack of rigorous peer review. The framing of the issues around open has moved on: a newcomer to our sector might have concluded from the discussions that it is institutions, not publishers, that are blocking progress towards open.
  • You can lead a horse to water, but you can’t make it drink. The concern about outworn higher education processes was also evident in discussions around researcher evaluation. Winters talked about the many activities expected of a “good academic citizen” (and reminded us of the relatively low prominence of publishing by showing where it sits in the guidance given to early career researchers by her discipline’s society). Steven Inchcoombe talked about practical ways of helping institutions evaluate the “bigger picture”, such as enabling the time invested in the review process to be better recognized and credited. However, Michnowicz’s letter to the Higher Education Funding Council for England, asking that peer review contributions be recognized in research evaluation, received a scant, and negative, response.
  • Keepalive signals? The publication process did not, of course, get off scot-free. Ó Fathartaigh characterized peer review as the process of watching a loved one sail away, not knowing when they will be back, not hearing from them for months, “wondering how she’s faring”. “Is it too much to ask for the occasional update?” he pondered. Inchcoombe had earlier cast “inefficient” peer review (siloed and linear submission/rejection processes) as “publishing’s nasty secret”, and talked about how improvements can be made both in terms of systems (a new manuscript submission service has reportedly reduced turnaround times for selected SpringerNature journals by 50%) and in terms of processes (SpringerNature’s “transfer” service undertakes the reformatting when a submission is transferred between their journals — well received by the academics in the room, who had joked about 30-page submission guidelines). The publishers among the audience/twitterstream thought updates during the peer review process “do-able”, and brainstorming around this might make a useful workshop session at a future conference — at what waypoints during the submission/peer review process could updates be sent to authors? (Do any Scholarly Kitchen readers already have such a framework in place? A rough sketch of the idea follows this list.) Acknowledging the role of academics themselves in slowing down review processes, Andy Miah proposed book-sprint-style hack days for peer review to incentivize and reward quicker review actions. Meanwhile, James Harwood presented Penelope, a Publisher Natural Language Processing tool (PNLP – geddit?) which performs a range of automated checks on submitted manuscripts (formatting, etc.) and marks up documents with simple “looks good” and “check this?” comments to speed up author feedback/manuscript processing times; a toy illustration of this kind of check also follows this list. It has been used by about 500 authors so far, and typically finds 10 errors per paper. Cengage now provides text and data mining tools (e.g. data visualization software) to help researchers make the most of the content on its platforms without needing to be digital experts in their own right.
  • Trees falling in a forest? Researcher panelist: “it would be great if libraries organized events like today — a showcase of all the services out there to help researchers”; librarians in the audience: <mass implosion>, because I think nearly every librarian there had organized something exactly along these lines, only to see scant attendance by researchers. The panelists themselves acknowledged that they ignore emails from their libraries, with Andy Miah mentioning that he really only communicates via WhatsApp. And don’t expect scholarly collaboration networks to provide the answer: all the evidence on the day was that researchers aren’t using these sites for communicating, collaborating or discovering. They are simply shop windows, places to ensure they are visible; collaboration still grows primarily from face-to-face connections at conferences. (I don’t think the day provided any answers as to how best to ensure researchers are connected to the guidance and support being offered by libraries and publishers; my own position on this is simply “multi-touch, multi-channel”: tell people over and over and over again, through every available medium, what you are offering. The majority aren’t annoyed by the repetition, they’re oblivious to it.)
  • Change happens. Winters criticized organizations that alienate humanities researchers by using “science” in their messaging — among others, she flashed up the Publons homepage with its “speeding up science” tagline. Her point was widely tweeted, and before she’d finished her talk, the Publons homepage had been updated! If only all change were as simple to effect. But what a welcome reminder that change can happen.
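Picking up the “waypoints” question above: below is a minimal sketch, in Python, of what a set of author-update touch points might look like. The stages named are my own illustrative assumptions about a typical submission workflow, not any publisher’s actual system and not a list proposed on the day.

```python
# A sketch of the "waypoints" idea: points in a manuscript's life cycle at
# which a status update could be sent to the authors. The stages below are
# illustrative assumptions, not any publisher's actual workflow.

from enum import Enum

class Waypoint(Enum):
    RECEIVED = "Submission received"
    EDITOR_ASSIGNED = "Handling editor assigned"
    REVIEWERS_INVITED = "Reviewers invited"
    REVIEWS_UNDERWAY = "Reviews under way"
    REVIEWS_COMPLETE = "All reviews received"
    DECISION_MADE = "Editorial decision made"

def notify_author(author_email: str, manuscript_id: str, waypoint: Waypoint) -> None:
    """Stand-in for a real email or queue integration."""
    print(f"To {author_email}: '{manuscript_id}' update: {waypoint.value}.")

# Example: fire an update each time the manuscript crosses a waypoint.
for stage in Waypoint:
    notify_author("author@example.org", "MS-2017-042", stage)
```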
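And to make the Penelope discussion concrete, here is the general shape of rule-based manuscript checking: a handful of pattern checks that emit “looks good”/“check this?” notes. This is a toy illustration of the technique, not Penelope’s actual code; the required sections and the numeric citation format it checks for are assumptions.

```python
# A toy illustration of rule-based manuscript checking, in the spirit of the
# "looks good" / "check this?" comments described above. Not Penelope's code;
# the rules themselves are invented for illustration.

import re

def check_manuscript(text: str) -> list[str]:
    """Run a few illustrative checks and return human-readable notes."""
    notes = []

    # Check 1: required sections (assumed requirements, for illustration).
    for section in ("Abstract", "Methods", "References"):
        if section.lower() in text.lower():
            notes.append(f"{section}: looks good")
        else:
            notes.append(f"{section}: check this? (section not found)")

    # Check 2: every numeric citation like [2] should have a matching
    # numbered entry in the reference list.
    cited = set(re.findall(r"\[(\d+)\]", text))
    listed = set(re.findall(r"^\s*(\d+)\.", text, flags=re.MULTILINE))
    for ref in sorted(cited - listed, key=int):
        notes.append(f"Citation [{ref}]: check this? (no matching reference)")

    return notes

if __name__ == "__main__":
    sample = "Abstract ... as shown in [1] and [2] ... References\n1. Smith 2016."
    for note in check_manuscript(sample):
        print(note)
```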
Charlie Rapple

Charlie Rapple is co-founder of Kudos, which showcases research to accelerate and broaden its reach and impact. She is also Vice Chair of UKSG and serves on the Editorial Board of UKSG Insights. @charlierapple.bsky.social, x.com/charlierapple and linkedin.com/in/charlierapple. In past lives, Charlie has been an electronic publisher at CatchWord, a marketer at Ingenta, a scholarly comms consultant at TBI Communications, and associate editor of Learned Publishing.

Discussion

15 Thoughts on "Institutional Conservatism in Scholarly Communications: Thoughts from UKSG’s One-day Conference"

If the following essay appears “tl;dr” then here is an 8-word executive summary: Make tenure expectations more like a course rubric.

College administrators would never dispute how essential it is for courses to have precisely delineated rubrics with unambiguous learning outcomes and rigid definitions of grades awarded for classroom performance. The syllabus has grown over the years and its requisite components are often codified. Students must know where they stand. Accreditors must measure precisely.

But compare that to the standard for tenure and promotion. New faculty frequently do not know where they stand. The faculty manual is vague and resists attempts to quantify. Words are used in place of numbers and their meanings are subject to interpretation. Good luck finding a numerical template for a particular discipline.

How is such refusal to quantify justified? Is it that hard to specify a minimum manuscript count within publications that are sufficiently cited? Maybe it was onerous to require such metrics in the dark ages before Google Scholar laid bare a candidate’s publications and citation counts for each manuscript or journal. Today it is easy to search a name in Harzing’s Publish or Perish computer program. No one needs an external letter or rejection rate to clarify whether a journal is cited; Harzing will tell you that the Kentucky Journal of Irreproducible Results is seldom cited and presumably low quality. Unrefereed journals are seldom if ever cited. Obviously a third-year or sixth-year candidate cannot be expected to have mountains of citations, but having their work published in a journal that is heavily cited can be a worthy benchmark.

Books? What about books and their chapters? One need only test for vanity press by entering the publisher name as a journal title in Harzing’s program. Or ask the department chair to find a rank-ordering of publishers for that discipline. “Top-10” says more than “high quality” in the war between numbers and words. Design a rubric.

Non-standard disciplines? In the absence of conventional publications, let’s enumerate the awards for creative works. Have external reviewers assign numerical weights to the enumeration, if necessary, but make a rubric.

Teaching? What’s wrong with a “bare minimum” average number pulled from student evaluations? Arguing that students don’t respond is only evidence that the professor failed to set aside a day in the final week of class to walk to the door and announce, “I’ll be down the hall until someone comes to get me after you’re all finished doing an online evaluation of this course, it’s important, see you soon, bye.” Arguing that the students do not understand quality is a well-founded claim, but because they are the only unbiased witnesses, the rubric must allow some weight (some percentage) to their evaluations. Build numbers into the rubric.

Colleagues writing letters are seldom if ever encouraged to rate the candidate on a scale of one to ten, though the result would be far less disputable than words interpreting words. Why is the process so devoid of numbers, as compared with the rubrics on a syllabus? Perhaps numbers seem unfair to professors, who nevertheless seldom hesitate to assign numbers and surrogate letters, with or without a plus or a minus attached. But life is already unfair, and lack of transparency is the villain with regard to the plight of early career researchers. Make a rubric for tenure.
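To make the argument concrete, here is a minimal sketch, in Python, of how such a tenure rubric might compute a score. Every component, threshold and weight below is invented for illustration; a real rubric would be set per discipline by the department.

```python
# A sketch of a numeric tenure rubric. All thresholds and weights are
# invented for illustration, not taken from any institution's policy.

def rubric_score(papers_in_cited_journals: int,
                 min_papers: int,
                 teaching_eval_avg: float,
                 min_teaching_avg: float,
                 external_letter_avg: float) -> float:
    """Return a 0-100 score from three assumed components."""
    # Publishing: progress toward a minimum count of papers in cited journals.
    publishing = min(papers_in_cited_journals / min_papers, 1.0) * 50  # 50% weight
    # Teaching: student-evaluation average against an agreed floor.
    teaching = min(teaching_eval_avg / min_teaching_avg, 1.0) * 30     # 30% weight
    # Letters: colleagues rate the candidate 1-10 instead of words on words.
    letters = (external_letter_avg / 10.0) * 20                        # 20% weight
    return publishing + teaching + letters

# Example: 5 papers against a minimum of 6, teaching at 4.2/5 versus a 4.0
# floor, and colleagues' letters averaging 8/10.
print(round(rubric_score(5, 6, 4.2, 4.0, 8.0), 1))  # -> 87.7
```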

Thanks, DF! To me your ideas seem practicable. Why is it that institutions aren’t evolving along these lines? Is it easy to criticise from the outside but too enormous a task to contemplate from the inside? Are the people with the ideas not in a position to implement those ideas / vice versa? Does it really just boil down to the current system being fit enough for the purposes of the people who control the system?

(Libraries need to) tell people over and over and over again, through every available medium, what you are offering. The majority aren’t annoyed by the repetition, they’re oblivious to it.

I think this is absolutely right, and it applies to messaging inside the library as well. A library dean to whom I once reported got very frustrated because we told him he needed to communicate a particular message to the library staff. “But I’ve already said that!” he responded, to which our rejoinder was “Yes, and you’ll probably have to say it at least ten more times before everyone gets it.”

Charlie,

Many thanks for a very engaging report on an intriguing set of discussions, and it’s great to have the humanities properly represented (for once!). I do think, however, that (as is a recurrent danger in debates about scholarly communication) what gets posited as a binary between ‘thrusting, imaginative, tech-savvy, “open”, youngish early career researchers’ and ‘the people who control the system’ is actually a lot more fluid and complex. Most importantly, such a binary potentially ignores the large majority of middle-ranking tenured faculty who do not control the system but work hard within its structures.

This has been, and remains, one of the biggest issues militating against (e.g.) the widespread adoption of open models in the arts and social sciences, where (to cite a British instance) the traction of open practices amongst the majority of, say, Senior Lecturers in History or English Literature at middle-ranking institutions remains very low indeed. The sense these constituencies often have of being hectored both from above (not least via the various compulsions of the Research Assessment Exercise) and from below is not very conducive to a positive response to change in the directions helpfully suggested above. Such constituencies are probably also those helping to maintain the massively majoritarian declared preference for print (in books) amongst arts and social sciences faculty — still running at about 80%, and if anything hardening, if the latest faculty surveys are to be believed — even if many of those who really do ‘control the system’ in resource terms would prefer the opposite to be true…

In fairness to researchers, their lack of receptiveness to messages from other sectors within higher education (e.g. their libraries) is hardly unique! Anybody reading the SK who has ever tried to manage a largeish institution will sympathise 100% with Rick’s point above. There is no substitute for remorseless, even boring repetition of the same message, if it is important enough, through every conduit possible. You might, ultimately, get through to at least some of the people some of the time.

… the sense these constituencies often have of being hectored both from above (not least via the various compulsions of the Research Assessment Exercise) and from below is not very conducive to a positive response to change in the directions helpfully suggested above.

Are you saying that Senior Lecturers in History or English Literature at middle-ranking institutions should be hectored from only one of these directions? Which one?

And how does this idea tie in with the point you then make — quite rightly — that “There is no substitute for remorseless, even boring repetition of the same message, if it is important enough, through every conduit possible”?

I agree that there is a big swathe of people who are relatively happy with, or at least functioning within, the status quo. I am pondering how many people are not able to function within that status quo, and whether there are enough of them to warrant change. Hectoring around open is a good example of tackling the symptom rather than the problem. Unless people are incentivised to make that change (for example, by different evaluation mechanisms), of course they stick with the path of least resistance.


However, Michnowicz’s letter to the Higher Education Funding Council for England, asking that peer review contributions be recognized in research evaluation, received a scant, and negative, response.

Is this unhappy outcome documented anywhere? I would like to help draw attention to it.

Hi Mike – my guess is that it’s not – Sabina threw it in almost as an aside – but do check in with her (@sabmichnowicz on Twitter; I have flagged this post to her, so hopefully she will read it and add a comment!)

At what waypoints during the submission/peer review process could updates be sent to authors?

I wrote about this a while back: Dear journals: communicate with your authors. I listed nine steps that I think authors should be notified about (and six more in the case where the authors appeal an editorial decision).

Aha! Thanks Mike. This is a good set of “touch points”. I wonder if there is an article / conference paper in this for one of the editorial conferences such as CSE or ISMTE. Did you liaise with any journals / editors directly about your post and / or get any feedback?

All the feedback I ever saw is in the comments on that blog-post. It didn’t occur to me to seek formal publication for this — it all seemed rather obvious to me — but maybe it is merited: after all, different things are obvious to different people!

I don’t know the conferences you mention. If you have some experience, and are interested in co-authoring with me, then drop me a line privately — dino@miketaylor.org.uk

Thanks, Mike – I’ve pinged the researcher panel in the first instance and will also chat with colleagues at UKSG (the organization that ran the conference) to see if anyone has appetite to progress towards a code of practice. Will keep you posted / make introductions if anyone bites!

The problem of ‘institutional structures for recognition’ and ‘institutional systems for evaluation’ is at the root of everything. It quashes innovative and nonconformist work and causes unnecessary stress, as it has narrowed over the years to ‘grants and prestigious articles’ in most disciplines, and ‘book’ in the humanities.

I am wondering how those structures operate in the only sector I have not taught in: liberal arts colleges. With so much teaching on your plate, are you still required to publish the equivalent of 12 articles to obtain tenure, as in the research universities I have taught at on three continents? I am rather hoping that performance criteria are a bit more relaxed.

Interesting question, Simon! Not one I can answer myself though I hope another blog reader may be able to chip in.
