Editor’s Note: Today’s post is by Thilo Koerkel. Thilo is the publisher of the New Ground journal, based near Frankfurt am Main in Germany. He was an editor of physics and astronomy at Spektrum der Wissenschaft, the German edition of Scientific American, and, until 2022, a Strategic Partnerships Manager of Nature Research, the publisher of Nature.
Many of us probably share the following intuitions: to keep up with the ever-growing amount of literature, researchers have to specialize more and more; this reduces the potentially fruitful exchange between specialist bubbles. And: a growing number of political, economic, and societal decisions are made based on science. However, science gives guidance only; it does not make decisions for us. Given the participatory nature of democratic societies, we all need to understand what science generally – not only science from one’s own field of expertise – actually tells us about climate change, future energy systems, COVID-19, multiresistant bacteria, loss of biodiversity, etc.
But would researchers or stakeholders read the original papers in fields beyond their own expertise? Many wouldn’t – and they’d miss the information. The value of press releases, newspapers, and popular science magazines, on the other hand, is limited. These aim to be comprehensible to broad audiences and, to this end, refrain from presenting the full complexity and limitations of the actual findings.
The conceptual gap between these types of document is usually large. An article that falls within that gap, however, might offer an appropriate balance of simplicity and complexity for researchers with different specializations, policymakers, decision-makers, funders, early career scientists, journalists, educated generalists – in short, for those less likely to read the original research but who have an in-depth interest in the science presented.
I’d therefore like to suggest that this gap is a place in its own right that deserves additional coverage.
A much broader and far-reaching interpretation of this gap is to consider it as the very place where actual exchange between researchers from different bubbles happens. A talk in the corridors of the institute, a conversation on the sidelines of a conference, a colloquium that yields unexpected ideas, inspiration, and insights: science advances by adding new ways of looking at problems, by applying concepts from one field to another, by cross-fertilization of knowledge and technologies. The gap, therefore, is also the place where a researcher can get inspiration and learn from those beyond their immediate community.
Of course, there is already some coverage. Certain scientific journals publish accessible editorial sections along with original research – most prominently, probably, Nature’s “front half” – or put effort into providing research summaries. Review journals offer more accessible information, as do large institutions such as scientific academies or the Intergovernmental Panel on Climate Change. Policy briefs inform non-specialized audiences. Many blogs cover important science in impressive detail. Press releases go into more depth than they did in earlier years. Most of these texts are quality-assured and free to read.
Does that settle the matter? Not quite. The existing coverage remains relatively small, and in many cases the obvious question, “What did the researchers actually do?”, goes unanswered.
Some anecdotal evidence, further discussed below, indeed suggests that the gap has not yet been closed and that a significant number of stakeholders perceive it as such. I gathered this evidence over the past year while preparing the launch of a journal, New Ground, which went live in October 2022. It is conceived as a multidisciplinary open-access journal and publishes carefully edited representations of selected articles by renowned researchers – articles that have appeared in high-ranking peer-reviewed journals. These articles aren’t mere summaries. While keeping the level high, they add context and reasoning to help readers understand what was done. They name the technical terms for the major methods used – like Monte Carlo simulation or scRNA-seq – and give a short explanation without going into details. They cut their own path through the research paper while aiming at a “flow” and a “good read”. They are thus an independent genre of text, but true to the content they’re based upon. New Ground articles are written by PhD-qualified authors from the respective fields and approved by the corresponding authors of the original articles on which they’re based. Each has a Crossref DOI and is citable by volume and article number. All articles are published under the CC BY license, which allows redistribution by anyone, including commercial media, even in adapted form, as long as re-users give appropriate credit to the version of record.
The journal’s sole source of revenue is a fee paid by the researcher or their institution. There are no ads on the site, and no subscription fees are charged. The fee pays the writer, the editor, and the proofreader – essentially, the intellectual contribution of those involved in producing an article. Writers’ fees vary greatly, depending on many factors, as do article lengths. Article lengths, in turn, affect the editor’s and proofreader’s efforts. The nature of the underlying scientific publication has the biggest influence: is it highly technical and complex, or quite straightforward? If a publication can be suitably represented by a text body of 1,000 to 1,200 words – plus title, teaser, metadata, etc. – costs are less than 2,000 euros. Typically, New Ground articles are longer. Part of the journal’s endeavor is to find out over time whether things can be standardized, or should be standardized at all.
The role of the editor, in this case, could also be misunderstood. It involves managing the workflow, on the one hand, but also deep engagement with the text itself: careful editing, rewriting, and rearranging to deliver the best possible outcome.
To find out whether New Ground actually fills a gap and adds value to communication across disciplines, I reached out to many researchers and PhD writers. Their responses were very encouraging, the most pointed among them describing New Ground as “a journal dedicated to bringing science to the scientist masses”. The existence of a gap was widely acknowledged, as was the need for new ideas from other fields. New Ground also encountered a gratifying level of readiness from even renowned and busy researchers to put their own effort into articles published in a virtually unknown journal. A recent text on multiple sclerosis also generated huge interest on LinkedIn. (In contrast, the journal’s Twitter account is below the visibility limit. Is there something to learn from this about the potential audiences of this type of journal?)
It also became clear that covering this ground – this New Ground, so to speak – in this specific way requires substantial efforts from the respective authors of the articles as well as from their editors. No straight line exists between an original research paper and an accessible in-depth article about that paper. Papers aren’t flawless: I can report on extended discussions between the New Ground author and the New Ground editor about enigmatic – although quite central – parts of the original paper that could only be resolved by getting back to the researchers themselves. One researcher pointed out “key misunderstandings” in the first draft of the article; in this case, again, it appeared that some aspects of the research as set out in the paper presented unnecessary challenges to readers (and to the New Ground author whose skills and motivation were beyond doubt). In another case, the nomenclature of the original research was confusing and obscured the findings to a considerable degree.
It’s also quite hard to find authors. How many PhDs are out there who specialize in superconductivity in cuprates, have writing skills, and are not busy doing research or working outside academia?
Compared to the abovementioned outlets that are already filling the gap, at least one feature of New Ground stands out. In most cases, the timing, level, and content of coverage in editorial journal sections or other media cannot be influenced by individual researchers. In contrast, any researcher who publishes a relevant paper in a relevant journal can commission a New Ground article to be produced on that paper and to further fill the gap. There is no need to await or hope for sound coverage elsewhere.
New Ground applies selection criteria, though. So far, the journal has reported on work published in Nature, Nature Methods, Nature Neuroscience, PNAS, MNRAS, the Journal of Experimental Medicine, and others. It takes into consideration factors such as the journals’ impact factors, the researchers’ h-indices, or major awards granted to them, while remaining aware of the much-debated limitations of quality metrics. This is work in progress: sustainability will come from the journal’s reputation, not from adopting low standards.
As of today, however, New Ground covers only a tiny parcel of the space under discussion, and its impact is negligible. While it seems clear to me that New Ground-style articles are valuable for some audiences, and that scientists and their funders should have an interest in reaching beyond their bubbles – to inform and inspire researchers, stakeholders, and audiences, and to gain more visibility – it is far from clear how willing scientists, their funders, or, more generally, societies are to pay for their production, or what their preferred way of publishing them would be.
So does the journal add something relevant to the publishing landscape? Can this be done in a sustainable way? If sustainable, would other journals incorporate the approach into their own publishing models, perhaps as an add-on to a published article? Do journals such as New Ground need editorial boards of scientists? Can we imagine funders developing an interest in exploring this ground, and in which directions would they steer this exploration? Can we think of other journal types, article types, or publication procedures that would help fill the gap but might be more useful for researchers and stakeholders?
Open questions, as far as the naked eye can see over new ground. Are there any ideas or answers around? Or do you have additional questions to put on the pile and discuss further?
7 Thoughts on "Guest Post — Open Access Beyond Scholarly Journals"
Thilo, for someone such as myself with a broad but relatively shallow scientific background and a professional “need to know” about the latest scientific developments, your new publication looks like a great fit. Please keep up the good work.
Hi Thilo—nice work. Of course, the broader topic of making journal articles more readable has been debated for many years. There are some who note that research articles have become far more complex over time to the point of being almost unreadable today. Others point out that research itself has become more complex and therefore necessitates using increasingly impenetrable writing to communicate accurately and precisely.
I think the truth is probably somewhere closer to the first observation. The accepted writing style in academia continues to “decay” and this decay is deeply embedded to the point where many academics tend to see clear writing as being less authoritative than convoluted writing. To help with readability and accessibility, some have talked over the years about reforming the abstract (making it longer and more colloquial), or maybe adding a colloquial summary to research papers as part of the standard article format. Writing the article itself more clearly is kind of a pipe dream.
It’s one thing for a journal like yours to handle this kind of writing, quite another to hope that researchers themselves will have the time and interest. It can be done, but examples of researchers who are also top-flight communicators are few and far between—Asimov, Feynman, Sagan, Tyson, Mack, et al. There are many good programs that help train researchers to communicate more clearly, but it just isn’t a high enough priority for most (not necessarily for lack of interest, just lack of time and incentive).
Going forward, I wonder if this is something that AI might be able to help us with? Not to undercut your efforts, but you have a problem of scale here—there’s no way for one journal to fix everything. But a sustainable, scalable approach might be for researchers to feed their finished papers through an AI machine like QuillBot and have it rewrite the paper as “simple.” These tools will stumble for now because they aren’t adequately trained in all the complexities of each field, but as they improve over time, they might be able to help generate more readable abstracts and long-form summaries, both for your journal and also as part of what publishers add as standard article components in the future. In fact, researchers themselves will be able to quickly generate these summaries and check them for accuracy before submitting them to the publisher.
Good luck with your venture!
Hi Glenn – good points, indeed. “Colloquial summaries”, as you write, might be an approach. I have known for some time of the so-called Research Summaries from Nature, based on Nature papers, listed here: https://www.nature.com/nature/articles?type=research-summary. There are only 9 of them (and behind the paywall); the project has probably been discontinued since 2021. A free example is here: https://media.nature.com/original/magazine-assets/d41586-021-01112-w/d41586-021-01112-w.pdf; maybe there are others as well. The idea is to some extent comparable to what New Ground does. So the good news is: others are trying as well.
Training researchers in communication is also a good approach, but I don’t expect a fundamental change from it. As far as I can see, most of these efforts are aimed at improving science communication toward the public. Their goal is to increase trust in science, not to advance science itself.
And yes, AI could change things. If not today, then in the future. Let the AI do the work, and let the researcher give it a final touch. However, we’re not there yet. A professor I recently approached on this topic called ChatGPT “misleading and dangerous” after having conducted some experiments with scientific texts, together with his colleagues.
For me, the question goes like this: does an AI that summarizes a scientific publication speed up the writing of a good article on that publication? This would be a “scalable approach”, as you write. So far, it doesn’t. In my experience, an AI can give a rough overview, but it does not save you the trouble of looking very deeply into the paper and starting again from scratch. Now that GPT-4 has been released, I’ll have to take another look and see how it might affect editorial work.
Keen observations and arguments made by Thilo Koerkel in “Open Access Beyond Scholarly Journals”.
I appreciated his call for articles which substantially scaffold plain-language and esoteric knowledge and discourse as a means of capturing a readership that would otherwise be left derelict within the chasms of academic understanding.
This sounds very much like the JOVE model, where scientists pay to have content created about their work. In the case of JOVE, it’s video content. In the case of New Ground, it’s summary content. It seems to me that the New Ground criterion that the work be “from renowned researchers” is very limiting. A lot of good work is done by early career researchers who may not yet have accumulated, across numerous papers, the citations necessary to produce a high h-index.
Yes, Mike, I see similarities with JOVE as well. But the audience of JOVE is probably strictly limited to those who actually work on the respective topics in the labs, while New Ground is suitable for audiences with different backgrounds.
And, yes, limiting New Ground to works “from renowned researchers” or to high-impact journals is an issue. In the beginning, however, reporting on “relevant” works helps to increase the attractiveness of the journal, that’s why I decided to go this way. For now, it is relying on the usual criteria, one might say, and does not question them. A scientific editorial board could change things and focus on “relevant” papers that go beyond what impact factors and h-indexes suggest.