Editor's note: This is a guest post by Morressier CEO Sami Benchekroun and Head of Communications Michelle Kuepper.

It can take months or even years for research to go from ideation to publication, with the vast majority of findings generated along the way often restricted to the offline world — hidden in printed posters, computer files, and the pages of lab notebooks.


This is starting to change, however, as we gain an increased appreciation of the value of early-stage research and acknowledge how much knowledge goes missing when these findings aren’t shared. We’re seeing a generational shift as the world becomes increasingly fast-paced and digital, and early-career researchers are leading the charge in adopting (and expecting) a more open research approach.

Preprint platforms are becoming accepted and even celebrated fixtures of the research community in some fields. This is by no means the case in all disciplines, however, nor has the sharing of other forms of pre-published research reached this level of maturity. Content like raw data, conference posters, and failed results, for example, largely remains shrouded in secrecy. That’s despite the fact that disseminating findings earlier in the research process can boost opportunities for collaboration and spark inspiration. Sharing datasets can also improve the reproducibility of journal articles. Even inconclusive findings may turn out to be valuable in the future, while publishing failed results can help other researchers save time and avoid pursuing the wrong path.

However, the question of how to sort, filter, and structure the huge variety of information that makes up pre-published research remains a challenge. Without the security of a standardized peer review system and (for all its failings!) the Journal Impact Factor to fall back on, researchers are rightly concerned about how they can discover findings that are not only relevant to their work, but of a high standard too.

This shouldn’t be a permanent roadblock. As the opportunities for sharing research change, so too should the methods that are used to filter and disseminate that research. There are a number of ways that we can already go about this, and it seems certain that new strategies will rapidly emerge as sharing pre-published research becomes more commonplace.

For starters, more research documents, including datasets, conference posters, and images, should be digitized and assigned a digital object identifier (DOI) to provide a lasting online link between researchers and their findings. DOIs bring structure to the otherwise unstructured world of pre-published research and – through citation linking and ORCID iDs, for example – help uncover the full scope of an author’s work, which in turn enables them to get the credit they deserve for their contributions. DOIs also make it easier for others to cite pre-published findings in their own research, enabling a more formal discussion around all the content that is generated prior to a final paper.
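One reason DOIs bring structure so cheaply is that a DOI is just a two-part string: a registrant prefix (always beginning with "10.") and an item suffix, separated by the first slash. That makes validation and linking trivial to automate. The sketch below is a minimal illustration (the function names are my own, not any registry's API): it splits a DOI into its parts and builds the canonical doi.org resolver link.

```python
def parse_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into its registrant prefix and item suffix.

    Per the DOI name syntax, the prefix starts with "10." and the
    first "/" separates prefix from suffix.
    """
    doi = doi.strip()
    # Strip common resolver/label forms if present
    for lead in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(lead):
            doi = doi[len(lead):]
            break
    prefix, sep, suffix = doi.partition("/")
    if not sep or not prefix.startswith("10."):
        raise ValueError(f"not a valid DOI: {doi!r}")
    return prefix, suffix


def resolver_url(doi: str) -> str:
    """Return the canonical https://doi.org resolver link for a DOI."""
    prefix, suffix = parse_doi(doi)
    return f"https://doi.org/{prefix}/{suffix}"
```

Because every DOI resolves through the same doi.org gateway, a poster, dataset, or abstract assigned a DOI stays reachable at one stable link even if the hosting platform reorganizes its URLs.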

DOIs are already being used by a number of platforms to structure and attribute pre-published research. Our company Morressier offers researchers the option to assign a DOI to their conference posters, presentations, and abstracts. Extensive interviews with our users have shown time and again that researchers want the opportunity to openly share and gain recognition for their early-stage work. Other organizations like Figshare, F1000, and ResearchGate also assign DOIs to all manner of research output.

Over time, citation rankings could be applied to pre-published research as a way to measure quality and relevance. However, this is not without its challenges, many of which also apply to published articles, including the often lengthy time from when research is first shared to when it starts to be cited. Then there’s the question of exactly how pre-published research – and, therefore, mostly non-peer reviewed content – should be cited, if indeed it should be cited at all. But this shouldn’t stop us implementing DOIs as an important first step in making early-stage research more useful, structured, and trackable. In the longer term, assigning DOIs to pre-published content and making this data citable will help to connect the dots in the research lifecycle.

In addition to citation metrics, other types of metrics could also be used to help evaluate early-stage research. Everything on the internet is trackable, and pre-published scientific findings shouldn’t be an exception. Metrics that cover the levels of engagement on a piece of research, such as the number of likes, downloads, comments, and social or email shares, as well as the amount of time spent on a piece of content, can be analyzed to provide an indication of the level of interest in findings. While this approach relies on researchers being actively involved in online communities and can be vulnerable to gaming, it has the potential to help push innovative, pre-published research forward while at the same time flagging less credible findings.
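In practice, combining these signals usually means weighting them, since a download or comment suggests deeper engagement than a passive view. The weights below are purely illustrative assumptions, not anyone's published formula; the point is only that raw interaction counts can be folded into one comparable score.

```python
from dataclasses import dataclass


@dataclass
class Engagement:
    """Raw interaction counts for one piece of early-stage research."""
    views: int = 0
    downloads: int = 0
    comments: int = 0
    shares: int = 0


# Illustrative weights (assumptions for this sketch): active signals
# like comments and shares count for more than passive views.
WEIGHTS = {"views": 1, "downloads": 5, "comments": 10, "shares": 8}


def engagement_score(e: Engagement) -> int:
    """Combine raw counts into a single weighted engagement score."""
    return (e.views * WEIGHTS["views"]
            + e.downloads * WEIGHTS["downloads"]
            + e.comments * WEIGHTS["comments"]
            + e.shares * WEIGHTS["shares"])
```

Any such score is only as honest as its inputs, which is why the gaming concern above matters: a real system would also need rate limits or deduplication before trusting these counts.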

Finally, as the place where initial findings are typically first formally presented and discussed in person, academic conferences play an important role in the dissemination of pre-published research. As well as using peer review of abstracts as an initial quality check, there are also ways to measure interest in conference content using interaction statistics from the event itself. For example, if posters are presented digitally, valuable insights on the number of views, shares, and downloads can be tracked and used to highlight popular findings. Meanwhile, content management systems that bring this content online enable researchers around the world to access, review, and build upon the findings presented at conferences.

Implementing DOIs, developing citation metrics, and gathering interaction statistics are readily available options that should act as a springboard for a wider discussion about how best to evaluate and share early-stage research. Only by finding methods to filter and highlight the quality of pre-published research will we be able to gain a more complete picture of a researcher’s work and offer scientists new opportunities for recognition at the early stages of their research, as well as when it is formally published. This is especially important for early-career researchers who can struggle to build their reputation without a comprehensive set of published articles under their belt. At the same time, legitimizing pre-published research brings the entire scholarly ecosystem one step closer to a more inclusive, open, connected, and data-driven research lifecycle.



14 Thoughts on "Challenges and Opportunities in Pre-published Research"

Some wise person once said that “repetition teaches the fools the rules, and the wise the lies.” I suspect that most readers of Scholarly Kitchen are wise. Sadly, there is hardly a paragraph in this otherwise thoughtful article that does not contain the term “pre-published,” implying that modern preprints (as opposed to the paper NIH preprints of the sixties) are not made public (i.e. are not to be deemed as actually published). Of course the publishing industry endorses this. It has an incentive to shroud the meaning. However, it is losing despite the casuistry. A growing number of busy researchers are quite happy to publish their work in non-peer-reviewed preprint form and move on to their next project without engaging in the formalities necessary to secure peer-reviewed publication.

Yes, the description of preprints as pre-published begs important questions. But then so does inclusion of the prefix “pre” in preprint.

I can already share materials and get metrics using institutional or non-profit resources.

My institutional repository is happy to take my conference posters if I want to upload them. Most institutions that I have worked for will do a DOI too if I want. Or I can upload my poster to a data repository and get a DOI from them. My IR tracks downloads and some IRs also show altmetrics. Some people also use preprint servers for conference papers which will also give you metrics – BioRxiv for example provides a DOI and tracks downloads and altmetrics.

If I am a meeting organiser and I want my participants to be able to share their materials, I can use OSF Meetings for free.

So I don’t see the problem that Morressier is trying to solve.

The proliferation of easy access to all manner of DOI-identified formats of research disclosure, including conference posters and presentations, risks contributing even more to the *glut* of research already present in peer-reviewed journal publishing, and to the proliferation of unnecessary journals.
Thus the need for a two-tiered system. Tier 1, ex ante, is the wild frontier of non-journal article formats (posters, YouTube videos of presentations, random blog musings), a grand marketplace of ideas, overlaid by Tier 2: a staid, ex post stock of peer-reviewed *review* journal articles that provide narratives and discussions of what transpires in the Tier 1 wild frontier. Tier 2, provided that the number of journal articles and journals is greatly trimmed, benefits the intergenerational transmission of knowledge, thus fulfilling the ideal of librarianship to conserve the intellectual record. Tier 2, which plays a synthetic/integrative function, also helps to acculturate new scientists coming up to speed on developing areas of research.
A greatly trimmed stock of journals will also help resolve the journal pricing crisis.
Will any of this actually transpire? No time soon.

Thank you for your comment. We were inspired to launch Morressier after working together with hundreds of conferences and talking to both researchers and organizers. The vast majority had trouble accessing the posters and presentations being shared at conferences around the world and, as a result, often missed out on interesting developments in their field of research. Additionally, many were unable to attend conferences themselves due to budgetary constraints. Researchers also informed us that they were looking for one place to discover and share early-stage research. Even today, the majority of conferences continue to rely on paper posters, meaning these documents often end up disappearing post-conference. Our goal is to solve exactly these challenges by making early-stage research (including conference content) easily accessible and providing relevant recommendations, while at the same time offering early-career researchers a platform to build up their profile. Additionally, by assigning DOIs, we’re bringing some consistency into the world of early-stage research.

Conference organizers value the fact that we provide powerful software that fits into every conference budget – our pricing suits even small events. Our focus is on user experience and offering a well-designed, easy-to-use software system that has all the features that organizers and researchers are looking for.

Thanks, a quick question. How much conference material, including in the form of posters, eventually gets published in preprint format? This points to another form of symbiosis beyond the one suggested in my earlier comment about the need for greater symbiosis between preprints and a (much contracted) journal market.

Unfortunately we don’t have precise numbers on this, but based on our experience we’ve seen that the vast majority of posters and presentations do not end up published in a preprint format or as journal articles, meaning this information is lost post-conference. We are currently working on ways to connect posters and presentations to their corresponding preprints and articles to provide these kinds of insights in the future.

Very interesting, thanks! An unfortunately overlooked vehicle for science communication.

All preprint servers and conference paper/poster repositories could provide a valuable service to the research ecosystem by providing a prominent dedicated metadata field to link to any subsequent peer-reviewed publication. bioRxiv, for example, does this and provides the link automatically via on-the-fly web search. But most preprint servers and conference repositories do not provide this as a prominent, dedicated metadata field, even for manual entry by the author.

Morressier seems like an interesting development that addresses earlier calls for a move away from journals and Impact Factor and towards a linked network of individual preprints/posters/etc.: thus Brembs et al. claimed in 2013 in “Deep impact: unintended consequences of journal rank” that “we now have technology at our disposal which allows us to perform all of the functions journal rank is supposed to perform in an unbiased, dynamic way on a per-article basis.” What they seem to have in mind is something like a Google algorithm. But of course Google is a for-profit company whose biases are well-documented (see: Algorithms of Oppression) and whose monopoly power and functionality as an unregulated utility seem self-reinforcing. I’ll be curious to understand more about the business model of Morressier.

Many thanks for your comment. We’re happy to answer questions about our business model. Rather than charging for access to research, we monetize via our scientific analytics product, premium search functionalities, and content management software for universities, associations, and scientific conferences. We are already working with a number of associations including the International Diabetes Federation and the World Stroke Organization to showcase their early-stage research and provide content management for their conferences.

Hopefully, pre-publication materials are ultimately validated by an accepted peer-reviewed journal article. But what if the article gets rejected? There are “soft” rejections because, say, the work does not make a sufficient contribution to the field, or because it’s poorly written; in these cases, the pre-publication materials still may have value. But what about a “hard” rejection due to serious fundamental flaws? Should an author be expected to disclose rejection letters?

Peer review is in my view inherently good. But one has to ask about the huge costs it incurs in terms of researcher genius-hours better devoted to other things, such as writing review articles.

In reply to your particular question, one could also ask: what if peer review admits an article with fabricated data or that is otherwise plain wrong in its theoretical assumptions? What if peer review merely provides a one-way ticket to an echo chamber? One may also ask: what really are the worries here? Some member of the public reading a preprint disclosing (say) shoddy medical information and then acting on it? (Don’t forget: caveat lector, with respect to any public information!) But are we then going to call on the state to vet all publicly accessible medical research disclosed in blogs, .com websites, whatever? That would be ominous. And surely, no one is going to engage in self-harming behaviors after reading (say) an article on arXiv about HEP. (Well, yes, maybe they’ll take a shoddy econometric model disclosed there and wreak havoc on the economic system, but now we’re talking about a ridiculous scenario.)

My own view is that societies should promote standards of conduct in the preprint sphere. Plus the format very easily enables someone to challenge a paper. This will help reinforce the self-policing that Ginsparg suggested already occurs in the preprint sphere.

As for peer review, by all means pursue the most rigorous version possible, but devote it to the Tier 2 publishing mentioned in my other comment above, i.e., review articles along the lines of hard-headed meta-reviews. Cf. a really high-quality meta-analysis in the biomedical publishing space. Now *that* is a real service to researchers, including the newbies who will be carrying the torch.
