Editor's note: This is a guest post by Morressier CEO Sami Benchekroun and Head of Communications Michelle Kuepper.
It can take months or even years for research to go from ideation to publication, with most of the findings generated along the way restricted to the offline world — hidden in printed posters, computer files, and the pages of lab notebooks.
This is starting to change, however, as we gain an increased appreciation of the value of early-stage research and acknowledge how much knowledge goes missing when these findings aren’t shared. We’re seeing a generational shift as the world becomes increasingly fast-paced and digital, and early-career researchers are leading the charge in adopting (and expecting) a more open research approach.
Preprint platforms are becoming accepted and even celebrated fixtures of the research community in some fields. This is by no means the case in all disciplines, however, nor has the sharing of other forms of pre-published research reached this level of maturity. Content like raw data, conference posters, and failed results, for example, largely remains shrouded in secrecy. That’s despite the fact that disseminating findings earlier in the research process can boost opportunities for collaboration and spark inspiration. Sharing datasets can also improve the reproducibility of journal articles. Even inconclusive findings may turn out to be valuable in the future, while publishing failed results can help other researchers save time and avoid pursuing the wrong path.
However, the question of how to sort, filter, and structure the huge variety of information that makes up pre-published research remains a challenge. Without the security of a standardized peer review system and (for all its failings!) the Journal Impact Factor to fall back on, researchers are rightly concerned about how they can discover findings that are not only relevant to their work, but of a high standard too.
This shouldn’t be a permanent roadblock. As the opportunities for sharing research change, so too should the methods that are used to filter and disseminate that research. There are a number of ways that we can already go about this, and it seems certain that new strategies will rapidly emerge as sharing pre-published research becomes more commonplace.
For starters, more research documents, including datasets, conference posters, and images, should be digitized and assigned a digital object identifier (DOI) to provide a lasting online link between researchers and their findings. DOIs bring structure to the otherwise unstructured world of pre-published research and – through citation linking and ORCID iDs, for example – help uncover the full scope of an author’s work, which in turn enables them to get the credit they deserve for their contributions. DOIs also make it easier for others to cite pre-published findings in their own research, enabling a more formal discussion around all the content that is generated prior to a final paper.
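To make the structure a DOI provides concrete, here is a minimal sketch in Python. The pattern check and helper names are our own simplification (a real registration would go through an agency such as Crossref or DataCite), but the prefix/suffix shape and the doi.org resolver are how the system works:

```python
import re

# A DOI pairs a registrant prefix beginning "10." with a suffix chosen
# by the registrant, separated by a slash, e.g. "10.1234/poster.2019.001".
# (Simplified pattern; the hypothetical example DOI is ours, not a real record.)
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def is_valid_doi(doi: str) -> bool:
    """Check that a string matches the basic DOI prefix/suffix shape."""
    return bool(DOI_PATTERN.match(doi))

def resolver_url(doi: str) -> str:
    """Return the persistent link that resolves to the item's landing page."""
    if not is_valid_doi(doi):
        raise ValueError(f"not a DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(resolver_url("10.1234/poster.2019.001"))
# https://doi.org/10.1234/poster.2019.001
```

Because the resolver URL stays stable even if a poster moves to a new platform, the link between a researcher and their findings persists — which is what makes the attribution and citation linking described above possible.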
DOIs are already being used by a number of platforms to structure and attribute pre-published research. Our company Morressier offers researchers the option to assign a DOI to their conference posters, presentations, and abstracts. Extensive interviews with our users have shown time and again that researchers want the opportunity to openly share and gain recognition for their early-stage work. Other organizations like Figshare, F1000, and ResearchGate also assign DOIs to all manner of research output.
Over time, citation rankings could be applied to pre-published research as a way to measure quality and relevance. However, this is not without its challenges, many of which also apply to published articles, including the often lengthy time from when research is first shared to when it starts to be cited. Then there’s the question of exactly how pre-published research – and, therefore, mostly non-peer reviewed content – should be cited, if indeed it should be cited at all. But this shouldn’t stop us from implementing DOIs as an important first step in making early-stage research more useful, structured, and trackable. In the longer term, assigning DOIs to pre-published content and making this data citable will help to connect the dots in the research lifecycle.
In addition to citation metrics, other types of metrics could also be used to help evaluate early-stage research. Everything on the internet is trackable, and pre-published scientific findings shouldn’t be an exception. Metrics that cover the levels of engagement on a piece of research, such as the number of likes, downloads, comments, and social or email shares, as well as the amount of time spent on a piece of content, can be analyzed to provide an indication of the level of interest in findings. While this approach relies on researchers being actively involved in online communities and can be vulnerable to gaming, it has the potential to help push innovative, pre-published research forward while at the same time flagging less credible findings.
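As a toy illustration of how such engagement signals could be combined, here is a sketch in Python. The weights are entirely hypothetical — any real scheme would need to be tuned and audited for gaming (e.g. self-downloads):

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """Interaction counts for one piece of pre-published content."""
    likes: int
    downloads: int
    comments: int
    shares: int

# Hypothetical weights: a download or comment arguably signals
# deeper interest than a like. These numbers are illustrative only.
WEIGHTS = {"likes": 1, "downloads": 3, "comments": 4, "shares": 2}

def engagement_score(e: Engagement) -> int:
    """Combine raw interaction counts into a single interest indicator."""
    return (WEIGHTS["likes"] * e.likes
            + WEIGHTS["downloads"] * e.downloads
            + WEIGHTS["comments"] * e.comments
            + WEIGHTS["shares"] * e.shares)

poster = Engagement(likes=12, downloads=5, comments=2, shares=3)
print(engagement_score(poster))  # 12*1 + 5*3 + 2*4 + 3*2 = 41
```

The point is not the particular formula but that, once interactions are tracked, an interest indicator becomes computable — and therefore open to scrutiny, comparison, and refinement by the community.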
Finally, as the place where initial findings are typically first formally presented and discussed in person, academic conferences play an important role in the dissemination of pre-published research. Peer review of abstracts offers an initial quality check, and interaction statistics from the event itself offer a way to measure interest in conference content. For example, if posters are presented digitally, valuable insights on the number of views, shares, and downloads can be tracked and used to highlight popular findings, while content management systems that bring content online enable researchers around the world to access, review, and build upon the findings presented at conferences.
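A platform could surface popular conference content with something as simple as the following sketch (the poster titles and counts are made up for illustration):

```python
# Hypothetical per-poster interaction counts from a digital poster session.
posters = [
    {"title": "Poster A", "views": 340, "downloads": 25},
    {"title": "Poster B", "views": 120, "downloads": 40},
    {"title": "Poster C", "views": 510, "downloads": 12},
]

# Rank by views to highlight the findings drawing the most attention.
top = sorted(posters, key=lambda p: p["views"], reverse=True)
for p in top:
    print(p["title"], p["views"])
```

Even this trivial ranking shows how event-level statistics, once captured digitally, can travel beyond the conference hall and guide readers toward the sessions that generated the most discussion.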
Implementing DOIs, developing citation metrics, and gathering interaction statistics are readily available options that should act as a springboard for a wider discussion about how best to evaluate and share early-stage research. Only by finding methods to filter and highlight the quality of pre-published research will we be able to gain a more complete picture of a researcher’s work and offer scientists new opportunities for recognition at the early stages of their research, as well as when it is formally published. This is especially important for early-career researchers who can struggle to build their reputation without a comprehensive set of published articles under their belt. At the same time, legitimizing pre-published research brings the entire scholarly ecosystem one step closer to a more inclusive, open, connected, and data-driven research lifecycle.